[GitHub] [hadoop] jianghuazhu commented on a change in pull request #2292: HDFS-15565.Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread GitBox


jianghuazhu commented on a change in pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292#discussion_r486101882



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
##
@@ -705,7 +705,6 @@ static private int doBalance(Collection<URI> namenodes,
 LOG.info("excluded nodes = " + p.getExcludedNodes());
 LOG.info("source nodes = " + p.getSourceNodes());
 checkKeytabAndInit(conf);
-System.out.println("Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  Bytes Being Moved");

Review comment:
   ok 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481266&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481266
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 06:23
Start Date: 10/Sep/20 06:23
Worklog Time Spent: 10m 
  Work Description: JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-690016227


   @umamaheswararao  Totally makes sense; updated.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481266)
Time Spent: 8h 10m  (was: 8h)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 
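
To make the group-substitution idea concrete, here is a minimal, self-contained sketch using plain java.util.regex; the named groups and the ${...} template syntax are illustrative stand-ins only, not the configuration format the patch defines.

```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMountSketch {
  public static void main(String[] args) {
    // Capture `cluster` and `user` from the source path, then substitute
    // them into the target template, mirroring the
    // /cluster1/user1 => /cluster1-dc1/user-nn-user1 example above.
    Pattern src = Pattern.compile("^/(?<cluster>\\w+)/(?<user>\\w+)$");
    String targetTemplate = "/${cluster}-dc1/user-nn-${user}";

    Matcher m = src.matcher("/cluster1/user1");
    if (m.matches()) {
      String target = targetTemplate
          .replace("${cluster}", m.group("cluster"))
          .replace("${user}", m.group("user"));
      System.out.println(target); // prints /cluster1-dc1/user-nn-user1
    }
  }
}
```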



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-690016227


   @umamaheswararao  Totally makes sense; updated.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


umamaheswararao commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-690012492


   Thanks @JohnZZGithub, I agree it's always better to have the refactoring 
patch separated out to make reviews easier.
   In this case, we need to do minor refactoring to follow the coding rules. I 
would be happy to review your refactoring patch later if you would like to 
contribute. Thanks :-)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481264&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481264
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 06:14
Start Date: 10/Sep/20 06:14
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-690012492


   Thanks @JohnZZGithub, I agree it's always better to have the refactoring 
patch separated out to make reviews easier.
   In this case, we need to do minor refactoring to follow the coding rules. I 
would be happy to review your refactoring patch later if you would like to 
contribute. Thanks :-)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481264)
Time Spent: 8h  (was: 7h 50m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481257&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481257
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 05:48
Start Date: 10/Sep/20 05:48
Worklog Time Spent: 10m 
  Work Description: JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-690001235


   @umamaheswararao  Sure. In fact, I did a major refactor in our internal repo 
to make it switch-case based 
(https://github.com/apache/hadoop/pull/424/files#diff-69fd14ba63365b6a428bf7142c463990R511).
 However, I didn't put it here as it's harder to review. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481257)
Time Spent: 7h 50m  (was: 7h 40m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-690001235


   @umamaheswararao  Sure. In fact, I did a major refactor in our internal repo 
to make it switch-case based 
(https://github.com/apache/hadoop/pull/424/files#diff-69fd14ba63365b6a428bf7142c463990R511).
 However, I didn't put it here as it's harder to review. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481240&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481240
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 05:28
Start Date: 10/Sep/20 05:28
Worklog Time Spent: 10m 
  Work Description: umamaheswararao edited a comment on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689992391


   I think the following new lines were added to this method. That's the reason 
it is reporting the long-method warning.
   
   ```
   } else if (src.startsWith(Constants.CONFIG_VIEWFS_LINK_REGEX)) {
     final String target = si.getValue();
     String linkKeyPath = null;
     final String linkRegexPrefix = Constants.CONFIG_VIEWFS_LINK_REGEX + ".";
     // settings#.linkKey
     String settingsAndLinkKeyPath = src.substring(linkRegexPrefix.length());
     int settingLinkKeySepIndex = settingsAndLinkKeyPath
         .indexOf(RegexMountPoint.SETTING_SRCREGEX_SEP);
     if (settingLinkKeySepIndex == -1) {
       // There's no settings
       linkKeyPath = settingsAndLinkKeyPath;
       settings = null;
     } else {
       // settings#.linkKey style configuration
       // settings from settings#.linkKey
       settings =
           settingsAndLinkKeyPath.substring(0, settingLinkKeySepIndex);
       // linkKeyPath
       linkKeyPath = settingsAndLinkKeyPath.substring(
           settings.length() + RegexMountPoint.SETTING_SRCREGEX_SEP
               .length());
     }
     linkType = LinkType.REGEX;
     linkEntries.add(
         new LinkEntry(linkKeyPath, target, linkType, settings, ugi,
             config));
     continue;
   }
   
   ...
   ...
   
   case REGEX:
     LOGGER.info("Add regex mount point:" + le.getSrc()
         + ", target:" + le.getTarget()
         + ", interceptor settings:" + le.getSettings());
     RegexMountPoint regexMountPoint =
         new RegexMountPoint(
             this, le.getSrc(), le.getTarget(), le.getSettings());
     regexMountPoint.initialize();
     regexMountPointList.add(regexMountPoint);
     continue;
   
   ```
   
   Probably we can extract this logic into separate methods.
   I understand that multiple values need to come back from the extracted method 
(settings, linkType, etc.). Perhaps you can create the LinkEntry inside the 
extracted method and return it; from there you can set linkType, settings, etc.
   Check whether that makes it possible to reduce the lines in this method.
   
   Move the following code into a small check method; there are two places 
where we do the same check.
   ```
   if (src.length() != linkMergeSlashPrefix.length()) {
     throw new IOException("ViewFs: Mount points initialization error." +
         " Invalid " + Constants.CONFIG_VIEWFS_LINK_MERGE_SLASH +
         " entry in config: " + src);
   }
   ```
   At the end of the method we have some logic for handling the no-mount-points 
case. If needed, we could extract that into a handleNoMountPoints method or 
similar.
   
   Check if this refactoring can make checkstyle happy and the code look 
cleaner.
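   
   As a rough sketch of the suggested extraction (illustrative only, reusing the names from the snippet above; this is not the actual patch), the REGEX branch could be pulled into a helper that builds and returns the LinkEntry:
   
   ```
   // Sketch only: parse a linkRegex config entry into a LinkEntry so the
   // constructor body shrinks below the checkstyle MethodLength limit.
   // LinkEntry, LinkType, RegexMountPoint and Constants are the names used
   // in the quoted snippet; this helper itself is hypothetical.
   private LinkEntry buildRegexLinkEntry(String src, String target,
       UserGroupInformation ugi, Configuration config) {
     final String linkRegexPrefix = Constants.CONFIG_VIEWFS_LINK_REGEX + ".";
     String settingsAndLinkKeyPath = src.substring(linkRegexPrefix.length());
     String settings = null;
     String linkKeyPath;
     int sepIndex = settingsAndLinkKeyPath
         .indexOf(RegexMountPoint.SETTING_SRCREGEX_SEP);
     if (sepIndex == -1) {
       // No interceptor settings; the whole suffix is the source regex.
       linkKeyPath = settingsAndLinkKeyPath;
     } else {
       // settings#.linkKey style: split the settings off the source regex.
       settings = settingsAndLinkKeyPath.substring(0, sepIndex);
       linkKeyPath = settingsAndLinkKeyPath.substring(
           sepIndex + RegexMountPoint.SETTING_SRCREGEX_SEP.length());
     }
     return new LinkEntry(linkKeyPath, target, LinkType.REGEX, settings,
         ugi, config);
   }
   ```
   
   The call site in the constructor loop would then collapse to a single 
`linkEntries.add(buildRegexLinkEntry(src, si.getValue(), ugi, config)); continue;`.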



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481240)
Time Spent: 7h 40m  (was: 7.5h)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 

[GitHub] [hadoop] umamaheswararao edited a comment on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


umamaheswararao edited a comment on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689992391


   I think the following new lines were added to this method. That's the reason 
it is reporting the long-method warning.
   
   ```
   } else if (src.startsWith(Constants.CONFIG_VIEWFS_LINK_REGEX)) {
     final String target = si.getValue();
     String linkKeyPath = null;
     final String linkRegexPrefix = Constants.CONFIG_VIEWFS_LINK_REGEX + ".";
     // settings#.linkKey
     String settingsAndLinkKeyPath = src.substring(linkRegexPrefix.length());
     int settingLinkKeySepIndex = settingsAndLinkKeyPath
         .indexOf(RegexMountPoint.SETTING_SRCREGEX_SEP);
     if (settingLinkKeySepIndex == -1) {
       // There's no settings
       linkKeyPath = settingsAndLinkKeyPath;
       settings = null;
     } else {
       // settings#.linkKey style configuration
       // settings from settings#.linkKey
       settings =
           settingsAndLinkKeyPath.substring(0, settingLinkKeySepIndex);
       // linkKeyPath
       linkKeyPath = settingsAndLinkKeyPath.substring(
           settings.length() + RegexMountPoint.SETTING_SRCREGEX_SEP
               .length());
     }
     linkType = LinkType.REGEX;
     linkEntries.add(
         new LinkEntry(linkKeyPath, target, linkType, settings, ugi,
             config));
     continue;
   }
   
   ...
   ...
   
   case REGEX:
     LOGGER.info("Add regex mount point:" + le.getSrc()
         + ", target:" + le.getTarget()
         + ", interceptor settings:" + le.getSettings());
     RegexMountPoint regexMountPoint =
         new RegexMountPoint(
             this, le.getSrc(), le.getTarget(), le.getSettings());
     regexMountPoint.initialize();
     regexMountPointList.add(regexMountPoint);
     continue;
   
   ```
   
   Probably we can extract this logic into separate methods.
   I understand that multiple values need to come back from the extracted method 
(settings, linkType, etc.). Perhaps you can create the LinkEntry inside the 
extracted method and return it; from there you can set linkType, settings, etc.
   Check whether that makes it possible to reduce the lines in this method.
   
   Move the following code into a small check method; there are two places 
where we do the same check.
   ```
   if (src.length() != linkMergeSlashPrefix.length()) {
     throw new IOException("ViewFs: Mount points initialization error." +
         " Invalid " + Constants.CONFIG_VIEWFS_LINK_MERGE_SLASH +
         " entry in config: " + src);
   }
   ```
   At the end of the method we have some logic for handling the no-mount-points 
case. If needed, we could extract that into a handleNoMountPoints method or 
similar.
   
   Check if this refactoring can make checkstyle happy and the code look 
cleaner.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481238&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481238
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 05:26
Start Date: 10/Sep/20 05:26
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689992391


   I think the following new lines were added to this method.
   
   ```
   } else if (src.startsWith(Constants.CONFIG_VIEWFS_LINK_REGEX)) {
     final String target = si.getValue();
     String linkKeyPath = null;
     final String linkRegexPrefix = Constants.CONFIG_VIEWFS_LINK_REGEX + ".";
     // settings#.linkKey
     String settingsAndLinkKeyPath = src.substring(linkRegexPrefix.length());
     int settingLinkKeySepIndex = settingsAndLinkKeyPath
         .indexOf(RegexMountPoint.SETTING_SRCREGEX_SEP);
     if (settingLinkKeySepIndex == -1) {
       // There's no settings
       linkKeyPath = settingsAndLinkKeyPath;
       settings = null;
     } else {
       // settings#.linkKey style configuration
       // settings from settings#.linkKey
       settings =
           settingsAndLinkKeyPath.substring(0, settingLinkKeySepIndex);
       // linkKeyPath
       linkKeyPath = settingsAndLinkKeyPath.substring(
           settings.length() + RegexMountPoint.SETTING_SRCREGEX_SEP
               .length());
     }
     linkType = LinkType.REGEX;
     linkEntries.add(
         new LinkEntry(linkKeyPath, target, linkType, settings, ugi,
             config));
     continue;
   }
   
   ...
   ...
   
   case REGEX:
     LOGGER.info("Add regex mount point:" + le.getSrc()
         + ", target:" + le.getTarget()
         + ", interceptor settings:" + le.getSettings());
     RegexMountPoint regexMountPoint =
         new RegexMountPoint(
             this, le.getSrc(), le.getTarget(), le.getSettings());
     regexMountPoint.initialize();
     regexMountPointList.add(regexMountPoint);
     continue;
   
   ```
   
   Probably we can extract this logic into separate methods.
   I understand that multiple values need to come back from the extracted method 
(settings, linkType, etc.). Perhaps you can create the LinkEntry inside the 
extracted method and return it; from there you can set linkType, settings, etc.
   Check whether that makes it possible to reduce the lines in this method.
   
   Move the following code into a small check method; there are two places 
where we do the same check.
   ```
   if (src.length() != linkMergeSlashPrefix.length()) {
     throw new IOException("ViewFs: Mount points initialization error." +
         " Invalid " + Constants.CONFIG_VIEWFS_LINK_MERGE_SLASH +
         " entry in config: " + src);
   }
   ```
   At the end of the method we have some logic for handling the no-mount-points 
case. If needed, we could extract that into a handleNoMountPoints method or 
similar.
   
   Check if this refactoring can make checkstyle happy and the code look 
cleaner.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481238)
Time Spent: 7.5h  (was: 7h 20m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 

[GitHub] [hadoop] umamaheswararao commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


umamaheswararao commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689992391


   I think the following new lines were added to this method.
   
   ```
   } else if (src.startsWith(Constants.CONFIG_VIEWFS_LINK_REGEX)) {
     final String target = si.getValue();
     String linkKeyPath = null;
     final String linkRegexPrefix = Constants.CONFIG_VIEWFS_LINK_REGEX + ".";
     // settings#.linkKey
     String settingsAndLinkKeyPath = src.substring(linkRegexPrefix.length());
     int settingLinkKeySepIndex = settingsAndLinkKeyPath
         .indexOf(RegexMountPoint.SETTING_SRCREGEX_SEP);
     if (settingLinkKeySepIndex == -1) {
       // There's no settings
       linkKeyPath = settingsAndLinkKeyPath;
       settings = null;
     } else {
       // settings#.linkKey style configuration
       // settings from settings#.linkKey
       settings =
           settingsAndLinkKeyPath.substring(0, settingLinkKeySepIndex);
       // linkKeyPath
       linkKeyPath = settingsAndLinkKeyPath.substring(
           settings.length() + RegexMountPoint.SETTING_SRCREGEX_SEP
               .length());
     }
     linkType = LinkType.REGEX;
     linkEntries.add(
         new LinkEntry(linkKeyPath, target, linkType, settings, ugi,
             config));
     continue;
   }
   
   ...
   ...
   
   case REGEX:
     LOGGER.info("Add regex mount point:" + le.getSrc()
         + ", target:" + le.getTarget()
         + ", interceptor settings:" + le.getSettings());
     RegexMountPoint regexMountPoint =
         new RegexMountPoint(
             this, le.getSrc(), le.getTarget(), le.getSettings());
     regexMountPoint.initialize();
     regexMountPointList.add(regexMountPoint);
     continue;
   
   ```
   
   Probably we can extract this logic into separate methods.
   I understand that multiple values need to come back from the extracted method 
(settings, linkType, etc.). Perhaps you can create the LinkEntry inside the 
extracted method and return it; from there you can set linkType, settings, etc.
   Check whether that makes it possible to reduce the lines in this method.
   
   Move the following code into a small check method; there are two places 
where we do the same check.
   ```
   if (src.length() != linkMergeSlashPrefix.length()) {
     throw new IOException("ViewFs: Mount points initialization error." +
         " Invalid " + Constants.CONFIG_VIEWFS_LINK_MERGE_SLASH +
         " entry in config: " + src);
   }
   ```
   At the end of the method we have some logic for handling the no-mount-points 
case. If needed, we could extract that into a handleNoMountPoints method or 
similar.
   
   Check if this refactoring can make checkstyle happy and the code look 
cleaner.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9331) Hadoop crypto codec framework and crypto codec implementations

2020-09-09 Thread nikhil panchal (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193379#comment-17193379
 ] 

nikhil panchal commented on HADOOP-9331:


Hello,

I have ORC data stored in HDFS. I have a use case to encrypt one of the columns 
present in the ORC data. Can anyone suggest the standard steps I need to follow, 
or which Hadoop component I can use?

> Hadoop crypto codec framework and crypto codec implementations
> --
>
> Key: HADOOP-9331
> URL: https://issues.apache.org/jira/browse/HADOOP-9331
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Haifeng Chen
>Priority: Major
> Attachments: Hadoop Crypto Design.pdf
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
>  For use cases that deal with sensitive data, we often need to encrypt data 
> to be stored safely at rest. Hadoop common provides a codec framework for 
> compression algorithms, and we start from there. However, because encryption 
> algorithms require some additional configuration and methods for key 
> management, we introduce a crypto codec framework that builds on the 
> compression codec framework. It cleanly distinguishes crypto algorithms from 
> compression algorithms, but shares common interfaces between them where 
> possible, and also carries extended interfaces where necessary to satisfy 
> those needs. We also introduce a generic Key type, and supporting utility 
> methods and classes, as a necessary abstraction for dealing with both Java 
> crypto keys and PGP keys.
> The task for this feature breaks into two parts:
> 1. The crypto codec framework, based on the compression codec framework, 
> which can be shared by all crypto codec implementations.
> 2. The codec implementations, such as AES and others.
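
To visualize the proposed layering, here is a purely hypothetical Java sketch; the interface names come from the design summary above and are not a merged Hadoop API:

```
import org.apache.hadoop.io.compress.CompressionCodec;

/** Hypothetical generic key abstraction covering Java crypto and PGP keys. */
interface Key {
  byte[] getEncoded();
}

/**
 * Sketch of the proposed layering: a crypto codec reuses the compression
 * codec interfaces (stream creation, factory lookup) and carries an
 * extended interface for the key management it additionally needs.
 */
interface CryptoCodec extends CompressionCodec {
  /** Supply the key used to encrypt/decrypt this codec's streams. */
  void setKey(Key key);
}
```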



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


JohnZZGithub commented on a change in pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r486071356



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
##
@@ -166,6 +166,42 @@ public static void addLinkNfly(final Configuration conf, final String src,
 addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
  
+
+  /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
+   * @param srcRegex - the src path regex expression that applies to this config
+   * @param targetStr - the string of target path
+   */

Review comment:
   Removed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481232&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481232
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 05:19
Start Date: 10/Sep/20 05:19
Worklog Time Spent: 10m 
  Work Description: JohnZZGithub commented on a change in pull request 
#2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r486071356



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
##
@@ -166,6 +166,42 @@ public static void addLinkNfly(final Configuration conf, final String src,
 addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
  
+
+  /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
+   * @param srcRegex - the src path regex expression that applies to this config
+   * @param targetStr - the string of target path
+   */

Review comment:
   Removed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481232)
Time Spent: 7h 20m  (was: 7h 10m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481231&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481231
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 05:18
Start Date: 10/Sep/20 05:18
Worklog Time Spent: 10m 
  Work Description: JohnZZGithub commented on a change in pull request 
#2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r486070892



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
##
@@ -166,6 +166,42 @@ public static void addLinkNfly(final Configuration conf, final String src,
 addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
  
+
+  /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
+   * @param srcRegex - the src path regex expression that applies to this config
+   * @param targetStr - the string of target path
+   */

Review comment:
   Good catch. I guess the next addLinkRegex is used, but not this one.
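
   For reference, such a helper presumably just writes the linkRegex configuration key. A hypothetical sketch, assuming the key layout implied by the InodeTree code quoted earlier in this thread (the patch's real overloads and signatures may differ):
   
   ```
   // Hypothetical sketch: register a regex mount point by setting the
   // fs.viewfs.mounttable.<mountTableName>.linkRegex.<srcRegex> key to the
   // target path. The Constants names follow the quoted InodeTree code.
   public static void addLinkRegex(Configuration conf, String mountTableName,
       String srcRegex, String targetStr) {
     String key = Constants.CONFIG_VIEWFS_PREFIX + "." + mountTableName + "."
         + Constants.CONFIG_VIEWFS_LINK_REGEX + "." + srcRegex;
     conf.set(key, targetStr);
   }
   ```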





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481231)
Time Spent: 7h 10m  (was: 7h)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


JohnZZGithub commented on a change in pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r486070892



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
##
@@ -166,6 +166,42 @@ public static void addLinkNfly(final Configuration conf, final String src,
 addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
  
+
+  /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
+   * @param srcRegex - the src path regex expression that applies to this config
+   * @param targetStr - the string of target path
+   */

Review comment:
   Good catch. I guess the next addLinkRegex is used, but not this one.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


umamaheswararao commented on a change in pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r486063312



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
##
@@ -166,6 +166,42 @@ public static void addLinkNfly(final Configuration conf, final String src,
 addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
  
+
+  /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
+   * @param srcRegex - the src path regex expression that applies to this config
+   * @param targetStr - the string of target path
+   */

Review comment:
   This method is not used anywhere?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481229&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481229
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 05:08
Start Date: 10/Sep/20 05:08
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on a change in pull request 
#2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r486063312



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
##
@@ -166,6 +166,42 @@ public static void addLinkNfly(final Configuration conf, final String src,
 addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets);
   }
  
+
+  /**
+   * Add a LinkRegex to the config for the specified mount table.
+   * @param conf - get mountable config from this conf
+   * @param mountTableName - the mountable name of the regex config item
+   * @param srcRegex - the src path regex expression that applies to this config
+   * @param targetStr - the string of target path
+   */

Review comment:
   This method is not used anywhere?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481229)
Time Spent: 7h  (was: 6h 50m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-09 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193369#comment-17193369
 ] 

Akira Ajisaka commented on HADOOP-17255:


Thank you [~weichiu] for your comment.

The credential provider document 
([https://hadoop.apache.org/docs/r3.3.0/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html#Provider_Types])
 says how to configure a keystore provider for HDFS.
{quote}To wrap filesystem URIs with a jceks URI follow these steps:
 1. Take a filesystem URI such as hdfs://namenode:9001/users/alice/secrets.jceks
 2. Place jceks:// in front of the URL: 
jceks://hdfs://namenode:9001/users/alice/secrets.jceks
 3. Replace the second :// string with an @ symbol: 
jceks://hdfs@namenode:9001/users/alice/secrets.jceks
{quote}
Therefore I thought JavaKeyStoreProvider is supposed to work if the keystore is 
in HDFS.

If it won't work on HDFS, can we add a warning or error message when the 
keystore provider is on HDFS?

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API. In 
> DistributedFileSystem, the old API returns false if the src does not exist.
> As a result, JavaKeyStoreProvider fails to create a new key if the keystore 
> is in HDFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481226&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481226
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 04:39
Start Date: 10/Sep/20 04:39
Worklog Time Spent: 10m 
  Work Description: JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689975586


   @umamaheswararao  Thanks a lot!
   There's one checkstyle violation. However, I guess it's not introduced by 
the patch: Yetus flagged a long method, which is unchanged in this patch.
   
   > 
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java:487:
  protected InodeTree(final Configuration config, final String viewName,:3: 
Method length is 191 lines (max allowed is 150). [MethodLength]
   
   As for the failing UTs, I guess they are not related to the patch. 
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481226)
Time Spent: 6h 50m  (was: 6h 40m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> This jira is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to some fields from the source. 
> E.g. we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> i.e. we want to refer to the `cluster` and `user` fields in the source to 
> construct the target. It's impossible to achieve this with the current link 
> types. Though we could set up one-to-one mappings, the mount table would become 
> bloated if we have thousands of users. Besides, a regex mapping gives us more 
> flexibility. So we are going to build a regex-based mount point whose target 
> can refer to groups from the src regex mapping. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689975586


   @umamaheswararao  Thanks a lot!
   There's one checkstyle violation. However, I guess it's not introduced by 
the patch: Yetus flagged a long method, which is unchanged in this patch.
   
   > 
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java:487:
  protected InodeTree(final Configuration config, final String viewName,:3: 
Method length is 191 lines (max allowed is 150). [MethodLength]
   
   As for the failing UTs, I guess they are not related to the patch. 
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao closed pull request #2292: HDFS-15565.Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread GitBox


Hexiaoqiao closed pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481216&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481216
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 10/Sep/20 03:59
Start Date: 10/Sep/20 03:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689963635


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 38s |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 43s |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  22m 48s |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 34s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 50s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 45s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 13s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 59s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 11s |  the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  22m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 10s |  the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  21m 10s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 33s |  root: The patch generated 1 new + 182 unchanged - 1 fixed = 183 total (was 183)  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 49s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 28s |  the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   6m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 49s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 128m  6s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 23s |  The patch does not generate ASF License warnings.  |
   |  |   | 351m 30s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/21/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689963635


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 38s |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  22m 48s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 34s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 50s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 45s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 13s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 59s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 11s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  22m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 10s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  21m 10s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 33s |  root: The patch generated 1 new 
+ 182 unchanged - 1 fixed = 183 total (was 183)  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 49s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   6m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 49s |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  | 128m  6s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 23s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 351m 30s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 0d714f4a57d4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubun

[jira] [Commented] (HADOOP-17254) Upgrade hbase to 1.2.6.1 on branch-2.10

2020-09-09 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193342#comment-17193342
 ] 

Masatake Iwasaki commented on HADOOP-17254:
---

Since only hadoop-yarn-server-timelineservice-hbase* depends on hbase, bumping 
up to 1.4 might be an option even in a patch release.

> Upgrade hbase to 1.2.6.1 on branch-2.10
> ---
>
> Key: HADOOP-17254
> URL: https://issues.apache.org/jira/browse/HADOOP-17254
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17254) Upgrade hbase to 1.2.6.1 on branch-2.10

2020-09-09 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193341#comment-17193341
 ] 

Masatake Iwasaki commented on HADOOP-17254:
---

This is the minimum dependency bump needed to address 
[CVE-2018-8025|https://nvd.nist.gov/vuln/detail/CVE-2018-8025] in preparation 
for the 2.10.1 release. There was no issue even when bumping hbase.version up to 
1.4.13. YARN-8936 upgraded hbase.one.version from 1.2.6 to 1.4.8 (on 
branch-3.1 and above) without code changes.

> Upgrade hbase to 1.2.6.1 on branch-2.10
> ---
>
> Key: HADOOP-17254
> URL: https://issues.apache.org/jira/browse/HADOOP-17254
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2225: HDFS-15329. Provide FileContext based ViewFSOverloadScheme implementation

2020-09-09 Thread GitBox


umamaheswararao commented on a change in pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#discussion_r484551422



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##
@@ -0,0 +1,309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileContextTestHelper;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Hdfs;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+
+/**

Review comment:
   ViewFileSystemOverloadScheme --> ViewFsOverloadScheme

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##
@@ -0,0 +1,218 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import java.net.URISyntaxException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.AbstractFileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+import static 
org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_IGNORE_PORT_IN_MOUNT_TABLE_NAME;
+
+/**
+ * This class is the AbstractFileSystem implementation corresponding to
+ * ViewFileSystemOverloadScheme. It extends ViewFs to serve the overloaded
+ * scheme file system. Mount link configurations and in-memory mount table
+ * building behaviors are inherited from ViewFs.
+ * Unlike the ViewFs scheme (viewfs://), users are able to use any scheme.
+ *
+ * To use this class, the following configurations need to be added in
+ * core-site.xml file.
+ * 1) fs.AbstractFileSystem.<scheme>.impl
+ *= org.apache.hadoop.fs.viewfs.ViewFsOverloadScheme
+ * 2) fs.viewfs.overload.scheme.target.abstract.<scheme>.impl
+ *= <hadoop compatible file system implementation class>
+ *
+ * Here <scheme> can be any scheme, but with that scheme there should be a
+ * hadoop compatible file system available. Second configuration value should
+ * be the respective scheme's file system implementation class.
+ * Example: if scheme is configured with "hdfs", then the 2nd configuration
+ * class name will be org.apache.hadoop.fs.Hdfs.
+ *
+ * Use Case 1:
+ * ===
+ * If users want some of their existing cluster (hdfs://Cluster)
+ * data to mount with other hdfs and object store clusters(hdfs://NN1,
+ *
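For readers following this review: the quoted javadoc boils down to two core-site.xml keys. Here is a minimal sketch of the same setup through the Configuration API, assuming the scheme is "hdfs"; the key names are taken from the javadoc under review and may still change before the patch merges.

    import org.apache.hadoop.conf.Configuration;

    // Sketch only: wires the "hdfs" scheme through ViewFsOverloadScheme.
    // Key names follow the javadoc quoted above (still under review).
    public class ViewFsOverloadSchemeConfigSketch {
      public static Configuration build() {
        Configuration conf = new Configuration();
        // 1) Route the "hdfs" scheme to the overload-scheme AbstractFileSystem.
        conf.set("fs.AbstractFileSystem.hdfs.impl",
            "org.apache.hadoop.fs.viewfs.ViewFsOverloadScheme");
        // 2) Name the real AbstractFileSystem that backs the "hdfs" scheme.
        conf.set("fs.viewfs.overload.scheme.target.abstract.hdfs.impl",
            "org.apache.hadoop.fs.Hdfs");
        return conf;
      }
    }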

[jira] [Commented] (HADOOP-17254) Upgrade hbase to 1.2.6.1 on branch-2.10

2020-09-09 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193302#comment-17193302
 ] 

Michael Stack commented on HADOOP-17254:


Yeah, upgrade if you can... 1.2.x, 1.3.x are EOL'd.

> Upgrade hbase to 1.2.6.1 on branch-2.10
> ---
>
> Key: HADOOP-17254
> URL: https://issues.apache.org/jira/browse/HADOOP-17254
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17254) Upgrade hbase to 1.2.6.1 on branch-2.10

2020-09-09 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193275#comment-17193275
 ] 

Mingliang Liu commented on HADOOP-17254:


Sorry I do not know what the hbase version is for the build, but the latest 
HBase 1.x version is 1.4.13 and versions prior to 1.3 have been EoL. CC: 
[~stack]

> Upgrade hbase to 1.2.6.1 on branch-2.10
> ---
>
> Key: HADOOP-17254
> URL: https://issues.apache.org/jira/browse/HADOOP-17254
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2295: HDFS-15563. Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2295:
URL: https://github.com/apache/hadoop/pull/2295#issuecomment-689903340


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 55s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 56s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 22s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 24s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   4m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 52s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 52s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 59s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  96m  0s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 232m 27s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2295/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2295 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f01ab8cd82b5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2295/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2295/1/testReport/ |
   | Max. process+thread count | 4735 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://ci-hadoop.apa
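The bug class named in this PR's title is the classic string-prefix pitfall: a plain startsWith() check treats /user/alice-backup as living under the snapshottable directory /user/alice. Below is a self-contained sketch of the pitfall and a boundary-aware check; the helper names are illustrative, not taken from the patch.

    // Illustrative only; not the actual HDFS-15563 patch code.
    public class PrefixMatchSketch {
      // Buggy: "/user/alice-backup/f" starts with "/user/alice".
      static boolean isUnderBuggy(String path, String dir) {
        return path.startsWith(dir);
      }

      // Safer: exact match or a path-separator boundary after the prefix.
      static boolean isUnder(String path, String dir) {
        return path.equals(dir) || path.startsWith(dir + "/");
      }

      public static void main(String[] args) {
        String snapshotRoot = "/user/alice";
        System.out.println(isUnderBuggy("/user/alice-backup/f", snapshotRoot)); // true (wrong)
        System.out.println(isUnder("/user/alice-backup/f", snapshotRoot));      // false
        System.out.println(isUnder("/user/alice/f", snapshotRoot));             // true
      }
    }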

[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481141&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481141
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 23:30
Start Date: 09/Sep/20 23:30
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689877639


   Great work @JohnZZGithub. 
   Thanks a lot for your hard work in this. 
   
   +1 pending jenkins clean report.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481141)
Time Spent: 6.5h  (was: 6h 20m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> This jira is created to support regex based mount point in Inode Tree. We 
> noticed that mount point only support fixed target path. However, we might 
> have user cases when target needs to refer some fields from source. e.g. We 
> might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, we 
> want to refer `cluster` and `user` field in source to construct target. It's 
> impossible to archive this with current link type. Though we could set 
> one-to-one mapping, the mount table would become bloated if we have thousands 
> of users. Besides, a regex mapping would empower us more flexibility. So we 
> are going to build a regex based mount point which target could refer groups 
> from src regex mapping. 
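To make the description's example concrete, here is a small, self-contained illustration of how a regex with named capture groups can rewrite /cluster1/user1 into /cluster1-dc1/user-nn-user1. It shows only the mapping idea; the actual mount-table configuration syntax added by this patch is documented in the PR's ViewFs.md changes.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustration of the source -> target rewrite from the issue description.
    public class RegexMountSketch {
      public static void main(String[] args) {
        Pattern rule = Pattern.compile("^/(?<cluster>\\w+)/(?<user>\\w+)$");
        Matcher m = rule.matcher("/cluster1/user1");
        if (m.matches()) {
          String target = "/" + m.group("cluster") + "-dc1/user-nn-" + m.group("user");
          System.out.println(target); // prints /cluster1-dc1/user-nn-user1
        }
      }
    }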



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-689874828


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
39 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 21s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 42s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 39s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 52s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 45s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 35s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 46s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2061 unchanged - 
1 fixed = 2061 total (was 2062)  |
   | +1 :green_heart: |  compile  |  16m 54s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 54s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1955 unchanged - 
1 fixed = 1955 total (was 1956)  |
   | -0 :warning: |  checkstyle  |   2m 45s |  root: The patch generated 18 new 
+ 266 unchanged - 26 fixed = 284 total (was 292)  |
   | +1 :green_heart: |  mvnsite  |   3m 15s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 37s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
 |
   | +1 :green_heart: |  javadoc  |   0m 41s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 21s |  hadoop-common-project/hadoop-common 
generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 36s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   7m  2s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 32s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 189m 33s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchroniz
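For context on what this PR adds: streams and file systems expose an IOStatistics view of their counters and gauges. Here is a sketch of how the API under review might be consumed; the class and method names are taken from the patch and could still change before merge.

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.statistics.IOStatistics;
    import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

    // Sketch against the API proposed in PR #2069 (names may change).
    public class IOStatisticsSketch {
      static void dump(FSDataInputStream in) {
        IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
        if (stats != null) {
          // Counters (e.g. bytes read) are exposed as a name -> long map.
          stats.counters().forEach((name, value) ->
              System.out.println(name + " = " + value));
        }
      }
    }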

[jira] [Commented] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193227#comment-17193227
 ] 

Wei-Chiu Chuang commented on HADOOP-17255:
--

I was always under the impression that the key store isn't supposed to be in HDFS 
because it won't work. Even if you change the rename() call, there are other 
FileSystem calls invoked by JavaKeyStore that wouldn't work. Am I right?

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API, and in 
> DistributedFileSystem the old API returns false if the src does not exist.
> As a result, JavaKeyStoreProvider fails to create a new key if the keystore 
> is on HDFS.
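The contract difference behind this bug is visible in the public API: the deprecated FileSystem#rename(Path, Path) reports a missing source by returning false on HDFS, while FileContext's options-taking rename throws FileNotFoundException. A sketch of both shapes, assuming standard FileSystem/FileContext semantics; this is not the JavaKeyStoreProvider code itself.

    import java.io.IOException;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Options;
    import org.apache.hadoop.fs.Path;

    public class RenameContractSketch {
      // Old API: on HDFS this returns false when src is missing, so callers
      // expecting FileNotFoundException never see one.
      static void renameOrFail(FileSystem fs, Path src, Path dst) throws IOException {
        if (!fs.rename(src, dst)) {
          throw new IOException("rename failed: " + src + " -> " + dst);
        }
      }

      // FileContext's rename with options throws FileNotFoundException
      // when src does not exist.
      static void renameStrict(FileContext fc, Path src, Path dst) throws IOException {
        fc.rename(src, dst, Options.Rename.NONE);
      }
    }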



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2266: HDFS-15554. RBF: force router check file existence in destinations before adding/updating mount points

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#issuecomment-689860446


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  30m 53s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 46s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 13s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32)  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new 
+ 30 unchanged - 2 fixed = 30 total (was 32)  |
   | -0 :warning: |  checkstyle  |   0m 17s |  
hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   8m 28s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 111m 23s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2266/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2266 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml markdownlint |
   | uname | Linux 8083f1367af7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2266/5/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/had
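The check this PR's title describes amounts to probing each destination namespace before the router accepts a mount entry. A hypothetical sketch of that pre-check follows; the real router code resolves destinations through its own RPC clients, and the helper here is illustrative only.

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MountPointPreCheckSketch {
      // Reject the mount entry if any destination path is absent.
      static void checkDestinations(List<FileSystem> namespaces, Path dest)
          throws IOException {
        for (FileSystem fs : namespaces) {
          if (!fs.exists(dest)) {
            throw new IOException("Destination " + dest
                + " does not exist in " + fs.getUri());
          }
        }
      }
    }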

[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481107
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 22:20
Start Date: 09/Sep/20 22:20
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on a change in pull request 
#2185:
URL: https://github.com/apache/hadoop/pull/2185#discussion_r485953722



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
##
@@ -366,6 +366,69 @@ Don't want to change scheme or difficult to copy 
mount-table configurations to a
 
 Please refer to the [View File System Overload Scheme 
Guide](./ViewFsOverloadScheme.html)
 
+Regex Pattern Based Mount Points
+
+
+The view file system mount points are a key-value based mapping system. That is not friendly for use cases whose mapping config can be abstracted to rules. E.g. users may want to provide a GCS bucket per user, and there might be thousands of users in total. The old key-value based approach won't work well for several reasons:
+
+1. The mount table is used by FileSystem clients. There's a cost to spreading the config to all clients, and we should avoid it if possible. The [View File System Overload Scheme Guide](./ViewFsOverloadScheme.html) can help the distribution by central mount table management, but the mount table still has to be updated on every change. That churn could be largely avoided with a rule-based mount table.
+
+2. The client has to understand all the KVs in the mount table. This is not ideal when the mount table grows to thousands of items. E.g. thousands of file systems might be initialized even when a user only needs one, and the config itself becomes bloated at scale.
+
+### Understand the Difference
+
+In the key-value based mount table, the view file system treats every mount point as a partition. Several file system APIs lead to an operation on all partitions. E.g. given an HDFS cluster with multiple mounts, running the “hadoop fs -put file viewfs://hdfs.namenode.apache.org/tmp/” command to copy data from local disk to the HDFS cluster triggers ViewFileSystem to call setVerifyChecksum(), which initializes the file system for every mount point.
+For a regex-based mount table entry, we can't know the corresponding path until parsing, so such entries are ignored in these cases. The file system (ChRootedFileSystem) is created upon access, but the underlying file system is cached by the inner cache of ViewFileSystem.

Review comment:
Thanks for having a separate JIRA. It makes sense to me!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481107)
Time Spent: 6h 20m  (was: 6h 10m)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> This jira is created to support regex based mount point in Inode Tree. We 
> noticed that mount point only support fixed target path. However, we might 
> have user cases when target needs to refer some fields from source. e.g. We 
> might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, we 
> want to refer `cluster` and `user` field in source to construct target. It's 
> impossible to archive this with current link type. Though we could set 
> one-to-one mapping, the mount table would become bloated if we have thousands 
> of users. Besides, a regex mapping would empower us more flexibility. So we 
> are going to build a regex based mount point which target could refer groups 
> from 
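The "Understand the Difference" section quoted above turns on one point: pathless calls such as setVerifyChecksum() fan out to every mount target, which forces each target file system to be initialized, while a regex entry has no concrete target until a path is actually resolved. A minimal sketch of that fan-out, illustrative rather than ViewFileSystem's actual code:

    import java.util.List;
    import org.apache.hadoop.fs.FileSystem;

    public class FanOutSketch {
      // Pathless operations must touch every mounted file system, so every
      // mount target gets initialized eagerly.
      static void setVerifyChecksumAll(List<FileSystem> mountTargets, boolean verify) {
        for (FileSystem fs : mountTargets) {
          fs.setVerifyChecksum(verify);
        }
      }
    }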

[GitHub] [hadoop] hadoop-yetus commented on pull request #743: HADOOP-11452 make rename/3 public

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #743:
URL: https://github.com/apache/hadoop/pull/743#issuecomment-689849388


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  compile  |   0m 35s |  root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 22s |  The patch fails to run 
checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 29s |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 34s |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 29s |  hadoop-hdfs-client in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 35s |  hadoop-aws in trunk failed.  |
   | -1 :x: |  shadedclient  |  12m 52s |  branch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  12m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 23s |  the patch passed  |
   | -1 :x: |  compile  |  15m 11s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |  15m 11s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 27s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 27s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   3m  8s |  root: The patch generated 587 
new + 0 unchanged - 0 fixed = 587 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   4m 43s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 8 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 20s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 3 new 
+ 1 unchanged - 0 fixed = 4 total (was 1)  |
   | -1 :x: |  findbugs  |   2m 15s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 24s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   2m  3s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  96m 17s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |   0m 33s |  hadoop-openstack in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 267m 36s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Should org.apache.hadoop.fs.impl.RenameHelper$RenameValidationResult 
be a _static_ inner class?  At RenameHelper.java:inner class?  At 
RenameHelper.java:[line 320] |
   | Failed junit tests | 
hadoop.fs.contract.rawlocal.TestRawlocalContractRenameEx |
   |   | hadoop.fs.viewfs.TestFcMainOperationsLocalFs |
   |   | hadoop.fs.TestSymlinkLocalFSFileSystem |
   |   | hadoop.fs.TestTrash |
   |   | hadoop.fs.TestFSMainOperationsLocalFileSystem |
   |   | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
   |   | hadoop.fs.viewfs.TestViewFsLocalFs |
   |   | had
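For readers wondering what "rename/3" means: FileSystem has long had a protected rename(Path, Path, Options.Rename...) with well-defined failure semantics, and this PR proposes making that three-argument form public. Its call shape already exists publicly on FileContext; here is a sketch of the semantics (OVERWRITE replaces an existing destination, NONE treats one as an error).

    import java.io.IOException;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.Options;
    import org.apache.hadoop.fs.Path;

    public class RenameOptionsSketch {
      static void move(FileContext fc, Path src, Path dst, boolean overwrite)
          throws IOException {
        fc.rename(src, dst,
            overwrite ? Options.Rename.OVERWRITE : Options.Rename.NONE);
      }
    }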

[GitHub] [hadoop] hadoop-yetus commented on pull request #610: [MAPREDUCE-7193] Review of CombineFile Code

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #610:
URL: https://github.com/apache/hadoop/pull/610#issuecomment-689838230


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 14s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 56s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 46s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  7s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 49s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 34s |  
hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 3 new + 
55 unchanged - 20 fixed = 58 total (was 75)  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 59s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 48s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 140m 46s |  hadoop-mapreduce-client-jobclient 
in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 47s |  The patch generated 1 ASF License 
warnings.  |
   |  |   | 230m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/610 |
   | JIRA Issue | MAPREDUCE-7193 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a9508182a1ba 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-610/1/testReport/ |
   | asflicense | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 1320 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=481066&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481066
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 21:27
Start Date: 09/Sep/20 21:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-689831046


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 33s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 17s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 48s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 38s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  83m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2278 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint compile javac javadoc mvninstall shadedclient findbugs checkstyle 
xml |
   | uname | Linux 454ccf5dccf6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/9/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2278: HADOOP-17191. ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-689831046


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 33s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 17s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 48s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 38s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  83m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2278 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint compile javac javadoc mvninstall shadedclient findbugs checkstyle 
xml |
   | uname | Linux 454ccf5dccf6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/9/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/9/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-

[jira] [Work logged] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?focusedWorklogId=481060&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481060
 ]

ASF GitHub Bot logged work on HADOOP-15891:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 21:08
Start Date: 09/Sep/20 21:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689822943


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 30s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 50s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 46s |  root: The patch generated 2 new 
+ 182 unchanged - 1 fixed = 184 total (was 183)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 23s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 16s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 43s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  96m 58s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 284m  1s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestDFSOutputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux d70552001d55 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-689822943


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 30s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 50s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 46s |  root: The patch generated 2 new 
+ 182 unchanged - 1 fixed = 184 total (was 183)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 23s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 16s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 43s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  96m 58s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 284m  1s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestDFSOutputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux d70552001d55 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 85119267be7 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/20/artifact/out/di

[GitHub] [hadoop] smengcl opened a new pull request #2295: HDFS-15563. Incorrect getTrashRoot return value when a non-snapshottable dir prefix matches the path of a snapshottable dir

2020-09-09 Thread GitBox


smengcl opened a new pull request #2295:
URL: https://github.com/apache/hadoop/pull/2295


   https://issues.apache.org/jira/browse/HDFS-15563
   
   This may impact clusters where `dfs.namenode.snapshot.trashroot.enabled` is 
set to `true`.
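   
   The usual failure mode behind a bug like this is string-prefix matching on
   paths without respecting component boundaries, e.g. `/snapdir` being
   treated as a parent of `/snapdir2/file`. A hedged sketch of that pattern --
   the method names and logic are illustrative, not the actual
   DistributedFileSystem code:
   
   ```java
   // Illustrative only; not the real getTrashRoot() implementation.
   class PathPrefixSketch {
     // Naive: "/snapdir" is a string prefix of "/snapdir2/file", so a
     // non-snapshottable sibling wrongly matches the snapshottable dir.
     static boolean naiveMatch(String path, String snapshottableDir) {
       return path.startsWith(snapshottableDir);
     }
   
     // Component-aware: match the directory itself, or only paths that
     // sit strictly under it across a '/' boundary.
     static boolean componentMatch(String path, String snapshottableDir) {
       return path.equals(snapshottableDir)
           || path.startsWith(snapshottableDir + "/");
     }
   }
   ```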



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2278: HADOOP-17191. ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-689816045


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 27s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 17s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 35s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  74m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2278 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint compile javac javadoc mvninstall shadedclient findbugs checkstyle 
xml |
   | uname | Linux b09effa10fbb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/8/testReport/ |
   | Max. process+thread count | 401 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/8/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


---

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=481054&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481054
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 20:53
Start Date: 09/Sep/20 20:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#issuecomment-689816045


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 27s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 17s |  The patch generated 0 new + 
104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 35s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  74m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2278 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint compile javac javadoc mvninstall shadedclient findbugs checkstyle 
xml |
   | uname | Linux b09effa10fbb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2278/8/testReport/ |
   | Max. process+thread count | 401 (vs. ulimit of 5500) |

[GitHub] [hadoop] goiri commented on a change in pull request #2266: HDFS-15554. RBF: force router check file existence in destinations before adding/updating mount points

2020-09-09 Thread GitBox


goiri commented on a change in pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#discussion_r485892025



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
##
@@ -78,6 +83,11 @@
   "Hadoop:service=Router,name=FederationRPC";
   private static List mockMountTable;
   private static StateStoreService stateStore;
+  private static RouterRpcClient mockRpcClient;
+  private static final Map mockResponse0 =

Review comment:
   checkstyle is complaining
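   
   The authoritative detail is in the linked checkstyle report; one common
   hit for a declaration like this is the ConstantName rule, which expects
   `private static final` fields to be UPPER_SNAKE_CASE. A sketch of a
   conforming declaration, with hypothetical generic parameters since the
   archive stripped them:
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   class NamingSketch {
     // Hypothetical types; ConstantName wants UPPER_SNAKE_CASE here.
     private static final Map<String, String> MOCK_RESPONSE_0 = new HashMap<>();
   }
   ```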





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #2266: HDFS-15554. RBF: force router check file existence in destinations before adding/updating mount points

2020-09-09 Thread GitBox


goiri commented on a change in pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#discussion_r485891289



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
##
@@ -128,7 +175,6 @@ public void testAddMountTable() throws IOException {
 MountTable newEntry = MountTable.newInstance(
 "/testpath", Collections.singletonMap("ns0", "/testdir"),
 Time.now(), Time.now());
-

Review comment:
   Avoid the empty change





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17181) Handle transient stream read failures in FileSystem contract tests

2020-09-09 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193153#comment-17193153
 ] 

Mingliang Liu commented on HADOOP-17181:


Added 3.4.0 as the fix version as well.

> Handle transient stream read failures in FileSystem contract tests
> --
>
> Key: HADOOP-17181
> URL: https://issues.apache.org/jira/browse/HADOOP-17181
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Seen 2x recently: failure in ITestS3AContractUnbuffer because not enough data 
> came back in the read.
> The contract test assumes that stream.read() will return everything, but it 
> could be a buffering problem. Proposed: switch to readFully to see whether it 
> is a quirk of the read/get path or something actually wrong with the 
> production code.
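
For readers outside the contract-test work: InputStream.read(byte[]) may
legally return fewer bytes than requested, so an assertion after a single
read() call is racy against client-side buffering, while
DataInputStream.readFully loops until the buffer is filled or throws
EOFException. A minimal sketch of the distinction (simplified; the contract
tests operate on FSDataInputStream, which extends DataInputStream):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

class ReadFullySketch {
  // Fragile: may return any count from 1 to buf.length (or -1 at EOF),
  // e.g. when the store client has only partially buffered the data.
  static int readOnce(InputStream in, byte[] buf) throws IOException {
    return in.read(buf);
  }

  // Robust: readFully keeps reading until buf is completely filled,
  // throwing EOFException if the stream ends first.
  static void readAll(InputStream in, byte[] buf) throws IOException {
    new DataInputStream(in).readFully(buf);
  }
}
```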



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17181) Handle transient stream read failures in FileSystem contract tests

2020-09-09 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-17181:
---
Fix Version/s: 3.4.0

> Handle transient stream read failures in FileSystem contract tests
> --
>
> Key: HADOOP-17181
> URL: https://issues.apache.org/jira/browse/HADOOP-17181
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Seen 2x recently: failure in ITestS3AContractUnbuffer because not enough data 
> came back in the read.
> The contract test assumes that stream.read() will return everything, but it 
> could be a buffering problem. Proposed: switch to readFully to see whether it 
> is a quirk of the read/get path or something actually wrong with the 
> production code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16118) S3Guard to support on-demand DDB tables

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16118?focusedWorklogId=481001&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-481001
 ]

ASF GitHub Bot logged work on HADOOP-16118:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 19:12
Start Date: 09/Sep/20 19:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #731:
URL: https://github.com/apache/hadoop/pull/731#issuecomment-689760736


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  12m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  13m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  branch-3.2 passed  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-aws: The 
patch generated 2 new + 17 unchanged - 1 fixed = 19 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   4m 40s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  86m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/731 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux a138593e2a27 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / b5d24d6 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 481001)

[jira] [Updated] (HADOOP-16118) S3Guard to support on-demand DDB tables

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-16118:

Labels: pull-request-available  (was: )

> S3Guard to support on-demand DDB tables
> ---
>
> Key: HADOOP-16118
> URL: https://issues.apache.org/jira/browse/HADOOP-16118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
> Attachments: HADOOP-16118-branch-3.2-001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> AWS now supports [on demand DDB 
> capacity|https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/]
>  
> This has the lowest cost and best scalability, so it could be the default 
> capacity; also add a new option to set-capacity.
> Will depend on an SDK update: created HADOOP-16117.
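
For context: with the v1 AWS SDK, on-demand mode is selected at
table-creation time via the billing mode instead of a ProvisionedThroughput.
A hedged sketch of the two request shapes -- it omits the key schema and
attribute definitions a real CreateTable call requires, and is not the
S3Guard code itself:

```java
import com.amazonaws.services.dynamodbv2.model.BillingMode;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;

class OnDemandTableSketch {
  // On-demand: no capacity planning, pay per request.
  static CreateTableRequest onDemand(String table) {
    return new CreateTableRequest()
        .withTableName(table)
        .withBillingMode(BillingMode.PAY_PER_REQUEST);
  }

  // Provisioned mode, the previous default: fixed read/write capacity.
  static CreateTableRequest provisioned(String table, long rcu, long wcu) {
    return new CreateTableRequest()
        .withTableName(table)
        .withProvisionedThroughput(new ProvisionedThroughput(rcu, wcu));
  }
}
```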



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #731: HADOOP-16118. S3Guard to support on-demand DDB tables (branch-3.2).

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #731:
URL: https://github.com/apache/hadoop/pull/731#issuecomment-689760736


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  12m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  13m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  branch-3.2 passed  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-aws: The 
patch generated 2 new + 17 unchanged - 1 fixed = 19 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   4m 40s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  86m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/731 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux a138593e2a27 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / b5d24d6 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-731/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17195) Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17195?focusedWorklogId=480986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480986
 ]

ASF GitHub Bot logged work on HADOOP-17195:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:42
Start Date: 09/Sep/20 18:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2294:
URL: https://github.com/apache/hadoop/pull/2294#issuecomment-689745307


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  9s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 31s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-azure: The 
patch generated 7 new + 2 unchanged - 0 fixed = 9 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 38s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  90m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2294 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 70cc2f756b0c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/testReport/ |
   | Max. process+thread count | 309 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/console |
   | v
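   
   On the issue itself: the title suggests the OutOfMemory comes from
   per-stream upload buffers and threads accumulating faster than they drain.
   A common remedy, sketched here purely as an assumption about the approach
   rather than the actual ABFS patch, is one bounded executor shared by all
   streams of a store instance, with a bounded queue so writers feel
   backpressure:
   
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;
   
   class SharedStorePoolSketch {
     // One pool per store, not per output stream. CallerRunsPolicy makes
     // the writing thread execute the task itself once the queue is full,
     // throttling submission instead of buffering without bound.
     static ExecutorService newStorePool(int threads, int queueDepth) {
       return new ThreadPoolExecutor(
           threads, threads,
           60L, TimeUnit.SECONDS,
           new LinkedBlockingQueue<>(queueDepth),
           new ThreadPoolExecutor.CallerRunsPolicy());
     }
   }
   ```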

[GitHub] [hadoop] hadoop-yetus commented on pull request #2294: HADOOP-17195. ABFS Store thread pool for stream IO.

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2294:
URL: https://github.com/apache/hadoop/pull/2294#issuecomment-689745307


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 51s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 13s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  9s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 31s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-azure: The 
patch generated 7 new + 2 unchanged - 0 fixed = 9 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 38s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  90m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2294 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 70cc2f756b0c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e5fe3262702 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/testReport/ |
   | Max. process+thread count | 309 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2294/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about 

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=480980&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480980
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:31
Start Date: 09/Sep/20 18:31
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#discussion_r485828996



##
File path: hadoop-tools/hadoop-azure/.gitignore
##
@@ -1,2 +1,4 @@
 .checkstyle
-bin/
\ No newline at end of file
+bin/
+testlogs

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480980)
Time Spent: 3h 10m  (was: 3h)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> The ABFS driver supports various authorization mechanisms such as OAuth, 
> SharedKey, and Shared Access Signature. The integration tests need to be 
> executed against accounts with and without hierarchical namespace support 
> using various authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer run the integration tests with different variants of 
> configurations automatically.
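
To make the ask concrete, a scenario for such an automated run could be
declared as parallel properties/values arrays and applied before each maven
invocation. A minimal bash sketch, reusing the validate and changeconf helpers
from the testsupport.sh under review; the runtest driver and the two property
names are illustrative stand-ins, not part of the patch:

scenario="HNS-SharedKey"
properties=("fs.azure.test.namespace.enabled" "fs.azure.account.auth.type")
values=("true" "SharedKey")

runtest() {
  validate                                  # arrays non-empty and equal size
  for ((i = 0; i < ${#properties[@]}; i++)); do
    changeconf "${properties[$i]}" "${values[$i]}"  # rewrite the test config
  done
  mvn -T 1C clean verify                    # run the integration tests
}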



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #2278: HADOOP-17191. ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread GitBox


bilaharith commented on a change in pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#discussion_r485828835



##
File path: hadoop-tools/hadoop-azure/testsupport.sh
##
@@ -0,0 +1,109 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+conffile=src/test/resources/abfs-testrun-configs.xml
+bkpconffile=src/test/resources/abfs-testrun-configs_BKP.xml
+testresultsregex="Results:(\n|.)*?Tests run:"
+testresultsfilename=
+starttime=
+
+validate() {
+  if [ -z "$scenario" ]; then
+   echo "Exiting. scenario cannot be empty"
+   exit
+  fi
+  propertiessize=${#properties[@]}
+  valuessize=${#values[@]}
+  if [ "$propertiessize" -lt 1 ] || [ "$valuessize" -lt 1 ] || [ 
"$propertiessize" -ne "$valuessize" ]; then
+echo "Exiting. Both properties and values arrays has to be populated and 
of same size. Please check for scenario $scenario"
+exit
+  fi
+}
+
+checkdependancies() {
+  if ! [ "$(command -v pcregrep)" ]; then
+echo "Exiting. pcregrep is required to run the script."
+exit
+  fi
+  if ! [ "$(command -v xmlstarlet)" ]; then
+echo "Exiting. xmlstarlet is required to run the script."
+exit
+  fi
+}
+
+changeconf() {
+  xmlstarlet ed -P -L -d "/configuration/property[name='$1']" $conffile
+  xmlstarlet ed -P -L -s /configuration -t elem -n propertyTMP -v "" -s 
/configuration/propertyTMP -t elem -n name -v "$1" -r 
/configuration/propertyTMP -v property $conffile
+  xmlstarlet ed -P -L -s "/configuration/property[name='$1']" -t elem -n value 
-v "$2" $conffile

Review comment:
   Could not find this documented in the xmlstarlet documentation, nor could I 
produce a case where it returns non-zero. Still added a check to be on the 
safe side.
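
For readers unfamiliar with xmlstarlet: the three ed invocations above drop
any existing property with the given name, append a fresh property element
with its name child, then attach the value child. A self-contained sketch of
that pipeline with the extra exit-status check the comment mentions; conf.xml
and setconf are hypothetical stand-ins:

conffile=conf.xml
cat > "$conffile" <<'EOF'
<configuration>
  <property><name>fs.azure.account.auth.type</name><value>OAuth</value></property>
</configuration>
EOF

setconf() {
  xmlstarlet ed -P -L -d "/configuration/property[name='$1']" "$conffile" &&
  xmlstarlet ed -P -L -s /configuration -t elem -n propertyTMP -v "" \
    -s /configuration/propertyTMP -t elem -n name -v "$1" \
    -r /configuration/propertyTMP -v property "$conffile" &&
  xmlstarlet ed -P -L -s "/configuration/property[name='$1']" \
    -t elem -n value -v "$2" "$conffile" ||
  { echo "Exiting. xmlstarlet failed while setting $1"; exit 1; }
}

setconf fs.azure.account.auth.type SharedKey
xmlstarlet sel -t -v \
  "/configuration/property[name='fs.azure.account.auth.type']/value" "$conffile"
# prints: SharedKey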

##
File path: hadoop-tools/hadoop-azure/testsupport.sh
##
@@ -0,0 +1,109 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+conffile=src/test/resources/abfs-testrun-configs.xml
+bkpconffile=src/test/resources/abfs-testrun-configs_BKP.xml
+testresultsregex="Results:(\n|.)*?Tests run:"
+testresultsfilename=
+starttime=
+
+validate() {
+  if [ -z "$scenario" ]; then
+   echo "Exiting. scenario cannot be empty"
+   exit
+  fi
+  propertiessize=${#properties[@]}
+  valuessize=${#values[@]}
+  if [ "$propertiessize" -lt 1 ] || [ "$valuessize" -lt 1 ] || [ 
"$propertiessize" -ne "$valuessize" ]; then
+echo "Exiting. Both properties and values arrays has to be populated and 
of same size. Please check for scenario $scenario"
+exit
+  fi
+}
+
+checkdependancies() {
+  if ! [ "$(command -v pcregrep)" ]; then
+echo "Exiting. pcregrep is required to run the script."
+exit
+  fi
+  if ! [ "$(command -v xmlstarlet)" ]; then
+echo "Exiting. xmlstarlet is required to run the script."
+exit
+  fi
+}
+
+changeconf() {
+  xmlstarlet ed -P -L -d "/configuration/property[name='$1']" $conffile
+  xmlstarlet ed -P -L -s /configuration -t elem -n propertyTMP -v "" -s 
/configuration/propertyTMP -t elem -n name -v "$1" -r 
/configuration/propertyTMP -v property $conffile
+  xmlstarlet ed -P -L -s "/configuration/property[name='$1']" -t elem -n value 
-v "$2" $conffile
+}
+
+testwithconfs() {
+  propertiessize=${#properties[@]}
+  valuessize=${#values[@]}
+  if [ "$propertiessize" -ne "$valuessize" ]; then
+echo "Exiting. Number of properties and values differ for $scenario"
+exit
+  fi
+  for ((i = 0; i < propertiessize; i++)); do
+key=${properties[$i]}
+val=${values[$i]}
+

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=480975&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480975
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:29
Start Date: 09/Sep/20 18:29
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#discussion_r485827879



##
File path: hadoop-tools/hadoop-azure/testsupport.sh
##
@@ -0,0 +1,109 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+conffile=src/test/resources/abfs-testrun-configs.xml
+bkpconffile=src/test/resources/abfs-testrun-configs_BKP.xml
+testresultsregex="Results:(\n|.)*?Tests run:"
+testresultsfilename=
+starttime=
+
+validate() {
+  if [ -z "$scenario" ]; then
+   echo "Exiting. scenario cannot be empty"
+   exit
+  fi
+  propertiessize=${#properties[@]}
+  valuessize=${#values[@]}
+  if [ "$propertiessize" -lt 1 ] || [ "$valuessize" -lt 1 ] || [ 
"$propertiessize" -ne "$valuessize" ]; then
+echo "Exiting. Both properties and values arrays has to be populated and 
of same size. Please check for scenario $scenario"
+exit
+  fi
+}
+
+checkdependancies() {
+  if ! [ "$(command -v pcregrep)" ]; then
+echo "Exiting. pcregrep is required to run the script."
+exit

Review comment:
   Done
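
The hard dependency on pcregrep exists because the script's testresultsregex
must match across lines, which plain grep cannot do. A one-line sketch, with
mvn-test.log as a hypothetical captured maven log:

pcregrep -M "Results:(\n|.)*?Tests run:" mvn-test.log
# typical match:
# Results:
#
# Tests run: 94, Failures: 0, Errors: 0, Skipped: 0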





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480975)
Time Spent: 2h 50m  (was: 2h 40m)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> The ABFS driver supports various authorization mechanisms such as OAuth, 
> SharedKey, and Shared Access Signature. The integration tests need to be 
> executed against accounts with and without hierarchical namespace support 
> using various authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer run the integration tests with different variants of 
> configurations automatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=480974&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480974
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:27
Start Date: 09/Sep/20 18:27
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#discussion_r485827246



##
File path: hadoop-tools/hadoop-azure/testsupport.sh
##
@@ -0,0 +1,109 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+conffile=src/test/resources/abfs-testrun-configs.xml
+bkpconffile=src/test/resources/abfs-testrun-configs_BKP.xml
+testresultsregex="Results:(\n|.)*?Tests run:"
+testresultsfilename=
+starttime=
+
+validate() {
+  if [ -z "$scenario" ]; then
+   echo "Exiting. scenario cannot be empty"
+   exit
+  fi
+  propertiessize=${#properties[@]}
+  valuessize=${#values[@]}
+  if [ "$propertiessize" -lt 1 ] || [ "$valuessize" -lt 1 ] || [ 
"$propertiessize" -ne "$valuessize" ]; then
+echo "Exiting. Both properties and values arrays has to be populated and 
of same size. Please check for scenario $scenario"
+exit
+  fi
+}
+
+checkdependancies() {

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480974)
Time Spent: 2h 40m  (was: 2.5h)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> The ABFS driver supports various authorization mechanisms such as OAuth, 
> SharedKey, and Shared Access Signature. The integration tests need to be 
> executed against accounts with and without hierarchical namespace support 
> using various authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer run the integration tests with different variants of 
> configurations automatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17236) Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640

2020-09-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193107#comment-17193107
 ] 

Hadoop QA commented on HADOOP-17236:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
57s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
18s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
19s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} branch/hadoop-project no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
44s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
12s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {
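
The patch itself, per the issue title, is a one-line version bump. A sketch of
the corresponding entry in hadoop-project/pom.xml; org.yaml:snakeyaml are the
artifact's standard Maven coordinates, and placing it under
dependencyManagement is an assumption about the layout:

<dependency>
  <groupId>org.yaml</groupId>
  <artifactId>snakeyaml</artifactId>
  <version>1.26</version>
</dependency>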

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=480973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480973
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:21
Start Date: 09/Sep/20 18:21
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#discussion_r485823482



##
File path: 
hadoop-tools/hadoop-azure/src/test/resources/abfs-testrun-configs.xml.template
##
@@ -0,0 +1,156 @@
+[... XML prologue and license header stripped by the archive ...]
+<configuration>
+  <property>
+    <name>fs.azure.account.auth.type</name>
+    <value>SharedKey</value>
+  </property>

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> The ABFS driver supports various authorization mechanisms such as OAuth, 
> SharedKey, and Shared Access Signature. The integration tests need to be 
> executed against accounts with and without hierarchical namespace support 
> using various authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer run the integration tests with different variants of 
> configurations automatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=480966&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480966
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:12
Start Date: 09/Sep/20 18:12
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#discussion_r485818694



##
File path: 
hadoop-tools/hadoop-azure/src/test/resources/abfs-testrun-configs.xml.template
##
@@ -0,0 +1,156 @@
+[... XML template content stripped by the archive ...]

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> The ABFS driver supports various authorization mechanisms such as OAuth, 
> SharedKey, and Shared Access Signature. The integration tests need to be 
> executed against accounts with and without hierarchical namespace support 
> using various authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer run the integration tests with different variants of 
> configurations automatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=480964&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480964
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 18:08
Start Date: 09/Sep/20 18:08
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2278:
URL: https://github.com/apache/hadoop/pull/2278#discussion_r485816830



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelegationSAS.java
##
@@ -64,7 +64,7 @@
   LoggerFactory.getLogger(ITestAzureBlobFileSystemDelegationSAS.class);
 
   public ITestAzureBlobFileSystemDelegationSAS() throws Exception {
-// These tests rely on specific settings in azure-auth-keys.xml:
+// These tests rely on specific settings in abfs-testrun-configs.xml:

Review comment:
   Basically we are renaming azure-auth-keys to abfs-testrun-configs, because 
we expect it to contain non-auth-related configs too.
   In the main config file (azure-test.xml) the xinclude now points to 
abfs-testrun-configs.
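
For context, the XInclude wiring described above looks roughly as follows; a
sketch assuming the usual Hadoop test-config layout, where the fallback
element keeps builds working when the git-ignored include file is absent:

<!-- azure-test.xml (sketch) -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="abfs-testrun-configs.xml">
    <xi:fallback/>
  </xi:include>
</configuration>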





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480964)
Time Spent: 2h 10m  (was: 2h)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> The ABFS driver supports various authorization mechanisms such as OAuth, 
> SharedKey, and Shared Access Signature. The integration tests need to be 
> executed against accounts with and without hierarchical namespace support 
> using various authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer run the integration tests with different variants of 
> configurations automatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #582: HDFS-14350:dfs.datanode.ec.reconstruction.threads not take effect

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #582:
URL: https://github.com/apache/hadoop/pull/582#issuecomment-689726036


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  6s |  https://github.com/apache/hadoop/pull/582 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/582 |
   | JIRA Issue | HDFS-14350 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-582/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?focusedWorklogId=480955&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480955
 ]

ASF GitHub Bot logged work on HADOOP-17165:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 17:53
Start Date: 09/Sep/20 17:53
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240#issuecomment-689721030


   Thanks for reviewing and merging it, @sunchao and @Hexiaoqiao!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480955)
Time Spent: 2h 20m  (was: 2h 10m)

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who submit important requests. This jira 
> proposes a service-user feature whereby such a user is always scheduled into 
> the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.
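
The configuration involved looks roughly as follows, assuming the NameNode RPC
port 8020. The callqueue.impl and scheduler.impl keys are documented
FairCallQueue settings; the decay-scheduler.service-users key is the one this
feature adds, so treat its exact name here as an assumption:

<property>
  <name>ipc.8020.callqueue.impl</name>
  <value>org.apache.hadoop.ipc.FairCallQueue</value>
</property>
<property>
  <name>ipc.8020.scheduler.impl</name>
  <value>org.apache.hadoop.ipc.DecayRpcScheduler</value>
</property>
<property>
  <!-- users always scheduled into the highest-priority queue -->
  <name>ipc.8020.decay-scheduler.service-users</name>
  <value>hbase</value>
</property>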



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2293: YARN-2098. App priority support in Fair Scheduler

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2293:
URL: https://github.com/apache/hadoop/pull/2293#issuecomment-689721260


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
7 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 28s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 34s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 46s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 44s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  89m 55s |  hadoop-yarn-server-resourcemanager in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 164m 36s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
   |   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2293/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2293 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3edb8c044b9d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 773ac799c63 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2293/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2293/1/testReport/ |
   | Max. process+thread count | 892 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2293/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #766: YARN-9509: Added a configuration for admins to be able to capped per-container cpu usage based on a multiplier

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #766:
URL: https://github.com/apache/hadoop/pull/766#issuecomment-689716401


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  5s |  https://github.com/apache/hadoop/pull/766 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/766 |
   | JIRA Issue | YARN-9509 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-766/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-09-09 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17165:
--
Hadoop Flags: Reviewed

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who are submitting important requests. 
> This jira proposes to implement a service-user feature so that such a user 
> is always scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.
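
A rough configuration sketch of the feature described above, using the Hadoop 
Configuration API. The property key `ipc.8020.decay-scheduler.service-users`, 
the port 8020, and the user names are assumptions for illustration only; the 
merged patch defines the actual key.

import org.apache.hadoop.conf.Configuration;

public class ServiceUserConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key and port: callers listed here would always be
    // placed in the highest-priority queue instead of being demoted by
    // the decay-based cost accounting.
    conf.set("ipc.8020.decay-scheduler.service-users", "hbase,oozie");
    System.out.println(conf.get("ipc.8020.decay-scheduler.service-users"));
  }
}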



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-09-09 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17165:
--
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who are submitting important requests. 
> This jira proposes to implement a service-user feature so that such a user 
> is always scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?focusedWorklogId=480921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480921
 ]

ASF GitHub Bot logged work on HADOOP-17165:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 16:57
Start Date: 09/Sep/20 16:57
Worklog Time Spent: 10m 
  Work Description: sunchao merged pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480921)
Time Spent: 2h  (was: 1h 50m)

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who are submitting important requests. 
> This jira proposes to implement a service-user feature so that such a user 
> is always scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17165?focusedWorklogId=480922&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480922
 ]

ASF GitHub Bot logged work on HADOOP-17165:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 16:57
Start Date: 09/Sep/20 16:57
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240#issuecomment-689690874


   Merged. Thanks @tasanuma !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480922)
Time Spent: 2h 10m  (was: 2h)

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who are submitting important requests. 
> This jira proposes to implement a service-user feature so that such a user 
> is always scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao merged pull request #2240: HADOOP-17165. Implement service-user feature in DecayRPCScheduler.

2020-09-09 Thread GitBox


sunchao merged pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2240: HADOOP-17165. Implement service-user feature in DecayRPCScheduler.

2020-09-09 Thread GitBox


sunchao commented on pull request #2240:
URL: https://github.com/apache/hadoop/pull/2240#issuecomment-689690874


   Merged. Thanks @tasanuma !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #2292: HDFS-15565.Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread GitBox


sunchao commented on a change in pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292#discussion_r485763717



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
##
@@ -705,7 +705,6 @@ static private int doBalance(Collection namenodes,
 LOG.info("excluded nodes = " + p.getExcludedNodes());
 LOG.info("source nodes = " + p.getSourceNodes());
 checkKeytabAndInit(conf);
-System.out.println("Time Stamp   Iteration#  Bytes Already 
Moved  Bytes Left To Move  Bytes Being Moved");

Review comment:
   Hmm, why do you think this is not valid? I thought this was the header line 
for the balancer progress.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=480916&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480916
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 16:39
Start Date: 09/Sep/20 16:39
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #2201:
URL: https://github.com/apache/hadoop/pull/2201#discussion_r485760840



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##
@@ -276,13 +258,27 @@ public void end() {
 // do nothing
   }
 
-  private native static void initIDs();
+  private int decompressBytesDirect() throws IOException {
+if (compressedDirectBufLen == 0) {
+  return 0;
+} else {
+  // Set the position and limit of `compressedDirectBuf` for reading
+  compressedDirectBuf.position(0).limit(compressedDirectBufLen);

Review comment:
   For `decompressDirect`, `compressedDirectBuf` is set up correctly before 
calling `decompressBytesDirect`, so we don't need to do it here again. For 
`decompress`, I'm not sure, but it looks like `setInputFromSavedData` also 
takes care of resetting `compressedDirectBuf`. I guess we don't need to do it 
in `decompressBytesDirect`.
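
For readers following along, here is a minimal self-contained sketch of the 
position/limit framing being discussed, assuming only snappy-java 
(org.xerial.snappy) on the classpath; the buffer names mirror the fields in 
SnappyDecompressor, everything else is illustrative.

import java.nio.ByteBuffer;
import org.xerial.snappy.Snappy;

public class DirectBufFramingSketch {
  public static void main(String[] args) throws Exception {
    byte[] raw = "hello snappy".getBytes("UTF-8");
    byte[] compressed = Snappy.compress(raw);

    // snappy-java's ByteBuffer API requires direct buffers.
    ByteBuffer compressedDirectBuf = ByteBuffer.allocateDirect(64);
    ByteBuffer uncompressedDirectBuf = ByteBuffer.allocateDirect(64);
    compressedDirectBuf.put(compressed);

    // The step under review: position and limit must frame exactly the
    // compressed bytes before the buffer is handed to snappy-java,
    // otherwise uncompressedLength()/uncompress() read a stale region.
    compressedDirectBuf.position(0).limit(compressed.length);

    int size = Snappy.uncompress(compressedDirectBuf, uncompressedDirectBuf);
    byte[] out = new byte[size];
    uncompressedDirectBuf.get(out);
    System.out.println(new String(out, "UTF-8")); // prints "hello snappy"
  }
}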
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480916)
Time Spent: 1h 20m  (was: 1h 10m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the 
> system *LD_LIBRARY_PATH*, and they have to be installed separately on each 
> node of the clusters, container images, or local test environments, which 
> adds huge complexity from a deployment point of view. In some environments, 
> it requires compiling the natives from sources, which is non-trivial. Also, 
> this approach is platform dependent; the binary may not work on a different 
> platform, so it requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance 
> costs for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in the 
> jar file, and it can automatically load the native binaries into the JVM 
> from the jar without any setup. If a native implementation cannot be found 
> for a platform, it can fall back to a pure-Java implementation of snappy 
> based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
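
As a minimal sketch of the zero-setup usage described above (assuming only the 
snappy-java jar on the classpath; the input data is hypothetical), a round 
trip looks like this:

import org.xerial.snappy.Snappy;

public class SnappyJavaRoundTrip {
  public static void main(String[] args) throws Exception {
    byte[] input = "Hello Hadoop, hello snappy-java!".getBytes("UTF-8");
    // snappy-java extracts and loads its bundled native library from the
    // jar on first use; no LD_LIBRARY_PATH or java.library.path setup.
    byte[] compressed = Snappy.compress(input);
    byte[] restored = Snappy.uncompress(compressed);
    System.out.println(new String(restored, "UTF-8"));
  }
}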



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-09 Thread GitBox


viirya commented on a change in pull request #2201:
URL: https://github.com/apache/hadoop/pull/2201#discussion_r485760840



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##
@@ -276,13 +258,27 @@ public void end() {
 // do nothing
   }
 
-  private native static void initIDs();
+  private int decompressBytesDirect() throws IOException {
+if (compressedDirectBufLen == 0) {
+  return 0;
+} else {
+  // Set the position and limit of `compressedDirectBuf` for reading
+  compressedDirectBuf.position(0).limit(compressedDirectBufLen);

Review comment:
   For `decompressDirect`, `compressedDirectBuf` is set up correctly before 
calling `decompressBytesDirect`, so we don't need to do it here again. For 
`decompress`, I'm not sure, but it looks like `setInputFromSavedData` also 
takes care of resetting `compressedDirectBuf`. I guess we don't need to do it 
in `decompressBytesDirect`.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2292: HDFS-15565.Remove the invalid code in the Balancer#doBalance() method.

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2292:
URL: https://github.com/apache/hadoop/pull/2292#issuecomment-689671513


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 51s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  8s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 54s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 33 unchanged - 1 
fixed = 33 total (was 34)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 53s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 100m  8s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 190m 43s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2292/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ccd0b8079a2d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 773ac799c63 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2292/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2292/1/testReport/ |
   | Max. process+thread count | 3966 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/j

[GitHub] [hadoop] steveloughran opened a new pull request #2294: HADOOP-17195. ABFS Store thread pool for stream IO.

2020-09-09 Thread GitBox


steveloughran opened a new pull request #2294:
URL: https://github.com/apache/hadoop/pull/2294


   
   This is the successor to #2179
   
   1. ABFS Store creates a single thread pool, configurable with a fixed size 
or a multiple of the core count.
   1. Each output stream is given its own semaphored pool, which limits the 
access that stream has to the shared pool (a sketch of this idea follows 
below).
   
   To actually defend against OOMs, the per-stream queue length is what needs 
to be managed; looking at the patch, it still has the problem of #2179: you 
need one buffer per pending upload in the pools.
   
   Ultimately the S3A connector fixed this by going to disk buffering by 
default. A more performant design might be a blocking byte-buffer factory 
that limits the number of buffers the streams can request, putting an upper 
bound on the amount of memory a single ABFS store instance can demand.
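
   A minimal sketch of the semaphored-pool idea in plain java.util.concurrent, 
independent of the actual patch (the class and method names here are 
hypothetical): a shared store-wide pool, with each stream holding a semaphore 
that caps how many of its tasks may be queued or running at once.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class SemaphoredSubmitter {
  private final ExecutorService sharedPool;
  private final Semaphore permits;

  public SemaphoredSubmitter(ExecutorService sharedPool, int maxInFlight) {
    this.sharedPool = sharedPool;
    this.permits = new Semaphore(maxInFlight);
  }

  /** Blocks once this stream has maxInFlight tasks queued or running. */
  public void submit(Runnable task) throws InterruptedException {
    permits.acquire();
    try {
      sharedPool.execute(() -> {
        try {
          task.run();
        } finally {
          permits.release();
        }
      });
    } catch (RuntimeException e) {
      permits.release(); // execution rejected: return the permit
      throw e;
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4); // store-wide pool
    SemaphoredSubmitter stream = new SemaphoredSubmitter(pool, 2);
    for (int i = 0; i < 6; i++) {
      final int n = i;
      stream.submit(() -> System.out.println("upload block " + n));
    }
    pool.shutdown();
  }
}

   Hadoop ships a similar wrapper, SemaphoredDelegatingExecutor, used by the 
S3A block output stream; the sketch above loosely mirrors that approach.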
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17195) Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17195?focusedWorklogId=480908&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480908
 ]

ASF GitHub Bot logged work on HADOOP-17195:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 16:19
Start Date: 09/Sep/20 16:19
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #2294:
URL: https://github.com/apache/hadoop/pull/2294


   
   This is the successor to #2179
   
   1. ABFS Store creates a single thread pool, configurable with a fixed size 
or a multiple of the core count.
   1. Each output stream is given its own semaphored pool, which limits the 
access that stream has to the shared pool.
   
   To actually defend against OOMs, the per-stream queue length is what needs 
to be managed; looking at the patch, it still has the problem of #2179: you 
need one buffer per pending upload in the pools.
   
   Ultimately the S3A connector fixed this by going to disk buffering by 
default. A more performant design might be a blocking byte-buffer factory 
that limits the number of buffers the streams can request, putting an upper 
bound on the amount of memory a single ABFS store instance can demand.
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480908)
Remaining Estimate: 0h
Time Spent: 10m

> Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs 
> ---
>
> Key: HADOOP-17195
> URL: https://issues.apache.org/jira/browse/HADOOP-17195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Bilahari T H
>Priority: Major
>  Labels: abfsactive
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OutOfMemory error due to a new ThreadPool being created each time an 
> AbfsOutputStream is created. Since the thread pools aren't limited, a lot of 
> data is loaded into buffers, which causes the OutOfMemory error.
> Possible fixes:
> - Limit the number of threads while performing hdfs copyFromLocal (using the 
> -t property).
> - Reduce OUTPUT_BUFFER_SIZE significantly, which would limit the amount of 
> data buffered in threads.
> - Don't create a new ThreadPool each time an AbfsOutputStream is created, 
> and limit the number of ThreadPools each AbfsOutputStream can create.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17195) Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17195:

Labels: abfsactive pull-request-available  (was: abfsactive)

> Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs 
> ---
>
> Key: HADOOP-17195
> URL: https://issues.apache.org/jira/browse/HADOOP-17195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Bilahari T H
>Priority: Major
>  Labels: abfsactive, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> OutOfMemory error due to a new ThreadPool being created each time an 
> AbfsOutputStream is created. Since the thread pools aren't limited, a lot of 
> data is loaded into buffers, which causes the OutOfMemory error.
> Possible fixes:
> - Limit the number of threads while performing hdfs copyFromLocal (using the 
> -t property).
> - Reduce OUTPUT_BUFFER_SIZE significantly, which would limit the amount of 
> data buffered in threads.
> - Don't create a new ThreadPool each time an AbfsOutputStream is created, 
> and limit the number of ThreadPools each AbfsOutputStream can create.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-09 Thread GitBox


viirya commented on a change in pull request #2201:
URL: https://github.com/apache/hadoop/pull/2201#discussion_r485738846



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##
@@ -276,13 +258,27 @@ public void end() {
 // do nothing
   }
 
-  private native static void initIDs();
+  private int decompressBytesDirect() throws IOException {
+if (compressedDirectBufLen == 0) {
+  return 0;
+} else {
+  // Set the position and limit of `compressedDirectBuf` for reading
+  compressedDirectBuf.position(0).limit(compressedDirectBufLen);
+  // There is compressed input, decompress it now.
+  int size = Snappy.uncompressedLength((ByteBuffer) compressedDirectBuf);
+  if (size > uncompressedDirectBuf.capacity()) {

Review comment:
   `decompressDirect` also calls `decompressBytesDirect`, but the 
`uncompressedDirectBuf` is passed in as an argument. I think it is dangerous 
to assume `uncompressedDirectBuf` is always reset.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=480906&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480906
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 16:14
Start Date: 09/Sep/20 16:14
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #2201:
URL: https://github.com/apache/hadoop/pull/2201#discussion_r485738846



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##
@@ -276,13 +258,27 @@ public void end() {
 // do nothing
   }
 
-  private native static void initIDs();
+  private int decompressBytesDirect() throws IOException {
+if (compressedDirectBufLen == 0) {
+  return 0;
+} else {
+  // Set the position and limit of `compressedDirectBuf` for reading
+  compressedDirectBuf.position(0).limit(compressedDirectBufLen);
+  // There is compressed input, decompress it now.
+  int size = Snappy.uncompressedLength((ByteBuffer) compressedDirectBuf);
+  if (size > uncompressedDirectBuf.capacity()) {

Review comment:
   `decompressDirect` also calls `decompressBytesDirect`, but the 
`uncompressedDirectBuf` is passed in as an argument. I think it is dangerous 
to assume `uncompressedDirectBuf` is always reset.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480906)
Time Spent: 1h 10m  (was: 1h)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the 
> system *LD_LIBRARY_PATH*, and they have to be installed separately on each 
> node of the clusters, container images, or local test environments, which 
> adds huge complexity from a deployment point of view. In some environments, 
> it requires compiling the natives from sources, which is non-trivial. Also, 
> this approach is platform dependent; the binary may not work on a different 
> platform, so it requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance 
> costs for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in the 
> jar file, and it can automatically load the native binaries into the JVM 
> from the jar without any setup. If a native implementation cannot be found 
> for a platform, it can fall back to a pure-Java implementation of snappy 
> based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #2028: MAPREDUCE-7279. Adds RM host name to history server web page

2020-09-09 Thread GitBox


jiwq commented on a change in pull request #2028:
URL: https://github.com/apache/hadoop/pull/2028#discussion_r485720653



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
##
@@ -1588,6 +1588,19 @@ public void stop() {
   this.jobIndexInfo =
   new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
queueName);
+
+  if (getConfig().getBoolean(
+  JHAdminConfig.MR_HISTORY_APPEND_RM_HOST_TO_HISTORY_FILE_NAME_ENABLED,
+  
JHAdminConfig.DEFAULT_MR_HISTORY_APPEND_RM_HOST_TO_HISTORY_FILE_NAME_ENABLED)) {
+
+String hostName = getConfig().get(YarnConfiguration.RM_HOSTNAME, "");

Review comment:
   I don't think so. In a federated cluster, the job can be routed to any 
physical cluster.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #2028: MAPREDUCE-7279. Adds RM host name to history server web page

2020-09-09 Thread GitBox


jiwq commented on a change in pull request #2028:
URL: https://github.com/apache/hadoop/pull/2028#discussion_r485720653



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
##
@@ -1588,6 +1588,19 @@ public void stop() {
   this.jobIndexInfo =
   new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
queueName);
+
+  if (getConfig().getBoolean(
+  JHAdminConfig.MR_HISTORY_APPEND_RM_HOST_TO_HISTORY_FILE_NAME_ENABLED,
+  
JHAdminConfig.DEFAULT_MR_HISTORY_APPEND_RM_HOST_TO_HISTORY_FILE_NAME_ENABLED)) {
+
+String hostName = getConfig().get(YarnConfiguration.RM_HOSTNAME, "");

Review comment:
   I don't think so. In a federated cluster, the job can be routed to any 
physical cluster.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2291: HADOOP-17255. JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-09 Thread GitBox


hadoop-yetus commented on pull request #2291:
URL: https://github.com/apache/hadoop/pull/2291#issuecomment-689650431


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  32m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 48s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 24s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 22s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  24m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 34s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 34s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 44s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  11m 33s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 212m 57s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.crypto.key.TestKeyProviderFactory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2291/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2291 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e2b28d63239a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1d6d0d82078 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2291/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2291/1/testReport/ |
   | Max. process+thread count | 1355 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2291/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


---

[jira] [Work logged] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17255?focusedWorklogId=480892&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480892
 ]

ASF GitHub Bot logged work on HADOOP-17255:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 15:48
Start Date: 09/Sep/20 15:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2291:
URL: https://github.com/apache/hadoop/pull/2291#issuecomment-689650431


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  32m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 48s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 24s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 22s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  24m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 34s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 34s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 44s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  11m 33s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 212m 57s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.crypto.key.TestKeyProviderFactory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2291/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2291 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e2b28d63239a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1d6d0d82078 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2291/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2291/1/testReport/ |
   | Max. process+thread count | 1355 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HADOOP-17166) ABFS: configure output stream thread pool

2020-09-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192963#comment-17192963
 ] 

Steve Loughran commented on HADOOP-17166:
-

Merged to trunk. If you want it in 3.3.1 (and IMO, it should go in there):

cherry-pick this patch, rerun the mvn verify suite, and add a comment on this 
JIRA that all worked.

> ABFS: configure output stream thread pool
> -
>
> Key: HADOOP-17166
> URL: https://issues.apache.org/jira/browse/HADOOP-17166
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17166) ABFS: Add configs for maxConcurrentRequestCount and threadpool queue size

2020-09-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17166?focusedWorklogId=480884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480884
 ]

ASF GitHub Bot logged work on HADOOP-17166:
---

Author: ASF GitHub Bot
Created on: 09/Sep/20 15:41
Start Date: 09/Sep/20 15:41
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2179:
URL: https://github.com/apache/hadoop/pull/2179


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 480884)
Time Spent: 1h 50m  (was: 1h 40m)

> ABFS: Add configs for maxConcurrentRequestCount and threadpool queue size
> -
>
> Key: HADOOP-17166
> URL: https://issues.apache.org/jira/browse/HADOOP-17166
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…

2020-09-09 Thread GitBox


steveloughran merged pull request #2179:
URL: https://github.com/apache/hadoop/pull/2179


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17166) ABFS: configure output stream thread pool

2020-09-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17166:

Summary: ABFS: configure output stream thread pool  (was: ABFS: Add configs 
for maxConcurrentRequestCount and threadpool queue size)

> ABFS: configure output stream thread pool
> -
>
> Key: HADOOP-17166
> URL: https://issues.apache.org/jira/browse/HADOOP-17166
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


