[jira] [Commented] (HADOOP-12409) Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop Common

2015-09-14 Thread Xianyin Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744917#comment-14744917
 ] 

Xianyin Xin commented on HADOOP-12409:
--

Thanks [~ste...@apache.org]. I have uploaded a new version according to the comment, 
and will later create and link JIRAs in YARN and MAPREDUCE separately.
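For context, the interface being moved is small. A minimal sketch of the Clock 
contract as it exists in org.apache.hadoop.yarn.util (the destination package in 
Common is whatever the patch chooses; this is not the patch itself):

{code}
// Clock abstracts time lookups so tests can substitute a fake clock;
// SystemClock is the production implementation.
public interface Clock {
  long getTime();
}

class SystemClock implements Clock {
  @Override
  public long getTime() {
    return System.currentTimeMillis();
  }
}
{code}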

> Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop 
> Common
> 
>
> Key: HADOOP-12409
> URL: https://issues.apache.org/jira/browse/HADOOP-12409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
>Priority: Minor
> Attachments: Hadoop-12409.001.patch, Hadoop-12409.002.patch
>
>
> The Clock interface and its implementations are widely used by MR and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12409) Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop Common

2015-09-14 Thread Xianyin Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianyin Xin updated HADOOP-12409:
-
Attachment: Hadoop-12409.002.patch

> Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop 
> Common
> 
>
> Key: HADOOP-12409
> URL: https://issues.apache.org/jira/browse/HADOOP-12409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
>Priority: Minor
> Attachments: Hadoop-12409.001.patch, Hadoop-12409.002.patch
>
>
> The Clock interface and its implementations are widely used by MR and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12413) AccessControlList should avoid calling getGroupNames in isUserInList with empty groups.

2015-09-14 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744840#comment-14744840
 ] 

zhihai xu commented on HADOOP-12413:


I attached a patch, HADOOP-12413.000.patch, which skips calling 
{{ugi.getGroupNames()}} if {{groups}} is empty.
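A minimal sketch of the short-circuit described above (field and method shapes 
mirror the issue description; the actual change is in the attached patch):

{code}
import java.util.Set;
import org.apache.hadoop.security.UserGroupInformation;

class AclCheckSketch {
  private boolean allAllowed;
  private Set<String> users;
  private Set<String> groups;

  // Only consult the group mapping when there are group entries to match;
  // ugi.getGroupNames() can shell out and is expensive for unknown users.
  boolean isUserInList(UserGroupInformation ugi) {
    if (allAllowed || users.contains(ugi.getShortUserName())) {
      return true;
    } else if (!groups.isEmpty()) {      // skip the lookup when groups is empty
      for (String group : ugi.getGroupNames()) {
        if (groups.contains(group)) {
          return true;
        }
      }
    }
    return false;
  }
}
{code}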

> AccessControlList should avoid calling getGroupNames in isUserInList with 
> empty groups.
> ---
>
> Key: HADOOP-12413
> URL: https://issues.apache.org/jira/browse/HADOOP-12413
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HADOOP-12413.000.patch
>
>
> {{AccessControlList}} should avoid calling {{getGroupNames}} in 
> {{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
> call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
> {{ugi.getGroupNames()}} is an expensive operation which calls the shell 
> commands {{id -gn  && id -Gn }} to get the list of groups. For example, 
> {{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
> {{acls[1].isUserAllowed(user)}} to check the user's permission. The default 
> value for the blocked ACL is empty:
> {code}
> String defaultBlockedAcl = conf.get(   
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
>  "");
> {code}
> So every time {{authorize}} is called, {{getGroupNames}} may be called.
> It also causes warning messages like the following:
> {code}
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
> to get groups for user job_144171553_0005: id: job_144171553_0005: No 
> such user
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.UserGroupInformation: No groups available for user 
> job_144171553_0005
> 2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization successful for job_144171553_0005 (auth:TOKEN) for 
> protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12413) AccessControlList should avoid calling getGroupNames in isUserInList with empty groups.

2015-09-14 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12413:
---
Attachment: HADOOP-12413.000.patch

> AccessControlList should avoid calling getGroupNames in isUserInList with 
> empty groups.
> ---
>
> Key: HADOOP-12413
> URL: https://issues.apache.org/jira/browse/HADOOP-12413
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HADOOP-12413.000.patch
>
>
> {{AccessControlList}} should avoid calling {{getGroupNames}} in 
> {{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
> call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
> {{ugi.getGroupNames()}} is an expensive operation which calls the shell 
> commands {{id -gn  && id -Gn }} to get the list of groups. For example, 
> {{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
> {{acls[1].isUserAllowed(user)}} to check the user's permission. The default 
> value for the blocked ACL is empty:
> {code}
> String defaultBlockedAcl = conf.get(   
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
>  "");
> {code}
> So every time {{authorize}} is called, {{getGroupNames}} may be called.
> It also causes warning messages like the following:
> {code}
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
> to get groups for user job_144171553_0005: id: job_144171553_0005: No 
> such user
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.UserGroupInformation: No groups available for user 
> job_144171553_0005
> 2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization successful for job_144171553_0005 (auth:TOKEN) for 
> protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12413) AccessControlList should avoid calling getGroupNames in isUserInList with empty groups.

2015-09-14 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12413:
---
Status: Patch Available  (was: Open)

> AccessControlList should avoid calling getGroupNames in isUserInList with 
> empty groups.
> ---
>
> Key: HADOOP-12413
> URL: https://issues.apache.org/jira/browse/HADOOP-12413
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HADOOP-12413.000.patch
>
>
> {{AccessControlList}} should avoid calling {{getGroupNames}} in 
> {{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
> call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
> {{ugi.getGroupNames()}} is an expensive operation which calls the shell 
> commands {{id -gn  && id -Gn }} to get the list of groups. For example, 
> {{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
> {{acls[1].isUserAllowed(user)}} to check the user's permission. The default 
> value for the blocked ACL is empty:
> {code}
> String defaultBlockedAcl = conf.get(   
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
>  "");
> {code}
> So every time {{authorize}} is called, {{getGroupNames}} may be called.
> It also causes warning messages like the following:
> {code}
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
> to get groups for user job_144171553_0005: id: job_144171553_0005: No 
> such user
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.UserGroupInformation: No groups available for user 
> job_144171553_0005
> 2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization successful for job_144171553_0005 (auth:TOKEN) for 
> protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12413) AccessControlList should avoid calling getGroupNames in isUserInList with empty groups.

2015-09-14 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12413:
---
Description: 
{{AccessControlList}} should avoid calling {{getGroupNames}} in 
{{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
{{ugi.getGroupNames()}} is an expensive operation which calls the shell commands 
{{id -gn  && id -Gn }} to get the list of groups. For example, 
{{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
{{acls[1].isUserAllowed(user)}} to check the user's permission. The default value 
for the blocked ACL is empty:
{code}
String defaultBlockedAcl = conf.get(   
CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
 "");
{code}
So every time {{authorize}} is called, {{getGroupNames}} may be called.
It also causes warning messages like the following:
{code}
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to 
get groups for user job_144171553_0005: id: job_144171553_0005: No such 
user
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.UserGroupInformation: No groups available for user 
job_144171553_0005
2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for job_144171553_0005 (auth:TOKEN) for 
protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
{{code}}


  was:
{{AccessControlList}} should avoid calling {{getGroupNames}} in 
{{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
{{ugi.getGroupNames()}} is an expensive operation which call shell script {{id 
-gn  && id -Gn }} to get the list of groups. For example,
{{ServiceAuthorizationManager#authorize}} will call blocked ACL 
{{acls[1].isUserAllowed(user)}} to check the user permission. The default value 
for blocked ACL  is empty
{{code}}
String defaultBlockedAcl = conf.get(   
CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
 "");
{{code}}
So every time {{authorize}} is called, {{getGroupNames}} may be called.
It also caused the following warning message:
{code}
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to 
get groups for user job_144171553_0005: id: job_144171553_0005: No such 
user
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.UserGroupInformation: No groups available for user 
job_144171553_0005
2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for job_144171553_0005 (auth:TOKEN) for 
protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
{{code}}



> AccessControlList should avoid calling getGroupNames in isUserInList with 
> empty groups.
> ---
>
> Key: HADOOP-12413
> URL: https://issues.apache.org/jira/browse/HADOOP-12413
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: zhihai xu
>Assignee: zhihai xu
>
> {{AccessControlList}} should avoid calling {{getGroupNames}} in 
> {{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
> call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
> {{ugi.getGroupNames()}} is an expensive operation which calls the shell 
> commands {{id -gn  && id -Gn }} to get the list of groups. For example, 
> {{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
> {{acls[1].isUserAllowed(user)}} to check the user's permission. The default 
> value for the blocked ACL is empty:
> {code}
> String defaultBlockedAcl = conf.get(   
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
>  "");
> {code}
> So every time {{authorize}} is called, {{getGroupNames}} may be called.
> It also causes warning messages like the following:
> {code}
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
> to get groups for user job_144171553_0005: id: job_144171553_0005: No 
> such user
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.UserGroupInformation: No groups available for user 
> job_144171553_0005
> 2015-09-08 14:55:34,236 INFO [Socket Reade

[jira] [Created] (HADOOP-12413) AccessControlList should avoid calling getGroupNames in isUserInList with empty groups.

2015-09-14 Thread zhihai xu (JIRA)
zhihai xu created HADOOP-12413:
--

 Summary: AccessControlList should avoid calling getGroupNames in 
isUserInList with empty groups.
 Key: HADOOP-12413
 URL: https://issues.apache.org/jira/browse/HADOOP-12413
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.7.0
Reporter: zhihai xu
Assignee: zhihai xu


{{AccessControlList}} should avoid calling {{getGroupNames}} in 
{{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
{{ugi.getGroupNames()}} is an expensive operation which calls the shell commands 
{{id -gn  && id -Gn }} to get the list of groups. For example, 
{{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
{{acls[1].isUserAllowed(user)}} to check the user's permission. The default value 
for the blocked ACL is empty:
{{code}}
String defaultBlockedAcl = conf.get(   
CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
 "");
{{code}}
So every time {{authorize}} is called, {{getGroupNames}} may be called.
It also causes warning messages like the following:
{code}
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to 
get groups for user job_144171553_0005: id: job_144171553_0005: No such 
user
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.UserGroupInformation: No groups available for user 
job_144171553_0005
2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for job_144171553_0005 (auth:TOKEN) for 
protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
{{code}}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12413) AccessControlList should avoid calling getGroupNames in isUserInList with empty groups.

2015-09-14 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12413:
---
Description: 
{{AccessControlList}} should avoid calling {{getGroupNames}} in 
{{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
{{ugi.getGroupNames()}} is an expensive operation which calls the shell commands 
{{id -gn  && id -Gn }} to get the list of groups. For example, 
{{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
{{acls[1].isUserAllowed(user)}} to check the user's permission. The default value 
for the blocked ACL is empty:
{code}
String defaultBlockedAcl = conf.get(   
CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
 "");
{code}
So every time {{authorize}} is called, {{getGroupNames}} may be called.
It also causes warning messages like the following:
{code}
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to 
get groups for user job_144171553_0005: id: job_144171553_0005: No such 
user
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.UserGroupInformation: No groups available for user 
job_144171553_0005
2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for job_144171553_0005 (auth:TOKEN) for 
protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
{code}


  was:
{{AccessControlList}} should avoid calling {{getGroupNames}} in 
{{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
{{ugi.getGroupNames()}} is an expensive operation which call shell script {{id 
-gn  && id -Gn }} to get the list of groups. For example,
{{ServiceAuthorizationManager#authorize}} will call blocked ACL 
{{acls[1].isUserAllowed(user)}} to check the user permission. The default value 
for blocked ACL  is empty
{code}
String defaultBlockedAcl = conf.get(   
CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
 "");
{code}
So every time {{authorize}} is called, {{getGroupNames}} may be called.
It also caused the following warning message:
{code}
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to 
get groups for user job_144171553_0005: id: job_144171553_0005: No such 
user
2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
org.apache.hadoop.security.UserGroupInformation: No groups available for user 
job_144171553_0005
2015-09-08 14:55:34,236 INFO [Socket Reader #1 for port 52715] 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization successful for job_144171553_0005 (auth:TOKEN) for 
protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
{{code}}



> AccessControlList should avoid calling getGroupNames in isUserInList with 
> empty groups.
> ---
>
> Key: HADOOP-12413
> URL: https://issues.apache.org/jira/browse/HADOOP-12413
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.0
>Reporter: zhihai xu
>Assignee: zhihai xu
>
> {{AccessControlList}} should avoid calling {{getGroupNames}} in 
> {{isUserInList}} with empty {{groups}}. Currently {{AccessControlList}} will 
> call {{ugi.getGroupNames()}} in {{isUserInList}} even if {{groups}} is empty. 
> {{ugi.getGroupNames()}} is an expensive operation which calls the shell 
> commands {{id -gn  && id -Gn }} to get the list of groups. For example, 
> {{ServiceAuthorizationManager#authorize}} will call the blocked ACL 
> {{acls[1].isUserAllowed(user)}} to check the user's permission. The default 
> value for the blocked ACL is empty:
> {code}
> String defaultBlockedAcl = conf.get(   
> CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_AUTHORIZATION_DEFAULT_BLOCKED_ACL,
>  "");
> {code}
> So every time {{authorize}} is called, {{getGroupNames}} may be called.
> It also causes warning messages like the following:
> {code}
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
> to get groups for user job_144171553_0005: id: job_144171553_0005: No 
> such user
> 2015-09-08 14:55:34,236 WARN [Socket Reader #1 for port 52715] 
> org.apache.hadoop.security.UserGroupInformation: No groups available for user 
> job_144171553_0005
> 2015-09-08 14:55:34,236 INFO [Socket Reader #1 f

[jira] [Commented] (HADOOP-11252) RPC client write does not time out by default

2015-09-14 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744795#comment-14744795
 ] 

Wilfred Spiegelenburg commented on HADOOP-11252:


sorry I have been occupied with a number of other things over the last period. 
I finally have some cycles and will look at this over the coming days.

> RPC client write does not time out by default
> -
>
> Key: HADOOP-11252
> URL: https://issues.apache.org/jira/browse/HADOOP-11252
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.5.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: HADOOP-11252.patch
>
>
> The RPC client has a default timeout set to 0 when no timeout is passed in. 
> This means that the network connection created will not time out when used to 
> write data. The issue has shown up in YARN-2578 and HDFS-4858. Write timeouts 
> then fall back to the TCP-level retry (configured via tcp_retries2) and take 
> between 15 and 30 minutes, which is too long for a default behaviour.
> Using 0 as the default value for the timeout is incorrect. We should use a sane 
> value for the timeout, and the "ipc.ping.interval" configuration value is a 
> logical choice for it. The default behaviour should be changed from 0 to the 
> ping-interval value read from the Configuration.
> Fixing it in common makes more sense than finding and changing all other 
> points in the code that do not pass in a timeout.
> Offending code lines:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350
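A minimal sketch of the proposed default, assuming the "ipc.ping.interval" key 
named above and a 60-second fallback (an assumption here, matching the usual 
ping-interval default); the real change would live in the RPC client setup code 
linked above:

{code}
import org.apache.hadoop.conf.Configuration;

public class DefaultRpcTimeoutSketch {
  // Derive a non-zero socket timeout from the ping interval instead of
  // passing 0, which disables write timeouts entirely.
  static int defaultTimeoutMillis(Configuration conf) {
    return conf.getInt("ipc.ping.interval", 60000);
  }

  public static void main(String[] args) {
    System.out.println(defaultTimeoutMillis(new Configuration()));
  }
}
{code}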



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-09-14 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744751#comment-14744751
 ] 

Weiwei Yang commented on HADOOP-12374:
--

Is there anybody who can help to commit this? 

> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: documentation, newbie, suggestions, trash
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives users the impression that this command 
> will empty the trash, but it actually only removes old checkpoints. If a user 
> sets a long value for fs.trash.interval, this command will not remove anything 
> until the checkpoints have existed longer than that value.
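For illustration of that behaviour (values here are examples only, not from the 
patch): expunge() removes checkpoints older than fs.trash.interval, it does not 
empty the trash outright.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Trash;

public class ExpungeExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setLong("fs.trash.interval", 1440);  // keep checkpoints for one day
    new Trash(conf).expunge();                // deletes only day-old checkpoints
  }
}
{code}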



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12393) asflicense is easily tricked

2015-09-14 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744485#comment-14744485
 ] 

Kengo Seki commented on HADOOP-12393:
-

Got it. Thanks!

> asflicense is easily tricked
> 
>
> Key: HADOOP-12393
> URL: https://issues.apache.org/jira/browse/HADOOP-12393
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> asflicense needs to make sure that it gets at least one report file instead 
> of assuming nothing is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12393) asflicense is easily tricked

2015-09-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744471#comment-14744471
 ] 

Allen Wittenauer commented on HADOOP-12393:
---

Yeah, we need to do *something* here, but I'm not sure what. Checking for the 
existence of one of the files, or verifying that we get *some* output... there 
are lots of ways to potentially fix this one.

> asflicense is easily tricked
> 
>
> Key: HADOOP-12393
> URL: https://issues.apache.org/jira/browse/HADOOP-12393
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> asflicense needs to make sure that it gets at least one report file instead 
> of assuming nothing is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12393) asflicense is easily tricked

2015-09-14 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744468#comment-14744468
 ] 

Kengo Seki commented on HADOOP-12393:
-

Let me confirm that my understanding is correct:

* asflicense should fail for Kafka, because its {{gradle rat}} generates only 
rat-report.html and rat-report.xml for now.
* But because asflicense judges only from ant/mvn/gradle's exit status, it succeeds 
even though rat-report.txt does not exist.
* So we should fix the plugin to check that at least one of the assumed output 
files exists.

Is this what you meant?

> asflicense is easily tricked
> 
>
> Key: HADOOP-12393
> URL: https://issues.apache.org/jira/browse/HADOOP-12393
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> asflicense needs to make sure that it gets at least one report file instead 
> of assuming nothing is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-09-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744427#comment-14744427
 ] 

Chris Nauroth commented on HADOOP-11918:


[~eddyxu], thank you very much!

> Listing an empty s3a root directory throws FileNotFound.
> 
>
> Key: HADOOP-11918
> URL: https://issues.apache.org/jira/browse/HADOOP-11918
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Attachments: HADOOP-11918-002.patch, HADOOP-11918.000.patch, 
> HADOOP-11918.001.patch
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/': No such file or directory
> {code}
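One general way to address this, sketched under assumptions (class and method 
names below are illustrative and this is not the attached patch): treat the bucket 
root as an existing, possibly empty, directory when no keys are found, instead of 
throwing FileNotFoundException.

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class RootStatusSketch {
  // The bucket root always "exists" as a directory, even when the bucket
  // holds no objects, so listing it should yield an empty result.
  static FileStatus rootStatus(Path qualifiedRoot) {
    // length 0, isDirectory true, replication 1, blockSize 0, mtime 0
    return new FileStatus(0, true, 1, 0, 0, qualifiedRoot);
  }

  public static void main(String[] args) {
    System.out.println(rootStatus(new Path("s3a://hdfs-s3a-test/")));
  }
}
{code}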



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-09-14 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744422#comment-14744422
 ] 

Lei (Eddy) Xu commented on HADOOP-11918:


[~cnauroth], [~steve_l] and [~Thomas Demoor], sorry for the late reply. This JIRA 
seems to have slipped from my inbox...

I will pick it up this week and address Thomas's and Steve's comments, and will 
post an update soon.

> Listing an empty s3a root directory throws FileNotFound.
> 
>
> Key: HADOOP-11918
> URL: https://issues.apache.org/jira/browse/HADOOP-11918
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Attachments: HADOOP-11918-002.patch, HADOOP-11918.000.patch, 
> HADOOP-11918.001.patch
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-09-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744331#comment-14744331
 ] 

Chris Nauroth commented on HADOOP-11918:


I'd like to get this fix committed into the codebase.  [~eddyxu], are you able 
to respond to the last round of comments from Thomas and Steve?  If you need to 
post a new patch, I'd be happy to help with code review and testing too.  
Thanks!

> Listing an empty s3a root directory throws FileNotFound.
> 
>
> Key: HADOOP-11918
> URL: https://issues.apache.org/jira/browse/HADOOP-11918
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Attachments: HADOOP-11918-002.patch, HADOOP-11918.000.patch, 
> HADOOP-11918.001.patch
>
>
> With an empty S3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-09-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-11742.

Resolution: Duplicate

I'm resolving this as a duplicate of HADOOP-11918.  Reading through the history 
on both issues, it appears that there is more recent activity on HADOOP-11918, 
and the patch there is closer to acceptance.  (If I'm mistaken, then please 
feel free to reopen this.)

> mkdir by file system shell fails on an empty bucket
> ---
>
> Key: HADOOP-11742
> URL: https://issues.apache.org/jira/browse/HADOOP-11742
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
> Environment: CentOS 7
>Reporter: Takenori Sato
>Assignee: Takenori Sato
>Priority: Minor
> Attachments: HADOOP-11742-branch-2.7.001.patch, 
> HADOOP-11742-branch-2.7.002.patch, HADOOP-11742-branch-2.7.003-1.patch, 
> HADOOP-11742-branch-2.7.003-2.patch
>
>
> I have built the latest 2.7 and tried S3AFileSystem. I then found that _mkdir_ 
> fails on an empty bucket, named *s3a* here, as follows:
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
> 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
> s3a://s3a/foo (foo)
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
> ()
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
> mkdir: `s3a://s3a/foo': No such file or directory
> {code}
> So does _ls_.
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
> 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
> ()
> 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
> ls: `s3a://s3a/': No such file or directory
> {code}
> This is how it works via s3n.
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
> {code}
> The snapshot is the following:
> {quote}
> \# git branch
> \* branch-2.7
>   trunk
> \# git log
> commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
> Author: Harsh J 
> Date:   Sun Mar 22 10:18:32 2015 +0530
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744289#comment-14744289
 ] 

Hadoop QA commented on HADOOP-10775:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  4s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 50s | The applied patch generated  4 
new checkstyle issues (total was 194, now 190). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 3  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 49s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  0s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | yarn tests |   1m 58s | Tests passed in 
hadoop-yarn-common. |
| {color:red}-1{color} | yarn tests |   7m 13s | Tests failed in 
hadoop-yarn-server-nodemanager. |
| | |  79m 49s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestLocalResourcesTrackerImpl
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755798/HADOOP-10775-003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7661/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7661/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7661/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7661/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7661/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7661/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7661/console |


This message was automatically generated.

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win, 2.7.1
> Environment: windows
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775-003.patch, 
> HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12412) Concurrency in FileSystem$Cache is very broken

2015-09-14 Thread Michael Harris (JIRA)
Michael Harris created HADOOP-12412:
---

 Summary: Concurrency in FileSystem$Cache is very broken
 Key: HADOOP-12412
 URL: https://issues.apache.org/jira/browse/HADOOP-12412
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Michael Harris
Assignee: Michael Harris
Priority: Critical


The FileSystem cache uses a mild amount of concurrency to protect the cache 
itself, but does nothing to prevent multiple instances of the same filesystem from 
being constructed and initialized simultaneously. At best, this leads to 
potentially expensive wasted work. At worst, as is the case for Spark, it can lead 
to deadlocks/livelocks, especially when the same configuration object is passed 
into both calls. This should be refactored to use a results-cache approach (see 
Java Concurrency in Practice, chapter 5, section 6, for an example of how to do 
this correctly), which will be both higher-performance and safer.
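For reference, a minimal sketch of the "results cache" pattern from Java 
Concurrency in Practice, section 5.6 (a generic memoizer, not the actual 
FileSystem$Cache code): publish a Future into the map so that only one thread 
constructs and initializes a given entry, and every other thread waits on that 
same result.

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

class ResultCache<K, V> {
  private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();

  V get(K key, Callable<V> loader) throws Exception {
    Future<V> f = cache.get(key);
    if (f == null) {
      FutureTask<V> task = new FutureTask<>(loader);
      f = cache.putIfAbsent(key, task);  // only one task is published per key
      if (f == null) {
        f = task;
        task.run();                      // the winning thread initializes once
      }
    }
    return f.get();                      // other threads block until it is done
  }
}
{code}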



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12405) Expose NN RPC via HTTP / HTTPS

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744241#comment-14744241
 ] 

Colin Patrick McCabe commented on HADOOP-12405:
---

webhdfs has different goals than NN RPC.  In particular, NN RPC can be changed 
in incompatible ways in a major release, whereas webhdfs cannot.  One of the 
main purposes of webhdfs was to provide a stable API for doing distcp across 
versions.  I do not think we should expose NN RPC calls directly unless we can 
find some way to address this.

> Expose NN RPC via HTTP / HTTPS
> --
>
> Key: HADOOP-12405
> URL: https://issues.apache.org/jira/browse/HADOOP-12405
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Haohui Mai
>
> WebHDFS needs to expose NN RPC calls to allow users to access HDFS via HTTP / 
> HTTPS.
> The current approach is to add REST APIs into WebHDFS one by one, manually. This 
> requires significant effort from a maintainability point of view; we have found 
> that WebHDFS consistently lags behind. It is also hard to maintain the 
> REST RPC stubs.
> There is a lot of value in exposing the NN RPC in an HTTP / HTTPS friendly way 
> automatically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10775:
---
Assignee: Steve Loughran

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win, 2.7.1
> Environment: windows
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775-003.patch, 
> HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744095#comment-14744095
 ] 

Hadoop QA commented on HADOOP-12321:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  24m 12s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   4m 25s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |  10m 20s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 58s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | mapreduce tests |   5m 25s | Tests failed in 
hadoop-mapreduce-client-hs. |
| {color:green}+1{color} | yarn tests |   3m 14s | Tests passed in 
hadoop-yarn-server-applicationhistoryservice. |
| {color:green}+1{color} | yarn tests |   7m 44s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| {color:red}-1{color} | yarn tests |  53m 51s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| {color:green}+1{color} | yarn tests |   0m 23s | Tests passed in 
hadoop-yarn-server-web-proxy. |
| {color:red}-1{color} | hdfs tests | 168m 14s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   1m 24s | Tests failed in 
hadoop-hdfs-nfs. |
| | | 323m  0s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.mapreduce.v2.hs.TestHistoryFileManager |
|   | hadoop.mapreduce.v2.hs.TestHistoryServerFileSystemStateStoreService |
|   | hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.tools.TestJMXGet |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.nfs.nfs3.TestWrites |
|   | hadoop.hdfs.nfs.nfs3.TestExportsTable |
|   | hadoop.hdfs.nfs.nfs3.TestReaddir |
|   | hadoop.hdfs.nfs.nfs3.TestNfs3HttpServer |
|   | hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege |
|   | hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3 |
|   | hadoop.hdfs.nfs.TestMountd |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755733/HADOOP-12321-005-aggregated.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-mapreduce-client-hs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-mapreduce-client-hs.txt
 |
| hadoop-yarn-server-applicationhistoryservice test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-yarn-server-applicationhistoryservice.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| hadoop-yarn-server-web-proxy test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-yarn-server-web-proxy.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-nfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/artifact/patchprocess/testrun_hadoop-hdfs-nfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7657/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://bui

[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10775:

Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.1, trunk-win
> Environment: windows
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775-003.patch, 
> HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10775:

Attachment: HADOOP-10775-003.patch

Patch -003

# use the RTE-raising clause whenever winutils shell commands are constructed 
(see the sketch below)
# remove obsolete code (mostly symlink-related) that only ran pre-Java-7
# keep checkstyle happy with lines < 80 chars
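A hedged sketch of that pattern (field and method names here are illustrative, 
not necessarily those in the patch): record the failure seen while locating 
winutils.exe at class-init time and rethrow it, with the original message, 
whenever a winutils path is requested.

{code}
public class WinUtilsLookupSketch {
  private static final String WINUTILS_PATH;
  private static final RuntimeException WINUTILS_FAILURE;

  static {
    String path = null;
    RuntimeException failure = null;
    try {
      path = locateWinUtils();            // hypothetical lookup helper
    } catch (RuntimeException e) {
      failure = e;                        // preserve the setup-time error
    }
    WINUTILS_PATH = path;
    WINUTILS_FAILURE = failure;
  }

  // Raise a meaningful error on use instead of returning null and letting
  // callers hit an NPE later.
  public static String getWinUtilsPath() {
    if (WINUTILS_FAILURE != null) {
      throw new RuntimeException(WINUTILS_FAILURE.getMessage(), WINUTILS_FAILURE);
    }
    return WINUTILS_PATH;
  }

  private static String locateWinUtils() {
    throw new RuntimeException("winutils.exe not found; check HADOOP_HOME");
  }

  public static void main(String[] args) {
    try {
      System.out.println(getWinUtilsPath());
    } catch (RuntimeException e) {
      System.err.println(e.getMessage());
    }
  }
}
{code}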

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win, 2.7.1
> Environment: windows
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775-003.patch, 
> HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10775:

Status: Open  (was: Patch Available)

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.1, trunk-win
> Environment: windows
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743951#comment-14743951
 ] 

Steve Loughran commented on HADOOP-10775:
-

Checkstyle
{code}
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:52:
 Line is longer than 80 characters (found 98).
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:312:
 Line is longer than 80 characters (found 88).
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:352:
 Line is longer than 80 characters (found 89).
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:420:
 First sentence should end with a period.
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:442:
 Line is longer than 80 characters (found 87).
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:449:
 First sentence should end with a period.
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:463:
 First sentence should end with a period.
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java:829:
 Line is longer than 80 characters (found 91).
{code}

javac
{code}
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java:[103,24]
>  [deprecation] getWinUtilsPath() in Shell has been deprecated
{code}

That is, the warning refers to the newly deprecated method.

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win, 2.7.1
> Environment: windows
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10051) winutil.exe is not included in hadoop bin tarball

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10051:

Summary: winutil.exe is not included in hadoop bin tarball  (was: 
winutil.exe is not included in 2.2.0 bin tarball)

> winutil.exe is not included in hadoop bin tarball
> -
>
> Key: HADOOP-10051
> URL: https://issues.apache.org/jira/browse/HADOOP-10051
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.2.0, 2.4.0, 2.5.0
>Reporter: Tsuyoshi Ozawa
>
> I don't have a Windows environment, but one user who tried the 2.2.0 release
> on Windows reported that the released tarball doesn't contain
> "winutil.exe" and cannot run any commands. I confirmed that winutil.exe is 
> indeed not included in the 2.2.0 bin tarball.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12350) WASB Logging: Improve WASB Logging around deletes, reads and writes

2015-09-14 Thread Dushyanth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743888#comment-14743888
 ] 

Dushyanth commented on HADOOP-12350:


[~cnauroth] Thanks for the input, Chris. I have added a patch that moves the 
logging framework to SLF4J. As discussed in the JIRA, I have added a private 
cleanup method inside the NativeAzureFileSystem class.

To address the comment on AzureNativeFileSystemStore: yes, the log statement is 
specifically for fatal exceptions that can't be retried.
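For reference, a minimal sketch of the SLF4J style the patch moves to (class name 
and message are illustrative, not taken from the attached patch):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WasbLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(WasbLoggingSketch.class);

  // Parameterized logging: the message is only formatted when the level is
  // enabled, and the exception passed as the last argument keeps its stack.
  void onStorageException(String blobKey, Exception e) {
    LOG.debug("Encountered storage exception for blob {}", blobKey, e);
  }
}
{code}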

> WASB Logging: Improve WASB Logging around deletes, reads and writes
> ---
>
> Key: HADOOP-12350
> URL: https://issues.apache.org/jira/browse/HADOOP-12350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: 0001-HADOOP-12350-Added-WASB-Logging-Statement.patch, 
> 0001-HADOOP-12350-Moving-from-commons.logging-to-slf4j-lo.patch
>
>
> Logging around the WASB component is very limited, and it is disabled by 
> default. This improvement adds logging around reads, writes and deletes when an 
> Azure Storage exception occurs, to capture the blobs that hit the exception. 
> This information is useful when communicating with the Azure storage team for 
> debugging purposes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12350) WASB Logging: Improve WASB Logging around deletes, reads and writes

2015-09-14 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12350:
---
Attachment: 0001-HADOOP-12350-Moving-from-commons.logging-to-slf4j-lo.patch

Adding a patch that changes to the SLF4J logging framework.

> WASB Logging: Improve WASB Logging around deletes, reads and writes
> ---
>
> Key: HADOOP-12350
> URL: https://issues.apache.org/jira/browse/HADOOP-12350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: 0001-HADOOP-12350-Added-WASB-Logging-Statement.patch, 
> 0001-HADOOP-12350-Moving-from-commons.logging-to-slf4j-lo.patch
>
>
> Logging around the WASB component is very limited, and it is disabled by 
> default. This improvement adds logging around reads, writes and deletes when an 
> Azure Storage exception occurs, to capture the blobs that hit the exception. 
> This information is useful when communicating with the Azure storage team for 
> debugging purposes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12411) webhdfs client requires SPNEGO to do renew

2015-09-14 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12411:
-

 Summary: webhdfs client requires SPNEGO to do renew
 Key: HADOOP-12411
 URL: https://issues.apache.org/jira/browse/HADOOP-12411
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


Simple bug.

webhdfs (the file system) doesn't pass delegation= in its REST call to renew 
the same token.  This forces a SPNEGO (or other auth) instead of just renewing.
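
Purely as an illustration of the difference described above (host, port, and 
token values are placeholders, and this is not the actual WebHdfsFileSystem 
code):

{code}
import java.net.URL;

// Hypothetical sketch of the renewal request shapes described in the report.
public class WebHdfsRenewSketch {
  public static void main(String[] args) throws Exception {
    String token = "<delegation-token>";
    String base = "http://namenode.example.com:50070/webhdfs/v1/?op=RENEWDELEGATIONTOKEN&token=" + token;

    // What the report says happens today: no delegation= parameter, so the
    // request has to authenticate via SPNEGO (or another mechanism).
    URL spnegoAuthenticated = new URL(base);

    // What the report suggests: also pass delegation= so the token itself
    // authenticates the request, avoiding the extra SPNEGO round trip.
    URL tokenAuthenticated = new URL(base + "&delegation=" + token);

    System.out.println(spnegoAuthenticated);
    System.out.println(tokenAuthenticated);
  }
}
{code}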



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10051) winutil.exe is not included in 2.2.0 bin tarball

2015-09-14 Thread Ruslan Dautkhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743785#comment-14743785
 ] 

Ruslan Dautkhanov commented on HADOOP-10051:


Not fixed in 2.6

> winutil.exe is not included in 2.2.0 bin tarball
> 
>
> Key: HADOOP-10051
> URL: https://issues.apache.org/jira/browse/HADOOP-10051
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.2.0, 2.4.0, 2.5.0
>Reporter: Tsuyoshi Ozawa
>
> I don't have a Windows environment, but one user who tried the 2.2.0 release
> on Windows reported that the released tarball doesn't contain
> "winutil.exe", so no commands can be run. I confirmed that winutil.exe is 
> indeed not included in the 2.2.0 bin tarball.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-14 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743781#comment-14743781
 ] 

Jagadesh Kiran N commented on HADOOP-12399:
---

As the changes are only to comments, no tests are included. [~aw], please review.

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch, 
> HADOOP-12399.HADOOP-12111.01.patch, HADOOP-12399.HADOOP-12111.02.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12401) Wrong function names in test-patch bigtop personality

2015-09-14 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743784#comment-14743784
 ] 

Jagadesh Kiran N commented on HADOOP-12401:
---

[~sekikn] Thanks for your update. Then can we remove or comment out this function, 
bigtop_precompile?

> Wrong function names in test-patch bigtop personality
> -
>
> Key: HADOOP-12401
> URL: https://issues.apache.org/jira/browse/HADOOP-12401
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>
> In dev-support/personality/bigtop.sh:
> {code}
>  51 function bigtop_precheck_postinstall
>  52 {
>  53   if [[ ${BIGTOP_PUPPETSETUP} = "true" ]]; then
>  54 pushd "${BASEDIR}" >/dev/null
>  55 echo_and_redirect "${PATCH_DIR}/bigtop-branch-toolchain.txt" 
> "${GRADLEW}" toolchain
>  56 popd >/dev/null
>  57   fi
>  58 }
>  59 
>  60 function bigtop_postapply_postinstall
>  61 {
>  62   if [[ ${BIGTOP_PUPPETSETUP} = "true" ]]; then
>  63 pushd "${BASEDIR}" >/dev/null
>  64 echo_and_redirect "${PATCH_DIR}/bigtop-patch-toolchain.txt" 
> "${GRADLEW}" toolchain
>  65 popd >/dev/null
>  66   fi
>  67 }
> {code}
> Their names are not valid test-patch plugin callback function names. Maybe it 
> should be something like:
> {code}
> function bigtop_precompile
> {
>   declare codebase=$1
>   if [[ ${BIGTOP_PUPPETSETUP} = "true" && ( ${codebase} = "branch" || 
> ${codebase} = "patch" ) ]]; then
> pushd "${BASEDIR}" >/dev/null
> echo_and_redirect "${PATCH_DIR}/bigtop-${codebase}-toolchain.txt" 
> "${GRADLEW}" toolchain
> popd >/dev/null
>   fi  
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12409) Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop Common

2015-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743703#comment-14743703
 ] 

Steve Loughran commented on HADOOP-12409:
-

That's quite a big diff, isn't it?

# It might be safest to leave the YARN implementation where it is, as an empty, 
deprecated subclass of the common one. 
# This JIRA will need matching/linked ones in the YARN and MAPREDUCE projects, 
with the same patch file submitted under each one.

Looking at uses of {{Clock}}, I see a bigger issue: there are lots of places 
which use Clock to time sleep intervals, e.g. 
{{org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl}}.
This is dangerous, as system time is not monotonic and can go backwards if NTP 
tells it to. That's bad practice, and moving the clock class doesn't address it; 
it merely makes it more obvious where it happens.
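
To illustrate the monotonic-clock point (assuming the Clock interface being 
moved exposes a single getTime()-style method; the class names below are not 
from the patch):

{code}
// Sketch only: why wall-clock time is risky for measuring sleep/elapsed intervals.
public class MonotonicVsWallClock {
  interface Clock { long getTime(); }               // assumed shape of the Clock being moved

  static class SystemClock implements Clock {       // wall clock: NTP can move it backwards
    public long getTime() { return System.currentTimeMillis(); }
  }

  static class MonotonicClock implements Clock {    // monotonic: suitable for intervals
    public long getTime() { return System.nanoTime() / 1_000_000; }
  }

  public static void main(String[] args) throws InterruptedException {
    Clock clock = new MonotonicClock();
    long start = clock.getTime();
    Thread.sleep(100);
    // With SystemClock this difference could be negative after an NTP step;
    // with MonotonicClock it cannot.
    System.out.println("elapsed ms: " + (clock.getTime() - start));
  }
}
{code}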



> Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop 
> Common
> 
>
> Key: HADOOP-12409
> URL: https://issues.apache.org/jira/browse/HADOOP-12409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
>Priority: Minor
> Attachments: Hadoop-12409.001.patch
>
>
> It is widely used by MR and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743701#comment-14743701
 ] 

Hadoop QA commented on HADOOP-10775:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  23m 57s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:red}-1{color} | javac |  10m 12s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |  11m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 32s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  8 
new checkstyle issues (total was 124, now 127). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  21m 11s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | yarn tests |   1m 58s | Tests passed in 
hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   7m 36s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| | |  87m 48s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.io.compress.TestCodec |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755731/HADOOP-10775-002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/artifact/patchprocess/diffJavacWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7658/console |


This message was automatically generated.

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win, 2.7.1
> Environment: windows
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743675#comment-14743675
 ] 

Hadoop QA commented on HADOOP-12360:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 22s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 30s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 12s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 44s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 19s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  25m 33s | Tests failed in 
hadoop-common. |
| | |  68m 59s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common |
| Failed unit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
|   | hadoop.fs.TestLocalFsFCStatistics |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755739/HADOOP-12360.008.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7660/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7660/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7660/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7660/console |


This message was automatically generated.

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.
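
As background for the reviews above, a rough sketch of what a metrics2 sink 
speaking the StatsD line protocol could look like (class name, config keys, and 
formatting are assumptions, not the attached patch):

{code}
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

// Rough sketch only: push each metric as a StatsD gauge ("name:value|g") over UDP.
public class StatsDSinkSketch implements MetricsSink {
  private InetSocketAddress address;
  private DatagramSocket socket;

  @Override
  public void init(SubsetConfiguration conf) {
    address = new InetSocketAddress(
        conf.getString("host", "localhost"), conf.getInt("port", 8125));
    try {
      socket = new DatagramSocket();
    } catch (IOException e) {
      throw new RuntimeException("Unable to open UDP socket", e);
    }
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    for (AbstractMetric metric : record.metrics()) {
      String line = record.context() + "." + record.name() + "."
          + metric.name() + ":" + metric.value() + "|g";
      byte[] data = line.getBytes(StandardCharsets.UTF_8);
      try {
        socket.send(new DatagramPacket(data, data.length, address));
      } catch (IOException e) {
        // a real sink would surface this through the metrics system
      }
    }
  }

  @Override
  public void flush() {
    // UDP datagrams are sent immediately; nothing to buffer or flush.
  }
}
{code}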



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743682#comment-14743682
 ] 

Steve Loughran commented on HADOOP-12360:
-

That findbugs report in the Jenkins run is empty. If you can run it locally, 
does it show anything? If so, and the entries are spurious, 
{{dev-support/findbugsExcludeFile.xml}} can be tweaked to exclude those files 
from the specific bugs.

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12386) RetryPolicies.RETRY_FOREVER should be able to specify a retry interval

2015-09-14 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743603#comment-14743603
 ] 

Sunil G commented on HADOOP-12386:
--

Test case passed locally. [~leftnoteasy], could you please take a look?

> RetryPolicies.RETRY_FOREVER should be able to specify a retry interval
> --
>
> Key: HADOOP-12386
> URL: https://issues.apache.org/jira/browse/HADOOP-12386
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12386.patch, 0002-HADOOP-12386.patch
>
>
> As mentioned in YARN-4113, we should be able to specify a retry interval 
> in RetryPolicies.RETRY_FOREVER. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-4258) the test patch script should check for filenames that differ only in case

2015-09-14 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N reassigned HADOOP-4258:


Assignee: Jagadesh Kiran N

> the test patch script should check for filenames that differ only in case
> -
>
> Key: HADOOP-4258
> URL: https://issues.apache.org/jira/browse/HADOOP-4258
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Reporter: Owen O'Malley
>Assignee: Jagadesh Kiran N
>  Labels: test-patch
> Attachments: HADOOP-4258.001.patch, HADOOP-4258.HADOOP-12111.00.patch
>
>
> It would be nice if the test patch script warned about filenames that differ 
> only in case. We recently had a patch committed that had a pair of colliding 
> filenames and subversion broke badly on my Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11293) Factor OSType out from Shell

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743574#comment-14743574
 ] 

Hadoop QA commented on HADOOP-11293:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12698923/HADOOP-11293-branch-2-005.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | branch-2 / c951d56 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7659/console |


This message was automatically generated.

> Factor OSType out from Shell
> 
>
> Key: HADOOP-11293
> URL: https://issues.apache.org/jira/browse/HADOOP-11293
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, util
>Affects Versions: 2.7.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11293-branch-2-005.patch, HADOOP-11293.001.patch, 
> HADOOP-11293.002.patch, HADOOP-11293.003.patch, HADOOP-11293.004.patch, 
> HADOOP-11293.005.patch, HADOOP-11293.005.patch, HADOOP-11293.005.patch, 
> HADOOP-11293.005.patch
>
>
> Currently the code that detects the OS type is located in Shell.java. Code 
> that needs to check the OS type refers to Shell, even if nothing else from 
> Shell is needed. 
> I am proposing to refactor OSType out into its own class, to make the 
> OSType easier to access and the dependency cleaner.
>  
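
A minimal sketch of the kind of standalone class the description proposes 
(names and detection logic are illustrative, not the actual patch):

{code}
// Illustrative only: an OS-type enum factored out of Shell, as the description proposes.
public enum OSTypeSketch {
  LINUX, WINDOWS, SOLARIS, MAC, FREEBSD, OTHER;

  private static final OSTypeSketch CURRENT = detect();

  public static OSTypeSketch current() { return CURRENT; }

  private static OSTypeSketch detect() {
    String name = System.getProperty("os.name", "").toLowerCase();
    if (name.startsWith("windows")) return WINDOWS;
    if (name.contains("linux"))     return LINUX;
    if (name.contains("sunos"))     return SOLARIS;
    if (name.contains("mac"))       return MAC;
    if (name.contains("freebsd"))   return FREEBSD;
    return OTHER;
  }
}
{code}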



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11293) Factor OSType out from Shell

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11293:

  Labels:   (was: BB2015-05-TBR)
Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

Resubmitting. I expect we'll hit problems applying it, but they'll be fixable, 
and then we can get this in.

> Factor OSType out from Shell
> 
>
> Key: HADOOP-11293
> URL: https://issues.apache.org/jira/browse/HADOOP-11293
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, util
>Affects Versions: 2.7.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11293-branch-2-005.patch, HADOOP-11293.001.patch, 
> HADOOP-11293.002.patch, HADOOP-11293.003.patch, HADOOP-11293.004.patch, 
> HADOOP-11293.005.patch, HADOOP-11293.005.patch, HADOOP-11293.005.patch, 
> HADOOP-11293.005.patch
>
>
> Currently the code that detects the OS type is located in Shell.java. Code 
> that needs to check the OS type refers to Shell, even if nothing else from 
> Shell is needed. 
> I am proposing to refactor OSType out into its own class, to make the 
> OSType easier to access and the dependency cleaner.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Patch Available  (was: Open)

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11293) Factor OSType out from Shell

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11293:

Status: Open  (was: Patch Available)

> Factor OSType out from Shell
> 
>
> Key: HADOOP-11293
> URL: https://issues.apache.org/jira/browse/HADOOP-11293
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, util
>Affects Versions: 2.7.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11293-branch-2-005.patch, HADOOP-11293.001.patch, 
> HADOOP-11293.002.patch, HADOOP-11293.003.patch, HADOOP-11293.004.patch, 
> HADOOP-11293.005.patch, HADOOP-11293.005.patch, HADOOP-11293.005.patch, 
> HADOOP-11293.005.patch
>
>
> Currently the code that detects the OS type is located in Shell.java. Code 
> that needs to check the OS type refers to Shell, even if nothing else from 
> Shell is needed. 
> I am proposing to refactor OSType out into its own class, to make the 
> OSType easier to access and the dependency cleaner.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Attachment: HADOOP-12360.008.patch

Fixed checkstyle

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Open  (was: Patch Available)

Fixing checkstyle


> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10775:

Status: Patch Available  (was: Open)

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.1, trunk-win
> Environment: windows
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10775:

Affects Version/s: 2.7.1
  Environment: windows  (was: Apache jenkins windows1 server)

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win, 2.7.1
> Environment: windows
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12321:
-
Status: Patch Available  (was: Open)

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to the ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into {{serviceInit()}}, {{serviceStart()}} & 
> {{serviceStop()}} will fix the concurrency and state model issues and make it 
> trivial to add as a child to any YARN service which subclasses 
> {{CompositeService}} (most of the NM and RM apps): such services will be able 
> to hook up the monitor simply by creating one in the ctor and adding it as a child.
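
A rough sketch of the service shape the description proposes (illustrative 
only, not the attached patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;

// Sketch: a pause monitor whose lifecycle is driven by the service framework,
// so a CompositeService parent can simply addService(...) it.
public class PauseMonitorServiceSketch extends AbstractService {
  private Thread monitorThread;

  public PauseMonitorServiceSketch() {
    super("PauseMonitorServiceSketch");
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    // read thresholds / sleep interval from conf here
    super.serviceInit(conf);
  }

  @Override
  protected void serviceStart() throws Exception {
    monitorThread = new Thread(() -> { /* detect pauses in a loop */ });
    monitorThread.setDaemon(true);
    monitorThread.start();
    super.serviceStart();
  }

  @Override
  protected void serviceStop() throws Exception {
    if (monitorThread != null) {
      monitorThread.interrupt();
    }
    super.serviceStop();
  }
}
{code}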



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12321:
-
Attachment: HADOOP-12321-005-aggregated.patch

Attaching an aggregated patch.

For the test failures in MAPREDUCE-6462, I also feel we can remove that service 
check. It doesn't look like it adds much value. Kicking Jenkins.

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to the ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into {{serviceInit()}}, {{serviceStart()}} & 
> {{serviceStop()}} will fix the concurrency and state model issues and make it 
> trivial to add as a child to any YARN service which subclasses 
> {{CompositeService}} (most of the NM and RM apps): such services will be able 
> to hook up the monitor simply by creating one in the ctor and adding it as a child.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12321:
-
Attachment: 0004-HADOOP-12321.patch

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to the ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into {{serviceInit()}}, {{serviceStart()}} & 
> {{serviceStop()}} will fix the concurrency and state model issues and make it 
> trivial to add as a child to any YARN service which subclasses 
> {{CompositeService}} (most of the NM and RM apps): such services will be able 
> to hook up the monitor simply by creating one in the ctor and adding it as a child.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12321:
-
Status: Open  (was: Patch Available)

Cancelling the patch to upload HADOOP-specific and aggregated patches.

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to the ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into {{serviceInit()}}, {{serviceStart()}} & 
> {{serviceStop()}} will fix the concurrency and state model issues and make it 
> trivial to add as a child to any YARN service which subclasses 
> {{CompositeService}} (most of the NM and RM apps): such services will be able 
> to hook up the monitor simply by creating one in the ctor and adding it as a child.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10775:

Attachment: HADOOP-10775-002.patch

Patch -002 reinstates the construction time warning, but at INFO, not error. 

> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win
> Environment: Apache jenkins windows1 server
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775-002.patch, HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10775) Shell operations to fail with meaningful errors on windows if winutils.exe not found

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10775:

Attachment: HADOOP-10775.patch

As well as retaining the original {{WINUTILS}} field, which is null if the 
path could not be retrieved, I've added two methods:

# {{getWinutilsPathStrict()}} raises an IOE.
# {{getWinutilsPathRTE()}} raises an RTE; I've inserted this where the calling 
method doesn't raise an IOE.

Both of these throw an exception which nests the exception caught when trying 
to set up the path; on Windows, those also include a link to a new wiki page.

* The wiki entry is [[https://wiki.apache.org/hadoop/WindowsProblems]]
* It links to a repo where we can collect the Windows binaries. Currently 
[[mine|https://github.com/steveloughran/winutils]]

I've not been through all references to Shell.WINUTILS; the ones where a path 
is set up but not executed are left alone (e.g. 
{{Shell.getSetPermissionCommand()}}). Instead I fixed the shell executor to scan 
the command for null entries and fail fast, with better text. 

Irrespective of what Jenkins says, this patch needs to be tested against a 
Windows host.
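
A tiny sketch of the fail-fast null scan mentioned above (method name and 
message are illustrative, not the patch itself):

{code}
import java.io.IOException;

// Illustrative only: reject a command array containing null entries (e.g. a null
// winutils path) before trying to exec it, instead of letting an NPE surface later.
public final class CommandCheckSketch {
  public static void checkCommand(String[] command) throws IOException {
    for (int i = 0; i < command.length; i++) {
      if (command[i] == null) {
        throw new IOException("Invalid command: element " + i + " is null"
            + " -- is winutils.exe missing or HADOOP_HOME unset?");
      }
    }
  }
}
{code}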






> Shell operations to fail with meaningful errors on windows if winutils.exe 
> not found
> 
>
> Key: HADOOP-10775
> URL: https://issues.apache.org/jira/browse/HADOOP-10775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: trunk-win
> Environment: Apache jenkins windows1 server
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-10775.patch
>
>
> If {{winutils.exe}} can't be found ({{HADOOP_HOME}} wrong/unset, or other 
> causes), then an error is logged, but when any of the {{Shell}} operations are 
> used, an NPE is raised rather than something meaningful.
> The error message at setup time should be preserved and then raised before 
> any attempt to invoke a winutils-driven process is made.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12409) Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop Common

2015-09-14 Thread Xianyin Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianyin Xin reassigned HADOOP-12409:


Assignee: Xianyin Xin

> Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop 
> Common
> 
>
> Key: HADOOP-12409
> URL: https://issues.apache.org/jira/browse/HADOOP-12409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
>Priority: Minor
> Attachments: Hadoop-12409.001.patch
>
>
> It is widely used by MR and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12409) Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop Common

2015-09-14 Thread Xianyin Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianyin Xin updated HADOOP-12409:
-
Attachment: Hadoop-12409.001.patch

> Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop 
> Common
> 
>
> Key: HADOOP-12409
> URL: https://issues.apache.org/jira/browse/HADOOP-12409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Xianyin Xin
>Priority: Minor
> Attachments: Hadoop-12409.001.patch
>
>
> It is widely used by MR and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743514#comment-14743514
 ] 

Hadoop QA commented on HADOOP-12360:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 14s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  5s | The applied patch generated  1 
new checkstyle issues (total was 0, now 1). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   1m 55s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 36s | Tests passed in 
hadoop-common. |
| | |  63m 21s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755718/HADOOP-12360.007.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7656/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7656/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7656/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7656/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7656/console |


This message was automatically generated.

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Patch Available  (was: Open)

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Open  (was: Patch Available)

addressing comments

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-14 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Attachment: HADOOP-12360.007.patch

Addressed comments

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11252) RPC client write does not time out by default

2015-09-14 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743352#comment-14743352
 ] 

Masatake Iwasaki commented on HADOOP-11252:
---

[~wilfreds], do you have any update on this? I tested the equivalent patch in 
YARN-2578 and am +1 (non-binding) for the fix. I would like to update the patch 
based on [~andrew.wang]'s comment if you don't have time. Thanks.

> RPC client write does not time out by default
> -
>
> Key: HADOOP-11252
> URL: https://issues.apache.org/jira/browse/HADOOP-11252
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.5.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: HADOOP-11252.patch
>
>
> The RPC client has a default timeout set to 0 when no timeout is passed in. 
> This means that the network connection created will not time out when used to 
> write data. The issue has shown up in YARN-2578 and HDFS-4858. Timeouts for 
> writes then fall back to the TCP-level retry (configured via tcp_retries2) 
> and take between 15 and 30 minutes, which is too long for a default 
> behaviour.
> Using 0 as the default value for the timeout is incorrect. We should use a sane 
> value for the timeout, and the "ipc.ping.interval" configuration value is a 
> logical choice for it. The default behaviour should be changed from 0 to the 
> ping interval value read from the Configuration.
> Fixing it in common makes more sense than finding and changing all other 
> points in the code that do not pass in a timeout.
> Offending code lines:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12284:

Status: Patch Available  (was: Open)

> UserGroupInformation doAs can throw misleading exception
> 
>
> Key: HADOOP-12284
> URL: https://issues.apache.org/jira/browse/HADOOP-12284
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Aaron Dossett
>Assignee: Aaron Dossett
>Priority: Trivial
> Attachments: HADOOP-12284-002.patch, HADOOP-12284.example, 
> HADOOP-12284.patch
>
>
> If doAs() catches a PrivilegedActionException it extracts the underlying 
> cause through getCause and then re-throws an exception based on the class of 
> the Cause.  If getCause returns null, this is how it gets re-thrown:
> else {
> throw new UndeclaredThrowableException(cause);
>   }
> If cause == null that seems misleading. I have seen actual instances where 
> cause is null, so this isn't just a theoretical concern.
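
One possible shape of the null-cause handling the description asks for 
(illustrative only, not the actual UserGroupInformation change):

{code}
import java.lang.reflect.UndeclaredThrowableException;
import java.security.PrivilegedActionException;

// Sketch: when the PrivilegedActionException carries no cause, rethrow it with
// a message rather than wrapping a null cause, which is what the report calls
// misleading.
final class DoAsRethrowSketch {
  static RuntimeException rethrow(PrivilegedActionException pae) {
    Throwable cause = pae.getCause();
    if (cause == null) {
      return new UndeclaredThrowableException(pae,
          "PrivilegedActionException with no underlying cause");
    }
    if (cause instanceof RuntimeException) {
      return (RuntimeException) cause;
    }
    return new UndeclaredThrowableException(cause);
  }
}
{code}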



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12284:

Status: Open  (was: Patch Available)

> UserGroupInformation doAs can throw misleading exception
> 
>
> Key: HADOOP-12284
> URL: https://issues.apache.org/jira/browse/HADOOP-12284
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Aaron Dossett
>Assignee: Aaron Dossett
>Priority: Trivial
> Attachments: HADOOP-12284-002.patch, HADOOP-12284.example, 
> HADOOP-12284.patch
>
>
> If doAs() catches a PrivilegedActionException it extracts the underlying 
> cause through getCause and then re-throws an exception based on the class of 
> the Cause.  If getCause returns null, this is how it gets re-thrown:
> else {
> throw new UndeclaredThrowableException(cause);
>   }
> If cause == null that seems misleading. I have seen actual instances where 
> cause is null, so this isn't just a theoretical concern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12409) Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop Common

2015-09-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12409:

Affects Version/s: 3.0.0
 Priority: Minor  (was: Major)
  Component/s: util
   Issue Type: Improvement  (was: Wish)

> Move org.apache.hadoop.yarn.util.Clock and relative implementations to hadoop 
> Common
> 
>
> Key: HADOOP-12409
> URL: https://issues.apache.org/jira/browse/HADOOP-12409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Xianyin Xin
>Priority: Minor
>
> It is widely used by MR and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-14 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743215#comment-14743215
 ] 

Sunil G commented on HADOOP-12321:
--

Thank you [~steve_l].
I think I overlooked those failures. I will make the changes and upload a 
common patch under all 3 sub-JIRAs.

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> HADOOP-12321-003.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to the ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into {{serviceInit()}}, {{serviceStart()}} & 
> {{serviceStop()}} will fix the concurrency and state model issues and make it 
> trivial to add as a child to any YARN service which subclasses 
> {{CompositeService}} (most of the NM and RM apps): such services will be able 
> to hook up the monitor simply by creating one in the ctor and adding it as a child.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14743207#comment-14743207
 ] 

Steve Loughran commented on HADOOP-12321:
-

It's Maven race conditions surfacing: if there is more than one simultaneous build, 
the artifacts may not be synchronized. It looks like a hadoop-common JAR from a 
different build was picked up. 

The test failures in MAPREDUCE-6462 *are* related.

They are easy to fix; if you do that, then you can resubmit the entire patch 
under all three JIRAs as I did, so as to get the builds and tests across all 
the components.

> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> HADOOP-12321-003.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle, 
> which has already proven brittle to the ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into {{serviceInit()}}, {{serviceStart()}} & 
> {{serviceStop()}} will fix the concurrency and state model issues and make it 
> trivial to add as a child to any YARN service which subclasses 
> {{CompositeService}} (most of the NM and RM apps): such services will be able 
> to hook up the monitor simply by creating one in the ctor and adding it as a child.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)