[jira] [Commented] (YARN-3364) Clarify Naming of yarn.client.nodemanager-connect.max-wait-ms and yarn.resourcemanager.connect.max-wait.ms

2015-03-18 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14367167#comment-14367167
 ] 

Andrew Johnson commented on YARN-3364:
--

No, I did not have YARN-3238 applied.  Thanks for that!

Given that and HADOOP-11398, I think this can be closed.

 Clarify Naming of yarn.client.nodemanager-connect.max-wait-ms and 
 yarn.resourcemanager.connect.max-wait.ms 
 ---

 Key: YARN-3364
 URL: https://issues.apache.org/jira/browse/YARN-3364
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Reporter: Andrew Johnson

 I encountered an issue recently where the ApplicationMaster for MapReduce 
 jobs would spend hours attempting to connect to a node in my cluster that had 
 died due to a hardware fault.  After debugging this, I found that the 
 yarn.client.nodemanager-connect.max-wait-ms property did not behave as I had 
 expected.  Based on the name I had thought this would set a maximum time 
 limit for attempting to connect to a NodeManager.  The code in 
 org.apache.hadoop.yarn.client.NMProxy corroborated this thought - it used a 
 RetryUpToMaximumTimeWithFixedSleep policy when a ConnectTimeoutException was 
 thrown, as it was in my case with a dead node.
 However, the RetryUpToMaximumTimeWithFixedSleep policy doesn't actually set a 
 time limit, but instead divides the maximum time by the sleep period to set a 
 total number of retries, regardless of how long those retries take.  As such 
 I was seeing the ApplicationMaster spend much longer attempting to make a 
 connection than I had anticipated.
 The yarn.resourcemanager.connect.max-wait.ms property would have the same behavior.  
 These properties would be better named something like 
 yarn.client.nodemanager-connect.max.retries and 
 yarn.resourcemanager.connect.max.retries to align with the actual behavior.
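
For illustration, here is a minimal sketch (not the actual NMProxy or RetryPolicies source) of the arithmetic described above: the configured max wait and the fixed sleep are folded into a retry count up front, so the actual wall-clock time depends on how long each connect attempt blocks. The timeout and interval values below are made up for the example.

{code:java}
public class MaxWaitVsRetriesSketch {
  public static void main(String[] args) {
    // Illustrative values only; the real settings come from yarn-site.xml.
    long maxWaitMs = 15 * 60 * 1000;  // e.g. yarn.client.nodemanager-connect.max-wait-ms
    long sleepMs = 10 * 1000;         // fixed sleep between attempts

    // Despite the "max wait" name, the policy boils down to a retry count.
    long retries = maxWaitMs / sleepMs;  // 90 attempts

    // If each connect attempt to a dead node blocks for ~20s before failing,
    // the elapsed time is roughly retries * (attempt time + sleep):
    // 90 * 30s = 45 minutes, three times the nominal "max wait".
    long attemptMs = 20 * 1000;
    long worstCaseMs = retries * (attemptMs + sleepMs);

    System.out.println("Effective retries: " + retries);
    System.out.println("Approximate worst-case wait: " + worstCaseMs + " ms");
  }
}
{code}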



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3364) Clarify Naming of yarn.client.nodemanager-connect.max-wait-ms and yarn.resourcemanager.connect.max-wait.ms

2015-03-18 Thread Andrew Johnson (JIRA)
Andrew Johnson created YARN-3364:


 Summary: Clarify Naming of 
yarn.client.nodemanager-connect.max-wait-ms and 
yarn.resourcemanager.connect.max-wait.ms 
 Key: YARN-3364
 URL: https://issues.apache.org/jira/browse/YARN-3364
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Reporter: Andrew Johnson


I encountered an issue recently where the ApplicationMaster for MapReduce jobs 
would spend hours attempting to connect to a node in my cluster that had died 
due to a hardware fault.  After debugging this, I found that the 
yarn.client.nodemanager-connect.max-wait-ms property did not behave as I had 
expected.  Based on the name I had thought this would set a maximum time limit 
for attempting to connect to a NodeManager.  The code in 
org.apache.hadoop.yarn.client.NMProxy corroborated this thought - it used a 
RetryUpToMaximumTimeWithFixedSleep policy when a ConnectTimeoutException was 
thrown, as it was in my case with a dead node.

However, the RetryUpToMaximumTimeWithFixedSleep policy doesn't actually set a 
time limit, but instead divides the maximum time by the sleep period to set a 
total number of retries, regardless of how long those retries take.  As such I 
was seeing the ApplicationMaster spend much longer attempting to make a 
connection than I had anticipated.

The yarn.resourcemanager.connect.max-wait.ms property would have the same behavior.  
These properties would be better named something like 
yarn.client.nodemanager-connect.max.retries and 
yarn.resourcemanager.connect.max.retries to align with the actual behavior.
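
As a rough way to see what these settings actually buy you, the sketch below reads the configured max wait and retry interval and prints the retry count they reduce to. Only the two max-wait property names come from this issue; the *-retry-interval property names and the fallback values are assumptions for illustration.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConnectRetryEstimate {
  public static void main(String[] args) {
    Configuration conf = new Configuration();  // picks up yarn-site.xml if on the classpath

    // NodeManager connection settings (interval property name is assumed here).
    long nmMaxWaitMs  = conf.getLong("yarn.client.nodemanager-connect.max-wait-ms", 900000L);
    long nmIntervalMs = conf.getLong("yarn.client.nodemanager-connect.retry-interval-ms", 10000L);

    // ResourceManager connection settings (interval property name is assumed here).
    long rmMaxWaitMs  = conf.getLong("yarn.resourcemanager.connect.max-wait.ms", 900000L);
    long rmIntervalMs = conf.getLong("yarn.resourcemanager.connect.retry-interval.ms", 30000L);

    // Despite the "max-wait" naming, each pair effectively configures a retry count.
    System.out.println("NM connect: ~" + (nmMaxWaitMs / nmIntervalMs) + " retries");
    System.out.println("RM connect: ~" + (rmMaxWaitMs / rmIntervalMs) + " retries");
  }
}
{code}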



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2419) RM applications page doesn't sort application id properly

2015-01-15 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278696#comment-14278696
 ] 

Andrew Johnson commented on YARN-2419:
--

I am encountering this same problem.  Is there a fix in the works?

 RM applications page doesn't sort application id properly
 -

 Key: YARN-2419
 URL: https://issues.apache.org/jira/browse/YARN-2419
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Thomas Graves

 The ResourceManager apps page doesn't sort the application ids properly when 
 the app id rolls over from  to 1.
 When it rolls over the 1+ application ids end up being many pages down by 
 the 0XXX numbers.
 I assume we just sort alphabetically so we would need a special sorter that 
 knows about application ids.
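
To illustrate what an application-id-aware sorter could look like (this is only a sketch of the idea, not the change that would go into the RM web UI), the comparator below parses the cluster timestamp and sequence number out of ids of the form application_<clusterTimestamp>_<sequenceNumber> and compares them numerically:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class AppIdSortSketch {
  // Compare ids like application_<clusterTimestamp>_<sequenceNumber> numerically.
  static final Comparator<String> BY_APP_ID = (a, b) -> {
    String[] pa = a.split("_");
    String[] pb = b.split("_");
    int byCluster = Long.compare(Long.parseLong(pa[1]), Long.parseLong(pb[1]));
    return byCluster != 0 ? byCluster
        : Long.compare(Long.parseLong(pa[2]), Long.parseLong(pb[2]));
  };

  public static void main(String[] args) {
    List<String> ids = new ArrayList<>(Arrays.asList(
        "application_1420000000000_10001",
        "application_1420000000000_9999"));

    ids.sort(Comparator.naturalOrder());  // plain string sort puts 10001 before 9999
    System.out.println(ids);

    ids.sort(BY_APP_ID);                  // numeric sort keeps 9999 before 10001
    System.out.println(ids);
  }
}
{code}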



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2893) AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream

2015-01-08 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269373#comment-14269373
 ] 

Andrew Johnson commented on YARN-2893:
--

Yeah, that definitely seems like it's worth a look.  Is there anything specific 
I should look out for?

 AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream
 --

 Key: YARN-2893
 URL: https://issues.apache.org/jira/browse/YARN-2893
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov

 MapReduce jobs on our clusters experience sporadic failures due to corrupt 
 tokens in the AM launch context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2893) AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream

2015-01-07 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268449#comment-14268449
 ] 

Andrew Johnson commented on YARN-2893:
--

I'm seeing this error on a non-secure cluster. 

 AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream
 --

 Key: YARN-2893
 URL: https://issues.apache.org/jira/browse/YARN-2893
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov

 MapReduce jobs on our clusters experience sporadic failures due to corrupt 
 tokens in the AM launch context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2893) AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream

2015-01-07 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268193#comment-14268193
 ] 

Andrew Johnson commented on YARN-2893:
--

I've also noticed that if multiple jobs are submitted at the same time and this 
error occurs, all the jobs will fail.

 AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream
 --

 Key: YARN-2893
 URL: https://issues.apache.org/jira/browse/YARN-2893
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov

 MapReduce jobs on our clusters experience sporadic failures due to corrupt 
 tokens in the AM launch context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2893) AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream

2015-01-07 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268183#comment-14268183
 ] 

Andrew Johnson commented on YARN-2893:
--

I am also encountering this same error.  The failures are pretty sporadic and 
I've never been able to reproduce them.  Resubmitting the failed job always 
works, however.

 AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream
 --

 Key: YARN-2893
 URL: https://issues.apache.org/jira/browse/YARN-2893
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov

 MapReduce jobs on our clusters experience sporadic failures due to corrupt 
 tokens in the AM launch context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2893) AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream

2015-01-07 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268359#comment-14268359
 ] 

Andrew Johnson commented on YARN-2893:
--

No, it's at least 95% Scalding jobs.

 AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream
 --

 Key: YARN-2893
 URL: https://issues.apache.org/jira/browse/YARN-2893
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov

 MapReduce jobs on our clusters experience sporadic failures due to corrupt 
 tokens in the AM launch context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2893) AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream

2015-01-07 Thread Andrew Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268340#comment-14268340
 ] 

Andrew Johnson commented on YARN-2893:
--

[~jira.shegalov] This is always with Scalding jobs.

 AMLaucher: sporadic job failures due to EOFException in readTokenStorageStream
 --

 Key: YARN-2893
 URL: https://issues.apache.org/jira/browse/YARN-2893
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov

 MapReduce jobs on our clusters experience sporadic failures due to corrupt 
 tokens in the AM launch context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)