[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823554#comment-13823554
 ] 

Hudson commented on MAPREDUCE-5616:
---

SUCCESS: Integrated in Hadoop-Yarn-trunk #392 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/392/])
MAPREDUCE-5616. MR Client-AppMaster RPC max retries on socket timeout is too 
high. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542001)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java


 MR Client-AppMaster RPC max retries on socket timeout is too high.
 --

 Key: MAPREDUCE-5616
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5616
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client
Affects Versions: 3.0.0, 2.2.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.3.0

 Attachments: MAPREDUCE-5616.1.patch


 MAPREDUCE-3811 introduced a separate config key for overriding the max 
 retries applied to RPC connections from the MapReduce Client to the MapReduce 
 Application Master.  This was done to make failover from the AM to the 
 MapReduce History Server faster in the event that the AM completes while the 
 client thinks it's still running.  However, the RPC client uses a separate 
 setting for socket timeouts, and this one is not overridden.  The default for 
 this is 45 retries with a 20-second timeout on each retry.  This means that 
 in environments subject to connection timeout instead of connection refused, 
 the client waits 15 minutes for failover.
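
 A minimal sketch of the retry math described above, and of how a client could 
 shorten the wait once a dedicated override exists. The 
 yarn.app.mapreduce.client-am.ipc.* key names below are assumptions based on 
 MAPREDUCE-3811 and this patch, not the authoritative committed names; the 
 ipc.client.connect.* keys are the base RPC client settings with their stock 
 defaults.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AmClientRetryMath {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Base RPC client defaults: 45 retries on socket timeout, with a
    // 20-second connect timeout per attempt => 45 * 20 s = 900 s = 15 min.
    int retriesOnTimeout = conf.getInt("ipc.client.connect.max.retries.on.timeouts", 45);
    int connectTimeoutMs = conf.getInt("ipc.client.connect.timeout", 20000);
    System.out.printf("Worst-case wait: %d seconds%n",
        retriesOnTimeout * connectTimeoutMs / 1000);

    // MAPREDUCE-3811 added a client-to-AM override for connection-refused
    // retries; this issue adds the analogous override for timeout-based
    // retries.  Key names here are assumed for illustration only.
    conf.setInt("yarn.app.mapreduce.client-am.ipc.max-retries", 3);
    conf.setInt("yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts", 3);
    // With 3 retries, the worst-case wait drops to roughly 3 * 20 s = 60 s.
  }
}
{code}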



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823638#comment-13823638
 ] 

Hudson commented on MAPREDUCE-5616:
---

FAILURE: Integrated in Hadoop-Hdfs-trunk #1583 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1583/])
MAPREDUCE-5616. MR Client-AppMaster RPC max retries on socket timeout is too 
high. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542001)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java




[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823657#comment-13823657
 ] 

Hudson commented on MAPREDUCE-5616:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1609 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1609/])
MAPREDUCE-5616. MR Client-AppMaster RPC max retries on socket timeout is too 
high. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542001)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java




[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822707#comment-13822707
 ] 

Hudson commented on MAPREDUCE-5616:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #4739 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4739/])
MAPREDUCE-5616. MR Client-AppMaster RPC max retries on socket timeout is too 
high. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1542001)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java




[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-13 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13822129#comment-13822129
 ] 

Bikas Saha commented on MAPREDUCE-5616:
---

Looks like a fairly straightforward change for a fairly non-trivial bug. Thanks 
Chris! +1.



[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818074#comment-13818074
 ] 

Hadoop QA commented on MAPREDUCE-5616:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12612975/MAPREDUCE-5616.1.patch
against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

org.apache.hadoop.mapreduce.v2.TestUberAM

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4186//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4186//console

This message is automatically generated.



[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818169#comment-13818169
 ] 

Chris Nauroth commented on MAPREDUCE-5616:
--

{quote}
-1 tests included. The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{quote}

Trying to write a test around this timeout condition would likely be very 
specific to the environment and risk breaking on other environments.


{quote}
-1 core tests. The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:
org.apache.hadoop.mapreduce.v2.TestUberAM
{quote}

This is the timeout that we've seen elsewhere on {{TestUberAM}}.  It's 
unrelated to this patch.



[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817538#comment-13817538
 ] 

Chris Nauroth commented on MAPREDUCE-5616:
--

Linking to MAPREDUCE-3811.  I have a patch in progress that will look very 
similar to that one.



[jira] [Commented] (MAPREDUCE-5616) MR Client-AppMaster RPC max retries on socket timeout is too high.

2013-11-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817567#comment-13817567
 ] 

Chris Nauroth commented on MAPREDUCE-5616:
--

The existing config override is sufficient for connection refused errors.  It 
doesn't cover connection timeout errors, which are configured separately in 
the base RPC client code.

After the AM exits, we would expect connection attempts to cause an immediate 
connection refused error, not a longer connection timeout error.  After all, 
the packets can get to their destination.  There's just no server listening 
anymore.  The reason I saw connection timeouts was a side effect of a feature 
of Windows Firewall called Stealth Mode.  This feature is on by default, and it 
intentionally drops outbound TCP RST packets for connections initiated against 
a port with no server listening.

http://technet.microsoft.com/en-us/library/dd448557%28WS.10%29

Without getting the RST, the client doesn't know that a connection has been 
refused, and so it just has to wait for the longer timeout condition.  It's 
possible to disable stealth mode by setting a registry key and restarting the 
firewall:

http://msdn.microsoft.com/en-us/library/ff720058.aspx

That article might be out of date though, because I found that this registry 
key was really at 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SharedAccess\Parameters\FirewallPolicy\PublicProfile
 in my environments.

My only known repro right now is on Windows.  I'm leaving this information here 
for anyone who might notice similar problems on other RPC interactions.  I'd 
still like to get a configuration patch into the client for this.
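
To make the refused-versus-timeout distinction above concrete, here is a small 
sketch (not part of the patch): when the remote host returns a TCP RST, 
connect() fails immediately with ConnectException, but when the RST is dropped 
(as with stealth mode), the same call blocks until the connect timeout expires. 
The host and port below are placeholders for an address with no server 
listening.

{code:java}
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class RefusedVsTimeout {
  public static void main(String[] args) {
    try (Socket socket = new Socket()) {
      // 20 seconds mirrors the default ipc.client.connect.timeout.
      socket.connect(new InetSocketAddress("am-host.example.com", 59999), 20000);
    } catch (ConnectException e) {
      System.out.println("Fast failure: connection refused (RST received).");
    } catch (SocketTimeoutException e) {
      System.out.println("Slow failure: connect timed out (RST never arrived).");
    } catch (IOException e) {
      System.out.println("Other I/O failure: " + e);
    }
  }
}
{code}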
