[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992826#comment-13992826
 ] 

Hudson commented on MAPREDUCE-5402:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1777 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1777/])
MAPREDUCE-5402. In DynamicInputFormat, change MAX_CHUNKS_TOLERABLE, 
MAX_CHUNKS_IDEAL, MIN_RECORDS_PER_CHUNK and SPLIT_RATIO to be configurable.  
Contributed by Tsuyoshi OZAWA (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1592703)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java
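With this change the four limits can be supplied through the job configuration instead of being compiled in. A minimal sketch of overriding them from client code follows; the property names are assumed from the "distcp.dynamic." prefix agreed on later in this thread, so check DistCpConstants in your Hadoop version for the authoritative keys.
{code}
import org.apache.hadoop.conf.Configuration;

// Sketch only: assumed property names, to be verified against DistCpConstants.
public class DynamicChunkTuning {
  public static Configuration tunedConf() {
    Configuration conf = new Configuration();
    conf.setInt("distcp.dynamic.max.chunks.tolerable", 2000); // formerly the hard-coded 400
    conf.setInt("distcp.dynamic.max.chunks.ideal", 1000);
    conf.setInt("distcp.dynamic.min.records_per_chunk", 5);
    conf.setInt("distcp.dynamic.split.ratio", 10);
    return conf;
  }
}
{code}
The same values can be passed on the distcp command line via the generic -D option, e.g. -Ddistcp.dynamic.max.chunks.tolerable=2000 together with -strategy dynamic.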


 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Fix For: 2.5.0

 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch, 
 MAPREDUCE-5402.5.patch


 In MAPREDUCE-2765, which provided the design spec for DistCpV2, the author 
 describes the implementation of DynamicInputFormat, with one of the main 
 motivations cited being to reduce the chance of long-tails where a few 
 leftover mappers run much longer than the rest.
 However, I today ran into a situation where I experienced exactly such a long 
 tail using DistCpV2 and DynamicInputFormat.  And when I tried to alleviate 
 the problem by overriding the number of mappers and the split ratio used by 
 the DynamicInputFormat, I was prevented from doing so by the hard-coded limit 
 set in the code by the MAX_CHUNKS_TOLERABLE constant.  (Currently set to 400.)
 This constant is actually set quite low for production use.  (See a 
 description of my use case below.)  And although MAPREDUCE-2765 states that 
 this is an overridable maximum, when reading through the code there does 
 not actually appear to be any mechanism available to override it.
 This should be changed.  It should be possible to expand the maximum # of 
 chunks beyond this arbitrary limit.
 For example, here is the situation I ran into today:
 I ran a distcpv2 job on a cluster with 8 machines containing 128 map slots.  
 The job consisted of copying ~2800 files from HDFS to Amazon S3.  I overrode 
 the number of mappers for the job from the default of 20 to 128, so as to 
 more properly parallelize the copy across the cluster.  The number of chunk 
 files created was calculated as 241, and mapred.num.entries.per.chunk was 
 calculated as 12.
 As the job ran on, it reached a point where there were only 4 remaining map 
 tasks, which had each been running for over 2 hours.  The reason for this was 
 that each of the 12 files that those mappers were copying were quite large 
 (several hundred megabytes in size) and took ~20 minutes each.  However, 
 during this time, all the other 124 mappers sat idle.
 In theory I should be able to alleviate this problem with DynamicInputFormat. 
  If I were able to, say, quadruple the number of chunk files created, that 
 would have made each chunk contain only 3 files, and these large files would 
 have gotten distributed better around the cluster and copied in parallel.
 However, when I tried to do that - by overriding mapred.listing.split.ratio 
 to, say, 10 - DynamicInputFormat responded with an exception ("Too many 
 chunks created with splitRatio:10, numMaps:128. Reduce numMaps or decrease 
 split-ratio to proceed.") - presumably because I exceeded the 
 MAX_CHUNKS_TOLERABLE value of 400.
 Is there any particular logic behind this MAX_CHUNKS_TOLERABLE limit?  I 
 can't personally see any.
 If this limit has no particular logic behind it, then it should be 
 overridable - or even better:  removed altogether.  After all, I'm not sure I 
 see any need for it.  Even if numMaps * splitRatio resulted in an 
 extraordinarily large number, if the code were modified so that the number of 
 chunks got calculated as Math.min( numMaps * splitRatio, numFiles), then 
 there would be no need for MAX_CHUNKS_TOLERABLE.  In this worst-case scenario 
 where the product of numMaps and splitRatio is large, capping the number of 
 chunks at the number of files (numberOfChunks = numberOfFiles) would result 
 in 1 file per chunk - the maximum parallelization possible.  That may not be 
 the best-tuned solution for some users, but I would think that it should be 
 left up to the user to deal with the potential consequence of not having 
 tuned their job properly.  Certainly that would be better than having an 
 arbitrary hard-coded limit that *prevents* proper parallelization when 
 dealing with large files and/or large numbers of mappers.
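To put rough numbers on the capping idea above, here is a minimal sketch (hypothetical helper, not the actual DynamicInputFormat code) of the proposed calculation:
{code}
// Hypothetical illustration of the capping proposed above; not the actual
// DynamicInputFormat implementation.
public class ChunkCapSketch {
  static int numberOfChunks(int numMaps, int splitRatio, int numFiles) {
    // Capping at the number of files means the worst case is one file per
    // chunk - the finest work distribution the file listing allows.
    return Math.min(numMaps * splitRatio, numFiles);
  }

  public static void main(String[] args) {
    // With the numbers from this report: 128 maps, split ratio 10, ~2800 files
    // -> min(1280, 2800) = 1280 chunks, i.e. roughly 2 files per chunk instead
    // of 12, so a handful of large files can no longer pin down 4 mappers.
    System.out.println(numberOfChunks(128, 10, 2800));
  }
}
{code}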

[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992790#comment-13992790
 ] 

Hudson commented on MAPREDUCE-5402:
---

FAILURE: Integrated in Hadoop-Hdfs-trunk #1751 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1751/])
MAPREDUCE-5402. In DynamicInputFormat, change MAX_CHUNKS_TOLERABLE, 
MAX_CHUNKS_IDEAL, MIN_RECORDS_PER_CHUNK and SPLIT_RATIO to be configurable.  
Contributed by Tsuyoshi OZAWA (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1592703)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java


 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Fix For: 2.5.0

 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch, 
 MAPREDUCE-5402.5.patch



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-06 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990553#comment-13990553
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Thanks for reporting and helping us, David! This feature will be available in 
the 2.5.0 release. And thanks for your reviews, Tsz and Mithun!

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Fix For: 2.5.0

 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch, 
 MAPREDUCE-5402.5.patch




[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991211#comment-13991211
 ] 

Hudson commented on MAPREDUCE-5402:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #5594 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5594/])
MAPREDUCE-5402. In DynamicInputFormat, change MAX_CHUNKS_TOLERABLE, 
MAX_CHUNKS_IDEAL, MIN_RECORDS_PER_CHUNK and SPLIT_RATIO to be configurable.  
Contributed by Tsuyoshi OZAWA (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1592703)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java


 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Fix For: 2.5.0

 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch, 
 MAPREDUCE-5402.5.patch



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990085#comment-13990085
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Thanks for your review, Tsz! Updated the patch to change the configuration prefix. 
Let's wait for Jenkins.

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch, 
 MAPREDUCE-5402.5.patch




[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990243#comment-13990243
 ] 

Hadoop QA commented on MAPREDUCE-5402:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12643421/MAPREDUCE-5402.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-distcp.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4582//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4582//console

This message is automatically generated.

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch, 
 MAPREDUCE-5402.5.patch



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-04 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989242#comment-13989242
 ] 

Tsz Wo Nicholas Sze commented on MAPREDUCE-5402:


{code}
+  public static final String CONF_LABEL_MAX_CHUNKS_TOLERABLE = "distcp.max.chunks.tolerable";
+  public static final String CONF_LABEL_MAX_CHUNKS_IDEAL = "distcp.max.chunks.ideal";
+  public static final String CONF_LABEL_MIN_RECORDS_PER_CHUNK = "distcp.min.records_per_chunk";
+  public static final String CONF_LABEL_SPLIT_RATIO = "distcp.split.ratio";
{code}
Since these conf properties are used only if -strategy dynamic is specified, let's use 
the prefix distcp.dynamic. for them.  The patch looks good other than that.
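With that prefix applied, the declarations would presumably read along these lines (a sketch of the suggested rename, not the committed patch):
{code}
  public static final String CONF_LABEL_MAX_CHUNKS_TOLERABLE = "distcp.dynamic.max.chunks.tolerable";
  public static final String CONF_LABEL_MAX_CHUNKS_IDEAL = "distcp.dynamic.max.chunks.ideal";
  public static final String CONF_LABEL_MIN_RECORDS_PER_CHUNK = "distcp.dynamic.min.records_per_chunk";
  public static final String CONF_LABEL_SPLIT_RATIO = "distcp.dynamic.split.ratio";
{code}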

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988591#comment-13988591
 ] 

Hadoop QA commented on MAPREDUCE-5402:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12643183/MAPREDUCE-5402.4-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-distcp.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4579//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4579//console

This message is automatically generated.

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4-2.patch, MAPREDUCE-5402.4.patch



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987797#comment-13987797
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Waiting for Jenkins.

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4.patch




[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-30 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986257#comment-13986257
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Updates are as follows:
* Changed to use configuration in createSplits(..)
* Changed to use configuration in getSplitRatio(..)
* Added validation in getMaxChunksTolerable, getMaxChunksIdeal and 
getMinRecordsPerChunk
* Added tests
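A minimal sketch of what a validating, configuration-driven getter along these lines might look like (the key name and default are assumptions; the committed code in DynamicInputFormat may differ):
{code}
import org.apache.hadoop.conf.Configuration;

// Sketch only: assumed key name and default; not the committed implementation.
class MaxChunksTolerableSketch {
  static final String MAX_CHUNKS_TOLERABLE_KEY = "distcp.dynamic.max.chunks.tolerable";
  static final int MAX_CHUNKS_TOLERABLE_DEFAULT = 400;

  static int getMaxChunksTolerable(Configuration conf) {
    int maxChunks = conf.getInt(MAX_CHUNKS_TOLERABLE_KEY, MAX_CHUNKS_TOLERABLE_DEFAULT);
    if (maxChunks <= 0) {
      // Reject nonsensical overrides and fall back to the default.
      maxChunks = MAX_CHUNKS_TOLERABLE_DEFAULT;
    }
    return maxChunks;
  }
}
{code}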

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch, MAPREDUCE-5402.4.patch




[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-29 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984168#comment-13984168
 ] 

Tsz Wo Nicholas Sze commented on MAPREDUCE-5402:


- In createSplits(..), should we get min records per chunk from conf?
- Similarly, in the new getSplitRatio(..) method, should we get split ratio 
from conf?
- Let's validate the conf values in getMaxChunksTolerable, getMaxChunksIdeal 
and getMinRecordsPerChunk.
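One way those points could be addressed, sketched with assumed names (the real getSplitRatio heuristic in DynamicInputFormat is not reproduced here), is to consult the configuration first and fall back to the computed heuristic only when no override is set:
{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: conf-aware split-ratio lookup. The fallback heuristic
// below is a placeholder, not DynamicInputFormat's actual formula.
class SplitRatioSketch {
  static final String SPLIT_RATIO_KEY = "distcp.dynamic.split.ratio";

  static int getSplitRatio(int nMaps, long nRecords, Configuration conf) {
    int configured = conf.getInt(SPLIT_RATIO_KEY, -1);
    if (configured > 0) {
      return configured;          // an explicit user override wins
    }
    // Placeholder fallback: larger jobs get a higher ratio than small ones.
    return (nMaps >= 100 || nRecords >= 100000) ? 3 : 2;
  }
}
{code}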

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch




[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-29 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985103#comment-13985103
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Thanks for your review, Tsz! I'll update the patch in a few days.

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch


 In MAPREDUCE-2765, which provided the design spec for DistCpV2, the author 
 describes the implementation of DynamicInputFormat, with one of the main 
 motivations cited being to reduce the chance of long-tails where a few 
 leftover mappers run much longer than the rest.
 However, I today ran into a situation where I experienced exactly such a long 
 tail using DistCpV2 and DynamicInputFormat.  And when I tried to alleviate 
 the problem by overriding the number of mappers and the split ratio used by 
 the DynamicInputFormat, I was prevented from doing so by the hard-coded limit 
 set in the code by the MAX_CHUNKS_TOLERABLE constant.  (Currently set to 400.)
 This constant is actually set quite low for production use.  (See a 
 description of my use case below.)  And although MAPREDUCE-2765 states that 
 this is an overridable maximum, when reading through the code there does 
 not actually appear to be any mechanism available to override it.
 This should be changed.  It should be possible to expand the maximum # of 
 chunks beyond this arbitrary limit.
 For example, here is the situation I ran into today:
 I ran a distcpv2 job on a cluster with 8 machines containing 128 map slots.  
 The job consisted of copying ~2800 files from HDFS to Amazon S3.  I overrode 
 the number of mappers for the job from the default of 20 to 128, so as to 
 more properly parallelize the copy across the cluster.  The number of chunk 
 files created was calculated as 241, and mapred.num.entries.per.chunk was 
 calculated as 12.
 As the job ran on, it reached a point where there were only 4 remaining map 
 tasks, which had each been running for over 2 hours.  The reason for this was 
 that each of the 12 files that those mappers were copying were quite large 
 (several hundred megabytes in size) and took ~20 minutes each.  However, 
 during this time, all the other 124 mappers sat idle.
 In theory I should be able to alleviate this problem with DynamicInputFormat. 
  If I were able to, say, quadruple the number of chunk files created, that 
 would have made each chunk contain only 3 files, and these large files would 
 have gotten distributed better around the cluster and copied in parallel.
 However, when I tried to do that - by overriding mapred.listing.split.ratio 
 to, say, 10 - DynamicInputFormat responded with an exception (Too many 
 chunks created with splitRatio:10, numMaps:128. Reduce numMaps or decrease 
 split-ratio to proceed.) - presumably because I exceeded the 
 MAX_CHUNKS_TOLERABLE value of 400.
 Is there any particular logic behind this MAX_CHUNKS_TOLERABLE limit?  I 
 can't personally see any.
 If this limit has no particular logic behind it, then it should be 
 overridable - or even better:  removed altogether.  After all, I'm not sure I 
 see any need for it.  Even if numMaps * splitRatio resulted in an 
 extraordinarily large number, if the code were modified so that the number of 
 chunks got calculated as Math.min( numMaps * splitRatio, numFiles), then 
 there would be no need for MAX_CHUNKS_TOLERABLE.  In this worst-case scenario 
 where the product of numMaps and splitRatio is large, capping the number of 
 chunks at the number of files (numberOfChunks = numberOfFiles) would result 
 in 1 file per chunk - the maximum parallelization possible.  That may not be 
 the best-tuned solution for some users, but I would think that it should be 
 left up to the user to deal with the potential consequence of not having 
 tuned their job properly.  Certainly that would be better than having an 
 arbitrary hard-coded limit that *prevents* proper parallelization when 
 dealing with large files and/or large numbers of mappers.
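
To make the reporter's proposal concrete, here is a minimal, illustrative sketch (not 
the actual DynamicInputFormat code; the class and method names are hypothetical) of the 
chunk-count rule suggested above: cap the number of chunks at the number of files 
instead of at a fixed MAX_CHUNKS_TOLERABLE constant.
{code}
// Illustrative sketch only -- not the real DynamicInputFormat implementation.
// It expresses the rule proposed above:
//   numberOfChunks = min(numMaps * splitRatio, numFiles)
// so the worst case degenerates to one file per chunk instead of throwing an exception.
public final class ChunkCountSketch {

  static int numberOfChunks(int numMaps, int splitRatio, int numFiles) {
    long requested = (long) numMaps * splitRatio;  // long to avoid int overflow
    return (int) Math.min(requested, numFiles);
  }

  public static void main(String[] args) {
    // The scenario from this report: 128 maps, split ratio 10, ~2800 files.
    System.out.println(numberOfChunks(128, 10, 2800));  // prints 1280 -- no hard cap hit
  }
}
{code}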



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-28 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13983845#comment-13983845
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Hi [~szetszwo], I saw that you've worked on distcp on Hadoop JIRA. Can you 
review the latest patch?



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-28 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13983862#comment-13983862
 ] 

Tsz Wo Nicholas Sze commented on MAPREDUCE-5402:


Sure, I should be able to review this later this week.



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-21 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975749#comment-13975749
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Thank you for testing, David!

[~qwertymaniac], can you take a look and review the latest patch?



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-21 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975768#comment-13975768
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

No problem!  Happy to help get this fixed!

FYI, if you're interested, here are the stats for another (supposedly more 
optimized) run of job 2.  No real improvement in execution time, though.

Total number of files: 17,197
Number of files copied: 17,132
Number of files skipped: 65
Number of bytes copied: 1,202,347,772,147
Number of mappers: 512
Split ratio: 10
Max chunks tolerable: 10,000
Number of dynamic-chunk-files created: 4416
Run time: 54mins, 51sec


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-20 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975212#comment-13975212
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

Looks like the patch is working.  Tested it on a few heavy-load jobs today, and 
it definitely seemed to work around the issue I was having with the "Too many 
chunks created" error.  I probably need to tune the params a bit to optimize the 
distcp runs I'm doing, but the basic functionality does seem to work.  (Note:  
I did my testing using your patch with the backported version of distcp at 
https://github.com/QwertyManiac/hadoop-distcp-mr1, as described in 
https://issues.cloudera.org/browse/DISTRO-420, and not the current Hadoop 
trunk version of the code.)

I'm still seeing a bit of long tail behavior, but it's more like 100 mappers 
that are taking longer to complete, rather than 1 or 2, which indicates that 
the copy is being distributed more optimally.

Here's some stats recorded from a few of these jobs:

Job 1:
Total number of files: 20,027
Number of files copied: 20,017
Number of files skipped: 10
Number of bytes copied: 84,802,510,328
Number of mappers: 512
Split ratio: 10
Max chunks tolerable: 10,000
Number of dynamic-chunk-files created: 5012
Run time: 5mins, 19sec

Job 2:
Total number of files: 36,374
Number of files copied: 17,160
Number of files skipped: 19,214
Number of bytes copied: 1,196,591,437,407
Number of mappers: 512
Split ratio: 10
Max chunks tolerable: 10,000
Number of dynamic-chunk-files created: 4714
Run time: 50mins, 50sec

Job 2 can obviously be optimized a bit better.  (I.e., it'll distribute much 
better if I eliminate all those files being skipped.)  But this is still an 
improvement - it used to take over 2 hours to run that job.

Thanks again for working on the fix for this issue.  If you have any additional 
questions, please let me know.
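
For readers following along, here is a rough sketch (under stated assumptions) of how 
a run like the ones above might be tuned programmatically.  The split-ratio key is the 
one referenced elsewhere in this issue; the max-chunks key below is only a hypothetical 
placeholder, since the exact property name depends on the version of the patch being 
tested.
{code}
// Rough, illustrative sketch -- not code taken from the patch itself.
import org.apache.hadoop.conf.Configuration;

public class DistCpTuningSketch {
  public static Configuration tunedConf() {
    Configuration conf = new Configuration();
    conf.setInt("mapred.listing.split.ratio", 10);      // split ratio used in the runs above
    conf.setInt("distcp.max.chunks.tolerable", 10000);  // hypothetical key for the patched, configurable limit
    return conf;
  }
}
{code}
The resulting Configuration would then be handed to the distcp job however the 
deployment launches it (for example, as -D options on the command line).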


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-18 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973863#comment-13973863
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Thank you for taking this on, David. I hope the patch works well for your use 
case and tests.



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-18 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13974098#comment-13974098
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

Testing it out today.  Will report back my findings.



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-17 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973102#comment-13973102
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

My apologies for not following up on this.  Solving this issue is starting to 
become a bigger priority for me now, so I'd like to pick it up again.

I'd like to try to test out Tsuyoshi's patch.  Could anyone provide some 
assistance/pointers on a couple of things?

1) I've never built Hadoop from source.  Could anyone assist in pointing me to 
the proper location of the source trees I'd need to build the distcp.jar?

2) Would anyone have any tips on how I might be able to backport this new 
version of distcp to mrv1, similar to what Harsh did in building 
hadoop-distcp-mr1-2.0.0-mr1-cdh4.0.1-harsh.jar in 
https://issues.cloudera.org/browse/DISTRO-420 ?  Most of my Hadoop clusters are 
still running mrv1.

TIA!  Any pointers much appreciated!


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973138#comment-13973138
 ] 

Hadoop QA commented on MAPREDUCE-5402:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12593727/MAPREDUCE-5402.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-distcp.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4530//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4530//console

This message is automatically generated.


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-17 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973143#comment-13973143
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5402:
---

Hi David, 

To build Hadoop from trunk, protobuf 2.5.0 is required. You can get it from 
here: https://code.google.com/p/protobuf/downloads/list

Maybe the following commands will work to build Hadoop:
{code}
$ git clone git://git.apache.org/hadoop-common.git
$ cd hadoop-common
$ wget https://issues.apache.org/jira/secure/attachment/12593727/MAPREDUCE-5402.3.patch
$ patch -p1 < MAPREDUCE-5402.3.patch
$ mvn install -DskipTests
$ find . -name "*distcp*jar"
./hadoop-tools/hadoop-distcp/target/hadoop-distcp-3.0.0-SNAPSHOT-sources.jar
./hadoop-tools/hadoop-distcp/target/hadoop-distcp-3.0.0-SNAPSHOT.jar
{code}

The following links can help you understand how to build Hadoop:
http://wiki.apache.org/hadoop/HowToContribute
http://wiki.apache.org/hadoop/GitAndHadoop
https://github.com/apache/hadoop-common/blob/trunk/BUILDING.txt

Please let me know if you have any questions. Thanks.


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-17 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973443#comment-13973443
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

Thanks much for the pointers, Tsuyoshi.  Will give this a try tonight.

Also, just wondering:  any thoughts on if/how it might be possible to backport 
this to mrv1?



[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2014-04-17 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13973796#comment-13973796
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

Never mind - I think I've got it.  Your MAPREDUCE-5402.3.patch file applies 
directly against the code in Harsh's backport git repo (using patch -p3).

Built the code, and starting to test now ...

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch, MAPREDUCE-5402.2.patch, 
 MAPREDUCE-5402.3.patch


 In MAPREDUCE-2765, which provided the design spec for DistCpV2, the author 
 describes the implementation of DynamicInputFormat, with one of the main 
 motivations cited being to reduce the chance of long-tails where a few 
 leftover mappers run much longer than the rest.
 However, I today ran into a situation where I experienced exactly such a long 
 tail using DistCpV2 and DynamicInputFormat.  And when I tried to alleviate 
 the problem by overriding the number of mappers and the split ratio used by 
 the DynamicInputFormat, I was prevented from doing so by the hard-coded limit 
 set in the code by the MAX_CHUNKS_TOLERABLE constant.  (Currently set to 400.)
 This constant is actually set quite low for production use.  (See a 
 description of my use case below.)  And although MAPREDUCE-2765 states that 
 this is an overridable maximum, when reading through the code there does 
 not actually appear to be any mechanism available to override it.
 This should be changed.  It should be possible to expand the maximum # of 
 chunks beyond this arbitrary limit.
 For example, here is the situation I ran into today:
 I ran a distcpv2 job on a cluster with 8 machines containing 128 map slots.  
 The job consisted of copying ~2800 files from HDFS to Amazon S3.  I overrode 
 the number of mappers for the job from the default of 20 to 128, so as to 
 more properly parallelize the copy across the cluster.  The number of chunk 
 files created was calculated as 241, and mapred.num.entries.per.chunk was 
 calculated as 12.
 As the job ran on, it reached a point where there were only 4 remaining map 
 tasks, which had each been running for over 2 hours.  The reason for this was 
 that each of the 12 files that those mappers were copying were quite large 
 (several hundred megabytes in size) and took ~20 minutes each.  However, 
 during this time, all the other 124 mappers sat idle.
 In theory I should be able to alleviate this problem with DynamicInputFormat. 
  If I were able to, say, quadruple the number of chunk files created, that 
 would have made each chunk contain only 3 files, and these large files would 
 have gotten distributed better around the cluster and copied in parallel.
 However, when I tried to do that - by overriding mapred.listing.split.ratio 
 to, say, 10 - DynamicInputFormat responded with an exception (Too many 
 chunks created with splitRatio:10, numMaps:128. Reduce numMaps or decrease 
 split-ratio to proceed.) - presumably because I exceeded the 
 MAX_CHUNKS_TOLERABLE value of 400.
 Is there any particular logic behind this MAX_CHUNKS_TOLERABLE limit?  I 
 can't personally see any.
 If this limit has no particular logic behind it, then it should be 
 overridable - or, even better, removed altogether.  After all, I'm not sure I 
 see any need for it.  Even if numMaps * splitRatio resulted in an 
 extraordinarily large number, if the code were modified so that the number of 
 chunks got calculated as Math.min(numMaps * splitRatio, numFiles), then 
 there would be no need for MAX_CHUNKS_TOLERABLE.  In the worst-case scenario 
 where the product of numMaps and splitRatio is large, capping the number of 
 chunks at the number of files (numberOfChunks = numberOfFiles) would result 
 in 1 file per chunk - the maximum parallelization possible.  That may not be 
 the best-tuned solution for some users, but I would think that it should be 
 left up to the user to deal with the potential consequences of not having 
 tuned their job properly.  Certainly that would be better than having an 
 arbitrary hard-coded limit that *prevents* proper parallelization when 
 dealing with large files and/or large numbers of mappers.
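 To make the failure mode and the proposed alternative concrete, here is a 
 rough sketch in Java (illustrative only; the constant, method names, and 
 message text approximate the behavior described above rather than quoting 
 the real DynamicInputFormat source):

     public class ChunkingSketch {
       static final int MAX_CHUNKS_TOLERABLE = 400;  // the hard-coded ceiling at issue

       // Current behavior (roughly): refuse to run when numMaps * splitRatio is too big.
       static int currentChunkCount(int numMaps, int splitRatio) {
         int requested = numMaps * splitRatio;
         if (requested > MAX_CHUNKS_TOLERABLE) {
           throw new IllegalArgumentException("Too many chunks created with splitRatio:"
               + splitRatio + ", numMaps:" + numMaps);
         }
         return requested;
       }

       // Proposed behavior: never fail; cap the chunk count at one file per chunk.
       static int proposedChunkCount(int numMaps, int splitRatio, int numFiles) {
         return Math.min(numMaps * splitRatio, numFiles);
       }

       public static void main(String[] args) {
         // The scenario above: 128 maps, split ratio 10, ~2800 files.
         System.out.println(proposedChunkCount(128, 10, 2800)); // 1280 chunks, ~2 files each
         currentChunkCount(128, 10);                            // throws: 1280 > 400
       }
     }

 Under the proposed cap the 400 ceiling becomes unnecessary, since the chunk 
 count can never exceed the number of files being copied.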



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2013-09-25 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13778477#comment-13778477
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

Apologies - haven't been able to find time to test this.  (Among other reasons: 
 I currently don't have an environment set up for building Hadoop from source.) 
 Will try to do so when I can find a moment.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2013-08-30 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13754809#comment-13754809
 ] 

Mithun Radhakrishnan commented on MAPREDUCE-5402:
-

Gentlemen, apologies for the delay.

Your patch looks good to me, Tsuyoshi. Thanks for the fix. +1.

David, have you been able to check if this fix works well for you?


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2013-08-27 Thread David Rosenstrauch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751376#comment-13751376
 ] 

David Rosenstrauch commented on MAPREDUCE-5402:
---

Hi.  Just wondering if there's been any progress on getting this fix released.  
We're still running into issues in production with the long tail of distcp 
jobs taking hours to complete.  I'd love to deploy a fix to our system soon to 
solve that, if possible.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2013-07-26 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13720977#comment-13720977
 ] 

Mithun Radhakrishnan commented on MAPREDUCE-5402:
-

Gentlemen, I'm afraid I'll have to review this next week. (I'm swamped.)

The main reason we tried to limit the maximum number of chunks on the DFS is 
that these are extremely small files (holding only target-file 
names/locations), and they're likely to be short-lived. Increasing their 
number increases NameNode pressure (many short-lived file objects). 400 was a 
good target for us at Yahoo, per DistCp job.

I agree that keeping this configurable would be best. But then the 
responsibility for being polite to the NameNode transfers to the user.
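
For readers on a build where these knobs are exposed, a minimal sketch of 
overriding them follows. The property names (distcp.dynamic.max.chunks.tolerable, 
distcp.dynamic.split.ratio) and the values chosen are assumptions for 
illustration; check DistCpConstants in your Hadoop version for the actual keys.

    // Minimal sketch, assuming the chunking limits are exposed as Configuration
    // properties; property names and values here are illustrative only.
    import org.apache.hadoop.conf.Configuration;

    public class DistCpChunkTuningSketch {
      public static Configuration tunedConf() {
        Configuration conf = new Configuration();
        // Raise the ceiling on the number of chunk files (old hard-coded value: 400).
        conf.setInt("distcp.dynamic.max.chunks.tolerable", 2000);
        // Ask for more, smaller chunks so large files spread across more mappers.
        conf.setInt("distcp.dynamic.split.ratio", 10);
        return conf;
      }
    }

The same overrides can normally be passed as -D key=value on the job command 
line, with the trade-off noted above: more chunk files means more small, 
short-lived objects for the NameNode to track.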


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2013-07-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13716472#comment-13716472
 ] 

Hadoop QA commented on MAPREDUCE-5402:
--

-1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12593707/MAPREDUCE-5402.2.patch
against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.  Please justify why no new tests are needed for this patch.  Also 
please list what manual steps were performed to verify this patch.

-1 javac.  The patch appears to cause the build to fail.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3878//console

This message is automatically generated.


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2013-07-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13716584#comment-13716584
 ] 

Hadoop QA commented on MAPREDUCE-5402:
--

-1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12593727/MAPREDUCE-5402.3.patch
against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.  Please justify why no new tests are needed for this patch.  Also 
please list what manual steps were performed to verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in hadoop-tools/hadoop-distcp.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3879//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3879//console

This message is automatically generated.


[jira] [Commented] (MAPREDUCE-5402) DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE

2013-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13713861#comment-13713861
 ] 

Hadoop QA commented on MAPREDUCE-5402:
--

-1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12593219/MAPREDUCE-5402.1.patch
against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.  Please justify why no new tests are needed for this patch.  Also 
please list what manual steps were performed to verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 2 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in hadoop-tools/hadoop-distcp.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3872//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3872//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-distcp.html
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3872//console

This message is automatically generated.

 DynamicInputFormat should allow overriding of MAX_CHUNKS_TOLERABLE
 --

 Key: MAPREDUCE-5402
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5402
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: distcp, mrv2
Reporter: David Rosenstrauch
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5402.1.patch


 In MAPREDUCE-2765, which provided the design spec for DistCpV2, the author 
 describes the implementation of DynamicInputFormat, with one of the main 
 motivations cited being to reduce the chance of long-tails where a few 
 leftover mappers run much longer than the rest.
 However, I today ran into a situation where I experienced exactly such a long 
 tail using DistCpV2 and DynamicInputFormat.  And when I tried to alleviate 
 the problem by overriding the number of mappers and the split ratio used by 
 the DynamicInputFormat, I was prevented from doing so by the hard-coded limit 
 set in the code by the MAX_CHUNKS_TOLERABLE constant.  (Currently set to 400.)
 This constant is actually set quite low for production use.  (See a 
 description of my use case below.)  And although MAPREDUCE-2765 states that 
 this is an overridable maximum, when reading through the code there does 
 not actually appear to be any mechanism available to override it.
 This should be changed.  It should be possible to expand the maximum # of 
 chunks beyond this arbitrary limit.
 For example, here is the situation I ran into today:
 I ran a distcpv2 job on a cluster with 8 machines containing 128 map slots.  
 The job consisted of copying ~2800 files from HDFS to Amazon S3.  I overrode 
 the number of mappers for the job from the default of 20 to 128, so as to 
 more properly parallelize the copy across the cluster.  The number of chunk 
 files created was calculated as 241, and mapred.num.entries.per.chunk was 
 calculated as 12.
 As the job ran on, it reached a point where there were only 4 remaining map 
 tasks, which had each been running for over 2 hours.  The reason for this was 
 that each of the 12 files that those mappers were copying were quite large 
 (several hundred megabytes in size) and took ~20 minutes each.  However, 
 during this time, all the other 124 mappers sat idle.
 In theory I should be able to alleviate this problem with DynamicInputFormat. 
  If I were able to, say, quadruple the number of chunk files created, that 
 would have made each chunk contain only 3 files, and these large files would 
 have gotten distributed better around the cluster and copied in parallel.
 However, when I tried to do that - by overriding mapred.listing.split.ratio 
 to, say, 10 - DynamicInputFormat responded with an exception (Too many 
 chunks created with splitRatio:10, numMaps:128. Reduce numMaps or decrease 
 split-ratio to proceed.) - presumably because I exceeded the 
 MAX_CHUNKS_TOLERABLE value of 400.
 Is there any particular logic behind this