[jira] Updated: (MAPREDUCE-2081) [GridMix3] Implement functionality to get the list of job traces with different intervals.
[ https://issues.apache.org/jira/browse/MAPREDUCE-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinay Kumar Thota updated MAPREDUCE-2081:
    Attachment: 2081-ydist.patch

[GridMix3] Implement functionality to get the list of job traces with different intervals.

Key: MAPREDUCE-2081
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2081
Project: Hadoop Map/Reduce
Issue Type: Test
Components: contrib/gridmix
Reporter: Vinay Kumar Thota
Assignee: Vinay Kumar Thota
Attachments: 2081-ydist.patch, 2081-ydist.patch

Gridmix system tests require job traces with different time intervals to generate and submit Gridmix jobs. So, implement functionality for getting the job traces and arranging them in a hash table with the time interval as the key, as well as for getting the list of traces from a resource location irrespective of time interval. The following methods need to be implemented:

public static Map<String, String> getMRTraces(Configuration conf) throws IOException;
- gets the traces with time intervals from the default resource location.

public static Map<String, String> getMRTraces(Configuration conf, Path path) throws IOException;
- gets the traces with time intervals from a user-specified resource location.

public static List<String> listMRTraces(Configuration conf) throws IOException;
- lists all the traces from the default resource location irrespective of time interval.

public static List<String> listMRTraces(Configuration conf, Path tracesPath) throws IOException;
- lists all the traces from a user-specified location irrespective of time interval.

public static List<String> listMRTracesByTime(Configuration conf, String timeInterval) throws IOException;
- lists all traces of a given time interval from the default resource location.

public static List<String> listMRTracesByTime(Configuration conf, String timeInterval, Path path) throws IOException;
- lists all traces of a given time interval from a given resource location.
-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
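The trace-indexing contract proposed above can be sketched with plain Java collections. This is a hypothetical illustration only: the `<name>-<interval>.json.gz` filename convention and the `indexTraces` helper are assumptions for the sketch, not the actual Gridmix resource layout or API.

```java
import java.util.*;

// Hypothetical sketch of the proposed getMRTraces() contract: arrange trace
// file paths in a map keyed by time interval. The "<name>-<interval>.json.gz"
// naming convention is an assumption for illustration only.
public class TraceIndexDemo {
    // Extracts the interval token from each path and keys the map on it,
    // e.g. "/r/wordcount-10min.json.gz" -> entry ("10min", full path).
    public static Map<String, String> indexTraces(List<String> tracePaths) {
        Map<String, String> byInterval = new HashMap<>();
        for (String path : tracePaths) {
            String name = path.substring(path.lastIndexOf('/') + 1);
            int dash = name.lastIndexOf('-');
            int dot = name.indexOf('.', Math.max(dash, 0));
            if (dash >= 0 && dot > dash) {
                byInterval.put(name.substring(dash + 1, dot), path);
            }
        }
        return byInterval;
    }

    public static void main(String[] args) {
        Map<String, String> traces = indexTraces(Arrays.asList(
                "/resources/wordcount-10min.json.gz",
                "/resources/sort-30min.json.gz"));
        System.out.println(traces.get("10min")); // /resources/wordcount-10min.json.gz
    }
}
```

A real implementation would list the resource directory via the Hadoop FileSystem API instead of taking a prebuilt path list.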
[jira] Updated: (MAPREDUCE-2081) [GridMix3] Implement functionality to get the list of job traces with different intervals.
[ https://issues.apache.org/jira/browse/MAPREDUCE-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinay Kumar Thota updated MAPREDUCE-2081:
    Attachment: (was: 2081-ydist.patch)

[GridMix3] Implement functionality to get the list of job traces with different intervals.

Key: MAPREDUCE-2081
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2081
Project: Hadoop Map/Reduce
Issue Type: Test
Components: contrib/gridmix
Reporter: Vinay Kumar Thota
Assignee: Vinay Kumar Thota
Attachments: 2081-ydist.patch, 2081-ydist.patch

Gridmix system tests require job traces with different time intervals to generate and submit Gridmix jobs. So, implement functionality for getting the job traces and arranging them in a hash table with the time interval as the key, as well as for getting the list of traces from a resource location irrespective of time interval. The following methods need to be implemented:

public static Map<String, String> getMRTraces(Configuration conf) throws IOException;
- gets the traces with time intervals from the default resource location.

public static Map<String, String> getMRTraces(Configuration conf, Path path) throws IOException;
- gets the traces with time intervals from a user-specified resource location.

public static List<String> listMRTraces(Configuration conf) throws IOException;
- lists all the traces from the default resource location irrespective of time interval.

public static List<String> listMRTraces(Configuration conf, Path tracesPath) throws IOException;
- lists all the traces from a user-specified location irrespective of time interval.

public static List<String> listMRTracesByTime(Configuration conf, String timeInterval) throws IOException;
- lists all traces of a given time interval from the default resource location.

public static List<String> listMRTracesByTime(Configuration conf, String timeInterval, Path path) throws IOException;
- lists all traces of a given time interval from a given resource location.
-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-2095) Gridmix unable to run for compressed traces(.gz format).
[ https://issues.apache.org/jira/browse/MAPREDUCE-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12917924#action_12917924 ]

Vinay Kumar Thota commented on MAPREDUCE-2095:

Reviewed the patch and overall it looks good. However, please make sure to add javadoc information for the test method before commit. +1

Gridmix unable to run for compressed traces (.gz format).

Key: MAPREDUCE-2095
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2095
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: contrib/gridmix
Affects Versions: 0.20.1
Reporter: Vinay Kumar Thota
Assignee: Ranjit Mathew
Attachments: MAPREDUCE-2095.patch, MAPREDUCE-2095_v2.patch, wordcount.json.gz

I was trying to run Gridmix with a compressed trace file. However, it throws a JsonParseException and exits. Exception details:

{noformat}
org.codehaus.jackson.JsonParseException: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
 at [Source: org.apache.hadoop.fs.fsdatainputstr...@17ba38f; line: 1, column: 2]
	at org.codehaus.jackson.impl.JsonParserBase._constructError(JsonParserBase.java:651)
	at org.codehaus.jackson.impl.JsonParserBase._reportError(JsonParserBase.java:635)
	at org.codehaus.jackson.impl.JsonParserBase._throwInvalidSpace(JsonParserBase.java:596)
	at org.codehaus.jackson.impl.Utf8StreamParser._skipWSOrEnd(Utf8StreamParser.java:981)
	at org.codehaus.jackson.impl.Utf8StreamParser.nextToken(Utf8StreamParser.java:77)
	at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:688)
	at org.codehaus.jackson.map.ObjectMapper._readValue(ObjectMapper.java:624)
	at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:275)
	at org.apache.hadoop.tools.rumen.JsonObjectMapperParser.getNext(JsonObjectMapperParser.java:84)
	at org.apache.hadoop.tools.rumen.ZombieJobProducer.getNextJob(ZombieJobProducer.java:117)
	at org.apache.hadoop.tools.rumen.ZombieJobProducer.getNextJob(ZombieJobProducer.java:29)
	at org.apache.hadoop.mapred.gridmix.JobFactory.getNextJobFiltered(JobFactory.java:174)
	at org.apache.hadoop.mapred.gridmix.StressJobFactory$StressReaderThread.run(StressJobFactory.java:166)
10/09/23 09:43:17 ERROR gridmix.Gridmix: Error in trace
org.codehaus.jackson.JsonParseException: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
 at [Source: org.apache.hadoop.fs.fsdatainputstr...@17ba38f; line: 1, column: 2]
	at org.codehaus.jackson.impl.JsonParserBase._constructError(JsonParserBase.java:651)
	at org.codehaus.jackson.impl.JsonParserBase._reportError(JsonParserBase.java:635)
	at org.codehaus.jackson.impl.JsonParserBase._throwInvalidSpace(JsonParserBase.java:596)
	at org.codehaus.jackson.impl.Utf8StreamParser._skipWSOrEnd(Utf8StreamParser.java:981)
	at org.codehaus.jackson.impl.Utf8StreamParser.nextToken(Utf8StreamParser.java:77)
	at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:688)
	at org.codehaus.jackson.map.ObjectMapper._readValue(ObjectMapper.java:624)
	at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:275)
	at org.apache.hadoop.tools.rumen.JsonObjectMapperParser.getNext(JsonObjectMapperParser.java:84)
	at org.apache.hadoop.tools.rumen.ZombieJobProducer.getNextJob(ZombieJobProducer.java:117)
	at org.apache.hadoop.tools.rumen.ZombieJobProducer.getNextJob(ZombieJobProducer.java:29)
	at org.apache.hadoop.mapred.gridmix.JobFactory.getNextJobFiltered(JobFactory.java:174)
	at org.apache.hadoop.mapred.gridmix.StressJobFactory$StressReaderThread.run(StressJobFactory.java:166)
10/09/23 09:43:17 INFO gridmix.Gridmix: Exiting...
{noformat}

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
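As an aside on the exception above: "CTRL-CHAR, code 31" is 0x1f, the first byte of the gzip magic number (0x1f 0x8b), so the parser is being fed raw compressed bytes. The sketch below, using only java.util.zip, illustrates that diagnosis; it is not the actual Gridmix patch, which would have to wire a decompressor into the trace-reading path.

```java
import java.io.*;
import java.util.zip.*;

// Illustrative sketch, not the Gridmix fix: seeing byte 31 (0x1f) at the
// start of the stream means gzip data reached the JSON parser undecoded.
// Wrapping the stream in a decompressor recovers parseable text.
public class GzipMagicDemo {
    public static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(data);
        }
        return buf.toByteArray();
    }

    public static String gunzipToString(byte[] compressed) throws IOException {
        InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        for (int n; (n = in.read(chunk)) != -1; ) {
            out.write(chunk, 0, n);
        }
        return out.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        byte[] compressed = gzip("{\"job\":\"wordcount\"}".getBytes("UTF-8"));
        // This is exactly the byte the parser chokes on when given raw input:
        System.out.println("first raw byte = " + (compressed[0] & 0xff)); // 31
        // Decompressing first yields the original JSON text:
        System.out.println(gunzipToString(compressed));
    }
}
```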
[jira] Created: (MAPREDUCE-2112) Create a Common Data-Generator for Testing Hadoop
Create a Common Data-Generator for Testing Hadoop

Key: MAPREDUCE-2112
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2112
Project: Hadoop Map/Reduce
Issue Type: New Feature
Reporter: Ranjit Mathew
Priority: Minor

It is useful to have a common data-generator for testing Hadoop and related projects. Such a tool should be able to generate data in a specified format and should be able to use a Hadoop cluster to speed up the data-generation. This tool can then be used across Hadoop (e.g. GridMix3), Pig, Hive, etc., reducing the need for each project to invent something like this itself.

We can use the data-generator used in PigMix2 (PIG-200) as a starting point. It is described in [http://wiki.apache.org/pig/DataGeneratorHadoop]. Since it depends on the SDSU Java library ([http://www.eli.sdsu.edu/java-SDSU/]) released under the GNU GPL, it has to be modified a bit to eliminate this dependency before it can be included in Apache Hadoop.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-1125) SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh
[ https://issues.apache.org/jira/browse/MAPREDUCE-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simone Leo updated MAPREDUCE-1125:
    Affects Version/s: (was: 0.20.1)
                       0.21.0

SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh

Key: MAPREDUCE-1125
URL: https://issues.apache.org/jira/browse/MAPREDUCE-1125
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: pipes
Affects Versions: 0.21.0
Reporter: Simone Leo
Attachments: deserializeFloat.patch

{noformat}
*** SerialUtils.hh ***
float deserializeFloat(InStream& stream);

*** SerialUtils.cc ***
void deserializeFloat(float& t, InStream& stream) {
  char buf[sizeof(float)];
  stream.read(buf, sizeof(float));
  XDR xdrs;
  xdrmem_create(&xdrs, buf, sizeof(float), XDR_DECODE);
  xdr_float(&xdrs, &t);
}
{noformat}

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-1125) SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh
[ https://issues.apache.org/jira/browse/MAPREDUCE-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simone Leo updated MAPREDUCE-1125:
    Attachment: (was: deserializeFloat.patch)

SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh

Key: MAPREDUCE-1125
URL: https://issues.apache.org/jira/browse/MAPREDUCE-1125
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: pipes
Affects Versions: 0.21.0
Reporter: Simone Leo
Attachments: MAPREDUCE-1125.patch

{noformat}
*** SerialUtils.hh ***
float deserializeFloat(InStream& stream);

*** SerialUtils.cc ***
void deserializeFloat(float& t, InStream& stream) {
  char buf[sizeof(float)];
  stream.read(buf, sizeof(float));
  XDR xdrs;
  xdrmem_create(&xdrs, buf, sizeof(float), XDR_DECODE);
  xdr_float(&xdrs, &t);
}
{noformat}

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-1125) SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh
[ https://issues.apache.org/jira/browse/MAPREDUCE-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simone Leo updated MAPREDUCE-1125:
    Attachment: MAPREDUCE-1125.patch

Patch for current tr...@1004113

SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh

Key: MAPREDUCE-1125
URL: https://issues.apache.org/jira/browse/MAPREDUCE-1125
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: pipes
Affects Versions: 0.21.0
Reporter: Simone Leo
Attachments: MAPREDUCE-1125.patch

{noformat}
*** SerialUtils.hh ***
float deserializeFloat(InStream& stream);

*** SerialUtils.cc ***
void deserializeFloat(float& t, InStream& stream) {
  char buf[sizeof(float)];
  stream.read(buf, sizeof(float));
  XDR xdrs;
  xdrmem_create(&xdrs, buf, sizeof(float), XDR_DECODE);
  xdr_float(&xdrs, &t);
}
{noformat}

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-1125) SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh
[ https://issues.apache.org/jira/browse/MAPREDUCE-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simone Leo updated MAPREDUCE-1125:
    Status: Patch Available (was: Open)

Attaching patch for current trunk

SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh

Key: MAPREDUCE-1125
URL: https://issues.apache.org/jira/browse/MAPREDUCE-1125
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: pipes
Affects Versions: 0.21.0
Reporter: Simone Leo
Attachments: MAPREDUCE-1125.patch

{noformat}
*** SerialUtils.hh ***
float deserializeFloat(InStream& stream);

*** SerialUtils.cc ***
void deserializeFloat(float& t, InStream& stream) {
  char buf[sizeof(float)];
  stream.read(buf, sizeof(float));
  XDR xdrs;
  xdrmem_create(&xdrs, buf, sizeof(float), XDR_DECODE);
  xdr_float(&xdrs, &t);
}
{noformat}

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
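For context on the wire format behind this bug: XDR encodes a float as four big-endian IEEE-754 bytes, which happens to be the same layout java.io.DataOutputStream uses. The sketch below illustrates only that encoding round-trip; the actual fix lives in the C++ SerialUtils code, not in Java.

```java
import java.io.*;

// Sketch of the XDR float wire format: four big-endian IEEE-754 bytes,
// matching DataOutputStream/DataInputStream. Illustration only; the real
// fix is aligning the C++ declaration and definition in SerialUtils.
public class XdrFloatDemo {
    public static byte[] encode(float f) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeFloat(f);
        return buf.toByteArray();
    }

    public static float decode(byte[] wire) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(wire)).readFloat();
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = encode(3.5f);
        System.out.println("bytes = " + wire.length);   // bytes = 4
        System.out.println("value = " + decode(wire));  // value = 3.5
    }
}
```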
[jira] Commented: (MAPREDUCE-2112) Create a Common Data-Generator for Testing Hadoop
[ https://issues.apache.org/jira/browse/MAPREDUCE-2112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12918070#action_12918070 ]

Olga Natkovich commented on MAPREDUCE-2112:

It is important that the tool supports different column distributions so that we can simulate different kinds of data with it.

Create a Common Data-Generator for Testing Hadoop

Key: MAPREDUCE-2112
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2112
Project: Hadoop Map/Reduce
Issue Type: New Feature
Reporter: Ranjit Mathew
Priority: Minor

It is useful to have a common data-generator for testing Hadoop and related projects. Such a tool should be able to generate data in a specified format and should be able to use a Hadoop cluster to speed up the data-generation. This tool can then be used across Hadoop (e.g. GridMix3), Pig, Hive, etc., reducing the need for each project to invent something like this itself.

We can use the data-generator used in PigMix2 (PIG-200) as a starting point. It is described in [http://wiki.apache.org/pig/DataGeneratorHadoop]. Since it depends on the SDSU Java library ([http://www.eli.sdsu.edu/java-SDSU/]) released under the GNU GPL, it has to be modified a bit to eliminate this dependency before it can be included in Apache Hadoop.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (MAPREDUCE-2113) SequenceFile tests assume createWriter w/ overwrite
SequenceFile tests assume createWriter w/ overwrite

Key: MAPREDUCE-2113
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2113
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: test
Affects Versions: 0.22.0
Reporter: Chris Douglas
Priority: Trivial
Attachments: M2113-0.patch

Many of the {{\*SequenceFile}} tests create writers in a loop. This fails after HADOOP-6856, which will not overwrite the destination by default.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-2113) SequenceFile tests assume createWriter w/ overwrite
[ https://issues.apache.org/jira/browse/MAPREDUCE-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated MAPREDUCE-2113:
    Attachment: M2113-0.patch

SequenceFile tests assume createWriter w/ overwrite

Key: MAPREDUCE-2113
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2113
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: test
Affects Versions: 0.22.0
Reporter: Chris Douglas
Priority: Trivial
Attachments: M2113-0.patch

Many of the {{\*SequenceFile}} tests create writers in a loop. This fails after HADOOP-6856, which will not overwrite the destination by default.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
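The create-vs-overwrite semantics behind this bug can be illustrated with plain java.nio, independently of SequenceFile.createWriter: a create-new open refuses to clobber an existing destination, while an explicit overwrite succeeds. This is a sketch of the general behavior the tests tripped over, not the HADOOP-6856 code itself.

```java
import java.io.IOException;
import java.nio.file.*;

// Sketch of create-vs-overwrite semantics using plain java.nio:
// CREATE_NEW fails if the destination exists (the new default behavior
// after HADOOP-6856), while CREATE + TRUNCATE_EXISTING silently
// overwrites (what the looping tests implicitly assumed).
public class OverwriteDemo {
    // Returns true if the write created the file, false if it refused
    // because the destination already exists.
    public static boolean tryCreateNew(Path p, byte[] data) throws IOException {
        try {
            Files.write(p, data, StandardOpenOption.CREATE_NEW);
            return true;
        } catch (FileAlreadyExistsException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("seqfile", ".dat");
        // The temp file already exists, so create-new is refused:
        System.out.println("create-new on existing: " + tryCreateNew(p, "x".getBytes()));
        // Explicit overwrite always succeeds:
        Files.write(p, "y".getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
        System.out.println(new String(Files.readAllBytes(p)));
        Files.delete(p);
    }
}
```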
[jira] Commented: (MAPREDUCE-2110) add getArchiveIndex to HarFileSystem
[ https://issues.apache.org/jira/browse/MAPREDUCE-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12918129#action_12918129 ]

Ramkumar Vadali commented on MAPREDUCE-2110:

@Mahadev, I agree that exposing an implementation detail is not good. But there is actually more functionality that we would like to add to HarFileSystem; we could use this JIRA to discuss it.

Raid creates a parity file for each data file that is raided and has reduced replication. As such this helps save disk space but doubles the number of inodes. Hence we create HARs out of the parity files to reduce the number of new inodes. Now the HAR part files have reduced replication as well, and it is possible that a HAR part file has missing blocks, which we need to fix. To regenerate a HAR part file block, we need to identify which parity files/offsets map to that part file block. This requires new code that parses the HAR index file and maps a partfile:offset to a datafile:offset. This is the functionality that we would actually like to add. Thoughts?

add getArchiveIndex to HarFileSystem

Key: MAPREDUCE-2110
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2110
Project: Hadoop Map/Reduce
Issue Type: Improvement
Reporter: Patrick Kling
Priority: Minor
Attachments: MAPREDUCE-2110.patch

This patch adds a public getter for archiveIndex to HarFileSystem, allowing us to access the index file corresponding to a har file system (useful for raid).

{noformat}
Index: src/tools/org/apache/hadoop/fs/HarFileSystem.java
===================================================================
--- src/tools/org/apache/hadoop/fs/HarFileSystem.java	(revision 1004421)
+++ src/tools/org/apache/hadoop/fs/HarFileSystem.java	(working copy)
@@ -759,6 +759,13 @@
   }
 
   /**
+   * returns the archive index
+   */
+  public Path getArchiveIndex() {
+    return archiveIndex;
+  }
+
+  /**
    * return the top level archive path.
    */
   public Path getHomeDirectory() {
{noformat}

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
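The partfile:offset to datafile:offset lookup described in the comment above can be sketched as a range search over index entries. This is hypothetical: the `Entry` layout (data file name plus start offset and length within the part file) is an assumption for illustration, not the actual HAR index format.

```java
import java.util.*;

// Hypothetical sketch of mapping a part-file offset back to the data file
// (here, a Raid parity file) whose bytes live at that offset. The index
// entry layout is assumed for illustration, not taken from the HAR format.
public class HarOffsetDemo {
    public static final class Entry {
        public final String dataFile;
        public final long start;  // start offset within the part file
        public final long len;    // number of bytes from dataFile stored there

        public Entry(String dataFile, long start, long len) {
            this.dataFile = dataFile;
            this.start = start;
            this.len = len;
        }
    }

    // Given index entries for one part file, find the data file and the
    // relative offset that a part-file offset falls into; null if no entry
    // covers the offset.
    public static String locate(List<Entry> index, long partOffset) {
        for (Entry e : index) {
            if (partOffset >= e.start && partOffset < e.start + e.len) {
                return e.dataFile + ":" + (partOffset - e.start);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Entry> index = Arrays.asList(
                new Entry("/raid/parity1", 0, 100),
                new Entry("/raid/parity2", 100, 50));
        System.out.println(locate(index, 120)); // /raid/parity2:20
    }
}
```

With entries sorted by start offset, a binary search would replace the linear scan for large indexes.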
[jira] Resolved: (MAPREDUCE-2086) CHANGES.txt does not reflect the release of version 0.21.0.
[ https://issues.apache.org/jira/browse/MAPREDUCE-2086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White resolved MAPREDUCE-2086.
    Resolution: Fixed
    Assignee: Tom White

I've fixed this.

CHANGES.txt does not reflect the release of version 0.21.0.

Key: MAPREDUCE-2086
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2086
Project: Hadoop Map/Reduce
Issue Type: Bug
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Tom White
Fix For: 0.21.1

CHANGES.txt should show the release date for 0.21.0 and include a section for 0.21.1 - Unreleased. The latest changes, which did not make it into 0.21.0, should be moved under the 0.21.1 section.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-2020) Use new FileContext APIs for all mapreduce components
[ https://issues.apache.org/jira/browse/MAPREDUCE-2020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Krishna Ramachandran updated MAPREDUCE-2020:
    Attachment: mapred-2020-10.patch

Revised and expanded:
- Incorporated previous review comments
- Cleaned up most private APIs to use only FileContext

The following components have not been fully migrated (none or partial):
- TaskTracker
- JobHistory (partial)
- MapTask (partial - references to getRecordWriter)
- ReduceTask (partial - skipWriter)
- OutputFormat and implementations (partial)

Security (ugi) changes for JT are not in this patch - will update.

Use new FileContext APIs for all mapreduce components

Key: MAPREDUCE-2020
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2020
Project: Hadoop Map/Reduce
Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Krishna Ramachandran
Assignee: Krishna Ramachandran
Attachments: mapred-2020-1.patch, mapred-2020-10.patch, mapred-2020-4.patch, mapred-2020-5.patch, mapred-2020-6.patch, mapred-2020-7.patch, mapred-2020.patch

Migrate mapreduce components to using the improved FileContext APIs implemented in HADOOP-4952 and HADOOP-6223.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-2109) Add support for reading multiple hadoop delegation token files
[ https://issues.apache.org/jira/browse/MAPREDUCE-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers updated MAPREDUCE-2109:
    Attachment: mapreduce-2109.0.txt

Adding support for HADOOP_TOKEN_FILE_LOCATION being interpreted as a comma-separated list of paths to delegation token files.

Add support for reading multiple hadoop delegation token files

Key: MAPREDUCE-2109
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2109
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: security
Affects Versions: 0.22.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Attachments: mapreduce-2109.0.txt

This is the MR part of HADOOP-6988.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-2109) Add support for reading multiple hadoop delegation token files
[ https://issues.apache.org/jira/browse/MAPREDUCE-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers updated MAPREDUCE-2109:
    Attachment: mapreduce-2109.1.txt

Same patch, this time with -p0.

Add support for reading multiple hadoop delegation token files

Key: MAPREDUCE-2109
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2109
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: security
Affects Versions: 0.22.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Attachments: mapreduce-2109.0.txt, mapreduce-2109.1.txt

This is the MR part of HADOOP-6988.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-2109) Add support for reading multiple hadoop delegation token files
[ https://issues.apache.org/jira/browse/MAPREDUCE-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers updated MAPREDUCE-2109:
    Attachment: mapreduce-2109.2.txt

Removed some log output that shouldn't have been included. Apologies for the noise.

Add support for reading multiple hadoop delegation token files

Key: MAPREDUCE-2109
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2109
Project: Hadoop Map/Reduce
Issue Type: Bug
Components: security
Affects Versions: 0.22.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Attachments: mapreduce-2109.0.txt, mapreduce-2109.1.txt, mapreduce-2109.2.txt

This is the MR part of HADOOP-6988.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
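The parsing half of the MAPREDUCE-2109 change can be sketched without any Hadoop dependencies: split HADOOP_TOKEN_FILE_LOCATION on commas into individual token file paths. The `tokenFiles` helper is hypothetical; the real patch would presumably then load each path into the job's credentials, which is omitted here.

```java
import java.util.*;

// Sketch only: interpret a comma-separated HADOOP_TOKEN_FILE_LOCATION value
// as a list of delegation token file paths, as the attached patch proposes.
// Actually reading each file into Credentials is deliberately omitted.
public class TokenFileListDemo {
    public static List<String> tokenFiles(String location) {
        List<String> paths = new ArrayList<>();
        if (location == null) {
            return paths;
        }
        for (String p : location.split(",")) {
            String trimmed = p.trim();
            if (!trimmed.isEmpty()) {
                paths.add(trimmed);
            }
        }
        return paths;
    }

    public static void main(String[] args) {
        System.out.println(tokenFiles("/tmp/t1.token, /tmp/t2.token"));
        // [/tmp/t1.token, /tmp/t2.token]
    }
}
```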
[jira] Commented: (MAPREDUCE-2040) Forrest Documentation for Dynamic Priority Scheduler
[ https://issues.apache.org/jira/browse/MAPREDUCE-2040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12918406#action_12918406 ]

Tom White commented on MAPREDUCE-2040:

+1

Results from test-patch:
{noformat}
     [exec] -1 overall.
     [exec]
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec]
     [exec]     -1 tests included.  The patch doesn't appear to include any new or modified tests.
     [exec]                         Please justify why no new tests are needed for this patch.
     [exec]                         Also please list what manual steps were performed to verify this patch.
     [exec]
     [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
     [exec]
     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
     [exec]
     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
     [exec]
     [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
     [exec]
     [exec]     -1 system tests framework.  The patch failed system tests framework compile.
{noformat}

Since this is a documentation patch, the -1s are not a problem.

Forrest Documentation for Dynamic Priority Scheduler

Key: MAPREDUCE-2040
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2040
Project: Hadoop Map/Reduce
Issue Type: New Feature
Components: contrib/dynamic-scheduler
Affects Versions: 0.21.0
Reporter: Thomas Sandholm
Assignee: Thomas Sandholm
Priority: Minor
Fix For: 0.21.1
Attachments: MAPREDUCE-2040.patch

New Forrest documentation for the dynamic priority scheduler.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-2040) Forrest Documentation for Dynamic Priority Scheduler
[ https://issues.apache.org/jira/browse/MAPREDUCE-2040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated MAPREDUCE-2040:
    Resolution: Fixed
    Hadoop Flags: [Reviewed]
    Status: Resolved (was: Patch Available)

I've just committed this. Thanks Thomas!

Forrest Documentation for Dynamic Priority Scheduler

Key: MAPREDUCE-2040
URL: https://issues.apache.org/jira/browse/MAPREDUCE-2040
Project: Hadoop Map/Reduce
Issue Type: New Feature
Components: contrib/dynamic-scheduler
Affects Versions: 0.21.0
Reporter: Thomas Sandholm
Assignee: Thomas Sandholm
Priority: Minor
Fix For: 0.21.1
Attachments: MAPREDUCE-2040.patch

New Forrest documentation for the dynamic priority scheduler.

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.