[jira] Commented: (MAPREDUCE-764) TypedBytesInput's readRaw() does not preserve custom type codes
[ https://issues.apache.org/jira/browse/MAPREDUCE-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731835#action_12731835 ] Hadoop QA commented on MAPREDUCE-764: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12413534/MAPREDUCE-764.patch against trunk revision 794324. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 3 new or modified tests. -1 javadoc. The javadoc tool appears to have generated 1 warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed core unit tests. -1 contrib tests. The patch failed contrib unit tests. Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/397/testReport/ Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/397/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/397/artifact/trunk/build/test/checkstyle-errors.html Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/397/console This message is automatically generated. > TypedBytesInput's readRaw() does not preserve custom type codes > --- > > Key: MAPREDUCE-764 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-764 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: contrib/streaming >Affects Versions: 0.21.0 >Reporter: Klaas Bosteels >Assignee: Klaas Bosteels > Attachments: MAPREDUCE-764.patch > > > The typed bytes format supports byte sequences of the form {{<custom type code> <length> <bytes>}}. 
When reading such a sequence via > {{TypedBytesInput}}'s {{readRaw()}} method, however, the returned sequence > currently is {{0 <length> <bytes>}} (0 is the type code for a bytes array), > which leads to bugs such as the one described > [here|http://dumbo.assembla.com/spaces/dumbo/tickets/54]. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
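As a rough illustration of the intended behavior, here is a minimal, self-contained sketch (the class name and method shape are hypothetical, not the actual {{TypedBytesInput}} code): a raw read should carry the type-code byte through verbatim rather than re-emitting 0, the bytes-array code.

```java
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical, simplified sketch -- not the actual TypedBytesInput code.
class RawTypedBytes {
    // Reads one <code><length><bytes> record and returns it verbatim,
    // preserving a custom type code in the first byte instead of
    // rewriting it to 0 (the type code for a bytes array).
    static byte[] readRaw(DataInputStream in) throws IOException {
        int code = in.readUnsignedByte(); // may be a custom code
        int length = in.readInt();        // big-endian, as DataOutput wrote it
        byte[] out = new byte[1 + 4 + length];
        out[0] = (byte) code;             // keep the original code
        out[1] = (byte) (length >>> 24);
        out[2] = (byte) (length >>> 16);
        out[3] = (byte) (length >>> 8);
        out[4] = (byte) length;
        in.readFully(out, 5, length);
        return out;
    }
}
```

The only behavioral point is the first output byte: whatever code was read, including a custom one, is what the raw sequence starts with on the way back out.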
[jira] Commented: (MAPREDUCE-626) Modify TestLostTracker to improve execution time
[ https://issues.apache.org/jira/browse/MAPREDUCE-626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731832#action_12731832 ] Jothi Padmanabhan commented on MAPREDUCE-626: - Test patch results [exec] +1 overall. [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 6 new or modified tests. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. [exec] > Modify TestLostTracker to improve execution time > > > Key: MAPREDUCE-626 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-626 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: mapred-626-v1.patch, mapred-626-v2.patch, > mapred-626-v3.patch, mapred-626-v4.patch, mapred-626.patch, mapred-626.patch > > > This test can be made faster with a few modifications -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-626) Modify TestLostTracker to improve execution time
[ https://issues.apache.org/jira/browse/MAPREDUCE-626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jothi Padmanabhan updated MAPREDUCE-626: Attachment: mapred-626-v4.patch Made a small change: added nonRunningMapCache to FakeJobInProgress.initTasks and let the code flow use failMap in JIP instead of overwriting it > Modify TestLostTracker to improve execution time > > > Key: MAPREDUCE-626 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-626 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: mapred-626-v1.patch, mapred-626-v2.patch, > mapred-626-v3.patch, mapred-626-v4.patch, mapred-626.patch, mapred-626.patch > > > This test can be made faster with a few modifications -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-626) Modify TestLostTracker to improve execution time
[ https://issues.apache.org/jira/browse/MAPREDUCE-626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jothi Padmanabhan updated MAPREDUCE-626: Status: Patch Available (was: Open) > Modify TestLostTracker to improve execution time > > > Key: MAPREDUCE-626 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-626 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: mapred-626-v1.patch, mapred-626-v2.patch, > mapred-626-v3.patch, mapred-626-v4.patch, mapred-626.patch, mapred-626.patch > > > This test can be made faster with a few modifications -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-626) Modify TestLostTracker to improve execution time
[ https://issues.apache.org/jira/browse/MAPREDUCE-626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jothi Padmanabhan updated MAPREDUCE-626: Status: Open (was: Patch Available) > Modify TestLostTracker to improve execution time > > > Key: MAPREDUCE-626 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-626 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: mapred-626-v1.patch, mapred-626-v2.patch, > mapred-626-v3.patch, mapred-626.patch, mapred-626.patch > > > This test can be made faster with a few modifications -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-18) Under load the shuffle sometimes gets incorrect data
[ https://issues.apache.org/jira/browse/MAPREDUCE-18?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Gummadi updated MAPREDUCE-18: -- Fix Version/s: 0.21.0 Affects Version/s: 0.21.0 Release Note: This patch adds the map id and reduce id to the HTTP header of the map output when it is sent to the reduce node, and validates the compressed length, decompressed length, map id and reduce id from the HTTP header at the reduce node. Status: Patch Available (was: Open) > Under load the shuffle sometimes gets incorrect data > > > Key: MAPREDUCE-18 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-18 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 0.21.0 >Reporter: Owen O'Malley >Assignee: Ravi Gummadi >Priority: Blocker > Fix For: 0.21.0 > > Attachments: MR-18.patch, MR-18.v1.patch > > > While testing HADOOP-5223 under load, we found reduces receiving completely > incorrect data. It was often random, but sometimes was the output of the > wrong map for the wrong map. It appears to either be a Jetty or JVM bug, but > it is clearly happening on the server side. In the HADOOP-5223 code, I added > information about the map and reduce that were included and we should add > similar protection to 0.20 and trunk. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
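A hedged sketch of the validation the release note describes, using invented header names (for-map-id, for-reduce-id) purely for illustration: the reduce side rejects a fetched map output whose header does not match the request it actually issued.

```java
import java.util.Map;

// Hypothetical sketch of the header check; the header names and method
// shape are assumptions, not Hadoop's actual shuffle code.
class ShuffleHeaderCheck {
    // Throws if the headers say this payload belongs to a different
    // map/reduce pair, or if the advertised lengths are nonsensical.
    static void validate(Map<String, String> headers,
                         String expectedMapId, String expectedReduceId,
                         long compressedLen, long decompressedLen) {
        if (!expectedMapId.equals(headers.get("for-map-id"))
                || !expectedReduceId.equals(headers.get("for-reduce-id"))) {
            throw new IllegalStateException("received map output for the wrong map/reduce");
        }
        if (compressedLen < 0 || decompressedLen < 0) {
            throw new IllegalStateException("corrupt length in map output header");
        }
    }
}
```

The point of carrying the ids in the header is exactly this cross-check: a server-side mixup (the Jetty/JVM bug suspected in the description) is detected at the reduce node instead of silently corrupting the sort.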
[jira] Commented: (MAPREDUCE-762) Task's process trees may not be killed if a TT is restarted
[ https://issues.apache.org/jira/browse/MAPREDUCE-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731823#action_12731823 ] Hemanth Yamijala commented on MAPREDUCE-762: bq. Child jvm, before exiting, should try and cleanup/kill all its sub-processes I am not sure about this. So, this will be done like in the finally block of the Child by sending a kill -pid to itself? bq. Once a jvm is spawned, its session id should be persisted to task-tracker's private folder (TaskTracker.SUBDIR/pid with 700 permission?) What's the structure under TaskTracker.SUBDIR/pid? I suppose the right solution is to use jvm-id.pid. Another solution could be taskAttemptId.[cleanup].pid. If we use taskAttemptId, we should take care of JVM Reuse. If we use jvm-id.pid, when a task is being killed, we can look up the jvm-id for the task and then pick up the right pid file. Would this work? Permissions for each of the files should be 600 owned by tasktracker. bq. Once the jvm exits, this pid file should be deleted +1. Should be done by the TT. bq. Upon restart, the pid files in the private folder should be cleaned up (under appropriate owner permissions) This should be done after sending the kill signal to the processes recorded in the files in the folder - because they are all potentially running tasks - the reason for this bug. bq. pid files should have sufficient information to reconstruct the jvm-context object which is required by LinuxTaskController to kill the process under user permission. We should just follow the path of TaskController.killTaskJVM - that will ensure it will work for all task controllers. Setting permissions to 600 for the pid files owned by TT should be fine. 
> Task's process trees may not be killed if a TT is restarted > --- > > Key: MAPREDUCE-762 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-762 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Hemanth Yamijala > > Some work has been done to make sure the tasktrackers kill process trees of > tasks when they finish (either successfully, or with failures or when they > are killed). Related JIRAs are HADOOP-2721, HADOOP-5488 and HADOOP-5420. But > when TTs are restarted, we do not handle killing of process trees - though > tasks will themselves die on re-establishing contact with the TT. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
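The jvm-id.pid scheme discussed in the comment above could be sketched as follows; the class, method names and directory layout are assumptions for illustration, not the tasktracker's actual code. The file is created with 600 permissions, matching the comment's suggestion.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

// Hypothetical sketch of the proposed pid-file layout: one file per JVM id
// under a private directory, readable only by the tasktracker user.
class PidFiles {
    static Path writePidFile(Path pidDir, String jvmId, long pid) throws IOException {
        Files.createDirectories(pidDir);
        Path pidFile = pidDir.resolve(jvmId + ".pid");          // jvm-id.pid naming
        Files.write(pidFile, Long.toString(pid).getBytes());
        try {
            Files.setPosixFilePermissions(pidFile,
                    PosixFilePermissions.fromString("rw-------")); // 600, TT-owned
        } catch (UnsupportedOperationException e) {
            // Non-POSIX filesystem: leave default permissions.
        }
        return pidFile;
    }
}
```

Keying the files by jvm-id rather than task attempt id sidesteps the JVM-reuse problem the comment raises: on a kill, the task's current jvm-id is looked up first and the matching file selected.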
[jira] Commented: (MAPREDUCE-18) Under load the shuffle sometimes gets incorrect data
[ https://issues.apache.org/jira/browse/MAPREDUCE-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731824#action_12731824 ] Jothi Padmanabhan commented on MAPREDUCE-18: +1. Patch looks fine. > Under load the shuffle sometimes gets incorrect data > > > Key: MAPREDUCE-18 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-18 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Owen O'Malley >Assignee: Ravi Gummadi >Priority: Blocker > Attachments: MR-18.patch, MR-18.v1.patch > > > While testing HADOOP-5223 under load, we found reduces receiving completely > incorrect data. It was often random, but sometimes was the output of the > wrong map for the wrong map. It appears to either be a Jetty or JVM bug, but > it is clearly happening on the server side. In the HADOOP-5223 code, I added > information about the map and reduce that were included and we should add > similar protection to 0.20 and trunk. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-630) TestKillCompletedJob can be modified to improve execution times
[ https://issues.apache.org/jira/browse/MAPREDUCE-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj Das updated MAPREDUCE-630: -- Resolution: Fixed Fix Version/s: 0.21.0 Status: Resolved (was: Patch Available) I just committed this. Thanks, Jothi! > TestKillCompletedJob can be modified to improve execution times > --- > > Key: MAPREDUCE-630 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-630 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Fix For: 0.21.0 > > Attachments: hadoop-6068.patch, mapred-630-v1.patch, > mapred-630-v2.patch, mapred-630.patch > > > This test can be easily made into a unit test -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-627) Modify TestTrackerBlacklistAcrossJobs to improve execution time
[ https://issues.apache.org/jira/browse/MAPREDUCE-627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj Das updated MAPREDUCE-627: -- Resolution: Fixed Fix Version/s: 0.21.0 Status: Resolved (was: Patch Available) I just committed this. Thanks, Jothi! > Modify TestTrackerBlacklistAcrossJobs to improve execution time > --- > > Key: MAPREDUCE-627 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-627 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Fix For: 0.21.0 > > Attachments: mapred-627.patch > > > Some minor modifications can be made to the test case to improve test > execution time -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-408) TestKillSubProcesses fails with assertion failure sometimes
[ https://issues.apache.org/jira/browse/MAPREDUCE-408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Gummadi updated MAPREDUCE-408: --- Attachment: MR-408.v1.1.patch Attaching new patch with a minor change. > TestKillSubProcesses fails with assertion failure sometimes > --- > > Key: MAPREDUCE-408 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-408 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Amareshwari Sriramadasu >Assignee: Ravi Gummadi > Attachments: MR-408.patch, MR-408.v1.1.patch, MR-408.v1.patch > > > org.apache.hadoop.mapred.TestKillSubProcesses.testJobKillFailAndSucceed fails > sometimes with following error Message: > {noformat} > Unexpected: The subprocess at level 3 in the subtree is not alive before Job > completion > {noformat} > Stacktrace > {noformat} > junit.framework.AssertionFailedError: Unexpected: The subprocess at level 3 > in the subtree is not alive before Job completion > at > org.apache.hadoop.mapred.TestKillSubProcesses.runJobAndSetProcessHandle(TestKillSubProcesses.java:221) > at > org.apache.hadoop.mapred.TestKillSubProcesses.runFailingJobAndValidate(TestKillSubProcesses.java:112) > at > org.apache.hadoop.mapred.TestKillSubProcesses.runTests(TestKillSubProcesses.java:327) > at > org.apache.hadoop.mapred.TestKillSubProcesses.testJobKillFailAndSucceed(TestKillSubProcesses.java:310) > {noformat} > one such failure at > http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/495/testReport/org.apache.hadoop.mapred/TestKillSubProcesses/testJobKillFailAndSucceed/ -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-408) TestKillSubProcesses fails with assertion failure sometimes
[ https://issues.apache.org/jira/browse/MAPREDUCE-408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731813#action_12731813 ] Ravi Gummadi commented on MAPREDUCE-408: The issue is reproducible with trunk if we add Thread.sleep(5000) in runJobAndSetProcessHandle() before the assert statements for checking if the child processes are alive. The problem was that fs was not set in the Mappers, so signalFile creation was not checked, causing the map task to finish immediately (in the case of both the failing mapper and the succeeding mapper). > TestKillSubProcesses fails with assertion failure sometimes > --- > > Key: MAPREDUCE-408 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-408 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Amareshwari Sriramadasu >Assignee: Ravi Gummadi > Attachments: MR-408.patch, MR-408.v1.patch > > > org.apache.hadoop.mapred.TestKillSubProcesses.testJobKillFailAndSucceed fails > sometimes with following error Message: > {noformat} > Unexpected: The subprocess at level 3 in the subtree is not alive before Job > completion > {noformat} > Stacktrace > {noformat} > junit.framework.AssertionFailedError: Unexpected: The subprocess at level 3 > in the subtree is not alive before Job completion > at > org.apache.hadoop.mapred.TestKillSubProcesses.runJobAndSetProcessHandle(TestKillSubProcesses.java:221) > at > org.apache.hadoop.mapred.TestKillSubProcesses.runFailingJobAndValidate(TestKillSubProcesses.java:112) > at > org.apache.hadoop.mapred.TestKillSubProcesses.runTests(TestKillSubProcesses.java:327) > at > org.apache.hadoop.mapred.TestKillSubProcesses.testJobKillFailAndSucceed(TestKillSubProcesses.java:310) > {noformat} > one such failure at > http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/495/testReport/org.apache.hadoop.mapred/TestKillSubProcesses/testJobKillFailAndSucceed/ -- This message is automatically generated by JIRA. 
[jira] Created: (MAPREDUCE-767) to remove mapreduce dependency on commons-cli2
to remove mapreduce dependency on commons-cli2 -- Key: MAPREDUCE-767 URL: https://issues.apache.org/jira/browse/MAPREDUCE-767 Project: Hadoop Map/Reduce Issue Type: Improvement Components: contrib/streaming Reporter: Giridharan Kesavan The mapreduce, streaming and eclipse-plugin components depend on commons-cli2 -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-766) Enhance -list-blacklisted-trackers to display host name, blacklisted reason and blacklist report.
[ https://issues.apache.org/jira/browse/MAPREDUCE-766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sreekanth Ramakrishnan updated MAPREDUCE-766: - Attachment: mapreduce-766-1.patch Attaching a patch which displays blacklist information for the list of all the blacklisted trackers. > Enhance -list-blacklisted-trackers to display host name, blacklisted reason > and blacklist report. > - > > Key: MAPREDUCE-766 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-766 > Project: Hadoop Map/Reduce > Issue Type: Improvement >Reporter: Sreekanth Ramakrishnan >Assignee: Sreekanth Ramakrishnan > Attachments: mapreduce-766-1.patch > > > Currently, the -list-blacklisted-trackers in the mapred job option list only > tracker name. We should enhance it to display as hostname, reason for > blacklisting and blacklist report. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-375) Change org.apache.hadoop.mapred.lib.NLineInputFormat and org.apache.hadoop.mapred.MapFileOutputFormat to use new api.
[ https://issues.apache.org/jira/browse/MAPREDUCE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated MAPREDUCE-375: -- Status: Open (was: Patch Available) > Change org.apache.hadoop.mapred.lib.NLineInputFormat and > org.apache.hadoop.mapred.MapFileOutputFormat to use new api. > -- > > Key: MAPREDUCE-375 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-375 > Project: Hadoop Map/Reduce > Issue Type: Sub-task >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu > Attachments: patch-375.txt > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-375) Change org.apache.hadoop.mapred.lib.NLineInputFormat and org.apache.hadoop.mapred.MapFileOutputFormat to use new api.
[ https://issues.apache.org/jira/browse/MAPREDUCE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated MAPREDUCE-375: -- Status: Patch Available (was: Open) test failures are not related to the patch. Retrying hudson > Change org.apache.hadoop.mapred.lib.NLineInputFormat and > org.apache.hadoop.mapred.MapFileOutputFormat to use new api. > -- > > Key: MAPREDUCE-375 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-375 > Project: Hadoop Map/Reduce > Issue Type: Sub-task >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu > Attachments: patch-375.txt > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (MAPREDUCE-766) Enhance -list-blacklisted-trackers to display host name, blacklisted reason and blacklist report.
Enhance -list-blacklisted-trackers to display host name, blacklisted reason and blacklist report. - Key: MAPREDUCE-766 URL: https://issues.apache.org/jira/browse/MAPREDUCE-766 Project: Hadoop Map/Reduce Issue Type: Improvement Reporter: Sreekanth Ramakrishnan Currently, the -list-blacklisted-trackers option of mapred job lists only the tracker name. We should enhance it to display the hostname, the reason for blacklisting and the blacklist report. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Assigned: (MAPREDUCE-766) Enhance -list-blacklisted-trackers to display host name, blacklisted reason and blacklist report.
[ https://issues.apache.org/jira/browse/MAPREDUCE-766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sreekanth Ramakrishnan reassigned MAPREDUCE-766: Assignee: Sreekanth Ramakrishnan > Enhance -list-blacklisted-trackers to display host name, blacklisted reason > and blacklist report. > - > > Key: MAPREDUCE-766 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-766 > Project: Hadoop Map/Reduce > Issue Type: Improvement >Reporter: Sreekanth Ramakrishnan >Assignee: Sreekanth Ramakrishnan > > Currently, the -list-blacklisted-trackers in the mapred job option list only > tracker name. We should enhance it to display as hostname, reason for > blacklisting and blacklist report. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-375) Change org.apache.hadoop.mapred.lib.NLineInputFormat and org.apache.hadoop.mapred.MapFileOutputFormat to use new api.
[ https://issues.apache.org/jira/browse/MAPREDUCE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731768#action_12731768 ] Hadoop QA commented on MAPREDUCE-375: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12413533/patch-375.txt against trunk revision 794324. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 6 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs warnings. -1 release audit. The applied patch generated 316 release audit warnings (more than the trunk's current 315 warnings). -1 core tests. The patch failed core unit tests. -1 contrib tests. The patch failed contrib unit tests. Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/396/testReport/ Release audit warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/396/artifact/trunk/current/releaseAuditDiffWarnings.txt Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/396/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/396/artifact/trunk/build/test/checkstyle-errors.html Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/396/console This message is automatically generated. > Change org.apache.hadoop.mapred.lib.NLineInputFormat and > org.apache.hadoop.mapred.MapFileOutputFormat to use new api. 
> -- > > Key: MAPREDUCE-375 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-375 > Project: Hadoop Map/Reduce > Issue Type: Sub-task >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu > Attachments: patch-375.txt > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-627) Modify TestTrackerBlacklistAcrossJobs to improve execution time
[ https://issues.apache.org/jira/browse/MAPREDUCE-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731707#action_12731707 ] Hadoop QA commented on MAPREDUCE-627: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12411630/mapred-627.patch against trunk revision 794324. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 4 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed core unit tests. -1 contrib tests. The patch failed contrib unit tests. Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/395/testReport/ Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/395/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/395/artifact/trunk/build/test/checkstyle-errors.html Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/395/console This message is automatically generated. > Modify TestTrackerBlacklistAcrossJobs to improve execution time > --- > > Key: MAPREDUCE-627 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-627 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: mapred-627.patch > > > Some minor modifications can be made to the test case to improve test > execution time -- This message is automatically generated by JIRA. 
[jira] Commented: (MAPREDUCE-740) Provide summary information per job once a job is finished.
[ https://issues.apache.org/jira/browse/MAPREDUCE-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731668#action_12731668 ] Nigel Daley commented on MAPREDUCE-740: --- -1. No unit test or justification. Should logJobSummary(...) have a null check on job so we don't get NPEs? Ditto on meterTaskAttempt(..)? If you disagree on null check, can you document that input parameters must not be null OR document @throws NullPointerException if input parameter is null. > Provide summary information per job once a job is finished. > --- > > Key: MAPREDUCE-740 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-740 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: jobtracker >Reporter: Hong Tang >Assignee: Arun C Murthy > Fix For: 0.21.0 > > Attachments: MAPREDUCE-740_0_20090709.patch, > MAPREDUCE-740_0_20090713.patch, MAPREDUCE-740_0_20090713_yhadoop20.patch > > > It would be nice if JobTracker can output a one line summary information per > job once a job is finished. Otherwise, users or system administrators would > end up scraping individual job history logs. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
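The null-check option raised in the comment could look like the following sketch; the class, method shape and parameters are hypothetical simplifications of the real logJobSummary(...). Either the guard throws an explicit NPE, or the javadoc's @throws clause documents the precondition, as the comment suggests.

```java
// Hypothetical, simplified stand-in for the method discussed above.
class JobSummaryLogger {
    /**
     * Builds the one-line summary logged when a job finishes.
     *
     * @throws NullPointerException if {@code jobId} or {@code status} is null
     */
    static String logJobSummary(String jobId, String status) {
        if (jobId == null || status == null) {
            // Fail fast with a clear message instead of an NPE deep inside
            // string formatting.
            throw new NullPointerException("jobId and status must not be null");
        }
        return "jobId=" + jobId + ",status=" + status;
    }
}
```

Either way the contract is explicit, which is the substance of the review comment: the caller knows whether null is tolerated or a bug.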
[jira] Updated: (MAPREDUCE-705) User-configurable quote and delimiter characters for Sqoop records and record reparsing
[ https://issues.apache.org/jira/browse/MAPREDUCE-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Kimball updated MAPREDUCE-705: Attachment: MAPREDUCE-705.3.patch Attaching rebased patch after MAPREDUCE-710. > User-configurable quote and delimiter characters for Sqoop records and record > reparsing > --- > > Key: MAPREDUCE-705 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-705 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: contrib/sqoop >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Attachments: MAPREDUCE-705.2.patch, MAPREDUCE-705.3.patch, > MAPREDUCE-705.patch > > > Sqoop needs a mechanism for users to govern how fields are quoted and what > delimiter characters separate fields and records. With delimiters providing > an unambiguous format, a parse method can reconstitute the generated record > data object from a text-based representation of the same record. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
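A minimal sketch, with invented names, of the kind of delimiter- and quote-aware reparse the issue asks for: fields are split on the user's chosen delimiter, but a delimiter appearing inside the user's quote character stays literal, which is what makes the format unambiguous and round-trippable.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only -- not Sqoop's actual generated parse method.
class DelimiterParser {
    // Splits one record into fields, honoring a configurable delimiter
    // and quote character.
    static List<String> parse(String record, char delim, char quote) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (char c : record.toCharArray()) {
            if (c == quote) {
                inQuotes = !inQuotes;        // toggle quoted state
            } else if (c == delim && !inQuotes) {
                fields.add(cur.toString());  // unquoted delimiter = boundary
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        fields.add(cur.toString());          // last field
        return fields;
    }
}
```

A real implementation would also need escaping for a literal quote character inside a quoted field; this sketch only shows the core delimiter/quote interaction.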
[jira] Updated: (MAPREDUCE-705) User-configurable quote and delimiter characters for Sqoop records and record reparsing
[ https://issues.apache.org/jira/browse/MAPREDUCE-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Kimball updated MAPREDUCE-705: Status: Open (was: Patch Available) > User-configurable quote and delimiter characters for Sqoop records and record > reparsing > --- > > Key: MAPREDUCE-705 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-705 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: contrib/sqoop >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Attachments: MAPREDUCE-705.2.patch, MAPREDUCE-705.3.patch, > MAPREDUCE-705.patch > > > Sqoop needs a mechanism for users to govern how fields are quoted and what > delimiter characters separate fields and records. With delimiters providing > an unambiguous format, a parse method can reconstitute the generated record > data object from a text-based representation of the same record. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-705) User-configurable quote and delimiter characters for Sqoop records and record reparsing
[ https://issues.apache.org/jira/browse/MAPREDUCE-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Kimball updated MAPREDUCE-705: Status: Patch Available (was: Open) Cycling patch status.. > User-configurable quote and delimiter characters for Sqoop records and record > reparsing > --- > > Key: MAPREDUCE-705 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-705 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: contrib/sqoop >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Attachments: MAPREDUCE-705.2.patch, MAPREDUCE-705.3.patch, > MAPREDUCE-705.patch > > > Sqoop needs a mechanism for users to govern how fields are quoted and what > delimiter characters separate fields and records. With delimiters providing > an unambiguous format, a parse method can reconstitute the generated record > data object from a text-based representation of the same record. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-750) Extensible ConnManager factory API
[ https://issues.apache.org/jira/browse/MAPREDUCE-750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731630#action_12731630 ] Aaron Kimball commented on MAPREDUCE-750: - I should point out -- I'm all for not cluttering up the mapred-, hdfs- and core-* config files with extraneous parameters. Is there a better place to put this? Can/should we have a contrib-default and contrib-site? (Or, inevitably, per-contrib-module config files?) > Extensible ConnManager factory API > -- > > Key: MAPREDUCE-750 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-750 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: contrib/sqoop >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Attachments: MAPREDUCE-750.patch > > > Sqoop uses the ConnFactory class to instantiate a ConnManager implementation > based on the connect string and other arguments supplied by the user. This > allows per-database logic to be encapsulated in different ConnManager > instances, and dynamically chosen based on which database the user is > actually importing from. But adding new ConnManager implementations requires > modifying the source of a common ConnFactory class. An indirection layer > should be used to delegate instantiation to a number of factory > implementations which can be specified in the static configuration or at > runtime. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
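The indirection layer proposed in the issue might look like this sketch (names invented; the real Sqoop ConnFactory/ConnManager API differs): instantiation is delegated to an ordered list of factories, so support for a new database is added by configuring another factory rather than editing a common class.

```java
import java.util.List;

// Hypothetical sketch of the factory-delegation pattern described above.
class ConnFactory {
    interface ManagerFactory {
        // Returns a manager name for this connect string, or null to decline.
        String accept(String connectString);
    }

    private final List<ManagerFactory> factories;

    ConnFactory(List<ManagerFactory> factories) {
        this.factories = factories;
    }

    // First factory that accepts the connect string wins; the list itself
    // would come from static configuration or be registered at runtime.
    String getManager(String connectString) {
        for (ManagerFactory f : factories) {
            String m = f.accept(connectString);
            if (m != null) {
                return m;
            }
        }
        throw new IllegalArgumentException("no factory accepts " + connectString);
    }
}
```

Because the delegation order is explicit, a database-specific factory can be listed ahead of a generic JDBC fallback, mirroring how a per-database manager should shadow the default.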
[jira] Commented: (MAPREDUCE-565) Partitioner does not work with new API
[ https://issues.apache.org/jira/browse/MAPREDUCE-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731622#action_12731622 ] Hudson commented on MAPREDUCE-565: -- Integrated in Hadoop-Mapreduce-trunk #23 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-Mapreduce-trunk/23/]) . Fix partitioner to work with new API. Contributed by Owen O'Malley > Partitioner does not work with new API > -- > > Key: MAPREDUCE-565 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-565 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: task >Reporter: Jothi Padmanabhan >Assignee: Owen O'Malley >Priority: Blocker > Fix For: 0.20.1 > > Attachments: h5750.patch, h5750.patch, h5750.patch, h5750.patch, > h5750.patch, h5750.patch > > > Partitioner does not work with the new API. MapTask.java looks for > "mapred.partitioner.class" whereas the new API sets it to > mapreduce.partitioner.class -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
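The mismatch above is a configuration-key lookup problem; conceptually the fix reads the new-API key and falls back to the old one. A minimal sketch using a plain Map in place of Hadoop's Configuration (only the key names are taken from the issue; the fallback logic is illustrative, not necessarily the committed patch):

```java
import java.util.Map;

public class PartitionerLookup {
    // Look up the new-API key first, then fall back to the old-API key,
    // then to a supplied default.
    static String partitionerClass(Map<String, String> conf, String dflt) {
        String v = conf.get("mapreduce.partitioner.class");
        if (v == null) v = conf.get("mapred.partitioner.class");
        return v != null ? v : dflt;
    }
}
```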
[jira] Commented: (MAPREDUCE-710) Sqoop should read and transmit passwords in a more secure manner
[ https://issues.apache.org/jira/browse/MAPREDUCE-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731623#action_12731623 ] Hudson commented on MAPREDUCE-710: -- Integrated in Hadoop-Mapreduce-trunk #23 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-Mapreduce-trunk/23/]) . Sqoop should read and transmit passwords in a more secure manner. Contributed by Aaron Kimball. > Sqoop should read and transmit passwords in a more secure manner > > > Key: MAPREDUCE-710 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-710 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: contrib/sqoop >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Fix For: 0.21.0 > > Attachments: MAPREDUCE-710.2.patch, MAPREDUCE-710.3.patch, > MAPREDUCE-710.patch > > > Sqoop's current support for passwords involves reading passwords from the > command line "--password foo", which makes the password visible to other > users via 'ps'. An invisible-console approach should be taken. > Related, Sqoop transmits passwords to mysqldump in the same fashion, which is > also insecure. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
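The "invisible-console approach" mentioned in the issue is available in the standard library: java.io.Console.readPassword() reads a line with echo suppressed, so the password never appears on screen or in 'ps' output as a command-line argument would. A hedged sketch (the fallback path for non-interactive runs is an assumption, not necessarily what the committed patch does):

```java
import java.io.Console;
import java.util.function.Supplier;

public class PasswordPrompt {
    // Prefer the echo-free console read; fall back to a plain line supplier
    // when no tty is attached (the fallback still echoes, so it is only a
    // last resort for non-interactive use).
    static char[] readPassword(Console console, Supplier<String> fallback) {
        if (console != null) {
            // readPassword() disables echo while the user types.
            return console.readPassword("Enter password: ");
        }
        String line = fallback.get();
        return line == null ? new char[0] : line.toCharArray();
    }
}
```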
[jira] Commented: (MAPREDUCE-711) Move Distributed Cache from Common to Map/Reduce
[ https://issues.apache.org/jira/browse/MAPREDUCE-711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731557#action_12731557 ] Philip Zeyliger commented on MAPREDUCE-711: --- Have you been able to check this in? > Move Distributed Cache from Common to Map/Reduce > > > Key: MAPREDUCE-711 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-711 > Project: Hadoop Map/Reduce > Issue Type: Improvement >Reporter: Owen O'Malley >Assignee: Vinod K V > Attachments: MAPREDUCE-711-20090709-common.txt, > MAPREDUCE-711-20090709-mapreduce.1.txt, MAPREDUCE-711-20090709-mapreduce.txt, > MAPREDUCE-711-20090710.txt > > > Distributed Cache logically belongs as part of map/reduce and not Common. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-630) TestKillCompletedJob can be modified to improve execution times
[ https://issues.apache.org/jira/browse/MAPREDUCE-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731551#action_12731551 ] Jothi Padmanabhan commented on MAPREDUCE-630: - Test patch results: [exec] +1 overall. [exec] [exec] +1 @author. The patch does not contain any @author tags. [exec] [exec] +1 tests included. The patch appears to include 5 new or modified tests. [exec] [exec] +1 javadoc. The javadoc tool did not generate any warning messages. [exec] [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings. [exec] [exec] +1 findbugs. The patch does not introduce any new Findbugs warnings. [exec] [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings. > TestKillCompletedJob can be modified to improve execution times > --- > > Key: MAPREDUCE-630 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-630 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: hadoop-6068.patch, mapred-630-v1.patch, > mapred-630-v2.patch, mapred-630.patch > > > This test can be easily made into a unit test -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-680) Reuse of Writable objects is improperly handled by MRUnit
[ https://issues.apache.org/jira/browse/MAPREDUCE-680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johan Oskarsson updated MAPREDUCE-680: -- Resolution: Fixed Fix Version/s: 0.21.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Aaron! > Reuse of Writable objects is improperly handled by MRUnit > - > > Key: MAPREDUCE-680 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-680 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Fix For: 0.21.0 > > Attachments: MAPREDUCE-680.patch > > > As written, MRUnit's MockOutputCollector simply stores references to the > objects passed in to its collect() method. Thus if the same Text (or other > Writable) object is reused as an output container multiple times with > different values, these separate values will not all be collected. > MockOutputCollector needs to properly use io.serializations to deep copy the > objects sent in. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
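The deep copy the description calls for amounts to a serialize/deserialize round trip at collect() time, so later mutation of a reused object cannot alter already-collected values. This sketch substitutes java.io serialization for Hadoop's io.serializations framework to stay self-contained; it is not MRUnit's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class CollectorSketch {
    final List<Object> collected = new ArrayList<>();

    // Deep-copy each value with a serialization round trip before storing it,
    // instead of keeping a reference to the caller's (possibly reused) object.
    void collect(Serializable value) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(buf);
            oos.writeObject(value);
            oos.flush();
            ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
            collected.add(in.readObject());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

With the reference-storing version, collecting the same mutable object twice with different contents yields two identical entries; with the round trip, each collected entry keeps the value it had at collect() time.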
[jira] Updated: (MAPREDUCE-630) TestKillCompletedJob can be modified to improve execution times
[ https://issues.apache.org/jira/browse/MAPREDUCE-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jothi Padmanabhan updated MAPREDUCE-630: Status: Patch Available (was: Open) Made a minor change to the previous test to improve coverage > TestKillCompletedJob can be modified to improve execution times > --- > > Key: MAPREDUCE-630 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-630 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: hadoop-6068.patch, mapred-630-v1.patch, > mapred-630-v2.patch, mapred-630.patch > > > This test can be easily made into a unit test -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-630) TestKillCompletedJob can be modified to improve execution times
[ https://issues.apache.org/jira/browse/MAPREDUCE-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jothi Padmanabhan updated MAPREDUCE-630: Attachment: mapred-630-v2.patch > TestKillCompletedJob can be modified to improve execution times > --- > > Key: MAPREDUCE-630 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-630 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: hadoop-6068.patch, mapred-630-v1.patch, > mapred-630-v2.patch, mapred-630.patch > > > This test can be easily made into a unit test -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-630) TestKillCompletedJob can be modified to improve execution times
[ https://issues.apache.org/jira/browse/MAPREDUCE-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jothi Padmanabhan updated MAPREDUCE-630: Status: Open (was: Patch Available) > TestKillCompletedJob can be modified to improve execution times > --- > > Key: MAPREDUCE-630 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-630 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: hadoop-6068.patch, mapred-630-v1.patch, mapred-630.patch > > > This test can be easily made into a unit test -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-421) mapred pipes might return exit code 0 even when failing
[ https://issues.apache.org/jira/browse/MAPREDUCE-421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated MAPREDUCE-421: Status: Open (was: Patch Available) There should be a unit test for this so that it doesn't regress in the future. > mapred pipes might return exit code 0 even when failing > --- > > Key: MAPREDUCE-421 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-421 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: pipes >Reporter: Christian Kunz >Assignee: Christian Kunz > Fix For: 0.20.1 > > Attachments: MAPREDUCE-421.patch > > > up to hadoop 0.18.3 org.apache.hadoop.mapred.JobShell ensured that 'hadoop > jar' returns a non-zero exit code when the job fails. > This is no longer true after moving this to org.apache.hadoop.util.RunJar. > Pipes jobs submitted through the CLI never returned a proper exit code. > The main methods in org.apache.hadoop.util.RunJar and > org.apache.hadoop.mapred.pipes.Submitter should be modified to return an exit > code similar to how org.apache.hadoop.mapred.JobShell did it. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
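The requested fix reduces to propagating job success into the process exit status: the submitter's main() would end with System.exit(exitCode(ok)) instead of falling off the end with status 0. A trivial sketch of the convention (not the actual Submitter code):

```java
public class ExitCodeSketch {
    // Map a job's success flag to a conventional process exit status;
    // a shell script can then rely on 'hadoop jar' failing loudly.
    static int exitCode(boolean jobSucceeded) {
        return jobSucceeded ? 0 : 1;
    }
}
```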
[jira] Resolved: (MAPREDUCE-70) Unify the way job history filename is parsed
[ https://issues.apache.org/jira/browse/MAPREDUCE-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amar Kamat resolved MAPREDUCE-70. - Resolution: Fixed MAPREDUCE-11 should fix this. > Unify the way job history filename is parsed > - > > Key: MAPREDUCE-70 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-70 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Amar Kamat >Assignee: Amar Kamat > Attachments: HADOOP-4017-v1.1.patch, HADOOP-4017-v1.patch > > > Job history filename has the following meta-info : > - jobtracker's hostname > - job id > - username > - jobname > {{HistoryViewer.java}} and {{jobhistory.jsp}} are required to parse the > history filename to extract the meta-info. It makes more sense to provide a > common utility in {{JobHistory}} to do it. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-762) Task's process trees may not be killed if a TT is restarted
[ https://issues.apache.org/jira/browse/MAPREDUCE-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731462#action_12731462 ] Amar Kamat commented on MAPREDUCE-762: -- Here is a proposal:
1. Child jvm, before exiting, should try and cleanup/kill all its sub-processes
2. Once a jvm is spawned, its session id should be persisted to task-tracker's private folder (TaskTracker.SUBDIR/pid with 700 permission?)
3. Once the jvm exits, this pid file should be deleted
4. Upon restart, the pid files in the private folder should be cleaned up (under appropriate owner permissions)
5. pid files should have sufficient information to reconstruct the jvm-context object which is required by LinuxTaskController to kill the process under user permission.
@Hemanth, Ravi, Vinod, Sreekanth, Devaraj : Am I missing something here? > Task's process trees may not be killed if a TT is restarted > --- > > Key: MAPREDUCE-762 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-762 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Hemanth Yamijala > > Some work has been done to make sure the tasktrackers kill process trees of > tasks when they finish (either successfully, or with failures or when they > are killed). Related JIRAs are HADOOP-2721, HADOOP-5488 and HADOOP-5420. But > when TTs are restarted, we do not handle killing of process trees - though > tasks will themselves die on re-establishing contact with the TT. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
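The middle steps of the proposal describe a small pid-file lifecycle: record on spawn, delete on clean exit, sweep leftovers on restart. A self-contained sketch (directory layout and file naming here are illustrative, not the actual TaskTracker.SUBDIR scheme):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class PidFiles {
    final Path dir;  // stands in for the task-tracker's private folder

    PidFiles(Path dir) { this.dir = dir; }

    // Persist the spawned jvm's pid/session id when the jvm starts.
    void record(String jvmId, long pid) throws IOException {
        Files.createDirectories(dir);
        Files.write(dir.resolve(jvmId + ".pid"), Long.toString(pid).getBytes());
    }

    // Remove the pid file when the jvm exits normally.
    void remove(String jvmId) throws IOException {
        Files.deleteIfExists(dir.resolve(jvmId + ".pid"));
    }

    // On restart, any leftover pid files identify process trees to clean up.
    List<Long> orphans() throws IOException {
        List<Long> pids = new ArrayList<>();
        if (!Files.isDirectory(dir)) return pids;
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, "*.pid")) {
            for (Path p : ds) {
                pids.add(Long.parseLong(new String(Files.readAllBytes(p)).trim()));
            }
        }
        return pids;
    }
}
```

In the real proposal the files would also carry enough context (user, jvm-context) for LinuxTaskController to kill under the right permissions; here only the pid is stored.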
[jira] Commented: (MAPREDUCE-626) Modify TestLostTracker to improve execution time
[ https://issues.apache.org/jira/browse/MAPREDUCE-626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731467#action_12731467 ] Hadoop QA commented on MAPREDUCE-626: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12413522/mapred-626-v3.patch against trunk revision 794223. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 6 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed core unit tests. -1 contrib tests. The patch failed contrib unit tests. Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/394/testReport/ Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/394/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/394/artifact/trunk/build/test/checkstyle-errors.html Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/394/console This message is automatically generated. > Modify TestLostTracker to improve execution time > > > Key: MAPREDUCE-626 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-626 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Jothi Padmanabhan >Assignee: Jothi Padmanabhan >Priority: Minor > Attachments: mapred-626-v1.patch, mapred-626-v2.patch, > mapred-626-v3.patch, mapred-626.patch, mapred-626.patch > > > This test can be made faster with a few modifications -- This message is automatically generated by JIRA. 
- You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-765) eliminate the usage of FileSystem.create() deprecated by Hadoop-5438
[ https://issues.apache.org/jira/browse/MAPREDUCE-765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Yongqiang updated MAPREDUCE-765: --- Attachment: mapreduce-765-2009-07-15.patch > eliminate the usage of FileSystem.create() deprecated by Hadoop-5438 > -- > > Key: MAPREDUCE-765 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-765 > Project: Hadoop Map/Reduce > Issue Type: Improvement >Reporter: He Yongqiang >Priority: Minor > Attachments: mapreduce-765-2009-07-15.patch > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (MAPREDUCE-765) eliminate the usage of FileSystem.create() deprecated by Hadoop-5438
eliminate the usage of FileSystem.create() deprecated by Hadoop-5438 -- Key: MAPREDUCE-765 URL: https://issues.apache.org/jira/browse/MAPREDUCE-765 Project: Hadoop Map/Reduce Issue Type: Improvement Reporter: He Yongqiang Priority: Minor -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-11) Cleanup JobHistory file naming to do with job recovery
[ https://issues.apache.org/jira/browse/MAPREDUCE-11?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731432#action_12731432 ] Iyappan Srinivasan commented on MAPREDUCE-11: - These are the test scenarios that were tried:
1) Some normal jobs (sleep and random writer) get submitted, complete successfully, and go into the done folder. All web links related to them work smoothly, including "filter". The contents of the job file show success.
2) Some jobs are submitted. JT is restarted to see if there is a recovery file as well as the original job file. The jobs complete successfully and go into the done folder. The contents of the job file show success.
3) Some jobs are submitted. JT goes for a restart. JT comes back up and the jobs get resubmitted. Jobs are 20% completed. JT restarts again. Jobs get resubmitted from the beginning instead of from the 20% mark. JT restarts again. Jobs are again resubmitted. The jobs complete successfully and go into the done folder. The contents of the job file show success.
4) Some jobs are submitted. JT is stopped. After 5 minutes, it is restarted. One of the TTs gets job-level blacklisted. Still, the jobs are resubmitted and pass into the "done" folder.
5) Kill the job tracker as soon as some jobs start, while they have a "0" file size. Restart JT. These jobs should get resubmitted and pass successfully.
6) Submit jobs, do a JT restart, and when the jobs get resubmitted, kill them using "job -kill". They should be killed properly and move to the done folder with correct file contents.
Two issues found are:
1) org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /user/hadoopqa/input3/_logs/history/servername.inktomisearch.com_job_200907151220_0003_conf.xml for DFSClient_-123 on client IPAddress because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:888)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:821)
at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:565)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:960)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:958)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy0.create(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.create(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2932)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:436)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:206)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:529)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:503)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:391)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:383)
at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1082)
at org.apache.hadoop.mapred.JobInProgress.logToJobHistory(JobInProgress.java:589)
2) 2009-07-15 11:36:31,576 ERROR org.mortbay.log: /jobdetailshistory.jsp java.lang.IllegalArgumentException: No enum const class org.apache.hadoop.mapred.JobHistory$RecordTypes.b0VIM
at java.lang.Enum.valueOf(Enum.java:192)
at org.apache.hadoop.mapred.JobHistory$RecordTypes.valueOf(JobHistory.java:233)
at org.apache.hadoop.mapred.JobHistory.parseLine(JobHistory.java:465)
at org.apache.hadoop.mapred.JobHistory.access$200(JobHistory.java:75)
at org.apache.hadoop.mapred.JobH
[jira] Commented: (MAPREDUCE-739) Allow relative paths to be created inside archives.
[ https://issues.apache.org/jira/browse/MAPREDUCE-739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731394#action_12731394 ] Hadoop QA commented on MAPREDUCE-739: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12413485/MAPREDUCE-739.patch against trunk revision 794101. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 34 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs warnings. -1 release audit. The applied patch generated 317 release audit warnings (more than the trunk's current 315 warnings). +1 core tests. The patch passed core unit tests. -1 contrib tests. The patch failed contrib unit tests. Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/393/testReport/ Release audit warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/393/artifact/trunk/current/releaseAuditDiffWarnings.txt Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/393/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/393/artifact/trunk/build/test/checkstyle-errors.html Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/393/console This message is automatically generated. > Allow relative paths to be created inside archives. 
> --- > > Key: MAPREDUCE-739 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-739 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: harchive >Reporter: Mahadev konar >Assignee: Mahadev konar > Fix For: 0.21.0 > > Attachments: HADOOP-3663.patch, HADOOP-3663.patch, HADOOP-3663.patch, > MAPREDUCE-739.patch, MAPREDUCE-739.patch > > > Archives currently store the full path from the input sources -- since it > allows multiple sources and regular expressions as inputs. So the created > archives have the full path of the input sources. This is unintuitive and a > user hassle. We should get rid of it and allow users to say that the created > archive should be relative to some absolute path and throw an exception if > the input does not conform to the relative absolute path. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-659) gridmix2 not compiling under mapred module trunk/src/benchmarks/gridmix2
[ https://issues.apache.org/jira/browse/MAPREDUCE-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731390#action_12731390 ] Amareshwari Sriramadasu commented on MAPREDUCE-659: --- To catch the compilation failures early, gridmix should be added to the binary/package target itself. The user should be able to build the jar just by running ant in the src/benchmarks/gridmix2 directory. If there are any dependencies on streaming and test, they should be added to build.xml > gridmix2 not compiling under mapred module trunk/src/benchmarks/gridmix2 > - > > Key: MAPREDUCE-659 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-659 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: build > Environment: latest trunk >Reporter: Iyappan Srinivasan >Assignee: Giridharan Kesavan >Priority: Critical > Attachments: 659-1.patch, MAPREDUCE-659.PATCH > > > When build is tried in gridmix2, it fails > trunk/src/benchmarks/gridmix2 $ ant > Buildfile: build.xml > init: > compile: > [javac] Compiling 3 source files to > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/build > [javac] > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/src/java/org/apache/hadoop/mapred/GridMixRunner.java:40: > package org.apache.hadoop.streaming does not exist > [javac] import org.apache.hadoop.streaming.StreamJob; > [javac] ^ > [javac] > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/src/java/org/apache/hadoop/mapred/GridMixRunner.java:123: > cannot find symbol > [javac] symbol: variable StreamJob > [javac] JobConf jobconf = StreamJob.createJob(args); > [javac] ^ > [javac] Note: Some input files use or override a deprecated API. > [javac] Note: Recompile with -Xlint:deprecation for details. > [javac] 2 errors > BUILD FAILED > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/build.xml:27: > Compile failed; see the compiler error output for details. 
> Total time: 1 second -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-659) gridmix2 not compiling under mapred module trunk/src/benchmarks/gridmix2
[ https://issues.apache.org/jira/browse/MAPREDUCE-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated MAPREDUCE-659: -- Priority: Critical (was: Major) Increasing the priority as gridmix compilation is broken > gridmix2 not compiling under mapred module trunk/src/benchmarks/gridmix2 > - > > Key: MAPREDUCE-659 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-659 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: build > Environment: latest trunk >Reporter: Iyappan Srinivasan >Assignee: Giridharan Kesavan >Priority: Critical > Attachments: 659-1.patch, MAPREDUCE-659.PATCH > > > When build is tried in gridmix2, it fails > trunk/src/benchmarks/gridmix2 $ ant > Buildfile: build.xml > init: > compile: > [javac] Compiling 3 source files to > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/build > [javac] > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/src/java/org/apache/hadoop/mapred/GridMixRunner.java:40: > package org.apache.hadoop.streaming does not exist > [javac] import org.apache.hadoop.streaming.StreamJob; > [javac] ^ > [javac] > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/src/java/org/apache/hadoop/mapred/GridMixRunner.java:123: > cannot find symbol > [javac] symbol: variable StreamJob > [javac] JobConf jobconf = StreamJob.createJob(args); > [javac] ^ > [javac] Note: Some input files use or override a deprecated API. > [javac] Note: Recompile with -Xlint:deprecation for details. > [javac] 2 errors > BUILD FAILED > /home/iyappans/new_trunk1/mapreduce/trunk/src/benchmarks/gridmix2/build.xml:27: > Compile failed; see the compiler error output for details. > Total time: 1 second -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-710) Sqoop should read and transmit passwords in a more secure manner
[ https://issues.apache.org/jira/browse/MAPREDUCE-710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated MAPREDUCE-710: Resolution: Fixed Fix Version/s: 0.21.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Aaron! > Sqoop should read and transmit passwords in a more secure manner > > > Key: MAPREDUCE-710 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-710 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: contrib/sqoop >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Fix For: 0.21.0 > > Attachments: MAPREDUCE-710.2.patch, MAPREDUCE-710.3.patch, > MAPREDUCE-710.patch > > > Sqoop's current support for passwords involves reading passwords from the > command line "--password foo", which makes the password visible to other > users via 'ps'. An invisible-console approach should be taken. > Related, Sqoop transmits passwords to mysqldump in the same fashion, which is > also insecure. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-465) Deprecate org.apache.hadoop.mapred.lib.MultithreadedMapRunner
[ https://issues.apache.org/jira/browse/MAPREDUCE-465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sharad Agarwal updated MAPREDUCE-465: - Fix Version/s: (was: 0.21.0) 0.20.1 Committed the bug fix to branch 0.20 as well. > Deprecate org.apache.hadoop.mapred.lib.MultithreadedMapRunner > - > > Key: MAPREDUCE-465 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-465 > Project: Hadoop Map/Reduce > Issue Type: Improvement >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu >Priority: Minor > Fix For: 0.20.1 > > Attachments: patch-465-0.20.txt, patch-465.txt, patch-6023.txt > > > Deprecate org.apache.hadoop.mapred.lib.MultithreadedMapRunner to use > org.apache.hadoop.mapreduce.lib.MultithreadedMapRunner -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-740) Provide summary information per job once a job is finished.
[ https://issues.apache.org/jira/browse/MAPREDUCE-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731362#action_12731362 ] Hemanth Yamijala commented on MAPREDUCE-740: Arun chatted offline with me. We decided it's ok to keep JobSummary as it is now. Also, the fix with respect to start time seems fine. I think my points have been addressed. > Provide summary information per job once a job is finished. > --- > > Key: MAPREDUCE-740 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-740 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: jobtracker >Reporter: Hong Tang >Assignee: Arun C Murthy > Fix For: 0.21.0 > > Attachments: MAPREDUCE-740_0_20090709.patch, > MAPREDUCE-740_0_20090713.patch, MAPREDUCE-740_0_20090713_yhadoop20.patch > > > It would be nice if JobTracker can output a one line summary information per > job once a job is finished. Otherwise, users or system administrators would > end up scraping individual job history logs. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-735) ArrayIndexOutOfBoundsException is thrown by KeyFieldBasedPartitioner
[ https://issues.apache.org/jira/browse/MAPREDUCE-735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731359#action_12731359 ] Iyappan Srinivasan commented on MAPREDUCE-735: -- Did some more testing on these scenarios and found them to pass:
1) No exception should occur and the job should succeed when -Dmapred.text.key.comparator.options is set to any negative value or any positive value which is out of bounds of the data.
2) No exception should occur and the job should succeed when the separator has strange values which are not present in the data.
3) Taking out some parameters and still seeing that no exception occurs.
> ArrayIndexOutOfBoundsException is thrown by KeyFieldBasedPartitioner > > > Key: MAPREDUCE-735 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-735 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 0.20.1 >Reporter: Suman Sehgal >Assignee: Amar Kamat > Attachments: HADOOP-6130-v1.0.patch, MAPREDUCE-735-v1.2.patch, > MAPREDUCE-735-v1.4-branch-0.20.patch, MAPREDUCE-735-v1.4.patch, > MAPREDUCE-735-v1.5.patch > > > KeyFieldBasedPartitioner throws "ArrayIndexOutOfBoundsException" when some part of > the specified key is missing. > Scenario : > === > when the value of num.key.fields.for.partition is greater than the separators > provided in the input. > Command: > > hadoop jar streaming.jar -Dmapred.reduce.tasks=3 > -Dnum.key.fields.for.partition=5 -input -output > -mapper org.apache.hadoop.mapred.lib.IdentityMapper -reducer > org.apache.hadoop.mapred.lib.IdentityReducer -inputformat > org.apache.hadoop.mapred.KeyValueTextInputFormat -partitioner > org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
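The underlying failure mode — indexing key fields the record does not contain — is avoided by clamping to the fields actually present. A simplified, hypothetical sketch of partitioning on the first N tab-separated fields (not the actual KeyFieldBasedPartitioner code):

```java
public class KeyFieldSketch {
    // Hash the first numFields fields, tolerating records with fewer separators
    // than num.key.fields.for.partition implies.
    static int partition(String key, int numFields, int numReduces) {
        String[] parts = key.split("\t", -1);
        int limit = Math.min(numFields, parts.length);  // clamp instead of indexing blindly
        int hash = 0;
        for (int i = 0; i < limit; i++) {
            hash = hash * 31 + parts[i].hashCode();
        }
        return (hash & Integer.MAX_VALUE) % numReduces;
    }
}
```

Without the clamp, a record with fewer separators than requested fields would trigger exactly the out-of-bounds access the issue reports.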
[jira] Commented: (MAPREDUCE-40) Memory management variables need a backwards compatibility option after HADOOP-5881
[ https://issues.apache.org/jira/browse/MAPREDUCE-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731346#action_12731346 ] Hadoop QA commented on MAPREDUCE-40: -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12413299/hadoop-5919-13-20.patch against trunk revision 794101. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 6 new or modified tests. -1 patch. The patch command could not apply the patch. Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/392/console This message is automatically generated. > Memory management variables need a backwards compatibility option after > HADOOP-5881 > --- > > Key: MAPREDUCE-40 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-40 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Hemanth Yamijala >Assignee: rahul k singh >Priority: Blocker > Attachments: hadoop-5919-1.patch, hadoop-5919-10.patch, > hadoop-5919-11.patch, hadoop-5919-12-20.patch, hadoop-5919-12.patch, > hadoop-5919-13-20.patch, hadoop-5919-13.patch, hadoop-5919-2.patch, > hadoop-5919-3.patch, hadoop-5919-4.patch, hadoop-5919-5.patch, > hadoop-5919-6.patch, hadoop-5919-7.patch, hadoop-5919-8.patch, > hadoop-5919-9.patch > > > HADOOP-5881 modified variables related to memory management without looking > at the backwards compatibility angle. This JIRA is to address the gap. Marking > it a blocker for 0.20.1 -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
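The backwards-compatibility gap in MAPREDUCE-40 is the classic renamed-config-key problem: existing jobs still set the old memory-management keys while the framework reads the new ones. A common remedy is to fall back to the deprecated key when the new one is unset; sketched below with placeholder key names, not the actual Hadoop configuration keys:

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeySketch {
    // Prefer the new key, fall back to the deprecated one, then the default.
    static String getWithFallback(Map<String, String> conf,
                                  String newKey, String oldKey, String dflt) {
        if (conf.containsKey(newKey)) return conf.get(newKey);
        if (conf.containsKey(oldKey)) return conf.get(oldKey);  // honour legacy jobs
        return dflt;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("old.task.memory.limit", "2048");  // hypothetical legacy key
        // New key missing, so the deprecated value wins over the default.
        System.out.println(getWithFallback(conf,
                "new.task.memory.mb", "old.task.memory.limit", "-1"));
    }
}
```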
[jira] Updated: (MAPREDUCE-764) TypedBytesInput's readRaw() does not preserve custom type codes
[ https://issues.apache.org/jira/browse/MAPREDUCE-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Klaas Bosteels updated MAPREDUCE-764: - Attachment: MAPREDUCE-764.patch > TypedBytesInput's readRaw() does not preserve custom type codes > --- > > Key: MAPREDUCE-764 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-764 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: contrib/streaming >Affects Versions: 0.21.0 >Reporter: Klaas Bosteels >Assignee: Klaas Bosteels > Attachments: MAPREDUCE-764.patch > > > The typed bytes format supports byte sequences of the form {{<type code> <length> <bytes>}}. When reading such a sequence via > {{TypedBytesInput}}'s {{readRaw()}} method, however, the returned sequence > currently is {{0 <length> <bytes>}} (0 is the type code for a bytes array), > which leads to bugs such as the one described > [here|http://dumbo.assembla.com/spaces/dumbo/tickets/54]. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-764) TypedBytesInput's readRaw() does not preserve custom type codes
[ https://issues.apache.org/jira/browse/MAPREDUCE-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Klaas Bosteels updated MAPREDUCE-764: - Status: Patch Available (was: Open) > TypedBytesInput's readRaw() does not preserve custom type codes > --- > > Key: MAPREDUCE-764 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-764 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: contrib/streaming >Affects Versions: 0.21.0 >Reporter: Klaas Bosteels >Assignee: Klaas Bosteels > Attachments: MAPREDUCE-764.patch > > > The typed bytes format supports byte sequences of the form {{<type code> <length> <bytes>}}. When reading such a sequence via > {{TypedBytesInput}}'s {{readRaw()}} method, however, the returned sequence > currently is {{0 <length> <bytes>}} (0 is the type code for a bytes array), > which leads to bugs such as the one described > [here|http://dumbo.assembla.com/spaces/dumbo/tickets/54]. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-375) Change org.apache.hadoop.mapred.lib.NLineInputFormat and org.apache.hadoop.mapred.MapFileOutputFormat to use new api.
[ https://issues.apache.org/jira/browse/MAPREDUCE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated MAPREDUCE-375: -- Status: Patch Available (was: Open) > Change org.apache.hadoop.mapred.lib.NLineInputFormat and > org.apache.hadoop.mapred.MapFileOutputFormat to use new api. > -- > > Key: MAPREDUCE-375 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-375 > Project: Hadoop Map/Reduce > Issue Type: Sub-task >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu > Attachments: patch-375.txt > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-740) Provide summary information per job once a job is finished.
[ https://issues.apache.org/jira/browse/MAPREDUCE-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731345#action_12731345 ] Hadoop QA commented on MAPREDUCE-740: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12413395/MAPREDUCE-740_0_20090713_yhadoop20.patch against trunk revision 794101. +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. -1 patch. The patch command could not apply the patch. Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/391/console This message is automatically generated. > Provide summary information per job once a job is finished. > --- > > Key: MAPREDUCE-740 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-740 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: jobtracker >Reporter: Hong Tang >Assignee: Arun C Murthy > Fix For: 0.21.0 > > Attachments: MAPREDUCE-740_0_20090709.patch, > MAPREDUCE-740_0_20090713.patch, MAPREDUCE-740_0_20090713_yhadoop20.patch > > > It would be nice if JobTracker can output a one line summary information per > job once a job is finished. Otherwise, users or system administrators would > end up scraping individual job history logs. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (MAPREDUCE-375) Change org.apache.hadoop.mapred.lib.NLineInputFormat and org.apache.hadoop.mapred.MapFileOutputFormat to use new api.
[ https://issues.apache.org/jira/browse/MAPREDUCE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated MAPREDUCE-375: -- Attachment: patch-375.txt Patch modifying org.apache.hadoop.mapred.lib.NLineInputFormat and org.apache.hadoop.mapred.MapFileOutputFormat to use new api. > Change org.apache.hadoop.mapred.lib.NLineInputFormat and > org.apache.hadoop.mapred.MapFileOutputFormat to use new api. > -- > > Key: MAPREDUCE-375 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-375 > Project: Hadoop Map/Reduce > Issue Type: Sub-task >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu > Attachments: patch-375.txt > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
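NLineInputFormat, being ported to the new API in the patch above, hands each map task a fixed number of input lines rather than a fixed number of bytes. The core split logic reduces to grouping lines N at a time, sketched here in plain Java (this mimics the semantics only, not the actual Hadoop split code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NLineSplitSketch {
    // Group lines into consecutive chunks of at most n lines each,
    // the way NLineInputFormat assigns lines to splits.
    static List<List<String>> splitEveryN(List<String> lines, int n) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += n) {
            splits.add(lines.subList(i, Math.min(i + n, lines.size())));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("a", "b", "c", "d", "e");
        // With N = 2, five lines yield three splits; the last holds the remainder.
        System.out.println(splitEveryN(lines, 2));
    }
}
```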
[jira] Commented: (MAPREDUCE-656) Change org.apache.hadoop.mapred.SequenceFile* classes to use new api
[ https://issues.apache.org/jira/browse/MAPREDUCE-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731341#action_12731341 ] Amareshwari Sriramadasu commented on MAPREDUCE-656: --- -1 release audit is spurious; the diff file shows jdiff files. -1 contrib tests is a known issue. > Change org.apache.hadoop.mapred.SequenceFile* classes to use new api > > > Key: MAPREDUCE-656 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-656 > Project: Hadoop Map/Reduce > Issue Type: Sub-task >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu > Attachments: patch-656-1.txt, patch-656.txt > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (MAPREDUCE-656) Change org.apache.hadoop.mapred.SequenceFile* classes to use new api
[ https://issues.apache.org/jira/browse/MAPREDUCE-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731338#action_12731338 ] Hadoop QA commented on MAPREDUCE-656: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12413379/patch-656-1.txt against trunk revision 794101. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 15 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs warnings. -1 release audit. The applied patch generated 323 release audit warnings (more than the trunk's current 315 warnings). +1 core tests. The patch passed core unit tests. -1 contrib tests. The patch failed contrib unit tests. Test results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/390/testReport/ Release audit warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/390/artifact/trunk/current/releaseAuditDiffWarnings.txt Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/390/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Checkstyle results: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/390/artifact/trunk/build/test/checkstyle-errors.html Console output: http://hudson.zones.apache.org/hudson/job/Mapreduce-Patch-vesta.apache.org/390/console This message is automatically generated. 
> Change org.apache.hadoop.mapred.SequenceFile* classes to use new api > > > Key: MAPREDUCE-656 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-656 > Project: Hadoop Map/Reduce > Issue Type: Sub-task >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu > Attachments: patch-656-1.txt, patch-656.txt > > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (MAPREDUCE-764) TypedBytesInput's readRaw() does not preserve custom type codes
TypedBytesInput's readRaw() does not preserve custom type codes --- Key: MAPREDUCE-764 URL: https://issues.apache.org/jira/browse/MAPREDUCE-764 Project: Hadoop Map/Reduce Issue Type: Bug Components: contrib/streaming Affects Versions: 0.21.0 Reporter: Klaas Bosteels Assignee: Klaas Bosteels The typed bytes format supports byte sequences of the form {{<type code> <length> <bytes>}}. When reading such a sequence via {{TypedBytesInput}}'s {{readRaw()}} method, however, the returned sequence currently is {{0 <length> <bytes>}} (0 is the type code for a bytes array), which leads to bugs such as the one described [here|http://dumbo.assembla.com/spaces/dumbo/tickets/54]. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
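The bug described above is that a raw read must echo the original type code back into the returned buffer instead of hardcoding 0, the code for a plain bytes array. An illustrative re-implementation (not the actual TypedBytesInput source) that assumes the sequence layout is one code byte, a 4-byte big-endian length, then the payload:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;

public class RawReadSketch {

    // Buggy behaviour: the custom code is consumed but replaced by 0.
    static byte[] readRawBuggy(DataInput in) throws IOException {
        in.readByte();                 // custom type code is discarded
        int length = in.readInt();
        byte[] out = new byte[5 + length];
        out[0] = 0;                    // always emits the bytes-array code
        writeInt(out, 1, length);
        in.readFully(out, 5, length);
        return out;
    }

    // Fixed behaviour: the original code survives the round trip.
    static byte[] readRawFixed(DataInput in) throws IOException {
        byte code = in.readByte();
        int length = in.readInt();
        byte[] out = new byte[5 + length];
        out[0] = code;                 // custom type code preserved
        writeInt(out, 1, length);
        in.readFully(out, 5, length);
        return out;
    }

    static void writeInt(byte[] buf, int off, int v) {
        buf[off] = (byte) (v >>> 24);
        buf[off + 1] = (byte) (v >>> 16);
        buf[off + 2] = (byte) (v >>> 8);
        buf[off + 3] = (byte) v;
    }

    public static void main(String[] args) throws IOException {
        // A two-byte payload tagged with a hypothetical custom code, 100.
        byte[] wire = {100, 0, 0, 0, 2, 42, 43};
        byte[] buggy = readRawBuggy(new DataInputStream(new ByteArrayInputStream(wire)));
        byte[] fixed = readRawFixed(new DataInputStream(new ByteArrayInputStream(wire)));
        System.out.println("buggy code = " + buggy[0]);  // custom code lost
        System.out.println("fixed code = " + fixed[0]);  // custom code kept
    }
}
```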
[jira] Created: (MAPREDUCE-763) Capacity scheduler should clean up reservations if it runs tasks on nodes other than where it has made reservations
Capacity scheduler should clean up reservations if it runs tasks on nodes other than where it has made reservations --- Key: MAPREDUCE-763 URL: https://issues.apache.org/jira/browse/MAPREDUCE-763 Project: Hadoop Map/Reduce Issue Type: Bug Components: contrib/capacity-sched Reporter: Hemanth Yamijala Currently capacity scheduler makes a reservation on nodes for high memory jobs that cannot currently run at the time. It could happen that in the meantime other tasktrackers become free to run the tasks of this job. Ideally in the next heartbeat from the reserved TTs the reservation should be removed. Otherwise it could unnecessarily block capacity for a while (until the TT has enough slots free to run a task of this job). -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
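The cleanup this issue asks for is straightforward to state: a reservation held on a tasktracker for a high-memory job should be released on the tracker's next heartbeat once the job no longer has pending tasks, because they ran elsewhere. A minimal sketch of that check (names and structure are illustrative, not the actual capacity-scheduler source):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReservationSketch {
    // trackerName -> jobId of the job the tracker is reserved for
    final Map<String, String> reservations = new HashMap<>();
    // jobs that still have tasks waiting to run
    final Set<String> jobsWithPendingTasks = new HashSet<>();

    void reserve(String tracker, String jobId) {
        reservations.put(tracker, jobId);
    }

    // Called on each heartbeat from a reserved tracker; returns true
    // if a stale reservation was cleaned up.
    boolean heartbeat(String tracker) {
        String jobId = reservations.get(tracker);
        if (jobId != null && !jobsWithPendingTasks.contains(jobId)) {
            reservations.remove(tracker);  // job ran elsewhere; free the capacity
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ReservationSketch s = new ReservationSketch();
        s.reserve("tt1", "job_1");
        s.jobsWithPendingTasks.add("job_1");
        System.out.println("cleaned while pending: " + s.heartbeat("tt1"));
        s.jobsWithPendingTasks.remove("job_1");  // tasks ran on other trackers
        System.out.println("cleaned after done: " + s.heartbeat("tt1"));
    }
}
```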
[jira] Created: (MAPREDUCE-762) Task's process trees may not be killed if a TT is restarted
Task's process trees may not be killed if a TT is restarted --- Key: MAPREDUCE-762 URL: https://issues.apache.org/jira/browse/MAPREDUCE-762 Project: Hadoop Map/Reduce Issue Type: Bug Reporter: Hemanth Yamijala Some work has been done to make sure the tasktrackers kill process trees of tasks when they finish (either successfully, or with failures or when they are killed). Related JIRAs are HADOOP-2721, HADOOP-5488 and HADOOP-5420. But when TTs are restarted, we do not handle killing of process trees - though tasks will themselves die on re-establishing contact with the TT. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.