[ https://issues.apache.org/jira/browse/HBASE-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13772239#comment-13772239 ]
Aleksandr Shulman commented on HBASE-9533:
------------------------------------------

[~saint....@gmail.com] and I took a look at the issue and discovered that the dependency is being excluded by zookeeper in 0.96 and trunk. Removing the exclusion fixes the problem.

Testing: We have automation that runs MRv1 over HBase that failed because of this issue. When we removed the exclusion and ran it from a custom branch, it passed. The branch is the latest 0.96 + the patch:
https://github.com/AleksandrShulman/hbase/commit/5f7df8e7b08eebe2d28337e2eb0750acea21d51d

After the patch is applied, MRv1 and MRv2 both work for a regular pi job (MR only) and a rowcounter job (MR over HBase).

The patch is straightforward. I will attach it shortly.

> List of dependency jars for MR jobs is hard-coded and does not include netty,
> breaking MRv1 jobs
> ------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-9533
>                 URL: https://issues.apache.org/jira/browse/HBASE-9533
>             Project: HBase
>          Issue Type: Bug
>          Components: mapreduce
>    Affects Versions: 0.95.2, 0.96.1
>            Reporter: Aleksandr Shulman
>            Assignee: Matteo Bertozzi
>             Fix For: 0.95.2, 0.96.1
>
>         Attachments: failed_mrv1_rowcounter_tt_taskoutput.out
>
>
> Observed behavior:
> Against trunk, using MRv1 with hadoop 1.0.4, r1393290, I am able to run MRv1 jobs (e.g. pi 2 4).
> However, when I use it to run MR over HBase jobs, they fail with the stack trace below.
> From the trace, the issue seems to be that it cannot find a class that the netty jar contains. This would make sense, given that the dependency jars that we use for the MapReduce job are hard-coded, and that the netty jar is not one of them.
> https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java#L519
> Strangely, this is only an issue in trunk, not 0.95, even though the code hasn't changed.
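[Editorial note: for readers unfamiliar with the fix described above, the following is a rough sketch of the shape of the pom change, i.e. dropping a netty exclusion from the zookeeper dependency. The coordinates and surrounding structure here are illustrative assumptions, not a copy of the actual HBase pom; consult the attached patch for the real change.]

```xml
<!-- Illustrative sketch only: a zookeeper dependency entry with the
     netty exclusion (the reported culprit) removed. Group/artifact ids
     and any version are assumptions; see the real module pom. -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <!-- The problematic exclusion looked roughly like this and is deleted:
  <exclusions>
    <exclusion>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
  </exclusions>
  -->
</dependency>
```

With the exclusion gone, netty arrives on the classpath transitively via zookeeper, so the hard-coded dependency-jar list no longer has to name it explicitly.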
> Command:
> {code}/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter sampletable{code}
> TT logs (attached)
> Output from console running job:
> {code}
> 13/09/13 16:02:58 INFO mapred.JobClient: Task Id : attempt_201309131601_0002_m_000000_2, Status : FAILED
> java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
> 	at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:119)
> 	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:489)
> 	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> 	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> 	at org.apache.hadoop.mapred.Child.main(Child.java:249)
> 13/09/13 16:03:09 INFO mapred.JobClient: Job complete: job_201309131601_0002
> 13/09/13 16:03:09 INFO mapred.JobClient: Counters: 7
> 13/09/13 16:03:09 INFO mapred.JobClient:   Job Counters
> 13/09/13 16:03:09 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=29913
> 13/09/13 16:03:09 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient:     Launched map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient:     Data-local map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 13/09/13 16:03:09 INFO mapred.JobClient:     Failed map tasks=1
> {code}
> Expected behavior:
> As a stopgap, the netty jar should be included in that list.
> More generally, there should be a more elegant way to include the jars that are needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
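[Editorial note: the "more elegant way" asked for above is essentially what `TableMapReduceUtil.addDependencyJars` later does: for each needed class, resolve the classpath entry that provides it and ship that jar with the job. The sketch below shows only the underlying lookup mechanism in plain Java; the class and method names are hypothetical and this is not the HBase implementation.]

```java
import java.net.URL;

// Sketch of the jar-location mechanism behind shipping dependency jars
// with an MR job: given a class, find the classpath entry that provides
// it, instead of hard-coding a jar list. Illustrative names throughout.
public class DependencyJarFinder {

    // Convert a fully qualified class name to the resource path a
    // ClassLoader can look up, e.g. "a.b.C" -> "a/b/C.class".
    static String classToResourcePath(String className) {
        return className.replace('.', '/') + ".class";
    }

    // Return the URL of the classpath entry providing the class, or null
    // if it cannot be located (e.g. classes from the boot layer).
    static String findContainingPath(Class<?> clazz) {
        String resource = classToResourcePath(clazz.getName());
        ClassLoader loader = clazz.getClassLoader();
        URL url = (loader != null)
                ? loader.getResource(resource)
                : ClassLoader.getSystemResource(resource);
        return (url == null) ? null : url.toString();
    }

    public static void main(String[] args) {
        // The netty 3 class name is the kind of input the real utility
        // would resolve before adding the jar to the job's classpath.
        System.out.println(classToResourcePath("org.jboss.netty.channel.ChannelFactory"));
        System.out.println(findContainingPath(DependencyJarFinder.class));
    }
}
```

Resolving jars from the classes a job actually uses avoids exactly the failure reported here: a transitive dependency (netty) silently missing from a hand-maintained list.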