[ https://issues.apache.org/jira/browse/HCATALOG-497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463092#comment-13463092 ]

Travis Crawford commented on HCATALOG-497:
------------------------------------------

In the Pig adapter, when reading a table with a column of type smallint, we get 
the following exception in mappers running on the cluster.

{code}
2012-09-25 18:15:35,275 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: Unexpected data type java.lang.Short found in stream. Note only standard Pig type is supported when you output from UDF/LoadFunc
        at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:562)
        at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:437)
        at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
        at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:616)
        at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:445)
        at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:437)
        at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
        at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:616)
        at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:445)
        at org.apache.pig.data.BinSedesTuple.write(BinSedesTuple.java:41)
        at org.apache.pig.impl.io.PigNullableWritable.write(PigNullableWritable.java:123)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:90)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:77)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:926)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:574)
        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:123)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:285)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
        at org.apache.hadoop.mapred.Child.main(Child.java:264)
{code}
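
The failure happens because Pig's BinInterSedes serializer only handles the standard Pig types, and java.lang.Short (Hive's smallint) is not one of them, so a loader has to widen such values to Integer before placing them in a Tuple. A minimal sketch of that widening (the PigTypeWidener class name is hypothetical, not part of HCatalog):

```java
// Hypothetical illustration: Pig has no SHORT type, so values read from a
// Hive smallint column (java.lang.Short) must be widened to Integer before
// they are handed to Pig, or BinInterSedes.writeDatum throws the
// RuntimeException shown above. Hive tinyint (java.lang.Byte) hits the
// same problem and gets the same treatment.
public class PigTypeWidener {

    // Widen sub-Integer numeric types; pass standard Pig types through.
    public static Object widen(Object value) {
        if (value instanceof Short) {
            return Integer.valueOf(((Short) value).intValue());
        }
        if (value instanceof Byte) {
            return Integer.valueOf(((Byte) value).intValue());
        }
        return value;
    }
}
```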
                
> HCatContext should use the jobconf instead of its own conf
> ----------------------------------------------------------
>
>                 Key: HCATALOG-497
>                 URL: https://issues.apache.org/jira/browse/HCATALOG-497
>             Project: HCatalog
>          Issue Type: Bug
>            Reporter: Travis Crawford
>            Assignee: Travis Crawford
>
> HCatContext is a recently added class that provides global configuration 
> access, which is very useful for configuring static converter classes.
> An issue was discovered when running in full MapReduce mode that was missed 
> by the unit tests: since HCatContext has its own configuration object, that 
> configuration is never passed to tasks on the cluster.
> HCatContext should instead reuse the job conf, which the MR framework passes 
> to worker tasks, so settings are present in both the frontend and backend 
> (FE/BE).
