[ https://issues.apache.org/jira/browse/PIG-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13997678#comment-13997678 ]

PJ Van Aeken commented on PIG-3655:
-----------------------------------

I suspect I am suffering from this issue. We are generating serialized objects 
in a UDF returning DataByteArray. The script runs as two mapred jobs and fails 
during the map stage of the second job with the following error message:

{quote}
java.lang.RuntimeException: Unexpected data type 48 found in stream.
at org.apache.pig.data.BinInterSedes.readDatum(BinInterSedes.java:422)
at org.apache.pig.data.BinInterSedes.readDatum(BinInterSedes.java:313)
at org.apache.pig.data.utils.SedesHelper.readGenericTuple(SedesHelper.java:144)
at org.apache.pig.data.BinInterSedes.readDatum(BinInterSedes.java:344)
at org.apache.pig.impl.io.InterRecordReader.nextKeyValue(InterRecordReader.java:113)
at org.apache.pig.impl.io.InterStorage.getNext(InterStorage.java:77)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
at org.apache.hadoop.mapred.MapTask.runNew
{quote}

I suspect that, by accident, the exact same byte sequence that is used as a 
record marker also occurs somewhere in one of our serialized objects, but the 
number of records is so large that I'm unsure how to go about isolating the 
issue in order to reproduce it. A custom UDF returning the exact same byte 
sequence does not cause the script to fail, which seems odd to me.
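
For reference, here is roughly the test UDF (a sketch from memory; the class 
name is illustrative):

{code:java}
// Sketch of the test UDF mentioned above. It returns the BinStorage record
// marker bytes 1 2 3 110 as a DataByteArray. Note that the integer 16909166
// is 0x0102036E, i.e. exactly these four bytes when stored big endian.
// InterStorage markers are 1 2 3 followed by one of 19-21, 28-30, or 36-45.
import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataByteArray;
import org.apache.pig.data.Tuple;

public class MarkerBytes extends EvalFunc<DataByteArray> {
    @Override
    public DataByteArray exec(Tuple input) throws IOException {
        return new DataByteArray(new byte[] { 1, 2, 3, 110 });
    }
}
{code}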

> BinStorage and InterStorage approach to record markers is broken
> ----------------------------------------------------------------
>
>                 Key: PIG-3655
>                 URL: https://issues.apache.org/jira/browse/PIG-3655
>             Project: Pig
>          Issue Type: Bug
>    Affects Versions: 0.2.0, 0.3.0, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.8.1, 
> 0.9.0, 0.9.1, 0.9.2, 0.10.0, 0.11, 0.10.1, 0.12.0, 0.11.1
>            Reporter: Jeff Plaisance
>
> The way that the record readers for these storage formats seek to the first 
> record in an input split is to find the byte sequence 1 2 3 110 for 
> BinStorage, or 1 2 3 followed by one of 19-21, 28-30, or 36-45 for 
> InterStorage. If this sequence occurs in the data for any reason other than 
> to mark the start of a tuple (for example, the integer 16909166 stored big 
> endian encodes to exactly the BinStorage marker bytes), it can cause 
> mysterious failures in pig jobs because the record reader will try to decode 
> garbage and fail.
> For this approach of using an unlikely sequence to mark record boundaries, it 
> is important to reduce the probability of the sequence occurring naturally in 
> the data by ensuring that the record marker is sufficiently long. Hadoop 
> SequenceFile uses 128 bits for this and randomly generates the sequence for 
> each file (selecting a fixed, predetermined value opens up the possibility of 
> a malicious party intentionally sending you that value). This makes it 
> extremely unlikely that collisions will occur. In the long run I think that 
> pig should also be doing this.
> As a quick fix, it might be good to save the current position in the file 
> before entering readDatum and, if an exception is thrown, seek back to the 
> saved position and resume scanning for the next record marker.
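
A rough sketch of what that quick fix could look like (my illustration, not an 
actual patch; ResyncingReader and skipToNextMarker are hypothetical names, 
while BinInterSedes and FSDataInputStream are the real Pig/Hadoop classes):

{code:java}
// Hypothetical resynchronizing read loop (illustrative, not Pig's actual
// InterRecordReader code): remember the stream position before decoding a
// candidate record, and if decoding fails, seek back and resume scanning
// for the next marker instead of killing the task.
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.pig.data.BinInterSedes;
import org.apache.pig.data.Tuple;

public class ResyncingReader {
    private final FSDataInputStream in;
    private final BinInterSedes sedes = new BinInterSedes();

    public ResyncingReader(FSDataInputStream in) {
        this.in = in;
    }

    public Tuple nextTuple() throws IOException {
        while (true) {
            skipToNextMarker();              // the existing marker-scanning logic
            long recordStart = in.getPos();  // save position before readDatum
            try {
                return (Tuple) sedes.readDatum(in);
            } catch (RuntimeException e) {
                // "Unexpected data type N found in stream": the marker was a
                // false positive inside user data. Seek back and rescan from
                // just past where this candidate record began.
                in.seek(recordStart + 1);
            }
        }
    }

    private void skipToNextMarker() throws IOException {
        // omitted: scan forward for the 1 2 3 <marker byte> sequence
    }
}
{code}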



--
This message was sent by Atlassian JIRA
(v6.2#6252)
