[ https://issues.apache.org/jira/browse/HADOOP-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12699378#action_12699378 ]
Chris Douglas commented on HADOOP-5589:
---------------------------------------
Unfortunately, this would be an incompatible change to TupleWritable; one could
no longer read tuples written using an older version of Hadoop, right?
I see two possible approaches:
# Define backwards-compatible semantics for TupleWritable, so readFields can
read both the old and the new format (a rough sketch follows below)
# Subclass or create a new TupleWritable2 (or whatever) and modify the join
framework to use that instead
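A rough sketch of option 1, assuming the existing format begins with a non-negative vint field count (the marker value, BitSet layout, and class name here are illustrative, not the actual patch):

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.BitSet;
import org.apache.hadoop.io.WritableUtils;

// Hypothetical sketch: tag the new on-disk layout so readFields can still
// parse tuples written by older Hadoop versions.
public class CompatTupleSketch {
  // An old-format stream starts with a non-negative vint count, so a
  // negative marker can never be mistaken for old data.
  private static final int NEW_FORMAT_MARKER = -1;
  private BitSet written = new BitSet();

  public void write(DataOutput out) throws IOException {
    WritableUtils.writeVInt(out, NEW_FORMAT_MARKER); // flag the new layout
    byte[] bits = written.toByteArray();             // Java 7+ convenience
    WritableUtils.writeVInt(out, bits.length);
    out.write(bits);
    // ... then write the values themselves ...
  }

  public void readFields(DataInput in) throws IOException {
    int first = WritableUtils.readVInt(in);
    if (first == NEW_FORMAT_MARKER) {
      // New format: the written-bits are a variable-length BitSet.
      byte[] bits = new byte[WritableUtils.readVInt(in)];
      in.readFully(bits);
      written = BitSet.valueOf(bits);
    } else {
      // Old format: 'first' was the value count; the 64-bit mask follows
      // (the exact old layout is assumed here, not taken from the source).
      long mask = WritableUtils.readVLong(in);
      written = BitSet.valueOf(new long[] { mask });
      // ... then read 'first' values using the old layout ...
    }
  }
}
{code}

Whether the marker trick actually fits would need checking against the real write() stream layout, but it keeps everything in one class, unlike option 2.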
> TupleWritable: Lift implicit limit on the number of values that can be stored
> -----------------------------------------------------------------------------
>
> Key: HADOOP-5589
> URL: https://issues.apache.org/jira/browse/HADOOP-5589
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Affects Versions: 0.21.0
> Reporter: Jingkei Ly
> Assignee: Jingkei Ly
> Attachments: HADOOP-5589-1.patch, HADOOP-5589-2.patch,
> HADOOP-5589-3.patch
>
>
> TupleWritable uses an instance field of the primitive type long, presumably
> so that it can quickly determine, via bit-shifting operations on that field,
> whether a position in its array of Writables has been written to. The problem
> is that this imposes a hard limit of 64 values that can be stored in a
> TupleWritable.
> An example of a use-case where this would be a problem: if you had two MR
> jobs with more than 64 reduce tasks and wanted to join their outputs with
> CompositeInputFormat, the current scheme would probably produce unexpected
> results.
> At the very least, the 64-value limit should be documented in TupleWritable.
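For reference, the 64-value ceiling described above falls out of bookkeeping along these lines (a minimal, hypothetical distillation; names do not necessarily match the real TupleWritable source):

{code:java}
import org.apache.hadoop.io.Writable;

// Minimal illustration of a long-based "written" mask and why it caps
// the tuple at 64 positions. Names are illustrative only.
public class BitmaskTupleSketch {
  private long written;      // one bit per position: at most 64 positions
  private Writable[] values;

  public boolean has(int i) {
    // In Java, 1L << i shifts by (i % 64), so for i >= 64 this silently
    // tests the wrong bit instead of failing.
    return 0 != ((1L << i) & written);
  }

  void setWritten(int i) {
    written |= 1L << i;      // positions 64 and 0 alias to the same bit
  }
}
{code}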