Thank you for your help, Shahab.

I guess I wasn't being too clear. My setup is that I use a custom type as
the key, and in order to deserialize it on the compute nodes, I need an
extra piece of information (which is also a custom type).

To use an analogy, a Text is serialized by writing the length of the string
as a number, followed by the bytes that make up the actual string. When it
is deserialized, that number tells the reader when to stop reading the
string. The number varies from string to string and is compact, so it makes
sense to serialize it together with the string.
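
In code, the pattern looks roughly like this (a made-up Writable, just to
illustrate the idea; the real Text uses a more compact variable-length
encoding for the length):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.nio.charset.Charset;
import org.apache.hadoop.io.Writable;

// Made-up Writable: the compact length field travels with every value.
public class LengthPrefixedString implements Writable {
    private String value = "";

    @Override
    public void write(DataOutput out) throws IOException {
        byte[] bytes = value.getBytes(Charset.forName("UTF-8"));
        out.writeInt(bytes.length);        // the small, per-value extra information
        out.write(bytes);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        int length = in.readInt();         // tells the reader when to stop
        byte[] bytes = new byte[length];
        in.readFully(bytes);
        value = new String(bytes, Charset.forName("UTF-8"));
    }
}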

My use case is similar. I have a complex type (let's call it data), and in
order to deserialize it, I need another complex type (let's call this
second type metadata). The metadata is not closely tied to the data (i.e.
if a data value changes, its metadata does not), and the metadata is quite
large.

I ruled out a couple of options, but please let me know if you think I did
so for the wrong reasons:
1. I could serialize each data value with its own metadata value, but since
there are tens of millions of data values (or more) and at most around one
hundred distinct metadata values, it would waste resources in the system.
2. I could serialize the metadata and then the data as a collection
property of the metadata (roughly sketched below). This would be an elegant
solution code-wise, but then all the data would have to be read and kept in
memory as one massive object before any reduce operations could happen. I
wasn't able to find any info on this online, so this is just a guess from
peeking at the Hadoop code.
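
This is the kind of structure I mean for option 2 (all names are made up;
it's only a sketch of the shape, not working job code):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.Writable;

// Made-up type for option 2: the metadata owns all of its data values.
public class MetadataWithData implements Writable {
    private byte[] metadata = new byte[0];                     // large, shared description
    private List<byte[]> dataValues = new ArrayList<byte[]>(); // potentially tens of millions

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(metadata.length);
        out.write(metadata);
        out.writeInt(dataValues.size());
        for (byte[] value : dataValues) {
            out.writeInt(value.length);
            out.write(value);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        metadata = new byte[in.readInt()];
        in.readFully(metadata);
        int count = in.readInt();
        dataValues = new ArrayList<byte[]>(count);
        for (int i = 0; i < count; i++) {
            byte[] value = new byte[in.readInt()];
            in.readFully(value);
            dataValues.add(value);   // everything ends up in memory at once
        }
    }
}

As far as I can tell, the readFields() at the bottom would have to
materialize the entire collection in one go, which is what worries me.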

My "solution" was to serialize each data value with a hash of its metadata,
and to separately store the metadata and its hash in the job configuration
(as key/value pairs). For this to work, I would need to be able to
deserialize the metadata on the reduce node before the data is deserialized
in the readFields() method.
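
To make that concrete, this is roughly what I have in mind (MetadataRegistry
and DataWritable are made-up names, and populating the registry before
readFields() runs is exactly the part I can't figure out):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;

// Made-up cache: metadata values keyed by their hash, read out of job
// configuration properties of the form "metadata.<hash>".
class MetadataRegistry {
    private static final Map<Long, String> CACHE = new HashMap<Long, String>();

    // This would have to run on the reduce node *before* any readFields()
    // call -- finding the right hook for that is my open question.
    static void populate(Configuration conf) {
        for (Map.Entry<String, String> entry : conf) {
            if (entry.getKey().startsWith("metadata.")) {
                long hash = Long.parseLong(entry.getKey().substring("metadata.".length()));
                CACHE.put(hash, entry.getValue());
            }
        }
    }

    static String lookup(long hash) {
        return CACHE.get(hash);
    }
}

// Made-up key/value type: each data value carries only the hash of its metadata.
public class DataWritable implements Writable {
    private long metadataHash;
    private byte[] payload = new byte[0];

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(metadataHash);
        out.writeInt(payload.length);
        out.write(payload);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        metadataHash = in.readLong();
        // Needs the registry to be populated already; the metadata would be
        // used to interpret the payload bytes below.
        String metadata = MetadataRegistry.lookup(metadataHash);
        int length = in.readInt();
        payload = new byte[length];
        in.readFully(payload);
    }
}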

I think that for this to happen I need to hook into the code somewhere that
a context or job configuration is available (before readFields() is
called), but I'm stumped as to where that is.

Cheers,
Adi


On Sat, Aug 31, 2013 at 3:42 AM, Shahab Yunus <shahab.yu...@gmail.com> wrote:

> What I meant was that you might have to split or redesign your logic or
> your use case (which we don't know about)?
>
> Regards,
> Shahab
>
>
> On Fri, Aug 30, 2013 at 10:31 PM, Adrian CAPDEFIER <chivas314...@gmail.com> wrote:
>
>> But how would the comparator have access to the job config?
>>
>>
>> On Sat, Aug 31, 2013 at 2:38 AM, Shahab Yunus <shahab.yu...@gmail.com> wrote:
>>
>>> I think you have to override/extend the Comparator to achieve that,
>>> something like what is done in Secondary Sort?
>>>
>>> Regards,
>>> Shahab
>>>
>>>
>>> On Fri, Aug 30, 2013 at 9:01 PM, Adrian CAPDEFIER <chivas314...@gmail.com> wrote:
>>>
>>>> Howdy,
>>>>
>>>> I apologise for the lack of code in this message, but the code is
>>>> fairly convoluted and it would obscure my problem. That being said, I can
>>>> put together some sample code if really needed.
>>>>
>>>> I am trying to pass some metadata between the map & reduce steps. This
>>>> metadata is read and generated in the map step and stored in the job
>>>> config. It also needs to be recreated on the reduce node before the key/
>>>> value fields can be read in the readFields function.
>>>>
>>>> I had assumed that I would be able to override the Reducer.setup()
>>>> function and that would be it, but apparently the readFields function is
>>>> called before the Reducer.setup() function.
>>>>
>>>> My question is: what is the best place on the reduce node where I can
>>>> access the job configuration/context before the readFields function is
>>>> called?
>>>>
>>>> This is the stack trace:
>>>>
>>>>         at
>>>> org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:103)
>>>>         at
>>>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1111)
>>>>         at
>>>> org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:70)
>>>>         at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:59)
>>>>         at
>>>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1399)
>>>>         at
>>>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1298)
>>>>         at
>>>> org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
>>>>         at
>>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
>>>>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>>>>         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>>>>         at org.apache.hadoop.mapred.Child.main(Child.java:249)
>>>>
>>>>
>>>
>>
>
