Re: Partitioner vs GroupComparator
Hi Shahab,

thanks, I just missed the fact that the key gets updated while iterating the values. Although I have been working with Hadoop for three years, there is always something that can surprise you. :-)

Cheers,
Jan
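[Editor's note: the "key gets updated while iterating" behavior comes from Hadoop reusing a single Writable key object and refilling it as the reducer walks the value iterator. A minimal Python simulation of that object reuse (not real Hadoop API; names are invented for illustration):

```python
# Hypothetical simulation of Hadoop's object reuse: the framework keeps ONE
# key instance and refills its contents for each value in the iteration, so
# reading the key inside the loop reflects the key of the *current* value.

class CompositeKey:
    def __init__(self):
        self.base = None
        self.extension = None

def reduce_group(records, key_obj):
    """Yield values for one group, mutating key_obj in place per record,
    mimicking Writable deserialization into a reused instance."""
    for base, extension, value in records:
        key_obj.base = base            # refill the shared key object
        key_obj.extension = extension
        yield value

# One merge-sorted reduce group for base 'a' (sorted by extension):
group = [("a", 1, "v1"), ("a", 2, "v2"), ("a", 3, "v3")]

key = CompositeKey()
seen = []
for value in reduce_group(group, key):
    # The key's 'extension' tracks the current value, even though the reduce
    # call nominally only "saw" the first key of the group.
    seen.append((key.extension, value))

print(seen)  # [(1, 'v1'), (2, 'v2'), (3, 'v3')]
```

This is why the secondary field need not be duplicated in the value.]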
Re: Partitioner vs GroupComparator
Jan,

"...is that you need to put the data you want to secondary sort into your key class."

Yes, but then you can also not put the secondary-sort column/data piece in the value part, and this way there will be no duplication.

"But, what I just realized is that the original key probably IS accessible, because of the Writable semantics. As you iterate through the Iterable passed to the reduce call the Key changes its contents. Am I right?"

Yes.

"...how all howtos on doing secondary sort look. All I have seen duplicate the secondary part of the key in value."

Check the link below, where a 'null' value is being passed because that data has already been captured as part of the key due to the secondary-sort requirements.
http://www.javacodegeeks.com/2013/01/mapreduce-algorithms-secondary-sorting.html

Regards,
Shahab

On Fri, Aug 23, 2013 at 1:34 PM, Lukavsky, Jan wrote:
> [snip]
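[Editor's note: the "null value" pattern from the linked article can be simulated in a few lines of plain Python (illustrative only, not Hadoop code): the mapper emits the sortable field only inside the composite key, the value is null, and the reducer recovers the field from the keys.

```python
# Hypothetical simulation of the null-value secondary-sort pattern:
# the sortable 'extension' lives only in the composite key, so nothing
# is duplicated in the value.
from itertools import groupby

# Mapper output: composite key (base, extension), value None.
mapped = [(("b", 2), None), (("a", 3), None), (("a", 1), None), (("b", 1), None)]

# Shuffle: sort by the FULL composite key (base first, then extension).
shuffled = sorted(mapped, key=lambda kv: kv[0])

# Reduce side: the grouping comparator groups only on 'base'.
result = {}
for base, group in groupby(shuffled, key=lambda kv: kv[0][0]):
    # Extensions come back in sorted order, recovered from the keys alone.
    result[base] = [key[1] for key, _value in group]

print(result)  # {'a': [1, 3], 'b': [1, 2]}
```
]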
RE: Partitioner vs GroupComparator
As Harsh said, sometimes you want to do a secondary sort, but in MR the data can only be sorted by key, not by value. A lot of the time you want the reducer output sorted by a field, but only sorted within a group, kind of like a windowed sort in relational SQL.

For example, if you have data about all the employees and you want the MR job to sort the employees by salary, but within each department, what do you choose as the key to emit from the Mapper? Department_id? If so, then it is hard to make the result sorted by salary. Using "Department_id + salary", then we cannot get all the data from one department into one reducer.

In this case, you separate the way keys are composed from the way they are grouped. You still use 'Department_id + salary' as the key, but override the GroupComparator to group ONLY by 'Department_id', while in the meantime you sort the data on the whole 'Department_id + salary'. The final goal is to make sure that all the data for the same department arrives at the same reducer, and that when it arrives it is already sorted by salary too, by utilizing MR's built-in sort/shuffle ability.

Yong

Date: Fri, 23 Aug 2013 13:06:01 -0400
Subject: Re: Partitioner vs GroupComparator
From: shahab.yu...@gmail.com
To: user@hadoop.apache.org
> [snip]
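[Editor's note: Yong's department/salary example can be simulated end to end in plain Python. This is a sketch of the shuffle mechanics, not Hadoop code; the data and the two-reducer setup are invented for illustration:

```python
from itertools import groupby

# (department_id, salary, employee) records emitted by a hypothetical mapper.
employees = [
    (10, 90_000, "carol"),
    (20, 50_000, "dave"),
    (10, 60_000, "alice"),
    (20, 80_000, "erin"),
    (10, 75_000, "bob"),
]

NUM_REDUCERS = 2

# Partitioner: ONLY the department picks the reducer, so a department never
# splits across reducers even though salary is part of the sort key.
def partition(dept):
    return dept % NUM_REDUCERS

reducer_input = {r: [] for r in range(NUM_REDUCERS)}
for dept, salary, name in employees:
    reducer_input[partition(dept)].append(((dept, salary), name))

# Sort comparator: sort each reducer's stream by the FULL (dept, salary) key.
for r in reducer_input:
    reducer_input[r].sort(key=lambda kv: kv[0])

# Grouping comparator: one reduce call per department; salaries arrive sorted.
by_dept = {}
for r in reducer_input:
    for dept, group in groupby(reducer_input[r], key=lambda kv: kv[0][0]):
        by_dept[dept] = [(salary, name) for (d, salary), name in group]

print(by_dept[10])  # [(60000, 'alice'), (75000, 'bob'), (90000, 'carol')]
print(by_dept[20])  # [(50000, 'dave'), (80000, 'erin')]
```
]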
Re: Partitioner vs GroupComparator
Hi Shahab,

I'm not sure if I understand right, but the problem is that you need to put the data you want to secondary sort into your key class. But, what I just realized is that the original key probably IS accessible, because of the Writable semantics. As you iterate through the Iterable passed to the reduce call, the Key changes its contents. Am I right? This seems a bit weird, but it is probably how it works. I just overlooked this because of the way the API looks and how all the howtos on doing secondary sort look. All I have seen duplicate the secondary part of the key in the value.

Jan

Original message
Subject: Re: Partitioner vs GroupComparator
From: Shahab Yunus
To: "user@hadoop.apache.org"
> [snip]
Re: Partitioner vs GroupComparator
@Jan, why not simply not send the 'hidden' part of the key as the value? Why not then pass the value as null, or with some other value part? That way there is no duplication on the reducer side, and you can extract the 'hidden' part of the key yourself (which should be possible, as you will be encapsulating it in some class/object model)...?

Regards,
Shahab

On Fri, Aug 23, 2013 at 12:22 PM, Jan Lukavský wrote:
> [snip]
Re: Partitioner vs GroupComparator
Hi all,

when speaking about this, has anyone ever measured how much more data needs to be transferred over the network when using the GroupingComparator the way Harsh suggests? What I mean is: when you use the GroupingComparator, it hides from you the real key that you emitted from the Mapper. You only see the first key of the reduce group, and any data that was carried in the key needs to be duplicated in the value in order to be accessible on the reduce end.

Let's say you have a key consisting of two parts (base, extension), you partition by the 'base' part and use the GroupingComparator to group keys with the same base part. Then you have no choice but to emit from the Mapper something like (key: (base, extension), value: extension), which means the 'extension' part is duplicated in the data that has to be transferred over the network. This overhead can be diminished by using compression between the map and reduce side, but I believe that in some cases it can be significant.

It would be nice if the API allowed access to the 'real' key for each value, not only the first key of the reduce group. The only way to get rid of this overhead now is by not using the GroupingComparator and instead storing some internal state in the Reducer class that is persisted across multiple calls to the reduce() method, which in my opinion makes using the GroupingComparator this way a less 'preferred' way of doing secondary sort.

Does anyone have any experience with this overhead?

Jan

On 08/23/2013 06:05 PM, Harsh J wrote:
> [snip]
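[Editor's note: the alternative Jan describes, skipping the GroupingComparator and carrying state across reduce() calls, might look roughly like this. A Python sketch of the control flow with invented names, not Hadoop code:

```python
# Hypothetical sketch: with no grouping comparator, every distinct composite
# key gets its own reduce call, so the reducer must track the 'base' part
# itself and flush whenever it changes.

class StatefulReducer:
    def __init__(self):
        self.current_base = None
        self.buffer = []
        self.finished = []       # stands in for emitting output

    def reduce(self, key, values):
        base, extension = key
        if base != self.current_base:
            self._flush()        # base changed: previous group is complete
            self.current_base = base
        self.buffer.extend((extension, v) for v in values)

    def _flush(self):
        if self.current_base is not None:
            self.finished.append((self.current_base, self.buffer))
            self.buffer = []

    def cleanup(self):           # mirrors Reducer.cleanup() in Hadoop
        self._flush()            # don't lose the last group

r = StatefulReducer()
# The framework delivers one call per distinct key, in sorted key order:
for key, values in [(("a", 1), ["v1"]), (("a", 2), ["v2"]), (("b", 1), ["v3"])]:
    r.reduce(key, values)
r.cleanup()

print(r.finished)  # [('a', [(1, 'v1'), (2, 'v2')]), ('b', [(1, 'v3')])]
```

The price of this approach is the explicit flush-on-boundary bookkeeping, which the GroupingComparator would otherwise do for you.]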
Re: Partitioner vs GroupComparator
The partitioner runs on the map end. It assigns a partition ID (reducer ID) to each key.

The grouping comparator runs on the reduce end. It helps reducers, which read off a merge-sorted single file, understand how to break the sequential file into reduce calls of (key, values).

Typically one never overrides the GroupingComparator, and it is usually the same as the SortComparator. But if you wish to do things such as secondary sort, then overriding it becomes useful, because you may want to sort over two parts of a key object but only group by one part, etc.

On Fri, Aug 23, 2013 at 8:49 PM, Eugene Morozov wrote:
> Hello,
>
> I have two different types of keys emerging from Map and processed by Reduce.
> These keys have some part in common, and I'd like to have similar keys in one
> reducer. For that purpose I used a Partitioner and partitioned everything by
> this common part. It seems to be fine, but MRUnit doesn't seem to know
> anything about Partitioners. So this is where the GroupComparator comes into
> play. It seems that MRUnit is well aware of that guy, but it surprises me:
> it looks like the Partitioner and the GroupComparator are actually doing
> exactly the same thing - they both somehow group keys to have them in one
> reducer.
> Could you shed some light on it, please?

--
Harsh J
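[Editor's note: Harsh's distinction can be made concrete with a small simulation (Python, illustrative only; the first-letter scheme stands in for the usual hashCode-based partitioning). The partitioner only picks a reducer number map-side; the grouping comparator only decides, reduce-side, where one reduce call ends and the next begins.

```python
from itertools import groupby

keys = ["apple", "avocado", "banana", "blueberry", "cherry"]
NUM_REDUCERS = 2

# Map side - the partitioner assigns a reducer ID per key
# (here: by first letter, standing in for hash(key) % numReduceTasks).
def partition(key):
    return ord(key[0]) % NUM_REDUCERS

reducer_input = {r: [] for r in range(NUM_REDUCERS)}
for k in keys:
    reducer_input[partition(k)].append(k)

# Reduce side - each reducer reads one merge-sorted stream; the grouping
# comparator splits that stream into reduce calls (here: one call per
# first letter, even though the full keys differ).
reduce_calls = {r: [list(g) for _, g in groupby(sorted(ks), key=lambda k: k[0])]
                for r, ks in reducer_input.items()}

print(reduce_calls)
# 'a' words and 'c' words land on one reducer, 'b' words on the other;
# within a reducer, each letter becomes its own reduce call.
```

So the two mechanisms are not redundant: the partitioner controls *which reducer* sees a key, the grouping comparator controls *how many reduce calls* that reducer makes over the keys it received.]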