[ 
https://issues.apache.org/jira/browse/PIG-1846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046964#comment-13046964
 ] 

Thejas M Nair commented on PIG-1846:
------------------------------------

This applies to the general case where there is skew on the group-by keys, or where the cardinality of the group-by keys is very low compared to the desired parallelism. The usual way of writing such a query -
{code}
gby = GROUP in BY (c1, c2) PARALLEL 100;
res = FOREACH gby {
    dist_c3 = DISTINCT in.c3;
    GENERATE group.c1, group.c2, FUNC(dist_c3);
}
{code}

can be converted to - 
{code}
dist_f = FOREACH in GENERATE c1, c2, c3;
dist = DISTINCT dist_f PARALLEL 100;
dist_grp = GROUP dist BY (c1, c2);
res = FOREACH dist_grp GENERATE group.c1, group.c2, FUNC(dist.c3); -- no DISTINCT on c3 required here
{code}
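To see why the two plans agree, here is a minimal Python sketch (not part of the patch) that simulates both on hypothetical sample rows of (c1, c2, c3), taking FUNC to be COUNT as in the count-distinct-users example. Plan 1 groups first and de-dupes inside each bag, so all rows for one key land on one reducer; plan 2 de-dupes over all three columns first (which parallelizes over the full tuple) and then counts per key.

```python
from collections import defaultdict

# Hypothetical sample data: rows of (c1, c2, c3), mirroring the relation "in".
rows = [
    ("a", 1, "x"), ("a", 1, "x"), ("a", 1, "y"),
    ("b", 2, "x"), ("b", 2, "z"), ("b", 2, "z"),
]

# Plan 1: GROUP BY (c1, c2), then DISTINCT inside each bag.
# Parallelism is bounded by the number of distinct keys.
bags = defaultdict(list)
for c1, c2, c3 in rows:
    bags[(c1, c2)].append(c3)
plan1 = {k: len(set(v)) for k, v in bags.items()}

# Plan 2: DISTINCT over (c1, c2, c3) first, then GROUP BY (c1, c2)
# on the much smaller de-duplicated result; no per-bag DISTINCT needed.
dist = set(rows)
plan2 = defaultdict(int)
for c1, c2, c3 in dist:
    plan2[(c1, c2)] += 1

assert plan1 == dict(plan2)  # both plans produce the same counts
```

The equivalence holds for any algebraic FUNC applied after the per-key de-duplication, since DISTINCT on (c1, c2, c3) preserves exactly the distinct c3 values within each (c1, c2) key.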


> optimize queries like - count distinct users for each gender
> ------------------------------------------------------------
>
>                 Key: PIG-1846
>                 URL: https://issues.apache.org/jira/browse/PIG-1846
>             Project: Pig
>          Issue Type: Improvement
>    Affects Versions: 0.9.0
>            Reporter: Thejas M Nair
>             Fix For: 0.10
>
>
> The pig group operation does not usually have to deal with skew on the 
> group-by keys if the foreach statement that works on the results of group has 
> only algebraic functions on the bags. But for some queries like the 
> following, skew can be a problem -
> {code}
> user_data = load 'file' as (user, gender, age);
> user_group_gender = group user_data by gender parallel 100;
> dist_users_per_gender = foreach user_group_gender 
>                         { 
>                              dist_user = distinct user_data.user; 
>                              generate group as gender, COUNT(dist_user) as user_count;
>                         }
> {code}
> Since there are only 2 distinct values of the group-by key, only 2 reducers 
> will actually be used in the current implementation; i.e., you can't get better 
> performance by adding more reducers.
> A similar problem exists when the data is skewed on the group key. With the 
> current implementation, another problem is that Pig and MR have to deal with 
> records with extremely large bags holding the large number of distinct user 
> names, which results in high memory utilization and having to spill the bags 
> to disk.
> The query plan should be modified to handle the skew in such cases and make 
> use of more reducers.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
