[ https://issues.apache.org/jira/browse/SPARK-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232522#comment-14232522 ]

Shixiong Zhu commented on SPARK-4644:
-------------------------------------

It looks like `groupByKey` is quite different from `join`. The signature of 
`groupByKey` is `def groupByKey(partitioner: Partitioner): RDD[(K, 
Iterable[V])]`, so the return value is `RDD[(K, Iterable[V])]`. It exposes 
the internal data structure to the user as an `Iterable`, and the user can 
write `rdd.groupByKey().repartition(5)`. Therefore, the `Iterable` returned 
by `groupByKey` must be `Serializable` and usable on other nodes.
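
To illustrate why (a minimal sketch, assuming a live `SparkContext` named 
`sc`; the dataset is made up): `repartition` shuffles the already-grouped 
records, so each `Iterable[V]` is serialized and shipped to whichever node 
owns its new partition.

```scala
// Needed for pair RDD functions on older Spark versions (pre-1.3).
import org.apache.spark.SparkContext._

val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))

// groupByKey exposes the grouped values to the user as an Iterable.
val grouped: org.apache.spark.rdd.RDD[(String, Iterable[Int])] =
  pairs.groupByKey()

// This second shuffle serializes the Iterable[Int] values themselves;
// a non-Serializable collection would fail here with a
// NotSerializableException.
val moved = grouped.repartition(5)
moved.collect().foreach { case (k, vs) => println(s"$k -> ${vs.toList}") }
```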

The `ChunkBuffer` I designed for the skewed join is only used internally and 
won't be exposed to the user. So, for now, it is not `Serializable` and 
cannot be used by `groupByKey`.

In summary, we need a special `Iterable` for `groupByKey`: it can spill to 
disk when there is insufficient memory, and it can be used on any node, 
which means this `Iterable` must be able to read other nodes' disks (maybe 
via BlockManager?). Therefore, for now I cannot find a general approach that 
works for both `join` and `groupByKey`.
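
To make that requirement concrete, here is a hypothetical sketch of what 
such an `Iterable` could look like (all names are assumptions, not existing 
Spark API; how the fetcher reaches another node's disk, e.g. through the 
BlockManager, is left abstract):

```scala
// Stand-in for whatever remote-fetch mechanism is used; in Spark this might
// be backed by the BlockManager, but the wiring is left abstract here.
trait BlockFetcher extends Serializable {
  def fetch[V](blockId: String): Iterator[V]
}

// Serializable because it carries only lightweight block IDs, never the
// buffered values, so it can be shipped to any node and re-read there.
class SpillableIterable[V](
    blockIds: Seq[String],
    fetcher: BlockFetcher)
  extends Iterable[V] with Serializable {

  // Each traversal re-reads the spilled chunks from wherever they now live.
  override def iterator: Iterator[V] =
    blockIds.iterator.flatMap(id => fetcher.fetch[V](id))
}
```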

> Implement skewed join
> ---------------------
>
>                 Key: SPARK-4644
>                 URL: https://issues.apache.org/jira/browse/SPARK-4644
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Shixiong Zhu
>         Attachments: Skewed Join Design Doc.pdf
>
>
> Skewed data is not rare. For example, a book recommendation site may have 
> several books that are liked by most of the users. Running ALS on such 
> skewed data will raise an OutOfMemory error if some book has too many 
> users to fit into memory. To solve this, we propose a skewed join 
> implementation.


