[ https://issues.apache.org/jira/browse/SPARK-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15438461#comment-15438461 ]

Alexander Tronchin-James commented on SPARK-12394:
--------------------------------------------------

Awesome news Tejas!

The filter feature is secondary AFAIK, and I'd prioritize the sorted-merge
bucketed (SMB) join if I had the choice. I'd strongly prefer support for
inner and left/right/full outer joins between tables whose bucket counts
differ by an integer multiple, for choosing the number of executors (a
rational multiple or fraction of the number of buckets), and for choosing
the number of emitted buckets. Bonus points for an implementation that
automatically applies SMB joins and avoids re-sorts where possible. Maybe
a tall order, but we know it can be done. ;-)
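For concreteness, here's a toy sketch in plain Python (not Spark internals;
the helper names are mine) of why integer-multiple bucket counts make this
workable: with 4 and 8 buckets on the same key, bucket j of the 4-bucket
table can only match buckets j and j+4 of the 8-bucket table, so the join
runs bucket-by-bucket with sorted merges and no shuffle.

```python
def bucketize(rows, num_buckets):
    """Hash-partition (key, value) rows into num_buckets sorted buckets."""
    buckets = [[] for _ in range(num_buckets)]
    for key, value in rows:
        buckets[key % num_buckets].append((key, value))
    for b in buckets:
        b.sort()  # rows are sorted within each bucket, as SMB requires
    return buckets

def merge_join(left, right):
    """Standard sort-merge inner join of two key-sorted row lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # emit every right-side row carrying this key
            j2 = j
            while j2 < len(right) and right[j2][0] == lk:
                out.append((lk, left[i][1], right[j2][1]))
                j2 += 1
            i += 1
    return out

def smb_join(left_rows, right_rows, n_left=4, n_right=8):
    """Bucket-wise join; n_right must be an integer multiple of n_left."""
    left_b = bucketize(left_rows, n_left)
    right_b = bucketize(right_rows, n_right)
    result = []
    for j in range(n_left):
        # right buckets j, j+n_left, ... are the only possible matches
        # for left bucket j; merge them into one sorted stream
        candidates = sorted(
            row for k in range(j, n_right, n_left) for row in right_b[k]
        )
        result.extend(merge_join(left_b[j], candidates))
    return sorted(result)
```

For example, smb_join([(1, 'a'), (2, 'b'), (9, 'c')], [(1, 'x'), (9, 'y'), (3, 'z')])
matches keys 1 and 9 and returns [(1, 'a', 'x'), (9, 'c', 'y')] without ever
comparing rows across unrelated buckets.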

If we don't get it all in the first pull request we can always iterate.
Thanks for pushing on this!






> Support writing out pre-hash-partitioned data and exploit that in join 
> optimizations to avoid shuffle (i.e. bucketing in Hive)
> ------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-12394
>                 URL: https://issues.apache.org/jira/browse/SPARK-12394
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>            Reporter: Reynold Xin
>            Assignee: Nong Li
>             Fix For: 2.0.0
>
>         Attachments: BucketedTables.pdf
>
>
> In many cases users know ahead of time the columns that they will be joining 
> or aggregating on. Ideally they should be able to leverage this information 
> and pre-shuffle the data so that subsequent queries do not require a shuffle. 
> Hive supports this functionality by allowing the user to define buckets, 
> which are hash partitions of the data based on some key.
>  - Allow the user to specify a set of columns when caching or writing out data
>  - Allow the user to specify some parallelism
>  - Shuffle the data when writing / caching such that it is distributed by 
> these columns
>  - When planning/executing a query, use this distribution to avoid another 
> shuffle when reading, assuming the join or aggregation is compatible with the 
> columns specified
>  - Should work with existing save modes: append, overwrite, etc.
>  - Should work at least with all Hadoop FS data sources
>  - Should work with any data source when caching
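The write/read flow listed above can be sketched in plain Python (this is
not the Spark API; the function names and in-memory "bucket files" are
illustrative only): shuffle once at write time by hashing the chosen
column into buckets, then aggregate bucket-by-bucket with no further
redistribution.

```python
NUM_BUCKETS = 4  # the parallelism the user specifies at write time

def write_bucketed(rows, key_col, num_buckets=NUM_BUCKETS):
    """One-time shuffle: route each row to bucket hash(key) % num_buckets.
    The returned dict stands in for bucketed files on a Hadoop FS."""
    buckets = {i: [] for i in range(num_buckets)}
    for row in rows:
        buckets[hash(row[key_col]) % num_buckets].append(row)
    return buckets

def sum_by_key(buckets, key_col, val_col):
    """Aggregate on the bucketing column without another shuffle: each
    key lives in exactly one bucket, so per-bucket partial sums are
    already final and their union needs no cross-bucket combine step."""
    totals = {}
    for rows in buckets.values():
        partial = {}
        for row in rows:
            partial[row[key_col]] = partial.get(row[key_col], 0) + row[val_col]
        totals.update(partial)  # keys are disjoint across buckets
    return totals
```

A grouped sum over rows like {"k": "a", "v": 1} then reads each bucket
independently, which is the shuffle the quoted list is trying to pay for
only once, at write time.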



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
