GitHub user wangyum opened a pull request:

    https://github.com/apache/spark/pull/17020

    [SPARK-19693][SQL] Make SET mapreduce.job.reduces automatically convert 
to spark.sql.shuffle.partitions

    ## What changes were proposed in this pull request?
    Make `SET mapreduce.job.reduces` automatically convert to 
`spark.sql.shuffle.partitions`, similar to the existing handling of `SET mapred.reduce.tasks`.
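
    A minimal usage sketch of the proposed behavior, written for spark-shell 
(the built-in `spark` session); this is illustrative only and assumes the 
conversion this patch adds to SET command handling:

    ```scala
    // With this patch applied, the Hadoop-style key is expected to be
    // rewritten to the SQL conf key, mirroring the existing handling of
    // the deprecated mapred.reduce.tasks.
    spark.sql("SET mapreduce.job.reduces=10")

    // Reading the SQL conf back should then show the converted value.
    println(spark.conf.get("spark.sql.shuffle.partitions"))  // expected: 10
    ```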
    
    ## How was this patch tested?
    
    Unit tests.
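
    A hedged sketch of what such a unit test might look like (the suite and 
test names below are hypothetical, not the actual test added by this patch):

    ```scala
    import org.apache.spark.sql.SparkSession
    import org.scalatest.FunSuite

    // Illustrative only: assumes the conversion happens in SET handling, so
    // that spark.sql.shuffle.partitions reflects the value supplied via
    // mapreduce.job.reduces.
    class MapreduceJobReducesSuite extends FunSuite {
      test("SET mapreduce.job.reduces converts to spark.sql.shuffle.partitions") {
        val spark = SparkSession.builder().master("local[1]").getOrCreate()
        try {
          spark.sql("SET mapreduce.job.reduces=10")
          assert(spark.conf.get("spark.sql.shuffle.partitions") == "10")
        } finally {
          spark.stop()
        }
      }
    }
    ```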


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wangyum/spark SPARK-19693

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17020.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #17020
    
----
commit c29ca8a5158a5caab72a866fb2043d88683fb44a
Author: Yuming Wang <wgy...@gmail.com>
Date:   2017-02-20T10:57:05Z

    SET mapreduce.job.reduces automatically converted to 
spark.sql.shuffle.partitions

commit 79484664490cb24ae3cf51667758902edfb6b896
Author: Yuming Wang <wgy...@gmail.com>
Date:   2017-02-22T03:23:12Z

    Rework the test case for better readability

----

