Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-05 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/#review49669
---

Ship it!


Nice work! Thank you very much for the contribution! I will leave this open for a 
little while in case anyone else has comments.

- Brock Noland


On Aug. 5, 2014, 7:19 a.m., chengxiang li wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24221/
> ---
> 
> (Updated Aug. 5, 2014, 7:19 a.m.)
> 
> 
> Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.
> 
> 
> Bugs: HIVE-7567
> https://issues.apache.org/jira/browse/HIVE-7567
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Support automatically adjusting the reducer number, the same as MR does, 
> configured through the following three parameters:
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 
> 6dca6c9 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 
> 3840318 
> 
> Diff: https://reviews.apache.org/r/24221/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> chengxiang li
> 
>



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-05 Thread Lars Francke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/#review49574
---

Ship it!


Ship It!

- Lars Francke


On Aug. 5, 2014, 7:19 a.m., chengxiang li wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24221/
> ---
> 
> (Updated Aug. 5, 2014, 7:19 a.m.)
> 
> 
> Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.
> 
> 
> Bugs: HIVE-7567
> https://issues.apache.org/jira/browse/HIVE-7567
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Support automatically adjusting the reducer number, the same as MR does, 
> configured through the following three parameters:
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 
> 6dca6c9 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 
> 3840318 
> 
> Diff: https://reviews.apache.org/r/24221/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> chengxiang li
> 
>



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-05 Thread chengxiang li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/
---

(Updated Aug. 5, 2014, 7:19 a.m.)


Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.


Changes
---

fix minor style issues.


Bugs: HIVE-7567
https://issues.apache.org/jira/browse/HIVE-7567


Repository: hive-git


Description
---

Support automatically adjusting the reducer number, the same as MR does, 
configured through the following three parameters:
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=
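The MR-style estimation these three parameters drive can be sketched roughly as follows. This is an illustrative, hypothetical helper (`ReducerEstimator` and its exact rounding are not the actual Hive code): the reducer count is the input size divided by the per-reducer byte target, capped at the maximum, unless a constant count is forced.

```java
// Hypothetical sketch of the reducer-count heuristic described above;
// not the actual Hive implementation.
public class ReducerEstimator {

    // If constantReducers > 0 (mapreduce.job.reduces), use it directly.
    // Otherwise estimate ceil(totalInputBytes / bytesPerReducer), clamped
    // to [1, maxReducers] (hive.exec.reducers.max).
    static int estimateReducers(long totalInputBytes, long bytesPerReducer,
                                int maxReducers, int constantReducers) {
        if (constantReducers > 0) {
            return constantReducers;
        }
        int estimated = (int) Math.ceil((double) totalInputBytes / bytesPerReducer);
        return Math.max(1, Math.min(maxReducers, estimated));
    }

    public static void main(String[] args) {
        // 10 GB input at 256 MB per reducer -> 40 reducers
        System.out.println(estimateReducers(10L << 30, 256L << 20, 1009, -1)); // 40
        // estimate exceeds the cap -> capped at maxReducers
        System.out.println(estimateReducers(10L << 30, 1L << 20, 100, -1));    // 100
        // constant count forced via mapreduce.job.reduces
        System.out.println(estimateReducers(10L << 30, 256L << 20, 1009, 7));  // 7
    }
}
```

Under this sketch, lowering hive.exec.reducers.bytes.per.reducer increases parallelism until hive.exec.reducers.max caps it, and setting mapreduce.job.reduces bypasses the estimate entirely.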


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 6dca6c9 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318 

Diff: https://reviews.apache.org/r/24221/diff/


Testing
---


Thanks,

chengxiang li



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-05 Thread chengxiang li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/
---

(Updated Aug. 5, 2014, 7:14 a.m.)


Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.


Changes
---

rebase the branch and update patch.


Bugs: HIVE-7567
https://issues.apache.org/jira/browse/HIVE-7567


Repository: hive-git


Description
---

Support automatically adjusting the reducer number, the same as MR does, 
configured through the following three parameters:
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 6dca6c9 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318 

Diff: https://reviews.apache.org/r/24221/diff/


Testing
---


Thanks,

chengxiang li



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-04 Thread Lars Francke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/#review49565
---

Ship it!


A few more minor code style issues, apart from that it looks good. Thank you!


ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java


Wrong indentation (3 -> 2)



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java


no need to wrap



ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java


missing space



ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java


missing space


- Lars Francke


On Aug. 5, 2014, 5:32 a.m., chengxiang li wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24221/
> ---
> 
> (Updated Aug. 5, 2014, 5:32 a.m.)
> 
> 
> Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.
> 
> 
> Bugs: HIVE-7567
> https://issues.apache.org/jira/browse/HIVE-7567
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Support automatically adjusting the reducer number, the same as MR does, 
> configured through the following three parameters:
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java 
> abd4718 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java 
> f262065 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 
> 73553ee 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 
> 75a1033 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 
> 3840318 
> 
> Diff: https://reviews.apache.org/r/24221/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> chengxiang li
> 
>



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-04 Thread chengxiang li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/
---

(Updated Aug. 5, 2014, 5:32 a.m.)


Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.


Bugs: HIVE-7567
https://issues.apache.org/jira/browse/HIVE-7567


Repository: hive-git


Description
---

Support automatically adjusting the reducer number, the same as MR does, 
configured through the following three parameters:
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java abd4718 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java f262065 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 
73553ee 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 75a1033 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318 

Diff: https://reviews.apache.org/r/24221/diff/


Testing
---


Thanks,

chengxiang li



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-04 Thread chengxiang li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/
---

(Updated Aug. 5, 2014, 3:53 a.m.)


Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.


Bugs: HIVE-7567
https://issues.apache.org/jira/browse/HIVE-7567


Repository: hive-git


Description
---

Support automatically adjusting the reducer number, the same as MR does, 
configured through the following three parameters:
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java abd4718 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java f262065 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 
73553ee 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 75a1033 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318 

Diff: https://reviews.apache.org/r/24221/diff/


Testing
---


Thanks,

chengxiang li



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-04 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/#review49557
---


This looks great! I've made a few comments below, all of which are minor.


ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java


Can this be final?



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java


= null is not required



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java


I know this method name came from another section of code. However, shall we 
rename it determineNumberOfReducers or configureNumberOfReducers, since it's not 
a setter?



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java.orig


let's remove the .orig file



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java


I think we have these public member variables because this code was copied 
from Tez? However, public member variables are not standard. Can you generate 
accessors?



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java


Can you add a TODO explaining why this is still commented out, and open a JIRA 
to fix it?


- Brock Noland


On Aug. 5, 2014, 3:43 a.m., chengxiang li wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24221/
> ---
> 
> (Updated Aug. 5, 2014, 3:43 a.m.)
> 
> 
> Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.
> 
> 
> Bugs: HIVE-7567
> https://issues.apache.org/jira/browse/HIVE-7567
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Support automatically adjusting the reducer number, the same as MR does, 
> configured through the following three parameters:
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java 
> abd4718 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java 
> f262065 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 
> 73553ee 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
>   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
>   
> ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 
> 75a1033 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java.orig 
> PRE-CREATION 
>   
> ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java
>  PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 
> 3840318 
> 
> Diff: https://reviews.apache.org/r/24221/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> chengxiang li
> 
>



Re: Review Request 24221: HIVE-7567, support automatic adjusting reducer number for hive on spark job

2014-08-04 Thread chengxiang li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24221/
---

(Updated Aug. 5, 2014, 3:43 a.m.)


Review request for hive, Brock Noland, Lars Francke, and Szehon Ho.


Bugs: HIVE-7567
https://issues.apache.org/jira/browse/HIVE-7567


Repository: hive-git


Description
---

Support automatically adjusting the reducer number, the same as MR does, 
configured through the following three parameters:
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/GroupByShuffler.java abd4718 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SortByShuffler.java f262065 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 
73553ee 
  ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java fb25596 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java d7e1fbf 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SetSparkReducerParallelism.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 75a1033 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java.orig 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 3840318 

Diff: https://reviews.apache.org/r/24221/diff/


Testing
---


Thanks,

chengxiang li