> On Nov. 15, 2014, 2:34 a.m., Szehon Ho wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/spark/ShuffleTran.java, line 39
> > <https://reviews.apache.org/r/28064/diff/1/?file=764642#file764642line39>
> >
> >     OK, if we pass NONE in, does Spark handle it as a no-op? If so, that
> >     would be cleaner for our code. I'm a bit confused about what NONE
> >     means.
> >     
> >     If we don't want to pass NONE due to side effects, can we just change
> >     the HadoopRDD call to:
> >     
> >     storageHandler.equals(StorageHandler.NONE) ? hadoopRdd : ...
> >     
> >     Then the logic is centralized there.
> 
> Jimmy Xiang wrote:
>     Sure. Will fix it as suggested. Thanks.

persist() also registers the RDD for GC cleanup, but there seems to be no
extra cost besides that. Either way is fine with me.
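
For illustration, here is a minimal sketch of the centralized check being
suggested above (the names are illustrative only, not the actual Hive code).
As noted, calling persist() even with StorageLevel.NONE would still register
the RDD with Spark's cleanup machinery, so guarding the call avoids that small
extra step:

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.StorageLevels;
    import org.apache.spark.storage.StorageLevel;

    public class CacheUtil {
      // Centralize the decision in one place: persist only when a real
      // storage level is configured, otherwise return the RDD untouched.
      public static <K, V> JavaPairRDD<K, V> cacheIfRequested(
          JavaPairRDD<K, V> rdd, StorageLevel level) {
        return StorageLevels.NONE.equals(level) ? rdd : rdd.persist(level);
      }
    }

    // Hypothetical usage at the HadoopRDD call site:
    //   hadoopRDD = CacheUtil.cacheIfRequested(hadoopRDD, configuredLevel);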


- Chao


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28064/#review61616
-----------------------------------------------------------


On Nov. 15, 2014, 12:32 a.m., Jimmy Xiang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28064/
> -----------------------------------------------------------
> 
> (Updated Nov. 15, 2014, 12:32 a.m.)
> 
> 
> Review request for hive and Xuefu Zhang.
> 
> 
> Bugs: HIVE-8844
>     https://issues.apache.org/jira/browse/HIVE-8844
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Changed the Spark cache policy to be configurable, with memory+disk as the default.
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/MapInput.java 79baea7 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/ShuffleTran.java 8565ba0 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java 11f4236 
> 
> Diff: https://reviews.apache.org/r/28064/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Jimmy Xiang
> 
>
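
For reference, here is a minimal sketch of what the configurable cache policy
described in the request above might look like. The property name below is
hypothetical and not necessarily what the patch introduces; the default
mirrors the memory+disk behavior mentioned in the description:

    import java.util.Properties;
    import org.apache.spark.api.java.StorageLevels;
    import org.apache.spark.storage.StorageLevel;

    public class CachePolicy {
      // Read the desired cache level from configuration, defaulting to
      // MEMORY_AND_DISK; fall back to the default on an unrecognized value.
      static StorageLevel cacheLevel(Properties conf) {
        // "hive.spark.rdd.cache.level" is a made-up key for illustration.
        String name = conf.getProperty("hive.spark.rdd.cache.level",
            "MEMORY_AND_DISK");
        try {
          return StorageLevel.fromString(name);
        } catch (IllegalArgumentException e) {
          return StorageLevels.MEMORY_AND_DISK;
        }
      }
    }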
