-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/69107/#review210041
-----------------------------------------------------------




ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java
Line 63 (original), 67 (patched)
<https://reviews.apache.org/r/69107/#comment294681>

    Initialize this above and mark it as final; since it's accessed by the 
MemoryInfoLogger thread it needs to be thread safe.
    
    Use a custom `ThreadFactory` for the pool. You can use Guava's 
`ThreadFactoryBuilder` - the pool should use daemon threads, specify a name 
format that includes something like `MemoryAndRowLogger`, and a custom 
uncaught exception handler that just logs any exceptions that are caught.
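    
    A minimal sketch of what that could look like (the class name, field name, 
name format, and logger are illustrative only, not what the patch has to use):
    
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        
        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;
        
        import com.google.common.util.concurrent.ThreadFactoryBuilder;
        
        public class MemoryAndRowLoggerPoolExample {
          private static final Logger LOG =
              LoggerFactory.getLogger(MemoryAndRowLoggerPoolExample.class);
        
          // Daemon threads so the pool never keeps the JVM alive, and an
          // uncaught exception handler that just logs failures from the
          // scheduled logging task.
          private final ScheduledExecutorService memoryAndRowLogExecutor =
              Executors.newSingleThreadScheduledExecutor(
                  new ThreadFactoryBuilder()
                      .setDaemon(true)
                      .setNameFormat("MemoryAndRowLogger-%d")
                      .setUncaughtExceptionHandler((thread, throwable) ->
                          LOG.error("Uncaught exception in " + thread.getName(), throwable))
                      .build());
        }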



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java
Lines 113 (patched)
<https://reviews.apache.org/r/69107/#comment294682>

    Instead of just calling `shutdownNow`, you should call `shutdown`, then 
run `awaitTermination` with a wait time of, say, 30 seconds, and then call 
`shutdownNow`. This allows for an orderly shutdown of the executor: all 
in-progress tasks are allowed to complete.
    
    This will also require handling the race condition where the 
`MemoryInfoLogger` tries to schedule a task on a shutdown executor. You will 
probably have to use a custom `RejectedExecutionHandler` - probably 
`ThreadPoolExecutor.DiscardPolicy`.
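    
    Roughly along these lines (the 30-second timeout, the executor field name, 
and `threadFactory` are placeholders for whatever the patch ends up using; 
everything here is from `java.util.concurrent`):
    
        // Build the pool with a discard policy so tasks the MemoryInfoLogger
        // submits after shutdown are silently dropped instead of throwing
        // RejectedExecutionException.
        ScheduledExecutorService memoryAndRowLogExecutor =
            new ScheduledThreadPoolExecutor(1, threadFactory,
                new ThreadPoolExecutor.DiscardPolicy());
        
        // In close(): let in-flight logging tasks finish before forcing the
        // pool down.
        memoryAndRowLogExecutor.shutdown();
        try {
          if (!memoryAndRowLogExecutor.awaitTermination(30, TimeUnit.SECONDS)) {
            memoryAndRowLogExecutor.shutdownNow();
          }
        } catch (InterruptedException e) {
          memoryAndRowLogExecutor.shutdownNow();
          Thread.currentThread().interrupt();
        }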


- Sahil Takiar


On Oct. 24, 2018, 8:55 p.m., Bharathkrishna Guruvayoor Murali wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/69107/
> -----------------------------------------------------------
> 
> (Updated Oct. 24, 2018, 8:55 p.m.)
> 
> 
> Review request for hive, Antal Sinkovits, Sahil Takiar, and Vihang 
> Karajgaonkar.
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Improve record and memory usage logging in SparkRecordHandler
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java 88dd12c05ade417aca4cdaece4448d31d4e1d65f 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMergeFileRecordHandler.java 8880bb604e088755dcfb0bcb39689702fab0cb77 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java cb5bd7ada2d5ad4f1f654cf80ddaf4504be5d035 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkReduceRecordHandler.java 20e7ea0f4e8d4ff79dddeaab0406fc7350d22bd7 
> 
> 
> Diff: https://reviews.apache.org/r/69107/diff/2/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Bharathkrishna Guruvayoor Murali
> 
>
