-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53166/#review153804
-----------------------------------------------------------



Thanks for the patch, Zsombor!

Good to get rid of one more flaky test :)

I am not sure whether we want to keep the possibility of adding new listeners 
after a job has been started, or not.
- If we want to allow adding listeners after the job has started, then I think 
we should keep the JobHandleImpl.addListener method public and leave it on the 
JobHandle interface too. If we choose this, at least a comment should describe 
that after submission of the Job the listener calls are unpredictable because 
of the race condition.
- If we do not want to allow adding listeners after the job has started, then 
we should pass the listeners to the constructor of JobHandleImpl and keep the 
addListener method private (a rough sketch of this option follows below).
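
For what it's worth, here is a rough sketch of how the second option could 
look, with the listeners fixed before submission. The class and method names 
below are just illustrative stand-ins, not the actual classes from the patch:

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Listener interface, simplified for the sketch.
    interface SketchListener<T extends Serializable> {
      void onJobStateChanged(SketchJobHandle<T> handle);
    }

    // Sketch of option 2: the listeners are fixed when the handle is
    // created, i.e. before the job is submitted, so no state change can
    // be missed by a late registration.
    class SketchJobHandle<T extends Serializable> {

      private final List<SketchListener<T>> listeners;

      // Listeners are supplied up front; there is no public addListener,
      // so nothing can register after submission and race with state changes.
      SketchJobHandle(List<SketchListener<T>> listeners) {
        this.listeners =
            Collections.unmodifiableList(new ArrayList<>(listeners));
      }

      // Invoked internally on every state change; each listener registered
      // at construction time is guaranteed to be notified.
      void fireStateChanged() {
        for (SketchListener<T> listener : listeners) {
          listener.onJobStateChanged(this);
        }
      }
    }

With something along these lines, the submit call can take the listeners as 
arguments and build the handle before sending the job, which is essentially 
what the description below proposes.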

This is more of a nicety than anything else.

Otherwise the patch seems good to me. Thanks again!

I hope someone with more Spark experience will check it too.

Peter

- Peter Vary


On Oct. 25, 2016, 3:20 p.m., Barna Zsombor Klara wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/53166/
> -----------------------------------------------------------
> 
> (Updated Oct. 25, 2016, 3:20 p.m.)
> 
> 
> Review request for hive, Mohit Sabharwal, Siddharth Seth, and Xuefu Zhang.
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> HIVE-14910: Flaky test: TestSparkClient.testJobSubmission
> I ran into this problem today while investigating a flaky test. I think the 
> failure comes from this race condition: the listener can be added to the 
> JobHandle only after the job has been submitted, so there is no guarantee 
> that every method of the listener will be invoked; some state changes may 
> have happened before the caller received the handle back.
> I propose a slight change in the API: we should pass the listeners as 
> arguments to the submit method, so we can set them on the handle before the 
> job itself is submitted. This way any status change should be signalled to 
> the listener.
> 
> 
> Diffs
> -----
> 
>   spark-client/src/main/java/org/apache/hive/spark/client/JobHandle.java 
> 44aa255a8271894ed3e787c3e7d1323628db63c4 
>   spark-client/src/main/java/org/apache/hive/spark/client/JobHandleImpl.java 
> 17c8f40edd472682d5604f41980d06e60cc92893 
>   spark-client/src/main/java/org/apache/hive/spark/client/SparkClient.java 
> 3e921a5d9b77966d368684ee7b6f1c861ac60e08 
>   
> spark-client/src/main/java/org/apache/hive/spark/client/SparkClientImpl.java 
> e2a30a76e0f7fe95d8a453f502311baa08abcbe2 
>   
> spark-client/src/test/java/org/apache/hive/spark/client/TestSparkClient.java 
> b95cd7a05d44224b53bf2cef9170146b8b2eb4a8 
> 
> Diff: https://reviews.apache.org/r/53166/diff/
> 
> 
> Testing
> -------
> 
> Unit tests modified and tested.
> I also ran a simple query with HoS as the execution engine.
> 
> 
> Thanks,
> 
> Barna Zsombor Klara
> 
>
