[ https://issues.apache.org/jira/browse/SPARK-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15680080#comment-15680080 ]

Saikat Kanjilal edited comment on SPARK-9487 at 11/19/16 11:59 PM:
-------------------------------------------------------------------

OK, I guess I spoke too soon :). On to the next set of challenges; the Jenkins 
build report is here:  
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68897/


I ran each of these tests locally, both individually and together as a suite, 
and they all passed. Any ideas on how to address these failures?


> Use the same num. worker threads in Scala/Python unit tests
> -----------------------------------------------------------
>
>                 Key: SPARK-9487
>                 URL: https://issues.apache.org/jira/browse/SPARK-9487
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, Spark Core, SQL, Tests
>    Affects Versions: 1.5.0
>            Reporter: Xiangrui Meng
>              Labels: starter
>         Attachments: ContextCleanerSuiteResults, HeartbeatReceiverSuiteResults
>
>
> In Python we use `local[4]` for unit tests, while in Scala/Java we use 
> `local[2]` and `local` for some unit tests in SQL, MLlib, and other 
> components. If an operation depends on partition IDs, e.g., a random number 
> generator, this will lead to different results in Python and Scala/Java. It 
> would be nice to use the same number in all unit tests.
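As a quick illustration of the issue described above, here is a minimal, 
hypothetical sketch (not from this ticket) that seeds a random number 
generator from the partition ID. Under `local[2]` and `local[4]` the input is 
split into a different number of partitions, so the same job produces 
different results:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import scala.util.Random

object PartitionSeedDemo {
  def main(args: Array[String]): Unit = {
    for (master <- Seq("local[2]", "local[4]")) {
      val sc = new SparkContext(
        new SparkConf().setMaster(master).setAppName("partition-seed-demo"))
      // parallelize() uses defaultParallelism, so the data lands in
      // 2 partitions under local[2] but 4 partitions under local[4].
      val result = sc.parallelize(1 to 8)
        .mapPartitionsWithIndex { (pid, iter) =>
          val rng = new Random(pid) // seed derived from the partition ID
          iter.map(x => x + rng.nextInt(100))
        }
        .collect()
      println(s"$master -> ${result.mkString(", ")}")
      sc.stop()
    }
  }
}
{code}

Pinning every test harness to the same `local[n]` removes this source of 
divergence between the Scala/Java and Python suites.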


