GitHub user squito commented on the issue:

    https://github.com/apache/spark/pull/16831
  
    @jinxing64 that way of testing is fine, but I find it's much faster to use sbt.
    
    http://www.scala-sbt.org/0.13/docs/Testing.html
    
    ```
    build/sbt -Pyarn -Phadoop-2.6 -Phive-thriftserver -Dhadoop.version=2.6.5
    [this will put you in an sbt console]
    > project core
    > testOnly *DAGSchedulerSuite
    [run all tests that match the pattern -- in this case, one suite]
    > testOnly *spark.scheduler.*
    [this time we run everything in the scheduler package]
    > ~testOnly *DAGSchedulerSuite
    [the '~' in front means that as we modify the code (e.g. in another terminal or an IDE), sbt will re-run the tests every time the source changes.]
    > ~testOnly *DAGSchedulerSuite -- -z "SPARK-12345"
    [as above, but only run tests within that suite whose name matches the pattern]
    ```
    
    The last variant is the quickest way for me to run one test repeatedly as I'm developing.  Because it re-runs every time I save changes to disk, it often runs while my code is in a bad state and everything fails.  But that's no big deal; it just runs again when I fix things, so I ignore the window with the running tests until I think I have things in an OK state.
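
    To make the `-z` filter concrete, here's a quick sketch of what it matches against (a hypothetical suite; ScalaTest compares the argument as a substring of each test's name):

    ```scala
    import org.scalatest.FunSuite

    // Hypothetical suite, just to illustrate -z matching.
    class DAGSchedulerSuiteExample extends FunSuite {

      // `testOnly *DAGSchedulerSuiteExample -- -z "SPARK-12345"` selects this
      // test, because its name contains the substring "SPARK-12345".
      test("SPARK-12345: stage resubmission after fetch failure") {
        assert(1 + 1 === 2)
      }

      // ...and skips this one, whose name does not match.
      test("unrelated scheduler behavior") {
        assert(true)
      }
    }
    ```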
    
    There is some more description of the arguments to scalatest itself (e.g. `-z`) here: http://www.scalatest.org/user_guide/using_the_runner
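
    For example (a sketch; per that page, `-z` does substring matching on test names, while `-t` requires the exact full name):

    ```
    > testOnly *DAGSchedulerSuite -- -z "fetch failure"
    [runs every test in the suite whose name contains "fetch failure"]
    > testOnly *DAGSchedulerSuite -- -t "SPARK-12345: stage resubmission after fetch failure"
    [runs only the test with exactly that name; the name here is hypothetical]
    ```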

