GitHub user kayousterhout opened a pull request:

    https://github.com/apache/spark/pull/1024

    Added a TaskSetManager unit test.

    This test verifies that when no alive executors
    satisfy a particular locality level, the TaskSetManager
    never uses that level as the maximum allowed locality
    level (this optimization ensures that a job doesn't
    wait extra time trying to satisfy a scheduling
    locality level that is impossible).
    
    @mateiz and @lirui-intel, this unit test illustrates an issue
    with #892 (it fails with that patch).
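    To make the behavior under test concrete, here is a minimal Python sketch of the optimization this PR exercises. It is not Spark's actual implementation (Spark's TaskSetManager is written in Scala); all names, parameters, and data structures below are hypothetical, chosen only to illustrate the idea that a locality level is included in the allowed set only when some alive executor can actually satisfy it.

    ```python
    # Hypothetical sketch of computing the valid locality levels for a task set.
    # A level is "valid" only if at least one alive executor (or host) could
    # satisfy it; impossible levels are skipped so the scheduler never waits
    # on them. ANY is always valid as the fallback.

    PROCESS_LOCAL = "PROCESS_LOCAL"
    NODE_LOCAL = "NODE_LOCAL"
    ANY = "ANY"

    def compute_valid_locality_levels(pending_tasks_by_executor,
                                      pending_tasks_by_host,
                                      alive_executors,
                                      alive_hosts):
        """Return the locality levels worth waiting for, best-first.

        pending_tasks_by_executor / pending_tasks_by_host map an executor or
        host id to the tasks that prefer it; alive_executors / alive_hosts are
        the currently live ones.
        """
        levels = []
        # PROCESS_LOCAL is only achievable if some preferred executor is alive.
        if any(e in alive_executors for e in pending_tasks_by_executor):
            levels.append(PROCESS_LOCAL)
        # NODE_LOCAL is only achievable if some preferred host is alive.
        if any(h in alive_hosts for h in pending_tasks_by_host):
            levels.append(NODE_LOCAL)
        # ANY is always possible, so a job can never get stuck.
        levels.append(ANY)
        return levels
    ```

    In this sketch, if a task prefers executor "exec1" on host "hostA" but "hostA" is dead, NODE_LOCAL is omitted from the result, so the scheduler's maximum allowed level never waits on it; a unit test like the one in this PR would assert exactly that.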

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kayousterhout/spark-1 scheduler_unit_test

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/1024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1024
    
----
commit de6a08f5c516857688b61568d267755aa3444ded
Author: Kay Ousterhout <[email protected]>
Date:   2014-06-09T17:56:39Z

    Added a TaskSetManager unit test.
    
    This test ensures that, as an optimization, when no
    alive executors satisfy a particular locality level,
    the TaskSetManager never uses that level as the maximum
    allowed locality level (so a job doesn't wait extra
    time trying to satisfy a scheduling locality level
    that is impossible).

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
