GitHub user srowen opened a pull request:

    https://github.com/apache/spark/pull/4873

    SPARK-4044 [CORE] Thriftserver fails to start when JAVA_HOME points to JRE instead of JDK

    So, I think it would be a step too far to tell people they have to run Spark with a JDK instead of a JRE; it's uncommon to put a JDK in production. This runtime script therefore needs to cope with `jar` not being available.
    
    I've added explicit checks to the two places where it's used, though it's only the second (datanucleus) case that matters.
    
    At least there's an error message now, but how about just adding the jars anyway, if they're present, in this case?
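    A minimal sketch of the kind of explicit availability check described above. The variable name `JAR_CMD` mirrors the Spark launch scripts, but the messages and the non-fatal fallback are illustrative assumptions, not the patch itself:

```shell
# Resolve the jar tool: prefer $JAVA_HOME/bin/jar, otherwise rely on PATH.
# (JAR_CMD follows the naming in Spark's scripts; paths here are illustrative.)
if [ -n "$JAVA_HOME" ]; then
  JAR_CMD="$JAVA_HOME/bin/jar"
else
  JAR_CMD="jar"
fi

# Explicitly check that the tool exists before invoking it, so a JRE-only
# JAVA_HOME produces a clear warning instead of a confusing startup failure.
if ! command -v "$JAR_CMD" > /dev/null 2>&1; then
  echo "Cannot find 'jar'; JAVA_HOME may point to a JRE rather than a JDK." 1>&2
  # Rather than aborting, the datanucleus jars could still be added here
  # if they are present on disk.
fi
```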
    
    CC @JoshRosen 

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/srowen/spark SPARK-4044

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/4873.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4873
    
----
commit 471be8754cf6f77feb0549c72869422d12089e95
Author: Sean Owen <so...@cloudera.com>
Date:   2015-03-03T13:22:59Z

    Explicit check to see if JAR_CMD is available

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
