Github user tnachen commented on the pull request:

    https://github.com/apache/spark/pull/2145#issuecomment-53516030
  
    I think we need to consolidate all our fixes now :) We all understand 
the problem and the general idea; the issue is that none of the fixes we've 
reviewed from each other so far completely solves it.
    
    As we now know, what used to happen is that extra options such as java 
settings were passed as one of the params to spark-class, which is incorrect.
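    A minimal sketch of the bug just described, with illustrative names 
(this is not Spark's actual code, and `SPARK_JAVA_OPTS` is used here only as 
an example of an env var spark-class reads):

```scala
// Hypothetical sketch: JVM options appended as arguments *to* spark-class
// are seen by the launched class as application arguments, instead of being
// applied to the JVM itself.
object LauncherBug {
  // Wrong: the java setting becomes spark-class's first positional param.
  def buggyCommand(javaOpts: Seq[String], mainClass: String): Seq[String] =
    Seq("./bin/spark-class") ++ javaOpts :+ mainClass

  // Intended shape: the options reach the JVM, e.g. via an env var that
  // spark-class reads before exec'ing java.
  def fixedCommand(javaOpts: Seq[String], mainClass: String): (Map[String, String], Seq[String]) =
    (Map("SPARK_JAVA_OPTS" -> javaOpts.mkString(" ")), Seq("./bin/spark-class", mainClass))
}
```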
    
    Now, your fix here, although it uses CommandUtils (the consolidated util 
for generating commands), still runs compute-classpath and sets the classpath 
information directly in the CommandInfo's value. I tested my fix #2103 
earlier with the mesos master and slave on the same host, and although it 
ran, it's still not enough when running a mesos cluster with the master and 
slaves on separate hosts, as the classpath was wrong.
    
    I think we should not assume that the classpath on the framework 
launching the tasks is the same as on the slave.
    
    I think we should either 1) still use spark-class and let spark-class 
resolve classpaths, or 2) update CommandUtils to not run compute-classpath on 
the spot, but instead let the slave run compute-classpath to produce the 
classpath that /usr/bin/java runs with.
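    A minimal sketch of what option 2) could look like, assuming hypothetical 
names (`DeferredClasspath`, `buildCommand`): rather than the framework 
running compute-classpath and baking its own classpath into the CommandInfo's 
value, emit a shell command string that makes the slave run compute-classpath 
at launch time, so the classpath reflects the slave's installation rather 
than the scheduler host's.

```scala
// Hypothetical sketch of deferring classpath resolution to the slave.
object DeferredClasspath {
  def buildCommand(sparkHome: String, javaOpts: Seq[String], mainClass: String): String = {
    // Backticks defer evaluation to the slave's shell at task launch, so
    // compute-classpath.sh runs on the slave, not on the framework host.
    val cp = s"""`"$sparkHome/bin/compute-classpath.sh"`"""
    (Seq("/usr/bin/java", "-cp", cp) ++ javaOpts :+ mainClass).mkString(" ")
  }
}
```

    The resulting string would go into CommandInfo's value as a shell 
command, the same slot the current fix fills with a pre-computed classpath.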


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
