spark-submit in mesos cluster mode --jars option not working
Creating a new thread for this. Is anyone able to use --jars with spark-submit in Mesos cluster mode? We have tried a local file, an HDFS file, and a file served from an HTTP server; --jars did not work with any of these approaches. I saw a couple of similar open questions with no answers, e.g. http://stackoverflow.com/questions/33978672/spark-mesos-cluster-mode-who-uploads-the-jar. Mesos cluster mode with no jar upload capability is very limiting. Does anyone have a solution to this?

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/spark-submit-in-mesos-cluster-mode-jars-option-not-working-tp28690.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
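For context, a minimal sketch of the kind of invocation that fails for us, followed by a workaround we are experimenting with. The host, jar names, and class name are placeholders, and the spark.mesos.uris approach (having the Mesos agent fetch the jar into the executor sandbox) is something we are trying, not a confirmed fix:

```shell
# Failing invocation (host, paths, and class name are placeholders):
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --jars http://some-http-server/libs/dep1.jar,http://some-http-server/libs/dep2.jar \
  http://some-http-server/apps/my-app.jar

# Workaround being tried: ask the Mesos agent to fetch the jar into the
# executor sandbox via spark.mesos.uris, and add it to the executor
# classpath explicitly (fetched files land in the sandbox working
# directory, hence the relative path -- this is our assumption):
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --conf spark.mesos.uris=http://some-http-server/libs/dep1.jar \
  --conf spark.executor.extraClassPath=dep1.jar \
  http://some-http-server/apps/my-app.jar
```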
Re: Not able to pass 3rd party jars to Mesos executors
Hi,

Is anyone able to use --jars with spark-submit in Mesos cluster mode? We have tried a local file, an HDFS file, and a file served from an HTTP server; --jars did not work with any of these approaches. I saw a couple of similar open questions with no answers, e.g. http://stackoverflow.com/questions/33978672/spark-mesos-cluster-mode-who-uploads-the-jar. Mesos cluster mode with no jar upload capability is very limiting. Does anyone have a solution to this?

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Not-able-pass-3rd-party-jars-to-mesos-executors-tp26918p28689.html
Fwd: Saving input schema along with PipelineModel
Hi All,

Is there any way I can save the input schema along with an ML PipelineModel object? This feature would be really helpful when loading the model and running transform: the user could get the schema back and prepare the dataset for model.transform without needing to remember it.

The JIRA below mentions this as one of the updates, but I am not able to find any sub-task for it (and it is marked as resolved): https://issues.apache.org/jira/browse/SPARK-6725

"*UPDATE*: In spark.ml, we could save feature metadata using DataFrames. Other libraries and formats can support this, and it would be great if we could too. We could do either of the following:
- save() optionally takes a dataset (or schema), and load will return a (model, schema) pair.
- Models themselves save the input schema.
Both options would mean inheriting from new Saveable, Loadable types."

Please let me know if there is any update or JIRA on this.

Thanks,
Satya

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Fwd-Saving-input-schema-along-with-PipelineModel-tp27450.html
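In the meantime, one workaround is to write the schema JSON next to the saved model directory ourselves. Below is a minimal self-contained sketch of the idea; in real code the schema string would come from df.schema.json() and be restored with StructType.fromJson(json.loads(...)). The file name input_schema.json and the helper functions are our own invention, not a Spark API:

```python
import json
import os
import tempfile

def save_schema(model_dir, schema_json):
    # Persist the input schema next to the saved PipelineModel files,
    # under a file name of our own choosing.
    os.makedirs(model_dir, exist_ok=True)
    with open(os.path.join(model_dir, "input_schema.json"), "w") as f:
        f.write(schema_json)

def load_schema(model_dir):
    # Read the schema JSON back when the model is loaded.
    with open(os.path.join(model_dir, "input_schema.json")) as f:
        return f.read()

# Example schema string in the shape Spark emits from df.schema.json();
# the field names here are placeholders.
schema = json.dumps({
    "type": "struct",
    "fields": [
        {"name": "features", "type": "double", "nullable": False, "metadata": {}},
        {"name": "label", "type": "double", "nullable": False, "metadata": {}},
    ],
})

model_dir = os.path.join(tempfile.mkdtemp(), "my_pipeline_model")
save_schema(model_dir, schema)
restored = json.loads(load_schema(model_dir))
print(restored["fields"][0]["name"])  # -> features
```

With this in place, the loading side can rebuild the expected input DataFrame shape before calling model.transform, instead of having to remember the schema out of band.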