Stavros Kontopoulos created SPARK-22728:
-------------------------------------------

             Summary: Unify artifact access for Mesos, Standalone and YARN 
when HDFS is available
                 Key: SPARK-22728
                 URL: https://issues.apache.org/jira/browse/SPARK-22728
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 2.3.0
            Reporter: Stavros Kontopoulos


A unified cluster layer for caching artifacts would be very useful, similar to 
the work that has been done for Flink: 
https://issues.apache.org/jira/browse/FLINK-6177
It would be great to make the Hadoop Distributed Cache available when we 
deploy jobs in Mesos and Standalone environments. HDFS is often present in 
end-to-end applications, so we should have an option to use it.
I am creating this JIRA as a follow-up to the discussion here: 
https://github.com/apache/spark/pull/18587#issuecomment-314718391
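The kind of workflow this would streamline can be sketched as staging an 
artifact into HDFS once and referencing it at submit time, so executors fetch 
it from the cluster filesystem instead of from the driver. The paths, class 
name, and master URL below are hypothetical, and this assumes an HDFS client 
is configured on the submitting host:

```shell
# Stage the application jar into HDFS (target path is hypothetical)
hdfs dfs -mkdir -p /apps/spark
hdfs dfs -put -f target/my-app.jar /apps/spark/my-app.jar

# Submit against a Mesos master, pointing at the HDFS copy; executors
# resolve the hdfs:// URI themselves rather than pulling from the driver
spark-submit \
  --master mesos://master:5050 \
  --class com.example.MyApp \
  hdfs:///apps/spark/my-app.jar
```

A unified caching layer would make this staging step implicit and shared 
across the Mesos, Standalone and YARN deployment modes.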



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
