GitHub user mgummelt commented on the issue:

    https://github.com/apache/spark/pull/17723
  
    @jerryshao Are those providers different from the Hive and HBase providers 
already in the Spark codebase?
    
    Regardless, with what I'm proposing, the `yarn.ServiceCredentialProvider` 
would remain, so you would still retain the ability to plug in those providers 
for the YARN scheduler to use.
    
    It's just the Mesos scheduler (or any other scheduler that uses core) that 
wouldn't support pluggable providers.  I'm not worried about this, because I 
haven't seen any demand from Mesos users for providers other than HDFS, Hive, 
and HBase.  We can expose the interface if and when that demand arises, but by 
keeping it private for now, we retain the option to redesign it if a new 
provider type emerges, which is one of the things @vanzin and I were worried 
about.  And we still wouldn't be exposing Hadoop types publicly, which is what 
@rxin and @mridulm were worried about.
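    For context, the pluggable-provider mechanism discussed above follows the 
standard JVM service-plugin pattern: implementations are discovered at runtime 
and each one is asked whether it needs to obtain tokens for its service. The 
sketch below illustrates that pattern only; the `CredentialProvider` interface 
and `KafkaCredentialProvider` class here are hypothetical stand-ins, not 
Spark's actual `yarn.ServiceCredentialProvider` API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical plugin interface, illustrating the shape of a pluggable
// credential provider (NOT Spark's actual yarn.ServiceCredentialProvider).
interface CredentialProvider {
    String serviceName();
    boolean credentialsRequired();
}

// A hypothetical third-party provider a user could plug in for the
// YARN scheduler to discover.
class KafkaCredentialProvider implements CredentialProvider {
    public String serviceName() { return "kafka"; }
    public boolean credentialsRequired() { return true; }
}

public class ProviderDemo {
    public static void main(String[] args) {
        // In a real deployment, implementations would typically be discovered
        // with java.util.ServiceLoader via a META-INF/services registration;
        // we register one directly here to keep the sketch self-contained.
        List<CredentialProvider> providers = new ArrayList<>();
        providers.add(new KafkaCredentialProvider());

        // The scheduler iterates over all registered providers and obtains
        // tokens only from those that report credentials are required.
        for (CredentialProvider p : providers) {
            if (p.credentialsRequired()) {
                System.out.println("obtaining tokens for " + p.serviceName());
            }
        }
    }
}
```

    Keeping this loop private to the scheduler, as proposed, means the 
discovery interface can still be redesigned before any external contract is 
frozen.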
