[ 
https://issues.apache.org/jira/browse/SPARK-26254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719574#comment-16719574
 ] 

Marcelo Vanzin commented on SPARK-26254:
----------------------------------------

bq. loaded the providers with ServiceLoader 

If you're going to use that, then you probably don't need a new module. Keep 
HDFS and HBase in core, move the Kafka one to some Kafka package (which one 
TBD, especially if you want to support both dstreams and structured streaming), 
and the Hive one to the Hive module.

There was concern about using ServiceLoader before, but if the interface being 
loaded is private to Spark, it's fine with me.
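As a sketch of the ServiceLoader mechanism under discussion: each module ships a META-INF/services/<interface-name> file naming its implementation, and the consuming side just iterates over ServiceLoader. The names below (TokenProvider, KafkaTokenProvider) are hypothetical stand-ins, not Spark's actual interfaces; the temp-directory class loader only simulates a provider module being on the classpath.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    // Hypothetical SPI, standing in for Spark's delegation token
    // provider interface (illustrative name only).
    public interface TokenProvider {
        String serviceName();
    }

    // An implementation that, in the proposal above, would live in a
    // separate module (e.g. a Kafka one).
    public static class KafkaTokenProvider implements TokenProvider {
        @Override
        public String serviceName() { return "kafka"; }
    }

    // Simulate a provider module on the classpath: write the
    // META-INF/services registration file into a temp directory, put it
    // on a class loader, and let ServiceLoader discover the provider.
    public static List<String> loadProviders() throws Exception {
        Path dir = Files.createTempDirectory("spi-demo");
        Path services = dir.resolve("META-INF").resolve("services");
        Files.createDirectories(services);
        Files.write(services.resolve(TokenProvider.class.getName()),
                Collections.singletonList(KafkaTokenProvider.class.getName()));

        ClassLoader loader = new URLClassLoader(
                new URL[] { dir.toUri().toURL() },
                ServiceLoaderDemo.class.getClassLoader());

        List<String> names = new ArrayList<>();
        for (TokenProvider p : ServiceLoader.load(TokenProvider.class, loader)) {
            names.add(p.serviceName());
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("loaded providers: " + loadProviders());
    }
}
```

With this pattern, core only needs to know the (private) interface; which providers exist is decided by what happens to be on the classpath.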

My original idea was to move everything (renewer code et al.) to a new module, 
and make core not have this feature at all; YARN, Mesos and others would depend 
on this new module. But the above change might be simpler / better.

> Move delegation token providers into a separate project
> -------------------------------------------------------
>
>                 Key: SPARK-26254
>                 URL: https://issues.apache.org/jira/browse/SPARK-26254
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Gabor Somogyi
>            Priority: Major
>
> There was a discussion in 
> [PR#22598|https://github.com/apache/spark/pull/22598] that there are several 
> provided dependencies inside the core project that shouldn't be there (e.g. 
> Hive and Kafka). This JIRA tracks solving that problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
