Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19631#discussion_r154403001
  
    --- Diff: project/MimaExcludes.scala ---
    @@ -36,6 +36,12 @@ object MimaExcludes {
     
       // Exclude rules for 2.3.x
       lazy val v23excludes = v22excludes ++ Seq(
    +    // SPARK-22372: Make cluster submission use SparkApplication.
    +    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkHadoopUtil.getSecretKeyFromUserCredentials"),
    --- End diff ---
    
    The change is 2.3 only.
    
    I've always questioned why `SparkHadoopUtil` is public in the first place. I'm not that worried about removing the functionality in this method, because Spark apps don't really need to depend on it: it's easy to call the Hadoop API directly, and the method did nothing outside of YARN mode anyway.
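
    For reference, a minimal sketch of going through the Hadoop API directly, which covers the same use case; the credential alias below is just an example, not anything Spark defines:

    ```scala
    import org.apache.hadoop.io.Text
    import org.apache.hadoop.security.UserGroupInformation

    // Read a secret key straight from the current user's Hadoop credentials,
    // without going through SparkHadoopUtil. The alias is illustrative only.
    val creds = UserGroupInformation.getCurrentUser.getCredentials
    val secret: Array[Byte] = creds.getSecretKey(new Text("some-secret-alias"))
    ```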


---
