[ https://issues.apache.org/jira/browse/SPARK-33212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17289728#comment-17289728 ]

Xiaochen Ouyang commented on SPARK-33212:
-----------------------------------------

Thanks for your reply, [~csun]!

Submit command:

spark-submit --master yarn --deploy-mode client --class org.apache.spark.examples.SparkPi /opt/spark/examples/jars/spark-examples*.jar

In ApplicationMaster.scala

/** Add the Yarn IP filter that is required for properly securing the UI. */
private def addAmIpFilter(driver: Option[RpcEndpointRef]) = {
  val proxyBase = System.getenv(ApplicationConstants.APPLICATION_WEB_PROXY_BASE_ENV)
  // The filter class name is hardcoded here and must be loadable by the driver.
  val amFilter = "org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter"
  val params = client.getAmIpFilterParams(yarnConf, proxyBase)
  driver match {
    case Some(d) =>
      d.send(AddWebUIFilter(amFilter, params.toMap, proxyBase))

    case None =>
      System.setProperty("spark.ui.filters", amFilter)
      params.foreach { case (k, v) => System.setProperty(s"spark.$amFilter.param.$k", v) }
  }
}

We need to load hadoop-yarn-server-web-proxy.jar into the driver classloader
when submitting a Spark-on-YARN application. Do you mean that we should copy
hadoop-yarn-server-web-proxy.jar to spark/jars?
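For example, one way to get the class onto the driver class path at submit
time (a sketch; the jar location is illustrative and depends on the local
Hadoop installation):

# hypothetical path to the unshaded web-proxy jar from a Hadoop 3.2.2 distribution
spark-submit --master yarn --deploy-mode client \
  --driver-class-path /opt/hadoop-3.2.2/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.2.jar \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples*.jar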

 

1. AmIpFilter ClassNotFoundException:

2021-02-24 14:52:56,617 INFO org.apache.spark.storage.BlockManager: Initialized 
BlockManager: BlockManagerId(driver, spark-worker-2, 38399, None)
2021-02-24 14:52:56,704 INFO 
org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend: Add WebUI 
Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, 
Map(PROXY_HOSTS -> spark-worker-1,spark-worker-2, PROXY_URI_BASES -> 
http://spark-worker-1:8088/proxy/application_1613961532167_0098,http://spark-worker-2:8088/proxy/application_1613961532167_0098,
 RM_HA_URLS -> spark-worker-1:8088,spark-worker-2:8088), 
/proxy/application_1613961532167_0098
2021-02-24 14:52:56,708 INFO org.apache.spark.ui.JettyUtils: Adding filter 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /jobs, 
/jobs/json, /jobs/job, /jobs/job/json, /stages, /stages/json, /stages/stage, 
/stages/stage/json, /stages/pool, /stages/pool/json, /storage, /storage/json, 
/storage/rdd, /storage/rdd/json, /environment, /environment/json, /executors, 
/executors/json, /executors/threadDump, /executors/threadDump/json, /logLevel, 
/static, /, /api, /jobs/job/kill, /stages/stage/kill.
2021-02-24 14:52:56,722 WARN org.spark_project.jetty.servlet.BaseHolder:
java.lang.ClassNotFoundException: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 at org.spark_project.jetty.util.Loader.loadClass(Loader.java:86)
 at org.spark_project.jetty.servlet.BaseHolder.doStart(BaseHolder.java:95)
 at org.spark_project.jetty.servlet.FilterHolder.doStart(FilterHolder.java:92)
 at 
org.spark_project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
 at 
org.spark_project.jetty.servlet.ServletHandler.initialize(ServletHandler.java:872)
 at 
org.spark_project.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1596)
 at 
org.spark_project.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1659)
 at 
org.spark_project.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1297)
 at 
org.spark_project.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1145)
 at 
org.spark_project.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1$$anonfun$apply$1.apply(JettyUtils.scala:325)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1$$anonfun$apply$1.apply(JettyUtils.scala:294)
 at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1.apply(JettyUtils.scala:294)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1.apply(JettyUtils.scala:293)
 at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at org.apache.spark.ui.JettyUtils$.addFilters(JettyUtils.scala:293)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$addWebUIFilter$3.apply(YarnSchedulerBackend.scala:176)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$addWebUIFilter$3.apply(YarnSchedulerBackend.scala:176)
 at scala.Option.foreach(Option.scala:257)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend.org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$addWebUIFilter(YarnSchedulerBackend.scala:176)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receive$1.applyOrElse(YarnSchedulerBackend.scala:262)
 at 
org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:117)
 at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
 at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
 at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

 

The jar list under spark/jars is as follows:

ll|grep hadoop-
-rw-r--r--. 1 root root 18327467 Feb 23 19:11 hadoop-client-api-3.2.2.jar
-rw-r--r--. 1 root root 23526033 Feb 23 19:11 hadoop-client-runtime-3.2.2.jar
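
A quick way to confirm the filter class is absent from the shaded clients (a
sketch; run inside spark/jars):

jar tf hadoop-client-api-3.2.2.jar | grep AmIpFilter
jar tf hadoop-client-runtime-3.2.2.jar | grep AmIpFilter
# no output expected from either command: neither shaded jar ships the class,
# which matches the ClassNotFoundException above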

 

 

2. java.lang.IllegalStateException: class 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is not a 
javax.servlet.Filter

a. Using hadoop-client-minicluster-3.2.2.jar: the class is now found, but Jetty
rejects it, presumably because the AmIpFilter bundled in the minicluster jar
implements the relocated (shaded) servlet Filter rather than javax.servlet.Filter.

2021-02-24 15:04:08,735 ERROR org.apache.spark.rpc.netty.Inbox: Ignoring error
java.lang.RuntimeException: MultiException[java.lang.IllegalStateException: 
class org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is not a 
javax.servlet.Filter, java.lang.IllegalStateException: class 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is not a 
javax.servlet.Filter]
 at 
org.spark_project.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1600)
 at 
org.spark_project.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1659)
 at 
org.spark_project.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1297)
 at 
org.spark_project.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1145)
 at 
org.spark_project.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1$$anonfun$apply$1.apply(JettyUtils.scala:325)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1$$anonfun$apply$1.apply(JettyUtils.scala:294)
 at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1.apply(JettyUtils.scala:294)
 at 
org.apache.spark.ui.JettyUtils$$anonfun$addFilters$1.apply(JettyUtils.scala:293)
 at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at org.apache.spark.ui.JettyUtils$.addFilters(JettyUtils.scala:293)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$addWebUIFilter$3.apply(YarnSchedulerBackend.scala:176)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$addWebUIFilter$3.apply(YarnSchedulerBackend.scala:176)
 at scala.Option.foreach(Option.scala:257)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend.org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$addWebUIFilter(YarnSchedulerBackend.scala:176)
 at 
org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receive$1.applyOrElse(YarnSchedulerBackend.scala:262)
 at 
org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:117)
 at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
 at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
 at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)

 

The jar list under spark/jars is as follows:

ll|grep hadoop-
-rw-r--r--. 1 root root 18370642 Feb 23 18:30 hadoop-client-api-3.2.2.jar
-rw-r--r--. 1 root root 44088603 Feb 23 18:30 
hadoop-client-minicluster-3.2.2.jar
-rw-r--r--. 1 root root 34935249 Feb 23 18:30 hadoop-client-runtime-3.2.2.jar
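
This matches a quick check of the minicluster jar (a sketch; run inside
spark/jars):

jar tf hadoop-client-minicluster-3.2.2.jar | grep AmIpFilter
# the AmIpFilter class entry should show up: the class must be present, since
# the error changed from ClassNotFoundException to IllegalStateException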

 

b. Copying the hadoop-yarn-api, hadoop-yarn-client, hadoop-yarn-common,
hadoop-yarn-registry, hadoop-yarn-server-common, and
hadoop-yarn-server-web-proxy binary jars to spark/jars (see the sketch below):

With these jars in place, we can submit and run the application successfully!
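
For reference, a sketch of the copy step (assuming the jars come from a local
Hadoop 3.2.2 distribution; the source path is illustrative):

# copy the unshaded YARN jars next to the shaded clients
for m in api client common registry server-common server-web-proxy; do
  cp /opt/hadoop-3.2.2/share/hadoop/yarn/hadoop-yarn-$m-3.2.2.jar /opt/spark/jars/
done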

 

The jar list under spark/jars is as follows:

ll|grep hadoop-
-rw-r--r-- 1 root root 18370642 Feb 24 22:53 hadoop-client-api-3.2.2.jar
-rw-r--r-- 1 root root 34935249 Feb 24 22:53 hadoop-client-runtime-3.2.2.jar
-rw-r--r-- 1 root root 3329395 Feb 24 22:53 hadoop-yarn-api-3.2.2.jar
-rw-r--r-- 1 root root 323219 Feb 24 22:53 hadoop-yarn-client-3.2.2.jar
-rw-r--r-- 1 root root 2908452 Feb 24 22:53 hadoop-yarn-common-3.2.2.jar
-rw-r--r-- 1 root root 226086 Feb 24 22:53 hadoop-yarn-registry-3.2.2.jar
-rw-r--r-- 1 root root 1393291 Feb 24 22:53 hadoop-yarn-server-common-3.2.2.jar
-rw-r--r-- 1 root root 80908 Feb 24 22:53 hadoop-yarn-server-web-proxy-3.2.2.jar

 

> Upgrade to Hadoop 3.2.2 and move to shaded clients for Hadoop 3.x profile
> -------------------------------------------------------------------------
>
>                 Key: SPARK-33212
>                 URL: https://issues.apache.org/jira/browse/SPARK-33212
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, Spark Submit, SQL, YARN
>    Affects Versions: 3.0.1
>            Reporter: Chao Sun
>            Assignee: Chao Sun
>            Priority: Major
>              Labels: releasenotes
>             Fix For: 3.2.0
>
>
> Hadoop 3.x+ offers shaded client jars: hadoop-client-api and 
> hadoop-client-runtime, which shade 3rd party dependencies such as Guava, 
> protobuf, jetty etc. This Jira switches Spark to use these jars instead of 
> hadoop-common, hadoop-client etc. Benefits include:
>  * It unblocks Spark from upgrading to Hadoop 3.2.2/3.3.0+. The newer
> versions of Hadoop have migrated to Guava 27.0+, and in order to resolve
> Guava conflicts, Spark depends on Hadoop not leaking these dependencies.
>  * It makes the Spark/Hadoop dependency cleaner. Currently Spark uses both
> client-side and server-side Hadoop APIs from modules such as hadoop-common,
> hadoop-yarn-server-common etc. Moving to hadoop-client-api allows us to use
> only the public/client API from the Hadoop side.
>  * It provides better isolation from Hadoop dependencies. In the future,
> Spark can evolve without worrying about dependencies pulled in from the
> Hadoop side (which used to be a lot).
> *There are some behavior changes introduced with this JIRA when people use
> Spark compiled with Hadoop 3.x:*
> - Users now need to make sure the class path contains the `hadoop-client-api`
> and `hadoop-client-runtime` jars when they deploy Spark with the
> `hadoop-provided` option. In addition, it is highly recommended that they put
> these two jars before other Hadoop jars in the class path. Otherwise,
> conflicts such as those from Guava could happen if classes are loaded from
> the other, non-shaded Hadoop jars.
> - Since the new shaded Hadoop clients no longer include 3rd-party
> dependencies, users who used to depend on these now need to explicitly put
> the jars in their class path.
> Ideally the above should go to release notes.


