[ https://issues.apache.org/jira/browse/SPARK-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rekha Joshi updated SPARK-4503:
-------------------------------
    Attachment: historyserver1.png

Hi [~xjxyxgq] MarsXu, I was not able to replicate the issue on Spark 1.5.0. I also checked with the latest 1.6.0-SNAPSHOT, and with YARN configured correctly the namenode works fine for me (screenshot attached). I agree with [~vanzin] that it could be a setup issue at your end. Please confirm whether we can close this? Thanks!

> The history server is not compatible with HDFS HA
> -------------------------------------------------
>
>                 Key: SPARK-4503
>                 URL: https://issues.apache.org/jira/browse/SPARK-4503
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.1.0
>            Reporter: MarsXu
>            Priority: Minor
>        Attachments: historyserver1.png
>
>
> I use a highly available (HA) HDFS cluster to store the history server data.
> The event log can be written to HDFS, but the history server cannot be started.
>
> Error log when executing "sbin/start-history-server.sh":
> {quote}
> ....
> 14/11/20 10:25:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, ); users with modify permissions: Set(root, )
> 14/11/20 10:25:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform...
> using builtin-java classes where applicable
> Exception in thread "main" java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:187)
>         at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: appcluster
>         at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
> ....
> {quote}
> When I set <export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://s161.zw.db.d:53310/spark_history"> in spark-env.sh, the history server can start, but then there is no high availability.
> Environment:
> {quote}
> spark-1.1.0-bin-hadoop2.4
> hadoop-2.5.1
> zookeeper-3.4.6
> {quote}
> The config files are as follows:
> {quote}
> !### spark-defaults.conf ###
> spark.eventLog.dir                  hdfs://appcluster/history_server/
> spark.yarn.historyServer.address    s161.zw.db.d:18080
> !### spark-env.sh ###
> export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://appcluster/history_server"
> !### core-site.xml ###
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://appcluster</value>
> </property>
> !### hdfs-site.xml ###
> <property>
>   <name>dfs.nameservices</name>
>   <value>appcluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.appcluster</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.appcluster.nn1</name>
>   <value>s161.zw.db.d:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.appcluster.nn2</name>
>   <value>s162.zw.db.d:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.servicerpc-address.appcluster.nn1</name>
>   <value>s161.zw.db.d:53310</value>
> </property>
> <property>
>   <name>dfs.namenode.servicerpc-address.appcluster.nn2</name>
>   <value>s162.zw.db.d:53310</value>
> </property>
> {quote}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
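The "java.net.UnknownHostException: appcluster" in the stack trace usually means the history server's JVM is treating the logical nameservice as a literal hostname, i.e. the HDFS HA client settings (dfs.nameservices and related keys) are not visible to it. Two things worth checking, as a sketch only: the config path below is an assumption (use your cluster's actual Hadoop config directory), and note that the quoted hdfs-site.xml does not show a dfs.client.failover.proxy.provider.appcluster entry, which HA clients require (it may simply have been truncated in the quote).

```shell
### spark-env.sh (sketch; /etc/hadoop/conf is an assumed path)
# Make the HA client config (hdfs-site.xml with dfs.nameservices,
# dfs.ha.namenodes.appcluster, and dfs.client.failover.proxy.provider.appcluster)
# visible to the history server JVM, so "appcluster" resolves as a
# nameservice instead of being looked up as a host.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://appcluster/history_server"

### Quick check from the history-server host: both commands should
### succeed without contacting a literal host named "appcluster".
hdfs getconf -confKey dfs.nameservices
hadoop fs -ls hdfs://appcluster/history_server/
```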