[ https://issues.apache.org/jira/browse/SPARK-20608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16006454#comment-16006454 ]

Steve Loughran commented on SPARK-20608:
----------------------------------------

One thing to consider here is starting with a test to see what happens;
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
shows how to set up a MiniDFSCluster to run HA.
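
From memory, the pattern that test follows is roughly this (a minimal sketch,
not a drop-in test; {{HATestUtil}} comes from the hadoop-hdfs test artifacts):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;

public class HAMiniClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Two NameNodes in one JVM, sharing a single nameservice.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleHATopology())
        .numDataNodes(1)
        .build();
    try {
      cluster.waitActive();
      cluster.transitionToActive(0);   // NN0 active, NN1 standby
      // Client bound to the logical nameservice, not a specific NN:
      FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);
      // ...fetch tokens / do IO against fs here, then fail over
      // and check the client still works:
      cluster.transitionToStandby(0);
      cluster.transitionToActive(1);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}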

But yes, Marcelo is right: you don't use {{hdfs://hostname1:8000/}}, you use
{{hdfs://hacluster:8000/}} and rely on the list of hosts in
{{dfs.ha.namenodes.hacluster}} to tell HDFS which namenodes to try to talk to.
I suspect it's important to have a cluster name which is not a resolvable
hostname, as that may confuse the client (just a guess; I haven't looked at the
source to check).
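
For reference, the client-side wiring that makes the logical URI resolve looks
something like this (a sketch only; the nameservice {{hacluster}} and the
hostnames/ports are illustrative, and in a real deployment these settings live
in hdfs-site.xml rather than being set in code):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HAClientConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Logical nameservice "hacluster" backed by two NameNodes; the
    // failover proxy provider finds whichever one is currently active.
    conf.set("dfs.nameservices", "hacluster");
    conf.set("dfs.ha.namenodes.hacluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.hacluster.nn1", "namenode01:8020");
    conf.set("dfs.namenode.rpc-address.hacluster.nn2", "namenode02:8020");
    conf.set("dfs.client.failover.proxy.provider.hacluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

    // The client only ever sees the logical name:
    FileSystem fs = FileSystem.get(URI.create("hdfs://hacluster/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}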

> Standby namenodes should be allowed to be included in 
> yarn.spark.access.namenodes to support HDFS HA
> -------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-20608
>                 URL: https://issues.apache.org/jira/browse/SPARK-20608
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Submit, YARN
>    Affects Versions: 2.0.1, 2.1.0
>            Reporter: Yuechen Chen
>            Priority: Minor
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> If a Spark application needs to access remote namenodes, 
> yarn.spark.access.namenodes only needs to be configured in the spark-submit 
> script, and the Spark client (on YARN) will fetch HDFS credentials periodically.
> If a Hadoop cluster is configured for HA, there will be one active namenode 
> and at least one standby namenode. 
> However, if yarn.spark.access.namenodes includes both the active and the 
> standby namenode, the Spark application fails, because the standby namenode 
> cannot be accessed by Spark and throws org.apache.hadoop.ipc.StandbyException.
> I think configuring standby namenodes in yarn.spark.access.namenodes would 
> cause no harm, and my Spark application would be able to survive a failover 
> of the Hadoop namenode.
> HA example:
> Spark-submit script: 
> yarn.spark.access.namenodes=hdfs://namenode01,hdfs://namenode02
> Spark application code:
> dataframe.write.parquet(getActiveNameNode(...) + hdfsPath)
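
(For illustration, the submit-side alternative suggested in the comment above
would list the logical nameservice once rather than both physical namenodes;
note the Spark 2.x docs spell the property {{spark.yarn.access.namenodes}}. The
cluster name below is illustrative:)

{code}
spark-submit \
  --master yarn \
  --conf spark.yarn.access.namenodes=hdfs://hacluster \
  ...
{code}

The application could then call dataframe.write.parquet("hdfs://hacluster" +
hdfsPath) directly, leaving failover to the HDFS client instead of a
getActiveNameNode(...) helper.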


