[ https://issues.apache.org/jira/browse/SPARK-20608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuechen Chen updated SPARK-20608:
---------------------------------
    Description: 
If a Spark application needs to access remote namenodes, yarn.spark.access.namenodes 
only has to be configured in the spark-submit script, and the Spark client (on YARN) 
will fetch HDFS delegation tokens periodically.
If a Hadoop cluster is configured for HA, there is one active namenode and at least 
one standby namenode.
However, if yarn.spark.access.namenodes includes both the active and the standby 
namenode, the Spark application fails, because Spark cannot access the standby 
namenode and gets an org.apache.hadoop.ipc.StandbyException.
I think allowing standby namenodes in yarn.spark.access.namenodes would cause no 
harm, and it would let my Spark application survive a namenode failover.

HA example:
spark-submit script:
yarn.spark.access.namenodes=hdfs://namenode01,hdfs://namenode02
Spark application code:
dataframe.write.parquet(getActiveNameNode(...) + hdfsPath)
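
The getActiveNameNode(...) helper above is not part of Spark; a minimal sketch of one 
possible implementation (my assumption: the application probes the same namenode URIs 
listed in yarn.spark.access.namenodes with a cheap metadata call and uses the first 
one that answers) could look like this in Scala:

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object NameNodeResolver {
  // Return the first namenode URI that serves a metadata request, i.e. the active one.
  // A standby namenode rejects the call (org.apache.hadoop.ipc.StandbyException) and is skipped.
  def getActiveNameNode(candidates: Seq[String], conf: Configuration): String =
    candidates.find { nn =>
      try {
        FileSystem.get(new URI(nn), conf).getFileStatus(new Path("/"))
        true
      } catch {
        case _: Exception => false // standby or unreachable namenode, try the next one
      }
    }.getOrElse(sys.error("no active namenode among " + candidates.mkString(", ")))
}

Used from the application, for example:
dataframe.write.parquet(
  NameNodeResolver.getActiveNameNode(
    Seq("hdfs://namenode01", "hdfs://namenode02"),
    spark.sparkContext.hadoopConfiguration) + hdfsPath)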


> Standby namenodes should be allowed to be included in yarn.spark.access.namenodes to support HDFS HA
> -------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-20608
>                 URL: https://issues.apache.org/jira/browse/SPARK-20608
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Submit, YARN
>    Affects Versions: 2.0.1, 2.1.0
>            Reporter: Yuechen Chen
>            Priority: Minor
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
