[
https://issues.apache.org/jira/browse/HDFS-1973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13092073#comment-13092073
]
Aaron T. Myers commented on HDFS-1973:
--------------------------------------
@Eli, sure I'll file a separate JIRA. It'd certainly be worth enumerating all
of the places where HTTP fail-over is an issue.
The example you provided is an interesting one. It seems you're assuming that
an HA setup would have three nodes - active, standby, and 2NN, with the 2NN
failing over to do checkpointing against the standby after a failure of the
active. The design document in HDFS-1623 doesn't really address checkpointing.
I've heard from Suresh and Todd informally that the intention is probably to
make the standby node also capable of performing checkpointing. I'll file a
separate JIRA to address this as well.
> HA: HDFS clients must handle namenode failover and switch over to the new
> active namenode.
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-1973
> URL: https://issues.apache.org/jira/browse/HDFS-1973
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Suresh Srinivas
> Assignee: Aaron T. Myers
>
> During failover, a client must detect the failure of the current active namenode and
> switch over to the new active namenode. The switch-over might make use of IP
> failover or something more elaborate, such as ZooKeeper, to discover the new
> active.
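To illustrate the "detect failure and switch over" behavior the description asks for, here is a minimal sketch of a client-side failover wrapper. This is a hypothetical illustration, not the HDFS-1973 implementation: the class name, the list of candidate namenode addresses, and the "any exception means this node is not active" policy are all assumptions made for the example.

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: retry an operation against a configured list of
// candidate namenodes, failing over to the next candidate whenever the
// current one rejects the request.
public class FailoverClient {

    // Thrown when no configured namenode accepts the request.
    static class NoActiveNamenodeException extends RuntimeException {
        NoActiveNamenodeException(String msg) { super(msg); }
    }

    private final List<String> namenodes; // candidate addresses, e.g. from client config
    private int current = 0;              // index of the namenode we believe is active

    FailoverClient(List<String> namenodes) {
        this.namenodes = namenodes;
    }

    // Invoke op against the current namenode; on failure, switch over to the
    // next candidate and retry, at most once per configured namenode.
    <T> T invokeWithFailover(Function<String, T> op) {
        for (int attempt = 0; attempt < namenodes.size(); attempt++) {
            String target = namenodes.get(current);
            try {
                return op.apply(target);
            } catch (RuntimeException e) {
                // Assumed policy: treat any failure as "not the active node".
                current = (current + 1) % namenodes.size();
            }
        }
        throw new NoActiveNamenodeException("no active namenode among " + namenodes);
    }

    public static void main(String[] args) {
        FailoverClient client = new FailoverClient(List.of("nn1:8020", "nn2:8020"));
        // Simulate nn1 being down: only nn2 answers.
        String answer = client.invokeWithFailover(addr -> {
            if (addr.startsWith("nn1")) throw new RuntimeException("connection refused");
            return "ok from " + addr;
        });
        System.out.println(answer); // prints "ok from nn2:8020"
    }
}
```

A ZooKeeper-based approach would replace the round-robin guess with a lookup of the active namenode's address from a well-known znode, but the client-side retry loop would look much the same.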
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira