[jira] [Updated] (HDFS-5122) Support failover and retry in WebHdfsFileSystem for NN HA

2013-09-18  Jing Zhao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao updated HDFS-5122:


Summary: Support failover and retry in WebHdfsFileSystem for NN HA  (was: WebHDFS should support logical service names in URIs)

> Support failover and retry in WebHdfsFileSystem for NN HA
> ---------------------------------------------------------
>
> Key: HDFS-5122
> URL: https://issues.apache.org/jira/browse/HDFS-5122
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, webhdfs
>Affects Versions: 2.1.0-beta
>Reporter: Arpit Gupta
>Assignee: Haohui Mai
> Attachments: HDFS-5122.001.patch, HDFS-5122.002.patch, 
> HDFS-5122.003.patch, HDFS-5122.004.patch, HDFS-5122.patch
>
>
> For example, if dfs.nameservices is set to arpit,
> {code}
> hdfs dfs -ls webhdfs://arpit:50070/tmp
> or
> hdfs dfs -ls webhdfs://arpit/tmp
> {code}
> neither command works. You have to provide the hostname of the exact active
> NameNode. On an HA cluster, one should not need to provide the active NN
> hostname when using the DFS client.
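
For context, a rough sketch of the HA settings such a cluster would typically carry in hdfs-site.xml; the NameNode IDs nn1/nn2, the example hosts, and the ports are assumptions for illustration, not taken from this report. The DFS client maps the logical name arpit to the active NameNode from entries like these, which is exactly what WebHDFS could not yet do:
{code}
<!-- hdfs-site.xml sketch; nn1/nn2 and the example.com hosts are assumptions -->
<property>
  <name>dfs.nameservices</name>
  <value>arpit</value>
</property>
<property>
  <name>dfs.ha.namenodes.arpit</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.arpit.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.arpit.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.arpit.nn1</name>
  <value>nn1.example.com:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.arpit.nn2</name>
  <value>nn2.example.com:50070</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.arpit</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{code}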

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5122) Support failover and retry in WebHdfsFileSystem for NN HA

2013-09-18  Jing Zhao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao updated HDFS-5122:


Description: 
Bug reported by [~arpitgupta]:

If dfs.nameservices is set to arpit,
{code}
hdfs dfs -ls webhdfs://arpit/tmp
{code}
does not work; you have to provide the hostname of the exact active NameNode. On
an HA cluster, one should not need to provide the active NN hostname when using
the DFS client.

To fix this, we plan to:
1) let WebHdfsFileSystem support logical NN service names
2) add failover-and-retry functionality to WebHdfsFileSystem for NN HA
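
For illustration, a minimal client-side sketch of what these two changes are meant to enable, using only the standard FileSystem API; the class name and the /tmp listing are illustrative, and the failover behavior noted in the comments is the goal of this issue:
{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsLogicalUriExample {
  public static void main(String[] args) throws Exception {
    // Loads core-site.xml/hdfs-site.xml from the classpath, including the HA
    // mapping of the logical name "arpit" to the NameNode HTTP addresses.
    Configuration conf = new Configuration();

    // With logical-name support there is no port and no active-NN hostname in
    // the URI; on failover the client should retry against the other NameNode.
    FileSystem fs = FileSystem.get(URI.create("webhdfs://arpit/"), conf);
    for (FileStatus stat : fs.listStatus(new Path("/tmp"))) {
      System.out.println(stat.getPath());
    }
  }
}
{code}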



  was:
For example, if dfs.nameservices is set to arpit,

{code}
hdfs dfs -ls webhdfs://arpit:50070/tmp

or

hdfs dfs -ls webhdfs://arpit/tmp
{code}

neither command works. You have to provide the hostname of the exact active
NameNode. On an HA cluster, one should not need to provide the active NN hostname
when using the DFS client.


> Support failover and retry in WebHdfsFileSystem for NN HA
> ---------------------------------------------------------
>
> Key: HDFS-5122
> URL: https://issues.apache.org/jira/browse/HDFS-5122
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, webhdfs
>Affects Versions: 2.1.0-beta
>Reporter: Arpit Gupta
>Assignee: Haohui Mai
> Attachments: HDFS-5122.001.patch, HDFS-5122.002.patch, 
> HDFS-5122.003.patch, HDFS-5122.004.patch, HDFS-5122.patch
>
>
> Bug reported by [~arpitgupta]:
> If dfs.nameservices is set to arpit,
> {code}
> hdfs dfs -ls webhdfs://arpit/tmp
> {code}
> does not work; you have to provide the hostname of the exact active NameNode.
> On an HA cluster, one should not need to provide the active NN hostname when
> using the DFS client.
> To fix this, we plan to:
> 1) let WebHdfsFileSystem support logical NN service names
> 2) add failover-and-retry functionality to WebHdfsFileSystem for NN HA



[jira] [Updated] (HDFS-5122) Support failover and retry in WebHdfsFileSystem for NN HA

2013-09-18  Jing Zhao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao updated HDFS-5122:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
       Status: Resolved  (was: Patch Available)

Thanks for the work, [~wheat9]! I've committed this to trunk and branch-2.

> Support failover and retry in WebHdfsFileSystem for NN HA
> ---------------------------------------------------------
>
> Key: HDFS-5122
> URL: https://issues.apache.org/jira/browse/HDFS-5122
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, webhdfs
>Affects Versions: 2.1.0-beta
>Reporter: Arpit Gupta
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5122.001.patch, HDFS-5122.002.patch, 
> HDFS-5122.003.patch, HDFS-5122.004.patch, HDFS-5122.patch
>
>
> Bug reported by [~arpitgupta]:
> If dfs.nameservices is set to arpit,
> {code}
> hdfs dfs -ls webhdfs://arpit/tmp
> {code}
> does not work; you have to provide the hostname of the exact active NameNode.
> On an HA cluster, one should not need to provide the active NN hostname when
> using the DFS client.
> To fix this, we plan to:
> 1) let WebHdfsFileSystem support logical NN service names
> 2) add failover-and-retry functionality to WebHdfsFileSystem for NN HA
