[ https://issues.apache.org/jira/browse/ARROW-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303037#comment-17303037 ]

wondertx commented on ARROW-9226:
---------------------------------

If HA is not supported by `fs.HadoopFileSystem`, then `pyarrow.hdfs.connect`
cannot simply be replaced.
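
For illustration, the gap looks roughly like this (a sketch only; the
"default" host value is meant to pick up fs.defaultFS from core-site.xml,
but whether that resolves an HA nameservice is exactly what is in question
here):

    import pyarrow.fs

    # Legacy API: no explicit host needed; the namenode (or HA
    # nameservice) was resolved from core-site.xml / hdfs-site.xml
    # found via the CLASSPATH.
    # import pyarrow.hdfs
    # fs = pyarrow.hdfs.connect()

    # New API: a host must be given. Passing "default" should fall
    # back to fs.defaultFS from core-site.xml, assuming libhdfs can
    # see the Hadoop config files.
    fs = pyarrow.fs.HadoopFileSystem(host="default")

(`extra_conf` does accept key/value pairs that are handed through to the
libhdfs builder, so the HA client settings could in principle be supplied
by hand, but that is the opposite of "simply replaced".)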


> [Python] pyarrow.fs.HadoopFileSystem - retrieve options from core-site.xml or 
> hdfs-site.xml if available
> --------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-9226
>                 URL: https://issues.apache.org/jira/browse/ARROW-9226
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++, Python
>    Affects Versions: 0.17.1
>            Reporter: Bruno Quinart
>            Priority: Minor
>              Labels: hdfs
>             Fix For: 4.0.0
>
>
> The 'legacy' pyarrow.hdfs.connect was able to infer the namenode info from
> the Hadoop configuration files.
> The new pyarrow.fs.HadoopFileSystem requires the host to be specified.
> Inferring this info from "the environment" makes it easier to deploy 
> pipelines.
> But more importantly, for HA namenodes it is almost impossible to know for sure
> what to specify. If a rolling restart is ongoing, the active namenode changes,
> and there is no guarantee which one will be active in an HA setup.
> I tried connecting to the standby namenode. The connection gets established,
> but when writing a file an error is raised, because writing to a standby
> namenode is not allowed.
>  
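
A short snippet reproducing the behaviour described above (the hostname and
path are placeholders, and the exact error text may differ):

    import pyarrow.fs

    # The connection itself succeeds even against a standby namenode.
    hdfs = pyarrow.fs.HadoopFileSystem(host="namenode2.example.com", port=8020)

    # Writing then fails, typically with a StandbyException along the
    # lines of "Operation category WRITE is not supported in state
    # standby".
    with hdfs.open_output_stream("/tmp/arrow-ha-test") as out:
        out.write(b"hello")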



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
