[jira] [Commented] (FLINK-10130) How to define two hdfs name-node IPs in flink-conf.yaml file

2019-04-09 Thread Liya Fan (JIRA)


[ https://issues.apache.org/jira/browse/FLINK-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814097#comment-16814097 ]

Liya Fan commented on FLINK-10130:
--

[~Paul Lin] is right. HDFS name services solve the problem, so this issue can 
be closed.
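For reference, a minimal sketch of what the Flink side could look like once an HDFS name service is in place (the nameservice ID `mycluster` is a hypothetical placeholder; the paths are the ones from the issue description):

```yaml
# flink-conf.yaml -- sketch, assuming an HDFS nameservice named "mycluster"
# has been defined in the cluster's hdfs-site.xml. With a nameservice URI
# no single NameNode IP is hard-coded; the HDFS client handles failover
# between the NameNodes on its own.
state.checkpoints.dir: hdfs://mycluster/flinkdatastorage
state.backend.fs.checkpointdir: hdfs://mycluster/flinkdatastorage
high-availability.zookeeper.storageDir: hdfs://mycluster/flink/recovery
```

Flink also needs to be able to locate the Hadoop configuration (e.g. via the HADOOP_CONF_DIR environment variable) so that the nameservice ID can be resolved.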

> How to define two hdfs name-node IPs in flink-conf.yaml file
> 
>
> Key: FLINK-10130
> URL: https://issues.apache.org/jira/browse/FLINK-10130
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Reporter: Keshav Lodhi
>Priority: Major
> Attachments: docker-entrypoints.sh
>
>
> Hi Team,
> Here is what we are looking for:
>  * We have a dockerized Flink HA cluster (3 ZooKeeper nodes, 2 JobManagers, 
> 3 TaskManagers).
>  * We are using HDFS from Flink to store some data. The problem we are 
> facing is that we are not able to pass two NameNode IPs in the config.
>  * These are the config parameters to which we want to add two NameNode IPs:
>  # "state.checkpoints.dir: hdfs://X.X.X.X:9001/flinkdatastorage"
>  # "state.backend.fs.checkpointdir: hdfs://X.X.X.X:9001/flinkdatastorage"
>  # "high-availability.zookeeper.storageDir: hdfs://X.X.X.X:9001/flink/recovery"
> Currently we are passing only one NameNode IP.
> Please advise.
> I have attached the sample *docker-entrypoint.sh* file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10130) How to define two hdfs name-node IPs in flink-conf.yaml file

2018-08-14 Thread Paul Lin (JIRA)


[ https://issues.apache.org/jira/browse/FLINK-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579710#comment-16579710 ]

Paul Lin commented on FLINK-10130:
--

This could be solved by [HDFS name services|https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html], and I think it might be better to leave NameNode failover to the HDFS client. FYI.
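For context, defining a name service on the HDFS side looks roughly like this: a sketch of the standard client-side HA-with-QJM settings, where the nameservice ID `mycluster` and the `nn1`/`nn2` hostnames are hypothetical placeholders.

```xml
<!-- hdfs-site.xml -- sketch of the client-side HA settings.
     "mycluster" and the namenode hostnames are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:9001</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:9001</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place, clients address the file system as hdfs://mycluster/... and never need a specific NameNode IP.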
