[jira] [Commented] (FLINK-15097) flink can not use user specified hdfs conf when submitting app in client node
[ https://issues.apache.org/jira/browse/FLINK-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991192#comment-16991192 ]

Congxian Qiu (klion26) commented on FLINK-15097:
------------------------------------------------

If these two are truly the same, could we close this issue as a duplicate and link it to FLINK-11135?

> flink can not use user specified hdfs conf when submitting app in client node
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-15097
>                 URL: https://issues.apache.org/jira/browse/FLINK-15097
>             Project: Flink
>          Issue Type: Bug
>          Components: Client / Job Submission
>    Affects Versions: 1.9.1
>            Reporter: qian wang
>            Priority: Major
>         Attachments: 0001-adjust-read-hdfs-conf-order.patch
>
> Currently, if a cluster node has the HADOOP_CONF_DIR environment variable set,
> Flink forces the use of the hdfs-site.xml in that directory. A user submitting
> an app from a client node therefore cannot supply a custom hdfs-site.xml or
> hdfs-default.xml via fs.hdfs.hdfssite or fs.hdfs.hdfsdefault, for example to
> set a custom block size or replication factor. Using yarn ship to upload my
> HDFS conf dir and pointing fs.hdfs.hdfssite at {conf dir}/hdfs-site.xml has
> no effect.
> Looking into the code, this is due to the order in which configuration is
> chosen in HadoopUtils.java: the conf from HADOOP_CONF_DIR overrides the
> user-supplied conf. I think this is not sensible, so the attached patch
> reverses the order in which Flink reads the HDFS conf, letting the user's
> uploaded conf override HADOOP_CONF_DIR.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
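The fix described above is purely an ordering change: when configuration sources are overlaid one after another, whichever is applied last wins. The following is a minimal sketch of that precedence behavior in plain Java; the class, method names, and literal values are hypothetical illustrations, not the actual HadoopUtils code or real defaults.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfPrecedence {

    // Overlay entries from `source` onto `target`; sources applied later win.
    static void overlay(Map<String, String> target, Map<String, String> source) {
        target.putAll(source);
    }

    // Build the effective conf in the order the patch proposes:
    // HADOOP_CONF_DIR first, user-specified conf last, so the user's value wins.
    // (The reported bug is the reverse order, which silently discards the
    // user's fs.hdfs.hdfssite settings.)
    static Map<String, String> effectiveConf() {
        Map<String, String> hadoopConfDir = new HashMap<>();
        hadoopConfDir.put("dfs.blocksize", "134217728");   // value from HADOOP_CONF_DIR

        Map<String, String> userConf = new HashMap<>();
        userConf.put("dfs.blocksize", "268435456");        // value from fs.hdfs.hdfssite

        Map<String, String> effective = new HashMap<>();
        overlay(effective, hadoopConfDir);
        overlay(effective, userConf);
        return effective;
    }

    public static void main(String[] args) {
        System.out.println(effectiveConf().get("dfs.blocksize"));
    }
}
```

With the overlays in this order, the user-specified block size survives; swapping the two `overlay` calls reproduces the reported behavior, where the HADOOP_CONF_DIR value clobbers it.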
[jira] [Commented] (FLINK-15097) flink can not use user specified hdfs conf when submitting app in client node
[ https://issues.apache.org/jira/browse/FLINK-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991085#comment-16991085 ]

Paul Lin commented on FLINK-15097:
----------------------------------

+1. I reported the same issue a while ago: https://issues.apache.org/jira/browse/FLINK-11135. FYI.
[jira] [Commented] (FLINK-15097) flink can not use user specified hdfs conf when submitting app in client node
[ https://issues.apache.org/jira/browse/FLINK-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16990453#comment-16990453 ]

qian wang commented on FLINK-15097:
-----------------------------------

@Kostas Kloudas