There is a configuration option "env.hadoop.conf.dir" for setting the Hadoop
configuration directory:
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#env-hadoop-conf-dir
If the files in that directory correctly configure Hadoop HA, the client
side should pick up the config.
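
For example, a minimal sketch, assuming the CDH client configs (core-site.xml and
hdfs-site.xml, including the "nameservice1" HA definitions) live under
/etc/hadoop/conf (the path is just a placeholder):

    # flink-conf.yaml
    env.hadoop.conf.dir: /etc/hadoop/conf

    # or, equivalently, exported before starting the client
    export HADOOP_CONF_DIR=/etc/hadoop/conf

Either way, the Table API program itself should not need to change; the client
should pick up the HA nameservice definition from those files.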

On Tue, Feb 11, 2020 at 3:39 AM sunfulin <sunfulin0...@163.com> wrote:

> Hi ,guys
> Thanks for the kind reply. Actually, I want to know how to change the client-side
> Hadoop conf while using the Table API within my program. Hoping for some useful suggestions.
>
>
>
>
>
> At 2020-02-11 02:42:31, "Bowen Li" <bowenl...@gmail.com> wrote:
>
> Hi sunfulin,
>
> Sounds like you didn't configure Hadoop HA correctly on the client side;
> see [1]. Let us know if that helps resolve the issue.
>
> [1]
> https://stackoverflow.com/questions/25062788/namenode-ha-unknownhostexception-nameservice1
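>
> Roughly, the fix in [1] comes down to making sure the client's hdfs-site.xml
> defines the HA nameservice that the table locations point at. A minimal sketch
> (the namenode ids and hosts are placeholders for your cluster):
>
>   <property><name>dfs.nameservices</name><value>nameservice1</value></property>
>   <property><name>dfs.ha.namenodes.nameservice1</name><value>nn1,nn2</value></property>
>   <property><name>dfs.namenode.rpc-address.nameservice1.nn1</name><value>namenode1.example.com:8020</value></property>
>   <property><name>dfs.namenode.rpc-address.nameservice1.nn2</name><value>namenode2.example.com:8020</value></property>
>   <property><name>dfs.client.failover.proxy.provider.nameservice1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
>
> core-site.xml would then set fs.defaultFS to hdfs://nameservice1. Without that
> definition on the client, HDFS treats "nameservice1" as a plain hostname, which
> matches the UnknownHostException in the stack trace.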
>
>
>
>
> On Mon, Feb 10, 2020 at 7:11 AM Khachatryan Roman <
> khachatryan.ro...@gmail.com> wrote:
>
>> Hi,
>>
>> Could you please provide a full stacktrace?
>>
>> Regards,
>> Roman
>>
>>
>> On Mon, Feb 10, 2020 at 2:12 PM sunfulin <sunfulin0...@163.com> wrote:
>>
>>> Hi, guys
>>> I am using Flink 1.10 and testing functional cases with the Hive integration.
>>> Hive is 1.1.0-cdh5.3.0, with Hadoop HA enabled. Running the Flink job, I can
>>> see a successful connection to the Hive metastore, but I cannot read table
>>> data; it fails with this exception:
>>>
>>> java.lang.IllegalArgumentException: java.net.UnknownHostException:
>>> nameservice1
>>> at
>>> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
>>> at
>>> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
>>> at
>>> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
>>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668)
>>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604)
>>> at
>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
>>> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2598)
>>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>>>
>>> I am running a standalone application. It looks like the Hadoop conf files are
>>> missing from my Flink job application's classpath. Where should I configure this?
>>>
>>>
>>>
>>>
>>
>
>
>
