Re: fail to mount hadoop-config-volume when using flink-k8s-operator

2022-10-13 Thread Yang Wang
Currently, exporting the env "HADOOP_CONF_DIR" only works for the native
K8s integration. The Flink client will try to create the
hadoop-config-volume automatically if the Hadoop env is found.

If you want to set HADOOP_CONF_DIR in the Docker image, please also
make sure the specified Hadoop conf directory exists in the image.
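For example, a minimal Dockerfile sketch that bakes the Hadoop configuration into the image so HADOOP_CONF_DIR points at a directory that actually exists (the conf path and directory name are illustrative, not from the original thread):

```dockerfile
# Sketch only: copy a local hadoop-conf/ directory into the image
# and point HADOOP_CONF_DIR at it, so the path referenced by the
# env variable really exists inside the container.
FROM flink:1.15
COPY hadoop-conf/ /opt/hadoop/conf/
ENV HADOOP_CONF_DIR=/opt/hadoop/conf
```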

For flink-k8s-operator, another feasible solution is to create a
hadoop-config ConfigMap manually and then use
"kubernetes.hadoop.conf.config-map.name" to mount it to the JobManager
and TaskManager pods.
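A sketch of that approach, assuming a ConfigMap named "hadoop-config" has been created first (e.g. with `kubectl create configmap hadoop-config --from-file=/path/to/hadoop-conf`; the ConfigMap name, deployment name, and local path are illustrative):

```yaml
# Hypothetical FlinkDeployment excerpt for flink-kubernetes-operator.
# The flinkConfiguration entry tells Flink to mount the named ConfigMap
# as the Hadoop configuration in the JobManager/TaskManager pods.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-flink-job
spec:
  flinkConfiguration:
    kubernetes.hadoop.conf.config-map.name: hadoop-config
```

With this in place, the HADOOP_CONF_DIR env variable does not need to be set in the image at all; the operator wires the mounted ConfigMap into the pods.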


Best,
Yang

Liting Liu (litiliu) wrote on Wed, Oct 12, 2022 at 16:11:

> Hi, community:
>   I'm using flink-k8s-operator v1.2.0 to deploy a Flink job, and the
> "HADOOP_CONF_DIR" environment variable was set in the image that I
> built from flink:1.15. I found the TaskManager pod was trying to mount
> a volume named "hadoop-config-volume" from a ConfigMap, but the ConfigMap
> with the name "hadoop-config-volume" wasn't created.
>
> Do I need to remove the "HADOOP_CONF_DIR" environment variable in the
> Dockerfile?
> If yes, what should I do to specify the Hadoop conf?
>
>


fail to mount hadoop-config-volume when using flink-k8s-operator

2022-10-12 Thread Liting Liu (litiliu)
Hi, community:
  I'm using flink-k8s-operator v1.2.0 to deploy a Flink job, and the
"HADOOP_CONF_DIR" environment variable was set in the image that I built
from flink:1.15. I found the TaskManager pod was trying to mount a volume
named "hadoop-config-volume" from a ConfigMap, but the ConfigMap with the
name "hadoop-config-volume" wasn't created.

Do I need to remove the "HADOOP_CONF_DIR" environment variable in the Dockerfile?
If yes, what should I do to specify the Hadoop conf?