I think you're right, Flavio. I created FLINK-22414 to cover this. Thanks
for bringing it up.
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-22414
On Fri, Apr 16, 2021 at 9:32 AM Flavio Pompermaier wrote:
> Hi Yang,
> isn't this something to fix? If I look at the documentation at
Great! Thanks for the support
On Thu, Apr 22, 2021 at 2:57 PM Matthias Pohl wrote:
> I think you're right, Flavio. I created FLINK-22414 to cover this. Thanks
> for bringing it up.
>
> Matthias
>
> [1] https://issues.apache.org/jira/browse/FLINK-22414
>
> On Fri, Apr 16, 2021 at 9:32 AM Flavio
It seems that we do not export HADOOP_CONF_DIR as an environment variable in
the current implementation, even though the env.xxx Flink config options have
been set. It is only used to construct the classpath for the JM/TM processes.
However, in "HadoopUtils"[2] we do not support getting the hadoop
Hi Robert,
indeed my docker-compose setup works only if I also add the Hadoop and YARN
home variables, while I was expecting those two variables to be generated
automatically just by setting env.xxx entries in the FLINK_PROPERTIES variable.
I just want to understand what to expect, if I really need to specify
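For reference, a sketch of the workaround described above, assuming the variables in question are HADOOP_CONF_DIR and YARN_CONF_DIR and that the paths and mounts are placeholders for a real setup:

```yaml
# docker-compose fragment (sketch): exporting the Hadoop/YARN config
# directories directly in the container environment, in addition to the
# env.xxx keys in FLINK_PROPERTIES. Paths are assumptions.
services:
  jobmanager:
    image: flink:1.11.3-scala_2.12
    command: jobmanager
    environment:
      - HADOOP_CONF_DIR=/opt/hadoop/conf
      - YARN_CONF_DIR=/opt/hadoop/conf
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        env.hadoop.conf.dir: /opt/hadoop/conf
        env.yarn.conf.dir: /opt/hadoop/conf
    volumes:
      - ./hadoop-conf:/opt/hadoop/conf
```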
Hi,
I'm not aware of any known issues with Hadoop and Flink on Docker.
I also tried what you are doing locally, and it seems to work:
flink-jobmanager | 2021-04-15 18:37:48,300 INFO
org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting
Hi everybody,
I'm trying to set up reading from HDFS using docker-compose and Flink
1.11.3.
If I pass 'env.hadoop.conf.dir' and 'env.yarn.conf.dir'
using FLINK_PROPERTIES (under environment section of the docker-compose
service) I see in the logs the following line:
"Could not find Hadoop
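The setup described above might look roughly like the following docker-compose fragment. This is only a sketch reconstructing the scenario from the message; the image tag, paths, and volume mount are assumptions:

```yaml
# docker-compose fragment (sketch): passing env.hadoop.conf.dir and
# env.yarn.conf.dir through the FLINK_PROPERTIES environment variable,
# as the original question describes. Paths are placeholders.
services:
  jobmanager:
    image: flink:1.11.3-scala_2.12
    command: jobmanager
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        env.hadoop.conf.dir: /opt/hadoop/conf
        env.yarn.conf.dir: /opt/hadoop/conf
    volumes:
      - ./hadoop-conf:/opt/hadoop/conf
```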