Hi Flink Users,

We are trying to upgrade Flink from 1.12.7 to 1.16.0, but we have run into the
following issue. We run our Flink job in application mode on YARN. After the
upgrade, submitting the job fails with this exception:

org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
        at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:478) ~[flink-yarn-1.16.0.jar!/:1.16.0]
        ......
Caused by: org.apache.hadoop.fs.PathIOException: `Cannot get relative path for URI:file:///tmp/application_1674531932229_0030-flink-conf.yaml587547081521530798.tmp': Input/output error
        at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.getFinalPath(CopyFromLocalOperation.java:360) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
        at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.uploadSourceFromFS(CopyFromLocalOperation.java:222) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
        at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.execute(CopyFromLocalOperation.java:169) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$copyFromLocalFile$25(S3AFileSystem.java:3920) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499) ~[hadoop-common-3.3.3.jar!/:?]
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:444) ~[hadoop-common-3.3.3.jar!/:?]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
        at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:3913) ~[flink-s3-fs-hadoop-1.16.0.jar!/:1.16.0]
        at org.apache.flink.yarn.YarnApplicationFileUploader.copyToRemoteApplicationDir(YarnApplicationFileUploader.java:397) ~[flink-yarn-1.16.0.jar!/:1.16.0]
        at org.apache.flink.yarn.YarnApplicationFileUploader.uploadLocalFileToRemote(YarnApplicationFileUploader.java:202) ~[flink-yarn-1.16.0.jar!/:1.16.0]
        at org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:181) ~[flink-yarn-1.16.0.jar!/:1.16.0]
        at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1047) ~[flink-yarn-1.16.0.jar!/:1.16.0]
        at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:623) ~[flink-yarn-1.16.0.jar!/:1.16.0]
        at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:471) ~[flink-yarn-1.16.0.jar!/:1.16.0]
        ... 35 more

It looks like the client failed to upload the temporary Flink conf file to S3
(via the s3a copyFromLocalFile path). We did not have this issue with Flink
1.12.7. Any help here would be much appreciated.
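
For context, the submission is a standard application-mode deployment. The
command below is only an illustrative sketch (bucket names, paths, and the
job jar are placeholders, not our exact setup):

```shell
# Illustrative YARN application-mode submission; paths are placeholders.
./bin/flink run-application -t yarn-application \
  -Dyarn.provided.lib.dirs="s3://my-bucket/flink-dist" \
  s3://my-bucket/jobs/my-job.jar
```

The failure happens during this deployment step, before the job itself
starts, while the client stages the generated flink-conf.yaml to the remote
application directory.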

Here are the versions we are using:
Flink 1.16.0
Hadoop 3.3.3

Thanks
Leon
