Re: Flink 1.12: job submission fails in yarn-application mode

2021-03-15 by todd
I set the following options in the Flink yaml file:
HADOOP_CONF_DIR:
execution.target: yarn-application
yarn.provided.lib.dirs: hdfs://...
pipeline.jars: hdfs://...

So I am not sure how you have configured yarn-application on your side.
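For reference, a fuller sketch of my setup with placeholder values filled in (the real paths are elided above; note that HADOOP_CONF_DIR is normally exported as an environment variable rather than set inside flink-conf.yaml):

# environment (placeholder path; adjust to your cluster)
export HADOOP_CONF_DIR=/etc/hadoop/conf

# flink-conf.yaml entries (placeholder HDFS paths)
execution.target: yarn-application
yarn.provided.lib.dirs: hdfs://namenode:8020/flink120/lib
pipeline.jars: hdfs://namenode:8020/jobs/flink-example.jar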





Re: Flink 1.12: job submission fails in yarn-application mode

2021-03-15 by Congxian Qiu
Hi
From your log, the job failed to start because of:
   Caused by: java.lang.IllegalArgumentException: Wrong FS:
   hdfs://xx/flink120/, expected: file:///
It looks like the address you configured does not match the filesystem scheme that is expected (hdfs:// vs file:///); you need to fix this mismatch first.
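A common cause of this "Wrong FS ... expected: file:///" error (an assumption on my side, not confirmed by your log) is that the Hadoop configuration is not visible to the Flink client, so fs.defaultFS falls back to the local filesystem. A minimal sketch of the usual fix, with placeholder paths:

# make the Hadoop client config and classes visible to the Flink CLI
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH=$(hadoop classpath)

./bin/flink run-application -t yarn-application \
    -Dyarn.provided.lib.dirs="hdfs://xx/flink120/" \
    hdfs://xx/flink-example.jar --sqlFilePath /xxx/kafka2print.sql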

Best,
Congxian


todd wrote on Mon, Mar 15, 2021 at 2:22 PM:

> I submit the Flink job via a script with this command:
> /bin/flink run-application -t yarn-application \
>     -Dyarn.provided.lib.dirs="hdfs://xx/flink120/" \
>     hdfs://xx/flink-example.jar --sqlFilePath /xxx/kafka2print.sql
>
> The Flink lib jars and the user jar have already been uploaded to the
> HDFS paths, but the submission throws the following error:
> ---
> The program finished with the following exception:
>
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't
> deploy Yarn Application Cluster
> [...]
> Caused by: java.lang.IllegalArgumentException: Wrong FS:
> hdfs://xx/flink120/, expected: file:///
> [...]
>


Flink 1.12: job submission fails in yarn-application mode

2021-03-15 by todd
I submit the Flink job via a script with this command:
/bin/flink run-application -t yarn-application \
    -Dyarn.provided.lib.dirs="hdfs://xx/flink120/" \
    hdfs://xx/flink-example.jar \
    --sqlFilePath /xxx/kafka2print.sql

The Flink lib jars and the user jar have already been uploaded to the HDFS paths, but the submission throws the following error:
---
 The program finished with the following exception:

org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
    at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:465)
    at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
    at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1061)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1136)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
    at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1136)
Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://xx/flink120/, expected: file:///
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:648)
    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:82)
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425)
    at org.apache.flink.yarn.YarnApplicationFileUploader.lambda$getAllFilesInProvidedLibDirs$2(YarnApplicationFileUploader.java:469)
    at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedConsumer$3(FunctionUtils.java:93)
    at java.util.ArrayList.forEach(ArrayList.java:1257)
    at org.apache.flink.yarn.YarnApplicationFileUploader.getAllFilesInProvidedLibDirs(YarnApplicationFileUploader.java:466)
    at org.apache.flink.yarn.YarnApplicationFileUploader.<init>(YarnApplicationFileUploader.java:106)
    at org.apache.flink.yarn.YarnApplicationFileUploader.from(YarnApplicationFileUploader.java:381)
    at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:789)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:592)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:458)
    ... 9 more





Re: Job submission fails in yarn application mode

2020-12-22 by datayangl
Then in yarn mode, how should this option be passed? With -yD?





Re: Job submission fails in yarn application mode

2020-12-21 by Yang Wang
silence's answer is correct.
When you submit with the -t option, all options are passed with plain -D; no prefix is needed, which is also what the documentation shows [1].
This is different from the old -m yarn-cluster mode, where options had to be prefixed with -yD.

[1].
https://ci.apache.org/projects/flink/flink-docs-master/deployment/resource-providers/yarn.html#application-mode
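For example, a quick sketch contrasting the two styles, reusing the placeholder paths from the original question:

# generic CLI with -t: plain -D, as in the docs [1]
./bin/flink run-application -t yarn-application \
    -Dyarn.provided.lib.dirs="hdfs://localhost:9000/flink/libs" \
    hdfs://localhost:9000/user-jars/TopSpeedWindowing.jar

# legacy -m yarn-cluster mode: the same option needs the -yD prefix
./bin/flink run -m yarn-cluster \
    -yDyarn.provided.lib.dirs="hdfs://localhost:9000/flink/libs" \
    ./examples/streaming/TopSpeedWindowing.jar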

Best,
Yang

silence wrote on Mon, Dec 21, 2020 at 10:53 AM:

> It should be -D, not -yD.


Re: Job submission fails in yarn application mode

2020-12-20 by silence
It should be -D, not -yD.





Job submission fails in yarn application mode

2020-12-19 by 陈帅
Flink 1.11+ supports submitting jobs in yarn application mode. I tried using this mode to submit the TopSpeedWindowing job from the examples directory: I uploaded the files under $FLINK_HOME/lib and the jar of the job to run to HDFS, then ran the following command:


./bin/flink run-application -p 1 -t yarn-application \
-yD yarn.provided.lib.dirs="hdfs://localhost:9000/flink/libs" \
hdfs://localhost:9000/user-jars/TopSpeedWindowing.jar


The submission failed. Checking the logs showed the error below. What went wrong here, and what is the correct way to submit a job in yarn application mode?


Exception in thread "main" java.lang.NoClassDefFoundError: scala/Option
    at org.apache.flink.yarn.entrypoint.YarnEntrypointUtils.loadConfiguration(YarnEntrypointUtils.java:90)
    at org.apache.flink.yarn.entrypoint.YarnApplicationClusterEntryPoint.main(YarnApplicationClusterEntryPoint.java:91)
Caused by: java.lang.ClassNotFoundException: scala.Option
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 2 more
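For completeness, a sketch of the corrected submission implied by the replies above (plain -D instead of -yD, everything else unchanged):

./bin/flink run-application -p 1 -t yarn-application \
    -Dyarn.provided.lib.dirs="hdfs://localhost:9000/flink/libs" \
    hdfs://localhost:9000/user-jars/TopSpeedWindowing.jar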