Putting flink-shaded-hadoop-2-3.0.0-cdh6.3.0-7.0.jar under the flink/lib directory, or shading it into the fat jar, does not work either...
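For debugging, here is a minimal sketch (the class name HdfsSchemeCheck and the path are made up here; it assumes flink-core plus whatever Hadoop jars sit in lib/ are on the classpath) that performs the same 'hdfs' scheme lookup the JobManager does in the trace below:

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class HdfsSchemeCheck {
    public static void main(String[] args) throws Exception {
        // Same call chain as in the trace below:
        // Path.getFileSystem -> FileSystem.getUnguardedFileSystem.
        // Throws UnsupportedFileSystemSchemeException if no Hadoop-backed
        // file system for 'hdfs' can be loaded from the classpath.
        FileSystem fs = new Path("hdfs:///flink/ha").getFileSystem();
        System.out.println("Loaded file system: " + fs.getClass().getName());
    }
}

Running this against the same lib/ contents the cluster uses should reproduce either the scheme error or the VerifyError, which makes it easier to tell a missing Hadoop dependency apart from a clashing one.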

At 2020-06-16 13:49:27, "Zhou Zach" <wander...@163.com> wrote:

org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint YarnJobClusterEntrypoint.
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
    at org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.main(YarnJobClusterEntrypoint.java:119)
Caused by: java.io.IOException: Could not create FileSystem for highly available storage path (hdfs:/flink/ha/application_1592215995564_0027)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:103)
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
    ... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:450)
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
    ... 13 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Cannot support file system for 'hdfs' via Hadoop, because Hadoop is not in the classpath, or some classes are missing from the classpath.
    at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:184)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:446)
    ... 16 more
Caused by: java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/hadoop/hdfs/DFSClient.getQuotaUsage(Ljava/lang/String;)Lorg/apache/hadoop/fs/QuotaUsage; @160: areturn
  Reason:
    Type 'org/apache/hadoop/fs/ContentSummary' (current frame, stack[0]) is not assignable to 'org/apache/hadoop/fs/QuotaUsage' (from method signature)
  Current Frame:
    bci: @160
    flags: { }
    locals: { 'org/apache/hadoop/hdfs/DFSClient', 'java/lang/String', 'org/apache/hadoop/ipc/RemoteException', 'java/io/IOException' }
    stack: { 'org/apache/hadoop/fs/ContentSummary' }

It runs fine locally in IntelliJ IDEA. The Flink job subscribes to Kafka and sinks to MySQL and HBase. In the cluster's flink lib directory,
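For context, the job shape described above looks roughly like the sketch below (topic, broker, JDBC URL and table names are hypothetical, and the HBase sink is left out; only the Kafka-to-MySQL path is shown), assuming the DataStream API with the FlinkKafkaConsumer connector and a plain JDBC sink:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaToMysqlJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // hypothetical broker
        props.setProperty("group.id", "demo-group");

        // Kafka source reading raw string records from a hypothetical topic.
        DataStream<String> source = env.addSource(
                new FlinkKafkaConsumer<>("demo-topic", new SimpleStringSchema(), props));

        // MySQL sink via plain JDBC; an HBase sink would be added the same way.
        source.addSink(new MysqlSink());

        env.execute("kafka-to-mysql-demo");
    }

    /** Writes each record into a single-column MySQL table (hypothetical schema). */
    private static class MysqlSink extends RichSinkFunction<String> {
        private transient Connection conn;
        private transient PreparedStatement stmt;

        @Override
        public void open(Configuration parameters) throws Exception {
            conn = DriverManager.getConnection(
                    "jdbc:mysql://mysql:3306/demo", "user", "password");
            stmt = conn.prepareStatement("INSERT INTO events(payload) VALUES (?)");
        }

        @Override
        public void invoke(String value, Context context) throws Exception {
            stmt.setString(1, value);
            stmt.executeUpdate();
        }

        @Override
        public void close() throws Exception {
            if (stmt != null) stmt.close();
            if (conn != null) conn.close();
        }
    }
}

None of this touches HDFS directly; the failure above happens before any job code runs, while the JobManager is setting up the HA blob store.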