Hi, 奔跑的小飞袁
Flink's classloading for user code is child-first, so avoid declaring Flink connector dependencies in your own pom: they can easily conflict with the jars already shipped in the Flink cluster environment. Instead, put the connector jar under the cluster's lib directory and let Flink load it.
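For example, a minimal pom sketch (assuming a standard Maven build): keep the connector on the compile classpath so the project still builds in the IDE, but mark it provided so it is not bundled into your job jar; ship the connector jar in <FLINK_HOME>/lib instead.

    <!-- sketch: compile against the connector, but do not bundle it; -->
    <!-- the copy placed in <FLINK_HOME>/lib is loaded by Flink itself -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-sql-connector-elasticsearch6_2.11</artifactId>
        <version>1.11.1</version>
        <scope>provided</scope>
    </dependency>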

Best,
Robin



奔跑的小飞袁 wrote
> hello,
> When I use the Flink SQL connectors, my program runs fine if I put flink-sql-connector-elasticsearch6_2.11-1.11.1.jar under lib, but when I declare the dependency in my pom instead, I get the error below. The same problem occurs with the hbase and jdbc connectors. What could be causing this?
> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to create a sink for writing table 'default_catalog.default_database.cloud_behavior_sink'.
> 
> Table options are:
> 
> 'connector'='elasticsearch-6'
> 'document-type'='cdbp'
> 'hosts'='http://10.2.11.116:9200;http://10.2.11.117:9200;http://10.2.11.118:9200;http://10.2.11.119:9200'
> 'index'='flink_sql_test'
> 'sink.bulk-flush.max-actions'='100'
>       at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>       at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>       at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
>       at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699)
>       at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232)
>       at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
>       at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
>       at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>       at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
> Caused by: org.apache.flink.table.api.ValidationException: Unable to create a sink for writing table 'default_catalog.default_database.cloud_behavior_sink'.
> 
> Table options are:
> 
> 'connector'='elasticsearch-6'
> 'document-type'='cdbp'
> 'hosts'='http://10.2.11.116:9200;http://10.2.11.117:9200;http://10.2.11.118:9200;http://10.2.11.119:9200'
> 'index'='flink_sql_test'
> 'sink.bulk-flush.max-actions'='100'
>       at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:164)
>       at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:344)
>       at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
>       at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>       at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>       at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>       at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>       at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>       at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>       at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
>       at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1264)
>       at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:700)
>       at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:787)
>       at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:690)
>       at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.callDML(SqlSubmit.java:97)
>       at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.callCommand(SqlSubmit.java:72)
>       at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.run(SqlSubmit.java:53)
>       at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.main(SqlSubmit.java:24)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>       ... 11 more
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option ''connector'='elasticsearch-6''.
>       at org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
>       at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:157)
>       ... 37 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'elasticsearch-6' that implements 'org.apache.flink.table.factories.DynamicTableSinkFactory' in the classpath.
> 
> Available factory identifiers are:
> 
> blackhole
> hbase-1.4
> jdbc
> kafka
> print
>       at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:240)
>       at org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:326)
>       ... 38 more
> 
> 
> 
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/




