Okay, thank you! The MySQL driver has been added to the inlong-manager/lib directory, and the manager can now connect to the MySQL database normally. I compared sort-connector-jdbc-v1.15-1.13.0.jar with sort-connector-jdbc-1.13.0.jar, and the v1.15 version is missing MysqlDialectFactory. Questions:
1. Does inlong-sort support Flink v1.15.4 and Flink v1.13.6?
2. Does the MySQL driver also need to be placed in the inlong-sort/connector directory?

I will also open an issue on GitHub. Any advice would be greatly appreciated!

________________________________
From: Charles Zhang <[email protected]>
Sent: September 23, 2024 11:39
To: [email protected] <[email protected]>; [email protected] <[email protected]>
Subject: Re: InLong sort-connector-jdbc version selection question

Hi, maybe you forgot to add mysql-connector-java-8.0.28.jar; you could give it a test. Otherwise, we recommend creating an issue on GitHub directly if you have further problems.

Best wishes,
Charles Zhang from Apache InLong

wu xincai <[email protected]> wrote on Mon, Sep 23, 2024 at 11:34:

> Hello InLong team:
> While setting up the development environment, configuring a data stream group failed. Checking the logs, I found the following error:
> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to create a sink for writing table 'default_catalog.default_database.table_12_mysql'.
>
> Table options are:
>
> 'connector'='jdbc-inlong'
> 'inlong.metric.labels'='groupId=museum&streamId=inlong&nodeId=12_mysql'
> 'password'='******'
> 'table-name'='rbac'
> 'url'='jdbc:mysql://10.130.1.12:3306/inlong?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=Asia/Shanghai&autoDeserialize=false&allowUrlInLocalInfile=false&allowLoadLocalInfile=false'
> 'username'='root'
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) ~[flink-clients-1.15.4.jar:1.15.4]
>     at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-clients-1.15.4.jar:1.15.4]
>     at org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:158) ~[flink-clients-1.15.4.jar:1.15.4]
>     at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:82) ~[flink-clients-1.15.4.jar:1.15.4]
>     at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:117) ~[flink-clients-1.15.4.jar:1.15.4]
>     at org.apache.inlong.manager.plugin.flink.FlinkService.submitJobBySavepoint(FlinkService.java:224) ~[manager-plugins-base-1.13.0.jar:1.13.0]
>     at org.apache.inlong.manager.plugin.flink.FlinkService.submit(FlinkService.java:174) ~[manager-plugins-base-1.13.0.jar:1.13.0]
>     at org.apache.inlong.manager.plugin.flink.IntegrationTaskRunner.run(IntegrationTaskRunner.java:58) ~[manager-plugins-base-1.13.0.jar:1.13.0]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_144]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]
>     at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_144]
> Caused by: org.apache.flink.table.api.ValidationException: Unable to create a sink for writing table 'default_catalog.default_database.table_12_mysql'.
>
> Table options are:
>
> 'connector'='jdbc-inlong'
> 'inlong.metric.labels'='groupId=museum&streamId=inlong&nodeId=12_mysql'
> 'password'='******'
> 'table-name'='rbac'
> 'url'='jdbc:mysql://10.130.1.12:3306/inlong?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=Asia/Shanghai&autoDeserialize=false&allowUrlInLocalInfile=false&allowLoadLocalInfile=false'
> 'username'='root'
>     at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:262) ~[flink-table-common-1.15.4.jar:1.15.4]
>     at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:434) ~[flink-table-planner_2.12-1.15.4.jar:1.15.4]
>     at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:227) ~[flink-table-planner_2.12-1.15.4.jar:1.15.4]
>     at org.apache.flink.table.planner.delegation.PlannerBase.$anonfun$translate$1(PlannerBase.scala:185) ~[flink-table-planner_2.12-1.15.4.jar:1.15.4]
>     at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.Iterator.foreach(Iterator.scala:937) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.Iterator.foreach$(Iterator.scala:937) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226) ~[scala-library-2.12.7.jar:?]
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[scala-library-2.12.7.jar:?]
>     at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:185) ~[flink-table-planner_2.12-1.15.4.jar:1.15.4]
>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1656) ~[flink-table-api-java-1.15.4.jar:1.15.4]
>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:782) ~[flink-table-api-java-1.15.4.jar:1.15.4]
>     at org.apache.flink.table.api.internal.StatementSetImpl.execute(StatementSetImpl.java:108) ~[flink-table-api-java-1.15.4.jar:1.15.4]
>     at org.apache.inlong.sort.parser.result.FlinkSqlParseResult.executeLoadSqls(FlinkSqlParseResult.java:84) ~[?:?]
>     at org.apache.inlong.sort.parser.result.FlinkSqlParseResult.execute(FlinkSqlParseResult.java:63) ~[?:?]
>     at org.apache.inlong.sort.Entrance.main(Entrance.java:99) ~[?:?]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_144]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_144]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_144]
>     at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_144]
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-clients-1.15.4.jar:1.15.4]
>     ... 12 more
> Caused by: java.lang.IllegalStateException: Could not find any jdbc dialect factory that can handle url 'jdbc:mysql://10.130.1.12:3306/inlong?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=Asia/Shanghai&autoDeserialize=false&allowUrlInLocalInfile=false&allowLoadLocalInfile=false' that implements 'org.apache.inlong.sort.jdbc.dialect.JdbcDialectFactory' in the classpath.
> Available factories are:
> org.apache.inlong.sort.jdbc.dialect.clickhouse.ClickHouseDialectFactory
>
> After analysis, I found that sort-connector-jdbc-v1.15-1.13.0.jar does not contain MysqlDialectFactory. I don't understand why the v1.15 version removed MysqlDialect, yet InLong still uses it.
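The "Could not find any jdbc dialect factory" error in the quoted trace comes from an SPI-style lookup: the connector collects every JdbcDialectFactory registered on the classpath and picks the first one whose accepts check matches the JDBC URL. Because the v1.15 jar apparently registers only a ClickHouse factory, any jdbc:mysql URL fails at job-planning time. Below is a minimal sketch of that selection logic; the interface and factory names are illustrative stand-ins, not InLong's actual classes.

```java
import java.util.List;
import java.util.Optional;

public class DialectLookup {

    // Illustrative stand-in for org.apache.inlong.sort.jdbc.dialect.JdbcDialectFactory;
    // the method names and shapes here are assumptions, not InLong's real API.
    interface DialectFactory {
        boolean acceptsURL(String url);
        String name();
    }

    static DialectFactory factory(String name, String urlPrefix) {
        return new DialectFactory() {
            public boolean acceptsURL(String url) { return url.startsWith(urlPrefix); }
            public String name() { return name; }
        };
    }

    // In the real connector this list is discovered via java.util.ServiceLoader,
    // driven by META-INF/services entries packaged inside the sort-connector jar.
    // Only ClickHouse is registered here, mirroring the log line
    // "Available factories are: ...ClickHouseDialectFactory".
    static final List<DialectFactory> FACTORIES =
            List.of(factory("ClickHouse", "jdbc:clickhouse:"));

    static DialectFactory load(String url) {
        Optional<DialectFactory> match =
                FACTORIES.stream().filter(f -> f.acceptsURL(url)).findFirst();
        // When no registered factory accepts the URL, we fail the same way the
        // job did: an IllegalStateException at sink-creation time.
        return match.orElseThrow(() -> new IllegalStateException(
                "Could not find any jdbc dialect factory that can handle url '" + url + "'"));
    }

    public static void main(String[] args) {
        System.out.println(load("jdbc:clickhouse://host:8123/db").name()); // ClickHouse
        try {
            load("jdbc:mysql://10.130.1.12:3306/inlong");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this model, a connector build that ships (and registers) MysqlDialectFactory adds a factory whose check accepts jdbc:mysql URLs, which is exactly what the exception says is missing.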

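The comparison step described above (confirming that a connector jar does or does not contain MysqlDialectFactory) can be done programmatically by listing the jar's class entries. This is a self-contained sketch using the standard java.util.jar API; it builds a throwaway jar so it runs anywhere, but in practice you would point containsClass at sort-connector-jdbc-v1.15-1.13.0.jar.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarScan {

    // True if the jar has an entry whose path ends with the given simple
    // class-file name, e.g. "MysqlDialectFactory.class".
    static boolean containsClass(File jar, String simpleName) throws Exception {
        try (JarFile jf = new JarFile(jar)) {
            return jf.stream().anyMatch(e -> e.getName().endsWith("/" + simpleName));
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny demo jar containing only the ClickHouse factory entry,
        // mimicking what the v1.15 connector jar was observed to ship.
        File jar = File.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry(
                "org/apache/inlong/sort/jdbc/dialect/clickhouse/ClickHouseDialectFactory.class"));
            out.closeEntry();
        }
        System.out.println(containsClass(jar, "ClickHouseDialectFactory.class")); // true
        System.out.println(containsClass(jar, "MysqlDialectFactory.class"));      // false
        jar.delete();
    }
}
```

Equivalently, `jar tf <connector>.jar` piped through a grep for `DialectFactory` shows the same information from the command line.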