Does flink-connector-jdbc support upsert with values?

2022-03-07 Thread ????
Can it do `insert into x() values() ON DUPLICATE KEY UPDATE`?
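For context, the Flink JDBC connector does not take a hand-written ON DUPLICATE KEY UPDATE clause; declaring a PRIMARY KEY on the sink table puts it in upsert mode, and the MySQL dialect then generates that statement itself. A minimal sketch (table name, columns, and connection settings are placeholders):

```sql
-- Sketch of a JDBC sink table in upsert mode; names and URL are placeholders.
CREATE TABLE jdbc_sink (
  id  BIGINT,
  cnt BIGINT,
  PRIMARY KEY (id) NOT ENFORCED  -- the declared key switches the sink to upsert mode
) WITH (
  'connector'  = 'jdbc',
  'url'        = 'jdbc:mysql://localhost:3306/mydb',
  'table-name' = 'x',
  'username'   = '...',
  'password'   = '...'
);
-- With the key declared, the MySQL dialect writes rows as
-- INSERT ... ON DUPLICATE KEY UPDATE under the hood.
```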




--  --
From: "payne_z"

Unsubscribe

2022-03-07 Thread 天下五帝东
Unsubscribe

Question about k8s native session mode

2022-03-07 Thread 崔深圳
In k8s native session mode with HA configured and three JobManagers, web UI access through the rest
service always gets routed to a non-leader node. Is there any way to make it stable?


Unsubscribe

2022-03-07 Thread liber xue
Unsubscribe


Re: Creating a Hive catalog on JDK 11 throws an error

2022-03-07 Thread zhangmang1
You can try Hive 2.3.7, which fixes some of Hive's JDK 11 incompatibilities. We have been running it in production for over a year.
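For reference, the registration that fails in the stack trace can also be expressed as Flink SQL, which goes through the same HiveCatalog code path. A minimal sketch (catalog name and conf directory are placeholders):

```sql
-- Sketch: registering a Hive catalog from Flink SQL; name and path are placeholders.
CREATE CATALOG myhive WITH (
  'type'          = 'hive',
  'hive-conf-dir' = '/opt/hive-conf'  -- directory containing hive-site.xml
);
USE CATALOG myhive;
```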

At 2021-11-22 12:00:07, "aiden" <18765295...@163.com> wrote:
>Help: after upgrading from JDK 8 to 11, using Hive as the catalog for Flink tables throws an error.
>I traced it to bsTableEnv.registerCatalog(catalogName, catalog); the exception is:
>11:55:22.343 [main] ERROR hive.log - Got exception: java.lang.ClassCastException class [Ljava.lang.Object; cannot be cast to class [Ljava.net.URI; ([Ljava.lang.Object; and [Ljava.net.URI; are in module java.base of loader 'bootstrap')
>java.lang.ClassCastException: class [Ljava.lang.Object; cannot be cast to class [Ljava.net.URI; ([Ljava.lang.Object; and [Ljava.net.URI; are in module java.base of loader 'bootstrap')
>at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:274) [hive-exec-2.1.1.jar:2.1.1]
>at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:210) [hive-exec-2.1.1.jar:2.1.1]
>at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
>at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [?:?]
>at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [?:?]
>at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) [?:?]
>at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1652) [hive-exec-2.1.1.jar:2.1.1]
>at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80) [hive-exec-2.1.1.jar:2.1.1]
>at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) [hive-exec-2.1.1.jar:2.1.1]
>at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:115) [hive-exec-2.1.1.jar:2.1.1]
>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
>at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
>at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>at org.apache.flink.table.catalog.hive.client.HiveShimV200.getHiveMetastoreClient(HiveShimV200.java:54) [flink-connector-hive_2.11-1.14.0.jar:1.14.0]
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:277) [flink-connector-hive_2.11-1.14.0.jar:1.14.0]
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:78) [flink-connector-hive_2.11-1.14.0.jar:1.14.0]
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:68) [flink-connector-hive_2.11-1.14.0.jar:1.14.0]
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:32) [flink-connector-hive_2.11-1.14.0.jar:1.14.0]
>at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:296) [flink-connector-hive_2.11-1.14.0.jar:1.14.0]
>at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:195) [flink-table-api-java-1.14.0.jar:1.14.0]
>at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:373) [flink-table-api-java-1.14.0.jar:1.14.0]
>at catalogTest.FlinkExecTableRun.flinkMain(FlinkExecTableRun.java:27) [classes/:?]
>at catalogTest.test.main(test.java:11) [classes/:?]
>11:55:22.348 [main] ERROR hive.log - Converting exception to MetaException
>Exception in thread "main" org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create Hive Metastore client
>at org.apache.flink.table.catalog.hive.client.HiveShimV200.getHiveMetastoreClient(HiveShimV200.java:61)
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:277)
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:78)
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:68)
>at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:32)
>at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:296)
>at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:195)
>at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:373)
>at catalogTest.FlinkExecTableRun.flinkMain(FlinkExecTableRun.java:27)
>at catalogTest.test.main(test.java:11)
>Caused by: java.lang.reflect.InvocationTargetException
>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 

Re: io.network.netty.exception

2022-03-07 Thread Zhilong Hong
Hi, 明文:

This error actually means the TaskManager was lost, usually because the TM was killed. You can check the TM's Flink and GC logs, the cluster-level NodeManager logs (in a YARN environment), or the K8s logs to see why the TM was killed. Typical causes: long GC pauses making the TM heartbeat time out so it gets killed, the TM exceeding its memory budget so the container/pod gets killed, and so on.

Best.
Zhilong

On Mon, Mar 7, 2022 at 10:18 AM 潘明文  wrote:

> HI, a Flink job that reads from Kafka and writes to HBase and Kafka keeps failing with:
>
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException:
> Connection unexpectedly closed by remote task manager 'cdh02/xxx:42892'.
> This might indicate that the remote task manager was lost.


Re:Re: io.network.netty.exception

2022-03-07 Thread 潘明文
HI,
  Thanks. Is there a good solution to this problem?

At 2022-03-08 02:20:57, "Zhilong Hong" wrote:
>Hi, 明文:
>
>This error actually means the TaskManager was lost, usually because the TM was killed. You can check the TM's Flink and GC logs, the cluster-level NodeManager logs (in a YARN environment), or the K8s logs to see why the TM was killed. Typical causes: long GC pauses making the TM heartbeat time out so it gets killed, the TM exceeding its memory budget so the container/pod gets killed, and so on.
>
>Best.
>Zhilong
>
>On Mon, Mar 7, 2022 at 10:18 AM 潘明文  wrote:
>
>> HI, a Flink job that reads from Kafka and writes to HBase and Kafka keeps failing with:
>>
>> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException:
>> Connection unexpectedly closed by remote task manager 'cdh02/xxx:42892'.
>> This might indicate that the remote task manager was lost.


Flink S3 checkpoint stuck IN_PROGRESS (100%) until failure

2022-03-07 Thread Sun.Zhu
hi all,
Flink 1.13.2: checkpoints written to S3 never succeed; they stay IN_PROGRESS until they time out and fail. Has anyone run into this?
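For reference, the settings that govern this behavior live in flink-conf.yaml; the timeout is what eventually turns a stuck IN_PROGRESS checkpoint into a failure. A sketch (the bucket path is a placeholder, and the matching flink-s3-fs-hadoop or flink-s3-fs-presto plugin must be installed):

```yaml
# Sketch: checkpoint settings relevant here; the bucket path is a placeholder.
state.checkpoints.dir: s3://my-bucket/flink-checkpoints
execution.checkpointing.interval: 1 min
execution.checkpointing.timeout: 10 min  # stuck IN_PROGRESS checkpoints fail after this
```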

Re: Flink S3 checkpoint stuck IN_PROGRESS (100%) until failure

2022-03-07 Thread Sun.Zhu
The image didn't come through.

https://postimg.cc/Z9XdxwSk

At 2022-03-08 14:05:39, "Sun.Zhu" <17626017...@163.com> wrote:

hi all,
Flink 1.13.2: checkpoints written to S3 never succeed; they stay IN_PROGRESS until they time out and fail. Has anyone run into this?