Re: use flink 1.19 JDBC Driver can not find jdbc connector
I've solved it: the connector has to be registered in the SQL Gateway's classpath (its jar). This is inconvenient, though, and I still hope it can be improved.

Sent from my iPhone

> On 2024-05-10, at 11:56, Xuyang wrote:
>
> Hi, can you print the classloader and verify whether the jdbc connector exists in it?
>
> --
> Best!
> Xuyang
>
> At 2024-05-09 17:48:33, "McClone" wrote:
>> I put flink-connector-jdbc into flink/lib. With the Flink 1.19 JDBC Driver the jdbc connector can not be found, but using sql-client it works normally.
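The fix described above (making the connector visible to the SQL Gateway rather than to the JDBC client) might look like the sketch below. The jar version and paths are assumptions, not taken from the thread:

```shell
# Hypothetical version/paths -- adjust to your installation.
# The Flink JDBC Driver only talks to the SQL Gateway over REST, so the
# connector jar must be on the *gateway's* classpath, not the client's.
cp flink-connector-jdbc-3.2.0-1.19.jar "$FLINK_HOME/lib/"

# Restart the gateway so the new jar is picked up.
"$FLINK_HOME/bin/sql-gateway.sh" stop
"$FLINK_HOME/bin/sql-gateway.sh" start
```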
How can a Flink cluster write its logs directly into Elasticsearch?
Is there a relatively convenient and quick solution?
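One common approach (an assumption here, not something confirmed in the thread) is to leave Flink's Log4j 2 setup in place and forward log events as JSON over TCP to Logstash, which then indexes them into Elasticsearch. A minimal sketch for `conf/log4j.properties`, assuming the `log4j-layout-template-json` jar is on the classpath and that host/port are placeholders:

```properties
# Hedged sketch: ship Flink logs to Logstash, which writes to Elasticsearch.
# Host and port below are placeholders for your Logstash TCP input.
appender.logstash.name = logstash
appender.logstash.type = Socket
appender.logstash.host = logstash.example.com
appender.logstash.port = 4560
appender.logstash.layout.type = JsonTemplateLayout
appender.logstash.layout.eventTemplateUri = classpath:LogstashJsonEventLayoutV1.json

rootLogger.appenderRef.logstash.ref = logstash
```

Alternatively, a sidecar such as Filebeat tailing the TaskManager/JobManager log files avoids touching the Flink image at all.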
Re: flink operator HA job intermittently fails with "unable to update ConfigMapLock"
Can any expert offer some pointers??? Waiting online.

> From: kellygeorg...@163.com
> Date: 2024-03-11 20:29
> To: user-zh
> Subject: flink operator HA job intermittently fails with "unable to update ConfigMapLock"
>
> The jobmanager error is shown below; what could be the cause?
>
> Exception occurred while renewing lock: Unable to update ConfigMapLock
> Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [replace] for kind: [ConfigMap] with name: [flink-task-xx-configmap] in namespace: [default]
> Caused by: java.net.SocketTimeoutException: timeout
flink operator HA job intermittently fails with "unable to update ConfigMapLock"
The jobmanager error is shown below; what could be the cause?

Exception occurred while renewing lock: Unable to update ConfigMapLock
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [replace] for kind: [ConfigMap] with name: [flink-task-xx-configmap] in namespace: [default]
Caused by: java.net.SocketTimeoutException: timeout
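The `SocketTimeoutException` suggests the Kubernetes API server was slow to answer while the leader election lease was being renewed. One mitigation (a sketch, not a confirmed fix for this report) is to relax Flink's Kubernetes leader-election timings in the Flink configuration so a transient API-server hiccup does not abort lease renewal; the values below are illustrative:

```yaml
# Hedged sketch: lengthen leader-election timings (Flink defaults are
# shorter). Tune to your API server's observed latency.
high-availability.kubernetes.leader-election.lease-duration: 60 s
high-availability.kubernetes.leader-election.renew-deadline: 60 s
high-availability.kubernetes.leader-election.retry-period: 10 s
```

Checking API-server load and network latency from the JobManager pod is worth doing before tuning, since the timeout is the symptom rather than the cause.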