When using a custom Kafka serializer, the corresponding class cannot be loaded. Analysis shows that the existing code first loads KafkaSerializerWrapper with the AppClassLoader, but the classes the user wrote and shipped with the submitted job cannot be loaded by the AppClassLoader. For them to be resolved through the FlinkUserCodeClassLoader, KafkaSerializerWrapper itself must also be loaded by the FlinkUserCodeClassLoader.
public void open(InitializationContext context) throws Exception {
final ClassLoader
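The idea can be illustrated with a minimal, self-contained sketch (this is not the actual KafkaSerializerWrapper code): resolve the class through an explicitly supplied classloader, the way open() would use the classloader provided by the runtime context; here the thread context classloader stands in for the FlinkUserCodeClassLoader.

```java
// Minimal sketch (assumption): a helper that resolves the user's serializer
// class through an explicitly supplied classloader, instead of whichever
// loader happened to load the wrapper class itself.
public class ClassLoaderDemo {

    static Class<?> resolveUserClass(String className, ClassLoader userCodeClassLoader)
            throws ClassNotFoundException {
        // Class.forName(name) alone resolves against the caller's own loader
        // (the AppClassLoader in the scenario above), which cannot see classes
        // shipped only in the user jar; passing the loader explicitly avoids that.
        return Class.forName(className, true, userCodeClassLoader);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the FlinkUserCodeClassLoader in this self-contained demo.
        ClassLoader userLoader = Thread.currentThread().getContextClassLoader();
        Class<?> clazz = resolveUserClass("java.lang.String", userLoader);
        System.out.println(clazz.getName()); // prints java.lang.String
    }
}
```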
Hey Vidya Sagar,
> Is the code actually using this compression library? Can this
> vulnerability issue be ignored?
I glanced at the LZ4 usage in Flink. IIUC, LZ4 is used to compress blocks in
the batch table runtime, which was introduced by FLINK-11858 [1]; FLINK-23447
[2] bumped it to 1.8. So, LZ4 is actually used
Hello Folks,
I have deployed an EKS cluster in AWS with the Flink Kubernetes operator on
it. I added the appropriate Datadog jars to the Docker image and I'm able
to see the Flink operator metrics in Datadog, but when I try to remap the
system scopes so instead of showing up as
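For context, scope remapping in Flink is driven by the metrics.scope.* options in flink-conf.yaml. A hedged sketch follows; the scope formats and the reporter name "dghttp" are illustrative choices, not defaults, so adjust them to your setup:

```yaml
# Sketch of flink-conf.yaml scope remapping (example values, not defaults).
# Variables like <host> and <job_name> are expanded by Flink per component.
metrics.scope.jm: flink.<host>.jobmanager
metrics.scope.jm-job: flink.<host>.jobmanager.<job_name>
metrics.scope.tm: flink.<host>.taskmanager.<tm_id>
metrics.scope.tm-job: flink.<host>.taskmanager.<tm_id>.<job_name>

# Datadog reporter ("dghttp" is just a chosen reporter id).
metrics.reporter.dghttp.factory.class: org.apache.flink.metrics.datadog.DatadogHttpReporterFactory
metrics.reporter.dghttp.apikey: <your-api-key>
```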
Hi Yuxia,
Thanks for getting back to me.
SortOperator is a class in Hudi that was copied from Flink. The code says:
/**
* Operator for batch sort.
*
* Copied from org.apache.flink.table.runtime.operators.sort.SortOperator to
* change the annotation.
*/
public class SortOperator extends
Hi,
We are running a Flink cluster (Flink version 1.14.3) on Kubernetes with
high-availability.type: kubernetes. We have 3 jobmanagers. When we send jobs
to the Flink cluster, we run a "flink list --jobmanager
flink-jobmanager:8081" command as part of the process.
At first, we succeeded to run
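For reference, a hedged sketch of the Kubernetes HA settings such a cluster typically uses on Flink 1.14 (the storageDir and cluster-id values below are placeholders, not taken from this setup):

```yaml
# Sketch of Kubernetes HA configuration for Flink 1.14 (placeholder values).
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-bucket/flink-ha   # placeholder path
kubernetes.cluster-id: my-flink-cluster                 # placeholder id
```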
Hi Hangxiang,
after some more digging, I think the job ID is maintained not because of
Flink HA, but because of the Kubernetes operator. It seems to me that the
"savepoint" upgrade mode should ideally change the job ID when starting from
the savepoint, but I'm not sure.
Regards,
Alexis.
On Mon., 12 Dec.
The Apache Flink community is very happy to announce the release of Apache
flink-connector-aws 4.0.0.
This release contains the Amazon Kinesis Data Streams/Firehose and Amazon
DynamoDB connectors with support for Flink 1.16.
Apache Flink® is an open-source stream processing framework for
Hi, hjw.
I think [1] & [2] may help you.
[1]
https://stackoverflow.com/questions/16866978/maven-cant-find-my-local-artifacts
[2]
https://stackoverflow.com/questions/32571400/remote-repositories-prevents-maven-from-resolving-remote-parent
On Fri, Dec 2, 2022 at 1:44 AM hjw wrote:
> Hi, team.
>
Hi Alexis.
IIUC, by default, the job ID of the new job should be different if you
restore from a stopped job? Whether the savepoint is cleaned up depends on
the savepoint restore mode.
Only in the case of failover should the job ID stay unchanged, and everything
in the checkpoint dir will be claimed, as you said.
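As a sketch of the restore modes being referred to (available since Flink 1.15, so not in the 1.14.3 cluster mentioned earlier; the savepoint path is a placeholder):

```yaml
# Sketch: the restore mode controls ownership of the restored snapshot's files.
# CLAIM    - Flink owns the files and may delete them once obsolete.
# NO_CLAIM - Flink leaves the snapshot untouched (first checkpoint is full).
# LEGACY   - pre-1.15 behavior.
execution.savepoint-restore-mode: NO_CLAIM
execution.savepoint.path: s3://my-bucket/savepoints/savepoint-xxxx  # placeholder
```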