Hi.

Could you tell us which version of Flink you are using? Also, what is the
version of commons-collections:commons-collections:jar when you compile the
SQL job, and which version is on the cluster? It's possible that the job was
compiled against one version and submitted to a cluster running a different
one.
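To rule out such a mismatch, it can help to compare the two copies directly. A small sketch (assuming a Maven build; the jar names below are examples, not your actual files):

```shell
# On the build machine, show which commons-collections version
# ends up on the compile classpath (assuming a Maven project):
#   mvn dependency:tree -Dincludes=commons-collections:commons-collections
#
# On the cluster, list the copy visible to the Flink distribution:
#   ls "$FLINK_HOME"/lib | grep -i commons-collections

# Small helper to pull the version out of a jar file name,
# e.g. commons-collections-3.2.2.jar -> 3.2.2
jar_version() {
  # strip the ".jar" suffix, then everything up to the last '-'
  basename "$1" .jar | sed 's/.*-//'
}

jar_version commons-collections-3.2.2.jar   # prints 3.2.2
```

If the two versions differ, aligning them (for example, pinning the cluster's version in the job's pom) is usually the first thing to try.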

I am also not sure how you submit your Flink SQL job. Do you submit it with
the SQL Client, or do you package it as a jar and execute that?
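Independent of the submission path, if the conflicting copy of commons-collections travels inside your user jar, one common workaround (a sketch, assuming a Maven build; the `shaded.` prefix is just an example, pick any private package) is to relocate it with the maven-shade-plugin so it can no longer collide with the copy on the cluster:

```xml
<!-- In the job's pom.xml: relocate commons-collections into a private
     namespace so the user-jar copy and the cluster copy cannot clash. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.commons.collections</pattern>
            <!-- example prefix; any package private to your job works -->
            <shadedPattern>shaded.org.apache.commons.collections</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```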

Best,
Shengkai

wang <24248...@163.com> wrote on Wed, May 25, 2022, at 15:04:

> Hi dear engineers,
>
> Recently I encountered another issue: after I submitted a Flink SQL job, it
> threw an exception:
>
>
> Caused by: java.lang.ClassCastException: cannot assign instance of 
> org.apache.commons.collections.map.LinkedMap to field 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit
>  of type org.apache.commons.collections.map.LinkedMap in instance of 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
>       at 
> java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
>  ~[?:1.8.0_162]
>       at 
> java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2284) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060) 
> ~[?:1.8.0_162]
>       at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060) 
> ~[?:1.8.0_162]
>       at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202) 
> ~[?:1.8.0_162]
>       at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060) 
> ~[?:1.8.0_162]
>       at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567) 
> ~[?:1.8.0_162]
>       at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427) 
> ~[?:1.8.0_162]
>       at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:615)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:600)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:587)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:541)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:322)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:159)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:548)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
>  ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759) 
> ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) 
> ~[flink-dist_2.11-1.13.2.jar:1.13.2]
>       at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_162]
>
>
>
>
> Then I searched the web and found one suggested fix for this issue:
> setting "classloader.resolve-order: parent-first" in flink-conf.yaml.
> Indeed, it resolves this exception.
>
> But unfortunately, I'm not allowed to change classloader.resolve-order to
> parent-first; it must stay child-first, since parent-first would bring me
> other classloading-related issues.
>
> Then I tried the following configuration in flink-conf.yaml:
> classloader.parent-first-patterns.additional:
> org.apache.commons.collections
>
> It also resolves that exception, but it feels weird, and I worry it could
> cause other issues.
>
> So my question is: are there other ways to solve the exception above?
> Thanks so much for your help!
>
>
> Thanks && Regards,
> Hunk
>
