My only guess would be that you have two versions of the Apache Commons 
Collections jar on your classpath, or that the version you compiled against 
doesn’t match the one you’re running against, and that’s why you get:

Caused by: java.lang.ClassCastException: cannot assign instance of
org.apache.commons.collections.map.LinkedMap to field
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit
of type org.apache.commons.collections.map.LinkedMap

If that’s the case, the usual culprit is that your YARN environment provides 
a different version of the jar than the one bundled in your fat jar.
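
One quick way to confirm is to log where the class actually gets loaded 
from, both locally and on the cluster. A minimal diagnostic sketch (the 
class name here is made up, and you could just as well log the same thing 
from your job’s open() method):

    import org.apache.commons.collections.map.LinkedMap;

    // Hypothetical diagnostic class: prints which jar LinkedMap is resolved
    // from and by which classloader. Compare the output of a local run with
    // what you see in the TaskManager logs on YARN.
    public class ClassOriginCheck {
        public static void main(String[] args) {
            Class<?> clazz = LinkedMap.class;
            // getCodeSource() can be null for JDK bootstrap classes, but for
            // a library class like LinkedMap it should point at the jar file.
            System.out.println("Loaded from: "
                    + clazz.getProtectionDomain().getCodeSource().getLocation());
            System.out.println("Classloader: " + clazz.getClassLoader());
        }
    }

If the jar location printed on the cluster isn’t the one in your fat jar, 
you’ve found the duplicate.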

Though I would expect proper shading to prevent that from happening.
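
For example, with the maven-shade-plugin you can relocate the conflicting 
package inside your fat jar, so your code can’t accidentally pick up 
whatever version YARN puts on the classpath. A sketch of the relevant 
pom.xml fragment (the com.example.shaded prefix is just an illustration, 
use your own):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <relocations>
              <!-- Rewrite commons-collections references bundled in the
                   fat jar to a private package name. -->
              <relocation>
                <pattern>org.apache.commons.collections</pattern>
                <shadedPattern>com.example.shaded.commons.collections</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>

Running mvn dependency:tree -Dincludes=commons-collections is also a quick 
way to see which of your dependencies pull in which version.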

— Ken

> On Nov 20, 2018, at 5:05 AM, clay4444 <clay4me...@gmail.com> wrote:
> 
> hi all:
> 
> I know that when submitting Flink jobs, the official recommendation is to
> put all the dependencies and business logic into a fat jar, but our
> requirement is to keep the business logic separate and load its
> dependencies dynamically. I found one approach: submitting the job with
> the -yt and -C parameters so that it runs on YARN. The job can be
> submitted this way, but the following error is always reported at runtime.
> 
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot
> instantiate user function.
>       at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:239)
>       at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:104)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:267)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: cannot assign instance of
> org.apache.commons.collections.map.LinkedMap to field
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit
> of type org.apache.commons.collections.map.LinkedMap in instance of
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
>       at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
>       at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
>       at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2251)
>       at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
>       at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>       at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>       at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
>       at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
>       at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>       at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>       at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
>       at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:502)
>       at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:489)
>       at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:477)
>       at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:438)
>       at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:224)
>       ... 4 more
> 
> Does anyone know anything about this?
> 
> --
> Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
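
Regarding the -yt/-C submission itself: keep in mind that -C adds URLs to 
the user-code classpath on every node, so each URL has to be resolvable 
from all nodes. A typical invocation might look something like this (all 
paths and the entry class are made up for illustration):

    flink run -m yarn-cluster \
        -yt ./libs \
        -C file:///opt/flink/libs/commons-collections-3.2.2.jar \
        -c com.example.YourJob business-logic.jar

With file:// URLs that means the jars must exist at the same path on every 
machine, which is one more place where mismatched versions can sneak in.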

--------------------------
Ken Krugler
+1 530-210-6378
http://www.scaleunlimited.com
Custom big data solutions & training
Flink, Solr, Hadoop, Cascading & Cassandra



