[ https://issues.apache.org/jira/browse/FLINK-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574512#comment-16574512 ]

ASF GitHub Bot commented on FLINK-10107:
----------------------------------------

twalthr opened a new pull request #6528: [FLINK-10107] [e2e] Exclude conflicting SQL JARs from test
URL: https://github.com/apache/flink/pull/6528
 
 
   ## What is the purpose of the change
   
   This is a temporary solution that fixes the SQL Client end-to-end test for release builds.
   
   
   ## Brief change log
   
   - Do not include the conflicting SQL JARs in the library folder of the test (see the sketch below).
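
   The end-to-end test prepares a library folder with SQL JARs before starting the SQL Client, and the workaround is simply to leave the conflicting Kafka SQL JARs out of that folder. Below is a minimal, hypothetical Java sketch of such a filtering step; the real test is driven by a shell script, and the JAR name prefixes and directory layout here are assumptions, not the actual artifact names.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;

/**
 * Hypothetical sketch only: copy SQL JARs into the library folder used by the
 * SQL Client end-to-end test, skipping the ones that are known to conflict.
 * The real test is a shell script; the JAR name prefixes below are assumptions.
 */
public class PopulateSqlJarFolder {

    private static final List<String> EXCLUDED_PREFIXES = List.of(
            "flink-connector-kafka-0.9",   // assumed name of the Kafka 0.9 SQL JAR
            "flink-connector-kafka-0.10"); // assumed name of the Kafka 0.10 SQL JAR

    public static void main(String[] args) throws IOException {
        Path sqlJarDir = Paths.get(args[0]); // folder holding all built SQL JARs
        Path libDir = Paths.get(args[1]);    // library folder picked up by the SQL Client
        Files.createDirectories(libDir);

        try (DirectoryStream<Path> jars = Files.newDirectoryStream(sqlJarDir, "*.jar")) {
            for (Path jar : jars) {
                String name = jar.getFileName().toString();
                boolean conflicting = EXCLUDED_PREFIXES.stream().anyMatch(name::startsWith);
                if (!conflicting) {
                    // Only non-conflicting SQL JARs end up on the user classpath.
                    Files.copy(jar, libDir.resolve(name), StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```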
   
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): no
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
     - The S3 file system connector: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? no
     - If yes, how is the feature documented? not applicable
   


> SQL Client end-to-end test fails for releases
> ---------------------------------------------
>
>                 Key: FLINK-10107
>                 URL: https://issues.apache.org/jira/browse/FLINK-10107
>             Project: Flink
>          Issue Type: Bug
>          Components: Table API & SQL
>            Reporter: Timo Walther
>            Assignee: Timo Walther
>            Priority: Major
>              Labels: pull-request-available
>
> It seems that the SQL JARs for Kafka 0.10 and Kafka 0.9 have conflicts that only
> occur for release builds and not for SNAPSHOT builds. This might be due to their file
> names: depending on the file name, either the 0.9 JAR is loaded before the 0.10 JAR or
> vice versa.
> One of the following errors occurred:
> {code}
> 2018-08-08 18:28:51,636 ERROR org.apache.flink.kafka09.shaded.org.apache.kafka.clients.ClientUtils  - Failed to close coordinator
> java.lang.NoClassDefFoundError: org/apache/flink/kafka09/shaded/org/apache/kafka/common/requests/OffsetCommitResponse
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:473)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:357)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.maybeAutoCommitOffsetsSync(ConsumerCoordinator.java:439)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.close(ConsumerCoordinator.java:319)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.ClientUtils.closeQuietly(ClientUtils.java:63)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1277)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1258)
>     at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:286)
> Caused by: java.lang.ClassNotFoundException: org.apache.flink.kafka09.shaded.org.apache.kafka.common.requests.OffsetCommitResponse
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     ... 8 more
> {code}
> {code}
> java.lang.NoSuchFieldError: producer
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.invoke(FlinkKafkaProducer010.java:369)
>     at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>     at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>     at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>     at org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>     at org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
> {code}
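
The file-name dependence described in the issue matches how a flat URLClassLoader resolves duplicated classes: the copy from whichever JAR appears first in its URL list wins, and that order typically follows the directory listing, i.e. the file names. The following is a minimal, hypothetical Java sketch of that effect; it is not code from Flink or from the test, and the folder path and probed class name are placeholders.

{code}
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;

// Hypothetical sketch: load a class through a flat URLClassLoader built from
// all JARs of a folder, in file-name order. If two JARs contain the same
// class, the copy from the JAR that sorts first wins, so renaming a JAR
// (e.g. 0.9 vs. 0.10) can change which classes are actually used.
public class JarOrderDemo {

    public static void main(String[] args) throws Exception {
        File libDir = new File(args[0]);    // folder containing the SQL JARs
        String className = args[1];         // fully qualified class name to probe

        File[] jars = libDir.listFiles((dir, name) -> name.endsWith(".jar"));
        Arrays.sort(jars);                  // mimic a sorted directory listing

        URL[] urls = new URL[jars.length];
        for (int i = 0; i < jars.length; i++) {
            urls[i] = jars[i].toURI().toURL();
        }

        // Parent 'null' keeps the lookup limited to the given JARs.
        try (URLClassLoader loader = new URLClassLoader(urls, null)) {
            Class<?> clazz = Class.forName(className, false, loader);
            System.out.println(className + " was loaded from "
                    + clazz.getProtectionDomain().getCodeSource().getLocation());
        }
    }
}
{code}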


