Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #175

2021-05-28 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 479573 lines...]
[2021-05-28T23:03:06.307Z] > Task :clients:jar UP-TO-DATE
[2021-05-28T23:03:06.307Z] > Task :server-common:compileJava UP-TO-DATE
[2021-05-28T23:03:06.307Z] > Task :storage:api:compileJava UP-TO-DATE
[2021-05-28T23:03:06.307Z] > Task :connect:api:compileJava UP-TO-DATE
[2021-05-28T23:03:06.307Z] > Task :connect:api:classes UP-TO-DATE
[2021-05-28T23:03:06.307Z] > Task :streams:compileJava UP-TO-DATE
[2021-05-28T23:03:06.307Z] > Task :streams:classes UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :connect:json:compileJava UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :connect:json:classes UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :storage:compileJava UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :connect:json:javadoc SKIPPED
[2021-05-28T23:03:07.248Z] > Task :streams:jar UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :raft:compileJava UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :connect:json:javadocJar
[2021-05-28T23:03:07.248Z] > Task :clients:compileTestJava UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :clients:testClasses UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :metadata:compileJava UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :core:compileJava NO-SOURCE
[2021-05-28T23:03:07.248Z] > Task :streams:generateMetadataFileForMavenJavaPublication
[2021-05-28T23:03:07.248Z] > Task :connect:json:compileTestJava UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :connect:json:testClasses UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :connect:json:testJar
[2021-05-28T23:03:07.248Z] > Task :connect:json:testSrcJar
[2021-05-28T23:03:07.248Z] > Task :clients:generateMetadataFileForMavenJavaPublication
[2021-05-28T23:03:07.248Z] > Task :core:compileScala UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :core:classes UP-TO-DATE
[2021-05-28T23:03:07.248Z] > Task :core:compileTestJava NO-SOURCE
[2021-05-28T23:03:08.189Z] > Task :core:compileTestScala UP-TO-DATE
[2021-05-28T23:03:08.189Z] > Task :core:testClasses UP-TO-DATE
[2021-05-28T23:03:11.785Z] > Task :connect:api:javadoc
[2021-05-28T23:03:11.785Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-05-28T23:03:11.785Z] > Task :connect:api:jar UP-TO-DATE
[2021-05-28T23:03:11.785Z] > Task :connect:api:generateMetadataFileForMavenJavaPublication
[2021-05-28T23:03:11.785Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-05-28T23:03:11.785Z] > Task :connect:json:jar UP-TO-DATE
[2021-05-28T23:03:11.785Z] > Task :connect:json:generateMetadataFileForMavenJavaPublication
[2021-05-28T23:03:12.736Z] > Task :connect:json:publishMavenJavaPublicationToMavenLocal
[2021-05-28T23:03:12.736Z] > Task :connect:json:publishToMavenLocal
[2021-05-28T23:03:12.736Z] > Task :connect:api:javadocJar
[2021-05-28T23:03:12.736Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-05-28T23:03:12.736Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-05-28T23:03:12.736Z] > Task :connect:api:testJar
[2021-05-28T23:03:12.736Z] > Task :connect:api:testSrcJar
[2021-05-28T23:03:12.736Z] > Task :connect:api:publishMavenJavaPublicationToMavenLocal
[2021-05-28T23:03:12.736Z] > Task :connect:api:publishToMavenLocal
[2021-05-28T23:03:14.498Z] > Task :streams:javadoc
[2021-05-28T23:03:15.441Z] > Task :streams:javadocJar
[2021-05-28T23:03:15.441Z] > Task :streams:compileTestJava UP-TO-DATE
[2021-05-28T23:03:15.441Z] > Task :streams:testClasses UP-TO-DATE
[2021-05-28T23:03:16.685Z] > Task :streams:testJar
[2021-05-28T23:03:16.685Z] > Task :streams:testSrcJar
[2021-05-28T23:03:16.685Z] > Task :streams:publishMavenJavaPublicationToMavenLocal
[2021-05-28T23:03:16.685Z] > Task :streams:publishToMavenLocal
[2021-05-28T23:03:18.447Z] > Task :clients:javadoc
[2021-05-28T23:03:19.388Z] > Task :clients:javadocJar
[2021-05-28T23:03:20.330Z] > Task :clients:testJar
[2021-05-28T23:03:21.268Z] > Task :clients:testSrcJar
[2021-05-28T23:03:21.268Z] > Task :clients:publishMavenJavaPublicationToMavenLocal
[2021-05-28T23:03:21.268Z] > Task :clients:publishToMavenLocal
[2021-05-28T23:03:21.268Z] 
[2021-05-28T23:03:21.268Z] Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
[2021-05-28T23:03:21.268Z] Use '--warning-mode all' to show the individual deprecation warnings.
[2021-05-28T23:03:21.268Z] See https://docs.gradle.org/7.0.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-05-28T23:03:21.268Z] 
[2021-05-28T23:03:21.268Z] Execution optimizations have been disabled for 2 invalid unit(s) of work during this build to ensure correctness.
[2021-05-28T23:03:21.268Z] Please consult deprecation warnings for more details.
[2021-05-28T23:03:21.268Z] 
[2021-05-28T23:03:21.268Z] BUILD SUCCESSFUL in 37s
[2021-05-28T23:03:21.268Z] 71 actionable 

Kafka Serializer for z/OS UNIX

2021-05-28 Thread Kevin McFadden
Is there a solution for the Kafka Serializer for the mainframe? I am
developing a process by which mainframe file records are sent to a
distributed Kafka topic. The initial implementation is working well.
However, when implementing the serializer functions, the process fails with
an SSL error, even though the JKS files have been updated with the proper
certificates.
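
For reference, a minimal sketch of the client-side SSL settings such a producer
needs (broker address, paths, and passwords are placeholders, and StringSerializer
stands in for the custom serializer):
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MainframeProducerSslConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");                       // placeholder
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/u/user/kafka.truststore.jks"); // placeholder z/OS UNIX path
        props.put("ssl.truststore.password", "changeit");                     // placeholder
        props.put("ssl.keystore.location", "/u/user/kafka.keystore.jks");     // placeholder
        props.put("ssl.keystore.password", "changeit");                       // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());      // custom serializer would go here

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "record"));      // "test-topic" is a placeholder
        }
    }
}
{code}
If the handshake still fails with these set, running the client with
-Djavax.net.debug=ssl usually shows which side is rejecting the certificate.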


[jira] [Created] (KAFKA-12864) Move KafkaEventQueue and timeline data structures into server-common

2021-05-28 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-12864:


 Summary: Move KafkaEventQueue and timeline data structures into server-common
 Key: KAFKA-12864
 URL: https://issues.apache.org/jira/browse/KAFKA-12864
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe
Assignee: Colin McCabe


Move KafkaEventQueue and timeline data structures into server-common



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #174

2021-05-28 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12863) Configure controller snapshot generation

2021-05-28 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-12863:
--

 Summary: Configure controller snapshot generation
 Key: KAFKA-12863
 URL: https://issues.apache.org/jira/browse/KAFKA-12863
 Project: Kafka
  Issue Type: Sub-task
  Components: controller
Reporter: Jose Armando Garcia Sancio
Assignee: Jose Armando Garcia Sancio






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12862) Update ScalaFMT version to latest

2021-05-28 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12862:
--

 Summary: Update ScalaFMT version to latest
 Key: KAFKA-12862
 URL: https://issues.apache.org/jira/browse/KAFKA-12862
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Josep Prat
Assignee: Josep Prat


When upgrading to the latest stable scalafmt version (2.7.5), lots of classes 
need to be reformatted because of the dangling-parentheses setting.

I thought it was worth creating an issue so there is also a place to discuss 
or document possible scalafmt config changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12861) MockProducer raises NPE when no Serializer

2021-05-28 Thread Jira
Gérald Quintana created KAFKA-12861:
---

 Summary: MockProducer raises NPE when no Serializer
 Key: KAFKA-12861
 URL: https://issues.apache.org/jira/browse/KAFKA-12861
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.7.1, 2.8.0, 2.7.0
Reporter: Gérald Quintana


Since KAFKA-10503, the MockProducer may raise NullPointerException when 
key/value serializers are not set:
15:58:16  java.lang.NullPointerException: null
15:58:16    at org.apache.kafka.clients.producer.MockProducer.send(MockProducer.java:307)

This occurs when using the MockProducer default constructor:
{code:java}
public MockProducer() {
 this(Cluster.empty(), false, null, null, null);
}{code}
The problem didn't occur with Kafka clients 2.6.

I understand this constructor is only for metadata, as described in the Javadoc. 
However, defaulting to a no-op serializer (MockSerializer) would be a better 
default. Removing the default constructor to force declaring a serializer 
could also be a solution.
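
Until a fix lands, a minimal sketch of the user-side workaround, using the
explicit-serializer constructor (the topic name and String types are only
illustrative):
{code:java}
import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MockProducerWorkaround {
    public static void main(String[] args) {
        // Passing serializers explicitly avoids the NPE thrown via the no-arg constructor.
        MockProducer<String, String> producer =
                new MockProducer<>(true, new StringSerializer(), new StringSerializer());
        producer.send(new ProducerRecord<>("test-topic", "key", "value")); // "test-topic" is illustrative
        System.out.println(producer.history().size()); // prints 1
    }
}
{code}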



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12860) Partition offset due to non-monotonically incrementing offsets in logs

2021-05-28 Thread KahnCheny (Jira)
KahnCheny created KAFKA-12860:
-

 Summary: Partition offset due to non-monotonically incrementing offsets in logs
 Key: KAFKA-12860
 URL: https://issues.apache.org/jira/browse/KAFKA-12860
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.1
Reporter: KahnCheny


We encountered an issue with Kafka after running out of heap space: several 
brokers stopped and then failed with errors during startup:

2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) to position 6553 no larger than the last offset appended (1125422119) to /dockerdata/kafka_data12/R_sh_level1_3_596_133-1/001124738758.index.
    at kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
    at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
    at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
    at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
    at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
    at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at kafka.log.LogSegment.recover(LogSegment.scala:278)
    at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
    at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
    at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at kafka.log.Log.loadSegmentFiles(Log.scala:322)
    at kafka.log.Log.loadSegments(Log.scala:405)
    at kafka.log.Log.<init>(Log.scala:218)
    at kafka.log.Log$.apply(Log.scala:1776)
    at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
    at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.


We dumped the log segment file, which shows that the partition contains 
non-monotonically incrementing offsets:

baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #173

2021-05-28 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-744: Migrate TaskMetadata to interface with internal implementation

2021-05-28 Thread Josep Prat
Hi there,
I updated the KIP page with Sophie's feedback. As she already mentioned,
the intention is to include this KIP in release 3.0.0 so we can avoid
a deprecation cycle for the getTaskId method introduced in KIP-740; I hope
I managed to capture this in the KIP description.

Just adding the link again for convenience:
https://cwiki.apache.org/confluence/x/XIrOCg

Thanks in advance,
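
(For readers skimming the archive: a minimal illustrative sketch of the
interface-plus-internal-implementation split under discussion; the method names
and package placement are placeholders, the KIP page above is authoritative.)
{code:java}
import java.util.Map;
import java.util.Set;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.TaskId;

// Public interface users program against; a concrete implementation would live
// under org.apache.kafka.streams.processor.internals, per the discussion below.
public interface TaskMetadata {
    TaskId taskId();                              // replaces the getTaskId() added in KIP-740
    Set<TopicPartition> topicPartitions();
    Map<TopicPartition, Long> committedOffsets();
    Map<TopicPartition, Long> endOffsets();
}
{code}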

On Thu, May 27, 2021 at 10:08 PM Josep Prat  wrote:

> Hi Sophie,
>
> Thanks for the feedback, I'll update the KIP tomorrow with your feedback.
> They are all good points, and you are right, my phrasing could be
> misleading.
>
>
> Best,
>
> On Thu, May 27, 2021 at 10:02 PM Sophie Blee-Goldman
>  wrote:
>
>> Thanks for the KIP! I'm on board with the overall proposal, just a few
>> comments:
>>
>> 1) The motivation section says
>>
>> TaskMetadata should have never been a class available for the general
>> > public, but more of an internal class
>>
>>
>> which is a bit misleading as it seems to imply that TaskMetadata itself
>> was
>> never meant to be part of the public API
>> at all. It might be better to phrase this as "TaskMetadata was never
>> intended to be a public class that a user might
>> need to instantiate, but rather an API for exposing metadata which is
>> better served as an interface" --- or something
>> to that effect.
>>
>> 2) You touch on this in a later section, but it would be good to call out
>> directly in the *Public Interfaces* section that
>> you are proposing to remove the `public TaskId getTaskId()` method that we
>> added in KIP-740. Also I just want to
>> note that to do so will require getting this KIP into 3.0, otherwise we'll
>> need to go through a deprecation cycle for
>> that API. I don't anticipate this being a problem as KIP freeze is still
>> two weeks away, but it would be good to clarify.
>>
>> 3) nit: we should put the new internal implementation class under
>> the org.apache.kafka.streams.processor.internals
>> package instead of under org.apache.kafka.streams.internals. But this is
>> an
>> implementation detail and as such
>> doesn't need to be covered by the KIP in the first place.
>>
>> - Sophie
>>
>> On Thu, May 27, 2021 at 1:55 AM Josep Prat 
>> wrote:
>>
>> > I deliberately picked the most conservative approach of creating a new
>> > Interface, instead of transforming the current class into an interface.
>> > Feedback is most welcome!
>> >
>> > Best,
>> >
>> > On Thu, May 27, 2021 at 10:26 AM Josep Prat 
>> wrote:
>> >
>> > > Hi there,
>> > > I would like to propose KIP-744, to introduce TaskMetadata as an
>> > > interface and keep its implementation internal.
>> > > This KIP can be seen as a spin-off of KIP-740.
>> > >
>> > > https://cwiki.apache.org/confluence/x/XIrOCg
>> > >
>> > > Best,


-- 

Josep Prat

*Aiven Deutschland GmbH*

Immanuelkirchstraße 26, 10405 Berlin

Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen

*m:* +491715557497

*w:* aiven.io

*e:* josep.p...@aiven.io


[jira] [Created] (KAFKA-12859) Kafka Server Repeated Responses

2021-05-28 Thread Lea (Jira)
Lea created KAFKA-12859:
---

 Summary: Kafka Server Repeated Responses
 Key: KAFKA-12859
 URL: https://issues.apache.org/jira/browse/KAFKA-12859
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.1.0
Reporter: Lea


When the client produces data, the Kafka server repeatedly responds with the 
same correlationId. As a result, all subsequent requests on that socket fail to 
be sent.

The exception is as follows:
{code:java}
java.lang.IllegalStateException: Correlation id for response (30205) does not match request (30211), request header: RequestHeader(apiKey=PRODUCE, apiVersion=5, clientId=producer-1, correlationId=30211)
    at org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:943) ~[kafka-clients-2.4.0.jar!/:na]
    at org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:713) ~[kafka-clients-2.4.0.jar!/:na]
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:836) ~[kafka-clients-2.4.0.jar!/:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) ~[kafka-clients-2.4.0.jar!/:na]
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:335) ~[kafka-clients-2.4.0.jar!/:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244) ~[kafka-clients-2.4.0.jar!/:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
{code}


--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12858) dynamically update the ssl certificates of kafka connect worker without restarting connect process.

2021-05-28 Thread kaushik srinivas (Jira)
kaushik srinivas created KAFKA-12858:


 Summary: dynamically update the ssl certificates of kafka connect worker without restarting connect process.
 Key: KAFKA-12858
 URL: https://issues.apache.org/jira/browse/KAFKA-12858
 Project: Kafka
  Issue Type: Improvement
Reporter: kaushik srinivas


Hi,

 

We are trying to update the SSL certificates of a Kafka Connect worker that are 
due to expire. Is there any way to dynamically update the SSL certificates of a 
Connect worker, as is possible for Kafka brokers using the kafka-configs.sh 
script?

If not, what is the recommended way to update the SSL certificates of a Kafka 
Connect worker without disrupting the existing traffic?
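
For comparison, the broker-side dynamic keystore update referred to above
(kafka-configs.sh) can also be issued through the AdminClient; a minimal sketch,
with the broker id, listener name, and keystore path as placeholders:
{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerKeystoreUpdate {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address

        try (Admin admin = Admin.create(props)) {
            // Per-broker dynamic config for broker id 0 (placeholder).
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            // Point the (placeholder) "external" listener at the renewed keystore;
            // the broker reloads it without a restart.
            AlterConfigOp op = new AlterConfigOp(
                new ConfigEntry("listener.name.external.ssl.keystore.location",
                                "/etc/kafka/ssl/renewed.keystore.jks"), // placeholder path
                AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                    Collections.singletonMap(broker, Collections.singletonList(op)))
                 .all().get();
        }
    }
}
{code}
Whether Connect exposes an equivalent path for its worker listeners is exactly 
the question here, so the sketch above is only the broker-side reference point.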



--
This message was sent by Atlassian Jira
(v8.3.4#803005)