[jira] [Created] (KAFKA-9565) Implementation of Tiered Storage SPI to integrate with S3

2020-02-18 Thread Alexandre Dupriez (Jira)
Alexandre Dupriez created KAFKA-9565:


 Summary: Implementation of Tiered Storage SPI to integrate with S3
 Key: KAFKA-9565
 URL: https://issues.apache.org/jira/browse/KAFKA-9565
 Project: Kafka
  Issue Type: Sub-task
Reporter: Alexandre Dupriez






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-498: Add client-side configuration for maximum response size to protect against OOM

2020-02-18 Thread Alexandre Dupriez
Hello,

Thanks, David, for proposing a new PR [1] for KIP-498 [2] to address KAFKA-4090 [3].

I find it interesting; I wonder how it stands with respect to avoiding
inter-layer violations.

[1] https://github.com/apache/kafka/pull/8066
[2] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-498%3A+Add+client-side+configuration+for+maximum+response+size+to+protect+against+OOM
[3] https://issues.apache.org/jira/browse/KAFKA-4090
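
For readers following the thread below: a minimal sketch of what such a
client-side guard against oversized responses could look like (illustrative
only; the class name and config are hypothetical, and this is not the code
from the PR):

{code:java}
import java.nio.ByteBuffer;

// Hypothetical sketch: reject implausibly large response sizes before
// allocating a receive buffer, instead of failing with an OOM later.
public final class BoundedReceive {

    private final int maxResponseSize; // e.g. a new max.response.size client config

    public BoundedReceive(final int maxResponseSize) {
        this.maxResponseSize = maxResponseSize;
    }

    /** Validates the 4-byte size prefix read off the wire before allocating. */
    public ByteBuffer allocateFor(final int announcedSize) {
        if (announcedSize < 0 || announcedSize > maxResponseSize) {
            // An SSL handshake record misread as a plaintext size prefix often
            // yields a huge value here, which would otherwise trigger the OOM.
            throw new IllegalStateException("Invalid response size " + announcedSize
                + " (max allowed: " + maxResponseSize + ")");
        }
        return ByteBuffer.allocate(announcedSize);
    }
}
{code}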

Le mar. 6 août 2019 à 15:37, Gokul Ramanan Subramanian
 a écrit :
>
> Hi Alexandre.
>
> Thanks for this analysis.
>
> IMHO, there are 4 ways to go about this:
>
> 1. We don't fix the bug directly but instead update the Kafka documentation
> telling clients to configure themselves correctly - Silly but easy to
> achieve.
> 2. We adopt Stanislav's solution that fixes the problem - Easy to achieve,
> potentially adds inflexibility in the future if we want to change handshake
> protocol. However, changing the handshake protocol is going to be a
> backwards incompatible change anyway with or without Stanislav's solution.
> 3. We adopt Alexandre's solution - Easy to achieve, but has shortcomings
> Alexandre has highlighted.
> 4. We pivot KIP-498 to focus on standardizing the handshake protocol - Not
> easy to achieve, since this will be a backwards incompatible change and
> will require client and server redeployments in correct order. Further,
> this can be a hard problem to solve given that various transport layer
> protocols have different headers. In order for the "new handshake" protocol
> to work, it would have to work in the same namespace as those headers,
> which will require careful tuning of handshake constants.
>
> Any thoughts from the community on how we can proceed?
>
> Thanks.
>
> On Tue, Aug 6, 2019 at 12:41 PM Alexandre Dupriez <
> alexandre.dupr...@gmail.com> wrote:
>
> > Hello,
> >
> > I wrote a small change [1] to make clients validate the size of messages
> > received from a broker at the protocol-level.
> > However I don't like the change. It does not really solve the problem which
> > originally motivated the KIP - that is, protocol mismatch (plaintext to SSL
> > endpoint). More specifically, a few problems I can see are listed below. This
> > is a non-exhaustive list. They have also been added to KIP-498 [2].
> >
> > 1) Incorrect failure mode
> > As was observed experimentally and as can be seen in the integration tests,
> > when an invalid size is detected and the exception InvalidReceiveException
> > is thrown, consumers and producers keep retrying until the poll timeout
> > expires or the maximum number of retries is reached. This is incorrect if
> > we consider the use case of protocol mismatch which motivated this change.
> > Indeed, producers and consumers would need to fail fast as retries will
> > only prolong the time to remediation from users/administrators.
> >
> > 2) Incomplete remediation
> > The proposed change cannot provide a definite guarantee against OOM in all
> > scenarios - for instance, it will still manifest if the maximum size is set
> > to 100 MB and the JVM is under memory pressure and has less than 100 MB of
> > allocatable memory.
> >
> > 3) Illegitimate message rejection
> > Even worse: what if the property is incorrectly configured and prevents
> > legitimate messages from reaching the client?
> >
> > 4) Unclear configuration parameter
> > 4.a) The name max.response.size is meant to mirror the existing
> > max.request.size from the producer's configuration properties. However,
> > max.request.size is meant to check the size of producer records as provided
> > by a client, while max.response.size checks the size directly decoded
> > from the network according to Kafka's binary protocol.
> > 4.b) On the broker, the property socket.request.max.bytes is used to
> > validate the size of messages received by the server. The new property
> > serves the same purpose, which introduces duplicated semantics, even if one
> > property is characterised with the keyword "request" and the other with
> > "response", in both cases reflecting the perspective adopted from either a
> > client or a server.
> >
> > Please let me know what you think. An alternative mitigation may be worth
> > investigating for the detection of protocol mismatch in the client.
> >
> > [1] https://github.com/apache/kafka/pull/7160
> > [2]
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-498%3A+Add+client-side+configuration+for+maximum+response+size+to+protect+against+OOM
> >
> > Le jeu. 1 août 2019 à 09:42, Alexandre Dupriez <
> > alexandre.dupr...@gmail.com>
> > a écrit :
> >
> > > Thanks Gokul and Stanislav for your answers.
> > >
> > > I went through the PR 5940 [1]. Indeed Gokul, your reasoning echoes the
> > > observations of Ismael about a potential inter-protocol layering violation.
> > >
> > > As you said Stanislav, the server-side SSL engine responds with an alert
> > > with code 80 (internal_error) from what I saw when reproducing the OOM.
> > > Since the Alert i

Kafka latency spikes analysis

2020-02-18 Thread Paolo Moriello
Hello,


I'm investigating Kafka latency. During my analysis I was
able to reproduce a scenario in which Kafka latency repeatedly spikes at a
constant frequency, for small amounts of time.

In my tests, in particular, latency could spike every ~2 minutes
(depending on the throughput and input...) from an avg of ~3ms up to a
max of 500ms+ (p95-p99).

See image: https://imagizer.imageshack.com/img922/5308/glhkO4.png


Further investigations showed that this is most likely caused by log
segments being rolled over.


Has anybody ever noticed anything like this? Do you know if it is possible
to tune p99 performance in order to reduce/eliminate the latency spikes?


Thanks,

Paolo


Test configuration:

   - 15 brokers
   - 6 producers, ack=1, no compression
   - 1 topic, 90 partitions
   - Kafka 2.2.1
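
If it helps others experiment: rolling segments less often should at least
reduce how frequently the spikes occur. An illustrative broker-side sketch
(values hypothetical, with obvious trade-offs for retention, recovery and
compaction granularity):

{code}
# Broker-wide: larger segments roll less often (default is 1 GiB)
log.segment.bytes=2147483647

# Or per topic, via the topic-level equivalent:
# segment.bytes=2147483647
{code}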


[jira] [Created] (KAFKA-9566) ProcessorContextImpl#forward throws NullPointerException if invoked from DeserializationExceptionHandler

2020-02-18 Thread Tomas Mi (Jira)
Tomas Mi created KAFKA-9566:
---

 Summary: ProcessorContextImpl#forward throws NullPointerException 
if invoked from DeserializationExceptionHandler
 Key: KAFKA-9566
 URL: https://issues.apache.org/jira/browse/KAFKA-9566
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.0
Reporter: Tomas Mi


Hi, I am trying to implement a custom DeserializationExceptionHandler which would 
forward an exception to downstream processor(s), but 
ProcessorContextImpl#forward throws a NullPointerException if invoked from this 
custom handler.

Handler implementation:
{code:title=MyDeserializationExceptionHandler.java}
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.To;

public class MyDeserializationExceptionHandler implements DeserializationExceptionHandler {

    @Override
    public void configure(final Map<String, ?> configs) {
    }

    @Override
    public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                 final ConsumerRecord<byte[], byte[]> record,
                                                 final Exception exception) {
        // forwarding from the handler triggers the NPE described below
        context.forward(null, exception, To.child("error-processor"));
        return DeserializationHandlerResponse.CONTINUE;
    }
}
{code}

Handler is wired as default deserialization exception handler:
{code}
private TopologyTestDriver initializeTestDriver(final StreamsBuilder streamBuilder) {
    final Topology topology = streamBuilder.build();
    final Properties props = new Properties();
    props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, "my-test-application");
    props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
    props.setProperty(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
    props.setProperty(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        MyDeserializationExceptionHandler.class.getName());
    return new TopologyTestDriver(topology, props);
}
{code}
 
Exception stacktrace:
{noformat}
org.apache.kafka.streams.errors.StreamsException: Fatal user code error in deserialization error callback
    at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:76)
    at org.apache.kafka.streams.processor.internals.RecordQueue.maybeUpdateTimestamp(RecordQueue.java:160)
    at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:101)
    at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:136)
    at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:742)
    at org.apache.kafka.streams.TopologyTestDriver.pipeInput(TopologyTestDriver.java:392)
...

Caused by: java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:165)
    at MyDeserializationExceptionHandler.handle(NewExceptionHandlerTest.java:204)
    at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:70)
    ... 33 more
{noformat}

Neither the DeserializationExceptionHandler nor the ProcessorContext javadocs 
mention that ProcessorContext#forward(...) must not be invoked from a 
DeserializationExceptionHandler, so I assume that this is a defect.
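
A possible workaround in the meantime is to skip ProcessorContext#forward 
entirely and publish the failed raw record to a dead-letter topic from inside 
the handler. A minimal sketch (the topic name "error-topic" is hypothetical, 
and only the bootstrap servers are reused from the application config):
{code:title=DeadLetterDeserializationExceptionHandler.java}
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;

public class DeadLetterDeserializationExceptionHandler implements DeserializationExceptionHandler {

    private KafkaProducer<byte[], byte[]> deadLetterProducer;

    @Override
    public void configure(final Map<String, ?> configs) {
        // reuse the application's bootstrap servers for the dead-letter producer
        final Map<String, Object> producerConfig = new HashMap<>();
        producerConfig.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
            configs.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG));
        deadLetterProducer = new KafkaProducer<>(producerConfig,
            new ByteArraySerializer(), new ByteArraySerializer());
    }

    @Override
    public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                 final ConsumerRecord<byte[], byte[]> record,
                                                 final Exception exception) {
        // do not call context.forward() here; ship the raw bytes out-of-band instead
        deadLetterProducer.send(new ProducerRecord<>("error-topic", record.key(), record.value()));
        return DeserializationHandlerResponse.CONTINUE;
    }
}
{code}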




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9567) Docs and system tests for ZooKeeper 3.5.7 and KIP-515

2020-02-18 Thread Ron Dagostino (Jira)
Ron Dagostino created KAFKA-9567:


 Summary: Docs and system tests for ZooKeeper 3.5.7 and KIP-515
 Key: KAFKA-9567
 URL: https://issues.apache.org/jira/browse/KAFKA-9567
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.5.0
Reporter: Ron Dagostino


These changes depend on [KIP-515: Enable ZK client to use the new TLS supported 
authentication|https://cwiki.apache.org/confluence/display/KAFKA/KIP-515%3A+Enable+ZK+client+to+use+the+new+TLS+supported+authentication],
 which was only added to 2.5.0.  The upgrade to ZooKeeper 3.5.7 was merged to 
both 2.5.0 and 2.4.1 via https://issues.apache.org/jira/browse/KAFKA-9515, but 
this change must only be merged to 2.5.0 (it will break the system tests if 
merged to 2.4.1).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Regarding KAFKA logs

2020-02-18 Thread Dhruvil Doshi
Respected sir,
I am Dhruvil Doshi. Currently, I am trying to monitor Kafka in Kibana.
However, I am not able to understand the logs in Kafka, i.e. server.log and
controller.log. If you have any documentation on these logs, could you please
share it with me?

Thanks and Regards
Dhruvil Doshi


[jira] [Created] (KAFKA-9568) Kstreams APPLICATION_SERVER_CONFIG is not updated with static membership

2020-02-18 Thread David J. Garcia (Jira)
David J. Garcia created KAFKA-9568:
--

 Summary: Kstreams APPLICATION_SERVER_CONFIG is not updated with 
static membership
 Key: KAFKA-9568
 URL: https://issues.apache.org/jira/browse/KAFKA-9568
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.4.0
Reporter: David J. Garcia


A Kafka Streams application with static membership and 
StreamsConfig.APPLICATION_SERVER_CONFIG set will NOT update the old server config 
upon restart of the application on a new host.

Steps to reproduce:

 
 # start two kstreams applications (with the same consumer group) and enable static 
membership (set the application server config to <host>:<port>; see the sketch below)
 # kill one of the applications and restart it on a new host (with a new IP) 
before the session timeout ends (so that a rebalance doesn't occur)
 # the other kstreams application will now have an invalid 
application_server_config
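
For reference, the configuration from step 1 might look roughly like the sketch 
below (host, port, and instance id are hypothetical):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

// ...
final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
// advertised <host>:<port> of this instance, used for interactive queries
props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, "10.0.0.1:8080");
// enables static membership for the embedded consumer
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG), "instance-1");
{code}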

Possible fix:

If an application restarts with a new host/identity etc., it could trigger a 
"light rebalance" where the other applications in the consumer group don't 
change partition assignments, but instead just get their configuration updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-02-18 Thread John Roesler
Thanks, Matthias!

Regarding numbers, it would be hard to know how many applications
would benefit, since we don't know how many applications there are,
or anything about their data sets or topologies. We could do a survey,
but it seems overkill if we take the conservative approach.

I have my own practical stream processing experience that tells me this
is absolutely critical for any moderate-to-large relational stream
processing use cases. I'll leave it to you to decide if you find that
convincing, but it's definitely not an _assumption_. I've also heard from
a few Streams users who have already had to implement their own
noop-suppression transformers in order to get to production scale.
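
(For illustration only: such a noop-suppression transformer might look roughly
like the sketch below, assuming byte[] values and a key-value store named
"previous-values" registered on the topology; this is not code from the KIP.)

{code:java}
import java.util.Arrays;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class NoopSuppressionTransformer<K> implements Transformer<K, byte[], KeyValue<K, byte[]>> {

    private KeyValueStore<K, byte[]> previous;

    @SuppressWarnings("unchecked")
    @Override
    public void init(final ProcessorContext context) {
        previous = (KeyValueStore<K, byte[]>) context.getStateStore("previous-values");
    }

    @Override
    public KeyValue<K, byte[]> transform(final K key, final byte[] value) {
        final byte[] old = previous.get(key);
        if (old != null && Arrays.equals(old, value)) {
            return null; // binary-equal to the previous value: a no-op update, drop it
        }
        previous.put(key, value);
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() {}
}
{code}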

Regardless, it sounds like we can agree on taking an opportunistic approach
and targeting the optimization just to use a binary-equality check at
stateful operators. (I'd also suggest in sink nodes, when we are about to
send old and new values, since they are also already present and serialized
at that point.) We could make the KIP even more vague, and just say that
we'll drop no-op updates "when possible".

I'm curious what Bruno and the others think about this. If it seems like
a good starting point, perhaps we could move to a vote soon and get to
work on the implementation!

Thanks,
-John

On Mon, Feb 17, 2020, at 20:54, Matthias J. Sax wrote:
> Talking about optimizations and reducing downstream load:
> 
> Do we actually have any numbers? I have the impression that this KIP is
> more or less built on the _assumption_ that there is a problem. Yes,
> there are some use cases that would benefit from this; But how many
> applications would actually benefit? And how much load reduction would
> they get?
> 
> The simplest approach (following John's idea to make baby steps) would be
> to apply the emit-on-change pattern only if there is a store. For this
> case we need to serialize old and new result anyway and thus a simple
> byte-array comparison is no overhead.
> 
> Sending `oldValues` by default would become expensive because we would
> need to serialize the recomputed old result, as well as the new result,
> to make the comparison (and we know the serialization is not cheap). We
> are facing a trade-off between CPU overhead and downstream load and I am
> not sure if we should hard code this. My original argument for sending
> `oldValues` was about semantics; but for an optimization, I am not sure
> if this would be the right choice.
> 
> For now, users who want to opt-in can force a materialization. A
> materialization may be expensive and if we see future demand, we could
> still add an option to send `oldValues` instead of materialization (this
> would at least save the store overhead). As we consider the KIP an
> optimization, a "config" seems to make sense.
> 
> 
> -Matthias
> 
> 
> On 2/17/20 5:21 PM, Richard Yu wrote:
> > Hi John!
> >
> > Thanks for the reply.
> >
> > About the changes we have discussed so far. I think upon further
> > consideration, we have been mostly talking about this from the perspective
> > that no stop-gap effort is acceptable. However, in recent discussion, if we
> > consider optimization, then it appears that the perspective I mentioned no
> > longer applies. After all, we are no longer concerned so much about
> > semantic correctness, but rather about reducing traffic as much as possible
> > without performance tradeoffs.
> >
> > In this case, I think a cache would be a good idea for stateless
> > operations. This cache will not be backed by a store obviously. We can
> > probably use Kafka's ThreadCache. We should be able to catch a large
> > portion of the no-ops if we at least store some results in the cache. Not
> > all will be caught, but I think the impact will be significant.
> >
> > On another note, I think that we should implement competing proposals i.e.
> > one where we forward both old and new values with a reasonable proportion
> > of artificial no-ops (we do not necessarily have to rely on equals so much
> > as comparing the serialized binary data after the operation), and in
> > another scenario, the cache for stateless ops. It would be unreasonable if
> > we completely disregard either approach, since they both have merit. The
> > reason for implementing both is to perform benchmark tests on them, and
> > compare them with the original. This way, we can more clearly see what
> > the drawbacks and the gains are. So far, we have been discussing only
> > hypotheticals, and if we continue to do so, I think it is likely no ground
> > will be gained.
> >
> > After all, what we seek is optimization, and performance benchmarks will be
> > mandatory for a KIP of this nature.
> >
> > Hope this helps,
> > Richard
> >
> >
> >
> >
> >
> >
> > On Mon, Feb 17, 2020 at 2:12 PM John Roesler  wrote:
> >
> >> Hi again, all,
> >>
> >> Sorry on my part for my silence.
> >>
> >> I've just taken another look over the recent history of this discussion. It
> >> seems like the #1 point to clarify (because

[jira] [Created] (KAFKA-9569) RSM implementation for HDFS storage.

2020-02-18 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-9569:
-

 Summary: RSM implementation for HDFS storage.
 Key: KAFKA-9569
 URL: https://issues.apache.org/jira/browse/KAFKA-9569
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Satish Duggana
Assignee: Ying Zheng


This is about implementing `RemoteStorageManager` for HDFS to verify that the 
proposed SPIs are sufficient. It looks like the existing RSM interface should 
suffice; if needed, we will discuss any required changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9512) Flaky Test LagFetchIntegrationTest.shouldFetchLagsDuringRestoration

2020-02-18 Thread Vinoth Chandar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinoth Chandar resolved KAFKA-9512.
---
Resolution: Fixed

Closing since the PR is now landed

> Flaky Test LagFetchIntegrationTest.shouldFetchLagsDuringRestoration
> ---
>
> Key: KAFKA-9512
> URL: https://issues.apache.org/jira/browse/KAFKA-9512
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.5.0
>Reporter: Matthias J. Sax
>Assignee: Vinoth Chandar
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.5.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/497/testReport/junit/org.apache.kafka.streams.integration/LagFetchIntegrationTest/shouldFetchLagsDuringRestoration/]
> {quote}java.lang.NullPointerException at 
> org.apache.kafka.streams.integration.LagFetchIntegrationTest.shouldFetchLagsDuringRestoration(LagFetchIntegrationTest.java:306){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9540) Application getting "Could not find the standby task 0_4 while closing it" error

2020-02-18 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-9540.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Application getting "Could not find the standby task 0_4 while closing it" 
> error
> 
>
> Key: KAFKA-9540
> URL: https://issues.apache.org/jira/browse/KAFKA-9540
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0, 2.5.0
>Reporter: Badai Aqrandista
>Priority: Minor
> Fix For: 2.4.1, 2.6.0
>
>
> Because of the following line, there is a possibility that some standby 
> tasks might not be created:
> https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L436
> This then causes this line to not add the task to the standby task list:
> https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L299
> But this line assumes that all standby tasks are created and added to the 
> standby list:
> https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/TaskManager.java#L168
> This results in the user getting this error message on the next 
> PARTITION_ASSIGNMENT state:
> {noformat}
> Could not find the standby task 0_4 while closing it 
> (org.apache.kafka.streams.processor.internals.AssignedStandbyTasks:74)
> {noformat}
> But the harm caused by this issue is minimal: no standby task for some 
> partitions, and it is recreated on the next rebalance anyway. So I suggest 
> lowering this message to WARN, or perhaps logging a WARN when a standby task 
> could not be created.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-02-18 Thread Richard Yu
Hi all,

We are definitely making progress!

@John should I emphasize in the proposed behavior changes that we are only
doing binary equality checks for stateful operators?
It looks like we have come close to finalizing this part of the KIP. (I
will note in the KIP that this proposal is intended for optimization, not
semantic correctness)

I do think maybe we still have one other detail we need to discuss. So far,
there has been quite a bit of back and forth about what the behavior of
aggregations should look like in emit on change. I have seen
multiple competing proposals, so I am not completely certain which one we
should go with, or how we will be able to compromise between them.

Let me know what your thoughts are on this matter, since we are probably
close to wrapping up most other stuff.
@Matthias J. Sax and @Bruno, see what you think
about this.

Best,
Richard



On Tue, Feb 18, 2020 at 9:06 AM John Roesler  wrote:

> Thanks, Matthias!
>
> Regarding numbers, it would be hard to know how many applications
> would benefit, since we don't know how many applications there are,
> or anything about their data sets or topologies. We could do a survey,
> but it seems overkill if we take the conservative approach.
>
> I have my own practical stream processing experience that tells me this
> is absolutely critical for any moderate-to-large relational stream
> processing use cases. I'll leave it to you to decide if you find that
> convincing, but it's definitely not an _assumption_. I've also heard from
> a few Streams users who have already had to implement their own
> noop-suppression transformers in order to get to production scale.
>
> Regardless, it sounds like we can agree on taking an opportunistic approach
> and targeting the optimization just to use a binary-equality check at
> stateful operators. (I'd also suggest in sink nodes, when we are about to
> send old and new values, since they are also already present and serialized
> at that point.) We could make the KIP even more vague, and just say that
> we'll drop no-op updates "when possible".
>
> I'm curious what Bruno and the others think about this. If it seems like
> a good starting point, perhaps we could move to a vote soon and get to
> work on the implementation!
>
> Thanks,
> -John
>
> On Mon, Feb 17, 2020, at 20:54, Matthias J. Sax wrote:
> > Talking about optimizations and reducing downstream load:
> >
> > Do we actually have any numbers? I have the impression that this KIP is
> > more or less built on the _assumption_ that there is a problem. Yes,
> > there are some use cases that would benefit from this; But how many
> > applications would actually benefit? And how much load reduction would
> > they get?
> >
> > The simplest approach (following John's idea to make baby steps) would be
> > to apply the emit-on-change pattern only if there is a store. For this
> > case we need to serialize old and new result anyway and thus a simple
> > byte-array comparison is no overhead.
> >
> > Sending `oldValues` by default would become expensive because we would
> > need to serialize the recomputed old result, as well as the new result,
> > to make the comparison (and we know the serialization is not cheap). We
> > are facing a trade-off between CPU overhead and downstream load and I am
> > not sure if we should hard code this. My original argument for sending
> > `oldValues` was about semantics; but for an optimization, I am not sure
> > if this would be the right choice.
> >
> > For now, users who want to opt-in can force a materialization. A
> > materialization may be expensive and if we see future demand, we could
> > still add an option to send `oldValues` instead of materialization (this
> > would at least save the store overhead). As we consider the KIP an
> > optimization, a "config" seems to make sense.
> >
> >
> > -Matthias
> >
> >
> > On 2/17/20 5:21 PM, Richard Yu wrote:
> > > Hi John!
> > >
> > > Thanks for the reply.
> > >
> > > About the changes we have discussed so far. I think upon further
> > > consideration, we have been mostly talking about this from the perspective
> > > that no stop-gap effort is acceptable. However, in recent discussion, if we
> > > consider optimization, then it appears that the perspective I mentioned no
> > > longer applies. After all, we are no longer concerned so much about
> > > semantic correctness, but rather about reducing traffic as much as possible
> > > without performance tradeoffs.
> > >
> > > In this case, I think a cache would be a good idea for stateless
> > > operations. This cache will not be backed by a store obviously. We can
> > > probably use Kafka's ThreadCache. We should be able to catch a large
> > > portion of the no-ops if we at least store some results in the cache. Not
> > > all will be caught, but I think the impact will be significant.
> > >
> > > On another note, I think that we should implement competing proposals i.e.
> > > one where we forward both old and new values with

Please give permission for KAFKA/KIP jiras

2020-02-18 Thread Galyó Csaba
Hello,

 

My username is "galyo". I would like to contribute to the Kafka project and
for that I would like to open KIPs and assign KAFKA jiras to myself.

At the moment I was only able to report a new jira, KAFKA-9546, but wasn't
able to assign it to myself.

Could you please add me as a developer to these jira projects?

Thanks,

Csaba



Re: Please give permission for KAFKA/KIP jiras

2020-02-18 Thread Guozhang Wang
Hello Csaba,

I will add you to the contributor list so you can assign to yourself.


Guozhang

On Tue, Feb 18, 2020 at 11:57 AM Galyó Csaba  wrote:

> Hello,
>
>
>
> My username is "galyo". I would like to contribute to the Kafka project and
> for that I would like to open KIPs and assign KAFKA jiras to myself.
>
>
>
> At the moment I was only able to report a new jira KAFKA-9546 but wasn't
> able to assign it to myself.
>
>
>
> Could you please add me as a developer to these jira projects?
>
>
>
> Thanks,
>
> Csaba
>
>
>
>

-- 
-- Guozhang


Re: Please give permission for KAFKA/KIP jiras

2020-02-18 Thread Guozhang Wang
I've added `galyo` to the list, and also I've just assigned you to the JIRA
ticket.


Guozhang

On Tue, Feb 18, 2020 at 12:04 PM Guozhang Wang  wrote:

> Hello Csaba,
>
> I will add you to the contributor list so you can assign to yourself.
>
>
> Guozhang
>
> On Tue, Feb 18, 2020 at 11:57 AM Galyó Csaba  wrote:
>
>> Hello,
>>
>>
>>
>> My username is "galyo". I would like to contribute to the Kafka project
>> and
>> for that I would like to open KIPs and assign KAFKA jiras to myself.
>>
>>
>>
>> At the moment I was only able to report a new jira KAFKA-9546 but wasn't
>> able to assign it to myself.
>>
>>
>>
>> Could you please add me as a developer to these jira projects?
>>
>>
>>
>> Thanks,
>>
>> Csaba
>>
>>
>>
>>
>
> --
> -- Guozhang
>


-- 
-- Guozhang


[jira] [Resolved] (KAFKA-8025) Flaky Test RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest#shouldForwardAllDbOptionsCalls

2020-02-18 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-8025.

Resolution: Fixed

> Flaky Test 
> RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest#shouldForwardAllDbOptionsCalls
> 
>
> Key: KAFKA-8025
> URL: https://issues.apache.org/jira/browse/KAFKA-8025
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.2.0
>Reporter: Konstantine Karantasis
>Assignee: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> At least one occurrence where the following unit test case failed on a Jenkins 
> job that didn't involve any related changes. 
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2783/consoleFull]
> I have not been able to reproduce it locally on Linux. (For instance, 20 
> consecutive runs of this class pass all test cases.)
> {code:java}
> 14:06:13 org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest > shouldForwardAllDbOptionsCalls STARTED
> 14:06:14 org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.shouldForwardAllDbOptionsCalls failed, log available in /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk11-scala2.12/streams/build/reports/testOutput/org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.shouldForwardAllDbOptionsCalls.test.stdout
> 14:06:14
> 14:06:14 org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest > shouldForwardAllDbOptionsCalls FAILED
> 14:06:14     java.lang.AssertionError:
> 14:06:14     Expected: a string matching the pattern 'Unexpected method call DBOptions\.baseBackgroundCompactions((.*
> 14:06:14     *)*):'
> 14:06:14          but: was "Unexpected method call DBOptions.baseBackgroundCompactions():\n    DBOptions.close(): expected: 3, actual: 0"
> 14:06:14         at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
> 14:06:14         at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
> 14:06:14         at org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.verifyDBOptionsMethodCall(RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.java:121)
> 14:06:14         at org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.shouldForwardAllDbOptionsCalls(RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest.java:101)
> 14:06:14
> 14:06:14 org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest > shouldForwardAllColumnFamilyCalls STARTED
> 14:06:14
> 14:06:14 org.apache.kafka.streams.state.internals.RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapterTest > shouldForwardAllColumnFamilyCalls PASSED
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.2-jdk8-old #211

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8025: Fix flaky RocksDB test (#8126)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 48dcaa61e801a30bcb8f6c35d9afe9a69214fef5 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 48dcaa61e801a30bcb8f6c35d9afe9a69214fef5
Commit message: "KAFKA-8025: Fix flaky RocksDB test (#8126)"
 > git rev-list --no-walk b9ff91b3edaea46685723605cf6a1067b57cd300 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins1539401506594460507.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins1539401506594460507.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=48dcaa61e801a30bcb8f6c35d9afe9a69214fef5, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io


[jira] [Created] (KAFKA-9570) SSL cannot be configured for Connect in standalone mode

2020-02-18 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-9570:


 Summary: SSL cannot be configured for Connect in standalone mode
 Key: KAFKA-9570
 URL: https://issues.apache.org/jira/browse/KAFKA-9570
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.3.1, 2.4.0, 2.2.2, 2.2.1, 2.3.0, 2.1.1, 2.2.0, 2.1.0, 
2.0.1, 2.0.0, 2.0.2, 2.1.2, 2.2.3, 2.5.0, 2.3.2, 2.4.1
Reporter: Chris Egerton
Assignee: Chris Egerton


When Connect is brought up in standalone mode, if the worker config contains _any_ 
properties that begin with the {{listeners.https.}} prefix, SSL will not be 
enabled on the worker.

This is because the relevant SSL configs are only defined in the [distributed 
worker 
config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedConfig.java#L260]
 instead of the [superclass worker 
config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java].
 This, in conjunction with [a call 
to|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L42]
 
[AbstractConfig::valuesWithPrefixAllOrNothing|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java],
 causes all configs not defined in the {{WorkerConfig}} used by the worker to 
be silently dropped when the worker configures its REST server if there is at 
least one config present with the {{listeners.https.}} prefix.

Unfortunately, the workaround of specifying all SSL configs without the 
{{listeners.https.}} prefix will also fail if any passwords need to be 
specified. This is because the password values in the {{Map}} returned from 
{{AbstractConfig::valuesWithPrefixAllOrNothing}} aren't parsed as passwords, 
but the [framework expects them to 
be|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L87].
 However, if no keystore, truststore, or key passwords need to be configured, 
then it should be possible to work around the issue by specifying all of those 
configurations without a prefix (as long as they don't conflict with any other 
configs in that namespace).
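
For anyone hitting this, a minimal sketch of that unprefixed workaround 
(paths are hypothetical, and it is only viable when no keystore, truststore, 
or key passwords are required):

{code}
listeners=https://0.0.0.0:8443
ssl.keystore.location=/path/to/keystore.jks
ssl.truststore.location=/path/to/truststore.jks
ssl.client.auth=required
{code}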



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4246

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8025: Fix flaky RocksDB test (#8126)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-s

Build failed in Jenkins: kafka-2.3-jdk8 #178

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8025: Fix flaky RocksDB test (#8126)


--
[...truncated 2.55 MB...]
kafka.log.LogCleanerTest > testSegmentWithOffsetOverflow PASSED

kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion 
STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCre

Kafka Meetup hosted by Confluent at Mountain View, Thursday 5:30pm, Feb 20th, 2020

2020-02-18 Thread Guozhang Wang
Hello folks,

This is a kind reminder of the Bay Area Kafka® meetup this Thursday (Feb.
20th) 5:30pm, at Confluent's Mountain View HQ office.

*RSVP and Register* (if you intend to attend in person):
https://www.meetup.com/KafkaBayArea/events/268427599/

*Date*
5:30pm, Thursday, February 20th, 2020

*Location*
Confluent Mountain View Office - 899 W Evelyn Ave · Mountain View, CA

*Agenda*
5:30pm - 6:00pm: Networking, Pizza and drinks!
6:00pm - 6:40pm: Protecting Tenant Performance in Multi-tenant Kafka
 (Anna Povzner, Confluent)
6:40pm - 7:20pm: Running Large Scale Kafka Upgrades at Yelp
 (Manpreet Singh, Yelp)
7:20pm - 8:00pm: Additional Q&A and Networking

We will record the talks and upload the recordings to YouTube after
the event, but we'd highly recommend you to come and join us in person :)


Hope to see you there!


-- 
-- Guozhang


Re: [Discuss] KIP-571: Add option to force remove members in StreamsResetter

2020-02-18 Thread Boyang Chen
Thanks for the update, Feyman. The updates look great, except for one thing I
would like to be more specific about: the display of error cases. In the "*2)* Add
cmdline option" section you mention throwing an exception when the request fails;
does that suggest a partial failure or a full failure? How do we deal with
different scenarios?

Also some minor syntax fix:
1. it only support remove static members -> it only supports the removal of
static members
2. "new constructor is added and the old constructor will be deprecated"
you mean the `new helper` right? Should be `new helper is added`
3. users should make sure all the stream applications are shutdown

Other than the above suggestions, I think the KIP is in pretty good shape.

Boyang

On Fri, Feb 14, 2020 at 9:29 PM feyman2009  wrote:

> Hi, Boyang
> You can call me Feyman :)
> Thanks for your quick reply with great advices!
> I have updated the KIP-571 , would you mind to see if it looks good ?
> Thanks !
>
> --
> From: Boyang Chen 
> Sent: Friday, February 14, 2020, 08:35
> To: dev ; feyman2009 
> 主 题:Re: [Discuss] KIP-571: Add option to force remove members in
> StreamsResetter
>
> Thanks for driving this change Feyman! Hope this is a good name to call
> you :)
>
> The motivation of the KIP looks good, and I have a couple of initial
> thoughts:
> 1. I guess the reason to use setters instead of adding a new constructor
> to MemberToRemove class is because we have two String members. Could you
> point that out upfront so that people are not asking why not adding new
> constructor?
> 2. KIP discussion usually focuses on the public side changes, so you don't
> need to copy-paste the entire class. Just the new APIs you are adding
> should suffice.
> 3. Add the description of new flag inside Public API change, whose name
> could be simplified as `--force` and people would just read the
> description. An edge case I could think of is that some ongoing
> applications are not closed when the reset tool applies, which causes more
> unexpected rebalances. So it's important to warn users to use the flag
> wisely and be responsible to shutdown old applications first.
> 4. It would be good to mention in the Compatibility section which version
> of broker and admin client we need to be able to use this new feature. Also
> what's the expected behavior when the broker is not supporting the new API.
> 5. What additional feedback for users using the new flag? Are we going to
> include a list of successfully deleted members, and some failed members?
> 6. We could separate the proposed change and public API section. In the
> proposed change section, we could have a mention of which API we are going
> to use to get the members of the stream application.
>
> And some minor style advices:
> 1. Remove the link on `member.id` inside Motivation section;
> 2. Use a code block for the new MemberToRemove and avoid unnecessary
> coloring;
> 3. Please pay more attention to style, for example `ability to  force
> removing` has double spaces.
>
> Boyang
>
>
> On Thu, Feb 13, 2020 at 1:48 AM feyman2009 
> wrote:
> Hi, all
> In order to make it possible for StreamsResetter to reset stream even
> when there are active members, since we currently only have the ability to
> remove static members, so we basically need the ability to remove dynamic
> members, this involves some public interfaces change in
> org.apache.kafka.clients.admin.MemberToRemove.
>
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-571%3A+Add+option+to+force+remove+members+in+StreamsResetter
> JIRA: https://issues.apache.org/jira/browse/KAFKA-9146
>
> Any comments would be highly appreciated~
> Thanks !
>
>
>


Build failed in Jenkins: kafka-trunk-jdk11 #1170

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8025: Fix flaky RocksDB test (#8126)


--
[...truncated 5.87 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.int

Build failed in Jenkins: kafka-trunk-jdk8 #4247

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6266: Repeated occurrence of WARN Resetting first dirty offset


--
[...truncated 2.88 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE

[jira] [Resolved] (KAFKA-9306) Kafka Consumer does not clean up all metrics after shutdown

2020-02-18 Thread Sanjana Kaundinya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjana Kaundinya resolved KAFKA-9306.
--
Fix Version/s: 2.4.1
   Resolution: Fixed

> Kafka Consumer does not clean up all metrics after shutdown
> ---
>
> Key: KAFKA-9306
> URL: https://issues.apache.org/jira/browse/KAFKA-9306
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 2.4.1
>
>
> The Kafka Consumer does not clean up all metrics after shutdown.  It seems 
> like this was a regression introduced in Kafka 2.4 when we added the 
> KafkaConsumerMetrics class.
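
For anyone wanting to confirm the cleanup locally, here is a minimal sketch 
(assuming a 2.4.x kafka-clients jar on the classpath and the default 
JmxReporter; the bootstrap address is a placeholder, since the consumer never 
needs to connect):

```
import java.lang.management.ManagementFactory;
import java.util.Properties;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerMetricsLeakCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; constructing and closing the consumer does not
        // require a reachable broker.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "metrics-leak-check");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  StringDeserializer.class.getName());

        new KafkaConsumer<String, String>(props).close();

        // The default JmxReporter mirrors every metric group as an MBean under
        // the "kafka.consumer" domain; after close() this query should be empty.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> leftover =
            server.queryNames(new ObjectName("kafka.consumer:*"), null);
        System.out.println("MBeans still registered after close(): " + leftover);
    }
}
```

On an affected 2.4.0 client the query should come back non-empty; with the fix 
in 2.4.1 it should print an empty set.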



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Discuss] KIP-571: Add option to force remove members in StreamsResetter

2020-02-18 Thread Sophie Blee-Goldman
Hey Feyman,

Thanks for the KIP! I had two high-level questions:

It seems like, in the specific case motivating this KIP, we would only ever
want to remove *all* the members remaining in the group (and never just a
single member at a time). As you mention there is already an admin API to
remove static members, but we'd still need something new to handle dynamic
ones. Did you consider an API that just removes *all* remaining members
from a group, rather than requiring the caller to determine and then
specify the
group.instance.id (static) or member.id (dynamic) for each one? This way we can just
have a single API exposed that will handle what we need to do regardless of
whether static membership is used or not.

My other question is, will this new option only work for clusters that are on
2.3 or higher? Do you have any thoughts about whether it would be possible to
implement this feature for older clusters as well, or are we dependent on
changes only introduced in 2.3?

If so, we should make it absolutely clear what will happen if this is used with
an older cluster. That is, will the reset tool exit with a clear error message
right away, or will it potentially leave the app in a partially reset state?

Thanks!
Sophie

On Tue, Feb 18, 2020 at 4:30 PM Boyang Chen 
wrote:

> Thanks for the update Feyman. The updates look great, except for one thing I
> would like to be more specific about: how error cases are displayed. In the
> "*2)* Add cmdline option" section you mention throwing an exception when the
> request fails; does that suggest a partial failure or a full failure? How do
> we deal with the different scenarios?
>
> Also some minor syntax fixes:
> 1. it only support remove static members -> it only supports the removal of
> static members
> 2. "new constructor is added and the old constructor will be deprecated"
> you mean the `new helper` right? Should be `new helper is added`
> 3. users should make sure all the stream applications should be are
> shutdown
>
> Other than the above suggestions, I think the KIP is in pretty good shape.
>
> Boyang
>
> On Fri, Feb 14, 2020 at 9:29 PM feyman2009  wrote:
>
> > Hi, Boyang
> > You can call me Feyman :)
> > Thanks for your quick reply with great advice!
> > I have updated KIP-571; would you mind seeing if it looks good?
> > Thanks!
> >
> > --
> > From: Boyang Chen 
> > Sent: Friday, February 14, 2020, 08:35
> > To: dev ; feyman2009 
> > Subject: Re: [Discuss] KIP-571: Add option to force remove members in
> > StreamsResetter
> >
> > Thanks for driving this change Feyman! Hope this is a good name to call
> > you :)
> >
> > The motivation of the KIP looks good, and I have a couple of initial
> > thoughts:
> > 1. I guess the reason to use setters instead of adding a new constructor
> > to MemberToRemove class is because we have two String members. Could you
> > point that out upfront so that people are not asking why not adding new
> > constructor?
> > 2. KIP discussion usually focuses on the public side changes, so you
> don't
> > need to copy-paste the entire class. Just the new APIs you are adding
> > should be suffice
> > 3. Add the description of new flag inside Public API change, whose name
> > could be simplified as `--force` and people would just read the
> > description. An edge case I could think of is that some ongoing
> > applications are not closed when the reset tool applies, which causes
> more
> > unexpected rebalances. So it's important to warn users to use the flag
> > wisely and be responsible to shutdown old applications first.
> > 4. It would be good to mention in the Compatibility section which version
> > of broker and admin client we need to be able to use this new feature.
> > Also, what's the expected behavior when the broker does not support the
> > new API?
> > 5. What additional feedback will users of the new flag get? Are we going to
> > include a list of successfully deleted members, and some failed members?
> > 6. We could separate the proposed change and public API sections. In the
> > proposed change section, we could mention which API we are going to use to
> > get the members of the stream application.
> >
> > And some minor style advice:
> > 1. Remove the link on `member.id` inside Motivation section;
> > 2. Use a code block for the new MemberToRemove and avoid unnecessary
> > coloring;
> > 3. Please pay more attention to style, for example `ability to  force
> > removing` has double spaces.
> >
> > Boyang
> >
> >
> > On Thu, Feb 13, 2020 at 1:48 AM feyman2009 wrote:
> > Hi, all
> > In order to make it possible for StreamsResetter to reset a stream even
> > when there are active members, and since we currently only have the ability
> > to remove static members, we basically need the ability to remove dynamic
> > members as well. This involves some public interface changes in
> > org.apache.kafka.clients.admin.MemberToRemove.
> >
> > KIP:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP
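
For context, removing a *static* member (identified by its group.instance.id) 
is already possible with the 2.4 AdminClient, which is the existing API the 
thread refers to. A minimal sketch, with a hypothetical group id and instance 
id and a placeholder broker address:

```
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class RemoveStaticMemberSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Static members are addressed by group.instance.id; dynamic
            // members have no such id, which is the gap KIP-571 targets.
            MemberToRemove member = new MemberToRemove("instance-1");
            admin.removeMembersFromConsumerGroup(
                     "my-streams-app",
                     new RemoveMembersFromConsumerGroupOptions(
                         Collections.singleton(member)))
                 .all()
                 .get(); // completes once the broker has removed the member
        }
    }
}
```

Sophie's suggestion would collapse this into a single remove-all call, so the 
caller would not have to enumerate instance ids (or, for dynamic members, 
member ids) at all.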

Build failed in Jenkins: kafka-2.5-jdk8 #30

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8025: Fix flaky RocksDB test (#8126)


--
[...truncated 5.85 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest >

Build failed in Jenkins: kafka-trunk-jdk8 #4248

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Reduce log level to Trace for fetch offset downgrade (#8093)


--
[...truncated 2.88 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE

Re: [Discuss] KIP-571: Add option to force remove members in StreamsResetter

2020-02-18 Thread Boyang Chen
Also Feyman, there is one thing I forgot, which is that the leave group change
was introduced in the 2.4 broker instead of 2.3. Feel free to correct it on
the KIP.

On Tue, Feb 18, 2020 at 5:44 PM Sophie Blee-Goldman 
wrote:

> Hey Feyman,
>
> Thanks for the KIP! I had two high-level questions:
>
> It seems like, in the specific case motivating this KIP, we would only ever
> want to remove *all* the members remaining in the group (and never just a
> single member at a time). As you mention there is already an admin API to
> remove static members, but we'd still need something new to handle dynamic
> ones. Did you consider an API that just removes *all* remaining members
> from a group, rather than requiring the caller to determine and then
> specify the
> group.instance.id (static) or member.id (dynamic) for each one? This way we can
> just
> have a single API exposed that will handle what we need to do regardless of
> whether static membership is used or not.
>
> My other question is, will this new option only work for clusters that are
> on 2.3 or higher? Do you have any thoughts about whether it would be possible
> to
> implement this feature for older clusters as well, or are we dependent on
> changes only introduced in 2.3?
>
> If so, we should make it absolutely clear what will happen if this is used
> with an older cluster. That is, will the reset tool exit with a clear error
> message right away, or will it potentially leave the app in a partially
> reset state?
>
> Thanks!
> Sophie
>
> On Tue, Feb 18, 2020 at 4:30 PM Boyang Chen 
> wrote:
>
> > Thanks for the update Feyman. The updates look great, except for one thing
> > I would like to be more specific about: how error cases are displayed. In
> > the "*2)* Add cmdline option" section you mention throwing an exception
> > when the request fails; does that suggest a partial failure or a full
> > failure? How do we deal with the different scenarios?
> >
> > Also some minor syntax fixes:
> > 1. it only support remove static members -> it only supports the removal of
> > static members
> > 2. "new constructor is added and the old constructor will be deprecated"
> > you mean the `new helper` right? Should be `new helper is added`
> > 3. users should make sure all the stream applications should be are
> > shutdown
> >
> > Other than the above suggestions, I think the KIP is in pretty good shape.
> >
> > Boyang
> >
> > On Fri, Feb 14, 2020 at 9:29 PM feyman2009 wrote:
> >
> > > Hi, Boyang
> > > You can call me Feyman :)
> > > Thanks for your quick reply with great advice!
> > > I have updated KIP-571; would you mind seeing if it looks good?
> > > Thanks!
> > >
> > > --
> > > From: Boyang Chen 
> > > Sent: Friday, February 14, 2020, 08:35
> > > To: dev ; feyman2009 
> > > Subject: Re: [Discuss] KIP-571: Add option to force remove members in
> > > StreamsResetter
> > >
> > > Thanks for driving this change Feyman! Hope this is a good name to call
> > > you :)
> > >
> > > The motivation of the KIP looks good, and I have a couple of initial
> > > thoughts:
> > > 1. I guess the reason to use setters instead of adding a new constructor
> > > to the MemberToRemove class is because we have two String members. Could
> > > you point that out upfront so that people do not ask why a new
> > > constructor was not added.
> > > 2. KIP discussion usually focuses on the public-facing changes, so you
> > > don't need to copy-paste the entire class. Just the new APIs you are
> > > adding should suffice.
> > > 3. Add the description of the new flag inside the Public API change
> > > section; its name could be simplified to `--force`, and people would just
> > > read the description. An edge case I could think of is that some ongoing
> > > applications are not closed when the reset tool applies, which causes
> > > more unexpected rebalances. So it's important to warn users to use the
> > > flag wisely and to take responsibility for shutting down old applications
> > > first.
> > > 4. It would be good to mention in the Compatibility section which version
> > > of broker and admin client we need to be able to use this new feature.
> > > Also, what's the expected behavior when the broker does not support the
> > > new API?
> > > 5. What additional feedback will users of the new flag get? Are we going
> > > to include a list of successfully deleted members, and some failed
> > > members?
> > > 6. We could separate the proposed change and public API sections. In the
> > > proposed change section, we could mention which API we are going to use
> > > to get the members of the stream application.
> > >
> > > And some minor style advice:
> > > 1. Remove the link on `member.id` inside Motivation section;
> > > 2. Use a code block for the new MemberToRemove and avoid unnecessary
> > > coloring;
> > > 3. Please pay more attention to style, for example `ability to  force
> > > removing` has double spaces.
> > >
> > > Boyang
> > >
> > >
> > > On Thu, Feb 13, 2020 at 1:48 AM feyma
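
On the older-cluster question above, a sketch (not part of the KIP) of how the 
reset tool could fail fast instead of leaving the application partially reset, 
assuming the removal call surfaces an UnsupportedVersionException when the 
broker lacks the 2.4 leave-group changes Boyang mentions:

```
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;
import org.apache.kafka.common.errors.UnsupportedVersionException;

public class SafeMemberRemovalSketch {
    // Remove members, or abort with a clear message before any offsets are
    // touched when the broker cannot support the operation.
    static void removeOrFailFast(AdminClient admin, String groupId,
                                 RemoveMembersFromConsumerGroupOptions options)
            throws InterruptedException, ExecutionException {
        try {
            admin.removeMembersFromConsumerGroup(groupId, options).all().get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof UnsupportedVersionException) {
                throw new IllegalStateException(
                    "Broker does not support removing group members; upgrade "
                        + "the brokers to 2.4+ or shut down all application "
                        + "instances before resetting.", e);
            }
            throw e;
        }
    }
}
```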

[jira] [Created] (KAFKA-9571) MirrorMaker task failing during poll

2020-02-18 Thread Nitish Goyal (Jira)
Nitish Goyal created KAFKA-9571:
---

 Summary: MirrorMaker task failing during poll
 Key: KAFKA-9571
 URL: https://issues.apache.org/jira/browse/KAFKA-9571
 Project: Kafka
  Issue Type: Bug
  Components: consumer, mirrormaker
Affects Versions: 2.4.0
Reporter: Nitish Goyal


I have set up Kafka replication between a source and a target cluster.

I am observing the Mirror Source task getting killed after a certain amount of
time with the following error:

```
[2020-02-17 22:39:57,344] ERROR Failure during poll.
(org.apache.kafka.connect.mirror.MirrorSourceTask:161)
[2020-02-17 22:39:57,346] ERROR WorkerSourceTask{id=MirrorSourceConnector-99}
Task threw an uncaught and unrecoverable exception
(org.apache.kafka.connect.runtime.WorkerTask:179)
[2020-02-17 22:39:57,346] ERROR WorkerSourceTask{id=MirrorSourceConnector-99}
Task is being killed and will not recover until manually restarted
(org.apache.kafka.connect.runtime.WorkerTask:180)
```

What could be the possible reason for the above?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1171

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6266: Repeated occurrence of WARN Resetting first dirty offset

[github] MINOR: Reduce log level to Trace for fetch offset downgrade (#8093)


--
[...truncated 5.86 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.ka

Build failed in Jenkins: kafka-2.5-jdk8 #31

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Reduce log level to Trace for fetch offset downgrade (#8093)


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.

Build failed in Jenkins: kafka-2.4-jdk8 #148

2020-02-18 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-8025: Fix flaky RocksDB test (#8126)


--
[...truncated 5.02 MB...]

kafka.coordinator.group.GroupMetadataTest > 
testEmptyToAwaitingRebalanceIllegalTransition PASSED

kafka.coordinator.group.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testDeadToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.group.GroupMetadataTest > 
testFailedTxnOffsetCommitLeavesNoPendingState STARTED

kafka.coordinator.group.GroupMetadataTest > 
testFailedTxnOffsetCommitLeavesNoPendingState PASSED

kafka.coordinator.group.GroupMetadataTest > testNotInvokeJoinCallback STARTED

kafka.coordinator.group.GroupMetadataTest > testNotInvokeJoinCallback PASSED

kafka.coordinator.group.GroupMetadataTest > 
testCanRebalanceWhenCompletingRebalance STARTED

kafka.coordinator.group.GroupMetadataTest > 
testCanRebalanceWhenCompletingRebalance PASSED

kafka.coordinator.group.GroupMetadataTest > 
testDeadToAwaitingRebalanceIllegalTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testDeadToAwaitingRebalanceIllegalTransition PASSED

kafka.coordinator.group.GroupMetadataTest > testInvokeJoinCallback STARTED

kafka.coordinator.group.GroupMetadataTest > testInvokeJoinCallback PASSED

kafka.coordinator.group.GroupMetadataTest > testEmptyToDeadTransition STARTED

kafka.coordinator.group.GroupMetadataTest > testEmptyToDeadTransition PASSED

kafka.coordinator.group.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers 
STARTED

kafka.coordinator.group.GroupMetadataTest > testSelectProtocolRaisesIfNoMembers 
PASSED

kafka.coordinator.group.GroupMetadataTest > 
testStableToPreparingRebalanceTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testStableToPreparingRebalanceTransition PASSED

kafka.coordinator.group.GroupMetadataTest > 
testSubscribedTopicsNonConsumerGroup STARTED

kafka.coordinator.group.GroupMetadataTest > 
testSubscribedTopicsNonConsumerGroup PASSED

kafka.coordinator.group.GroupMetadataTest > testReplaceGroupInstance STARTED

kafka.coordinator.group.GroupMetadataTest > testReplaceGroupInstance PASSED

kafka.coordinator.group.GroupMetadataTest > 
testTransactionalCommitIsAbortedAndConsumerCommitWins STARTED

kafka.coordinator.group.GroupMetadataTest > 
testTransactionalCommitIsAbortedAndConsumerCommitWins PASSED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToPreparingRebalanceTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToPreparingRebalanceTransition PASSED

kafka.coordinator.group.GroupMetadataTest > 
testPreparingRebalanceToDeadTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testPreparingRebalanceToDeadTransition PASSED

kafka.coordinator.group.GroupMetadataTest > testStableToStableIllegalTransition 
STARTED

kafka.coordinator.group.GroupMetadataTest > testStableToStableIllegalTransition 
PASSED

kafka.coordinator.group.GroupMetadataTest > 
testOffsetCommitFailureWithAnotherPending STARTED

kafka.coordinator.group.GroupMetadataTest > 
testOffsetCommitFailureWithAnotherPending PASSED

kafka.coordinator.group.GroupMetadataTest > testSubscribedTopics STARTED

kafka.coordinator.group.GroupMetadataTest > testSubscribedTopics PASSED

kafka.coordinator.group.GroupMetadataTest > testDeadToStableIllegalTransition 
STARTED

kafka.coordinator.group.GroupMetadataTest > testDeadToStableIllegalTransition 
PASSED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommit STARTED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommit PASSED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToStableTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToStableTransition PASSED

kafka.coordinator.group.GroupMetadataTest > testSupportsProtocols STARTED

kafka.coordinator.group.GroupMetadataTest > testSupportsProtocols PASSED

kafka.coordinator.group.GroupMetadataTest > testEmptyToStableIllegalTransition 
STARTED

kafka.coordinator.group.GroupMetadataTest > testEmptyToStableIllegalTransition 
PASSED

kafka.coordinator.group.GroupMetadataTest > testCanRebalanceWhenStable STARTED

kafka.coordinator.group.GroupMetadataTest > testCanRebalanceWhenStable PASSED

kafka.coordinator.group.GroupMetadataTest > testNotInvokeSyncCallback STARTED

kafka.coordinator.group.GroupMetadataTest > testNotInvokeSyncCallback PASSED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommitWithAnotherPending 
STARTED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommitWithAnotherPending 
PASSED

kafka.coordinator.group.GroupMetadataTest > 
testReplaceGroupInstanceWithEmptyGroupInstanceId STARTED

kafka.coordinator.group.GroupMetadataTest > 
testReplaceGroupInstanceWithEmptyGroupInstanceId PASSED

kafka.coordinator.group.GroupMetadataTest > 
testPreparingRebalanceToPrep