Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #97

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10511; Ensure monotonic start epoch/offset updates in `MockLog` 
(#9332)


--
[...truncated 3.35 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 

Re: [DISCUSS] KIP-630: Kafka Raft Snapshot

2020-09-28 Thread Jose Garcia Sancio
Hi Guozhang,

Thanks for your feedback. It was very helpful. See my comments below.

Changes to the KIP:
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=158864763=28=27

On Sun, Sep 27, 2020 at 9:02 PM Guozhang Wang  wrote:
>
> Hello Jose,
>
> Thanks for the KIP. Overall it looks great. I have a few meta / minor
> questions, or maybe just clarifications, below:
>
> Meta:
>
> 1. I want to clarify whether only the active controller would generate
> snapshots, OR any voter in the quorum would generate snapshots, OR even
> observers would generate snapshots. Originally I thought it was the
> latter case, but reading through the doc I got confused by some
> paragraphs. E.g. you mentioned snapshots are generated by the Controller
> module, and observers would not have that module.

Sorry for the confusion and inconsistency here. Every replica of the
cluster metadata topic partition will generate a snapshot. That
includes the voters (leader and followers) and observers. In this KIP
the leader is the Active Controller, the voters are the Kafka
Controllers and the observers are the Metadata Cache.

I went through the KIP again and made sure to enumerate both Kafka
Controllers and Metadata Cache when talking about snapshot generation
and loading.

I renamed the new configurations to be prefixed by metadata instead of
controller.

I moved the terminology section to the top.

>
> 2. Following on Jun's previous comment: currently the __cluster_metadata
> log is replicated on ALL brokers since all voters and observers would
> replicate that topic. I know this may be out of the scope of this KIP, but I
> think maybe only letting the voters replicate (and periodically
> truncate) the log while observers only maintain the in-memory state and
> snapshots is a good trade-off here, assuming snapshot loading is relatively
> fast.

This is a good idea and optimization. It would save a write. I think
we need to think about the implication to KIP-642, the dynamic quorum
reassignment KIP, if we end up allowing observers to get "promoted" to
voters.

>
> 3. When a raft client is in the middle of loading a snapshot, should it
> reject any vote / begin-/end-/describe-quorum requests at that time? More
> generally, while a snapshot is being loaded, how should we treat the
> current state of the client when handling Raft requests?

Re: requesting votes and granting votes.

I think the section "Changes to Leader Election" has been improved since
your review. There I mention that the raft client needs to look at:

1. latest/largest snapshot epoch and end offset
2. the LEO of the replicated log

The voters should use the latest/largest of these two during the
election process.
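
As a rough sketch of that comparison (the type and method names below
are illustrative assumptions for discussion, not the KIP's API):

    // Hedged sketch of the election-time comparison: epoch first, then offset.
    final class OffsetAndEpoch {
        final long offset;
        final int epoch;

        OffsetAndEpoch(long offset, int epoch) {
            this.offset = offset;
            this.epoch = epoch;
        }

        // True if this endpoint is at least as large as `other`.
        boolean isAtLeast(OffsetAndEpoch other) {
            return epoch != other.epoch ? epoch > other.epoch
                                        : offset >= other.offset;
        }
    }

    // A voter's election endpoint is the larger of its latest snapshot id
    // and its log end offset:
    // OffsetAndEpoch endpoint = logEnd.isAtLeast(snapshotId) ? logEnd : snapshotId;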

Re: quorum state

For KIP-595 and KIP-630 the snapshot doesn't include any quorum
information. This may change in KIP-642.

>
> Minor:
>
> 4. "All of the live replicas (followers and observers) have replicated LBO".
> Today the raft layer does not yet maintain LBO across all replicas; is this
> information kept in the controller layer? I'm asking because I do not see
> relevant docs in KIP-631 and hence am a bit confused about which layer is
> responsible for bookkeeping the LBOs of all replicas.

This is not minor! :) This should be done in the raft client as part
of the fetch protocol. Note that LBO is just a rename of the log start
offset. If the current raft implementation doesn't manage this
information, then we will have to implement it as part of this KIP
(KIP-630).

> 5. "Followers and observers will increase their log begin offset to the
> value sent on the fetch response as long as the local Kafka Controller and
> Metadata Cache has generated a snapshot with an end offset greater than or
> equal to the new log begin offset." Not sure I follow this: 1) If observers
> do not generate snapshots since they do not have a Controller module on
> them, then it is possible that observers do not have any snapshots at all
> if they do not get one from the leader; in that case they would never
> truncate the logs locally;

Observers will have a Metadata Cache which will be responsible for
generating snapshots.
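
A minimal sketch of the rule from the quoted paragraph, assuming
illustrative names (not the KIP's actual API):

    // Inside a follower's/observer's log management (illustrative):
    // advance the log begin offset (LBO) to the value from the fetch
    // response only once a local snapshot covers it.
    void maybeAdvanceLogBeginOffset(long newLogBeginOffset,
                                    long latestLocalSnapshotEndOffset) {
        if (latestLocalSnapshotEndOffset >= newLogBeginOffset) {
            // Safe to delete the log prefix: the snapshot covers those records.
            log.deleteRecordsBefore(newLogBeginOffset);
        }
        // Otherwise keep the old LBO until a covering snapshot exists.
    }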

> 2) could you clarify on "value sent on the fetch
> response", are you referring to the "HighWatermark", or "LogStartOffset" in
> the schema, or some other fields?

The log begin offset is the same as the log start offset. This KIP
renames that field in the fetch response. I am starting to think that
renaming this field in this KIP is not worth it. What do you think?

>
> 6. The term "SnapshotId" is introduced without definition at first. My
> understanding is it's defined as a combo of <epoch, end offset>; could you
> clarify if this is the case?

Good point. I added this sentence to the Snapshot Format section and
terminology section:

"Each snapshot is uniquely identified by a SnapshotId, the epoch and
end offset of the records in the replicated log included in the
snapshot."

> BTW I think the term "endOffset" is a term
> 

Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #10

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10218: Stop reading config topic in every subsequent tick 
if catchup fails once (#8973)


--
[...truncated 3.09 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #128

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10218: Stop reading config topic in every subsequent tick if 
catchup fails once (#8973)

[github] KAFKA-10511; Ensure monotonic start epoch/offset updates in `MockLog` 
(#9332)


--
[...truncated 3.35 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Re: [DISCUSS] KIP-671: Shutdown Streams Application when appropriate exception is thrown

2020-09-28 Thread Guozhang Wang
Hello Walker,

Thanks for the updated KIP proposal. A few more comments below:

1. "The RocksDB metrics recording thread is not shutdown." Why it should
not be shut down in either client or application shutdown cases?

2. Should we deprecate the existing overloaded function with the java
UncaughtExceptionHandler?

3. Should we consider providing a default implementation of this handler
interface which is automatically set if not overridden by users, e.g. one
that would choose to SHUTDOWN_KAFKA_STREAMS_APPLICATION upon
MissingSourceTopicException in KIP-662?
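
For illustration, such a default handler might look roughly like this
(the interface, enum, and method names are assumptions for discussion,
not the final KIP-671 API; MissingSourceTopicException is the exception
introduced by KIP-662):

    // Hypothetical sketch only; names are assumptions, not the final API.
    interface StreamsUncaughtExceptionHandler {
        enum Response { SHUTDOWN_KAFKA_STREAMS_CLIENT, SHUTDOWN_KAFKA_STREAMS_APPLICATION }
        Response handle(Throwable exception);
    }

    class DefaultStreamsUncaughtExceptionHandler implements StreamsUncaughtExceptionHandler {
        @Override
        public Response handle(Throwable exception) {
            // org.apache.kafka.streams.errors.MissingSourceTopicException (KIP-662)
            if (exception instanceof MissingSourceTopicException) {
                return Response.SHUTDOWN_KAFKA_STREAMS_APPLICATION; // stop every client
            }
            return Response.SHUTDOWN_KAFKA_STREAMS_CLIENT; // stop only this client
        }
    }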


Guozhang


On Mon, Sep 28, 2020 at 3:57 PM Walker Carlson 
wrote:

> I think that Guozhang and Matthias make good points.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
>
> I have updated the kip to include a StreamsUncaughtExceptionHandler
>
>
>
> On Sun, Sep 27, 2020 at 7:28 PM Guozhang Wang  wrote:
>
> > I think single-threaded clients may be common in practice, and what
> > Matthias raised is a valid concern.
> >
> > We had a related discussion in KIP-663, that maybe we can tweak the
> > `UncaughtExceptionHandler` a bit such that instead of just
> > registering users' functions into the individual threads, we trigger them
> > BEFORE the thread dies in the "catch (Exception)" block. It was proposed
> > originally to make sure that if a user calls localThreadMetadata() in the
> > handler, the dying / throwing thread would still be included,
> > but maybe we could consider this reason as well.
> >
> >
> > Guozhang
> >
> >
> > On Fri, Sep 25, 2020 at 3:20 PM Matthias J. Sax 
> wrote:
> >
> > > I am wondering about the usage pattern of this new method.
> > >
> > > As already discussed, the method only works if there is at least one
> > > running thread... Do we have any sense how many apps actually run
> > > multi-threaded vs single-threaded? It seems that the feature might be
> > > quite limited without having a handler that is called _before_ the
> > > thread dies? However, for this case, I am wondering if it might be
> > > easier to just return an enum type from such a handler instead of
> > > calling `KafkaStreams#initiateClosingAllClients()`?
> > >
> > > In general, it seems that there is some gap between the case of
> > > stopping all instances from "outside" (as proposed in the KIP), vs from
> > > "inside" (what I thought was the original line of thinking for this KIP?).
> > >
> > > For the network partitioning case, should we at least shut down all
> > > local threads? It might be sufficient that only one thread sends the
> > > "shutdown signal" while all others just shut down directly? Why should
> > > the other threads wait for the shutdown signal from a rebalance? Or
> > > should we recommend calling `initiateClosingAllClients()` followed by
> > > `close()` to make sure that at least the local threads stop (which
> > > might be a little bit odd)?
> > >
> > > -Matthias
> > >
> > > On 9/24/20 7:51 AM, John Roesler wrote:
> > > > Hello all,
> > > >
> > > > Thanks for bringing this up, Bruno. It’s a really good point that a
> > > > disconnected node would miss the signal and then resurrect a
> > > > single-node “zombie cluster” when it reconnects.
> > > >
> > > > Offhand, I can’t think of a simple and reliable way to distinguish
> > > > this case from one in which an operator starts a node manually after
> > > > a prior shutdown signal. Can you? Right now, I’m inclined to agree
> > > > with Walker that we should leave this as a problem for the future.
> > > >
> > > > It should certainly be mentioned in the kip, and it also deserves
> > > > special mention in our javadoc and html docs for this feature.
> > > >
> > > > Thanks!
> > > > John
> > > >
> > > > On Wed, Sep 23, 2020, at 17:49, Walker Carlson wrote:
> > > >> Bruno,
> > > >>
> > > >> I think that we can't guarantee that the message will get
> > > >> propagated perfectly in every case of, say network partitioning,
> > > >> though it will work for many cases. So I would say it's best effort
> > > >> and I will mention it in the kip.
> > > >>
> > > >> As for when to use it, I think we can discuss if this will be
> > > >> sufficient when we come to it, as long as we document its
> > > >> capabilities.
> > > >>
> > > >> I hope this answers your question,
> > > >>
> > > >> Walker
> > > >>
> > > >> On Tue, Sep 22, 2020 at 12:33 AM Bruno Cadonna 
> > > wrote:
> > > >>
> > > >>> Walker,
> > > >>>
> > > >>> I am sorry, but I still have a comment on the KIP although you have
> > > >>> already started voting.
> > > >>>
> > > >>> What happens when a consumer of the group skips the rebalancing
> > > >>> that propagates the shutdown request? Do you give a guarantee that
> > > >>> all Kafka Streams clients are shutdown or is it best effort? If it
> > > >>> is best effort, I guess the proposed method might not be used in
> > > >>> critical cases where stopping record consumption may prevent or
> > > >>> limit damage. I 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #96

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9584: Fix Headers ConcurrentModificationException in Streams 
(#8181)

[github] KAFKA-10218: Stop reading config topic in every subsequent tick if 
catchup fails once (#8973)


--
[...truncated 6.70 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #95

2020-09-28 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.4-jdk8 #6

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-9584: Fix Headers ConcurrentModificationException in 
Streams (#8181)

[John Roesler] MINOR: add task ':streams:testAll' (#9073)


--
[...truncated 2.90 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldTerminateWhenUsingTaskIdling[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldTerminateWhenUsingTaskIdling[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED


Re: [DISCUSS] KIP-630: Kafka Raft Snapshot

2020-09-28 Thread Jose Garcia Sancio
Thanks for the reply Jun. Some comments below.

Here are the changes:
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=158864763=27=26

> 20. Good point on metadata cache. I think we need to make a decision
> consistently. For example, if we decide that dedicated voter nodes don't
> serve metadata requests, then we don't need to expose the voters' host/port
> to the client. Which KIP should make this decision?

Makes sense. My opinion is that this should be addressed in KIP-631
since I think exposing this information is independent of
snapshotting.

Note that I think there is a long term goal to make the
__cluster_metadata topic partition readable by a Kafka Consumer but we
can address that in a future KIP.

> 31. controller.snapshot.minimum.records: For a compacted topic, we use a
> ratio instead of the number of records to determine when to compact. This
> has some advantages. For example, if we use
> controller.snapshot.minimum.records and set it to 1000, then it will
> trigger the generation of a new snapshot when the existing snapshot is
> either 10MB or 1GB. Intuitively, the larger the snapshot, the more
> expensive it is to write to disk. So, we want to wait for more data to be
> accumulated before generating the next snapshot. The ratio based setting
> achieves this. For instance, a 50% ratio requires 10MB/1GB more data to be
> accumulated to regenerate a 10MB/1GB snapshot respectively.

I agree. I proposed using a simple algorithm like
"controller.snapshot.minimum.records" since calculating a dirty ratio
may not be straightforward when replicated log records don't map 1:1
to snapshot records. But I think we can implement a heuristic for
this. There is a small complication when generating the first snapshot
but it should be implementable. Here is the latest wording of the
"When to Snapshot" section:

If the Kafka Controller generates a snapshot too frequently then it
can negatively affect the performance of the disk. If it doesn't
generate a snapshot often enough then it can increase the amount of
time it takes to load its state into memory and it can increase the
amount of space taken up by the replicated log. The Kafka Controller will
have a new configuration option
controller.snapshot.min.cleanable.ratio.  If the number of snapshot
records that have changed (deleted or modified) between the latest
snapshot and the current in-memory state divided by the total number
of snapshot records is greater than
controller.snapshot.min.cleanable.ratio, then the Kafka Controller
will perform a new snapshot.

Note that new snapshot records don't count against this ratio. If a
new snapshot record was added since the last snapshot, then it doesn't
affect the dirty ratio. If a snapshot record was added and then
modified or deleted then it counts against the dirty ratio.
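
A minimal sketch of that heuristic (only the config name comes from the
KIP; the method and parameter names are assumptions):

    // Generate a new snapshot once the dirty ratio exceeds the configured
    // controller.snapshot.min.cleanable.ratio. Records added since the last
    // snapshot only count if they were later modified or deleted.
    boolean shouldGenerateSnapshot(long deletedOrModifiedRecords,
                                   long totalSnapshotRecords,
                                   double minCleanableRatio) {
        if (totalSnapshotRecords == 0) {
            return true; // first snapshot: the ratio is undefined, handle specially
        }
        double dirtyRatio = (double) deletedOrModifiedRecords / totalSnapshotRecords;
        return dirtyRatio > minCleanableRatio;
    }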

> 32. max.replication.lag.ms: It seems this is specific to the metadata
> topic. Could we make that clear in the name?

Good catch. Done.


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #127

2020-09-28 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-09-28 Thread John Roesler
Hi Dongjin,

Thanks! Sorry, I missed your prior message. The proposed API looks good to me. 

I’m wondering if we should specify what kind of store view would be returned 
when querying the operation result. It seems like it must be a 
ReadOnlyKeyValueStore. Does that sound right?
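
For concreteness, a sketch of such a query through interactive queries
(the store name "suppressed-counts" is a hypothetical queriable name;
the classes are from org.apache.kafka.streams):

    // Query the suppression result as a ReadOnlyKeyValueStore.
    ReadOnlyKeyValueStore<Windowed<String>, Long> view = streams.store(
        StoreQueryParameters.fromNameAndType(
            "suppressed-counts",
            QueryableStoreTypes.keyValueStore()));
    Long count = view.get(windowedKey);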

Thanks!
John

On Mon, Sep 28, 2020, at 10:06, Dongjin Lee wrote:
> Hi John,
> 
> I updated the KIP with the discussion above. The 'Public Interfaces'
> section describes the new API, and the 'Rejected Alternatives' section
> describes the reasoning about why we selected this API design and rejected
> the other alternatives.
> 
> Please have a look when you are free. And please note that the KIP freeze
> for 2.7.0 is imminent.
> 
> Thanks,
> Dongjin
> 
> On Mon, Sep 21, 2020 at 11:35 PM Dongjin Lee  wrote:
> 
> > Hi John,
> >
> > I updated the PR applying the API changes we discussed above. I am now
> > updating the KIP document.
> >
> > Thanks,
> > Dongjin
> >
> > On Sat, Sep 19, 2020 at 10:42 AM John Roesler  wrote:
> >
> >> Hi Dongjin,
> >>
> >> Yes, that’s right. By the time of KIP-307, we had no choice but to add a
> >> second name. But we do have a choice with Suppress.
> >>
> >> Thanks!
> >> -John
> >>
> >> On Thu, Sep 17, 2020, at 13:14, Dongjin Lee wrote:
> >> > Hi John,
> >> >
> >> > I just reviewed KIP-307. As far as I understood, ...
> >> >
> >> > 1. There was Materialized name initially.
> >> > 2. With KIP-307, Named Operations were added.
> >> > 3. Now we have two options for materializing suppression. If we take
> >> > Materialized name here, we have two names for the same operation,
> >> > which is not feasible.
> >> >
> >> > Do I understand correctly?
> >> >
> >> > > Do you have a use case in mind for having two separate names for the
> >> > operation and the view?
> >> >
> >> > No. I am now entirely convinced with your suggestion.
> >> >
> >> > I just started to update the draft implementation. If I understand
> >> > correctly, please notify me; I will update the KIP by adding the
> >> > discussion above.
> >> >
> >> > Best,
> >> > Dongjin
> >> >
> >> > On Thu, Sep 17, 2020 at 11:06 AM John Roesler 
> >> wrote:
> >> >
> >> > > Hi Dongjin,
> >> > >
> >> > > Thanks for the reply. Yes, that’s correct, we added that method to
> >> name
> >> > > the operation. But the operation seems synonymous with the view
> >> > > produced by the operation, right?
> >> > >
> >> > > During KIP-307, I remember thinking that it’s unfortunate that we had
> >> > > to have two different “name” concepts for the same thing just because
> >> > > setting the name on Materialized is equivalent both to making it
> >> > > queriable and actually materializing it.
> >> > >
> >> > > If we were to reconsider the API, it would be nice to treat these
> >> three as
> >> > > orthogonal:
> >> > > * specify a name
> >> > > * flag to make the view queriable
> >> > > * flag to materialize the view
> >> > >
> >> > > That was the context behind my suggestion. Do you have a use case in
> >> mind
> >> > > for having two separate names for the operation and the view?
> >> > >
> >> > > Thanks,
> >> > > John
> >> > >
> >> > > On Wed, Sep 16, 2020, at 11:43, Dongjin Lee wrote:
> >> > > > Hi John,
> >> > > >
> >> > > > It seems like the available alternatives at this point are clear:
> >> > > >
> >> > > > 1. Pass queriable name as a separate parameter (i.e.,
> >> > > > `KTable#suppress(Suppressed, String)`)
> >> > > > 2. Make use of the Suppression processor name as a queryable name
> >> > > > by adding an `enableQuery` optional flag to `Suppressed`.
> >> > > >
> >> > > > However, I doubt the second approach a little bit; as far as I know,
> >> > > > the processor name was introduced in KIP-307[^1] to make debugging a
> >> > > > topology easy and understandable. Since the processor name is a
> >> > > > concept independent of the materialization, I feel the first approach
> >> > > > is more natural and
> >> > > > consistent. Is there any specific reason that you prefer the second
> >> > > > approach?
> >> > > >
> >> > > > Thanks,
> >> > > > Dongjin
> >> > > >
> >> > > > [^1]:
> >> > > >
> >> > >
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL
> >> > > >
> >> > > >
> >> > > >
> >> > > > On Wed, Sep 16, 2020 at 11:48 PM John Roesler 
> >> > > wrote:
> >> > > >
> >> > > > > Hi Dongjin,
> >> > > > >
> >> > > > > Yes, that's where I was leaning. Although, I'd prefer adding
> >> > > > > the option to Suppressed instead of adding a new argument to
> >> > > > > the method call.
> >> > > > >
> >> > > > > What do you think about:
> >> > > > >
> >> > > > > class Suppressed {
> >> > > > > +  public Suppressed enableQuery();
> >> > > > > }
> >> > > > >
> >> > > > > Since Suppressed already has `withName(String)`, it seems
> >> > > > > like all we need to add is a boolean flag.
> >> > > > >
> >> > > > > Does that seem sensible to you?
> >> > > > >
> >> > > 
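
For illustration, DSL usage under the proposal quoted above could look
like this (enableQuery() is only the suggestion under discussion, not a
released API; the name "suppressed-counts" is hypothetical):

    KTable<Windowed<String>, Long> suppressed = counts.suppress(
        Suppressed.untilWindowCloses(BufferConfig.unbounded())
            .withName("suppressed-counts") // the queriable name
            .enableQuery());               // proposed flag, not yet in the API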

Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #9

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-9584: Fix Headers ConcurrentModificationException in 
Streams (#8181)

[John Roesler] MINOR: add task ':streams:testAll' (#9073)


--
[...truncated 3.09 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED


[jira] [Resolved] (KAFKA-10511) Fix minor behavior difference in `MockLog`

2020-09-28 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10511.
-
Resolution: Fixed

> Fix minor behavior difference in `MockLog`
> --
>
> Key: KAFKA-10511
> URL: https://issues.apache.org/jira/browse/KAFKA-10511
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> Fix minor difference in the implementation of the epoch cache in MockLog. In 
> `LeaderEpochFileCache`, we ensure new entries increase both start offset and 
> epoch monotonically. We also do not allow duplicates.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
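
For context, the invariant described above amounts to a check like the
following (illustrative names, not the actual LeaderEpochFileCache or
MockLog code):

    // Accept a new (epoch, startOffset) entry only if both values strictly
    // increase relative to the latest entry; duplicates are rejected.
    private final List<EpochEntry> entries = new ArrayList<>();

    void assign(int epoch, long startOffset) {
        EpochEntry latest = entries.isEmpty() ? null : entries.get(entries.size() - 1);
        if (latest == null || (epoch > latest.epoch && startOffset > latest.startOffset)) {
            entries.add(new EpochEntry(epoch, startOffset));
        }
    }

    static final class EpochEntry {
        final int epoch;
        final long startOffset;

        EpochEntry(int epoch, long startOffset) {
            this.epoch = epoch;
            this.startOffset = startOffset;
        }
    }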


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #95

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10509: Added throttle connection accept rate metric (KIP-612) 
(#9317)


--
[...truncated 6.70 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED


Re: ApacheCon Bug Bash

2020-09-28 Thread Tom DuBuisson
Kafka devs,

The bug bash is tomorrow. If any core contributors would like to chime in,
we can still get the GitHub app hooked up.

If you aren't a core contributor then you can still join the ApacheCon bug
bash and we'll be sure to share the analysis results so you can have some
fun regardless.  By installing Muse on a forked Kafka repository you could
also receive analysis of your bug fixes as a one-off from the main
project.  For anyone interested in participating, we’d love to have you,
and you can register via our eventbrite link.
https://www.eventbrite.com/e/muse-presents-apachecon-bug-bash-2020-tickets-19739441

-Tom

On Tue, Sep 22, 2020 at 3:10 PM Tom DuBuisson  wrote:

> Kafka Developers,
>
>
>
> As part of our sponsorship of ApacheCon, our company MuseDev is doing a
> Bug Bash for select Apache projects. We'll bring members of the ApacheCon
> community together to find and fix a range of security and performance bugs
> during the conference, and gameify the experience with teams, a
> leaderboard, and prizes. The bash is open to everyone whether attending the
> conference or not, and our whole dev team will also be participating to
> help fix as many bugs as we can.
>
>
>
> We're seeding the bug list with results from Muse, our code analysis
> platform, which runs as a Github App and comments on possible bugs as part
> of the pull request workflow.  Here's an example of what it looks like:
>
> https://github.com/curl/curl/pull/5971#discussion_r490252196
> 
>
>
>
> We explored a number of Apache projects and are reaching out because our
> analysis through Muse found some interesting bugs that could be fixed
> during the Bash.
>
>
>
> We're writing to see if you'd be interested in having your project
> included in the Bash. Everything is set up on our end, and if you're
> interested, we would need you to say yes on this listserv, and we’ll work
> with the Apache Infrastructure team to grant Muse access to your Github
> mirror. We'll then make sure it's all set-up and ready for the Bash. And of
> course, everyone on the project is most welcome to join the Bash and help
> us smash some bugs.
>
>
> -Tom
>


[jira] [Created] (KAFKA-10532) Do not wipe state store under EOS when closing a RESTORING active or RUNNING standby task

2020-09-28 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-10532:
-

 Summary: Do not wipe state store under EOS when closing a 
RESTORING active or RUNNING standby task
 Key: KAFKA-10532
 URL: https://issues.apache.org/jira/browse/KAFKA-10532
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang


Today, whenever we close a task dirty under EOS, we always wipe out its state 
stores. But when the closing task was a RESTORING active, or a RUNNING 
standby, we may actually not need to wipe out the stores, since we know that 
upon resuming we would still restore the task before transitioning to 
processing (assuming the LEO offset would not be truncated); i.e., when the 
task resumes, it does not matter if the same records get applied twice during 
the continued restoration.
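
For illustration, the intended condition could look like the following sketch (the enum and helper are assumptions for readability, not the actual Streams internals):

{code:java}
// Sketch only: skip the EOS store wipe when a dirty-closed task never
// reached active processing (RESTORING active or RUNNING standby).
enum TaskState { RESTORING, RUNNING, SUSPENDED }

static boolean shouldWipeStores(boolean eosEnabled, boolean isActive, TaskState state) {
    if (!eosEnabled) {
        return false;
    }
    boolean restoringActive = isActive && state == TaskState.RESTORING;
    boolean runningStandby = !isActive && state == TaskState.RUNNING;
    return !(restoringActive || runningStandby);
}
{code}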



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-671: Shutdown Streams Application when appropriate exception is thrown

2020-09-28 Thread Walker Carlson
I think that Guozhang and Matthias make good points.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler

I have updated the KIP to include a StreamsUncaughtExceptionHandler.
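
As a rough sketch, such a handler could take a shape like the following (the names here are illustrative assumptions; the KIP defines the actual API):

    // Hypothetical shape of a Streams-specific uncaught exception handler.
    // Returning an enum lets even a single-threaded client choose the
    // shutdown scope before its only thread dies.
    public interface StreamsUncaughtExceptionHandler {
        enum Response { REPLACE_THREAD, SHUTDOWN_CLIENT, SHUTDOWN_APPLICATION }
        Response handle(Throwable exception);
    }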



On Sun, Sep 27, 2020 at 7:28 PM Guozhang Wang  wrote:

> I think single-threaded clients may be common in practice, and what
> Matthias raised is a valid concern.
>
> We had a related discussion in KIP-663: maybe we can tweak the
> `UncaughtExceptionHandler` a bit such that instead of just registering
> users' functions into the individual threads, we trigger them
> BEFORE the thread dies in the "catch (Exception)" block. It was originally
> proposed to make sure that if a user calls localThreadMetadata() in the
> function, the dying / throwing thread would still be included, but maybe
> we could consider this reason as well.
>
>
> Guozhang
>
>
> On Fri, Sep 25, 2020 at 3:20 PM Matthias J. Sax  wrote:
>
> > I am wondering about the usage pattern of this new method.
> >
> > As already discussed, the method only works if there is at least one
> > running thread... Do we have any sense how many apps actually run
> > multi-threaded vs single-threaded? It seems that the feature might be
> > quite limited without having a handler that is called _before_ the
> > thread dies? However, for this case, I am wondering if it might be
> > easier to just return an enum type from such a handler instead of calling
> > `KafkaStreams#initiateClosingAllClients()`?
> >
> > In general, it seems that there is some gap between the case of stopping
> > all instances from "outside" (as proposed in the KIP), vs from "inside"
> > (what I thought was the original line of thinking for this KIP?).
> >
> > For the network partitioning case, should we at least shutdown all local
> > threads? It might be sufficient that only one thread sends the "shutdown
> > signal" while all others just shut down directly? Why should the other
> > thread wait for shutdown signal for a rebalance? Or should we recommend
> > to call `initiateClosingAllClients()` followed to `close()` to make sure
> > that at least the local threads stop (what might be a little bit odd)?
> >
> > -Matthias
> >
> > On 9/24/20 7:51 AM, John Roesler wrote:
> > > Hello all,
> > >
> > > Thanks for bringing this up, Bruno. It’s a really good point that a
> > disconnected node would miss the signal and then resurrect a single-node
> > “zombie cluster” when it reconnects.
> > >
> > > Offhand, I can’t think of a simple and reliable way to distinguish this
> > > case from one in which an operator starts a node manually after a prior
> > > shutdown signal. Can you? Right now, I’m inclined to agree with Walker
> > > that we should leave this as a problem for the future.
> > >
> > > It should certainly be mentioned in the kip, and it also deserves
> > special mention in our javadoc and html docs for this feature.
> > >
> > > Thanks!
> > > John
> > >
> > > On Wed, Sep 23, 2020, at 17:49, Walker Carlson wrote:
> > >> Bruno,
> > >>
> > >> I think that we can't guarantee that the message will get
> > >> propagated perfectly in every case of, say, network partitioning,
> > >> though it will work for many cases. So I would say it's best effort
> > >> and I will mention it in the KIP.
> > >>
> > >> As for when to use it, I think we can discuss if this will be
> > >> sufficient when we come to it, as long as we document its capabilities.
> > >>
> > >> I hope this answers your question,
> > >>
> > >> Walker
> > >>
> > >> On Tue, Sep 22, 2020 at 12:33 AM Bruno Cadonna 
> > wrote:
> > >>
> > >>> Walker,
> > >>>
> > >>> I am sorry, but I still have a comment on the KIP although you have
> > >>> already started voting.
> > >>>
> > >>> What happens when a consumer of the group skips the rebalancing that
> > >>> propagates the shutdown request? Do you give a guarantee that all
> > >>> Kafka Streams clients are shut down or is it best effort? If it is
> > >>> best effort, I guess the proposed method might not be used in
> > >>> critical cases where stopping record consumption may prevent or limit
> > >>> damage. I am not saying that it must be a guarantee, but this
> > >>> question should be answered in the KIP, IMO.
> > >>>
> > >>> Best,
> > >>> Bruno
> > >>>
> > >>> On 22.09.20 01:14, Walker Carlson wrote:
> >  The error code right now is the assignor error; 2 is coded for
> >  shutdown, but it could be expanded to encode the causes or for other
> >  errors that need to be communicated. For example, we can add error
> >  code 3 to close the thread but leave the client in an error state if
> >  we choose to do so in the future.
> > 
> >  On Mon, Sep 21, 2020 at 3:43 PM Boyang Chen <
> > reluctanthero...@gmail.com>
> >  wrote:
> > 
> > > Thanks for the KIP Walker.
> > >
> > > In the KIP we mentioned "In order to communicate the shutdown
> request
> > >>> from
> > > 

Re: Why should SASL principal be unchanged upon reauth

2020-09-28 Thread Gokul Ramanan Subramanian
Thanks Ron. Makes sense. I'll pitch the idea of writing this KIP to my team.

On Mon, Sep 28, 2020 at 11:01 PM Ron Dagostino  wrote:

> Hi Gokul.  This would require a KIP to discuss.  If you have a compelling
> need to change this then I encourage you to create a KIP and start the
> discussion.  I’m not aware of anything specific that would prevent the
> change, but I will say that my gut instinct says it wouldn’t be a good idea
> without a truly compelling justification.
>
> Ron
>
> > On Sep 28, 2020, at 8:34 AM, Gokul Ramanan Subramanian <
> gokul24...@gmail.com> wrote:
> >
> > Thanks Ron.
> >
> > Does this mean that it would be safe to remove the "KafkaPrincipal has
> not
> > changed after reauth" check in a future version? Or do you think there
> are
> > portions of the code that work under the assumption that this check
> exists?
> >
> >> On Sun, Sep 27, 2020 at 12:07 AM Ron Dagostino 
> wrote:
> >>
> >> Hi Gokul.  I looked back at the discussion thread, and it appears it
> was an
> >> arbitrary decision.
> >>
> >>
> >>
> https://lists.apache.org/thread.html/45c09f226386c0b1dc5f9b36e112882a20414d5900f8d778969e633e%40%3Cdev.kafka.apache.org%3E
> >>
> >> Ron
> >>
> >> On Thu, Sep 24, 2020 at 11:03 AM Gokul Ramanan Subramanian <
> >> gokul24...@gmail.com> wrote:
> >>
> >>> Hi.
> >>>
> >>> I was looking through Kafka code and found that SASL KafkaPrincipals
> are
> >>> not supposed to change upon reauthentication, and if they do, the
> broker
> >>> will kill the TCP connection.
> >>>
> >>> What is the reasoning behind this limitation?
> >>>
> >>> Thanks.
> >>>
> >>
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #94

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10509: Added throttle connection accept rate metric (KIP-612) 
(#9317)

[github] KAFKA-9584: Fix Headers ConcurrentModificationException in Streams 
(#8181)


--
[...truncated 3.33 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos 

Re: Why should SASL principal be unchanged upon reauth

2020-09-28 Thread Ron Dagostino
Hi Gokul.  This would require a KIP to discuss.  If you have a compelling need 
to change this then I encourage you to create a KIP and start the discussion.  
I’m not aware of anything specific that would prevent the change, but I will 
say that my gut instinct says it wouldn’t be a good idea without a truly 
compelling justification.

Ron

> On Sep 28, 2020, at 8:34 AM, Gokul Ramanan Subramanian  
> wrote:
> 
> Thanks Ron.
> 
> Does this mean that it would be safe to remove the "KafkaPrincipal has not
> changed after reauth" check in a future version? Or do you think there are
> portions of the code that work under the assumption that this check exists?
> 
>> On Sun, Sep 27, 2020 at 12:07 AM Ron Dagostino  wrote:
>> 
>> Hi Gokul.  I looked back at the discussion thread, and it appears it was an
>> arbitrary decision.
>> 
>> 
>> https://lists.apache.org/thread.html/45c09f226386c0b1dc5f9b36e112882a20414d5900f8d778969e633e%40%3Cdev.kafka.apache.org%3E
>> 
>> Ron
>> 
>> On Thu, Sep 24, 2020 at 11:03 AM Gokul Ramanan Subramanian <
>> gokul24...@gmail.com> wrote:
>> 
>>> Hi.
>>> 
>>> I was looking through Kafka code and found that SASL KafkaPrincipals are
>>> not supposed to change upon reauthentication, and if they do, the broker
>>> will kill the TCP connection.
>>> 
>>> What is the reasoning behind this limitation?
>>> 
>>> Thanks.
>>> 
>> 


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #126

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10509: Added throttle connection accept rate metric (KIP-612) 
(#9317)


--
[...truncated 3.35 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED


Re: [DISCUSS] KIP-630: Kafka Raft Snapshot

2020-09-28 Thread Jose Garcia Sancio
Thanks Jason. Some comments below.

> > Generally the number of snapshots on disk will be one. I suspect that
> > users will want some control over this. We can add a configuration
> > option that doesn't delete, or advances the log begin offset past, the
> > N latest snapshots. We can set the default value for this
> > configuration to two. What do you think?
>
> I know Zookeeper has a config like this, but I'm not sure how frequently it
> is used. I would probably suggest we pick a good number of snapshots (maybe
> just 1-2) and leave it out of the configs.
>

Sounds good to me. If followers/observers are keeping up with the
Leader, I think the description in section "When to Increase the Log
Begin Offset" will lead to one snapshot on disk in the steady state.

> > We could use the same configuration we have for Fetch but to avoid
> > confusion let's add two more configurations for
> > "replica.fetch.snapshot.max.bytes" and
> > "replica.fetch.snapshot.response.max.bytes".
>
> My vote would probably be to reuse the existing configs. We can add new
> configs in the future if the need emerges, but I can't think of a good
> reason why a user would want these to be different.

Sounds good to me. Removed the configuration from the KIP. Updated the
FetchSnapshot request handling section to mention that the
replica.fetch.response.max.bytes configuration will be used.

> By the way, it looks like the FetchSnapshot schema now has both a partition
> level and a top level max bytes. Do we need both?

Kept the top-level MaxBytes and removed the topic partition level MaxBytes.

> > The snapshot epoch will be used when ordering snapshots and, more
> > importantly, when setting the LastFetchedEpoch in the Fetch request. It
> > is possible for a follower to have a snapshot and an empty log. In
> > this case the follower will use the epoch of the snapshot when setting
> > the LastFetchedEpoch in the Fetch request.
>
> Just to be clear, I think it is important to include the snapshot epoch so
> that we /can/ reason about the snapshot state in the presence of data loss.
> However, if we excluded data loss, then this would strictly speaking be
> unnecessary because a snapshot offset would always be uniquely defined
> (since we do not snapshot above the high watermark). Hence it would be safe
> to leave LastFetchedEpoch undefined. Anyway, I think we're on the same page
> about the behavior, just thought it might be useful to clarify the
> reasoning.

Okay. Even though you are correct that the LastFetchedEpoch shouldn't
matter since the follower is fetching committed data, I still think
that the follower should send the epoch of the snapshot as the
LastFetchedEpoch for extra validation on the leader. What do you
think?
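
For concreteness, the follower-side choice could be summarized by a sketch like this (the helper and its names are assumptions, not the KIP's wording):

    // A follower with a snapshot but an empty log sends the snapshot's epoch
    // as LastFetchedEpoch, so the leader can still validate its position.
    static int lastFetchedEpoch(boolean logIsEmpty, int snapshotEpoch, int logLastEpoch) {
        return logIsEmpty ? snapshotEpoch : logLastEpoch;
    }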


[jira] [Created] (KAFKA-10531) KafkaBasedLog can sleep for negative values

2020-09-28 Thread Vikas Singh (Jira)
Vikas Singh created KAFKA-10531:
---

 Summary: KafkaBasedLog can sleep for negative values
 Key: KAFKA-10531
 URL: https://issues.apache.org/jira/browse/KAFKA-10531
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.6.0
Reporter: Vikas Singh
 Fix For: 2.6.1


{{time.milliseconds}} is not monotonic, so this code can throw:

{{java.lang.IllegalArgumentException: timeout value is negative}}

 
{code:java}
long started = time.milliseconds();
while (partitionInfos == null && time.milliseconds() - started < CREATE_TOPIC_TIMEOUT_MS) {
    partitionInfos = consumer.partitionsFor(topic);
    Utils.sleep(Math.min(time.milliseconds() - started, 1000));
}
{code}

We need to check for a negative value before sleeping.
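
One possible fix (a sketch, not the committed patch) is to clamp the computed sleep time to a non-negative value:

{code:java}
// Clamp so a backwards clock jump cannot produce a negative timeout.
Utils.sleep(Math.max(0, Math.min(time.milliseconds() - started, 1000)));
{code}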



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-09-28 Thread Jun Rao
Hi, Colin,

Thanks for the reply. A few more comments below.

62.
62.1 controller.listener.names: So, is this used for the controller or the
broker trying to connect to the controller?
62.2 If we want to take the approach to share the configs that are common
between the broker and the controller, should we share the id too?
62.3 We added some configs in KIP-595 prefixed with "quorum" and we plan to
add some controller specific configs prefixed with "controller". KIP-630
plans to add some other controller specific configs with no prefix. Should
we standardize all controller specific configs with the same prefix?

70. Could you explain the impact of process.roles a bit more? For example,
if process.roles=controller, does the node still maintain metadata cache as
described in KIP-630? Do we still return the host/port for those nodes in
the metadata response?

Thanks,

Jun
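
For orientation, the draft keys under discussion might be combined as in this sketch (the key names come from the KIP drafts in this thread; the values, and the Properties wrapper, are assumptions for illustration only):

    import java.util.Properties;

    public class DraftControllerConfigs {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("process.roles", "broker,controller");  // or just "controller"
            props.put("broker.id", "1");
            // Per the thread, controller.id defaults to broker.id + 3000 (3001 here).
            props.put("listeners", "PLAINTEXT://:9092,CONTROLLER://:9093");
            props.put("controller.listener.names", "CONTROLLER");
            System.out.println(props);
        }
    }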

On Mon, Sep 28, 2020 at 9:24 AM Colin McCabe  wrote:

> On Fri, Sep 25, 2020, at 17:35, Jun Rao wrote:
> > Hi, Colin,
> >
> > Thanks for the reply.
> >
> > 62. Thinking about this more, I am wondering what's our overall strategy
> > for configs shared between the broker and the controller. For example,
> both
> > the broker and the controller have to define listeners. So both need
> > configs like listeners/advertised.listeners. Both the new controller and
> > the broker replicate data. So both need to define some replication
> related
> > configurations (replica.fetch.min.bytes, replica.fetch.wait.max.ms,
> etc).
> > Should we let the new controller share those configs with brokers or
> should
> > we define new configs just for the controller? It seems that we want to
> > apply the same strategy consistently for all shared configs?
> >
>
> Hi Jun,
>
> That's a good question.  I think that we should share as many
> configurations as possible.  There will be a few cases where this isn't
> possible, and we need to create a new configuration key that is
> controller-only, but I think that will be relatively rare.
>
> >
> > 63. If a node has process.roles set to controller, does one still need to
> > set broker.id on this node?
> >
>
> No, broker.id does not need to be set in that case.
>
> best,
> Colin
>
> >
> > Thanks,
> >
> > Jun
> >
> >
> >
> > On Fri, Sep 25, 2020 at 2:17 PM Colin McCabe  wrote:
> >
> > > On Fri, Sep 25, 2020, at 10:17, Jun Rao wrote:
> > > > Hi, Colin,
> > > >
> > > > Thanks for the reply.
> > > >
> > > > 60. Yes, I think you are right. We probably need the controller id
> > > > when a broker starts up. A broker only stores the Raft leader id in
> > > > the metadata file. To do the initial fetch to the Raft leader, it
> > > > needs to know the host/port associated with the leader id.
> > > >
> > > > 62. It seems there are 2 parts to this: (1) which listener a client
> > > > should use to initiate a connection to the controller and (2) which
> > > > listener should a controller use to accept client requests. For (1),
> > > > at any point of time, a client only needs to use one listener. I
> > > > think controller.listener.name is meant for the client.
> > >
> > > Hi Jun,
> > >
> > > controller.listener.names is also used by the controllers.  In the case
> > > where we have a broker and a controller in the same JVM, we have a
> single
> > > config file.  Then we need to know which listeners belong to the
> controller
> > > and which belong to the broker component.  That's why it's a list.
> > >
> > > > So, a single value seems to make more sense. Currently, we don't
> > > > have a configuration for (2). We could add a new one for that and
> > > > support a list. I am wondering how useful it will be. One example
> > > > that I can think of is that we can reject non-controller related
> > > > requests if accepted on controller-only listeners. However, we
> > > > already support separate authentication for the controller listener.
> > > > So, not sure how useful it is.
> > >
> > > The controller always has a separate listener and does not share
> listeners
> > > with the broker.  The main reason for this is to avoid giving clients
> the
> > > ability to launch a denial-of-service attack on the controller by
> flooding
> > > a broker port.  A lot of times, these attacks are made unintentionally
> by
> > > poorly coded or configured clients.  Additionally, broker ports can
> also be
> > > very busy with large fetch requests, and so on.  It's just a bad
> > > configuration in general to try to overload the same port for the
> > > controller and broker, and we don't want to allow it.  It would be a
> > > regression to go from the current system where control requests are
> safely
> > > isolated on a separate port, to one where they're not.  It also makes
> the
> > > code and configuration a lot messier.
> > >
> > > >
> > > > 63. (a) I think most users won't know controller.id defaults to
> > > broker.id +
> > > > 3000. So, it can be confusing for them to set up controller.connect.
> If
> > > > 

Re: [DISCUSS] KIP-516: Topic Identifiers

2020-09-28 Thread Justine Olshan
Hello all,

I just wanted to follow up on this discussion. Did we come to an
understanding about the directory structure?

I think the biggest question here is what is acceptable to leave out due to
scope vs. what is considered to be too much tech debt.
This KIP is already pretty large with the number of changes, but it also
makes sense to do things correctly, so I'd love to hear everyone's thoughts.

Thanks,
Justine
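
As background for the metadata-file option discussed below, here is a minimal sketch of persisting the mapping (the file name, format, and helper are hypothetical; the thread does not fix them):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.UUID;

    // Persist the topic ID next to the partition's log directory so the
    // name <-> id mapping survives restarts without renaming the directory.
    static void writeTopicIdFile(Path partitionDir, UUID topicId) throws Exception {
        Files.writeString(partitionDir.resolve("partition.metadata"),
                          "version: 0\ntopic_id: " + topicId + "\n");
    }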

On Fri, Sep 25, 2020 at 8:19 AM Lucas Bradstreet  wrote:

> Hi Ismael,
>
> If you do not store it in a metadata file or in the directory structure
> would you then
> require the LeaderAndIsrRequest following startup to give you some notion
> of
> topic name in memory? We would still need this information for the older
> protocols, but
> perhaps this is what's meant by tech debt.
>
> Once we're free of the old non-topicID requests then I think you wouldn't
> need to retain the topic name.
> I think the ability to easily look up topic names associated with partition
> directories would still be missed
> when diagnosing issues, though maybe it wouldn't be a deal breaker with the
> right tooling.
>
> Thanks,
>
> Lucas
>
> On Fri, Sep 25, 2020 at 7:55 AM Ismael Juma  wrote:
>
> > Hi Lucas,
> >
> > Why would you include the name and id? I think you'd want to remove the
> > name from the directory name right? Jason's suggestion was that if you
> > remove the name from the directory, then why would you need the id name
> > mapping file?
> >
> > Ismael
> >
> > On Thu, Sep 24, 2020 at 4:24 PM Lucas Bradstreet 
> > wrote:
> >
> > > > 2. Part of the usage of the file is to have persistent storage of
> > > > the topic ID and use it to compare with the ID supplied in the
> > > > LeaderAndIsr Request. There is some discussion in the KIP about
> > > > changes to the directory structure, but I believe directory changes
> > > > were considered to be out of scope when the KIP was written.
> > >
> > >
> > > Yeah, I was hoping to get a better understanding of why it was taken
> > > out of scope. Perhaps Lucas Bradstreet might have more insight about
> > > the decision. Basically my point is that we have to create additional
> > > infrastructure here to support the name/id mapping, so I wanted to
> > > understand if we just consider this a sort of tech debt. It would be
> > > useful to cover how we handle the case when this file gets corrupted.
> > > Seems like we just have to trust that it matches whatever the
> > > controller tells us and rewrite it?
> > >
> > >
> > > Hi Jason, Justine,
> > >
> > > My thought process is that we will likely need the metadata file
> whether
> > we
> > > rename the directories or not.
> > > Linux supports filenames of up to 255 bytes, and that would not be
> > > enough to support a directory name including both the name and topic
> > > ID. Given that fact, it seemed reasonable to add the metadata file and
> > > leave the directory rename until some time in the future (possibly
> > > never).
> > >
> > > If the file does get corrupted, I think you're right. We would either
> > > have to trust it matches what the controller tells us, or error out
> > > and let an administrator resolve it by checking across replicas for
> > > consistency.
> > >
> > > Lucas
> > >
> > >
> > > On Thu, Sep 24, 2020 at 3:41 PM Jason Gustafson 
> > > wrote:
> > >
> > > > Thanks Justine. Responses below:
> > > >
> > > > > 1. Yes, the directory will still be based on the topic names.
> > > > LeaderAndIsrRequest is one of the few requests that will still
> contain
> > > the
> > > > topic name. So I think we have this covered. Sorry for confusion.
> > > >
> > > > Ah, you're right. My eyes passed right over the field.
> > > >
> > > > > 2. Part of the usage of the file is to have persistent storage of
> > > > > the topic ID and use it to compare with the ID supplied in the
> > > > > LeaderAndIsr Request. There is some discussion in the KIP about
> > > > > changes to the directory structure, but I believe directory changes
> > > > > were considered to be out of scope when the KIP was written.
> > > >
> > > > Yeah, I was hoping to get a better understanding of why it was taken
> > > > out of scope. Perhaps Lucas Bradstreet might have more insight about
> > > > the decision. Basically my point is that we have to create additional
> > > > infrastructure here to support the name/id mapping, so I wanted to
> > > > understand if we just consider this a sort of tech debt. It would be
> > > > useful to cover how we handle the case when this file gets corrupted.
> > > > Seems like we just have to trust that it matches whatever the
> > > > controller tells us and rewrite it?
> > > >
> > > > > 3. I think this is a good point, but I again I wonder about the
> scope
> > > of
> > > > the KIP. Most of the changes mentioned in the KIP are for supporting
> > > topic
> > > > deletion and I believe that is why the produce request was listed
> under
> > > > future work.
> > > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #125

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: standardize rebalance related logging for easy discovery & 
debugging (#9295)

[github] MINOR: add docs for 2.7 TRACE-level e2e latency metrics (#9339)

[github] KAFKA-10502: Use Threadlocal.remote to avoid leak on TimestampRouter 
(#9304)

[github] MINOR: Update the javadoc in GroupMetadataManager.scala (#9241)


--
[...truncated 6.70 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-09-28 Thread Colin McCabe
On Fri, Sep 25, 2020, at 17:35, Jun Rao wrote:
> Hi, Colin,
> 
> Thanks for the reply.
> 
> 62. Thinking about this more, I am wondering what's our overall strategy
> for configs shared between the broker and the controller. For example, both
> the broker and the controller have to define listeners. So both need
> configs like listeners/advertised.listeners. Both the new controller and
> the broker replicate data. So both need to define some replication related
> configurations (replica.fetch.min.bytes, replica.fetch.wait.max.ms, etc).
> Should we let the new controller share those configs with brokers or should
> we define new configs just for the controller? It seems that we want to
> apply the same strategy consistently for all shared configs?
> 

Hi Jun,

That's a good question.  I think that we should share as many configurations as 
possible.  There will be a few cases where this isn't possible, and we need to 
create a new configuration key that is controller-only, but I think that will 
be relatively rare.

>
> 63. If a node has process.roles set to controller, does one still need to
> set broker.id on this node?
> 

No, broker.id does not need to be set in that case.

best,
Colin

>
> Thanks,
> 
> Jun
> 
> 
> 
> On Fri, Sep 25, 2020 at 2:17 PM Colin McCabe  wrote:
> 
> > On Fri, Sep 25, 2020, at 10:17, Jun Rao wrote:
> > > Hi, Colin,
> > >
> > > Thanks for the reply.
> > >
> > > 60. Yes, I think you are right. We probably need the controller id when a
> > > broker starts up. A broker only stores the Raft leader id in the metadata
> > > file. To do the initial fetch to the Raft leader, it needs to know the
> > > host/port associated with the leader id.
> > >
> > > 62. It seems there are 2 parts to this: (1) which listener a client
> > > should use to initiate a connection to the controller and (2) which
> > > listener should a controller use to accept client requests. For (1),
> > > at any point of time, a client only needs to use one listener. I think
> > > controller.listener.name is meant for the client.
> >
> > Hi Jun,
> >
> > controller.listener.names is also used by the controllers.  In the case
> > where we have a broker and a controller in the same JVM, we have a single
> > config file.  Then we need to know which listeners belong to the controller
> > and which belong to the broker component.  That's why it's a list.
> >
> > > So, a single value seems
> > > to make more sense. Currently, we don't have a configuration for (2). We
> > > could add a new one for that and support a list. I am wondering how
> > useful
> > > it will be. One example that I can think of is that we can reject
> > > non-controller related requests if accepted on controller-only listeners.
> > > However, we already support separate authentication for the controller
> > > listener. So, not sure how useful it is.
> >
> > The controller always has a separate listener and does not share listeners
> > with the broker.  The main reason for this is to avoid giving clients the
> > ability to launch a denial-of-service attack on the controller by flooding
> > a broker port.  A lot of times, these attacks are made unintentionally by
> > poorly coded or configured clients.  Additionally, broker ports can also be
> > very busy with large fetch requests, and so on.  It's just a bad
> > configuration in general to try to overload the same port for the
> > controller and broker, and we don't want to allow it.  It would be a
> > regression to go from the current system where control requests are safely
> > isolated on a separate port, to one where they're not.  It also makes the
> > code and configuration a lot messier.
> >
> > >
> > > 63. (a) I think most users won't know controller.id defaults to
> > broker.id +
> > > 3000. So, it can be confusing for them to set up controller.connect. If
> > > this is truly needed, it seems that it's less confusing to make
> > > controller.id required.
> > > (b) I am still trying to understand if we truly need to expose a
> > > controller.id. What if we only expose broker.id and let
> > controller.connect
> > > just use broker.id? What will be missing?
> >
> > The controller has a separate ID from the broker, so knowing broker.id is
> > not helpful here.
> >
> > best,
> > Colin
> >
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Thu, Sep 24, 2020 at 10:55 PM Colin McCabe 
> > wrote:
> > >
> > > > On Thu, Sep 24, 2020, at 16:24, Jun Rao wrote:
> > > > > Hi, Colin,
> > > > >
> > > > > Thanks for the reply and the updated KIP. A few more comments below.
> > > > >
> > > >
> > > > Hi Jun,
> > > >
> > > > >
> > > > > 53. It seems that you already incorporated the changes in KIP-516.
> > With
> > > > > topic ids, we don't need to wait for the topic's data to be deleted
> > > > before
> > > > > removing the topic metadata. If the topic is recreated, we can still
> > > > delete
> > > > > the data properly based on the topic id. So, it seems that we can
> > remove
> > > > > 

Re: [DISCUSS] KIP-653: Upgrade log4j to log4j2

2020-09-28 Thread Dongjin Lee
> 3. ... For the same reason I call that a bug, I think the description in
> the KIP is incorrect.

Agreed; the description in the KIP was written before you opened the PR (
https://github.com/apache/kafka/pull/9266). As you remember, I am
participating in the review. I think it is a bug and should be fixed. (And
it seems like it will be.)

> In any case I think some careful testing to ensure compatibility would be
very beneficial.

Yes, I am now adding some additional verifications to make sure. It is
almost done, and I will update the KIP as soon as I complete them.

Don't hesitate to give me additional comments if it is necessary.

Best,
Dongjin
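
For reference, the root logger name translation proposed earlier in this thread could be as small as the following sketch (method names are illustrative, not from the PR or KIP):

    // Translate between the log4j 1.x root logger name ("root") and the
    // log4j2 root logger name (the empty string) at the API boundary.
    static String toLog4j2Name(String name) {       // incoming requests
        return "root".equals(name) ? "" : name;
    }

    static String toLog4j1Name(String name) {       // outgoing responses
        return name.isEmpty() ? "root" : name;
    }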

On Mon, Sep 28, 2020 at 8:03 PM Tom Bentley  wrote:

> Hi Dongjin,
>
> Sorry for the late reply.
>
> 1. I think translating the name "root" to "" would work fine.
>
> 2. The second bullet in the Connect section of the KIP seems to need some
> translation between null and OFF, similar to the name translation.
>
> 3. The third bullet seems to be about logger inheritance. As you know, I
> have an open PR (https://github.com/apache/kafka/pull/9266) to fix a bug
> where the broker/connect reports the root logger's level rather than
> respecting the actual (inherited) level of a logger in the hierarchy. For
> the same reason I call that a bug, I think the description in the KIP is
> incorrect. The behaviour change described would seem to be incompatible
> whether the PR was merged or not.
>
> I'm not an expert in log4j, but my understanding is as follows:
>
> * In original log4j, a logger and its configuration were both represented
> by a Logger instance. A Logger could be instantiated and configured
> according to the config file (or programmatically). If it was created by a
> call to the LogManager (e.g. in order to log something) its configuration
> would be inherited. This meant there was only one place to look for a
> loggers level: The Logger itself. This meant that getting or setting a
> logger's level was easy.
>
> * In log4j2 a LoggerConfig (the thing created by the config file) is a
> separate thing from the Logger (the thing on which you call warn(), debug()
> etc) itself and I think this makes it harder to provide compatibility with
> the log4j v1 behaviour for things like getting a logger's level, because
> AFAIK log4j2 doesn't provide a convenient API for doing so. Instead when
> finding a logger's level you have to look for both a LoggerConfig and a
> Logger, because the level could be set in either. This is all based on what
> I learned when I was looking at the log4j2 switch (before I knew you were
> already looking at it, if you recall). I have some code from then [1] which
> may be of use though it's in a bit of a rough state. In any case I think
> some careful testing to ensure compatibility would be very beneficial.
>
> Kind regards,
>
> Tom
>
> [1]:
>
> https://github.com/tombentley/kafka/blob/KAFKA-1368-log4j2/core/src/main/scala/kafka/utils/Log4j2Controller.scala
>
>
>
>
>
> On Wed, Sep 23, 2020 at 1:50 PM Dongjin Lee  wrote:
>
> > Hi Tom,
> >
> > Thanks for the detailed analysis. Recently, I was also thinking about API
> > compatibility. I initially thought that the difference between the root
> > logger name would break the compatibility (as the KIP states), it seems
> > like I found a workaround:
> >
> > 1. When the user requests arrive, regard the logger name 'root' as an
> empty
> > string. (i.e., translate the request into the log4j2 equivalent.)
> > 2. When generating the response, change the logger name '' into 'root'.
> > (i.e., translate the response into the log4j equivalent.)
> > 3. Remove (or make reverse) the workaround above when we make log4j2
> > default.
> >
> > In short, it seems like we can handle the API incompatibility introduced
> by
> > the root logger name change by adding a facade.
> >
> > What do you think?
> >
> > Thanks,
> > Dongjin
> >
> > On Wed, Sep 23, 2020 at 7:36 PM Tom Bentley  wrote:
> >
> > > Hi Dongjin,
> > >
> > > I'd like to see this feature, but if I understand correctly, the KIP in
> > its
> > > current form breaks a couple of Kafka APIs. For Kafka Connect it says
> > "From
> > > log4j2, the name of the root logger becomes empty string from 'root'.
> It
> > > impacts Kafka connect's dynamic logging control feature. (And should be
> > > documented.)". This seems to imply that any tooling that a user might
> > have
> > > written about logging in Kafka Connect will break because the client
> and
> > > server don't have a shared understanding of how to identify the root
> > > logger. The same would be true for the DescribeConfigs, AlterConfigs
> and
> > > IncrementalAlterConfigs protocols using the BROKER_LOGGER resource
> type,
> > I
> > > think.
> > >
> > > Maybe that's OK if the behaviour was changing in a new major release of
> > > Kafka (e.g. 3.0), but I don't think it's allowed in Kafka 2.7 given the
> > > project's compatibility requirements & semantic versioning.
> > >
> > > If these API compatibility 

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-09-28 Thread Satish Duggana
Hi Dhruvil,
Thanks for looking into the KIP and sending your comments. Sorry for
the late reply, missed it in the mail thread.

1. Could you describe how retention would work with this KIP and which
threads are responsible for driving this work? I believe there are 3 kinds
of retention processes we are looking at:
  (a) Regular retention for data in tiered storage as per configured `
retention.ms` / `retention.bytes`.
  (b) Local retention for data in local storage as per configured `
local.log.retention.ms` / `local.log.retention.bytes`
  (c) Possibly regular retention for data in local storage, if the tiering
task is lagging or for data that is below the log start offset.

Local log retention is done by the existing log cleanup tasks. These
are not done for segments that are not yet copied to remote storage.
Remote log cleanup is done by the leader partition’s RLMTask.

2. When does a segment become eligible to be tiered? Is it as soon as the
segment is rolled and the end offset is less than the last stable offset as
mentioned in the KIP? I wonder if we need to consider other parameters too,
like the highwatermark so that we are guaranteed that what we are tiering
has been committed to the log and accepted by the ISR.

AFAIK, the last stable offset is always <= the high watermark. This will
make sure we are always tiering message segments that have been
accepted by the ISR and are transactionally completed.
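
In other words, the eligibility check reduces to something like this sketch (hypothetical names, not the actual remote log manager code):

    // A rolled segment whose end offset is below the last stable offset
    // (itself <= the high watermark) is safe to copy to remote storage.
    static boolean eligibleForTiering(boolean segmentRolled, long segmentEndOffset,
                                      long lastStableOffset) {
        return segmentRolled && segmentEndOffset < lastStableOffset;
    }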


3. The section on "Follower Fetch Scenarios" is useful but is a bit
difficult to parse at the moment. It would be useful to summarize the
changes we need in the ReplicaFetcher.

It may become difficult for users to read/follow if we add code changes here.

4. Related to the above, it's a bit unclear how we are planning on
restoring the producer state for a new replica. Could you expand on that?

As mentioned in the KIP, BuildingRemoteLogAuxState is introduced to
build state such as the leader epoch sequence and producer snapshots
before the replica starts fetching data from the leader. We will make
it clearer in the KIP.


5. Similarly, it would be worth summarizing the behavior on unclean leader
election. There are several scenarios to consider here: data loss from
local log, data loss from remote log, data loss from metadata topic, etc.
It's worth describing these in detail.

We mentioned the unclean leader election cases in the follower fetch
scenarios. If there are errors while fetching data from the remote
store or the metadata store, it will work the same way as it works
with the local log: the error is returned to the caller. Please let us
know if I am missing your point here.


7. For a READ_COMMITTED FetchRequest, how do we retrieve and return the
aborted transaction metadata?

When a fetch for a remote log is accessed, we will fetch aborted
transactions along with the segment if it is not found in the local
index cache. This includes the case of transaction index not existing
in the remote log segment. That means, the cache entry can be empty or
have a list of aborted transactions.


8. The `LogSegmentData` class assumes that we have a log segment, offset
index, time index, transaction index, producer snapshot and leader epoch
index. How do we deal with cases where we do not have one or more of these?
For example, we may not have a transaction index or producer snapshot for a
particular segment. The former is optional, and the latter is only kept for
up to the 3 latest segments.

This is a good point; we discussed it in the last meeting. The
transaction index is optional, and we will copy it only if it exists.
We want to keep all the producer snapshots at each log segment roll;
they can be removed once copying to remote storage succeeds, while
still maintaining the latest 3 segments locally. On the leader, we
only delete producer snapshots whose segments have been copied to
remote log segments. The follower will keep the log segments that have
not yet been copied to remote storage. We will update the KIP with
these details.

Thanks,
Satish.

On Thu, Sep 17, 2020 at 1:47 AM Dhruvil Shah  wrote:
>
> Hi Satish, Harsha,
>
> Thanks for the KIP. Few questions below:
>
> 1. Could you describe how retention would work with this KIP and which
> threads are responsible for driving this work? I believe there are 3 kinds
> of retention processes we are looking at:
>   (a) Regular retention for data in tiered storage as per configured `
> retention.ms` / `retention.bytes`.
>   (b) Local retention for data in local storage as per configured `
> local.log.retention.ms` / `local.log.retention.bytes`
>   (c) Possibly regular retention for data in local storage, if the tiering
> task is lagging or for data that is below the log start offset.
>
> 2. When does a segment become eligible to be tiered? Is it as soon as the
> segment is rolled and the end offset is less than the last stable offset as
> mentioned in the KIP? I wonder if we need to consider other parameters too,
> like the highwatermark so that we are guaranteed that 

Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-09-28 Thread Dongjin Lee
Hi John,

I updated the KIP with the discussion above. The 'Public Interfaces'
section describes the new API, and the 'Rejected Alternatives' section
describes the reasoning about why we selected this API design and rejected
the other alternatives.

Please have a look when you are free. And please note that the KIP freeze
for 2.7.0 is imminent.

Thanks,
Dongjin
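
For a sense of how the proposed API reads, here is a hypothetical usage sketch (enableQuery() is the draft method under discussion, not a released API; `input` is an assumed KStream from earlier in a topology):

    // Name the suppression and flag its view as queriable in one chain.
    KTable<Windowed<String>, Long> counts = input
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
        .count()
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
                            .withName("queryable-suppress")
                            .enableQuery());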

On Mon, Sep 21, 2020 at 11:35 PM Dongjin Lee  wrote:

> Hi John,
>
> I updated the PR applying the API changes we discussed above. I am now
> updating the KIP document.
>
> Thanks,
> Dongjin
>
> On Sat, Sep 19, 2020 at 10:42 AM John Roesler  wrote:
>
>> Hi Dongjin,
>>
>> Yes, that’s right. By the time of KIP-307, we had no choice but to add a
>> second name. But we do have a choice with Suppress.
>>
>> Thanks!
>> -John
>>
>> On Thu, Sep 17, 2020, at 13:14, Dongjin Lee wrote:
>> > Hi John,
>> >
>> > I just reviewed KIP-307. As far as I understood, ...
>> >
>> > 1. There was Materialized name initially.
>> > 2. With KIP-307, Named Operations were added.
>> > 3. Now we have two options for materializing suppression. If we take
>> > Materialized name here, we have two names for the same operation,
>> > which is not feasible.
>> >
>> > Do I understand correctly?
>> >
>> > > Do you have a use case in mind for having two separate names for the
>> > > operation and the view?
>> >
>> > No. I am now entirely convinced with your suggestion.
>> >
>> > I just started to update the draft implementation. If I understand
>> > correctly, please notify me; I will update the KIP by adding the
>> > discussion above.
>> >
>> > Best,
>> > Dongjin
>> >
>> > On Thu, Sep 17, 2020 at 11:06 AM John Roesler 
>> wrote:
>> >
>> > > Hi Dongjin,
>> > >
>> > > Thanks for the reply. Yes, that’s correct, we added that method to
>> > > name the operation. But the operation seems synonymous with the view
>> > > produced by the operation, right?
>> > >
>> > > During KIP-307, I remember thinking that it’s unfortunate that we
>> > > had to have two different “name” concepts for the same thing just
>> > > because setting the name on Materialized is equivalent both to making
>> > > it queriable and actually materializing it.
>> > >
>> > > If we were to reconsider the API, it would be nice to treat these
>> > > three as orthogonal:
>> > > orthogonal:
>> > > * specify a name
>> > > * flag to make the view queriable
>> > > * flag to materialize the view
>> > >
>> > > That was the context behind my suggestion. Do you have a use case in
>> > > mind for having two separate names for the operation and the view?
>> > >
>> > > Thanks,
>> > > John
>> > >
>> > > On Wed, Sep 16, 2020, at 11:43, Dongjin Lee wrote:
>> > > > Hi John,
>> > > >
>> > > > It seems like the available alternatives at this point are clear:
>> > > >
>> > > > 1. Pass queriable name as a separate parameter (i.e.,
>> > > > `KTable#suppress(Suppressed, String)`)
>> > > > 2. Make use of the Suppression processor name as a queryable name by
>> > > > adding an `enableQuery` optional flag to `Suppressed`.
>> > > >
>> > > > However, I doubt the second approach a little bit; as far as I know,
>> > > > the processor name was introduced in KIP-307[^1] to make debugging
>> > > > topology easy and understandable. Since the processor name is an
>> > > > independent concept from the materialization, I feel the first
>> > > > approach is more natural and consistent. Is there any specific
>> > > > reason that you prefer the second approach?
>> > > >
>> > > > Thanks,
>> > > > Dongjin
>> > > >
>> > > > [^1]:
>> > > >
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL
>> > > >
>> > > >
>> > > >
>> > > > On Wed, Sep 16, 2020 at 11:48 PM John Roesler 
>> > > wrote:
>> > > >
>> > > > > Hi Dongjin,
>> > > > >
>> > > > > Yes, that's where I was leaning. Although, I'd prefer adding
>> > > > > the option to Suppressed instead of adding a new argument to
>> > > > > the method call.
>> > > > >
>> > > > > What do you think about:
>> > > > >
>> > > > > class Suppressed {
>> > > > > +  public Suppressed enableQuery();
>> > > > > }
>> > > > >
>> > > > > Since Suppressed already has `withName(String)`, it seems
>> > > > > like all we need to add is a boolean flag.
>> > > > >
>> > > > > Does that seem sensible to you?
>> > > > >
>> > > > > Thanks,
>> > > > > -John
>> > > > >
>> > > > > On Wed, 2020-09-16 at 21:50 +0900, Dongjin Lee wrote:
>> > > > > > Hi John,
>> > > > > >
>> > > > > > Although it's not great to have "special snowflakes" in the
>> > > > > > API, Choice B does seem safer in the short term. We would
>> > > > > > basically be proposing a temporary API to make the suppressed
>> > > > > > view queryable without a Materialized argument.
>> > > > > >
>> > > > > > Then, it seems like you prefer `KTable#suppress(Suppressed,
>> > > > > > String)` (i.e., the queryable name only as a parameter) for this
>> > > > > > time, and refine
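
For reference, a minimal sketch of how the proposed `enableQuery` flag would
look in use (`enableQuery()` is still only a proposal; everything else is
existing API):

import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.Windowed;
import static org.apache.kafka.streams.kstream.Suppressed.BufferConfig.unbounded;

// `counts` is assumed to be a windowed count, i.e. a KTable<Windowed<String>, Long>.
KTable<Windowed<String>, Long> suppressed =
    counts.suppress(Suppressed.untilWindowCloses(unbounded())
                              .withName("count-suppress") // processor name (KIP-307)
                              .enableQuery());            // proposed flag: reuse the
                                                          // name as the queryable name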

Re: Why should SASL principal be unchanged upon reauth

2020-09-28 Thread Gokul Ramanan Subramanian
Thanks Ron.

Does this mean that it would be safe to remove the "KafkaPrincipal has not
changed after reauth" check in a future version? Or do you think there are
portions of the code that work under the assumption that this check exists?
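
For context, the check in question behaves roughly like the sketch below (an
illustration only, not the broker's actual code):

import java.io.Closeable;
import java.io.IOException;
import org.apache.kafka.common.security.auth.KafkaPrincipal;

// On re-authentication the broker compares the new principal with the one
// established at first authentication and closes the connection on mismatch.
static void verifyPrincipalUnchanged(KafkaPrincipal original,
                                     KafkaPrincipal reauthenticated,
                                     Closeable connection) throws IOException {
    if (!original.equals(reauthenticated)) {
        connection.close(); // the TCP connection is killed
    }
}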

On Sun, Sep 27, 2020 at 12:07 AM Ron Dagostino  wrote:

> Hi Gokul.  I looked back at the discussion thread, and it appears it was an
> arbitrary decision.
>
>
> https://lists.apache.org/thread.html/45c09f226386c0b1dc5f9b36e112882a20414d5900f8d778969e633e%40%3Cdev.kafka.apache.org%3E
>
> Ron
>
> On Thu, Sep 24, 2020 at 11:03 AM Gokul Ramanan Subramanian <
> gokul24...@gmail.com> wrote:
>
> > Hi.
> >
> > I was looking through Kafka code and found that SASL KafkaPrincipals are
> > not supposed to change upon reauthentication, and if they do, the broker
> > will kill the TCP connection.
> >
> > What is the reasoning behind this limitation?
> >
> > Thanks.
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #124

2020-09-28 Thread Apache Jenkins Server
See 


Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H48 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
ERROR: [WS-CLEANUP] Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
ERROR: Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did leafNodes run? 
For example, 

 is 2 days 5 hr old

Not sending mail to unregistered user git...@hugo-hirsch.de


Re: [DISCUSS] KIP-653: Upgrade log4j to log4j2

2020-09-28 Thread Tom Bentley
Hi Dongjin,

Sorry for the late reply.

1. I think translating the name "root" to "" would work fine.

2. The second bullet in the Connect section of the KIP seems to need some
translation between null and OFF, similar to the name translation.

3. The third bullet seems to be about logger inheritance. As you know, I
have an open PR (https://github.com/apache/kafka/pull/9266) to fix a bug
where the broker/connect reports the root logger's level rather than
respecting the actual (inherited) level of a logger in the hierarchy. For
the same reason that I call that a bug, I think the description in the KIP
is incorrect. The behaviour change described would seem to be incompatible
whether or not the PR is merged.

I'm not an expert in log4j, but my understanding is as follows:

* In the original log4j, a logger and its configuration were both represented
by a Logger instance. A Logger could be instantiated and configured
according to the config file (or programmatically). If it was created by a
call to the LogManager (e.g. in order to log something), its configuration
would be inherited. So there was only one place to look for a logger's
level: the Logger itself, which made getting or setting a logger's level
easy.

* In log4j2, a LoggerConfig (the thing created by the config file) is
separate from the Logger (the thing on which you call warn(), debug(),
etc.). I think this makes it harder to provide compatibility with the
log4j v1 behaviour for things like getting a logger's level, because
AFAIK log4j2 doesn't provide a convenient API for doing so. Instead, when
finding a logger's level you have to look at both the LoggerConfig and the
Logger, because the level could be set on either. This is all based on what
I learned when I was looking at the log4j2 switch (before I knew you were
already looking at it, if you recall). I have some code from then [1] which
may be of use, though it's in a bit of a rough state. In any case, I think
some careful testing to ensure compatibility would be very beneficial.
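
For what it's worth, getting and setting a level via the log4j2 core API looks
roughly like the sketch below (assuming the default log4j2 core implementation;
note that getLoggerConfig() returns the nearest ancestor's LoggerConfig when
there is no exact match, which is exactly the subtlety above):

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

// Read a logger's level: the returned LoggerConfig may belong to an
// ancestor logger, so its name can differ from the requested name.
LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
Configuration config = ctx.getConfiguration();
LoggerConfig loggerConfig = config.getLoggerConfig("kafka.server.KafkaApis");
Level effective = loggerConfig.getLevel();

// Set a level: mutate the LoggerConfig, then push the change to live Loggers.
// Beware: if loggerConfig belongs to an ancestor, this changes the ancestor.
loggerConfig.setLevel(Level.DEBUG);
ctx.updateLoggers();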

Kind regards,

Tom

[1]:
https://github.com/tombentley/kafka/blob/KAFKA-1368-log4j2/core/src/main/scala/kafka/utils/Log4j2Controller.scala





On Wed, Sep 23, 2020 at 1:50 PM Dongjin Lee  wrote:

> Hi Tom,
>
> Thanks for the detailed analysis. Recently, I was also thinking about API
> compatibility. I initially thought that the difference in the root logger
> name would break compatibility (as the KIP states), but it seems I have
> found a workaround:
>
> 1. When a user request arrives, treat the logger name 'root' as the empty
> string (i.e., translate the request into its log4j2 equivalent).
> 2. When generating the response, change the logger name '' back into 'root'
> (i.e., translate the response into its log4j equivalent).
> 3. Remove (or reverse) the workaround above when we make log4j2 the
> default.
>
> In short, it seems like we can handle the API incompatibility introduced by
> the root logger name change by adding a small translation facade, sketched
> below.
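>
> A minimal sketch of that facade (illustrative helper names; the only log4j2
> fact assumed is that LogManager.ROOT_LOGGER_NAME is the empty string):
>
> import org.apache.logging.log4j.LogManager;
>
> // Translate an incoming logger name (log4j convention) to log4j2.
> static String toLog4j2Name(String name) {
>     return "root".equals(name) ? LogManager.ROOT_LOGGER_NAME : name;
> }
>
> // Translate an outgoing logger name (log4j2 convention) back to log4j.
> static String toLog4j1Name(String name) {
>     return LogManager.ROOT_LOGGER_NAME.equals(name) ? "root" : name;
> }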
>
> What do you think?
>
> Thanks,
> Dongjin
>
> On Wed, Sep 23, 2020 at 7:36 PM Tom Bentley  wrote:
>
> > Hi Dongjin,
> >
> > I'd like to see this feature, but if I understand correctly, the KIP in
> > its current form breaks a couple of Kafka APIs. For Kafka Connect it says
> > "From log4j2, the name of the root logger becomes empty string from
> > 'root'. It impacts Kafka connect's dynamic logging control feature. (And
> > should be documented.)". This seems to imply that any tooling a user
> > might have written around logging in Kafka Connect will break, because
> > the client and server don't have a shared understanding of how to
> > identify the root logger. The same would be true for the DescribeConfigs,
> > AlterConfigs and IncrementalAlterConfigs protocols using the
> > BROKER_LOGGER resource type, I think.
> >
> > Maybe that's OK if the behaviour was changing in a new major release of
> > Kafka (e.g. 3.0), but I don't think it's allowed in Kafka 2.7 given the
> > project's compatibility requirements & semantic versioning.
> >
> > If these API compatibility issues are easily fixed I think it would be
> > great to have this in 2.7, but if not it might be easier to target this
> > for Kafka 3.0. That would also allow you to change the logging config
> > format as suggested by Ismael.
> >
> > Many thanks,
> >
> > Tom
> >
> > On Tue, Sep 22, 2020 at 5:15 PM Dongjin Lee  wrote:
> >
> > > Hi devs,
> > >
> > > I updated the KIP with the migration plan I discussed with Ismael.
> > >
> > > I think 2.7.0 is the perfect time to start the migration to log4j2. If
> > > we miss this opportunity, the migration will be much harder. So please
> > > have a look at this proposal.
> > >
> > > I also opened a voting thread for this.
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > On Thu, Sep 17, 2020 at 2:29 AM Dongjin Lee 
> wrote:
> > >
> > > > Hi Ismael,
> > > >
> > > > > Have we considered switching to the log4j2 logging config format by
> > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #123

2020-09-28 Thread Apache Jenkins Server
See 


Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H48 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
ERROR: [WS-CLEANUP] Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
ERROR: Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did leafNodes run? 
For example, 

 is 2 days 4 hr old

Not sending mail to unregistered user git...@hugo-hirsch.de


[jira] [Created] (KAFKA-10530) kafka-streams-application-reset misses some internal topics

2020-09-28 Thread Oliver Weiler (Jira)
Oliver Weiler created KAFKA-10530:
-

 Summary: kafka-streams-application-reset misses some internal 
topics
 Key: KAFKA-10530
 URL: https://issues.apache.org/jira/browse/KAFKA-10530
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.0
Reporter: Oliver Weiler


While the {{kafka-streams-application-reset}} tool works in most cases, it 
misses some internal topics when using {{Foreign Key Table-Table Joins}}.

After execution, there are still two internal topics left which were not deleted:

{code}
bv4-indexer-KTABLE-FK-JOIN-SUBSCRIPTION-REGISTRATION-06-topic
bbv4-indexer-717e6cc5-acb2-498d-9d08-4814aaa71c81-StreamThread-1-consumer 
bbv4-indexer-KTABLE-FK-JOIN-SUBSCRIPTION-RESPONSE-14-topic
{code}

The reason seems to be {{StreamsResetter.isInternalTopic}}, which requires an
internal topic name to end with {{-changelog}} or {{-repartition}} (which the
topics mentioned above don't).
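
A sketch of a broader check that would also match the topics above (illustrative
only; the exact suffixes used by foreign-key joins would need to be confirmed
against the Streams internals):

{code}
// Hypothetical extension of the existing suffix check; `applicationId` is
// the application id passed to the reset tool.
private boolean isInternalTopic(final String topicName) {
    return topicName.startsWith(applicationId + "-")
        && (topicName.endsWith("-changelog")
            || topicName.endsWith("-repartition")
            // foreign-key join subscription topics end in "-topic"
            || topicName.endsWith("-topic"));
}
{code}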



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-673: Emit JSONs with new auto-generated schema

2020-09-28 Thread Tom Bentley
Hi Anastasia,

Thanks for those changes. I eventually figured out that the
XYZJsonDataConverter classes are the ones produced by factoring out the
Jackson dependency (KAFKA-10384), after which the proposal made much more
sense to me. Apologies for being slow on the uptake there.

I can see now that property order in the logged JSON would be determined by
the declared field order in the RPC JSON definition. That might not always
be ideal, since large unindented JSON can be hard to parse visually, but
maybe that could be dealt with once we know there is a real problem.

It's an implementation detail, but I wonder whether constructing a whole
tree of JsonNodes might cause performance issues. It would be more work,
but the XYZJsonDataConverter classes could be generated with a method that
takes a JsonGenerator, avoiding the need to instantiate the nodes just for
the purposes of logging.
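
In plain Jackson terms the difference is roughly this (a minimal sketch; the
field names are illustrative, not the generated code):

import java.io.StringWriter;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

ObjectMapper mapper = new ObjectMapper();

// Tree approach: allocate JsonNodes first, then serialize.
// ObjectNode preserves insertion order when serialized.
ObjectNode node = mapper.createObjectNode();
node.put("apiKey", 1);
node.put("correlationId", 42);
String viaTree = mapper.writeValueAsString(node);

// Streaming approach: write fields straight to the output, no tree.
StringWriter out = new StringWriter();
try (JsonGenerator gen = mapper.getFactory().createGenerator(out)) {
    gen.writeStartObject();
    gen.writeNumberField("apiKey", 1);
    gen.writeNumberField("correlationId", 42);
    gen.writeEndObject();
}
String viaGenerator = out.toString();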

Kind regards,

Tom

On Fri, Sep 25, 2020 at 7:05 PM Anastasia Vela  wrote:

> Hi Tom,
>
> Thanks for your input!
>
> 1. I'll add more details for the RequestConvertToJson and
> XYZJsonDataConverter classes. Hopefully that will make it clearer, but just
> to answer your question: RequestConvertToJson does not return a
> XYZJsonDataConverter; rather, it returns a JsonNode, which is then
> serialized. The JsonDataConverter is the new auto-generated schema for each
> request/response type that contains the method returning the JsonNode to be
> serialized.
>
> 2. There is no defined order for the properties; they appear in the order
> in which they are set. So if you first set key B, then key A, key B would
> appear first. JsonNodes are not sorted by key when serialized.
>
> 3. Yes, serialization is done via Jackson databind.
>
> Thanks again,
> Anastasia
>
> On Fri, Sep 25, 2020 at 1:15 AM Tom Bentley  wrote:
>
> > Hi Anastasia,
> >
> > Thanks for the KIP, I can certainly see the benefit of this. I have a few
> > questions:
> >
> > 1. I think it would be helpful to readers to explicitly illustrate the
> > RequestConvertToJson and XYZJsonDataConverter classes (e.g. with method
> > signatures for one or two methods), because currently it's not clear (to
> > me at least) exactly what's being proposed. Does the RequestConvertToJson
> > return a XYZJsonDataConverter?
> >
> > 2. Does the serialization have a defined order of properties (alphabetic
> > perhaps)? My concern here is that properties appearing in order according
> > to how they are iterated in a hash map might harm human readability of
> > the logs.
> >
> > 3. Would the serialization be done via the Jackson databinding?
> >
> > Many thanks,
> >
> > Tom
> >
> > On Thu, Sep 24, 2020 at 11:49 PM Anastasia Vela 
> > wrote:
> >
> > > Hi all,
> > >
> > > I'd like to discuss KIP-673:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-673%3A+Emit+JSONs+with+new+auto-generated+schema
> > >
> > > This is a proposal to change the format of request and response traces
> > > to JSON, which would be easier to load and parse, because the current
> > > format is only JSON-like and not easily parsable.
> > >
> > > Let me know what you think,
> > > Anastasia
> > >
> >
>


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #122

2020-09-28 Thread Apache Jenkins Server
See 


Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H48 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
ERROR: [WS-CLEANUP] Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
ERROR: Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did leafNodes run? 
For example, 

 is 2 days 3 hr old

Not sending mail to unregistered user git...@hugo-hirsch.de


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #121

2020-09-28 Thread Apache Jenkins Server
See 


Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H48 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
ERROR: [WS-CLEANUP] Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
ERROR: Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did leafNodes run? 
For example, 

 is 2 days 2 hr old

Not sending mail to unregistered user git...@hugo-hirsch.de


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #94

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update the javadoc in GroupMetadataManager.scala (#9241)


--
[...truncated 6.70 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #93

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update the javadoc in GroupMetadataManager.scala (#9241)


--
[...truncated 3.33 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #120

2020-09-28 Thread Apache Jenkins Server
See 


Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H48 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
ERROR: [WS-CLEANUP] Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
ERROR: Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did leafNodes run? 
For example, 

 is 2 days 1 hr old

Not sending mail to unregistered user git...@hugo-hirsch.de


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #119

2020-09-28 Thread Apache Jenkins Server
See 


Changes:


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H48 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
ERROR: [WS-CLEANUP] Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
ERROR: Cannot delete workspace: Unable to delete 
' Tried 3 
times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 14184 
additional exceptions)
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did leafNodes run? 
For example, 

 is 2 days 0 hr old

Not sending mail to unregistered user git...@hugo-hirsch.de


[jira] [Created] (KAFKA-10529) Controller should throttle partition reassignment

2020-09-28 Thread Badai Aqrandista (Jira)
Badai Aqrandista created KAFKA-10529:


 Summary: Controller should throttle partition reassignment 
 Key: KAFKA-10529
 URL: https://issues.apache.org/jira/browse/KAFKA-10529
 Project: Kafka
  Issue Type: Improvement
Reporter: Badai Aqrandista


With 
[KIP-455|https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment],
reassignment can be triggered via the AdminClient API. However, reassigning a 
large number of topic partitions at once can cause a storm of LeaderAndIsr 
and UpdateMetadata requests, which can occupy the Controller thread for some 
time and prevent the Controller from processing other requests.

So the Controller should throttle the sending of LeaderAndIsr requests when 
actioning a reassignment.
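
To illustrate the idea (a sketch only, not a proposed implementation; the
actual mechanism and its configuration would need to be decided in a KIP), the
controller could pace batches through a simple rate limiter:

{code}
// Illustrative pacing limiter: at most `permitsPerSecond` LeaderAndIsr
// batches are dispatched per second; callers block until a slot is free.
final class LeaderAndIsrPacer {
    private final long intervalNanos;
    private long nextSlotNanos = System.nanoTime();

    LeaderAndIsrPacer(double permitsPerSecond) {
        this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
    }

    synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        if (nextSlotNanos > now) {
            java.util.concurrent.TimeUnit.NANOSECONDS.sleep(nextSlotNanos - now);
        }
        nextSlotNanos = Math.max(now, System.nanoTime()) + intervalNanos;
    }
}
{code}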



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #92

2020-09-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10502: Use Threadlocal.remote to avoid leak on TimestampRouter 
(#9304)


--
[...truncated 6.65 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest >