[jira] [Created] (KAFKA-10391) Streams should overwrite checkpoint excluding corrupted partitions

2020-08-11 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-10391:
-

 Summary: Streams should overwrite checkpoint excluding corrupted 
partitions
 Key: KAFKA-10391
 URL: https://issues.apache.org/jira/browse/KAFKA-10391
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang


While working on https://issues.apache.org/jira/browse/KAFKA-9450 I discovered 
another bug in Streams: when some partitions are corrupted due to out-of-range 
offsets, we treat the task as corrupted, close it as dirty, and then revive it. 
However, we forget to overwrite the checkpoint file to exclude those 
out-of-range partitions so that they are re-bootstrapped from the new log-start 
offset. Hence, when the task is revived, it still loads the old offset, starts 
from there, and hits the out-of-range exception again. This may cause 
{{StreamsUpgradeTest.test_app_upgrade}} to be flaky.

We do not see this often because, in the past, we always deleted the checkpoint 
file after loading it, and the out-of-range exception usually surfaces at the 
beginning of restoration rather than during it.
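The intended fix can be sketched as filtering the corrupted partitions out of the checkpoint map before the file is rewritten; the helper and partition names below are hypothetical illustrations, not Streams' actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class CheckpointFilter {
    // Drop corrupted partitions from the checkpoint map so a revived task
    // re-bootstraps them from the new log-start offset instead of the stale one.
    static Map<String, Long> excludeCorrupted(Map<String, Long> checkpoint,
                                              Set<String> corrupted) {
        Map<String, Long> kept = new HashMap<>(checkpoint);
        kept.keySet().removeAll(corrupted);
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Long> cp = new HashMap<>();
        cp.put("store-changelog-0", 42L);
        cp.put("store-changelog-1", 7L); // the out-of-range partition
        Map<String, Long> kept = excludeCorrupted(cp, Set.of("store-changelog-1"));
        System.out.println(kept); // {store-changelog-0=42}
    }
}
```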





[jira] [Resolved] (KAFKA-9450) Decouple inner state flushing from committing

2020-08-11 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9450.
--
Fix Version/s: 2.7.0
   Resolution: Fixed

> Decouple inner state flushing from committing
> -
>
> Key: KAFKA-9450
> URL: https://issues.apache.org/jira/browse/KAFKA-9450
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.7.0
>
>
> When EOS is turned on, the commit interval is set quite low (100ms) and all 
> the store layers are flushed during a commit. This is necessary for 
> forwarding records in the cache to the changelog, but unfortunately also 
> forces rocksdb to flush the current memtable before it's full. The result is 
> a large number of small writes to disk, losing the benefits of batching, and 
> a large number of very small L0 files that are likely to slow compaction.
> Since we have to delete the stores and recreate them from scratch anyway 
> during an unclean shutdown with EOS, we may as well skip flushing the 
> innermost StateStore during a commit and only flush it during a graceful 
> shutdown, before a rebalance, etc. This is currently blocked on a 
> refactoring of the state store layers to decouple the flush of the caching 
> layer from that of the actual state store.
> Note that this is especially problematic with EOS due to the necessarily-low 
> commit interval, but still hurts even with at-least-once and a much larger 
> commit interval. 
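The decoupling described above can be sketched as follows; the layer names and flush actions are illustrative only, not Kafka's actual store interfaces:

```java
import java.util.ArrayList;
import java.util.List;

public class CommitSketch {
    // Record which layers flush at each lifecycle point.
    static List<String> commit(List<String> log) {
        log.add("cache->changelog"); // forward cached records; inner store untouched
        return log;
    }

    static List<String> gracefulShutdown(List<String> log) {
        commit(log);
        log.add("rocksdb-memtable->disk"); // deferred inner flush happens here
        return log;
    }

    public static void main(String[] args) {
        System.out.println(commit(new ArrayList<>()));           // [cache->changelog]
        System.out.println(gracefulShutdown(new ArrayList<>())); // [cache->changelog, rocksdb-memtable->disk]
    }
}
```

The frequent commit path touches only the caching layer, so RocksDB's memtable fills naturally and is flushed in large batches at the rare lifecycle points.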





Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #4

2020-08-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove `PartitionHeader` abstraction from `FetchResponse` 
schema (#9164)


--
[...truncated 3.21 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED


[GitHub] [kafka-site] showuon commented on pull request #289: MINOR: Fix the broken past-events link

2020-08-11 Thread GitBox


showuon commented on pull request #289:
URL: https://github.com/apache/kafka-site/pull/289#issuecomment-672543644


   It's because `quickstart.html` got removed in this PR: 
https://github.com/apache/kafka-site/pull/286. I think we should keep the 
`quickstart.html`, `quickstart-zookeeper.html`, and `quickstart-docker.html` 
files in the `25` and `26` folders. What do you think @scott-confluent ?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #3

2020-08-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: bump 2.5 versions to 2.5.1 (#9165)


--
[...truncated 6.43 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #3

2020-08-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: bump 2.5 versions to 2.5.1 (#9165)


--
[...truncated 6.43 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #3

2020-08-11 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10390) kafka-server-stop lookup is not specific enough and may kill other processes

2020-08-11 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-10390:


 Summary: kafka-server-stop lookup is not specific enough and may 
kill other processes
 Key: KAFKA-10390
 URL: https://issues.apache.org/jira/browse/KAFKA-10390
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Lucas Bradstreet


kafka-server-stop.sh picks out kafka processes by:


 
{noformat}
PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
{noformat}
 

This is not specific enough and may match unintended processes, e.g. one whose 
command line merely includes a dependency matching *.kafka.kafka.*

A better match would be:
{noformat}
PIDS=$(ps ax | grep ' kafka\.Kafka ' | grep java | grep -v grep | awk '{print 
$1}')
{noformat}
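The over-match is easy to reproduce with Java's regex API; the two process command lines below are hypothetical:

```java
import java.util.regex.Pattern;

public class GrepMatch {
    // grep -i 'kafka\.Kafka' -- case-insensitive substring match
    static final Pattern LOOSE = Pattern.compile("kafka\\.Kafka", Pattern.CASE_INSENSITIVE);
    // grep ' kafka\.Kafka ' -- case-sensitive, anchored by spaces
    static final Pattern STRICT = Pattern.compile(" kafka\\.Kafka ");

    static boolean matches(Pattern p, String cmdline) {
        return p.matcher(cmdline).find();
    }

    public static void main(String[] args) {
        String broker = "java -Xmx1G kafka.Kafka config/server.properties";
        String bystander = "java -cp my.kafka.kafkautils.jar SomeTool";

        System.out.println(matches(LOOSE, broker));     // true
        System.out.println(matches(LOOSE, bystander));  // true -- would be killed too
        System.out.println(matches(STRICT, broker));    // true
        System.out.println(matches(STRICT, bystander)); // false
    }
}
```

The bystander's classpath contains the substring `kafka.kafka` (case-insensitively), so the loose pattern kills it; the space-anchored, case-sensitive pattern only matches the actual broker main class.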





[jira] [Created] (KAFKA-10389) Fix zero copy tagged arrays

2020-08-11 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10389:
---

 Summary: Fix zero copy tagged arrays
 Key: KAFKA-10389
 URL: https://issues.apache.org/jira/browse/KAFKA-10389
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


We're missing a bunch of zero-copy checks in the message generator. As an 
example, try adding the following field to `SimpleExampleMessage.json`:
{code}
{ "name": "taggedZeroCopyByteBuffer", "versions": "1+", "type": "bytes", 
"zeroCopy": true,
  "taggedVersions": "1+", "tag": 8, "ignorable": true },
{code}

The generated code has a few compilation errors because it assumes the byte 
array format. For example:
{code}
if (taggedZeroCopyByteBuffer.hasRemaining()) {
    _writable.writeUnsignedVarint(8);
    _writable.writeUnsignedVarint(this.taggedZeroCopyByteBuffer.length +
        ByteUtils.sizeOfUnsignedVarint(this.taggedZeroCopyByteBuffer.length + 1));
    _writable.writeUnsignedVarint(this.taggedZeroCopyByteBuffer.length + 1);
    _writable.writeByteArray(this.taggedZeroCopyByteBuffer);
}
{code}
The `toStruct` and `fromStruct` methods also seem to be missing checks.
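The root cause of the compilation errors is that `ByteBuffer` has no `.length` field; a zero-copy field must be sized with `remaining()` instead. A minimal, self-contained illustration (not the generator's actual output):

```java
import java.nio.ByteBuffer;

public class ZeroCopySize {
    // For a zero-copy "bytes" field the serialized size is the buffer's
    // remaining bytes; ByteBuffer has no .length field like byte[] does.
    static int serializedSize(ByteBuffer buf) {
        return buf.remaining();
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put(new byte[]{1, 2, 3});
        buf.flip(); // switch to read mode: position=0, limit=3
        System.out.println(serializedSize(buf)); // 3
    }
}
```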





Build failed in Jenkins: Kafka » kafka-2.1-jdk8 #2

2020-08-11 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 2.81 MB...]

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotConvertBeforeGetOnFailedCompletion PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilCancellation PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertOnlyOnceBeforeGetOnSuccessfulCompletion PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilSuccessfulCompletion STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilSuccessfulCompletion PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertBeforeGetOnSuccessfulCompletion STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldConvertBeforeGetOnSuccessfulCompletion PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotCancelIfMayNotCancelWhileRunning STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldNotCancelIfMayNotCancelWhileRunning PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldCancelBeforeGetIfMayCancelWhileRunning STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldCancelBeforeGetIfMayCancelWhileRunning PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldRecordOnlyFirstErrorBeforeGetOnFailedCompletion STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldRecordOnlyFirstErrorBeforeGetOnFailedCompletion PASSED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilFailedCompletion STARTED

org.apache.kafka.connect.util.ConvertingFutureCallbackTest > 
shouldBlockUntilFailedCompletion PASSED

org.apache.kafka.connect.util.TableTest > basicOperations STARTED

org.apache.kafka.connect.util.TableTest > basicOperations PASSED


Re: [VOTE] KIP-595: A Raft Protocol for the Metadata Quorum

2020-08-11 Thread Jason Gustafson
Thanks everyone for the votes. I am going to close this with +5 binding
(me, Colin, Boyang, Jun, and Ismael) and none against.

@Jun Yes, I think it makes sense to expose the usual request metrics for
the new APIs.

Best,
Jason



On Tue, Aug 11, 2020 at 11:30 AM Ismael Juma  wrote:

> Thanks for the KIP, +1 (binding). A couple of comments:
>
> 1. We have "quorum.voters=1@kafka-1:9092, 2@kafka-2:9092,
> 3@kafka-3:9092". Could
> this be a bit confusing given that the authority part of a url is defined
> as "authority = [userinfo@]host[:port]"?
> 2. With regards to the Quorum State file, do we have anything that helps us
> detect corruption?
>
> Ismael
>
>
> On Mon, Aug 3, 2020 at 11:03 AM Jason Gustafson 
> wrote:
>
> > Hi All, I'd like to start a vote on this proposal:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum
> > .
> > The discussion has been active for a bit more than 3 months and I think
> the
> > main points have been addressed. We have also moved some of the pieces
> into
> > follow-up proposals, such as KIP-630.
> >
> > Please keep in mind that the details are bound to change as all of
> > the pieces start coming together. As usual, we will keep this thread
> > notified of such changes.
> >
> > For me personally, this is super exciting since we have been thinking
> about
> > this work ever since I started working on Kafka! I am +1 of course.
> >
> > Best,
> > Jason
> >
>
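Ismael's first point can be seen directly: if a `quorum.voters` entry such as `1@kafka-1:9092` is fed to a URI parser, the broker id lands in the userinfo slot. A small sketch (the entry string is from the quoted example; the helper is hypothetical):

```java
import java.net.URI;

public class VoterParse {
    // Parse one quorum.voters entry as if it were a URI authority.
    static URI asAuthority(String voter) {
        return URI.create("//" + voter);
    }

    public static void main(String[] args) {
        URI uri = asAuthority("1@kafka-1:9092");
        System.out.println(uri.getUserInfo()); // 1  -- the broker id reads as userinfo
        System.out.println(uri.getHost());     // kafka-1
        System.out.println(uri.getPort());     // 9092
    }
}
```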


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #2

2020-08-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade Gradle to 6.6 (#9160)


--
[...truncated 6.43 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-11 Thread John Roesler
Thanks for your kind words, all. This is truly an honor.
-John

On Mon, 2020-08-10 at 16:26 -0700, Boyang Chen wrote:
> Congrats Mr. John!
> 
> 
> On Mon, Aug 10, 2020 at 3:02 PM Adam Bellemare 
> wrote:
> 
> > Congratulations John! You have been an excellent help to me and many
> > others. I am pleased to see this!
> > 
> > > On Aug 10, 2020, at 5:54 PM, Bill Bejeck  wrote:
> > > 
> > > Congrats!
> > > 
> > > > On Mon, Aug 10, 2020 at 4:52 PM Guozhang Wang 
> > wrote:
> > > > Congratulations!
> > > > 
> > > > > On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:
> > > > > 
> > > > > Hi, Everyone,
> > > > > 
> > > > > John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> > > > remained
> > > > > active in the community since becoming a committer. It's my pleasure 
> > > > > to
> > > > > announce that John is now a member of Kafka PMC.
> > > > > 
> > > > > Congratulations John!
> > > > > 
> > > > > Jun
> > > > > on behalf of Apache Kafka PMC
> > > > > 
> > > > 
> > > > --
> > > > -- Guozhang
> > > > 



Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #2

2020-08-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade Gradle to 6.6 (#9160)


--
[...truncated 6.38 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

[ANNOUNCE] Apache Kafka 2.5.1

2020-08-11 Thread John Roesler
The Apache Kafka community is pleased to announce the
release for Apache Kafka 2.5.1

This is a bug fix release, and it includes fixes and
improvements for 72 issues, including some critical bugs.

All of the changes in this release can be found in the
release notes:
https://www.apache.org/dist/kafka/2.5.1/RELEASE_NOTES.html


You can download the source and binary release (Scala 2.12
and 2.13) from:
https://kafka.apache.org/downloads#2.5.1


---


Apache Kafka is a distributed streaming platform with four
core APIs:


** The Producer API allows an application to publish a
stream of records to one or more Kafka topics.

** The Consumer API allows an application to subscribe to
one or more topics and process the stream of records
produced to them.

** The Streams API allows an application to act as a stream
processor, consuming an input stream from one or more topics
and producing an output stream to one or more output topics,
effectively transforming the input streams to output
streams.

** The Connector API allows building and running reusable
producers or consumers that connect Kafka topics to existing
applications or data systems. For example, a connector to a
relational database might capture every change to a table.


With these APIs, Kafka can be used for two broad classes of
application:

** Building real-time streaming data pipelines that reliably
get data between systems or applications.

** Building real-time streaming applications that transform
or react to the streams of data.


Apache Kafka is in use at large and small companies
worldwide, including Capital One, Goldman Sachs, ING,
LinkedIn, Netflix, Pinterest, Rabobank, Target, The New York
Times, Uber, Yelp, and Zalando, among others.

A big thank you for the following 42 contributors to this
release!

Adam Bellemare, Andras Katona, Andy Coates, Anna Povzner, A.
Sophie Blee-Goldman, Auston, belugabehr, Bill Bejeck, Boyang
Chen, Bruno Cadonna, Chia-Ping Tsai, Chris Egerton, David
Arthur, David Jacot, Dezhi “Andy” Fang, Dima Reznik, Ego,
Evelyn Bayes, Ewen Cheslack-Postava, Greg Harris, Guozhang
Wang, Ismael Juma, Jason Gustafson, Jeff Widman, Jeremy
Custenborder, jiameixie, John Roesler, Jorge Esteban
Quilcate Otoya, Konstantine Karantasis, Lucent-Wong, Mario
Molina, Matthias J. Sax, Navinder Pal Singh Brar, Nikolay,
Rajini Sivaram, Randall Hauch, Sanjana Kaundinya, showuon,
Steve Rodrigues, Tom Bentley, Tu V. Tran, vinoth chandar

We welcome your help and feedback. For more information on
how to report problems, and to get involved, visit the
project website at https://kafka.apache.org/

Thank you!


Regards,

John Roesler



[jira] [Created] (KAFKA-10388) Casting errors in tagged struct conversion

2020-08-11 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10388:
---

 Summary: Casting errors in tagged struct conversion
 Key: KAFKA-10388
 URL: https://issues.apache.org/jira/browse/KAFKA-10388
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


The message generator is missing some conversion logic between the generated 
struct types and instances of `Struct`. This causes casting errors when trying 
to use the `fromStruct` or `toStruct` methods. For example, in 
`SimpleExampleMessageData`, the tagged struct `myTaggedStruct` results in the 
following code in `fromStruct`:

{code}
if (_taggedFields.containsKey(8)) {
    this.myTaggedStruct = (MyTaggedStruct) _taggedFields.remove(8);
} else {
    this.myTaggedStruct = new MyTaggedStruct();
}
{code}
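The failure mode can be reproduced in miniature. In the sketch below, `Struct` and `MyTaggedStruct` are simplified stand-ins rather than the real Kafka classes: the tagged-field map holds the generic `Struct` representation, so a direct cast like the generated code above throws `ClassCastException`, whereas an explicit conversion constructor would not. This only illustrates the shape of the bug, not the actual fix.

```java
import java.util.HashMap;
import java.util.Map;

public class TaggedStructCast {
    // Simplified stand-ins for the generic Struct and the generated type.
    static class Struct { final Map<String, Object> fields = new HashMap<>(); }
    static class MyTaggedStruct {
        MyTaggedStruct() { }
        MyTaggedStruct(Struct struct) { /* copy fields out of the generic struct */ }
    }

    // Mirrors the buggy generated code: a direct cast on the tagged-field value.
    static boolean directCastFails(Object taggedFieldValue) {
        try {
            MyTaggedStruct ignored = (MyTaggedStruct) taggedFieldValue;
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Map<Integer, Object> taggedFields = new HashMap<>();
        taggedFields.put(8, new Struct()); // toStruct stores the generic representation

        Object raw = taggedFields.remove(8);
        System.out.println(directCastFails(raw));             // the direct cast blows up
        MyTaggedStruct ok = new MyTaggedStruct((Struct) raw); // explicit conversion instead
        System.out.println(ok != null);
    }
}
```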



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] vvcephei merged pull request #291: MINOR: add 2.5.1 release to Downloads

2020-08-11 Thread GitBox


vvcephei merged pull request #291:
URL: https://github.com/apache/kafka-site/pull/291


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] vvcephei commented on pull request #291: MINOR: add 2.5.1 release to Downloads

2020-08-11 Thread GitBox


vvcephei commented on pull request #291:
URL: https://github.com/apache/kafka-site/pull/291#issuecomment-672256567


   Thanks, all!







[GitHub] [kafka-site] guozhangwang commented on pull request #291: MINOR: add 2.5.1 release to Downloads

2020-08-11 Thread GitBox


guozhangwang commented on pull request #291:
URL: https://github.com/apache/kafka-site/pull/291#issuecomment-672204528


   LGTM.







Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #2

2020-08-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade Gradle to 6.6 (#9160)


--
[...truncated 3.21 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 

[jira] [Created] (KAFKA-10387) Cannot include SMT configs with source connector that include topic.creation.* properties

2020-08-11 Thread Arjun Satish (Jira)
Arjun Satish created KAFKA-10387:


 Summary: Cannot include SMT configs with source connector that 
include topic.creation.* properties
 Key: KAFKA-10387
 URL: https://issues.apache.org/jira/browse/KAFKA-10387
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Arjun Satish






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-595: A Raft Protocol for the Metadata Quorum

2020-08-11 Thread Ismael Juma
Thanks for the KIP, +1 (binding). A couple of comments:

1. We have "quorum.voters=1@kafka-1:9092, 2@kafka-2:9092,
3@kafka-3:9092". Could
this be a bit confusing given that the authority part of a url is defined
as "authority = [userinfo@]host[:port]"?
2. With regards to the Quorum State file, do we have anything that helps us
detect corruption?

Ismael
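For reference, the `id@host:port` entries in the quoted `quorum.voters` value can be split mechanically on the first `@`, which is exactly the character that makes the format resemble a URL authority (`[userinfo@]host[:port]`). The parser below is only an illustration of the format under discussion, not Kafka's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VoterSpec {
    // Splits "1@kafka-1:9092, 2@kafka-2:9092" into id -> "host:port".
    static Map<Integer, String> parse(String spec) {
        Map<Integer, String> voters = new LinkedHashMap<>();
        for (String entry : spec.split(",")) {
            String[] parts = entry.trim().split("@", 2);
            if (parts.length != 2) {
                throw new IllegalArgumentException("Expected id@host:port, got: " + entry);
            }
            voters.put(Integer.parseInt(parts[0]), parts[1]);
        }
        return voters;
    }

    public static void main(String[] args) {
        Map<Integer, String> voters =
            parse("1@kafka-1:9092, 2@kafka-2:9092, 3@kafka-3:9092");
        System.out.println(voters.get(2)); // kafka-2:9092
    }
}
```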


On Mon, Aug 3, 2020 at 11:03 AM Jason Gustafson  wrote:

> Hi All, I'd like to start a vote on this proposal:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum
> .
> The discussion has been active for a bit more than 3 months and I think the
> main points have been addressed. We have also moved some of the pieces into
> follow-up proposals, such as KIP-630.
>
> Please keep in mind that the details are bound to change as all of
> the pieces start coming together. As usual, we will keep this thread
> notified of such changes.
>
> For me personally, this is super exciting since we have been thinking about
> this work ever since I started working on Kafka! I am +1 of course.
>
> Best,
> Jason
>


[GitHub] [kafka-site] vvcephei opened a new pull request #291: MINOR: add 2.5.1 release to Downloads

2020-08-11 Thread GitBox


vvcephei opened a new pull request #291:
URL: https://github.com/apache/kafka-site/pull/291


   







[GitHub] [kafka-site] guozhangwang merged pull request #284: MINOR: Change the arrow direction based on the view state is expanded or not

2020-08-11 Thread GitBox


guozhangwang merged pull request #284:
URL: https://github.com/apache/kafka-site/pull/284


   







Re: New Website Layout

2020-08-11 Thread Tom Bentley
Hi Ben,

Thanks for fixing that. Another problem I've just noticed is a couple of
garbled headings. E.g. scroll down from
https://kafka.apache.org/documentation.html#design_compactionbasics and the
"What guarantees does log compaction provide?" section is rendering as

$1 class="anchor-heading">$8$9$10


with the . Similar thing in
https://kafka.apache.org/documentation.html#design_quotas. The source HTML
looks OK to me.

Kind regards,

Tom

On Mon, Aug 10, 2020 at 2:15 PM Ben Stopford  wrote:

> Good spot. Thanks.
>
> On Thu, 6 Aug 2020 at 18:59, Ben Weintraub  wrote:
>
> > Plus one to Tom's request - the ability to easily generate links to
> > specific config options is extremely valuable.
> >
> > On Thu, Aug 6, 2020 at 10:09 AM Tom Bentley  wrote:
> >
> > > Hi Ben,
> > >
> > > The documentation for the configs (broker, producer etc) used to
> function
> > > as links as well as anchors, which made the url fragments more
> > > discoverable, because you could click on the link and then copy+paste
> the
> > > browser URL:
> > >
> > > <a id="batch.size" href="#batch.size">batch.size</a>
> > >
> > > What seems to have happened with the new layout is the <a> tags are
> > > empty, and no longer enclose the config name,
> > >
> > >   <a id="batch.size"></a>
> > >   batch.size
> > >
> > > meaning you can't click on the link to copy and paste the URL. Could
> the
> > > old behaviour be restored?
> > >
> > > Thanks,
> > >
> > > Tom
> > >
> > > On Wed, Aug 5, 2020 at 12:43 PM Luke Chen  wrote:
> > >
> > > > When entering streams doc, it'll always show:
> > > > *You're viewing documentation for an older version of Kafka - check
> out
> > > our
> > > > current documentation here.*
> > > >
> > > >
> > > >
> > > > On Wed, Aug 5, 2020 at 6:44 PM Ben Stopford 
> wrote:
> > > >
> > > > > Thanks for the PR and feedback Michael. Appreciated.
> > > > >
> > > > > On Wed, 5 Aug 2020 at 10:49, Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Thank you, it looks great!
> > > > > >
> > > > > > I found a couple of small issues:
> > > > > > - It's not rendering correctly with http.
> > > > > > - It's printing "called" to the console. I opened a PR to remove
> > the
> > > > > > console.log() call:
> https://github.com/apache/kafka-site/pull/278
> > > > > >
> > > > > > On Wed, Aug 5, 2020 at 9:45 AM Ben Stopford 
> > > wrote:
> > > > > > >
> > > > > > > The new website layout has gone live as you may have seen.
> There
> > > are
> > > > a
> > > > > > > couple of rendering issues in the streams developer guide that
> > > we're
> > > > > > > getting addressed. If anyone spots anything else could they
> > please
> > > > > reply
> > > > > > to
> > > > > > > this thread.
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > Ben
> > > > > > >
> > > > > > > On Fri, 26 Jun 2020 at 11:48, Ben Stopford 
> > > wrote:
> > > > > > >
> > > > > > > > Hey folks
> > > > > > > >
> > > > > > > > We've made some updates to the website's look and feel. There
> > is
> > > a
> > > > > > staged
> > > > > > > > version in the link below.
> > > > > > > >
> > > > > > > > https://ec2-13-57-18-236.us-west-1.compute.amazonaws.com/
> > > > > > > > username: kafka
> > > > > > > > password: streaming
> > > > > > > >
> > > > > > > > Comments welcomed.
> > > > > > > >
> > > > > > > > Ben
> > > > > > > >
> > > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Ben Stopford
> > > > >
> > > > > Lead Technologist, Office of the CTO
> > > > >
> > > > > 
> > > > >
> > > >
> > >
> >
>
>
> --
>
> Ben Stopford
>
> Lead Technologist, Office of the CTO
>
> 
>


Re: [IMPORTANT] - Migration to ci-builds.a.o

2020-08-11 Thread Ismael Juma
I migrated kafka-trunk and kafka 2.x builds to the new CI server. I have
not migrated kafka 1.x and older since 1.1.0 is nearly 2.5 years old by
now. Does anyone feel strongly that we should migrate 1.x builds?

The PR builds have not been migrated yet as the plugin we depend on is not
installed in the new CI server. Waiting for Apache Infra to either install
it or to tell us that we need an alternative.

Let me know if you have any questions or if you see anything weird.

Ismael

On Tue, Aug 11, 2020 at 12:50 AM Ismael Juma  wrote:

> Hi all,
>
> Looks like we have 4 days to migrate all of our Jenkins builds to the new
> CI server. I have started the process:
>
> https://ci-builds.apache.org/job/Kafka/
>
> To avoid confusion, I have disabled the equivalent builds in the old CI
> server. I will migrate a few more tomorrow.
>
> Ismael
>
> -- Forwarded message -
> From: Gavin McDonald 
> Date: Thu, Jul 16, 2020 at 9:33 AM
> Subject: [IMPORTANT] - Migration to ci-builds.a.o
> To: builds 
>
>
> Hi All,
>
> This NOTICE is for everyone on builds.apache.org. We are migrating to a
> new
> Cloudbees based Client Master called https://ci-builds.apache.org. The
> migrations of all jobs needs to be done before the switch off date of 15th
> August 2020, so you have a maximum of 4 weeks.
>
> There is no tool or automated way of migrating your jobs, the
> differences in the platforms, the plugins and the setup makes it impossible
> to do in a safe way. So, you all need to start creating new jobs on
> ci-infra.a.o and then , when you are happy, turn off your old builds on
> builds.a.o.
>
> There are currently 4 agents over there ready to take jobs, plus a floating
> agent which is shared amongst many masters (more to come). I will migrate
> away 2 more agents from builds.a.o to ci-builds.a.o every few days, and
> will keep an eye of load across both and adjust accordingly.
>
> If needed, create a ticket on INFRA jira for any issues that crop up, or
> email here on builds@a.o. there may be one or two plugins we need to
> install/tweak etc.
>
> We will be not using 'Views' at the top level, but rather we will take
> advantage of 'Folders Plus' - each project will get its own Folder and have
> authorisation access to create/edit jobs etc within that folder. I will
> create these folders as you ask for them to start with. This setup allows
> for credentials isolation amongst other benefits, including but not limited
> to exclusive agents (Controlled Agents) for your own use , should you have
> any project targeted donations of agents.
>
> As with other aspects of the ASF, projects can choose to just enable all
> committers access to their folder, just ask.
>
> We will re-use builds.apache.org as a CNAME to ci-builds.a.o but will not
> be setting up any forwarding rules or anything like that.
>
> So, please, get started *now *on this so you can be sure we are all
> completed before the final cutoff date of 15th August 2020.
>
> Any questions - I expect a few (dozen :) ) - ask away and/or file INFRA
> tickets.
>
> Hadoop and related projects have their own migration path to follow, same
> cut off date, Cassandra, Beam, CouchDB have already migrated and are doing
> very well in their new homes.
>
> Lets get going ...
>
> --
>
> *Gavin McDonald*
> Systems Administrator
> ASF Infrastructure Team
>


[jira] [Created] (KAFKA-10386) Fix record serialization with flexible versions

2020-08-11 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10386:
---

 Summary: Fix record serialization with flexible versions
 Key: KAFKA-10386
 URL: https://issues.apache.org/jira/browse/KAFKA-10386
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


The generated serde code for the "records" type uses a mix of compact and 
non-compact length representations which leads to serialization errors. We 
should update the generator logic to use the compact representation 
consistently.
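As background, the two length representations are not interchangeable: flexible versions of the protocol encode lengths compactly as an unsigned varint of length + 1, while the older non-compact format uses a fixed-width integer (int32 for byte sequences). The sketch below, written under that assumption and not taken from the Kafka codebase, shows why a reader expecting one prefix cannot decode the other:

```java
import java.io.ByteArrayOutputStream;

public class LengthPrefixSketch {
    // Non-compact: length as a fixed 4-byte big-endian int32 (pre-flexible versions).
    static byte[] nonCompactLength(int n) {
        return new byte[] {
            (byte) (n >>> 24), (byte) (n >>> 16), (byte) (n >>> 8), (byte) n
        };
    }

    // Compact: (length + 1) as an unsigned varint, 7 bits per byte (flexible versions).
    static byte[] compactLength(int n) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int v = n + 1;
        while ((v & ~0x7F) != 0) {
            out.write((v & 0x7F) | 0x80); // low 7 bits, continuation bit set
            v >>>= 7;
        }
        out.write(v);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // A 10-byte payload: 4 prefix bytes non-compact vs 1 byte compact.
        System.out.println(nonCompactLength(10).length);
        System.out.println(compactLength(10).length);
    }
}
```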



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-11 Thread Vahid Hashemian
Congrats John!

--Vahid

On Tue, Aug 11, 2020, 02:37 Ismael Juma  wrote:

> Congratulations John!
>
> Ismael
>
> On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> remained
> > active in the community since becoming a committer. It's my pleasure to
> > announce that John is now a member of Kafka PMC.
> >
> > Congratulations John!
> >
> > Jun
> > on behalf of Apache Kafka PMC
> >
>


Build failed in Jenkins: Kafka » kafka-2.0-jdk8 #1

2020-08-11 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 435.61 KB...]

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed STARTED

kafka.log.LogValidatorTest > testCompressedBatchWithoutRecordsNotAllowed PASSED

kafka.log.LogValidatorTest > testInvalidInnerMagicVersion STARTED

kafka.log.LogValidatorTest > testInvalidInnerMagicVersion PASSED

kafka.log.LogValidatorTest > testInvalidOffsetRangeAndRecordCount STARTED

kafka.log.LogValidatorTest > testInvalidOffsetRangeAndRecordCount PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV2 PASSED

kafka.log.LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2Compressed PASSED

kafka.log.LogValidatorTest > testNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testNonCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV2 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV2 PASSED

kafka.log.LogValidatorTest > testRecompressionV1 STARTED

kafka.log.LogValidatorTest > testRecompressionV1 PASSED

kafka.log.LogValidatorTest > testRecompressionV2 STARTED

kafka.log.LogValidatorTest > testRecompressionV2 PASSED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord STARTED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch STARTED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog STARTED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog PASSED


[GitHub] [kafka-site] tom1299 opened a new pull request #290: MINOR: Add missing to to testing

2020-08-11 Thread GitBox


tom1299 opened a new pull request #290:
URL: https://github.com/apache/kafka-site/pull/290


   Just added a missing "to" to testing.html



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #1

2020-08-11 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 3.19 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #1

2020-08-11 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 3.22 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldStoreAndReturnStateStores STARTED


[jira] [Created] (KAFKA-10385) When DetailStat activated Final Print is necessary when result is out of time.

2020-08-11 Thread sebastien diaz (Jira)
sebastien diaz created KAFKA-10385:
--

 Summary: When DetailStat activated Final Print is necessary when 
result is out of time.
 Key: KAFKA-10385
 URL: https://issues.apache.org/jira/browse/KAFKA-10385
 Project: Kafka
  Issue Type: Bug
Reporter: sebastien diaz


Printing the detailed stats is gated on the elapsed time:

    if (currentTimeMillis - lastReportTime >= config.reportingInterval) {

        if (config.showDetailedStats)

But when the loop finishes, you don't get the last stats of the performance
test; only the previous report exists.

I propose to simply remove the condition when showing the final statistics result.
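The proposal amounts to printing one unconditional report after the loop. A minimal sketch of the idea with a simulated clock (names like `interimReportDue` are illustrative, not the actual perf-tool code):

```java
public class PerfStatsSketch {
    // The existing gate: an interim detailed report is printed only when the
    // reporting interval has elapsed since the last report.
    static boolean interimReportDue(long nowMs, long lastReportMs, long intervalMs) {
        return nowMs - lastReportMs >= intervalMs;
    }

    public static void main(String[] args) {
        long intervalMs = 5_000;
        long lastReport = 0;
        long consumed = 0;
        // Simulated clock standing in for the tool's consume loop.
        for (long now = 1_000; now <= 12_000; now += 1_000) {
            consumed += 100;
            if (interimReportDue(now, lastReport, intervalMs)) {
                System.out.println("interim: consumed=" + consumed + " at " + now);
                lastReport = now;
            }
        }
        // The proposed fix: print the final figures unconditionally after the
        // loop, so the tail end of the run is never lost.
        System.out.println("final: consumed=" + consumed);
    }
}
```

In the simulated run, the last two seconds of consumption would be invisible without the unconditional final line, which is exactly the reported problem.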

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-08-11 Thread Unmesh Joshi
>>Hi Unmesh,
>>Thanks, I'll take a look.
Thanks. I will be adding more to the prototype and will be happy to help
and collaborate.

Thanks,
Unmesh

On Tue, Aug 11, 2020 at 12:28 AM Colin McCabe  wrote:

> Hi Jose,
>
> That's a good point that I hadn't considered.  It's probably worth having
> a separate leader change message, as you mentioned.
>
> Hi Unmesh,
>
> Thanks, I'll take a look.
>
> best,
> Colin
>
>
> On Fri, Aug 7, 2020, at 11:56, Jose Garcia Sancio wrote:
> > Hi Unmesh,
> >
> > Very cool prototype!
> >
> > Hi Colin,
> >
> > The KIP proposes a record called IsrChange which includes the
> > partition, topic, isr, leader and leader epoch. During normal
> > operation ISR changes do not result in leader changes. Similarly,
> > leader changes do not necessarily involve ISR changes. The controller
> > implementation that uses ZK modeled them together because
> > 1. All of this information is stored in one znode.
> > 2. ZK's optimistic lock requires that you specify the new value
> completely
> > 3. The change to that znode was being performed by both the controller
> > and the leader.
> >
> > None of these reasons are true in KIP-500. Have we considered having
> > two different records? For example
> >
> > 1. IsrChange record which includes topic, partition, isr
> > 2. LeaderChange record which includes topic, partition, leader and
> leader epoch.
> >
> > I suspect that making this change will also require changing the
> > message AlterIsrRequest introduced in KIP-497: Add inter-broker API to
> > alter ISR.
> >
> > Thanks
> > -Jose
> >
>
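For illustration, the proposed split could look like the following. This is a purely hypothetical Java sketch; the actual KIP-631 metadata records are defined as JSON message schemas, and the field names here merely follow the discussion above:

```java
import java.util.List;

public class MetadataRecordsSketch {
    // Hypothetical record 1: ISR changes carry only the new ISR.
    record IsrChangeRecord(String topic, int partition, List<Integer> isr) {}

    // Hypothetical record 2: leader changes carry the leader and its epoch.
    record LeaderChangeRecord(String topic, int partition, int leader, int leaderEpoch) {}

    public static void main(String[] args) {
        // An ISR shrink during normal operation needs no leader information.
        IsrChangeRecord isrShrink = new IsrChangeRecord("foo", 0, List.of(1, 2));
        // A leader election appends a separate record with a bumped epoch,
        // without having to restate the ISR.
        LeaderChangeRecord election = new LeaderChangeRecord("foo", 0, 2, 5);
        System.out.println(isrShrink);
        System.out.println(election);
    }
}
```

The point of the split is visible in the sketch: neither event has to restate the other's state, which the combined ZK-era record forced.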


[GitHub] [kafka-site] showuon commented on pull request #289: MINOR: Fix the broken past-events link

2020-08-11 Thread GitBox


showuon commented on pull request #289:
URL: https://github.com/apache/kafka-site/pull/289#issuecomment-671857049


   @scott-confluent @mjsax , could you help review this PR? Thanks.







[GitHub] [kafka-site] showuon opened a new pull request #289: MINOR: Fix the wrong past-events link

2020-08-11 Thread GitBox


showuon opened a new pull request #289:
URL: https://github.com/apache/kafka-site/pull/289


   Fix the wrong past-events link. It should be in the `kafka-summit.org` 
domain, not under `kafka.apache.org`.







Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-11 Thread Ismael Juma
Congratulations John!

Ismael

On Mon, Aug 10, 2020 at 1:11 PM Jun Rao  wrote:

> Hi, Everyone,
>
> John Roesler has been a Kafka committer since Nov. 5, 2019. He has remained
> active in the community since becoming a committer. It's my pleasure to
> announce that John is now a member of Kafka PMC.
>
> Congratulations John!
>
> Jun
> on behalf of Apache Kafka PMC
>


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-11 Thread Mickael Maison
Congrats John!

On Tue, Aug 11, 2020 at 9:09 AM Tom Bentley  wrote:
>
> Congratulations John!
>
> On Tue, Aug 11, 2020 at 7:19 AM Bruno Cadonna  wrote:
>
> > Wow, that is awesome! Congrats, John!
> >
> > Bruno
> >
> > On 10.08.20 22:11, Jun Rao wrote:
> > > Hi, Everyone,
> > >
> > > John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> > remained
> > > active in the community since becoming a committer. It's my pleasure to
> > > announce that John is now a member of Kafka PMC.
> > >
> > > Congratulations John!
> > >
> > > Jun
> > > on behalf of Apache Kafka PMC
> > >
> >
> >


Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-11 Thread Tom Bentley
Congratulations John!

On Tue, Aug 11, 2020 at 7:19 AM Bruno Cadonna  wrote:

> Wow, that is awesome! Congrats, John!
>
> Bruno
>
> On 10.08.20 22:11, Jun Rao wrote:
> > Hi, Everyone,
> >
> > John Roesler has been a Kafka committer since Nov. 5, 2019. He has
> remained
> > active in the community since becoming a committer. It's my pleasure to
> > announce that John is now a member of Kafka PMC.
> >
> > Congratulations John!
> >
> > Jun
> > on behalf of Apache Kafka PMC
> >
>
>


Fwd: [IMPORTANT] - Migration to ci-builds.a.o

2020-08-11 Thread Ismael Juma
Hi all,

Looks like we have 4 days to migrate all of our Jenkins builds to the new
CI server. I have started the process:

https://ci-builds.apache.org/job/Kafka/

To avoid confusion, I have disabled the equivalent builds in the old CI
server. I will migrate a few more tomorrow.

Ismael

-- Forwarded message -
From: Gavin McDonald 
Date: Thu, Jul 16, 2020 at 9:33 AM
Subject: [IMPORTANT] - Migration to ci-builds.a.o
To: builds 


Hi All,

This NOTICE is for everyone on builds.apache.org. We are migrating to a new
Cloudbees based Client Master called https://ci-builds.apache.org. The
migrations of all jobs needs to be done before the switch off date of 15th
August 2020, so you have a maximum of 4 weeks.

There is no tool or automated way of migrating your jobs; the
differences in the platforms, the plugins, and the setup make it impossible
to do in a safe way. So you all need to start creating new jobs on
ci-builds.a.o and then, when you are happy, turn off your old builds on
builds.a.o.

There are currently 4 agents over there ready to take jobs, plus a floating
agent which is shared amongst many masters (more to come). I will migrate
away 2 more agents from builds.a.o to ci-builds.a.o every few days, and
will keep an eye of load across both and adjust accordingly.

If needed, create a ticket on INFRA jira for any issues that crop up, or
email here on builds@a.o. there may be one or two plugins we need to
install/tweak etc.

We will not be using 'Views' at the top level, but rather we will take
advantage of 'Folders Plus' - each project will get its own Folder and have
authorisation access to create/edit jobs etc within that folder. I will
create these folders as you ask for them to start with. This setup allows
for credentials isolation amongst other benefits, including but not limited
to exclusive agents (Controlled Agents) for your own use , should you have
any project targeted donations of agents.

As with other aspects of the ASF, projects can choose to just enable all
committers access to their folder, just ask.

We will re-use builds.apache.org as a CNAME to ci-builds.a.o but will not
be setting up any forwarding rules or anything like that.

So, please, get started *now* on this so you can be sure we are all
completed before the final cutoff date of 15th August 2020.

Any questions - I expect a few (dozen :) ) - ask away and/or file INFRA
tickets.

Hadoop and related projects have their own migration path to follow, same
cut off date, Cassandra, Beam, CouchDB have already migrated and are doing
very well in their new homes.

Lets get going ...

-- 

*Gavin McDonald*
Systems Administrator
ASF Infrastructure Team


Re: [DISCUSS] KIP-647: Add ability to handle late messages in streams-aggregation

2020-08-11 Thread Bruno Cadonna

Hi Igor,

Thanks for the KIP!

Similar to Matthias, I am also wondering why you rejected the more 
general solution involving a callback. I also think that writing to a 
topic is just one of multiple ways to handle late records. For example, 
one could compute statistics over the late records before, or instead 
of, writing the records to a topic. Or one could write the records to a 
database for analysis.
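A minimal sketch of this more general callback idea (all interface and method names here are hypothetical, not a real Kafka Streams API):

```java
import java.util.ArrayList;
import java.util.List;

public class LateRecordCallbackSketch {
    // Hypothetical callback invoked for records that arrive too late for
    // their window; a dead-letter topic is just one possible implementation.
    interface LateRecordHandler<K, V> {
        void onLateRecord(K key, V value, long recordTs, long windowEndTs);
    }

    // A record is late when it arrives after the window end plus grace period.
    static boolean isLate(long recordTs, long windowEndTs, long graceMs) {
        return recordTs > windowEndTs + graceMs;
    }

    public static void main(String[] args) {
        List<String> captured = new ArrayList<>();
        // One implementation: collect late records (a stand-in for producing
        // to a DLQ topic, updating statistics, or writing to a database).
        LateRecordHandler<String, Long> handler = (key, value, ts, windowEnd) ->
                captured.add(key + "@" + ts + " missed window ending " + windowEnd);

        long windowEnd = 10_000L;
        long graceMs = 1_000L;
        for (long ts : new long[] {9_500L, 11_500L, 12_000L}) {
            if (isLate(ts, windowEnd, graceMs)) {
                handler.onLateRecord("k", 1L, ts, windowEnd);
            }
        }
        System.out.println(captured);  // the two late records
    }
}
```

Because the handler is an interface rather than a hard-coded topic write, the same hook covers the statistics and database use cases mentioned above.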


Best,
Bruno

On 28.07.20 05:14, Matthias J. Sax wrote:

Thanks for the KIP Igor.

What you propose sounds a little bit like a "dead-letter-queue" pattern.
Thus, I am wondering if we should try to do a built-in
"dead-letter-queue" feature that would be general purpose? For example,
users can drop messages in the source node if they don't have a valid
timestamp or if a deserialization error occurs, and they face a similar
issue in those cases (even if it might be a little simpler to handle
them, as custom user code is executed).

For a general purpose DLQ, the feature should be exposed at the Processor
API level though, and the DSL would just use this feature (instead of
introducing it as a DSL feature).

Late records are of course only defined at the DSL level, as PAPI
users need to define custom semantics. Also, late records are not really
corrupted. However, the pattern seems similar enough, i.e., piping late
data into a topic is just a special case of a DLQ?

I am also wondering if piping late records into a DLQ is the only way
to handle them? For example, I could imagine that a user just wants to
trigger a side effect (similar to what you mention in rejected
alternatives)? Or maybe a user might even want to somehow process those
records and feed them back into the actual processing pipeline.

Last, a DLQ is only useful if somebody consumes from the topic and does
something with the data. Can you elaborate on the use case: how would a
user use the preserved late records?



-Matthias

On 7/27/20 1:45 AM, Igor Piddubnyi wrote:

Hi everybody,
I would like to start off the discussion for KIP-647:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-647%3A+Add+ability+to+handle+late+messages+in+streams-aggregation



This KIP proposes a minor adjustment in the kafka-streams
aggregation-api, adding an ability for processing late messages.
[WIP] PR here: https://github.com/apache/kafka/pull/9017

Please check.
Regards, Igor.








Re: [ANNOUNCE] New Kafka PMC Member: John Roesler

2020-08-11 Thread Bruno Cadonna

Wow, that is awesome! Congrats, John!

Bruno

On 10.08.20 22:11, Jun Rao wrote:

Hi, Everyone,

John Roesler has been a Kafka committer since Nov. 5, 2019. He has remained
active in the community since becoming a committer. It's my pleasure to
announce that John is now a member of Kafka PMC.

Congratulations John!

Jun
on behalf of Apache Kafka PMC