Build failed in Jenkins: kafka-trunk-jdk11 #534

2019-05-16 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H42 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Timeout after 10 minutes
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress 
https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" 
returned status code 128:
stdout: 
stderr: remote: Enumerating objects: 72, done.
remote: Counting objects: 100% (72/72), done.
remote: Compressing objects:  10% (5/49) [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk11 #533

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-5505: Incremental cooperative rebalancing in Connect (KIP-415)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (ubuntu trusty) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Timeout after 10 minutes
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress 
https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" 
returned status code 128:
stdout: 
stderr: remote: Enumerating objects: 72, done.
remote: Counting objects: 100% (72/72), done.
remote: Compressing objects:   6% (3/49) [progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk8 #3643

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[rhauch] Add '?expand' query param for additional info on '/connectors'. (#6658)

--
[...truncated 2.43 MB...]

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue PASSED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > testNullToBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.LongConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testDeserializingDataWithTooManyBytes PASSED


Re: [VOTE] 2.2.1 RC1

2019-05-16 Thread Vahid Hashemian
Since there is no vote on this RC yet, I'll extend the deadline to Monday,
May 20, at 9:00 am.

Thanks in advance for checking / testing / voting.

--Vahid


On Mon, May 13, 2019, 20:15 Vahid Hashemian 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 2.2.1.
>
> Compared to RC0, this release candidate also fixes the following issues:
>
>- [KAFKA-6789] - Add retry logic in AdminClient requests
>- [KAFKA-8348] - Document of kafkaStreams improvement
>- [KAFKA-7633] - Kafka Connect requires permission to create internal
>topics even if they exist
>- [KAFKA-8240] - Source.equals() can fail with NPE
>- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
>record, causing unlimited growth of __consumer_offsets
>- [KAFKA-8352] - Connect System Tests are failing with 404
>
> Release notes for the 2.2.1 release:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
>
> * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> https://github.com/apache/kafka/releases/tag/2.2.1-rc1
>
> * Documentation:
> https://kafka.apache.org/22/documentation.html
>
> * Protocol:
> https://kafka.apache.org/22/protocol.html
>
> * Successful Jenkins builds for the 2.2 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
>
> Thanks!
> --Vahid
>


[jira] [Resolved] (KAFKA-5505) Connect: Do not restart connector and existing tasks on task-set change

2019-05-16 Thread Randall Hauch (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5505.
--
Resolution: Fixed
  Reviewer: Randall Hauch

> Connect: Do not restart connector and existing tasks on task-set change
> ---
>
> Key: KAFKA-5505
> URL: https://issues.apache.org/jira/browse/KAFKA-5505
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Per Steffensen
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 2.3.0
>
>
> I am writing a connector with a frequently changing task-set. It is really 
> not working very well, because the connector and all existing tasks are 
> restarted when the set of tasks changes. E.g. if the connector is running 
> with 10 tasks, and an additional task is needed, the connector itself and all 
> 10 existing tasks are restarted, just to make the 11th task run also. My 
> tasks have a fairly heavy initialization, making it extra annoying. I would 
> like to see a change, introducing a "mode", where only new/deleted tasks are 
> started/stopped when notifying the system that the set of tasks changed 
> (calling context.requestTaskReconfiguration() - or something similar).
> Discussed this issue a little on dev@kafka.apache.org in the thread "Kafka 
> Connect: To much restarting with a SourceConnector with dynamic set of tasks"
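The incremental behavior requested above (start only new tasks, stop only removed ones, leave the rest running) can be sketched independently of the Connect API as a diff between the old and new task sets. The class and helper below are hypothetical illustrations, not part of Kafka Connect:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of incremental task reconciliation: given the old and
// new task sets, compute only what must be started or stopped, instead of
// restarting the connector and every existing task.
public class TaskDiff {
    public final Set<String> toStart = new HashSet<>();
    public final Set<String> toStop = new HashSet<>();

    public static TaskDiff of(Set<String> oldTasks, Set<String> newTasks) {
        TaskDiff d = new TaskDiff();
        d.toStart.addAll(newTasks);
        d.toStart.removeAll(oldTasks);   // tasks present only in the new set
        d.toStop.addAll(oldTasks);
        d.toStop.removeAll(newTasks);    // tasks present only in the old set
        return d;
    }

    public static void main(String[] args) {
        // 10 running tasks, one added: only task-11 should need a start.
        Set<String> oldTasks = new HashSet<>();
        for (int i = 1; i <= 10; i++) oldTasks.add("task-" + i);
        Set<String> newTasks = new HashSet<>(oldTasks);
        newTasks.add("task-11");

        TaskDiff d = TaskDiff.of(oldTasks, newTasks);
        System.out.println("start=" + d.toStart + " stop=" + d.toStop);
    }
}
```

In the scenario from the description (10 tasks plus an 11th), such a diff would start one task and stop none, rather than cycling all 11.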



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3642

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8256; Replace Heartbeat request/response with automated protocol

[ismael] KAFKA-8376; Least loaded node should consider connections which are

--
[...truncated 1.45 MB...]
kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[4] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[4] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[4] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[4] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[4] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0[4] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleanerWithMessageFormatV0[4] PASSED

kafka.log.TimeIndexTest > testTruncate STARTED

kafka.log.TimeIndexTest > testTruncate PASSED

kafka.log.TimeIndexTest > testEntry STARTED

kafka.log.TimeIndexTest > testEntry PASSED

kafka.log.TimeIndexTest > testAppend STARTED

kafka.log.TimeIndexTest > testAppend PASSED

kafka.log.TimeIndexTest > testEntryOverflow STARTED

kafka.log.TimeIndexTest > testEntryOverflow PASSED

kafka.log.TimeIndexTest > testLookUp STARTED

kafka.log.TimeIndexTest > testLookUp PASSED

kafka.log.TimeIndexTest > testSanityCheck STARTED

kafka.log.TimeIndexTest > testSanityCheck PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] PASSED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord STARTED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch STARTED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
STARTED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
PASSED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile PASSED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire STARTED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire PASSED

kafka.log.ProducerStateManagerTest > testBasicIdMapping STARTED

kafka.log.ProducerStateManagerTest > testBasicIdMapping PASSED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState STARTED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState PASSED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshot STARTED

kafka.log.ProducerStateManagerTest > 

[jira] [Resolved] (KAFKA-3522) Consider adding version information into rocksDB storage format

2019-05-16 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-3522.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: architecture
> Fix For: 2.3.0
>
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path in the future when we change the data format, and 
> hence having some version info stored along with the storage file / directory 
> would be useful.
> And this information could be even out of the storage file; for example, we 
> can just use a small "version indicator" file in the rocksdb directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]
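The small "version indicator" file suggested above can be sketched with plain Java I/O. The file name and plain-integer format here are illustrative assumptions, not the format Kafka Streams actually adopted:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: a small version marker file kept inside the RocksDB
// state directory, so future format changes can be detected on startup.
public class StateVersionFile {
    private static final String NAME = "rocksdb.version"; // hypothetical name

    public static void write(Path stateDir, int version) throws IOException {
        Files.createDirectories(stateDir);
        Files.write(stateDir.resolve(NAME),
                Integer.toString(version).getBytes(StandardCharsets.UTF_8));
    }

    // Returns the stored version, or -1 if no marker exists
    // (i.e. data written before versioning was introduced).
    public static int read(Path stateDir) throws IOException {
        Path f = stateDir.resolve(NAME);
        if (!Files.exists(f)) return -1;
        String s = new String(Files.readAllBytes(f), StandardCharsets.UTF_8);
        return Integer.parseInt(s.trim());
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("state-store");
        write(dir, 2);
        System.out.println("stored version = " + read(dir));
    }
}
```

On startup, a store could compare the marker against the current format version and trigger migration (or refuse to open) on a mismatch; a missing marker signals pre-versioned data.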



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Question about integrating kafka broker with a service

2019-05-16 Thread Zhou, Thomas
Hi,

I am a Kafka user, and I have a question about how to integrate our service 
with Kafka. Basically, we want to enable TLS on Kafka, including mutual 
authentication using an SSL context. We already have a service that signs the 
certificates and manages the keys. Our goal is to have both the Kafka broker 
side and the client side integrate with this service, so people will not need 
to worry about rotating keys and other maintenance. I know that Kafka's design 
is not pluggable in this respect, but can I get some advice on how difficult 
it would be to make Kafka pluggable with a service like the one I described?
I would really appreciate any advice you could give.


Thanks & Regards,
Thomas
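For reference, broker-side TLS with mutual authentication is configured through standard `server.properties` settings such as the following (hostnames, paths, and passwords are placeholders). The pluggable part of the question concerns how these keystores get created and rotated, which Kafka itself does not manage:

```
# server.properties: enable a TLS listener with client-certificate auth
listeners=SSL://broker1.example.com:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.broker.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
ssl.truststore.location=/var/private/ssl/kafka.broker.truststore.jks
ssl.truststore.password=<truststore-password>
# require clients to present a certificate (mutual TLS)
ssl.client.auth=required
```

One common approach that avoids patching Kafka is to have the external service write refreshed keystore/truststore files to these paths; since Kafka 2.0, brokers can reload keystores via a dynamic config update rather than a restart.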


Build failed in Jenkins: kafka-trunk-jdk11 #532

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-8376; Least loaded node should consider connections which are

[rhauch] Add '?expand' query param for additional info on '/connectors'. (#6658)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H35 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5a95c2e1cd555d5f3ec148cc7c765d1bb7d716f9 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5a95c2e1cd555d5f3ec148cc7c765d1bb7d716f9
Commit message: "Add '?expand' query param for additional info on 
'/connectors'. (#6658)"
 > git rev-list --no-walk 855f899bb523f3b334f711926a7db4cc75ebb4b4 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins5280490211560870061.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins5280490211560870061.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Build failed in Jenkins: kafka-trunk-jdk8 #3641

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8220; Avoid kicking out static group members through rebalance

[jason] MINOR: Added missing method parameter to `performAssignment` javadoc

[bbejeck] KAFKA-8347: Choose next record to process by timestamp (#6719)

[github] KAFAK-3522: Add TopologyTestDriver unit tests (#6179)

--
[...truncated 2.44 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

[jira] [Created] (KAFKA-8377) KTable#transformValue might lead to incorrect result in joins

2019-05-16 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8377:
--

 Summary: KTable#transformValue might lead to incorrect result in 
joins
 Key: KAFKA-8377
 URL: https://issues.apache.org/jira/browse/KAFKA-8377
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.0.0
Reporter: Matthias J. Sax


Kafka Streams uses an optimization to not materialize every result KTable. If a 
non-materialized KTable is input to a join, a lookup into the table results 
in a lookup into the parent table plus a call to the operator. For example,
{code:java}
KTable<K, V> nonMaterialized = materializedTable.filter(...);
KTable<K, V> table2 = ...

table2.join(nonMaterialized,...){code}
If there is a table2 input record, the lookup to the other side is performed as 
a lookup into materializedTable plus applying the filter().

For stateless operations like filter(), this is safe. However, #transformValues() 
might have an attached state store. Hence, when an input record r is processed 
by #transformValues() with current state S, it might produce an output record 
r' (that is not materialized). When the join later does a lookup to re-derive r' 
from the parent table, there is no guarantee that #transformValues() again produces 
r', because its state might no longer be the same.

Hence, it seems to be required to always materialize the result of a 
KTable#transformValues() operation if it has state. Note that if there were 
a consecutive filter() after transformValues(), it would also be OK to 
materialize the filter() result instead. Furthermore, if there is no downstream 
join(), materialization is not required either.

Basically, it seems to be unsafe to apply `KTableValueGetter` to a stateful 
`#transformValues()` operator.
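The non-repeatability described above can be shown with a self-contained simulation (plain Java, not Kafka code; all names are illustrative): a "value getter" that re-applies a stateful transform on every lookup can return different values for the same key.

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained simulation of why re-running a stateful transform on
// every lookup (as a KTableValueGetter would) is not repeatable: the
// same key can yield a different value on each lookup.
public class StatefulLookupDemo {
    // Upstream "materialized" parent table.
    static final Map<String, Integer> parent = new HashMap<>();
    // Mutable state attached to the transform (stand-in for a state store).
    static int invocations = 0;

    static {
        parent.put("k", 10);
    }

    // Stateful transform: its output depends on how often it has run.
    static int transform(int value) {
        invocations++;
        return value + invocations;
    }

    // Non-materialized lookup: re-applies the transform on every call.
    static int lookup(String key) {
        return transform(parent.get(key));
    }

    // Two consecutive lookups of the same key agree only if the
    // transform is stateless; here they never do.
    static boolean lookupIsRepeatable(String key) {
        return lookup(key) == lookup(key);
    }

    public static void main(String[] args) {
        System.out.println("repeatable? " + lookupIsRepeatable("k"));
    }
}
```

Materializing the transform's output once (as the JIRA proposes) sidesteps this, because later lookups read the stored value instead of re-running the transform.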



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #531

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8256; Replace Heartbeat request/response with automated protocol

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H28 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 855f899bb523f3b334f711926a7db4cc75ebb4b4 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 855f899bb523f3b334f711926a7db4cc75ebb4b4
Commit message: "KAFKA-8256; Replace Heartbeat request/response with automated 
protocol (#6691)"
 > git rev-list --no-walk 16b408898e75b00ddf6b607246833cdbcd56f507 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins7860153177079962577.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins7860153177079962577.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


[jira] [Resolved] (KAFKA-8256) Replace Heartbeat request/response with automated protocol

2019-05-16 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8256.

Resolution: Fixed

> Replace Heartbeat request/response with automated protocol
> --
>
> Key: KAFKA-8256
> URL: https://issues.apache.org/jira/browse/KAFKA-8256
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3640

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-3816: Add MDC logging to Connect runtime (#5743)

--
[...truncated 1.45 MB...]
kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] PASSED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord STARTED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch STARTED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
STARTED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
PASSED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile PASSED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire STARTED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire PASSED

kafka.log.ProducerStateManagerTest > testBasicIdMapping STARTED

kafka.log.ProducerStateManagerTest > testBasicIdMapping PASSED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState STARTED

kafka.log.ProducerStateManagerTest > updateProducerTransactionState PASSED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshot STARTED

kafka.log.ProducerStateManagerTest > testRecoverFromSnapshot PASSED

kafka.log.ProducerStateManagerTest > testPrepareUpdateDoesNotMutate STARTED

kafka.log.ProducerStateManagerTest > testPrepareUpdateDoesNotMutate PASSED

kafka.log.ProducerStateManagerTest > 
testSequenceNotValidatedForGroupMetadataTopic STARTED

kafka.log.ProducerStateManagerTest > 
testSequenceNotValidatedForGroupMetadataTopic PASSED

kafka.log.ProducerStateManagerTest > testLastStableOffsetCompletedTxn STARTED

kafka.log.ProducerStateManagerTest > testLastStableOffsetCompletedTxn PASSED

kafka.log.ProducerStateManagerTest > 
testLoadFromSnapshotRemovesNonRetainedProducers STARTED

kafka.log.ProducerStateManagerTest > 
testLoadFromSnapshotRemovesNonRetainedProducers PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset PASSED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached STARTED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload PASSED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch STARTED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch PASSED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica STARTED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica PASSED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile STARTED


[jira] [Resolved] (KAFKA-6852) Allow Log Levels to be dynamically configured

2019-05-16 Thread Yeva Byzek (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeva Byzek resolved KAFKA-6852.
---
Resolution: Duplicate

This is getting addressed in KAFKA-7800

> Allow Log Levels to be dynamically configured
> -
>
> Key: KAFKA-6852
> URL: https://issues.apache.org/jira/browse/KAFKA-6852
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.0.1
>Reporter: Yeva Byzek
>Priority: Major
>  Labels: supportability, usability
>
> For operational workflows like troubleshooting, it is useful to change log 
> levels to get more information.
> The two current ways to change log levels are painful:
> 1. Changing the logging level configuration requires restarting processes.  
> Challenge: service disruption.
> 2. It can also be done through a JMX MBean.  Challenge: for the most part, 
> customers don't have JMX exposed to the operators; it exists purely as a way 
> to get stats.
> This JIRA is to make logging level changes dynamic, without a service restart 
> and without the JMX interface.
> This needs to be done in all Kafka components.
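As an illustration of the runtime mechanism this JIRA asks for: the JDK's own `java.util.logging` already allows flipping a logger's level without a restart (Kafka itself uses log4j, whose `Logger.setLevel` behaves analogously; the logger name below is just an example).

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Demonstrates changing a log level at runtime, with no process restart:
// the same logger accepts FINE records only after its level is raised.
public class DynamicLogLevelDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("kafka.server.demo");

        logger.setLevel(Level.INFO);
        boolean fineBefore = logger.isLoggable(Level.FINE); // false

        // Turn up verbosity for troubleshooting, dynamically.
        logger.setLevel(Level.FINE);
        boolean fineAfter = logger.isLoggable(Level.FINE);  // true

        System.out.println(fineBefore + " -> " + fineAfter);
    }
}
```

The open question in the JIRA is not the mechanism but the exposure: offering this through a supported API/CLI rather than requiring JMX access.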



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-455: Create an Administrative API for Replica Reassignment

2019-05-16 Thread Colin McCabe
Hi George,

Yes, KIP-455 allows the reassignment of individual partitions to be cancelled.  
I think it's very important for these operations to be at the partition level.

best,
Colin
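
The per-partition cancellation semantics discussed in this thread can be sketched with a self-contained model (plain Java, not the actual AdminClient API, which was still a proposal at this point): submitting an empty target replica list for a partition cancels that partition's pending reassignment.

```java
import java.util.*;

// Illustrative model of the KIP-455 semantics under discussion:
// an empty target replica list for a topic-partition cancels its
// pending reassignment; a non-empty list starts/updates one.
public class ReassignmentDemo {
    static final Map<String, List<Integer>> pending = new HashMap<>();

    // Mirrors the proposed AlterPartitionReassignments behavior.
    static void alter(String topicPartition, List<Integer> targetReplicas) {
        if (targetReplicas.isEmpty()) {
            pending.remove(topicPartition);          // cancellation
        } else {
            pending.put(topicPartition, targetReplicas);
        }
    }

    // Mirrors the proposed ListPartitionReassignments: what is in flight.
    static Set<String> list() {
        return pending.keySet();
    }

    public static void main(String[] args) {
        alter("foo-0", Arrays.asList(1, 2, 3));   // start a reassignment
        alter("foo-1", Arrays.asList(4, 5, 6));
        alter("foo-0", Collections.emptyList());  // cancel only foo-0
        System.out.println(list());               // foo-1 remains pending
    }
}
```

This also shows why George's "cancel everything" workflow reduces to list-then-alter: list the pending set, then submit an empty replica list for each entry.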

On Tue, May 14, 2019, at 16:34, George Li wrote:
>  Hi Colin,
> 
> Thanks for the updated KIP.  It has very good improvements of Kafka 
> reassignment operations. 
> 
> One question: it looks like the KIP also includes the cancellation of 
> individual pending reassignments, when the 
> AlterPartitionReassignmentsRequest has empty replicas for the 
> topic/partition. Will you also be implementing the partition 
> cancellation/rollback in the PR?    If yes, it will make KIP-236 (it 
> has a PR already) trivial, since to cancel all pending reassignments, 
> one just needs to do a ListPartitionReassignmentsRequest, then submit 
> empty replicas for all those topic/partitions in 
> one AlterPartitionReassignmentsRequest. 
> 
> 
> Thanks,
> George
> 
> On Friday, May 10, 2019, 8:44:31 PM PDT, Colin McCabe 
>  wrote:  
>  
>  On Fri, May 10, 2019, at 17:34, Colin McCabe wrote:
> > On Fri, May 10, 2019, at 16:43, Jason Gustafson wrote:
> > > Hi Colin,
> > > 
> > > I think storing reassignment state at the partition level is the right 
> > > move
> > > and I also agree that replicas should understand that there is a
> > > reassignment in progress. This makes KIP-352 a trivial follow-up for
> > > example. The only doubt I have is whether the leader and isr znode is the
> > > right place to store the target reassignment. It is a bit odd to keep the
> > > target assignment in a separate place from the current assignment, right? 
> > > I
> > > assume the thinking is probably that although the current assignment 
> > > should
> > > probably be in the leader and isr znode as well, it is hard to move the
> > > state in a compatible way. Is that right? But if we have no plan to remove
> > > the assignment znode, do you see a downside to storing the target
> > > assignment there as well?
> > >
> > 
> > Hi Jason,
> > 
> > That's a good point -- it's probably better to keep the target 
> > assignment in the same znode as the current assignment, for 
> > consistency.  I'll change the KIP.
> 
> Hi Jason,
> 
> Thanks again for the review.
> 
> I took another look at this, and I think we should stick with the 
> initial proposal of putting the reassignment state into 
> /brokers/topics/[topic]/partitions/[partitionId]/state.  The reason is 
> because we'll want to bump the leader epoch for the partition when 
> changing the reassignment state, and the leader epoch resides in that 
> znode anyway.  I agree there is some inconsistency here, but so be it: 
> if we were to greenfield these zookeeper data structures, we might do 
> it differently, but the proposed scheme will work fine and be 
> extensible for the future.
> 
> > 
> > > A few additional questions:
> > > 
> > > 1. Should `alterPartitionReassignments` be `alterPartitionAssignments`?
> > > It's the current assignment we're altering, right?
> > 
> > That's fair.  AlterPartitionAssignments reads a little better, and I'll 
> > change it to that.
> 
> +1.  I've changed the RPC and API name in the wiki.
> 
> > 
> > > 2. Does this change affect the Metadata API? In other words, are clients
> > > aware of reassignments? If so, then we probably need a change to
> > > UpdateMetadata as well. The only alternative I can think of would be to
> > > represent the replica set in the Metadata request as the union of the
> > > current and target replicas, but I can't think of any benefit to hiding
> > > reassignments. Note that if we did this, we probably wouldn't need a
> > > separate API to list reassignments.
> > 
> > I thought about this a bit... and I think on balance, you're right.  We 
> > should keep this information together with the replica nodes, isr 
> > nodes, and offline replicas, and that information is available in the 
> > MetadataResponse. 
> >  However, I do think in order to do this, we'll need a flag in the 
> > MetadataRequest that specifies "only show me reassigning partitions".  
> > I'll add this.
> 
> I revisited this, and I think we should stick with the original 
> proposal of having a separate ListPartitionReassignments API.  There 
> really is no use case where the Producer or Consumer needs to know 
> about a reassignment.  They should just be notified when the set of 
> partitions changes, which doesn't require changes to 
> MetadataRequest/Response.  The Admin client only cares if someone is 
> managing the reassignment.  So adding this state to the 
> MetadataResponse adds overhead for no real benefit.  In the common case 
> where there is no ongoing reassignment, it would be 4 bytes per 
> partition of extra overhead in the MetadataResponse.
> 
> In general, I think we have a problem of oversharing in the 
> MetadataRequest/Response.  As we 10x or 100x the number of partitions 
> we support, we'll need to get stricter about giving clients only the 
> information 

Build failed in Jenkins: kafka-trunk-jdk11 #530

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[github] KAFAK-3522: Add TopologyTestDriver unit tests (#6179)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (ubuntu trusty) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 16b408898e75b00ddf6b607246833cdbcd56f507 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 16b408898e75b00ddf6b607246833cdbcd56f507
Commit message: "KAFAK-3522: Add TopologyTestDriver unit tests (#6179)"
 > git rev-list --no-walk 80784271043d2da8e93055a5f9b1bcfd53347461 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins4841445112212215317.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins4841445112212215317.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


Build failed in Jenkins: kafka-trunk-jdk11 #529

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8220; Avoid kicking out static group members through rebalance

[jason] MINOR: Added missing method parameter to `performAssignment` javadoc

[bbejeck] KAFKA-8347: Choose next record to process by timestamp (#6719)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (ubuntu trusty) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Timeout after 10 minutes
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress 
https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" 
returned status code 128:
stdout: 
stderr: remote: Enumerating objects: 35, done.
remote: Counting objects: 100% (35/35), done.
remote: Compressing objects: 100% (32/32), done.
Receiving objects:   0% (1/169787), 596.01 KiB | 525.00 KiB/s
Receiving objects:   1% 

[jira] [Resolved] (KAFKA-8220) Avoid kicking out members through rebalance timeout

2019-05-16 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8220.

Resolution: Fixed

> Avoid kicking out members through rebalance timeout
> ---
>
> Key: KAFKA-8220
> URL: https://issues.apache.org/jira/browse/KAFKA-8220
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> As stated in KIP-345, we will no longer evict unjoined members out of the 
> group. We need to take care the edge case when the leader fails to rejoin and 
> switch to a new leader in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #528

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-3816: Add MDC logging to Connect runtime (#5743)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H28 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision b395ef418237a37b95d6452f0111998f63c047db 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b395ef418237a37b95d6452f0111998f63c047db
Commit message: "KAFKA-3816: Add MDC logging to Connect runtime (#5743)"
 > git rev-list --no-walk 2327b3555849dd73d5695525548410e5257298e4 # timeout=10
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: No tool found matching GRADLE_4_10_2_HOME
[kafka-trunk-jdk11] $ /bin/bash -xe /tmp/jenkins5381425174655374623.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins5381425174655374623.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: No tool found matching GRADLE_4_10_2_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_10_2_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user csh...@gmail.com
Not sending mail to unregistered user wangg...@gmail.com


[jira] [Created] (KAFKA-8376) Flaky test ClientAuthenticationFailureTest.testTransactionalProducerWithInvalidCredentials test.

2019-05-16 Thread Manikumar (JIRA)
Manikumar created KAFKA-8376:


 Summary: Flaky test 
ClientAuthenticationFailureTest.testTransactionalProducerWithInvalidCredentials 
test.
 Key: KAFKA-8376
 URL: https://issues.apache.org/jira/browse/KAFKA-8376
 Project: Kafka
  Issue Type: Bug
Reporter: Manikumar
 Fix For: 2.3.0
 Attachments: t1.txt

The test hangs and the test run does not complete. I think the flakiness is 
due to timing issues and https://github.com/apache/kafka/pull/5971

Attaching the thread dump.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-458: Connector Client Config Override Policy

2019-05-16 Thread Magesh Nandakumar
While implementing the KIP, it became evident that for the validate call
from the Connect REST interface, each config had to be passed
individually to the policy implementation to determine which
overrides are allowed. This happened because the interface relied on an
exception (PolicyViolationException) as the mechanism to indicate an error
in the configuration, which defeats the purpose of an interface that
allows all overridden configurations to be passed together. To overcome
this limitation, I have updated the KIP (specifically the interface) to
return a list of ConfigValue (org.apache.kafka.common.config.ConfigValue)
instead of throwing an exception. This pattern is already followed in the
validate() of the Connector class. This change doesn't alter the semantics
or behavior of the out-of-the-box policies, or how policies are configured
or enforced. Let me know if you have any questions or comments on the
change.

Apart from the above interface change, I have also made another minor
fix to the list of configs allowed by the `Principal` policy: I have
included `security.protocol` and `sasl.mechanism`, which had previously
been missed.
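
The return-a-list-instead-of-throwing shape can be sketched in plain Java (the `ConfigValue` class and policy below are illustrative stand-ins, not Connect's actual interfaces): every override gets a per-config result, so one bad override no longer aborts validation of the rest.

```java
import java.util.*;

// Illustrative sketch of the revised policy shape: instead of throwing
// PolicyViolationException on the first disallowed override, validate()
// returns one result per config, mirroring the spirit of
// org.apache.kafka.common.config.ConfigValue. Names are stand-ins.
public class OverridePolicyDemo {
    // Minimal stand-in for ConfigValue: a config name plus error messages.
    static final class ConfigValue {
        final String name;
        final List<String> errors = new ArrayList<>();
        ConfigValue(String name) { this.name = name; }
    }

    // A "Principal"-style allow-list (illustrative subset).
    static final Set<String> ALLOWED = new HashSet<>(Arrays.asList(
        "sasl.jaas.config", "security.protocol", "sasl.mechanism"));

    // Validate all overrides together, collecting errors per config.
    static List<ConfigValue> validate(Map<String, String> overrides) {
        List<ConfigValue> results = new ArrayList<>();
        for (String name : overrides.keySet()) {
            ConfigValue cv = new ConfigValue(name);
            if (!ALLOWED.contains(name)) {
                cv.errors.add("Override not allowed by policy: " + name);
            }
            results.add(cv);
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, String> overrides = new LinkedHashMap<>();
        overrides.put("security.protocol", "SASL_SSL");
        overrides.put("acks", "all"); // not allowed by this policy
        for (ConfigValue cv : validate(overrides)) {
            System.out.println(cv.name + " errors=" + cv.errors);
        }
    }
}
```

This is also why the REST validate endpoint benefits: it can report every disallowed override in one response rather than failing on the first.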

On Fri, May 10, 2019 at 9:23 AM Magesh Nandakumar 
wrote:

> Thanks a lot, Colin.  This KIP has now passed voting with 3 binding votes
> ( Randall, Rajini & Colin) and 1 non-binding vote (Chris).
> Thanks a lot, everyone for the feedback & discussion on this KIP.
>
> On Fri, May 10, 2019 at 9:12 AM Colin McCabe  wrote:
>
>> +1 (binding).  Thanks, Magesh.
>>
>> cheers,
>> Colin
>>
>> On Thu, May 9, 2019, at 18:31, Randall Hauch wrote:
>> > I'm still +1 and like the simplification.
>> >
>> > Randall
>> >
>> > On Thu, May 9, 2019 at 5:54 PM Magesh Nandakumar 
>> > wrote:
>> >
>> > > I have updated the KIP to remove the `Ignore` policy and also the
>> > > useOverrides()
>> > > method in the interface.
>> > > Thanks a lot for your thoughts, Colin. I believe this certainly
>> simplifies
>> > > the KIP.
>> > >
>> > > On Thu, May 9, 2019 at 3:44 PM Magesh Nandakumar <
>> mage...@confluent.io>
>> > > wrote:
>> > >
>> > > > Unless anyone has objections, I'm going to update the KIP to remove
>> the
>> > > > `Ignore` policy and make `None` as the default. I will also remove
>> the `
>> > > > default boolean useOverrides()` in the interface which was
>> introduced for
>> > > > the purpose of backward compatibility.
>> > > >
>> > > > On Thu, May 9, 2019 at 3:27 PM Randall Hauch 
>> wrote:
>> > > >
>> > > >> I have also seen users include in connector configs the
>> `producer.*` and
>> > > >> `consumer.*` properties that should go into the worker configs. But
>> > > those
>> > > >> don't match, and the likelihood that someone is already using
>> > > >> `producer.override.*` or `consumer.override.*` properties in their
>> > > >> connector configs does seem pretty tiny.
>> > > >>
>> > > >> I'd be fine with removing the `Ignore` for backward compatibility.
>> Still
>> > > >> +1
>> > > >> either way.
>> > > >>
>> > > >> On Thu, May 9, 2019 at 5:23 PM Magesh Nandakumar <
>> mage...@confluent.io>
>> > > >> wrote:
>> > > >>
>> > > >> > To add more details regarding the backward compatibility; I have
>> > > >> generally
>> > > >> > seen users trying to set "producer.request.timeout.ms
>> > > >> > " in their
>> connector
>> > > >> config
>> > > >> > under the assumption that it will get used and would never come
>> back
>> > > to
>> > > >> > remove it. The initial intent of the KIP was to use the same
>> prefix
>> > > but
>> > > >> > since that potentially collided with MM2 configs, we agreed to
>> use a
>> > > >> > different prefix "producer.override". With this context, I think
>> the
>> > > >> > likelihood of someone using this is very small and should
>> generally
>> > > not
>> > > >> be
>> > > >> > a problem.
>> > > >> >
>> > > >> > On Thu, May 9, 2019 at 3:15 PM Magesh Nandakumar <
>> > > mage...@confluent.io>
>> > > >> > wrote:
>> > > >> >
>> > > >> > > Colin,
>> > > >> > >
>> > > >> > > Thanks a lot for the feedback.  As you said, the possibilities
>> of
>> > > >> someone
>> > > >> > > having "producer.override.request.timeout.ms" in their
>> connector
>> > > >> config
>> > > >> > > in AK 2.2 or lower is very slim. But the key thing is if in
>> case,
>> > > >> someone
>> > > >> > > has it AK2.2 doesn't do anything with it and it silently
>> ignores the
>> > > >> > > configuration. If others think that it's not really a problem,
>> then
>> > > >> I'm
>> > > >> > > fine with removing the complicated compatibility issue.
>> > > >> > >
>> > > >> > > I have explicitly called out the behavior when the exception is
>> > > >> thrown.
>> > > >> > >
>> > > >> > > Let me know what you think.
>> > > >> > >
>> > > >> > > Thanks,
>> > > >> > > Magesh
>> > > >> > >
>> > > >> > > On Thu, May 9, 2019 at 2:45 PM Colin McCabe <
>> cmcc...@apache.org>
>> > > >> wrote:
>> > > >> > >
>> > > >> > >> Hi Magesh,
>> > > >> > >>
>> > > >> 

Build failed in Jenkins: kafka-2.2-jdk8 #119

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Enable console logs in Connect tests (#6745)

--
[...truncated 2.71 MB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED


[jira] [Created] (KAFKA-8375) Offset jumps back after commit

2019-05-16 Thread Markus Dybeck (JIRA)
Markus Dybeck created KAFKA-8375:


 Summary: Offset jumps back after commit
 Key: KAFKA-8375
 URL: https://issues.apache.org/jira/browse/KAFKA-8375
 Project: Kafka
  Issue Type: Bug
  Components: offset manager
Affects Versions: 1.1.1
Reporter: Markus Dybeck
 Attachments: Skärmavbild 2019-05-16 kl. 08.41.53.png

*Setup*

Kafka: 1.1.1
Kafka-client: 1.1.1
Zookeeper: 3.4.11
Akka streams: 0.20

*Topic config*

DELETE_RETENTION_MS_CONFIG: "5000"
CLEANUP_POLICY_CONFIG: "compact,delete"
RETENTION_BYTES_CONFIG: 2000L
RETENTION_MS_CONFIG: 3600
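The constants above are from Kafka's `TopicConfig` class; expressed as the actual topic-level configuration keys they map to, the reported setup is:

```properties
# Topic-level config keys corresponding to the TopicConfig constants above
delete.retention.ms=5000        # DELETE_RETENTION_MS_CONFIG
cleanup.policy=compact,delete   # CLEANUP_POLICY_CONFIG
retention.bytes=2000            # RETENTION_BYTES_CONFIG
retention.ms=3600               # RETENTION_MS_CONFIG
```

Note that `retention.ms` is in milliseconds, so 3600 here is 3.6 seconds; if one hour was intended, the value would be 3600000.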


*Behavior*
We have 7 consumers consuming from 7 partitions, and the lag for some of the 
consumers randomly jumped back. No new messages were pushed to the topic during 
that time. We didn't see any strange logs at the time, and the brokers did not 
restart either.

Either way, even if a restart or rebalance were going on, we cannot understand 
why the offset would jump back after it was committed.

We observed it both in logs and by watching lag metrics. Our logs showed that 
around 30-35 seconds after we committed an offset, we consumed an earlier 
committed message again, and then the loop began. The behavior was the same 
after restarting all the consumers, and after a while it stopped all by itself.


We have no clue how to proceed, or whether this might be an issue with Akka. 
Is there any known issue that might cause this?
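[Editorial illustration] Consumer lag as described in this report is simply the gap between a partition's log end offset and the group's committed offset, so a committed offset moving backwards appears as a sudden lag increase on a graph. A minimal sketch of that arithmetic (plain Python, with hypothetical offset values):

```python
def consumer_lag(log_end_offset: int, committed_offset: int) -> int:
    """Lag for one partition: messages appended to the log but not yet
    covered by the consumer group's committed offset. A committed offset
    that jumps backwards shows up as a sudden *increase* in lag."""
    return log_end_offset - committed_offset

# Hypothetical sequence around a backwards offset jump (no new messages):
end = 1_000
lag_before = consumer_lag(end, 990)  # normal operation
lag_after = consumer_lag(end, 940)   # after the committed offset jumps back
print(lag_before, lag_after)         # lag grows even though end is unchanged
```

This matches the reported symptom: the log end offset was static (no new messages), yet the lag metric rose, which can only come from the committed offset itself moving backwards.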

Attaching a screen dump with metrics showing the lag for one partition.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.0-jdk8 #268

2019-05-16 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-8320 : fix retriable exception package for source connectors

[rhauch] MINOR: Enable console logs in Connect tests (#6745)

--
[...truncated 894.38 KB...]

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates PASSED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson PASSED

kafka.zk.ReassignPartitionsZNodeTest > testEncode STARTED

kafka.zk.ReassignPartitionsZNodeTest > testEncode PASSED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric PASSED

kafka.zookeeper.ZooKeeperClientTest > testExceptionInBeforeInitializingSession 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testExceptionInBeforeInitializingSession 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED