[jira] [Created] (KAFKA-12164) Issue when kafka connect worker pod restarts during creation of nested partition directories in the HDFS file system.

2021-01-07 Thread kaushik srinivas (Jira)
kaushik srinivas created KAFKA-12164:


 Summary: Issue when kafka connect worker pod restarts during 
creation of nested partition directories in the HDFS file system.
 Key: KAFKA-12164
 URL: https://issues.apache.org/jira/browse/KAFKA-12164
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: kaushik srinivas


In our production labs, we observed the following issue. The sequence of events:
 # The hdfs connector is added to the connect worker.
 # The hdfs connector creates folders in hdfs such as /test1=1/test2=2/, based 
on a custom partitioner. Here test1 and test2 are two separate nested 
directories derived from multiple fields in the record by the custom 
partitioner.
 # The kafka connect hdfs connector uses the call below to create the 
directories in the hdfs file system:
fs.mkdirs(new Path(filename));
ref: 
[https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/storage/HdfsStorage.java]

The important thing to note is that if mkdirs() is a non-atomic operation 
(i.e. it can result in partial execution if interrupted), then suppose the 
first directory, test1, has been created and the connect worker pod restarts 
just before test2 is created in hdfs. The hdfs file system is then left with 
partially created partition folders for the duration of the restart.

So we might end up with conditions in hdfs such as:
/test1=0/test2=0/
/test1=1/
/test1=2/test2=2
/test1=3/test2=3

So the second partition has a missing directory. And if hive integration is 
enabled, hive metastore exceptions will occur, since partitions expected by 
the hive table are missing in hdfs.

*This can happen to any connector that has an ongoing non-atomic operation 
when a restart is triggered on the kafka connect worker pod. It results in 
partially completed state in the system and may prevent the connector from 
continuing its operation*.

*This is a very critical issue and needs some attention on ways of handling 
it.*
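
Since FileSystem.mkdirs() behaves like mkdir -p (it creates any missing 
parents and succeeds if the path already exists), one possible mitigation is 
to re-invoke it for the full partition path on (re)start and verify the 
result before any writer uses the directory. A minimal sketch (not the 
connector's actual code) using the Hadoop FileSystem API:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PartitionDirs {
    /**
     * Idempotently (re)create the full nested partition path and verify it.
     * Re-running this after a partial failure simply creates whatever
     * pieces of the path are still missing.
     */
    public static void ensurePartitionDir(FileSystem fs, String partitionPath) throws IOException {
        Path path = new Path(partitionPath); // e.g. "/topics/test1=1/test2=2"
        if (!fs.mkdirs(path)) {
            throw new IOException("Failed to create " + path);
        }
        // Verify the directory really exists before handing it to a writer.
        if (!fs.getFileStatus(path).isDirectory()) {
            throw new IOException(path + " exists but is not a directory");
        }
    }
}
{code}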



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #382

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9566: Improve DeserializationExceptionHandler JavaDocs (#9837)

[github] MINOR: revise error message from TransactionalRequestResult#await 
(#9843)


--
[...truncated 3.51 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@62a7729d, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@62a7729d, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1bd011d6, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1bd011d6, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@56116ced, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@56116ced, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5f9f6d25, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5f9f6d25, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord 

[jira] [Created] (KAFKA-12163) Controller should ensure zkVersion is monotonically increasing when sending UpdateMetadata requests.

2021-01-07 Thread Badai Aqrandista (Jira)
Badai Aqrandista created KAFKA-12163:


 Summary: Controller should ensure zkVersion is monotonically 
increasing when sending UpdateMetadata requests.
 Key: KAFKA-12163
 URL: https://issues.apache.org/jira/browse/KAFKA-12163
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Badai Aqrandista


When sending UpdateMetadata requests, the controller does not currently 
perform any check to ensure zkVersion is monotonically increasing. If 
Zookeeper gets into a bad state, this can cause the Kafka cluster to get into 
a bad state as well, and possibly data loss.

 

The controller should perform a check to protect Kafka clusters from getting 
into a bad state.
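
For illustration, a minimal sketch of such a guard (hypothetical, not the 
controller's actual code): remember the last zkVersion sent per partition and 
refuse to send one that goes backward.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ZkVersionGuard {
    // Last zkVersion included in an UpdateMetadata request, per partition.
    private final Map<String, Integer> lastSent = new ConcurrentHashMap<>();

    /** Returns true if the update may be sent; false if zkVersion went backward. */
    public boolean checkAndRecord(String topicPartition, int zkVersion) {
        Integer prev = lastSent.get(topicPartition);
        if (prev != null && zkVersion < prev) {
            return false; // stale metadata: zkVersion must not decrease
        }
        lastSent.put(topicPartition, zkVersion);
        return true;
    }
}
{code}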

 

The following shows an example of zkVersion going backward at 2020-12-08 
14:10:46,420.

  

 
{noformat}
[2020-11-23 00:56:20,315] TRACE [Controller id=1153 epoch=196] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=195, leader=2152, leaderEpoch=210, isr=[2154, 
2152, 1153, 1152], zkVersion=535, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[]) to brokers Set(2153, 1152, 2154, 2151, 1153, 2152, 1154) 
for partition Execution_CustomsStatus-6 (state.change.logger)
[2020-11-23 01:15:28,449] TRACE [Controller id=1153 epoch=196] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=195, leader=2152, leaderEpoch=210, isr=[2154, 
2152, 1153, 1152], zkVersion=535, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[]) to brokers Set(1151) for partition 
Execution_CustomsStatus-6 (state.change.logger)
[2020-11-24 00:15:17,042] TRACE [Controller id=1153 epoch=196] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=196, leader=2152, leaderEpoch=211, isr=[2154, 
2152, 1152], zkVersion=536, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[1153]) to brokers Set(2153, 1152, 2154, 2151, 1153, 2152, 
1154, 1151) for partition Execution_CustomsStatus-6 (state.change.logger)
[2020-12-06 21:53:14,887] TRACE [Controller id=1152 epoch=197] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=197, leader=2154, leaderEpoch=212, isr=[2154, 
1152, 1153], zkVersion=538, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[2152]) to brokers Set(2153, 1152, 2154, 2151, 1153, 1154, 
1151) for partition Execution_CustomsStatus-6 (state.change.logger)
[2020-12-06 22:11:43,739] TRACE [Controller id=1152 epoch=197] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=197, leader=2154, leaderEpoch=212, isr=[2154, 
1152, 1153], zkVersion=538, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[]) to brokers Set(2152) for partition 
Execution_CustomsStatus-6 (state.change.logger)
[2020-12-06 22:11:43,815] TRACE [Controller id=1152 epoch=197] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=197, leader=2154, leaderEpoch=212, isr=[2154, 
1152, 1153], zkVersion=538, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[]) to brokers Set(2153, 1152, 2154, 2151, 1153, 2152, 1154, 
1151) for partition Execution_CustomsStatus-6 (state.change.logger)
[2020-12-06 22:12:12,602] TRACE [Controller id=1152 epoch=197] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=197, leader=2154, leaderEpoch=212, isr=[2154, 
1152, 1153, 2152], zkVersion=539, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[]) to brokers Set(2153, 1152, 2154, 2151, 1153, 2152, 1154, 
1151) for partition Execution_CustomsStatus-6 (state.change.logger)
[2020-12-06 22:12:17,019] TRACE [Controller id=1152 epoch=197] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=197, leader=2152, leaderEpoch=213, isr=[2154, 
1152, 1153, 2152], zkVersion=540, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[]) to brokers Set(2153, 1152, 2154, 2151, 1153, 2152, 1154, 
1151) for partition Execution_CustomsStatus-6 (state.change.logger)
[2020-12-07 00:08:46,077] TRACE [Controller id=1152 epoch=197] Sending 
UpdateMetadata request 
UpdateMetadataPartitionState(topicName='Execution_CustomsStatus', 
partitionIndex=6, controllerEpoch=197, leader=2152, leaderEpoch=214, isr=[1152, 
1153, 2152], zkVersion=541, replicas=[2152, 2154, 1152, 1153], 
offlineReplicas=[2154]) to brokers Set(2153, 1152, 2154, 2151, 1153, 2152, 
1154, 1151) for partition Execution_CustomsStatus-6 (state.change.logger)
[2020-12-07 00:08:54,790] TRACE [Controller id=1152 epoch=197] Sending 

[jira] [Created] (KAFKA-12162) Kafka broker continued to run after failing to create "/brokers/ids/X" znode.

2021-01-07 Thread Badai Aqrandista (Jira)
Badai Aqrandista created KAFKA-12162:


 Summary: Kafka broker continued to run after failing to create 
"/brokers/ids/X" znode.
 Key: KAFKA-12162
 URL: https://issues.apache.org/jira/browse/KAFKA-12162
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Badai Aqrandista


We found that a Kafka broker continued to run after it failed to create its 
"/brokers/ids/X" znode, and it still acted as a partition leader. Here's the 
log snippet.

 

 
{code:java}
[2020-12-08 14:10:25,040] INFO Client successfully logged in. 
(org.apache.zookeeper.Login)
[2020-12-08 14:10:25,040] INFO Client will use DIGEST-MD5 as SASL mechanism. 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2020-12-08 14:10:25,040] INFO Opening socket connection to server 
kafkaprzk4.example.com/0.0.0.0:2181. Will attempt to SASL-authenticate using 
Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,045] INFO Socket connection established, initiating 
session, client: /0.0.0.0:36056, server: kafkaprzk4.example.com/0.0.0.0:2181 
(org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,054] WARN Unable to reconnect to ZooKeeper service, 
session 0x5002ed5001b has expired (org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,054] INFO Unable to reconnect to ZooKeeper service, 
session 0x5002ed5001b has expired, closing socket connection 
(org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,055] INFO EventThread shut down for session: 
0x5002ed5001b (org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,055] INFO [ZooKeeperClient Kafka server] Session expired. 
(kafka.zookeeper.ZooKeeperClient)
[2020-12-08 14:10:25,059] INFO [ZooKeeperClient Kafka server] Initializing a 
new session to 
kafkaprzk1.example.com:2181,kafkaprzk2.example.com:2181,kafkaprzk3.example.com:2181,kafkaprzk4.example.com:2181,kafkaprzk5.example.com:2181/kafkaprod/01.
 (kafka.zookeeper.ZooKeeperClient)
[2020-12-08 14:10:25,059] INFO Initiating client connection, 
connectString=kafkaprzk1.example.com:2181,kafkaprzk2.example.com:2181,kafkaprzk3.example.com:2181,kafkaprzk4.example.com:2181,kafkaprzk5.example.com:2181/kafkaprod/01
 sessionTimeout=22500 
watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1b11171f 
(org.apache.zookeeper.ZooKeeper)
[2020-12-08 14:10:25,060] INFO jute.maxbuffer value is 4194304 Bytes 
(org.apache.zookeeper.ClientCnxnSocket)
[2020-12-08 14:10:25,060] INFO zookeeper.request.timeout value is 0. feature 
enabled= (org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,061] INFO Client successfully logged in. 
(org.apache.zookeeper.Login)
[2020-12-08 14:10:25,061] INFO Client will use DIGEST-MD5 as SASL mechanism. 
(org.apache.zookeeper.client.ZooKeeperSaslClient)
[2020-12-08 14:10:25,061] INFO Opening socket connection to server 
kafkaprzk4.example.com/0.0.0.0:2181. Will attempt to SASL-authenticate using 
Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,065] INFO Socket connection established, initiating 
session, client: /0.0.0.0:36058, server: kafkaprzk4.example.com/0.0.0.0:2181 
(org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,070] INFO Creating /brokers/ids/2152 (is it secure? true) 
(kafka.zk.KafkaZkClient)
[2020-12-08 14:10:25,081] INFO Session establishment complete on server 
kafkaprzk4.example.com/0.0.0.0:2181, sessionid = 0x400645dad9d0001, negotiated 
timeout = 22500 (org.apache.zookeeper.ClientCnxn)
[2020-12-08 14:10:25,112] ERROR Error while creating ephemeral at 
/brokers/ids/2152, node already exists and owner '288459910057232437' does not 
match current session '288340729659195393' 
(kafka.zk.KafkaZkClient$CheckedEphemeral)
[2020-12-08 14:10:29,814] WARN [Producer clientId=confluent-metrics-reporter] 
Got error produce response with correlation id 520478 on topic-partition 
_confluent-metrics-9, retrying (9 attempts left). Error: 
NOT_LEADER_FOR_PARTITION (org.apache.kafka.clients.producer.internals.Sender)
[2020-12-08 14:10:29,814] WARN [Producer clientId=confluent-metrics-reporter] 
Received invalid metadata error in produce request on partition 
_confluent-metrics-9 due to 
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.. Going to request metadata update now 
(org.apache.kafka.clients.producer.internals.Sender)
[2020-12-08 14:10:29,818] WARN [Producer clientId=confluent-metrics-reporter] 
Got error produce response with correlation id 520480 on topic-partition 
_confluent-metrics-4, retrying (9 attempts left). Error: 
NOT_LEADER_FOR_PARTITION (org.apache.kafka.clients.producer.internals.Sender)
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #336

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9566: Improve DeserializationExceptionHandler JavaDocs (#9837)


--
[...truncated 6.97 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7d7310b0, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7d7310b0, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@430b426b, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@430b426b, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED


Re: How does a consumer know the given partition is removed?

2021-01-07 Thread Luke Chen
Hi Bruno,
Thanks for the update!

Hi Boyuan,
The listTopics() method will *always* do a remote call, which will certainly
have a performance impact.
The partitionsFor() method will *check the cache first*; only if the topic is
not found in the cache does it do a remote call to retrieve the topic
partition info.

So I think partitionsFor() should be the better option for you.
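
For illustration, a minimal sketch (hypothetical, not from the Kafka docs) of
using partitionsFor() to detect that a manually assigned partition has
disappeared:

import java.time.Duration;
import java.util.List;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class PartitionWatcher {
    /**
     * Returns true if the assigned partition still exists. partitionsFor()
     * consults cached metadata first and only goes remote on a miss, so
     * this is comparatively cheap on a hot path.
     */
    public static boolean partitionStillExists(KafkaConsumer<?, ?> consumer, TopicPartition tp) {
        List<PartitionInfo> infos = consumer.partitionsFor(tp.topic(), Duration.ofSeconds(5));
        if (infos == null || infos.isEmpty()) {
            return false; // topic itself is gone (or its metadata is unavailable)
        }
        return infos.stream().anyMatch(info -> info.partition() == tp.partition());
    }
}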

Thanks.
Luke


On Fri, Jan 8, 2021 at 2:38 AM Boyuan Zhang  wrote:

> Thanks, folks!
>
> It seems like partitionsFor() and listTopics() are what I want. Do we have
> performance estimates for these 2 API calls, e.g., the time cost of waiting
> for responses? I would invoke these APIs along a hot path, so I want to have
> a general idea of how bad it could be.
>
> Many thanks to your help!
>
> On Thu, Jan 7, 2021 at 1:44 AM Bruno Cadonna  wrote:
>
> > Hi Luke,
> >
> > I am afraid the ConsumerRebalanceListener will not work in this case
> > since Boyuan assigns the partitions manually. The Java docs you linked
> > state
> >
> > If the consumer directly assigns partitions, those partitions will never
> > be reassigned and this callback is not applicable.
> >
> >
> > Hi Boyuan,
> >
> > The consumer has methods partitionsFor() and listTopics(). Probably
> > there is a better way to get the information you want that I am not
> > aware of.
> >
> > Best,
> > Bruno
> >
> > On 07.01.21 05:09, Luke Chen wrote:
> > > Hi Boyuan,
> > > You can create a *ConsumerRebalanceListener* and do something you want
> > when
> > > *onPartitionsRevoked. *
> > > Please check this java doc for more information:
> > >
> >
> https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.html
> > >
> > > Thanks.
> > > Luke
> > >
> > > On Thu, Jan 7, 2021 at 8:45 AM Boyuan Zhang 
> wrote:
> > >
> > >> Hi team,
> > >>
> > >> I'm working on a long run application, which uses the Kafka Consumer
> > API to
> > >> poll messages from a given topic and partition. I'm assigning the
> topic
> > and
> > >> partition manually by using consumer.assign() API and polling messages
> > by
> > >> using consumer.poll().
> > >>
> > >> One common scenario for my application is that certain partitions
> could
> > be
> > >> removed outside of my application and my application needs to know one
> > >> partition has been removed to stop processing that partition. My
> > question
> > >> is that is there any way to get the removal information when I do
> > >> consumer.assign() or consumer.poll() or any APIs that I can use?
> > >>
> > >> Thanks for your help!
> > >>
> > >
> >
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #381

2021-01-07 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12161) Raft observers should not require an id to fetch

2021-01-07 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12161:
---

 Summary: Raft observers should not require an id to fetch
 Key: KAFKA-12161
 URL: https://issues.apache.org/jira/browse/KAFKA-12161
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson


It is useful to allow observers to replay the metadata log without requiring a 
replica id. For example, this can be used by tools in order to inspect the 
current metadata state. In order to support this, we should modify 
`KafkaRaftClient` so that the broker id is not required.
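
For illustration, a sketch of the idea (hypothetical, not the actual 
`KafkaRaftClient` API): make the node id optional, so that a client 
constructed without one can still fetch and replay the log but never votes 
or becomes leader.

{code:java}
import java.util.OptionalInt;

// Sketch only: an optional replica id for observer-mode raft clients.
public class RaftClientIdentity {
    private final OptionalInt nodeId;

    public RaftClientIdentity(OptionalInt nodeId) {
        this.nodeId = nodeId;
    }

    // Without an id the client can fetch and replay the metadata log
    // (e.g. for inspection tools) but must not participate in elections.
    public boolean canVote() {
        return nodeId.isPresent();
    }
}
{code}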



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12160) KafkaStreams configs are documented incorrectly

2021-01-07 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-12160:
---

 Summary: KafkaStreams configs are documented incorrectly
 Key: KAFKA-12160
 URL: https://issues.apache.org/jira/browse/KAFKA-12160
 Project: Kafka
  Issue Type: Bug
  Components: docs, streams
Reporter: Matthias J. Sax


In version 2.3, we removed the KafkaStreams default of `max.poll.interval.ms` 
and fell back to the consumer default. However, the docs still list 
`Integer.MAX_VALUE` as the default.

Because we rely on the consumer default, we should actually remove 
`max.poll.interval.ms` from the Kafka Streams docs completely. We might want to 
fix this in some older versions, too. Not sure how far back we want to go.

Furthermore, in the 2.7 docs, the sections "Default Values" and "Parameters 
controlled by Kafka Streams" contain incorrect information.

cf 
https://kafka.apache.org/27/documentation/streams/developer-guide/config-streams.html#default-values

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-687: Automatic Reloading of Security Store

2021-01-07 Thread Gwen Shapira
+1, Thank you!

On Wed, Jan 6, 2021 at 12:27 PM Boyang Chen  wrote:
>
> Hey folks,
>
> just bumping up this thread and see if you have further comments.
>
> On Wed, Dec 16, 2020 at 3:25 PM Boyang Chen 
> wrote:
>
> > Hey there,
> >
> > I would like to start the voting for KIP-687:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-687
> > %3A+Automatic+Reloading+of+Security+Store to make the security store
> > reloading automated.
> >
> > Best,
> > Boyang
> >



-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-01-07 Thread Gwen Shapira
+1, Thank you!!!

On Wed, Jan 6, 2021 at 9:29 PM John Roesler  wrote:
>
> Hello All,
>
> I'd like to volunteer to be the release manager for our next
> feature release, 2.8.0. If there are no objections, I'll
> send out the release plan soon.
>
> Thanks,
> John Roesler
>


-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #335

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10779; Reassignment tool sets throttles incorrectly when 
overriding a reassignment (#9807)


--
[...truncated 3.48 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@15a4bced,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@15a4bced,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@65709bac,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@65709bac,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2891f324,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2891f324,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6cf47af, 
timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6cf47af, 
timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@36bcd04f,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@36bcd04f,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@57f080cf,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@57f080cf,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@705f41c6,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@705f41c6,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@cd1825d, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@cd1825d, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c3f1520, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c3f1520, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@f9a0090, 
timestamped = false, caching = true, logging = true] STARTED


Re: Why many "Load Bug xxx" JIRA bug by Tim?

2021-01-07 Thread Matthias J. Sax
Might be a good idea, but it's out of scope for us; Infra would need
to do this.

Maybe leave a comment on INFRA-21268 or create a new INFRA ticket for it?


-Matthias

On 1/7/21 9:14 AM, Adam Bellemare wrote:
> If we do look to enable Captchas, I think it would be important that we
> avoid corporate offerings (eg: Google's).
> 
> 
> On Thu, Jan 7, 2021 at 12:12 PM Govinda Sakhare 
> wrote:
> 
>> Hi,
>>
>> If it is possible, we should configure/enable Captcha to prevent automated
>> spamming attacks.
>>
>> Thanks
>> Govinda
>>
>> On Wed, Jan 6, 2021 at 11:30 PM Matthias J. Sax  wrote:
>>
>>> This was a spamming attack.
>>>
>>> The user was blocked and the corresponding tickets were deleted. (Cf.
>>> https://issues.apache.org/jira/browse/INFRA-21268)
>>>
>>> The "problem" is, that anybody can create an Jira account and create
>>> tickets. It's in the spirit of open source and the ASF to not lock down
>>> Jira, to make it easy for people to report issues.
>>>
>>> The drawback is, that stuff like this can happen. It's easy to write a
>>> bot to spam the Jira board...
>>>
>>> Because Jira is managed by the ASF infra-team, Kafka committers/PMC
>>> cannot block users and thus it takes a little longer to react to an
>>> issue like this, as we need to wait for the infra team to help out.
>>>
>>>
>>> -Matthias
>>>
>>>
>>> On 1/6/21 1:14 AM, M. Manna wrote:
 I had to register this as spam and block them. I couldn’t disable it
 from ASF Jira.

 I’m also curious to know how/why such a surge occurred.

 Regards,

 On Wed, 6 Jan 2021 at 03:45, Luke Chen  wrote:

> Hi,
> I received a lot of JIRA notification emails today, and they are all
> titled "Load Bug xxx" by Tim.
> The bug content doesn't look like real bugs; they appear to have been
> generated by automation.
> I'm wondering why that could happen?
> Do we have any way to delete them all?
>
> Thanks.
> Luke
>

>>>
>>
>>
>> --
>> Thanks  & Regards,
>> Govinda Sakhare.
>>
> 


Re: [DISCUSS] Apache Kafka 2.8.0 release

2021-01-07 Thread Bill Bejeck
Thanks for volunteering John!  It's a +1 from me as well.

-Bill

On Thu, Jan 7, 2021 at 1:01 AM Ismael Juma  wrote:

> Thanks for volunteering John! +1
>
> Ismael
>
> On Wed, Jan 6, 2021, 9:29 PM John Roesler  wrote:
>
> > Hello All,
> >
> > I'd like to volunteer to be the release manager for our next
> > feature release, 2.8.0. If there are no objections, I'll
> > send out the release plan soon.
> >
> > Thanks,
> > John Roesler
> >
> >
>


[jira] [Resolved] (KAFKA-10779) Reassignment tool sets throttles incorrectly when overriding a reassignment

2021-01-07 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10779.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

> Reassignment tool sets throttles incorrectly when overriding a reassignment
> ---
>
> Key: KAFKA-10779
> URL: https://issues.apache.org/jira/browse/KAFKA-10779
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: dengziming
>Priority: Major
> Fix For: 2.8.0
>
>
> The logic in `ReassignPartitionsCommand.calculateProposedMoveMap` assumes 
> that adding replicas are not included in the replica set returned from 
> `Metadata` or `ListPartitionReassignments`.  This is evident in the test case 
> `ReassignPartitionsUnitTest.testMoveMap`. Because of this incorrect 
> assumption, the move map is computed incorrectly which can result in the 
> wrong throttles being applied. As far as I can tell, this is only an issue 
> when overriding an existing reassignment. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12159) kafka-console-producer prompt should redisplay after an error output

2021-01-07 Thread Victoria Bialas (Jira)
Victoria Bialas created KAFKA-12159:
---

 Summary: kafka-console-producer prompt should redisplay after an 
error output
 Key: KAFKA-12159
 URL: https://issues.apache.org/jira/browse/KAFKA-12159
 Project: Kafka
  Issue Type: Bug
  Components: producer , tools
 Environment: Mac OSX Catalina 10.15.7, iTerm, Linux
Reporter: Victoria Bialas


*BLUF:* The kafka-console-producer should redisplay its prompt after outputting 
an error message (if it is still running, which in most cases it is). The 
current behaviour is that it doesn't redisplay the prompt, and hitting return 
at that point shuts it down.

*DETAIL AND EXAMPLE:* The console producer utility behaves in a less than 
optimal way when you get an error. It doesn’t return the producer prompt even 
though the producer is still running and accessible. If you hit return after 
the error, it shuts down the producer, forcing you to restart it when in fact 
that wasn’t necessary.

This makes it confusing to demo to users in Docs how to test producers and 
consumers in scenarios where an error is generated. (I am adding a tip to write 
around this, which will be published soon. It will be at the end of step 7 in 
[Demo: Enabling Schema Validation on a Topic at the Command 
Line|http://example.com].)

Here is an example from Confluent Platform. The scenario has you try to send a 
message with schema validation on (which will fail due to the message format), 
then disable schema validation and resend the message or another in a similar 
format, which should then succeed.
 # With Confluent "schema validation" on, try to send a message that doesn't 
conform to the schema defined for a topic.
*Producer*

{code:java}
Last login: Wed Jan  6 17:51:01 on ttys004
Vickys-MacBook-Pro:~ vicky$ kafka-console-producer --broker-list localhost:9092 
--topic test-schemas --property parse.key=true --property key.separator=,
>1,my first record
>2,my second record
>[2021-01-06 18:25:08,722] ERROR Error when sending message to topic 
>test-schemas with key: 1 bytes, value: 16 bytes with error: 
>(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.InvalidRecordException: This record has failed the 
validation on broker and hence will be rejected.

org.apache.kafka.common.KafkaException: No key found on line 3:
at 
kafka.tools.ConsoleProducer$LineMessageReader.readMessage(ConsoleProducer.scala:290)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:51)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)

Vickys-MacBook-Pro:~ vicky$ kafka-console-producer --broker-list localhost:9092 
--topic test-schemas --property parse.key=true --property key.separator=,
>3,my third record
>

{code}
You can see that you lose the producer prompt after the error and get only a 
blank line, which leads you to believe you've lost the producer (it's actually 
still running). If you hit return, the producer shuts down, forcing you to 
restart it to continue (when in fact this isn't necessary).

*Consumer*
The consumer still shows only a previous message that was sent when schema 
validation was disabled earlier in the demo.

{code:java}
Vickys-MacBook-Pro:~ vicky$ kafka-console-consumer --bootstrap-server 
localhost:9092 --from-beginning --topic test-schemas --property 
print.key=trueVickys-MacBook-Pro:~ vicky$ kafka-console-consumer 
--bootstrap-server localhost:9092 --from-beginning --topic test-schemas 
--property print.key=true1
my first record
{code}

 # If instead, after disabling schema validation in a different shell, you 
return to the producer window, copy-paste or type the message on the blank 
line following the error, and then hit return, the message will send and you 
will see it in the running consumer.
*Producer*

{code:java}
Vickys-MacBook-Pro:~ vicky$ kafka-console-producer --broker-list localhost:9092 
--topic test-schemas --property parse.key=true --property key.separator=,
>1,my first record
>2,my second record
>[2021-01-07 11:35:30,443] ERROR Error when sending message to topic 
>test-schemas with key: 1 bytes, value: 16 bytes with error: 
>(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.InvalidRecordException: One or more records have been 
rejected
3,my third record
>
{code}
*Consumer*

{code:java}
Vickys-MacBook-Pro:~ vicky$ kafka-console-consumer --bootstrap-server 
localhost:9092 --from-beginning --topic test-schemas --property 
print.key=trueVickys-MacBook-Pro:~ vicky$ kafka-console-consumer 
--bootstrap-server localhost:9092 --from-beginning --topic test-schemas 
--property print.key=true1
my first record3
my third record{code}

If the prompt were simply shown again after the error, it would solve the 
usability problem.

cc: [~mjsax], [~guozhang], [~gshapira_impala_35cc], [~abhishekd.i...@gmail.com]

 



--
This message was sent by 

[jira] [Created] (KAFKA-12158) Consider better return type of RaftClient.scheduleAppend

2021-01-07 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12158:
---

 Summary: Consider better return type of RaftClient.scheduleAppend
 Key: KAFKA-12158
 URL: https://issues.apache.org/jira/browse/KAFKA-12158
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


Currently `RaftClient` has the following Append API:

{code}
Long scheduleAppend(int epoch, List<T> records);
{code}

There are a few possible cases that the single return value is trying to handle:

1. The epoch doesn't match or we are not the current leader => return 
Long.MaxValue
2. We failed to allocate memory to write the batch (the backpressure case) => 
return null
3. We successfully scheduled the append => return the expected offset

It might be better to define a richer type so that the cases that must be 
handled are clearer. At a minimum, it would be better to return `OptionalLong` 
and get rid of the null case.
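
For illustration, a sketch of what a richer result type could look like 
(hypothetical, not a committed design):

{code:java}
import java.util.OptionalLong;

// Hypothetical result type making the three outcomes explicit.
public final class AppendResult {
    public enum Status { SCHEDULED, NOT_LEADER, BACKPRESSURED }

    private final Status status;
    private final long offset; // only meaningful when status == SCHEDULED

    private AppendResult(Status status, long offset) {
        this.status = status;
        this.offset = offset;
    }

    public static AppendResult scheduled(long offset) { return new AppendResult(Status.SCHEDULED, offset); }
    public static AppendResult notLeader() { return new AppendResult(Status.NOT_LEADER, -1L); }
    public static AppendResult backpressured() { return new AppendResult(Status.BACKPRESSURED, -1L); }

    public Status status() { return status; }

    /** The offset of the last scheduled record; empty unless the append was scheduled. */
    public OptionalLong offset() {
        return status == Status.SCHEDULED ? OptionalLong.of(offset) : OptionalLong.empty();
    }
}
{code}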



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #69

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[Mickael Maison] Bump version to 2.6.1

[Mickael Maison] MINOR: Update 2.6 branch version to 2.6.2-SNAPSHOT


--
[...truncated 3.17 MB...]

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task 

[GitHub] [kafka-site] bbejeck commented on a change in pull request #319: Updates for 2.6.1

2021-01-07 Thread GitBox


bbejeck commented on a change in pull request #319:
URL: https://github.com/apache/kafka-site/pull/319#discussion_r553560365



##
File path: 26/documentation.html
##
@@ -41,7 +41,7 @@ 1.2 Use Cases
   
   1.3 Quick Start
-  
+  

Review comment:
   Also, you'll want to keep the `` links as well
   
   cf https://github.com/apache/kafka-site/pull/313

##
File path: 26/documentation.html
##
@@ -30,7 +30,7 @@
 
   

-   
+   

Review comment:
   @mimaison you'll want to keep the `` link format 
   cf https://github.com/apache/kafka-site/pull/307





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-10633) Constant probing rebalances in Streams 2.6

2021-01-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10633.
-
Resolution: Fixed

> Constant probing rebalances in Streams 2.6
> --
>
> Key: KAFKA-10633
> URL: https://issues.apache.org/jira/browse/KAFKA-10633
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Bradley Peterson
>Priority: Major
> Fix For: 2.6.1
>
> Attachments: Discover 2020-10-21T23 34 03.867Z - 2020-10-21T23 44 
> 46.409Z.csv
>
>
> We are seeing a few issues with the new rebalancing behavior in Streams 2.6. 
> This ticket is for constant probing rebalances on one StreamThread, but I'll 
> mention the other issues, as they may be related.
> First, when we redeploy the application we see tasks being moved, even though 
> the task assignment was stable before redeploying. We would expect to see 
> tasks assigned back to the same instances and no movement. The application is 
> in EC2, with persistent EBS volumes, and we use static group membership to 
> avoid rebalancing. To redeploy the app we terminate all EC2 instances. The 
> new instances will reattach the EBS volumes and use the same group member id.
> After redeploying, we sometimes see the group leader go into a tight probing 
> rebalance loop. This doesn't happen immediately, it could be several hours 
> later. Because the redeploy caused task movement, we see expected probing 
> rebalances every 10 minutes. But, then one thread will go into a tight loop 
> logging messages like "Triggering the followup rebalance scheduled for 
> 1603323868771 ms.", handling the partition assignment (which doesn't change), 
> then "Requested to schedule probing rebalance for 1603323868771 ms." This 
> repeats several times a second until the app is restarted again. I'll attach 
> a log export from one such incident.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-10633) Constant probing rebalances in Streams 2.6

2021-01-07 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-10633:
-

> Constant probing rebalances in Streams 2.6
> --
>
> Key: KAFKA-10633
> URL: https://issues.apache.org/jira/browse/KAFKA-10633
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Bradley Peterson
>Priority: Major
> Fix For: 2.6.1
>
> Attachments: Discover 2020-10-21T23 34 03.867Z - 2020-10-21T23 44 
> 46.409Z.csv
>
>
> We are seeing a few issues with the new rebalancing behavior in Streams 2.6. 
> This ticket is for constant probing rebalances on one StreamThread, but I'll 
> mention the other issues, as they may be related.
> First, when we redeploy the application we see tasks being moved, even though 
> the task assignment was stable before redeploying. We would expect to see 
> tasks assigned back to the same instances and no movement. The application is 
> in EC2, with persistent EBS volumes, and we use static group membership to 
> avoid rebalancing. To redeploy the app we terminate all EC2 instances. The 
> new instances will reattach the EBS volumes and use the same group member id.
> After redeploying, we sometimes see the group leader go into a tight probing 
> rebalance loop. This doesn't happen immediately, it could be several hours 
> later. Because the redeploy caused task movement, we see expected probing 
> rebalances every 10 minutes. But, then one thread will go into a tight loop 
> logging messages like "Triggering the followup rebalance scheduled for 
> 1603323868771 ms.", handling the partition assignment (which doesn't change), 
> then "Requested to schedule probing rebalance for 1603323868771 ms." This 
> repeats several times a second until the app is restarted again. I'll attach 
> a log export from one such incident.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #361

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] MINOR: Add a log to print acl change notification details


--
[...truncated 7.02 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@526431f3,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@526431f3,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@25ab2a45,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@25ab2a45,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@382ee7d0,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@382ee7d0,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@b5b6385, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@b5b6385, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@78c330b4,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@78c330b4,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@446d4eb, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@446d4eb, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b19b3d4, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4b19b3d4, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b0ea679, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1b0ea679, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7aa9633c, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@7aa9633c, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@48fa010e, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 

Re: [DISCUSS] KIP-687: Automatic Reloading of Security Store

2021-01-07 Thread Boyang Chen
Hey David, thanks for the feedback.

On Thu, Jan 7, 2021 at 2:37 AM David Jacot  wrote:

> Hi Boyang,
>
> Thanks for the KIP. I am fine with it in general. I just have a few
> comments.
>
> With the proposal, we don't have the guarantee that both the new keystore
> and the new truststore will be picked up together, so we may end up with
> the new keystore and the old truststore for a short period of time, or
> permanently if the second one can't be reloaded for any reason.
>
> This could prevent clients from authenticating for a while if the new
> keystore and the new truststore are not crafted to work with their old
> versions.
>
> I wonder how this would work in practice. Do we already have guards in
> place to avoid this or could we add something to ensure that listeners are
> updated only if both the truststore and the keystore work with each other?
>
> We don't have this issue today as both the truststore and the keystore are
> reloaded when the AlterConfig RPC is received so the admin can control
> this process. It is all or nothing.
>
I'm not sure how we achieve that today. If we are updating both the key
store and the trust store in the same request, it is still possible that one
update succeeds while the other fails. Do users have enough awareness to
make that guarantee by themselves? My point is that today there is no
reliable way to achieve an atomic update either, and hoping users will make
the correct sequence of decisions would be hard.

> I think that this is acceptable but it is worth clearly mentioning in the
> KIP, and later in the doc, that there is no guarantee in that regard.
> Perhaps, we could also mention that updating them in place is not a best
> practice and that using new paths gives better control to the admin.
>
> Best,
> David
>
> On Wed, Jan 6, 2021 at 6:55 PM Jason Gustafson  wrote:
>
> > Thanks Boyang. Someone mentioned my email never showed up, but basically
> > I suggested tying the refresh configuration more directly to the
> > configurations it would affect. I'm happy with the updates.
> >
> > -Jason
> >
> > On Tue, Jan 5, 2021 at 8:34 PM Boyang Chen 
> > wrote:
> >
> > > Thanks Jason for the feedback. I separated the time configs for the key
> > > store and the trust store, and renamed the configs as you proposed.
> > >
> > > Best,
> > > Boyang
> > >
> > > On Mon, Dec 14, 2020 at 3:47 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > > wrote:
> > >
> > > > Hey there,
> > > >
> > > > bumping up this thread to see if there are further questions
> > > > regarding the updated proposal.
> > > >
> > > > Best,
> > > > Boyang
> > > >
> > > > On Thu, Dec 10, 2020 at 11:52 AM Boyang Chen <
> > reluctanthero...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> After some offline discussions, we believe that it's the right
> > > >> direction to go by doing a hybrid approach which includes both a
> > > >> file-watch trigger and interval-based reloading. The former
> > > >> guarantees a swift change 99% of the time, while the latter provides
> > > >> a time-based guarantee in the worst case when the file-watch does not
> > > >> take effect. The current default reloading interval is set to 5 min.
> > > >> I have updated the KIP and ticket; feel free to check it out and see
> > > >> if it makes sense.
> > > >>
> > > >> Best,
> > > >> Boyang
> > > >>
> > > >> On Tue, Dec 8, 2020 at 8:58 PM Boyang Chen <
> > reluctanthero...@gmail.com>
> > > >> wrote:
> > > >>
> > > >>> Hey Gwen, thanks for the feedback.
> > > >>>
> > > >>> On Sun, Dec 6, 2020 at 10:06 PM Gwen Shapira 
> > > wrote:
> > > >>>
> > >  Agree with Igor. IIRC, we also encountered cases where filewatch was
> > >  not triggered as expected. An interval will give us a better
> > >  worst-case scenario that is easily controlled by the Kafka admin.
> > > 
> > >  Are the cases you were referring to happening in the cloud
> > > environment?
> > > >>> Should we investigate instead of simply assuming the standard API
> > > >>> won't work? I checked around and found a similar complaint here
> > > >>> .
> > > >>>
> > > >>> I would partially agree that we want to have a reliable approach
> > > >>> for all different operating systems in general, but it would be
> > > >>> great if we could reach a quantitative measure of the file-watch
> > > >>> success rate, if possible, for us to make the call. Eventually, the
> > > >>> benefit of file-watch is a more prompt reaction time and less
> > > >>> configuration on the broker.
> > > >>>
> > >  Gwen
> > > 
> > >  On Sun, Dec 6, 2020 at 8:17 AM Igor Soarez  wrote:
> > >  >
> > >  >
> > >  > > > The proposed change relies on a file watch, why not also have a
> > >  > > > polling interval to check the file for changes?
> > >  > > >
> > >  > > > The periodical check could work; the slight downside is that we
> > >  > > > need
> > 

Re: How does a consumer know the given partition is removed?

2021-01-07 Thread Boyuan Zhang
Thanks, folks!

It seems like partitionsFor() and listTopics() are what I want. Do we have
performance estimates for these two API calls, e.g., the time cost of waiting
for responses? I would invoke these APIs on a hot path, so I want to have
a general idea of how bad it could be.

Many thanks for your help!
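
For reference, a minimal sketch of that pattern (broker address, topic and
partition are hypothetical; partitionsFor() is typically answered from the
consumer's cached metadata, so the common-case cost is low, while a cold or
expired cache can block up to default.api.timeout.ms waiting for a metadata
response):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.TopicPartition;

    public class PartitionRemovalCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            // Bounds how stale the cached metadata served by partitionsFor() may get.
            props.put("metadata.max.age.ms", "30000");

            TopicPartition tp = new TopicPartition("my-topic", 3); // hypothetical
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.assign(List.of(tp));
                while (true) {
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r -> { /* process record */ });

                    // Cheap in the common case: answered from cached metadata.
                    List<PartitionInfo> parts = consumer.partitionsFor(tp.topic());
                    boolean removed = parts == null || parts.stream()
                        .noneMatch(p -> p.partition() == tp.partition());
                    if (removed) {
                        // Partition (or the whole topic) is gone; stop processing it.
                        break;
                    }
                }
            }
        }
    }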

On Thu, Jan 7, 2021 at 1:44 AM Bruno Cadonna  wrote:

> Hi Luke,
>
> I am afraid the ConsumerRebalanceListener will not work in this case
> since Boyuan assigns the partitions manually. The Java docs you linked
> state
>
> If the consumer directly assigns partitions, those partitions will never
> be reassigned and this callback is not applicable.
>
>
> Hi Boyuan,
>
> The consumer has the methods partitionsFor() and listTopics(). There may
> be a better way to get the information you want that I am not aware of.
>
> Best,
> Bruno
>
> On 07.01.21 05:09, Luke Chen wrote:
> > Hi Boyuan,
> > You can create a *ConsumerRebalanceListener* and do what you want in
> > *onPartitionsRevoked*.
> > Please check this java doc for more information:
> >
> https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.html
> >
> > Thanks.
> > Luke
> >
> > On Thu, Jan 7, 2021 at 8:45 AM Boyuan Zhang  wrote:
> >
> >> Hi team,
> >>
> >> I'm working on a long-running application, which uses the Kafka Consumer
> >> API to poll messages from a given topic and partition. I'm assigning the
> >> topic and partition manually by using the consumer.assign() API and
> >> polling messages by using consumer.poll().
> >>
> >> One common scenario for my application is that certain partitions could
> >> be removed outside of my application, and my application needs to know a
> >> partition has been removed so it can stop processing that partition. My
> >> question is: is there any way to get the removal information when I call
> >> consumer.assign() or consumer.poll(), or is there any other API I can use?
> >>
> >> Thanks for your help!
> >>
> >
>


[GitHub] [kafka-site] mimaison opened a new pull request #319: Updates for 2.6.1

2021-01-07 Thread GitBox


mimaison opened a new pull request #319:
URL: https://github.com/apache/kafka-site/pull/319


   - Copied docs from 2.6.1 artifacts
   - Updated downloads.html to include 2.6.1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [DISCUSS] KIP-687: Automatic Reloading of Security Store

2021-01-07 Thread Boyang Chen
Thanks Rajini for the comments.

On Thu, Jan 7, 2021 at 2:27 AM Rajini Sivaram 
wrote:

> Hi Boyang,
>
> Thanks for the KIP, I have a few questions:
>
> 1) Will it be possible to enable/disable automatic file reloading? If not,
> we should mention in the compatibility section.
>
I don't think we need to support disabling reloading, as it will become the
main mechanism for refreshing the security store. I will make that clear in
the KIP.

>
> 2) We are introducing new common SSL configs and updating common code to
> perform automated updates. What does this mean for clients? Are we going to
> automatically reload client key stores and trust stores?
>
This config would not be applied to clients. I will make that clear in the
config comments.

> 3) We should mention in the compatibility section that we are changing the
> audit log/authorization model for dynamic updates of SSL stores. At the
> moment, only a user with powerful Cluster:Alter permissions can dynamically
> update SSL stores on brokers. The KIP removes this restriction and relies
> purely on file system permissions for file-based stores, unlike for example
> PEM store updates which would still rely on Kafka permissions.
>
Sounds good, I will make that clear.

> 4) Is it really necessary to add a file watcher that is not guaranteed 100%
> of the time, if we are adding refresh interval configs? In particular, if
> we extend this to clients now or at some point in the future, wouldn't it
> be better just to use deterministic refresh intervals without the overhead
> of watching?
>
This hybrid approach is used in certain server-side scenarios. I guess the
main advantage is that we get a best-case guarantee of an immediate refresh,
as well as a worst-case guarantee when the trigger fails, without refreshing
so frequently that it introduces performance problems.
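
As a rough sketch of that hybrid shape (reloadStores() is a hypothetical
stand-in for the broker's validate-and-swap logic, and the 5-minute period
mirrors the KIP's proposed default; this is not the actual broker
implementation):

    import java.nio.file.FileSystems;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class HybridStoreReloader {
        private final Path storePath;            // e.g. the keystore file
        private volatile long lastLoadedMtime = -1;

        public HybridStoreReloader(Path storePath) {
            this.storePath = storePath;
        }

        // Hypothetical stand-in: validate the new store, then atomically swap it in.
        private void reloadStores() { }

        private synchronized void maybeReload() {
            try {
                long mtime = Files.getLastModifiedTime(storePath).toMillis();
                if (mtime != lastLoadedMtime) {  // skip if the file has not changed
                    reloadStores();
                    lastLoadedMtime = mtime;
                }
            } catch (Exception e) {
                // Surface via error metrics / server log instead of failing the broker.
            }
        }

        public void start() throws Exception {
            // Interval-based safety net: bounds worst-case staleness (5 min default).
            ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(this::maybeReload, 5, 5, TimeUnit.MINUTES);

            // File-watch fast path: reacts within seconds when events are delivered.
            WatchService watcher = FileSystems.getDefault().newWatchService();
            storePath.getParent().register(watcher,
                StandardWatchEventKinds.ENTRY_MODIFY);
            Thread watchThread = new Thread(() -> {
                while (true) {
                    try {
                        WatchKey key = watcher.take();
                        key.pollEvents();        // drain; the mtime check decides anyway
                        maybeReload();
                        key.reset();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }, "security-store-watch");
            watchThread.setDaemon(true);
            watchThread.start();
        }
    }

Checking the file's mtime before reloading also limits repeated validation:
an unchanged file is never re-validated, whichever path triggers the check.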


> 5) The KIP doesn't talk about validation of key and trust stores. I am
> guessing we will continue to perform the same validation that we perform
> today. But what happens if validation fails? We should mention in the KIP
> that we no longer provide feedback for this case (unlike admin client
> requests that returned an error).
>
We suggested exposing error metrics and writing error messages to the
server-side log. Since we decouple the update mechanism from the AlterConfig
request, this is unavoidable. Note that we are only applying an in-place
security file update without a name change, which I believe is an edge case.


> 5a) If a key store doesn't conform (e.g. the DN was changed), we would fail
> the update if we apply the current validation. Would we do the validation
> every 5 minutes after that forever even though the file wasn't updated
> since? Or will we remember the last reloaded time to avoid reloading if
> file hasn't changed?
>
Could you elaborate on the definition of `DN`? I'm curious what `conform`
suggests here.


> 5b) We perform additional validation for inter-broker key and trust stores
> to ensure we never break the broker with dynamic updates. Since this
> validation matches key store with trust store, it relies on the order in
> which stores are updated. With reloading in the admin client, user had
> control over the update. We should document any restrictions on the order
> in which files need to be updated on the file system to perform updates of
> inter-broker SSL stores. And as with 5a), it will be good to document the
> behaviour if validation fails.
>
So you are suggesting that the key store update should happen before the
trust store update, or vice versa? I'm not familiar with that restriction,
TBH.

> Regards,
>
> Rajini
>
>
>
> On Wed, Jan 6, 2021 at 5:55 PM Jason Gustafson  wrote:
>
> > Thanks Boyang. Someone mentioned my email never showed up, but basically
> > I suggested tying the refresh configuration more directly to the
> > configurations it would affect. I'm happy with the updates.
> >
> > -Jason
> >
> > On Tue, Jan 5, 2021 at 8:34 PM Boyang Chen 
> > wrote:
> >
> > > Thanks Jason for the feedback. I separated the time configs for the key
> > > store and the trust store, and renamed the configs as you proposed.
> > >
> > > Best,
> > > Boyang
> > >
> > > On Mon, Dec 14, 2020 at 3:47 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > > wrote:
> > >
> > > > Hey there,
> > > >
> > > > bumping up this thread to see if there are further questions
> > > > regarding the updated proposal.
> > > >
> > > > Best,
> > > > Boyang
> > > >
> > > > On Thu, Dec 10, 2020 at 11:52 AM Boyang Chen <
> > reluctanthero...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> After some offline discussions, we believe that it's the right
> > > >> direction to go by doing a hybrid approach which includes both a
> > > >> file-watch trigger and interval-based reloading. The former
> > > >> guarantees a swift change 99% of the time, while the latter provides
> > > >> a time-based guarantee in the worst case when the file-watch does not
> > > >> take effect. The current default reloading interval is
> > 

Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #29

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] MINOR: Add a log to print acl change notification details


--
[...truncated 3.10 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED


Re: [RESULTS] [VOTE] Release Kafka version 2.6.1

2021-01-07 Thread Mickael Maison
Hi Gary,

Yes I expect it to be ready in the next couple of days.

Thanks

On Thu, Jan 7, 2021 at 5:07 PM Gary Russell  wrote:
>
> Thanks; it would be greatly appreciated if this could happen before Jan 14
> at the latest.
> 
> From: Mickael Maison 
> Sent: Thursday, January 7, 2021 11:29 AM
> To: dev ; Users ; kafka-clients 
> 
> Subject: [RESULTS] [VOTE] Release Kafka version 2.6.1
>
> This vote passes with 3 +1 votes (3 bindings) and no 0 or -1 votes.
>
> +1 votes
> PMC Members:
> * Rajini Sivaram
> * Manikumar Reddy
> * Matthias J. Sax
>
> Committers:
> * No votes
>
> Community:
> * No votes
>
> 0 votes
> * No votes
>
> -1 votes
> * No votes
>
> Vote thread:
> https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.apache.org%2Fthread.html%2Fr18b6939b16652cdf757110b991d71580d000ce142af2923eebf2094d%2540%253Cdev.kafka.apache.org%253Edata=04%7C01%7Cgrussell%40vmware.com%7Cbbf67482d45d45178acc08d8b3296ce1%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637456337793517132%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000sdata=Tm%2FXNfPNUK3td3yIEvw%2BKx2var0dLEqY7NsRg6mr8w0%3Dreserved=0
>
> I'll continue with the release process and the release announcement
> will follow in the next few days.
>
> Mickael


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #334

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] MINOR: Add a log to print acl change notification details


--
[...truncated 3.48 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #380

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] MINOR: Add a log to print acl change notification details


--
[...truncated 3.51 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@83be613, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@83be613, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@51d8ee80, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@51d8ee80, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@72249659, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@72249659, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@758181ee, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@758181ee, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@225dea33, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@225dea33, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3467ac16, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@3467ac16, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@62a7729d, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@62a7729d, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1bd011d6, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1bd011d6, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@56116ced, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@56116ced, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5f9f6d25, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5f9f6d25, 
timestamped = false, caching = false, logging = 

Re: Why many "Load Bug xxx" JIRA bug by Tim?

2021-01-07 Thread Adam Bellemare
If we do look to enable Captchas, I think it would be important that we
avoid corporate offerings (e.g., Google's).


On Thu, Jan 7, 2021 at 12:12 PM Govinda Sakhare 
wrote:

> Hi,
>
> If it is possible, we should configure/enable Captcha to prevent automated
> spamming attacks.
>
> Thanks
> Govinda
>
> On Wed, Jan 6, 2021 at 11:30 PM Matthias J. Sax  wrote:
>
> > This was a spamming attack.
> >
> > The user was blocked and the corresponding tickets were deleted. (Cf.
> > https://issues.apache.org/jira/browse/INFRA-21268)
> >
> > The "problem" is, that anybody can create an Jira account and create
> > tickets. It's in the spirit of open source and the ASF to not lock down
> > Jira, to make it easy for people to report issues.
> >
> > The drawback is, that stuff like this can happen. It's easy to write a
> > bot to spam the Jira board...
> >
> > Because Jira is managed by the ASF infra-team, Kafka committers/PMC
> > cannot block users and thus it takes a little longer to react to an
> > issue like this, as we need to wait for the infra team to help out.
> >
> >
> > -Matthias
> >
> >
> > On 1/6/21 1:14 AM, M. Manna wrote:
> > > I had to register this as spam and block them. I couldn’t disable it
> from
> > > ASF JiRA.
> > >
> > >  I’m also curious to know how/why such a surge occurred.
> > >
> > > Regards,
> > >
> > > On Wed, 6 Jan 2021 at 03:45, Luke Chen  wrote:
> > >
> > >> Hi,
> > >> I received a lot of JIRA notification emails today, and they are all
> > >> titled: "Load Bug xxx" by Tim.
> > >> The bug contents don't look like real bugs; they look like they were
> > >> generated by automation.
> > >> I'm wondering why that could happen?
> > >> Do we have any way to delete them all?
> > >>
> > >> Thanks.
> > >> Luke
> > >>
> > >
> >
>
>
> --
> Thanks  & Regards,
> Govinda Sakhare.
>


Re: Why many "Load Bug xxx" JIRA bug by Tim?

2021-01-07 Thread Govinda Sakhare
Hi,

If it is possible, we should configure/enable Captcha to prevent automated
spamming attacks.

Thanks
Govinda

On Wed, Jan 6, 2021 at 11:30 PM Matthias J. Sax  wrote:

> This was a spamming attack.
>
> The user was blocked and the corresponding tickets were deleted. (Cf.
> https://issues.apache.org/jira/browse/INFRA-21268)
>
> The "problem" is, that anybody can create an Jira account and create
> tickets. It's in the spirit of open source and the ASF to not lock down
> Jira, to make it easy for people to report issues.
>
> The drawback is, that stuff like this can happen. It's easy to write a
> bot to spam the Jira board...
>
> Because Jira is managed by the ASF infra-team, Kafka committers/PMC
> cannot block users and thus it takes a little longer to react to an
> issue like this, as we need to wait for the infra team to help out.
>
>
> -Matthias
>
>
> On 1/6/21 1:14 AM, M. Manna wrote:
> > I had to register this as spam and block them. I couldn’t disable it from
> > ASF Jira.
> >
> >  I’m also curious to know how/why such a surge occurred.
> >
> > Regards,
> >
> > On Wed, 6 Jan 2021 at 03:45, Luke Chen  wrote:
> >
> >> Hi,
> >> I received a lot of JIRA notification emails today, and they are all
> >> titled: "Load Bug xxx" by Tim.
> >> The bug contents don't look like real bugs; they look like they were generated by
> >> automation.
> >> I'm wondering why that could happen?
> >> Do we have any way to delete them all?
> >>
> >> Thanks.
> >> Luke
> >>
> >
>


-- 
Thanks  & Regards,
Govinda Sakhare.


Re: [RESULTS] [VOTE] Release Kafka version 2.6.1

2021-01-07 Thread Gary Russell
Thanks; it would be greatly appreciated if this could happen before Jan 14 at
the latest.

From: Mickael Maison 
Sent: Thursday, January 7, 2021 11:29 AM
To: dev ; Users ; kafka-clients 

Subject: [RESULTS] [VOTE] Release Kafka version 2.6.1

This vote passes with 3 +1 votes (3 bindings) and no 0 or -1 votes.

+1 votes
PMC Members:
* Rajini Sivaram
* Manikumar Reddy
* Matthias J. Sax

Committers:
* No votes

Community:
* No votes

0 votes
* No votes

-1 votes
* No votes

Vote thread:
https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.apache.org%2Fthread.html%2Fr18b6939b16652cdf757110b991d71580d000ce142af2923eebf2094d%2540%253Cdev.kafka.apache.org%253Edata=04%7C01%7Cgrussell%40vmware.com%7Cbbf67482d45d45178acc08d8b3296ce1%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637456337793517132%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000sdata=Tm%2FXNfPNUK3td3yIEvw%2BKx2var0dLEqY7NsRg6mr8w0%3Dreserved=0

I'll continue with the release process and the release announcement
will follow in the next few days.

Mickael


[RESULTS] [VOTE] Release Kafka version 2.6.1

2021-01-07 Thread Mickael Maison
This vote passes with 3 +1 votes (3 bindings) and no 0 or -1 votes.

+1 votes
PMC Members:
* Rajini Sivaram
* Manikumar Reddy
* Matthias J. Sax

Committers:
* No votes

Community:
* No votes

0 votes
* No votes

-1 votes
* No votes

Vote thread:
https://lists.apache.org/thread.html/r18b6939b16652cdf757110b991d71580d000ce142af2923eebf2094d%40%3Cdev.kafka.apache.org%3E

I'll continue with the release process and the release announcement
will follow in the next few days.

Mickael


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #87

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] MINOR: Add a log to print acl change notification details


--
[...truncated 3.44 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #332

2021-01-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10874: Fix flaky 
ClientQuotasRequestTest.testAlterIpQuotasRequest (#9778)


--
[...truncated 3.48 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED


Re: [kafka-clients] Re: [VOTE] 2.6.1 RC3

2021-01-07 Thread Mickael Maison
Hi Ismael,

As far as I can tell, Kafka is not using the jackson-databind API that
had the issue. If this is fine, I'll mark this vote as passed and I'll
continue the release process.
Thanks

On Tue, Jan 5, 2021 at 4:27 PM Ismael Juma  wrote:
>
> Hi Mickael,
>
> Does that CVE even affect Kafka? Not sure if we gain much by delaying the
> release even longer. People who really care about the CVE can also use
> 2.7.0.
>
> Ismael
>
> On Tue, Jan 5, 2021 at 8:12 AM Mickael Maison  wrote:
>
> > Hi,
> >
> > Thanks for the votes. However, I'm going to build a new RC to pick the
> > commit [1] that addresses CVE-2020-25649.
> >
> > 1:
> > https://github.com/apache/kafka/commit/101ba2844f92451f633d11fd2ad3813f15d4a4f3
> >
> > So closing this vote, I'll open a new one soon
> >
> > On Fri, Dec 18, 2020 at 4:45 AM Ismael Juma  wrote:
> > >
> > > Looks like you have your votes Mickael. :)
> > >
> > > Ismael
> > >
> > > On Fri, Dec 11, 2020 at 7:23 AM Mickael Maison 
> > wrote:
> > >>
> > >> Hello Kafka users, developers and client-developers,
> > >>
> > >> This is the fourth candidate for release of Apache Kafka 2.6.1.
> > >>
> > >> Since RC2, the following JIRAs have been fixed: KAFKA-10811, KAFKA-10802
> > >>
> > >> Release notes for the 2.6.1 release:
> > >> https://home.apache.org/~mimaison/kafka-2.6.1-rc3/RELEASE_NOTES.html
> > >>
> > >> *** Please download, test and vote by Friday, December 18, 12 PM ET ***
> > >>
> > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > >> https://kafka.apache.org/KEYS
> > >>
> > >> * Release artifacts to be voted upon (source and binary):
> > >> https://home.apache.org/~mimaison/kafka-2.6.1-rc3/
> > >>
> > >> * Maven artifacts to be voted upon:
> > >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >>
> > >> * Javadoc:
> > >> https://home.apache.org/~mimaison/kafka-2.6.1-rc3/javadoc/
> > >>
> > >> * Tag to be voted upon (off 2.6 branch) is the 2.6.1 tag:
> > >> https://github.com/apache/kafka/releases/tag/2.6.1-rc3
> > >>
> > >> * Documentation:
> > >> https://kafka.apache.org/26/documentation.html
> > >>
> > >> * Protocol:
> > >> https://kafka.apache.org/26/protocol.html
> > >>
> > >> * Successful Jenkins builds for the 2.6 branch:
> > >> Unit/integration tests:
> > >> https://ci-builds.apache.org/job/Kafka/job/kafka-2.6-jdk8/62/
> > >>
> > >> /**
> > >>
> > >> Thanks,
> > >> Mickael
> > >
> > > --
> > > You received this message because you are subscribed to the Google
> > Groups "kafka-clients" group.
> > > To unsubscribe from this group and stop receiving emails from it, send
> > an email to kafka-clients+unsubscr...@googlegroups.com.
> > > To view this discussion on the web visit
> > https://groups.google.com/d/msgid/kafka-clients/CAD5tkZYn43BkKArgdL2jJn00a5Suf_89NG4n3-OpqMvKARPNNQ%40mail.gmail.com
> > .
> >


Re: [VOTE] KIP-700: Add Describe Cluster API

2021-01-07 Thread Nikolay Izhikov
+1

> On 6 Jan 2021, at 16:53, David Jacot  wrote:
> 
> Hi all,
> 
> I'd like to start the vote on KIP-700: Add Describe Cluster API. This KIP
> is here:
> https://cwiki.apache.org/confluence/x/jQ4mCg
> 
> Please take a look and vote if you can.
> 
> Best,
> David



Re: [DISCUSS] KIP-687: Automatic Reloading of Security Store

2021-01-07 Thread David Jacot
Hi Boyang,

Thanks for the KIP. I am fine with it in general. I just have a few
comments.

With the proposal, we don't have the guarantee that both the new keystore
and the new truststore will be picked up together, so we may end up with
the new keystore and the old truststore for a short period of time, or
permanently if the second one can't be reloaded for any reason.

This could prevent clients from authenticating for a while if the new
keystore and the new truststore are not crafted to work with their old
versions.

I wonder how this would work in practice. Do we already have guards in
place to avoid this or could we add something to ensure that listeners are
updated only if both the truststore and the keystore work with each other?

We don't have this issue today as both the truststore and the keystore are
reloaded when the AlterConfig RPC is received so the admin can control
this process. It is all or nothing.

I think that this is acceptable but it is worth clearly mentioning in the
KIP, and later in the doc, that there is no guarantee in that regard.
Perhaps, we could also mention that updating them in place is not a best
practice and that using new paths gives better control to the admin.
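
For instance, a rough sketch of the new-path style of rotation with the
Admin API (broker id, listener name and file paths are hypothetical): write
the rotated keystore to a fresh path first, then point the broker at it, so
the existing AlterConfigs-time validation still provides all-or-nothing
feedback:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class RotateKeystore {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // hypothetical
            try (Admin admin = Admin.create(props)) {
                // 1. The operator writes the rotated keystore to a NEW path first.
                // 2. Point the broker at the new path; the broker validates the
                //    store before applying it, so a broken file is rejected and
                //    an error comes back to the caller -- "all or nothing".
                ConfigResource broker =
                    new ConfigResource(ConfigResource.Type.BROKER, "0");
                AlterConfigOp setKeystore = new AlterConfigOp(
                    new ConfigEntry(
                        "listener.name.external.ssl.keystore.location",
                        "/etc/kafka/ssl/keystore-2021-01.jks"),  // hypothetical path
                    AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(
                    Map.of(broker, List.of(setKeystore))).all().get();
            }
        }
    }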

Best,
David

On Wed, Jan 6, 2021 at 6:55 PM Jason Gustafson  wrote:

> Thanks Boyang. Someone mentioned my email never showed up, but basically I
> suggested tying the refresh configuration more directly to the
> configurations it would affect. I'm happy with the updates.
>
> -Jason
>
> On Tue, Jan 5, 2021 at 8:34 PM Boyang Chen 
> wrote:
>
> > Thanks Jason for the feedback. I separated the time configs for key store
> > and trust store, and renamed the configs as you proposed.
> >
> > Best,
> > Boyang
> >
> > On Mon, Dec 14, 2020 at 3:47 PM Boyang Chen 
> > wrote:
> >
> > > Hey there,
> > >
> > > bumping up this thread to see if there are further questions
> > > regarding the updated proposal.
> > >
> > > Best,
> > > Boyang
> > >
> > > On Thu, Dec 10, 2020 at 11:52 AM Boyang Chen <
> reluctanthero...@gmail.com
> > >
> > > wrote:
> > >
> > >> After some offline discussions, we believe that it's the right
> > >> direction to go by doing a hybrid approach which includes both a
> > >> file-watch trigger and interval-based reloading. The former guarantees
> > >> a swift change 99% of the time, while the latter provides a time-based
> > >> guarantee in the worst case when the file-watch does not take effect.
> > >> The current default reloading interval is set to 5 min. I have updated
> > >> the KIP and ticket; feel free to check it out and see if it makes
> > >> sense.
> > >>
> > >> Best,
> > >> Boyang
> > >>
> > >> On Tue, Dec 8, 2020 at 8:58 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hey Gwen, thanks for the feedback.
> > >>>
> > >>> On Sun, Dec 6, 2020 at 10:06 PM Gwen Shapira 
> > wrote:
> > >>>
> >  Agree with Igor. IIRC, we also encountered cases where filewatch was
> >  not triggered as expected. An interval will give us a better
> >  worst-case scenario that is easily controlled by the Kafka admin.
> > 
> >  Are the cases you were referring to happening in the cloud
> > environment?
> > >>> Should we investigate instead of simply assuming the standard API
> > >>> won't work? I checked around and found a similar complaint here
> > >>> .
> > >>>
> > >>> I would partially agree that we want to have a reliable approach
> > >>> for all different operating systems in general, but it would be great
> > >>> if we could reach a quantitative measure of the file-watch success
> > >>> rate, if possible, for us to make the call. Eventually, the benefit of
> > >>> file-watch is a more prompt reaction time and less configuration on
> > >>> the broker.
> > >>>
> >  Gwen
> > 
> >  On Sun, Dec 6, 2020 at 8:17 AM Igor Soarez  wrote:
> >  >
> >  >
> >  > > > The proposed change relies on a file watch, why not also have a
> >  > > > polling interval to check the file for changes?
> >  > > >
> >  > > The periodical check could work; the slight downside is that we
> >  > > need additional configurations to schedule the interval. Do you
> >  > > think the file-watch approach has any extra overhead compared to the
> >  > > interval-based solution?
> >  >
> >  > I don't think so. The reason I'm asking this is that the KIP
> >  > currently includes:
> >  >
> >  >   "When the file watch does not work for unknown reason, user could
> >  >   still try to change the store path in an explicit AlterConfig call
> >  >   in the worst case."
> >  >
> >  > Having the interval in addition to the file watch could result in a
> >  > better worst-case scenario.
> >  > I understand it would require introducing at least one new
> >  > configuration for the interval, so maybe this doesn't have to be
> >  > solved in
> >  

Re: [jira] [Created] (KAFKA-12157) test Upgrade 2.7.0 from 2.0.0 occur a question

2021-01-07 Thread wenbing shen
Hi team,
I ran into a problem while testing a rolling upgrade from 2.0.0 to 2.7.0. I
tried to reproduce it again and failed. I am curious how this problem is
caused; can anyone help analyze it?
Thanks.
Wenbing

On Thu, Jan 7, 2021 at 6:26 PM Wenbing Shen (Jira)  wrote:

> Wenbing Shen created KAFKA-12157:
> 
>
>  Summary: test Upgrade 2.7.0 from 2.0.0 occur a question
>  Key: KAFKA-12157
>  URL: https://issues.apache.org/jira/browse/KAFKA-12157
>  Project: Kafka
>   Issue Type: Bug
>   Components: log
> Affects Versions: 2.7.0
> Reporter: Wenbing Shen
>  Attachments: 1001server.log, 1001serverlog.png, 1003server.log,
> 1003serverlog.png, 1003statechange.log
>
> I was in a test environment, performing a rolling upgrade from version
> 2.0.0 to version 2.7.0, and encountered the following problem. When the
> rolling upgrade progressed to the second round, I stopped the first broker
> (1001) and the following error occurred. When a broker processes a client
> produce request, the starting offset of the partition leader's leader epoch
> suddenly becomes 0, and as the broker continues to process write requests
> for the same partition, error logs appear. All partition leaders with a
> replica on 1001 are transferred to node 1003, and these partitions on node
> 1003 generate this error whenever they receive produce requests. When I
> restart 1001, broker 1001 reports the following error:
>
> [2021-01-06 16:46:55,955] ERROR (ReplicaFetcherThread-8-1003
> kafka.server.ReplicaFetcherThread 76) [ReplicaFetcher replicaId=1001,
> leaderId=1003, fetcherId=8] Unexpected error occurred while processing data
> for partition test-perf1-9 at offset 9666953
>
> I use the following command to generate produce requests:
>
> nohup /home/kafka/software/kafka/bin/kafka-producer-perf-test.sh
> --num-records 1 --record-size 1000 --throughput 3
> --producer-props bootstrap.servers=hdp1:9092,hdp2:9092,hdp3:9092 acks=1
> --topic test-perf1 > 1pro.log 2>&1 &
>
>
>
> I tried to reproduce the problem again, but after three attempts it did
> not reappear. I am curious how this problem occurred and why the 1003
> broker resets the startOffset of leaderEpoch 4 to 0 when the offset is
> assigned by the broker in the Log.append function.
>
>
>
> broker 1003: server.log
>
> [2021-01-06 16:37:59,492] WARN (data-plane-kafka-request-handler-131
> kafka.server.epoch.LeaderEpochFileCache 70) [LeaderEpochCache test-perf1-9]
> New epoch entry EpochEntry(epoch=4, startOffset=0) caused truncation of
> conflicting entries ListBuffer(EpochEntry(epoch=4, startOffset=9667122),
> EpochEntry(epoch=3, startOffset=9195729), EpochEntry(epoch=2,
> startOffset=8348201)). Cache now contains 0 entries.
> [2021-01-06 16:37:59,493] WARN (data-plane-kafka-request-handler-131
> kafka.server.epoch.LeaderEpochFileCache 70) [LeaderEpochCache test-perf1-8]
> New epoch entry EpochEntry(epoch=3, startOffset=0) caused truncation of
> conflicting entries ListBuffer(EpochEntry(epoch=3, startOffset=9667478),
> EpochEntry(epoch=2, startOffset=9196127), EpochEntry(epoch=1,
> startOffset=8342787)). Cache now contains 0 entries.
> [2021-01-06 16:37:59,495] WARN (data-plane-kafka-request-handler-131
> kafka.server.epoch.LeaderEpochFileCache 70) [LeaderEpochCache test-perf1-2]
> New epoch entry EpochEntry(epoch=3, startOffset=0) caused truncation of
> conflicting entries ListBuffer(EpochEntry(epoch=3, startOffset=9667478),
> EpochEntry(epoch=2, startOffset=9196127), EpochEntry(epoch=1,
> startOffset=8336727)). Cache now contains 0 entries.
> [2021-01-06 16:37:59,498] ERROR (data-plane-kafka-request-handler-142
> kafka.server.ReplicaManager 76) [ReplicaManager broker=1003] Error
> processing append operation on partition test-perf1-9
> java.lang.IllegalArgumentException: Received invalid partition leader
> epoch entry EpochEntry(epoch=4, startOffset=-3)
>  at
> kafka.server.epoch.LeaderEpochFileCache.assign(LeaderEpochFileCache.scala:67)
>  at
> kafka.server.epoch.LeaderEpochFileCache.assign(LeaderEpochFileCache.scala:59)
>  at kafka.log.Log.maybeAssignEpochStartOffset(Log.scala:1268)
>  at kafka.log.Log.$anonfun$append$6(Log.scala:1181)
>  at kafka.log.Log$$Lambda$935/184936331.accept(Unknown Source)
>  at java.lang.Iterable.forEach(Iterable.java:75)
>  at kafka.log.Log.$anonfun$append$2(Log.scala:1179)
>  at kafka.log.Log.append(Log.scala:2387)
>  at kafka.log.Log.appendAsLeader(Log.scala:1050)
>  at
> kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1079)
>  at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1067)
>  at
> kafka.server.ReplicaManager.$anonfun$appendToLocalLog$4(ReplicaManager.scala:953)
>  at kafka.server.ReplicaManager$$Lambda$1025/1369541490.apply(Unknown
> Source)
>  at
> scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
>  at
> 

Re: [DISCUSS] KIP-687: Automatic Reloading of Security Store

2021-01-07 Thread Rajini Sivaram
Hi Boyang,

Thanks for the KIP, I have a few questions:

1) Will it be possible to enable/disable automatic file reloading? If not,
we should mention that in the compatibility section.

2) We are introducing new common SSL configs and updating common code to
perform automated updates. What does this mean for clients? Are we going to
automatically reload client key stores and trust stores?

3) We should mention in the compatibility section that we are changing the
audit log/authorization model for dynamic updates of SSL stores. At the
moment, only a user with powerful Cluster:Alter permissions can dynamically
update SSL stores on brokers. The KIP removes this restriction and relies
purely on file system permissions for file-based stores, unlike, for
example, PEM store updates, which would still rely on Kafka permissions.

4) Is it really necessary to add a file watcher that is not guaranteed 100%
of the time, if we are adding refresh interval configs? In particular, if
we extend this to clients now or at some point in the future, wouldn't it
be better just to use deterministic refresh intervals without the overhead
of watching?

5) The KIP doesn't talk about validation of key and trust stores. I am
guessing we will continue to perform the same validation that we perform
today. But what happens if validation fails? We should mention in the KIP
that we no longer provide feedback for this case (unlike admin client
requests that returned an error).
5a) If a key store doesn't conform (e.g. the DN was changed), we would fail
the update if we apply the current validation. Would we then re-run the
validation every 5 minutes forever, even though the file wasn't updated
since? Or will we remember the last reload time to avoid reloading a file
that hasn't changed?
5b) We perform additional validation for inter-broker key and trust stores
to ensure we never break the broker with dynamic updates. Since this
validation matches the key store against the trust store, it relies on the
order in which the stores are updated. With reloading via the admin client,
the user had control over the update. We should document any restrictions on
the order in which files need to be updated on the file system to perform
updates of inter-broker SSL stores. And, as with 5a), it would be good to
document the behaviour if validation fails.

Regards,

Rajini
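
For concreteness, a minimal sketch of the hybrid scheme discussed further
down this thread: a file watch for prompt pickup, a periodic interval as the
deterministic fallback, and a modification-time check so an unchanged store
is not reloaded (cf. 5a). The class and names below are illustrative
assumptions, not APIs from the KIP.

import java.io.IOException;
import java.nio.file.ClosedWatchServiceException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.nio.file.attribute.FileTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class HybridStoreReloader implements AutoCloseable {
    private final Path store;
    private final Runnable rebuild; // e.g. rebuild the SSL engine factory from the store
    private final WatchService watcher;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private volatile FileTime lastLoaded;

    public HybridStoreReloader(Path store, Runnable rebuild, long intervalMs)
            throws IOException {
        this.store = store;
        this.rebuild = rebuild;
        this.lastLoaded = Files.getLastModifiedTime(store);

        // Deterministic fallback: reload on a fixed interval (the default
        // discussed in this thread is 5 minutes).
        scheduler.scheduleAtFixedRate(this::reloadIfChanged,
                intervalMs, intervalMs, TimeUnit.MILLISECONDS);

        // Best-effort prompt path: watch the store's directory for changes.
        this.watcher = FileSystems.getDefault().newWatchService();
        store.toAbsolutePath().getParent().register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);
        Thread watchThread = new Thread(this::watchLoop, "store-reload-watcher");
        watchThread.setDaemon(true);
        watchThread.start();
    }

    private void watchLoop() {
        try {
            while (true) {
                WatchKey key = watcher.take(); // blocks until the OS reports an event
                key.pollEvents();              // drain; the mtime check decides anyway
                reloadIfChanged();
                key.reset();
            }
        } catch (InterruptedException | ClosedWatchServiceException e) {
            // shutting down
        }
    }

    // Skip the reload when the file has not changed since the last load (5a).
    private synchronized void reloadIfChanged() {
        try {
            FileTime mtime = Files.getLastModifiedTime(store);
            if (mtime.compareTo(lastLoaded) > 0) {
                rebuild.run(); // validation of the new store would happen here
                lastLoaded = mtime;
            }
        } catch (IOException e) {
            // keep serving the previously loaded store on failure
        }
    }

    @Override
    public void close() throws IOException {
        scheduler.shutdownNow();
        watcher.close();
    }
}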



On Wed, Jan 6, 2021 at 5:55 PM Jason Gustafson  wrote:

> Thanks Boyang. Someone mentioned my email never showed up, but basically I
> suggested tying the refresh configuration more directly to the
> configurations it would affect. I'm happy with the updates.
>
> -Jason
>
> On Tue, Jan 5, 2021 at 8:34 PM Boyang Chen 
> wrote:
>
> > Thanks Jason for the feedback. I separated the time configs for the key
> > store and the trust store, and renamed the configs as you proposed.
> >
> > Best,
> > Boyang
> >
> > On Mon, Dec 14, 2020 at 3:47 PM Boyang Chen 
> > wrote:
> >
> > > Hey there,
> > >
> > > bumping up this thread to see if there are further questions regarding
> > the
> > > updated proposal.
> > >
> > > Best,
> > > Boyang
> > >
> > > On Thu, Dec 10, 2020 at 11:52 AM Boyang Chen <
> reluctanthero...@gmail.com
> > >
> > > wrote:
> > >
> > >> After some offline discussions, we believe the right direction is a
> > >> hybrid approach that includes both a file-watch trigger and
> > >> interval-based reloading. The former guarantees a swift change 99% of
> > >> the time, while the latter provides a time-based guarantee for the
> > >> worst case, when the file-watch does not take effect. The current
> > >> default reloading interval is set to 5 min. I have updated the KIP and
> > >> ticket; feel free to check them out and see if it makes sense.
> > >>
> > >> Best,
> > >> Boyang
> > >>
> > >> On Tue, Dec 8, 2020 at 8:58 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hey Gwen, thanks for the feedback.
> > >>>
> > >>> On Sun, Dec 6, 2020 at 10:06 PM Gwen Shapira 
> > wrote:
> > >>>
> >  Agree with Igor. IIRC, we also encountered cases where filewatch was
> >  not triggered as expected. An interval will give us a better
> >  worst-case scenario that is easily controlled by the Kafka admin.
> > 
> > >>> Are the cases you were referring to happening in the cloud
> > >>> environment? Should we investigate instead of simply assuming the
> > >>> standard API won't work? I checked around and found a similar
> > >>> complaint here .
> > >>>
> > >>> I partially agree that we want a reliable approach across different
> > >>> operating systems in general, but it would be great if we could reach
> > >>> a quantitative measure of the file-watch success rate, if possible,
> > >>> to help us make the call. Ultimately, the benefit of file-watch is a
> > >>> more prompt reaction time and less configuration on the broker.
> > >>>
> >  Gwen
> > 
> >  On Sun, Dec 6, 

[jira] [Created] (KAFKA-12157) test Upgrade 2.7.0 from 2.0.0 occur a question

2021-01-07 Thread Wenbing Shen (Jira)
Wenbing Shen created KAFKA-12157:


 Summary: test Upgrade 2.7.0 from 2.0.0 occur a question
 Key: KAFKA-12157
 URL: https://issues.apache.org/jira/browse/KAFKA-12157
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 2.7.0
Reporter: Wenbing Shen
 Attachments: 1001server.log, 1001serverlog.png, 1003server.log, 
1003serverlog.png, 1003statechange.log

I was in a test environment doing a rolling upgrade from version 2.0.0 to
version 2.7.0 and encountered the following problem. When the rolling upgrade
progressed to the second round, I stopped the first broker (1001) of that
round, and the following error occurred. While a broker was processing a
client produce request, the start offset of the partition leader's leader
epoch suddenly became 0; as the broker continued to process write requests
for the same partition, an error was logged. All partition leaders with
replicas on 1001 were transferred to the 1003 node, and those partitions on
the 1003 node produce this error whenever they receive produce requests. When
I restarted 1001, the 1001 broker reported the following error:

[2021-01-06 16:46:55,955] ERROR (ReplicaFetcherThread-8-1003 
kafka.server.ReplicaFetcherThread 76) [ReplicaFetcher replicaId=1001, 
leaderId=1003, fetcherId=8] Unexpected error occurred while processing data for 
partition test-perf1-9 at offset 9666953

I use the following command to make produce requests:

nohup /home/kafka/software/kafka/bin/kafka-producer-perf-test.sh --num-records 
1 --record-size 1000 --throughput 3 --producer-props 
bootstrap.servers=hdp1:9092,hdp2:9092,hdp3:9092 acks=1 --topic test-perf1 > 
1pro.log 2>&1 &

 

I tried to reproduce the problem, but after three attempts it did not
reappear. I am curious how this problem occurred, and why the 1003 broker
reset the startOffset of leaderEpoch 4 to 0 when the offset was assigned by
the broker in the Log.append function.

 

broker 1003: server.log

[2021-01-06 16:37:59,492] WARN (data-plane-kafka-request-handler-131
kafka.server.epoch.LeaderEpochFileCache 70) [LeaderEpochCache test-perf1-9]
New epoch entry EpochEntry(epoch=4, startOffset=0) caused truncation of
conflicting entries ListBuffer(EpochEntry(epoch=4, startOffset=9667122),
EpochEntry(epoch=3, startOffset=9195729), EpochEntry(epoch=2,
startOffset=8348201)). Cache now contains 0 entries.
[2021-01-06 16:37:59,493] WARN (data-plane-kafka-request-handler-131
kafka.server.epoch.LeaderEpochFileCache 70) [LeaderEpochCache test-perf1-8]
New epoch entry EpochEntry(epoch=3, startOffset=0) caused truncation of
conflicting entries ListBuffer(EpochEntry(epoch=3, startOffset=9667478),
EpochEntry(epoch=2, startOffset=9196127), EpochEntry(epoch=1,
startOffset=8342787)). Cache now contains 0 entries.
[2021-01-06 16:37:59,495] WARN (data-plane-kafka-request-handler-131
kafka.server.epoch.LeaderEpochFileCache 70) [LeaderEpochCache test-perf1-2]
New epoch entry EpochEntry(epoch=3, startOffset=0) caused truncation of
conflicting entries ListBuffer(EpochEntry(epoch=3, startOffset=9667478),
EpochEntry(epoch=2, startOffset=9196127), EpochEntry(epoch=1,
startOffset=8336727)). Cache now contains 0 entries.
[2021-01-06 16:37:59,498] ERROR (data-plane-kafka-request-handler-142
kafka.server.ReplicaManager 76) [ReplicaManager broker=1003] Error processing
append operation on partition test-perf1-9
java.lang.IllegalArgumentException: Received invalid partition leader epoch 
entry EpochEntry(epoch=4, startOffset=-3)
 at 
kafka.server.epoch.LeaderEpochFileCache.assign(LeaderEpochFileCache.scala:67)
 at 
kafka.server.epoch.LeaderEpochFileCache.assign(LeaderEpochFileCache.scala:59)
 at kafka.log.Log.maybeAssignEpochStartOffset(Log.scala:1268)
 at kafka.log.Log.$anonfun$append$6(Log.scala:1181)
 at kafka.log.Log$$Lambda$935/184936331.accept(Unknown Source)
 at java.lang.Iterable.forEach(Iterable.java:75)
 at kafka.log.Log.$anonfun$append$2(Log.scala:1179)
 at kafka.log.Log.append(Log.scala:2387)
 at kafka.log.Log.appendAsLeader(Log.scala:1050)
 at 
kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1079)
 at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1067)
 at 
kafka.server.ReplicaManager.$anonfun$appendToLocalLog$4(ReplicaManager.scala:953)
 at kafka.server.ReplicaManager$$Lambda$1025/1369541490.apply(Unknown Source)
 at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
 at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
 at scala.collection.mutable.HashMap.map(HashMap.scala:35)
 at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:941)
 at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:621)
 at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:625)
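
The IllegalArgumentException above comes from the epoch cache rejecting a
negative start offset, and the earlier WARN lines come from new entries
truncating conflicting ones. A much-simplified illustration of those two
checks (this is a sketch of the invariant, not Kafka's actual
implementation):

import java.util.ArrayDeque;
import java.util.Deque;

final class EpochCacheSketch {
    static final class EpochEntry {
        final int epoch;
        final long startOffset;
        EpochEntry(int epoch, long startOffset) {
            this.epoch = epoch;
            this.startOffset = startOffset;
        }
        @Override public String toString() {
            return "EpochEntry(epoch=" + epoch + ", startOffset=" + startOffset + ")";
        }
    }

    private final Deque<EpochEntry> entries = new ArrayDeque<>();

    void assign(int epoch, long startOffset) {
        // The check that throws above: epoch and start offset must be non-negative.
        if (epoch < 0 || startOffset < 0) {
            throw new IllegalArgumentException(
                    "Received invalid partition leader epoch entry "
                            + new EpochEntry(epoch, startOffset));
        }
        // Entries must stay monotonic: a new entry truncates any conflicting
        // ones, which is what the "caused truncation" WARN lines report. An
        // entry with startOffset=0 therefore wipes the whole cache.
        while (!entries.isEmpty()
                && (entries.peekLast().epoch >= epoch
                        || entries.peekLast().startOffset >= startOffset)) {
            entries.pollLast();
        }
        entries.addLast(new EpochEntry(epoch, startOffset));
    }
}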

 

broker 1001:server.log

[2021-01-06 16:46:55,955] ERROR (ReplicaFetcherThread-8-1003
kafka.server.ReplicaFetcherThread 76) [ReplicaFetcher replicaId=1001,
leaderId=1003, fetcherId=8] Unexpected error occurred while processing data
for partition test-perf1-9 at offset 9666953

[jira] [Created] (KAFKA-12156) Document consequences of single threaded response handling

2021-01-07 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-12156:
---

 Summary: Document consequences of single threaded response handling
 Key: KAFKA-12156
 URL: https://issues.apache.org/jira/browse/KAFKA-12156
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Tom Bentley
Assignee: Tom Bentley


If users block the response-handling thread in one call while waiting for the
result of a second, "nested" call, then the client effectively hangs, because
the second call's response will never be processed. For example:


admin.listTopics().names().thenApply(topics -> {
    // ... some code to decide topicsToCreate based on the topics
    try {
        admin.createTopics(topicsToCreate).all().get(); // blocks the response thread
    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException(e);
    }
    return null;
}).get(); // never completes: the thread that must handle CreateTopics is blocked above


The {{createTopics()...get()}} call blocks indefinitely, preventing the
{{ListTopics}} response processing from ever dealing with the
{{CreateTopics}} response.

This can be surprising to users of the Admin API, so we should at least 
document that this pattern should not be used. 
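
A minimal sketch of a safe alternative we could document (decideTopicsToCreate
is a hypothetical application helper): resolve the first result on the
caller's thread and only then issue the second call, so the single
response-handling thread is never blocked.

import java.util.Collection;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

final class SafeNestedCalls {
    // Hypothetical application logic that picks the topics to create.
    static Collection<NewTopic> decideTopicsToCreate(Set<String> existingTopics) {
        throw new UnsupportedOperationException("application-specific");
    }

    static void createMissingTopics(Admin admin)
            throws ExecutionException, InterruptedException {
        // Block on the caller's thread; the response thread stays free.
        Set<String> existing = admin.listTopics().names().get();
        Collection<NewTopic> toCreate = decideTopicsToCreate(existing);
        // Issued only after the first response has been fully processed.
        admin.createTopics(toCreate).all().get();
    }
}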



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: How does a consumer know the given partition is removed?

2021-01-07 Thread Bruno Cadonna

Hi Luke,

I am afraid the ConsumerRebalanceListener will not work in this case,
since Boyuan assigns the partitions manually. The Java docs you linked state:


If the consumer directly assigns partitions, those partitions will never 
be reassigned and this callback is not applicable.



Hi Boyuan,

The consumer has the methods partitionsFor() and listTopics(). There may be
a better way to get the information you want that I am not aware of.
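
A minimal sketch of such a metadata-based check between polls (note: whether
partitionsFor() returns null or an empty list for a deleted topic can vary,
so the sketch treats both as "topic gone"):

import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

final class PartitionExistenceCheck {
    // Returns false when the topic metadata no longer lists the partition.
    static boolean stillExists(Consumer<?, ?> consumer, TopicPartition tp) {
        List<PartitionInfo> infos = consumer.partitionsFor(tp.topic());
        if (infos == null || infos.isEmpty()) {
            return false; // topic gone (null vs. empty varies by version/config)
        }
        return infos.stream().anyMatch(info -> info.partition() == tp.partition());
    }
}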


Best,
Bruno

On 07.01.21 05:09, Luke Chen wrote:

Hi Boyuan,
You can create a *ConsumerRebalanceListener* and do whatever you need in
*onPartitionsRevoked*.
Please check this Javadoc for more information:
https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.html

Thanks.
Luke

On Thu, Jan 7, 2021 at 8:45 AM Boyuan Zhang  wrote:


Hi team,

I'm working on a long-running application which uses the Kafka Consumer API
to poll messages from a given topic and partition. I'm assigning the topic
and partition manually with the consumer.assign() API and polling messages
with consumer.poll().

One common scenario for my application is that certain partitions can be
removed outside of my application, and my application needs to know that a
partition has been removed so it can stop processing it. My question is: is
there any way to get the removal information when I call consumer.assign()
or consumer.poll(), or through any other API?

Thanks for your help!





[jira] [Resolved] (KAFKA-10874) Fix flaky ClientQuotasRequestTest.testAlterIpQuotasRequest

2021-01-07 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-10874.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

> Fix flaky ClientQuotasRequestTest.testAlterIpQuotasRequest
> --
>
> Key: KAFKA-10874
> URL: https://issues.apache.org/jira/browse/KAFKA-10874
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: GeordieMai
>Priority: Major
> Fix For: 2.8.0
>
>
> [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-9748/8/testReport/junit/kafka.server/ClientQuotasRequestTest/Build___JDK_15___testAlterIpQuotasRequest/?cloudbees-analytics-link=scm-reporting%2Ftests%2Ffailed]
>  
> {quote}
> Build / JDK 15 / kafka.server.ClientQuotasRequestTest.testAlterIpQuotasRequest
> {quote}
>  
> {quote}
> java.lang.AssertionError: expected:<150.0> but was:<100.0>
>  at org.junit.Assert.fail(Assert.java:89)
>  at org.junit.Assert.failNotEquals(Assert.java:835)
>  at org.junit.Assert.assertEquals(Assert.java:555)
>  at org.junit.Assert.assertEquals(Assert.java:685)
>  at 
> kafka.server.ClientQuotasRequestTest.$anonfun$testAlterIpQuotasRequest$1(ClientQuotasRequestTest.scala:215)
>  at 
> kafka.server.ClientQuotasRequestTest.$anonfun$testAlterIpQuotasRequest$1$adapted(ClientQuotasRequestTest.scala:206)
>  at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
>  at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:919)
>  at 
> kafka.server.ClientQuotasRequestTest.verifyIpQuotas$1(ClientQuotasRequestTest.scala:206)
>  at 
> kafka.server.ClientQuotasRequestTest.testAlterIpQuotasRequest(ClientQuotasRequestTest.scala:228)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>  at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at