[jira] [Commented] (KAFKA-6221) ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic creation

2017-12-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276412#comment-16276412
 ] 

Ismael Juma commented on KAFKA-6221:


[~golovanov-me], the original issue was about log messages that have no impact 
on cluster health. Your issue seems to be a different one, since you are 
claiming that there is a cluster health impact. I would suggest filing a 
separate issue and providing more information on the other errors you are 
seeing and what you mean by "degrade".

[~alex.dunayevsky], can you please explain why you have increased the priority? 
You claimed that "everything works fine" apart from the log message, which can be 
confusing for people who are not familiar with Kafka and distributed systems 
where cluster state propagation is asynchronous.

> ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic 
> creation 
> ---
>
> Key: KAFKA-6221
> URL: https://issues.apache.org/jira/browse/KAFKA-6221
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.1, 0.10.2.0, 0.10.2.1, 0.11.0.1, 1.0.0
> Environment: RHEL 7
>Reporter: Alex Dunayevsky
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> This issue appeared to happen frequently on 0.10.2.0. 
> On 0.10.2.1 and 1.0.0 it's much harder to reproduce. 
> We'll focus on reproducing it on 0.10.2.1 and 1.0.0.
> *TOPOLOGY:* 
>   3 brokers, 1 zk.
> *REPRODUCING STRATEGY:* 
> Create a few dozen topics (say, 40) one by one, each with replication factor 
> 2. The number of partitions generally does not matter, but for easier 
> reproduction it should not be too small (around 30 or so). 
> *CREATE 40 TOPICS:*
> {code:bash}
> for i in {1..40}; do
>   bin/kafka-topics.sh --create --topic "topic${i}_p28_r2" --partitions 28 --replication-factor 2 --zookeeper :2165
> done
> {code}
> *ERRORS*
> {code:java}
> *BROKER 1*
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> *BROKER 2*
> [2017-11-15 16:46:36,408] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,12] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 

[jira] [Comment Edited] (KAFKA-6221) ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic creation

2017-12-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276412#comment-16276412
 ] 

Ismael Juma edited comment on KAFKA-6221 at 12/4/17 7:53 AM:
-

[~golovanov-me], the original issue was about log messages that have no impact 
on cluster health. Your issue seems to be a different one, since you are 
claiming that there is a cluster health impact. I would suggest filing a 
separate issue and providing more information on the other errors you are 
seeing and what you mean by "degrade".

[~alex.dunayevsky], can you please explain why you have increased the priority? 
You claimed that "everything works fine" apart from the log message, which can be 
confusing for people who are not familiar with Kafka and distributed systems 
where cluster state propagation is asynchronous.


was (Author: ijuma):
[~golovanov-me], the original issue was about log messages that have no impact 
in cluster health. Your issue seems to be a different one since you are 
claiming that there is a cluster health impact. I would suggest filing a 
separate issue and submitting more information on the other errors you are 
seeing and what do you mean by "degrade".

[~alex.dunayevsky], can you please explain why you have increased the priority? 
You claimed that "everything works fine" apart from the log message that can be 
confusing for people who are not familiar with Kafka and distributed systems 
where cluster state propagation is asynchronous.

> ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic 
> creation 
> ---
>
> Key: KAFKA-6221
> URL: https://issues.apache.org/jira/browse/KAFKA-6221
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.1, 0.10.2.0, 0.10.2.1, 0.11.0.1, 1.0.0
> Environment: RHEL 7
>Reporter: Alex Dunayevsky
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> This issue appeared to happen frequently on 0.10.2.0. 
> On 0.10.2.1 and 1.0.0 it's much harder to reproduce. 
> We'll focus on reproducing it on 0.10.2.1 and 1.0.0.
> *TOPOLOGY:* 
>   3 brokers, 1 zk.
> *REPRODUCING STRATEGY:* 
> Create a few dozen topics (say, 40) one by one, each with replication factor 
> 2. The number of partitions generally does not matter, but for easier 
> reproduction it should not be too small (around 30 or so). 
> *CREATE 40 TOPICS:*
> {code:bash}
> for i in {1..40}; do
>   bin/kafka-topics.sh --create --topic "topic${i}_p28_r2" --partitions 28 --replication-factor 2 --zookeeper :2165
> done
> {code}
> *ERRORS*
> {code:java}
> *BROKER 1*
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] 

[jira] [Updated] (KAFKA-6221) ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic creation

2017-12-03 Thread Alex Dunayevsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Dunayevsky updated KAFKA-6221:
---
Priority: Major  (was: Minor)

> ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic 
> creation 
> ---
>
> Key: KAFKA-6221
> URL: https://issues.apache.org/jira/browse/KAFKA-6221
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.1, 0.10.2.0, 0.10.2.1, 0.11.0.1, 1.0.0
> Environment: RHEL 7
>Reporter: Alex Dunayevsky
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> This issue appeared to happen frequently on 0.10.2.0. 
> On 0.10.2.1 and 1.0.0 it's much harder to reproduce. 
> We'll focus on reproducing it on 0.10.2.1 and 1.0.0.
> *TOPOLOGY:* 
>   3 brokers, 1 zk.
> *REPRODUCING STRATEGY:* 
> Create a few dozen topics (say, 40) one by one, each with replication factor 
> 2. The number of partitions generally does not matter, but for easier 
> reproduction it should not be too small (around 30 or so). 
> *CREATE 40 TOPICS:*
> {code:bash}
> for i in {1..40}; do
>   bin/kafka-topics.sh --create --topic "topic${i}_p28_r2" --partitions 28 --replication-factor 2 --zookeeper :2165
> done
> {code}
> *ERRORS*
> {code:java}
> *BROKER 1*
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> *BROKER 2*
> [2017-11-15 16:46:36,408] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,12] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:36,408] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,12] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:36,410] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,0] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> 

[jira] [Updated] (KAFKA-6221) ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic creation

2017-12-03 Thread Alex Dunayevsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Dunayevsky updated KAFKA-6221:
---
Affects Version/s: 0.10.1.1

> ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic 
> creation 
> ---
>
> Key: KAFKA-6221
> URL: https://issues.apache.org/jira/browse/KAFKA-6221
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.1, 0.10.2.0, 0.10.2.1, 0.11.0.1, 1.0.0
> Environment: RHEL 7
>Reporter: Alex Dunayevsky
>Priority: Minor
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> This issue appeared to happen frequently on 0.10.2.0. 
> On 0.10.2.1 and 1.0.0 it's much harder to reproduce. 
> We'll focus on reproducing it on 0.10.2.1 and 1.0.0.
> *TOPOLOGY:* 
>   3 brokers, 1 zk.
> *REPRODUCING STRATEGY:* 
> Create a few dozen topics (say, 40) one by one, each with replication factor 
> 2. The number of partitions generally does not matter, but for easier 
> reproduction it should not be too small (around 30 or so). 
> *CREATE 40 TOPICS:*
> {code:bash}
> for i in {1..40}; do
>   bin/kafka-topics.sh --create --topic "topic${i}_p28_r2" --partitions 28 --replication-factor 2 --zookeeper :2165
> done
> {code}
> *ERRORS*
> {code:java}
> *BROKER 1*
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> *BROKER 2*
> [2017-11-15 16:46:36,408] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,12] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:36,408] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,12] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:36,410] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,0] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. 

[jira] [Commented] (KAFKA-6303) Potential lack of synchronization in NioEchoServer#AcceptorThread

2017-12-03 Thread huxihx (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276243#comment-16276243
 ] 

huxihx commented on KAFKA-6303:
---

`newChannels` is already wrapped with Collections.synchronizedList, so it 
should be thread-safe for non-compound actions, but I do agree that it needs 
explicit locking for compound actions (say, putIfAbsent).
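
To make that distinction concrete, here is a small sketch (not the NioEchoServer code itself): a list wrapped with Collections.synchronizedList makes each individual call atomic, but a check-then-act sequence such as put-if-absent still has to hold the list's monitor explicitly.

{code:java}
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListExample {
    private final List<SocketChannel> newChannels =
            Collections.synchronizedList(new ArrayList<>());

    // Single operation: already atomic thanks to the synchronized wrapper.
    void addChannel(SocketChannel ch) {
        newChannels.add(ch);
    }

    // Compound check-then-act: needs explicit locking on the list itself,
    // otherwise another thread can add the same channel between the two calls.
    void addChannelIfAbsent(SocketChannel ch) {
        synchronized (newChannels) {
            if (!newChannels.contains(ch)) {
                newChannels.add(ch);
            }
        }
    }
}
{code}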

> Potential lack of synchronization in NioEchoServer#AcceptorThread
> -
>
> Key: KAFKA-6303
> URL: https://issues.apache.org/jira/browse/KAFKA-6303
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> In the run() method:
> {code}
> SocketChannel socketChannel = 
> ((ServerSocketChannel) key.channel()).accept();
> socketChannel.configureBlocking(false);
> newChannels.add(socketChannel);
> {code}
> Modification to newChannels should be protected by a synchronized block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6305) log.retention.hours, retention.ms not working in segments deletion for kafka topic

2017-12-03 Thread VinayKumar (JIRA)
VinayKumar created KAFKA-6305:
-

 Summary: log.retention.hours, retention.ms not working in segments 
deletion for kafka topic
 Key: KAFKA-6305
 URL: https://issues.apache.org/jira/browse/KAFKA-6305
 Project: Kafka
  Issue Type: Bug
  Components: config, log
Affects Versions: 0.10.2.1
 Environment: CentOS 7 
Reporter: VinayKumar


I'm using Kafka version 0.10.2.1 and have log.retention.hours set to 
24 hours (log.retention.hours=24). But the topic partition segments are not deleted 
as per this retention, and segments older than 24 hours are still present in the 
partition logs.

Further, I updated retention.ms to 18 hours (retention.ms=6480) for the 
topic, but segments older than this retention period are still present in the 
partition logs.
Can someone please help with this problem?
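
For reference, 18 hours is 64,800,000 ms, so a per-topic override would normally read retention.ms=64800000. Below is a minimal sketch of setting that override with the Java AdminClient; note that the alterConfigs API shown here exists only in 0.11.0+ clients and brokers (on 0.10.2.1 the kafka-configs.sh tool is the equivalent), and the topic name and bootstrap address are placeholders.

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetTopicRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // 18 hours expressed in milliseconds
            Config retention = new Config(
                    Collections.singleton(new ConfigEntry("retention.ms", "64800000")));
            admin.alterConfigs(Collections.singletonMap(topic, retention)).all().get();
        }
    }
}
{code}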



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6304) The controller should allow updating the partition reassignment for the partitions being reassigned

2017-12-03 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-6304:

Summary: The controller should allow updating the partition reassignment 
for the partitions being reassigned  (was: The controller should allow update 
the partition reassignment for the partitions being reassigned)

> The controller should allow updating the partition reassignment for the 
> partitions being reassigned
> ---
>
> Key: KAFKA-6304
> URL: https://issues.apache.org/jira/browse/KAFKA-6304
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, core
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
> Fix For: 1.1.0
>
>
> Currently the controller will not process the partition reassignment of a 
> partition if the partition is already being reassigned.
> The issue is that if there is a broker failure during the partition 
> reassignment, the partition reassignment may never finish. And the users may 
> want to cancel the partition reassignment. However, the controller will 
> refuse to do that unless the user manually deletes the partition reassignment zk 
> path, forces a controller switch, and then issues the revert command. This is 
> pretty involved. It seems reasonable for the controller to cancel the 
> ongoing stuck reassignment and replace it with the updated partition 
> assignment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6304) The controller should allow update the partition reassignment for the partitions being reassigned

2017-12-03 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-6304:
---

 Summary: The controller should allow update the partition 
reassignment for the partitions being reassigned
 Key: KAFKA-6304
 URL: https://issues.apache.org/jira/browse/KAFKA-6304
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin


Currently the controller will not process the partition reassignment of a 
partition if the partition is already being reassigned.

The issue is that if there is a broker failure during the partition 
reassignment, the partition reassignment may never finish. And the users may 
want to cancel the partition reassignment. However, the controller will refuse 
to do that unless the user manually deletes the partition reassignment zk path, 
forces a controller switch, and then issues the revert command. This is pretty 
involved. It seems reasonable for the controller to cancel the ongoing stuck 
reassignment and replace it with the updated partition assignment.
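
For concreteness, the manual workaround mentioned above boils down to removing two znodes by hand. A rough sketch with the plain ZooKeeper client follows; the connect string is a placeholder, and this is only an illustration of the procedure the description refers to, not a recommended operational practice.

{code:java}
import org.apache.zookeeper.ZooKeeper;

public class CancelStuckReassignment {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string; point it at the cluster's ZooKeeper ensemble.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        try {
            // The controller considers a reassignment "in progress" while this znode exists.
            zk.delete("/admin/reassign_partitions", -1); // -1 = any version
            // Deleting /controller forces a controller re-election, after which the
            // revert reassignment can be submitted again.
            zk.delete("/controller", -1);
        } finally {
            zk.close();
        }
    }
}
{code}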



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6304) The controller should allow update the partition reassignment for the partitions being reassigned

2017-12-03 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-6304:

Affects Version/s: 1.0.0
Fix Version/s: 1.1.0
  Component/s: core
   controller
   Issue Type: Improvement  (was: Bug)

> The controller should allow update the partition reassignment for the 
> partitions being reassigned
> -
>
> Key: KAFKA-6304
> URL: https://issues.apache.org/jira/browse/KAFKA-6304
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, core
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
> Fix For: 1.1.0
>
>
> Currently the controller will not process the partition reassignment of a 
> partition if the partition is already being reassigned.
> The issue is that if there is a broker failure during the partition 
> reassignment, the partition reassignment may never finish. And the users may 
> want to cancel the partition reassignment. However, the controller will 
> refuse to do that unless the user manually deletes the partition reassignment zk 
> path, forces a controller switch, and then issues the revert command. This is 
> pretty involved. It seems reasonable for the controller to cancel the 
> ongoing stuck reassignment and replace it with the updated partition 
> assignment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6303) Potential lack of synchronization in NioEchoServer#AcceptorThread

2017-12-03 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-6303:
-

 Summary: Potential lack of synchronization in 
NioEchoServer#AcceptorThread
 Key: KAFKA-6303
 URL: https://issues.apache.org/jira/browse/KAFKA-6303
 Project: Kafka
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


In the run() method:
{code}
SocketChannel socketChannel = 
((ServerSocketChannel) key.channel()).accept();
socketChannel.configureBlocking(false);
newChannels.add(socketChannel);
{code}
Modification to newChannels should be protected by a synchronized block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5632) Message headers not supported by Kafka Streams

2017-12-03 Thread Narendra Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276122#comment-16276122
 ] 

Narendra Kumar commented on KAFKA-5632:
---

[~mjsax] How about putting headers as a field in RecordContext? This way any 
processor node could update the headers if it has access to the ProcessorContext. 
But I am not sure whether it is a good idea for user code to update the record context.

> Message headers not supported by Kafka Streams
> --
>
> Key: KAFKA-5632
> URL: https://issues.apache.org/jira/browse/KAFKA-5632
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.11.0.0
>Reporter: CJ Woolard
>Priority: Minor
>  Labels: needs-kip
>
> The new message headers functionality introduced in Kafka 0.11.0.0 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers)
>  does not appear to be respected by Kafka Streams, specifically message 
> headers set on input topics to a Kafka Streams topology do not get propagated 
> to the corresponding output topics of the topology. 
> It appears that it's at least partially due to the 
> SourceNodeRecordDeserializer not properly respecting message headers here:
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/SourceNodeRecordDeserializer.java#L60
> where it isn't using the new ConsumerRecord constructor which supports 
> headers:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java#L122
> For additional background here is the line before which we noticed that we 
> still have the message headers, and after which we no longer have them:
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordQueue.java#L93
> In terms of a potential solution there are a few different scenarios to 
> consider:
> 1. A stream processor with one input and one output, i.e. 1-to-1, (A 
> map/transformation for example). This is the simplest case, and one proposal 
> would be to directly propagate any message headers from input to output.
> 2. A stream processor with one input and many outputs, i.e. 1-to-many, (A 
> flatmap step for example). 
> 3. A stream processor with multiple inputs per output, i.e. many-to-1, (A 
> join step for example). 
> One proposal for supporting all possible scenarios would be to expose 
> overloads of the Kafka Streams DSL methods that give the user the ability to 
> specify logic for handling message headers. 
> For additional background the use case is similar to a distributed tracing 
> use case, where the following previous work may be useful for aiding in 
> design discussions:
> Dapper 
> (https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36356.pdf)
>  
> or 
> Zipkin (https://github.com/openzipkin/zipkin)
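
As a small illustration of the 1-to-1 propagation proposed above (a sketch against the plain clients API rather than the Streams internals; the class and method names are illustrative): headers read from a consumed record can be passed directly to the outgoing ProducerRecord via the header-aware constructor mentioned in the description.

{code:java}
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HeaderForwardingExample {
    // Forward the input record's headers to the output topic unchanged.
    static void forward(ConsumerRecord<byte[], byte[]> in,
                        KafkaProducer<byte[], byte[]> producer,
                        String outputTopic) {
        ProducerRecord<byte[], byte[]> out = new ProducerRecord<>(
                outputTopic,
                null,              // let the partitioner choose the partition
                in.key(),
                in.value(),
                in.headers());     // headers are carried over verbatim
        producer.send(out);
    }
}
{code}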



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6302) Topic can not be recreated after it is deleted

2017-12-03 Thread Waleed Fateem (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276063#comment-16276063
 ] 

Waleed Fateem commented on KAFKA-6302:
--

Hi kic,

I think you should be able to recreate a topic with the same name without any 
issues, but this is with the assumption that the topic doesn't already exist. 
Did you make sure that the topic was deleted properly?

Did you run the kafka-topics --zookeeper ZHOST:2181 --describe command after 
deletion to confirm that your topic was in fact deleted and not in the "marked 
for deletion" state?

What error, if any, did you see when you attempted to create the topic again?

I don't believe the topic will be deleted so long as you have clients connected 
to the topic. I think at that point Kafka is just going to mark the topic for 
deletion. I would need to run a test to confirm. 
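
A programmatic version of that check, using the same AdminClient API the test below already relies on, might look roughly like this (a sketch only; the topic name is the test topic from the report):

{code:java}
import java.util.Collections;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

public class TopicExistenceCheck {
    // Returns true if the topic can still be described, i.e. it has not been fully deleted yet.
    static boolean topicStillExists(AdminClient admin, String topic) {
        try {
            TopicDescription description = admin.describeTopics(Collections.singleton(topic))
                    .all().get().get(topic);
            return description != null;
        } catch (Exception e) {
            // An UnknownTopicOrPartitionException (wrapped in ExecutionException) means it is gone.
            return false;
        }
    }
}
{code}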

> Topic can not be recreated after it is deleted
> --
>
> Key: KAFKA-6302
> URL: https://issues.apache.org/jira/browse/KAFKA-6302
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients
>Affects Versions: 1.0.0
>Reporter: kic
>
> I use an embedded kafka for unit test. My application relies on the ability 
> to recreate topics programmatically. Currently it is not possible to 
> re-create a topic after it has been deleted.
> {code}
> // needs compile-time dependency 
> 'net.manub:scalatest-embedded-kafka_2.11:1.0.0' and 
> 'org.apache.kafka:kafka-clients:1.0.0'
> package kic.kafka.embedded
> import java.util.Properties
> import org.apache.kafka.clients.admin.{AdminClient, NewTopic}
> import org.scalatest._
> import scala.collection.JavaConverters._
> class EmbeddedKafaJavaWrapperTest extends FlatSpec with Matchers {
>   val props = new Properties()
>   val testTopic = "test-topic"
>   "The admin client" should "be able to create, delete and re-create topics" 
> in {
> props.setProperty("bootstrap.servers", "localhost:10001")
> props.setProperty("delete.enable.topic", "true")
> props.setProperty("group.id", "test-client")
> props.setProperty("key.deserializer", 
> "org.apache.kafka.common.serialization.LongDeserializer")
> props.setProperty("value.deserializer", 
> "org.apache.kafka.common.serialization.StringDeserializer")
> props.setProperty("clinet.id", "test-client")
> props.setProperty("key.serializer", 
> "org.apache.kafka.common.serialization.LongSerializer")
> props.setProperty("value.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer")
> EmbeddedKafaJavaWrapper.start(10001, 10002, props)
> try {
>   implicit val admin = AdminClient.create(props)
>   // create topic and confirm it exists
>   createTopic(testTopic)
>   val topics = listTopics()
>   info(s"topics: $topics")
>   topics should contain(testTopic)
>   // now we should be able to send something to this topic
>   // TODO create producer and send something
>   // delete topic
>   deleteTopic(testTopic)
>   listTopics() shouldNot contain(testTopic)
>   // recreate topic
>   createTopic(testTopic)
>   // listTopics() should contain(testTopic)
>   // and finally consume from the topic and expect to get 0 entries
>   // TODO create consumer and poll once
> } finally {
>   EmbeddedKafaJavaWrapper.stop()
> }
>   }
>   def listTopics()(implicit admin: AdminClient) =
> admin.listTopics().names().get()
>   def createTopic(topic: String)(implicit admin: AdminClient) =
> admin.createTopics(Seq(new NewTopic(topic, 1, 1)).asJava)
>   def deleteTopic(topic: String)(implicit admin: AdminClient) =
> admin.deleteTopics(Seq("test-topic").asJava).all().get()
> }
> {code}
> Btw, what happens to connected producers/consumers when I delete a topic? 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6301) Incorrect Java Regex example '*' for mirroring all topics

2017-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276057#comment-16276057
 ] 

ASF GitHub Bot commented on KAFKA-6301:
---

GitHub user waleedfateem opened a pull request:

https://github.com/apache/kafka/pull/4289

KAFKA-6301 changing regular expression in ops.html from '*' to '.*'

The documentation for section "Mirroring data between clusters" states the 
following:
Or you could mirror all topics using --whitelist '*'
The regular expression should be '.*' instead.

This fix makes the change directly to the ops.html file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/waleedfateem/kafka KAFKA-6301

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4289.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4289


commit f26dd298c24d22656d26575a03201a43c326b4df
Author: waleedfateem 
Date:   2017-12-03T19:42:47Z

KAFKA-6301 changing incorrect Java Regex example for mirroring all topics 
from '*' to '.*' in ops.html




> Incorrect Java Regex example '*' for mirroring all topics
> -
>
> Key: KAFKA-6301
> URL: https://issues.apache.org/jira/browse/KAFKA-6301
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.2.0, 0.11.0.0, 1.0.0
>Reporter: Waleed Fateem
>Assignee: Waleed Fateem
>Priority: Minor
>  Labels: documentation, mirror-maker
>
> The documentation for section "Mirroring data between clusters" states the 
> following:
> Or you could mirror all topics using --whitelist '*'
> The regular expression should be '.*' instead. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6199) Single broker with fast growing heap usage

2017-12-03 Thread Robin Tweedie (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robin Tweedie updated KAFKA-6199:
-
Attachment: histo_live_80.txt

Here's another {{-histo:live}} from around 80% heap.

> Single broker with fast growing heap usage
> --
>
> Key: KAFKA-6199
> URL: https://issues.apache.org/jira/browse/KAFKA-6199
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.1
> Environment: Amazon Linux
>Reporter: Robin Tweedie
> Attachments: Screen Shot 2017-11-10 at 1.55.33 PM.png, Screen Shot 
> 2017-11-10 at 11.59.06 AM.png, dominator_tree.png, histo_live.txt, 
> histo_live_80.txt, merge_shortest_paths.png, path2gc.png
>
>
> We have a single broker in our cluster of 25 with fast growing heap usage 
> which necessitates us restarting it every 12 hours. If we don't restart the 
> broker, it becomes very slow from long GC pauses and eventually has 
> {{OutOfMemory}} errors.
> See {{Screen Shot 2017-11-10 at 11.59.06 AM.png}} for a graph of heap usage 
> percentage on the broker. A "normal" broker in the same cluster stays below 
> 50% (averaged) over the same time period.
> We have taken heap dumps when the broker's heap usage is getting dangerously 
> high, and there are a lot of retained {{NetworkSend}} objects referencing 
> byte buffers.
> We also noticed that the single affected broker logs a lot more of this kind 
> of warning than any other broker:
> {noformat}
> WARN Attempting to send response via channel for which there is no open 
> connection, connection id 13 (kafka.network.Processor)
> {noformat}
> See {{Screen Shot 2017-11-10 at 1.55.33 PM.png}} for counts of that WARN log 
> message visualized across all the brokers (to show it happens a bit on other 
> brokers, but not nearly as much as it does on the "bad" broker).
> I can't make the heap dumps public, but would appreciate advice on how to pin 
> down the problem better. We're currently trying to narrow it down to a 
> particular client, but without much success so far.
> Let me know what else I could investigate or share to track down the source 
> of this leak.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-6301) Incorrect Java Regex example '*' for mirroring all topics

2017-12-03 Thread Waleed Fateem (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Waleed Fateem reassigned KAFKA-6301:


Assignee: Waleed Fateem

> Incorrect Java Regex example '*' for mirroring all topics
> -
>
> Key: KAFKA-6301
> URL: https://issues.apache.org/jira/browse/KAFKA-6301
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.2.0, 0.11.0.0, 1.0.0
>Reporter: Waleed Fateem
>Assignee: Waleed Fateem
>Priority: Minor
>  Labels: documentation, mirror-maker
>
> The documentation for section "Mirroring data between clusters" states the 
> following:
> Or you could mirror all topics using --whitelist '*'
> The regular expression should be '.*' instead. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6297) Consumer fetcher should handle UnsupportedVersionException more diligently

2017-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276001#comment-16276001
 ] 

ASF GitHub Bot commented on KAFKA-6297:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4286


> Consumer fetcher should handle UnsupportedVersionException more diligently
> --
>
> Key: KAFKA-6297
> URL: https://issues.apache.org/jira/browse/KAFKA-6297
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Guozhang Wang
>
> Today if the consumer is talking to an older-versioned broker that does not 
> support newer fetch versions, it will simply block without printing any 
> warning logs. This is because when {{UnsupportedVersionException}} gets 
> raised inside {{ConsumerNetworkClient}}, the {{Fetcher}}'s handling logic 
> only logs it and moves on (and hence retries forever):
> {code}
> @Override
> public void onFailure(RuntimeException e) {
>     log.debug("Fetch request {} to {} failed",
>             request.fetchData(), fetchTarget, e);
> }
> {code}
> We should at least log {{UnsupportedVersionException}} specifically as 
> WARN, or even let the consumer fail fast and gracefully upon this error.
> Side note: There are two system tests in 
> {{streams_broker_compatibility_test.ps}} that are disabled at the moment -- after this 
> is fixed, we need to re-enable those tests (and also update them 
> accordingly). 
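
A rough sketch of the proposed handling, extracted into a standalone method for readability (the logger setup and the decision to rethrow are assumptions to illustrate the proposal, not the committed fix):

{code:java}
import org.apache.kafka.common.errors.UnsupportedVersionException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FetchFailureHandling {
    private static final Logger log = LoggerFactory.getLogger(FetchFailureHandling.class);

    // Surface version incompatibilities loudly instead of retrying forever at DEBUG level.
    static void onFailure(RuntimeException e, String fetchTarget) {
        if (e instanceof UnsupportedVersionException) {
            log.warn("Fetch request to {} failed because the broker does not support "
                    + "the fetch request version; check broker/client compatibility", fetchTarget, e);
            throw e; // or, alternatively, record the error and stop the consumer gracefully
        }
        log.debug("Fetch request to {} failed", fetchTarget, e);
    }
}
{code}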



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6302) Topic can not be recreated after it is deleted

2017-12-03 Thread kic (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kic updated KAFKA-6302:
---
Description: 
I use an embedded kafka for unit test. My application relies on the ability to 
recreate topics programmatically. Currently it is not possible to re-create a 
topic after it has been deleted.

{code}
// needs compile-time dependency 'net.manub:scalatest-embedded-kafka_2.11:1.0.0' 
and 'org.apache.kafka:kafka-clients:1.0.0'
package kic.kafka.embedded

import java.util.Properties

import org.apache.kafka.clients.admin.{AdminClient, NewTopic}
import org.scalatest._

import scala.collection.JavaConverters._

class EmbeddedKafaJavaWrapperTest extends FlatSpec with Matchers {
  val props = new Properties()
  val testTopic = "test-topic"

  "The admin client" should "be able to create, delete and re-create topics" in 
{
props.setProperty("bootstrap.servers", "localhost:10001")
props.setProperty("delete.enable.topic", "true")
props.setProperty("group.id", "test-client")
props.setProperty("key.deserializer", 
"org.apache.kafka.common.serialization.LongDeserializer")
props.setProperty("value.deserializer", 
"org.apache.kafka.common.serialization.StringDeserializer")
props.setProperty("clinet.id", "test-client")
props.setProperty("key.serializer", 
"org.apache.kafka.common.serialization.LongSerializer")
props.setProperty("value.serializer", 
"org.apache.kafka.common.serialization.StringSerializer")

EmbeddedKafaJavaWrapper.start(10001, 10002, props)

try {
  implicit val admin = AdminClient.create(props)

  // create topic and confirm it exists
  createTopic(testTopic)
  val topics = listTopics()
  info(s"topics: $topics")
  topics should contain(testTopic)

  // now we should be able to send something to this topic
  // TODO create producer and send something

  // delete topic
  deleteTopic(testTopic)
  listTopics() shouldNot contain(testTopic)

  // recreate topic
  createTopic(testTopic)
  // listTopics() should contain(testTopic)

  // and finally consume from the topic and expect to get 0 entries
  // TODO create consumer and poll once
} finally {
  EmbeddedKafaJavaWrapper.stop()
}

  }

  def listTopics()(implicit admin: AdminClient) =
admin.listTopics().names().get()

  def createTopic(topic: String)(implicit admin: AdminClient) =
admin.createTopics(Seq(new NewTopic(topic, 1, 1)).asJava)

  def deleteTopic(topic: String)(implicit admin: AdminClient) =
admin.deleteTopics(Seq("test-topic").asJava).all().get()

}
{code}

Btw, what happens to connected producers/consumers when I delete a topic? 

  was:
I use an embedded kafka for unit test. My application relies on the ability to 
recreate topics programmatically. Currently it is not possible to re-create a 
topic after it has been deleted.

{code}
// needs compile-time dependency 'net.manub:scalatest-embedded-kafka_2.11:1.0.0'
package kic.kafka.embedded

import java.util.Properties

import org.apache.kafka.clients.admin.{AdminClient, NewTopic}
import org.scalatest._

import scala.collection.JavaConverters._

class EmbeddedKafaJavaWrapperTest extends FlatSpec with Matchers {
  val props = new Properties()
  val testTopic = "test-topic"

  "The admin client" should "be able to create, delete and re-create topics" in 
{
props.setProperty("bootstrap.servers", "localhost:10001")
props.setProperty("delete.enable.topic", "true")
props.setProperty("group.id", "test-client")
props.setProperty("key.deserializer", 
"org.apache.kafka.common.serialization.LongDeserializer")
props.setProperty("value.deserializer", 
"org.apache.kafka.common.serialization.StringDeserializer")
props.setProperty("clinet.id", "test-client")
props.setProperty("key.serializer", 
"org.apache.kafka.common.serialization.LongSerializer")
props.setProperty("value.serializer", 
"org.apache.kafka.common.serialization.StringSerializer")

EmbeddedKafaJavaWrapper.start(10001, 10002, props)

try {
  implicit val admin = AdminClient.create(props)

  // create topic and confirm it exists
  createTopic(testTopic)
  val topics = listTopics()
  info(s"topics: $topics")
  topics should contain(testTopic)

  // now we should be able to send something to this topic
  // TODO create producer and send something

  // delete topic
  deleteTopic(testTopic)
  listTopics() shouldNot contain(testTopic)

  // recreate topic
  createTopic(testTopic)
  // listTopics() should contain(testTopic)

  // and finally consume from the topic and expect to get 0 entries
  // TODO create consumer and poll once
} finally {
  EmbeddedKafaJavaWrapper.stop()
}

  }

  def listTopics()(implicit admin: AdminClient) =
admin.listTopics().names().get()

  def createTopic(topic: String)(implicit admin: 

[jira] [Commented] (KAFKA-6193) Only delete reassign_partitions znode after reassignment is complete

2017-12-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16275982#comment-16275982
 ] 

Ted Yu commented on KAFKA-6193:
---

Updated JIRA title according to Ismael's discovery.

> Only delete reassign_partitions znode after reassignment is complete
> 
>
> Key: KAFKA-6193
> URL: https://issues.apache.org/jira/browse/KAFKA-6193
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ismael Juma
> Fix For: 1.1.0
>
> Attachments: 6193.out
>
>
> From 
> https://builds.apache.org/job/kafka-trunk-jdk8/2198/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldPerformMultipleReassignmentOperationsOverVariousTopics/
>  :
> {code}
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> kafka.admin.ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics(ReassignPartitionsClusterTest.scala:524)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-6193) Only delete reassign_partitions znode after reassignment is complete

2017-12-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-6193:
--
Summary: Only delete reassign_partitions znode after reassignment is 
complete  (was: 
ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics
 fails sometimes)

> Only delete reassign_partitions znode after reassignment is complete
> 
>
> Key: KAFKA-6193
> URL: https://issues.apache.org/jira/browse/KAFKA-6193
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ismael Juma
> Fix For: 1.1.0
>
> Attachments: 6193.out
>
>
> From 
> https://builds.apache.org/job/kafka-trunk-jdk8/2198/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldPerformMultipleReassignmentOperationsOverVariousTopics/
>  :
> {code}
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> kafka.admin.ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics(ReassignPartitionsClusterTest.scala:524)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6301) Incorrect Java Regex example '*' for mirroring all topics

2017-12-03 Thread Waleed Fateem (JIRA)
Waleed Fateem created KAFKA-6301:


 Summary: Incorrect Java Regex example '*' for mirroring all topics
 Key: KAFKA-6301
 URL: https://issues.apache.org/jira/browse/KAFKA-6301
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0, 0.11.0.0, 0.10.2.0
Reporter: Waleed Fateem
Priority: Minor


The documentation for section "Mirroring data between clusters" states the 
following:

Or you could mirror all topics using --whitelist '*'

The regular expression should be '.*' instead. 
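
The difference is easy to verify with the standard java.util.regex API: '*' on its own is not a valid pattern (the quantifier has nothing to repeat), while '.*' matches any topic name.

{code:java}
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class WhitelistRegexExample {
    public static void main(String[] args) {
        // '.*' is the correct "match everything" expression.
        System.out.println(Pattern.compile(".*").matcher("my-topic").matches()); // prints: true

        // '*' on its own is rejected: the quantifier has nothing to repeat.
        try {
            Pattern.compile("*");
        } catch (PatternSyntaxException e) {
            System.out.println("Invalid regex: " + e.getMessage());
        }
    }
}
{code}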



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6193) ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics fails sometimes

2017-12-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16275885#comment-16275885
 ] 

Ismael Juma commented on KAFKA-6193:


Found the reason for the failures and updated the PR to fix them.

> ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics
>  fails sometimes
> --
>
> Key: KAFKA-6193
> URL: https://issues.apache.org/jira/browse/KAFKA-6193
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ismael Juma
> Fix For: 1.1.0
>
> Attachments: 6193.out
>
>
> From 
> https://builds.apache.org/job/kafka-trunk-jdk8/2198/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldPerformMultipleReassignmentOperationsOverVariousTopics/
>  :
> {code}
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> kafka.admin.ReassignPartitionsClusterTest.shouldPerformMultipleReassignmentOperationsOverVariousTopics(ReassignPartitionsClusterTest.scala:524)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)