[jira] [Commented] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796435#comment-16796435
 ] 

Suman B N commented on KAFKA-6078:
--

[~mjsax], can I take a deeper look into this? Please assign the ticket to me.

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7382) We should guarantee at least one replica of a partition is alive when creating or updating a topic

2018-12-12 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718954#comment-16718954
 ] 

Suman B N commented on KAFKA-7382:
--

The check is not whether all replicas are up; it is whether at least one replica 
is up. Creating topics whose replicas are assigned entirely to invalid brokers 
ends up cluttering the Kafka cluster with invalid topics.
Also, why do we even need to allow topics to be created when those brokers are 
offline? Such topics, once created, cannot be deleted.
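
The check described above could be sketched roughly like this (a minimal, self-contained illustration; the class and method names are hypothetical and this is not Kafka's actual admin-side validation code):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: reject a replica assignment when some partition has
// no replica on any live broker. Names are illustrative, not Kafka's API.
public class ReplicaAssignmentValidator {

    // Returns the partition ids whose replica lists contain no live broker.
    public static List<Integer> partitionsWithNoLiveReplica(
            Map<Integer, List<Integer>> assignment, Set<Integer> liveBrokers) {
        return assignment.entrySet().stream()
                .filter(e -> e.getValue().stream().noneMatch(liveBrokers::contains))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Set<Integer> live = Set.of(1, 2, 3, 4, 5);
        // Assignment like the one in the issue: replica ids 11..15 are all invalid.
        Map<Integer, List<Integer>> bad = Map.of(
                0, List.of(11, 12, 13),
                1, List.of(12, 13, 14));
        System.out.println(partitionsWithNoLiveReplica(bad, live)); // [0, 1]

        // At least one live replica (broker 1) -> accepted.
        Map<Integer, List<Integer>> ok = Map.of(0, List.of(1, 11));
        System.out.println(partitionsWithNoLiveReplica(ok, live)); // []
    }
}
```

A create- or alter-topic request would be rejected whenever the returned list is non-empty.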

> We should guarantee at least one replica of a partition is alive when 
> creating or updating a topic
> ---
>
> Key: KAFKA-7382
> URL: https://issues.apache.org/jira/browse/KAFKA-7382
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.2.0
>Reporter: zhaoshijie
>Priority: Major
>
> For example: I have brokers 1,2,3,4,5. I create a new topic with the command: 
> {code:java}
> sh kafka-topics.sh --create --topic replicaserror --zookeeper localhost:2181 
> --replica-assignment 11:12:13,12:13:14,14:15:11,14:12:11,13:14:11
> {code}
> The KafkaController then processes this. After the partitionStateMachine and 
> replicaStateMachine handle the state change, the topic's metadata and state 
> are strange: partitions are in NewPartition while replicas are in 
> OnlineReplica. We then cannot delete this topic (because the state change is 
> illegal), which causes a number of problems. So I think we should validate the 
> replica assignment when creating or updating a topic.





[jira] [Commented] (KAFKA-7465) kafka-topics.sh command not honouring --disable-rack-aware property when adding partitions.

2018-10-01 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634618#comment-16634618
 ] 

Suman B N commented on KAFKA-7465:
--

Fixed. Pull request is [here|https://github.com/apache/kafka/pull/5721].

> kafka-topics.sh command not honouring --disable-rack-aware property when 
> adding partitions.
> ---
>
> Key: KAFKA-7465
> URL: https://issues.apache.org/jira/browse/KAFKA-7465
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0, 1.0.0, 1.1.0, 2.0.0
>Reporter: Suman B N
>Priority: Minor
> Fix For: 2.0.1
>
>
> kafka-topics.sh command not honouring --disable-rack-aware property when 
> adding partitions.
> Create topic honours the --disable-rack-aware property, whereas alter topic 
> always uses the default RackAwareMode (Enforced) to add partitions.
> Steps:
>  * Start brokers with rack assignments: 0 -> r1, 1 -> r1, 2 -> r2, 3 -> r2
>  * Create topic _topic1_ with 8 partitions and RF 2. No partition should have 
> replicas on brokers in the same rack, e.g. replica sets 0,1 and 2,3 should 
> never be created. Create topic _topic2_ with 8 partitions and RF 2 with 
> --disable-rack-aware. A partition may have replicas on brokers in the same 
> rack, e.g. replica sets 0,1 and 2,3 are acceptable.
>  * Add 8 more partitions to _topic1_. No newly added partition should have 
> replicas on brokers in the same rack.
>  * Add 8 more partitions to _topic2_ with --disable-rack-aware. A newly added 
> partition may have replicas on brokers in the same rack. Try repeating this 
> step: no matter what, the new partitions are always rack-aware.
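
The rack-aware invariant the steps above exercise can be sketched as follows (a minimal, self-contained illustration; the class and method names are hypothetical, not Kafka's replica-assignment code):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the property being tested: with rack awareness
// enforced, no partition should place two replicas on brokers in the same
// rack. Names are illustrative, not Kafka's actual API.
public class RackAwarenessCheck {

    // Returns true if every partition's replicas sit on distinct racks.
    public static boolean isRackAware(List<List<Integer>> assignment,
                                      Map<Integer, String> brokerRacks) {
        for (List<Integer> replicas : assignment) {
            long distinctRacks = replicas.stream()
                    .map(brokerRacks::get)
                    .distinct()
                    .count();
            if (distinctRacks < replicas.size()) {
                return false; // two replicas share a rack
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Rack layout from the steps: brokers 0,1 -> r1 and 2,3 -> r2.
        Map<Integer, String> racks = Map.of(0, "r1", 1, "r1", 2, "r2", 3, "r2");
        // Replicas spread across racks -> rack-aware.
        System.out.println(isRackAware(
                List.of(List.of(0, 2), List.of(1, 3)), racks)); // true
        // Both replicas on rack r1 -> violates the invariant.
        System.out.println(isRackAware(
                List.of(List.of(0, 1)), racks)); // false
    }
}
```

Per the report, newly added partitions should fail this check when --disable-rack-aware is passed and the placement happens to co-locate replicas, but alter topic never produces such placements.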





[jira] [Commented] (KAFKA-7465) kafka-topics.sh command not honouring --disable-rack-aware property when adding partitions.

2018-10-01 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634538#comment-16634538
 ] 

Suman B N commented on KAFKA-7465:
--

Working on this fix. Please assgin the jira to me. [~cmccabe] [~omkreddy]






[jira] [Created] (KAFKA-7465) kafka-topics.sh command not honouring --disable-rack-aware property when adding partitions.

2018-10-01 Thread Suman B N (JIRA)
Suman B N created KAFKA-7465:


 Summary: kafka-topics.sh command not honouring 
--disable-rack-aware property when adding partitions.
 Key: KAFKA-7465
 URL: https://issues.apache.org/jira/browse/KAFKA-7465
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.0.0, 1.1.0, 1.0.0, 0.11.0.0
Reporter: Suman B N
 Fix For: 2.0.1


kafka-topics.sh command not honouring --disable-rack-aware property when adding 
partitions.

Create topic honours the --disable-rack-aware property, whereas alter topic 
always uses the default RackAwareMode (Enforced) to add partitions.

Steps:
 * Start brokers with rack assignments: 0 -> r1, 1 -> r1, 2 -> r2, 3 -> r2
 * Create topic _topic1_ with 8 partitions and RF 2. No partition should have 
replicas on brokers in the same rack, e.g. replica sets 0,1 and 2,3 should 
never be created. Create topic _topic2_ with 8 partitions and RF 2 with 
--disable-rack-aware. A partition may have replicas on brokers in the same 
rack, e.g. replica sets 0,1 and 2,3 are acceptable.
 * Add 8 more partitions to _topic1_. No newly added partition should have 
replicas on brokers in the same rack.
 * Add 8 more partitions to _topic2_ with --disable-rack-aware. A newly added 
partition may have replicas on brokers in the same rack. Try repeating this 
step: no matter what, the new partitions are always rack-aware.





[jira] [Commented] (KAFKA-6764) ConsoleConsumer behavior inconsistent when specifying --partition with --from-beginning

2018-09-26 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628748#comment-16628748
 ] 

Suman B N commented on KAFKA-6764:
--

Fixed. Pull request is [https://github.com/apache/kafka/pull/5637]

> ConsoleConsumer behavior inconsistent when specifying --partition with 
> --from-beginning 
> 
>
> Key: KAFKA-6764
> URL: https://issues.apache.org/jira/browse/KAFKA-6764
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Larry McQueary
>Assignee: Larry McQueary
>Priority: Minor
>  Labels: newbie
>
> Per its usage statement, {{kafka-console-consumer.sh}} ignores 
> {{\-\-from-beginning}} when the specified consumer group has committed 
> offsets, and sets {{auto.offset.reset}} to {{latest}}. However, if 
> {{\-\-partition}} is also specified, {{\-\-from-beginning}} is observed in 
> all cases, whether there are committed offsets or not.
> This happens because when {{\-\-from-beginning}} is specified, {{offsetArg}} 
> is set to {{OffsetRequest.EarliestTime}}. However, {{offsetArg}} is [only 
> passed to the 
> constructor|https://github.com/apache/kafka/blob/fedac0cea74fce529ee1c0cefd6af53ecbdd/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L76-L79]
>  for {{NewShinyConsumer}} when {{\-\-partition}} is also specified. Hence, it 
> is honored in this case and not the other.
> This case should either be handled consistently, or the usage statement 
> should be modified to indicate the actual behavior/usage. 





[jira] [Commented] (KAFKA-7412) Bug prone response from producer.send(ProducerRecord, Callback) if Kafka broker is not running

2018-09-24 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626414#comment-16626414
 ] 

Suman B N commented on KAFKA-7412:
--

I used the same client, broker, and code, but every time there is an 
exception: org.apache.kafka.common.errors.TimeoutException. Please re-run and 
see if you can reproduce.

> Bug prone response from producer.send(ProducerRecord, Callback) if Kafka 
> broker is not running
> --
>
> Key: KAFKA-7412
> URL: https://issues.apache.org/jira/browse/KAFKA-7412
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 2.0.0
>Reporter: Michal Turek
>Priority: Major
> Attachments: metadata_when_kafka_is_stopped.png
>
>
> Hi there, I have probably found a bug in Java Kafka producer client.
> Scenario & current behavior:
> - Start Kafka broker, single instance.
> - Start application that produces messages to Kafka.
> - Let the application load partitions for a topic to warm up the producer, 
> e.g. send a message to Kafka. I'm not sure if this is a necessary step, but 
> our code does it.
> - Gracefully stop the Kafka broker.
> - The application log now contains "org.apache.kafka.clients.NetworkClient: 
> [Producer clientId=...] Connection to node 0 could not be established. Broker 
> may not be available.", so the client is aware of the Kafka unavailability.
> - Trigger the producer to send a message using 
> KafkaProducer.send(ProducerRecord, Callback) method.
> - The callback that notifies business code receives a non-null RecordMetadata 
> and a null Exception after request.timeout.ms. The metadata contains offset 
> -1, which is the value of ProduceResponse.INVALID_OFFSET.
> Expected behavior:
> - If Kafka is not running and the message is not appended to the log, the 
> callback should receive a null RecordMetadata and a non-null Exception. At 
> least that is how I read the Javadoc: "exception on production error", in 
> simple words.
> - A developer who is not aware of this behavior and doesn't test for offset 
> -1 may consider the message successfully sent and properly acked by the 
> broker.
> Known workaround:
> - Together with checking for a non-null exception in the callback, add 
> another condition for ProduceResponse.INVALID_OFFSET.
> {noformat}
> try {
>     producer.send(record, (metadata, exception) -> {
>         if (metadata != null) {
>             if (metadata.offset() != ProduceResponse.INVALID_OFFSET) {
>                 // Success
>             } else {
>                 // Failure
>             }
>         } else {
>             // Failure
>         }
>     });
> } catch (Exception e) {
>     // Failure
> }
> {noformat}
> Used setup:
> - Latest Kafka 2.0.0 for both broker and Java client.
> - Originally found with broker 0.11.0.1 and client 2.0.0.
> - The code is analogous to the one in the Javadoc of KafkaProducer.send().
> - Used producer configuration (others use defaults):
> {noformat}
> bootstrap.servers = "localhost:9092"
> client.id = "..."
> acks = "all"
> retries = 1
> linger.ms = "20"
> compression.type = "lz4"
> request.timeout.ms = 5000 # The same behavior is with default, this is to 
> speed up the tests
> {noformat}





[jira] [Commented] (KAFKA-7382) We should guarantee at least one replica of a partition is alive when creating or updating a topic

2018-09-20 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621601#comment-16621601
 ] 

Suman B N commented on KAFKA-7382:
--

Fixed. Pull request: https://github.com/apache/kafka/pull/5665






[jira] [Commented] (KAFKA-7382) We should guarantee at least one replica of a partition is alive when creating or updating a topic

2018-09-19 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621066#comment-16621066
 ] 

Suman B N commented on KAFKA-7382:
--

Working on this. I already have a fix for this.






[jira] [Commented] (KAFKA-7382) We should guarantee at least one replica of a partition is alive when creating or updating a topic

2018-09-19 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620641#comment-16620641
 ] 

Suman B N commented on KAFKA-7382:
--

Agreed. It is already handled in alterTopic; it should also be handled in 
createTopic.






[jira] [Comment Edited] (KAFKA-6764) ConsoleConsumer behavior inconsistent when specifying --partition with --from-beginning

2018-09-10 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609249#comment-16609249
 ] 

Suman B N edited comment on KAFKA-6764 at 9/10/18 9:06 PM:
---

consumer.subscribe() should be called whenever a group id is specified, no 
matter whether a partition/offset is specified or not. If a partition/offset is 
specified, the consumer should then seek(TopicPartition) accordingly. 
A console consumer without a group id can be assigned a partition/offset based 
on the config specified. Unless one is specified, subscribe() is honoured in 
the current scenario. That provides consistent behaviour as follows:
* With a group id, always subscribe and auto-assign. Seek based on the 
partition/offset config (not sure if seek should be supported by the console 
consumer when a group id is specified, because assign() and subscribe() can't 
be used together for the same topic-consumer-group combination).
* Without a group id (auto offset commit is always disabled):
** Partition/offset specified, then assign.
** Partition/offset not specified, then subscribe.

Correct me if I am wrong.


was (Author: sumannewton):
consumer.subscribe() should be called whenever a group id is specified, no 
matter whether a partition/offset is specified or not. If a partition/offset is 
specified, the consumer should then seek(TopicPartition) accordingly. 
A console consumer without a group id can be assigned a partition/offset based 
on the config specified. Unless one is specified, subscribe() is honoured in 
the current scenario. That provides consistent behaviour as follows:
* With a group id, always subscribe and auto-assign. Seek based on the 
partition/offset config.
* Without a group id (auto offset commit is always disabled):
** Partition/offset specified, then assign.
** Partition/offset not specified, then subscribe.
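
The dispatch rule proposed in the comment above can be sketched as follows (a minimal illustration of the proposal, not ConsoleConsumer's actual code; the class and method names are hypothetical):

```java
// Hypothetical sketch of the proposed console-consumer dispatch rule:
// a group id always means subscribe(); without one, an explicit partition
// means assign(). Names here are illustrative only.
public class ConsumerModeChooser {

    // Decide whether the console consumer should subscribe() or assign().
    public static String chooseMode(boolean hasGroupId, boolean hasPartition) {
        if (hasGroupId) {
            // With a group id, always subscribe and let the group protocol
            // auto-assign partitions (optionally seek() if an offset is given).
            return "subscribe";
        }
        // Without a group id: explicit partition -> assign, else subscribe.
        return hasPartition ? "assign" : "subscribe";
    }

    public static void main(String[] args) {
        System.out.println(chooseMode(true, true));   // subscribe
        System.out.println(chooseMode(false, true));  // assign
        System.out.println(chooseMode(false, false)); // subscribe
    }
}
```

The point of the rule is consistency: --from-beginning would then be interpreted the same way whether or not --partition is passed.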






[jira] [Commented] (KAFKA-6764) ConsoleConsumer behavior inconsistent when specifying --partition with --from-beginning

2018-09-10 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609249#comment-16609249
 ] 

Suman B N commented on KAFKA-6764:
--

consumer.subscribe() should be called whenever a group id is specified, no 
matter whether a partition/offset is specified or not. If a partition/offset is 
specified, the consumer should then seek(TopicPartition) accordingly. 
A console consumer without a group id can be assigned a partition/offset based 
on the config specified. Unless one is specified, subscribe() is honoured in 
the current scenario. That provides consistent behaviour as follows:
* With a group id, always subscribe and auto-assign. Seek based on the 
partition/offset config.
* Without a group id (auto offset commit is always disabled):
** Partition/offset specified, then assign.
** Partition/offset not specified, then subscribe.



