[jira] [Commented] (KAFKA-8021) KafkaProducer.flush() can show unexpected behavior when a batch is split

2019-03-01 Thread Sriharsha Chintalapani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781820#comment-16781820
 ] 

Sriharsha Chintalapani commented on KAFKA-8021:
---

[~amendhekar] Have you noticed this issue with the latest release, 2.1.1?

> KafkaProducer.flush() can show unexpected behavior when a batch is split
> 
>
> Key: KAFKA-8021
> URL: https://issues.apache.org/jira/browse/KAFKA-8021
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.11.0.0
>Reporter: Abhishek Mendhekar
>Assignee: Abhishek Mendhekar
>Priority: Major
>
> KafkaProducer.flush() marks the flush in progress and then waits for all 
> incomplete batches to be completed (waits on the producer batch futures to 
> finish).
> The behavior is seen when a batch is split due to a MESSAGE_TOO_LARGE 
> exception. The large batch is split into smaller batches (2 or more), but 
> ProducerBatch.split() marks the large batch's future as complete before adding 
> the new batches to the incomplete list of batches. If KafkaProducer.flush() is 
> called at this moment, it makes a copy of the existing incomplete list of 
> batches and waits for those to complete, missing the smaller batches that the 
> large batch was split into.
>  
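To make the interleaving concrete, here is a minimal sketch of the race with
simplified, illustrative names (not the actual Kafka internals): split()
completes the parent batch's future before registering the child batches, so a
concurrent flush() that snapshots the incomplete set in that window waits on
neither.

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Simplified model of the race: each "batch" is a future the producer
// waits on, and flush() awaits a snapshot of the incomplete set.
public class FlushSplitRace {
    private final Set<CompletableFuture<Void>> incomplete = new HashSet<>();

    private synchronized Set<CompletableFuture<Void>> snapshot() {
        return new HashSet<>(incomplete);
    }

    private synchronized void add(CompletableFuture<Void> batch) {
        incomplete.add(batch);
    }

    private synchronized void remove(CompletableFuture<Void> batch) {
        incomplete.remove(batch);
    }

    // Mirrors the problematic ordering: the parent future is completed
    // and removed BEFORE the child batches are registered.
    public void split(CompletableFuture<Void> parent,
                      CompletableFuture<Void> child1,
                      CompletableFuture<Void> child2) {
        remove(parent);
        parent.complete(null); // parent now looks "done"
        // <-- a flush() snapshot taken here sees neither parent nor children
        add(child1);
        add(child2);
    }

    public void flush() {
        for (CompletableFuture<Void> batch : snapshot()) {
            batch.join(); // waits only on the snapshot, not on later splits
        }
    }
}
{code}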



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8191) Add pluggability of KeyManager to generate the broker Private Keys and Certificates

2019-04-04 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-8191:
--
Fix Version/s: (was: 1.1.1)
   (was: 1.1.0)

> Add pluggability of KeyManager to generate the broker Private Keys and 
> Certificates
> ---
>
> Key: KAFKA-8191
> URL: https://issues.apache.org/jira/browse/KAFKA-8191
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.1.0, 1.1.1
>Reporter: Sai Sandeep
>Priority: Minor
>  Labels: security
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  
> *Context:* Currently, in SslFactory.java, if the keystore is created as null 
> (caused by passing an empty config value to ssl.keystore.location), the 
> default Sun KeyManager is used, ignoring the provided 
> 'ssl.keymanager.algorithm'.
> We need changes to fetch the KeyManager from the KeyManagerFactory based on 
> the provided keymanager algorithm, populated by 'ssl.keymanager.algorithm', 
> even if the keystore is found empty.
>  
> *Background and Use Case:* Kafka allows users to configure a truststore and 
> keystore to enable TLS connections from clients to brokers. Often this means 
> that during deployment, one needs to pre-provision keystores to enable clients 
> to communicate with brokers on the TLS port. Most of the time users end up 
> configuring a long-lived certificate, which is not good for security. Although 
> KAFKA-4701 introduced keystore reloading, it is still cumbersome to distribute 
> these files onto compute systems for clients. 
> There are several projects that allow one to distribute certificates through a 
> local agent, for example [Spiffe|https://spiffe.io/]. To take advantage of 
> such systems we need changes to consider 'ssl.keymanager.algorithm' for 
> KeyManagerFactory creation.
>  
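A sketch of the proposed direction (an assumption of what the fix could look
like, not the actual patch): honor the configured algorithm when building the
KeyManagerFactory even when no keystore is given, so a provider registered
under that algorithm can supply keys and certificates.

{code:java}
import java.security.KeyStore;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;

// Hypothetical sketch: pick the KeyManagerFactory by the configured
// algorithm (ssl.keymanager.algorithm) instead of silently falling back
// to the default Sun KeyManager when the keystore is null.
public class KeyManagerSelection {
    static KeyManager[] createKeyManagers(KeyStore keystore,
                                          char[] keyPassword,
                                          String configuredAlgorithm) throws Exception {
        String algorithm = configuredAlgorithm != null
                ? configuredAlgorithm
                : KeyManagerFactory.getDefaultAlgorithm();
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(algorithm);
        // init() accepts a null KeyStore; a provider registered under
        // `algorithm` can then supply keys/certs from elsewhere (e.g. a
        // local agent such as SPIFFE) without a keystore file on disk.
        kmf.init(keystore, keyPassword);
        return kmf.getKeyManagers();
    }
}
{code}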



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8191) Add pluggability of KeyManager to generate the broker Private Keys and Certificates

2019-04-04 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-8191:
-

Assignee: Sai Sandeep

> Add pluggability of KeyManager to generate the broker Private Keys and 
> Certificates
> ---
>
> Key: KAFKA-8191
> URL: https://issues.apache.org/jira/browse/KAFKA-8191
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.1.0, 1.1.1
>Reporter: Sai Sandeep
>Assignee: Sai Sandeep
>Priority: Minor
>  Labels: security
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  
> *Context:* Currently, in SslFactory.java, if the keystore is created as null 
> (caused by passing an empty config value to ssl.keystore.location), the 
> default Sun KeyManager is used, ignoring the provided 
> 'ssl.keymanager.algorithm'.
> We need changes to fetch the KeyManager from the KeyManagerFactory based on 
> the provided keymanager algorithm, populated by 'ssl.keymanager.algorithm', 
> even if the keystore is found empty.
>  
> *Background and Use Case:* Kafka allows users to configure a truststore and 
> keystore to enable TLS connections from clients to brokers. Often this means 
> that during deployment, one needs to pre-provision keystores to enable clients 
> to communicate with brokers on the TLS port. Most of the time users end up 
> configuring a long-lived certificate, which is not good for security. Although 
> KAFKA-4701 introduced keystore reloading, it is still cumbersome to distribute 
> these files onto compute systems for clients. 
> There are several projects that allow one to distribute certificates through a 
> local agent, for example [Spiffe|https://spiffe.io/]. To take advantage of 
> such systems we need changes to consider 'ssl.keymanager.algorithm' for 
> KeyManagerFactory creation.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8203) plaintext connections to SSL secured broker can be handled more elegantly

2019-04-09 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-8203:
-

Assignee: Satish Duggana

> plaintext connections to SSL secured broker can be handled more elegantly
> -
>
> Key: KAFKA-8203
> URL: https://issues.apache.org/jira/browse/KAFKA-8203
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.1.1
>Reporter: Jorg Heymans
>Assignee: Satish Duggana
>Priority: Major
>
> Mailing list thread: 
> [https://lists.apache.org/thread.html/39935157351c0ad590e6cf02027816d664f1fd3724a25c1133a3bba6@%3Cusers.kafka.apache.org%3E]
> reproduced here:
> We have our brokers secured with these standard properties
>  
> {code:java}
> listeners=SSL://a.b.c:9030 
> ssl.truststore.location=... 
> ssl.truststore.password=... 
> ssl.keystore.location=... 
> ssl.keystore.password=... 
> ssl.key.password=... 
> ssl.client.auth=required 
> ssl.enabled.protocols=TLSv1.2
> {code}
> It's a bit surprising to see that when a (Java) client attempts to connect 
> without having SSL configured, thus making a PLAINTEXT connection instead, it 
> does not get a TLS exception indicating that SSL is required. I would have 
> expected a hard transport-level exception making it clear that non-SSL 
> connections are not allowed; instead, the client sees this (when debug logging 
> is enabled):
> {code:java}
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 21234bee31165527
> [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=my-test-group] Kafka consumer initialized
> [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=my-test-group] Subscribed to topic(s): events
> [main] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=my-test-group] Sending FindCoordinator request to broker a.b.c:9030 (id: -1 rack: null)
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initiating connection to node a.b.c:9030 (id: -1 rack: null) using address /a.b.c
> [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
> [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
> [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
> [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=my-test-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Completed connection to node -1. Fetching API versions.
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Initiating API versions fetch from node -1.
> [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=my-test-group] Connection with /a.b.c disconnected
> java.io.EOFException
>     at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119)
>     at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381)
>     at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342)
>     at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
>     at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
>     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
>     at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
>     at eu.europa.ec.han.TestConsumer.main(TestConsumer.java:22)
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer

[jira] [Commented] (KAFKA-8205) Kafka SSL encryption of data at rest

2019-04-09 Thread Sriharsha Chintalapani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16813659#comment-16813659
 ] 

Sriharsha Chintalapani commented on KAFKA-8205:
---

[~nitena2019] This question should be asked on the mailing list rather than by 
opening a JIRA.

Kafka doesn't have data-at-rest encryption yet. Kafka SSL provides wire 
encryption only.

What do you mean by your data in the logs being encrypted? Is it possible that 
what you are seeing is serialized data from producers?

Also, make sure you don't have disk encryption from the OS or another third 
party turned on.

> Kafka SSL encryption of data at rest
> 
>
> Key: KAFKA-8205
> URL: https://issues.apache.org/jira/browse/KAFKA-8205
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.1
> Environment: All
>Reporter: Niten Aggarwal
>Priority: Major
>
> Recently we enabled SSL on our Kafka cluster, which earlier had SASL 
> PLAINTEXT. Everything works fine from both the producer and consumer 
> standpoint as expected, with one strange behavior: we noticed data in the log 
> file is also encrypted, which we didn't expect, because SSL is meant for 
> transport-level security, not to encrypt data at rest.
> It doesn't mean we have any issues with that, but we would like to understand 
> what enables encrypting data at rest. Do we have a way to:
> 1) turn it off
> 2) extend the encryption algorithm, if a company would like to use its own key 
> management system and a different algorithm?
> After going through the Kafka docs, we realized there is a KIP already in 
> discussion, but how come it's implemented without being approved?
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-317%3A+Add+transparent+data+encryption+functionality]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3532) add principal.builder.class that can extract user from a field

2017-09-04 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153085#comment-16153085
 ] 

Sriharsha Chintalapani commented on KAFKA-3532:
---

[~kaufman] Sorry, I didn't see your earlier comment. I am not working on this; 
you can take it over.

> add principal.builder.class that can extract user from a field
> --
>
> Key: KAFKA-3532
> URL: https://issues.apache.org/jira/browse/KAFKA-3532
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>Assignee: Kaufman Ng
>
> By default, the user name associated with an SSL connection looks like the 
> following. Often, people may want to extract one of the fields (e.g., CN) as 
> the user. It would be good if we had a built-in principal builder that does 
> that.
> CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
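For illustration, extracting the CN from such a DN can be done with the JDK's
LdapName; a built-in principal builder could wrap logic like this (a hedged
sketch, not Kafka's actual API):

{code:java}
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

// Illustrative sketch (not the actual Kafka API): pull the CN out of
// the X.500 distinguished name reported for an SSL connection.
public class CnExtractor {
    static String extractCn(String dn) throws Exception {
        for (Rdn rdn : new LdapName(dn).getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        return dn; // fall back to the full DN if no CN is present
    }

    public static void main(String[] args) throws Exception {
        String dn = "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown";
        System.out.println(extractCn(dn)); // prints: writeuser
    }
}
{code}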



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-3532) add principal.builder.class that can extract user from a field

2017-09-04 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-3532:
-

Assignee: Kaufman Ng  (was: Sriharsha Chintalapani)

> add principal.builder.class that can extract user from a field
> --
>
> Key: KAFKA-3532
> URL: https://issues.apache.org/jira/browse/KAFKA-3532
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>Assignee: Kaufman Ng
>
> By default, the user name associated with an SSL connection looks like the 
> following. Often, people may want to extract one of the fields (e.g., CN) as 
> the user. It would be good if we had a built-in principal builder that does 
> that.
> CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6359) Work for KIP-236

2018-10-19 Thread Sriharsha Chintalapani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657417#comment-16657417
 ] 

Sriharsha Chintalapani commented on KAFKA-6359:
---

[~tombentley] Are you still interested in finishing this patch? I would like to 
help finish or review it. We would like to have this feature.

> Work for KIP-236
> 
>
> Key: KAFKA-6359
> URL: https://issues.apache.org/jira/browse/KAFKA-6359
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> This issue is for the work described in KIP-236.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7739) Kafka Tiered Storage

2018-12-14 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-7739:
-

 Summary: Kafka Tiered Storage
 Key: KAFKA-7739
 URL: https://issues.apache.org/jira/browse/KAFKA-7739
 Project: Kafka
  Issue Type: New Feature
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani


More details are in the KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7374) Tiered Storage

2018-12-14 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-7374.
---
Resolution: Duplicate

> Tiered Storage
> --
>
> Key: KAFKA-7374
> URL: https://issues.apache.org/jira/browse/KAFKA-7374
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.0.0
>Reporter: Maciej Bryński
>Priority: Major
>
> Both Pravega and Pulsar give the possibility to use tiered storage.
> We can store old messages on a different FS like S3 or HDFS.
> Kafka should give a similar possibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6627) Console producer default config values override explicitly provided properties

2019-01-02 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-6627:
-

Assignee: Kan Li  (was: Sandor Murakozi)

> Console producer default config values override explicitly provided properties
> --
>
> Key: KAFKA-6627
> URL: https://issues.apache.org/jira/browse/KAFKA-6627
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Kan Li
>Priority: Minor
>  Labels: newbie
>
> Some producer properties can be provided through custom parameters (e.g. 
> {{\-\-request-required-acks}}) and explicitly through 
> {{\-\-producer-property}}. At the moment, some of the custom parameters have 
> default values which actually override explicitly provided properties. For 
> example, if you set {{\-\-producer-property acks=all}} when starting the 
> console producer, the argument will be ignored since 
> {{\-\-request-required-acks}} has a default value. 
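For instance (an illustrative invocation; the broker address and topic name are
placeholders), the explicit acks=all below is silently overridden because
{{\-\-request-required-acks}} carries a built-in default:

{code}
kafka-console-producer.sh --broker-list localhost:9092 --topic test \
  --producer-property acks=all
{code}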



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6431) Lock contention in Purgatory

2019-01-09 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-6431:
--
Fix Version/s: 2.2.0

> Lock contention in Purgatory
> 
>
> Key: KAFKA-6431
> URL: https://issues.apache.org/jira/browse/KAFKA-6431
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, purgatory
>Reporter: Ying Zheng
>Assignee: Ying Zheng
>Priority: Minor
> Fix For: 2.2.0
>
>
> Purgatory is the data structure in the Kafka broker that manages delayed 
> operations. There is a ConcurrentHashMap (Kafka Pool) that maps each operation 
> key to the operations (in a ConcurrentLinkedQueue) that are interested in the 
> key.
> When an operation is done or expired, it's removed from the list 
> (ConcurrentLinkedQueue). When the list is empty, it's removed from the 
> ConcurrentHashMap. The 2nd operation has to be protected by a lock, to avoid 
> adding new operations into a list that is being removed. This is currently 
> done by a globally shared ReentrantReadWriteLock. All the read operations on 
> purgatory have to acquire the read permission of this lock. The list-removing 
> operations need the write permission of this lock.
> Our profiling results show that the Kafka broker is spending a nontrivial 
> amount of time on this read-write lock.
> The problem is exacerbated when there are a large number of short-lived 
> operations. For example, when we are doing sync produce operations (acks=all), 
> a DelayedProduce operation is added and then removed for each message. If the 
> QPS of the topic is not high, it's very likely that, when the operation is 
> done and removed, the list of that key (topic partition) also becomes empty, 
> and has to be removed while holding the write lock. This operation blocks all 
> the read / write operations on the entire purgatory for a while. As there are 
> tens of I/O threads accessing purgatory concurrently, this shared lock can 
> easily become a bottleneck. 
> Actually, we only want to avoid concurrent reads / writes on the same key. The 
> operations on different keys do not conflict with each other.
> I suggest sharding purgatory into smaller partitions, and locking each 
> individual partition independently.
> Assuming there are 10 I/O threads actively accessing purgatory, sharding 
> purgatory into 512 partitions will make the probability of 2 or more threads 
> accessing the same partition at the same time about 2%. We can also use 
> ReentrantLock instead of ReentrantReadWriteLock. When the read operations are 
> not much more frequent than write operations, ReentrantLock has lower overhead 
> than ReentrantReadWriteLock.
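A minimal sketch of the sharding idea (illustrative only, not the actual
patch): hash the operation key to one of 512 plain ReentrantLocks, so only
operations whose keys land on the same shard contend with each other.

{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the sharding proposal: one plain ReentrantLock
// per shard instead of a single global ReentrantReadWriteLock.
public class ShardedPurgatoryLocks {
    private static final int NUM_SHARDS = 512;
    private final ReentrantLock[] shards = new ReentrantLock[NUM_SHARDS];

    public ShardedPurgatoryLocks() {
        for (int i = 0; i < NUM_SHARDS; i++) {
            shards[i] = new ReentrantLock();
        }
    }

    private ReentrantLock shardFor(Object key) {
        // mask off the sign bit, then map the hash onto a shard
        return shards[(key.hashCode() & 0x7fffffff) % NUM_SHARDS];
    }

    public void withKeyLock(Object key, Runnable action) {
        ReentrantLock lock = shardFor(key);
        lock.lock();
        try {
            action.run();
        } finally {
            lock.unlock();
        }
    }
}
{code}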



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6431) Lock contention in Purgatory

2019-01-09 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-6431.
---
Resolution: Fixed

> Lock contention in Purgatory
> 
>
> Key: KAFKA-6431
> URL: https://issues.apache.org/jira/browse/KAFKA-6431
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, purgatory
>Reporter: Ying Zheng
>Assignee: Ying Zheng
>Priority: Minor
> Fix For: 2.2.0
>
>
> Purgatory is the data structure in the Kafka broker that manages delayed 
> operations. There is a ConcurrentHashMap (Kafka Pool) that maps each operation 
> key to the operations (in a ConcurrentLinkedQueue) that are interested in the 
> key.
> When an operation is done or expired, it's removed from the list 
> (ConcurrentLinkedQueue). When the list is empty, it's removed from the 
> ConcurrentHashMap. The 2nd operation has to be protected by a lock, to avoid 
> adding new operations into a list that is being removed. This is currently 
> done by a globally shared ReentrantReadWriteLock. All the read operations on 
> purgatory have to acquire the read permission of this lock. The list-removing 
> operations need the write permission of this lock.
> Our profiling results show that the Kafka broker is spending a nontrivial 
> amount of time on this read-write lock.
> The problem is exacerbated when there are a large number of short-lived 
> operations. For example, when we are doing sync produce operations (acks=all), 
> a DelayedProduce operation is added and then removed for each message. If the 
> QPS of the topic is not high, it's very likely that, when the operation is 
> done and removed, the list of that key (topic partition) also becomes empty, 
> and has to be removed while holding the write lock. This operation blocks all 
> the read / write operations on the entire purgatory for a while. As there are 
> tens of I/O threads accessing purgatory concurrently, this shared lock can 
> easily become a bottleneck. 
> Actually, we only want to avoid concurrent reads / writes on the same key. The 
> operations on different keys do not conflict with each other.
> I suggest sharding purgatory into smaller partitions, and locking each 
> individual partition independently.
> Assuming there are 10 I/O threads actively accessing purgatory, sharding 
> purgatory into 512 partitions will make the probability of 2 or more threads 
> accessing the same partition at the same time about 2%. We can also use 
> ReentrantLock instead of ReentrantReadWriteLock. When the read operations are 
> not much more frequent than write operations, ReentrantLock has lower overhead 
> than ReentrantReadWriteLock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4453) add request prioritization

2019-01-10 Thread Sriharsha Chintalapani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16739798#comment-16739798
 ] 

Sriharsha Chintalapani commented on KAFKA-4453:
---

Hi [~mgharat], checking to see if you are still working on this JIRA. We are 
interested in this feature as well.

> add request prioritization
> --
>
> Key: KAFKA-4453
> URL: https://issues.apache.org/jira/browse/KAFKA-4453
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Onur Karaman
>Assignee: Mayuresh Gharat
>Priority: Major
>
> Today all requests (client requests, broker requests, controller requests) to 
> a broker are put into the same queue. They all have the same priority. So a 
> backlog of requests ahead of the controller request will delay the processing 
> of controller requests. This causes requests in front of the controller 
> request to get processed based on stale state.
> Side effects may include giving clients stale metadata\[1\], rejecting 
> ProduceRequests and FetchRequests\[2\], and data loss (for some 
> unofficial\[3\] definition of data loss in terms of messages beyond the high 
> watermark)\[4\].
> We'd like to minimize the number of requests processed based on stale state. 
> With request prioritization, controller requests get processed before regular 
> queued-up requests, so requests can get processed with up-to-date state.
> \[1\] Say a client's MetadataRequest is sitting in front of a controller's 
> UpdateMetadataRequest on a given broker's request queue. Suppose the 
> MetadataRequest is for a topic whose partitions have recently undergone 
> leadership changes and that these leadership changes are being broadcast 
> from the controller in the later UpdateMetadataRequest. Today the broker 
> processes the MetadataRequest before processing the UpdateMetadataRequest, 
> meaning the metadata returned to the client will be stale. The client will 
> waste a roundtrip sending requests to the stale partition leader, get a 
> NOT_LEADER_FOR_PARTITION error, and will have to start all over and query the 
> topic metadata again.
> \[2\] Clients can issue ProduceRequests to the wrong broker based on stale 
> metadata, causing rejected ProduceRequests. Depending on how long the client 
> acts on the stale metadata, the impact may or may not be visible to a 
> producer application. If the number of rejected ProduceRequests does not 
> exceed the max number of retries, the producer application would not be 
> impacted. On the other hand, if the retries are exhausted, the failed produce 
> will be visible to the producer application.
> \[3\] The official definition of data loss in Kafka is when we lose a 
> "committed" message. A message is considered "committed" when all in-sync 
> replicas for that partition have applied it to their log.
> \[4\] Say a number of ProduceRequests are sitting in front of a controller's 
> LeaderAndIsrRequest on a given broker's request queue. Suppose the 
> ProduceRequests are for partitions whose leadership has recently shifted out 
> from the current broker to another broker in the replica set. Today the 
> broker processes the ProduceRequests before the LeaderAndIsrRequest, meaning 
> the ProduceRequests are getting processed on the former partition leader. As 
> part of becoming a follower for a partition, the broker truncates the log to 
> the high watermark. With weaker ack settings such as acks=1, the leader may 
> successfully write to its own log, respond to the user with a success, 
> process the LeaderAndIsrRequest making the broker a follower of the 
> partition, and truncate the log to a point before the user's produced 
> messages. So users have a false sense that their produce attempt succeeded 
> while in reality their messages got erased. While technically part of what 
> they signed up for with acks=1, it can still come as a surprise.
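A toy sketch of the prioritization idea (an illustration only, not Kafka's
eventual design): give controller requests a higher priority than client
requests in the shared queue so they are dequeued first, with a sequence number
preserving FIFO order within each priority level.

{code:java}
import java.util.concurrent.PriorityBlockingQueue;

// Toy sketch: controller requests are dequeued before client requests;
// a sequence number keeps FIFO order within each priority level.
public class PrioritizedRequestQueue {
    enum Priority { CONTROLLER, CLIENT } // lower ordinal = higher priority

    static final class Request implements Comparable<Request> {
        final Priority priority;
        final long sequence;
        final String payload;

        Request(Priority priority, long sequence, String payload) {
            this.priority = priority;
            this.sequence = sequence;
            this.payload = payload;
        }

        @Override
        public int compareTo(Request other) {
            int byPriority = priority.compareTo(other.priority);
            return byPriority != 0 ? byPriority
                                   : Long.compare(sequence, other.sequence);
        }
    }

    private final PriorityBlockingQueue<Request> queue = new PriorityBlockingQueue<>();
    private long nextSequence = 0;

    public synchronized void enqueue(Priority priority, String payload) {
        queue.put(new Request(priority, nextSequence++, payload));
    }

    public Request take() throws InterruptedException {
        // e.g. an UpdateMetadataRequest jumps ahead of queued produces
        return queue.take();
    }
}
{code}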



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6359) Work for KIP-236

2019-01-17 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-6359:
-

Assignee: (was: Tom Bentley)

> Work for KIP-236
> 
>
> Key: KAFKA-6359
> URL: https://issues.apache.org/jira/browse/KAFKA-6359
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Priority: Minor
>
> This issue is for the work described in KIP-236.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6359) Work for KIP-236

2019-01-17 Thread Sriharsha Chintalapani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16745566#comment-16745566
 ] 

Sriharsha Chintalapani commented on KAFKA-6359:
---

[~satish.duggana] Sorry, I didn't notice your message. [~sql_consulting] from 
my team is already making progress; sorry for the confusion here.

> Work for KIP-236
> 
>
> Key: KAFKA-6359
> URL: https://issues.apache.org/jira/browse/KAFKA-6359
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: GEORGE LI
>Priority: Minor
>
> This issue is for the work described in KIP-236.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6359) Work for KIP-236

2019-01-17 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-6359:
-

Assignee: GEORGE LI

> Work for KIP-236
> 
>
> Key: KAFKA-6359
> URL: https://issues.apache.org/jira/browse/KAFKA-6359
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: GEORGE LI
>Priority: Minor
>
> This issue is for the work described in KIP-236.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7864) AdminZkClient.validateTopicCreate() should validate that partitions are 0-based

2019-01-23 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-7864:
-

Assignee: Ryan

> AdminZkClient.validateTopicCreate() should validate that partitions are 
> 0-based
> ---
>
> Key: KAFKA-7864
> URL: https://issues.apache.org/jira/browse/KAFKA-7864
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Ryan
>Priority: Major
>  Labels: newbie
>
> AdminZkClient.validateTopicCreate() currently doesn't validate that partition 
> ids in a topic are consecutive, starting from 0. The client code depends on 
> that. So, it would be useful to tighten up the check.
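The intended check is simple; here is a hedged Java illustration
(AdminZkClient itself is Scala, and the names below are made up):

{code:java}
import java.util.List;

// Hypothetical names; partition ids must be exactly 0..n-1:
// no gaps, no duplicates, starting from 0.
public class PartitionIdValidation {
    static void validateConsecutiveFromZero(List<Integer> sortedPartitionIds) {
        for (int i = 0; i < sortedPartitionIds.size(); i++) {
            if (sortedPartitionIds.get(i) != i) {
                throw new IllegalArgumentException(
                    "Partition ids must be consecutive starting from 0, got "
                        + sortedPartitionIds);
            }
        }
    }
}
{code}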



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7116) Provide separate SSL configs for Kafka Broker replication

2018-06-28 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-7116:
-

 Summary: Provide separate SSL configs for Kafka Broker replication
 Key: KAFKA-7116
 URL: https://issues.apache.org/jira/browse/KAFKA-7116
 Project: Kafka
  Issue Type: Improvement
Reporter: Sriharsha Chintalapani
Assignee: GEORGE LI


Currently, we are using one set of SSL configs in server.properties both for 
the broker to accept connections and for replication to use on the client 
side. For the most part, we can use the server configs for the replication 
client as well, but there are cases where we need separation. 

For example, inter-broker connections may want SSL authentication but want to 
disable encryption for replication. This is not possible right now due to the 
same config name, "cipher_suites", being used for both server and client.  

Since this Jira introduces new configs, we will add a KIP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7116) Provide separate SSL configs for Kafka Broker replication

2018-06-28 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-7116.
---
Resolution: Implemented

Thanks [~ijuma]

> Provide separate SSL configs for Kafka Broker replication
> -
>
> Key: KAFKA-7116
> URL: https://issues.apache.org/jira/browse/KAFKA-7116
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sriharsha Chintalapani
>Assignee: GEORGE LI
>Priority: Major
>
> Currently, we are using one set of SSL configs in server.properties both for 
> the broker to accept connections and for replication to use on the client 
> side. For the most part, we can use the server configs for the replication 
> client as well, but there are cases where we need separation. 
> For example, inter-broker connections may want SSL authentication but want to 
> disable encryption for replication. This is not possible right now due to the 
> same config name, "cipher_suites", being used for both server and client.  
> Since this Jira introduces new configs, we will add a KIP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7219) Add topic/partition level metrics.

2018-08-01 Thread Sriharsha Chintalapani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565641#comment-16565641
 ] 

Sriharsha Chintalapani commented on KAFKA-7219:
---

[~satish.duggana] One more suggestion: add a high-watermark metric to each 
topic/partition. This will make it easier to calculate consumer lag from 
metrics instead of making a broker request to get the high watermark for a 
topic partition.

> Add topic/partition level metrics.
> --
>
> Key: KAFKA-7219
> URL: https://issues.apache.org/jira/browse/KAFKA-7219
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Satish Duggana
>Assignee: Satish Duggana
>Priority: Major
>  Labels: needs-kip
>
> Currently, Kafka generates different metrics for topics on a broker.
>   - MessagesInPerSec
>   - BytesInPerSec
>   - BytesOutPerSec
>   - BytesRejectedPerSec
>   - ReplicationBytesInPerSec
>   - ReplicationBytesOutPerSec
>   - FailedProduceRequestsPerSec
>   - FailedFetchRequestsPerSec
>   - TotalProduceRequestsPerSec
>   - TotalFetchRequestsPerSec
>   - FetchMessageConversionsPerSec
>   - ProduceMessageConversionsPerSec
> Add metrics for individual partitions instead of having them only at the 
> topic level. Some of these partition-level metrics are useful for monitoring 
> applications to track individual topic partitions.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6868) BufferUnderflowException in client when querying consumer group information

2018-08-01 Thread Sriharsha Chintalapani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565753#comment-16565753
 ] 

Sriharsha Chintalapani commented on KAFKA-6868:
---

[~hachikuji] Can we include this in the 1.1 branch? This is a critical fix 
affecting clients from 0.10 onwards that query group offset info. If we are 
planning a future 1.1.x release, we should include this Jira as well.

> BufferUnderflowException in client when querying consumer group information
> ---
>
> Key: KAFKA-6868
> URL: https://issues.apache.org/jira/browse/KAFKA-6868
> Project: Kafka
>  Issue Type: Bug
>Reporter: Xavier Léauté
>Assignee: Colin P. McCabe
>Priority: Blocker
>
> Exceptions get thrown when describing a consumer group or querying group 
> offsets from a 1.0 cluster.
> The stack trace is the result of calling 
> {{AdminClient.describeConsumerGroups(Collection 
> groupIds).describedGroups().entrySet()}} followed by 
> {{KafkaFuture.whenComplete()}}:
> {code}
> java.util.concurrent.ExecutionException: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
>   at
> [snip]
> Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'version': java.nio.BufferUnderflowException
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:76)
>   at org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:105)
>   at org.apache.kafka.clients.admin.KafkaAdminClient$21$1.handleResponse(KafkaAdminClient.java:2307)
>   at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:960)
>   at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1045)
>   ... 1 more
> {code}
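A minimal sketch of the failing call pattern (the bootstrap address and group 
id below are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DescribeConsumerGroupsResult;

// Placeholders: broker:9092 and my-group. Run against a 1.0 cluster to
// trigger the BufferUnderflowException described above.
public class DescribeGroupsRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            DescribeConsumerGroupsResult result =
                admin.describeConsumerGroups(Collections.singletonList("my-group"));
            result.describedGroups().forEach((groupId, future) ->
                future.whenComplete((description, error) -> {
                    if (error != null) {
                        error.printStackTrace(); // SchemaException surfaces here
                    } else {
                        System.out.println(groupId + ": " + description);
                    }
                }));
        }
    }
}
{code}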



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)