[jira] [Resolved] (KAFKA-6431) Lock contention in Purgatory

2019-01-09 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-6431.
---
Resolution: Fixed

> Lock contention in Purgatory
> 
>
> Key: KAFKA-6431
> URL: https://issues.apache.org/jira/browse/KAFKA-6431
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, purgatory
>Reporter: Ying Zheng
>Assignee: Ying Zheng
>Priority: Minor
> Fix For: 2.2.0
>
>
> Purgatory is the data structure in the Kafka broker that manages delayed 
> operations. A ConcurrentHashMap (Kafka Pool) maps each operation key to the 
> operations (in a ConcurrentLinkedQueue) that are interested in that key.
> When an operation is done or expired, it is removed from the list 
> (ConcurrentLinkedQueue). When the list becomes empty, it is removed from the 
> ConcurrentHashMap. The second operation has to be protected by a lock, to 
> avoid adding new operations into a list that is being removed. This is 
> currently done with a globally shared ReentrantReadWriteLock. All read 
> operations on the purgatory have to acquire the read permission of this 
> lock, while removing a list needs the write permission.
> Our profiling results show that the Kafka broker spends a nontrivial amount 
> of time on this read-write lock.
> The problem is exacerbated when there are a large number of short-lived 
> operations. For example, when we are doing sync produce operations 
> (acks=all), a DelayedProduce operation is added and then removed for each 
> message. If the QPS of the topic is not high, it is very likely that, when 
> the operation is done and removed, the list for that key (topic partition) 
> also becomes empty and has to be removed while holding the write lock. This 
> blocks all read and write operations on the entire purgatory for a while. As 
> there are tens of I/O threads accessing the purgatory concurrently, this 
> shared lock can easily become a bottleneck. 
> Actually, we only want to avoid concurrent reads and writes on the same key; 
> operations on different keys do not conflict with each other.
> I suggest sharding the purgatory into smaller partitions and locking each 
> partition independently.
> Assuming there are 10 I/O threads actively accessing the purgatory, sharding 
> it into 512 partitions makes the probability of 2 or more threads accessing 
> the same partition at the same time about 2%. We can also use ReentrantLock 
> instead of ReentrantReadWriteLock: when reads do not greatly outnumber 
> writes, ReentrantLock has lower overhead than ReentrantReadWriteLock.
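The sharding idea above can be sketched with a striped-lock structure. The names here (ShardedLocks, lockFor) are illustrative only, not the actual Kafka implementation:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the proposed sharding: instead of one global
// ReentrantReadWriteLock, hash each operation key onto one of N
// independently locked partitions.
public class ShardedLocks {
    private final ReentrantLock[] shards;

    public ShardedLocks(int numShards) {
        shards = new ReentrantLock[numShards];
        for (int i = 0; i < numShards; i++) {
            shards[i] = new ReentrantLock();
        }
    }

    // The same key always maps to the same shard, so reads and writes on
    // one key are still mutually excluded, while operations on different
    // keys usually hit different shards and no longer contend.
    public ReentrantLock lockFor(Object key) {
        int idx = (key.hashCode() & 0x7fffffff) % shards.length;
        return shards[idx];
    }
}
```

With 512 shards and ~10 active threads, two threads rarely hash to the same shard, so the global bottleneck largely disappears.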



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7374) Tiered Storage

2018-12-14 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-7374.
---
Resolution: Duplicate

> Tiered Storage
> --
>
> Key: KAFKA-7374
> URL: https://issues.apache.org/jira/browse/KAFKA-7374
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.0.0
>Reporter: Maciej Bryński
>Priority: Major
>
> Both Pravega and Pulsar offer tiered storage: old messages can be stored on 
> a different filesystem such as S3 or HDFS.
> Kafka should offer a similar capability.





[jira] [Created] (KAFKA-7739) Kafka Tiered Storage

2018-12-14 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-7739:
-

 Summary: Kafka Tiered Storage
 Key: KAFKA-7739
 URL: https://issues.apache.org/jira/browse/KAFKA-7739
 Project: Kafka
  Issue Type: New Feature
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani


More details are in the KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage





[jira] [Resolved] (KAFKA-7116) Provide separate SSL configs for Kafka Broker replication

2018-06-28 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-7116.
---
Resolution: Implemented

Thanks [~ijuma]

> Provide separate SSL configs for Kafka Broker replication
> -
>
> Key: KAFKA-7116
> URL: https://issues.apache.org/jira/browse/KAFKA-7116
> Project: Kafka
>  Issue Type: Improvement
>    Reporter: Sriharsha Chintalapani
>Assignee: GEORGE LI
>Priority: Major
>
> Currently, one set of SSL configs in server.properties is used both for the 
> broker to accept connections and as the client-side configuration for 
> replication. For the most part the server configs work for the replication 
> client as well, but there are cases where we need separation. 
> For example, inter-broker connections may want SSL authentication but with 
> encryption disabled for replication. This is not possible right now because 
> the same config name, "cipher_suites", is used for both server and client.  
> Since this JIRA introduces new configs, we will add a KIP.





[jira] [Created] (KAFKA-7116) Provide separate SSL configs for Kafka Broker replication

2018-06-28 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-7116:
-

 Summary: Provide separate SSL configs for Kafka Broker replication
 Key: KAFKA-7116
 URL: https://issues.apache.org/jira/browse/KAFKA-7116
 Project: Kafka
  Issue Type: Improvement
Reporter: Sriharsha Chintalapani
Assignee: GEORGE LI


Currently, one set of SSL configs in server.properties is used both for the 
broker to accept connections and as the client-side configuration for 
replication. For the most part the server configs work for the replication 
client as well, but there are cases where we need separation. 

For example, inter-broker connections may want SSL authentication but with 
encryption disabled for replication. This is not possible right now because 
the same config name, "cipher_suites", is used for both server and client.  

Since this JIRA introduces new configs, we will add a KIP.
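As a sketch of what the separation could look like in server.properties (the config names below are hypothetical; the actual names would be defined by the KIP):

```properties
# Server-side SSL used for accepting client connections:
ssl.cipher.suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

# Hypothetical client-side override used only by the replication fetchers,
# e.g. authenticate but disable encryption for inter-broker traffic:
replication.ssl.cipher.suites=TLS_ECDHE_RSA_WITH_NULL_SHA
```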





[jira] [Updated] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-25 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2000:
--
Fix Version/s: 0.10.2.0

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.2.0, 0.10.3.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-25 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2000:
--
   Resolution: Fixed
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1850
[https://github.com/apache/kafka/pull/1850]

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.3.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>






[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832410#comment-15832410
 ] 

Sriharsha Chintalapani commented on KAFKA-2000:
---

[~jeffwidman] 

[~omkreddy] working on it.

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>






[jira] [Updated] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2017-01-20 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2000:
--
Assignee: Manikumar Reddy  (was: Sriharsha Chintalapani)

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-2000_2015-05-03_10:39:11.patch, KAFKA-2000.patch
>
>






[jira] [Updated] (KAFKA-3539) KafkaProducer.send() may block even though it returns the Future

2016-12-15 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3539:
--
Assignee: Manikumar Reddy

> KafkaProducer.send() may block even though it returns the Future
> 
>
> Key: KAFKA-3539
> URL: https://issues.apache.org/jira/browse/KAFKA-3539
> Project: Kafka
>  Issue Type: Bug
>Reporter: Oleg Zhurakousky
>Assignee: Manikumar Reddy
>Priority: Critical
>
> You can get more details from the us...@kafka.apache.org list by searching 
> for the thread with the subject "KafkaProducer block on send".
> The bottom line is that a method that returns a Future must never block: 
> blocking essentially violates the Future contract, which was specifically 
> designed to return immediately, passing control back to the user to check 
> for completion, cancel, etc.
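The contract the reporter describes can be illustrated with plain JDK types. This is a generic sketch, not Kafka code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Generic illustration of the Future contract: a method returning a
// Future should hand the work off and return immediately, leaving the
// caller free to poll, wait, or cancel.
public class FutureContract {
    public static CompletableFuture<String> sendAsync(ExecutorService pool, String record) {
        // All potentially slow work happens on the pool thread, not here.
        return CompletableFuture.supplyAsync(() -> "acked:" + record, pool);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CompletableFuture<String> f = sendAsync(pool, "r1");
        // The caller regains control immediately and decides when to wait.
        System.out.println(f.get());
        pool.shutdown();
    }
}
```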





[jira] [Updated] (KAFKA-3540) KafkaConsumer.close() may block indefinitely

2016-12-15 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3540:
--
Assignee: Manikumar Reddy

> KafkaConsumer.close() may block indefinitely
> 
>
> Key: KAFKA-3540
> URL: https://issues.apache.org/jira/browse/KAFKA-3540
> Project: Kafka
>  Issue Type: Bug
>Reporter: Oleg Zhurakousky
>Assignee: Manikumar Reddy
>
> KafkaConsumer API doc states 
> {code}
> Close the consumer, waiting indefinitely for any needed cleanup. . . . 
> {code}
> That is not acceptable, as it creates an artificial deadlock that directly 
> affects systems relying on the Kafka API, essentially rendering them 
> unavailable.
> Consider adding a _close(timeout)_ method.
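The suggested _close(timeout)_ can be sketched generically with an ExecutorService. This shows the pattern only; it is not the KafkaConsumer API, and the names (BoundedClose, closeWithTimeout) are illustrative:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Generic sketch of bounding a potentially indefinite cleanup with a
// timeout, as a close(timeout) method would do.
public class BoundedClose {
    public static boolean closeWithTimeout(Runnable cleanup, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(cleanup);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;               // cleanup finished in time
        } catch (ExecutionException e) {
            return true;               // cleanup ran but threw; still done
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } catch (TimeoutException e) {
            f.cancel(true);            // give up rather than block forever
            return false;
        } finally {
            pool.shutdownNow();
        }
    }
}
```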





[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-11-29 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15706194#comment-15706194
 ] 

Sriharsha Chintalapani commented on KAFKA-1696:
---

[~singhashish] [~omkreddy] is working on it. I think it's best to break this 
down into multiple JIRAs and distribute the work.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.





[jira] [Resolved] (KAFKA-4345) Run decktape test for each pull request

2016-11-23 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-4345.
---
Resolution: Fixed

Issue resolved by pull request 2064
[https://github.com/apache/kafka/pull/2064]

> Run decktape test for each pull request
> ---
>
> Key: KAFKA-4345
> URL: https://issues.apache.org/jira/browse/KAFKA-4345
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.10.0.1
>Reporter: Raghav Kumar Gautam
>Assignee: Raghav Kumar Gautam
> Fix For: 0.10.2.0
>
>
> As of now, the ducktape tests that we have for Kafka are not run for pull 
> requests. We can run these tests using travis-ci.





[jira] [Commented] (KAFKA-4180) Shared authentification with multiple actives Kafka producers/consumers

2016-09-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526764#comment-15526764
 ] 

Sriharsha Chintalapani commented on KAFKA-4180:
---

[~ggrossetie] [~ecomar] This is a duplicate of KAFKA-3302. We cannot make 
changes to SASL PLAIN only; the changes should apply to LoginManager in 
general. Currently, LoginManager is a singleton; we need to break that up and 
pass it into the networking layer.

> Shared authentification with multiple actives Kafka producers/consumers
> ---
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>Assignee: Mickael Maison
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
>
> I'm using Kafka 0.10.0.1 with an SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers, the authentication is shared [1]. In 
> other words, it is not currently possible to have multiple Kafka producers 
> with different credentials in one JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is an application that sends messages to multiple clusters (one 
> cluster for logs, one for metrics, one for business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35
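For reference, this is the gap that the per-client sasl.jaas.config property (KIP-85, Kafka 0.10.2) later closed: each producer in the same JVM can carry its own credentials in its client config rather than sharing one JVM-wide JAAS file. A producer config sketch:

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
# Per-client JAAS configuration, so a second producer in the same JVM
# can supply different credentials:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="guillaume" \
  password="secret";
```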





[jira] [Commented] (KAFKA-3294) Kafka REST API

2016-09-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526465#comment-15526465
 ] 

Sriharsha Chintalapani commented on KAFKA-3294:
---

[~lindong] [~mgharat] Here is the KIP draft [~omkreddy] put together. It 
would be great if you could go over it. 

> Kafka REST API
> --
>
> Key: KAFKA-3294
> URL: https://issues.apache.org/jira/browse/KAFKA-3294
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>
> This JIRA is to build a Kafka REST API for the producer, the consumer, and 
> administrative tasks such as creating or deleting topics. We have a lot of 
> Kafka client API support in different languages, but a REST API for the 
> producer and consumer will make it easier for users to read from or write 
> to Kafka. An administrative API will also help in managing a cluster or 
> building administrative dashboards.





[jira] [Updated] (KAFKA-3294) Kafka REST API

2016-09-27 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3294:
--
Assignee: Manikumar Reddy  (was: Parth Brahmbhatt)

> Kafka REST API
> --
>
> Key: KAFKA-3294
> URL: https://issues.apache.org/jira/browse/KAFKA-3294
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>Assignee: Manikumar Reddy
>
> This JIRA is to build a Kafka REST API for the producer, the consumer, and 
> administrative tasks such as creating or deleting topics. We have a lot of 
> Kafka client API support in different languages, but a REST API for the 
> producer and consumer will make it easier for users to read from or write 
> to Kafka. An administrative API will also help in managing a cluster or 
> building administrative dashboards.





[jira] [Commented] (KAFKA-4180) Shared authentification with multiple actives Kafka producers/consumers

2016-09-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526445#comment-15526445
 ] 

Sriharsha Chintalapani commented on KAFKA-4180:
---

[~ggrossetie] [~mimaison] Isn't this a duplicate of 
https://issues.apache.org/jira/browse/KAFKA-3302? I already started work on 
it. Let me know if this does not match your requirements.

> Shared authentification with multiple actives Kafka producers/consumers
> ---
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>Assignee: Mickael Maison
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
>
> I'm using Kafka 0.10.0.1 with an SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers, the authentication is shared [1]. In 
> other words, it is not currently possible to have multiple Kafka producers 
> with different credentials in one JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is an application that sends messages to multiple clusters (one 
> cluster for logs, one for metrics, one for business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35





[jira] [Commented] (KAFKA-1954) Speed Up The Unit Tests

2016-09-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507444#comment-15507444
 ] 

Sriharsha Chintalapani commented on KAFKA-1954:
---

[~baluchicken] feel free to take it over.

> Speed Up The Unit Tests
> ---
>
> Key: KAFKA-1954
> URL: https://issues.apache.org/jira/browse/KAFKA-1954
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>    Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Attachments: KAFKA-1954.patch
>
>
> The server unit tests are pretty slow. They take about 8m40s on my machine. 
> Combined with the slow Scala compile time, this is kind of painful.
> Almost all of this time comes from the integration tests which start one or 
> more brokers and then shut them down.
> Our finding has been that these integration tests are actually quite useful 
> so we probably can't just get rid of them.
> Here are some times:
> Zk startup: 100ms
> Kafka server startup: 600ms
> Kafka server shutdown: 500ms
>  
> So you can see that an integration test suite with 10 tests that starts and 
> stops a 3 node cluster for each test will take ~34 seconds even if the tests 
> themselves are instantaneous.
> I think the best solution to this is to get the test harness classes in shape 
> and then performance tune them a bit as this would potentially speed 
> everything up. There are several test harness classes:
> - ZooKeeperTestHarness
> - KafkaServerTestHarness
> - ProducerConsumerTestHarness
> - IntegrationTestHarness (similar to ProducerConsumerTestHarness but using 
> new clients)
> Unfortunately, tests often don't use the right harness: they use a 
> lower-level harness than they should and manually create stuff. Usually the 
> cause of this is that the harness is missing some feature.
> I think the right thing to do here is
> 1. Get the tests converted to the best possible harness. If you are testing 
> producers and consumers then you should use the harness that creates all that 
> and shuts it down for you.
> 2. Optimize the harnesses to be faster.
> How can we optimize the harnesses? I'm not sure, I would solicit ideas. Here 
> are a few:
> 1. It's worth analyzing the logging to see what is taking up time in the 
> startup and shutdown.
> 2. There may be things like controlled shutdown that we can disable (since 
> we are going to discard the brokers after shutdown anyway).
> 3. The harnesses could probably start all the servers and all the clients in 
> parallel.
> 4. We may be able to tune down the resource usage in the server config for 
> test cases a bit.
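The ~34-second estimate above follows directly from the listed startup and shutdown times; a small sketch of the arithmetic (class and method names are illustrative):

```java
// Reproduces the back-of-the-envelope estimate from the description:
// each test pays one ZK startup plus per-broker start and stop costs.
public class HarnessCost {
    public static long suiteMillis(int tests, int brokers) {
        long zkStartup = 100;        // ms, per cluster
        long brokerStartup = 600;    // ms, per broker
        long brokerShutdown = 500;   // ms, per broker
        long perTest = zkStartup + brokers * (brokerStartup + brokerShutdown);
        return tests * perTest;
    }
}
```

For 10 tests on a 3-node cluster this yields 34,000 ms, i.e. the ~34 seconds quoted, before any test logic runs.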





[jira] [Commented] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-09-01 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456228#comment-15456228
 ] 

Sriharsha Chintalapani commented on KAFKA-3199:
---

[~tucu00] HADOOP-13558 is not relevant here. We don't accept Subjects that 
come from the host application; there is no such provision. A client 
initiates the LoginManager, which takes care of the Subject's life cycle.

> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor which then 
> performs a login and starts a ticket renewal thread. The problem here is that 
> because Kafka performs its own login, it doesn't offer the ability to re-use 
> an existing subject that's already managed by the client application.
> The goal of LoginManager appears to be returning a valid Subject. It would 
> be a simple fix to have LoginManager.acquireLoginManager() check for a new 
> config, e.g. kerberos.use.existing.subject. 
> Instead of creating a new Login in the constructor, this would simply call 
> Subject.getSubject(AccessController.getContext()); to use the already 
> logged-in Subject.
> This is also doable without introducing a new configuration and simply 
> checking if there is already a valid Subject available, but I think it may be 
> preferable to require that users explicitly request this behavior.





[jira] [Updated] (KAFKA-1573) Transient test failures on LogTest.testCorruptLog

2016-08-23 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1573:
--
Assignee: Priyank Shah

> Transient test failures on LogTest.testCorruptLog
> -
>
> Key: KAFKA-1573
> URL: https://issues.apache.org/jira/browse/KAFKA-1573
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Priyank Shah
>  Labels: transient-unit-test-failure
> Fix For: 0.10.1.0
>
>
> Here is an example of the test failure trace:
> junit.framework.AssertionFailedError: expected:<87> but was:<68>
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.failNotEquals(Assert.java:277)
>   at junit.framework.Assert.assertEquals(Assert.java:64)
>   at junit.framework.Assert.assertEquals(Assert.java:130)
>   at junit.framework.Assert.assertEquals(Assert.java:136)
>   at 
> kafka.log.LogTest$$anonfun$testCorruptLog$1.apply$mcVI$sp(LogTest.scala:615)
>   at 
> scala.collection.immutable.Range$ByOne$class.foreach$mVc$sp(Range.scala:282)
>   at 
> scala.collection.immutable.Range$$anon$2.foreach$mVc$sp(Range.scala:265)
>   at kafka.log.LogTest.testCorruptLog(LogTest.scala:595)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
>   at 
> org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
>   at 
> org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
>   at 
> org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
>   at 
> org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
>   at 
> org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:71)
>   at 
> org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
>   at 
> org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
>   at 
> org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
>   at 
> org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at $Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>

[jira] [Updated] (KAFKA-2081) testUncleanLeaderElectionEnabledByTopicOverride transient failure

2016-08-23 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2081:
--
Assignee: Priyank Shah

> testUncleanLeaderElectionEnabledByTopicOverride transient failure
> -
>
> Key: KAFKA-2081
> URL: https://issues.apache.org/jira/browse/KAFKA-2081
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Priyank Shah
>  Labels: transient-unit-test-failure
>
> Saw the following failure.
> kafka.integration.UncleanLeaderElectionTest > 
> testUncleanLeaderElectionEnabledByTopicOverride FAILED
> junit.framework.AssertionFailedError: expected: but 
> was:
> at junit.framework.Assert.fail(Assert.java:47)
> at junit.framework.Assert.failNotEquals(Assert.java:277)
> at junit.framework.Assert.assertEquals(Assert.java:64)
> at junit.framework.Assert.assertEquals(Assert.java:71)
> at 
> kafka.integration.UncleanLeaderElectionTest.verifyUncleanLeaderElectionEnabled(UncleanLeaderElectionTest.scala:179)
> at 
> kafka.integration.UncleanLeaderElectionTest.testUncleanLeaderElectionEnabledByTopicOverride(UncleanLeaderElectionTest.scala:135)





[jira] [Updated] (KAFKA-3168) Failure in kafka.integration.PrimitiveApiTest.testPipelinedProduceRequests

2016-08-23 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3168:
--
Assignee: Priyank Shah

> Failure in kafka.integration.PrimitiveApiTest.testPipelinedProduceRequests
> --
>
> Key: KAFKA-3168
> URL: https://issues.apache.org/jira/browse/KAFKA-3168
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Priyank Shah
>  Labels: transient-unit-test-failure
>
> {code}
> java.lang.AssertionError: Published messages should be in the log
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
>   at 
> kafka.integration.PrimitiveApiTest.testPipelinedProduceRequests(PrimitiveApiTest.scala:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent
> {code}

[jira] [Updated] (KAFKA-4044) log actual socket send/receive buffer size after connecting in Selector

2016-08-15 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-4044:
--
Assignee: Manikumar Reddy

> log actual socket send/receive buffer size after connecting in Selector
> ---
>
> Key: KAFKA-4044
> URL: https://issues.apache.org/jira/browse/KAFKA-4044
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Manikumar Reddy
>  Labels: newbie
>
> In BlockingChannel, we had the following code to log the actual socket buffer 
> sizes when the socket connection is established. This can be helpful when 
> tuning the socket buffer size for long-latency networks. It would be useful to 
> add the same in Selector.pollSelectionKeys when the socket is connected.
> {code}
> val msg = "Created socket with SO_TIMEOUT = %d (requested %d), " +
>   "SO_RCVBUF = %d (requested %d), SO_SNDBUF = %d (requested %d), " +
>   "connectTimeoutMs = %d."
> debug(msg.format(channel.socket.getSoTimeout,
>   readTimeoutMs,
>   channel.socket.getReceiveBufferSize,
>   readBufferSize,
>   channel.socket.getSendBufferSize,
>   writeBufferSize,
>   connectTimeoutMs))
> {code}
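The same "requested vs. granted" check can be made in any language, since the kernel is free to round or clamp the requested buffer size; a minimal Python sketch (illustrative only, not Kafka code):

```python
import socket

# Request a specific receive buffer size on a TCP socket.
requested = 64 * 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)

# The kernel may round or double the value (Linux typically doubles it),
# which is exactly why logging the actual size, as the BlockingChannel
# debug line does, is useful when tuning for long-latency networks.
actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"SO_RCVBUF = {actual} (requested {requested})")
s.close()
```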





[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password

2016-08-12 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15419134#comment-15419134
 ] 

Sriharsha Chintalapani commented on KAFKA-2629:
---

[~singhashish] Happy to review this and get it in.

> Enable getting SSL password from an executable rather than passing plaintext 
> password
> -
>
> Key: KAFKA-2629
> URL: https://issues.apache.org/jira/browse/KAFKA-2629
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently there are a couple of options to pass SSL passwords to Kafka, i.e., 
> via properties file or via command line argument. Both of these are not 
> recommended security practices.
> * A password on a command line is a no-no: it's trivial to see that password 
> just by using the 'ps' utility.
> * Putting a password into a file, and then passing the location to that file, 
> is the next best option. The access to the file will be governed by unix 
> access permissions which we all know and love. The downside is that the 
> password is still just sitting there in a file, and those who have access can 
> still see it trivially.
> * The most general, secure solution is to provide a layer of abstraction: 
> provide functionality to get the password from "somewhere else".  The most 
> flexible and generic way to do this is to simply call an executable which 
> returns the desired password. 
> ** The executable is again protected with normal file system privileges
> ** The simplest form, a script that looks like "echo 'my-password'", devolves 
> back to putting the password in a file
> ** A more interesting implementation could open up a local encrypted password 
> store and extract the password from it
> ** A maximally secure implementation could contact an external secret manager 
> with centralized control and audit functionality.
> ** In short: getting the password as the output of a script/executable is 
> maximally generic and enables both simple and complex use cases.
> This JIRA intends to add a config param to enable passing an executable to 
> Kafka for SSL passwords.
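The executable-based option described above can be sketched in a few lines of Python (a minimal illustration of the idea, not the eventual Kafka config API; the command shown is the trivial `echo` form the description mentions):

```python
import subprocess

def password_from_command(cmd):
    """Run an external command and treat its first stdout line as the secret.

    The command itself is protected by normal filesystem permissions; it can
    be anything from a trivial `echo` to a wrapper around an encrypted local
    store or an external secret manager.
    """
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.splitlines()[0]

# Simplest (and least secure) form, equivalent to putting the password
# in a file, as the description notes:
pw = password_from_command(["echo", "my-password"])
print(pw)
```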





[jira] [Commented] (KAFKA-3375) Suppress and fix compiler warnings where reasonable and tweak compiler settings

2016-07-19 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385167#comment-15385167
 ] 

Sriharsha Chintalapani commented on KAFKA-3375:
---

[~ijuma] let me try that. Thanks for the quick reply.

> Suppress and fix compiler warnings where reasonable and tweak compiler 
> settings
> ---
>
> Key: KAFKA-3375
> URL: https://issues.apache.org/jira/browse/KAFKA-3375
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> This will make it easier to do KAFKA-2982.





[jira] [Commented] (KAFKA-3375) Suppress and fix compiler warnings where reasonable and tweak compiler settings

2016-07-19 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385156#comment-15385156
 ] 

Sriharsha Chintalapani commented on KAFKA-3375:
---

Hi [~ijuma], as part of this patch there is scalaCompileOptions, which is 
causing interesting behavior in a Spark Streaming environment. Spark Streaming 
uses the old consumer libs. Both Kafka and Spark are built with Scala 2.10.5, 
and it fails at 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/client/ClientUtils.scala#L62
 . If I remove the catch from that block, it works fine. I later realized that 
the new additionalCompileOptions were causing this weird behavior; after 
removing the compileOptions it works fine.

{code}
java.lang.VerifyError: Stack map does not match the one at exception handler 198
Exception Details:
  Location:

kafka/client/ClientUtils$.fetchTopicMetadata(Lscala/collection/Set;Lscala/collection/Seq;Lkafka/producer/ProducerConfig;I)Lkafka/api/TopicMetadataResponse;
 @198: astore
  Reason:
Type top (current frame, locals[12]) is not assignable to 
'kafka/producer/SyncProducer' (stack map, locals[12])
  Current Frame:
bci: @71
flags: { }
locals: { 'kafka/client/ClientUtils$', 'scala/collection/Set', 
'scala/collection/Seq', 'kafka/producer/ProducerConfig', integer, integer, 
'scala/runtime/IntRef', 'kafka/api/TopicMetadataRequest', 
'kafka/api/TopicMetadataResponse', 'java/lang/Throwable', 
'scala/collection/Seq', 'java/lang/Throwable' }
stack: { 'java/lang/Throwable' }
  Stackmap Frame:
bci: @198
flags: { }
locals: { 'kafka/client/ClientUtils$', 'scala/collection/Set', 
'scala/collection/Seq', 'kafka/producer/ProducerConfig', integer, integer, 
'scala/runtime/IntRef', 'kafka/api/TopicMetadataRequest', 
'kafka/api/TopicMetadataResponse', 'java/lang/Throwable', 
'scala/collection/Seq', top, 'kafka/producer/SyncProducer' }
stack: { 'java/lang/Throwable' }
  Bytecode:
000: 0336 05bb 00a1 5903 b700 a43a 06bb 00a6
010: 59b2 00ab b600 af15 042d b600 b42b b900
020: ba01 00b7 00bd 3a07 0157 013a 0801 5701
030: 3a09 b200 c22c b200 c7b6 00cb b600 cfc0
040: 00d1 3a0a a700 353a 0b2a bb00 0b59 2b15
050: 0419 0619 0ab7 00d8 bb00 0d59 190b b700
060: dbb6 00dd 190b 3a09 1906 1906 b400 e104
070: 60b5 00e1 190c b600 e419 06b4 00e1 190a
080: b900 e801 00a2 006b 1505 9a00 66b2 00ed
090: 2d19 0a19 06b4 00e1 b900 f102 00c0 00f3
0a0: b600 f73a 0c2a bb00 0f59 2b15 0419 0619
0b0: 0ab7 00f8 b600 fa19 0c19 07b6 00fe 3a08
0c0: 0436 05a7 0019 3a0d 1906 1906 b400 e104
0d0: 60b5 00e1 190c b600 e419 0dbf 1906 1906
0e0: b400 e104 60b5 00e1 190c b600 e4a7 ff8c
0f0: 1505 9900 122a bb00 1159 2bb7 0101 b601
100: 0319 08b0 bb01 0559 bb01 0759 b201 0c13
110: 010e b601 12b7 0114 b201 0c05 bd00 0459
120: 032b 5359 0419 0a53 b601 18b6 011c 1909
130: b701 1fbf  
  Exception Handler Table:
bci [183, 198] => handler: 71
bci [183, 198] => handler: 198
bci [71, 104] => handler: 198
  Stackmap Table:

full_frame(@71,{Object[#2],Object[#182],Object[#209],Object[#177],Integer,Integer,Object[#161],Object[#166],Object[#211],Object[#72],Object[#209],Object[#213]},{Object[#72]})
chop_frame(@121,1)

full_frame(@198,{Object[#2],Object[#182],Object[#209],Object[#177],Integer,Integer,Object[#161],Object[#166],Object[#211],Object[#72],Object[#209],Top,Object[#213]},{Object[#72]})
same_frame(@220)
chop_frame(@240,2)
same_frame(@260)

at 
kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:67)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
16/07/17 00:56:17 INFO ConsumerFetcherManager: 
[ConsumerFetcherManager-1468716976795] Added fetcher for partitions 
ArrayBuffer()
16/07/17 00:56:17 WARN ConsumerFetcherManager$LeaderFinderThread: 
[spark_9_r7-kfzu-spark-bikas-2-1468716976671-97da4a8d-leader-finder-thread], 
Failed to find leader for Set([test_spark,0])
{code}
The code I am running is 
https://github.com/apache/spark/blob/master/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaInputDStream.scala
and here is the streaming job in Spark: 
https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java



> Suppress and fix compiler warnings where reasonable and tweak compiler 
> settings
> ---
>
> Key: KAFKA-3375
>

[jira] [Commented] (KAFKA-3948) Invalid broker port in Zookeeper when SSL is enabled

2016-07-14 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378603#comment-15378603
 ] 

Sriharsha Chintalapani commented on KAFKA-3948:
---

[~bharatviswa] [~gquintana] The host and port fields in the ZooKeeper data are 
kept for backward compatibility, so only the PLAINTEXT host and port are 
registered there. If you are depending on the ZooKeeper data to parse broker 
hosts, I suggest you look under the endpoints section.


> Invalid broker port in Zookeeper when SSL is enabled
> 
>
> Key: KAFKA-3948
> URL: https://issues.apache.org/jira/browse/KAFKA-3948
> Project: Kafka
>  Issue Type: Bug
>  Components: core, network
>Affects Versions: 0.9.0.1
>Reporter: Gérald Quintana
>Assignee: Bharat Viswanadham
>
> With broker config
> {code}
> listeners=SSL://:9093,PLAINTEXT://:9092
> port=9093
> {code}
> gives in Zookeeper /brokers/ids/1
> {code}
> {"jmx_port":,"timestamp":"1468249905473","endpoints":["SSL://kafka1:9093","PLAINTEXT://kafka1:9092"],"host":"kafka1","version":2,"port":9092}
> {code}
> Notice that the port is 9092, not 9093.
> Then, in a different scenario, with this config:
> {code}
> listeners=SSL://:9093
> port=9093
> {code}
> gives in Zookeeper /brokers/ids/1
> {code}
> {"jmx_port":,"timestamp":"1468250372974","endpoints":["SSL://kafka1:9093"],"host":null,"version":2,"port":-1}
> {code}
> Now host is null and port is -1
> Setting advertised.port doesn't help
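Reading the `endpoints` field instead of the legacy `host`/`port` can be sketched like this in Python, using the registration JSON from the report (the `jmx_port` value below is a placeholder, since it was blank in the original):

```python
import json

# Broker registration as stored under /brokers/ids/1 in ZooKeeper.
znode = json.loads("""
{"jmx_port": -1, "timestamp": "1468249905473",
 "endpoints": ["SSL://kafka1:9093", "PLAINTEXT://kafka1:9092"],
 "host": "kafka1", "version": 2, "port": 9092}
""")

def endpoint_map(endpoints):
    """Turn ["SSL://host:port", ...] into {"SSL": ("host", port), ...}."""
    out = {}
    for ep in endpoints:
        proto, rest = ep.split("://", 1)
        host, port = rest.rsplit(":", 1)
        out[proto] = (host, int(port))
    return out

# The per-listener endpoints are authoritative; the top-level host/port
# fields only reflect the PLAINTEXT listener (or null/-1 without one).
eps = endpoint_map(znode["endpoints"])
print(eps["SSL"])
```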





[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

2016-07-12 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15373066#comment-15373066
 ] 

Sriharsha Chintalapani commented on KAFKA-1194:
---

[~rperi] Any update on this?

> The kafka broker cannot delete the old log files after the configured time
> --
>
> Key: KAFKA-1194
> URL: https://issues.apache.org/jira/browse/KAFKA-1194
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1
> Environment: window
>Reporter: Tao Qin
>Assignee: Jay Kreps
>  Labels: features, patch
> Fix For: 0.10.1.0
>
> Attachments: KAFKA-1194.patch, kafka-1194-v1.patch, 
> kafka-1194-v2.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested it in windows environment, and set the log.retention.hours to 24 
> hours.
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the kafka broker still cannot delete the old log file. 
> And we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from 
>  to .deleted for log segment 1516723
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>  at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at 
> scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>  at scala.collection.immutable.List.foreach(List.scala:76)
>  at kafka.log.Log.deleteOldSegments(Log.scala:418)
>  at 
> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>  at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>  at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>  at 
> scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>  at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>  at 
> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>  at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while it 
> is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. Javadoc 
> describes it as:
> A mapped byte buffer and the file mapping that it represents remain valid 
> until the buffer itself is garbage-collected.
> Fortunately, I find a forceUnmap function in kafka code, and perhaps it can 
> be used to free the MappedByteBuffer.
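The close-before-rename ordering proposed above can be illustrated generically in Python (a sketch of the ordering only, not the actual Log.scala fix; the segment file name is made up from the offset in the report):

```python
import os
import tempfile

log_dir = tempfile.mkdtemp()
segment = os.path.join(log_dir, "00000000000001516723.log")

# Write a segment, then CLOSE it before renaming. On Windows, renaming a
# file fails with a sharing violation while any handle (or memory mapping,
# for the index's MappedByteBuffer) is still open; on POSIX the rename
# would succeed either way, which is why the bug only shows on Windows.
f = open(segment, "wb")
f.write(b"records")
f.close()  # release the handle first (an mmap would need unmapping too)

os.rename(segment, segment + ".deleted")
print(os.path.exists(segment + ".deleted"))
```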





[jira] [Updated] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-11 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3950:
--
Assignee: Manikumar Reddy

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raghav Kumar Gautam
>Assignee: Manikumar Reddy
>Priority: Critical
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> led to a TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}





[jira] [Commented] (KAFKA-3927) kafka broker config docs issue

2016-07-05 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362776#comment-15362776
 ] 

Sriharsha Chintalapani commented on KAFKA-3927:
---

[~guoxu1231] config/server.properties is an example config, not the defaults.
With Long.MaxValue the idea is that you don't need to flush every x 
messages; you rely on replication for the topic partitions instead. Logs are 
flushed when they get rotated, based on other configs such as log.segment.bytes. 
Agreed on having sane defaults for these. Given that the docs are not wrong, 
can you open another JIRA for correcting the defaults?

> kafka broker config docs issue
> --
>
> Key: KAFKA-3927
> URL: https://issues.apache.org/jira/browse/KAFKA-3927
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.10.0.0
>Reporter: Shawn Guo
>Priority: Minor
>
> https://kafka.apache.org/documentation.html#brokerconfigs
> log.flush.interval.messages 
> default value is "9223372036854775807"
> log.flush.interval.ms 
> default value is null
> log.flush.scheduler.interval.ms 
> default value is "9223372036854775807"
> etc. Obviously these default values are incorrect. How do these docs get 
> generated? It looks confusing.





[jira] [Resolved] (KAFKA-3927) kafka broker config docs issue

2016-07-04 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-3927.
---
Resolution: Not A Bug

> kafka broker config docs issue
> --
>
> Key: KAFKA-3927
> URL: https://issues.apache.org/jira/browse/KAFKA-3927
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.10.0.0
>Reporter: Shawn Guo
>Priority: Minor
>
> https://kafka.apache.org/documentation.html#brokerconfigs
> log.flush.interval.messages 
> default value is "9223372036854775807"
> log.flush.interval.ms 
> default value is null
> log.flush.scheduler.interval.ms 
> default value is "9223372036854775807"
> etc. Obviously these default values are incorrect. How do these docs get 
> generated? It looks confusing.





[jira] [Commented] (KAFKA-3927) kafka broker config docs issue

2016-07-04 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361829#comment-15361829
 ] 

Sriharsha Chintalapani commented on KAFKA-3927:
---

[~guoxu1231]
for log.flush.interval.messages
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L93
for log.flush.scheduler.interval.ms
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaConfig.scala#L95

On 64-bit machines that equals:
{code}
scala> Long.MaxValue
res0: Long = 9223372036854775807
{code}
There is no default value specified for log.flush.interval.ms, hence null.
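The same value can be sanity-checked outside the Scala REPL; the maximum of a signed 64-bit integer is simply 2^63 - 1:

```python
# Signed 64-bit maximum, matching scala> Long.MaxValue
long_max = (1 << 63) - 1
print(long_max)  # 9223372036854775807
```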

> kafka broker config docs issue
> --
>
> Key: KAFKA-3927
> URL: https://issues.apache.org/jira/browse/KAFKA-3927
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.10.0.0
>Reporter: Shawn Guo
>Priority: Minor
>
> https://kafka.apache.org/documentation.html#brokerconfigs
> log.flush.interval.messages 
> default value is "9223372036854775807"
> log.flush.interval.ms 
> default value is null
> log.flush.scheduler.interval.ms 
> default value is "9223372036854775807"
> etc. Obviously these default values are incorrect. How do these docs get 
> generated? It looks confusing.





[jira] [Commented] (KAFKA-3628) Native Schema Registry in Kafka

2016-06-29 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15356233#comment-15356233
 ] 

Sriharsha Chintalapani commented on KAFKA-3628:
---

The draft version is here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-67+-+Kafka+Schema+Registry
I'll be adding more detail in the next few days.

> Native Schema Registry in Kafka
> ---
>
> Key: KAFKA-3628
> URL: https://issues.apache.org/jira/browse/KAFKA-3628
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>
> Instead of having an external schema service, we can use a topic to store the 
> schema. I'll write a detailed KIP.





[jira] [Comment Edited] (KAFKA-3893) Kafka Borker ID disappears from /borkers/ids

2016-06-22 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15345288#comment-15345288
 ] 

Sriharsha Chintalapani edited comment on KAFKA-3893 at 6/22/16 10:30 PM:
-

[~chaithrar...@gmail.com] It looks like the brokers are losing their connection 
to ZooKeeper, and hence the ephemeral node under /brokers/ids disappears. I 
advise you to set the connection timeout to 3 ms. 
I also advise you to post your questions on the Kafka mailing lists before 
opening a JIRA, as this doesn't look like a bug in Kafka.


was (Author: sriharsha):
[~chaithrar...@gmail.com] it looks like brokers are loosing connection to 
zookeeper and hence the ephemeral node under /broker/ids will disappear. I 
advise you to set the connection timeout 3 ms . 
I advise you post your questions in kafka mailing lists before opening a JIRA 
as it doesn't look like a bug in kafka

> Kafka Borker ID disappears from /borkers/ids
> 
>
> Key: KAFKA-3893
> URL: https://issues.apache.org/jira/browse/KAFKA-3893
> Project: Kafka
>  Issue Type: Bug
>Reporter: chaitra
>Priority: Critical
>
> Kafka version used : 0.8.2.1 
> Zookeeper version: 3.4.6
> We have a scenario where the Kafka broker's entry in the ZooKeeper path 
> /brokers/ids just disappears.
> We see the ZooKeeper connection active and no network issue.
> The ZooKeeper connection timeout is set to 6000 ms in server.properties.
> Hence Kafka is not participating in the cluster.
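The timeouts being discussed live in the broker's server.properties; a hedged sketch (values illustrative; the report states 6000 ms was used):

```properties
# ZooKeeper client settings in server.properties (illustrative values).
# If the broker misses heartbeats for longer than the session timeout,
# ZooKeeper removes its ephemeral node under /brokers/ids.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
zookeeper.connection.timeout.ms=6000
zookeeper.session.timeout.ms=6000
```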





[jira] [Commented] (KAFKA-3893) Kafka Borker ID disappears from /borkers/ids

2016-06-22 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15345288#comment-15345288
 ] 

Sriharsha Chintalapani commented on KAFKA-3893:
---

[~chaithrar...@gmail.com] It looks like the brokers are losing their connection 
to ZooKeeper, and hence the ephemeral node under /brokers/ids disappears. I 
advise you to set the connection timeout to 3 ms. 
I also advise you to post your questions on the Kafka mailing lists before 
opening a JIRA, as this doesn't look like a bug in Kafka.

> Kafka Borker ID disappears from /borkers/ids
> 
>
> Key: KAFKA-3893
> URL: https://issues.apache.org/jira/browse/KAFKA-3893
> Project: Kafka
>  Issue Type: Bug
>Reporter: chaitra
>Priority: Critical
>
> Kafka version used : 0.8.2.1 
> Zookeeper version: 3.4.6
> We have a scenario where the Kafka broker's entry in the ZooKeeper path 
> /brokers/ids just disappears.
> We see the ZooKeeper connection active and no network issue.
> The ZooKeeper connection timeout is set to 6000 ms in server.properties.
> Hence Kafka is not participating in the cluster.





[jira] [Updated] (KAFKA-3830) getTGT() debug logging exposes confidential information

2016-06-15 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3830:
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1498
[https://github.com/apache/kafka/pull/1498]

> getTGT() debug logging exposes confidential information
> ---
>
> Key: KAFKA-3830
> URL: https://issues.apache.org/jira/browse/KAFKA-3830
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> We have the same issue as the one described in ZOOKEEPER-2405.





[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

2016-06-07 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319281#comment-15319281
 ] 

Sriharsha Chintalapani commented on KAFKA-1194:
---

[~rperi] Can you send the patch as a GitHub PR?

> The kafka broker cannot delete the old log files after the configured time
> --
>
> Key: KAFKA-1194
> URL: https://issues.apache.org/jira/browse/KAFKA-1194
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1
> Environment: window
>Reporter: Tao Qin
>Assignee: Jay Kreps
>  Labels: features, patch
> Fix For: 0.10.1.0
>
> Attachments: KAFKA-1194.patch, kafka-1194-v1.patch, 
> kafka-1194-v2.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested it in windows environment, and set the log.retention.hours to 24 
> hours.
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the kafka broker still cannot delete the old log file. 
> And we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from 
>  to .deleted for log segment 1516723
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>  at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>  at 
> scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>  at scala.collection.immutable.List.foreach(List.scala:76)
>  at kafka.log.Log.deleteOldSegments(Log.scala:418)
>  at 
> kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>  at 
> kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>  at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>  at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>  at 
> scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>  at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>  at 
> kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>  at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while it 
> is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. Javadoc 
> describes it as:
> A mapped byte buffer and the file mapping that it represents remain valid 
> until the buffer itself is garbage-collected.
> Fortunately, I find a forceUnmap function in kafka code, and perhaps it can 
> be used to free the MappedByteBuffer.





[jira] [Commented] (KAFKA-3797) Improve security of __consumer_offsets topic

2016-06-07 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318934#comment-15318934
 ] 

Sriharsha Chintalapani commented on KAFKA-3797:
---

[~hachikuji] For the second option, replication will be fine, as the Kafka user 
will be in the super.users config. If we want to support an external tool with 
a different user, we should do so by issuing an ACL that allows modifying 
committed offsets.

> Improve security of __consumer_offsets topic
> 
>
> Key: KAFKA-3797
> URL: https://issues.apache.org/jira/browse/KAFKA-3797
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> By default, we allow clients to override committed offsets and group metadata 
> using the Produce API as long as they have Write access to the 
> __consumer_offsets topic. From one perspective, this is fine: administrators 
> can restrict access to this topic to trusted users. From another, it seems 
> less than ideal for Write permission on that topic to subsume Group-level 
> permissions for the full cluster. With this access, a user can cause all 
> kinds of mischief including making the group "lose" data by setting offsets 
> ahead of the actual position. This is probably not obvious to administrators 
> who grant access to topics using a wildcard and it increases the risk from 
> incorrectly applying topic patterns (if we ever add support for them). It 
> seems reasonable to consider safer default behavior:
> 1. A simple option to fix this would be to prevent wildcard topic rules from 
> applying to internal topics. To write to an internal topic, you need a 
> separate rule which explicitly grants authorization to that topic.
> 2. A more extreme and perhaps safer option might be to prevent all writes to 
> this topic (and potentially other internal topics) through the Produce API. 
> Do we have any use cases which actually require writing to 
> __consumer_offsets? The only potential case that comes to mind is replication.
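Option 1 above can be sketched as follows. This is a hypothetical illustration, not Kafka's actual authorizer code: wildcard topic rules stop matching internal topics, which then require an explicit grant.

```python
# Hypothetical sketch of option 1: a wildcard topic ACL no longer matches
# internal topics, so writing to __consumer_offsets requires a rule that
# names the topic explicitly.
INTERNAL_TOPICS = {"__consumer_offsets"}

def topic_rule_matches(acl_topic, topic):
    if acl_topic == "*":
        return topic not in INTERNAL_TOPICS
    return acl_topic == topic

assert topic_rule_matches("*", "clicks")                  # wildcard still covers normal topics
assert not topic_rule_matches("*", "__consumer_offsets")  # but not internal topics
assert topic_rule_matches("__consumer_offsets", "__consumer_offsets")  # explicit grant works
```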



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3712) Kafka should support user impersonation

2016-05-13 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3712:
-

 Summary: Kafka should support user impersonation
 Key: KAFKA-3712
 URL: https://issues.apache.org/jira/browse/KAFKA-3712
 Project: Kafka
  Issue Type: Improvement
Reporter: Sriharsha Chintalapani
Assignee: Parth Brahmbhatt






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3525) max.reserved.broker.id off-by-one error

2016-05-05 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272696#comment-15272696
 ] 

Sriharsha Chintalapani commented on KAFKA-3525:
---

[~fpj] reserved.broker.max.id is for backward compatibility. If a user is 
already setting broker.id, they can do so up to reserved.broker.max.id, and 
auto-generation of broker.id (in the absence of broker.id in server.properties) 
will start from reserved.broker.max.id.

> max.reserved.broker.id off-by-one error
> ---
>
> Key: KAFKA-3525
> URL: https://issues.apache.org/jira/browse/KAFKA-3525
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Alan Braithwaite
>Assignee: Manikumar Reddy
> Fix For: 0.10.1.0
>
>
> There's an off-by-one error in the config check / id generation for 
> max.reserved.broker.id setting.  The auto-generation will generate 
> max.reserved.broker.id as the initial broker id as it's currently written.
> Not sure what the consequences of this are if there's already a broker with 
> that id as I didn't test that behavior.
> This can return 0 + max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/utils/ZkUtils.scala#L213-L215
> However, this does a <= check, which is inclusive of max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/server/KafkaConfig.scala#L984-L986
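The off-by-one can be sketched as follows (hypothetical names, not the actual Scala code from the links above): the config check accepts manually assigned ids up to and *inclusive* of the reserved maximum, while auto-generation starts handing out `sequence + max` from `sequence == 0`, so the very first auto-generated id collides with the top of the reserved range.

```python
# Hypothetical sketch of the off-by-one in broker id generation.
RESERVED_BROKER_MAX_ID = 1000

def manual_id_allowed(broker_id):
    # inclusive <= check, mirroring the KafkaConfig validation linked above
    return 0 <= broker_id <= RESERVED_BROKER_MAX_ID

def next_auto_id(zk_sequence):
    # mirroring the ZkUtils generation: first value is 0 + max
    return zk_sequence + RESERVED_BROKER_MAX_ID

first_auto = next_auto_id(0)
assert first_auto == RESERVED_BROKER_MAX_ID
assert manual_id_allowed(first_auto)  # collision: id 1000 is both reserved and auto-generated
```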



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3633) Kafka Consumer API breaking backward compatibility

2016-04-28 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15262762#comment-15262762
 ] 

Sriharsha Chintalapani commented on KAFKA-3633:
---

[~gwenshap] the above patch didn't make any changes to the interface; it just 
brings back the old methods. Will start a discussion thread around this.

> Kafka Consumer API breaking backward compatibility
> --
>
> Key: KAFKA-3633
> URL: https://issues.apache.org/jira/browse/KAFKA-3633
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> KAFKA-2991 and KAFKA-3006 broke backward compatibility. In Storm we are 
> already using the 0.9.0.1 consumer API for the KafkaSpout. We should at least 
> have kept the older methods and not broken backward compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3633) Kafka Consumer API breaking backward compatibility

2016-04-28 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15262646#comment-15262646
 ] 

Sriharsha Chintalapani commented on KAFKA-3633:
---

[~gwenshap] I understand the API is marked as unstable. But the reality is 
different: many projects using the client API end up having to ship another 
version that can only work with 0.10. It's in our interest as the Kafka 
community to make these changes less painful. What the above patch does is 
bring back the old methods with a deprecated tag, so they can be removed in the 
next release; that is not as drastic a change as the one we have in the 0.10 
release. I don't see how this causes any issues with the proposed new API 
unless I am missing something.

> Kafka Consumer API breaking backward compatibility
> --
>
> Key: KAFKA-3633
> URL: https://issues.apache.org/jira/browse/KAFKA-3633
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> KAFKA-2991 and KAFKA-3006 broke backward compatibility. In Storm we are 
> already using the 0.9.0.1 consumer API for the KafkaSpout. We should at least 
> have kept the older methods and not broken backward compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3294) Kafka REST API

2016-04-28 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15262631#comment-15262631
 ] 

Sriharsha Chintalapani commented on KAFKA-3294:
---

[~lindong] we just shared the initial work we have. We are writing a KIP and 
will post it to the mailing list once it's ready.

> Kafka REST API
> --
>
> Key: KAFKA-3294
> URL: https://issues.apache.org/jira/browse/KAFKA-3294
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>Assignee: Parth Brahmbhatt
>
> This JIRA is to build Kafka REST API for producer, consumer and also any 
> administrative tasks such as create topic, delete topic. We do have lot of 
> kafka client api support in different languages but having REST API for 
> producer and consumer will make it easier for users to read or write Kafka. 
> Also having administrative API will help in managing a cluster or building 
> administrative dashboards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3633) Kafka Consumer API breaking backward compatibility

2016-04-27 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3633:
--
Fix Version/s: 0.10.0.0

> Kafka Consumer API breaking backward compatibility
> --
>
> Key: KAFKA-3633
> URL: https://issues.apache.org/jira/browse/KAFKA-3633
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> KAFKA-2991 and KAFKA-3006 broke backward compatibility. In Storm we are 
> already using the 0.9.0.1 consumer API for the KafkaSpout. We should at least 
> have kept the older methods and not broken backward compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3633) Kafka Consumer API breaking backward compatibility

2016-04-27 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3633:
-

 Summary: Kafka Consumer API breaking backward compatibility
 Key: KAFKA-3633
 URL: https://issues.apache.org/jira/browse/KAFKA-3633
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
Priority: Blocker


KAFKA-2991 and KAFKA-3006 broke backward compatibility. In Storm we are already 
using the 0.9.0.1 consumer API for the KafkaSpout. We should at least have kept 
the older methods and not broken backward compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3294) Kafka REST API

2016-04-26 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259216#comment-15259216
 ] 

Sriharsha Chintalapani commented on KAFKA-3294:
---

[~chienle] I am familiar with it. I want this to be part of Apache Kafka, not 
an external GitHub project.

[~mgharat] It's an external Netty server with REST API endpoints exposed where 
clients can send messages; internally it will use producers to send that data 
to the brokers. Yes, similar to MirrorMaker. What are your thoughts on 
integrating it within the Kafka brokers?

> Kafka REST API
> --
>
> Key: KAFKA-3294
> URL: https://issues.apache.org/jira/browse/KAFKA-3294
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>Assignee: Parth Brahmbhatt
>
> This JIRA is to build Kafka REST API for producer, consumer and also any 
> administrative tasks such as create topic, delete topic. We do have lot of 
> kafka client api support in different languages but having REST API for 
> producer and consumer will make it easier for users to read or write Kafka. 
> Also having administrative API will help in managing a cluster or building 
> administrative dashboards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3628) Native Schema Registry in Kafka

2016-04-26 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3628:
--
Issue Type: New Feature  (was: Bug)

> Native Schema Registry in Kafka
> ---
>
> Key: KAFKA-3628
> URL: https://issues.apache.org/jira/browse/KAFKA-3628
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
>
> Instead of having an external schema service, we can use a topic config to 
> store the schema. I'll write a detailed KIP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3628) Native Schema Registry in Kafka

2016-04-26 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-3628:
-

Assignee: Sriharsha Chintalapani

> Native Schema Registry in Kafka
> ---
>
> Key: KAFKA-3628
> URL: https://issues.apache.org/jira/browse/KAFKA-3628
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
>
> Instead of having an external schema service, we can use a topic config to 
> store the schema. I'll write a detailed KIP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3628) Native Schema Registry in Kafka

2016-04-26 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3628:
-

 Summary: Native Schema Registry in Kafka
 Key: KAFKA-3628
 URL: https://issues.apache.org/jira/browse/KAFKA-3628
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani


Instead of having an external schema service, we can use a topic config to 
store the schema. I'll write a detailed KIP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3294) Kafka REST API

2016-04-26 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15258503#comment-15258503
 ] 

Sriharsha Chintalapani commented on KAFKA-3294:
---

[~mgharat] we are actually building an external component that fronts the Kafka 
brokers and uses the producer API to push data coming from REST calls.

> Kafka REST API
> --
>
> Key: KAFKA-3294
> URL: https://issues.apache.org/jira/browse/KAFKA-3294
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>Assignee: Parth Brahmbhatt
>
> This JIRA is to build Kafka REST API for producer, consumer and also any 
> administrative tasks such as create topic, delete topic. We do have lot of 
> kafka client api support in different languages but having REST API for 
> producer and consumer will make it easier for users to read or write Kafka. 
> Also having administrative API will help in managing a cluster or building 
> administrative dashboards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3294) Kafka REST API

2016-04-26 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3294:
--
Assignee: Parth Brahmbhatt  (was: Sriharsha Chintalapani)

> Kafka REST API
> --
>
> Key: KAFKA-3294
> URL: https://issues.apache.org/jira/browse/KAFKA-3294
> Project: Kafka
>  Issue Type: New Feature
>    Reporter: Sriharsha Chintalapani
>Assignee: Parth Brahmbhatt
>
> This JIRA is to build Kafka REST API for producer, consumer and also any 
> administrative tasks such as create topic, delete topic. We do have lot of 
> kafka client api support in different languages but having REST API for 
> producer and consumer will make it easier for users to read or write Kafka. 
> Also having administrative API will help in managing a cluster or building 
> administrative dashboards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-04-19 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15248064#comment-15248064
 ] 

Sriharsha Chintalapani commented on KAFKA-1696:
---

[~singhashish] Yes we are actively working on it.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3567) Add --security-protocol option console tools

2016-04-16 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3567:
-

 Summary: Add --security-protocol option console tools
 Key: KAFKA-3567
 URL: https://issues.apache.org/jira/browse/KAFKA-3567
 Project: Kafka
  Issue Type: Improvement
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3541) OffsetIndex-Memory Mapped files gets corrupted on a shared drive or encrypted drive

2016-04-11 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3541:
-

 Summary: OffsetIndex-Memory Mapped files gets corrupted on a 
shared drive or encrypted drive
 Key: KAFKA-3541
 URL: https://issues.apache.org/jira/browse/KAFKA-3541
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Parth Brahmbhatt


In our customer environments we've seen index files getting corrupted when a 
log segment rolled over, and after debugging we found that the issue is due to 
the memory-mapped files (MappedByteBuffer) in OffsetIndex.
So far this has happened on disks encrypted with SafeNetFS and on CIFS disks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3531) support subnet in ACL tool

2016-04-09 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3531:
--
Assignee: Parth Brahmbhatt

> support subnet in ACL tool
> --
>
> Key: KAFKA-3531
> URL: https://issues.apache.org/jira/browse/KAFKA-3531
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>
> In the ACL tool, we currently support individual ip or all ips. It will be 
> useful to add support for things like a subnet.
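The wished-for behavior can be sketched as follows. This is an illustrative sketch only, not Kafka's ACL code: an ACL host may be "*", an exact IP (today's behavior), or a CIDR subnet.

```python
import ipaddress

# Hypothetical host matcher: "*" matches everything, a CIDR such as
# "10.0.0.0/24" matches the subnet, anything else is an exact-IP match.
def host_matches(acl_host, client_ip):
    if acl_host == "*":
        return True
    if "/" in acl_host:
        return ipaddress.ip_address(client_ip) in ipaddress.ip_network(acl_host)
    return acl_host == client_ip

assert host_matches("*", "192.168.1.5")
assert host_matches("10.0.0.7", "10.0.0.7")
assert host_matches("10.0.0.0/24", "10.0.0.57")
assert not host_matches("10.0.0.0/24", "10.0.1.57")
```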



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3532) add principal.builder.class that can extract user from a field

2016-04-09 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-3532:
-

Assignee: Sriharsha Chintalapani

> add principal.builder.class that can extract user from a field
> --
>
> Key: KAFKA-3532
> URL: https://issues.apache.org/jira/browse/KAFKA-3532
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
>Reporter: Jun Rao
>    Assignee: Sriharsha Chintalapani
>
> By default, the user name associated with an SSL connection looks like the 
> following. Often, people may want to extract one of the fields (e.g., CN) as 
> the user. It would be good if we had a built-in principal builder that does 
> that.
> CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
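Extracting one field from such a distinguished name can be sketched as follows. This is an illustrative sketch only, not the proposed built-in principal builder (which would be implemented in Java against Kafka's PrincipalBuilder interface).

```python
# Hypothetical helper: pull a single field (e.g. CN) out of an X.500
# distinguished-name string like the default SSL principal above.
def extract_field(distinguished_name, field):
    for rdn in distinguished_name.split(","):
        key, _, value = rdn.strip().partition("=")
        if key.upper() == field.upper():
            return value
    raise ValueError("field %s not found in %r" % (field, distinguished_name))

dn = "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
assert extract_field(dn, "CN") == "writeuser"
assert extract_field(dn, "C") == "Unknown"
```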



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3525) max.reserved.broker.id off-by-one error

2016-04-07 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3525:
--
Fix Version/s: 0.10.0.0

> max.reserved.broker.id off-by-one error
> ---
>
> Key: KAFKA-3525
> URL: https://issues.apache.org/jira/browse/KAFKA-3525
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Alan Braithwaite
>Priority: Minor
> Fix For: 0.10.0.0
>
>
> There's an off-by-one error in the config check / id generation for 
> max.reserved.broker.id setting.  The auto-generation will generate 
> max.reserved.broker.id as the initial broker id as it's currently written.
> Not sure what the consequences of this are if there's already a broker with 
> that id as I didn't test that behavior.
> This can return 0 + max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/utils/ZkUtils.scala#L213-L215
> However, this does a <= check, which is inclusive of max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/server/KafkaConfig.scala#L984-L986



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3525) max.reserved.broker.id off-by-one error

2016-04-07 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-3525:
--
Priority: Blocker  (was: Minor)

> max.reserved.broker.id off-by-one error
> ---
>
> Key: KAFKA-3525
> URL: https://issues.apache.org/jira/browse/KAFKA-3525
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Reporter: Alan Braithwaite
>Priority: Blocker
> Fix For: 0.10.0.0
>
>
> There's an off-by-one error in the config check / id generation for 
> max.reserved.broker.id setting.  The auto-generation will generate 
> max.reserved.broker.id as the initial broker id as it's currently written.
> Not sure what the consequences of this are if there's already a broker with 
> that id as I didn't test that behavior.
> This can return 0 + max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/utils/ZkUtils.scala#L213-L215
> However, this does a <= check, which is inclusive of max.reserved.broker.id:
> https://github.com/apache/kafka/blob/8dbd688b1617968329087317fa6bde8b8df0392e/core/src/main/scala/kafka/server/KafkaConfig.scala#L984-L986



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3302) Pass kerberos keytab and principal as part of client config

2016-02-29 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3302:
-

 Summary: Pass kerberos keytab and principal as part of client 
config 
 Key: KAFKA-3302
 URL: https://issues.apache.org/jira/browse/KAFKA-3302
 Project: Kafka
  Issue Type: Improvement
  Components: security
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.10.0.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3291) DumpLogSegment tool should also provide an option to only verify index sanity.

2016-02-29 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-3291.
---
   Resolution: Fixed
Fix Version/s: 0.10.0.0

Issue resolved by pull request 975
[https://github.com/apache/kafka/pull/975]

> DumpLogSegment tool should also provide an option to only verify index sanity.
> --
>
> Key: KAFKA-3291
> URL: https://issues.apache.org/jira/browse/KAFKA-3291
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.0.0
>
>
> DumpLogSegment tool should call index.sanityCheck function as part of index 
> sanity check as that function determines if an index will be rebuilt on 
> restart or not. This is a cheap check as it only checks the file size and can 
> help in scenarios where customer is trying to figure out which index files 
> will be rebuilt on startup which directly affects the broker bootstrap time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3294) Kafka REST API

2016-02-25 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-3294:
-

 Summary: Kafka REST API
 Key: KAFKA-3294
 URL: https://issues.apache.org/jira/browse/KAFKA-3294
 Project: Kafka
  Issue Type: New Feature
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani


This JIRA is to build Kafka REST API for producer, consumer and also any 
administrative tasks such as create topic, delete topic. We do have lot of 
kafka client api support in different languages but having REST API for 
producer and consumer will make it easier for users to read or write Kafka. 
Also having administrative API will help in managing a cluster or building 
administrative dashboards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-02-25 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168023#comment-15168023
 ] 

Sriharsha Chintalapani commented on KAFKA-1696:
---

[~gwenshap] Port is not relevant to this JIRA. That discussion is part of 
KIP-43, and we decided on a single port for SASL; each mechanism inside SASL 
shouldn't have its own port.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-02-16 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-2547.
---
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 679
[https://github.com/apache/kafka/pull/679]

> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.1.0
>
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a loss less way. This was pretty much the same code in 
> DynamicConfigManager with little bit of refactoring so it can be reused. We 
> now need to make DynamicConfigManager itself to use this new class once 
> KAFKA-2211 is committed to avoid code duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2508) Replace UpdateMetadata{Request,Response} with org.apache.kafka.common.requests equivalent

2016-02-16 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2508:
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 896
[https://github.com/apache/kafka/pull/896]

> Replace UpdateMetadata{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2508
> URL: https://issues.apache.org/jira/browse/KAFKA-2508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3199) LoginManager should allow using an existing Subject

2016-02-16 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15149619#comment-15149619
 ] 

Sriharsha Chintalapani commented on KAFKA-3199:
---

[~adam.kunicki] what happens when a thread from Kafka tries to renew the same 
Subject at the same time as the client application?

> LoginManager should allow using an existing Subject
> ---
>
> Key: KAFKA-3199
> URL: https://issues.apache.org/jira/browse/KAFKA-3199
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Adam Kunicki
>Assignee: Adam Kunicki
>Priority: Critical
>
> LoginManager currently creates a new Login in the constructor which then 
> performs a login and starts a ticket renewal thread. The problem here is that 
> because Kafka performs its own login, it doesn't offer the ability to re-use 
> an existing subject that's already managed by the client application.
> The goal of LoginManager appears to be to be able to return a valid Subject. 
> It would be a simple fix to have LoginManager.acquireLoginManager() check for 
> a new config e.g. kerberos.use.existing.subject. 
> Instead of creating a new Login in the constructor, this would simply call 
> Subject.getSubject(AccessController.getContext()) to use the already 
> logged-in Subject.
> This is also doable without introducing a new configuration and simply 
> checking if there is already a valid Subject available, but I think it may be 
> preferable to require that users explicitly request this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1696) Kafka should be able to generate Hadoop delegation tokens

2016-01-30 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15125212#comment-15125212
 ] 

Sriharsha Chintalapani commented on KAFKA-1696:
---

[~singhashish] We are working on it. Will post the KIP to the wiki.

> Kafka should be able to generate Hadoop delegation tokens
> -
>
> Key: KAFKA-1696
> URL: https://issues.apache.org/jira/browse/KAFKA-1696
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Jay Kreps
>Assignee: Parth Brahmbhatt
>
> For access from MapReduce/etc jobs run on behalf of a user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3131) Inappropriate logging level for SSL Problem

2016-01-22 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-3131:
-

Assignee: Sriharsha Chintalapani

> Inappropriate logging level for SSL Problem
> ---
>
> Key: KAFKA-3131
> URL: https://issues.apache.org/jira/browse/KAFKA-3131
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Jake Robb
>    Assignee: Sriharsha Chintalapani
>Priority: Minor
> Attachments: kafka-ssl-error-debug-log.txt
>
>
> I didn't have my truststore set up correctly. The Kafka producer waited until 
> the connection timed out (60 seconds in my case) and then threw this 
> exception:
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 60t61 ms.
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:706)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:453)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
> {code}
> I changed my log level to DEBUG and found this, less than two seconds after 
> startup:
> {code}
> [DEBUG] @ 2016-01-22 10:10:34,095 
> [User: ; Server: ; Client: ; URL: ; ChangeGroup: ]
>  org.apache.kafka.common.network.Selector  - Connection with kafka02/10.0.0.2 
> disconnected 
> javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1364)
>   at 
> sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:529)
>   at 
> sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1194)
>   at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1166)
>   at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:377)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:242)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1708)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:303)
>   at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:295)
>   at 
> sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1369)
>   at 
> sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:156)
>   at sun.security.ssl.Handshaker.processLoop(Handshaker.java:925)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:865)
>   at sun.security.ssl.Handshaker$1.run(Handshaker.java:862)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1302)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:335)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:413)
>   at 
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
>   ... 6 more
> Caused by: sun.security.validator.ValidatorException: PKIX path building 
> failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to 
> find valid certification path to requested target
>   at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
>   at 
> sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
>   at sun.security.validator.Validator.validate(Validator.java:260)
>   at 
> sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:281)
>   at 
> sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509Tr
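The "PKIX path building failed" cause in the trace above typically means the client's truststore does not contain the CA certificate that signed the broker's certificate. As a hedged sketch (the paths and password are placeholders, not values from this report), the client-side settings involved are:

```properties
# Producer/consumer SSL settings; the handshake above fails when the
# truststore below is missing or lacks the broker's CA certificate.
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
```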

[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2015-12-17 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062547#comment-15062547
 ] 

Sriharsha Chintalapani commented on KAFKA-2000:
---

[~ijuma] Yes.

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-2000.patch, KAFKA-2000_2015-05-03_10:39:11.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2887) TopicMetadataRequest creates topic if it does not exist

2015-11-24 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025859#comment-15025859
 ] 

Sriharsha Chintalapani commented on KAFKA-2887:
---

[~nickpan47] I already have a patch on that JIRA. I'll rebase it and send it 
against this JIRA in a day or two.

> TopicMetadataRequest creates topic if it does not exist
> ---
>
> Key: KAFKA-2887
> URL: https://issues.apache.org/jira/browse/KAFKA-2887
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.0
> Environment: Centos6, Java 1.7.0_75
>Reporter: Andrew Winterman
>Priority: Minor
>
> We wired up a probe HTTP endpoint to make TopicMetadataRequests with a 
> possible topic name. If no topic was found, we expected an empty response. 
> However, if we asked for the same topic twice, it would exist the second time!
> I think this is a bug because the purpose of the TopicMetadataRequest is to 
> provide information about the cluster, not mutate it. I can provide example 
> code if needed.





[jira] [Comment Edited] (KAFKA-2887) TopicMetadataRequest creates topic if it does not exist

2015-11-24 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025843#comment-15025843
 ] 

Sriharsha Chintalapani edited comment on KAFKA-2887 at 11/25/15 12:41 AM:
--

[~yipan] [~AWinterman] This issue is addressed in 
https://issues.apache.org/jira/browse/KAFKA-1507, which adds a create-topic 
request and makes topic creation an explicit producer-side request. If there is 
enough interest I can rebase against master and send a new patch.


was (Author: sriharsha):
[~yipan] [~AWinterman] This issue is solved here in this jira 
https://issues.apache.org/jira/browse/KAFKA-1507 .  It provides create topic 
request and makes it producer side request for creating topic. 

> TopicMetadataRequest creates topic if it does not exist
> ---
>
> Key: KAFKA-2887
> URL: https://issues.apache.org/jira/browse/KAFKA-2887
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.0
> Environment: Centos6, Java 1.7.0_75
>Reporter: Andrew Winterman
>Priority: Minor
>
> We wired up a probe HTTP endpoint to make TopicMetadataRequests with a 
> possible topic name. If no topic was found, we expected an empty response. 
> However, if we asked for the same topic twice, it would exist the second time!
> I think this is a bug because the purpose of the TopicMetadataRequest is to 
> provide information about the cluster, not mutate it. I can provide example 
> code if needed.





[jira] [Commented] (KAFKA-2887) TopicMetadataRequest creates topic if it does not exist

2015-11-24 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025843#comment-15025843
 ] 

Sriharsha Chintalapani commented on KAFKA-2887:
---

[~yipan] [~AWinterman] This issue is addressed in 
https://issues.apache.org/jira/browse/KAFKA-1507, which adds a create-topic 
request and makes topic creation an explicit producer-side request.

> TopicMetadataRequest creates topic if it does not exist
> ---
>
> Key: KAFKA-2887
> URL: https://issues.apache.org/jira/browse/KAFKA-2887
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.0
> Environment: Centos6, Java 1.7.0_75
>Reporter: Andrew Winterman
>Priority: Minor
>
> We wired up a probe HTTP endpoint to make TopicMetadataRequests with a 
> possible topic name. If no topic was found, we expected an empty response. 
> However, if we asked for the same topic twice, it would exist the second time!
> I think this is a bug because the purpose of the TopicMetadataRequest is to 
> provide information about the cluster, not mutate it. I can provide example 
> code if needed.





[jira] [Commented] (KAFKA-2834) kafka-merge-pr.py should run unit tests before pushing it to trunk

2015-11-13 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15004729#comment-15004729
 ] 

Sriharsha Chintalapani commented on KAFKA-2834:
---

[~guozhang] It's good to wait for the Jenkins job to finish, but it doesn't 
look like everyone follows that pattern. Adding a local unit-test check is good 
to have: it adds another level of checking, and it's better to run these tests 
than to break trunk.
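Since kafka-merge-pr.py is a Python script, the proposed local gate could look roughly like the sketch below. The `run_unit_tests` helper and the gradle command are illustrative assumptions, not the actual merge script:

```python
import subprocess

def run_unit_tests(cmd=("./gradlew", "test")):
    """Hypothetical pre-push gate for kafka-merge-pr.py: run the unit
    tests locally and report whether the merge should proceed."""
    try:
        result = subprocess.run(cmd, check=False)
    except FileNotFoundError:
        # A missing test runner counts as a failure rather than pushing blind.
        return False
    return result.returncode == 0

# The merge script would call this just before pushing to trunk:
#     if not run_unit_tests():
#         sys.exit("Unit tests failed; aborting merge to trunk.")
```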

> kafka-merge-pr.py should run unit tests before pushing it to trunk
> --
>
> Key: KAFKA-2834
> URL: https://issues.apache.org/jira/browse/KAFKA-2834
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.9.1.0
>
>






[jira] [Commented] (KAFKA-2834) kafka-merge-pr.py should run tests before pushing it to trunk

2015-11-13 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15004573#comment-15004573
 ] 

Sriharsha Chintalapani commented on KAFKA-2834:
---

[~guozhang] Unit tests, actually. I've seen trunk with compilation errors or 
test failures recently; having this check will help while merging.

> kafka-merge-pr.py should run tests before pushing it to trunk
> -
>
> Key: KAFKA-2834
> URL: https://issues.apache.org/jira/browse/KAFKA-2834
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 0.9.1.0
>
>






[jira] [Updated] (KAFKA-2834) kafka-merge-pr.py should run unit tests before pushing it to trunk

2015-11-13 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2834:
--
Summary: kafka-merge-pr.py should run unit tests before pushing it to trunk 
 (was: kafka-merge-pr.py should run tests before pushing it to trunk)

> kafka-merge-pr.py should run unit tests before pushing it to trunk
> --
>
> Key: KAFKA-2834
> URL: https://issues.apache.org/jira/browse/KAFKA-2834
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
> Fix For: 0.9.1.0
>
>






[jira] [Created] (KAFKA-2834) kafka-merge-pr.py should run tests before pushing it to trunk

2015-11-13 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-2834:
-

 Summary: kafka-merge-pr.py should run tests before pushing it to 
trunk
 Key: KAFKA-2834
 URL: https://issues.apache.org/jira/browse/KAFKA-2834
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.9.1.0








[jira] [Updated] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-09 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2788:
--
Assignee: Parth Brahmbhatt

> allow comma when specifying principals in AclCommand
> 
>
> Key: KAFKA-2788
> URL: https://issues.apache.org/jira/browse/KAFKA-2788
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.0.0
>
>
> Currently, commas don't seem to be allowed in AclCommand when specifying 
> allow-principals and deny-principals. However, when using SSL authentication, 
> by default the client principal will look like the following, and one can't 
> pass that in through AclCommand.
> "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"





[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-11-02 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14985681#comment-14985681
 ] 

Sriharsha Chintalapani commented on KAFKA-2441:
---

[~gwenshap] Please go ahead. Thanks.

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





[jira] [Commented] (KAFKA-2681) SASL authentication in official docs

2015-10-30 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983162#comment-14983162
 ] 

Sriharsha Chintalapani commented on KAFKA-2681:
---

[~gwenshap] Please go ahead. Thanks.

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





[jira] [Commented] (KAFKA-2681) SASL authentication in official docs

2015-10-30 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983043#comment-14983043
 ] 

Sriharsha Chintalapani commented on KAFKA-2681:
---

[~junrao] Here is the wiki 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-10-27 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976914#comment-14976914
 ] 

Sriharsha Chintalapani commented on KAFKA-2441:
---

[~granthenke] I am working on it. Thanks.

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-10-26 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974372#comment-14974372
 ] 

Sriharsha Chintalapani commented on KAFKA-1686:
---

[~junrao] Pretty much all services that use a KDC work like this. Although our 
socket connections are long-lived, in reality they don't stay connected 
forever. Removing someone from the KDC is possible, but that doesn't happen 
often. Even then, it would be good practice to remove that principal's ACLs.

> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>    Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.





[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-10-25 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973566#comment-14973566
 ] 

Sriharsha Chintalapani commented on KAFKA-1686:
---

[~junrao] Once the connection is established, we don't do SASL auth again; it 
only applies to new connections, i.e. if the Kerberos ticket is not renewed, we 
won't be able to establish a new connection. We don't invalidate an 
already-established SASL connection, and I don't see a reason to do so. If for 
any reason someone wants to un-authorize a session that's already established, 
they can do so via the Authorizer by removing its permissions. Can you give me 
the details of the use case you are looking at?

> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>    Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-23 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972339#comment-14972339
 ] 

Sriharsha Chintalapani commented on KAFKA-2644:
---

[~junrao] If possible we should use a full Kerberos KDC instead of MiniKDC. I 
am not sure how the ducktape tests run, but if they are using Vagrant, I have 
Vagrant Kerberos setup details here: https://github.com/harshach/kafka-vagrant 

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-23 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971107#comment-14971107
 ] 

Sriharsha Chintalapani commented on KAFKA-2675:
---

1. Decide on `serviceName` configuration: do we want to keep it in two places?
We should keep this in two places. Configuring serviceName in the JAAS file has 
been the way to go in all other projects; we only kept it in two places because 
of the IBM JDK.
3. Implement or remove SASL_KAFKA_SERVER_REALM config
This is required on the client side. It's a very common scenario for the 
server/broker to be in one realm and the clients in another; in that case the 
clients need to configure the server realm. By default we use the client's 
realm to connect to the server.
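For context, the serviceName discussed in item 1 is an entry in the client's JAAS login configuration; a minimal hedged sketch (the keytab path and principal are placeholders):

```
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/path/to/client.keytab"
    principal="client@EXAMPLE.COM"
    serviceName="kafka";
};
```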

> SASL/Kerberos follow-up
> ---
>
> Key: KAFKA-2675
> URL: https://issues.apache.org/jira/browse/KAFKA-2675
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up to KAFKA-1686. 
> 1. Decide on `serviceName` configuration: do we want to keep it in two places?
> 2. auth.to.local config name is a bit opaque, is there a better one?
> 3. Implement or remove SASL_KAFKA_SERVER_REALM config
> 4. Consider making Login's thread a daemon thread
> 5. Write test that shows authentication failure due to invalid user
> 6. Write test that shows authentication failure due to wrong password
> 7. Write test that shows authentication failure due ticket expiring





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-21 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14967569#comment-14967569
 ] 

Sriharsha Chintalapani commented on KAFKA-2644:
---

[~rsivaram] As long as you can create valid keytabs it should be fine, and with 
MiniKDC we do that already.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Assigned] (KAFKA-2681) SASL authentication in official docs

2015-10-21 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-2681:
-

Assignee: Sriharsha Chintalapani

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>    Assignee: Sriharsha Chintalapani
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-21 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966892#comment-14966892
 ] 

Sriharsha Chintalapani commented on KAFKA-2644:
---

[~rsivaram] We already have SaslTestHarness.scala, which starts MiniKDC. If you 
want to run a full KDC, you can look at the Vagrant setup.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-10-20 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966059#comment-14966059
 ] 

Sriharsha Chintalapani commented on KAFKA-1686:
---

[~junrao] working on it. I'll post it on the wiki.

> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>Reporter: Jay Kreps
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.





[jira] [Updated] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-10-19 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2472:
--
Assignee: Ismael Juma  (was: Sriharsha Chintalapani)

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up fix on kafka-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)





[jira] [Commented] (KAFKA-2472) Fix kafka ssl configs to not throw warnings

2015-10-19 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963213#comment-14963213
 ] 

Sriharsha Chintalapani commented on KAFKA-2472:
---

[~ijuma] No worries. Take it over.

> Fix kafka ssl configs to not throw warnings
> ---
>
> Key: KAFKA-2472
> URL: https://issues.apache.org/jira/browse/KAFKA-2472
> Project: Kafka
>  Issue Type: Bug
>    Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
> Fix For: 0.9.0.0
>
>
> This is a follow-up fix on kafka-1690.
> [2015-08-25 18:20:48,236] WARN The configuration ssl.truststore.password = 
> striker was supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)
> [2015-08-25 18:20:48,236] WARN The configuration security.protocol = SSL was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.producer.ProducerConfig)





[jira] [Commented] (KAFKA-2456) Disable SSLv3 for ssl.enabledprotocols config on client & broker side

2015-10-16 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14961293#comment-14961293
 ] 

Sriharsha Chintalapani commented on KAFKA-2456:
---

[~benstopford] please go for it.

> Disable SSLv3 for ssl.enabledprotocols config on client & broker side
> -
>
> Key: KAFKA-2456
> URL: https://issues.apache.org/jira/browse/KAFKA-2456
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Sriharsha Chintalapani
>    Assignee: Sriharsha Chintalapani
> Fix For: 0.9.0.0
>
>
> This is a follow-up on KAFKA-1690. Currently users have the option to pass in 
> SSLv3; we should not allow this, as it is deprecated.
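A hedged sketch of the restricted setting (the exact protocol list depends on the JVM and the version shipped):

```properties
# Allow only TLS; SSLv3 is rejected even if a user tries to enable it.
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
```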





[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password

2015-10-14 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957801#comment-14957801
 ] 

Sriharsha Chintalapani commented on KAFKA-2629:
---

[~singhashish] I still disagree with this:
"That being said, running an executable is a little different than ordinary 
file access. However I don't believe that it adds risk to the overall security 
of the application."
It does add risk to the overall security of the application.

If others want this feature, I am OK with it as long as it is an optional 
config rather than the default behavior.

> Enable getting SSL password from an executable rather than passing plaintext 
> password
> -
>
> Key: KAFKA-2629
> URL: https://issues.apache.org/jira/browse/KAFKA-2629
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently there are a couple of options to pass SSL passwords to Kafka, i.e., 
> via properties file or via command line argument. Both of these are not 
> recommended security practices.
> * A password on a command line is a no-no: it's trivial to see that password 
> just by using the 'ps' utility.
> * Putting a password into a file, and then passing the location to that file, 
> is the next best option. The access to the file will be governed by unix 
> access permissions which we all know and love. The downside is that the 
> password is still just sitting there in a file, and those who have access can 
> still see it trivially.
> * The most general, secure solution is to provide a layer of abstraction: 
> provide functionality to get the password from "somewhere else".  The most 
> flexible and generic way to do this is to simply call an executable which 
> returns the desired password. 
> ** The executable is again protected with normal file system privileges
> ** The simplest form, a script that looks like "echo 'my-password'", devolves 
> back to putting the password in a file
> ** A more interesting implementation could open up a local encrypted password 
> store and extract the password from it
> ** A maximally secure implementation could contact an external secret manager 
> with centralized control and audit functionality.
> ** In short: getting the password as the output of a script/executable is 
> maximally generic and enables both simple and complex use cases.
> This JIRA intends to add a config param to enable passing an executable to 
> Kafka for SSL passwords.
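As a hedged sketch of the mechanism described above (the helper name and the config it would back are illustrative, not the actual patch), the client would run the configured command and read its standard output as the password:

```python
import shlex
import subprocess

def password_from_command(command):
    """Hypothetical resolver for an executable-backed password config:
    run the command and use its first line of output as the password."""
    result = subprocess.run(shlex.split(command),
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("password command failed: %s" % command)
    # Strip the trailing newline that echo-style commands emit.
    return result.stdout.splitlines()[0]

# The simplest form ("echo 'my-password'") devolves to a file; a real
# deployment would call a local password store or a secret manager instead.
```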





[jira] [Comment Edited] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password

2015-10-09 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14950932#comment-14950932
 ] 

Sriharsha Chintalapani edited comment on KAFKA-2629 at 10/9/15 6:30 PM:


[~singhashish] Distributing ssl.properties along with a plaintext password has 
been a common way of doing things; Hadoop does this as well. And not just for 
SSL: in the case of Kerberos you depend on file system permissions to keep the 
keytabs secure. I don't see the ssl properties file as any different from 
keystore file permissions.
Honestly, I have never seen any system do this for SSL. Why do you think 
filesystem permissions don't suffice, and do you have an example of anyone 
else doing this?

In your proposal you say the executable is also protected by the same file 
system permissions, so how does it provide any additional security?


was (Author: sriharsha):
[~singhashish] Distributing ssl.properties along with a plaintext password has 
been a common way of doing things; Hadoop does this as well. It is not just 
SSL: in the case of Kerberos you also depend on file system permissions to 
keep keytabs secure. I don't see how permissions on an ssl.properties file 
are any different from permissions on a keystore file.
Honestly, I have never seen any system do this for SSL. Why do you think 
filesystem permissions do not suffice, and do you have an example of anyone 
else doing this?

> Enable getting SSL password from an executable rather than passing plaintext 
> password
> -
>
> Key: KAFKA-2629
> URL: https://issues.apache.org/jira/browse/KAFKA-2629
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently there are a couple of options to pass SSL passwords to Kafka, i.e., 
> via properties file or via command line argument. Both of these are not 
> recommended security practices.
> * A password on a command line is a no-no: it's trivial to see that password 
> just by using the 'ps' utility.
> * Putting a password into a file, and then passing the location to that file, 
> is the next best option. The access to the file will be governed by unix 
> access permissions which we all know and love. The downside is that the 
> password is still just sitting there in a file, and those who have access can 
> still see it trivially.
> * The most general, secure solution is to provide a layer of abstraction: 
> provide functionality to get the password from "somewhere else".  The most 
> flexible and generic way to do this is to simply call an executable which 
> returns the desired password. 
> ** The executable is again protected with normal file system privileges
> ** The simplest form, a script that looks like "echo 'my-password'", devolves 
> back to putting the password in a file
> ** A more interesting implementation could open up a local encrypted password 
> store and extract the password from it
> ** A maximally secure implementation could contact an external secret manager 
> with centralized control and audit functionality.
> ** In short: getting the password as the output of a script/executable is 
> maximally generic and enables both simple and complex use cases.
> This JIRA intends to add a config param to enable passing an executable to 
> Kafka for SSL passwords.





[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password

2015-10-09 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14950932#comment-14950932
 ] 

Sriharsha Chintalapani commented on KAFKA-2629:
---

[~singhashish] Distributing ssl.properties along with a plaintext password has 
been a common way of doing things; Hadoop does this as well. It is not just 
SSL: in the case of Kerberos you also depend on file system permissions to 
keep keytabs secure. I don't see how permissions on an ssl.properties file 
are any different from permissions on a keystore file.
Honestly, I have never seen any system do this for SSL. Why do you think 
filesystem permissions do not suffice, and do you have an example of anyone 
else doing this?

> Enable getting SSL password from an executable rather than passing plaintext 
> password
> -
>
> Key: KAFKA-2629
> URL: https://issues.apache.org/jira/browse/KAFKA-2629
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently there are a couple of options to pass SSL passwords to Kafka, i.e., 
> via properties file or via command line argument. Both of these are not 
> recommended security practices.
> * A password on a command line is a no-no: it's trivial to see that password 
> just by using the 'ps' utility.
> * Putting a password into a file, and then passing the location to that file, 
> is the next best option. The access to the file will be governed by unix 
> access permissions which we all know and love. The downside is that the 
> password is still just sitting there in a file, and those who have access can 
> still see it trivially.
> * The most general, secure solution is to provide a layer of abstraction: 
> provide functionality to get the password from "somewhere else".  The most 
> flexible and generic way to do this is to simply call an executable which 
> returns the desired password. 
> ** The executable is again protected with normal file system privileges
> ** The simplest form, a script that looks like "echo 'my-password'", devolves 
> back to putting the password in a file
> ** A more interesting implementation could open up a local encrypted password 
> store and extract the password from it
> ** A maximally secure implementation could contact an external secret manager 
> with centralized control and audit functionality.
> ** In short: getting the password as the output of a script/executable is 
> maximally generic and enables both simple and complex use cases.
> This JIRA intends to add a config param to enable passing an executable to 
> Kafka for SSL passwords.





[jira] [Commented] (KAFKA-2609) SSL renegotiation code paths need more tests

2015-10-06 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944993#comment-14944993
 ] 

Sriharsha Chintalapani commented on KAFKA-2609:
---

Yes [~ijuma]. Even after we get this in, I would keep it optional rather than 
turning it on by default.

> SSL renegotiation code paths need more tests
> 
>
> Key: KAFKA-2609
> URL: https://issues.apache.org/jira/browse/KAFKA-2609
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> If renegotiation is triggered when read interest is off, at the moment it 
> looks like read interest is never turned back on. More unit tests are 
> required to test different renegotiation scenarios since these are much 
> harder to exercise in system tests.
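The failure mode described above (read interest turned off during renegotiation and never restored) comes down to selector interest-op bookkeeping. A small self-contained sketch of the restore step, using a `Pipe` as a stand-in for the real socket channel; the class and comments are illustrative, not Kafka's actual selector code:

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class ReadInterestRestore {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        SelectionKey key = pipe.source().register(selector, SelectionKey.OP_READ);

        // A renegotiating handshake typically turns read interest off...
        key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);

        // ...and the bug class described above is forgetting this restore
        // once the handshake completes, leaving the channel deaf to reads.
        key.interestOps(key.interestOps() | SelectionKey.OP_READ);

        System.out.println("readInterest="
                + ((key.interestOps() & SelectionKey.OP_READ) != 0));
        selector.close();
    }
}
```

A unit test for the renegotiation path would assert exactly this: after the handshake completes, `OP_READ` is set again on every key that had it before.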





[jira] [Commented] (KAFKA-2609) SSL renegotiation code paths need more tests

2015-10-06 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944938#comment-14944938
 ] 

Sriharsha Chintalapani commented on KAFKA-2609:
---

[~rsivaram] [~ijuma] Do we need to release this as part of 0.9.0? When the SSL 
patch went in, we decided to revisit renegotiation as part of the next release. 
Also, let's make this optional, i.e. turned off by default; I don't see many 
users using weak crypto.

> SSL renegotiation code paths need more tests
> 
>
> Key: KAFKA-2609
> URL: https://issues.apache.org/jira/browse/KAFKA-2609
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> If renegotiation is triggered when read interest is off, at the moment it 
> looks like read interest is never turned back on. More unit tests are 
> required to test different renegotiation scenarios since these are much 
> harder to exercise in system tests.





[jira] [Resolved] (KAFKA-2587) Transient test failure: `SimpleAclAuthorizerTest`

2015-09-29 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-2587.
---
   Resolution: Fixed
Fix Version/s: 0.9.0.0

> Transient test failure: `SimpleAclAuthorizerTest`
> -
>
> Key: KAFKA-2587
> URL: https://issues.apache.org/jira/browse/KAFKA-2587
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.0.0
>
>
> I've seen `SimpleAclAuthorizerTest` fail a couple of times since its recent 
> introduction. Here's one such build:
> https://builds.apache.org/job/kafka-trunk-git-pr/576/console
> [~parth.brahmbhatt], can you please take a look and see if it's an easy fix?





[jira] [Commented] (KAFKA-1804) Kafka network thread lacks top exception handler

2015-09-28 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934173#comment-14934173
 ] 

Sriharsha Chintalapani commented on KAFKA-1804:
---

[~olindaspider] That sounds right to me. Are you planning on sending a patch?

> Kafka network thread lacks top exception handler
> 
>
> Key: KAFKA-1804
> URL: https://issues.apache.org/jira/browse/KAFKA-1804
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Oleg Golovin
>Priority: Critical
>
> We have faced the problem that some Kafka network threads may fail, so that 
> jstack attached to the Kafka process showed fewer threads than we had defined 
> in our Kafka configuration. This leaves API requests processed by those 
> threads stuck without a response.
> There were no error messages in the log regarding thread failure.
> We have examined Kafka code to find out there is no top try-catch block in 
> the network thread code, which could at least log possible errors.
> Could you add a top-level try-catch block for the network thread, which 
> should at least log, and ideally recover, the thread in case of an exception?
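A sketch of the top-level handler the reporter asks for (class and method names are illustrative, not Kafka's actual network-thread code): the catch-all makes a dying thread visible in the logs instead of silent.

```java
public class GuardedNetworkThread implements Runnable {
    @Override
    public void run() {
        try {
            pollLoop();
        } catch (Throwable t) {
            // Top-level handler: without this, the thread dies silently and
            // requests routed to it hang with no trace in the logs.
            System.out.println("network thread terminated: " + t.getMessage());
        }
    }

    // Stand-in for the selector poll loop; fails to demonstrate the handler.
    static void pollLoop() {
        throw new RuntimeException("simulated selector failure");
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(new GuardedNetworkThread());
        t.start();
        t.join();
    }
}
```

Actual recovery (restarting the thread or failing fast) is a policy choice; the essential part is that the failure is caught and logged at all.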





[jira] [Comment Edited] (KAFKA-1686) Implement SASL/Kerberos

2015-09-22 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903836#comment-14903836
 ] 

Sriharsha Chintalapani edited comment on KAFKA-1686 at 9/23/15 2:27 AM:


[~junrao] Need 2 days, will update the PR. Sorry for the delay.


was (Author: sriharsha):
[~junrao] need a 2 days will update the pr. Sorry for the delay.

> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>Reporter: Jay Kreps
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.
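The challenge/response cycle that the proposed SASLRequest/SASLResponse byte[] fields would carry can be exercised locally with the JDK's SASL API. This sketch uses CRAM-MD5 purely because the JDK ships both client and server factories for it; it is not the GSSAPI/Kerberos setup this issue targets, and the identity values are made up:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.AuthorizeCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslServer;

public class SaslCycleDemo {
    public static void main(String[] args) throws Exception {
        // One handler for both sides: supplies a demo identity and approves
        // authorization. A real broker would look the password up securely.
        CallbackHandler handler = (Callback[] cbs) -> {
            for (Callback cb : cbs) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName("alice");
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                } else if (cb instanceof AuthorizeCallback) {
                    ((AuthorizeCallback) cb).setAuthorized(true);
                }
            }
        };

        SaslServer server = Sasl.createSaslServer(
                "CRAM-MD5", "kafka", "localhost", null, handler);
        SaslClient client = Sasl.createSaslClient(
                new String[] {"CRAM-MD5"}, null, "kafka", "localhost", null, handler);

        // Each byte[] below is what one SASLRequest/SASLResponse round trip
        // would carry as its single opaque field.
        byte[] challenge = server.evaluateResponse(new byte[0]); // server's nonce
        byte[] response = client.evaluateChallenge(challenge);   // client's proof
        server.evaluateResponse(response);                       // server verifies

        System.out.println("authenticated="
                + (server.isComplete() && client.isComplete()));
    }
}
```

The loop shape (alternate evaluateResponse/evaluateChallenge until both sides report complete) is the same regardless of mechanism, which is why a single opaque byte[] field per message suffices for the protocol.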





[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-09-22 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903836#comment-14903836
 ] 

Sriharsha Chintalapani commented on KAFKA-1686:
---

[~junrao] Need 2 days, will update the PR. Sorry for the delay.

> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>Reporter: Jay Kreps
>    Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.





  1   2   3   4   5   6   7   8   9   10   >