[jira] [Commented] (KAFKA-4715) Consumer/Producer config does not work with related enums

2017-02-06 Thread Bryan Baugher (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15854193#comment-15854193
 ] 

Bryan Baugher commented on KAFKA-4715:
--

If that's the plan, we can just use the existing name field on CompressionType.
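
For illustration, a minimal sketch of that workaround (this assumes the public lowercase 'name' field on CompressionType, as opposed to the enum's name() method):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.record.CompressionType;

Properties properties = new Properties();
// CompressionType.SNAPPY.name() returns the enum constant "SNAPPY", which
// the config parser rejects; the 'name' field holds the lowercase value
// "snappy" that the parser expects.
properties.setProperty(ProducerConfig.COMPRESSION_TYPE_CONFIG,
    CompressionType.SNAPPY.name);
{code}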

> Consumer/Producer config does not work with related enums
> -
>
> Key: KAFKA-4715
> URL: https://issues.apache.org/jira/browse/KAFKA-4715
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bryan Baugher
>Priority: Minor
>
> We have some code that builds producer/consumer config and sometimes uses the 
> related enum like this,
> {code}
> Properties properties = new Properties();
> properties.setProperty(ProducerConfig.COMPRESSION_TYPE_CONFIG, 
> CompressionType.SNAPPY.name());
> ...
> Producer producer = new KafkaProducer(properties);
> {code}
> We get,
> {code}
> org.apache.kafka.common.KafkaException: Failed to construct kafka producer
> ...
> Caused by: java.lang.IllegalArgumentException: Unknown compression name: 
> SNAPPY
> {code}
> We've seen the same for others like ConsumerConfig.AUTO_OFFSET_RESET_CONFIG 
> and its OffsetResetStrategy enum.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4013) SaslServerCallbackHandler should include cause for exception

2016-08-02 Thread Bryan Baugher (JIRA)
Bryan Baugher created KAFKA-4013:


 Summary: SaslServerCallbackHandler should include cause for 
exception
 Key: KAFKA-4013
 URL: https://issues.apache.org/jira/browse/KAFKA-4013
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Bryan Baugher


SaslServerCallbackHandler can throw an exception when setting the authorized ID 
for a user[1], but it does not include the cause, which makes it hard to debug.

[1] - 
https://github.com/apache/kafka/blob/0.10.0.0/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslServerCallbackHandler.java#L92
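
For example, a minimal sketch of the suggested change (the callback, variable names, and message here are placeholders, not the actual Kafka code):

{code}
try {
    authCallback.setAuthorizedID(authorizedId);
} catch (Exception e) {
    // Chain the original exception as the cause instead of discarding it,
    // so the resulting stack trace shows what actually failed.
    throw new KafkaException("Failed to set authorized ID: " + authorizedId, e);
}
{code}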



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4012) KerberosShortNamer should implement toString()

2016-08-02 Thread Bryan Baugher (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15404741#comment-15404741
 ] 

Bryan Baugher commented on KAFKA-4012:
--

Pull request: https://github.com/apache/kafka/pull/1694

> KerberosShortNamer should implement toString()
> --
>
> Key: KAFKA-4012
> URL: https://issues.apache.org/jira/browse/KAFKA-4012
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Bryan Baugher
>
> KerberosShortNamer throws an exception whose message uses toString()[1], but 
> toString() is not implemented, so the message doesn't provide much value
> [1] - 
> https://github.com/apache/kafka/blob/0.10.0.0/clients/src/main/java/org/apache/kafka/common/security/kerberos/KerberosShortNamer.java#L98



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4012) KerberosShortNamer should implement toString()

2016-08-02 Thread Bryan Baugher (JIRA)
Bryan Baugher created KAFKA-4012:


 Summary: KerberosShortNamer should implement toString()
 Key: KAFKA-4012
 URL: https://issues.apache.org/jira/browse/KAFKA-4012
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Bryan Baugher


KerberosShortNamer throws an exception whose message uses toString()[1], but 
toString() is not implemented, so the message doesn't provide much value

[1] - 
https://github.com/apache/kafka/blob/0.10.0.0/clients/src/main/java/org/apache/kafka/common/security/kerberos/KerberosShortNamer.java#L98
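
A minimal sketch of what an override might look like (the field name is assumed here purely for illustration):

{code}
@Override
public String toString() {
    // Surface the configured principal-to-local rules so that exception
    // messages embedding this object are actually informative.
    return "KerberosShortNamer(principalToLocalRules=" + principalToLocalRules + ")";
}
{code}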



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3345) ProducerResponse could gracefully handle no throttle time provided

2016-03-10 Thread Bryan Baugher (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189757#comment-15189757
 ] 

Bryan Baugher commented on KAFKA-3345:
--

Looks like this is no longer as easy with the addition of KAFKA-3025. 

> ProducerResponse could gracefully handle no throttle time provided
> --
>
> Key: KAFKA-3345
> URL: https://issues.apache.org/jira/browse/KAFKA-3345
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bryan Baugher
>Priority: Minor
>
> When doing some compatibility testing between Kafka 0.8 and 0.9, I found that 
> the old producer using 0.9 libraries could write to a cluster running 0.8 if 
> 'request.required.acks' was set to 0. If it was set to anything else, it would 
> fail with,
> {code}
> java.nio.BufferUnderflowException
>   at java.nio.Buffer.nextGetIndex(Buffer.java:506) 
>   at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
>   at kafka.api.ProducerResponse$.readFrom(ProducerResponse.scala:41) 
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:109) 
> {code}
> In 0.9 there was a one-line change to the response here[1] to look for a 
> throttle time value in the response. It seems that if the 0.9 code gracefully 
> handled the throttle time not being provided, this would work. Would you be 
> open to this change?
> [1] - 
> https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/api/ProducerResponse.scala#L41



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3345) ProducerResponse could gracefully handle no throttle time provided

2016-03-07 Thread Bryan Baugher (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Baugher updated KAFKA-3345:
-
Description: 
When doing some compatibility testing between Kafka 0.8 and 0.9, I found that 
the old producer using 0.9 libraries could write to a cluster running 0.8 if 
'request.required.acks' was set to 0. If it was set to anything else, it would 
fail with,

{code}
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506) 
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
at kafka.api.ProducerResponse$.readFrom(ProducerResponse.scala:41) 
at kafka.producer.SyncProducer.send(SyncProducer.scala:109) 
{code}

In 0.9 there was a one-line change to the response here[1] to look for a 
throttle time value in the response. It seems that if the 0.9 code gracefully 
handled the throttle time not being provided, this would work. Would you be 
open to this change?

[1] - 
https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/api/ProducerResponse.scala#L41

  was:
When doing some compatibility testing between Kafka 0.8 and 0.9, I found that 
the old producer using 0.9 libraries could write to a cluster running 0.8 if 
'request.required.acks' was set to 0. If it was set to anything else, it would 
fail with,

{code}
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506) 
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
at kafka.api.ProducerResponse$.readFrom(ProducerResponse.scala:41) 
at kafka.producer.SyncProducer.send(SyncProducer.scala:109) 
{code}

In 0.9 there was a one-line change to the response here[1] to look for a 
throttle time value in the response. It seems that if the 0.9 code gracefully 
handled the throttle time not being provided, this would work. Would you be 
open to this change?


> ProducerResponse could gracefully handle no throttle time provided
> --
>
> Key: KAFKA-3345
> URL: https://issues.apache.org/jira/browse/KAFKA-3345
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bryan Baugher
>Priority: Minor
>
> When doing some compatibility testing between Kafka 0.8 and 0.9, I found that 
> the old producer using 0.9 libraries could write to a cluster running 0.8 if 
> 'request.required.acks' was set to 0. If it was set to anything else, it would 
> fail with,
> {code}
> java.nio.BufferUnderflowException
>   at java.nio.Buffer.nextGetIndex(Buffer.java:506) 
>   at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
>   at kafka.api.ProducerResponse$.readFrom(ProducerResponse.scala:41) 
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:109) 
> {code}
> In 0.9 there was a one-line change to the response here[1] to look for a 
> throttle time value in the response. It seems that if the 0.9 code gracefully 
> handled the throttle time not being provided, this would work. Would you be 
> open to this change?
> [1] - 
> https://github.com/apache/kafka/blob/0.9.0.1/core/src/main/scala/kafka/api/ProducerResponse.scala#L41



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3345) ProducerResponse could gracefully handle no throttle time provided

2016-03-07 Thread Bryan Baugher (JIRA)
Bryan Baugher created KAFKA-3345:


 Summary: ProducerResponse could gracefully handle no throttle time 
provided
 Key: KAFKA-3345
 URL: https://issues.apache.org/jira/browse/KAFKA-3345
 Project: Kafka
  Issue Type: Improvement
Reporter: Bryan Baugher
Priority: Minor


When doing some compatibility testing between Kafka 0.8 and 0.9, I found that 
the old producer using 0.9 libraries could write to a cluster running 0.8 if 
'request.required.acks' was set to 0. If it was set to anything else, it would 
fail with,

{code}
java.nio.BufferUnderflowException
at java.nio.Buffer.nextGetIndex(Buffer.java:506) 
at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:361) 
at kafka.api.ProducerResponse$.readFrom(ProducerResponse.scala:41) 
at kafka.producer.SyncProducer.send(SyncProducer.scala:109) 
{code}

In 0.9 there was a one-line change to the response here[1] to look for a 
throttle time value in the response. It seems that if the 0.9 code gracefully 
handled the throttle time not being provided, this would work. Would you be 
open to this change?
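
For illustration, a sketch of the graceful handling being suggested, written against a Java ByteBuffer (the actual response parsing is Scala; this only shows the idea):

{code}
import java.nio.ByteBuffer;

static int readThrottleTimeMs(ByteBuffer buffer) {
    // An 0.8 broker's response ends before the throttle-time field, so
    // default to 0 instead of throwing BufferUnderflowException.
    return buffer.remaining() >= 4 ? buffer.getInt() : 0;
}
{code}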



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2172) Round-robin partition assignment strategy too restrictive

2015-05-06 Thread Bryan Baugher (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530767#comment-14530767
 ] 

Bryan Baugher commented on KAFKA-2172:
--

I originally asked about this on the mailing list[1], but I'm also having 
trouble with the round-robin partitioning because of its requirements. Similar 
to the above, it makes deployments difficult: when changing our topic 
subscriptions, the consumer group stops consuming messages. In our case, our 
consumers build their topic subscriptions from config they retrieve regularly 
from a REST service. Every consumer should have the same topic subscription, 
except when the config changes and there is some lag before all consumers 
retrieve the new config.

Would you be open to a patch that provides another assignor which takes a 
simpler approach and just assigns each partition to the interested consumer 
with the fewest partitions currently assigned (see the sketch below)? This 
would not produce the optimal solution when topic subscriptions are not equal, 
but it should generally do fine, and it should come up with the same answer as 
the round-robin assignor when they are.

[1] - 
http://mail-archives.apache.org/mod_mbox/kafka-users/201505.mbox/%3CCANZ-JHE6TRf%2BHdT-%3DK9AKFVXasLjg445cmcRVEBi5tG93XTNqA%40mail.gmail.com%3E
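
A rough sketch of the proposed assignment loop (a hypothetical standalone helper, not the actual consumer API):

{code}
import java.util.*;
import org.apache.kafka.common.TopicPartition;

static Map<String, List<TopicPartition>> assign(
        Map<String, Set<String>> subscriptions,  // consumer id -> subscribed topics
        Map<String, Integer> partitionsPerTopic) {
    Map<String, List<TopicPartition>> assignment = new HashMap<>();
    for (String consumer : subscriptions.keySet())
        assignment.put(consumer, new ArrayList<>());
    for (Map.Entry<String, Integer> entry : partitionsPerTopic.entrySet()) {
        String topic = entry.getKey();
        for (int partition = 0; partition < entry.getValue(); partition++) {
            // Give the partition to the subscribed consumer that currently
            // holds the fewest partitions overall.
            String least = null;
            for (String consumer : subscriptions.keySet()) {
                if (!subscriptions.get(consumer).contains(topic))
                    continue;
                if (least == null
                        || assignment.get(consumer).size() < assignment.get(least).size())
                    least = consumer;
            }
            if (least != null)
                assignment.get(least).add(new TopicPartition(topic, partition));
        }
    }
    return assignment;
}
{code}

Unlike the round-robin assignor, this puts no identical-subscription requirement on the group; uneven subscriptions just yield a slightly less balanced, but still valid, assignment.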

> Round-robin partition assignment strategy too restrictive
> -
>
> Key: KAFKA-2172
> URL: https://issues.apache.org/jira/browse/KAFKA-2172
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Rosenberg
>
> The round-robin partition assignment strategy was introduced for the 
> high-level consumer starting with 0.8.2.1.  This appears to be a very 
> attractive feature, but it has an unfortunate restriction which prevents it 
> from being easily utilized: it requires that all consumers in the 
> consumer group have identical topic regex selectors, and that they have the 
> same number of consumer threads.
> It turns out this is not always the case for our deployments.  It's not 
> unusual to run multiple consumers within a single process (with different 
> topic selectors), or we might have multiple processes dedicated for different 
> topic subsets.  Agreed, we could change these to have separate group ids for 
> each sub topic selector (but unfortunately, that's easier said than done).  
> In several cases, we do at least have separate client.ids set for each 
> sub-consumer, so it would be incrementally better if we could at least loosen 
> the requirement such that each set of topics selected by a groupid/clientid 
> pair are the same.
> But, if we want to do a rolling restart for a new version of a consumer 
> config, the cluster will likely be in a state where it's not possible to have 
> a single config until the full rolling restart completes across all nodes.  
> This results in a consumer outage while the rolling restart is happening.
> Finally, it's especially problematic if we want to canary a new version for a 
> period before rolling to the whole cluster.
> I'm not sure why this restriction should exist (as it obviously does not 
> exist for the 'range' assignment strategy).  It seems it could be made to 
> work reasonably well with heterogeneous topic selection and heterogeneous 
> thread counts.  The documentation states that "The round-robin partition 
> assignor lays out all the available partitions and all the available consumer 
> threads. It then proceeds to do a round-robin assignment from partition to 
> consumer thread."
> If the assignor can "lay out all the available partitions and all the 
> available consumer threads", it should be able to uniformly assign partitions 
> to the available threads.  In each case, if a thread belongs to a consumer 
> that doesn't have that partition selected, just move to the next available 
> thread that does have the selection, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2114) Unable to change min.insync.replicas default

2015-04-13 Thread Bryan Baugher (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Baugher updated KAFKA-2114:
-
Description: 
Following the comment here[1], I was unable to change the min.insync.replicas 
default value. I tested this by setting up a 3-node cluster, writing to a topic 
with a replication factor of 3 using request.required.acks=-1, and setting 
min.insync.replicas=2 in the broker's server.properties. I then shut down 2 
brokers but was still able to write successfully. Only after running the alter 
topic command to set min.insync.replicas=2 on the topic did I see write 
failures.

[1] - 
http://mail-archives.apache.org/mod_mbox/kafka-users/201504.mbox/%3CCANZ-JHF71yqKE6%2BKKhWe2EGUJv6R3bTpoJnYck3u1-M35sobgg%40mail.gmail.com%3E

  was:Following the comment here[1], I was unable to change the 
min.insync.replicas default value. I tested this by setting up a 3-node 
cluster, writing to a topic with a replication factor of 3 using 
request.required.acks=-1, and setting min.insync.replicas=2 in the broker's 
server.properties. I then shut down 2 brokers but was still able to write 
successfully. Only after running the alter topic command to set 
min.insync.replicas=2 on the topic did I see write failures.


> Unable to change min.insync.replicas default
> 
>
> Key: KAFKA-2114
> URL: https://issues.apache.org/jira/browse/KAFKA-2114
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bryan Baugher
>Assignee: Gwen Shapira
> Fix For: 0.8.2.1
>
>
> Following the comment here[1], I was unable to change the min.insync.replicas 
> default value. I tested this by setting up a 3-node cluster, writing to a 
> topic with a replication factor of 3 using request.required.acks=-1, and 
> setting min.insync.replicas=2 in the broker's server.properties. I then shut 
> down 2 brokers but was still able to write successfully. Only after running 
> the alter topic command to set min.insync.replicas=2 on the topic did I see 
> write failures.
> [1] - 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201504.mbox/%3CCANZ-JHF71yqKE6%2BKKhWe2EGUJv6R3bTpoJnYck3u1-M35sobgg%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2114) Unable to change min.insync.replicas default

2015-04-11 Thread Bryan Baugher (JIRA)
Bryan Baugher created KAFKA-2114:


 Summary: Unable to change min.insync.replicas default
 Key: KAFKA-2114
 URL: https://issues.apache.org/jira/browse/KAFKA-2114
 Project: Kafka
  Issue Type: Bug
Reporter: Bryan Baugher
 Fix For: 0.8.2.1


Following the comment here[1], I was unable to change the min.insync.replicas 
default value. I tested this by setting up a 3-node cluster, writing to a topic 
with a replication factor of 3 using request.required.acks=-1, and setting 
min.insync.replicas=2 in the broker's server.properties. I then shut down 2 
brokers but was still able to write successfully. Only after running the alter 
topic command to set min.insync.replicas=2 on the topic did I see write 
failures.
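
For reference, a sketch of the two places the setting was applied in this test (the ZooKeeper address and topic name are illustrative):

{code}
# broker server.properties: the cluster-wide default, which did not
# appear to take effect in this test
min.insync.replicas=2

# topic-level override via the alter command: only this produced the
# expected write failures
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic test \
  --config min.insync.replicas=2
{code}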



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1189) kafka-server-stop.sh doesn't stop broker

2013-12-19 Thread Bryan Baugher (JIRA)
Bryan Baugher created KAFKA-1189:


 Summary: kafka-server-stop.sh doesn't stop broker
 Key: KAFKA-1189
 URL: https://issues.apache.org/jira/browse/KAFKA-1189
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.8.0
 Environment: RHEL 6.4 64bit, Java 6u35
Reporter: Bryan Baugher
Priority: Minor


Just before the 0.8.0 release, this commit[1] changed the signal in the 
kafka-server-stop.sh script from SIGTERM to SIGINT. This doesn't seem to stop 
the broker. Changing it back to SIGTERM (or -15) fixes the issue.

[1] - 
https://github.com/apache/kafka/commit/51de7c55d2b3107b79953f401fc8c9530bd0eea0
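
For context, the script boils down to a single kill line; a sketch of the fix (the pipeline is paraphrased from the 0.8.0 script, not quoted verbatim):

{code}
# kafka-server-stop.sh: the broker JVM ignores SIGINT when running
# non-interactively; SIGTERM triggers its shutdown hooks and stops it.
ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep \
  | awk '{print $1}' | xargs kill -SIGTERM
{code}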



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)