[jira] [Commented] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-09-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187287#comment-16187287
 ] 

Manikumar commented on KAFKA-5734:
--

[~jangbora] Looking at the metrics above, they are per-client-id quota metrics. 
Inactive quota metrics are deleted after one hour, so it looks like you are 
creating many producer instances within an hour. As [~huxi_2b] mentioned, you 
can use a single producer instance across multiple threads, (or) reuse the 
client.id string, (or) if you are not using quotas, disable the quota configs.

Please close the JIRA if one of the above approaches resolves your issue.
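The single-shared-producer suggestion can be sketched as a lazily initialized 
holder. This is a hypothetical plain-Java sketch: in a real application the 
held object would be an org.apache.kafka.clients.producer.KafkaProducer, which 
is documented as safe for use by multiple threads.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical holder ensuring all threads share one producer instance
// instead of each creating its own (which is what leaks quota metrics).
class SharedProducerHolder<T> {
    private final AtomicReference<T> ref = new AtomicReference<>();
    private final Supplier<T> factory;

    SharedProducerHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    T get() {
        T current = ref.get();
        if (current == null) {
            // First caller wins; a racing creation is simply discarded.
            ref.compareAndSet(null, factory.get());
            current = ref.get();
        }
        return current;
    }
}
```

All application threads would then call `get()` on one holder rather than 
constructing their own producer.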

> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx, jconsole.png
>
>
> I set up a Kafka server on Ubuntu with 4GB RAM.
> Heap (old generation space) size is increasing gradually, as shown in the 
> attached Excel file, which recorded GC info at 1-minute intervals.
> Finally OU occupies 2.6GB and GC spends too much time (and an out-of-memory 
> exception occurs).
> The Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-09-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187287#comment-16187287
 ] 

Manikumar edited comment on KAFKA-5734 at 10/1/17 6:02 AM:
---

[~jangbora] Looking at the metrics above, they represent per-client-id quota 
metrics. Inactive quota metrics are deleted after one hour, so it looks like 
you are creating many producer instances within an hour. As [~huxi_2b] 
mentioned, you can use a single producer instance across multiple threads, (or) 
reuse the client.id string, (or) if you are not using quotas, disable the quota 
configs.

Please close the JIRA if one of the above approaches resolves your issue.


was (Author: omkreddy):
[~jangbora] looking at the metrics above, they are per client-id quota metrics. 
 inactive quota metrics will be deleted after one hour.  looks like you are 
creating many producer instances within an hour. As mentioned [~huxi_2b],  You 
can use single producer instance across multiple threads. (or) you can reuse 
client.id string (or)  if you not using quotas, you can disable quota configs.

Please close the JIRA, if you satisfy with one of the above approaches.

> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx, jconsole.png
>
>
> I set up a Kafka server on Ubuntu with 4GB RAM.
> Heap (old generation space) size is increasing gradually, as shown in the 
> attached Excel file, which recorded GC info at 1-minute intervals.
> Finally OU occupies 2.6GB and GC spends too much time (and an out-of-memory 
> exception occurs).
> The Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_





[jira] [Created] (KAFKA-6005) Reject JoinGroup request from first member with empty protocol type/protocol list

2017-10-03 Thread Manikumar (JIRA)
Manikumar created KAFKA-6005:


 Summary: Reject JoinGroup request from first member with empty 
protocol type/protocol list
 Key: KAFKA-6005
 URL: https://issues.apache.org/jira/browse/KAFKA-6005
 Project: Kafka
  Issue Type: Bug
Reporter: Manikumar
Assignee: Manikumar
Priority: Minor
 Fix For: 1.0.0


Currently, if the first group member joins with an empty 
partition.assignment.strategy, then the group won't allow any other members 
with valid protocols. This JIRA is to add validation to reject JoinGroup 
requests from the first member with an empty protocol type or an empty 
protocol list.

Also, add a consumer-side validation to check that at least one partition 
assigner class name is configured when using the subscribe APIs.
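As a rough sketch (hypothetical class and method names; the actual validation 
belongs in the broker's group coordinator), the proposed check could look like:

```java
import java.util.List;

// Hypothetical sketch: a JoinGroup request from the first member is
// rejected when it carries an empty protocol type or protocol list,
// so it cannot poison the group for members with valid protocols.
class JoinGroupValidator {
    static boolean isValidFirstJoin(String protocolType, List<String> protocols) {
        return protocolType != null && !protocolType.isEmpty()
                && protocols != null && !protocols.isEmpty();
    }
}
```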





[jira] [Commented] (KAFKA-6005) Reject JoinGroup request from first member with empty protocol type/protocol list

2017-10-03 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189475#comment-16189475
 ] 

Manikumar commented on KAFKA-6005:
--

PR: https://github.com/apache/kafka/pull/3957

> Reject JoinGroup request from first member with empty protocol type/protocol 
> list
> -
>
> Key: KAFKA-6005
> URL: https://issues.apache.org/jira/browse/KAFKA-6005
> Project: Kafka
>  Issue Type: Bug
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
> Fix For: 1.0.0
>
>
> Currently, if the first group member joins with an empty 
> partition.assignment.strategy, then the group won't allow any other members 
> with valid protocols. This JIRA is to add validation to reject JoinGroup 
> requests from the first member with an empty protocol type or an empty 
> protocol list.
> Also, add a consumer-side validation to check that at least one partition 
> assigner class name is configured when using the subscribe APIs.





[jira] [Commented] (KAFKA-6022) mirror maker stores offset in zookeeper

2017-10-09 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198271#comment-16198271
 ] 

Manikumar commented on KAFKA-6022:
--

It looks like you are using the old consumer API with MirrorMaker. By default, 
the old consumer API uses ZooKeeper for offset storage (see the 
offsets.storage=zookeeper config property). If you are on a newer version of 
Kafka, it is advisable to use the new consumer with MirrorMaker.
You can enable the new consumer on MirrorMaker with the "--new.consumer" 
command-line option.

> mirror maker stores offset in zookeeper
> ---
>
> Key: KAFKA-6022
> URL: https://issues.apache.org/jira/browse/KAFKA-6022
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ronald van de Kuil
>Priority: Minor
>
> I happened to notice that the mirror maker stores its offset in zookeeper. 
> I do not see it using:
> bin/kafka-consumer-groups.sh --bootstrap-server pi1:9092 --new-consumer --list
> I do however see consumers that store their offset in kafka.
> I would guess that storing the offset in zookeeper is old style?
> Would it be an idea to update the mirror maker to the new consumer style?





[jira] [Commented] (KAFKA-6022) mirror maker stores offset in zookeeper

2017-10-10 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198416#comment-16198416
 ] 

Manikumar commented on KAFKA-6022:
--

Since 0.10.1.0, there is no need to pass the --new.consumer option. You just 
need to pass the broker connect string (bootstrap.servers) in the 
consumer.config file.
See: http://kafka.apache.org/documentation.html#upgrade_1010_notable
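As a sketch, a minimal consumer.config file for MirrorMaker's new consumer 
might look like the following (broker host names and group id are 
placeholders):

```properties
# Minimal new-consumer config for MirrorMaker (0.10.1.0+).
# The presence of bootstrap.servers selects the new consumer; offsets
# are then committed to Kafka rather than ZooKeeper.
bootstrap.servers=source-broker1:9092,source-broker2:9092
group.id=mirror-maker-group
```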

Also see this question in the FAQ section:
"How do we migrate to committing offsets to Kafka (rather than Zookeeper) in 
0.8.2?"
https://cwiki.apache.org/confluence/display/KAFKA/FAQ

Yes, if you have the option, it is better to mirror the cluster from scratch 
using the new consumer.

Please close the JIRA if this solves your issue.



> mirror maker stores offset in zookeeper
> ---
>
> Key: KAFKA-6022
> URL: https://issues.apache.org/jira/browse/KAFKA-6022
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ronald van de Kuil
>Priority: Minor
>
> I happened to notice that the mirror maker stores its offset in zookeeper. 
> I do not see it using:
> bin/kafka-consumer-groups.sh --bootstrap-server pi1:9092 --new-consumer --list
> I do however see consumers that store their offset in kafka.
> I would guess that storing the offset in zookeeper is old style?
> Would it be an idea to update the mirror maker to the new consumer style?





[jira] [Commented] (KAFKA-6042) Kafka Request Handler deadlocks and brings down the cluster.

2017-10-10 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198453#comment-16198453
 ] 

Manikumar commented on KAFKA-6042:
--

This may be related to KAFKA-5970.

> Kafka Request Handler deadlocks and brings down the cluster.
> 
>
> Key: KAFKA-6042
> URL: https://issues.apache.org/jira/browse/KAFKA-6042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 0.11.0.1
> Environment: kafka version: 0.11.0.1
> client versions: 0.8.2.1-0.10.2.1
> platform: aws (eu-west-1a)
> nodes: 36 x r4.xlarge
> disk storage: 2.5 tb per node (~73% usage per node)
> topics: 250
> number of partitions: 48k (approx)
> os: ubuntu 14.04
> jvm: Java(TM) SE Runtime Environment (build 1.8.0_131-b11), Java HotSpot(TM) 
> 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Ben Corlett
>Priority: Critical
> Attachments: thread_dump.txt.gz
>
>
> We have been experiencing a deadlock that happens on a consistent server 
> within our cluster. This happens multiple times a week currently. It first 
> started happening when we upgraded to 0.11.0.0. Sadly 0.11.0.1 failed to 
> resolve the issue.
> Sequence of events:
> At a seemingly random time, broker 125 goes into a deadlock. As soon as it 
> is deadlocked, it will remove all the ISRs for any partition it is the 
> leader for.
> [2017-10-10 00:06:10,061] INFO Partition [XX,24] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,073] INFO Partition [XX,974] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,079] INFO Partition [XX,64] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,081] INFO Partition [XX,21] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,084] INFO Partition [XX,12] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,085] INFO Partition [XX,61] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,086] INFO Partition [XX,53] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,088] INFO Partition [XX,27] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,090] INFO Partition [XX,182] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,091] INFO Partition [XX,16] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> 
> The other nodes fail to connect to the node 125 
> [2017-10-10 00:08:42,318] WARN [ReplicaFetcherThread-0-125]: Error in fetch 
> to broker 125, request (type=FetchRequest, replicaId=101, maxWait=500, 
> minBytes=1, maxBytes=10485760, fetchData={XX-94=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-22=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-58=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-11=(offset=78932482, 
> logStartOffset=50881481, maxBytes=1048576), XX-55=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-19=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-91=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-5=(offset=903857106, 
> logStartOffset=0, maxBytes=1048576), XX-80=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-88=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-34=(offset=308, 
> logStartOffset=308, maxBytes=1048576), XX-7=(offset=369990, 
> logStartOffset=369990, maxBytes=1048576), XX-0=(offset=57965795, 
> logStartOffset=0, maxBytes=1048576)}) (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 125 was disconnected before the response 
> was read
> at 
> org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:93)
> at 
> kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:93)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:207)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:151)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:112)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
> As node 125 removed all the ISRs as it was locking up, a failover for any 
> partition without an unclean leader election is not possible. This breaks 

[jira] [Commented] (KAFKA-6042) Kafka Request Handler deadlocks and brings down the cluster.

2017-10-10 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198497#comment-16198497
 ] 

Manikumar commented on KAFKA-6042:
--

It would be great if you could apply the KAFKA-5970 fix and validate it in 
your cluster.

The Kafka community has not yet started discussion on a 0.11.0.2 release.




> Kafka Request Handler deadlocks and brings down the cluster.
> 
>
> Key: KAFKA-6042
> URL: https://issues.apache.org/jira/browse/KAFKA-6042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 0.11.0.1
> Environment: kafka version: 0.11.0.1
> client versions: 0.8.2.1-0.10.2.1
> platform: aws (eu-west-1a)
> nodes: 36 x r4.xlarge
> disk storage: 2.5 tb per node (~73% usage per node)
> topics: 250
> number of partitions: 48k (approx)
> os: ubuntu 14.04
> jvm: Java(TM) SE Runtime Environment (build 1.8.0_131-b11), Java HotSpot(TM) 
> 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Ben Corlett
>Priority: Critical
> Attachments: thread_dump.txt.gz
>
>
> We have been experiencing a deadlock that happens on a consistent server 
> within our cluster. This happens multiple times a week currently. It first 
> started happening when we upgraded to 0.11.0.0. Sadly 0.11.0.1 failed to 
> resolve the issue.
> Sequence of events:
> At a seemingly random time, broker 125 goes into a deadlock. As soon as it 
> is deadlocked, it will remove all the ISRs for any partition it is the 
> leader for.
> [2017-10-10 00:06:10,061] INFO Partition [XX,24] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,073] INFO Partition [XX,974] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,079] INFO Partition [XX,64] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,081] INFO Partition [XX,21] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,084] INFO Partition [XX,12] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,085] INFO Partition [XX,61] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,086] INFO Partition [XX,53] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,088] INFO Partition [XX,27] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,090] INFO Partition [XX,182] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> [2017-10-10 00:06:10,091] INFO Partition [XX,16] on broker 125: 
> Shrinking ISR from 117,125 to 125 (kafka.cluster.Partition)
> 
> The other nodes fail to connect to the node 125 
> [2017-10-10 00:08:42,318] WARN [ReplicaFetcherThread-0-125]: Error in fetch 
> to broker 125, request (type=FetchRequest, replicaId=101, maxWait=500, 
> minBytes=1, maxBytes=10485760, fetchData={XX-94=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-22=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-58=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-11=(offset=78932482, 
> logStartOffset=50881481, maxBytes=1048576), XX-55=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-19=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-91=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-5=(offset=903857106, 
> logStartOffset=0, maxBytes=1048576), XX-80=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-88=(offset=0, 
> logStartOffset=0, maxBytes=1048576), XX-34=(offset=308, 
> logStartOffset=308, maxBytes=1048576), XX-7=(offset=369990, 
> logStartOffset=369990, maxBytes=1048576), XX-0=(offset=57965795, 
> logStartOffset=0, maxBytes=1048576)}) (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 125 was disconnected before the response 
> was read
> at 
> org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:93)
> at 
> kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:93)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:207)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:151)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:112)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
> As node 125 removed all the ISRs a

[jira] [Commented] (KAFKA-6022) mirror maker stores offset in zookeeper

2017-10-10 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16199814#comment-16199814
 ] 

Manikumar commented on KAFKA-6022:
--

I think we missed updating the consumer.properties file. I will raise a minor 
PR to update it.
Thanks for raising this.

> mirror maker stores offset in zookeeper
> ---
>
> Key: KAFKA-6022
> URL: https://issues.apache.org/jira/browse/KAFKA-6022
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ronald van de Kuil
>Priority: Minor
>
> I happened to notice that the mirror maker stores its offset in zookeeper. 
> I do not see it using:
> bin/kafka-consumer-groups.sh --bootstrap-server pi1:9092 --new-consumer --list
> I do however see consumers that store their offset in kafka.
> I would guess that storing the offset in zookeeper is old style?
> Would it be an idea to update the mirror maker to the new consumer style?





[jira] [Resolved] (KAFKA-4504) Details of retention.bytes property at Topic level are not clear on how they impact partition size

2017-10-17 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4504.
--
   Resolution: Fixed
 Assignee: Manikumar
Fix Version/s: 1.0.0

> Details of retention.bytes property at Topic level are not clear on how they 
> impact partition size
> --
>
> Key: KAFKA-4504
> URL: https://issues.apache.org/jira/browse/KAFKA-4504
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Justin Manchester
>Assignee: Manikumar
> Fix For: 1.0.0
>
>
> Problem:
> Details of retention.bytes property at Topic level are not clear on how they 
> impact partition size
> Business Impact:
> Users are setting retention.bytes and not seeing the desired store amount of 
> data.
> Current Text:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit only a time limit.
> Proposed change:
> This configuration controls the maximum size a log can grow to before we will 
> discard old log segments to free up space if we are using the "delete" 
> retention policy. By default there is no size limit, only a time limit. 
> Please note, the total amount of disk space used is calculated as 
> retention.bytes * number of partitions on the given topic.
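The proposed note boils down to a simple calculation; a minimal illustration 
(hypothetical helper name, illustrative numbers):

```java
// Illustrative: retention.bytes applies per partition, so total disk
// usage for a topic scales with its partition count.
class RetentionMath {
    static long totalRetentionBytes(long retentionBytesPerPartition, int partitions) {
        return retentionBytesPerPartition * partitions;
    }
}
```

For example, retention.bytes of 1 GiB on a 12-partition topic allows up to 
12 GiB of log data for that topic across the cluster.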





[jira] [Commented] (KAFKA-6077) Let SimpleConsumer support Kerberos authentication

2017-10-17 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208881#comment-16208881
 ] 

Manikumar commented on KAFKA-6077:
--

The old consumer API is deprecated and will be removed in future versions, so 
there won't be any new feature support for the older APIs.
You may consider migrating to the new consumer API.



> Let SimpleConsumer support Kerberos authentication
> --
>
> Key: KAFKA-6077
> URL: https://issues.apache.org/jira/browse/KAFKA-6077
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: huangjianan
>
> Cannot use SimpleConsumer in Kafka Kerberos environment





[jira] [Comment Edited] (KAFKA-6077) Let SimpleConsumer support Kerberos authentication

2017-10-17 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16208881#comment-16208881
 ] 

Manikumar edited comment on KAFKA-6077 at 10/18/17 6:52 AM:


The old consumer API is deprecated and will be removed in future versions, so 
there won't be any new feature support for the older APIs.
You may consider migrating to the new consumer API.




was (Author: omkreddy):
Old consumer API is deprecated and will be removed in future versions. So there 
won't is any new feature support for older APIs.
You may consider migrating to new consumer API.



> Let SimpleConsumer support Kerberos authentication
> --
>
> Key: KAFKA-6077
> URL: https://issues.apache.org/jira/browse/KAFKA-6077
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: huangjianan
>
> Cannot use SimpleConsumer in Kafka Kerberos environment





[jira] [Commented] (KAFKA-5978) Transient failure in SslTransportLayerTest.testNetworkThreadTimeRecorded

2017-10-18 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209382#comment-16209382
 ] 

Manikumar commented on KAFKA-5978:
--

[~rsivaram] The SslTransportLayerTest.testNetworkThreadTimeRecorded test 
sometimes gets stuck. The test passes consistently with a smaller message 
size: 
https://github.com/apache/kafka/blob/trunk/clients/src/test/java/org/apache/kafka/common/network/SslTransportLayerTest.java#L598

Any clue?

{code} 
"main" #1 prio=5 os_prio=31 tid=0x7fa03b807800 nid=0x1703 runnable 
[0x70218000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
at sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:198)
at sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:117)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00076ba22d88> (a sun.nio.ch.Util$2)
- locked <0x00076ba22d78> (a java.util.Collections$UnmodifiableSet)
- locked <0x00076ba22c58> (a sun.nio.ch.KQueueSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:657)
at org.apache.kafka.common.network.Selector.poll(Selector.java:383)
at 
org.apache.kafka.common.network.SslTransportLayerTest.testNetworkThreadTimeRecorded(SslTransportLayerTest.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{code}

> Transient failure in SslTransportLayerTest.testNetworkThreadTimeRecorded
> 
>
> Key: KAFKA-5978
> URL: https://issues.apache.org/jira/browse/KAFKA-5978
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Stack trace:
> {quote}
> java.lang.AssertionError: Send time not recorded
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.kafka.common.network.SslTransportLayerTest.testNetworkThreadTimeRecorded(SslTransportLayerTest.java:602)
> {quote}





[jira] [Assigned] (KAFKA-6071) Use ZookeeperClient in LogManager

2017-10-18 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-6071:


Assignee: Manikumar

> Use ZookeeperClient in LogManager 
> --
>
> Key: KAFKA-6071
> URL: https://issues.apache.org/jira/browse/KAFKA-6071
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Manikumar
>
> We want to replace the usage of ZkUtils in LogManager with ZookeeperClient.





[jira] [Assigned] (KAFKA-6072) Use ZookeeperClient in GroupCoordinator and TransactionCoordinator

2017-10-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-6072:


Assignee: Manikumar

> Use ZookeeperClient in GroupCoordinator and TransactionCoordinator
> --
>
> Key: KAFKA-6072
> URL: https://issues.apache.org/jira/browse/KAFKA-6072
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Manikumar
>
> We want to replace the usage of ZkUtils in GroupCoordinator and 
> TransactionCoordinator with ZookeeperClient.





[jira] [Commented] (KAFKA-6091) Authorization API is called hundred's of times when there are no privileges

2017-10-20 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212476#comment-16212476
 ] 

Manikumar commented on KAFKA-6091:
--

This behavior was changed in KAFKA-5547. 

> Authorization API is called hundred's of times when there are no privileges
> ---
>
> Key: KAFKA-6091
> URL: https://issues.apache.org/jira/browse/KAFKA-6091
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: kalyan kumar kalvagadda
>
> This issue is observed with the Kafka/Sentry integration. When Sentry does 
> not have any permissions for a topic and a producer tries to add a message 
> to the topic, Sentry returns a failure, but Kafka is not able to handle it 
> properly and ends up invoking the Sentry auth API ~564 times. This will 
> choke the authorization service.
> Here is the list of privileges that are needed for a producer to add a 
> message to a topic.
> In this example "192.168.0.3" is the hostname and the topic name is "tOpIc1":
> {noformat}
> HOST=192.168.0.3->Topic=tOpIc1->action=DESCRIBE
> HOST=192.168.0.3->Cluster=kafka-cluster->action=CREATE
> HOST=192.168.0.3->Topic=tOpIc1->action=WRITE
> {noformat}
> The problem reported in this JIRA is seen when there are no permissions. The 
> moment a DESCRIBE permission is added, the issue is not seen: authorization 
> fails, but Kafka doesn't bombard the authorizer with more requests.





[jira] [Comment Edited] (KAFKA-6091) Authorization API is called hundred's of times when there are no privileges

2017-10-20 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212476#comment-16212476
 ] 

Manikumar edited comment on KAFKA-6091 at 10/20/17 10:36 AM:
-

This behavior was changed in KAFKA-5547. Now clients will fail after any topic 
authorization error.


was (Author: omkreddy):
This behavior is changed in KAFKA-5547. 

> Authorization API is called hundred's of times when there are no privileges
> ---
>
> Key: KAFKA-6091
> URL: https://issues.apache.org/jira/browse/KAFKA-6091
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: kalyan kumar kalvagadda
>
> This issue is observed with the Kafka/Sentry integration. When Sentry does 
> not have any permissions for a topic and a producer tries to add a message 
> to the topic, Sentry returns a failure, but Kafka is not able to handle it 
> properly and ends up invoking the Sentry auth API ~564 times. This will 
> choke the authorization service.
> Here is the list of privileges that are needed for a producer to add a 
> message to a topic.
> In this example "192.168.0.3" is the hostname and the topic name is "tOpIc1":
> {noformat}
> HOST=192.168.0.3->Topic=tOpIc1->action=DESCRIBE
> HOST=192.168.0.3->Cluster=kafka-cluster->action=CREATE
> HOST=192.168.0.3->Topic=tOpIc1->action=WRITE
> {noformat}
> The problem reported in this JIRA is seen when there are no permissions. The 
> moment a DESCRIBE permission is added, the issue is not seen: authorization 
> fails, but Kafka doesn't bombard the authorizer with more requests.





[jira] [Assigned] (KAFKA-5645) Use async ZookeeperClient in SimpleAclAuthorizer

2017-10-23 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5645:


Assignee: Manikumar

> Use async ZookeeperClient in SimpleAclAuthorizer
> 
>
> Key: KAFKA-5645
> URL: https://issues.apache.org/jira/browse/KAFKA-5645
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Manikumar
>






[jira] [Assigned] (KAFKA-5646) Use async ZookeeperClient for DynamicConfigManager

2017-10-28 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5646:


Assignee: Manikumar

> Use async ZookeeperClient for DynamicConfigManager
> --
>
> Key: KAFKA-5646
> URL: https://issues.apache.org/jira/browse/KAFKA-5646
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Ismael Juma
>Assignee: Manikumar
> Fix For: 1.1.0
>
>
> We want to replace the usage of ZkUtils with ZookeeperClient in 
> DynamicConfigManager.





[jira] [Commented] (KAFKA-6077) Let SimpleConsumer support Kerberos authentication

2017-10-28 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16223874#comment-16223874
 ] 

Manikumar commented on KAFKA-6077:
--

It is highly unlikely that patches will be accepted for the older client APIs. You may 
consider migrating to storm-kafka-client:
https://github.com/apache/storm/tree/master/external/storm-kafka-client
Alternatively, you can use the HDP Kafka distribution, which supports security for the older APIs.




> Let SimpleConsumer support Kerberos authentication
> --
>
> Key: KAFKA-6077
> URL: https://issues.apache.org/jira/browse/KAFKA-6077
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.0.0
>Reporter: huangjianan
>
> Cannot use SimpleConsumer in Kafka Kerberos environment





[jira] [Resolved] (KAFKA-6019) Sentry permissions bug on CDH

2017-10-28 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6019.
--
Resolution: Duplicate

> Sentry permissions bug on CDH
> -
>
> Key: KAFKA-6019
> URL: https://issues.apache.org/jira/browse/KAFKA-6019
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jorge Machado
>
> Hello Guys, 
> I think I found a bug on sentry +sasl + kafka CDH. 
> Please check https://issues.apache.org/jira/browse/KAFKA-6017
> thanks





[jira] [Resolved] (KAFKA-232) ConsumerConnector has no access to "getOffsetsBefore"

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-232.
-
Resolution: Won't Fix

Closing inactive issue. The old consumer is no longer supported. The 
requirement in this jira can be implemented with the Java consumer API.

> ConsumerConnector has no access to "getOffsetsBefore" 
> --
>
> Key: KAFKA-232
> URL: https://issues.apache.org/jira/browse/KAFKA-232
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Edward Capriolo
>Priority: Minor
>
> kafka.javaapi.SimpleConsumer has "getOffsetsBefore". I would like this 
> ability in KafkaMessageStream or in ConsumerConnector. In this way clients 
> can access their current position.





[jira] [Resolved] (KAFKA-243) Improve consumer connector documentation to include blocking semantics of kafka message streams

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-243.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> Improve consumer connector documentation to include blocking semantics of 
> kafka message streams
> ---
>
> Key: KAFKA-243
> URL: https://issues.apache.org/jira/browse/KAFKA-243
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Neha Narkhede
>
> http://markmail.org/message/6m46awa5bzsxg7cq?q=Questions+on+consumerConnector%2EcreateMessageStreams%28%29+list:org%2Eapache%2Eincubator%2Ekafka-users
> It will be good to improve the ScalaDocs for explaining the semantics of the 
> ConsumerIterator





[jira] [Resolved] (KAFKA-250) Provide consumer iterator callback handler

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-250.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported

> Provide consumer iterator callback handler
> --
>
> Key: KAFKA-250
> URL: https://issues.apache.org/jira/browse/KAFKA-250
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Joel Koshy
>Assignee: Joel Koshy
>
> It would be useful to add a "callback" option in the consumer, similar to
> the producer callbacks that we have now. The callback point would be at the
> point messages are iterated over, and would be helpful for inserting
> instrumentation code.
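The callback point described above maps onto a plain decorator over the message iterator. The following is an illustrative Java sketch of that idea, not actual Kafka consumer code; `CallbackIterator` and all other names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: wrap an iterator so a callback fires each time a
// message is handed to the consumer loop, mirroring producer callbacks.
class CallbackIterator<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private final Consumer<T> callback;

    CallbackIterator(Iterator<T> delegate, Consumer<T> callback) {
        this.delegate = delegate;
        this.callback = callback;
    }

    @Override public boolean hasNext() { return delegate.hasNext(); }

    @Override public T next() {
        T item = delegate.next();   // pull the next message
        callback.accept(item);      // instrumentation hook fires here
        return item;
    }
}
```

A consumer loop built on such a wrapper gets per-message instrumentation without touching the iteration code itself.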





[jira] [Resolved] (KAFKA-242) Subsequent calls of ConsumerConnector.createMessageStreams cause Consumer offset to be incorrect

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-242.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> Subsequent calls of ConsumerConnector.createMessageStreams cause Consumer 
> offset to be incorrect
> 
>
> Key: KAFKA-242
> URL: https://issues.apache.org/jira/browse/KAFKA-242
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.7
>Reporter: David Arthur
> Attachments: kafka.log
>
>
> When calling ConsumerConnector.createMessageStreams in rapid succession, the 
> Consumer offset is incorrectly advanced causing the consumer to lose 
> messages. This seems to happen when createMessageStreams is called before the 
> rebalancing triggered by the previous call to createMessageStreams has 
> completed. 





[jira] [Resolved] (KAFKA-853) Allow OffsetFetchRequest to initialize offsets

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-853.
-
Resolution: Won't Fix

Closing inactive issue as per above comment.

> Allow OffsetFetchRequest to initialize offsets
> --
>
> Key: KAFKA-853
> URL: https://issues.apache.org/jira/browse/KAFKA-853
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.1
>Reporter: David Arthur
>Assignee: Balaji Seshadri
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> It would be nice for the OffsetFetchRequest API to have the option to 
> initialize offsets instead of returning unknown_topic_or_partition. It could 
> mimic the Offsets API by adding the "time" field and then follow the same 
> code path on the server as the Offset API. 
> In this case, the response would need to include a boolean to indicate if the 
> returned offset was initialized or fetched from ZK.
> This would simplify the client logic when dealing with new topics.





[jira] [Resolved] (KAFKA-845) Re-implement shallow iteration on 0.8.1

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-845.
-
Resolution: Auto Closed

Closing inactive issue. 


> Re-implement shallow iteration on 0.8.1
> ---
>
> Key: KAFKA-845
> URL: https://issues.apache.org/jira/browse/KAFKA-845
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Maxime Brugidou
>
> After KAFKA-732 we decided to not support the shallow iteration feature to 
> speed up the 0.8 release.
> This severely impacts the performance for heavy mirroring between clusters. 
> The MirrorMaker needs to decompress/recompress the data, which actually triples 
> the CPU compression cost in the data flow (initial compression + 2x 
> compression for mirroring).
> It would be great to discuss a solution to re-implement the feature correctly.





[jira] [Resolved] (KAFKA-810) KafkaStream toString method blocks indefinitely

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-810.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> KafkaStream toString method blocks indefinitely
> ---
>
> Key: KAFKA-810
> URL: https://issues.apache.org/jira/browse/KAFKA-810
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Brandon Salzberg
>Assignee: Neha Narkhede
>
> Once a KafkaStream has been created, calling toString on it will block 
> indefinitely. It appears that the toString method is not currently being 
> overridden, so the default behavior will be to try to iterate over all of the 
> elements, which will obviously never finish since the object is designed to 
> iterate forever.
> Seems like the easiest solution would be to override toString to avoid the 
> default behavior.
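The fix suggested above is easy to illustrate. Below is a hedged Java sketch: `EndlessStream` is a stand-in for a KafkaStream-like iterable whose iterator reportedly never ends, so toString is overridden to report identity instead of touching the elements. It is not Kafka code; all names are hypothetical.

```java
import java.util.Iterator;

// Hypothetical stand-in for a stream whose iterator blocks/never finishes.
class EndlessStream implements Iterable<Integer> {
    private final String clientId;

    EndlessStream(String clientId) { this.clientId = clientId; }

    @Override
    public Iterator<Integer> iterator() {
        // Never-ending iterator: anything that tries to exhaust it hangs.
        return new Iterator<>() {
            private int n = 0;
            @Override public boolean hasNext() { return true; }
            @Override public Integer next() { return n++; }
        };
    }

    // The suggested fix: printing or logging the stream must never iterate it.
    @Override
    public String toString() {
        return "EndlessStream(clientId=" + clientId + ")";
    }
}
```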





[jira] [Resolved] (KAFKA-635) Producer error when trying to send not displayed unless in DEBUG logging level

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-635.
-
Resolution: Auto Closed

Closing inactive issue. The old producer is no longer supported.

> Producer error when trying to send not displayed unless in DEBUG logging level
> --
>
> Key: KAFKA-635
> URL: https://issues.apache.org/jira/browse/KAFKA-635
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
> Environment: Java client
>Reporter: Chris Curtin
>Priority: Minor
>
> When trying to figure out how to connect with the 0.8.0 Producer, I was only 
> seeing exceptions:
> Exception in thread "main" kafka.common.FailedToSendMessageException: Failed 
> to send messages after 3 tries.
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:70)
>   at kafka.producer.Producer.send(Producer.scala:75)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:32)
>   at com.spop.kafka.playproducer.TestProducer.main(TestProducer.java:40)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
> Changing log4j level to DEBUG showed the actual error:
> 878  [main] WARN  kafka.producer.async.DefaultEventHandler  - failed to send 
> to broker 3 with data Map([test1,0] -> 
> ByteBufferMessageSet(MessageAndOffset(Message(magic = 2, attributes = 0, crc 
> = 1906312613, key = null, payload = java.nio.HeapByteBuffer[pos=0 lim=22 
> cap=22]),0), ))
> java.lang.NoSuchMethodError: com.yammer.metrics.core.TimerContext.stop()J
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:36)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:94)
>   at 
> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:221)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:87)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$1.apply(DefaultEventHandler.scala:81)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>   at 
> scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161)
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:80)
>   at 
> kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:81)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:57)
>   at kafka.producer.Producer.send(Producer.scala:75)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:32)
>   at com.spop.kafka.playproducer.TestProducer.main(TestProducer.java:40)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
> Submitted per Jay's request in a mailing list thread.
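One way the root cause above could be surfaced without DEBUG logging is to chain the last per-attempt failure as the cause of the final exception. This is an illustrative Java sketch of that pattern, not the Kafka producer's actual code; class and method names are hypothetical:

```java
// Hypothetical sketch: remember why each retry failed, and attach that
// failure as the cause, so the default stack trace shows the real error.
class SendFailure {
    static class FailedToSendMessageException extends RuntimeException {
        FailedToSendMessageException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    static void send() {
        Throwable lastError = null;
        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                // Stand-in for the broker send that fails on every attempt.
                throw new NoSuchMethodError("com.yammer.metrics.core.TimerContext.stop()J");
            } catch (Throwable t) {
                lastError = t;   // keep the underlying reason for the retry
            }
        }
        // Surface the root cause instead of hiding it behind DEBUG logs.
        throw new FailedToSendMessageException(
                "Failed to send messages after 3 tries.", lastError);
    }
}
```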





[jira] [Resolved] (KAFKA-623) It should be possible to re-create KafkaStreams after an exception, without recreating ConsumerConnector

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-623.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> It should be possible to re-create KafkaStreams after an exception, without 
> recreating ConsumerConnector
> 
>
> Key: KAFKA-623
> URL: https://issues.apache.org/jira/browse/KAFKA-623
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Jason Rosenberg
>
> This issue came out of a discussion on the user mailing list (not something I 
> experienced directly).
> Bob Cotton reported:
> "During the implementation of a custom Encoder/Decoder we noticed that
> should the Decoder throw an exception, the KafkaStream that it is in use
> becomes invalid.
> Searching the mailing list indicates that the only way to recover from an
> invalid stream is to shutdown the whole high-level consumer and restart.
> Is there a better way to recover from this?
> Thanks for the responsiveness on the mailing list."
> ..
> And then Neha responded:
> "This is a bug. Ideally, we should allow restarting the consumer
> streams when they encounter an error.
> Can you file a JIRA ?
> Thanks,
> Neha"
> Jason





[jira] [Resolved] (KAFKA-602) A ConsumerConnector that never receives messages does not shutdown cleanly

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-602.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> A ConsumerConnector that never receives messages does not shutdown cleanly
> --
>
> Key: KAFKA-602
> URL: https://issues.apache.org/jira/browse/KAFKA-602
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Jason Rosenberg
>
> Working with the latest trunk version (last commit was on 10/9/2012).
> If I create a consumer connector, and create some KafkaStreams using a 
> TopicFilter, but then never send any messages, and then call shutdown on the 
> connector, the KafkaStreams never get notified of the shutdown.
> Looking at ZookeeperConsumerConnector, in the method 
> 'sendShutdownToAllQueues', it appears the current implementation depends on 
> receiving at least 1 message for a topic before initializing the list of 
> topicThreadIdAndQueues to send the shutdownCommand to.
> Jason
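The fix the report implies is to register every stream queue at creation time, so the shutdown sentinel reaches streams that never saw a message. This is a minimal Java sketch of that idea under stated assumptions; `QueueRegistry` and the sentinel are hypothetical, not the connector's actual internals:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: queues are registered eagerly, not on first message,
// so shutdown can notify even idle streams.
class QueueRegistry {
    static final String SHUTDOWN = "<shutdown>";   // sentinel message
    private final List<BlockingQueue<String>> queues = new CopyOnWriteArrayList<>();

    BlockingQueue<String> createStreamQueue() {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        queues.add(q);           // registered at creation time
        return q;
    }

    void shutdown() {
        for (BlockingQueue<String> q : queues) {
            q.offer(SHUTDOWN);   // every stream sees the shutdown command
        }
    }
}
```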





[jira] [Resolved] (KAFKA-519) Allow committing the state of a single KafkaStream

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-519.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.


> Allow committing the state of a single KafkaStream
> ---
>
> Key: KAFKA-519
> URL: https://issues.apache.org/jira/browse/KAFKA-519
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.7, 0.7.1
>Reporter: Esko Suomi
>Priority: Minor
>
> We currently consume multiple topics through ZK by first acquiring a 
> ConsumerConnector and then fetching message streams for the wanted topics. 
> When the messages have been consumed, the current consuming state is committed 
> with the method ConsumerConnector#commitOffsets().
> This scheme has a flaw when the consuming application is used as a sort of 
> data-piping proxy instead of the final consuming sink. In our case we read data 
> from Kafka, repackage it and only then move it to persistent storage. The 
> repackaging step is relatively long running and may span several hours 
> (usually a few minutes), which in addition is mixed with highly asymmetric 
> topic throughputs; one of our topics gets about 80% of the total throughput. We 
> have about 20 topics in total. As an unwanted side effect of all this, 
> committing the offset whenever the per-topic persistence step has been taken 
> means committing offsets for other topics too, which may eventually manifest as 
> loss of data if the consuming application or the machine it is running on 
> crashes.
> So, while this loss of data can be alleviated to some extent with, for example, 
> local temp storage, it would be cleaner if KafkaStream itself allowed 
> partition-level offset committing.





[jira] [Resolved] (KAFKA-536) Kafka async appender/producer loses most messages

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-536.
-
Resolution: Auto Closed

Closing inactive issue. 

> Kafka async appender/producer loses most messages
> --
>
> Key: KAFKA-536
> URL: https://issues.apache.org/jira/browse/KAFKA-536
> Project: Kafka
>  Issue Type: Bug
>Reporter: nicu marasoiu
>
> The send thread reports "Handling of 123 messages" while the number of log 
> lines was 11000 (the async appender is on rootLogger). Please advise.
> Another issue, which we did solve, was a log4j deadlock caused by the appender. 
> We wrote our own appender implementations that wrap the Kafka implementation 
> so that activateOptions is delayed and executed outside the log4j bootstrap. 
> Please find more details on KAFKA-244 and its child.





[jira] [Resolved] (KAFKA-601) ConsumerConnector.shutdown() hangs if autocommit is enabled, and no zk available

2017-10-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-601.
-
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> ConsumerConnector.shutdown() hangs if autocommit is enabled, and no zk 
> available
> 
>
> Key: KAFKA-601
> URL: https://issues.apache.org/jira/browse/KAFKA-601
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.7.2
>Reporter: Jason Rosenberg
>
> Hi,
> I have a situation where I want to reliably shut-down a consumer, but it 
> seems to hang in certain conditions, namely:
> - If autocommit.enabled = true
> - If the current zk instance is not available
> It seems that it will indefinitely try to re-create the zk connection, in 
> order to auto-commit, before exiting, even after I've called 
> ConsumerConnector.shutdown().
> I would think that there should be a way to force it to shutdown, even under 
> error conditions.
> Jason
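The "force shutdown even under error conditions" behavior requested above is commonly implemented by bounding the graceful phase (here, the final auto-commit) with a timeout and falling back to a hard stop. A minimal Java sketch of that pattern, assuming the commit work runs on an executor; this is illustrative, not the connector's actual code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: give the graceful path a bounded window, then force.
class BoundedShutdown {
    static boolean shutdown(ExecutorService commitExecutor, long timeoutMs) {
        commitExecutor.shutdown();                   // stop accepting work
        try {
            if (commitExecutor.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS)) {
                return true;                         // clean shutdown
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();      // preserve interrupt status
        }
        commitExecutor.shutdownNow();                // give up on the commit
        return false;                                // caller knows it was forced
    }
}
```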





[jira] [Resolved] (KAFKA-788) Periodic refresh of topic metadata on the producer doesn't include all topics

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-788.
-
Resolution: Auto Closed

Closing  inactive issue. The old producer is no longer supported.


> Periodic refresh of topic metadata on the producer doesn't include all topics
> -
>
> Key: KAFKA-788
> URL: https://issues.apache.org/jira/browse/KAFKA-788
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>  Labels: kafka-0.8, p2
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We added a patch to the producer to refresh the metadata for all topics 
> periodically. However, the producer only does this for the topics in the last 
> batch. But some topics sent by the producer could be low throughput and might 
> not be present in every batch. If we bounce the cluster or if brokers fail 
> and leaders change, the metadata for those low-throughput topics is not 
> refreshed by this periodic topic metadata request. The next produce request 
> for those topics has to fail, and then a separate metadata request needs to 
> be reissued to handle the produce request.
> the migration tool. So even if the producer had a chance to refresh the 
> metadata when the leader changed, it throws LeaderNotAvailableExceptions much 
> later when it sends a request for that topic. 
> I propose we just fetch data for all topics sent by the producer in the 
> periodic refresh of topic metadata.
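The proposal amounts to tracking every topic the producer has ever sent to, and refreshing metadata for that full set rather than only the topics in the most recent batch. A hedged Java sketch of that bookkeeping; `TopicTracker` and its methods are hypothetical names, not producer internals:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: low-throughput topics accumulate in the tracked set
// even when they are absent from recent batches.
class TopicTracker {
    private final Set<String> allTopics = new HashSet<>();

    void recordBatch(Set<String> batchTopics) {
        allTopics.addAll(batchTopics);   // remember every topic ever sent
    }

    Set<String> topicsToRefresh() {
        return Set.copyOf(allTopics);    // periodic refresh covers everything
    }
}
```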





[jira] [Resolved] (KAFKA-407) Uncaught InputStream.close() exception in CompressionUtils.compress()

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-407.
-
Resolution: Won't Fix

This part of the code has been removed, so closing the issue.

> Uncaught InputStream.close() exception in CompressionUtils.compress()
> -
>
> Key: KAFKA-407
> URL: https://issues.apache.org/jira/browse/KAFKA-407
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.7.1
>Reporter: Lorenzo Alberton
>Priority: Minor
>
> In CompressionUtils.compress(), in this try/catch block:
> ==
> try {
>   cf.write(messageByteBuffer.array)
> } catch {
>   case e: IOException => error("Error while writing to the GZIP output 
> stream", e)
>   cf.close()
>   throw e
> } finally {
>   cf.close()
> }
> ==
> cf.close() might throw an IOException, which is not handled.
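The quoted Scala block calls close() in both the catch and the finally, and the final close() is unguarded. In Java terms, this is exactly the problem try-with-resources was designed to remove: the resource is closed exactly once, and an IOException from close() is either propagated or attached as a suppressed exception. A sketch of the safe pattern (illustrative, not the original CompressionUtils code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: try-with-resources closes the stream exactly once; a failure in
// close() no longer goes unhandled.
class SafeCompress {
    static byte[] gzip(byte[] payload) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (OutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(payload);   // a write failure propagates,
        }                        // and close() still runs exactly once
        return buf.toByteArray();
    }
}
```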





[jira] [Resolved] (KAFKA-354) Refactor getter and setter API to conform to the new convention

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-354.
-
Resolution: Won't Fix

Closing inactive issue.

> Refactor getter and setter API to conform to the new convention
> ---
>
> Key: KAFKA-354
> URL: https://issues.apache.org/jira/browse/KAFKA-354
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.0
>Reporter: Neha Narkhede
>Assignee: Joe Stein
>  Labels: optimization
>
> We just agreed on a new convention for getter/setter APIs. It will be good to 
> refactor code to conform to that.
> > We can actually go with public vals or vars - there is not much point in
> > defining a custom getter/setter as that is redundant.
> >
> > For example:
> > - start with "val x"
> > - over time, we determine that it needs to be mutable - change it to "var
> > x"
> > - if you need something more custom (e.g., enforce constraints on the
> > values that you can assign) then we can add the custom setter
> >  private[this] var underlying: T = ...
> >  def  x = underlying
> >  def x_=(update: T)  { if (constraint satisfied) {underlying = update}
> > else {throw new Exception} }
> >
> > All of the above changes will be binary compatible since under the covers,
> > reads/assignments are all through getter/setter methods.
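The quoted convention is Scala-specific, but its last step (add a custom setter only when a constraint must be enforced) has a direct Java analogue. A small sketch with hypothetical names:

```java
// Java analogue of the convention above: plain getter, and a custom setter
// only where a constraint on assignable values is needed.
class Config {
    private int fetchSize = 1024;   // backing field ("underlying")

    int fetchSize() { return fetchSize; }

    void setFetchSize(int update) {
        if (update <= 0) {          // enforce the constraint on assignment
            throw new IllegalArgumentException("fetchSize must be positive");
        }
        fetchSize = update;
    }
}
```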





[jira] [Resolved] (KAFKA-435) Keep track of the transient test failure for Kafka-343 on Apache Jenkins

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-435.
-
Resolution: Cannot Reproduce

Closing inactive issue.

> Keep track of the transient test failure for Kafka-343 on Apache Jenkins
> 
>
> Key: KAFKA-435
> URL: https://issues.apache.org/jira/browse/KAFKA-435
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.0
>Reporter: Yang Ye
>Assignee: Yang Ye
>Priority: Minor
>  Labels: transient-unit-test-failure
>
> See: 
> http://mail-archives.apache.org/mod_mbox/incubator-kafka-commits/201208.mbox/browser
> Error message:
> --
> [...truncated 3415 lines...]
> [2012-08-01 17:27:08,432] ERROR KafkaApi on Broker 0, error when processing 
> request (test_topic,0,-1,1048576)
> (kafka.server.KafkaApis:99)
> kafka.common.OffsetOutOfRangeException: offset -1 is out of range
>   at kafka.log.Log$.findRange(Log.scala:46)
>   at kafka.log.Log.read(Log.scala:265)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:377)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:333)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:332)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:332)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:328)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:328)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:272)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
>   at java.lang.Thread.run(Thread.java:662)
> [2012-08-01 17:27:08,446] ERROR Closing socket for /67.195.138.9 because of 
> error (kafka.network.Processor:99)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcher.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:171)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
>   at kafka.utils.Utils$.read(Utils.scala:630)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>   at kafka.network.Processor.read(SocketServer.scala:296)
>   at kafka.network.Processor.run(SocketServer.scala:212)
>   at java.lang.Thread.run(Thread.java:662)
> [info] Test Passed: 
> testResetToEarliestWhenOffsetTooLow(kafka.integration.AutoOffsetResetTest)
> [info] Test Starting: 
> testResetToLatestWhenOffsetTooHigh(kafka.integration.AutoOffsetResetTest)
> [2012-08-01 17:27:09,203] ERROR KafkaApi on Broker 0, error when processing 
> request (test_topic,0,1,1048576)
> (kafka.server.KafkaApis:99)
> kafka.common.OffsetOutOfRangeException: offset 1 is out of range
>   at kafka.log.Log$.findRange(Log.scala:46)
>   at kafka.log.Log.read(Log.scala:265)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:377)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:333)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:332)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:332)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:328)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:328)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:272)

[jira] [Resolved] (KAFKA-599) SimpleConsumerShell ONLY connects to the first host in the broker-list string to fetch topic metadata

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-599.
-
Resolution: Auto Closed

The Scala consumers are no longer supported. 


> SimpleConsumerShell ONLY connects to the first host in the broker-list string 
> to fetch topic metadata
> -
>
> Key: KAFKA-599
> URL: https://issues.apache.org/jira/browse/KAFKA-599
> Project: Kafka
>  Issue Type: Bug
>Reporter: John Fung
>Priority: Critical
>






[jira] [Resolved] (KAFKA-610) connect.timeout.ms seems to do the wrong thing in the producer

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-610.
-
Resolution: Auto Closed

Closing  inactive issue. The old producer is no longer supported.


> connect.timeout.ms seems to do the wrong thing in the producer
> --
>
> Key: KAFKA-610
> URL: https://issues.apache.org/jira/browse/KAFKA-610
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>
> This setting used to control the socket connection timeout. This is also what 
> the documentation says:
> "the maximum time spent by kafka.producer.SyncProducer trying to connect to 
> the kafka broker. Once it elapses, the producer throws an ERROR and stops."
> But it doesn't look to me like this parameter is being used at all. The only 
> thing we do with it is check in a catch statement whether that much time has 
> elapsed and then throw an error. Since we haven't set the connection timeout, 
> this is silly.





[jira] [Resolved] (KAFKA-1006) Consumer loses messages of a new topic with auto.offset.reset = largest

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1006.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> Consumer loses messages of a new topic with auto.offset.reset = largest
> ---
>
> Key: KAFKA-1006
> URL: https://issues.apache.org/jira/browse/KAFKA-1006
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Swapnil Ghike
>Assignee: Guozhang Wang
>  Labels: usability
>
> Consumer currently uses auto.offset.reset = largest by default. If a new 
> topic is created, consumer's topic watcher is fired. The consumer will first 
> finish partition reassignment as part of rebalance and then start consuming 
> from the tail of each partition. While the partition reassignment is underway, 
> the server may have appended new messages to the new topic; the consumer won't 
> consume these messages. Thus, multiple batches of messages may be lost when a 
> topic is newly created. 
> The fix is to start consuming from the earliest offset for newly created 
> topics.
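On the consumer side, the same fix can be applied through configuration by overriding the offset-reset default. A minimal sketch with plain java.util.Properties (the property name auto.offset.reset is the real consumer setting; everything else here is illustrative):

```java
import java.util.Properties;

public class OffsetResetConfig {
    // Consumer properties that read from the earliest available offset when
    // no committed offset exists, so messages appended to a brand-new topic
    // during rebalance are not skipped. The Java consumer uses "earliest";
    // the old Scala consumer's equivalent value was "smallest".
    public static Properties consumerProps(String groupId) {
        Properties props = new Properties();
        props.setProperty("group.id", groupId);
        props.setProperty("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps("my-group").getProperty("auto.offset.reset"));
    }
}
```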





[jira] [Resolved] (KAFKA-1066) Reduce logging on producer connection failures

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1066.
--
Resolution: Auto Closed

Closing inactive issue. The old producer is no longer supported.

> Reduce logging on producer connection failures
> --
>
> Key: KAFKA-1066
> URL: https://issues.apache.org/jira/browse/KAFKA-1066
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.0
>Reporter: Jason Rosenberg
>Assignee: Jun Rao
>
> Below is a stack trace from a unit test, where a producer tries to send a 
> message, but no server is available.
> The exception/message logging seems to be inordinately verbose.  
> I'm thinking a simple change could be to not log full stack traces for simple 
> things like "Connection refused", etc.  Seems it would be fine to just log 
> the exception message in such cases.
> Also, the log levels could be tuned, such that things logged as ERROR 
> indicate that all possible retries have been attempted, rather than having it 
> be an ERROR for each step of the retry/failover process.  Thus, for a 
> redundant, clustered service, it should be considered normal that single 
> nodes will be unavailable (such as when we're doing a rolling restart of the 
> cluster, etc.).  It should only be an ERROR if all brokers/all replicas are 
> unavailable, etc.  This way, we can selectively set our log level to ERROR, 
> and have it be useful.
> This is from one of my unit tests.  I am using the default retry count (3) 
> here, but even if I reduced that, it seems this is a crazy amount of logging. 
>  (I've edited this to remove from each stack trace the portion of 
> testing/calling code, into the Producer.send() call).
> Here's the code snippet that produced the logging below (and note, the server 
> was not available on the requested port).
>   KeyedMessage msg = new KeyedMessage(topic, 
> message);
>   producer.send(msg);
> Jason
> 599 [main] ERROR kafka.producer.SyncProducer  - Producer connection to 
> localhost:1025 unsuccessful
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.Net.connect0(Native Method)
>   at sun.nio.ch.Net.connect(Net.java:465)
>   at sun.nio.ch.Net.connect(Net.java:457)
>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:639)
>   at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
>   at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
>   at 
> kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>   at 
> kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:69)
>   at kafka.utils.Utils$.swallow(Utils.scala:186)
>   at kafka.utils.Logging$class.swallowError(Logging.scala:105)
>   at kafka.utils.Utils$.swallowError(Utils.scala:45)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:69)
>   at kafka.producer.Producer.send(Producer.scala:74)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:32)
>   ...
> 615 [main] WARN kafka.client.ClientUtils$  - Fetching topic metadata with 
> correlation id 0 for topics [Set(test-topic)] from broker 
> [id:0,host:localhost,port:1025] failed
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.Net.connect0(Native Method)
>   at sun.nio.ch.Net.connect(Net.java:465)
>   at sun.nio.ch.Net.connect(Net.java:457)
>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:639)
>   at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
>   at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
>   at 
> kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
>   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
>   at 
> kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:69)
>   at kafka.utils.Utils$.swallow(Utils.scala:186)
>   at kafka.utils.Logging$class.swallowError(Logging.scala:105)
>   at kafka.utils.Utils$.swallowError(Utils.scala:45)
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEven
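The change suggested above (a one-line WARN per attempt, ERROR only once all retries are exhausted) can be sketched as a hypothetical helper; describeFailure is not a Kafka API, just an illustration of the logging shape:

```java
import java.net.ConnectException;

public class FailureLogging {
    // Summarize a connection failure as one line instead of a full stack
    // trace. Per-attempt failures are reported at WARN; ERROR is reserved
    // for the case where every retry (and every broker) has been exhausted.
    public static String describeFailure(String broker, Exception e, boolean retriesExhausted) {
        String level = retriesExhausted ? "ERROR" : "WARN";
        return level + " connection to " + broker + " failed: " + e.getMessage();
    }

    public static void main(String[] args) {
        System.out.println(describeFailure("localhost:1025",
                new ConnectException("Connection refused"), false));
    }
}
```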

[jira] [Resolved] (KAFKA-1415) Async producer.send can block forever if async.ProducerSendThread dies

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1415.
--
Resolution: Auto Closed

Closing inactive issue. The old producer is no longer supported, please upgrade 
to the Java producer whenever possible.

> Async producer.send can block forever if async.ProducerSendThread dies
> --
>
> Key: KAFKA-1415
> URL: https://issues.apache.org/jira/browse/KAFKA-1415
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.8.0
> Environment: kafka_2.9.2-0.8.0.jar
>Reporter: James Blackburn
>Assignee: Jun Rao
>
> We noticed that if something goes fundamentally wrong (in this case the jars 
> were replaced under a running Producer's feet) then async calls to 
> {{producer.send}} can lock up forever.
> I saw in the log file the following exception logged:
> {code}
> [2014-04-17 16:45:36,484] INFO Disconnecting from cn2:9092 
> (kafka.producer.SyncProducer)
> Exception in thread "ProducerSendThread-" java.lang.NoClassDefFoundError: 
> kafka/producer/async/ProducerSendThread$$anonfun$run$1
> at 
> kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:46)
> Caused by: java.lang.ClassNotFoundException: 
> kafka.producer.async.ProducerSendThread$$anonfun$run$1
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 1 more
> {code}
> However my application continued running. Jstack showed that the 
> producer.send calls had all locked up:
> {code}
> "SubscriberEventQueue0Executor-1" prio=10 tid=0x2aaab0a88000 nid=0x44f5 
> waiting on condition [0x44ac4000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000790c47918> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:349)
>   at kafka.producer.Producer$$anonfun$asyncSend$1.apply(Producer.scala:98)
>   at kafka.producer.Producer$$anonfun$asyncSend$1.apply(Producer.scala:90)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:33)
>   at kafka.producer.Producer.asyncSend(Producer.scala:90)
>   at kafka.producer.Producer.send(Producer.scala:77)
>   - locked <0x000791768ee8> (a java.lang.Object)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>   at com.mi.ahl.kafka.rmds2kafka.Bridge$1.onMarketData(Bridge.java:165)
>Locked ownable synchronizers:
>   - <0x000792205cd0> (a 
> java.util.concurrent.ThreadPoolExecutor$Worker)
> "SubscriberEventQueue1Executor-2" prio=10 tid=0x2aaab0aa nid=0x4511 
> waiting for monitor entry [0x44dc7000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at kafka.producer.Producer.send(Producer.scala:71)
>   - waiting to lock <0x000791768ee8> (a java.lang.Object)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>   at com.mi.ahl.kafka.rmds2kafka.Bridge$1.onMarketData(Bridge.java:165)
> "SubscriberEventQueue2Executor-3" prio=10 tid=0x2aaab0ab6800 nid=0x4512 
> waiting for monitor entry [0x44ec8000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at kafka.producer.Producer.send(Producer.scala:71)
>   - waiting to lock <0x000791768ee8> (a java.lang.Object)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>   at com.mi.ahl.kafka.rmds2kafka.Bridge$1.onMarketData(Bridge.java:165)
> "SubscriberEventQueue3Executor-4" prio=10 tid=0x2aaab0ab8800 nid=0x4513 
> waiting for monitor entry [0x44fc9000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at kafka.producer.Producer.send(Producer.scala:71)
>   - waiting to lock <0x000791768ee8> (a java.lang.Object)
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33)
>   at com.mi.ahl.kafka.rmds2kafka.Bridge$1.onMarketData(Bridge.java:165)
> {code}
> *Expectation:*
> {{producer.send}} woul
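The lockup in the stack dumps comes from asyncSend calling LinkedBlockingQueue.put, which waits forever once the draining thread is dead. A minimal sketch of the safer pattern, enqueueing with a timeout via offer (illustrative, not the actual producer code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedEnqueue {
    // Enqueue with a timeout instead of blocking indefinitely. If the
    // draining thread has died and the queue stays full, the caller gets
    // false back instead of parking forever, as in the stack dumps above.
    public static boolean enqueue(LinkedBlockingQueue<String> queue, String msg, long timeoutMs)
            throws InterruptedException {
        return queue.offer(msg, timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> q = new LinkedBlockingQueue<>(1);
        System.out.println(enqueue(q, "a", 10));  // true: the queue has room
        System.out.println(enqueue(q, "b", 10));  // false: full and nobody is draining
    }
}
```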

[jira] [Resolved] (KAFKA-1958) ZookeeperConsumerConnector doesn't remove consumer node on shutdown.

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1958.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> ZookeeperConsumerConnector doesn't remove consumer node on shutdown.
> 
>
> Key: KAFKA-1958
> URL: https://issues.apache.org/jira/browse/KAFKA-1958
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Beletsky Andrey
>Assignee: Neha Narkhede
>  Labels: consumer, shutdown, zookeeper
>
> We use kafka with ZooKeeper via high level consumer.
> There is a scheduled job that creates a consumer with specific group, does 
> necessary logic and shuts down this consumer.
> +An issue:+
> Nobody deletes */consumers/myGroup/ids/myGroup__*. And after 
> several job runs there are a lot of dead consumer IDs under myGroup. I've got 
> [an 
> issue|https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whysomeoftheconsumersinaconsumergroupneverreceiveanymessage?]
>  where a new consumer doesn't see a partition.
> We started implementing an approach that removes the consumer nodes from 
> ZooKeeper manually after the consumer is shut down.
> I think a better way is to remove this node during 
> *ZookeeperConsumerConnector.shutdown()*.
> *P.S.:*
> If I missed something in your sources please let me know.





[jira] [Resolved] (KAFKA-2062) Sync Producer, Variable Message Length, Multiple Threads = Direct memory overuse

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2062.
--
Resolution: Auto Closed

Closing inactive issue. The old producer is no longer supported, please upgrade 
to the Java producer whenever possible.

> Sync Producer, Variable Message Length, Multiple Threads = Direct memory 
> overuse
> 
>
> Key: KAFKA-2062
> URL: https://issues.apache.org/jira/browse/KAFKA-2062
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.1.1
>Reporter: Michael Braun
>Assignee: Jun Rao
>
> Using a synchronous producer with multiple threads each calling .send on the 
> single producer object, each thread ends up maintaining a thread-local direct 
> memory buffer. When message sizes vary (for instance, 99% of messages are 1MB 
> and 1% are 100MB), the buffers eventually expand to the largest size on all 
> threads, which can cause an out-of-memory (direct buffer memory) error:
> java.lang.OutOfMemoryError: Direct buffer memory
>   at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_67]
>   at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) 
> ~[na:1.7.0_67]
>   at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) ~[na:1.7.0_67]
>   at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) ~[na:1.7.0_67]
>   at sun.nio.ch.IOUtil.write(IOUtil.java:130) ~[na:1.7.0_67]
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannel.java:493) ~[na:1.7.0_67]
>   at java.nio.channels.SocketChannel.write(SocketChannel.java:493) 
> ~[na:1.7.0_67]
>   at 
> kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56) 
> ~[kafka_2.10-0.8.1.1.jar:na]
>   at kafka.network.Send$class.writeCompletely(Transmission.scala:75) 
> ~[kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
>  ~[kafka_2.10-0.8.1.1.jar:na]
>   at kafka.network.BlockingChannel.send(BlockingChannel.scala:92) 
> ~[kafka_2.10-0.8.1.1.jar:na]
>   at kafka.producer.SyncProducer.liftedTree$1(SyncProducer.scala:72) 
> ~[kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:71)
>  ~[kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:102)
>  ~[kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102)
>  ~[kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:102)
>  ~[kafka_2.10-0.8.1.1.jar:na]
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) 
> ~[kafka_2.10-0.8.1.1.jar:na]
>   at kafka.producer.SyncProducer.send(SyncProducer.scala:100) 
> ~[kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
>  [kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
>  [kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
>  [kafka_2.10-0.8.1.1.jar:na]
>   at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>  [scala-library-2.10.1.jar:na]
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) 
> [scala-library-2.10.1.jar:na]
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) 
> [scala-library-2.10.1.jar:na]
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226) 
> [scala-library-2.10.1.jar:na]
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39) 
> [scala-library-2.10.1.jar:na]
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:98) 
> [scala-library-2.10.1.jar:na]
>   at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>  [scala-library-2.10.1.jar:na]
>   at 
> kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
>  [kafka_2.10-0.8.1.1.jar:na]
>   at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72) 
> [kafka_2.10-0.8.1.1.jar:na]
>   at kafka.producer.Producer.send(Producer.scala:76) 
> [kafka_2.10-0.8.1.1.jar:na]
>   at kafka.javaapi.producer.Producer.send(Producer.scala:33) 
> [kafka_2.10-0.8.1.1.jar:na]
> 
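The growth pattern behind this error is easy to reproduce in miniature: a per-thread buffer grown to the largest message seen keeps that capacity for the thread's lifetime, so one oversized message per thread pins the memory. A sketch using heap buffers (the NIO internals cache direct buffers, but the growth logic is analogous):

```java
import java.nio.ByteBuffer;

public class PerThreadBuffer {
    // Per-thread reusable buffer, grown on demand and never shrunk, mirroring
    // the thread-local direct-buffer behavior described in this issue.
    private static final ThreadLocal<ByteBuffer> BUF =
            ThreadLocal.withInitial(() -> ByteBuffer.allocate(1024));

    public static ByteBuffer bufferFor(int messageSize) {
        ByteBuffer b = BUF.get();
        if (b.capacity() < messageSize) {          // one oversized message...
            b = ByteBuffer.allocate(messageSize);  // ...grows the cached buffer,
            BUF.set(b);                            // ...and it stays that big
        }
        b.clear();
        return b;
    }

    public static void main(String[] args) {
        bufferFor(8 * 1024 * 1024);                      // one 8 MB message
        System.out.println(bufferFor(1024).capacity());  // still 8 MB for this thread
    }
}
```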





[jira] [Resolved] (KAFKA-2995) in 0.9.0.0 Old Consumer's commitOffsets with specify partition can submit not exists topic and partition to zk

2017-10-30 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2995.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> in 0.9.0.0 Old Consumer's commitOffsets with specify partition can submit not 
> exists topic and partition to zk
> --
>
> Key: KAFKA-2995
> URL: https://issues.apache.org/jira/browse/KAFKA-2995
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Pengwei
>Assignee: Neha Narkhede
>
> in 0.9.0.0 Version, the Old Consumer's commit interface is below:
> def commitOffsets(offsetsToCommit: immutable.Map[TopicAndPartition, 
> OffsetAndMetadata], isAutoCommit: Boolean) {
> trace("OffsetMap: %s".format(offsetsToCommit))
> var retriesRemaining = 1 + (if (isAutoCommit) 0 else 
> config.offsetsCommitMaxRetries) // no retries for commits from auto-commit
> var done = false
> while (!done) {
>   val committed = offsetsChannelLock synchronized {
> // committed when we receive either no error codes or only 
> MetadataTooLarge errors
> if (offsetsToCommit.size > 0) {
>   if (config.offsetsStorage == "zookeeper") {
> offsetsToCommit.foreach { case (topicAndPartition, 
> offsetAndMetadata) =>
>   commitOffsetToZooKeeper(topicAndPartition, 
> offsetAndMetadata.offset)
> }
>   
> this interface does not validate the offsetsToCommit parameter. If 
> offsetsToCommit contains a topic or partition that does not exist in Kafka, 
> it will create an entry under the /consumers/[group]/offsets/[nonexistent 
> topic] directory.
> Should we check that each topic and partition in offsetsToCommit exists, or 
> just check that it is contained in topicRegistry or checkpointedZkOffsets?
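The proposed check could be sketched as a validation pass over the offset map before any ZooKeeper write; filterKnown and knownPartitions are hypothetical names standing in for a lookup against topicRegistry or cluster metadata, not actual Kafka API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class OffsetCommitValidation {
    // Keep only entries whose "topic-partition" key is known to the consumer,
    // so a commit can't create znodes for nonexistent topics or partitions.
    public static Map<String, Long> filterKnown(Map<String, Long> offsetsToCommit,
                                                Set<String> knownPartitions) {
        Map<String, Long> valid = new HashMap<>();
        for (Map.Entry<String, Long> e : offsetsToCommit.entrySet()) {
            if (knownPartitions.contains(e.getKey())) {
                valid.put(e.getKey(), e.getValue());
            }
        }
        return valid;
    }

    public static void main(String[] args) {
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("real-topic-0", 42L);
        offsets.put("typo-topic-0", 7L);
        Set<String> known = new HashSet<>();
        known.add("real-topic-0");
        System.out.println(filterKnown(offsets, known));  // only real-topic-0 survives
    }
}
```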





[jira] [Assigned] (KAFKA-5647) Use async ZookeeperClient for Admin operations

2017-11-02 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5647:


Assignee: Manikumar

> Use async ZookeeperClient for Admin operations
> --
>
> Key: KAFKA-5647
> URL: https://issues.apache.org/jira/browse/KAFKA-5647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Manikumar
>Priority: Major
> Fix For: 1.1.0
>
>
> Since we will be removing the ZK dependency in most of the admin clients, we 
> only need to change the admin operations used on the server side. This 
> includes converting AdminManager and the remaining usage of zkUtils in 
> KafkaApis to use ZookeeperClient/KafkaZkClient. 





[jira] [Commented] (KAFKA-7274) Incorrect subject credential used in inter-broker communication

2018-10-04 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638241#comment-16638241
 ] 

Manikumar commented on KAFKA-7274:
--

[~rsivaram] As described in the JIRA, in the case of static JAAS configuration, we 
load all login modules for each configured SASL mechanism.
This causes authentication to fail. Any suggestions on how to fix this?

> Incorrect subject credential used in inter-broker communication
> ---
>
> Key: KAFKA-7274
> URL: https://issues.apache.org/jira/browse/KAFKA-7274
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0
>Reporter: TAO XIAO
>Priority: Major
>
> We configured one broker setup to enable multiple SASL mechanisms using JAAS 
> config file but we failed to start up the broker.
>  
> Here is security section of server.properties
>  
> {{listeners=SASL_PLAINTEXT://:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256
> sasl.mechanism.inter.broker.protocol=PLAIN}}
>  
> JAAS file
>  
> {noformat}
> sasl_plaintext.KafkaServer {
>   org.apache.kafka.common.security.plain.PlainLoginModule required
>   username="admin"
>   password="admin-secret"
>   user_admin="admin-secret"
>   user_alice="alice-secret";
>   org.apache.kafka.common.security.scram.ScramLoginModule required
>   username="admin1"
>   password="admin-secret";
> };{noformat}
>  
> Exception we got
>  
> {noformat}
> [2018-08-10 12:12:13,070] ERROR [Controller id=0, targetBrokerId=0] 
> Connection to node 0 failed authentication due to: Authentication failed: 
> Invalid username or password 
> (org.apache.kafka.clients.NetworkClient){noformat}
>  
> If we change to use the broker configuration property, we can start the 
> broker successfully
>  
> {noformat}
> listeners=SASL_PLAINTEXT://:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256
> sasl.mechanism.inter.broker.protocol=PLAIN
> listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
>  required username="admin" password="admin-secret" user_admin="admin-secret" 
> user_alice="alice-secret";
> listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
>  required username="admin1" password="admin-secret";{noformat}
>  
> I believe this issue is caused by Kafka assigning all login modules to each 
> defined mechanism when using a JAAS file, which results in the Login class 
> adding both usernames defined in each login module to the same subject
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/JaasContext.java#L101]
>  
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L63]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7274) Incorrect subject credential used in inter-broker communication

2018-10-04 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16638347#comment-16638347
 ] 

Manikumar commented on KAFKA-7274:
--

[~rsivaram] Thanks for the explanation. Let me know if we need to update the docs 
for this; otherwise we can close this JIRA.

> Incorrect subject credential used in inter-broker communication
> ---
>
> Key: KAFKA-7274
> URL: https://issues.apache.org/jira/browse/KAFKA-7274
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0
>Reporter: TAO XIAO
>Priority: Major
>
> We configured one broker setup to enable multiple SASL mechanisms using JAAS 
> config file but we failed to start up the broker.
>  
> Here is security section of server.properties
>  
> {{listeners=SASL_PLAINTEXT://:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256
> sasl.mechanism.inter.broker.protocol=PLAIN}}
>  
> JAAS file
>  
> {noformat}
> sasl_plaintext.KafkaServer {
>   org.apache.kafka.common.security.plain.PlainLoginModule required
>   username="admin"
>   password="admin-secret"
>   user_admin="admin-secret"
>   user_alice="alice-secret";
>   org.apache.kafka.common.security.scram.ScramLoginModule required
>   username="admin1"
>   password="admin-secret";
> };{noformat}
>  
> Exception we got
>  
> {noformat}
> [2018-08-10 12:12:13,070] ERROR [Controller id=0, targetBrokerId=0] 
> Connection to node 0 failed authentication due to: Authentication failed: 
> Invalid username or password 
> (org.apache.kafka.clients.NetworkClient){noformat}
>  
> If we change to use the broker configuration property, we can start the 
> broker successfully
>  
> {noformat}
> listeners=SASL_PLAINTEXT://:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256
> sasl.mechanism.inter.broker.protocol=PLAIN
> listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
>  required username="admin" password="admin-secret" user_admin="admin-secret" 
> user_alice="alice-secret";
> listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
>  required username="admin1" password="admin-secret";{noformat}
>  
> I believe this issue is caused by Kafka assigning all login modules to each 
> defined mechanism when using a JAAS file, which results in the Login class 
> adding both usernames defined in each login module to the same subject
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/JaasContext.java#L101]
>  
> [https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L63]
>  
>  





[jira] [Commented] (KAFKA-7508) Kafka broker anonymous disconnected from Zookeeper

2018-10-16 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651985#comment-16651985
 ] 

Manikumar commented on KAFKA-7508:
--

[~sathish051] The exception message shows "java.lang.OutOfMemoryError". Did you 
try increasing the heap size?
Please post your Java heap memory settings.

> Kafka broker anonymous disconnected from Zookeeper
> --
>
> Key: KAFKA-7508
> URL: https://issues.apache.org/jira/browse/KAFKA-7508
> Project: Kafka
>  Issue Type: Task
>  Components: admin, config
>Reporter: Sathish Yanamala
>Priority: Blocker
>
> Hello Team,
>  
> We are facing the error below; the Kafka broker is unable to connect to 
> ZooKeeper. Please check and suggest whether any configuration changes are 
> required on the Kafka broker.
>  
>  ERROR:
> 2018-10-15 12:24:07,502 WARN kafka.network.Processor: Attempting to send 
> response via channel for which there is no open connection, connection id 
> - -:9093-- -:47542-25929
> 2018-10-15 12:24:09,428 INFO kafka.coordinator.group.GroupCoordinator: 
> [GroupCoordinator 3]: Group KMOffsetCache-xxx  with generation 9 is now 
> empty (__consumer_offsets-22)
> 2018-10-15 12:24:09,428 INFO kafka.server.epoch.LeaderEpochFileCache: Updated 
> PartitionLeaderEpoch. New: \{epoch:1262, offset:151}, Current: \{epoch:1261, 
> offset144} for Partition: __consumer_offsets-22. Cache now contains 15 
> entries.
> {color:#d04437}*2018-10-15 12:24:10,905 ERROR kafka.utils.KafkaScheduler: 
> Uncaught exception in scheduled task 'highwatermark-checkpoint'*{color}
> {color:#d04437}*java.lang.OutOfMemoryError: Java heap space*{color}
> {color:#d04437}    at{color} 
> scala.collection.convert.DecorateAsScala$$Lambda$214/x.get$Lambda(Unknown 
> Source)
>     at 
> java.lang.invoke.LambdaForm$DMH/xxx.invokeStatic_LL_L(LambdaForm$DMH)
>     at 
> java.lang.invoke.LambdaForm$MH/xx.linkToTargetMethod(LambdaForm$MH)
>     at 
> scala.collection.convert.DecorateAsScala.collectionAsScalaIterableConverter(DecorateAsScala.scala:45)
>     at 
> scala.collection.convert.DecorateAsScala.collectionAsScalaIterableConverter$(DecorateAsScala.scala:44)
>     at 
> scala.collection.JavaConverters$.collectionAsScalaIterableConverter(JavaConverters.scala:73)
>     at kafka.utils.Pool.values(Pool.scala:85)
>     at 
> kafka.server.ReplicaManager.nonOfflinePartitionsIterator(ReplicaManager.scala:397)
>     at 
> kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1340)
>     at 
> kafka.server.ReplicaManager.$anonfun$startHighWaterMarksCheckPointThread$1(ReplicaManager.scala:253)
>     at 
> kafka.server.ReplicaManager$$Lambda$608/xx.apply$mcV$sp(Unknown Source)
>     at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>     at 
> kafka.utils.KafkaScheduler$$Lambda$269/.apply$mcV$sp(Unknown Source)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> Thank you,
> Sathish Yanamala
> M:832-382-4487





[jira] [Updated] (KAFKA-7516) Client (Producer and/or Consumer) crashes during initialization on Android

2018-10-18 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7516:
-
Fix Version/s: (was: 2.0.1)
   2.2.0

> Client (Producer and/or Consumer) crashes during initialization on Android
> --
>
> Key: KAFKA-7516
> URL: https://issues.apache.org/jira/browse/KAFKA-7516
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: alex kamenetsky
>Priority: Major
> Fix For: 2.2.0
>
>
> Attempt to incorporate kafka client (both Producer and Consumer) on Android 
> Dalvik fails during initialization stage: Dalvik doesn't support 
> javax.management (JMX).





[jira] [Commented] (KAFKA-7508) Kafka broker anonymous disconnected from Zookeeper

2018-10-19 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16656854#comment-16656854
 ] 

Manikumar commented on KAFKA-7508:
--

[~sathish051] Please add Kafka and Java version details. Why is the PermSize so 
small? Try increasing the PermGen size.
 PermGen space was replaced by Metaspace in Java 8. Suggested settings: 
http://kafka.apache.org/documentation/#java

We need to monitor the heap size and, if required, take a heap dump. 
Profiling helps us find the root cause. 
You can also capture periodic jmap output for memory analysis:

 $ jmap -histo:live <pid>   # prints a histogram of the live Java object heap
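For in-process monitoring alongside jmap, the standard java.lang.management MemoryMXBean exposes the same heap numbers; a minimal sketch (not tied to any Kafka API):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    // Current heap usage as a fraction of the configured max, suitable for
    // periodic logging alongside jmap histograms and GC-log analysis.
    public static double heapUsedFraction() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long max = heap.getMax();                 // -1 if the max is undefined
        return max > 0 ? (double) heap.getUsed() / max : Double.NaN;
    }

    public static void main(String[] args) {
        System.out.printf("heap used: %.1f%%%n", heapUsedFraction() * 100);
    }
}
```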

> Kafka broker anonymous disconnected from Zookeeper
> --
>
> Key: KAFKA-7508
> URL: https://issues.apache.org/jira/browse/KAFKA-7508
> Project: Kafka
>  Issue Type: Task
>  Components: admin, config
>Reporter: Sathish Yanamala
>Priority: Blocker
>
> Hello Team,
>  
> We are facing the error below; the Kafka broker is unable to connect to 
> ZooKeeper. Please check and suggest whether any configuration changes are 
> required on the Kafka broker.
>  
>  ERROR:
> 2018-10-15 12:24:07,502 WARN kafka.network.Processor: Attempting to send 
> response via channel for which there is no open connection, connection id 
> - -:9093-- -:47542-25929
> 2018-10-15 12:24:09,428 INFO kafka.coordinator.group.GroupCoordinator: 
> [GroupCoordinator 3]: Group KMOffsetCache-xxx  with generation 9 is now 
> empty (__consumer_offsets-22)
> 2018-10-15 12:24:09,428 INFO kafka.server.epoch.LeaderEpochFileCache: Updated 
> PartitionLeaderEpoch. New: \{epoch:1262, offset:151}, Current: \{epoch:1261, 
> offset144} for Partition: __consumer_offsets-22. Cache now contains 15 
> entries.
> {color:#d04437}*2018-10-15 12:24:10,905 ERROR kafka.utils.KafkaScheduler: 
> Uncaught exception in scheduled task 'highwatermark-checkpoint'*{color}
> {color:#d04437}*java.lang.OutOfMemoryError: Java heap space*{color}
> {color:#d04437}    at{color} 
> scala.collection.convert.DecorateAsScala$$Lambda$214/x.get$Lambda(Unknown 
> Source)
>     at 
> java.lang.invoke.LambdaForm$DMH/xxx.invokeStatic_LL_L(LambdaForm$DMH)
>     at 
> java.lang.invoke.LambdaForm$MH/xx.linkToTargetMethod(LambdaForm$MH)
>     at 
> scala.collection.convert.DecorateAsScala.collectionAsScalaIterableConverter(DecorateAsScala.scala:45)
>     at 
> scala.collection.convert.DecorateAsScala.collectionAsScalaIterableConverter$(DecorateAsScala.scala:44)
>     at 
> scala.collection.JavaConverters$.collectionAsScalaIterableConverter(JavaConverters.scala:73)
>     at kafka.utils.Pool.values(Pool.scala:85)
>     at 
> kafka.server.ReplicaManager.nonOfflinePartitionsIterator(ReplicaManager.scala:397)
>     at 
> kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1340)
>     at 
> kafka.server.ReplicaManager.$anonfun$startHighWaterMarksCheckPointThread$1(ReplicaManager.scala:253)
>     at 
> kafka.server.ReplicaManager$$Lambda$608/xx.apply$mcV$sp(Unknown Source)
>     at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>     at 
> kafka.utils.KafkaScheduler$$Lambda$269/.apply$mcV$sp(Unknown Source)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> Thank you,
> Sathish Yanamala
> M:832-382-4487



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7546) Java implementation for Authorizer

2018-10-25 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663494#comment-16663494
 ] 

Manikumar commented on KAFKA-7546:
--

You can check this implementation: 
https://github.com/apache/ranger/blob/master/plugin-kafka/src/main/java/org/apache/ranger/authorization/kafka/authorizer/RangerKafkaAuthorizer.java

> Java implementation for Authorizer
> --
>
> Key: KAFKA-7546
> URL: https://issues.apache.org/jira/browse/KAFKA-7546
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Pradeep Bansal
>Priority: Major
> Attachments: AuthorizerImpl.PNG
>
>
> I am using kafka with authentication and authorization. I wanted to plugin my 
> own implementation of Authorizer which doesn't use zookeeper instead has 
> permission mapping in SQL database. Is it possible to write Authorizer code 
> in Java?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7535) KafkaConsumer doesn't report records-lag if isolation.level is read_committed

2018-10-25 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663595#comment-16663595
 ] 

Manikumar commented on KAFKA-7535:
--

As mentioned in the JIRA description, this only impacts the consumer records-lag 
metric. We can include the fix in 2.0.1.

> KafkaConsumer doesn't report records-lag if isolation.level is read_committed
> -
>
> Key: KAFKA-7535
> URL: https://issues.apache.org/jira/browse/KAFKA-7535
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Alexey Vakhrenev
>Assignee: lambdaliu
>Priority: Major
>  Labels: regression
> Fix For: 2.1.0, 2.1.1, 2.0.2
>
>
> Starting from 2.0.0, {{KafkaConsumer}} doesn't report {{records-lag}} if 
> {{isolation.level}} is {{read_committed}}. The last version, where it works 
> is {{1.1.1}}.
> The issue can be easily reproduced in {{kafka.api.PlaintextConsumerTest}} by 
> adding {{consumerConfig.setProperty("isolation.level", "read_committed")}} 
> within the related tests:
>  - {{testPerPartitionLagMetricsCleanUpWithAssign}}
>  - {{testPerPartitionLagMetricsCleanUpWithSubscribe}}
>  - {{testPerPartitionLagWithMaxPollRecords}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5462) Add a configuration for users to specify a template for building a custom principal name

2018-10-25 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5462.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

> Add a configuration for users to specify a template for building a custom 
> principal name
> 
>
> Key: KAFKA-5462
> URL: https://issues.apache.org/jira/browse/KAFKA-5462
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Koelli Mungee
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.2.0
>
>
> Add a configuration for users to specify a template for building a custom 
> principal name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7535) KafkaConsumer doesn't report records-lag if isolation.level is read_committed

2018-10-25 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7535:
-
Fix Version/s: (was: 2.0.2)
   (was: 2.1.1)
   2.0.1

> KafkaConsumer doesn't report records-lag if isolation.level is read_committed
> -
>
> Key: KAFKA-7535
> URL: https://issues.apache.org/jira/browse/KAFKA-7535
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Alexey Vakhrenev
>Assignee: lambdaliu
>Priority: Major
>  Labels: regression
> Fix For: 2.0.1, 2.1.0
>
>
> Starting from 2.0.0, {{KafkaConsumer}} doesn't report {{records-lag}} if 
> {{isolation.level}} is {{read_committed}}. The last version, where it works 
> is {{1.1.1}}.
> The issue can be easily reproduced in {{kafka.api.PlaintextConsumerTest}} by 
> adding {{consumerConfig.setProperty("isolation.level", "read_committed")}} 
> within the related tests:
>  - {{testPerPartitionLagMetricsCleanUpWithAssign}}
>  - {{testPerPartitionLagMetricsCleanUpWithSubscribe}}
>  - {{testPerPartitionLagWithMaxPollRecords}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7535) KafkaConsumer doesn't report records-lag if isolation.level is read_committed

2018-10-25 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7535.
--
Resolution: Fixed

> KafkaConsumer doesn't report records-lag if isolation.level is read_committed
> -
>
> Key: KAFKA-7535
> URL: https://issues.apache.org/jira/browse/KAFKA-7535
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Alexey Vakhrenev
>Assignee: lambdaliu
>Priority: Major
>  Labels: regression
> Fix For: 2.0.1, 2.1.0
>
>
> Starting from 2.0.0, {{KafkaConsumer}} doesn't report {{records-lag}} if 
> {{isolation.level}} is {{read_committed}}. The last version, where it works 
> is {{1.1.1}}.
> The issue can be easily reproduced in {{kafka.api.PlaintextConsumerTest}} by 
> adding {{consumerConfig.setProperty("isolation.level", "read_committed")}} 
> within the related tests:
>  - {{testPerPartitionLagMetricsCleanUpWithAssign}}
>  - {{testPerPartitionLagMetricsCleanUpWithSubscribe}}
>  - {{testPerPartitionLagWithMaxPollRecords}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7547) Avoid relogin in kafka if connection is already established.

2018-10-26 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16665001#comment-16665001
 ] 

Manikumar commented on KAFKA-7547:
--

Kerberos ticket lifetimes are typically short, so periodic relogins are 
required to renew the tickets.
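If the relogin cadence is a concern, it can be tuned (though not disabled outright) through the client's SASL settings. A sketch with illustrative values; the defaults are usually appropriate:

{code}
# Minimum time between relogin attempts, in milliseconds.
sasl.kerberos.min.time.before.relogin=60000
# Begin renewal once this fraction of the ticket lifetime has elapsed.
sasl.kerberos.ticket.renew.window.factor=0.80
# Random jitter added to the renewal time to spread out relogins.
sasl.kerberos.ticket.renew.jitter=0.05
{code}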

> Avoid relogin in kafka if connection is already established.
> 
>
> Key: KAFKA-7547
> URL: https://issues.apache.org/jira/browse/KAFKA-7547
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Pradeep Bansal
>Priority: Major
>
> I am new to kafka and may be there are ways already there for my requirement. 
> I didn't find a way so far and hence I though I will post it here.
> Currently, I observed that kafka periodically tries to renew kerberos token 
> using kinit -R command. I found that I can set 
> sasl.kerberos.min.time.before.relogin and change default from 1 minute to 1 
> day max. But in my case I am not clear on why renew is even required.
>  
> If it is not really required is there a way to turn it off?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7519) Transactional Ids Left in Pending State by TransactionStateManager During Transactional Id Expiration Are Unusable

2018-10-31 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7519:


Assignee: Manikumar

> Transactional Ids Left in Pending State by TransactionStateManager During 
> Transactional Id Expiration Are Unusable
> --
>
> Key: KAFKA-7519
> URL: https://issues.apache.org/jira/browse/KAFKA-7519
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 2.0.0
>Reporter: Bridger Howell
>Assignee: Manikumar
>Priority: Blocker
> Fix For: 2.0.1, 2.1.0
>
> Attachments: KAFKA-7519.patch, image-2018-10-18-13-02-22-371.png
>
>
>  
> After digging into a case where an exactly-once streams process was bizarrely 
> unable to process incoming data, we observed the following:
>  * StreamThreads stalling while creating a producer, eventually resulting in 
> no consumption by that streams process. Looking into those threads, we found 
> they were stuck in a loop, sending InitProducerIdRequests and always 
> receiving back the retriable error CONCURRENT_TRANSACTIONS and trying again. 
> These requests always had the same transactional id.
>  * After changing the streams process to not use exactly-once, it was able to 
> process messages with no problems.
>  * Alternatively, changing the applicationId for that streams process, it was 
> able to process with no problems.
>  * Every hour,  every broker would fail the task `transactionalId-expiration` 
> with the following error:
>  ** 
> {code:java}
> {"exception":{"stacktrace":"java.lang.IllegalStateException: Preparing 
> transaction state transition to Dead while it already a pending sta
> te Dead
>     at 
> kafka.coordinator.transaction.TransactionMetadata.prepareTransitionTo(TransactionMetadata.scala:262)
>     at kafka.coordinator
> .transaction.TransactionMetadata.prepareDead(TransactionMetadata.scala:237)
>     at kafka.coordinator.transaction.TransactionStateManager$$a
> nonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$anonfun$2$$anonfun$apply$9$$anonfun$3.apply(TransactionStateManager.scal
> a:151)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$ano
> nfun$2$$anonfun$apply$9$$anonfun$3.apply(TransactionStateManager.scala:151)
>     at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
>     at
>  
> kafka.coordinator.transaction.TransactionMetadata.inLock(TransactionMetadata.scala:172)
>     at kafka.coordinator.transaction.TransactionSt
> ateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$anonfun$2$$anonfun$apply$9.apply(TransactionStateManager.sc
> ala:150)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$a
> nonfun$2$$anonfun$apply$9.apply(TransactionStateManager.scala:149)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(Traversable
> Like.scala:234)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at scala.collection.immutable.Li
> st.foreach(List.scala:392)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>     at scala.collection.immutable.Li
> st.map(List.scala:296)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$app
> ly$mcV$sp$1$$anonfun$2.apply(TransactionStateManager.scala:149)
>     at kafka.coordinator.transaction.TransactionStateManager$$anonfun$enabl
> eTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$anonfun$2.apply(TransactionStateManager.scala:142)
>     at scala.collection.Traversabl
> eLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>     at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.
> scala:241)
>     at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
>     at scala.collection.mutable.HashMap$$anon
> fun$foreach$1.apply(HashMap.scala:130)
>     at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
>     at scala.collec
> tion.mutable.HashMap.foreachEntry(HashMap.scala:40)
>     at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
>     at scala.collecti
> on.TraversableLike$class.flatMap(TraversableLike.scala:241)
>     at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
>     a
> t 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Tr
> ansactionStateManager.scala:142)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$

[jira] [Assigned] (KAFKA-7519) Transactional Ids Left in Pending State by TransactionStateManager During Transactional Id Expiration Are Unusable

2018-10-31 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7519:


Assignee: Bridger Howell  (was: Manikumar)

> Transactional Ids Left in Pending State by TransactionStateManager During 
> Transactional Id Expiration Are Unusable
> --
>
> Key: KAFKA-7519
> URL: https://issues.apache.org/jira/browse/KAFKA-7519
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 2.0.0
>Reporter: Bridger Howell
>Assignee: Bridger Howell
>Priority: Blocker
> Fix For: 2.0.1, 2.1.0
>
> Attachments: KAFKA-7519.patch, image-2018-10-18-13-02-22-371.png
>
>
>  
> After digging into a case where an exactly-once streams process was bizarrely 
> unable to process incoming data, we observed the following:
>  * StreamThreads stalling while creating a producer, eventually resulting in 
> no consumption by that streams process. Looking into those threads, we found 
> they were stuck in a loop, sending InitProducerIdRequests and always 
> receiving back the retriable error CONCURRENT_TRANSACTIONS and trying again. 
> These requests always had the same transactional id.
>  * After changing the streams process to not use exactly-once, it was able to 
> process messages with no problems.
>  * Alternatively, changing the applicationId for that streams process, it was 
> able to process with no problems.
>  * Every hour,  every broker would fail the task `transactionalId-expiration` 
> with the following error:
>  ** 
> {code:java}
> {"exception":{"stacktrace":"java.lang.IllegalStateException: Preparing 
> transaction state transition to Dead while it already a pending sta
> te Dead
>     at 
> kafka.coordinator.transaction.TransactionMetadata.prepareTransitionTo(TransactionMetadata.scala:262)
>     at kafka.coordinator
> .transaction.TransactionMetadata.prepareDead(TransactionMetadata.scala:237)
>     at kafka.coordinator.transaction.TransactionStateManager$$a
> nonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$anonfun$2$$anonfun$apply$9$$anonfun$3.apply(TransactionStateManager.scal
> a:151)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$ano
> nfun$2$$anonfun$apply$9$$anonfun$3.apply(TransactionStateManager.scala:151)
>     at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
>     at
>  
> kafka.coordinator.transaction.TransactionMetadata.inLock(TransactionMetadata.scala:172)
>     at kafka.coordinator.transaction.TransactionSt
> ateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$anonfun$2$$anonfun$apply$9.apply(TransactionStateManager.sc
> ala:150)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$a
> nonfun$2$$anonfun$apply$9.apply(TransactionStateManager.scala:149)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(Traversable
> Like.scala:234)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at scala.collection.immutable.Li
> st.foreach(List.scala:392)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>     at scala.collection.immutable.Li
> st.map(List.scala:296)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$app
> ly$mcV$sp$1$$anonfun$2.apply(TransactionStateManager.scala:149)
>     at kafka.coordinator.transaction.TransactionStateManager$$anonfun$enabl
> eTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1$$anonfun$2.apply(TransactionStateManager.scala:142)
>     at scala.collection.Traversabl
> eLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>     at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.
> scala:241)
>     at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
>     at scala.collection.mutable.HashMap$$anon
> fun$foreach$1.apply(HashMap.scala:130)
>     at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
>     at scala.collec
> tion.mutable.HashMap.foreachEntry(HashMap.scala:40)
>     at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
>     at scala.collecti
> on.TraversableLike$class.flatMap(TraversableLike.scala:241)
>     at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
>     a
> t 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enableTransactionalIdExpiration$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Tr
> ansactionStateManager.scala:142)
>     at 
> kafka.coordinator.transaction.TransactionStateManager$$anonfun$enab

[jira] [Assigned] (KAFKA-7316) Use of filter method in KTable.scala may result in StackOverflowError

2018-10-31 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7316:


Assignee: Joan Goyeau

> Use of filter method in KTable.scala may result in StackOverflowError
> -
>
> Key: KAFKA-7316
> URL: https://issues.apache.org/jira/browse/KAFKA-7316
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Joan Goyeau
>Priority: Major
>  Labels: scala
> Fix For: 2.0.1, 2.1.0
>
> Attachments: 7316.v4.txt
>
>
> In this thread:
> http://search-hadoop.com/m/Kafka/uyzND1dNbYKXzC4F1?subj=Issue+in+Kafka+2+0+0+
> Druhin reported seeing StackOverflowError when using filter method from 
> KTable.scala
> This can be reproduced with the following change:
> {code}
> diff --git 
> a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
>  b/streams/streams-scala/src/test/scala
> index 3d1bab5..e0a06f2 100644
> --- 
> a/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
> +++ 
> b/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/StreamToTableJoinScalaIntegrationTestImplicitSerdes.scala
> @@ -58,6 +58,7 @@ class StreamToTableJoinScalaIntegrationTestImplicitSerdes 
> extends StreamToTableJ
>  val userClicksStream: KStream[String, Long] = 
> builder.stream(userClicksTopic)
>  val userRegionsTable: KTable[String, String] = 
> builder.table(userRegionsTopic)
> +userRegionsTable.filter { case (_, count) => true }
>  // Compute the total per region by summing the individual click counts 
> per region.
>  val clicksPerRegion: KTable[String, Long] =
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7301) KTable to KTable join invocation does not resolve in Scala DSL

2018-10-31 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7301:


Assignee: Joan Goyeau

> KTable to KTable join invocation does not resolve in Scala DSL
> --
>
> Key: KAFKA-7301
> URL: https://issues.apache.org/jira/browse/KAFKA-7301
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Michal
>Assignee: Joan Goyeau
>Priority: Major
>  Labels: scala
> Fix For: 2.0.1, 2.1.0
>
>
> I found a peculiar problem while doing KTable to KTable join using Scala DSL. 
> The following code:
>  
> {code:java}
> val t1: KTable[String, Int] = ...
> val t2: KTable[String, Int] = ...
> val result = t1.join(t2)((x: Int, y: Int) => x + y) 
> {code}
>  
> does not compile with "ambiguous reference to overloaded function". 
> A quick look at the code shows the join functions are defined as follows:
>  
> {code:java}
> def join[VO, VR](other: KTable[K, VO])(
>  joiner: (V, VO) => VR,
>  materialized: Materialized[K, VR, ByteArrayKeyValueStore]
> )
> def join[VO, VR](other: KTable[K, VO])(joiner: (V, VO) => VR)
> {code}
>  
> the reason it does not compile is the fact that the first parameter list is 
> identical. For some peculiar reason the KTable class actually compiles...
> The same problem exists for KTable to KTable leftJoin. Other joins 
> (stream-stream, stream-table) do not seem to be affected as there are no 
> overloaded versions of the functions.
> This can be reproduced in smaller scale by some simple scala code:
>  
> {code:java}
> object F {
>  //def x(a: Int): Int = 5
>  //def x(a: Int): Int = 6 //obviously does not compile
>  def f(x: Int)(y: Int): Int = x
>  def f(x: Int)(y: Int, z: Int): Int = x
> }
> val r = F.f(5)(4) //Cannot resolve
> val r2 = F.f(5)(4, 6) //cannot resolve
> val partial = F.f(5) _ //cannot resolve
> /* you get following error:
> Error: ambiguous reference to overloaded definition,
> both method f in object F of type (x: Int)(y: Int, z: Int)Int
> and method f in object F of type (x: Int)(y: Int)Int
> match argument types (Int)
> */{code}
>  
> The solution: get rid of the multiple parameter lists. I fail to see what 
> practical purpose they serve anyways. I am happy to supply appropriate PR if 
> there is agreement.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7353) Connect logs 'this' for anonymous inner classes

2018-10-31 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7353:


Assignee: Kevin Lafferty

> Connect logs 'this' for anonymous inner classes
> ---
>
> Key: KAFKA-7353
> URL: https://issues.apache.org/jira/browse/KAFKA-7353
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.2, 1.1.1, 2.0.0
>Reporter: Kevin Lafferty
>Assignee: Kevin Lafferty
>Priority: Minor
> Fix For: 1.0.3, 1.1.2, 2.0.1, 2.1.0
>
>
> Some classes in the Kafka Connect runtime create anonymous inner classes that 
> log 'this', resulting in log messages that can't be correlated with any other 
> messages. These should scope 'this' to the outer class to have consistent log 
> messages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-5891) Cast transformation fails if record schema contains timestamp field

2018-10-31 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5891:


Assignee: Maciej Bryński

> Cast transformation fails if record schema contains timestamp field
> ---
>
> Key: KAFKA-5891
> URL: https://issues.apache.org/jira/browse/KAFKA-5891
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Artem Plotnikov
>Assignee: Maciej Bryński
>Priority: Major
> Fix For: 1.0.3, 1.1.2, 2.0.1
>
>
> I have the following simple type cast transformation:
> {code}
> name=postgresql-source-simple
> connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
> tasks.max=1
> connection.url=jdbc:postgresql://localhost:5432/testdb?user=postgres&password=mysecretpassword
> query=SELECT 1::INT as a, '2017-09-14 10:23:54'::TIMESTAMP as b
> transforms=Cast
> transforms.Cast.type=org.apache.kafka.connect.transforms.Cast$Value
> transforms.Cast.spec=a:boolean
> mode=bulk
> topic.prefix=clients
> {code}
> Which fails with the following exception in runtime:
> {code}
> [2017-09-14 16:51:01,885] ERROR Task postgresql-source-simple-0 threw an 
> uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask:148)
> org.apache.kafka.connect.errors.DataException: Invalid Java object for schema 
> type INT64: class java.sql.Timestamp for field: "null"
>   at 
> org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:239)
>   at 
> org.apache.kafka.connect.data.ConnectSchema.validateValue(ConnectSchema.java:209)
>   at org.apache.kafka.connect.data.Struct.put(Struct.java:214)
>   at 
> org.apache.kafka.connect.transforms.Cast.applyWithSchema(Cast.java:152)
>   at org.apache.kafka.connect.transforms.Cast.apply(Cast.java:108)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:190)
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:168)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> If I remove the  transforms.* part of the connector it will work correctly. 
> Actually, it doesn't really matter which types I use in the transformation 
> for field 'a', just the existence of a timestamp field brings the exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671607#comment-16671607
 ] 

Manikumar commented on KAFKA-7579:
--

This issue is related to KAFKA-7561.

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Blocker
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test with security_protocol=SSL fails to pass
> {code:java}
> SESSION REPORT (ALL TESTS) ducktape version: 0.7.1 session_id: 
> 2018-10-31--002 run time: 2 minutes 12.452 seconds tests run: 2 passed: 1 
> failed: 1 ignored: 0 test_id: 
> kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL
>  status: FAIL run time: 1 minute 2.149 seconds Node ducker@ducker05: did not 
> stop within the specified timeout of 15 seconds Traceback (most recent call 
> last): File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", 
> line 132, in run data = self.run_test() File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", 
> line 185, in run_test return self.test_context.function(self.test) File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py", line 324, in 
> wrapper return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs) 
> File "/opt/kafka-dev/tests/kafkatest/tests/core/security_test.py", line 114, 
> in test_client_ssl_endpoint_validation_failure self.consumer.stop() File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py",
>  line 80, in stop super(BackgroundThreadService, self).stop() File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/services/service.py", line 
> 278, in stop self.stop_node(node) File 
> "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 254, in 
> stop_node (str(node.account), str(self.stop_timeout_sec)) AssertionError: 
> Node ducker@ducker05: did not stop within the specified timeout of 15 seconds 
> test_id: 
> kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=SSL.security_protocol=PLAINTEXT
>  status: PASS run time: 1 minute 10.144 seconds ducker-ak test failed
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671733#comment-16671733
 ] 

Manikumar edited comment on KAFKA-7579 at 11/1/18 3:15 PM:
---

Tests are passing locally after increasing the console consumer timeout. It looks 
like the issue started after [https://github.com/apache/kafka/pull/5735].
I am going to lower the priority to unblock the 2.0.1 release. Let me know if 
there are any concerns.


was (Author: omkreddy):
tests are passing locally after increasing console consume timeout. looks like 
the issue started after [https://github.com/apache/kafka/pull/5735]
I am going lower the priority to unblock 2.0.1 release. Let me know if any 
concerns.

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Blocker
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test with security_protocol=SSL fails to pass
> {code:java}
> SESSION REPORT (ALL TESTS)
> ducktape version: 0.7.1
> session_id:       2018-10-31--002
> run time:         2 minutes 12.452 seconds
> tests run:        2   passed: 1   failed: 1   ignored: 0
>
> test_id:  kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=PLAINTEXT.security_protocol=SSL
> status:   FAIL
> run time: 1 minute 2.149 seconds
> Node ducker@ducker05: did not stop within the specified timeout of 15 seconds
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 132, in run
>     data = self.run_test()
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 185, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py", line 324, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/security_test.py", line 114, in test_client_ssl_endpoint_validation_failure
>     self.consumer.stop()
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py", line 80, in stop
>     super(BackgroundThreadService, self).stop()
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/services/service.py", line 278, in stop
>     self.stop_node(node)
>   File "/opt/kafka-dev/tests/kafkatest/services/console_consumer.py", line 254, in stop_node
>     (str(node.account), str(self.stop_timeout_sec))
> AssertionError: Node ducker@ducker05: did not stop within the specified timeout of 15 seconds
>
> test_id:  kafkatest.tests.core.security_test.SecurityTest.test_client_ssl_endpoint_validation_failure.interbroker_security_protocol=SSL.security_protocol=PLAINTEXT
> status:   PASS
> run time: 1 minute 10.144 seconds
>
> ducker-ak test failed
> {code}
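The assertion in the traceback comes from waiting on a fixed `stop_timeout_sec` (15 seconds in the log above). The sketch below is an illustrative, self-contained model of that stop/timeout pattern, not ducktape's actual API; only the name `stop_timeout_sec` is taken from the traceback.

```python
import threading
import time

class BackgroundService:
    """Minimal stand-in for a ducktape-style background service."""

    def __init__(self, shutdown_delay_sec, stop_timeout_sec=15):
        self.shutdown_delay_sec = shutdown_delay_sec
        self.stop_timeout_sec = stop_timeout_sec
        self._stopped = threading.Event()
        self._worker = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._worker.start()

    def _run(self):
        # Simulate a consumer that takes a while to shut down cleanly.
        time.sleep(self.shutdown_delay_sec)
        self._stopped.set()

    def stop(self):
        # Mirrors the stop_node pattern: wait, then assert on timeout.
        if not self._stopped.wait(timeout=self.stop_timeout_sec):
            raise AssertionError(
                "did not stop within the specified timeout of %s seconds"
                % self.stop_timeout_sec)

# A service that needs 0.2s to stop fails with a 0.05s timeout...
slow = BackgroundService(shutdown_delay_sec=0.2, stop_timeout_sec=0.05)
slow.start()
try:
    slow.stop()
    timed_out = False
except AssertionError:
    timed_out = True

# ...but passes once the timeout is raised, which is the fix described above.
patched = BackgroundService(shutdown_delay_sec=0.2, stop_timeout_sec=2)
patched.start()
patched.stop()
```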



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671733#comment-16671733
 ] 

Manikumar commented on KAFKA-7579:
--

Tests are passing locally after increasing the console consumer timeout. It looks like 
the issue started after [https://github.com/apache/kafka/pull/5735].
I am going to lower the priority to unblock the 2.0.1 release. Let me know if there 
are any concerns.

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Blocker
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test with security_protocol=SSL fails to pass



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7579:
-
Fix Version/s: 2.0.2

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Blocker
> Fix For: 2.0.2
>
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test with security_protocol=SSL fails to pass



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7579:
-
Priority: Major  (was: Blocker)

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Major
> Fix For: 2.1.0, 2.0.2
>
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test with security_protocol=SSL fails to pass



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7579) System Test Failure - security_test.SecurityTest.test_client_ssl_endpoint_validation_failure

2018-11-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7579.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

This is fixed via KAFKA-7561.

> System Test Failure - 
> security_test.SecurityTest.test_client_ssl_endpoint_validation_failure
> 
>
> Key: KAFKA-7579
> URL: https://issues.apache.org/jira/browse/KAFKA-7579
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Priority: Blocker
> Fix For: 2.1.0, 2.0.2
>
>
> The security_test.SecurityTest.test_client_ssl_endpoint_validation_failure 
> test with security_protocol=SSL fails to pass



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7561) Console Consumer - system test fails

2018-11-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7561:
-
Fix Version/s: (was: 2.1.1)
   2.1.0

> Console Consumer - system test fails
> 
>
> Key: KAFKA-7561
> URL: https://issues.apache.org/jira/browse/KAFKA-7561
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Stanislav Kozlovski
>Priority: Major
> Fix For: 0.10.2.3, 0.11.0.4, 1.0.3, 1.1.2, 2.1.0, 2.2.0, 2.0.2
>
>
> The test under 
> `kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_lifecycle`
>  fails when I run it locally. 7 versions of the test failed for me and they 
> all had a similar error message:
> {code:java}
> AssertionError: Node ducker@ducker11: did not stop within the specified 
> timeout of 15 seconds
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7559) ConnectStandaloneFileTest system tests do not pass

2018-11-08 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7559:
-
Fix Version/s: (was: 2.0.1)
   2.0.2

> ConnectStandaloneFileTest system tests do not pass
> --
>
> Key: KAFKA-7559
> URL: https://issues.apache.org/jira/browse/KAFKA-7559
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Stanislav Kozlovski
>Assignee: Randall Hauch
>Priority: Major
> Fix For: 2.1.0, 2.0.2
>
>
> Both tests `test_skip_and_log_to_dlq` and `test_file_source_and_sink` under 
> `kafkatest.tests.connect.connect_test.ConnectStandaloneFileTest` fail with 
> error messages similar to:
> "TimeoutError: Kafka Connect failed to start on node: ducker@ducker04 in 
> condition mode: LISTEN"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7313) StopReplicaRequest should attempt to remove future replica for the partition only if future replica exists

2018-11-08 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7313:
-
Fix Version/s: (was: 2.0.1)
   2.0.2

> StopReplicaRequest should attempt to remove future replica for the partition 
> only if future replica exists
> --
>
> Key: KAFKA-7313
> URL: https://issues.apache.org/jira/browse/KAFKA-7313
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Dong Lin
>Priority: Major
> Fix For: 2.1.0, 2.0.2
>
>
> This patch fixes two issues:
> 1) Currently, if a broker receives a StopReplicaRequest with delete=true twice for 
> the same offline replica, the first StopReplicaRequest will show a 
> KafkaStorageException and the second StopReplicaRequest will show a 
> ReplicaNotAvailableException. This is because the first StopReplicaRequest 
> removes the mapping (tp -> ReplicaManager.OfflinePartition) from 
> ReplicaManager.allPartitions before returning the KafkaStorageException, so the 
> second StopReplicaRequest will not find this partition as offline.
> This result appears to be inconsistent. And since the replica is already 
> offline and the broker will not be able to delete files for this replica, the 
> StopReplicaRequest should fail without making any change, and the broker should 
> still remember that this replica is offline.
> 2) Currently, if a broker receives a StopReplicaRequest with delete=true, the 
> broker will attempt to remove the future replica for the partition, which will 
> cause a KafkaStorageException in the StopReplicaResponse if this replica does 
> not have a future replica. It is problematic to always return a 
> KafkaStorageException in the response if the future replica does not exist.
>  
>  
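A toy model of the intended behavior described above (the names and data structures are illustrative, not actual broker code): repeated StopReplicaRequests for an offline replica return the same error without forgetting the offline state, and a future replica is removed only when one exists.

```python
def stop_replica(partitions, offline, tp, delete):
    """Sketch of the fixed StopReplicaRequest handling.

    partitions maps tp -> {"future": bool}; offline is a set of offline tps.
    An offline replica fails consistently without being forgotten, and a
    future replica is only removed when one actually exists (so no spurious
    KafkaStorageException for partitions without a future replica).
    """
    if tp in offline:
        # Fail, but keep remembering that the replica is offline.
        return "KafkaStorageException"
    state = partitions.get(tp)
    if state is None:
        return "ReplicaNotAvailableException"
    if delete and state["future"]:
        state["future"] = False  # remove only an existing future replica
    return "NONE"

offline = {"t0-0"}
parts = {"t0-1": {"future": False}, "t0-2": {"future": True}}

first = stop_replica(parts, offline, "t0-0", delete=True)
second = stop_replica(parts, offline, "t0-0", delete=True)  # same error again
no_future = stop_replica(parts, offline, "t0-1", delete=True)
with_future = stop_replica(parts, offline, "t0-2", delete=True)
```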



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7126) Reduce number of rebalance for large consumer groups after a topic is created

2018-11-09 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7126:
-
Fix Version/s: (was: 2.0.0)
   2.0.1

> Reduce number of rebalance for large consumer groups after a topic is created
> -
>
> Key: KAFKA-7126
> URL: https://issues.apache.org/jira/browse/KAFKA-7126
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Dong Lin
>Assignee: Jon Lee
>Priority: Major
> Fix For: 2.0.1, 2.1.0
>
> Attachments: 1.diff
>
>
> For a group of 200 MirrorMaker consumers with pattern-based topic 
> subscription, a single topic creation caused 50 rebalances for each of these 
> consumers over a 5-minute period. This causes MM to significantly lag 
> behind during this 5-minute period, and the clusters may be considerably 
> out-of-sync during this period.
> Ideally we would like to trigger only 1 rebalance in the MM group after a 
> topic is created. And conceptually it should be doable.
>  
> Here is the explanation of this repeated consumer rebalance based on the 
> consumer rebalance logic in the latest Kafka code:
> 1) A topic of 10 partitions is created in the cluster and it matches the 
> subscription pattern of the MM consumers.
> 2) The leader of the MM consumer group detects the new topic after a metadata 
> refresh. It triggers a rebalance.
> 3) At time T0, the first rebalance finishes. 10 consumers are each assigned 1 
> partition of this topic. The other 190 consumers are not assigned any 
> partition of this topic. At this moment, the newly created topic will appear 
> in `ConsumerCoordinator.subscriptions.subscription` for those consumers who 
> are assigned a partition of this topic or who refreshed metadata before 
> time T0.
> 4) In the common case, half of the consumers have refreshed metadata before 
> the leader of the consumer group refreshed metadata. Thus around 100 + 10 = 
> 110 consumers have the newly created topic in 
> `ConsumerCoordinator.subscriptions.subscription`. The other 90 consumers do 
> not have this topic in `ConsumerCoordinator.subscriptions.subscription`.
> 5) For those 90 consumers, if any consumer refreshes metadata, it will add 
> this topic to `ConsumerCoordinator.subscriptions.subscription`, which causes 
> `ConsumerCoordinator.rejoinNeededOrPending()` to return true and triggers 
> another rebalance. If a few consumers refresh metadata almost at the same 
> time, they will jointly trigger one rebalance. Otherwise, they each trigger a 
> separate rebalance.
> 6) The default metadata.max.age.ms is 5 minutes. Thus in the worst case, 
> which is probably also the average case if the number of consumers in the 
> group is large, the last consumer will refresh its metadata 5 minutes after 
> T0, and the rebalance will be repeated during this 5-minute interval.
>  
>  
>  
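The refresh-driven rebalance storm described in steps 5-6 can be illustrated with a toy simulation. The coalescing window and consumer counts below are assumptions for illustration, not Kafka internals.

```python
import random

METADATA_MAX_AGE_MS = 5 * 60 * 1000  # default metadata.max.age.ms
REBALANCE_COALESCE_MS = 3000         # assumed: refreshes this close join one rebalance

def count_rebalances(num_stale_consumers, seed=42):
    """Count rebalances triggered as stale consumers refresh metadata.

    Each stale consumer refreshes at a uniformly random point within the
    metadata max age; refreshes landing within REBALANCE_COALESCE_MS of an
    in-flight rebalance join it instead of starting a new one.
    """
    rng = random.Random(seed)
    refresh_times = sorted(rng.uniform(0, METADATA_MAX_AGE_MS)
                           for _ in range(num_stale_consumers))
    rebalances = 0
    window_end = -1.0
    for t in refresh_times:
        if t > window_end:  # outside the current window: a new rebalance starts
            rebalances += 1
            window_end = t + REBALANCE_COALESCE_MS
    return rebalances

# The ~90 stale consumers from step 4 trigger far more than one rebalance.
n = count_rebalances(90)
```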



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7286) Loading offsets and group metadata hangs with large group metadata records

2018-11-09 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7286:
-
Fix Version/s: 2.0.1

> Loading offsets and group metadata hangs with large group metadata records
> --
>
> Key: KAFKA-7286
> URL: https://issues.apache.org/jira/browse/KAFKA-7286
> Project: Kafka
>  Issue Type: Bug
>Reporter: Flavien Raynaud
>Assignee: Flavien Raynaud
>Priority: Minor
> Fix For: 2.0.1, 2.1.0
>
>
> When a (Kafka-based) consumer group contains many members, group metadata 
> records (in the {{__consumer_offsets}} topic) may happen to be quite large.
> Increasing the {{message.max.bytes}} makes storing these records possible.
>  Loading them on broker restart is done via 
> [doLoadGroupsAndOffsets|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L504].
>  However, this method relies on the {{offsets.load.buffer.size}} 
> configuration to create a 
> [buffer|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L513]
>  that will contain the records being loaded.
> If a group metadata record is too large for this buffer, the loading method 
> will get stuck trying to load records (in a tight loop) into a buffer that 
> cannot accommodate a single record.
> 
> For example, if the {{__consumer_offsets-9}} partition contains a record 
> smaller than {{message.max.bytes}} but larger than 
> {{offsets.load.buffer.size}}, logs would indicate the following:
> {noformat}
> ...
> [2018-08-13 21:00:21,073] INFO [GroupMetadataManager brokerId=0] Scheduling 
> loading of offsets and group metadata from __consumer_offsets-9 
> (kafka.coordinator.group.GroupMetadataManager)
> ...
> {noformat}
> But logs will never contain the expected {{Finished loading offsets and group 
> metadata from ...}} line.
> Consumers whose group are assigned to this partition will see {{Marking the 
> coordinator dead}} and will never be able to stabilize and make progress.
> 
> From what I could gather in the code, it seems that:
>  - 
> [fetchDataInfo|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L522]
>  returns at least one record (even if larger than 
> {{offsets.load.buffer.size}}, thanks to {{minOneMessage = true}})
>  - No fully-readable record is stored in the buffer with 
> [fileRecords.readInto(buffer, 
> 0)|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L528]
>  (too large to fit in the buffer)
>  - 
> [memRecords.batches|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L532]
>  returns an empty iterator
>  - 
> [currOffset|https://github.com/apache/kafka/blob/418a91b5d4e3a0579b91d286f61c2b63c5b4a9b6/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L590]
>  never advances, hence loading the partition hangs forever.
> 
> It would be great to let the partition load even if a record is larger than 
> the configured {{offsets.load.buffer.size}} limit. The fact that 
> {{minOneMessage = true}} when reading records seems to indicate it might be a 
> good idea for the buffer to accommodate at least one record.
> If you think the limit should stay a hard limit, then at least add a log 
> line indicating that {{offsets.load.buffer.size}} is not large enough and should 
> be increased. Otherwise, one can only guess and dig through the code to 
> figure out what is happening :)
> I will try to open a PR with the first idea (allowing large records to be 
> read when needed) soon, but any feedback from anyone who also had the same 
> issue in the past would be appreciated :)
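A minimal sketch of why the load loop hangs, modelling (not reproducing) the logic cited above: the reader always returns at least one record, but a record larger than the buffer never yields a complete parsed batch, so the loading offset never advances.

```python
def load_partition(records, buffer_size, max_iterations=1000):
    """Toy model of the GroupMetadataManager loading loop.

    `records` is a list of record sizes; the loop reads from `curr_offset`,
    copies whatever complete records fit into a fixed-size buffer, parses
    them, and advances past them. A record larger than the buffer never
    parses, so curr_offset stops advancing -- the real code would loop
    forever; here an iteration cap stands in for "hangs".
    """
    curr_offset = 0
    for _ in range(max_iterations):
        if curr_offset >= len(records):
            return curr_offset, True  # fully loaded
        # Like readInto(buffer, 0): take complete records while they fit.
        used = 0
        parsed = 0
        for size in records[curr_offset:]:
            if used + size > buffer_size:
                break
            used += size
            parsed += 1
        if parsed == 0:
            # No complete batch in the buffer: nothing advances curr_offset.
            continue
        curr_offset += parsed
    return curr_offset, False  # hung (iteration cap hit)

offset, ok = load_partition([100, 200, 150], buffer_size=512)  # loads fine
stuck_offset, stuck_ok = load_partition([100, 900, 150], buffer_size=512)
```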



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7080) WindowStoreBuilder incorrectly initializes CachingWindowStore

2018-11-09 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681546#comment-16681546
 ] 

Manikumar commented on KAFKA-7080:
--

[~vvcephei] [~guozhang] Can we create a separate JIRA for the part that was 
fixed in the 2.0 branch? This will help us track the changes for the 2.0.1 release.

> WindowStoreBuilder incorrectly initializes CachingWindowStore
> -
>
> Key: KAFKA-7080
> URL: https://issues.apache.org/jira/browse/KAFKA-7080
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0, 1.0.1, 1.1.0, 2.0.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.1.0
>
>
> When caching is enabled on the WindowStoreBuilder, it creates a 
> CachingWindowStore. However, it incorrectly passes storeSupplier.segments() 
> (the number of segments) to the segmentInterval argument.
>  
> The impact is low, since any valid number of segments is also a valid segment 
> size, but it likely results in much smaller segments than intended. For 
> example, the segments may be sized 3ms instead of 60,000ms.
>  
> Ideally the WindowBytesStoreSupplier interface would allow suppliers to 
> advertise their segment size instead of segment count. I plan to create a KIP 
> to propose this.
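The impact can be illustrated with a toy model of segment bucketing; the bucketing function below is a simplification for illustration, not the actual CachingWindowStore code.

```python
def segment_id(timestamp_ms, segment_interval_ms):
    # Records are grouped into segments by integer-dividing the timestamp.
    return timestamp_ms // segment_interval_ms

NUM_SEGMENTS = 3              # value of storeSupplier.segments()
SEGMENT_INTERVAL_MS = 60_000  # the segment size that was actually intended

timestamps = range(0, 60_000, 10)  # one minute of records, every 10 ms

# Bug: passing the segment *count* where the segment *interval* is expected
# makes every 3 ms of stream time land in a new segment.
buggy_segments = {segment_id(t, NUM_SEGMENTS) for t in timestamps}
intended_segments = {segment_id(t, SEGMENT_INTERVAL_MS) for t in timestamps}
```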



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6584) Session expiration concurrent with ZooKeeper leadership failover may lead to broker registration failure

2018-11-09 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681872#comment-16681872
 ] 

Manikumar commented on KAFKA-6584:
--

[~pachilo] Maybe we can leave this open, so that we can track 
ZOOKEEPER-2985. All the other JIRAs we can resolve as duplicates of KAFKA-7165.

> Session expiration concurrent with ZooKeeper leadership failover may lead to 
> broker registration failure
> 
>
> Key: KAFKA-6584
> URL: https://issues.apache.org/jira/browse/KAFKA-6584
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 1.0.0
>Reporter: Chris Thunes
>Priority: Major
>
> It seems that an edge case exists which can lead to sessions "un-expiring" 
> during a ZooKeeper leadership failover. Additional details can be found in 
> ZOOKEEPER-2985.
> This leads to a NODEEXISTS error when attempting to re-create the ephemeral 
> brokers/ids/\{id} node in ZkUtils.registerBrokerInZk. We experienced this 
> issue on each node within a 3-node Kafka cluster running 1.0.0. All three 
> nodes continued running (producers and consumers appeared unaffected), but 
> none of the nodes were considered online and partition leadership could 
> not be re-assigned.
> I took a quick look at trunk and I believe the issue is still present, but 
> has moved into KafkaZkClient.checkedEphemeralCreate which will [raise an 
> error|https://github.com/apache/kafka/blob/90e0bbe/core/src/main/scala/kafka/zk/KafkaZkClient.scala#L1512]
>  when it finds that the broker/ids/\{id} node exists, but belongs to the old 
> (believed expired) session.
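A toy model of the check described above (names and structures are illustrative, not the actual KafkaZkClient code): on a conflicting ephemeral node, registration fails when the node is owned by the old, believed-expired session.

```python
class NodeExistsError(Exception):
    pass

def checked_ephemeral_create(zk_nodes, path, current_session_id):
    """Sketch of a checkedEphemeralCreate-style ownership check.

    zk_nodes maps path -> owner session id of the ephemeral node.
    Creation succeeds if the path is free, is a no-op if this session
    already owns it, and raises when the node belongs to another
    (believed-expired) session -- the registration failure described above.
    """
    owner = zk_nodes.get(path)
    if owner is None:
        zk_nodes[path] = current_session_id  # normal broker registration
        return "created"
    if owner == current_session_id:
        return "already-owned"               # retry within the same session
    # Node belongs to the old session that "un-expired".
    raise NodeExistsError(
        "%s exists and is owned by session %#x" % (path, owner))

nodes = {}
r1 = checked_ephemeral_create(nodes, "/brokers/ids/1", 0x10)

# After the failover, the old session's node is still there when the broker
# tries to re-register under a new session:
try:
    checked_ephemeral_create(nodes, "/brokers/ids/1", 0x20)
    conflict = False
except NodeExistsError:
    conflict = True
```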



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6584) Session expiration concurrent with ZooKeeper leadership failover may lead to broker registration failure

2018-11-09 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16681900#comment-16681900
 ] 

Manikumar commented on KAFKA-6584:
--

[~pachilo] Yes

> Session expiration concurrent with ZooKeeper leadership failover may lead to 
> broker registration failure
> 
>
> Key: KAFKA-6584
> URL: https://issues.apache.org/jira/browse/KAFKA-6584
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 1.0.0
>Reporter: Chris Thunes
>Priority: Major
>
> It seems that an edge case exists which can lead to sessions "un-expiring" 
> during a ZooKeeper leadership failover. Additional details can be found in 
> ZOOKEEPER-2985.
> This leads to a NODEEXISTS error when attempting to re-create the ephemeral 
> brokers/ids/\{id} node in ZkUtils.registerBrokerInZk. We experienced this 
> issue on each node within a 3-node Kafka cluster running 1.0.0. All three 
> nodes continued running (producers and consumers appeared unaffected), but 
> none of the nodes were considered online and partition leadership could 
> not be re-assigned.
> I took a quick look at trunk and I believe the issue is still present, but 
> has moved into KafkaZkClient.checkedEphemeralCreate which will [raise an 
> error|https://github.com/apache/kafka/blob/90e0bbe/core/src/main/scala/kafka/zk/KafkaZkClient.scala#L1512]
>  when it finds that the broker/ids/\{id} node exists, but belongs to the old 
> (believed expired) session.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7554) zookeeper.session.timeout.ms Value

2018-11-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7554.
--
Resolution: Not A Problem

The default ZooKeeper session timeout is 6 seconds. If the server fails to 
heartbeat to ZooKeeper within this period, it is considered dead. If you set 
this too low, the server may be falsely considered dead; if you set it too 
high, it may take too long to recognize a truly dead server.
This parameter can be tuned as required.
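For reference, a minimal sketch of the relevant broker settings in server.properties (6000 ms is the default for both; tune to your environment):

```properties
# ZooKeeper timeouts for the Kafka broker (values shown are the defaults).
# Raise zookeeper.session.timeout.ms if brokers are falsely marked dead
# (e.g. during long GC pauses); lower values detect truly dead brokers faster.
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
```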

> zookeeper.session.timeout.ms Value
> --
>
> Key: KAFKA-7554
> URL: https://issues.apache.org/jira/browse/KAFKA-7554
> Project: Kafka
>  Issue Type: Improvement
>  Components: zkclient
>Reporter: BELUGA BEHR
>Priority: Major
>
> {quote}
> zookeeper.session.timeout.ms = 6000 (6s)
> zookeeper.connection.timeout.ms = 6000 (6s)
> {quote}
> - https://kafka.apache.org/documentation/#configuration
> Kind of an odd value?  Was it supposed to be 60000 (60s)?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7323) add replication factor doesn't work

2018-11-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7323.
--
Resolution: Not A Problem

Closing as per the above comment; heap memory is probably not sufficient. Please 
reopen if you think the issue still exists.

> add replication factor doesn't work
> ---
>
> Key: KAFKA-7323
> URL: https://issues.apache.org/jira/browse/KAFKA-7323
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.11.0.2
>Reporter: superheizai
>Priority: Major
>
> I have a topic with 256 partitions.
> First, I generated the topic partitions with their broker IDs with 
> kafka-reassign-partitions --generate.
> Second, I added a broker ID for each partition.
> Then I ran kafka-reassign-partitions; some partitions increased their 
> replication factor, but the others stopped.
> Reading controller.log, some partitions' replication factors had 
> increased. I then removed the partitions whose replication factor had 
> already been increased and ran kafka-reassign-partitions again, but nothing 
> was logged in controller.log, all partitions were "still in progress", and 
> no network traffic change was visible in Zabbix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7361) Kafka wont reconnect after NoRouteToHostException

2018-11-19 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692035#comment-16692035
 ] 

Manikumar commented on KAFKA-7361:
--

This should have been fixed by https://issues.apache.org/jira/browse/KAFKA-4041

> Kafka wont reconnect after NoRouteToHostException
> -
>
> Key: KAFKA-7361
> URL: https://issues.apache.org/jira/browse/KAFKA-7361
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 1.1.0
> Environment: kubernetes cluster
>Reporter: C Schabert
>Priority: Major
>
> After ZooKeeper died and came back up, Kafka could not reconnect to it.
> In this setup, ZooKeeper is behind a DNS name and came back up with a 
> different IP.
>  
> Here is the Kafka log output:
>  
> {code:java}
> [2018-08-30 14:50:23,846] INFO Opening socket connection to server 
> zookeeper-0.zookeeper./10.42.0.123:2181. Will not attempt to authenticate 
> using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
> 8/30/2018 4:50:26 PM [2018-08-30 14:50:26,916] WARN Session 0x1658b2f0f4e0002 
> for server null, unexpected error, closing socket connection and attempting 
> reconnect (org.apache.zookeeper.ClientCnxn)
> 8/30/2018 4:50:26 PM java.net.NoRouteToHostException: No route to host
> 8/30/2018 4:50:26 PM at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
> Method)
> 8/30/2018 4:50:26 PM at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> 8/30/2018 4:50:26 PM at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
> 8/30/2018 4:50:26 PM at 
> org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
> {code}
>  





[jira] [Resolved] (KAFKA-764) Race Condition in Broker Registration after ZooKeeper disconnect

2018-11-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-764.
-
Resolution: Duplicate

This old issue is similar to KAFKA-7165. Closing this as a duplicate of KAFKA-7165.

> Race Condition in Broker Registration after ZooKeeper disconnect
> 
>
> Key: KAFKA-764
> URL: https://issues.apache.org/jira/browse/KAFKA-764
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.7.1
>Reporter: Bob Cotton
>Priority: Major
> Attachments: BPPF_2900-Broker_Logs.tbz2
>
>
> When running our ZooKeepers in VMware, occasionally all the keepers 
> simultaneously pause long enough for the Kafka clients to time out and then 
> the keepers simultaneously un-pause.
> When this happens, the zk clients disconnect from ZooKeeper. When ZooKeeper 
> comes back, ZkUtils.createEphemeralPathExpectConflict finds the broker's own 
> node id still present, so it does not re-register the broker id node, and the 
> function call succeeds. ZooKeeper then figures out that the broker disconnected 
> and deletes the ephemeral node *after* allowing the consumer to read the data 
> in the /brokers/ids/x node.  The broker then goes on to register all the 
> topics, etc.  When consumers connect, they see topic nodes associated with 
> the broker but they can't find the broker node to get connection information 
> for the broker, sending them into a rebalance loop until they reach 
> rebalance.retries.max and fail.
> This might also be a ZooKeeper issue, but a desirable behavior for the 
> disconnect case might be: if the broker node is found to already exist, 
> explicitly delete and recreate it.





[jira] [Resolved] (KAFKA-7659) dummy test

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7659.
--
Resolution: Invalid

> dummy test
> --
>
> Key: KAFKA-7659
> URL: https://issues.apache.org/jira/browse/KAFKA-7659
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: kaushik srinivas
>Priority: Major
>






[jira] [Resolved] (KAFKA-7616) MockConsumer can return ConsumerRecords objects with a non-empty map but no records

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7616.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5901
[https://github.com/apache/kafka/pull/5901]

> MockConsumer can return ConsumerRecords objects with a non-empty map but no 
> records
> ---
>
> Key: KAFKA-7616
> URL: https://issues.apache.org/jira/browse/KAFKA-7616
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.0.1
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Trivial
> Fix For: 2.2.0
>
>
> The ConsumerRecords returned from MockConsumer.poll can return false for 
> isEmpty while not containing any records. This behavior is because 
> MockConsumer.poll eagerly adds entries to the returned 
> Map<TopicPartition, List<ConsumerRecord<K, V>>> based on which partitions 
> have been added. If no records are returned for a partition, e.g. because the 
> position was too far ahead, the entry for that partition will still be there.
>  
> The MockConsumer should lazily add entries to the map as they are needed, 
> since this is more in line with how the real consumer behaves.
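
The eager-vs-lazy distinction can be illustrated with a plain map (a simplified
sketch using string stand-ins for partitions and records, not the actual
MockConsumer code):

```java
import java.util.*;

public class LazyRecordsSketch {
    public static void main(String[] args) {
        // Records fetched per partition; "topic-1" returned nothing this poll.
        Map<String, List<String>> fetched = new HashMap<>();
        fetched.put("topic-0", Arrays.asList("r1", "r2"));
        fetched.put("topic-1", Collections.emptyList());

        // Lazy population: only partitions that actually produced records
        // get an entry, so an empty poll yields a genuinely empty map.
        Map<String, List<String>> result = new HashMap<>();
        for (Map.Entry<String, List<String>> e : fetched.entrySet()) {
            if (!e.getValue().isEmpty()) {
                result.put(e.getKey(), e.getValue());
            }
        }
        System.out.println(result.containsKey("topic-1")); // false: no phantom entry
        System.out.println(result.isEmpty());              // false: topic-0 has records
    }
}
```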





[jira] [Resolved] (KAFKA-6971) Passing in help flag to kafka-console-producer should print arg options

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6971.
--
Resolution: Duplicate

> Passing in help flag to kafka-console-producer should print arg options
> ---
>
> Key: KAFKA-6971
> URL: https://issues.apache.org/jira/browse/KAFKA-6971
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, producer 
>Affects Versions: 1.1.0
>Reporter: Yeva Byzek
>Priority: Minor
>  Labels: newbie
>
> {{kafka-console-consumer --help}} prints "help is not a recognized option" as 
> well as the list of options.
> {{kafka-console-producer --help}} prints "help is not a recognized option" 
> but no list of options.
> Possible solutions:
> (a) Enhance {{kafka-console-producer}} to also print out all options when a 
> user passes in an unrecognized option
> (b) Enhance both {{kafka-console-producer}} and {{kafka-console-consumer}} to 
> legitimately accept the {{--help}} flag





[jira] [Assigned] (KAFKA-7390) Enable the find-sec-bugs spotBugs plugin for Gradle

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7390:


Assignee: Manikumar

> Enable the find-sec-bugs spotBugs plugin for Gradle
> ---
>
> Key: KAFKA-7390
> URL: https://issues.apache.org/jira/browse/KAFKA-7390
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Manikumar
>Priority: Major
>  Labels: newbie
>
> Once we switch to spotBugs (KAFKA-5887), we should try the find-sec-bugs 
> plugin that helps find security issues:
>  
> https://spotbugs.readthedocs.io/en/latest/gradle.html#introduce-spotbugs-plugin
> http://find-sec-bugs.github.io/





[jira] [Assigned] (KAFKA-7390) Enable the find-sec-bugs spotBugs plugin for Gradle

2018-11-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7390:


Assignee: (was: Manikumar)

> Enable the find-sec-bugs spotBugs plugin for Gradle
> ---
>
> Key: KAFKA-7390
> URL: https://issues.apache.org/jira/browse/KAFKA-7390
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Major
>  Labels: newbie
>
> Once we switch to spotBugs (KAFKA-5887), we should try the find-sec-bugs 
> plugin that helps find security issues:
>  
> https://spotbugs.readthedocs.io/en/latest/gradle.html#introduce-spotbugs-plugin
> http://find-sec-bugs.github.io/





[jira] [Commented] (KAFKA-7390) Enable the find-sec-bugs spotBugs plugin for Gradle

2018-11-20 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694262#comment-16694262
 ] 

Manikumar commented on KAFKA-7390:
--

[~timmy2702] I have given you Jira contributor permissions. Please feel free to 
assign yourself.

> Enable the find-sec-bugs spotBugs plugin for Gradle
> ---
>
> Key: KAFKA-7390
> URL: https://issues.apache.org/jira/browse/KAFKA-7390
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Major
>  Labels: newbie
>
> Once we switch to spotBugs (KAFKA-5887), we should try the find-sec-bugs 
> plugin that helps find security issues:
>  
> https://spotbugs.readthedocs.io/en/latest/gradle.html#introduce-spotbugs-plugin
> http://find-sec-bugs.github.io/





[jira] [Updated] (KAFKA-7205) KafkaConsumer / KafkaProducer should allow Reconfiguration of SSL Configuration

2018-11-22 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7205:
-
Labels: needs-kip  (was: )

Yes, you can raise a PR against trunk.  Since this is a public API change, we 
need to go through the KIP process. 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

> KafkaConsumer / KafkaProducer should allow Reconfiguration of SSL 
> Configuration
> ---
>
> Key: KAFKA-7205
> URL: https://issues.apache.org/jira/browse/KAFKA-7205
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Affects Versions: 1.1.1
>Reporter: Magnus Jungsbluth
>Priority: Major
>  Labels: needs-kip
>
> Since Kafka 1.1 it has been possible to reconfigure keystores on the broker 
> side of things. 
> When being serious about short-lived keys, the client side should also 
> support reconfiguring consumers and producers.
> What I would propose is to implement {{Reconfigurable}}  on {{KafkaConsumer}} 
> and {{KafkaProducer}}. The implementation has to pass the calls to 
> NetworkClient which passes them on to Selector until they finally reach 
> {{SslFactory}} which already implements {{Reconfigurable}}.
> This seems pretty straightforward unless I am missing something important.  





[jira] [Resolved] (KAFKA-6149) LogCleanerManager should include topic partition name when warning of invalid cleaner offset

2018-11-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6149.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.1
   1.0.0

This was fixed in the 1.0 and 0.11.0.1+ releases.

> LogCleanerManager should include topic partition name when warning of invalid 
> cleaner offset 
> -
>
> Key: KAFKA-6149
> URL: https://issues.apache.org/jira/browse/KAFKA-6149
> Project: Kafka
>  Issue Type: Improvement
>  Components: log, logging
>Reporter: Ryan P
>Priority: Major
> Fix For: 1.0.0, 0.11.0.1
>
>
> The following message would be a lot more helpful if the topic partition name 
> were included.
> if (!isCompactAndDelete(log))
>   warn(s"Resetting first dirty offset to log start offset 
> $logStartOffset since the checkpointed offset $offset is invalid.")





[jira] [Resolved] (KAFKA-7259) Remove deprecated ZKUtils usage from ZkSecurityMigrator

2018-11-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7259.
--
Resolution: Fixed

Issue resolved by pull request 5480
[https://github.com/apache/kafka/pull/5480]

> Remove deprecated ZKUtils usage from ZkSecurityMigrator
> ---
>
> Key: KAFKA-7259
> URL: https://issues.apache.org/jira/browse/KAFKA-7259
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
> Fix For: 2.2.0
>
>
> ZkSecurityMigrator code currently uses ZKUtils.  We can replace ZKUtils usage 
> with KafkaZkClient. Also remove usage of ZKUtils from various tests.





[jira] [Resolved] (KAFKA-7617) Document security primitives

2018-11-30 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7617.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5906
[https://github.com/apache/kafka/pull/5906]

> Document security primitives
> 
>
> Key: KAFKA-7617
> URL: https://issues.apache.org/jira/browse/KAFKA-7617
> Project: Kafka
>  Issue Type: Task
>Reporter: Viktor Somogyi
>Assignee: Viktor Somogyi
>Priority: Minor
> Fix For: 2.2.0
>
>
> Although the documentation gives help on configuring authentication and 
> authorization, it doesn't list the security primitives (operations and 
> resources) that can be used, which makes it hard for users to easily set up 
> thorough authorization rules.
> This task would cover adding these to the security page of the Kafka 
> documentation.





[jira] [Commented] (KAFKA-7390) Enable the find-sec-bugs spotBugs plugin for Gradle

2018-12-02 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16706140#comment-16706140
 ] 

Manikumar commented on KAFKA-7390:
--

cc [~ijuma]

> Enable the find-sec-bugs spotBugs plugin for Gradle
> ---
>
> Key: KAFKA-7390
> URL: https://issues.apache.org/jira/browse/KAFKA-7390
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Tim Nguyen
>Priority: Major
>  Labels: newbie
>
> Once we switch to spotBugs (KAFKA-5887), we should try the find-sec-bugs 
> plugin that helps find security issues:
>  
> https://spotbugs.readthedocs.io/en/latest/gradle.html#introduce-spotbugs-plugin
> http://find-sec-bugs.github.io/





[jira] [Created] (KAFKA-7694) Support ZooKeeper based master/secret key management for delegation tokens

2018-12-02 Thread Manikumar (JIRA)
Manikumar created KAFKA-7694:


 Summary:  Support ZooKeeper based master/secret key management for 
delegation tokens
 Key: KAFKA-7694
 URL: https://issues.apache.org/jira/browse/KAFKA-7694
 Project: Kafka
  Issue Type: Sub-task
Reporter: Manikumar


The master/secret key is used to generate and verify delegation tokens. Currently, 
the master secret is stored as plain text in the server.properties config file, 
and the same key must be configured across all the brokers. A re-deployment is 
required whenever the secret needs to be rotated.

This JIRA is to explore and implement ZooKeeper-based master/secret key 
management to automate secret key generation and expiration.
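
For reference, the file-based setup being replaced looks roughly like this (the
property name reflects the delegation-token broker config of that era; the
value is an illustrative placeholder):

```properties
# server.properties -- must be identical on every broker;
# rotating the secret means editing every file and restarting.
delegation.token.master.key=s3cr3t-placeholder-value
```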





[jira] [Commented] (KAFKA-7696) kafka-delegation-tokens.sh using a config file that contains security.protocol=SASL_PLAINTEXT throws OutOfMemoryError if it tries to connect to an SSL-enabled secured b

2018-12-03 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708257#comment-16708257
 ] 

Manikumar commented on KAFKA-7696:
--

[~asasvari]  This is related to KAFKA-4493. There is a WIP PR 
https://github.com/apache/kafka/pull/5940.

> kafka-delegation-tokens.sh using a config file that contains 
> security.protocol=SASL_PLAINTEXT throws OutOfMemoryError if it tries to 
> connect to an SSL-enabled secured broker
> -
>
> Key: KAFKA-7696
> URL: https://issues.apache.org/jira/browse/KAFKA-7696
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Attila Sasvari
>Assignee: Viktor Somogyi
>Priority: Major
>
> When the command-config file of kafka-delegation-tokens contains 
> security.protocol=SASL_PLAINTEXT instead of SASL_SSL (e.g. due to a user 
> error), the process throws a java.lang.OutOfMemoryError upon a connection 
> attempt to a secured (i.e. Kerberized, SSL-enabled) Kafka broker.
> {code}
> [2018-12-03 11:27:13,221] ERROR Uncaught exception in thread 
> 'kafka-admin-client-thread | adminclient-1': 
> (org.apache.kafka.common.utils.KafkaThread)
> java.lang.OutOfMemoryError: Java heap space
>   at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>   at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
>   at 
> org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:407)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveKafkaResponse(SaslClientAuthenticator.java:497)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:207)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:173)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:533)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:468)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1125)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
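
The likely mechanism behind this OutOfMemoryError (an illustration of the
well-known plaintext-client-to-SSL-port failure mode, not code from the tool
itself): the client interprets the first four bytes of the broker's TLS
response as a Kafka message-size prefix and then tries to allocate a buffer
that large.

```java
import java.nio.ByteBuffer;

public class TlsSizeMisparse {
    public static void main(String[] args) {
        // A TLS handshake record starts with 0x16 (handshake type) followed
        // by the protocol version; read as a big-endian int "size" it is huge.
        byte[] tlsRecordStart = {0x16, 0x03, 0x03, 0x00};
        int bogusSize = ByteBuffer.wrap(tlsRecordStart).getInt();
        System.out.println(bogusSize); // hundreds of MB, enough to exhaust a small heap
    }
}
```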





[jira] [Commented] (KAFKA-7746) sasl.jaas.config dynamic broker configuration does not accept "=" in value

2018-12-17 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723750#comment-16723750
 ] 

Manikumar commented on KAFKA-7746:
--

Can you try the format below?

{code}
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers \
  --entity-name 0 --alter --add-config \
  listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="admin-secret"
{code}

> sasl.jaas.config dynamic broker configuration does not accept "=" in value
> --
>
> Key: KAFKA-7746
> URL: https://issues.apache.org/jira/browse/KAFKA-7746
> Project: Kafka
>  Issue Type: Bug
>  Components: config, security
>Reporter: Tom Scott
>Priority: Minor
>
> KIP-226 gives an example of setting sasl.jaas.config using dynamic 
> broker configuration:
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration#KIP-226-DynamicBrokerConfiguration-NewBrokerConfigurationOption]
>  
> However, as most SASL module configurations contain the "=" symbol, this ends 
> up with the error:
>  
> {code:java}
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format “key=val”.{code}
>  
> I have tried various escape sequences but have not so far been successful.
>  
>  
>  





[jira] [Resolved] (KAFKA-7742) DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed using removeToken(String tokenId) API.

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7742.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6037
[https://github.com/apache/kafka/pull/6037]

> DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed 
> using removeToken(String tokenId) API.
> 
>
> Key: KAFKA-7742
> URL: https://issues.apache.org/jira/browse/KAFKA-7742
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Satish Duggana
>Assignee: Satish Duggana
>Priority: Major
> Fix For: 2.2.0
>
>
> DelegationTokenCache#hmacIdCache entry is not cleared when a token is removed 
> using `removeToken(String tokenId)`[1] API.
> 1) 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/token/delegation/internals/DelegationTokenCache.java#L84





[jira] [Resolved] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7762.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6052
[https://github.com/apache/kafka/pull/6052]

> KafkaConsumer uses old API in the javadocs
> --
>
> Key: KAFKA-7762
> URL: https://issues.apache.org/jira/browse/KAFKA-7762
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias Weßendorf
>Priority: Major
> Fix For: 2.2.0
>
>
> The `poll(ms)` API is deprecated; hence, the javadoc should not use it.





[jira] [Assigned] (KAFKA-7762) KafkaConsumer uses old API in the javadocs

2018-12-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-7762:


Assignee: Matthias Weßendorf

> KafkaConsumer uses old API in the javadocs
> --
>
> Key: KAFKA-7762
> URL: https://issues.apache.org/jira/browse/KAFKA-7762
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias Weßendorf
>Assignee: Matthias Weßendorf
>Priority: Major
> Fix For: 2.2.0
>
>
> The `poll(ms)` API is deprecated; hence, the javadoc should not use it.





[jira] [Resolved] (KAFKA-7054) Kafka describe command should throw topic doesn't exist exception.

2018-12-21 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7054.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5211
[https://github.com/apache/kafka/pull/5211]

> Kafka describe command should throw topic doesn't exist exception.
> --
>
> Key: KAFKA-7054
> URL: https://issues.apache.org/jira/browse/KAFKA-7054
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Manohar Vanam
>Priority: Minor
> Fix For: 2.2.0
>
>
> If the topic doesn't exist, then the Kafka describe command should throw a 
> topic-does-not-exist exception,
> like the alter and delete commands do:
> {code:java}
> local:bin mvanam$ ./kafka-topics.sh --zookeeper localhost:2181 --delete 
> --topic manu
> Error while executing topic command : Topic manu does not exist on ZK path 
> localhost:2181
> [2018-06-13 15:08:13,111] ERROR java.lang.IllegalArgumentException: Topic 
> manu does not exist on ZK path localhost:2181
>  at kafka.admin.TopicCommand$.getTopics(TopicCommand.scala:91)
>  at kafka.admin.TopicCommand$.deleteTopic(TopicCommand.scala:184)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:71)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$)
> local:bin mvanam$ ./kafka-topics.sh --zookeeper localhost:2181 --alter 
> --topic manu
> Error while executing topic command : Topic manu does not exist on ZK path 
> localhost:2181
> [2018-06-13 15:08:43,663] ERROR java.lang.IllegalArgumentException: Topic 
> manu does not exist on ZK path localhost:2181
>  at kafka.admin.TopicCommand$.getTopics(TopicCommand.scala:91)
>  at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:125)
>  at kafka.admin.TopicCommand$.main(TopicCommand.scala:65)
>  at kafka.admin.TopicCommand.main(TopicCommand.scala)
>  (kafka.admin.TopicCommand$){code}




