[jira] [Work stopped] (KAFKA-4938) Creating a connector with missing name parameter throws a NullPointerException

2017-04-03 Thread Balint Molnar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4938 stopped by Balint Molnar.

> Creating a connector with missing name parameter throws a NullPointerException
> --
>
> Key: KAFKA-4938
> URL: https://issues.apache.org/jira/browse/KAFKA-4938
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Sönke Liebau
>Assignee: Balint Molnar
>Priority: Minor
>  Labels: newbie
>
> Creating a connector via the REST API runs into a NullPointerException when 
> the name parameter is omitted from the request.
> {code}
> POST 127.0.0.1:8083/connectors
> {
>   "config": {
> "connector.class": "org.apache.kafka.connect.tools.MockSourceConnector",
> "tasks.max": "1",
> "topics": "test-topic"
>   }
> }
> {code}
> This results in a 500 return code, due to a NullPointerException thrown when 
> the name is checked for slashes 
> [here|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L91].
>  I believe this was introduced with the fix for 
> [KAFKA-4372|https://issues.apache.org/jira/browse/KAFKA-4372]
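A minimal standalone sketch of the missing guard. The class and helper names below are illustrative assumptions, not the actual ConnectorsResource code, which would map the failure to an HTTP 400 rather than a plain exception:

```java
// Illustrative null/slash guard for a connector name, in plain Java so it
// runs standalone; Connect itself would translate this into a 400 response.
public class ConnectorNameCheck {

    /** Rejects missing or slash-containing names instead of letting
     *  name.contains("/") throw a NullPointerException. */
    public static String validateName(String name) {
        if (name == null || name.trim().isEmpty())
            throw new IllegalArgumentException("connector name is required");
        if (name.contains("/"))
            throw new IllegalArgumentException("connector name must not contain '/'");
        return name;
    }

    public static void main(String[] args) {
        System.out.println(validateName("my-connector"));
        try {
            validateName(null);  // the case from this report: name omitted
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```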



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4997) Issue with running kafka-acls.sh when using SASL between Kafka and ZK

2017-04-03 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953144#comment-15953144
 ] 

Rajini Sivaram commented on KAFKA-4997:
---

The user principal (by default for Kerberos) for each node in the example is 
'kafka' (just the primary name). So this principal should be made a super-user or 
given all the access required for brokers. You can customize this as 
described in https://kafka.apache.org/documentation/#security_authz.

Are you using default {{sasl.kerberos.principal.to.local.rules}}? Have you 
configured {{super.users}} in server.properties? For the defaults in the 
example, you need to set super.users=kafka.
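For the defaults in that setup, the broker-side configuration would look roughly as follows (a sketch; note that super.users entries take a User: prefix — check the security docs for your exact Kafka version):

```properties
# server.properties (illustrative fragment)
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# With the default principal-to-local rules, kafka/host@REALM maps to 'kafka':
super.users=User:kafka
```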

> Issue with running kafka-acls.sh when using SASL between Kafka and ZK
> -
>
> Key: KAFKA-4997
> URL: https://issues.apache.org/jira/browse/KAFKA-4997
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
> Environment: Red Hat Enterprise Linux
>Reporter: Shrikant
>Priority: Critical
>
> Hi All, 
> We are using SASL for Authentication between Kafka and ZK. Followed - 
> https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
> We have 3 Kafka nodes; on each node we have 
> principal="kafka/server_no.xxx@xxx.com". So
> On first node in kafka_server_jaas.conf, principal is set to 
> principal="kafka/server1.xxx@xxx.com"
> On second node in kafka_server_jaas.conf, principal is set to 
> principal="kafka/server2.xxx@xxx.com"
> On third node in kafka_server_jaas.conf, principal is set to 
> principal="kafka/server3.xxx@xxx.com"
> When I run the kafka-acls.sh command from node 1, it's successful. It all 
> works, but after that I cannot run kafka-acls.sh from the other 2 nodes. On 
> the other 2 nodes it fails with the error:
> [2017-03-31 18:44:38,629] ERROR Conditional update of path 
> /kafka-acl/Topic/shri-topic with data 
> {"version":1,"acls":[{"principal":"User:CN=xxx,OU=,O=,L=x,ST=xx,C=xx","permissionType":"Allow","operation":"Describe","host":"*"},{"principal":"User:CN=xx,OU=,O=,L=x,ST=xx,C=xx","permissionType":"Allow","operation":"Write","host":"*"}]}
>  and expected version 0 failed due to 
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
> NoAuth for /kafka-acl/Topic/shri-topic (kafka.utils.ZkUtils)
> When I look via zookeeper-shell.sh at the kafka-acl node, that node only has 
> permissions for the principal of the first node. I believe this is the reason 
> kafka-acls.sh fails on the other 2 nodes, even though those nodes have 
> valid keytabs.  
> getAcl /kafka-acl
> 'world,'anyone
> : r
> 'sasl,'kafka/server1.xxx@xxx.com
> : cdrwa
> Is this a bug, or am I doing something wrong here?   
> Thanks,
> Shri





[jira] [Commented] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-04-03 Thread Ait haj Slimane (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953162#comment-15953162
 ] 

Ait haj Slimane commented on KAFKA-4984:


Thank you, Lakshmi, for your quick reply; you will find the logs attached.

> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: logKafka.txt, logZookeeper.txt, Screenshot from 
> 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };
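A frequent cause of GSSAPI failures with a configuration like the one above is the client JVM never loading the JAAS file at all; it must be passed in explicitly. A sketch — the JAAS file location is an assumption, and the command needs a running Kafka installation:

```shell
# Point the client JVM at the JAAS config (path is an assumption) and turn on
# Kerberos debug output to see where the handshake fails.
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf -Dsun.security.krb5.debug=true"
bin/kafka-console-producer.sh --broker-list kafka.example.com:9092 \
  --topic test --producer.config producer.properties
```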





[jira] [Updated] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-04-03 Thread Ait haj Slimane (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ait haj Slimane updated KAFKA-4984:
---
Attachment: logKafka.txt
logZookeeper.txt

> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: logKafka.txt, logZookeeper.txt, Screenshot from 
> 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };





[DISCUSS] KIP-138: Change punctuate semantics

2017-04-03 Thread Michal Borowiecki
Hi all,

I have created a draft for KIP-138: Change punctuate semantics.

Appreciating there can be different views on system-time vs event-time
semantics for punctuation depending on use-case and the importance of
backwards compatibility of any such change, I've left it quite open and
hope to fill in more info as the discussion progresses.

Thanks,
Michal
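To make the system-time vs event-time trade-off concrete, here is a toy model (plain Java, not the Streams API) of event-time ("stream time") punctuation, where punctuations fire only as record timestamps advance — so they stall when no data arrives and fire in a burst after a gap, whereas wall-clock punctuation would keep firing regardless of data:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy model of stream-time punctuation, driven by record timestamps. */
public class PunctuateSemantics {
    /** Returns the stream-time instants at which a punctuation with the given
     *  interval would fire while processing records with these timestamps. */
    public static List<Long> streamTimePunctuations(long[] recordTimestamps, long interval) {
        List<Long> fired = new ArrayList<>();
        long next = interval;
        long streamTime = 0;
        for (long ts : recordTimestamps) {
            streamTime = Math.max(streamTime, ts);  // stream time advances only with data
            while (streamTime >= next) {            // no records -> no punctuation
                fired.add(next);
                next += interval;
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        // A gap in the data (200 -> 1000) delivers the skipped punctuations in
        // a burst once stream time jumps; with system-time semantics they would
        // have fired during the gap, data or not.
        System.out.println(streamTimePunctuations(new long[]{100, 200, 1000}, 300));
    }
}
```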


[jira] [Commented] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-04-03 Thread lakshminarayanasyamala (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953247#comment-15953247
 ] 

lakshminarayanasyamala commented on KAFKA-4984:
---

Have you tried restarting the broker?

> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: logKafka.txt, logZookeeper.txt, Screenshot from 
> 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };





[GitHub] kafka pull request #2792: MINOR: Fix potential deadlock in consumer close te...

2017-04-03 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/2792

MINOR: Fix potential deadlock in consumer close test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka MINOR-closetest-deadlock

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2792.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2792


commit 37eb3d16f31edb2c47cb063cfc216e205075de0f
Author: Rajini Sivaram 
Date:   2017-04-03T08:57:14Z

MINOR: Fix potential deadlock in consumer close test




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-04-03 Thread lakshminarayanasyamala (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953253#comment-15953253
 ] 

lakshminarayanasyamala commented on KAFKA-4984:
---

And how about a normal producer-consumer without SASL?

Thanks
LAkshmi

> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: logKafka.txt, logZookeeper.txt, Screenshot from 
> 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };





[GitHub] kafka pull request #2793: KAFKA-4916: test streams with brokers failing (for...

2017-04-03 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/2793

KAFKA-4916: test streams with brokers failing (for 0.10.2)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-4916-0.10.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2793.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2793


commit 8bc7aa970836ba5f2775c14f04f2905ebdae9633
Author: Eno Thereska 
Date:   2017-04-03T10:52:25Z

0.10.2 cherrypick






[jira] [Commented] (KAFKA-4916) Add streams tests with brokers failing

2017-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953272#comment-15953272
 ] 

ASF GitHub Bot commented on KAFKA-4916:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/2793

KAFKA-4916: test streams with brokers failing (for 0.10.2)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-4916-0.10.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2793.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2793


commit 8bc7aa970836ba5f2775c14f04f2905ebdae9633
Author: Eno Thereska 
Date:   2017-04-03T10:52:25Z

0.10.2 cherrypick




> Add streams tests with brokers failing
> --
>
> Key: KAFKA-4916
> URL: https://issues.apache.org/jira/browse/KAFKA-4916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> We need to add either integration or system tests with streams and have Kafka 
> brokers fail and come back up. A combination of transient and permanent 
> broker failures.





[jira] [Commented] (KAFKA-4984) Unable to produce or consume when enabling authentication SASL/Kerberos

2017-04-03 Thread Ait haj Slimane (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953285#comment-15953285
 ] 

Ait haj Slimane commented on KAFKA-4984:


Yes, I already restarted the broker but the error still appears.
Without SASL it works fine.

> Unable to produce or consume when enabling authentication SASL/Kerberos
> ---
>
> Key: KAFKA-4984
> URL: https://issues.apache.org/jira/browse/KAFKA-4984
> Project: Kafka
>  Issue Type: Bug
> Environment: Ubuntu 16.04LTS running in VirtualBox
>Reporter: Ait haj Slimane
>Priority: Critical
> Attachments: logKafka.txt, logZookeeper.txt, Screenshot from 
> 2017-03-30 15-36-30.png
>
>
> I have a problem while trying to produce or consume on a Kerberos-enabled 
> cluster.
> I launched a single broker and a console producer,
> using SASL authentication between producer and broker.
> When I run the producer, I get the result attached below.
> Any advice on what could cause the problem?
> Thanks!
> 
> configuration used:
> server.properties:
> listeners=SASL_PLAINTEXT://kafka.example.com:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=GSSAPI
> sasl.enabled.mechanism=GSSAPI
> sasl.kerberos.service.name=kafka
> producer.properties
> bootstrap.servers=kafka.example.com:9092
> sasl.kerberos.service.name=kafka
> security.protocol=SASL_PLAINTEXT
> kafka_client_jaas.conf
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/etc/kafka/keytabs/kafkaclient.keytab"
> principal="kafkaclient/kafka.example@example.com";
> };





Re: [VOTE] KIP-81: Bound Fetch memory usage in the consumer

2017-04-03 Thread Rajini Sivaram
+1 (non-binding)

On Fri, Mar 31, 2017 at 5:36 PM, radai  wrote:

> possible priorities:
>
> 1. keepalives/coordination
> 2. inter-broker-traffic
> 3. produce traffic
> 4. consume traffic
>
> (dont want to start a debate, just to illustrate there may be >2 of them so
> int is better than bool)
>
> On Fri, Mar 31, 2017 at 9:10 AM, Ismael Juma  wrote:
>
> > +1 from me too, thanks for the KIP.
> >
> > Ismael
> >
> > On Fri, Mar 31, 2017 at 5:06 PM, Jun Rao  wrote:
> >
> > > Hi, Mickael,
> > >
> > > Thanks for the KIP. +1 from me too.
> > >
> > > Jun
> > >
> > > On Thu, Mar 30, 2017 at 4:40 AM, Mickael Maison <
> > mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > Thanks for the suggestion.
> > > >
> > > > Currently, I can't think of a scenario when we would need multiple
> > > > priority "levels". If in the future it makes sense to have some, I
> > > > think we could just make the change without a new KIP as these APIs
> > > > are not public.
> > > > So I'd be more inclined to keep the boolean for now.
> > > >
> > > > On Wed, Mar 29, 2017 at 6:13 PM, Edoardo Comar 
> > > wrote:
> > > > > Hi Mickael,
> > > > > as discussed we could change the priority parameter to be an int
> > rather
> > > > > than a boolean.
> > > > >
> > > > > That's a bit more extensible
> > > > > --
> > > > > Edoardo Comar
> > > > > IBM MessageHub
> > > > > eco...@uk.ibm.com
> > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > >
> > > > > IBM United Kingdom Limited Registered in England and Wales with
> > number
> > > > > 741598 Registered office: PO Box 41, North Harbour, Portsmouth,
> > Hants.
> > > > PO6
> > > > > 3AU
> > > > >
> > > > >
> > > > >
> > > > > From:   Guozhang Wang 
> > > > > To: "dev@kafka.apache.org" 
> > > > > Date:   28/03/2017 19:02
> > > > > Subject:Re: [VOTE] KIP-81: Bound Fetch memory usage in the
> > > > > consumer
> > > > >
> > > > >
> > > > >
> > > > > 1) Makes sense.
> > > > > 2) Makes sense. Thanks!
> > > > >
> > > > > On Tue, Mar 28, 2017 at 10:11 AM, Mickael Maison
> > > > > 
> > > > > wrote:
> > > > >
> > > > >> Hi Guozhang,
> > > > >>
> > > > >> Thanks for the feedback.
> > > > >>
> > > > >> 1) By MemoryPool, I mean the implementation added in KIP-72. That
> > will
> > > > >> most likely be SimpleMemoryPool, but the PR for KIP-72 has not
> been
> > > > >> merged yet.
> > > > >> I've updated the KIP to make it more obvious.
> > > > >>
> > > > >> 2) I was thinking to pass in the priority when creating the
> > > > >> Coordinator Node (in
> > > > >> https://github.com/apache/kafka/blob/trunk/clients/src/
> > > > >> main/java/org/apache/kafka/clients/consumer/internals/
> > > > >> AbstractCoordinator.java#L582)
> > > > >> Then when calling Selector.connect() (in
> > > > >> https://github.com/apache/kafka/blob/trunk/clients/src/
> > > > >> main/java/org/apache/kafka/clients/NetworkClient.java#L643)
> > > > >> retrieve it and pass it in the Selector so it uses it when
> building
> > > > >> the Channel.
> > > > >> The idea was to avoid having to deduce the connection is for the
> > > > >> Coordinator from the ID but instead have it explicitly set by
> > > > >> AbstractCoordinator (and pass it all the way down to the Channel)
> > > > >>
> > > > >> On Tue, Mar 28, 2017 at 1:33 AM, Guozhang Wang <
> wangg...@gmail.com>
> > > > > wrote:
> > > > >> > Mickael,
> > > > >> >
> > > > >> > Sorry for the late review of the KIP. I'm +1 on the proposed
> > change
> > > as
> > > > >> > well. Just a few minor comments on the wiki itself:
> > > > >> >
> > > > >> > 1. By the "MemoryPool" are you referring to a new class impl or
> to
> > > > >> reusing "
> > > > >> > org.apache.kafka.clients.producer.internals.BufferPool"? I
> assume
> > > it
> > > > > was
> > > > >> > the latter case, and if yes, could you update the wiki page to
> > make
> > > it
> > > > >> > clear?
> > > > >> >
> > > > >> > 2. I think it is sufficient to add the priority to KafkaChannel
> > > class,
> > > > >> but
> > > > >> > not needed in Node (but one may need to add this parameter to
> > > > > Selector#
> > > > >> > connect). Could you point me to which usage of Node needs to
> > access
> > > > > the
> > > > >> > priority?
> > > > >> >
> > > > >> >
> > > > >> > Guozhang
> > > > >> >
> > > > >> >
> > > > >> > On Fri, Mar 10, 2017 at 9:52 AM, Mickael Maison <
> > > > >> mickael.mai...@gmail.com>
> > > > >> > wrote:
> > > > >> >
> > > > >> >> Thanks Jason for the feedback! Yes it makes sense to always use
> > the
> > > > >> >> MemoryPool is we can. I've updated the KIP with the suggestion
> > > > >> >>
> > > > >> >> On Fri, Mar 10, 2017 at 1:18 AM, Jason Gustafson <
> > > ja...@confluent.io
> > > > >
> > > > >> >> wrote:
> > > > >> >> > Just a minor comment. The KIP suggests that coordinator
> > responses
> > > > > are
> > > > >> >> > always allocated outside of the memory pool, but maybe we can
> > > > > reserve
> > > > >> >> that
> > > > >> >> > capability for only when the pool does not have eno
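To make the bool-vs-int priority question in this thread concrete, here is a toy model — not KIP-72's SimpleMemoryPool API — of a pool where high-priority (coordinator) channels bypass the memory limit. Switching the `boolean` flag to an `int` level would only change the comparison:

```java
import java.util.concurrent.atomic.AtomicLong;

/** Toy memory pool: normal reads fail when the budget is exhausted,
 *  but coordinator (high-priority) reads are always satisfied. */
public class ToyMemoryPool {
    private final long capacity;
    private final AtomicLong used = new AtomicLong();

    public ToyMemoryPool(long capacity) { this.capacity = capacity; }

    /** Returns true if the allocation was granted. */
    public boolean tryAllocate(long bytes, boolean highPriority) {
        while (true) {
            long cur = used.get();
            if (!highPriority && cur + bytes > capacity)
                return false;                    // normal traffic waits for memory
            if (used.compareAndSet(cur, cur + bytes))
                return true;                     // coordinator traffic never blocks
        }
    }

    public void release(long bytes) { used.addAndGet(-bytes); }

    public static void main(String[] args) {
        ToyMemoryPool pool = new ToyMemoryPool(100);
        System.out.println(pool.tryAllocate(80, false));  // granted: within budget
        System.out.println(pool.tryAllocate(40, false));  // denied: over budget
        System.out.println(pool.tryAllocate(40, true));   // granted: priority bypass
    }
}
```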

[jira] [Created] (KAFKA-5002) Streams doesn't seem to consider partitions for processing which are being consumed

2017-04-03 Thread Mustak (JIRA)
Mustak created KAFKA-5002:
-

 Summary: Streams doesn't seem to consider partitions for processing 
which are being consumed
 Key: KAFKA-5002
 URL: https://issues.apache.org/jira/browse/KAFKA-5002
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0
 Environment: Windows 8.1
Reporter: Mustak


Kafka Streams doesn't seem to consider a particular partition for processing if 
that partition is being consumed by some other consumer. For example, I have two 
topics t1 and t2, each with two partitions p1 and p2, and a stream process 
running that consumes data from these topics and produces output to topic t3, 
which has two partitions. If I run this kind of topology it works, but if I start 
a consumer which consumes data from topic t1 and partition p1, then the stream 
logic doesn't consider p1 for processing and the stream doesn't produce any output 
related to that partition. I think the stream logic should consider partitions 
which are being consumed.
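One plausible explanation — an assumption, not confirmed by the report: Kafka assigns each partition to exactly one consumer within a consumer group, and Kafka Streams uses its application.id as the group id, so an external consumer started with the same group id takes partitions away from the Streams app. A toy model of that single-owner assignment:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Toy round-robin group assignment: each partition has exactly one owner. */
public class ToyGroupAssignment {
    public static Map<String, List<String>> assign(List<String> members, List<String> partitions) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String m : members) out.put(m, new ArrayList<>());
        for (int i = 0; i < partitions.size(); i++)
            out.get(members.get(i % members.size())).add(partitions.get(i));
        return out;
    }

    public static void main(String[] args) {
        List<String> partitions = List.of("t1-p1", "t1-p2");
        // Streams app alone in the group: it owns every partition.
        System.out.println(assign(List.of("streams-thread"), partitions));
        // External consumer joins the SAME group: it takes a partition away.
        System.out.println(assign(List.of("streams-thread", "external"), partitions));
    }
}
```

A consumer in a *different* group would not affect the Streams app at all, which is one way to check this hypothesis.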





[GitHub] kafka pull request #2791: MINOR: Fix Deadlock in StreamThread

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2791




Re: [DISCUSS] KIP-86: Configurable SASL callback handlers

2017-04-03 Thread Rajini Sivaram
If there are no other concerns or suggestions on this KIP, I will start the
vote later this week.

Thank you...

Regards,

Rajini

On Thu, Mar 30, 2017 at 9:42 PM, Rajini Sivaram 
wrote:

> I have made a minor change to the callback handler interface to pass in
> the JAAS configuration entries in *configure,* to work with the multiple
> listener configuration introduced in KIP-103. I have also renamed the
> interface to AuthenticateCallbackHandler instead of AuthCallbackHandler to
> avoid confusion with authorization.
>
> I have rebased and updated the PR (https://github.com/apache/
> kafka/pull/2022) as well. Please let me know if there are any other
> comments or suggestions to move this forward.
>
> Thank you...
>
> Regards,
>
> Rajini
>
>
> On Thu, Dec 15, 2016 at 3:11 PM, Rajini Sivaram 
> wrote:
>
>> Ismael,
>>
>> The reason for choosing CallbackHandler interface as the configurable
>> interface is flexibility. As you say, we could instead define a simpler
>> PlainCredentialProvider and ScramCredentialProvider. But that would tie
>> users to Kafka's SaslServer implementation for PLAIN and SCRAM.
>> SaslServer/SaslClient implementations are already pluggable using standard
>> Java security provider mechanism. Callback handlers are the configuration
>> mechanism for SaslServer/SaslClient. By making the handlers configurable,
>> SASL becomes fully configurable for mechanisms supported by Kafka as well
>> as custom mechanisms. From the 'Scenarios' section in the KIP, a simpler
>> PLAIN/SCRAM interface satisfies the first two, but configurable callback
>> handlers enables all five. I agree that most users don't require this
>> level
>> of flexibility, but we have had discussions about custom mechanisms in the
>> past for integration with existing authentication servers. So I think it
>> is
>> a feature worth supporting.
>>
>> On Thu, Dec 15, 2016 at 2:21 PM, Ismael Juma  wrote:
>>
>> > Thanks Rajini, your answers make sense to me. One more general point: we
>> > are following the JAAS callback architecture and exposing that to the
>> user
>> > where the user has to write code like:
>> >
>> > @Override
>> > public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
>> >     String username = null;
>> >     for (Callback callback : callbacks) {
>> >         if (callback instanceof NameCallback)
>> >             username = ((NameCallback) callback).getDefaultName();
>> >         else if (callback instanceof PlainAuthenticateCallback) {
>> >             PlainAuthenticateCallback plainCallback = (PlainAuthenticateCallback) callback;
>> >             boolean authenticated = authenticate(username, plainCallback.password());
>> >             plainCallback.authenticated(authenticated);
>> >         } else
>> >             throw new UnsupportedCallbackException(callback);
>> >     }
>> > }
>> >
>> > protected boolean authenticate(String username, char[] password) throws IOException {
>> >     if (username == null)
>> >         return false;
>> >     else {
>> >         String expectedPassword = JaasUtils.jaasConfig(LoginType.SERVER.contextName(),
>> >             "user_" + username, PlainLoginModule.class.getName());
>> >         return Arrays.equals(password, expectedPassword.toCharArray());
>> >     }
>> > }
>> >
>> > Since we need to create a new callback type for Plain, Scram and so on,
>> is
>> > it really worth it to do it this way? For example, in the code above,
>> the
>> > `authenticate` method could be the only thing the user has to implement
>> and
>> > we could do the necessary work to unwrap the data from the various
>> > callbacks when interacting with the SASL API. More work for us, but a
>> much
>> > more pleasant API for users. What are the drawbacks?
>> >
>> > Ismael
>> >
>> > On Thu, Dec 15, 2016 at 1:06 AM, Rajini Sivaram 
>> > wrote:
>> >
>> > > Ismael,
>> > >
>> > > 1. At the moment AuthCallbackHandler is not a public interface, so I
>> am
>> > > assuming that it can be modified. Yes, agree that we should keep
>> > non-public
>> > > methods separate. Will do that as part of the implementation of this
>> KIP.
>> > >
>> > > 2. Callback handlers do tend to depend on ordering, including those
>> > > included in the JVM and these in Kafka. I have specified the ordering
>> in
>> > > the KIP. Will make sure they get included in documentation too.
>> > >
>> > > 3. Added a note to the KIP. Kafka needs access to the SCRAM
>> credentials
>> > to
>> > > perform SCRAM authentication. For PLAIN, Kafka only needs to know if
>> the
>> > > password is valid for the user. We want to support external
>> > authentication
>> > > servers whose interface is to validate password, not retrieve it.
>> > >
>> > > 4. Added code of ScramCredential to the KIP.
>> > >
>> > >
>> > > On Wed, Dec 14, 2016 at 3:54 PM, Ismael Juma 
>> wrote:
>> > >
>> > > > Thanks Rajini, that helps. A f

[GitHub] kafka pull request #2792: MINOR: Fix potential deadlock in consumer close te...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2792




[jira] [Updated] (KAFKA-1211) Hold the produce request with ack > 1 in purgatory until replicas' HW has larger than the produce offset (KIP-101)

2017-04-03 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1211:
---
Assignee: Ben Stopford
  Status: Patch Available  (was: Open)

> Hold the produce request with ack > 1 in purgatory until replicas' HW has 
> larger than the produce offset (KIP-101)
> --
>
> Key: KAFKA-1211
> URL: https://issues.apache.org/jira/browse/KAFKA-1211
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Ben Stopford
>  Labels: reliability
> Fix For: 0.11.0.0
>
>
> Today during leader failover we will have a window of weakness when the 
> followers truncate their data before fetching from the new leader, i.e., the 
> number of in-sync replicas is just 1. If during this time the leader also 
> fails, then produce requests with ack > 1 that have already been responded to 
> will still be lost. To avoid this scenario we would prefer to hold the produce 
> request in purgatory until the replicas' HW is larger than the produce offset, 
> instead of just their log-end offsets.
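A toy model of the proposed completion rule — complete an acknowledged produce request only once every in-sync replica's high watermark (HW) has passed the produced offset, rather than just its log end offset (LEO). The names here are illustrative, not broker code:

```java
import java.util.Map;

/** Toy check for completing a delayed produce request in purgatory. */
public class ProducePurgatoryCheck {
    /** Old rule: every ISR member's log end offset (LEO) passed the offset. */
    public static boolean completeByLeo(Map<Integer, Long> leoByReplica, long produceOffset) {
        return leoByReplica.values().stream().allMatch(leo -> leo > produceOffset);
    }

    /** Proposed rule: every ISR member's high watermark (HW) passed the offset,
     *  so the write survives a leader failover followed by truncation. */
    public static boolean completeByHw(Map<Integer, Long> hwByReplica, long produceOffset) {
        return hwByReplica.values().stream().allMatch(hw -> hw > produceOffset);
    }

    public static void main(String[] args) {
        long produceOffset = 10;
        Map<Integer, Long> leo = Map.of(1, 11L, 2, 11L);  // both followers fetched it
        Map<Integer, Long> hw  = Map.of(1, 11L, 2, 9L);   // but broker 2's HW lags
        System.out.println(completeByLeo(leo, produceOffset)); // old rule would ack
        System.out.println(completeByHw(hw, produceOffset));   // new rule still waits
    }
}
```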





[jira] [Updated] (KAFKA-4586) Add purgeDataBefore() API in AdminClient

2017-04-03 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4586:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add purgeDataBefore() API in AdminClient
> 
>
> Key: KAFKA-4586
> URL: https://issues.apache.org/jira/browse/KAFKA-4586
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 0.11.0.0
>
>
> Please visit 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+Add+purgeDataBefore%28%29+API+in+AdminClient
>  for motivation etc.





[jira] [Updated] (KAFKA-4208) Add Record Headers

2017-04-03 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4208:
---
Status: Patch Available  (was: Open)

> Add Record Headers
> --
>
> Key: KAFKA-4208
> URL: https://issues.apache.org/jira/browse/KAFKA-4208
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Michael Andre Pearce (IG)
>Priority: Critical
>
> Currently headers are not natively supported, unlike in many other transport 
> and messaging platforms and standards; this issue is to add support for 
> headers to Kafka.
> This JIRA is related to the KIP found here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers





[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-03 Thread Balint Molnar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953384#comment-15953384
 ] 

Balint Molnar commented on KAFKA-4814:
--

[~ijuma] Something odd is happening here, or I am misunderstanding something. 
There is a ZkUtils constructor with an isZkSecurityEnabled parameter 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala#L80
, but we supply its value from two different sources. First, we use the 
zookeeper.set.acl config value, for example in 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaServer.scala#L325
and we also use the JaasUtils.isZkSecurityEnabled method, for example in 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/ConfigCommand.scala#L61

I think these are separate things which we need to handle differently. Or am I 
missing something here?

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> Per the [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and the 
> [broker configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> the {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not set any ACLs on ZooKeeper nodes, even 
> when a JAAS config file is provisioned to the broker.
> The problem is that there is broker-side logic, like that in 
> {{ZookeeperLeaderElector}}, making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when just a JAAS config file is provisioned to the Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in the {{org.apache.kafka.common.security}} package 
> of the {{kafka-clients}} module, while {{zookeeper.set.acl}} is a broker-side-only 
> configuration property.
> To make it possible to enable ZooKeeper authentication on an existing cluster 
> without downtime, it should be possible to have all Kafka brokers in the 
> cluster first authenticate to the ZooKeeper cluster without ACLs being set. 
> Only once all ZooKeeper clients (Kafka brokers and others) are authenticating 
> to the ZooKeeper cluster can ACLs start being set.





[jira] [Commented] (KAFKA-4814) ZookeeperLeaderElector not respecting zookeeper.set.acl

2017-04-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953428#comment-15953428
 ] 

Ismael Juma commented on KAFKA-4814:


For commands, it's enough to check `JaasUtils.isZkSecurityEnabled`. For the 
broker, we should check the config as well (same as KafkaServer).
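A tiny sketch of the rule suggested above, with hypothetical names (not Kafka's actual code): brokers combine the JAAS check with the zookeeper.set.acl config, while command-line tools only have the JAAS check available.

```java
public class ZkAclDecisionSketch {
    // Hypothetical names; sketches the rule from the comment above:
    // a broker sets secure ZooKeeper ACLs only when a JAAS config is
    // present (so it can authenticate) AND zookeeper.set.acl=true.
    static boolean brokerShouldSetAcls(boolean jaasConfigured, boolean zkSetAcl) {
        return jaasConfigured && zkSetAcl;
    }

    // Command-line tools have no broker config, so the JAAS check alone decides.
    static boolean toolShouldSetAcls(boolean jaasConfigured) {
        return jaasConfigured;
    }

    public static void main(String[] args) {
        // Migration scenario: JAAS provisioned but zookeeper.set.acl=false,
        // so the broker authenticates without setting ACLs.
        System.out.println(brokerShouldSetAcls(true, false)); // false
    }
}
```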

> ZookeeperLeaderElector not respecting zookeeper.set.acl
> ---
>
> Key: KAFKA-4814
> URL: https://issues.apache.org/jira/browse/KAFKA-4814
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.1.1
>Reporter: Stevo Slavic
>Assignee: Balint Molnar
>  Labels: newbie
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> Per the [migration 
> guide|https://kafka.apache.org/documentation/#zk_authz_migration] for 
> enabling ZooKeeper security on an existing Apache Kafka cluster, and the 
> [broker configuration 
> documentation|https://kafka.apache.org/documentation/#brokerconfigs] for 
> the {{zookeeper.set.acl}} configuration property, when this property is set to 
> false Kafka brokers should not set any ACLs on ZooKeeper nodes, even 
> when a JAAS config file is provisioned to the broker.
> The problem is that there is broker-side logic, like that in 
> {{ZookeeperLeaderElector}}, making use of {{JaasUtils#isZkSecurityEnabled}}, 
> which does not respect this configuration property, resulting in ACLs being 
> set even when just a JAAS config file is provisioned to the Kafka broker while 
> {{zookeeper.set.acl}} is set to {{false}}.
> Notice that {{JaasUtils}} is in the {{org.apache.kafka.common.security}} package 
> of the {{kafka-clients}} module, while {{zookeeper.set.acl}} is a broker-side-only 
> configuration property.
> To make it possible to enable ZooKeeper authentication on an existing cluster 
> without downtime, it should be possible to have all Kafka brokers in the 
> cluster first authenticate to the ZooKeeper cluster without ACLs being set. 
> Only once all ZooKeeper clients (Kafka brokers and others) are authenticating 
> to the ZooKeeper cluster can ACLs start being set.





[GitHub] kafka pull request #2794: HOTFIX: Set `baseOffset` and `writeLimit` correctl...

2017-04-03 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/2794

HOTFIX: Set `baseOffset` and `writeLimit` correctly in `RecordAccumulator`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka fix-records-builder-construction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2794.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2794


commit d8fb9a5e3558a20228fda04fdc73705f958acf29
Author: Ismael Juma 
Date:   2017-04-03T13:18:00Z

HOTFIX: Set `baseOffset` and `writeLimit` correctly in `RecordAccumulator`




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-04-03 Thread Mathieu Fenniak
+1 (non-binding)

This will be very helpful for me, looking forward to it! :-)

On Thu, Mar 30, 2017 at 4:46 AM, Damian Guy  wrote:

> Hi All,
>
> I'd like to start the voting thread on KIP-134:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 134%3A+Delay+initial+consumer+group+rebalance
>
> Thanks,
> Damian
>


[jira] [Updated] (KAFKA-4971) Why is there no difference between kafka benchmark tests on SSD and HDD?

2017-04-03 Thread Dasol Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dasol Kim updated KAFKA-4971:
-
Description: 
I installed the OS and Kafka on two SSDs and two HDDs to perform a Kafka 
benchmark test based on the disk difference. As expected, the SSD should show 
faster results, but according to my experimental results, there is no big 
difference between SSD and HDD. Why? Other settings have been left at their 
defaults.

*test settings

zookeeper node  : 1, producer node : 2, broker node : 2 (SSD 1, HDD 1)
test scenario : Two producers send messages to the broker and compare the 
throughput per second of Kafka installed on the SSD and Kafka on the HDD

command : ./bin/kafka-producer-perf-test.sh --num-records 100 --record-size 
2000 --topic test --throughput 10 --producer-props 
bootstrap.servers=SN02:9092


 

  was:
I installed OS and kafka in the two SSD and two HDDs  to perform the kafka 
benchmark test based on the disc difference. As expected, the SSD should show 
faster results, but according to my experimental results, there is no big 
difference between SSD and HDD. why? Ohter settings have been set to default.

*test settings

zookeeper node 1, producer node : 2, broker node : 2(SSD 1, HDD 1)
test scenario : Two producers send messages to the broker and compare the 
throughtput per second of kafka installed on SSD and kafka on HDD

command : ./bin/kafka-producer-perf-test.sh --num-records 100 --record-size 
2000 --topic test --throughput 10 --producer-props 
bootstrap.servers=SN02:9092


 


> Why is there no difference between kafka benchmark tests on SSD and HDD? 
> -
>
> Key: KAFKA-4971
> URL: https://issues.apache.org/jira/browse/KAFKA-4971
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.10.0.0
> Environment: Oracle VM VirtualBox
> OS : CentOs 7
> Memory : 1G
> Disk : 8GB
>Reporter: Dasol Kim
>
> I installed the OS and Kafka on two SSDs and two HDDs to perform a Kafka 
> benchmark test based on the disk difference. As expected, the SSD should show 
> faster results, but according to my experimental results, there is no big 
> difference between SSD and HDD. Why? Other settings have been left at their 
> defaults.
> *test settings
> zookeeper node  : 1, producer node : 2, broker node : 2 (SSD 1, HDD 1)
> test scenario : Two producers send messages to the broker and compare the 
> throughput per second of Kafka installed on the SSD and Kafka on the HDD
> command : ./bin/kafka-producer-perf-test.sh --num-records 100 
> --record-size 2000 --topic test --throughput 10 --producer-props 
> bootstrap.servers=SN02:9092
>  





Build failed in Jenkins: kafka-trunk-jdk8 #1393

2017-04-03 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Fix deadlock between StreamThread and KafkaStreams

--
[...truncated 1.49 MB...]
kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
t

[jira] [Commented] (KAFKA-4971) Why is there no difference between kafka benchmark tests on SSD and HDD?

2017-04-03 Thread Michal Borowiecki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953608#comment-15953608
 ] 

Michal Borowiecki commented on KAFKA-4971:
--

I think your question would be easier to respond to if you quantified it by 
providing your test results and the drive specs.
Kafka IO access patterns are designed to be sequential for good reason. 
Spinning disks and OS-level buffering are optimised for such IO patterns, but I 
don't know if that alone can account for the mismatch between your 
expectations and the results you're getting on your hardware.
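As a toy illustration of the point about sequential IO (hypothetical class, not the Kafka benchmark itself): a pure sequential append workload like the one below is exactly the pattern where the OS page cache and disk-level request merging let a spinning disk track an SSD closely.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SequentialWriteSketch {
    // Appends `records` records of `recordSize` bytes to `path` in order,
    // the access pattern Kafka's log uses; returns total bytes written.
    static long writeSequential(Path path, int records, int recordSize) {
        long written = 0;
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
            for (int i = 0; i < records; i++) {
                ByteBuffer buf = ByteBuffer.wrap(new byte[recordSize]);
                while (buf.hasRemaining()) {
                    written += ch.write(buf); // sequential append, page-cache friendly
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return written;
    }

    static Path tempLog() {
        try {
            return Files.createTempFile("seq-write", ".log");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path path = tempLog();
        path.toFile().deleteOnExit();
        long start = System.nanoTime();
        long bytes = writeSequential(path, 10_000, 2000); // matches --record-size 2000 above
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("wrote %.1f MB sequentially in %.3f s%n", bytes / 1e6, secs);
    }
}
```

Comparing this against a random-offset variant on both drives would quantify how much of the benchmark result is explained by the access pattern alone.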

> Why is there no difference between kafka benchmark tests on SSD and HDD? 
> -
>
> Key: KAFKA-4971
> URL: https://issues.apache.org/jira/browse/KAFKA-4971
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.10.0.0
> Environment: Oracle VM VirtualBox
> OS : CentOs 7
> Memory : 1G
> Disk : 8GB
>Reporter: Dasol Kim
>
> I installed the OS and Kafka on two SSDs and two HDDs to perform a Kafka 
> benchmark test based on the disk difference. As expected, the SSD should show 
> faster results, but according to my experimental results, there is no big 
> difference between SSD and HDD. Why? Other settings have been left at their 
> defaults.
> *test settings
> zookeeper node  : 1, producer node : 2, broker node : 2 (SSD 1, HDD 1)
> test scenario : Two producers send messages to the broker and compare the 
> throughput per second of Kafka installed on the SSD and Kafka on the HDD
> command : ./bin/kafka-producer-perf-test.sh --num-records 100 
> --record-size 2000 --topic test --throughput 10 --producer-props 
> bootstrap.servers=SN02:9092
>  





[jira] [Comment Edited] (KAFKA-4971) Why is there no difference between kafka benchmark tests on SSD and HDD?

2017-04-03 Thread Michal Borowiecki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953608#comment-15953608
 ] 

Michal Borowiecki edited comment on KAFKA-4971 at 4/3/17 2:54 PM:
--

I think your question would be easier to respond to if you quantified it by 
providing your test results and the drive specs.
Kafka IO access patterns are designed to be sequential for good reason. 
Spinning disks and OS-level buffering are optimised for such IO patterns, but I 
don't know if that alone can account for the mismatch between your 
expectations and the results you are getting on your hardware.


was (Author: mihbor):
I think your question would be easier to respond to if you quantified it by 
providing your test results and the drive specs.
Kafka IO access patterns are designed to be sequential for good reason. 
Spinning disks and OS level buffering are optimised for such IO patterns, but I 
don't know if that alone can account for the miss-match between your 
expectations and the results your getting on your hardware.

> Why is there no difference between kafka benchmark tests on SSD and HDD? 
> -
>
> Key: KAFKA-4971
> URL: https://issues.apache.org/jira/browse/KAFKA-4971
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.10.0.0
> Environment: Oracle VM VirtualBox
> OS : CentOs 7
> Memory : 1G
> Disk : 8GB
>Reporter: Dasol Kim
>
> I installed the OS and Kafka on two SSDs and two HDDs to perform a Kafka 
> benchmark test based on the disk difference. As expected, the SSD should show 
> faster results, but according to my experimental results, there is no big 
> difference between SSD and HDD. Why? Other settings have been left at their 
> defaults.
> *test settings
> zookeeper node  : 1, producer node : 2, broker node : 2 (SSD 1, HDD 1)
> test scenario : Two producers send messages to the broker and compare the 
> throughput per second of Kafka installed on the SSD and Kafka on the HDD
> command : ./bin/kafka-producer-perf-test.sh --num-records 100 
> --record-size 2000 --topic test --throughput 10 --producer-props 
> bootstrap.servers=SN02:9092
>  





Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-04-03 Thread Bill Bejeck
+1 (non-binding)

On Mon, Apr 3, 2017 at 9:53 AM, Mathieu Fenniak <
mathieu.fenn...@replicon.com> wrote:

> +1 (non-binding)
>
> This will be very helpful for me, looking forward to it! :-)
>
> On Thu, Mar 30, 2017 at 4:46 AM, Damian Guy  wrote:
>
> > Hi All,
> >
> > I'd like to start the voting thread on KIP-134:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 134%3A+Delay+initial+consumer+group+rebalance
> >
> > Thanks,
> > Damian
> >
>


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-04-03 Thread Onur Karaman
Hi Damian.

After reading the discussion thread again, it still doesn't seem like the
thread discussed the option I mentioned earlier.

From what I understood of the broker-side vs. client-side config debate, the
client-side config from the discussion would cause a wire format change,
while the client-side config change that I had suggested would not.

I just want to make sure we don't accidentally skip past it due to a
potential misunderstanding.

On Mon, Apr 3, 2017 at 8:10 AM, Bill Bejeck  wrote:

> +1 (non-binding)
>
> On Mon, Apr 3, 2017 at 9:53 AM, Mathieu Fenniak <
> mathieu.fenn...@replicon.com> wrote:
>
> > +1 (non-binding)
> >
> > This will be very helpful for me, looking forward to it! :-)
> >
> > On Thu, Mar 30, 2017 at 4:46 AM, Damian Guy 
> wrote:
> >
> > > Hi All,
> > >
> > > I'd like to start the voting thread on KIP-134:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 134%3A+Delay+initial+consumer+group+rebalance
> > >
> > > Thanks,
> > > Damian
> > >
> >
>


[jira] [Updated] (KAFKA-4916) Add streams tests with brokers failing

2017-04-03 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-4916:

Priority: Blocker  (was: Critical)

> Add streams tests with brokers failing
> --
>
> Key: KAFKA-4916
> URL: https://issues.apache.org/jira/browse/KAFKA-4916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> We need to add either integration or system tests with streams and have Kafka 
> brokers fail and come back up. A combination of transient and permanent 
> broker failures.





[jira] [Updated] (KAFKA-4916) Add streams tests with brokers failing

2017-04-03 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-4916:

Fix Version/s: (was: 0.11.0.0)
   0.10.2.1

> Add streams tests with brokers failing
> --
>
> Key: KAFKA-4916
> URL: https://issues.apache.org/jira/browse/KAFKA-4916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Critical
> Fix For: 0.10.2.1
>
>
> We need to add either integration or system tests with streams and have Kafka 
> brokers fail and come back up. A combination of transient and permanent 
> broker failures.





[jira] [Updated] (KAFKA-4913) creating a window store with one segment throws division by zero error

2017-04-03 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-4913:

Fix Version/s: 0.10.2.1

> creating a window store with one segment throws division by zero error
> --
>
> Key: KAFKA-4913
> URL: https://issues.apache.org/jira/browse/KAFKA-4913
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Xavier Léauté
>Assignee: Damian Guy
> Fix For: 0.10.2.1
>
>






[jira] [Created] (KAFKA-5003) StreamThread should catch InvalidTopicException

2017-04-03 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-5003:
---

 Summary: StreamThread should catch InvalidTopicException
 Key: KAFKA-5003
 URL: https://issues.apache.org/jira/browse/KAFKA-5003
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Eno Thereska
Assignee: Matthias J. Sax
Priority: Blocker
 Fix For: 0.10.2.1


There is already a PR here: https://github.com/apache/kafka/pull/2747. Tracking 
with JIRA for 0.10.2.1





[jira] [Commented] (KAFKA-5002) Stream doesn't seem to consider partitions for processing which are being consumed

2017-04-03 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953717#comment-15953717
 ] 

Matthias J. Sax commented on KAFKA-5002:


"if i start consumer which consumes data from topic t1 and partition p1 then 
the stream logic doesn't consider p1 for processing" -> starting a consumer 
(with a different {{group.id}}) should not conflict with Kafka Streams -- it 
would be invalid (and should fail with an exception) if you start a consumer 
with the same {{group.id}} as the Kafka Streams app (note, Kafka Streams uses 
its {{application.id}} as {{group.id}} internally). Just want to verify that I 
understand correctly what you are reporting.

> Stream doesn't seem to consider partitions for processing which are being 
> consumed
> -
>
> Key: KAFKA-5002
> URL: https://issues.apache.org/jira/browse/KAFKA-5002
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
> Environment: Windows 8.1
>Reporter: Mustak
>  Labels: patch
>
> Kafka Streams doesn't seem to consider a particular partition for processing 
> if that partition is being consumed by some consumer. For example, I have two 
> topics t1 and t2, each with two partitions p1 and p2, and a stream process is 
> running which consumes data from these topics and produces output to topic 
> t3, which has two partitions. If I run this kind of topology it works, but if I 
> start a consumer which consumes data from topic t1 and partition p1, then the 
> stream logic doesn't consider p1 for processing and the stream doesn't provide 
> any output related to that partition. I think the stream logic should consider 
> partitions which are being consumed.





Re: [DISCUSS] Time for 0.10.2.1 bugfix release

2017-04-03 Thread Gwen Shapira
Reminder that:

1. If you have a bug fix that you want to see in 0.10.2.1, now is the
right time to mark the JIRA as "fixVersion=0.10.2.1", so I can track
it, and get a committer to cherrypick the fix into 0.10.2 branch. Only
JIRAs with PRs please, unless it is an absolute blocker.

2. If you have a JIRA with "fixVersion=0.10.2.1" (i.e. in this list:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Unresolved%20AND%20fixVersion%20%3D%200.10.2.1%20ORDER%20BY%20priority%20DESC%2C%20key%20DESC),
now is the time to either get it resolved or move it out of the
release.

I'm really hoping to get an RC done by end-of-day Wednesday, so people
can check their existing bug fixes. Please help make it happen :)

Gwen

On Tue, Mar 28, 2017 at 3:52 PM, Gwen Shapira  wrote:
> Hi Team Kafka,
>
> Since the 0.10.2.0 release we've fixed 13 JIRAs, few rather critical, that
> are targeted for 0.10.2.1:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%200.10.2.1%20ORDER%20BY%20priority%20DESC%2C%20key%20DESC
>
> We also have few outstanding issues for 0.10.2.1:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20resolution%20%3D%20Unresolved%20AND%20fixVersion%20%3D%200.10.2.1%20ORDER%20BY%20priority%20DESC%2C%20key%20DESC
>
> I think it's time for a 0.10.2.1 bugfix release!
>
> Can the owners of the remaining issues get them resolved or move them to a
> different release?
>
> Once we get the "remaining" list to zero, I'll roll the first RC.
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


[jira] [Commented] (KAFKA-4916) Add streams tests with brokers failing

2017-04-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953722#comment-15953722
 ] 

Gwen Shapira commented on KAFKA-4916:
-

This doesn't block the release, right? It doesn't fix any known issue and tests 
can be added to the branch at any point in time?

> Add streams tests with brokers failing
> --
>
> Key: KAFKA-4916
> URL: https://issues.apache.org/jira/browse/KAFKA-4916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> We need to add either integration or system tests with streams and have Kafka 
> brokers fail and come back up. A combination of transient and permanent 
> broker failures.





[jira] [Updated] (KAFKA-4848) Stream thread getting into deadlock state while trying to get rocksdb lock in retryWithBackoff

2017-04-03 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-4848:

Fix Version/s: 0.10.2.1

> Stream thread getting into deadlock state while trying to get rocksdb lock in 
> retryWithBackoff
> --
>
> Key: KAFKA-4848
> URL: https://issues.apache.org/jira/browse/KAFKA-4848
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Sachin Mittal
>Assignee: Sachin Mittal
> Fix For: 0.11.0.0, 0.10.2.1
>
> Attachments: thr-1
>
>
> We see a deadlock state when a streams thread takes longer than 
> MAX_POLL_INTERVAL_MS_CONFIG to process a task. In this case the thread's 
> partitions, including the rocksdb lock, are assigned to some other thread. 
> When it tries to process the next task it cannot get the rocksdb lock and 
> simply keeps waiting for that lock forever.
> In retryWithBackoff for AbstractTaskCreator we have a backoffTimeMs = 50L.
> If it does not get the lock we simply increase the time by 10x and keep 
> trying inside the while-true loop.
> We need an upper bound for this backoffTimeMs. If the time is greater than 
> MAX_POLL_INTERVAL_MS_CONFIG and the thread still hasn't got the lock, its 
> partitions have been moved somewhere else and it may never get the lock 
> again.
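A hedged sketch (hypothetical names, not the actual AbstractTaskCreator code) of the bounded retry the report asks for: backoff grows 10x per attempt but is capped, and the thread gives up once the total wait passes a max.poll.interval.ms-style ceiling instead of spinning forever.

```java
public class CappedBackoffSketch {
    // The 10x growth described above, but never beyond the ceiling.
    static long nextBackoffMs(long currentMs, long capMs) {
        return Math.min(currentMs * 10, capMs);
    }

    // Past max.poll.interval.ms the partitions have been reassigned,
    // so the lock will never be released to this thread; stop retrying.
    static boolean shouldGiveUp(long waitedMs, long maxPollIntervalMs) {
        return waitedMs > maxPollIntervalMs;
    }

    public static void main(String[] args) {
        long backoff = 50L, waited = 0L, cap = 300_000L;
        while (!shouldGiveUp(waited, cap)) {
            // In the real code this is where the lock acquisition is retried.
            waited += backoff;
            backoff = nextBackoffMs(backoff, cap);
        }
        System.out.println("gave up after waiting ~" + waited + " ms");
    }
}
```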





[jira] [Commented] (KAFKA-4916) Add streams tests with brokers failing

2017-04-03 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953723#comment-15953723
 ] 

Eno Thereska commented on KAFKA-4916:
-

There is a critical bug fix as part of the test. The name of the JIRA is 
misleading, will change.

> Add streams tests with brokers failing
> --
>
> Key: KAFKA-4916
> URL: https://issues.apache.org/jira/browse/KAFKA-4916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> We need to add either integration or system tests with streams and have Kafka 
> brokers fail and come back up. A combination of transient and permanent 
> broker failures.





[jira] [Comment Edited] (KAFKA-4916) Add streams tests with brokers failing

2017-04-03 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953723#comment-15953723
 ] 

Eno Thereska edited comment on KAFKA-4916 at 4/3/17 4:19 PM:
-

There is a critical bug fix as part of the test. The name of the JIRA is 
misleading, will change. Actually I might not change the JIRA name, but the 
bottom line is that critical bugs were found as part of adding this one test, 
and fixed.


was (Author: enothereska):
There is a critical bug fix as part of the test. The name of the JIRA is 
missleading, will change.

> Add streams tests with brokers failing
> --
>
> Key: KAFKA-4916
> URL: https://issues.apache.org/jira/browse/KAFKA-4916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> We need to add either integration or system tests with streams and have Kafka 
> brokers fail and come back up. A combination of transient and permanent 
> broker failures.





[jira] [Updated] (KAFKA-4916) Add streams tests with brokers failing

2017-04-03 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-4916:

Description: 
We need to add either integration or system tests with streams and have Kafka 
brokers fail and come back up. A combination of transient and permanent broker 
failures.

As part of adding the test, fix any critical bugs that arise.

  was:We need to add either integration or system tests with streams and have 
Kafka brokers fail and come back up. A combination of transient and permanent 
broker failures.


> Add streams tests with brokers failing
> --
>
> Key: KAFKA-4916
> URL: https://issues.apache.org/jira/browse/KAFKA-4916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> We need to add either integration or system tests with streams and have Kafka 
> brokers fail and come back up. A combination of transient and permanent 
> broker failures.
> As part of adding the test, fix any critical bugs that arise.





[jira] [Commented] (KAFKA-4848) Stream thread getting into deadlock state while trying to get rocksdb lock in retryWithBackoff

2017-04-03 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953734#comment-15953734
 ] 

Eno Thereska commented on KAFKA-4848:
-

Needs to go to 0.10.2.1 too. Reopening to track that.

> Stream thread getting into deadlock state while trying to get rocksdb lock in 
> retryWithBackoff
> --
>
> Key: KAFKA-4848
> URL: https://issues.apache.org/jira/browse/KAFKA-4848
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Sachin Mittal
>Assignee: Sachin Mittal
> Fix For: 0.11.0.0, 0.10.2.1
>
> Attachments: thr-1
>
>
> We see a deadlock state when a streams thread takes longer than 
> MAX_POLL_INTERVAL_MS_CONFIG to process a task. In this case the thread's 
> partitions, including the RocksDB lock, are assigned to some other thread. 
> When it tries to process the next task it cannot get the RocksDB lock and 
> simply keeps waiting for that lock forever.
> In retryWithBackoff for AbstractTaskCreator we have a backoffTimeMs = 50L.
> If it does not get the lock we simply increase the time by 10x and keep 
> retrying inside the while-true loop.
> We need an upper bound for this backoffTimeMs. If the time is greater than 
> MAX_POLL_INTERVAL_MS_CONFIG and the thread still hasn't got the lock, its 
> partitions have been moved somewhere else and it may never get the lock 
> again.





[jira] [Commented] (KAFKA-5003) StreamThread should catch InvalidTopicException

2017-04-03 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953736#comment-15953736
 ] 

Matthias J. Sax commented on KAFKA-5003:


The PR from above is for {{trunk}} -- there is a second PR for the {{0.10.2}} 
branch: https://github.com/apache/kafka/pull/2774

> StreamThread should catch InvalidTopicException
> ---
>
> Key: KAFKA-5003
> URL: https://issues.apache.org/jira/browse/KAFKA-5003
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
>Priority: Blocker
> Fix For: 0.10.2.1
>
>
> There is already a PR here: https://github.com/apache/kafka/pull/2747. 
> Tracking with JIRA for 0.10.2.1





[jira] [Commented] (KAFKA-3795) Transient system test failure upgrade_test.TestUpgrade

2017-04-03 Thread Roger Hoover (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953759#comment-15953759
 ] 

Roger Hoover commented on KAFKA-3795:
-

Happened again: 
http://confluent-systest.s3-website-us-west-2.amazonaws.com/confluent-kafka-system-test-results/?prefix=2017-04-03--001.1491220440--apache--trunk--bdf4cba/

{noformat}
test_id:
kafkatest.tests.core.upgrade_test.TestUpgrade.test_upgrade.from_kafka_version=0.9.0.1.to_message_format_version=None.security_protocol=SASL_SSL.compression_types=.none
status: FAIL
run time:   4 minutes 4.673 seconds


199680 acked message did not make it to the Consumer. They are: 538129, 
538132, 538135, 538138, 538140, 538141, 538143, 538144, 538146, 538147, 538149, 
538150, 538152, 538153, 538155, 538156, 538158, 538159, 538161, 538162...plus 
199660 more. Total Acked: 331954, Total Consumed: 138002. The first 1000 
missing messages were validated to ensure they are in Kafka's data files. 1000 
were missing. This suggests data loss. Here are some of the messages not found 
in the data files: [538624, 538625, 538626, 538627, 538628, 538629, 538630, 
538631, 538632, 538633]

Traceback (most recent call last):
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
 line 123, in run
data = self.run_test()
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
 line 176, in run_test
return self.test_context.function(self.test)
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
 line 321, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/core/upgrade_test.py",
 line 125, in test_upgrade
self.run_produce_consume_validate(core_test_action=lambda: 
self.perform_upgrade(from_kafka_version,
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/produce_consume_validate.py",
 line 118, in run_produce_consume_validate
self.validate()
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/produce_consume_validate.py",
 line 188, in validate
assert success, msg
AssertionError: 199680 acked message did not make it to the Consumer. They are: 
538129, 538132, 538135, 538138, 538140, 538141, 538143, 538144, 538146, 538147, 
538149, 538150, 538152, 538153, 538155, 538156, 538158, 538159, 538161, 
538162...plus 199660 more. Total Acked: 331954, Total Consumed: 138002. The 
first 1000 missing messages were validated to ensure they are in Kafka's data 
files. 1000 were missing. This suggests data loss. Here are some of the 
messages not found in the data files: [538624, 538625, 538626, 538627, 538628, 
538629, 538630, 538631, 538632, 538633]
{noformat}
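For context, the validation step that produced the failure report above diffs the set of acked message ids against the set of consumed ids. A minimal Java sketch of that check (the real ducktape validation is Python; the class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch of the acked-vs-consumed check whose failure is
// reported above; the real ducktape validate() is Python.
public class AckValidation {
    // Returns, in ascending order, the acked messages that were never
    // consumed -- the "acked message did not make it" set in the report.
    static List<Long> missingMessages(Collection<Long> acked, Collection<Long> consumed) {
        Set<Long> missing = new TreeSet<>(acked);      // sorted copy of acked ids
        missing.removeAll(new HashSet<>(consumed));    // drop everything consumed
        return new ArrayList<>(missing);
    }

    public static void main(String[] args) {
        List<Long> acked = Arrays.asList(1L, 2L, 3L, 4L);
        List<Long> consumed = Arrays.asList(1L, 3L);
        System.out.println(missingMessages(acked, consumed)); // [2, 4]
    }
}
```

A non-empty result only suggests loss; as the report notes, the test then cross-checks a sample of the missing ids against the broker's data files to distinguish consumer lag from actual data loss.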

> Transient system test failure upgrade_test.TestUpgrade
> --
>
> Key: KAFKA-3795
> URL: https://issues.apache.org/jira/browse/KAFKA-3795
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Jason Gustafson
>  Labels: reliability
>
> From a recent build running on the 0.10.0 branch:
> {code}
> test_id:
> 2016-06-06--001.kafkatest.tests.core.upgrade_test.TestUpgrade.test_upgrade.from_kafka_version=0.9.0.1.to_message_format_version=0.9.0.1.compression_types=.snappy.new_consumer=True
> status: FAIL
> run time:   3 minutes 29.166 seconds
> 3522 acked message did not make it to the Consumer. They are: 476524, 
> 476525, 476527, 476528, 476530, 476531, 476533, 476534, 476536, 476537, 
> 476539, 476540, 476542, 476543, 476545, 476546, 476548, 476549, 476551, 
> 476552, ...plus 3482 more. Total Acked: 110437, Total Consumed: 127470. The 
> first 1000 missing messages were validated to ensure they are in Kafka's data 
> files. 1000 were missing. This suggests data loss. Here are some of the 
> messages not found in the data files: [477184, 477185, 477187, 477188, 
> 477190, 477191, 477193, 477194, 477196, 477197]
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 106, in run_all_tests
> data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 162, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0

[jira] [Updated] (KAFKA-5003) StreamThread should catch InvalidTopicException

2017-04-03 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5003:
---
Fix Version/s: 0.11.0.0
   Status: Patch Available  (was: Open)

> StreamThread should catch InvalidTopicException
> ---
>
> Key: KAFKA-5003
> URL: https://issues.apache.org/jira/browse/KAFKA-5003
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Eno Thereska
>Assignee: Matthias J. Sax
>Priority: Blocker
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> There is already a PR here: https://github.com/apache/kafka/pull/2747. 
> Tracking with JIRA for 0.10.2.1





[jira] [Updated] (KAFKA-3795) Transient system test failure upgrade_test.TestUpgrade

2017-04-03 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3795:
---
Labels: kip-101 reliability  (was: reliability)

> Transient system test failure upgrade_test.TestUpgrade
> --
>
> Key: KAFKA-3795
> URL: https://issues.apache.org/jira/browse/KAFKA-3795
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: Jason Gustafson
>  Labels: kip-101, reliability
>
> From a recent build running on the 0.10.0 branch:
> {code}
> test_id:
> 2016-06-06--001.kafkatest.tests.core.upgrade_test.TestUpgrade.test_upgrade.from_kafka_version=0.9.0.1.to_message_format_version=0.9.0.1.compression_types=.snappy.new_consumer=True
> status: FAIL
> run time:   3 minutes 29.166 seconds
> 3522 acked message did not make it to the Consumer. They are: 476524, 
> 476525, 476527, 476528, 476530, 476531, 476533, 476534, 476536, 476537, 
> 476539, 476540, 476542, 476543, 476545, 476546, 476548, 476549, 476551, 
> 476552, ...plus 3482 more. Total Acked: 110437, Total Consumed: 127470. The 
> first 1000 missing messages were validated to ensure they are in Kafka's data 
> files. 1000 were missing. This suggests data loss. Here are some of the 
> messages not found in the data files: [477184, 477185, 477187, 477188, 
> 477190, 477191, 477193, 477194, 477196, 477197]
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 106, in run_all_tests
> data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/tests/runner.py",
>  line 162, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.5.1-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/tests/kafkatest/tests/core/upgrade_test.py",
>  line 113, in test_upgrade
> self.run_produce_consume_validate(core_test_action=lambda: 
> self.perform_upgrade(from_kafka_version,
>   File 
> "/var/lib/jenkins/workspace/system-test-kafka-0.10.0/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 79, in run_produce_consume_validate
> raise e
> AssertionError: 3522 acked message did not make it to the Consumer. They are: 
> 476524, 476525, 476527, 476528, 476530, 476531, 476533, 476534, 476536, 
> 476537, 476539, 476540, 476542, 476543, 476545, 476546, 476548, 476549, 
> 476551, 476552, ...plus 3482 more. Total Acked: 110437, Total Consumed: 
> 127470. The first 1000 missing messages were validated to ensure they are in 
> Kafka's data files. 1000 were missing. This suggests data loss. Here are some 
> of the messages not found in the data files: [477184, 477185, 477187, 477188, 
> 477190, 477191, 477193, 477194, 477196, 477197]
> {code}
> Here's a link to the test data: 
> http://testing.confluent.io/confluent-kafka-0-10-0-system-test-results/?prefix=2016-06-06--001.1465234069--apache--0.10.0--6500b53/





[GitHub] kafka pull request #2794: HOTFIX: Set `baseOffset` correctly in `RecordAccum...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2794


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4848) Stream thread getting into deadlock state while trying to get rocksdb lock in retryWithBackoff

2017-04-03 Thread Sachin Mittal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953777#comment-15953777
 ] 

Sachin Mittal commented on KAFKA-4848:
--

Please let me know if this will be done in the 0.10.2 branch. Do I need to 
issue a PR for the same?

Also note that some fixes that are in trunk, such as catching the 
commit-failed exception on offset commits, are not in that branch; they would 
be a prerequisite for this fix.

So let me know what we are planning for the 0.10.2.1 release.




> Stream thread getting into deadlock state while trying to get rocksdb lock in 
> retryWithBackoff
> --
>
> Key: KAFKA-4848
> URL: https://issues.apache.org/jira/browse/KAFKA-4848
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Sachin Mittal
>Assignee: Sachin Mittal
> Fix For: 0.11.0.0, 0.10.2.1
>
> Attachments: thr-1
>
>
> We see a deadlock state when a streams thread takes longer than 
> MAX_POLL_INTERVAL_MS_CONFIG to process a task. In this case the thread's 
> partitions, including the RocksDB lock, are assigned to some other thread. 
> When it tries to process the next task it cannot get the RocksDB lock and 
> simply keeps waiting for that lock forever.
> In retryWithBackoff for AbstractTaskCreator we have a backoffTimeMs = 50L.
> If it does not get the lock we simply increase the time by 10x and keep 
> retrying inside the while-true loop.
> We need an upper bound for this backoffTimeMs. If the time is greater than 
> MAX_POLL_INTERVAL_MS_CONFIG and the thread still hasn't got the lock, its 
> partitions have been moved somewhere else and it may never get the lock 
> again.





[GitHub] kafka pull request #2795: MINOR: Add a release script that helps generate re...

2017-04-03 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/2795

MINOR: Add a release script that helps generate release candidates.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka release-script

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2795


commit 382b7f99d79ada16bd7f06d1424577de2418c1ea
Author: Ewen Cheslack-Postava 
Date:   2017-04-03T16:49:30Z

MINOR: Add a release script that helps generate release candidates.






[VOTE] KIP-112 - Handle disk failure for JBOD

2017-04-03 Thread Dong Lin
Hi all,

It seems that there is no further concern with KIP-112. We would like
to start the voting process. The KIP can be found at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD

Thanks,
Dong


[VOTE] KIP-113 - Support replicas movement between log directories

2017-04-03 Thread Dong Lin
Hi all,

It seems that there is no further concern with KIP-113. We would like
to start the voting process. The KIP can be found at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories

Thanks,
Dong


[jira] [Comment Edited] (KAFKA-4848) Stream thread getting into deadlock state while trying to get rocksdb lock in retryWithBackoff

2017-04-03 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953804#comment-15953804
 ] 

Eno Thereska edited comment on KAFKA-4848 at 4/3/17 4:54 PM:
-

Sachin, stay tuned, the committers are looking into it for now.


was (Author: enothereska):
Saching, stay tuned, the committers are looking into it for now.

> Stream thread getting into deadlock state while trying to get rocksdb lock in 
> retryWithBackoff
> --
>
> Key: KAFKA-4848
> URL: https://issues.apache.org/jira/browse/KAFKA-4848
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Sachin Mittal
>Assignee: Sachin Mittal
> Fix For: 0.11.0.0, 0.10.2.1
>
> Attachments: thr-1
>
>
> We see a deadlock state when a streams thread takes longer than 
> MAX_POLL_INTERVAL_MS_CONFIG to process a task. In this case the thread's 
> partitions, including the RocksDB lock, are assigned to some other thread. 
> When it tries to process the next task it cannot get the RocksDB lock and 
> simply keeps waiting for that lock forever.
> In retryWithBackoff for AbstractTaskCreator we have a backoffTimeMs = 50L.
> If it does not get the lock we simply increase the time by 10x and keep 
> retrying inside the while-true loop.
> We need an upper bound for this backoffTimeMs. If the time is greater than 
> MAX_POLL_INTERVAL_MS_CONFIG and the thread still hasn't got the lock, its 
> partitions have been moved somewhere else and it may never get the lock 
> again.





[jira] [Commented] (KAFKA-4848) Stream thread getting into deadlock state while trying to get rocksdb lock in retryWithBackoff

2017-04-03 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953804#comment-15953804
 ] 

Eno Thereska commented on KAFKA-4848:
-

Sachin, stay tuned, the committers are looking into it for now.

> Stream thread getting into deadlock state while trying to get rocksdb lock in 
> retryWithBackoff
> --
>
> Key: KAFKA-4848
> URL: https://issues.apache.org/jira/browse/KAFKA-4848
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Sachin Mittal
>Assignee: Sachin Mittal
> Fix For: 0.11.0.0, 0.10.2.1
>
> Attachments: thr-1
>
>
> We see a deadlock state when a streams thread takes longer than 
> MAX_POLL_INTERVAL_MS_CONFIG to process a task. In this case the thread's 
> partitions, including the RocksDB lock, are assigned to some other thread. 
> When it tries to process the next task it cannot get the RocksDB lock and 
> simply keeps waiting for that lock forever.
> In retryWithBackoff for AbstractTaskCreator we have a backoffTimeMs = 50L.
> If it does not get the lock we simply increase the time by 10x and keep 
> retrying inside the while-true loop.
> We need an upper bound for this backoffTimeMs. If the time is greater than 
> MAX_POLL_INTERVAL_MS_CONFIG and the thread still hasn't got the lock, its 
> partitions have been moved somewhere else and it may never get the lock 
> again.





[GitHub] kafka pull request #2778: MINOR: fix cleanup phase for KStreamWindowAggregat...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2778




Re: [VOTE] KIP-81: Bound Fetch memory usage in the consumer

2017-04-03 Thread Becket Qin
+1. Thanks for the KIP.

On Mon, Apr 3, 2017 at 4:29 AM, Rajini Sivaram 
wrote:

> +1 (non-binding)
>
> On Fri, Mar 31, 2017 at 5:36 PM, radai  wrote:
>
> > possible priorities:
> >
> > 1. keepalives/coordination
> > 2. inter-broker-traffic
> > 3. produce traffic
> > 4. consume traffic
> >
> > (don't want to start a debate, just to illustrate there may be >2 of them
> > so int is better than bool)
> >
> > On Fri, Mar 31, 2017 at 9:10 AM, Ismael Juma  wrote:
> >
> > > +1 from me too, thanks for the KIP.
> > >
> > > Ismael
> > >
> > > On Fri, Mar 31, 2017 at 5:06 PM, Jun Rao  wrote:
> > >
> > > > Hi, Mickael,
> > > >
> > > > Thanks for the KIP. +1 from me too.
> > > >
> > > > Jun
> > > >
> > > > On Thu, Mar 30, 2017 at 4:40 AM, Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Thanks for the suggestion.
> > > > >
> > > > > Currently, I can't think of a scenario when we would need multiple
> > > > > priority "levels". If in the future it makes sense to have some, I
> > > > > think we could just make the change without a new KIP as these APIs
> > > > > are not public.
> > > > > So I'd be more inclined to keep the boolean for now.
> > > > >
> > > > > On Wed, Mar 29, 2017 at 6:13 PM, Edoardo Comar 
> > > > wrote:
> > > > > > Hi Mickael,
> > > > > > as discussed we could change the priority parameter to be an int
> > > rather
> > > > > > than a boolean.
> > > > > >
> > > > > > That's a bit more extensible
> > > > > > --
> > > > > > Edoardo Comar
> > > > > > IBM MessageHub
> > > > > > eco...@uk.ibm.com
> > > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > > >
> > > > > > IBM United Kingdom Limited Registered in England and Wales with
> > > number
> > > > > > 741598 Registered office: PO Box 41, North Harbour, Portsmouth,
> > > Hants.
> > > > > PO6
> > > > > > 3AU
> > > > > >
> > > > > >
> > > > > >
> > > > > > From:   Guozhang Wang 
> > > > > > To: "dev@kafka.apache.org" 
> > > > > > Date:   28/03/2017 19:02
> > > > > > Subject:Re: [VOTE] KIP-81: Bound Fetch memory usage in
> the
> > > > > > consumer
> > > > > >
> > > > > >
> > > > > >
> > > > > > 1) Makes sense.
> > > > > > 2) Makes sense. Thanks!
> > > > > >
> > > > > > On Tue, Mar 28, 2017 at 10:11 AM, Mickael Maison
> > > > > > 
> > > > > > wrote:
> > > > > >
> > > > > >> Hi Guozhang,
> > > > > >>
> > > > > >> Thanks for the feedback.
> > > > > >>
> > > > > >> 1) By MemoryPool, I mean the implementation added in KIP-72.
> That
> > > will
> > > > > >> most likely be SimpleMemoryPool, but the PR for KIP-72 has not
> > been
> > > > > >> merged yet.
> > > > > >> I've updated the KIP to make it more obvious.
> > > > > >>
> > > > > >> 2) I was thinking to pass in the priority when creating the
> > > > > >> Coordinator Node (in
> > > > > >> https://github.com/apache/kafka/blob/trunk/clients/src/
> > > > > >> main/java/org/apache/kafka/clients/consumer/internals/
> > > > > >> AbstractCoordinator.java#L582)
> > > > > >> Then when calling Selector.connect() (in
> > > > > >> https://github.com/apache/kafka/blob/trunk/clients/src/
> > > > > >> main/java/org/apache/kafka/clients/NetworkClient.java#L643)
> > > > > >> retrieve it and pass it in the Selector so it uses it when
> > building
> > > > > >> the Channel.
> > > > > >> The idea was to avoid having to deduce the connection is for the
> > > > > >> Coordinator from the ID but instead have it explicitly set by
> > > > > >> AbstractCoordinator (and pass it all the way down to the
> Channel)
> > > > > >>
> > > > > >> On Tue, Mar 28, 2017 at 1:33 AM, Guozhang Wang <
> > wangg...@gmail.com>
> > > > > > wrote:
> > > > > >> > Mickael,
> > > > > >> >
> > > > > >> > Sorry for the late review of the KIP. I'm +1 on the proposed
> > > change
> > > > as
> > > > > >> > well. Just a few minor comments on the wiki itself:
> > > > > >> >
> > > > > >> > 1. By the "MemoryPool" are you referring to a new class impl
> or
> > to
> > > > > >> reusing "
> > > > > >> > org.apache.kafka.clients.producer.internals.BufferPool"? I
> > assume
> > > > it
> > > > > > was
> > > > > >> > the latter case, and if yes, could you update the wiki page to
> > > make
> > > > it
> > > > > >> > clear?
> > > > > >> >
> > > > > >> > 2. I think it is sufficient to add the priority to
> KafkaChannel
> > > > class,
> > > > > >> but
> > > > > >> > not needed in Node (but one may need to add this parameter to
> > > > > > Selector#
> > > > > >> > connect). Could you point me to which usage of Node needs to
> > > access
> > > > > > the
> > > > > >> > priority?
> > > > > >> >
> > > > > >> >
> > > > > >> > Guozhang
> > > > > >> >
> > > > > >> >
> > > > > >> > On Fri, Mar 10, 2017 at 9:52 AM, Mickael Maison <
> > > > > >> mickael.mai...@gmail.com>
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> >> Thanks Jason for the feedback! Yes it makes sense to always
> use
> > > the
> > > > > >> >> MemoryPool is we can. I've updated the KIP with the
> suggestion
> > > > > >> >>
> > > > > >>
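The traffic classes listed earlier in this thread illustrate why an int priority is more extensible than a boolean. A hypothetical sketch, not part of the KIP-81 API (the class name, constants, and method are invented for illustration):

```java
// Hypothetical sketch of multi-level connection priorities, illustrating why
// an int (or enum) is more extensible than a boolean flag. Not KIP-81 code.
public class ConnectionPriority {
    // Lower value = higher priority, mirroring the ordering proposed above.
    static final int COORDINATION = 0;   // keepalives/coordination
    static final int INTER_BROKER = 1;   // inter-broker traffic
    static final int PRODUCE      = 2;   // produce traffic
    static final int CONSUME      = 3;   // consume traffic

    // A boolean could only distinguish "privileged" vs "normal"; an int lets
    // a memory pool decide per class, e.g. never starve coordination traffic.
    static boolean mayBypassPool(int priority) {
        return priority <= INTER_BROKER;
    }

    public static void main(String[] args) {
        System.out.println(mayBypassPool(COORDINATION)); // true
        System.out.println(mayBypassPool(CONSUME));      // false
    }
}
```

Keeping the values ordered also means new classes can be slotted in later without touching callers that only compare priorities.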

Build failed in Jenkins: kafka-trunk-jdk8 #1394

2017-04-03 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Fix potential deadlock in consumer close test

--
[...truncated 1.43 MB...]
kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets STARTED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs PASSED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes STARTED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes PASSED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithCreateTime STARTED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithCreateTime PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit STARTED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED

kafka.api.PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition 
STARTED

kafka.api.PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition 
PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithAssign 
STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsCleanUpWithAssign 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance STARTED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst STARTED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst PASSED

kafka.api.PlaintextConsumerTest > testSeek STARTED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit STARTED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes STARTED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes STARTED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testListTopics STARTED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors STARTED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption STARTED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor STARTED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue STARTED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest 

[GitHub] kafka pull request #2796: Close the producer batch data stream when the batc...

2017-04-03 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/2796

Close the producer batch data stream when the batch gets full to free up 
compression buffers, etc.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
idempotent-producer-close-data-stream

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2796.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2796


commit 8e0b3eba9bde8fb790069ebfe43c3f82f91af4a3
Author: Apurva Mehta 
Date:   2017-04-02T00:53:20Z

Close the underlying ProducerBatch data stream once the batch is full. This
frees up compression buffers, etc.

The actual batch will be closed and headers written when the batch
is about to be sent, or if it is expired, or if the producer is
closed.

commit e5b0771b74bb576ee1821a7a71bbcc5377b8dcde
Author: Ismael Juma 
Date:   2017-04-03T14:23:06Z

A few improvements




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-138: Change punctuate semantics

2017-04-03 Thread Thomas Becker
Although I fully agree we need a way to trigger periodic processing
that is independent from whether and when messages arrive, I'm not sure
I like the idea of changing the existing semantics across the board.
What if we added an additional callback to Processor that can be
scheduled similarly to punctuate() but was always called at fixed, wall
clock based intervals? This way you wouldn't have to give up the notion
of stream time to be able to do periodic processing.
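The difference between the two styles can be sketched in a minimal, self-contained simulation (names here are illustrative, not the actual Kafka Streams API): stream-time punctuation only fires as record timestamps cross interval boundaries, so it stalls completely when no messages arrive, which is exactly the gap a wall-clock callback would close.

```java
// Minimal simulation contrasting stream-time punctuation with the proposed
// wall-clock callback. Hypothetical names; NOT the Kafka Streams API.
import java.util.ArrayList;
import java.util.List;

public class PunctuateSketch {
    interface Punctuator { void punctuate(long timestamp); }

    // Stream-time scheduler: time only advances with observed record
    // timestamps, so punctuate() never fires while no messages arrive.
    static class StreamTimeScheduler {
        private final long interval;
        private final Punctuator callback;
        private long nextFire;

        StreamTimeScheduler(long interval, Punctuator callback) {
            this.interval = interval;
            this.callback = callback;
            this.nextFire = interval;
        }

        // Called once per incoming record; fires once per crossed boundary.
        void onRecord(long recordTimestamp) {
            while (recordTimestamp >= nextFire) {
                callback.punctuate(nextFire);
                nextFire += interval;
            }
        }
    }

    public static void main(String[] args) {
        List<Long> fired = new ArrayList<>();
        StreamTimeScheduler s = new StreamTimeScheduler(10, fired::add);
        s.onRecord(5);   // before the first boundary: nothing fires
        s.onRecord(25);  // crosses boundaries 10 and 20
        System.out.println(fired); // [10, 20]
        // The wall-clock callback proposed above would instead fire from a
        // timer thread every `interval` ms regardless of record arrival.
    }
}
```

The sketch makes the trade-off concrete: the stream-time variant is deterministic and replayable, while the wall-clock variant guarantees liveness during quiet periods.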

On Mon, 2017-04-03 at 10:34 +0100, Michal Borowiecki wrote:
> Hi all,
>
> I have created a draft for KIP-138: Change punctuate semantics
> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%3A+Change+punctuate+semantics>
> .
>
> Appreciating there can be different views on system-time vs event-
> time
> semantics for punctuation depending on use-case and the importance of
> backwards compatibility of any such change, I've left it quite open
> and
> hope to fill in more info as the discussion progresses.
>
> Thanks,
> Michal
--


Tommy Becker

Senior Software Engineer

O +1 919.460.4747

tivo.com




This email and any attachments may contain confidential and privileged material 
for the sole use of the intended recipient. Any review, copying, or 
distribution of this email (or any attachments) by others is prohibited. If you 
are not the intended recipient, please contact the sender immediately and 
permanently delete this email and any attachments. No employee or agent of TiVo 
Inc. is authorized to conclude any binding agreement on behalf of TiVo Inc. by 
email. Binding agreements with TiVo Inc. may only be made by a signed written 
agreement.


Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-04-03 Thread Becket Qin
Hey Onur,

Are you suggesting letting the consumers hold back on sending the
SyncGroupRequest on the first rebalance? I am not sure how exactly that
would work, but it looks like having the group coordinator control the
rebalance progress would be clearer and probably safer than letting the
group members guess the state of the group. Can you elaborate a little bit
on your idea?

Thanks,

Jiangjie (Becket) Qin

On Mon, Apr 3, 2017 at 8:16 AM, Onur Karaman 
wrote:

> Hi Damian.
>
> After reading the discussion thread again, it still doesn't seem like the
> thread discussed the option I mentioned earlier.
>
> What I had understood from the broker-side vs. client-side config
> debate was that the client-side config from the discussion would cause a
> wire format change, while the client-side config change that I had
> suggested would not.
>
> I just want to make sure we don't accidentally skip past it due to a
> potential misunderstanding.
>
> On Mon, Apr 3, 2017 at 8:10 AM, Bill Bejeck  wrote:
>
> > +1 (non-binding)
> >
> > On Mon, Apr 3, 2017 at 9:53 AM, Mathieu Fenniak <
> > mathieu.fenn...@replicon.com> wrote:
> >
> > > +1 (non-binding)
> > >
> > > This will be very helpful for me, looking forward to it! :-)
> > >
> > > On Thu, Mar 30, 2017 at 4:46 AM, Damian Guy 
> > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I'd like to start the voting thread on KIP-134:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 134%3A+Delay+initial+consumer+group+rebalance
> > > >
> > > > Thanks,
> > > > Damian
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-5004) poll() timeout not enforced when connecting to 0.10.0 broker

2017-04-03 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5004:
--

 Summary: poll() timeout not enforced when connecting to 0.10.0 
broker
 Key: KAFKA-5004
 URL: https://issues.apache.org/jira/browse/KAFKA-5004
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Matthias J. Sax
 Fix For: 0.10.2.0


In 0.10.1, the heartbeat thread and the new poll timeout {{max.poll.interval.ms}} were 
introduced via KIP-62. In 0.10.2, we added client-broker backward compatibility.

Now, if a 0.10.2 client connects to a 0.10.0 broker, the broker only understands 
the heartbeat timeout but not the poll timeout, while the client is still using 
the heartbeat background thread. Thus, the new client config 
{{max.poll.interval.ms}} is ignored.

In the worst case, the polling thread might die while the heartbeat thread is 
still up. Thus, the broker would not time out the client and no rebalance would 
be triggered, while at the same time the client is effectively dead, not making 
any progress on its assigned partitions.
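For illustration, the two timeouts involved can be seen side by side in a consumer configuration sketch (property names as in the 0.10.x consumer docs; broker address and group id are placeholders). Per the description above, against a 0.10.0 broker only the session timeout takes effect.

```java
// Sketch of the two timeout configs involved in this issue. Against a
// 0.10.0 broker, max.poll.interval.ms is silently ignored; only the
// session (heartbeat) timeout is enforced.
import java.util.Properties;

public class ConsumerTimeoutConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        // Enforced by any broker version via the heartbeat mechanism:
        props.put("session.timeout.ms", "10000");
        // Enforced only when the broker understands the KIP-62 protocol
        // (0.10.1+); ignored by a 0.10.0 broker as described above:
        props.put("max.poll.interval.ms", "300000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("max.poll.interval.ms"));
    }
}
```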





Re: [DISCUSS] KIP-138: Change punctuate semantics

2017-04-03 Thread Michal Borowiecki

Thanks Thomas,

I'm also wary of changing the existing semantics of punctuate, for 
backward compatibility reasons, although I like the conceptual 
simplicity of that option.


Adding a new method to me feels safer but, in a way, uglier. I added 
this to the KIP now as option (C).


The TimestampExtractor mechanism is actually more flexible, as it allows 
you to return any value, you're not limited to event time or system time 
(although I don't see an actual use case where you might need anything 
else than those two). Hence I also proposed the option to allow users 
to, effectively, decide what "stream time" is for them given the 
presence or absence of messages, much like they can decide what msg time 
means for them using the TimestampExtractor. What do you think about 
that? This is probably most flexible but also most complicated.


All comments appreciated.

Cheers,

Michal


On 03/04/17 19:23, Thomas Becker wrote:

Although I fully agree we need a way to trigger periodic processing
that is independent from whether and when messages arrive, I'm not sure
I like the idea of changing the existing semantics across the board.
What if we added an additional callback to Processor that can be
scheduled similarly to punctuate() but was always called at fixed, wall
clock based intervals? This way you wouldn't have to give up the notion
of stream time to be able to do periodic processing.

On Mon, 2017-04-03 at 10:34 +0100, Michal Borowiecki wrote:

Hi all,

I have created a draft for KIP-138: Change punctuate semantics
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%3A+Change+punctuate+semantics>.

Appreciating there can be different views on system-time vs event-
time
semantics for punctuation depending on use-case and the importance of
backwards compatibility of any such change, I've left it quite open
and
hope to fill in more info as the discussion progresses.

Thanks,
Michal

--


 Tommy Becker

 Senior Software Engineer

 O +1 919.460.4747

 tivo.com









Jenkins build is back to normal : kafka-trunk-jdk8 #1395

2017-04-03 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-04-03 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-4222.

Resolution: Fixed

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Matthias J. Sax
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}





[jira] [Reopened] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-04-03 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-4222:


> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Matthias J. Sax
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}





[jira] [Created] (KAFKA-5005) JoinIntegrationTest.testLeftKStreamKStream() fails occasionally

2017-04-03 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5005:
--

 Summary: JoinIntegrationTest.testLeftKStreamKStream() fails 
occasionally
 Key: KAFKA-5005
 URL: https://issues.apache.org/jira/browse/KAFKA-5005
 Project: Kafka
  Issue Type: Bug
  Components: streams, unit tests
Reporter: Matthias J. Sax
Assignee: Eno Thereska


{noformat}
java.lang.AssertionError: Condition not met within timeout 3. Expecting 1 
records from topic outputTopic while only received 0: []
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:265)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:247)
at 
org.apache.kafka.streams.integration.JoinIntegrationTest.checkResult(JoinIntegrationTest.java:170)
at 
org.apache.kafka.streams.integration.JoinIntegrationTest.runTest(JoinIntegrationTest.java:192)
at 
org.apache.kafka.streams.integration.JoinIntegrationTest.testLeftKStreamKStream(JoinIntegrationTest.java:250)
{noformat}





[GitHub] kafka pull request #2722: KAFKA-4878: Improved Invalid Connect Config Error ...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2722




[jira] [Resolved] (KAFKA-4878) Kafka Connect does not log connector configuration errors

2017-04-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-4878.
-
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 2722
[https://github.com/apache/kafka/pull/2722]

> Kafka Connect does not log connector configuration errors
> -
>
> Key: KAFKA-4878
> URL: https://issues.apache.org/jira/browse/KAFKA-4878
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Armin Braun
>Priority: Blocker
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> Currently, on connector configuration error, Kafka Connect (both distributed 
> and stand alone) logs:
> org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector 
> configuration is invalid (use the endpoint `/{connectorType}/config/validate` 
> to get a full list of errors)
> This is annoying because:
> 1. If I'm using stand-alone mode, I may have configured my connector via 
> configuration file and I don't want to know about the REST API at all.
> 2. The output of validate is rather annoying
> What I'd like to see in the output is:
> 1. number of errors in my configuration
> 2. at least one error, preferably all of them





[jira] [Commented] (KAFKA-4878) Kafka Connect does not log connector configuration errors

2017-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954085#comment-15954085
 ] 

ASF GitHub Bot commented on KAFKA-4878:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2722


> Kafka Connect does not log connector configuration errors
> -
>
> Key: KAFKA-4878
> URL: https://issues.apache.org/jira/browse/KAFKA-4878
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Armin Braun
>Priority: Blocker
> Fix For: 0.11.0.0, 0.10.2.1
>
>
> Currently, on connector configuration error, Kafka Connect (both distributed 
> and stand alone) logs:
> org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector 
> configuration is invalid (use the endpoint `/{connectorType}/config/validate` 
> to get a full list of errors)
> This is annoying because:
> 1. If I'm using stand-alone mode, I may have configured my connector via 
> configuration file and I don't want to know about the REST API at all.
> 2. The output of validate is rather annoying
> What I'd like to see in the output is:
> 1. number of errors in my configuration
> 2. at least one error, preferably all of them





[GitHub] kafka pull request #2797: KAFKA-4990: Add API stubs, config parameters, and ...

2017-04-03 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2797

KAFKA-4990: Add API stubs, config parameters, and request types



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
exactly-once-transactions-add-public-api

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2797


commit 6f2ec1ccafa304e62e3858ec7b24c902feb07ebb
Author: Matthias J. Sax 
Date:   2017-03-08T23:17:06Z

KAFKA-4990: Add API stubs, config parameters, and request types






[jira] [Commented] (KAFKA-4990) Add API stubs, config parameters, and request types

2017-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954131#comment-15954131
 ] 

ASF GitHub Bot commented on KAFKA-4990:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2797

KAFKA-4990: Add API stubs, config parameters, and request types



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
exactly-once-transactions-add-public-api

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2797.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2797


commit 6f2ec1ccafa304e62e3858ec7b24c902feb07ebb
Author: Matthias J. Sax 
Date:   2017-03-08T23:17:06Z

KAFKA-4990: Add API stubs, config parameters, and request types




> Add API stubs, config parameters, and request types
> ---
>
> Key: KAFKA-4990
> URL: https://issues.apache.org/jira/browse/KAFKA-4990
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>






[GitHub] kafka pull request #2797: KAFKA-4990: Add API stubs, config parameters, and ...

2017-04-03 Thread mjsax
Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/2797




[jira] [Commented] (KAFKA-4990) Add API stubs, config parameters, and request types

2017-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954134#comment-15954134
 ] 

ASF GitHub Bot commented on KAFKA-4990:
---

Github user mjsax closed the pull request at:

https://github.com/apache/kafka/pull/2797


> Add API stubs, config parameters, and request types
> ---
>
> Key: KAFKA-4990
> URL: https://issues.apache.org/jira/browse/KAFKA-4990
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>






Re: [VOTE] KIP-134: Delay initial consumer group rebalance

2017-04-03 Thread Onur Karaman
Delaying the SyncGroupRequest is not what I had in mind.

What I was thinking was essentially a client-side stabilization window
where the client does nothing other than participate in the group
membership protocol and wait a bit for the group to stabilize.

During this window, several rounds of rebalance can take place, clients
would participate in these rebalances (they'd get notified of the rebalance
from the heartbeats they've been sending during this stabilization window),
but they would effectively not run any
ConsumerRebalanceListener.onPartitionsAssigned or process messages until
the window has closed or rebalance finishes if the window ends during a
rebalance.

So something like:
T0: client A is processing messages
T1: new client B joins
T2: client A gets notified and rejoins the group.
T3: rebalance finishes with the group consisting of A and B. This is where
the stabilization window begins for both A and B. Stabilization window
duration is W.
T4: new client C joins.
T5: clients A and B get notified and they rejoin the group.
T6: rebalance finishes with the group consisting of A, B, and C.
T3+W: clients A, B, and C finally run their
ConsumerRebalanceListener.onPartitionsAssigned and begin processing
messages.

If T3+W is during the middle of a rebalance, then we wait until that
rebalance round finishes. Otherwise, we just run the
ConsumerRebalanceListener.onPartitionsAssigned and begin processing
messages.
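The timeline above can be sketched as a small piece of client-side state (illustrative names, not actual Kafka classes): the window opens at the first rebalance completion, and onPartitionsAssigned/processing is deferred until the window has elapsed and no rebalance is in flight.

```java
// Sketch of the client-side stabilization window described above.
// Hypothetical class; times are wall-clock milliseconds.
public class StabilizationWindow {
    private final long windowMs;
    private long windowStart = -1;          // set at the first rebalance completion (T3)
    private boolean rebalanceInFlight = false;

    StabilizationWindow(long windowMs) { this.windowMs = windowMs; }

    void onRebalanceStart() { rebalanceInFlight = true; }

    void onRebalanceComplete(long now) {
        rebalanceInFlight = false;
        if (windowStart < 0) windowStart = now;  // the window begins here
    }

    // May the consumer run onPartitionsAssigned and start processing?
    boolean mayStartProcessing(long now) {
        return windowStart >= 0
            && now >= windowStart + windowMs     // window W has elapsed
            && !rebalanceInFlight;               // and no rebalance is in progress
    }

    public static void main(String[] args) {
        StabilizationWindow w = new StabilizationWindow(100);
        w.onRebalanceStart();
        w.onRebalanceComplete(0);                       // T3: window opens, W = 100
        System.out.println(w.mayStartProcessing(50));   // false: still inside the window
        w.onRebalanceStart();                           // T4/T5: a new member joins
        System.out.println(w.mayStartProcessing(120));  // false: rebalance in flight
        w.onRebalanceComplete(130);                     // T6: group stable again
        System.out.println(w.mayStartProcessing(130));  // true: window elapsed, no rebalance
    }
}
```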

On Mon, Apr 3, 2017 at 11:40 AM, Becket Qin  wrote:

> Hey Onur,
>
> Are you suggesting letting the consumers hold back on sending the
> SyncGroupRequest on the first rebalance? I am not sure how exactly that
> would work, but it looks like having the group coordinator control the
> rebalance progress would be clearer and probably safer than letting the
> group members guess the state of the group. Can you elaborate a little bit
> on your idea?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Apr 3, 2017 at 8:16 AM, Onur Karaman  >
> wrote:
>
> > Hi Damian.
> >
> > After reading the discussion thread again, it still doesn't seem like the
> > thread discussed the option I mentioned earlier.
> >
> > What I had understood from the broker-side vs. client-side config
> > debate was that the client-side config from the discussion would cause a
> > wire format change, while the client-side config change that I had
> > suggested would not.
> >
> > I just want to make sure we don't accidentally skip past it due to a
> > potential misunderstanding.
> >
> > On Mon, Apr 3, 2017 at 8:10 AM, Bill Bejeck  wrote:
> >
> > > +1 (non-binding)
> > >
> > > On Mon, Apr 3, 2017 at 9:53 AM, Mathieu Fenniak <
> > > mathieu.fenn...@replicon.com> wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > This will be very helpful for me, looking forward to it! :-)
> > > >
> > > > On Thu, Mar 30, 2017 at 4:46 AM, Damian Guy 
> > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I'd like to start the voting thread on KIP-134:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 134%3A+Delay+initial+consumer+group+rebalance
> > > > >
> > > > > Thanks,
> > > > > Damian
> > > > >
> > > >
> > >
> >
>


[GitHub] kafka pull request #2798: KAFKA-4837: Fix class name comparison in connector...

2017-04-03 Thread kkonstantine
GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/2798

KAFKA-4837: Fix class name comparison in connector-plugins REST endpoint



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-4837-Config-validation-in-Connector-plugins-need-to-compare-against-both-canonical-and-simple-class-names

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2798.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2798


commit 3acacb85b819e989ab0423093c900e416e1e371d
Author: Konstantine Karantasis 
Date:   2017-04-03T20:42:37Z

KAFKA-4837: Fix class name comparison in connector-plugins REST endpoint






[jira] [Commented] (KAFKA-4837) Config validation in Connector plugins need to compare against both canonical and simple class names

2017-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954144#comment-15954144
 ] 

ASF GitHub Bot commented on KAFKA-4837:
---

GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/2798

KAFKA-4837: Fix class name comparison in connector-plugins REST endpoint



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-4837-Config-validation-in-Connector-plugins-need-to-compare-against-both-canonical-and-simple-class-names

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2798.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2798


commit 3acacb85b819e989ab0423093c900e416e1e371d
Author: Konstantine Karantasis 
Date:   2017-04-03T20:42:37Z

KAFKA-4837: Fix class name comparison in connector-plugins REST endpoint




> Config validation in Connector plugins need to compare against both canonical 
> and simple class names
> 
>
> Key: KAFKA-4837
> URL: https://issues.apache.org/jira/browse/KAFKA-4837
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
> Fix For: 0.10.2.1
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> A validation check in Connect's REST API that was added to validate that the 
> connector class name in the config matches the connector class name in the 
> request's URL is too strict by not considering both the simple and the 
> canonical name of the connector class. For instance, the following example 
> request: 
> {code}
> PUT /connector-plugins/FileStreamSinkConnector/config/validate/ HTTP/1.1
> Host: connect.example.com
> Accept: application/json
> {
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSinkConnector",
> "tasks.max": "1",
> "topics": "test-topic"
> }
> {code}
> returns a "Bad Request" response with error code "400".
> Currently the reasonable workaround is to exactly match the connector class 
> name in both places. The following will work: 
> {code}
> PUT 
> /connector-plugins/org.apache.kafka.connect.file.FileStreamSinkConnector/config/validate/
>  HTTP/1.1
> Host: connect.example.com
> Accept: application/json
> {
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSinkConnector",
> "tasks.max": "1",
> "topics": "test-topic"
> }
> {code}
> However, this is not flexible enough and also breaks several examples in 
> documentation. Validation should take into account both simple and canonical 
> class names. 
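Concretely, the comparison the fix needs could look like the following sketch (illustrative code, not the actual ConnectorPluginsResource implementation): accept either the fully qualified or the simple class name from the URL.

```java
// Sketch: match a class name from the request URL against both the
// fully qualified and the simple name of the connector class.
public class ClassNameMatch {
    static boolean matches(String fromUrl, Class<?> connectorClass) {
        return fromUrl.equals(connectorClass.getName())        // e.g. org.apache.kafka.connect.file.FileStreamSinkConnector
            || fromUrl.equals(connectorClass.getSimpleName()); // e.g. FileStreamSinkConnector
    }

    public static void main(String[] args) {
        // Demonstrated with java.lang.String in place of a connector class:
        System.out.println(matches("String", String.class));            // true
        System.out.println(matches("java.lang.String", String.class));  // true
        System.out.println(matches("Integer", String.class));           // false
    }
}
```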





[jira] [Created] (KAFKA-5006) KeyValueStore.put may throw exception unrelated to the current put attempt

2017-04-03 Thread JIRA
Xavier Léauté created KAFKA-5006:


 Summary: KeyValueStore.put may throw exception unrelated to the 
current put attempt
 Key: KAFKA-5006
 URL: https://issues.apache.org/jira/browse/KAFKA-5006
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0, 0.10.1.0, 0.10.0.0
Reporter: Xavier Léauté


It is possible for {{KeyValueStore.put(K key, V value)}} to throw an exception 
unrelated to the store in question. Due to [the way that 
{{RecordCollector.send}} is currently 
implemented|https://github.com/confluentinc/kafka/blob/3.2.x/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L76]
the exception thrown would be for any previous record produced by the stream 
task, possibly for a completely unrelated topic the same task is producing to.

This can be very confusing for someone attempting to correctly handle 
exceptions thrown by put(), as they would not be able to add any additional 
debugging information to understand the operation that caused the problem. 
Worse, such logging would likely confuse the user, since they might mislead 
themselves into thinking the changelog record created by calling put() caused 
the problem.

Given that there is likely no way for the user to recover from an exception 
thrown by an unrelated produce request, it is questionable whether we should 
even try to raise the exception at this level. A short-term fix would be to 
simply delegate this exception to the uncaught exception handler.
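A minimal sketch of the failure mode (hypothetical classes, not the actual RecordCollectorImpl): an asynchronous sender records a failure from an earlier record, and the next unrelated send surfaces it, so the caller attributes the error to the wrong record and topic.

```java
// Sketch of deferred async-error surfacing: the failure of one record is
// thrown on a later, unrelated send() call. Hypothetical classes.
public class DeferredErrorSketch {
    static class AsyncCollector {
        private Exception pendingError;  // failure reported for an earlier record

        void send(String topic, String value) {
            if (pendingError != null) {
                // Thrown here even though the failure belongs to a previous
                // record, possibly for a completely different topic.
                throw new RuntimeException("send failed", pendingError);
            }
            // Record accepted; in a real producer it is sent asynchronously.
        }

        // Simulates the producer's I/O thread reporting a failure later.
        void completeWithError(Exception e) { pendingError = e; }
    }

    public static void main(String[] args) {
        AsyncCollector collector = new AsyncCollector();
        collector.send("other-topic", "a");                        // accepted for now
        collector.completeWithError(new Exception("broker gone")); // async failure arrives
        try {
            collector.send("changelog", "b");  // an unrelated put() triggers the old error
        } catch (RuntimeException e) {
            System.out.println(e.getCause().getMessage()); // broker gone
        }
    }
}
```

This is why the caller of put() cannot add meaningful debugging context: the exception it sees describes a different, earlier produce request.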





[jira] [Updated] (KAFKA-5006) KeyValueStore.put may throw exception unrelated to the current put attempt

2017-04-03 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska updated KAFKA-5006:

Fix Version/s: 0.11.0.0

> KeyValueStore.put may throw exception unrelated to the current put attempt
> --
>
> Key: KAFKA-5006
> URL: https://issues.apache.org/jira/browse/KAFKA-5006
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.1.0, 0.10.2.0
>Reporter: Xavier Léauté
> Fix For: 0.11.0.0
>
>
> It is possible for {{KeyValueStore.put(K key, V value)}} to throw an 
> exception unrelated to the store in question. Due to [the way that 
> {{RecordCollector.send}} is currently 
> implemented|https://github.com/confluentinc/kafka/blob/3.2.x/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L76]
> the exception thrown would be for any previous record produced by the stream 
> task, possibly for a completely unrelated topic the same task is producing to.
> This can be very confusing for someone attempting to correctly handle 
> exceptions thrown by put(), as they would not be able to add any additional 
> debugging information to understand the operation that caused the problem. 
> Worse, such logging would likely confuse the user, since they might mislead 
> themselves into thinking the changelog record created by calling put() caused 
> the problem.
> Given that there is likely no way for the user to recover from an exception 
> thrown by an unrelated produce request, it is questionable whether we should 
> even try to raise the exception at this level. A short-term fix would be to 
> simply delegate this exception to the uncaught exception handler.





Re: [VOTE] KIP-112 - Handle disk failure for JBOD

2017-04-03 Thread radai
+1, LGTM

On Mon, Apr 3, 2017 at 9:49 AM, Dong Lin  wrote:

> Hi all,
>
> It seems that there is no further concern with the KIP-112. We would like
> to start the voting process. The KIP can be found at
> *https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 112%3A+Handle+disk+failure+for+JBOD
>  112%3A+Handle+disk+failure+for+JBOD>.*
>
> Thanks,
> Dong
>


[GitHub] kafka pull request #2799: Kafka-4990: Add API stubs, config parameters, and ...

2017-04-03 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2799

Kafka-4990: Add API stubs, config parameters, and request types



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-4990-add-api-stub-config-parameters-request-types

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2799.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2799


commit 92694921ffe244987c70c0c9fca660561c562dce
Author: Matthias J. Sax 
Date:   2017-03-08T23:17:06Z

KAFKA-4990: Add API stubs, config parameters, and request types

commit 61731e4e6d7a21f5c60c6c647cf7d34db573a885
Author: hachikuji 
Date:   2017-03-08T07:00:19Z

Exactly once transactions request types (#141)

commit c32096f7e7aa6836728016cb164c406ab5cb1669
Author: Guozhang Wang 
Date:   2017-03-24T18:46:14Z

change write txn request protocol

commit afb4e8d9cd61b970704496d0c9e3c5ec42810f47
Author: Guozhang Wang 
Date:   2017-03-24T22:42:53Z

fix unit tests

commit 1b61e735a3636869ba223a89fd6b3676dceaee9a
Author: Matthias J. Sax 
Date:   2017-03-31T00:09:21Z

Rebasing.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5007) Kafka Replica Fetcher Thread- Resource Leak

2017-04-03 Thread Joseph Aliase (JIRA)
Joseph Aliase created KAFKA-5007:


 Summary: Kafka Replica Fetcher Thread- Resource Leak
 Key: KAFKA-5007
 URL: https://issues.apache.org/jira/browse/KAFKA-5007
 Project: Kafka
  Issue Type: Bug
  Components: core, network
Affects Versions: 0.10.1.1
 Environment: Centos 7
Jave 8
Reporter: Joseph Aliase


Kafka runs out of open file descriptors when the system network interface is 
down.

Issue description:
We have a Kafka cluster of 5 nodes running version 0.10.1.1. The open file 
descriptor limit for the account running Kafka is set to 10.

During an upgrade, the network interface went down. The outage continued for 12 
hours and eventually all the brokers crashed with a java.io.IOException: Too 
many open files error.

We repeated the test in a lower environment and observed that the open socket 
count keeps increasing while the NIC is down.
We have around 13 topics with a max partition count of 120, and the number of 
replica fetcher threads is set to 8.

Using an internal monitoring tool we observed that the open socket descriptor 
count for the broker pid continued to increase while the NIC was down, leading 
to the open file descriptor error. 
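For reference, the descriptor growth described above can be sampled without external tooling on a Linux host. BROKER_PID is an assumption (substitute the real broker pid; the snippet falls back to the current shell so it runs anywhere):

```shell
# Count open file descriptors and open sockets for a process via /proc (Linux).
# BROKER_PID is assumed -- replace with the actual broker pid.
BROKER_PID="${BROKER_PID:-$$}"
fd_count=$(ls "/proc/$BROKER_PID/fd" | wc -l)
sock_count=$(ls -l "/proc/$BROKER_PID/fd" | grep -c 'socket:')
echo "pid=$BROKER_PID fds=$fd_count sockets=$sock_count"
```

Running this in a loop every few seconds while the interface is down makes the leak visible directly.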





[jira] [Created] (KAFKA-5008) Kafka-Clients not OSGi ready

2017-04-03 Thread Marc (JIRA)
Marc created KAFKA-5008:
---

 Summary: Kafka-Clients not OSGi ready
 Key: KAFKA-5008
 URL: https://issues.apache.org/jira/browse/KAFKA-5008
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.10.2.0
Reporter: Marc
Priority: Minor


The kafka-clients artifact does not provide OSGi metadata. This adds an 
additional barrier for OSGi developers to use the artifact since it has to be 
[wrapped|http://bnd.bndtools.org/chapters/390-wrapping.html].

The metadata can automatically be created using bnd.
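For illustration, a minimal bnd instruction file along these lines would let bnd derive the OSGi manifest headers (Import-Package is computed from the bytecode). The symbolic name and package patterns here are assumptions, not a tested configuration:

```properties
# Hypothetical bnd.bnd sketch for the kafka-clients jar.
Bundle-SymbolicName: org.apache.kafka.clients
Bundle-Version: 0.10.2.0
Export-Package: org.apache.kafka.clients.*, org.apache.kafka.common.*
```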





Build failed in Jenkins: kafka-0.10.2-jdk7 #118

2017-04-03 Thread Apache Jenkins Server
See 


Changes:

[cshapi] KAFKA-4878: Improved Invalid Connect Config Error Message

--
[...truncated 808.45 KB...]

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable 
STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable 
PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testLeftKStreamKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testOuterKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > 
testInnerKStreamKTable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[0] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[0] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[1] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[1] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[2] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[2] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[3] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[3] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[4] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[4] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
KTableKTableJoin[5] STARTED

org.apache.kafka.s

[jira] [Updated] (KAFKA-4990) Add API stubs, config parameters, and request types

2017-04-03 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4990:
---
Status: Patch Available  (was: Open)

> Add API stubs, config parameters, and request types
> ---
>
> Key: KAFKA-4990
> URL: https://issues.apache.org/jira/browse/KAFKA-4990
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.11.0.0
>
>






[GitHub] kafka pull request #2796: MINOR: Close the producer batch data stream when t...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2796




[GitHub] kafka pull request #2676: MINOR: Suppress an inappropriate warning in Mirror...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2676




[VOTE] KIP-135 : Send of null key to a compacted topic should throw non-retriable error back to user

2017-04-03 Thread Mayuresh Gharat
Hi All,

It seems that there is no further concern with the KIP-135. At this point
we would like to start the voting process. The KIP can be found at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-135+%3A+Send+of+null+key+to+a+compacted+topic+should+throw+non-retriable+error+back+to+user


Thanks,

Mayuresh


[jira] [Updated] (KAFKA-5004) poll() timeout not enforced when connecting to 0.10.0 broker

2017-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-5004:
-
Fix Version/s: (was: 0.10.2.0)
  Component/s: consumer

[~mjsax] Moved this out of 0.10.2.0 since it has already been released. If 
you're looking for it to make it to 0.10.2.1 you should ask [~gwenshap] about 
it (and figure out who will be fixing it).

> poll() timeout not enforced when connecting to 0.10.0 broker
> 
>
> Key: KAFKA-5004
> URL: https://issues.apache.org/jira/browse/KAFKA-5004
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Matthias J. Sax
>
> In 0.10.1, the heartbeat thread and the new poll timeout {{max.poll.interval.ms}} 
> were introduced via KIP-62. In 0.10.2, we added client-broker backward 
> compatibility.
> Now, if a 0.10.2 client connects to a 0.10.0 broker, the broker only 
> understands the heartbeat timeout but not the poll timeout, while the client 
> is still using the heartbeat background thread. Thus, the new client config 
> {{max.poll.interval.ms}} is ignored.
> In the worst case, the polling thread might die while the heartbeat thread is 
> still up. Thus, the broker would not time out the client and no rebalance 
> would be triggered, while at the same time the client is effectively dead, not 
> making any progress on its assigned partitions.





[jira] [Updated] (KAFKA-4972) Kafka 0.10.0 Found a corrupted index file during Kafka broker startup

2017-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4972:
-
Fix Version/s: (was: 0.10.2.0)
   (was: 0.10.1.1)
   (was: 0.10.0.1)
   (was: 0.10.0.0)
   (was: 0.10.1.0)

Removed a bunch of released versions from the fix version field.

> Kafka 0.10.0  Found a corrupted index file during Kafka broker startup
> --
>
> Key: KAFKA-4972
> URL: https://issues.apache.org/jira/browse/KAFKA-4972
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.0.0
> Environment: JDK: HotSpot  x64  1.7.0_80
> Tag: 0.10.0
>Reporter: fangjinuo
>Priority: Critical
> Fix For: 0.11.0.0, 0.12.0.0
>
> Attachments: Snap3.png
>
>
> After force-shutting down all Kafka brokers one by one and restarting them one 
> by one, one broker failed to start up.
> The following WARN-level log was found in the log file:
> found a corrupted index file,  .index , delete it  ...
> You can view details in the attachment.
> I looked up some code in the core module and found that the non-thread-safe 
> method LogSegment.append(offset, messages) has two callers:
> 1) Log.append(messages)  // this one holds a synchronized lock
> 2) LogCleaner.cleanInto(topicAndPartition, source, dest, map, retainDeletes, 
> messageFormatVersion)  // this one does not
> So I guess this may be the reason for the repeated offsets in the 0xx.log file 
> (the log segment's .log).
> Although this is just my inference, I hope this problem can be repaired quickly.
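The reporter's inference, that two unsynchronized callers of LogSegment.append can interleave, can be illustrated with a toy model. This is hypothetical code, not Kafka's: both callers read the next offset before either advances it, so the same offset is written twice.

```java
// Toy model (not Kafka code) of an unsynchronized read-then-write on the
// next offset. If two callers interleave between read() and appendAt(),
// both append a record at the same offset.
import java.util.ArrayList;
import java.util.List;

class SegmentSketch {
    private long nextOffset = 0;
    final List<Long> offsets = new ArrayList<>();

    long read() { return nextOffset; }      // step 1: observe the next offset

    void appendAt(long offset) {            // step 2: write and advance
        offsets.add(offset);
        nextOffset = offset + 1;
    }
}
```

With a lock around both steps (as the report says Log.append has), the read and write are atomic; without it (the LogCleaner path, per the report), the duplicated offsets the reporter observed become possible.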





[jira] [Updated] (KAFKA-4927) KStreamsTestDriver fails with NPE when KStream.to() sinks are used

2017-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4927:
-
Fix Version/s: (was: 0.10.2.0)

Removed already released version from fix version field.

> KStreamsTestDriver fails with NPE when KStream.to() sinks are used
> --
>
> Key: KAFKA-4927
> URL: https://issues.apache.org/jira/browse/KAFKA-4927
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Wim Van Leuven
>  Labels: test
>
> *Context*
> KStreamsTestDriver allows building integration tests of KStreams topologies. 
> This also includes topologies that sink data into outgoing topics by calling 
> the KStream.to() methods.
> *Problem*
> When a topic is added as a sink, the KStreamsTestDriver fails with 
> NullPointerExceptions. 
> *Solution*
> Fix the KStreamTestDriver.process() method to also look up a topic 
> by topicName as a sink when a source has not been found.
> *PullRequest*
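The proposed fix amounts to a fallback lookup; a minimal sketch (hypothetical helper, not the actual KStreamTestDriver code):

```java
import java.util.Map;

// Hypothetical sketch of the proposed fix: resolve a topic among the
// sources first, and fall back to the sinks instead of dereferencing null.
class TopicResolver {
    static <T> T resolve(String topicName, Map<String, T> sources, Map<String, T> sinks) {
        T node = sources.get(topicName);
        return node != null ? node : sinks.get(topicName);
    }
}
```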





[jira] [Resolved] (KAFKA-4766) Document lz4 and lz4hc in confluence

2017-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4766.
--
   Resolution: Fixed
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: (was: 0.8.2.0)

LGTM

> Document lz4 and lz4hc in confluence
> 
>
> Key: KAFKA-4766
> URL: https://issues.apache.org/jira/browse/KAFKA-4766
> Project: Kafka
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 0.8.2.0
>Reporter: Daniel Pinyol
>Assignee: Lee Dongjin
>
> https://cwiki.apache.org/confluence/display/KAFKA/Compression does not 
> mention that lz4 and lz4hc compressions are supported 





[jira] [Updated] (KAFKA-4832) kafka producer send Async message to the wrong IP cannot to stop producer.close()

2017-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4832:
-
Fix Version/s: (was: 0.8.2.2)

Removing invalid fix version, we'll need to revisit when this should be 
fixed/released.

> kafka producer send Async message to the wrong IP cannot to stop 
> producer.close()
> -
>
> Key: KAFKA-4832
> URL: https://issues.apache.org/jira/browse/KAFKA-4832
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.2
> Environment: JDK8 Eclipse Mars Win7
>Reporter: Wang Hong
>
> 1. I tried to send messages asynchronously to a wrong IP in a loop, 1400 times 
> per batch for 10 batches.
> 2. I use javaapi.kafkaproducer wrapped in a factory.
> 3. In each of the 10 batches I call Producer.Connected() and Producer.Closed().
> 4. I know I am ultimately sending messages to a wrong IP, but I noticed the 
> terminal was blocking; it cannot close normally.
> The function looks like this:
>   public static void go(int s) throws Exception {
>   KafkaService kf = new KafkaServiceImpl(); // init properties
>   for (int i = 0; i < 1400; i++) {
>   String msg = "a b 0a b c d e 0a b c d e 0" + s + "--" + i;
>   System.out.println(msg);
>   kf.push(msg); // producer.send()
>   }
>   kf.closeProducerFactory(); // producer.close()
>   System.out.println(s);
>   Thread.sleep(1000);
>   }
> kf.closeProducerFactory() calls producer.close(),
> but the async send kept waiting for the Kafka server, since I gave it a wrong 
> IP. I think waiting that long will cause problems for the whole system; it 
> occupies resources.
> Another problem: when I send Kafka messages to the correct IP using Runnable 
> and thread pools, everything works. I also used the loop example above.
> It raised an error saying it waited for 3 tries.
> I also configured 
> advertised.host.name=xxx.xxx.xxx.xxx
> advertised.port=9092
> Now I think it may not be able to handle that much concurrent volume at a time.
> Our system is over 1000 tps.
> Thank you.
> Resource Code part:
> package kafka.baseconfig;
> import java.util.Properties;
> import com.travelsky.util.ConFile;
> import kafka.javaapi.producer.Producer;
> import kafka.producer.KeyedMessage;
> import kafka.producer.ProducerConfig;
> /**
>  * Kafka factory pattern
>  * 
>  * 1. Replaces direct use of Producer. // not efficient for multi-threading.
>  * 2. Used in three steps: 
>  * ProducerFactory fac = new ProducerFactory();
>  * fac.openProducer(); -> initialize the object
>  * fac.push(msg); -> send the message
>  * fac.closeProducer(); -> close the object
>  * @author Wang Hong
>  *
>  */
> public class ProducerFactory {
>   protected Producer producer = null;
>   protected ConFile conf = null;
>   private Properties props = new Properties();
>   private String topic = null;
>   {
>   try {
>   conf = new ConFile("KafkaProperties.conf");
>   topic = conf.getString("topic");
>   if (conf == null) {
>   throw new Exception("problem with the Kafka config file");
>   }
>   } catch (Exception e) {
>   e.printStackTrace();
>   }
>   }
>   
>   /**
>* Send a message
>* @param msg
>*/
>   public void push(String msg) {
>   if (producer == null) {
>   throw new RuntimeException("producer instance is null");
>   }
>   KeyedMessage messageForSend = new 
> KeyedMessage(topic, msg);
>   producer.send(messageForSend);
>   }
>   
>   /**
>* Open the producer
>*/
>   public void openProducer() {
>   props.put("serializer.class", "kafka.serializer.StringEncoder");
>   props.put("metadata.broker.list", 
> conf.getString("kafkaserverurl"));
>   // asynchronous send
>   props.put("producer.type", conf.getString("synctype"));
>   // how many messages to send per batch
>   props.put("batch.num.messages", conf.getString("batchmsgnums"));
>   
>   //
>   props.put("request.required.acks", "1");
>   //
>   props.put("queue.enqueue.timeout.ms", "1");
>   //
>   props.put("request.timeout.ms", "1");
>   //
>   props.put("timeout.ms", "1");
>   //
>   props.put("reconnect.backoff.ms", "1");
>   //
>   props.put("retry.backoff.ms", "1");
>   //
>   props.put("message.send.max.retries", "1");
>   //
>   props.put("retry.backoff.ms", "1");
>   //
>   props.put("linger.ms", "1");
>   //
>   props.put("max.block.ms", "1");
>   //

[jira] [Updated] (KAFKA-4837) Config validation in Connector plugins need to compare against both canonical and simple class names

2017-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4837:
-
Reviewer: Ewen Cheslack-Postava
  Status: Patch Available  (was: Open)

> Config validation in Connector plugins need to compare against both canonical 
> and simple class names
> 
>
> Key: KAFKA-4837
> URL: https://issues.apache.org/jira/browse/KAFKA-4837
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
> Fix For: 0.10.2.1
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> A validation check in Connect's REST API that was added to validate that the 
> connector class name in the config matches the connector class name in the 
> request's URL is too strict by not considering both the simple and the 
> canonical name of the connector class. For instance, the following example 
> request: 
> {code}
> PUT /connector-plugins/FileStreamSinkConnector/config/validate/ HTTP/1.1
> Host: connect.example.com
> Accept: application/json
> {
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSinkConnector",
> "tasks.max": "1",
> "topics": "test-topic"
> }
> {code}
> returns a "Bad Request" response with error code "400".
> Currently the reasonable workaround is to exactly match the connector class 
> name in both places. The following will work: 
> {code}
> PUT 
> /connector-plugins/org.apache.kafka.connect.file.FileStreamSinkConnector/config/validate/
>  HTTP/1.1
> Host: connect.example.com
> Accept: application/json
> {
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSinkConnector",
> "tasks.max": "1",
> "topics": "test-topic"
> }
> {code}
> However, this is not flexible enough and also breaks several examples in 
> documentation. Validation should take into account both simple and canonical 
> class names. 
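The relaxed check being asked for amounts to accepting either name form; a minimal sketch (hypothetical helper, not the actual Connect validation code):

```java
// Hypothetical sketch: validate the class name from the request URL against
// both the simple and the canonical name of the configured connector class.
class ConnectorNameCheck {
    static boolean matches(String nameFromUrl, Class<?> connectorClass) {
        return nameFromUrl.equals(connectorClass.getSimpleName())
            || nameFromUrl.equals(connectorClass.getCanonicalName());
    }
}
```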





[jira] [Commented] (KAFKA-5004) poll() timeout not enforced when connecting to 0.10.0 broker

2017-04-03 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954390#comment-15954390
 ] 

Ismael Juma commented on KAFKA-5004:


This is by design. Do you have an alternative proposal?

> poll() timeout not enforced when connecting to 0.10.0 broker
> 
>
> Key: KAFKA-5004
> URL: https://issues.apache.org/jira/browse/KAFKA-5004
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Matthias J. Sax
>
> In 0.10.1, the heartbeat thread and the new poll timeout {{max.poll.interval.ms}} 
> were introduced via KIP-62. In 0.10.2, we added client-broker backward 
> compatibility.
> Now, if a 0.10.2 client connects to a 0.10.0 broker, the broker only 
> understands the heartbeat timeout but not the poll timeout, while the client 
> is still using the heartbeat background thread. Thus, the new client config 
> {{max.poll.interval.ms}} is ignored.
> In the worst case, the polling thread might die while the heartbeat thread is 
> still up. Thus, the broker would not time out the client and no rebalance 
> would be triggered, while at the same time the client is effectively dead, not 
> making any progress on its assigned partitions.





[jira] [Resolved] (KAFKA-4977) kafka-connect: fix findbugs issues in connect/runtime

2017-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4977.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

Issue resolved by pull request 2763
[https://github.com/apache/kafka/pull/2763]

> kafka-connect: fix findbugs issues in connect/runtime
> -
>
> Key: KAFKA-4977
> URL: https://issues.apache.org/jira/browse/KAFKA-4977
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.0.0
>
>
> kafka-connect: fix findbugs issues in connect/runtime





[GitHub] kafka pull request #2763: Kafka 4977

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2763




Build failed in Jenkins: kafka-trunk-jdk7 #2067

2017-04-03 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Close the producer batch append stream when the batch gets full

[ismael] MINOR: Suppress ProducerConfig warning in MirrorMaker

--
[...truncated 341.77 KB...]

kafka.server.DynamicConfigTest > shouldFailLeaderConfigsWithInvalidValues PASSED

kafka.server.DynamicConfigTest > shouldFailWhenChangingClientIdUnknownConfig 
STARTED

kafka.server.DynamicConfigTest > shouldFailWhenChangingClientIdUnknownConfig 
PASSED

kafka.server.DynamicConfigTest > shouldFailWhenChangingBrokerUnknownConfig 
STARTED

kafka.server.DynamicConfigTest > shouldFailWhenChangingBrokerUnknownConfig 
PASSED

kafka.server.DynamicConfigChangeTest > testProcessNotification STARTED

kafka.server.DynamicConfigChangeTest > testProcessNotification PASSED

kafka.server.DynamicConfigChangeTest > 
shouldParseWildcardReplicationQuotaProperties STARTED

kafka.server.DynamicConfigChangeTest > 
shouldParseWildcardReplicationQuotaProperties PASSED

kafka.server.DynamicConfigChangeTest > testDefaultClientIdQuotaConfigChange 
STARTED

kafka.server.DynamicConfigChangeTest > testDefaultClientIdQuotaConfigChange 
PASSED

kafka.server.DynamicConfigChangeTest > testQuotaInitialization STARTED

kafka.server.DynamicConfigChangeTest > testQuotaInitialization PASSED

kafka.server.DynamicConfigChangeTest > testUserQuotaConfigChange STARTED

kafka.server.DynamicConfigChangeTest > testUserQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testClientIdQuotaConfigChange STARTED

kafka.server.DynamicConfigChangeTest > testClientIdQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testUserClientIdQuotaChange STARTED

kafka.server.DynamicConfigChangeTest > testUserClientIdQuotaChange PASSED

kafka.server.DynamicConfigChangeTest > shouldParseReplicationQuotaProperties 
STARTED

kafka.server.DynamicConfigChangeTest > shouldParseReplicationQuotaProperties 
PASSED

kafka.server.DynamicConfigChangeTest > 
shouldParseRegardlessOfWhitespaceAroundValues STARTED

kafka.server.DynamicConfigChangeTest > 
shouldParseRegardlessOfWhitespaceAroundValues PASSED

kafka.server.DynamicConfigChangeTest > testDefaultUserQuotaConfigChange STARTED

kafka.server.DynamicConfigChangeTest > testDefaultUserQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > shouldParseReplicationQuotaReset STARTED

kafka.server.DynamicConfigChangeTest > shouldParseReplicationQuotaReset PASSED

kafka.server.DynamicConfigChangeTest > testDefaultUserClientIdQuotaConfigChange 
STARTED

kafka.server.DynamicConfigChangeTest > testDefaultUserClientIdQuotaConfigChange 
PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic 
STARTED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange STARTED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.ServerGenerateBrokerIdTest > testGetSequenceIdMethod STARTED

kafka.server.ServerGenerateBrokerIdTest > testGetSequenceIdMethod PASSED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseHostNameAndPortToZK 
STARTED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseHostNameAndPortToZK PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testValidCreateTopicsRequests 
PASSED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
STARTED

kafka.server.CreateTopicsRequestWithPolicyTest > testErrorCreateTopicsRequests 
PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWith

[jira] [Updated] (KAFKA-5004) poll() timeout not enforced when connecting to 0.10.0 broker

2017-04-03 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5004:
---
Affects Version/s: 0.10.2.0

> poll() timeout not enforced when connecting to 0.10.0 broker
> 
>
> Key: KAFKA-5004
> URL: https://issues.apache.org/jira/browse/KAFKA-5004
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>
> In 0.10.1, the heartbeat thread and the new poll timeout {{max.poll.interval.ms}} 
> were introduced via KIP-62. In 0.10.2, we added client-broker backward 
> compatibility.
> Now, if a 0.10.2 client connects to a 0.10.0 broker, the broker only 
> understands the heartbeat timeout but not the poll timeout, while the client 
> is still using the heartbeat background thread. Thus, the new client config 
> {{max.poll.interval.ms}} is ignored.
> In the worst case, the polling thread might die while the heartbeat thread is 
> still up. Thus, the broker would not time out the client and no rebalance 
> would be triggered, while at the same time the client is effectively dead, not 
> making any progress on its assigned partitions.





[jira] [Commented] (KAFKA-5004) poll() timeout not enforced when connecting to 0.10.0 broker

2017-04-03 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954476#comment-15954476
 ] 

Matthias J. Sax commented on KAFKA-5004:


I did not know that this is by design -- I talked to [~cmccabe] and he did not 
know either. IMHO, a "clean" solution would be to disable the heartbeat thread 
if the client connects to a {{0.10.0}} broker and to send heartbeats on 
{{poll()}} as the {{0.10.0}} consumer does. Not sure how complex this would be 
to do, though. [~cmccabe] had the idea to set a "flag" on the heartbeat thread 
each time {{poll()}} is called, and let the heartbeat thread stop if 
{{max.poll.interval.ms}} passed and the flag was not "renewed".
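The flag idea can be sketched like this (hypothetical names, not Kafka code; timestamps are injected so the logic is deterministic and testable):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the "renewable flag": poll() stamps a timestamp,
// and the heartbeat loop checks it before each heartbeat, stopping itself
// once max.poll.interval.ms has elapsed without a renewal.
class HeartbeatWatchdog {
    private final long maxPollIntervalMs;
    private final AtomicLong lastPollMs;

    HeartbeatWatchdog(long maxPollIntervalMs, long nowMs) {
        this.maxPollIntervalMs = maxPollIntervalMs;
        this.lastPollMs = new AtomicLong(nowMs);
    }

    void onPoll(long nowMs) { lastPollMs.set(nowMs); }  // "renew" the flag

    // The heartbeat loop would exit when this returns false, so the 0.10.0
    // broker eventually times out the session and triggers a rebalance.
    boolean shouldKeepHeartbeating(long nowMs) {
        return nowMs - lastPollMs.get() <= maxPollIntervalMs;
    }
}
```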

> poll() timeout not enforced when connecting to 0.10.0 broker
> 
>
> Key: KAFKA-5004
> URL: https://issues.apache.org/jira/browse/KAFKA-5004
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>
> In 0.10.1, the heartbeat thread and the new poll timeout {{max.poll.interval.ms}} 
> were introduced via KIP-62. In 0.10.2, we added client-broker backward 
> compatibility.
> Now, if a 0.10.2 client connects to a 0.10.0 broker, the broker only 
> understands the heartbeat timeout but not the poll timeout, while the client 
> is still using the heartbeat background thread. Thus, the new client config 
> {{max.poll.interval.ms}} is ignored.
> In the worst case, the polling thread might die while the heartbeat thread is 
> still up. Thus, the broker would not time out the client and no rebalance 
> would be triggered, while at the same time the client is effectively dead, not 
> making any progress on its assigned partitions.





Jenkins build is back to normal : kafka-trunk-jdk8 #1398

2017-04-03 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk7 #2068

2017-04-03 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #2732: KAFKA-4855 Struct SchemaBuilder should not allow d...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2732



