[jira] [Commented] (KAFKA-3129) Console producer issue when request-required-acks=0

2017-06-10 Thread Pankaj (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045812#comment-16045812
 ] 

Pankaj commented on KAFKA-3129:
---

[~cotedm], [~ijuma], [~vahid] I am also facing the same issue, where my Kafka 
console producer fails to write all data. Please find below the steps to reproduce.

1- Make a text file with 800 records, one per line: "Message 1" on line 1, 
"Message 2" on line 2, and so on up to "Message 800".
2- # start zookeeper server
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties

3-# start broker
.\bin\windows\kafka-server-start.bat .\config\server.properties 

4-# create topic “test”
.\bin\windows\kafka-topics.bat --create --topic test --zookeeper localhost:2181 
--partitions 1 --replication-factor 1

5-#start consumer
.\bin\windows\kafka-console-consumer.bat --topic test --zookeeper localhost:2181

6- # send the file to the producer
.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 
test < my_file.txt

I have executed all of the above steps with the following configuration (Windows 7, 
kafka_2.11-0.10.0.1):
1- Default Kafka configuration - NOK (sometimes the consumer receives 366 
messages, sometimes 700).
2- Updating producer.properties with acks="1" or acks="all" - neither worked, 
still NOK (sometimes 366 messages are consumed, sometimes 700).

Please suggest whether this issue has been fixed. I'm facing a critical problem in 
production.
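For what it's worth, the reported behaviour is consistent with acks=0 semantics: the producer fires and forgets, so messages lost in transit produce no error on the client side. A toy model of the difference (plain Python, purely illustrative, not Kafka client code):

```python
# Toy model of acks=0 vs acks>=1 (illustrative only, not the Kafka client).
def produce(messages, acks, lossy_indexes):
    """Simulate sends where some transmissions are silently dropped."""
    delivered, detected_failures = [], 0
    for i, msg in enumerate(messages):
        lost = i in lossy_indexes
        if not lost:
            delivered.append(msg)
        if acks == 0:
            continue                 # fire and forget: loss goes unnoticed
        if lost:
            detected_failures += 1   # acks>=1: the client sees the failure
    return delivered, detected_failures

# With acks=0 the client believes all 5 sends succeeded even though 2 were lost.
d0, f0 = produce(range(5), acks=0, lossy_indexes={1, 3})
d1, f1 = produce(range(5), acks=1, lossy_indexes={1, 3})
```

This is why changing acks can surface errors but not by itself guarantee delivery; with acks>=1 the client at least learns which sends failed and can retry.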

> Console producer issue when request-required-acks=0
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Vahid Hashemian
>Assignee: Dustin Cote
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. And about half the times it 
> goes all the way to 1,000,000. But, in other cases, it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-06-10 Thread Guozhang Wang
Bharat,

I think we already have 3 committers voted on this KIP, could you conclude
the thread?


Guozhang


On Fri, Jun 2, 2017 at 3:44 PM, Jason Gustafson  wrote:

> Thanks. +1
>
> On Thu, Jun 1, 2017 at 9:40 PM, Matthias J. Sax 
> wrote:
>
> > +1
> >
> > Thanks for updating the KIP!
> >
> > -Matthias
> >
> > On 6/1/17 6:18 PM, Bill Bejeck wrote:
> > > +1
> > >
> > > Thanks,
> > > Bill
> > >
> > > On Thu, Jun 1, 2017 at 7:45 PM, Guozhang Wang 
> > wrote:
> > >
> > >> +1 again. Thanks.
> > >>
> > >> On Tue, May 30, 2017 at 1:46 PM, BigData dev  >
> > >> wrote:
> > >>
> > >>> Hi All,
> > >>> Updated the KIP, as the consumer configurations are required for both
> > >> Admin
> > >>> Client and Consumer in Stream reset tool. Updated the KIP to use
> > >>> command-config option, similar to other tools like
> > >> kafka-consumer-groups.sh
> > >>>
> > >>>
> > >>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> > >>> 157+-+Add+consumer+config+options+to+streams+reset+tool
> > >>>  > >>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
> > >>>
> > >>>
> > >>> So, starting the voting process again for further inputs.
> > >>>
> > >>> This vote will run for a minimum of 72 hours.
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Bharat
> > >>>
> > >>>
> > >>>
> > >>> On Tue, May 30, 2017 at 1:18 PM, Guozhang Wang 
> > >> wrote:
> > >>>
> >  +1. Thanks!
> > 
> >  On Tue, May 16, 2017 at 1:12 AM, Eno Thereska <
> eno.there...@gmail.com
> > >
> >  wrote:
> > 
> > > +1 thanks.
> > >
> > > Eno
> > >> On 16 May 2017, at 04:20, BigData dev 
> > >>> wrote:
> > >>
> > >> Hi All,
> > >> Given the simple and non-controversial nature of the KIP, I would
> > >>> like
> >  to
> > >> start the voting process for KIP-157: Add consumer config options
> > >> to
> > >> streams reset tool
> > >>
> > >> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-
> > > +Add+consumer+config+options+to+streams+reset+tool
> > >>  > > +Add+consumer+config+options+to+streams+reset+tool>*
> > >>
> > >>
> > >> The vote will run for a minimum of 72 hours.
> > >>
> > >> Thanks,
> > >>
> > >> Bharat
> > >
> > >
> > 
> > 
> >  --
> >  -- Guozhang
> > 
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> -- Guozhang
> > >>
> > >
> >
> >
>



-- 
-- Guozhang


[GitHub] kafka pull request #3297: KAFKA-5427: Transactional producer should allow Fi...

2017-06-10 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3297

KAFKA-5427: Transactional producer should allow FindCoordinator in error 
state



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5427

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3297.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3297


commit 76ffa96261084474cde0db350b96d8e4df9cbc55
Author: Jason Gustafson 
Date:   2017-06-10T23:31:32Z

KAFKA-5427: Transactional producer should allow FindCoordinator in error 
state




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5427) Transactional producer cannot find coordinator when trying to abort transaction after error

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045748#comment-16045748
 ] 

ASF GitHub Bot commented on KAFKA-5427:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3297

KAFKA-5427: Transactional producer should allow FindCoordinator in error 
state



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-5427

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3297.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3297


commit 76ffa96261084474cde0db350b96d8e4df9cbc55
Author: Jason Gustafson 
Date:   2017-06-10T23:31:32Z

KAFKA-5427: Transactional producer should allow FindCoordinator in error 
state




> Transactional producer cannot find coordinator when trying to abort 
> transaction after error
> ---
>
> Key: KAFKA-5427
> URL: https://issues.apache.org/jira/browse/KAFKA-5427
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> It can happen that we receive an abortable error while we are already 
> aborting a transaction. In this case, we have an EndTxnRequest queued for 
> sending when we transition to ABORTABLE_ERROR. It could be that we need to 
> find the coordinator before sending this EndTxnRequest. The problem is that 
> we will fail even the FindCoordinatorRequest because we are in an error 
> state.  This causes the following endless loop:
> {code}
> [2017-06-10 19:29:33,436] DEBUG [TransactionalId my-first-transactional-id] 
> Enqueuing transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-fi
> rst-transactional-id, coordinatorType=TRANSACTION) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,436] DEBUG [TransactionalId my-first-transactional-id] 
> Enqueuing transactional request (type=EndTxnRequest, 
> transactionalId=my-first-tran
> sactional-id, producerId=1000, producerEpoch=0, result=ABORT) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,536] TRACE [TransactionalId my-first-transactional-id] 
> Not sending transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-
> first-transactional-id, coordinatorType=TRANSACTION) because we are in an 
> error state (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,637] TRACE [TransactionalId my-first-transactional-id] 
> Request (type=EndTxnRequest, transactionalId=my-first-transactional-id, 
> producerId
> =1000, producerEpoch=0, result=ABORT) dequeued for sending 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,637] DEBUG [TransactionalId my-first-transactional-id] 
> Enqueuing transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-fi
> rst-transactional-id, coordinatorType=TRANSACTION) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,637] DEBUG [TransactionalId my-first-transactional-id] 
> Enqueuing transactional request (type=EndTxnRequest, 
> transactionalId=my-first-tran
> sactional-id, producerId=1000, producerEpoch=0, result=ABORT) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,737] TRACE [TransactionalId my-first-transactional-id] 
> Not sending transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-
> first-transactional-id, coordinatorType=TRANSACTION) because we are in an 
> error state (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,837] TRACE [TransactionalId my-first-transactional-id] 
> Request (type=EndTxnRequest, transactionalId=my-first-transactional-id, 
> producerId
> =1000, producerEpoch=0, result=ABORT) dequeued for sending 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,838] DEBUG [TransactionalId my-first-transactional-id] 
> Enqueuing transactional request (type=FindCoordinatorRequest, 
> coordinatorKey=my-fi
> rst-transactional-id, coordinatorType=TRANSACTION) 
> (org.apache.kafka.clients.producer.internals.TransactionManager)
> [2017-06-10 19:29:33,838] DEBUG [TransactionalId my-first-transactional-id] 
> Enqueuing transactional request (type=EndTxnRequest, 
> transactionalId=my-first-tran
> sactional-id, producerId=1000, producerEpoch=0, result=ABORT) 
> (org.apache.kafka.clients.p

Jenkins build is back to normal : kafka-trunk-jdk7 #2380

2017-06-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5427) Transactional producer cannot find coordinator when trying to abort transaction after error

2017-06-10 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5427:
--

 Summary: Transactional producer cannot find coordinator when 
trying to abort transaction after error
 Key: KAFKA-5427
 URL: https://issues.apache.org/jira/browse/KAFKA-5427
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson
Priority: Blocker
 Fix For: 0.11.0.0


It can happen that we receive an abortable error while we are already aborting 
a transaction. In this case, we have an EndTxnRequest queued for sending when 
we transition to ABORTABLE_ERROR. It could be that we need to find the 
coordinator before sending this EndTxnRequest. The problem is that we will fail 
even the FindCoordinatorRequest because we are in an error state.  This causes 
the following endless loop:

{code}
[2017-06-10 19:29:33,436] DEBUG [TransactionalId my-first-transactional-id] 
Enqueuing transactional request (type=FindCoordinatorRequest, 
coordinatorKey=my-fi
rst-transactional-id, coordinatorType=TRANSACTION) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,436] DEBUG [TransactionalId my-first-transactional-id] 
Enqueuing transactional request (type=EndTxnRequest, 
transactionalId=my-first-tran
sactional-id, producerId=1000, producerEpoch=0, result=ABORT) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,536] TRACE [TransactionalId my-first-transactional-id] Not 
sending transactional request (type=FindCoordinatorRequest, coordinatorKey=my-
first-transactional-id, coordinatorType=TRANSACTION) because we are in an error 
state (org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,637] TRACE [TransactionalId my-first-transactional-id] 
Request (type=EndTxnRequest, transactionalId=my-first-transactional-id, 
producerId
=1000, producerEpoch=0, result=ABORT) dequeued for sending 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,637] DEBUG [TransactionalId my-first-transactional-id] 
Enqueuing transactional request (type=FindCoordinatorRequest, 
coordinatorKey=my-fi
rst-transactional-id, coordinatorType=TRANSACTION) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,637] DEBUG [TransactionalId my-first-transactional-id] 
Enqueuing transactional request (type=EndTxnRequest, 
transactionalId=my-first-tran
sactional-id, producerId=1000, producerEpoch=0, result=ABORT) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,737] TRACE [TransactionalId my-first-transactional-id] Not 
sending transactional request (type=FindCoordinatorRequest, coordinatorKey=my-
first-transactional-id, coordinatorType=TRANSACTION) because we are in an error 
state (org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,837] TRACE [TransactionalId my-first-transactional-id] 
Request (type=EndTxnRequest, transactionalId=my-first-transactional-id, 
producerId
=1000, producerEpoch=0, result=ABORT) dequeued for sending 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,838] DEBUG [TransactionalId my-first-transactional-id] 
Enqueuing transactional request (type=FindCoordinatorRequest, 
coordinatorKey=my-fi
rst-transactional-id, coordinatorType=TRANSACTION) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,838] DEBUG [TransactionalId my-first-transactional-id] 
Enqueuing transactional request (type=EndTxnRequest, 
transactionalId=my-first-tran
sactional-id, producerId=1000, producerEpoch=0, result=ABORT) 
(org.apache.kafka.clients.producer.internals.TransactionManager)
[2017-06-10 19:29:33,938] TRACE [TransactionalId my-first-transactional-id] Not 
sending transactional request (type=FindCoordinatorRequest, coordinatorKey=my-
first-transactional-id, coordinatorType=TRANSACTION) because we are in an error 
state (org.apache.kafka.clients.producer.internals.TransactionManager)
{code}

A couple suggested improvements:

1. We should allow FindCoordinator requests regardless of the transaction state.
2. It is a bit confusing that we allow EndTxnRequest to be sent in both the 
ABORTABLE_ERROR and the ABORTING_TRANSACTION states. Perhaps we should only 
allow EndTxnRequest to be sent in ABORTING_TRANSACTION. If we hit an abortable 
error and we are already aborting, then we should just stay in 
ABORTING_TRANSACTION and perhaps log a warning.
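The two suggestions above can be sketched as a request gate (a simplified Python model; the state and request names follow the description, but everything else is illustrative, not the actual TransactionManager code):

```python
# Simplified request gate for the transaction manager (illustrative only).
ERROR_STATES = {"ABORTABLE_ERROR", "FATAL_ERROR"}

def may_send(request_type, state):
    if request_type == "FindCoordinatorRequest":
        # Suggestion 1: always allowed, so an aborting producer can
        # re-locate its coordinator even while in an error state.
        return True
    if request_type == "EndTxnRequest":
        # Suggestion 2: only send EndTxn while actually aborting; in
        # ABORTABLE_ERROR we would stay put and log a warning instead.
        return state == "ABORTING_TRANSACTION"
    return state not in ERROR_STATES
```

Before the fix, FindCoordinator was blocked in the error state while EndTxn kept being re-enqueued, which is exactly the endless loop shown in the log above.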



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: KIP Permission Request

2017-06-10 Thread Guozhang Wang
Added. Thanks.

Guozhang

On Sat, Jun 10, 2017 at 1:28 AM, Grant Neale 
wrote:

> Good afternoon,
>
>
> I am writing to request permission to create Kafka Improvement Proposal
> (KIP) pages on the Apache Kafka confluence.
>
>
> My wiki username is grantne...@hotmail.com.
>
>
> The KIP relates to a lag-based partition assignment strategy, similar to
> that proposed here.
>
>
> Regards,
>
> Grant Neale
>



-- 
-- Guozhang


[jira] [Commented] (KAFKA-4661) Improve test coverage UsePreviousTimeOnInvalidTimestamp

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045721#comment-16045721
 ] 

ASF GitHub Bot commented on KAFKA-4661:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3288


> Improve test coverage UsePreviousTimeOnInvalidTimestamp
> ---
>
> Key: KAFKA-4661
> URL: https://issues.apache.org/jira/browse/KAFKA-4661
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> Exception branch not tested



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3288: KAFKA-4661: Improve test coverage UsePreviousTimeO...

2017-06-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3288


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4661) Improve test coverage UsePreviousTimeOnInvalidTimestamp

2017-06-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4661.
--
   Resolution: Fixed
Fix Version/s: 0.11.1.0

Issue resolved by pull request 3288
[https://github.com/apache/kafka/pull/3288]

> Improve test coverage UsePreviousTimeOnInvalidTimestamp
> ---
>
> Key: KAFKA-4661
> URL: https://issues.apache.org/jira/browse/KAFKA-4661
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> Exception branch not tested



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3123) Follower Broker cannot start if offsets are already out of range

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045695#comment-16045695
 ] 

ASF GitHub Bot commented on KAFKA-3123:
---

GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/3296

KAFKA-3123: Follower Broker cannot start if offsets are already out of range

From https://github.com/apache/kafka/pull/1716#discussion_r112000498, 
ensure the cleaner is restarted if Log.truncateTo throws
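The fix boils down to resuming the cleaner in a finally-style block, so an exception from the truncation cannot leave cleaning paused forever (a Python sketch with toy names; the real change is in Kafka's Scala code):

```python
# Illustrative sketch: restart the cleaner even when truncation throws.
class ToyCleaner:
    def __init__(self):
        self.paused = False

    def abort_and_pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

def truncate_with_cleaner(cleaner, truncate):
    cleaner.abort_and_pause()
    try:
        truncate()             # may raise, e.g. Log.truncateTo failing
    finally:
        cleaner.resume()       # without this, a throw leaves cleaning paused
```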

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-3123

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3296.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3296


commit e12690320d68c7686ecf9ceebe53a4498b8a5f0d
Author: Mickael Maison 
Date:   2017-06-10T18:49:43Z

KAFKA-3123: Follower Broker cannot start if offsets are already out of range




> Follower Broker cannot start if offsets are already out of range
> 
>
> Key: KAFKA-3123
> URL: https://issues.apache.org/jira/browse/KAFKA-3123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, replication
>Affects Versions: 0.9.0.0
>Reporter: Soumyajit Sahu
>Assignee: Soumyajit Sahu
>Priority: Critical
>  Labels: patch
> Fix For: 0.11.0.1
>
> Attachments: 
> 0001-Fix-Follower-crashes-when-offset-out-of-range-during.patch
>
>
> I was trying to upgrade our test Windows cluster from 0.8.1.1 to 0.9.0 one 
> machine at a time. Our logs have just 2 hours of retention. I had re-imaged 
> the test machine under consideration, and got the following error in loop 
> after starting afresh with 0.9.0 broker:
> [2016-01-19 13:57:28,809] WARN [ReplicaFetcherThread-1-169595708], Replica 
> 15588 for partition [EventLogs4,1] reset its fetch offset from 0 to 
> current leader 169595708's start offset 334086 
> (kafka.server.ReplicaFetcherThread)
> [2016-01-19 13:57:28,809] ERROR [ReplicaFetcherThread-1-169595708], Error 
> getting offset for partition [EventLogs4,1] to broker 169595708 
> (kafka.server.ReplicaFetcherThread)
> java.lang.IllegalStateException: Compaction for partition [EXO_EventLogs4,1] 
> cannot be aborted and paused since it is in LogCleaningPaused state.
>   at 
> kafka.log.LogCleanerManager$$anonfun$abortAndPauseCleaning$1.apply$mcV$sp(LogCleanerManager.scala:149)
>   at 
> kafka.log.LogCleanerManager$$anonfun$abortAndPauseCleaning$1.apply(LogCleanerManager.scala:140)
>   at 
> kafka.log.LogCleanerManager$$anonfun$abortAndPauseCleaning$1.apply(LogCleanerManager.scala:140)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.log.LogCleanerManager.abortAndPauseCleaning(LogCleanerManager.scala:140)
>   at kafka.log.LogCleaner.abortAndPauseCleaning(LogCleaner.scala:141)
>   at kafka.log.LogManager.truncateFullyAndStartAt(LogManager.scala:304)
>   at 
> kafka.server.ReplicaFetcherThread.handleOffsetOutOfRange(ReplicaFetcherThread.scala:185)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:152)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:122)
>   at scala.Option.foreach(Option.scala:236)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:122)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:120)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>   at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>   at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:120)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:120)
>   at 
> kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:120)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
>   at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.sca

[GitHub] kafka pull request #3296: KAFKA-3123: Follower Broker cannot start if offset...

2017-06-10 Thread mimaison
GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/3296

KAFKA-3123: Follower Broker cannot start if offsets are already out of range

From https://github.com/apache/kafka/pull/1716#discussion_r112000498, 
ensure the cleaner is restarted if Log.truncateTo throws

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-3123

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3296.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3296


commit e12690320d68c7686ecf9ceebe53a4498b8a5f0d
Author: Mickael Maison 
Date:   2017-06-10T18:49:43Z

KAFKA-3123: Follower Broker cannot start if offsets are already out of range




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5418) ZkUtils.getAllPartitions() may fail if a topic is marked for deletion

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045678#comment-16045678
 ] 

ASF GitHub Bot commented on KAFKA-5418:
---

GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/3295

KAFKA-5418: ZkUtils.getAllPartitions() may fail if a topic is marked for 
deletion

Skip topics that don't have any partitions in zkUtils.getAllPartitions()

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-5418

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3295.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3295


commit 0aeca093c2acf47b8d4fa01b68eaef79f625e091
Author: Mickael Maison 
Date:   2017-06-10T20:10:40Z

KAFKA-5418: ZkUtils.getAllPartitions() may fail if a topic is marked for 
deletion




> ZkUtils.getAllPartitions() may fail if a topic is marked for deletion
> -
>
> Key: KAFKA-5418
> URL: https://issues.apache.org/jira/browse/KAFKA-5418
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1, 0.10.2.1
>Reporter: Edoardo Comar
>
> Running {{ZkUtils.getAllPartitions()}} on a cluster which had a topic stuck 
> in the 'marked for deletion' state 
> so it was a child of {{/brokers/topics}}
> but it had no children, i.e. the path {{/brokers/topics/thistopic/partitions}}
> did not exist, throws a ZkNoNodeException while iterating:
> {noformat}
> rg.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /brokers/topics/xyzblahfoo/partitions
>   at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:995)
>   at org.I0Itec.zkclient.ZkClient.getChildren(ZkClient.java:675)
>   at org.I0Itec.zkclient.ZkClient.getChildren(ZkClient.java:671)
>   at kafka.utils.ZkUtils.getChildren(ZkUtils.scala:537)
>   at 
> kafka.utils.ZkUtils$$anonfun$getAllPartitions$1.apply(ZkUtils.scala:817)
>   at 
> kafka.utils.ZkUtils$$anonfun$getAllPartitions$1.apply(ZkUtils.scala:816)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.ZkUtils.getAllPartitions(ZkUtils.scala:816)
> ...
>   at java.lang.Thread.run(Thread.java:809)
> Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
> KeeperErrorCode = NoNode for /brokers/topics/xyzblahfoo/partitions
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
>   at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1500)
>   at org.I0Itec.zkclient.ZkConnection.getChildren(ZkConnection.java:114)
>   at org.I0Itec.zkclient.ZkClient$4.call(ZkClient.java:678)
>   at org.I0Itec.zkclient.ZkClient$4.call(ZkClient.java:675)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:985)
>   ... 
> {noformat}
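The fix described in the pull request amounts to skipping topics whose partitions node is absent instead of letting the iteration throw (a sketch against a toy ZooKeeper tree in Python; not ZkUtils itself):

```python
# Toy ZK tree: path -> list of children; a missing key models NoNode.
def get_all_partitions(tree):
    partitions = set()
    for topic in tree.get("/brokers/topics") or []:
        children = tree.get("/brokers/topics/%s/partitions" % topic)
        if children is None:
            continue   # topic marked for deletion: no partitions node
        for p in children:
            partitions.add((topic, int(p)))
    return partitions

tree = {
    "/brokers/topics": ["good", "xyzblahfoo"],
    "/brokers/topics/good/partitions": ["0", "1"],
    # "/brokers/topics/xyzblahfoo/partitions" is absent: deletion in progress
}
```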



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3295: KAFKA-5418: ZkUtils.getAllPartitions() may fail if...

2017-06-10 Thread mimaison
GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/3295

KAFKA-5418: ZkUtils.getAllPartitions() may fail if a topic is marked for 
deletion

Skip topics that don't have any partitions in zkUtils.getAllPartitions()

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-5418

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3295.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3295


commit 0aeca093c2acf47b8d4fa01b68eaef79f625e091
Author: Mickael Maison 
Date:   2017-06-10T20:10:40Z

KAFKA-5418: ZkUtils.getAllPartitions() may fail if a topic is marked for 
deletion




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5145) Remove task close() call from closeNonAssignedSuspendedTasks method

2017-06-10 Thread Evgeny Veretennikov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045676#comment-16045676
 ] 

Evgeny Veretennikov commented on KAFKA-5145:


It seems like ProcessorNode.close() method isn't actually called from 
StreamThread.suspendTasksAndState().

> Remove task close() call from closeNonAssignedSuspendedTasks method
> ---
>
> Key: KAFKA-5145
> URL: https://issues.apache.org/jira/browse/KAFKA-5145
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Narendra Kumar
>  Labels: newbie
> Attachments: BugTest.java, DebugTransformer.java, logs.txt
>
>
> While rebalancing ProcessorNode.close() can be called twice, once  from 
> StreamThread.suspendTasksAndState() and once from  
> StreamThread.closeNonAssignedSuspendedTasks(). If ProcessorNode.close() 
> throws some exception because of calling close() multiple times( i.e. 
> IllegalStateException from  some KafkaConsumer instance being used by some 
> processor for some lookup), we fail to close the task's state manager ( i.e. 
> call to task.closeStateManager(true); fails).  After rebalance, if the same 
> task id is launched on same application instance but in different thread then 
> the task get stuck because it fails to get lock to the task's state directory.
> Since processor close() is already called from 
> StreamThread.suspendTasksAndState() we don't need to call again from 
> StreamThread.closeNonAssignedSuspendedTasks().
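Either removing the second call, as proposed, or making close() idempotent would avoid the stuck state-directory lock; a guard sketch (illustrative Python, not the Streams code):

```python
class ToyTask:
    def __init__(self):
        self.closed = False
        self.state_released = False

    def close(self):
        # Idempotent close: a second call (e.g. from
        # closeNonAssignedSuspendedTasks after suspendTasksAndState) is a
        # no-op instead of an exception that skips releasing the state lock.
        if self.closed:
            return
        self.closed = True
        self.state_released = True   # stand-in for closeStateManager()

t = ToyTask()
t.close()
t.close()   # second call during rebalance: no exception, state still released
```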



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-4653) Improve test coverage of RocksDBStore

2017-06-10 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov reassigned KAFKA-4653:
-

Assignee: Jeyhun Karimov

> Improve test coverage of RocksDBStore
> -
>
> Key: KAFKA-4653
> URL: https://issues.apache.org/jira/browse/KAFKA-4653
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> {{putAll}} - not covered
> {{putInternal}} - exceptions





[jira] [Commented] (KAFKA-4653) Improve test coverage of RocksDBStore

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045635#comment-16045635
 ] 

ASF GitHub Bot commented on KAFKA-4653:
---

GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3294

KAFKA-4653: Improve test coverage of RocksDBStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4653

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3294.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3294


commit 77e65eaa07855651cf4449de2cca074a9fe05ece
Author: Jeyhun Karimov 
Date:   2017-06-10T17:54:27Z

RocksDBStore putAll covered




> Improve test coverage of RocksDBStore
> -
>
> Key: KAFKA-4653
> URL: https://issues.apache.org/jira/browse/KAFKA-4653
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> {{putAll}} - not covered
> {{putInternal}} - exceptions





[GitHub] kafka pull request #3294: KAFKA-4653: Improve test coverage of RocksDBStore

2017-06-10 Thread jeyhunkarimov
GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3294

KAFKA-4653: Improve test coverage of RocksDBStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4653

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3294.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3294


commit 77e65eaa07855651cf4449de2cca074a9fe05ece
Author: Jeyhun Karimov 
Date:   2017-06-10T17:54:27Z

RocksDBStore putAll covered




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5424) KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in cluster

2017-06-10 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045634#comment-16045634
 ] 

Mickael Maison commented on KAFKA-5424:
---

I was able to reproduce it when running 0.10.0.0 on the broker, but it probably 
happens on all versions prior to 0.10.1.0 because of 
https://issues.apache.org/jira/browse/KAFKA-3396

Using the default server.properties, add:
{noformat}
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
{noformat}

Create a topic, add a deny all ACL: 
{noformat}
./kafka-acls.sh --topic mytopic --deny-host * --authorizer-properties 
zookeeper.connect=localhost --add --deny-principal User:*
{noformat}

Calling listTopics() yields:
{noformat}
Exception in thread "main" 
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
access topics: [mytopic]
{noformat}

Not sure what's the best way to fix that. Maybe we could have a flag indicating 
we're trying to list topics and, if set, ignore {{unauthorizedTopics}} in 
{{getTopicMetadata()}}.
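The proposed flag essentially means filtering unauthorized topics out of the metadata instead of throwing. A minimal sketch of that filtering step (a hypothetical helper, not the actual Fetcher code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical helper mirroring the proposed fix: when listing topics, drop
// unauthorized ones from the result instead of raising
// TopicAuthorizationException for the whole call.
public class ListTopicsSketch {
    public static Set<String> authorizedOnly(Set<String> allTopics,
                                             Set<String> unauthorized) {
        Set<String> result = new TreeSet<>(allTopics);
        result.removeAll(unauthorized);
        return result;
    }
}
```

With the deny-all ACL from the repro above, a listTopics() built on this would simply omit "mytopic" rather than fail.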

> KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in 
> cluster
> -
>
> Key: KAFKA-5424
> URL: https://issues.apache.org/jira/browse/KAFKA-5424
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Mike Fagan
>
> KafkaConsumer.listTopics() internally calls 
> Fetcher.getAllTopicMetadata(timeout) and this method will throw a 
> TopicAuthorizationException when there exists an unauthorized topic in the 
> cluster. 
> This behavior runs counter to the API docs and makes listTopics() unusable 
> unless the consumer is authorized for every single topic in the cluster. 
> A potentially better approach is to have Fetcher implement a new method, 
> getAuthorizedTopicMetadata(timeout), and have KafkaConsumer call this method 
> instead of getAllTopicMetadata(timeout) from within KafkaConsumer.listTopics().





[jira] [Commented] (KAFKA-5424) KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in cluster

2017-06-10 Thread Mike Fagan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045633#comment-16045633
 ] 

Mike Fagan commented on KAFKA-5424:
---

Thanks, I am seeing this running a Kafka 0.10.0.2.5 cluster with Kerberos. 
Are there other details that would help debug/resolve this issue?


> KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in 
> cluster
> -
>
> Key: KAFKA-5424
> URL: https://issues.apache.org/jira/browse/KAFKA-5424
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Mike Fagan
>
> KafkaConsumer.listTopics() internally calls 
> Fetcher.getAllTopicMetadata(timeout) and this method will throw a 
> TopicAuthorizationException when there exists an unauthorized topic in the 
> cluster. 
> This behavior runs counter to the API docs and makes listTopics() unusable 
> unless the consumer is authorized for every single topic in the cluster. 
> A potentially better approach is to have Fetcher implement a new method, 
> getAuthorizedTopicMetadata(timeout), and have KafkaConsumer call this method 
> instead of getAllTopicMetadata(timeout) from within KafkaConsumer.listTopics().





[jira] [Commented] (KAFKA-4658) Improve test coverage InMemoryKeyValueLoggedStore

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045602#comment-16045602
 ] 

ASF GitHub Bot commented on KAFKA-4658:
---

GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3293

KAFKA-4658: Improve test coverage InMemoryKeyValueLoggedStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4658

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3293.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3293


commit f5e5e8af539d8ce0e27232d8b9aac725ec19f80c
Author: Jeyhun Karimov 
Date:   2017-06-10T16:23:54Z

InMemoryKeyValueLoggedStore tests improved with putAll() and persistent()




> Improve test coverage InMemoryKeyValueLoggedStore
> -
>
> Key: KAFKA-4658
> URL: https://issues.apache.org/jira/browse/KAFKA-4658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
> Fix For: 0.11.1.0
>
>
> {{putAll}} not covered





[jira] [Assigned] (KAFKA-4658) Improve test coverage InMemoryKeyValueLoggedStore

2017-06-10 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov reassigned KAFKA-4658:
-

Assignee: Jeyhun Karimov

> Improve test coverage InMemoryKeyValueLoggedStore
> -
>
> Key: KAFKA-4658
> URL: https://issues.apache.org/jira/browse/KAFKA-4658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
> Fix For: 0.11.1.0
>
>
> {{putAll}} not covered





[GitHub] kafka pull request #3293: KAFKA-4658: Improve test coverage InMemoryKeyValue...

2017-06-10 Thread jeyhunkarimov
GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3293

KAFKA-4658: Improve test coverage InMemoryKeyValueLoggedStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4658

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3293.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3293


commit f5e5e8af539d8ce0e27232d8b9aac725ec19f80c
Author: Jeyhun Karimov 
Date:   2017-06-10T16:23:54Z

InMemoryKeyValueLoggedStore tests improved with putAll() and persistent()






[jira] [Created] (KAFKA-5426) One second delay in kafka-console-producer

2017-06-10 Thread Francisco (JIRA)
Francisco created KAFKA-5426:


 Summary: One second delay in kafka-console-producer
 Key: KAFKA-5426
 URL: https://issues.apache.org/jira/browse/KAFKA-5426
 Project: Kafka
  Issue Type: Bug
  Components: config, producer 
Affects Versions: 0.10.2.1
 Environment: Ubuntu 16.04 with OpenJDK-8 and Ubuntu 14.04 with 
OpenJDK-7
Reporter: Francisco
Priority: Minor


Hello!

I have been trying to change the default delay of the stock 
kafka-console-producer, both by editing producer.properties with different 
values for linger.ms and batch.size, and by passing them directly on the 
command line with "--property", but nothing works. 

I have also tried it in a VM with Ubuntu 14.04 using Kafka 0.8.2.1, with the 
same result. I don't know whether the console producer was designed so that 
this behaviour cannot be changed, or whether this is a bug.

Here you can see my original post in StackOverFlow asking for help in this 
issue: 
https://stackoverflow.com/questions/44334304/kafka-spark-streaming-constant-delay-of-1-second
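One likely explanation, offered as an assumption: on the console producer, "--property" configures the message reader (e.g. parse.key, key.separator), not the underlying producer, so linger.ms and batch.size set that way are silently ignored. Producer-level configs can instead be supplied via a config file. A sketch, assuming a local broker on localhost:9092 and the topic name from this thread:

```
# custom-producer.properties — producer-level configs for the console producer
linger.ms=0
batch.size=16384

# Pass the file with --producer.config so these settings actually apply:
#   kafka-console-producer.sh --broker-list localhost:9092 --topic test \
#     --producer.config custom-producer.properties
```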





[VOTE] KIP-148: Add a connect timeout for client

2017-06-10 Thread ????????
Hi All,
Since we have all reached consensus, I want to start a vote for this KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-148%3A+Add+a+connect+timeout+for+client


Thanks,
David

Re: [DISCUSS] KIP-148: Add a connect timeout for client

2017-06-10 Thread ????????
------------------ Original ------------------
From: <254479...@qq.com>
Date: June 4, 2017, 6:05
To: "dev"

Subject: Re: [DISCUSS] KIP-148: Add a connect timeout for client



>I guess one obvious question is, how does this interact with retries? 
>Does it result in a failure getting delivered to the end user more
>quickly if connecting is impossible the first few times we try?  Does
>exponential backoff still apply?


Yes, with retries it lets the end user fail and reconnect more quickly. After 
a produce request fails because of the timeout, the network client closes the 
connection and starts connecting to the leastLoadedNode.
If that node gives no response, we quickly close the pending connection within 
the specified timeout and try another node.


And for the exponential backoff, do you mean TCP's exponential backoff or the 
NetworkClient's exponential backoff?
It seems the NetworkClient has no exponential backoff (only the flat 
reconnect.backoff.ms parameter).


Thanks
David




------------------ Original ------------------
From: "Colin McCabe"
Date: May 31, 2017, 2:44
To: "dev"

Subject: Re: [DISCUSS] KIP-148: Add a connect timeout for client



On Mon, May 29, 2017, at 15:46, Guozhang Wang wrote:
> On Wed, May 24, 2017 at 9:59 AM, Colin McCabe  wrote:
> 
> > On Tue, May 23, 2017, at 19:07, Guozhang Wang wrote:
> > > I think using a single config to cover end-to-end latency with connecting
> > > and request round-trip may not be best appropriate since 1) some request
> > > may need much more time than others since they are parked (fetch request
> > > with long polling, join group request etc) or throttled,
> >
> > Hmm.  My proposal was to implement _both_ end-to-end timeouts and
> > per-call timeouts.  In that case, some requests needing much more time
> > than others should not be a concern, since we can simply set a higher
> > per-call timeout on the requests we think will need more time.
> >
> > > and 2) some
> > > requests are prerequisite of others, like group request to discover the
> > > coordinator before the fetch offset request, and implementation wise
> > > these
> > > request send/receive is embedded in latter ones, hence it is not clear if
> > > the `request.timeout.ms` should cover just a single RPC or more.
> >
> > As far as I know, the request timeout has always covered a single RPC. If
> > we want to implement a higher level timeout that spans multiple RPCs, we
> > can set the per-call timeouts appropriately.  For example:
> >
> > > long deadline = System.currentTimeMillis() + 6;
> > > callA(callTimeout = deadline - System.currentTimeMillis())
> > > callB(callTimeout = deadline - System.currentTimeMillis())
> >
> >
> I may have misunderstood your previous email. Just clarifying:
> 
> 1) On the client we already have some configs for controlling end-to-end
> timeout, e.g. "max.block.ms" on producer controls how long "send()" and
> "partitionsFor()" will block for, and inside such API calls multiple
> request round trips may be sent, and for the first request round trip, a
> connecting phase may or may not be included. All of these are covered by
> this "max.block.ms" timeout today. However, as we discussed before, not all
> request round trips have similar latency expectations, so it is better to
> make a per-request "request.timeout.ms" and the overall "max.block.ms"
> would need to be at least the max of them.

That makes sense.

Just to be clear, when you say "per-request timeout" are you talking
about a timeout that can be different for each request?  (This doesn't
exist today, but has been proposed.)  Or are you talking about
request.timeout.ms, the single timeout that currently applies to all
requests in NetworkClient?

> 
> 2) Now back to the question whether we should make "request.timeout.ms"
> include potential connection phase as well: assume we are going to add
> the
> pre-request "request.timeout.ms" as suggested above, then we may still
> have
> a tight bound on how long connecting should take. For example, let's say
> we
> make "joingroup.request.timeout.ms" (or "fetch.request.timeout.ms" to be
> large since we want really long polling behavior) to be a large value,
> say
> 200 seconds, then if the client is trying to connect to the broker while
> sending the request, and the broker has died, then we may still be
> blocked
> waiting for 30 seconds while I think David's motivation is to fail-fast
> in
> these cases.

Thanks for the explanation.  I think I understand better now.  David
wants to be able to have a long timeout for waiting for the server to
process the request, but a shorter timeout for waiting for the
connection to be established.  In that case, implementing the additional
timeout makes sense.
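The deadline-based pseudocode quoted earlier (deriving each per-call timeout from one end-to-end budget) reduces to a small helper. An illustrative sketch, not actual NetworkClient code:

```java
// Illustrative sketch: derive per-call timeouts from a single end-to-end
// deadline, so consecutive RPCs (callA, callB, ...) share one overall budget.
public class DeadlineSketch {
    public static long remainingMs(long deadlineMs, long nowMs) {
        // Never return a negative timeout; 0 means "budget exhausted, fail fast".
        return Math.max(0L, deadlineMs - nowMs);
    }
}
```

Each call would be issued with callTimeout = remainingMs(deadline, now), so a slow connection phase in the first call shrinks the budget available to later ones.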

I guess one obvious question is, how does this interact with retries? 
Does it result in a failure getting delivered to the end user more
quickly if connecting is impossible the first few times we try?  Does
exponential backoff still apply?

[jira] [Assigned] (KAFKA-4656) Improve test coverage of CompositeReadOnlyKeyValueStore

2017-06-10 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov reassigned KAFKA-4656:
-

Assignee: Jeyhun Karimov

> Improve test coverage of CompositeReadOnlyKeyValueStore
> ---
>
> Key: KAFKA-4656
> URL: https://issues.apache.org/jira/browse/KAFKA-4656
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> exceptions not covered





[GitHub] kafka pull request #3292: KAFKA-4656: Improve test coverage of CompositeRead...

2017-06-10 Thread jeyhunkarimov
GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3292

KAFKA-4656: Improve test coverage of CompositeReadOnlyKeyValueStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4656

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3292


commit 3fa739bf25c0a92e405635d9c0ae7f9c90ca8027
Author: Jeyhun Karimov 
Date:   2017-06-10T15:28:55Z

Exceptions not covered in CompositeReadOnlyKeyValueStoreTest

Exceptions not covered in CompositeReadOnlyKeyValueStoreTest

Exceptions not covered in CompositeReadOnlyKeyValueStoreTest






[jira] [Commented] (KAFKA-4656) Improve test coverage of CompositeReadOnlyKeyValueStore

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045583#comment-16045583
 ] 

ASF GitHub Bot commented on KAFKA-4656:
---

GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3292

KAFKA-4656: Improve test coverage of CompositeReadOnlyKeyValueStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4656

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3292


commit 3fa739bf25c0a92e405635d9c0ae7f9c90ca8027
Author: Jeyhun Karimov 
Date:   2017-06-10T15:28:55Z

Exceptions not covered in CompositeReadOnlyKeyValueStoreTest

Exceptions not covered in CompositeReadOnlyKeyValueStoreTest

Exceptions not covered in CompositeReadOnlyKeyValueStoreTest




> Improve test coverage of CompositeReadOnlyKeyValueStore
> ---
>
> Key: KAFKA-4656
> URL: https://issues.apache.org/jira/browse/KAFKA-4656
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> exceptions not covered





[jira] [Commented] (KAFKA-5413) Log cleaner fails due to large offset in segment file

2017-06-10 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045572#comment-16045572
 ] 

Jun Rao commented on KAFKA-5413:


Could you attach the .index file of the 2 segments too?

> Log cleaner fails due to large offset in segment file
> -
>
> Key: KAFKA-5413
> URL: https://issues.apache.org/jira/browse/KAFKA-5413
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.2.0, 0.10.2.1
> Environment: Ubuntu 14.04 LTS, Oracle Java 8u92, kafka_2.11-0.10.2.0
>Reporter: Nicholas Ngorok
>  Labels: reliability
> Attachments: .log, 002147422683.log
>
>
> The log cleaner thread in our brokers is failing with the trace below
> {noformat}
> [2017-06-08 15:49:54,822] INFO {kafka-log-cleaner-thread-0} Cleaner 0: 
> Cleaning segment 0 in log __consumer_offsets-12 (largest timestamp Thu Jun 08 
> 15:48:59 PDT 2017) into 0, retaining deletes. (kafka.log.LogCleaner)
> [2017-06-08 15:49:54,822] INFO {kafka-log-cleaner-thread-0} Cleaner 0: 
> Cleaning segment 2147343575 in log __consumer_offsets-12 (largest timestamp 
> Thu Jun 08 15:49:06 PDT 2017) into 0, retaining deletes. 
> (kafka.log.LogCleaner)
> [2017-06-08 15:49:54,834] ERROR {kafka-log-cleaner-thread-0} 
> [kafka-log-cleaner-thread-0], Error due to  (kafka.log.LogCleaner)
> java.lang.IllegalArgumentException: requirement failed: largest offset in 
> message set can not be safely converted to relative offset.
> at scala.Predef$.require(Predef.scala:224)
> at kafka.log.LogSegment.append(LogSegment.scala:109)
> at kafka.log.Cleaner.cleanInto(LogCleaner.scala:478)
> at 
> kafka.log.Cleaner$$anonfun$cleanSegments$1.apply(LogCleaner.scala:405)
> at 
> kafka.log.Cleaner$$anonfun$cleanSegments$1.apply(LogCleaner.scala:401)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:401)
> at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:363)
> at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:362)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at kafka.log.Cleaner.clean(LogCleaner.scala:362)
> at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:241)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:220)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> [2017-06-08 15:49:54,835] INFO {kafka-log-cleaner-thread-0} 
> [kafka-log-cleaner-thread-0], Stopped  (kafka.log.LogCleaner)
> {noformat}
> This seems to point at the specific line [here| 
> https://github.com/apache/kafka/blob/0.11.0/core/src/main/scala/kafka/log/LogSegment.scala#L92]
>  in the kafka src where the difference is actually larger than MAXINT as both 
> baseOffset and offset are of type long. It was introduced in this [pr| 
> https://github.com/apache/kafka/pull/2210/files/56d1f8196b77a47b176b7bbd1e4220a3be827631]
> These were the outputs of dumping the first two log segments
> {noformat}
> :~$ /usr/bin/kafka-run-class kafka.tools.DumpLogSegments --deep-iteration 
> --files /kafka-logs/__consumer_offsets-12/000
> 0.log
> Dumping /kafka-logs/__consumer_offsets-12/.log
> Starting offset: 0
> offset: 1810054758 position: 0 NoTimestampType: -1 isvalid: true payloadsize: 
> -1 magic: 0 compresscodec: NONE crc: 3127861909 keysize: 34
> :~$ /usr/bin/kafka-run-class kafka.tools.DumpLogSegments --deep-iteration 
> --files /kafka-logs/__consumer_offsets-12/000
> 0002147343575.log
> Dumping /kafka-logs/__consumer_offsets-12/002147343575.log
> Starting offset: 2147343575
> offset: 2147539884 position: 0 NoTimestampType: -1 isvalid: true paylo
> adsize: -1 magic: 0 compresscodec: NONE crc: 2282192097 keysize: 34
> {noformat}
> My guess is that since 2147539884 is larger than MAXINT, we are hitting this 
> exception. Was there a specific reason, this check was added in 0.10.2?
> E.g. if the first offset is a key = "key 0" and then we have MAXINT + 1 of 
> "key 1" following, wouldn't we run into this situation whenever the log 
> cleaner runs?
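The failing requirement comes down to simple arithmetic: offsets inside a segment are stored relative to the segment's base offset in a 4-byte field, so the delta must fit in an Int. A sketch of that check (mirroring, not reproducing, the LogSegment code), using the offsets from the dump above:

```java
// Sketch of the relative-offset requirement behind the LogCleaner failure:
// a record's offset is stored as (offset - baseOffset), which must fit in
// a signed 32-bit int.
public class RelativeOffsetCheck {
    public static boolean safelyConvertible(long baseOffset, long largestOffset) {
        long delta = largestOffset - baseOffset;
        return delta >= 0 && delta <= Integer.MAX_VALUE;
    }
}
```

Within its own segment (base 2147343575), offset 2147539884 is fine; cleaned "into 0" as the log above shows, the delta exceeds Integer.MAX_VALUE and the require() fails.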





KIP Permission Request

2017-06-10 Thread Grant Neale
Good afternoon,


I am writing to request permission to create Kafka Improvement Proposal (KIP) 
pages on the Apache Kafka confluence.


My wiki username is grantne...@hotmail.com.


The KIP relates to a lag-based partition assignment strategy, similar to that 
proposed here.


Regards,

Grant Neale


[jira] [Commented] (KAFKA-4659) Improve test coverage of CachingKeyValueStore

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045525#comment-16045525
 ] 

ASF GitHub Bot commented on KAFKA-4659:
---

GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3291

KAFKA-4659: Improve test coverage of CachingKeyValueStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4659

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3291


commit c640cdc430c287e100cbb94a6956005c029e83e5
Author: Jeyhun Karimov 
Date:   2017-06-10T13:22:51Z

putIfAbsent and null pointer exception test cases covered




> Improve test coverage of CachingKeyValueStore
> -
>
> Key: KAFKA-4659
> URL: https://issues.apache.org/jira/browse/KAFKA-4659
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> {{putIfAbsent}} mostly not covered





[jira] [Assigned] (KAFKA-4659) Improve test coverage of CachingKeyValueStore

2017-06-10 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov reassigned KAFKA-4659:
-

Assignee: Jeyhun Karimov

> Improve test coverage of CachingKeyValueStore
> -
>
> Key: KAFKA-4659
> URL: https://issues.apache.org/jira/browse/KAFKA-4659
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> {{putIfAbsent}} mostly not covered





[GitHub] kafka pull request #3291: KAFKA-4659: Improve test coverage of CachingKeyVal...

2017-06-10 Thread jeyhunkarimov
GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3291

KAFKA-4659: Improve test coverage of CachingKeyValueStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4659

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3291


commit c640cdc430c287e100cbb94a6956005c029e83e5
Author: Jeyhun Karimov 
Date:   2017-06-10T13:22:51Z

putIfAbsent and null pointer exception test cases covered






[jira] [Commented] (KAFKA-4655) Improve test coverage of CompositeReadOnlySessionStore

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045501#comment-16045501
 ] 

ASF GitHub Bot commented on KAFKA-4655:
---

GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3290

KAFKA-4655: Improve test coverage of CompositeReadOnlySessionStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4655

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3290.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3290


commit 0627b6b932bde199f26ffead651ce5e1d7f0ea82
Author: Jeyhun Karimov 
Date:   2017-06-10T12:13:26Z

Improved coverage with exceptions in fetch and internal iterator




> Improve test coverage of CompositeReadOnlySessionStore
> --
>
> Key: KAFKA-4655
> URL: https://issues.apache.org/jira/browse/KAFKA-4655
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
> Fix For: 0.11.1.0
>
>
> exceptions in fetch and internal iterator





[GitHub] kafka pull request #3290: KAFKA-4655: Improve test coverage of CompositeRead...

2017-06-10 Thread jeyhunkarimov
GitHub user jeyhunkarimov opened a pull request:

https://github.com/apache/kafka/pull/3290

KAFKA-4655: Improve test coverage of CompositeReadOnlySessionStore



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeyhunkarimov/kafka KAFKA-4655

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3290.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3290


commit 0627b6b932bde199f26ffead651ce5e1d7f0ea82
Author: Jeyhun Karimov 
Date:   2017-06-10T12:13:26Z

Improved coverage with exceptions in fetch and internal iterator






[jira] [Assigned] (KAFKA-4655) Improve test coverage of CompositeReadOnlySessionStore

2017-06-10 Thread Jeyhun Karimov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeyhun Karimov reassigned KAFKA-4655:
-

Assignee: Jeyhun Karimov

> Improve test coverage of CompositeReadOnlySessionStore
> --
>
> Key: KAFKA-4655
> URL: https://issues.apache.org/jira/browse/KAFKA-4655
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Jeyhun Karimov
> Fix For: 0.11.1.0
>
>
> exceptions in fetch and internal iterator





Re: [ANNOUNCE] New committer: Damian Guy

2017-06-10 Thread Mickael Maison
Congrats Damian!

On Sat, Jun 10, 2017 at 8:46 AM, Damian Guy  wrote:
> Thanks everyone. Looking forward to making many more contributions
> On Sat, 10 Jun 2017 at 02:46, Joe Stein  wrote:
>
>> Congrats!
>>
>>
>> ~ Joe Stein
>>
>> On Fri, Jun 9, 2017 at 6:49 PM, Neha Narkhede  wrote:
>>
>> > Well deserved. Congratulations Damian!
>> >
>> > On Fri, Jun 9, 2017 at 1:34 PM Guozhang Wang  wrote:
>> >
>> > > Hello all,
>> > >
>> > >
>> > > The PMC of Apache Kafka is pleased to announce that we have invited
>> > Damian
>> > > Guy as a committer to the project.
>> > >
>> > > Damian has made tremendous contributions to Kafka. He has not only
>> > > contributed a lot to the Streams API, but has also been involved in
>> > > many other areas like the producer and consumer clients and the
>> > > broker-side coordinators (the group coordinator and the ongoing
>> > > transaction coordinator). He has contributed more than 100 patches
>> > > so far, and has been driving 6 KIP contributions.
>> > >
>> > > More importantly, Damian has been a very prolific reviewer on open
>> > > PRs and has been actively participating in community activities such
>> > > as the mailing lists and Stack Overflow questions. Through his code
>> > > contributions and reviews, Damian has demonstrated good judgement on
>> > > system design and code quality, especially on thorough unit test
>> > > coverage. We believe he will make a great addition to the committers
>> > > of the community.
>> > >
>> > >
>> > > Thank you for your contributions, Damian!
>> > >
>> > >
>> > > -- Guozhang, on behalf of the Apache Kafka PMC
>> > >
>> > --
>> > Thanks,
>> > Neha
>> >
>>


[GitHub] kafka pull request #3289: MINOR: add Yahoo benchmark to nightly runs

2017-06-10 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/3289

MINOR: add Yahoo benchmark to nightly runs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka yahoo-benchmark

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3289.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3289
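GitHub's boilerplate above describes two equivalent review paths: pulling the contributor's branch directly, or applying the generated `.patch` file. As an offline sketch of the same fetch-and-apply cycle (a throwaway local repository stands in for the GitHub remote; all file names and identities below are illustrative, not taken from the PR):

```shell
# Self-contained sketch: simulate the branch-plus-patch review flow locally.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master upstream
cd upstream
git -c user.email=ci@example.com -c user.name=tester commit -q --allow-empty -m "base"
# Contributor side: a feature branch with one commit
git checkout -q -b yahoo-benchmark
echo demo > bench.txt
git add bench.txt
git -c user.email=ci@example.com -c user.name=tester commit -q -m "add benchmark"
# Export the branch as a mailbox-format patch, as GitHub does for .../pull/NNNN.patch
git format-patch master --stdout > ../pr.patch
# Reviewer side: apply the patch onto master and inspect the resulting commit
git checkout -q master
git -c user.email=ci@example.com -c user.name=tester am -q ../pr.patch
git log --oneline -1
```

Pulling the branch URL directly (as in the `git pull` command above) produces the same commits; the `.patch` route is handy when you want to review the changes as plain text first.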


commit 8eb0b35dd3bf64c470567430f1602cf4af7240a6
Author: Eno Thereska 
Date:   2017-06-08T11:05:19Z

Checkpoint

commit 421c3c54fc50a1273fde20700b08cc5be46115b0
Author: Eno Thereska 
Date:   2017-06-08T15:10:39Z

Checkpoint

commit 16e6f6a8f00eefaa947563ee18d3df0aa557871a
Author: Eno Thereska 
Date:   2017-06-08T15:48:30Z

Checkpoint

commit 4dae23a2aabf74934a932989549b6156a1cc6fa7
Author: Eno Thereska 
Date:   2017-06-08T15:50:03Z

Undo manual setup

commit 405d56492b8e859751043e6d1f231733ce93d86d
Author: Eno Thereska 
Date:   2017-06-08T16:31:45Z

Fix

commit 2fbb84ee387cf1a8cf4b42c31c1dfe8429ae1dbd
Author: Eno Thereska 
Date:   2017-06-08T16:45:55Z

Increase num threads

commit f0b6710952e1439e49e9bfd1a65a6550b990d3a3
Author: Eno Thereska 
Date:   2017-06-08T19:42:55Z

Adjust check

commit 59fb49b86876e0e5ecb0994cf3a193b10771ab24
Author: Eno Thereska 
Date:   2017-06-09T08:02:41Z

Split into different files

commit 509eefc270b1bce2367a624d5759f68ba88a9283
Author: Eno Thereska 
Date:   2017-06-09T09:03:14Z

Added other fields to event records

commit 866fb4520c2b432bedebe24051b324cb576f4d2f
Author: Eno Thereska 
Date:   2017-06-09T09:50:59Z

Some comments






Re: [ANNOUNCE] New committer: Damian Guy

2017-06-10 Thread Damian Guy
Thanks everyone. Looking forward to making many more contributions