[jira] [Resolved] (KAFKA-7115) InMemoryKeyValueLoggedStore does not flush the changelog

2018-06-28 Thread Hashan Gayasri Udugahapattuwa (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hashan Gayasri Udugahapattuwa resolved KAFKA-7115.
--
Resolution: Not A Problem

This is the expected behaviour.

> InMemoryKeyValueLoggedStore does not flush the changelog
> 
>
> Key: KAFKA-7115
> URL: https://issues.apache.org/jira/browse/KAFKA-7115
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
> Environment: Fedora 27
>Reporter: Hashan Gayasri Udugahapattuwa
>Priority: Major
>
> The InMemoryKeyValueLoggedStore does not flush the underlying RecordCollector.
> *Please close this if this is not the intended behaviour and if the 
> RecordCollector is supposed to be flushed by the StreamTask only.*
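
For context, the resolution matches how Kafka Streams sequences flushes: the task flushes each store (which hands pending changelog records to the collector) and then flushes the record collector once at the end, so individual stores need not flush it. A minimal illustrative sketch (simplified stand-ins, not Kafka's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: on commit, the task first flushes each state store
// (which hands changelog records to the collector), then flushes the record
// collector once. Stores are not expected to flush the collector themselves.
public class FlushOrderSketch {
    interface Flushable { void flush(); }

    // Returns the order in which components were flushed during a commit.
    static List<String> commit() {
        List<String> order = new ArrayList<>();
        Flushable store = () -> order.add("store");          // in-memory store flush
        Flushable collector = () -> order.add("collector");  // producer flush
        store.flush();      // store writes pending changelog records
        collector.flush();  // the task flushes the producer once, at the end
        return order;
    }

    public static void main(String[] args) {
        System.out.println(commit()); // [store, collector]
    }
}
```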



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7118) KafkaConsumer's close() method may not be thread-safe when multiple threads call the same consumer

2018-06-28 Thread Richard Yu (JIRA)
Richard Yu created KAFKA-7118:
-

 Summary: KafkaConsumer's close() method may not be thread-safe when 
multiple threads call the same consumer
 Key: KAFKA-7118
 URL: https://issues.apache.org/jira/browse/KAFKA-7118
 Project: Kafka
  Issue Type: Bug
Reporter: Richard Yu






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] 1.1.1 RC2

2018-06-28 Thread Ted Yu
+1

Ran test suite which passed.

On Thu, Jun 28, 2018 at 6:12 PM, Dong Lin  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 1.1.1.
>
> Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
> that release. A few of the more significant fixes include:
>
> KAFKA-6925  - Fix memory
> leak in StreamsMetricsThreadImpl
> KAFKA-6937  - In-sync
> replica delayed during fetch if replica throttle is exceeded
> KAFKA-6917  - Process txn
> completion asynchronously to avoid deadlock
> KAFKA-6893  - Create
> processors before starting acceptor to avoid ArithmeticException
> KAFKA-6870  -
> Fix ConcurrentModificationException in SampledStat
> KAFKA-6878  - Fix
> NullPointerException when querying global state store
> KAFKA-6879  - Invoke
> session init callbacks outside lock to avoid Controller deadlock
> KAFKA-6857  - Prevent
> follower from truncating to the wrong offset if undefined leader epoch is
> requested
> KAFKA-6854  - Log cleaner
> fails with transaction markers that are deleted during clean
> KAFKA-6747  - Check
> whether there is in-flight transaction before aborting transaction
> KAFKA-6748  - Double
> check before scheduling a new task after the punctuate call
> KAFKA-6739  -
> Fix IllegalArgumentException when down-converting from V2 to V0/V1
> KAFKA-6728  -
> Fix NullPointerException when instantiating the HeaderConverter
>
> Kafka 1.1.1 release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
>
> Release notes for the 1.1.1 release:
> http://home.apache.org/~lindong/kafka-1.1.1-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, July 3, 12pm PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~lindong/kafka-1.1.1-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~lindong/kafka-1.1.1-rc2/javadoc/
>
> * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc2 tag:
> https://github.com/apache/kafka/tree/1.1.1-rc2
>
> * Documentation:
> http://kafka.apache.org/11/documentation.html
>
> * Protocol:
> http://kafka.apache.org/11/protocol.html
>
> * Successful Jenkins builds for the 1.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/157/
> System tests: https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817
>
>
> Please test and verify the release artifacts and submit a vote for this RC,
> or report any issues so we can fix them and get a new RC out ASAP. Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
>
> Regards,
> Dong
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2780

2018-06-28 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.1.1 RC1

2018-06-28 Thread Dong Lin
Thanks for catching this, Odin! When I checked the release process again, I
realized that I should have used release.py instead of doing every step
manually, so a few steps were missed. Most notably, the kafka_2.11-1.1.1.tgz
generated for RC1 was compiled with Java 8 instead of Java 7, so a new
release is required to correct this mistake.

I am very sorry for the inconvenience. If you have time, please kindly help
test the release again. Thank you all for your help!


On Thu, Jun 28, 2018 at 4:05 AM, Manikumar 
wrote:

> Yes, looks like maven artifacts are missing on staging repo
> https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/
>
> On Thu, Jun 28, 2018 at 4:18 PM Odin  wrote:
>
> > There are no 1.1.1-rc1 artifacts in the staging repo listed. Where can
> > they be found?
> >
> > Sincerely
> > Odin Standal
> >
> >
> > ‐‐‐ Original Message ‐‐‐
> >
> > On June 22, 2018 7:09 PM, Dong Lin  wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the second candidate for release of Apache Kafka 1.1.1.
> > >
> > > Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> > > released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
> > > that release. A few of the more significant fixes include:
> > >
> > > KAFKA-6925 https://issues.apache.org/jira/browse/KAFKA-6925 - Fix memory
> > > leak in StreamsMetricsThreadImpl
> > > KAFKA-6937 https://issues.apache.org/jira/browse/KAFKA-6937 - In-sync
> > > replica delayed during fetch if replica throttle is exceeded
> > > KAFKA-6917 https://issues.apache.org/jira/browse/KAFKA-6917 - Process txn
> > > completion asynchronously to avoid deadlock
> > > KAFKA-6893 https://issues.apache.org/jira/browse/KAFKA-6893 - Create
> > > processors before starting acceptor to avoid ArithmeticException
> > > KAFKA-6870 https://issues.apache.org/jira/browse/KAFKA-6870 - Fix
> > > ConcurrentModificationException in SampledStat
> > > KAFKA-6878 https://issues.apache.org/jira/browse/KAFKA-6878 - Fix
> > > NullPointerException when querying global state store
> > > KAFKA-6879 https://issues.apache.org/jira/browse/KAFKA-6879 - Invoke
> > > session init callbacks outside lock to avoid Controller deadlock
> > > KAFKA-6857 https://issues.apache.org/jira/browse/KAFKA-6857 - Prevent
> > > follower from truncating to the wrong offset if undefined leader epoch is
> > > requested
> > > KAFKA-6854 https://issues.apache.org/jira/browse/KAFKA-6854 - Log cleaner
> > > fails with transaction markers that are deleted during clean
> > > KAFKA-6747 https://issues.apache.org/jira/browse/KAFKA-6747 - Check
> > > whether there is in-flight transaction before aborting transaction
> > > KAFKA-6748 https://issues.apache.org/jira/browse/KAFKA-6748 - Double
> > > check before scheduling a new task after the punctuate call
> > > KAFKA-6739 https://issues.apache.org/jira/browse/KAFKA-6739 - Fix
> > > IllegalArgumentException when down-converting from V2 to V0/V1
> > > KAFKA-6728 https://issues.apache.org/jira/browse/KAFKA-6728 - Fix
> > > NullPointerException when instantiating the HeaderConverter
> > >
> > > Kafka 1.1.1 release plan:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> > >
> > > Release notes for the 1.1.1 release:
> > > http://home.apache.org/~lindong/kafka-1.1.1-rc1/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > -   Release artifacts to be voted upon (source and binary):
> > >     http://home.apache.org/~lindong/kafka-1.1.1-rc1/
> > > -   Maven artifacts to be voted upon:
> > >     https://repository.apache.org/content/groups/staging/
> > > -   Javadoc:
> > >     http://home.apache.org/~lindong/kafka-1.1.1-rc1/javadoc/
> > > -   Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc1 tag:
> > >     https://github.com/apache/kafka/tree/1.1.1-rc1
> > > -   Documentation:
> > >     http://kafka.apache.org/11/documentation.html
> > > -   Protocol:
> > >     http://kafka.apache.org/11/protocol.html
> > > -   Successful Jenkins builds for the 1.1 branch:
> > >     Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/152/
> > >     System tests: https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817
> > >
> > > Please test and verify the release artifacts and submit a vote for this
> > > RC, or report any issues so we can fix them and get a new RC out ASAP.
> > > Althou

[VOTE] 1.1.1 RC2

2018-06-28 Thread Dong Lin
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.1.1.

Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
that release. A few of the more significant fixes include:

KAFKA-6925  - Fix memory
leak in StreamsMetricsThreadImpl
KAFKA-6937  - In-sync
replica delayed during fetch if replica throttle is exceeded
KAFKA-6917  - Process txn
completion asynchronously to avoid deadlock
KAFKA-6893  - Create
processors before starting acceptor to avoid ArithmeticException
KAFKA-6870  -
Fix ConcurrentModificationException in SampledStat
KAFKA-6878  - Fix
NullPointerException when querying global state store
KAFKA-6879  - Invoke
session init callbacks outside lock to avoid Controller deadlock
KAFKA-6857  - Prevent
follower from truncating to the wrong offset if undefined leader epoch is
requested
KAFKA-6854  - Log cleaner
fails with transaction markers that are deleted during clean
KAFKA-6747  - Check
whether there is in-flight transaction before aborting transaction
KAFKA-6748  - Double
check before scheduling a new task after the punctuate call
KAFKA-6739  -
Fix IllegalArgumentException when down-converting from V2 to V0/V1
KAFKA-6728  -
Fix NullPointerException when instantiating the HeaderConverter

Kafka 1.1.1 release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1

Release notes for the 1.1.1 release:
http://home.apache.org/~lindong/kafka-1.1.1-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Thursday, July 3, 12pm PT ***

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~lindong/kafka-1.1.1-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~lindong/kafka-1.1.1-rc2/javadoc/

* Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc2 tag:
https://github.com/apache/kafka/tree/1.1.1-rc2

* Documentation:
http://kafka.apache.org/11/documentation.html

* Protocol:
http://kafka.apache.org/11/protocol.html

* Successful Jenkins builds for the 1.1 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/157/
System tests: https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817


Please test and verify the release artifacts and submit a vote for this RC,
or report any issues so we can fix them and get a new RC out ASAP. Although
this release vote requires PMC votes to pass, testing, votes, and bug
reports are valuable and appreciated from everyone.


Regards,
Dong


[jira] [Resolved] (KAFKA-6944) Add system tests testing the new throttling behavior using older clients/brokers

2018-06-28 Thread Jon Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Lee resolved KAFKA-6944.

Resolution: Fixed

> Add system tests testing the new throttling behavior using older 
> clients/brokers
> 
>
> Key: KAFKA-6944
> URL: https://issues.apache.org/jira/browse/KAFKA-6944
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Affects Versions: 2.0.0
>Reporter: Jon Lee
>Priority: Major
>
> KAFKA-6028 (KIP-219) changes the throttling behavior on quota violation as 
> follows:
>  * the broker will send out a response with throttle time to the client 
> immediately and mute the channel
>  * upon receiving a response with a non-zero throttle time, the client will 
> also block sending further requests to the broker until the throttle time is 
> over.
> The current system tests assume that both clients and brokers are of the same 
> version. We'll need an additional set of quota tests that test throttling 
> behavior between older clients and newer brokers and between newer clients 
> and older brokers. 
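
The client-side rule described above can be sketched as follows (an illustrative stand-in, not Kafka's actual implementation; the class and method names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the KIP-219-style client rule: after a response
// carrying a non-zero throttle time, hold back further requests to that
// broker until the throttle has expired. Other brokers are unaffected.
public class ThrottleTracker {
    private final Map<Integer, Long> mutedUntil = new HashMap<>();

    // Record the throttle time returned in a broker's response.
    public void onResponse(int brokerId, long nowMs, long throttleTimeMs) {
        if (throttleTimeMs > 0) {
            mutedUntil.put(brokerId, nowMs + throttleTimeMs);
        }
    }

    // A request may be sent only once the broker's throttle window has passed.
    public boolean canSend(int brokerId, long nowMs) {
        return nowMs >= mutedUntil.getOrDefault(brokerId, 0L);
    }

    public static void main(String[] args) {
        ThrottleTracker t = new ThrottleTracker();
        t.onResponse(1, 1000L, 500L);            // broker 1 throttled for 500 ms
        System.out.println(t.canSend(1, 1200L)); // false: still muted
        System.out.println(t.canSend(1, 1500L)); // true: throttle expired
        System.out.println(t.canSend(2, 1200L)); // true: other brokers unaffected
    }
}
```

The cross-version tests the JIRA asks for would then check that an old client (which ignores the throttle time) is still held back by the broker's channel mute, and that a new client backs off on its own.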



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7097) VerifiableProducer does not work properly with --message-create-time argument

2018-06-28 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7097.
--
   Resolution: Fixed
 Assignee: Ted Yu
 Reviewer: Guozhang Wang
Fix Version/s: 2.1.0

> VerifiableProducer does not work properly with --message-create-time argument
> -
>
> Key: KAFKA-7097
> URL: https://issues.apache.org/jira/browse/KAFKA-7097
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.0
>Reporter: Jasper Knulst
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.1.0
>
>
> If you run:
>  
> ./bin/kafka-verifiable-producer.sh --broker-list  --topic 
> test_topic_increasing_p2 --message-create-time  --acks -1 
> --max-messages 100
> the "" for --message-create-time doesn't take a 13-digit long 
> like 1529656934000. 
> The error message:
> verifiable-producer: error: argument --message-create-time: could not convert 
> '1529656934000' to Integer (For input string: "1529656934000")
>  
> When you provide a 10-digit (1529656934) epoch for the argument it does work, 
> but this leads to your topic being cleaned up in a few minutes since the 
> retention time has expired.
>  
> The error seems to be obvious since VerifiableProducer.java has:
>         Long createTime = (long) res.getInt("createTime");
> when parsing the argument. This should be taken as a Long instead.
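
A small standalone sketch of the reported failure mode: the 13-digit millisecond timestamp from the report overflows Integer but parses fine as a Long (plain JDK parsing here, not the argparse4j call from VerifiableProducer):

```java
// Demonstrates why a 13-digit epoch-millis value cannot be read as an int:
// it exceeds Integer.MAX_VALUE (2147483647), so int parsing fails while
// long parsing succeeds.
public class CreateTimeParse {
    // Returns true if the argument fits in an int (what the tool tried to do).
    static boolean parsesAsInt(String arg) {
        try {
            Integer.parseInt(arg);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String createTimeMs = "1529656934000";            // 13-digit epoch millis
        System.out.println(parsesAsInt(createTimeMs));    // false: overflows Integer
        System.out.println(Long.parseLong(createTimeMs)); // 1529656934000: fine as long
    }
}
```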



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-325: Extend Consumer Group Command to Show Beginning Offsets

2018-06-28 Thread Jason Gustafson
Hey Gwen/Vahid,

I think that use case makes sense, but isn't it a little odd to go through
the consumer group tool? I would expect to find something like that from
the kafka-topics tool instead. I don't feel too strongly about it, but I
hate to complicate the tool by adding the need to query topic configs. If
we don't have a meaningful number to report for compacted topics anyway,
then it feels like only a half solution. I'd probably suggest leaving this
out or just reporting the absolute difference even if a topic is compacted.

-Jason



On Thu, Jun 28, 2018 at 1:05 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi James,
>
> Thanks for the feedback. I updated the KIP and added some of the benefits
> of this improvement (including some that you mentioned).
>
> Regards.
> --Vahid
>
>
> From:   James Cheng 
> To: dev@kafka.apache.org
> Date:   06/27/2018 09:13 PM
> Subject: Re: [DISCUSS] KIP-325: Extend Consumer Group Command to
> Show Beginning Offsets
>
> The “Motivation” section of the KIP says that the starting offset will be
> useful but doesn’t say why. Can you add a use-case or two to describe how
> it will be useful?
>
> In our case (see
> https://github.com/wushujames/kafka-utilities/blob/master/ConsumerGroupLag/README.md
> ), we found the starting offset useful so that we could calculate
> partition size so that we could identify empty partitions (partitions
> where all the data had expired). In particular, we needed that info so
> that we could calculate “lag”. Consider that case where we are asked to
> mirror an abandoned topic where startOffset==endOffset==100. We would
> have no committed offset on it, and the topic has no data in it, so we
> won’t soon get any committed offset, and so “lag” is kind of undefined. We
> used the additional startOffset to allow us to detect this case.
>
> -James
>
> Sent from my iPhone
>
> > On Jun 26, 2018, at 11:23 AM, Vahid S Hashemian
> >  wrote:
> >
> > Hi everyone,
> >
> > I have created a trivial KIP to improve the offset reporting of the
> > consumer group command:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-325%3A+Extend+Consumer+Group+Command+to+Show+Beginning+Offsets
> >
> > Looking forward to your feedback!
> >
> > Thanks.
> > --Vahid


Build failed in Jenkins: kafka-trunk-jdk8 #2779

2018-06-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: KAFKA-7112: Only resume restoration if state is still

--
[...truncated 867.19 KB...]
kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.

Re: [DISCUSS] KIP-325: Extend Consumer Group Command to Show Beginning Offsets

2018-06-28 Thread Vahid S Hashemian
Hi James,

Thanks for the feedback. I updated the KIP and added some of the benefits 
of this improvement (including some that you mentioned).

Regards.
--Vahid



From:   James Cheng 
To: dev@kafka.apache.org
Date:   06/27/2018 09:13 PM
Subject:Re: [DISCUSS] KIP-325: Extend Consumer Group Command to 
Show Beginning Offsets



The “Motivation” section of the KIP says that the starting offset will be 
useful but doesn’t say why. Can you add a use-case or two to describe how 
it will be useful?

In our case (see 
https://github.com/wushujames/kafka-utilities/blob/master/ConsumerGroupLag/README.md
), we found the starting offset useful so that we could calculate 
partition size so that we could identify empty partitions (partitions 
where all the data had expired). In particular, we needed that info so 
that we could calculate “lag”. Consider that case where we are asked to 
mirror an abandoned topic where startOffset==endOffset==100. We would 
have no committed offset on it, and the topic has no data in it, so we 
won’t soon get any committed offset, and so “lag” is kind of undefined. We 
used the additional startOffset to allow us to detect this case.

-James

Sent from my iPhone
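
The use of the beginning offset that James describes can be sketched as follows (a hypothetical helper, not part of the consumer group tool; the offsets in main are illustrative):

```java
// Hypothetical helper: use the beginning offset to tell an empty
// (fully expired) partition apart from one that is merely unconsumed,
// and to clamp committed offsets that point at already-expired data.
public class PartitionLag {
    // Returns lag, or -1 when lag is undefined
    // (no committed offset and the partition is non-empty).
    public static long lag(long startOffset, long endOffset, Long committedOffset) {
        long size = endOffset - startOffset; // records currently retained
        if (committedOffset == null) {
            return size == 0 ? 0 : -1;       // empty partition: nothing to lag behind
        }
        // Committed offsets below startOffset point at expired data; clamp first.
        return endOffset - Math.max(committedOffset, startOffset);
    }

    public static void main(String[] args) {
        System.out.println(lag(100, 100, null)); // abandoned topic: lag 0
        System.out.println(lag(0, 500, 450L));   // normal case: lag 50
    }
}
```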

> On Jun 26, 2018, at 11:23 AM, Vahid S Hashemian 
 wrote:
> 
> Hi everyone,
> 
> I have created a trivial KIP to improve the offset reporting of the 
> consumer group command: 
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-325%3A+Extend+Consumer+Group+Command+to+Show+Beginning+Offsets

> Looking forward to your feedback!
> 
> Thanks.
> --Vahid
> 
> 






[jira] [Created] (KAFKA-7117) Allow AclCommand to use AdminClient API

2018-06-28 Thread Manikumar (JIRA)
Manikumar created KAFKA-7117:


 Summary: Allow AclCommand to use AdminClient API
 Key: KAFKA-7117
 URL: https://issues.apache.org/jira/browse/KAFKA-7117
 Project: Kafka
  Issue Type: Improvement
Reporter: Manikumar
Assignee: Manikumar


Currently AclCommand (kafka-acls.sh) uses the authorizer class (default 
SimpleAclAuthorizer) to manage ACLs.

We should also allow AclCommand to support AdminClient API based ACL 
management. This will allow kafka-acls.sh script users to manage ACLs without 
interacting with ZooKeeper or the authorizer directly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-330: Add retentionPeriod in SessionBytesStoreSupplier

2018-06-28 Thread John Roesler
+1

On Thu, Jun 28, 2018 at 4:39 AM Damian Guy  wrote:

> +1
>
> On Thu, 28 Jun 2018 at 02:16 Ted Yu  wrote:
>
> > +1
> >
> > On Wed, Jun 27, 2018 at 4:40 PM, Bill Bejeck  wrote:
> >
> > > +1
> > >
> > > -Bill
> > >
> > > On Wed, Jun 27, 2018 at 7:39 PM Guozhang Wang 
> > wrote:
> > >
> > > > Hello folks,
> > > >
> > > > I'd like to start a voting thread on KIP-330. I've intentionally
> > skipped
> > > > the discuss phase since it is a pretty straight-forward public API
> > change
> > > > and should actually be added since day one. The bug fix of KAFKA-7071
> > > > helped us to discover this overlook.
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-7116) Provide separate SSL configs for Kafka Broker replication

2018-06-28 Thread Sriharsha Chintalapani (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani resolved KAFKA-7116.
---
Resolution: Implemented

Thanks [~ijuma]

> Provide separate SSL configs for Kafka Broker replication
> -
>
> Key: KAFKA-7116
> URL: https://issues.apache.org/jira/browse/KAFKA-7116
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Sriharsha Chintalapani
>Assignee: GEORGE LI
>Priority: Major
>
> Currently, we use one set of SSL configs in server.properties both for the 
> broker to accept connections and for replication to use on the client side. 
> For the most part, we can use the server configs for the replication client 
> as well, but there are cases where we need separation. 
> For example, inter-broker connections may want SSL authentication but with 
> encryption disabled for replication. This is not possible right now because 
> the same config name "cipher_suites" is used for both server and client.  
> Since this Jira introduces new configs, we will add a KIP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7116) Provide separate SSL configs for Kafka Broker replication

2018-06-28 Thread Sriharsha Chintalapani (JIRA)
Sriharsha Chintalapani created KAFKA-7116:
-

 Summary: Provide separate SSL configs for Kafka Broker replication
 Key: KAFKA-7116
 URL: https://issues.apache.org/jira/browse/KAFKA-7116
 Project: Kafka
  Issue Type: Improvement
Reporter: Sriharsha Chintalapani
Assignee: GEORGE LI


Currently, we use one set of SSL configs in server.properties both for the 
broker to accept connections and for replication to use on the client side. 
For the most part, we can use the server configs for the replication client 
as well, but there are cases where we need separation. 

For example, inter-broker connections may want SSL authentication but with 
encryption disabled for replication. This is not possible right now because 
the same config name "cipher_suites" is used for both server and client.  

Since this Jira introduces new configs, we will add a KIP.
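
A hedged sketch of what such a separation could look like in server.properties. The `replication.`-prefixed name below is purely illustrative (it is not from any KIP); the `listener.name.`-prefixed line uses Kafka's existing per-listener override mechanism, which covers the server side only:

```properties
# Existing: one cipher list shared by the server side and the replication client side
ssl.cipher.suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

# Existing per-listener override (server side only, assuming a listener named REPLICATION)
listener.name.replication.ssl.cipher.suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

# Hypothetical replication-client override (illustrative name only)
replication.ssl.cipher.suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```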



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-313: Add KStream.flatTransform and KStream.flatTransformValues

2018-06-28 Thread Guozhang Wang
+1 (binding), thanks Bruno.


Guozhang

On Thu, Jun 28, 2018 at 3:02 AM, Damian Guy  wrote:

> +1
>
> On Mon, 25 Jun 2018 at 02:16 Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> >
> > -Matthias
> >
> > On 6/22/18 10:25 AM, Bill Bejeck wrote:
> > > Thanks for the KIP, +1.
> > >
> > > -Bill
> > >
> > > On Fri, Jun 22, 2018 at 1:08 PM Ted Yu  wrote:
> > >
> > >> +1
> > >>
> > >> On Fri, Jun 22, 2018 at 2:50 AM, Bruno Cadonna 
> > wrote:
> > >>
> > >>> Hi list,
> > >>>
> > >>> I would like to voting on this KIP.
> > >>>
> > >>> I created a first PR[1] that adds flatTransform. Once I get some
> > >>> feedback, I will start work on flatTransformValues.
> > >>>
> > >>> Best regards,
> > >>> Bruno
> > >>>
> > >>> [1] https://github.com/apache/kafka/pull/5273
> > >>>
> > >>
> > >
> >
> >
>



-- 
-- Guozhang


Re: Help with first contribution

2018-06-28 Thread Nikolay Izhikov
Hello, Matthias!

> Did you filter for JIRAs with label "newbie" or "beginner"? 

"newbie", as mentioned in the "how to contribute" guide.
It seems many of them are already discussed or assigned.

Can you please suggest one?

> What component of Kafka do you want to work on/interests you most?

Actually, I don't have any specific wishes, so any good-to-start issue would 
be fine.

On Thu, 28/06/2018 at 08:02 -0700, Matthias J. Sax wrote:
> Nikolay,
> 
> thanks for your interest in contribution to Kafka!
> 
> Did you filter for JIRAs with label "newbie" or "beginner"? What
> component of Kafka do you want to work on/interests you most?
> 
> 
> -Matthias
> 
> On 6/28/18 3:20 AM, Nikolay Izhikov wrote:
> > Hello, guys.
> > 
> > I'm experienced java/scala engineer.
> > I want to contribute to kafka.
> > I read the documentation and look over jira tickets.
> > 
> > Seems it's not easy to find a right issue to start contributing with.
> > 
> > Can you, please, suggest some ticket(s) or JIRA filters I can take and 
> > solve.
> > 
> 
> 



Re: Help with first contribution

2018-06-28 Thread Matthias J. Sax
Nikolay,

thanks for your interest in contribution to Kafka!

Did you filter for JIRAs with label "newbie" or "beginner"? What
component of Kafka do you want to work on/interests you most?


-Matthias

On 6/28/18 3:20 AM, Nikolay Izhikov wrote:
> Hello, guys.
> 
> I'm experienced java/scala engineer.
> I want to contribute to kafka.
> I read the documentation and look over jira tickets.
> 
> Seems it's not easy to find a right issue to start contributing with.
> 
> Can you, please, suggest some ticket(s) or JIRA filters I can take and solve.
> 





Re: [VOTE] KIP-319: Replace numSegments to segmentInterval in Streams window configurations

2018-06-28 Thread John Roesler
Thanks for voting, everyone.

KIP-319 has passed with 3 binding (Guozhang, Matthias, Damian) and 3
non-binding votes (Ted, Bill, and me!).

If you wish to review the implementation, I plan to build on the draft PR:
https://github.com/apache/kafka/pull/5257

Thanks again,
-John

On Thu, Jun 28, 2018 at 4:33 AM Damian Guy  wrote:

> +1
>
> On Tue, 26 Jun 2018 at 17:22 Bill Bejeck  wrote:
>
> > +1
> >
> > On Mon, Jun 25, 2018 at 11:07 PM Matthias J. Sax 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > On 6/25/18 3:00 PM, Guozhang Wang wrote:
> > > > +1
> > > >
> > > > On Mon, Jun 25, 2018 at 2:58 PM, Ted Yu  wrote:
> > > >
> > > >> +1
> > > >>
> > > >> On Mon, Jun 25, 2018 at 2:56 PM, John Roesler 
> > > wrote:
> > > >>
> > > >>> Hello All,
> > > >>>
> > > >>> Thanks for the discussion on KIP-319. I'd now like to start the
> > voting.
> > > >>>
> > > >>> As a reminder, KIP-319 proposes a fix to an issue I identified in
> > > >>> KAFKA-7080. Specifically, the issue is that we're creating
> > > >>> CachingWindowStore with the *number of segments* instead of the
> > > *segment
> > > >>> size*.
> > > >>>
> > > >>> Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> > > >>> Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > > >>>
> > > >>> Additionally, here's a draft PR for clarity:
> > > >>> https://github.com/apache/kafka/pull/5257
> > > >>>
> > > >>> Thanks,
> > > >>> -John
> > > >>>
> > > >>
> > > >
> > > >
> > > >
> > >
> > >
> >
>


[jira] [Resolved] (KAFKA-6809) connections-created metric does not behave as expected

2018-06-28 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6809.
---
   Resolution: Fixed
 Reviewer: Rajini Sivaram
Fix Version/s: (was: 2.1.0)
   2.0.0

PR: [https://github.com/apache/kafka/pull/5301]

 

> connections-created metric does not behave as expected
> --
>
> Key: KAFKA-6809
> URL: https://issues.apache.org/jira/browse/KAFKA-6809
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.0.1
>Reporter: Anna Povzner
>Assignee: Stanislav Kozlovski
>Priority: Major
> Fix For: 2.0.0, 1.1.2
>
>
> "connections-created" sensor is described as "new connections established". 
> It currently records only connections that the broker/client creates, but 
> does not count connections received. Seems like we should also count 
> connections received – either include them into this metric (and also clarify 
> the description) or add a new metric (separately counting two types of 
> connections). I am not sure how useful it is to separate them, so I think we 
> should go with the first approach.
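To make the semantics described above concrete, here is a minimal sketch of the first approach: a single counter covering both locally-initiated and remotely-accepted connections. The class and method names are illustrative, not Kafka's internal metrics API.

```python
# Minimal sketch of the metric semantics discussed above: count *both*
# locally-initiated and remotely-accepted connections in one counter.
# Names are illustrative, not Kafka's internals.

class ConnectionMetrics:
    def __init__(self):
        self.connections_created = 0

    def on_connect(self):        # connection this node initiated
        self.connections_created += 1

    def on_accept(self):         # connection received from a peer
        self.connections_created += 1

m = ConnectionMetrics()
m.on_connect()
m.on_accept()
m.on_accept()
print(m.connections_created)  # 3
```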



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-280: Enhanced log compaction

2018-06-28 Thread Luís Cabral
 Hi,

Thank you all for having a look!

The KIP is now updated with the result of these late discussions, though I did 
take some liberty with this part:

   
   - If the "compaction.strategy.header" configuration is not set (or is 
blank), then the compaction strategy will fall back to "offset";


Alternatively, we can also make it a mandatory property when the strategy 
is "header" and fail application startup via a config validation (I would 
honestly prefer this, but it's up to your taste).
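Both variants can be sketched in a few lines. The config key names follow the KIP; the resolution function itself is hypothetical:

```python
# Sketch of the two variants discussed above: fall back to "offset" when
# the header key is missing, or (strict variant) refuse to start.
# Config key names follow the KIP; this function is illustrative only.

def resolve_compaction_strategy(config, strict=False):
    strategy = config.get("compaction.strategy", "offset")
    if strategy == "header" and not config.get("compaction.strategy.header"):
        if strict:
            # mandatory-property variant: fail config validation at startup
            raise ValueError("compaction.strategy.header must be set")
        return "offset"  # fallback variant
    return strategy

assert resolve_compaction_strategy({}) == "offset"
assert resolve_compaction_strategy({"compaction.strategy": "header"}) == "offset"
assert resolve_compaction_strategy(
    {"compaction.strategy": "header",
     "compaction.strategy.header": "version"}) == "header"
```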

Anyway, this is now a minute detail that can be adapted during the final stage 
of this KIP, so are you all alright with me changing the status to [ACCEPTED]?

Cheers,
Luis


On Thursday, June 28, 2018, 2:08:11 PM GMT+2, Ted Yu  
wrote:  
 
 +1

On Thu, Jun 28, 2018 at 4:56 AM, Luís Cabral 
wrote:

> Hi Ted,
> Can I also get your input on this?
>
> bq. +1 from my side for using `compaction.strategy` with values
> "offset","timestamp" and "header" and `compaction.strategy.header`
> -Matthias
>
> bq. +1 from me as well.
> -Guozhang
>
>
> Cheers,
> Luis
>
>
>  

Re: [VOTE] KIP-280: Enhanced log compaction

2018-06-28 Thread Ted Yu
+1

On Thu, Jun 28, 2018 at 4:56 AM, Luís Cabral 
wrote:

> Hi Ted,
> Can I also get your input on this?
>
> bq. +1 from my side for using `compaction.strategy` with values
> "offset","timestamp" and "header" and `compaction.strategy.header`
> -Matthias
>
> bq. +1 from me as well.
> -Guozhang
>
>
> Cheers,
> Luis
>
>
>


Re: [VOTE] KIP-280: Enhanced log compaction

2018-06-28 Thread Luís Cabral
Hi Ted,
Can I also get your input on this?

bq. +1 from my side for using `compaction.strategy` with values 
"offset","timestamp" and "header" and `compaction.strategy.header`
-Matthias

bq. +1 from me as well.
-Guozhang 


Cheers,
Luis




[jira] [Created] (KAFKA-7115) InMemoryKeyValueLoggedStore does not flush the changelog

2018-06-28 Thread Hashan Gayasri Udugahapattuwa (JIRA)
Hashan Gayasri Udugahapattuwa created KAFKA-7115:


 Summary: InMemoryKeyValueLoggedStore does not flush the changelog
 Key: KAFKA-7115
 URL: https://issues.apache.org/jira/browse/KAFKA-7115
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.0
 Environment: Fedora 27
Reporter: Hashan Gayasri Udugahapattuwa


The InMemoryKeyValueLoggedStore does not flush the underlying RecordCollector.

*Please close this if this is not the intended behaviour and if the 
RecordCollector is supposed to be flushed by the StreamTask only.*
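The resolution ("Not A Problem") reflects a division of responsibility that a toy model makes explicit: each store flushes only its own state, and the task flushes the shared RecordCollector once per commit. All names below are illustrative, not Kafka's actual classes.

```python
# Toy model of the flush responsibility discussed above: the store does NOT
# flush the shared collector; the task flushes it once after all stores.
# All names are illustrative, not Kafka's actual classes.

class RecordCollector:
    def __init__(self):
        self.flushed = False

    def flush(self):
        self.flushed = True

class InMemoryLoggedStore:
    def __init__(self, collector):
        self.collector = collector

    def flush(self):
        pass  # in-memory state only; deliberately leaves the collector alone

class StreamTask:
    def __init__(self, stores, collector):
        self.stores, self.collector = stores, collector

    def commit(self):
        for store in self.stores:
            store.flush()
        self.collector.flush()   # single flush point, owned by the task

collector = RecordCollector()
task = StreamTask([InMemoryLoggedStore(collector)], collector)
task.commit()
print(collector.flushed)  # True
```

If the store flushed the collector itself, a task with several stores would flush the shared collector repeatedly, which is why the task owns that step.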





[jira] [Resolved] (KAFKA-7099) KafkaLog4jAppender - not sending any message with level DEBUG

2018-06-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7099.
--
Resolution: Duplicate

Resolving this as a duplicate of KAFKA-6415. Please post your 
observations/solutions to the KAFKA-6415 JIRA.

> KafkaLog4jAppender - not sending any message with level DEBUG
> -
>
> Key: KAFKA-7099
> URL: https://issues.apache.org/jira/browse/KAFKA-7099
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Vincent Lebreil
>Priority: Major
>
> KafkaLog4jAppender can get stuck if it is defined on the root category with 
> level DEBUG:
> {{log4j.rootLogger=DEBUG, kafka}}
> {{log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender}}
> {quote}DEBUG org.apache.kafka.clients.producer.KafkaProducer:131 - Exception 
> occurred during message send:
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 6 ms.
> {quote}
> KafkaLog4jAppender uses a KafkaProducer, which itself uses Log4j to log 
> messages at levels TRACE and DEBUG. The appender used in this case is also the 
> KafkaLog4jAppender.
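The loop described above is: the appender writes via a KafkaProducer, the producer logs at DEBUG through the same root logger, and that log line re-enters the appender. A toy model with a simple reentrancy guard, shown as one possible mitigation; this is not the actual KafkaLog4jAppender code.

```python
# Toy model of the feedback loop described above, with a reentrancy guard
# as one possible mitigation. Not the actual KafkaLog4jAppender code.

class KafkaishAppender:
    def __init__(self):
        self.appending = False   # reentrancy guard
        self.delivered = []

    def produce(self, msg):
        # The real producer logs at DEBUG through the same root logger,
        # which would re-enter append(); we model that directly.
        self.append("DEBUG producer internals")
        self.delivered.append(msg)

    def append(self, msg):
        if self.appending:
            return               # drop the producer's own log lines
        self.appending = True
        try:
            self.produce(msg)
        finally:
            self.appending = False

a = KafkaishAppender()
a.append("application message")
print(a.delivered)  # ['application message']
```

Without the guard, append() and produce() would recurse until the stack overflows (or, in the real appender, block waiting on metadata it can never fetch).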





Help with first contribution

2018-06-28 Thread Nikolay Izhikov
Hello, guys.

I'm an experienced Java/Scala engineer.
I want to contribute to Kafka.
I have read the documentation and looked over the JIRA tickets.

It seems it's not easy to find the right issue to start contributing with.

Can you please suggest some ticket(s) or JIRA filters I can take on and solve?



[jira] [Created] (KAFKA-7114) Kafka Not sending few partitions messages to consumers

2018-06-28 Thread Gnanasoundari (JIRA)
Gnanasoundari created KAFKA-7114:


 Summary: Kafka Not sending few partitions messages to consumers
 Key: KAFKA-7114
 URL: https://issues.apache.org/jira/browse/KAFKA-7114
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 1.1.0
Reporter: Gnanasoundari


*Infrastructure:*

Kafka is running in cluster mode with 3 brokers and 3 zookeeper instances.

The Kafka brokers are running on t2.xlarge AWS instances.

The 3 Zookeeper instances are running on a single t2.xlarge AWS instance.

Kafka Version: 1.1.0

*Issue:*

Kafka is not sending messages to the consumer even though the consumer is 
still active and subscribed to the partition.

*How to reproduce:*
 * Continuously send 50,000 records/sec, each record about 200 bytes, for a 
24-hour duration.
 * We have 25 topics with 4 partitions each. All 25 topics are consumed by 4 
consumers, with partitions auto-assigned by Kafka.
 * Kafka clears all records in all partitions except 2. For those two 
partitions, only 27 messages were delivered; the remaining messages have not 
been delivered to the consumer.
 * During the run, the Kafka brokers' CPU utilization is 5%.

Problematic partitions details:
|TOPIC|PARTITION|CURRENT-OFFSET|LOG-END-OFFSET|LAG|CONSUMER-ID|HOST|CLIENT-ID|
|topic-di14|3|27|3919844|3919817|consumer-7-dfaf4c6f-5b71-455e-ab12-9984446a9bf4|/10.10.9.22|consumer-7|
|topic-di9|0|27|12231117|12231090|consumer-22-7922babd-8f8b-49db-a111-df523962cca8|/10.10.9.22|consumer-22|

I am not able to see any error in Kafka or in my consumer application. The 
consumer shows as active when I run the ./bin/kafka-consumer-groups.sh command.
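As a sanity check on the table above, the LAG column is simply log-end-offset minus current-offset. An illustrative helper (not part of Kafka's tooling):

```python
# The LAG column above is simply log-end-offset minus current-offset.
# Illustrative helper, not part of Kafka's tooling.

def lag(current_offset, log_end_offset):
    return log_end_offset - current_offset

# Values taken from the two problematic partitions reported above.
assert lag(27, 3_919_844) == 3_919_817    # topic-di14, partition 3
assert lag(27, 12_231_117) == 12_231_090  # topic-di9, partition 0
```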

 

*Kafka Consumer settings:*

enable.auto.commit = true
 session.timeout.ms = 9
 heartbeat.interval.ms = 3
 max.poll.records = 1000

poll-interval = 60ms
 poll-timeout = 60ms

 

Please help us understand why this behavior is happening in Kafka.





Re: [VOTE] KIP-313: Add KStream.flatTransform and KStream.flatTransformValues

2018-06-28 Thread Damian Guy
+1

On Mon, 25 Jun 2018 at 02:16 Matthias J. Sax  wrote:

> +1 (binding)
>
>
> -Matthias
>
> On 6/22/18 10:25 AM, Bill Bejeck wrote:
> > Thanks for the KIP, +1.
> >
> > -Bill
> >
> > On Fri, Jun 22, 2018 at 1:08 PM Ted Yu  wrote:
> >
> >> +1
> >>
> >> On Fri, Jun 22, 2018 at 2:50 AM, Bruno Cadonna 
> wrote:
> >>
> >>> Hi list,
> >>>
> >>> I would like to voting on this KIP.
> >>>
> >>> I created a first PR[1] that adds flatTransform. Once I get some
> >>> feedback, I will start work on flatTransformValues.
> >>>
> >>> Best regards,
> >>> Bruno
> >>>
> >>> [1] https://github.com/apache/kafka/pull/5273
> >>>
> >>
> >
>
>


Re: [VOTE] KIP-330: Add retentionPeriod in SessionBytesStoreSupplier

2018-06-28 Thread Damian Guy
+1

On Thu, 28 Jun 2018 at 02:16 Ted Yu  wrote:

> +1
>
> On Wed, Jun 27, 2018 at 4:40 PM, Bill Bejeck  wrote:
>
> > +1
> >
> > -Bill
> >
> > On Wed, Jun 27, 2018 at 7:39 PM Guozhang Wang 
> wrote:
> >
> > > Hello folks,
> > >
> > > I'd like to start a voting thread on KIP-330. I've intentionally skipped
> > > the discuss phase since it is a pretty straight-forward public API change
> > > and should actually be added since day one. The bug fix of KAFKA-7071
> > > helped us to discover this oversight.
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


Re: [VOTE] KIP-277 - Fine Grained ACL for CreateTopics API

2018-06-28 Thread Edoardo Comar
Hi Guozhang,

I am not sure we want to ensure that 'create' implies 'read' and 'write',
as I can imagine an administrative role with authority to create/delete 
but not to read (or write) topic data.

I would agree that 'create' should imply 'describe' though, as one such 
admin should be able to know whether a topic exists.
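The implication rule under discussion can be sketched as a small table. This models the proposal being debated, not Kafka's actual SimpleAclAuthorizer:

```python
# Sketch of the implication rule debated above: 'create' implying
# 'describe' but not 'read'/'write'. The table models the proposal,
# not Kafka's SimpleAclAuthorizer.

IMPLIED = {
    "read":   {"describe"},
    "write":  {"describe"},
    "delete": {"describe"},
    "alter":  {"describe"},
    "create": {"describe"},   # the extension agreed to above
}

def allowed_operations(granted):
    ops = set(granted)
    for op in granted:
        ops |= IMPLIED.get(op, set())
    return ops

ops = allowed_operations({"create"})
assert "describe" in ops
assert "read" not in ops and "write" not in ops
```

Under this model, the administrative role described above can create/delete and check topic existence, without being granted access to topic data.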

Edoardo
--

Edoardo Comar

IBM Message Hub

IBM UK Ltd, Hursley Park, SO21 2JN



From:   Guozhang Wang 
To: dev@kafka.apache.org
Date:   27/06/2018 19:42
Subject:Re: [VOTE] KIP-277 - Fine Grained ACL for CreateTopics API



Hello guys,

Sorry for being late on this KIP, but while incorporating the docs of KIP-277
and KIP-290, I'm wondering if, in these two KIPs, we should extend which
operations are implied by "create" authorization:

Previously, in SimpleAclAuthorizer, "read, write, delete, or alter implies
allowing describe", but not "create" as it can only be applied on
"CLUSTER". It means that users need to specify additional rules for those
topics even if they are created by themselves.

One example of this is Kafka Streams' internal topics: before 2.0, users
needed to add "create CLUSTER" plus "read / write TOPIC_NAME_LITERAL" on a
secured cluster, and I've seen some common scenarios where they forgot to
add the latter, thinking that the created topics would be
auto-granted read/write permissions.

Would it be natural to allow:

1. prefix wildcard "create" to imply prefix wildcard "read / write /
describe" (debatable whether we want to add "delete" and "alter" as well).
2. cluster "create" to imply "read / write / describe" on topics created 
by
the same user.



Guozhang




On Fri, May 25, 2018 at 5:55 AM, Edoardo Comar  wrote:

> Thanks Ismael, noted on the KIP
>
> On 21 May 2018 at 18:29, Ismael Juma  wrote:
> > Thanks for the KIP, +1 (binding). Can you also please describe the
> > compatibility impact of changing the error code from
> > CLUSTER_AUTHORIZATION_FAILED to TOPIC_AUTHORIZATION_FAILED?
> >
> > Ismael
> >
> > On Wed, Apr 25, 2018 at 2:45 AM Edoardo Comar  
wrote:
> >
> >> Hi,
> >>
> >> The discuss thread on KIP-277
> >> (https://www.mail-archive.com/dev@kafka.apache.org/msg86540.html)
> >> seems to have been fruitful and concerns have been addressed, please
> allow
> >> me start a vote on it:
> >>
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-277+-+Fine+Grained+ACL+for+CreateTopics+API
> >>
> >> I will update the small PR to the latest KIP semantics if the vote
> passes
> >> (as I hope :-).
> >>
> >> cheers
> >> Edo
> >> --
> >>
> >> Edoardo Comar
> >>
> >> IBM Message Hub
> >>
> >> IBM UK Ltd, Hursley Park, SO21 2JN
> >> Unless stated otherwise above:
> >> IBM United Kingdom Limited - Registered in England and Wales with 
number
> >> 741598.
> >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire 
PO6
> 3AU
> >>
>
>
>
> --
> "When the people fear their government, there is tyranny; when the
> government fears the people, there is liberty." [Thomas Jefferson]
>



-- 
-- Guozhang





Re: [VOTE] KIP-319: Replace numSegments to segmentInterval in Streams window configurations

2018-06-28 Thread Damian Guy
+1

On Tue, 26 Jun 2018 at 17:22 Bill Bejeck  wrote:

> +1
>
> On Mon, Jun 25, 2018 at 11:07 PM Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> > On 6/25/18 3:00 PM, Guozhang Wang wrote:
> > > +1
> > >
> > > On Mon, Jun 25, 2018 at 2:58 PM, Ted Yu  wrote:
> > >
> > >> +1
> > >>
> > >> On Mon, Jun 25, 2018 at 2:56 PM, John Roesler 
> > wrote:
> > >>
> > >>> Hello All,
> > >>>
> > >>> Thanks for the discussion on KIP-319. I'd now like to start the
> voting.
> > >>>
> > >>> As a reminder, KIP-319 proposes a fix to an issue I identified in
> > >>> KAFKA-7080. Specifically, the issue is that we're creating
> > >>> CachingWindowStore with the *number of segments* instead of the
> > *segment
> > >>> size*.
> > >>>
> > >>> Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> > >>> Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > >>>
> > >>> Additionally, here's a draft PR for clarity:
> > >>> https://github.com/apache/kafka/pull/5257
> > >>>
> > >>> Thanks,
> > >>> -John
> > >>>
> > >>
> > >
> > >
> > >
> >
> >
>