Jenkins problem with JDK 8 and Scala 2.12 ?

2017-08-08 Thread Paolo Patierno
Hi devs,

are there any problems with Jenkins building with JDK 8 and Scala 2.12?

I see the following error in this console log 
(https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/6618/consoleFull)


ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://github.com/apache/kafka.git


I re-tried twice with the same results.

All is good with JDK 7 and Scala 2.11.


Thanks,

Paolo


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: Jenkins problem with JDK 8 and Scala 2.12 ?

2017-08-08 Thread Sönke Liebau
I think the JDK 8 Jenkins server had disk space issues yesterday and failed
all builds; maybe this is still the same issue, or caused by that as well.

On Tue, Aug 8, 2017 at 9:52 AM, Paolo Patierno  wrote:

> Hi devs,
>
> is there any problems with Jenkins for building with JDK 8 and Scala 2.12.
>
> I see the following error in this console log (https://builds.apache.org/
> job/kafka-pr-jdk8-scala2.12/6618/consoleFull)
>
>
> ERROR: Error fetching remote repo 'origin'
> hudson.plugins.git.GitException: Failed to fetch from git://
> github.com/apache/kafka.git
>
>
> I re-tried twice with same results.
>
> All is good with JDK 7 and Scala 2.11
>
>
> Thanks,
>
> Paolo
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>



-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


[jira] [Created] (KAFKA-5710) KafkaAdminClient should remove inflight call correctly after response is received

2017-08-08 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-5710:
---

 Summary: KafkaAdminClient should remove inflight call correctly 
after response is received
 Key: KAFKA-5710
 URL: https://issues.apache.org/jira/browse/KAFKA-5710
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3642: KafkaAdminClient should remove inflight call corre...

2017-08-08 Thread lindong28
GitHub user lindong28 opened a pull request:

https://github.com/apache/kafka/pull/3642

KafkaAdminClient should remove inflight call correctly after response is 
received



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lindong28/kafka KAFKA-5710

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3642.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3642






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5710) KafkaAdminClient should remove inflight call correctly after response is received

2017-08-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5710.

Resolution: Duplicate

Duplicate of KAFKA-5658.

> KafkaAdminClient should remove inflight call correctly after response is 
> received
> -
>
> Key: KAFKA-5710
> URL: https://issues.apache.org/jira/browse/KAFKA-5710
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3642: KafkaAdminClient should remove inflight call corre...

2017-08-08 Thread lindong28
Github user lindong28 closed the pull request at:

https://github.com/apache/kafka/pull/3642


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3584: KAFKA-5658. Fix AdminClient request timeout handli...

2017-08-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3584


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-08-08 Thread UMESH CHAUDHARY
Hi Ewen,
Sorry, I am a bit late in responding to this.

Thanks for your inputs and I've updated the KIP by adding more details to
it.

Regards,
Umesh

On Mon, 31 Jul 2017 at 21:51 Ewen Cheslack-Postava 
wrote:

> On Sun, Jul 30, 2017 at 10:21 PM, UMESH CHAUDHARY 
> wrote:
>
>> Hi Ewen,
>> Thanks for your comments.
>>
>> 1) Yes, there are some test and java classes which refer these configs,
>> so I will include them as well in "public interface" section of KIP. What
>> should be our approach to deal with the classes and tests which use these
>> configs: we need to change them to use JsonConverter when we plan for
>> removal of these configs right?
>>
>
> I actually meant the references in config/connect-standalone.properties
> and config/connect-distributed.properties
>
>
>> 2) I believe we can target the deprecation in 1.0.0 release as it is
>> planned in October 2017 and then removal in next major release. Let me
>> know your thoughts as we don't have any information for next major release
>> (next to 1.0.0) yet.
>>
>
> That sounds fine. Tough to say at this point what our approach to major
> version bumps will be since the approach to version numbering is changing a
> bit.
>
>
>> 3) Thats a good point and mentioned JIRA can help us to validate the
>> usage of any other converters. I will list this down in the KIP.
>>
>> Let me know if you have some additional thoughts on this.
>>
>> Regards,
>> Umesh
>>
>>
>>
>> On Wed, 26 Jul 2017 at 09:27 Ewen Cheslack-Postava 
>> wrote:
>>
>>> Umesh,
>>>
>>> Thanks for the KIP. Straightforward and I think it's a good change.
>>> Unfortunately it is hard to tell how many people it would affect since we
>>> can't tell how many people have adjusted that config, but I think this is
>>> the right thing to do long term.
>>>
>>> A couple of quick things that might be helpful to refine:
>>>
>>> * Note that there are also some references in the example configs that we
>>> should remove.
>>> * It's nice to be explicit about when the removal is planned. This lets
>>> us
>>> set expectations with users for timeframe (especially now that we have
>>> time
>>> based releases), allows us to give info about the removal timeframe in
>>> log
>>> error messages, and lets us file a JIRA against that release so we
>>> remember
>>> to follow up. Given the update to 1.0.0 for the next release, we may also
>>> need to adjust how we deal with deprecations/removal if we don't want to
>>> have to wait all the way until 2.0 to remove (though it is unclear how
>>> exactly we will be handling version bumps from now on).
>>> * Migration path -- I think this is the major missing gap in the KIP. Do
>>> we
>>> need a migration path? If not, presumably it is because people aren't
>>> using
>>> any other converters in practice. Do we have some way of validating this
>>> (
>>> https://issues.apache.org/jira/browse/KAFKA-3988 might be pretty
>>> convincing
>>> evidence)? If there are some users using other converters, how would they
>>> migrate to newer versions which would no longer support that?
>>>
>>> -Ewen
>>>
>>>
>>> On Fri, Jul 14, 2017 at 2:37 AM, UMESH CHAUDHARY 
>>> wrote:
>>>
>>> > Hi there,
>>> > Resending as probably missed earlier to grab your attention.
>>> >
>>> > Regards,
>>> > Umesh
>>> >
>>> > -- Forwarded message -
>>> > From: UMESH CHAUDHARY 
>>> > Date: Mon, 3 Jul 2017 at 11:04
>>> > Subject: [DISCUSS] KIP-174 - Deprecate and remove internal converter
>>> > configs in WorkerConfig
>>> > To: dev@kafka.apache.org 
>>> >
>>> >
>>> > Hello All,
>>> > I have added a KIP recently to deprecate and remove internal converter
>>> > configs in WorkerConfig.java class because these have ultimately just
>>> > caused a lot more trouble and confusion than it is worth.
>>> >
>>> > Please find the KIP here
>>> > >> > 174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig>
>>> > and
>>> > the related JIRA here <
>>> https://issues.apache.org/jira/browse/KAFKA-5540>.
>>> >
>>> > Appreciate your review and comments.
>>> >
>>> > Regards,
>>> > Umesh
>>> >
>>>
>>


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-08-08 Thread Tom Bentley
> >
> > Also, how do you think things would work in the context of KIP-179? Would
> > the tool still invoke these requests or would it be done by the broker
> > receiving the alterTopics/reassignPartitions protocol call?
> >
>
> My gut feel is that the tool will still invoke these requests. But I have a
> few questions to KIP-179 before I can answer this question. For example, is
> AlterTopicsRequest request sent to controller only? If the new assignment
> is not written in zookeeper, how is this information propagated to the new
> controller if the previous controller dies after it receives
> AlterTopicsRequest but before it sends LeaderAndIsrRequest? I can post
> these questions in that discussion thread later.
>
>
Let me answer here (though it's relevant to both KIPs):

As I originally envisaged it, KIP-179's support for reassigning partitions
would have more-or-less taken the logic currently in the
ReassignPartitionsCommand (that is, writing JSON to the
ZkUtils.ReassignPartitionsPath)
and put it behind a suitable network protocol API. Thus it wouldn't matter
which broker received the protocol call: It would be acted on by brokers
being notified of the change in the ZK path, just as currently. This would
have kept the ReassignPartitionsCommand relatively simple, as it currently
is.

KIP-113 is obviously seeking to make more radical changes. The algorithm
described for moving a replica to a particular directory on a different
broker (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories#KIP-113:Supportreplicasmovementbetweenlogdirectories-2)Howtoreassignreplicabetweenlogdirectoriesacrossbrokers
)
involves both sending AlterReplicaDirRequest to "the" broker (the receiving
broker, I assume, but it's not spelled out), _as well as_ writing to the ZK
node.

This assumes the script (ReassignPartitionsCommand) has direct access to
ZooKeeper, which is what KIP-179 is seeking to deprecate. It seems a waste
of time to put the logic in the script as part of KIP-113, only for KIP-179
to have to move it back to the controller.


[VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-08-08 Thread Paolo Patierno
Hi devs,


I didn't see any more comments about this KIP. The JIRAs related to the first 
step (i.e. marking --new-consumer as deprecated, with warning messages) are merged.

I'd like to start a vote for this KIP.


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-08-08 Thread Tom Bentley
Hi Dong,

Thanks for your reply.

Yeah I agree with you that the total disk capacity can be useful
> particularly if it is different across brokers but it is probably of
> limited use in most cases. I also expect that most users would have their
> own customized tool across to determine the new partition reassignment
> after retrieving the partition distribution using DescribeDirsRequest.


By not providing a tool, you're just forcing people to write their own. So
your expectation will be self-fulfilling. Surely it would be better if the
project provided a tool (perhaps one which did the boring bits and gave
people the option to provide their own customized algorithm).


> And
> that customized tool can probably be easily provided with the configuration
> (e.g. disk capacity, IO parameters) of the disks in the cluster when user
> runs it.
>

Sure, but it would be better if a tool could discover this for itself. At
best you're forcing people into getting the information out-of-band (e.g.
via JMX), but worse would be if they end up using static data that doesn't
change as their cluster evolves over time.


> I am relatively neural on whether or not we should add this field. If there
> is no strong reason to add this field, I will add it if one or more
> committer recommends to do this.
>

I don't think we should add it to KIP-113: It could be added at a later
date easily enough.

Cheers,

Tom


Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-08-08 Thread Tom Bentley
The KIP is here for anyone, like me, who hasn't seen it yet:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-176:+Remove+deprecated+new-consumer+option+for+tools

Paolo, the KIP says "On the next release cycle we could totally remove the
option." Exactly which release are you proposing that be?

Thanks,

Tom

On 8 August 2017 at 11:53, Paolo Patierno  wrote:

> Hi devs,
>
>
> I didn't see any more comments about this KIP. The JIRAs related to the
> first step (so making --new-consumer as deprecated with warning messages)
> are merged.
>
> I'd like to start a vote for this KIP.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

2017-08-08 Thread Paolo Patierno
Hi Tom,

good question.


Due to the new versioning policies, the deprecation will be in the coming 1.0.0 
release, but then, since removing the option is a breaking API change, the removal 
should be in 2.0.0 I guess.


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Tom Bentley 
Sent: Tuesday, August 8, 2017 11:00 AM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-176 : Remove deprecated new-consumer option for tools

The KIP is here for anyone, like me, who hasn't seen it yet:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-176:+Remove+deprecated+new-consumer+option+for+tools

Paolo, the KIP says "On the next release cycle we could totally remove the
option." Exactly which release are you proposing that be?

Thanks,

Tom

On 8 August 2017 at 11:53, Paolo Patierno  wrote:

> Hi devs,
>
>
> I didn't see any more comments about this KIP. The JIRAs related to the
> first step (so making --new-consumer as deprecated with warning messages)
> are merged.
>
> I'd like to start a vote for this KIP.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Windows Embedded & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>


Build failed in Jenkins: kafka-0.11.0-jdk7 #262

2017-08-08 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-5658; Fix AdminClient request timeout handling bug resulting in

--
[...truncated 859.11 KB...]

kafka.api.AdminClientWithPoliciesIntegrationTest > testValidAlterConfigs STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > testValidAlterConfigs PASSED

kafka.api.AdminClientWithPoliciesIntegrationTest > 
testInvalidAlterConfigsDueToPolicy STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > 
testInvalidAlterConfigsDueToPolicy PASSED

kafka.api.AdminClientIntegrationTest > testInvalidAlterConfigs STARTED

kafka.api.AdminClientIntegrationTest > testInvalidAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testClose STARTED

kafka.api.AdminClientIntegrationTest > testClose PASSED

kafka.api.AdminClientIntegrationTest > testMinimumRequestTimeouts STARTED

kafka.api.AdminClientIntegrationTest > testMinimumRequestTimeouts PASSED

kafka.api.AdminClientIntegrationTest > testForceClose STARTED

kafka.api.AdminClientIntegrationTest > testForceClose PASSED

kafka.api.AdminClientIntegrationTest > testListNodes STARTED

kafka.api.AdminClientIntegrationTest > testListNodes PASSED

kafka.api.AdminClientIntegrationTest > testDelayedClose STARTED

kafka.api.AdminClientIntegrationTest > testDelayedClose PASSED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.AdminClientIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.AdminClientIntegrationTest > testAclOperations STARTED

kafka.api.AdminClientIntegrationTest > testAclOperations PASSED

kafka.api.AdminClientIntegrationTest > testDescribeCluster STARTED

kafka.api.AdminClientIntegrationTest > testDescribeCluster PASSED

kafka.api.AdminClientIntegrationTest > testDescribeNonExistingTopic STARTED

kafka.api.AdminClientIntegrationTest > testDescribeNonExistingTopic PASSED

kafka.api.AdminClientIntegrationTest > testDescribeAndAlterConfigs STARTED

kafka.api.AdminClientIntegrationTest > testDescribeAndAlterConfigs PASSED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.AdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec STARTED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testAcls PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserQuotaTest > testThrottledRequest STARTED

kafka.api.UserQuotaTest > testThrottledRequest PASSED

kafka.api

[jira] [Created] (KAFKA-5711) Bulk Restore Should Handle Deletes

2017-08-08 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-5711:
--

 Summary: Bulk Restore Should Handle Deletes
 Key: KAFKA-5711
 URL: https://issues.apache.org/jira/browse/KAFKA-5711
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 1.0.0


During bulk restoration, null values indicate a delete (possibly not yet compacted) 
and should be deleted from RocksDB accordingly.
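
A minimal sketch of the behaviour being asked for (illustrative only, not the actual Streams restore code; the class and helper names are made up), assuming a plain RocksDB handle:

{code}
// Illustrative sketch: treat a null value as a tombstone during bulk restore.
import java.util.Collection;
import java.util.Map;

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BulkRestoreSketch {

    // 'records' stands in for the restored key/value pairs handed to the state store.
    static void restoreAll(final RocksDB db,
                           final Collection<Map.Entry<byte[], byte[]>> records)
            throws RocksDBException {
        for (final Map.Entry<byte[], byte[]> record : records) {
            if (record.getValue() == null) {
                db.delete(record.getKey());                  // tombstone: remove the key
            } else {
                db.put(record.getKey(), record.getValue());  // normal restore path
            }
        }
    }
}
{code}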



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3643: MINOR: Error when incompatible console producer co...

2017-08-08 Thread aishraj
GitHub user aishraj opened a pull request:

https://github.com/apache/kafka/pull/3643

MINOR: Error when incompatible console producer configs.

Spun off from 
[KAFKA-2526](https://issues.apache.org/jira/browse/KAFKA-2526): throw an 
exception if the console producer is passed a key serializer or a value 
serializer.
To quote a comment on the JIRA:
> The longer term solution with a KIP is obviously a lot more involved, but 
you should feel free to work on it; it would be a useful improvement. Since 
there would be two separate steps, you'd probably want to either file a 
separate JIRA for the better error messages and leave this one to the KIP or 
just file that fix as a MINOR PR.

Hence opening as a MINOR PR
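
For reference, a minimal sketch of the kind of check being described (not the actual patch; the class name is made up, the config keys are the standard ProducerConfig ones):

    // Sketch only: reject console-producer configs that try to override the
    // serializers the console producer always sets itself.
    import java.util.Properties;

    import org.apache.kafka.clients.producer.ProducerConfig;

    final class ConsoleProducerConfigCheck {
        static void validate(final Properties props) {
            if (props.containsKey(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG)
                    || props.containsKey(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG)) {
                throw new IllegalArgumentException(
                    "key.serializer and value.serializer cannot be overridden for the console producer");
            }
        }
    }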

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aishraj/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3643.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3643


commit d6f22459e59ff631849b6bec193ea0f9e33eb3fa
Author: Aish Raj Dahal 
Date:   2017-08-08T14:41:09Z

MINOR: Error when incompatible console producer configs.

Spinned off from KAFKA-2526, throw an exception if the console producer is 
passed a key serializer or a value serializer.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Kafka Client for Swift 4

2017-08-08 Thread Jason Gustafson
You should be able to edit the wiki now. Thanks for the contribution!

-Jason

On Mon, Aug 7, 2017 at 1:54 PM, Kellan Burket Cummings <
kellan.bur...@gmail.com> wrote:

> Thanks, Jason,
> It's kellan.burket
> - Kellan
>
> On Thu, Aug 3, 2017 at 6:39 PM, Jason Gustafson 
> wrote:
>
> > Hi Kellan,
> >
> > Looks cool. If you tell me your user id, I can give you access to edit
> the
> > wiki directly.
> >
> > -Jason
> >
> > On Fri, Jul 28, 2017 at 1:41 PM, Kellan Burket Cummings <
> > kellan.bur...@gmail.com> wrote:
> >
> > > Here it is: https://github.com/kellanburket/franz
> > >
> > > Can someone with access to the Wiki please add it to the third-party
> > > clients page?
> > >
> > > Thanks,
> > > Kellan
> > >
> >
>


[jira] [Reopened] (KAFKA-5705) Kafka Server start failed and reports "unsafe memory access operation"

2017-08-08 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reopened KAFKA-5705:


> Kafka Server start failed and reports "unsafe memory access operation"
> --
>
> Key: KAFKA-5705
> URL: https://issues.apache.org/jira/browse/KAFKA-5705
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Chen He
>
> [2017-08-02 15:50:23,361] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> java.lang.InternalError: a fault occurred in a recent unsafe memory access 
> operation in compiled Java code
> at 
> kafka.log.TimeIndex$$anonfun$maybeAppend$1.apply$mcV$sp(TimeIndex.scala:128)
> at 
> kafka.log.TimeIndex$$anonfun$maybeAppend$1.apply(TimeIndex.scala:107)
> at 
> kafka.log.TimeIndex$$anonfun$maybeAppend$1.apply(TimeIndex.scala:107)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
> at kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:107)
> at kafka.log.LogSegment.recover(LogSegment.scala:252)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:231)
> at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> at kafka.log.Log.loadSegments(Log.scala:188)
> at kafka.log.Log.(Log.scala:116)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:157)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-08-08 Thread Dong Lin
Thanks for your reply.

Yes, my original idea is that users can continue to collect the static
information for reassignment as they are doing now. It is the status quo. I
agree it can be beneficial to have a tool in Kafka to collect the other
information that may be needed for reassignment, so that users would not need
external tools at all. We can discuss this later when someone wants to lead
the design and discussion of such a KIP.

I have some questions inline.

On Tue, Aug 8, 2017 at 3:31 AM, Tom Bentley  wrote:

> > >
> > > Also, how do you think things would work in the context of KIP-179?
> Would
> > > the tool still invoke these requests or would it be done by the broker
> > > receiving the alterTopics/reassignPartitions protocol call?
> > >
> >
> > My gut feel is that the tool will still invoke these requests. But I
> have a
> > few questions to KIP-179 before I can answer this question. For example,
> is
> > AlterTopicsRequest request sent to controller only? If the new assignment
> > is not written in zookeeper, how is this information propagated to the
> new
> > controller if the previous controller dies after it receives
> > AlterTopicsRequest but before it sends LeaderAndIsrRequest? I can post
> > these questions in that discussion thread later.
> >
> >
> Let me answer here (though it's relevant to both KIPs):
>
> As I originally envisaged it, KIP-179's support for reassigning partitions
> would have more-or-less taken the logic currently in the
> ReassignPartitionsCommand (that is, writing JSON to the
> ZkUtils.ReassignPartitionsPath)
> and put it behind a suitable network protocol API. Thus it wouldn't matter
> which broker received the protocol call: It would be acted on by brokers
> being notified of the change in the ZK path, just as currently. This would
> have kept the ReassignPartitionsCommand relatively simple, as it currently
> is.
>

I am not sure I fully understand your proposal. I think you are saying that
any broker can receive and handle the AlterTopicRequest. Let's say a
non-controller broker receives an AlterTopicRequest: is this broker going to
send LeaderAndIsrRequest to other brokers? Or is this broker going to create the
reassignment znode in ZooKeeper? I may have missed it, but I couldn't find
the explanation of AlterTopicRequest handling in KIP-179.



>
> KIP-113 is obviously seeking to make more radical changes. The algorithm
> described for moving a replica to a particular directory on a different
> broker (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> Supportreplicasmovementbetweenlogdirectories-2)
> Howtoreassignreplicabetweenlogdirectoriesacrossbrokers
>  113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> Supportreplicasmovementbetweenlogdirectories-2%
> 29Howtoreassignreplicabetweenlogdirectoriesacrossbrokers>)
> involves both sending AlterReplicaDirRequest to "the" broker (the receiving
> broker, I assume, but it's not spelled out), _as well as_ writing to the ZK
> node.
>
> This assumes the script (ReassignPartitionsCommand) has direct access to
> ZooKeeper, which is what KIP-179 is seeking to deprecate. It seems a waste
> of time to put the logic in the script as part of KIP-113, only for KIP-179
> to have to move it back to the controller.
>

I am not sure I understand what you mean by "It seems a waste of time to
put the logic in the script as part of KIP-113, only for KIP-179 to have to
move it back to the controller". I assume that the logic you mentioned is the
movement of a replica to the specified log directory. This logic (or the
implementation of this logic) resides mainly in the KafkaAdminClient and the
broker. The script only needs to parse the JSON file as appropriate and
call the new API in the AdminClient as appropriate. The logic in the script is
therefore not much and can easily be moved to other classes if needed.

Can you clarify why this logic, i.e. the movement of a replica to the specified
log directory, needs to be moved to the controller in KIP-179? I think it can
still be done in the script, and the controller should not need to worry about
the log directory of any replica.

Thanks,
Dong


[jira] [Created] (KAFKA-5712) Expose ProducerConfig in the KafkaProducer clients

2017-08-08 Thread Leo Xuzhang Lin (JIRA)
Leo Xuzhang Lin created KAFKA-5712:
--

 Summary: Expose ProducerConfig in the KafkaProducer clients
 Key: KAFKA-5712
 URL: https://issues.apache.org/jira/browse/KAFKA-5712
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Leo Xuzhang Lin
Priority: Trivial


There is no easy way to introspect a producer's configuration.

For important configurations such as `acks` and `retries`, it is useful to be 
able to verify the values of those configurations programmatically.

Since the ProducerConfig object is read-only, there seems to be no harm in 
exposing it via a getter.
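
A rough sketch of the kind of accessor being requested (hypothetical; KafkaProducer does not expose anything like this today, and the interface/method names below are made up):

{code}
import org.apache.kafka.clients.producer.ProducerConfig;

// Hypothetical accessor suggested by this ticket, not part of the current client API.
public interface ProducerConfigView {

    // Would return the read-only configuration the producer was constructed with.
    ProducerConfig producerConfig();
}
{code}

A caller could then verify a setting with something like producerConfig().getString(ProducerConfig.ACKS_CONFIG), since ProducerConfig inherits its getters from AbstractConfig.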



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-08-08 Thread Tom Bentley
Hi Dong,

Replies inline, as usual

> As I originally envisaged it, KIP-179's support for reassigning partitions
>
> would have more-or-less taken the logic currently in the
> > ReassignPartitionsCommand (that is, writing JSON to the
> > ZkUtils.ReassignPartitionsPath)
> > and put it behind a suitable network protocol API. Thus it wouldn't
> matter
> > which broker received the protocol call: It would be acted on by brokers
> > being notified of the change in the ZK path, just as currently. This
> would
> > have kept the ReassignPartitionsCommand relatively simple, as it
> currently
> > is.
> >
>
> I am not sure I fully understand your proposal. I think you are saying that
> any broker can receive and handle the AlterTopicRequest.


That's right.


> Let's say a
> non-controller broker received AlterTopicRequest, is this broker going to
> send LeaderAndIsrRequest to other brokers? Or is this broker create the
> reassignment znode in zookeper?


Exactly: It's going to write some JSON to the relevant znode. Other brokers
will get notified by zk when the contents of this znode changes, and do as
they do now. This is what the tool/script does now.

I will confess that I don't completely understand the role of
LeaderAndIsrRequest, since the current code just seems to write to the
znode to get the brokers to do the reassignment. If you could explain the
role of LeaderAndIsrRequest, that would be great.


> I may have missed it. But I couldn't find
> the explanation of AlterTopicRequest handling in KIP-179.
>

You're right, it doesn't go into that much detail. I will fix that.


> >
> > KIP-113 is obviously seeking to make more radical changes. The algorithm
> > described for moving a replica to a particular directory on a different
> > broker (
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> > Supportreplicasmovementbetweenlogdirectories-2)
> > Howtoreassignreplicabetweenlogdirectoriesacrossbrokers
> >  > 113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> > Supportreplicasmovementbetweenlogdirectories-2%
> > 29Howtoreassignreplicabetweenlogdirectoriesacrossbrokers>)
> > involves both sending AlterReplicaDirRequest to "the" broker (the
> receiving
> > broker, I assume, but it's not spelled out), _as well as_ writing to the
> ZK
> > node.
> >
> > This assumes the script (ReassignPartitionsCommand) has direct access to
> > ZooKeeper, which is what KIP-179 is seeking to deprecate. It seems a
> waste
> > of time to put the logic in the script as part of KIP-113, only for
> KIP-179
> > to have to move it back to the controller.
> >
>
> I am not sure I understand what you mean by "It seems a waste of time to
> put the logic in the script as part of KIP-113, only for KIP-179 to have to
> move it back to the controller".


Sorry, I misunderstood slightly what you were proposing in KIP-113, so the
"waste of time" comment isn't quite right, but I'm still not convinced that
KIP-113+KIP-179 (in its current form) ends with a satisfactory result.

Let me elaborate... KIP-113 says that to support reassigning replica
between log directories across brokers:
* ...
* The script sends AlterReplicaDirRequest to those brokers which need to
move replicas...
* The script creates reassignment znode in zookeeper.
* The script retries AlterReplicaDirRequest to those broker...
* ...

So the ReassignPartitionsCommand still talks to ZK directly, but now it's
bracketed by calls to the AdminClient. KIP-179 could replace that talking
to ZK directly with a new call to the AdminClient. But then we've got a
pretty weird API, where we have to make three AdminClient calls (two of
them to the same method), to move a replica. I don't really understand why
the admin client can't present a single API method to achieve this, and
encapsulate on the server side the careful sequence of events necessary to
coordinate the movement. I understood this position is what Ismael was
advocating when he said it was better to put the logic in the controller
than spread between the script and the controller. But maybe I
misunderstood him.



> I assume that the logic you mentioned is
> "movement of replica to the specified log directory". This logic (or the
> implementation of this logic) resides mainly in the KafkaAdminClient and
> broker. The script only needs to parse the json file as appropriate and
> call the new API in AdminClient as appropriate. The logic in the script is
> therefore not much and can be easily moved to other classes if needed.
>
> Can you clarify why this logic, i.e. movement of replica to the specified
> log directory, needs to be moved to controller in KIP-179? I think it can
> still be done in the script and controller should not need to worry about
> log directory of any replica.
>
> Thanks,
> Dong
>


[jira] [Created] (KAFKA-5713) Improve '--group' option to understand strings with wildcards

2017-08-08 Thread Alla Tumarkin (JIRA)
Alla Tumarkin created KAFKA-5713:


 Summary: Improve '--group' option to understand strings with 
wildcards
 Key: KAFKA-5713
 URL: https://issues.apache.org/jira/browse/KAFKA-5713
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Alla Tumarkin


Request
Implement additional functionality for the '--group' option to be able to take a 
string/wildcard combination, e.g.:
{code}
bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add 
--allow-principal 
"User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" 
--consumer --topic test --group mygroup*
{code}
in order to allow different group names that start with mygroup, e.g.:
{code}
kafka-console-consumer --zookeeper localhost:2181 --topic test 
--consumer-property group.id=mygroup1
{code}

Background
Current functionality only permits specifying an exact group name, as in 
"--group mygroup", or any group, as in "group *".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3644: KAFKA-5711: batch restore should handle deletes

2017-08-08 Thread bbejeck
GitHub user bbejeck opened a pull request:

https://github.com/apache/kafka/pull/3644

KAFKA-5711: batch restore should handle deletes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbejeck/kafka 
KAFKA-5711_bulk_restore_should_handle_deletes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3644.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3644


commit 10f7be8c08ed18476120be2fc010f93d9caeb3d8
Author: Bill Bejeck 
Date:   2017-08-08T18:12:11Z

KAFKA-5711: batch restore should handle deletes




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-08-08 Thread Dong Lin
Hey Tom,

Thanks for the quick reply. Please see my comment inline.

On Tue, Aug 8, 2017 at 11:06 AM, Tom Bentley  wrote:

> Hi Dong,
>
> Replies inline, as usual
>
> > As I originally envisaged it, KIP-179's support for reassigning
> partitions
> >
> > would have more-or-less taken the logic currently in the
> > > ReassignPartitionsCommand (that is, writing JSON to the
> > > ZkUtils.ReassignPartitionsPath)
> > > and put it behind a suitable network protocol API. Thus it wouldn't
> > matter
> > > which broker received the protocol call: It would be acted on by
> brokers
> > > being notified of the change in the ZK path, just as currently. This
> > would
> > > have kept the ReassignPartitionsCommand relatively simple, as it
> > currently
> > > is.
> > >
> >
> > I am not sure I fully understand your proposal. I think you are saying
> that
> > any broker can receive and handle the AlterTopicRequest.
>
>
> That's right.
>
>
> > Let's say a
> > non-controller broker received AlterTopicRequest, is this broker going to
> > send LeaderAndIsrRequest to other brokers? Or is this broker create the
> > reassignment znode in zookeper?
>
>
> Exactly: It's going to write some JSON to the relevant znode. Other brokers
> will get notified by zk when the contents of this znode changes, and do as
> they do now. This is what the tool/script does now.
>
> I will confess that I don't completely understand the role of
> LeaderAndIsrRequest, since the current code just seems to write to the
> znode do get the brokers to do the reassignment. If you could explain the
> role of LeaderAndIsrRequest that would be great.
>

Currently only the controller listens to the reassignment znode and
sends LeaderAndIsrRequest and StopReplicaRequest to brokers in order to
complete reassignment. Brokers don't need to listen to ZooKeeper for any
reassignment -- brokers only react to requests from the controller.
Currently Kafka's design relies a lot on the controller to keep a
consistent view of who the leaders of partitions are, what the ISR is,
etc. It would be a pretty drastic change, if not impossible, for the script
to reassign partitions without going through the controller.

Thus I think it is likely that your AlterTopicsRequest can only be sent to the
controller. Then the controller can create the reassignment znode in
ZooKeeper so that the information is persisted across controller failover.
I haven't thought this through in detail, though.



>
>
> > I may have missed it. But I couldn't find
> > the explanation of AlterTopicRequest handling in KIP-179.
> >
>
> You're right, it doesn't go into that much detail. I will fix that.
>
>
> > >
> > > KIP-113 is obviously seeking to make more radical changes. The
> algorithm
> > > described for moving a replica to a particular directory on a different
> > > broker (
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> > > Supportreplicasmovementbetweenlogdirectories-2)
> > > Howtoreassignreplicabetweenlogdirectoriesacrossbrokers
> > >  > > 113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> > > Supportreplicasmovementbetweenlogdirectories-2%
> > > 29Howtoreassignreplicabetweenlogdirectoriesacrossbrokers>)
> > > involves both sending AlterReplicaDirRequest to "the" broker (the
> > receiving
> > > broker, I assume, but it's not spelled out), _as well as_ writing to
> the
> > ZK
> > > node.
> > >
> > > This assumes the script (ReassignPartitionsCommand) has direct access
> to
> > > ZooKeeper, which is what KIP-179 is seeking to deprecate. It seems a
> > waste
> > > of time to put the logic in the script as part of KIP-113, only for
> > KIP-179
> > > to have to move it back to the controller.
> > >
> >
> > I am not sure I understand what you mean by "It seems a waste of time to
> > put the logic in the script as part of KIP-113, only for KIP-179 to have
> to
> > move it back to the controller".
>
>
> Sorry, I misunderstood slightly what you were proposing in KIP-113, so the
> "waste of time" comment isn't quite right, but I'm still not convinced that
> KIP-113+KIP-179 (in its current form) ends with a satisfactory result.
>
> Let me elaborate... KIP-113 says that to support reassigning replica
> between log directories across brokers:
> * ...
> * The script sends AlterReplicaDirRequest to those brokers which need to
> move replicas...
> * The script creates reassignment znode in zookeeper.
> * The script retries AlterReplicaDirRequest to those broker...
> * ...
>
> So the ReassignPartitionsCommand still talks to ZK directly, but now it's
> bracketed by calls to the AdminClient. KIP-179 could replace that talking
> to ZK directly with a new call to the AdminClient. But then we've got a
> pretty weird API, where we have to make three AdminClient calls (two of
> them to the same method), to move a replica. I don't really understand why
> the admin client ca

[jira] [Created] (KAFKA-5714) Allow whitespaces in the principal name

2017-08-08 Thread Alla Tumarkin (JIRA)
Alla Tumarkin created KAFKA-5714:


 Summary: Allow whitespaces in the principal name
 Key: KAFKA-5714
 URL: https://issues.apache.org/jira/browse/KAFKA-5714
 Project: Kafka
  Issue Type: Improvement
  Components: security
Affects Versions: 0.10.2.1
Reporter: Alla Tumarkin


Request
Improve parser behavior to allow whitespaces in the principal name in the 
config file, as in:
{code}
super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
C=Unknown
{code}

Background
The current implementation requires that there be no whitespace after commas, i.e.
{code}
super.users=User:CN=Unknown,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
{code}

Note: having a semicolon at the end doesn't help, i.e. this does not work either
{code}
super.users=User:CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, 
C=Unknown;
{code}
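
One possible normalization (illustrative only, not a proposed patch) would be to strip the whitespace following commas before comparing principal names:

{code}
// Illustrative only: normalize "CN=Unknown, OU=Unknown, ..." to "CN=Unknown,OU=Unknown,...".
final class PrincipalNameNormalizer {
    static String normalize(final String distinguishedName) {
        return distinguishedName.replaceAll(",\\s+", ",");
    }
}
{code}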



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-182: Reduce Streams DSL overloads and allow easier use of custom storage engines

2017-08-08 Thread Guozhang Wang
Damian,

Thanks for the proposal, I had a few comments on the APIs:

1. Printed#withFile seems not needed, as users should always specify whether it is
to sysOut or to a file at the beginning. In addition, as a second thought, I
think serdes are not useful for print anyway, since we assume `toString`
is provided, except for byte arrays, which we will handle specially.

Another comment about Printed in general: it differs from the other options
in that it is a required option rather than an optional one, since it includes the
toSysOut / toFile specs. What are the pros and cons of including these two in the
option (and hence making it a required option) versus leaving them at the API
layer and making Printed optional, for the mapper / label only?


2.1 KStream#through / to

We should have an overloaded function without Produced?

2.2 KStream#groupBy / groupByKey

We should have an overloaded function without Serialized?

2.3 KGroupedStream#count / reduce / aggregate

We should have an overloaded function without Materialized?

2.4 KStream#join

We should have an overloaded function without Joined?


2.5 Each of KTable's operators:

We should have an overloaded function without Produced / Serialized /
Materialized?



3.1 Produced: the static functions have overlaps, which seems unnecessary. I'd
suggest just having the following three static constructors with another
three similar member functions:

public static <K, V> Produced<K, V> withKeySerde(final Serde<K> keySerde)

public static <K, V> Produced<K, V> withValueSerde(final Serde<V> valueSerde)

public static <K, V> Produced<K, V> withStreamPartitioner(final StreamPartitioner<K, V> partitioner)

The key idea is that, by using the same function names for the static
constructors and the member functions, users do not need to remember the
differences but can call these functions in any order they want,
and later calls on the same spec will win over earlier calls.


3.2 Serialized: similarly

public static <K, V> Serialized<K, V> withKeySerde(final Serde<K> keySerde)

public static <K, V> Serialized<K, V> withValueSerde(final Serde<V> valueSerde)

public Serialized<K, V> withKeySerde(final Serde<K> keySerde)

public Serialized<K, V> withValueSerde(final Serde<V> valueSerde)

Also, it has a final Serde otherValueSerde in one of its static
constructors; is that intentional?

3.3. Joined: similarly, keep the static constructor signatures the same as
its corresponding member fields.

3.4 Materialized: it is a bit special, and I think we can keep its static
constructors with only the two `as` variants as they are today.


4. Are there any modifications to StateStoreSupplier? Is it replaced by
BytesStoreSupplier? It seems some more description is lacking here. Also, in

public static  Materialized as(final StateStoreSupplier supplier)

is the parameter of type BytesStoreSupplier?




Guozhang


On Thu, Jul 27, 2017 at 5:26 AM, Damian Guy  wrote:

> Updated link:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 182%3A+Reduce+Streams+DSL+overloads+and+allow+easier+
> use+of+custom+storage+engines
>
> Thanks,
> Damian
>
> On Thu, 27 Jul 2017 at 13:09 Damian Guy  wrote:
>
> > Hi,
> >
> > I've put together a KIP to make some changes to the KafkaStreams DSL that
> > will hopefully allow us to:
> > 1) reduce the explosion of overloads
> > 2) add new features without having to continue adding more overloads
> > 3) provide simpler ways for people to use custom storage engines and wrap
> > them with logging, caching etc if desired
> > 4) enable per-operator caching rather than global caching without having
> > to resort to supplying a StateStoreSupplier when you just want to turn
> > caching off.
> >
> > The KIP is here:
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=73631309
> >
> > Thanks,
> > Damian
> >
>



-- 
-- Guozhang


Way to check if custom SMT has been added to the classpath or even if it is working.

2017-08-08 Thread satyajit vegesna
Hi All,

I have created a custom SMT and have deployed it.
I would like to know if there is a way to check whether the transform is working
or not (it is definitely not working, as the messages are not getting transformed).

I am also trying to remote debug using IntelliJ and nothing seems to be working,
as I do not see any control hitting the debug points.

When I check the connector list using curl localhost:8083/connector-plugins,
I see all the other connector plugins but not the SMT-related ones.
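
For reference, a custom SMT has to implement the Connect Transformation interface and be referenced from the connector config via transforms=<name> and transforms.<name>.type=<class>; a minimal skeleton looks roughly like this (a sketch only, the class name is made up):

    import java.util.Map;

    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.ConnectRecord;
    import org.apache.kafka.connect.transforms.Transformation;

    // Minimal no-op SMT skeleton (illustrative only).
    public class MyPassThroughTransform<R extends ConnectRecord<R>> implements Transformation<R> {
        @Override public void configure(final Map<String, ?> configs) { }
        @Override public R apply(final R record) { return record; }   // no-op: record unchanged
        @Override public ConfigDef config() { return new ConfigDef(); }
        @Override public void close() { }
    }

Note that the /connector-plugins endpoint only lists connector classes, not transformations, so the SMT not showing up there is expected.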

Regards.


[GitHub] kafka pull request #3572: MINOR: Remove unused GroupState.state field

2017-08-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3572


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3613: KAFKA-2360: Extract producer-specific configs out ...

2017-08-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3613


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3645: MINOR: support retrieving cluster_id in system tes...

2017-08-08 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/3645

MINOR: support retrieving cluster_id in system tests

@ewencp 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka system-test-cluster-id

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3645.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3645






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3646: MINOR: Remove unneeded error handlers in deprecate...

2017-08-08 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3646

MINOR: Remove unneeded error handlers in deprecated request objects

These handlers were previously used on the broker to handle uncaught 
exceptions, but now the broker users the new Java request objects exclusively.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka 
remove-old-request-error-handlers

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3646.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3646


commit 51546938116c8820a078f4ebed0b52a8d3916be0
Author: Jason Gustafson 
Date:   2017-08-09T00:10:00Z

MINOR: Remove unneeded error handlers in deprecated request objects




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3619: MINOR: Update dependencies for 1.0.0 release

2017-08-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3619


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[DISCUSS] KIP-185: Make exactly once in order delivery per partition the default producer setting

2017-08-08 Thread Apurva Mehta
Hi,

I've put together a new KIP which proposes to ship Kafka with its strongest
delivery guarantees by default.

We currently ship with at most once semantics and don't provide any
ordering guarantees per partition. The proposal is to provide exactly-once,
in-order delivery per partition by default in the upcoming 1.0.0
release.
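
For reference, these are the producer settings involved; today they have to be opted into explicitly, roughly like this (a sketch with illustrative values, not the defaults the KIP would ship):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.ProducerConfig;

    final class IdempotentProducerProps {
        // What a user has to set today to get idempotent, in-order delivery per partition.
        static Properties props(final String bootstrapServers) {
            final Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.ACKS_CONFIG, "all");  // required when idempotence is enabled
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
            return props;
        }
    }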

The KIP linked to below also outlines the performance characteristics of
the proposed default.

The KIP is here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-185%3A+Make+exactly+once+in+order+delivery+per+partition+the+default+producer+setting

Please have a look, I would love your feedback!

Thanks,
Apurva


[GitHub] kafka pull request #3647: KAFKA-4501: Java 9 compilation fixes

2017-08-08 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3647

KAFKA-4501: Java 9 compilation fixes

Compilation fixes:
- Avoid ambiguity error when appending to Properties in Scala code
- Use position() and limit() to fix ambiguity issue
- Disable findBugs if Java 9 is used

Warning fixes:
- Avoid deprecated Class.newInstance in Utils.newInstance
- Silence a few Java 9 deprecation warnings
- var -> val and unused fixes

Also:
- Set --release option if building with Java 9

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-4501-support-java-9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3647.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3647


commit e9cde9fe45997b84f87985637deb2ec6872aaa4d
Author: Ismael Juma 
Date:   2017-07-21T12:14:03Z

Avoid ambiguity error when appending to Properties in Scala code

commit bce6848a5e03efb13e56b3a915e86d7f889ca16e
Author: Ismael Juma 
Date:   2017-07-21T12:14:40Z

Avoid deprecated Class.newInstance in Utils.newInstance

commit 5d36fb7ff6559e6bb9211fcfc665a69393fb829a
Author: Ismael Juma 
Date:   2017-07-21T12:26:41Z

Use position() and limit() to fix ambiguity issue

commit e1b9849da180da6bc8fe3fbc6b0895c4f3f9fc7c
Author: Ismael Juma 
Date:   2017-07-21T12:29:18Z

var -> val and unused fixes

commit 86d81ccc370ebbad3f6f542d6b5c92306a3810a0
Author: Ismael Juma 
Date:   2017-07-21T12:30:04Z

Silence a few Java 9 deprecation warnings

commit 1fca2f0acc53e71904aa0c37f871892e82679d1f
Author: Ismael Juma 
Date:   2017-08-01T10:05:54Z

Enable --release and disable findBugs if Java 9 is used




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[DISCUSS] KIP-186: Increase offsets retention default to 7 days

2017-08-08 Thread Ewen Cheslack-Postava
Hi all,

I posted a simple new KIP for a problem we see with a lot of users:
KIP-186: Increase offsets retention default to 7 days

https://cwiki.apache.org/confluence/display/KAFKA/KIP-186%3A+Increase+offsets+retention+default+to+7+days

Note that in addition to the KIP text itself, the linked JIRA already
existed and has a bunch of discussion on the subject.

-Ewen


Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-08-08 Thread Jun Rao
Hi, Dong,

I think Tom was suggesting having the AlterTopicsRequest sent to any
broker, which would just write the reassignment JSON to ZK. The controller will
pick up the reassignment and act on it as usual. This should work, right?

Having a separate AlterTopicsRequest and AlterReplicaDirRequest seems
simpler to me. The former is handled by the controller and the latter is
handled by the affected broker. They don't always have to be done together.
Merging the two into a single request probably will make both the api and
the implementation a bit more complicated. If we do keep the two separate
requests, it seems that we should just add AlterReplicaDirRequest to the
AdminClient interface?

Now, regarding DescribeDirsResponse. I agree that it can be used for the
status reporting in KIP-179 as well. However, it seems that reporting the
log end offset of each replica may not be easy to use. The log end offsets
will be returned from different brokers at slightly different times. If
there is continuous produce traffic, the difference in log end offset
between the leader and the follower could be larger than 0 even if the
follower has fully caught up. I am wondering if it's better to instead
return the lag in offsets per replica. This way, the status can probably be
reported more reliably.

Thanks,

Jun

On Tue, Aug 8, 2017 at 11:23 AM, Dong Lin  wrote:

> Hey Tom,
>
> Thanks for the quick reply. Please see my comment inline.
>
> On Tue, Aug 8, 2017 at 11:06 AM, Tom Bentley 
> wrote:
>
> > Hi Dong,
> >
> > Replies inline, as usual
> >
> > > As I originally envisaged it, KIP-179's support for reassigning
> > partitions
> > >
> > > would have more-or-less taken the logic currently in the
> > > > ReassignPartitionsCommand (that is, writing JSON to the
> > > > ZkUtils.ReassignPartitionsPath)
> > > > and put it behind a suitable network protocol API. Thus it wouldn't
> > > matter
> > > > which broker received the protocol call: It would be acted on by
> > brokers
> > > > being notified of the change in the ZK path, just as currently. This
> > > would
> > > > have kept the ReassignPartitionsCommand relatively simple, as it
> > > currently
> > > > is.
> > > >
> > >
> > > I am not sure I fully understand your proposal. I think you are saying
> > that
> > > any broker can receive and handle the AlterTopicRequest.
> >
> >
> > That's right.
> >
> >
> > > Let's say a
> > > non-controller broker received AlterTopicRequest, is this broker going
> to
> > > send LeaderAndIsrRequest to other brokers? Or is this broker create the
> > > reassignment znode in zookeper?
> >
> >
> > Exactly: It's going to write some JSON to the relevant znode. Other
> brokers
> > will get notified by zk when the contents of this znode changes, and do
> as
> > they do now. This is what the tool/script does now.
> >
> > I will confess that I don't completely understand the role of
> > LeaderAndIsrRequest, since the current code just seems to write to the
> > znode do get the brokers to do the reassignment. If you could explain the
> > role of LeaderAndIsrRequest that would be great.
> >
>
> Currently only the controller will listen to the reassignment znode and
> sends LeaderAndIsrRequest and StopReplicaRequest to brokers in order to
> complete reassignment. Brokers won't need to listen to zookeeper for any
> reassignment -- brokers only reacts to the request from controller.
> Currently Kafka's design replies a lot on the controller to keep a
> consistent view of who are the leader of partitions and what is the ISR
> etc. It will be a pretty drastic change, if not impossible, for the script
> to reassign partitions without going through controller.
>
> Thus I think it is likely that your AlterTopicsRequest can only be sent to
> controller. Then the controller can create the reassignment znode in
> zookeeper so that the information is persisted across controller fail over.
> I haven't think through this in detail though.
>
>
>
> >
> >
> > > I may have missed it. But I couldn't find
> > > the explanation of AlterTopicRequest handling in KIP-179.
> > >
> >
> > You're right, it doesn't go into that much detail. I will fix that.
> >
> >
> > > >
> > > > KIP-113 is obviously seeking to make more radical changes. The
> > algorithm
> > > > described for moving a replica to a particular directory on a
> different
> > > > broker (
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> > > > Supportreplicasmovementbetweenlogdirectories-2)
> > > > Howtoreassignreplicabetweenlogdirectoriesacrossbrokers
> > > >  > > > 113%3A+Support+replicas+movement+between+log+directories#KIP-113:
> > > > Supportreplicasmovementbetweenlogdirectories-2%
> > > > 29Howtoreassignreplicabetweenlogdirectoriesacrossbrokers>)
> > > > involves both sending AlterReplicaDirRequest to "the" broker (the
> > > receiving
> > > > broker, I assume, but it's not sp

Re: [DISCUSS] KIP-113: Support replicas movement between log directories

2017-08-08 Thread Dong Lin
Hey Jun,

Thanks for the comment!

Yes, it should work. The tool can send the request to any broker and the
broker can just write the reassignment znode. My previous intuition was that
it might be better to only send this request to the controller, but I don't
have good reasons for this restriction.

My intuition is that we can keep them separate as well. Becket and I have
discussed this both offline and in https://github.com/apache/kafka/pull/3621.
Currently I don't have a strong opinion on this and I am open to using only
one API to do both if someone can come up with a reasonable API signature
for this method. For now I have added the method alterReplicaDir() in
KafkaAdminClient instead of the AdminClient interface so that the
reassignment script can use this method without prematurely settling what
the API should look like in AdminClient in the future.

Regarding DescribeDirsResponse, I think it is probably OK to have slightly
more lag. The script can calculate the lag of the follower replica as
Math.max(0, leaderLEO - followerLEO). I agree that it will be slightly less
accurate than the current approach in KIP-179. But even with the current
approach in KIP-179, the result provided by the script is an approximation
anyway, since there is a delay from the time the leader returns a response
to the time the script collects responses from all brokers and prints the
result to the user. I think if the slight difference in accuracy between
the two approaches does not make a difference to the intended use-case of
this API, then we probably want to re-use the existing request/response to
keep the protocol simple.
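
As a concrete illustration, the per-replica calculation could be as simple as
the following (illustrative Java; the LEO values are the log end offsets
collected from the leader's and the follower's DescribeDirsResponse):

    class FollowerLagSketch {
        // Clamp at zero: the two responses are collected at slightly different
        // times, so the follower's LEO can briefly exceed the leader's.
        static long followerLag(long leaderLeo, long followerLeo) {
            return Math.max(0L, leaderLeo - followerLeo);
        }
    }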

Thanks,
Dong





On Tue, Aug 8, 2017 at 5:56 PM, Jun Rao  wrote:

> Hi, Dong,
>
> I think Tom was suggesting to have the AlterTopicsRequest sent to any
> broker, which just writes the reassignment json to ZK. The controller will
> pick up the reassignment and act on it as usual. This should work, right?
>
> Having a separate AlterTopicsRequest and AlterReplicaDirRequest seems
> simpler to me. The former is handled by the controller and the latter is
> handled by the affected broker. They don't always have to be done together.
> Merging the two into a single request probably will make both the api and
> the implementation a bit more complicated. If we do keep the two separate
> requests, it seems that we should just add AlterReplicaDirRequest to the
> AdminClient interface?
>
> Now, regarding DescribeDirsResponse. I agree that it can be used for the
> status reporting in KIP-179 as well. However, it seems that reporting the
> log end offset of each replica may not be easy to use. The log end offset
> will be returned from different brokers in slightly different time. If
> there is continuous producing traffic, the difference in log end offset
> between the leader and the follower could be larger than 0 even if the
> follower has fully caught up. I am wondering if it's better to instead
> return the lag in offset per replica. This way, the status can probably be
> reported more reliably.
>
> Thanks,
>
> Jun
>
> On Tue, Aug 8, 2017 at 11:23 AM, Dong Lin  wrote:
>
> > Hey Tom,
> >
> > Thanks for the quick reply. Please see my comment inline.
> >
> > On Tue, Aug 8, 2017 at 11:06 AM, Tom Bentley 
> > wrote:
> >
> > > Hi Dong,
> > >
> > > Replies inline, as usual
> > >
> > > > As I originally envisaged it, KIP-179's support for reassigning
> > > partitions
> > > >
> > > > would have more-or-less taken the logic currently in the
> > > > > ReassignPartitionsCommand (that is, writing JSON to the
> > > > > ZkUtils.ReassignPartitionsPath)
> > > > > and put it behind a suitable network protocol API. Thus it wouldn't
> > > > matter
> > > > > which broker received the protocol call: It would be acted on by
> > > brokers
> > > > > being notified of the change in the ZK path, just as currently.
> This
> > > > would
> > > > > have kept the ReassignPartitionsCommand relatively simple, as it
> > > > currently
> > > > > is.
> > > > >
> > > >
> > > > I am not sure I fully understand your proposal. I think you are
> saying
> > > that
> > > > any broker can receive and handle the AlterTopicRequest.
> > >
> > >
> > > That's right.
> > >
> > >
> > > > Let's say a
> > > > non-controller broker received AlterTopicRequest, is this broker
> going
> > to
> > > > send LeaderAndIsrRequest to other brokers? Or is this broker create
> the
> > > > reassignment znode in zookeper?
> > >
> > >
> > > Exactly: It's going to write some JSON to the relevant znode. Other
> > brokers
> > > will get notified by zk when the contents of this znode changes, and do
> > as
> > > they do now. This is what the tool/script does now.
> > >
> > > I will confess that I don't completely understand the role of
> > > LeaderAndIsrRequest, since the current code just seems to write to the
> > > znode do get the brokers to do the reassignment. If you could explain
> the
> > > role of LeaderAndIsrRequest that would be great.
> > >
> >
> > Currently only the controller will listen to the reass

Re: [DISCUSS] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-08-08 Thread Jun Rao
Hi, Tom,

Thanks for the KIP. A few minor comments below.

1. Implementation wise, the broker handles adding partitions differently
from changing replica assignment. For the former, we directly update the
topic path in ZK with the new partitions. For the latter, we write the new
partition reassignment in the partition reassignment path. Changing the
replication factor is handled in the same way as changing replica
assignment. So, it would be useful to document how the broker handles these
different cases accordingly. I think it's simpler to just allow one change
(partition, replication factor, or replica assignment) in a request.

2. Currently, we only allow adding partitions. We probably want to document
the restriction in the api.

3. It's not very clear to me what status_time in ReplicaStatusResponse
means.

Jun



On Fri, Aug 4, 2017 at 10:04 AM, Dong Lin  wrote:

> Hey Tom,
>
> Thanks for your reply. Here are my thoughts:
>
> 1) I think the DescribeDirsResponse can be used by AdminClient to query the
> lag of follower replica as well. Here is how it works:
>
> - AdminClient sends DescribeDirsRequest to both the leader and the follower
> of the partition.
> - DescribeDirsResponse from both leader and follower shows the LEO of the
> leader replica and the follower replica.
> - Lag of the follower replica is the difference in the LEO between leader
> and follower.
>
> In comparison to ReplicaStatusRequest, DescribeDirsRequest needs to be send
> to each replica of the partition whereas ReplicaStatusRequest only needs to
> be sent to the leader of the partition. It doesn't seem to make much
> difference though. In practice we probably want to query the replica lag of
> many partitions and AminClient needs to send exactly one request to each
> broker with either solution. Does this make sense?
>
>
> 2) KIP-179 proposes to add the following AdminClient API:
>
> alterTopics(Collection alteredTopics, AlterTopicsOptions
> options)
>
> Where AlteredTopic includes the following fields
>
> AlteredTopic(String name, int numPartitions, int replicationFactor,
> Map> replicasAssignment)
>
> I have two comments on this:
>
> - The information in AlteredTopic seems a bit redundant. Both numPartitions
> and replicationFactor can be derived from replicasAssignment. I think we
> can probably just use the following API in AdminClient instead:
>
> AlterTopicsResult alterTopics(Map>
> partitionAssignment, AlterTopicsOptions options)
>
> - Do you think "reassignPartitions" may be a better name than
> "alterTopics"? This is more consistent with the existing name used in
> kafka-reassign-partitions.sh and I also find it to be more accurate. On the
> other hand, alterTopics seems to suggest that we can also alter topic
> config.
>
>
>
> Thanks,
> Dong
>
> On Fri, Aug 4, 2017 at 1:03 AM, Tom Bentley  wrote:
>
> > Hi Dong,
> >
> > Thanks for your reply.
> >
> > You're right that your DescribeDirsResponse contains appropriate data.
> The
> > comment about the log_end_offset in the KIP says "Enable user to track
> > movement progress by comparing LEO of the *.log and *.move". That makes
> me
> > wonder whether this would only work for replica movement on the same
> > broker. In the case of reassigning partitions between brokers it's only
> > really the leader for the partition that knows the max LEO of the ISR and
> > the LEO of the target broker. Maybe the comment is misleading and it
> would
> > be the leader doing the same thing in both cases. Can you clarify this?
>
>
> > At a conceptual level there's a lot of similarity between KIP-113 and
> > KIP-179 because they're both about moving replicas around. Concretely
> > though, ChangeReplicaDirRequest assumes the movement is on the same
> broker,
> > while the equivalent APIs in KIP-179 (whichever alternative) lack the
> > notion of disks. I wonder if a unified API would be better or not.
> > Operationally, the reasons for changing replicas between brokers are
> likely
> > to be motivated by balancing load across the cluster, or adding/removing
> > brokers. But the JBOD use case has different motivations. So I suspect
> it's
> > OK that the APIs are separate, but I'd love to hear what others think.
>
>
> > I think you're right about TopicPartitionReplica. At the protocol level
> we
> > could group topics and partitions together so avoid having the same topic
> > name multiple times when querying for the status of all the partitions
> of a
> > topic.
> >
> > Thanks again for taking the time to respond.
> >
> > Tom
> >
> > On 3 August 2017 at 23:07, Dong Lin  wrote:
> >
> > > Hey Tom,
> > >
> > > Thanks for the KIP. It seems that the prior discussion in this thread
> has
> > > focused on reassigning partitions (or AlterTopics). I haven't looked
> into
> > > this yet. I have two comments on the replicaStatus() API and the
> > > ReplicaStatusRequest:
> > >
> > > -  It seems that the use-case for ReplicaStatusRequest is covered by
> > > the DescribeDirsRequest introduced in KIP-113
> > > 

Re: [DISCUSS] KIP-178: Size-based log directory selection strategy

2017-08-08 Thread Hu Xi
Hi Lin,


Yes, it is a problem since it's not easy for us to predict how much disk
space each partition will occupy in the future. How about the algorithm below:


  1.  Introduce a data structure to maintain the current free space for each
log directory in `log.dirs`. Say, a ConcurrentHashMap named `logFreeDiskMap`.
  2.  Initialize and update `logFreeDiskMap` before batch-invoking
`nextLogDir`. Invoke it before `makeLeaders`, for instance.
  3.  Every time we call `nextLogDir`, check:
 *   If `log.retention.bytes` or the topic-level `retention.bytes` is set (> 0),
the user does want to bound the total disk space for that partition. So we
select the directory with the most free space from `logFreeDiskMap` and
subtract `retention.bytes` from its current value.
 *   If `log.retention.bytes` or the topic-level `retention.bytes` is not set,
meaning the user does not care about disk usage or does not know how much disk
space the partition will occupy, then we fall back to the current strategy:
selecting the directory with the fewest partitions.

The algorithm should work well for the situation you mentioned, namely a large
number of partitions in a single LeaderAndIsr request. However, I have to admit
that it handles the situation poorly when consecutive LeaderAndIsr requests
come from the controller.
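
A minimal sketch of step 3 (illustrative Java with hypothetical names; the real
LogManager code is Scala and would differ):

    import java.util.Collections;
    import java.util.Map;

    class LogDirSelectionSketch {
        // logFreeDiskMap: log directory -> estimated free bytes (maintained as
        // in steps 1 and 2 above); partitionCounts: log directory -> number of
        // partitions currently placed there.
        static String nextLogDir(Map<String, Long> logFreeDiskMap,
                                 Map<String, Integer> partitionCounts,
                                 long retentionBytes) {
            if (retentionBytes > 0) {
                // retention.bytes is set: pick the directory with the most
                // estimated free space and reserve retention.bytes for the
                // new partition.
                String dir = Collections.max(logFreeDiskMap.entrySet(),
                        Map.Entry.comparingByValue()).getKey();
                logFreeDiskMap.merge(dir, -retentionBytes, Long::sum);
                return dir;
            }
            // Otherwise fall back to the current strategy: fewest partitions wins.
            return Collections.min(partitionCounts.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
        }
    }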

Any comments are welcomed.


From: Dong Lin
Sent: August 5, 2017 2:00
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-178: Size-based log directory selection strategy

Hey Hu,

I am not sure it is OK. Say kafka-reassign-partitions.sh is used to move
100 replicas to a broker. Then the controller will send a LeaderAndIsrRequest
asking this broker to be the follower for these 100 partitions. While it is
true that the broker will create the replicas sequentially, they will be
created in a very short period of time (e.g. 2 seconds) and thus the replicas
will all be put in the same log directory that has the most free space at the
time this broker receives the LeaderAndIsrRequest. Do you think this is a
problem?

Dong


On Thu, Aug 3, 2017 at 7:36 PM, Hu Xi  wrote:

> Hi Dong, some thoughts on your second mail. Since currently logs for
> multiple partitions are created sequentially not in parallel, it's probably
> okay for us to simply select the directory with most disk spaces in a
> single round of `nextLogDir` calling. which can be guaranteed to lead to
> extreme skew. Does it make any senses?
>
>
> 
> From: Hu Xi
> Sent: August 3, 2017 16:51
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-178: Size-based log directory selection strategy
>
>
> Dong, yes, many thanks for the comments from the second mail. Will take
> some time to figure out an algorithm to better handle the situation you
> mentioned. Thanks again.
>
>
> 
> From: Dong Lin
> Sent: August 3, 2017 12:07
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-178: Size-based log directory selection strategy
>
> Hu, I think this is worth discussion even if it doesn't require new config.
> Could you also read my second email?
>
> On Wed, Aug 2, 2017 at 6:17 PM, Hu Xi  wrote:
>
> > Thanks Dong,  do you mean it is more like a naive improvement and no KIP
> > is needed  then?
> >
> > 
> > From: Dong Lin
> > Sent: August 3, 2017 9:10
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-178: Size-based log directory selection strategy
> >
> > Hey Xu,
> >
> > Thanks for the KIP. This is a very good idea to select log directory
> based
> > on the free disk space. Do you think we can simply simply change the
> > implementation to select log directory based on the free disk space
> instead
> > of adding a new config? Or is there any good reason that user will want
> to
> > select log directory with the least partition number instead of the one
> > with the most free disk space?
> >
> > Thanks,
> > Dong
> >
> >
> > On Wed, Aug 2, 2017 at 6:03 PM, Hu Xi  wrote:
> >
> > > Hi all, how do you think of this KIP? Any comments are welcomed.
> > >
> > >
> > > 
> > > From: Hu Xi
> > > Sent: July 18, 2017 15:21
> > > To: dev@kafka.apache.org
> > > Subject: [DISCUSS] KIP-178: Size-based log directory selection strategy
> > >
> > >
> > > Hi all,
> > >
> > >  KIP-178 is created for a discussion on how LogManager selects log
> > > directory. In this KIP, a new strategy is introduced to allow for the
> > real
> > > disk spaces for each directories. Be free to drop your comments here.
> > > Thanks.
> > >
> >
>


[GitHub] kafka pull request #3645: MINOR: support retrieving cluster_id in system tes...

2017-08-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3645


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5715) ConsumerGroupCommand failed to show in ascending order for partitions without consumers

2017-08-08 Thread huxihx (JIRA)
huxihx created KAFKA-5715:
-

 Summary: ConsumerGroupCommand failed to show in ascending order 
for partitions without consumers 
 Key: KAFKA-5715
 URL: https://issues.apache.org/jira/browse/KAFKA-5715
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.11.0.0
Reporter: huxihx
Assignee: huxihx
Priority: Minor


For active consumer groups, ConsumerGroupCommand shows partitions in ascending
order, which is the behavior users usually expect. But for inactive groups or
partitions without a consumer assigned, the tool prints them in a random order.
The behavior should be the same for both inactive and active groups.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3648: KAFKA-5715: ConsumerGroupCommand should always sho...

2017-08-08 Thread huxihx
GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/3648

KAFKA-5715: ConsumerGroupCommand should always show partitions in ascending 
order

Currently, ConsumerGroupCommand shows partitions in ascending order when they
have an active consumer assigned, but fails to do so for partitions without a
consumer assigned. The behavior should be the same in both cases.
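
A rough sketch of the intended ordering (illustrative Java only; the actual
tool is written in Scala):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import org.apache.kafka.common.TopicPartition;

    class PartitionOrderingSketch {
        // Order partitions the same way whether or not a consumer is currently
        // assigned: by topic name first, then by partition number.
        static List<TopicPartition> sorted(List<TopicPartition> partitions) {
            List<TopicPartition> copy = new ArrayList<>(partitions);
            copy.sort(Comparator.comparing(TopicPartition::topic)
                    .thenComparingInt(TopicPartition::partition));
            return copy;
        }
    }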

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAFKA-5715

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3648


commit b03c1d65f3187012afc0292453d0f3cfe8258065
Author: huxihx 
Date:   2017-08-09T03:15:20Z

KAFKA-5715: ConsumerGroupCommand should always show partitions in ascending 
order.

Currently, ConsumerGroupCommand shows partitions in ascending order when they
have an active consumer assigned, but fails to do so for partitions without a
consumer assigned. The behavior should be the same in both cases.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-5704) Auto topic creation causes failure with older clusters

2017-08-08 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5704.
--
   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 3641
[https://github.com/apache/kafka/pull/3641]

> Auto topic creation causes failure with older clusters
> --
>
> Key: KAFKA-5704
> URL: https://issues.apache.org/jira/browse/KAFKA-5704
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Randall Hauch
> Fix For: 1.0.0, 0.11.0.1
>
>
> The new automatic internal topic creation always tries to check the topic and 
> create it if missing. However, older brokers that we should still be 
> compatible with don't support some of the requests that are used. This results
> in an UnsupportedVersionException, which parts of the TopicAdmin code note they
> can throw but which isn't caught in the initializers, causing the entire
> process to fail.
> We should probably just catch it, log a message, and allow things to proceed 
> hoping that the user has already created the topics correctly (as we used to 
> do).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3641: KAFKA-5704 Corrected Connect distributed startup b...

2017-08-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3641


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-177: Consumer perf tool should count rebalance time

2017-08-08 Thread Ewen Cheslack-Postava
Hu,

The vote period is supposed to give people who might still want to vote -1
a chance to do so, so we'd leave it open until at least 72 hours after the
vote was initiated.

However, now that enough time has passed feel free to close the vote and
move the KIP to Adopted. I'll give it a +1 as well.

By the way, I noticed that the KIP was accidentally created under the ARIES
space in Confluence. Could we move it to the right location under the KAFKA
space? It's not a huge deal since it's linked from the KIP page, but it
doesn't show up in the navbar on the left when you're browsing the wiki
since it isn't in the same space.

-Ewen

On Sun, Aug 6, 2017 at 5:16 AM, Mickael Maison 
wrote:

> +1
>
> On Sun, Aug 6, 2017 at 4:15 AM, Hu Xi  wrote:
> > Hi all, a naive question please. Since this KIP already collects four
> binding +1 votes, could we mark it as Accepted and move it under 'Adopted
> KIPs'? Thanks
> >
> >
> >
> >
> > 
> > From: Guozhang Wang
> > Sent: August 5, 2017 2:55
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-177: Consumer perf tool should count rebalance time
> >
> > +1
> >
> > On Fri, Aug 4, 2017 at 11:02 AM, Ismael Juma  wrote:
> >
> >> Thanks for the KIP, +1 (binding)
> >>
> >> Ismael
> >>
> >> On Fri, Aug 4, 2017 at 3:13 AM, Hu Xi  wrote:
> >>
> >> > Hi all,
> >> >
> >> >
> >> > I'd like to kick off the vote for KIP-177: https://cwiki.apache.org/
> >> > confluence/display/ARIES/KIP-177%3A+Consumer+perf+tool+
> >> > should+count+rebalance+time.
> >> >
> >> >
> >> > A PR for this KIP can be found at  >> > kafka/pull/3188> https://github.com/apache/kafka/pull/3188
> >> >
> >> >
> >> >
> >>
> >
> >
> >
> > --
> > -- Guozhang
>


Re: [VOTE] KIP-163: Lower the Minimum Required ACL Permission of OffsetFetch

2017-08-08 Thread Ewen Cheslack-Postava
+1, thanks for the KIP!


On Sun, Aug 6, 2017 at 5:22 AM, Mickael Maison 
wrote:

> +1
> While it's not fixing the Read/Write inconsistency it's a step in the
> right direction
>
> On Thu, Aug 3, 2017 at 4:46 PM, Jason Gustafson 
> wrote:
> > +1
> >
> > On Wed, Aug 2, 2017 at 2:40 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com
> >> wrote:
> >
> >> Hi all,
> >>
> >> Thanks to everyone who participated in the discussion on KIP-163, and
> >> provided feedback.
> >> The KIP can be found at
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 163%3A+Lower+the+Minimum+Required+ACL+Permission+of+OffsetFetch
> >> .
> >> I believe the concerns have been addressed in the current version of the
> >> KIP; so I'd like to start a vote.
> >>
> >> Thanks.
> >> --Vahid
> >>
> >>
> >>
>


Re: [VOTE] KIP-177: Consumer perf tool should count rebalance time

2017-08-08 Thread Hu Xi
Ewen,


Thanks for the information. Will do that asap.



From: Ewen Cheslack-Postava
Sent: August 9, 2017 11:33
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-177: Consumer perf tool should count rebalance time

Hu,

The vote period is supposed to give people who might still want to vote -1
a chance to do so, so we'd leave it open until at least 72 hours after the
vote was initiated.

However, now that enough time has passed feel free to close the vote and
move the KIP to Adopted. I'll give it a +1 as well.

By the way, I noticed that the KIP was accidentally created under the ARIES
space in Confluence. Could we move it to the right location under the KAFKA
space? It's not a huge deal since it's linked from the KIP page, but it
doesn't show up in the navbar on the left when you're browsing the wiki
since it isn't in the same space.

-Ewen

On Sun, Aug 6, 2017 at 5:16 AM, Mickael Maison 
wrote:

> +1
>
> On Sun, Aug 6, 2017 at 4:15 AM, Hu Xi  wrote:
> > Hi all, a naive question please. Since this KIP already collects four
> binding +1 votes, could we mark it as Accepted and move it under 'Adopted
> KIPs'? Thanks
> >
> >
> >
> >
> > 
> > From: Guozhang Wang
> > Sent: August 5, 2017 2:55
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-177: Consumer perf tool should count rebalance time
> >
> > +1
> >
> > On Fri, Aug 4, 2017 at 11:02 AM, Ismael Juma  wrote:
> >
> >> Thanks for the KIP, +1 (binding)
> >>
> >> Ismael
> >>
> >> On Fri, Aug 4, 2017 at 3:13 AM, Hu Xi  wrote:
> >>
> >> > Hi all,
> >> >
> >> >
> >> > I'd like to kick off the vote for KIP-177: https://cwiki.apache.org/
> >> > confluence/display/ARIES/KIP-177%3A+Consumer+perf+tool+
> >> > should+count+rebalance+time.
> >> >
> >> >
> >> > A PR for this KIP can be found at  >> > kafka/pull/3188> https://github.com/apache/kafka/pull/3188
> >> >
> >> >
> >> >
> >>
> >
> >
> >
> > --
> > -- Guozhang
>


Build failed in Jenkins: kafka-trunk-jdk8 #1891

2017-08-08 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: support retrieving cluster_id in system tests

--
[...truncated 878.30 KB...]
kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride SKIPPED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
SKIPPED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

unit.kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

unit.kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest 
STARTED

unit.kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest 
PASSED

unit.kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion STARTED

unit.kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion PASSED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 STARTED

unit.kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExcept

Re: [DISCUSS] KIP-183 - Change PreferredReplicaLeaderElectionCommand to use AdminClient

2017-08-08 Thread Ewen Cheslack-Postava
Thanks for the KIP. Generally the move away from ZK and to native Kafka
requests is good, so I'm generally +1 on this. A couple of
comments/questions.

* You gave the signature
electPreferredReplicaLeader(Collection partitions) for the
admin client. The old command allows not specifying the topic partitions
which results in election for all topic partitions. How would this be
expressed in the new API? Does the tool need to do a metadata request and
then issue the request for all topic partitions or would there be a
shortcut via the protocol (e.g. another electPreferredReplicaLeader() with
no args which translates to, e.g., an empty list or null in the request)?
* All the existing *Options classes for the AdminClient have at least a
timeout. Should we add that to this API's options?
* Other AdminClient *Result classes provide some convenience methods when
you just want all the results. Should we have that as well?
* I don't think any other requests include error strings in the response,
do they? I'm pretty sure we just generate the error string on the client
side based on the error code. If an error code was ambiguous, we should
probably just split it across 2 or more codes.
* re: permissions, would there also be a per-topic permission required? (I
haven't kept track of all the ACLs and required permissions so I'm not
sure, but a bunch of other operations check permissions on the specific
resource they are modifying -- this operation could be considered to be
against either Cluster or Topics.)
* Should we consider making the request only valid against the controller and
having the command do a metadata request to discover the controller and
then send the request directly to it? I think this could a) simplify the
implementation a bit by taking the ZK node out of the equation, b) avoid the
REPLICA_LEADER_ELECTION_IN_PROGRESS error since this is only required due
to using the ZK node to communicate with the controller afaict, and c)
potentially open the door for a synchronous request/response in the future.
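
To make the first few points concrete, one possible shape for the API, purely
as a sketch (names, signatures and semantics here are illustrative, not what
the KIP currently specifies):

    import java.util.Collection;
    import java.util.Map;
    import org.apache.kafka.common.KafkaFuture;
    import org.apache.kafka.common.TopicPartition;

    interface ElectPreferredLeadersSketch {

        // A null (or empty) collection could mean "all partitions", mirroring
        // the existing command's behaviour when no JSON file is supplied.
        Result electPreferredReplicaLeader(Collection<TopicPartition> partitions,
                                           Options options);

        class Options {
            // Like the other AdminClient *Options classes, carry at least a timeout.
            private Integer timeoutMs;

            public Options timeoutMs(Integer timeoutMs) {
                this.timeoutMs = timeoutMs;
                return this;
            }
        }

        class Result {
            // Per-partition futures plus an all() convenience method, as in the
            // other AdminClient *Result classes.
            private final Map<TopicPartition, KafkaFuture<Void>> futures;

            Result(Map<TopicPartition, KafkaFuture<Void>> futures) {
                this.futures = futures;
            }

            public KafkaFuture<Void> all() {
                return KafkaFuture.allOf(futures.values().toArray(new KafkaFuture<?>[0]));
            }
        }
    }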

-Ewen

On Mon, Aug 7, 2017 at 8:21 AM, Tom Bentley  wrote:

> Hi,
>
> I've updated this KIP slightly, to clarify a couple of points.
>
> One thing in particular that I would like feedback on is what authorization
> should be required for triggering the election of the preferred leader?
>
> Another thing would be whether the request and responses should be grouped
> by topic name. This would make for smaller messages when triggering
> elections for multiple partitions of the same topic.
>
> I'd be grateful for any feedback you may have.
>
> Cheers,
>
> Tom
>
> On 2 August 2017 at 18:34, Tom Bentley  wrote:
>
> > In a similar vein to KIP-179 I've created KIP-183 (
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-183+-+Change+
> > PreferredReplicaLeaderElectionCommand+to+use+AdminClient) which is about
> > deprecating the --zookeeper option to kafka-preferred-replica-
> election.sh
> > and replacing it with an option which would use a new AdminClient-based
> API.
> >
> > As it stands the KIP is focussed on simply moving the existing
> > functionality behind the AdminClient.
> >
> > I'd be grateful for any feedback people may have on this.
> >
> > Thanks,
> >
> > Tom
> >
>


Build failed in Jenkins: kafka-0.11.0-jdk7 #263

2017-08-08 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: support retrieving cluster_id in system tests

--
[...truncated 2.43 MB...]
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToChangeLogOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka

Re: [DISCUSS] KIP-184 Rename LogCleaner and related classes to LogCompactor

2017-08-08 Thread Pranav Maniar
Thanks Guozhang for the suggestion.

For now, I have updated the KIP incorporating your suggestion.
Personally I think implicitly enabling compaction whenever the policy is set to
compact is more appropriate, because new users like me will always assume
that setting the policy to compact will enable compaction.

But having said that, it would be interesting to know if there are any
use-cases where a user would explicitly want to turn off the compactor.

Thanks,
Pranav

On Tue, Aug 8, 2017 at 2:20 AM, Guozhang Wang  wrote:

> Thanks for the KIP proposal,
>
> I thought one suggestion before this discussion is to deprecate the "
> log.cleaner.enable" and always turn on compaction for those topics that
> have compact policies?
>
>
> Guozhang
>
> On Sat, Aug 5, 2017 at 9:36 AM, Pranav Maniar 
> wrote:
>
> > Hi All,
> >
> > Following a discussion on JIRA KAFKA-1944
> >  . I have created
> > KIP-184
> >  > 184%3A+Rename+LogCleaner+and+related+classes+to+LogCompactor>
> > as
> > it will require configuration change.
> >
> > As per the process I am starting Discussion on mail thread for KIP-184.
> >
> > Renaming of configuration "log.cleaner.enable" is discussed on
> KAFKA-1944.
> > But other log.cleaner configuration also seems to be used by cleaner
> only.
> > So to maintain naming consistency, I have proposed to rename all these
> > configuration.
> >
> > Please provide your suggestion/views for the same. Thanks !
> >
> >
> > Thanks,
> > Pranav
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-08-08 Thread Ewen Cheslack-Postava
Great, looking good. I'd probably be a bit more concrete about the Proposed
Changes (e.g., "will log a warning if the config is specified" and "since
the JsonConverter is the default, the configs will be removed immediately
from the example worker configuration files").
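
Concretely, the lines in question in the example worker config files look like
this (assuming they are unchanged from the current defaults):

    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable=false
    internal.value.converter.schemas.enable=false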

Other than that this LGTM and I'll be happy to get rid of those settings!

-Ewen

On Tue, Aug 8, 2017 at 2:54 AM, UMESH CHAUDHARY  wrote:

> Hi Ewen,
> Sorry, I am bit late in responding this.
>
> Thanks for your inputs and I've updated the KIP by adding more details to
> it.
>
> Regards,
> Umesh
>
> On Mon, 31 Jul 2017 at 21:51 Ewen Cheslack-Postava 
> wrote:
>
>> On Sun, Jul 30, 2017 at 10:21 PM, UMESH CHAUDHARY 
>> wrote:
>>
>>> Hi Ewen,
>>> Thanks for your comments.
>>>
>>> 1) Yes, there are some test and java classes which refer these configs,
>>> so I will include them as well in "public interface" section of KIP. What
>>> should be our approach to deal with the classes and tests which use these
>>> configs: we need to change them to use JsonConverter when we plan for
>>> removal of these configs right?
>>>
>>
>> I actually meant the references in config/connect-standalone.properties
>> and config/connect-distributed.properties
>>
>>
>>> 2) I believe we can target the deprecation in 1.0.0 release as it is
>>> planned in October 2017 and then removal in next major release. Let me
>>> know your thoughts as we don't have any information for next major release
>>> (next to 1.0.0) yet.
>>>
>>
>> That sounds fine. Tough to say at this point what our approach to major
>> version bumps will be since the approach to version numbering is changing a
>> bit.
>>
>>
>>> 3) Thats a good point and mentioned JIRA can help us to validate the
>>> usage of any other converters. I will list this down in the KIP.
>>>
>>> Let me know if you have some additional thoughts on this.
>>>
>>> Regards,
>>> Umesh
>>>
>>>
>>>
>>> On Wed, 26 Jul 2017 at 09:27 Ewen Cheslack-Postava 
>>> wrote:
>>>
 Umesh,

 Thanks for the KIP. Straightforward and I think it's a good change.
 Unfortunately it is hard to tell how many people it would affect since
 we
 can't tell how many people have adjusted that config, but I think this
 is
 the right thing to do long term.

 A couple of quick things that might be helpful to refine:

 * Note that there are also some references in the example configs that
 we
 should remove.
 * It's nice to be explicit about when the removal is planned. This lets
 us
 set expectations with users for timeframe (especially now that we have
 time
 based releases), allows us to give info about the removal timeframe in
 log
 error messages, and lets us file a JIRA against that release so we
 remember
 to follow up. Given the update to 1.0.0 for the next release, we may
 also
 need to adjust how we deal with deprecations/removal if we don't want to
 have to wait all the way until 2.0 to remove (though it is unclear how
 exactly we will be handling version bumps from now on).
 * Migration path -- I think this is the major missing gap in the KIP.
 Do we
 need a migration path? If not, presumably it is because people aren't
 using
 any other converters in practice. Do we have some way of validating
 this (
 https://issues.apache.org/jira/browse/KAFKA-3988 might be pretty
 convincing
 evidence)? If there are some users using other converters, how would
 they
 migrate to newer versions which would no longer support that?

 -Ewen


 On Fri, Jul 14, 2017 at 2:37 AM, UMESH CHAUDHARY 
 wrote:

 > Hi there,
 > Resending as probably missed earlier to grab your attention.
 >
 > Regards,
 > Umesh
 >
 > -- Forwarded message -
 > From: UMESH CHAUDHARY 
 > Date: Mon, 3 Jul 2017 at 11:04
 > Subject: [DISCUSS] KIP-174 - Deprecate and remove internal converter
 > configs in WorkerConfig
 > To: dev@kafka.apache.org 
 >
 >
 > Hello All,
 > I have added a KIP recently to deprecate and remove internal converter
 > configs in WorkerConfig.java class because these have ultimately just
 > caused a lot more trouble and confusion than it is worth.
 >
 > Please find the KIP here
 >  174+-+Deprecate+and+remove+internal+converter+configs+in+
 WorkerConfig>
 > and
 > the related JIRA here .
 >
 > Appreciate your review and comments.
 >
 > Regards,
 > Umesh
 >

>>>


Re: [DISCUSS] KIP-184 Rename LogCleaner and related classes to LogCompactor

2017-08-08 Thread Ewen Cheslack-Postava
A simple log message is standard, but the KIP should probably specify what
happens when the deprecated config is encountered.

Other than that, the change LGTM. Other things that might be worth
addressing

* log.compactor.min.compaction.lag.ms seems a bit redundant with compactor
and compaction. Not sure if we'd want to tweak the new version.
* The class renaming doesn't even need to be in the KIP as it is an
implementation detail.
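
For illustration only, the renaming pattern under discussion looks roughly like
the following (the authoritative mapping is in the KIP itself):

    log.cleaner.enable                    ->  deprecated (compaction implied by cleanup.policy=compact)
    log.cleaner.threads                   ->  log.compactor.threads
    log.cleaner.min.compaction.lag.ms     ->  log.compactor.min.compaction.lag.ms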

-Ewen

On Tue, Aug 8, 2017 at 10:17 PM, Pranav Maniar  wrote:

> Thanks Guozhang for the suggestion.
>
> For now, I have updated KIP incorporating your suggestion.
> Personally I think implicitly enabling compaction whenever policy is set to
> compact is more appropriate. Because new users like me will always assume
> that setting policy to compact will enable compaction.
>
> But having said that, It will be interesting to know, if there are any
> use-cases where user would explicitly want to turn off the compactor.
>
> Thanks,
> Pranav
>
> On Tue, Aug 8, 2017 at 2:20 AM, Guozhang Wang  wrote:
>
> > Thanks for the KIP proposal,
> >
> > I thought one suggestion before this discussion is to deprecate the "
> > log.cleaner.enable" and always turn on compaction for those topics that
> > have compact policies?
> >
> >
> > Guozhang
> >
> > On Sat, Aug 5, 2017 at 9:36 AM, Pranav Maniar 
> > wrote:
> >
> > > Hi All,
> > >
> > > Following a discussion on JIRA KAFKA-1944
> > >  . I have created
> > > KIP-184
> > >  > > 184%3A+Rename+LogCleaner+and+related+classes+to+LogCompactor>
> > > as
> > > it will require configuration change.
> > >
> > > As per the process I am starting Discussion on mail thread for KIP-184.
> > >
> > > Renaming of configuration "log.cleaner.enable" is discussed on
> > KAFKA-1944.
> > > But other log.cleaner configuration also seems to be used by cleaner
> > only.
> > > So to maintain naming consistency, I have proposed to rename all these
> > > configuration.
> > >
> > > Please provide your suggestion/views for the same. Thanks !
> > >
> > >
> > > Thanks,
> > > Pranav
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #1892

2017-08-08 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-185: Make exactly once in order delivery per partition the default producer setting

2017-08-08 Thread Ewen Cheslack-Postava
Apurva,

For the benchmarking, I have a couple of questions:

1. Re: the mention of exactly once, this is within a producer session,
right? And so really only idempotent. Applications still need to take extra
steps for exactly once if they, e.g., are producing data from some other
log like a DB txn log.

2.

> Further, the results above show that there is a large improvement in
throughput and latency when we go from max.in.flight=1 to max.in.flight=2,
but then there no discernible difference for higher values of this setting.

If in the tests there's no difference with higher values, perhaps leaving
it alone is better. There are a bunch of other configs we expose and this
test only evaluates one environment. Without testing things like cross-DC
traffic, I'd be wary of jumping to the conclusion that max.in.flight > 2
never makes a difference and that some people aren't already relying on a
larger default OOTB.

3. The acks=all change is actually unrelated to the title of the KIP and
orthogonal to all the other changes. It's also the most risky since
acks=all needs more network round trips. And while I think it makes sense
to have the more durable default, this seems like it's actually fairly
likely to break things for some people (even if a minority of people). This
one seems like a setting change that needs more sensitive handling, e.g.
both release notes and log notification that the default is going to
change, followed by the actual change later.

-Ewen

On Tue, Aug 8, 2017 at 5:23 PM, Apurva Mehta  wrote:

> Hi,
>
> I've put together a new KIP which proposes to ship Kafka with its strongest
> delivery guarantees by default.
>
> We currently ship with at most once semantics and don't provide any
> ordering guarantees per partition. The proposal is is to provide exactly
> once in order delivery per partition by default in the upcoming 1.0.0
> release.
>
> The KIP linked to below also outlines the performance characteristics of
> the proposed default.
>
> The KIP is here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 185%3A+Make+exactly+once+in+order+delivery+per+partition+
> the+default+producer+setting
>
> Please have a look, I would love your feedback!
>
> Thanks,
> Apurva
>