Re: Invalid topology building: Processor has no access to StateStore

2017-11-07 Thread Guozhang Wang
Hello Florian,

In PAPI, you need to associate the state store with the processor node via
Topology#addStateStore(StoreBuilder); note that StateStore#init() is
supposed to be triggered only by the Streams library, not by users: it is
called while the library is constructing the specified topology.


Guozhang


On Tue, Nov 7, 2017 at 12:59 PM, Florian Hussonnois 
wrote:

> Hi,
>
> Currently it's possible to register a store from a Processor instance via
> the method StateStore#init(ProcessorContext, StateStore).
>
> However, when this method is used, the registered store seems to never be
> added to the variable ProcessorNode#stateStores, which leads to an
> "Invalid topology building" exception when accessing the store via
> ProcessorContext#getStateStore.
>
> Is this the desired behavior or a bug?
>
> --
> Florian HUSSONNOIS
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-6185) java.lang.OutOfMemoryError memory leak on 1.0.0 with 0.11.0.1 on disk and converting to 0.9 clients

2017-11-07 Thread Brett Rann (JIRA)
Brett Rann created KAFKA-6185:
-

 Summary: java.lang.OutOfMemoryError memory leak on 1.0.0 with 
0.11.0.1 on disk and converting to 0.9 clients
 Key: KAFKA-6185
 URL: https://issues.apache.org/jira/browse/KAFKA-6185
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Ubuntu 14.04.5 LTS
5 brokers: 1 & 2 on 1.0.0; 3, 4, 5 on 0.11.0.1
inter.broker.protocol.version=0.11.0.1
log.message.format.version=0.11.0.1
clients a mix of 0.9, 0.10, 0.11
Reporter: Brett Rann
 Attachments: Kafka_Internals___Datadog.png, 
Kafka_Internals___Datadog.png

We are testing 1.0.0 in a couple of environments.
Both have about 5 brokers, with two 1.0.0 brokers and the rest 0.11.0.1 brokers.
One is using on-disk message format 0.9.0.1, the other 0.11.0.1.
We have 0.9, 0.10, and 0.11 clients connecting.

The cluster on the 0.11.0.1 format is consistently having memory issues.
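The 0.9.0.1-format cluster does not show the problem, which points at the
down-conversion of 0.11-format batches for the 0.9 fetchers. A possible
mitigation while 0.9 consumers remain (an untested assumption on our side) is
to keep the on-disk format at the oldest client version so the broker never
down-converts on the fetch path:

{noformat}
# server.properties -- match the oldest consumer version still fetching
log.message.format.version=0.9.0.1
{noformat}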

The first occurrence of the error comes along with this stack trace

{noformat}
{"timestamp":"2017-11-06 
14:22:32,402","level":"ERROR","logger":"kafka.server.KafkaApis","thread":"kafka-request-handler-7","message":"[KafkaApi-1]
 Error when handling request 
{replica_id=-1,max_wait_time=500,min_bytes=1,topics=[{topic=maxwell.users,partitions=[{partition=0,fetch_offset=227537,max_bytes=1100},{partition=4,fetch_offset=354468,max_bytes=1100},{partition=5,fetch_offset=266524,max_bytes=1100},{partition=8,fetch_offset=324562,max_bytes=1100},{partition=10,fetch_offset=292931,max_bytes=1100},{partition=12,fetch_offset=325718,max_bytes=1100},{partition=15,fetch_offset=229036,max_bytes=1100}]}]}"}
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at 
org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:101)
at 
org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:520)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:518)
at scala.Option.map(Option.scala:146)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:518)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:508)
at scala.Option.flatMap(Option.scala:171)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:508)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:556)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:555)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:569)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:569)
at 
kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2034)
at 
kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:52)
at 
kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2033)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:569)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:588)
at 
kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:175)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:587)
at 
kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:604)
at 
kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:604)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:596)
at kafka.server.KafkaApis.handle(KafkaApis.scala:100)
{noformat}

And then after a few of those it settles into this kind of pattern

{noformat}
{"timestamp":"2017-11-06 
15:06:48,114","level":"ERROR","logger":"kafka.server.KafkaApis","thread":"kafka-r

[jira] [Created] (KAFKA-6184) report a metric of the lag between the consumer offset and the start offset of the log

2017-11-07 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6184:
--

 Summary: report a metric of the lag between the consumer offset 
and the start offset of the log
 Key: KAFKA-6184
 URL: https://issues.apache.org/jira/browse/KAFKA-6184
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Jun Rao


Currently, the consumer reports a metric of the lag between the high watermark 
of a log and the consumer offset. It will be useful to report a similar lag 
metric between the consumer offset and the start offset of the log. If this 
latter lag gets close to 0, it's an indication that the consumer may lose data 
soon.
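Until such a metric exists, a client-side approximation can be computed from
the consumer itself (a sketch, not the proposed metric; {{consumer}}, topic
and partition are placeholders):

{code}
import java.util.Collections;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("my-topic", 0);
long logStart = consumer.beginningOffsets(Collections.singleton(tp)).get(tp);
OffsetAndMetadata committed = consumer.committed(tp);
if (committed != null) {
    long lagToStart = committed.offset() - logStart;
    // lagToStart close to 0 => retention is about to delete unread records
}
{code}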



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-07 Thread UMESH CHAUDHARY
Congratulations Onur!

On Tue, 7 Nov 2017 at 21:44 Jun Rao  wrote:

> Affan,
>
> All known problems in the controller are described in the doc linked from
> https://issues.apache.org/jira/browse/KAFKA-5027.
>
> Thanks,
>
> Jun
>
> On Mon, Nov 6, 2017 at 11:00 PM, Affan Syed  wrote:
>
> > Congrats Onur,
> >
> > Can you also share the document where all known problems are listed; I am
> > assuming these bugs are still valid for the current stable release.
> >
> > Affan
> >
> > - Affan
> >
> > On Mon, Nov 6, 2017 at 10:24 PM, Jun Rao  wrote:
> >
> > > Hi, everyone,
> > >
> > > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Onur
> > > Karaman.
> > >
> > > Onur's most significant work is the improvement of Kafka controller,
> > which
> > > is the brain of a Kafka cluster. Over time, we have accumulated quite a
> > few
> > > correctness and performance issues in the controller. There have been
> > > attempts to fix controller issues in isolation, which would make the
> code
> > > base more complicated without a clear path of solving all problems.
> Onur
> > is
> > > the one who took a holistic approach, by first documenting all known
> > > issues, writing down a new design, coming up with a plan to deliver the
> > > changes in phases and executing on it. At this point, Onur has
> completed
> > > the two most important phases: making the controller single threaded
> and
> > > changing the controller to use the async ZK api. The former fixed
> > multiple
> > > deadlocks and race conditions. The latter significantly improved the
> > > performance when there are many partitions. Experimental results show
> > that
> > > Onur's work reduced the controlled shutdown time by a factor of 100
> times
> > > and the controller failover time by a factor of 3 times.
> > >
> > > Congratulations, Onur!
> > >
> > > Thanks,
> > >
> > > Jun (on behalf of the Apache Kafka PMC)
> > >
> >
>


Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2017-11-07 Thread Jeff Widman
Any other feedback from folks on KIP-211?

A prime benefit of this KIP is that it removes the need for the consumer to
commit offsets for partitions where the offset hasn't changed. Right now,
if the consumer doesn't commit those offsets, they will be deleted, so the
consumer keeps blindly (re)committing duplicate offsets, wasting
network/disk I/O.
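Concretely, the workaround today is to keep re-committing the unchanged
offset before offsets.retention.minutes elapses, roughly (a sketch; consumer,
tp and lastProcessed are placeholders):

import java.util.Collections;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Periodically re-commit the same offset so the broker does not expire it --
// exactly the redundant network/disk I/O this KIP would eliminate.
consumer.commitSync(Collections.singletonMap(
    tp, new OffsetAndMetadata(lastProcessed + 1)));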

On Mon, Oct 30, 2017 at 3:47 PM, Jeff Widman  wrote:

> I support this as the proposed change seems both more intuitive and safer.
>
> Right now we've essentially hacked this at my day job by bumping the
> offset retention period really high, but this is a much cleaner solution.
>
> I don't have any use-cases that require custom retention periods on a
> per-group basis.
>
> On Mon, Oct 30, 2017 at 10:15 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
>> Bump!
>>
>>
>>
>> From:   Vahid S Hashemian/Silicon Valley/IBM
>> To: dev 
>> Date:   10/18/2017 04:45 PM
>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of Consumer
>> Group Offsets
>>
>>
>> Hi all,
>>
>> I created a KIP to address the group offset expiration issue reported in
>> KAFKA-4682:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%
>> 3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>>
>> Your feedback is welcome!
>>
>> Thanks.
>> --Vahid
>>
>>
>>
>>
>
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>



-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


[GitHub] kafka pull request #4190: MINOR: Fix Javadoc Issues

2017-11-07 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/4190

MINOR: Fix Javadoc Issues

This PR mainly fixes some broken links and invalid references in the 
clients Javadoc

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
minor/fix_javadoc_issues_1711

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4190.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4190


commit c7489c5dd9f7b0b2b059275951d129ecaac1bf3d
Author: Vahid Hashemian 
Date:   2017-11-07T22:01:57Z

MINOR: Fix Javadoc Issues

This PR mainly fixes some broken links and invalid references in the 
clients Javadoc




---


Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-11-07 Thread Stephane Maarek
Okay, makes sense, thanks! As you said, maybe in the future (or now) it's worth
starting a server-side Java dependency jar that's not called "client".
Probably a debate for another day.

Tom, crossing fingers to see more votes on this! Good stuff
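For reference, a minimal sketch of the SimpleCreateTopicPolicy idea quoted
below, against the existing org.apache.kafka.server.policy.CreateTopicPolicy
interface (the threshold is made up):

import java.util.Map;
import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

public class SimpleCreateTopicPolicy implements CreateTopicPolicy {
    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void validate(RequestMetadata request) throws PolicyViolationException {
        // Reject topics with a replication factor below 2 (example threshold);
        // replicationFactor() is null when explicit assignments are given.
        if (request.replicationFactor() != null && request.replicationFactor() < 2)
            throw new PolicyViolationException("Replication factor must be >= 2");
    }

    @Override
    public void close() {}
}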
 

On 7/11/17, 9:51 pm, "Ismael Juma"  wrote:

The idea is that you only depend on a Java jar. The core jar includes the
Scala version in the name and you should not care about that when
implementing a Java interface.

Ismael

On Tue, Nov 7, 2017 at 10:37 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Thanks !
>
> How about a java folder/package in core then? It's not a separate jar
> and it's still Java.
>
> Nonetheless I agree these are details. I just got really confused when
> trying to write my policy, and would hope that confusion is not shared by
> others, because it's a "client" class although it should only reside within
> a broker.
>
> On 7 Nov. 2017 9:04 pm, "Ismael Juma"  wrote:
>
> The location of the policies is fine. Note that the package _does not_
> include clients in the name. If we ever have enough server side only code
> to merit a separate JAR, we can do that and it's mostly compatible (users
> would only have to update their build dependency). Generally, all public
> APIs going forward will be in Java.
>
> Ismael
>
> On Tue, Nov 7, 2017 at 9:44 AM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi Tom,
> >
> > Regarding the java / scala compilation, I believe this is fine (the
> > compiler will know), but any reason why you don't want the policy to be
> > implemented using Scala ? (like the Authorizer)
> > It's usually not best practice to mix in scala / java code.
> >
> > Thanks!
> > Stephane
> >
> > Kind regards,
> > Stephane
> >
> > [image: Simple Machines]
> >
> > Stephane Maarek | Developer
> >
> > +61 416 575 980
> > steph...@simplemachines.com.au
> > simplemachines.com.au
> > Level 2, 145 William Street, Sydney NSW 2010
> >
> > On 7 November 2017 at 20:27, Tom Bentley  wrote:
> >
> > > Hi Stephane,
> > >
> > > The vote on this KIP is on-going.
> > >
> > > I think it would be OK to make minor changes, but Edoardo and Mickael
> > > would have to not disagree with them.
> > >
> > > The packages have not been brought up as a problem before now. I don't
> > know
> > > the reason they're in the client's package, but I agree that it's not
> > > ideal. To me the situation with the policies is analogous to the
> > situation
> > > with the Authorizer which is in core: They're both broker-side
> extensions
> > > points which users can provide their own implementations of. I don't
> know
> > > whether the scala compiler is OK compiling interdependent scala and
> java
> > > code (maybe Ismael knows?), but if it is, I would be happy if these
> > > server-side policies were moved.
> > >
> > > Cheers,
> > >
> > > Tom
> > >
> > > On 7 November 2017 at 08:45, Stephane Maarek <
> > > steph...@simplemachines.com.au> wrote:
> > >
> > > > Hi Tom,
> > > >
> > > > What's the status of this? I was about to create a KIP to implement a
> > > > SimpleCreateTopicPolicy (and Alter, etc...)
> > > > These policies would have some basic parameters to check for
> > > > replication factor and min insync replicas (mostly) so that end users
> > > > can leverage them out of the box. This KIP obviously changes the
> > > > interface, so I'd like this to be in before I propose my KIP.
> > > >
> > > > I'll add my +1 to this, and hopefully we get quick progress so I can
> > > > propose my KIP.
> > > >
> > > > Finally, have the packages been discussed?
> > > > I find it extremely awkward to have the current CreateTopicPolicy
> part
> > of
> > > > the kafka-clients package, and would love to see the next classes
> > you're
> > > > implementing appear in core/src/main/scala/kafka/policy or
> > > server/policy.
> > > > Unless I'm missing something?
> > > >
> > > > Thanks for driving this
> > > > Stephane
> > > >
> > > > Kind regards,
> > > > Stephane
> > > >
> > > > [image: Simple Machines]
> > > >
> > > > Stephane Maarek | Developer
> > > >
> > > > +61 416 575 980
> > > > steph...@simplemachines.com.au
> > > > simplemachines.com.au
> > > > Level 2, 145 William Street, Sydney NSW 2010
> > > >
> > > > On 25 October 2017 at 19:45, Tom Bentley 
> > wrote:
> > > >
> > > > > It's been two weeks since I started the vote on this KIP and although
> > > > > there are two votes so 

[GitHub] kafka pull request #4189: KAFKA-6146: minimize the number of triggers enqueu...

2017-11-07 Thread onurkaraman
GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/4189

KAFKA-6146: minimize the number of triggers enqueuing 
PreferredReplicaLeaderElection events

We currently enqueue a PreferredReplicaLeaderElection controller event in 
PreferredReplicaElectionHandler's handleCreation, handleDeletion, and 
handleDataChange. We can just enqueue the event upon znode creation and after 
preferred replica leader election completes. The processing of this latter 
enqueue will register the exists watch on PreferredReplicaElectionZNode and 
perform any pending preferred replica leader election that may have occurred 
between completion and registration.
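The check-after-watch pattern that closes that gap looks roughly like this
with the raw ZooKeeper client (a sketch, not the controller's actual code; zk
and watcher are placeholders):

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

Stat stat = zk.exists("/admin/preferred_replica_election", watcher);
if (stat != null) {
    // The znode was created between election completion and watch
    // registration: handle the pending election now instead of waiting
    // for the next watch event.
}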

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka KAFKA-6146

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4189.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4189


commit d7087fec5ee00bd7a13ebe435905e85845aeb29f
Author: Onur Karaman 
Date:   2017-11-07T21:19:52Z

KAFKA-6146: minimize the number of triggers enqueuing 
PreferredReplicaLeaderElection events

We currently enqueue a PreferredReplicaLeaderElection controller event in 
PreferredReplicaElectionHandler's handleCreation, handleDeletion, and 
handleDataChange. We can just enqueue the event upon znode creation and after 
preferred replica leader election completes. The processing of this latter 
enqueue will register the exists watch on PreferredReplicaElectionZNode and 
perform any pending preferred replica leader election that may have occurred 
between completion and registration.




---


Invalid topology building: Processor has no access to StateStore

2017-11-07 Thread Florian Hussonnois
Hi,

Currently it's possible to register a store from a Processor instance via
the method StateStore#init(ProcessorContext, StateStore).

However, when this method is used, the registered store seems to never be
added to the variable ProcessorNode#stateStores, which leads to an
"Invalid topology building" exception when accessing the store via
ProcessorContext#getStateStore.

Is this the desired behavior or a bug?

-- 
Florian HUSSONNOIS


[jira] [Resolved] (KAFKA-6110) Warning when running the broker on Windows

2017-11-07 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian resolved KAFKA-6110.

   Resolution: Duplicate
Fix Version/s: 1.1.0

> Warning when running the broker on Windows
> --
>
> Key: KAFKA-6110
> URL: https://issues.apache.org/jira/browse/KAFKA-6110
> Project: Kafka
>  Issue Type: Bug
> Environment: Windows 10 VM
>Reporter: Vahid Hashemian
>Priority: Minor
> Fix For: 1.1.0
>
>
> *This issue exists in 1.0.0-RC2.*
> The following warning appears in the broker log at startup:
> {code}
> [2017-10-23 15:29:49,370] WARN Error processing 
> kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
> at 
> kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
> at kafka.log.LogManager.newGauge(LogManager.scala:50)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.log.LogManager.(LogManager.scala:116)
> at kafka.log.LogManager$.apply(LogManager.scala:799)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #1701: MINOR: increase usability of shell-scripts

2017-11-07 Thread igalic
Github user igalic closed the pull request at:

https://github.com/apache/kafka/pull/1701


---


[jira] [Created] (KAFKA-6183) Broker should send OffsetCommitResponse only after it has written offset to cache

2017-11-07 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-6183:
---

 Summary: Broker should send OffsetCommitResponse only after it has 
written offset to cache
 Key: KAFKA-6183
 URL: https://issues.apache.org/jira/browse/KAFKA-6183
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin


Currently the broker sends OffsetCommitResponse to the client before it writes the 
committed offset to disk and cache. Thus the client does not have read-after-write 
semantics when committing and then reading an offset. The following sequence of 
events may happen:

- Client sends offset commit request to broker.
- Broker sends offset commit response back.
- Client sends offset fetch request to broker.
- Broker returns an empty offset fetch response to client.
- Broker writes the committed offset to disk and cache.

The broker should return OffsetCommitResponse only after it has written the 
committed offset to disk and memory, similar to how the broker returns 
ProduceResponse after it has written data to disk. Note that the data does not 
have to be flushed to disk. This change makes offset commit semantics easier to 
use without incurring extra cost on the broker.
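Seen from a client, the race looks roughly like this (a sketch; consumer, tp 
and offsets are placeholders):

{code}
consumer.commitSync(offsets);                  // broker may respond before...
OffsetAndMetadata om = consumer.committed(tp); // ...updating its cache, so om
// can still be empty/stale even though the commit above "succeeded"
{code}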



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4188: KAFKA-6180: AbstractConfig.getList() should never ...

2017-11-07 Thread lahabana
GitHub user lahabana opened a pull request:

https://github.com/apache/kafka/pull/4188

KAFKA-6180: AbstractConfig.getList() should never return null

  Return an empty list instead of null to make handling these easier
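Without the change, every caller has to guard against null, roughly (a
sketch; config, the key and process() are placeholders):

import java.util.Collections;
import java.util.List;

List<String> values = config.getList("my.list.config");
if (values == null)                  // current behavior for an absent value
    values = Collections.emptyList();
for (String v : values)
    process(v);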

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lahabana/kafka kafka6180

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4188


commit a5ef570a5ee472c52b7adfd20c2cc315008fad48
Author: cmolter 
Date:   2017-11-07T18:18:47Z

KAFKA-6180: AbstractConfig.getList() should never return null

  Return an empty list instead of null to make handling these easier




---


Re: [DISCUSS] KIP-220: Add AdminClient into Kafka Streams' ClientSupplier

2017-11-07 Thread Guozhang Wang
Here is what I'm thinking about this trade-off: we want to fail fast when
brokers do not yet support the requested API version, with the cost that we
need to do this one-time check with an expendable NetworkClient plus a bit
longer startup latency. Here are a few different options:

1) we ask all the brokers: this is one extreme of the trade-off; we would still
need to handle UnsupportedVersionException since brokers can be downgraded.
2) we ask a random broker: this is what we did, and a bit weaker than 1)
but saves on latency.
3) we do not ask anyone: this is the other extreme of the trade-off.

To me 1) is overkill, so I did not include it in my proposals.
Between 2) and 3) I'm slightly preferring 3), since even in this case we
are sort of failing fast because we will throw the exception at the first
rebalance, but I can still see the value of option 2). Ideally we would have
an API on AdminClient to check API versions, but this does not exist
today and I do not want to drag this out by growing the scope of this KIP, so
the current proposal's implementation uses an expendable NetworkClient, which
is a bit fuzzy.
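For reference, the runtime handling that options 1)-3) all ultimately rely on
would look roughly like this (a sketch; names are placeholders):

import java.util.concurrent.ExecutionException;
import org.apache.kafka.common.errors.UnsupportedVersionException;

try {
    adminClient.createTopics(newTopics).all().get();
} catch (InterruptedException | ExecutionException e) {
    if (e.getCause() instanceof UnsupportedVersionException) {
        // brokers do not support the required API version: fail fast here,
        // i.e. at the first rebalance rather than at startup
    }
}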


Guozhang


On Tue, Nov 7, 2017 at 2:07 AM, Matthias J. Sax 
wrote:

> I would prefer to keep the current check. We could even improve it, and
> do the check to more than one brokers (even all provided
> bootstrap.servers) or some random servers after we got all meta data
> about the cluster.
>
>
> -Matthias
>
> On 11/7/17 1:01 AM, Guozhang Wang wrote:
> > Hello folks,
> >
> > One update I'd like to propose regarding "compatibility checking":
> > currently we create a single StreamsKafkaClient at the beginning to issue
> > an ApiVersionsRequest to a random broker and then check on its versions,
> > and fail if it does not satisfy the version (0.10.1+ without EOS, 0.11.0
> > with EOS); after this check we throw this client away. My original plan
> is
> > to replace this logic with the NetworkClient's own apiVersions, but after
> > some digging while working on the PR I found that exposing this
> apiVersions
> > variable from NetworkClient through AdminClient is not very straight
> > forward, plus it would need an API change on the AdminClient itself as
> well
> > to expose the versions information.
> >
> > On the other hand, this one-time compatibility checking's benefit may be
> > not significant: 1) it asks a random broker upon starting up, and hence
> > does not guarantee all broker's support the corresponding API versions at
> > that time; 2) brokers can still be upgraded / downgraded after the
> streams
> > app is up and running, and hence we still need to handle
> > UnsupportedVersionExceptions thrown from the producer / consumer / admin
> > client during the runtime anyways.
> >
> > So I'd like to propose two options in this KIP:
> >
> > 1) we remove this one-time compatibility check on Streams starting up in
> > this KIP, and solely rely on handling producer / consumer / admin
> client's
> > API UnsupportedVersionException throwable exceptions. Please share your
> > thoughts about this.
> >
> > 2) we create a one-time NetworkClient upon starting up, send the
> > ApiVersionsRequest and get the response and do the checking; after that
> > throw this client away.
> >
> > Please let me know what do you think. Thanks!
> >
> >
> > Guozhang
> >
> >
> > On Mon, Nov 6, 2017 at 7:56 AM, Matthias J. Sax 
> > wrote:
> >
> >> Thanks for the update and clarification.
> >>
> >> Sounds good to me :)
> >>
> >>
> >> -Matthias
> >>
> >>
> >>
> >> On 11/6/17 12:16 AM, Guozhang Wang wrote:
> >>> Thanks Matthias,
> >>>
> >>> 1) Updated the KIP page to include KAFKA-6126.
> >>> 2) For passing configs, I agree, will make a pass over the existing
> >> configs
> >>> passed to StreamsKafkaClient and update the wiki page accordingly, to
> >>> capture all changes that would happen for the replacement in this
> single
> >>> KIP.
> >>> 3) For internal topic purging, I'm not sure if we need to include this
> >> as a
> >>> public change since internal topics are meant for abstracted away from
> >> the
> >>> Streams users; they should not leverage such internal topics elsewhere
> >>> themselves. The only thing I can think of is for Kafka operators this
> >> would
> >>> mean that such internal topics would be largely reduced in their
> >> footprint,
> >>> but that would not be needed in the KIP as well.
> >>>
> >>>
> >>> Guozhang
> >>>
> >>>
> >>> On Sat, Nov 4, 2017 at 9:00 AM, Matthias J. Sax  >
> >>> wrote:
> >>>
>  I like this KIP. Can you also link to
>  https://issues.apache.org/jira/browse/KAFKA-6126 in the KIP?
> 
>  What I am wondering though: if we start to partially (ie, step by
> step)
>  replace the existing StreamsKafkaClient with Java AdminClient, don't
> we
>  need more KIPs? For example, if we use purge-api for internal topics,
> it
>  seems like a change that requires a KIP. Similar for passing configs
> --
>  the old client might have different configs than the new client

Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-07 Thread Ted Yu
https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2470/ is green.

Thanks Ismael.

On Tue, Nov 7, 2017 at 3:46 AM, Ismael Juma  wrote:

> I changed the Jenkins jobs to use Oracle JDK 9 instead of 9.0.1 until
> INFRA-15448 is fixed.
>
> Ismael
>
> On Mon, Nov 6, 2017 at 6:25 PM, Ismael Juma  wrote:
>
> > Thanks!
> >
> > Ismael
> >
> > On Mon, Nov 6, 2017 at 3:48 AM, Ted Yu  wrote:
> >
> >> Logged https://issues.apache.org/jira/browse/INFRA-15448
> >>
> >> On Thu, Nov 2, 2017 at 11:39 PM, Ismael Juma  wrote:
> >>
> >> > This looks to be an issue in Jenkins, not in Kafka. Apache Infra
> updated
> >> > Java 9 to 9.0.1 and it seems to have broken some of the Jenkins code.
> >> >
> >> > Ismael
> >> >
> >> > On 3 Nov 2017 1:53 am, "Ted Yu"  wrote:
> >> >
> >> > > Looking at earlier runs, e.g. :
> >> > > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2384/console
> >> > >
> >> > > FAILURE: Build failed with an exception.
> >> > >
> >> > > * What went wrong:
> >> > > Could not determine java version from '9.0.1'.
> >> > >
> >> > >
> >> > > This was the first build with 'out of range of int' exception:
> >> > >
> >> > >
> >> > > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2389/console
> >> > >
> >> > >
> >> > > However, I haven't found the commit which was at the tip of repo at
> >> that
> >> > > time.
> >> > >
> >> > >
> >> > > On Thu, Nov 2, 2017 at 6:40 PM, Guozhang Wang 
> >> > wrote:
> >> > >
> >> > > > Noticed that as well, could we track down to which git commit /
> >> version
> >> > > > upgrade caused the issue?
> >> > > >
> >> > > >
> >> > > > Guozhang
> >> > > >
> >> > > > On Thu, Nov 2, 2017 at 6:25 PM, Ted Yu 
> wrote:
> >> > > >
> >> > > > > Hi,
> >> > > > > I took a look at recent runs under https://builds.apache.
> >> > > > > org/job/kafka-pr-jdk9-scala2.12
> >> > > > >
> >> > > > > All the recent runs failed with:
> >> > > > >
> >> > > > > Could not update commit status of the Pull Request on GitHub.
> >> > > > > org.kohsuke.github.HttpException: Server returned HTTP response
> >> > code:
> >> > > > > 201, message: 'Created' for URL:
> >> > > > > https://api.github.com/repos/apache/kafka/statuses/
> >> > > > > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a
> >> > > > > at org.kohsuke.github.Requester.
> parse(Requester.java:633)
> >> > > > > at org.kohsuke.github.Requester.
> parse(Requester.java:594)
> >> > > > > at org.kohsuke.github.Requester._to(Requester.java:272)
> >> > > > > at org.kohsuke.github.Requester.to(Requester.java:234)
> >> > > > > at org.kohsuke.github.GHRepository.createCommitStatus(
> >> > > > > GHRepository.java:1071)
> >> > > > >
> >> > > > > ...
> >> > > > >
> >> > > > > Caused by: com.fasterxml.jackson.databind.JsonMappingException:
> >> > > > > Numeric value (4298492118) out of range of int
> >> > > > >  at [Source: {"url":"https://api.github.com/repos/apache/kafka/
> >> > > statuses/
> >> > > > > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a","id":4298492118,"
> >> > > > > state":"pending","description":"Build
> >> > > > > started sha1 is
> >> > > > > merged.","target_url":"https://builds.apache.org/job/kafka-
> >> > > > > pr-jdk9-scala2.12/2397/","context":"JDK
> >> > > > > 9 and Scala 2.12",
> >> > > > >
> >> > > > >
> >> > > > > Should we upgrade the version for jackson ?
> >> > > > >
> >> > > > >
> >> > > > > Cheers
> >> > > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > > -- Guozhang
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-07 Thread Jun Rao
Affan,

All known problems in the controller are described in the doc linked from
https://issues.apache.org/jira/browse/KAFKA-5027.

Thanks,

Jun

On Mon, Nov 6, 2017 at 11:00 PM, Affan Syed  wrote:

> Congrats Onur,
>
> Can you also share the document where all known problems are listed; I am
> assuming these bugs are still valid for the current stable release.
>
> Affan
>
> - Affan
>
> On Mon, Nov 6, 2017 at 10:24 PM, Jun Rao  wrote:
>
> > Hi, everyone,
> >
> > The PMC of Apache Kafka is pleased to announce a new Kafka committer Onur
> > Karaman.
> >
> > Onur's most significant work is the improvement of Kafka controller,
> which
> > is the brain of a Kafka cluster. Over time, we have accumulated quite a
> few
> > correctness and performance issues in the controller. There have been
> > attempts to fix controller issues in isolation, which would make the code
> > base more complicated without a clear path of solving all problems. Onur
> is
> > the one who took a holistic approach, by first documenting all known
> > issues, writing down a new design, coming up with a plan to deliver the
> > changes in phases and executing on it. At this point, Onur has completed
> > the two most important phases: making the controller single threaded and
> > changing the controller to use the async ZK api. The former fixed
> multiple
> > deadlocks and race conditions. The latter significantly improved the
> > performance when there are many partitions. Experimental results show
> that
> > Onur's work reduced the controlled shutdown time by a factor of 100 times
> > and the controller failover time by a factor of 3 times.
> >
> > Congratulations, Onur!
> >
> > Thanks,
> >
> > Jun (on behalf of the Apache Kafka PMC)
> >
>


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-07 Thread Affan Syed
Congrats Onur,

Can you also share the document where all known problems are listed; I am
assuming these bugs are still valid for the current stable release.

Affan

- Affan

On Mon, Nov 6, 2017 at 10:24 PM, Jun Rao  wrote:

> Hi, everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer Onur
> Karaman.
>
> Onur's most significant work is the improvement of Kafka controller, which
> is the brain of a Kafka cluster. Over time, we have accumulated quite a few
> correctness and performance issues in the controller. There have been
> attempts to fix controller issues in isolation, which would make the code
> base more complicated without a clear path of solving all problems. Onur is
> the one who took a holistic approach, by first documenting all known
> issues, writing down a new design, coming up with a plan to deliver the
> changes in phases and executing on it. At this point, Onur has completed
> the two most important phases: making the controller single threaded and
> changing the controller to use the async ZK api. The former fixed multiple
> deadlocks and race conditions. The latter significantly improved the
> performance when there are many partitions. Experimental results show that
> Onur's work reduced the controlled shutdown time by a factor of 100 times
> and the controller failover time by a factor of 3 times.
>
> Congratulations, Onur!
>
> Thanks,
>
> Jun (on behalf of the Apache Kafka PMC)
>


[jira] [Created] (KAFKA-6182) Automatic co-partitioning of topics via automatic intermediate topic with matching partitions

2017-11-07 Thread Antony Stubbs (JIRA)
Antony Stubbs created KAFKA-6182:


 Summary: Automatic co-partitioning of topics via automatic 
intermediate topic with matching partitions
 Key: KAFKA-6182
 URL: https://issues.apache.org/jira/browse/KAFKA-6182
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: Antony Stubbs


Currently it is up to the user to ensure that two input topics for a join have 
the same number of partitions. It would be great to have Kafka Streams detect 
this automatically, or at least give the user an easy way, and create an 
intermediate topic with the same number of partitions as the topic being 
joined with.

See 
https://docs.confluent.io/current/streams/developer-guide.html#joins-require-co-partitioning-of-the-input-data
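Until then, the manual workaround is an explicit repartition through a 
pre-created intermediate topic (a sketch; joiner is a placeholder ValueJoiner, 
and "lhs-repartitioned" must be created up front with the same partition count 
as the other join input):

{code}
KStream<String, String> lhsRepartitioned = lhs.through("lhs-repartitioned");
KStream<String, String> joined = lhsRepartitioned.join(rhs, joiner,
    JoinWindows.of(TimeUnit.MINUTES.toMillis(5)));
{code}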



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6181) Examining log messages with {{--deep-iteration}} should show superset of fields

2017-11-07 Thread Yeva Byzek (JIRA)
Yeva Byzek created KAFKA-6181:
-

 Summary: Examining log messages with {{--deep-iteration}} should 
show superset of fields
 Key: KAFKA-6181
 URL: https://issues.apache.org/jira/browse/KAFKA-6181
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.11.0.0
Reporter: Yeva Byzek
Priority: Minor


Printing log data on Kafka brokers using {{kafka.tools.DumpLogSegments}}: 
{{--deep-iteration}} should show a superset of the fields shown without this 
parameter in each message; however, some fields are missing. Impact: users 
need to execute both commands to get the full set of fields.

{noformat}
kafka-run-class kafka.tools.DumpLogSegments \
--print-data-log \
--files .log
Dumping .log
Starting offset: 0
baseOffset: 0 lastOffset: 35 baseSequence: -1 lastSequence: -1 producerId: -1 
producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false position: 0 
CreateTime: 1509987569448 isvalid: true size: 3985 magic: 2 compresscodec: NONE 
crc: 4227905507
{noformat}


{noformat}
kafka-run-class kafka.tools.DumpLogSegments \
--print-data-log \
--files .log \
--deep-iteration
Dumping .log
Starting offset: 0
offset: 0 position: 0 CreateTime: 1509987569420 isvalid: true keysize: -1 
valuesize: 100
magic: 2 compresscodec: NONE producerId: -1 sequence: -1 isTransactional: false 
headerKeys: [] payload: 
SSXVNJHPDQDXVCRASTVYBCWVMGNYKRXVZXKGXTSPSJDGYLUEGQFLAQLOCFLJBEPOWFNSOMYARHAOPUFOJHHDXEHXJBHW
{noformat}

Notice, for example, that {{partitionLeaderEpoch}} and {{crc}} are missing. 
Print these and all missing fields.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-11-07 Thread Paolo Patierno
Because Guozhang and Colin are doing a great job reviewing the related PR and 
we are really close to having it in a decent/final shape, what do the other 
committers think about this KIP?

We are stuck at 2 binding votes (and something like 5 non-binding). We need one 
last binding vote for the KIP to be accepted, so we can start to think about 
merging the PR for a future 1.1.0 release.


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Kamal 
Sent: Monday, November 6, 2017 1:26 AM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new 
Admin Client API

+1 (non-binding)

On Thu, Nov 2, 2017 at 2:13 PM, Paolo Patierno  wrote:

> Thanks Jason !
>
>
> I have just updated the KIP with DeleteRecordsOptions definition.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Jason Gustafson 
> Sent: Wednesday, November 1, 2017 10:09 PM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new
> Admin Client API
>
> +1 (binding). Just one nit: the KIP doesn't define the DeleteRecordsOptions
> object. I see it's empty in the PR, but we may as well document the full
> API in the KIP.
>
> -Jason
>
>
> On Wed, Nov 1, 2017 at 2:54 PM, Guozhang Wang  wrote:
>
> > Made a pass over the PR and left some comments. I'm +1 on the wiki design
> > page as well.
> >
> > On Tue, Oct 31, 2017 at 7:13 AM, Bill Bejeck  wrote:
> >
> > > +1
> > >
> > > Thanks,
> > > Bill
> > >
> > > On Tue, Oct 31, 2017 at 4:36 AM, Paolo Patierno 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > >
> > > > because I don't see any further discussion around KIP-204 (
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 204+%3A+adding+records+deletion+operation+to+the+new+
> Admin+Client+API)
> > > > and I have already opened a PR with the implementation, can we
> re-cover
> > > the
> > > > vote started on October 18 ?
> > > >
> > > > There are only "non binding" votes up to now.
> > > >
> > > > Thanks,
> > > >
> > > >
> > > > Paolo Patierno
> > > > Senior Software Engineer (IoT) @ Red Hat
> > > > Microsoft MVP on Azure & IoT
> > > > Microsoft Azure Advisor
> > > >
> > > > Twitter : @ppatierno
> > > > Linkedin : paolopatierno
> > > > Blog : DevExperience
> > > >
> > > >
> > > > 
> > > > From: Viktor Somogyi 
> > > > Sent: Wednesday, October 18, 2017 10:49 AM
> > > > To: dev@kafka.apache.org
> > > > Subject: Re: [VOTE] KIP-204 : adding records deletion operation to
> the
> > > new
> > > > Admin Client API
> > > >
> > > > +1 (non-binding)
> > > >
> > > > On Wed, Oct 18, 2017 at 8:23 AM, Manikumar <
> manikumar.re...@gmail.com>
> > > > wrote:
> > > >
> > > > > + (non-binding)
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Manikumar
> > > > >
> > > > > On Tue, Oct 17, 2017 at 7:42 AM, Dong Lin 
> > wrote:
> > > > >
> > > > > > Thanks for the KIP. +1 (non-binding)
> > > > > >
> > > > > > On Wed, Oct 11, 2017 at 2:27 AM, Ted Yu 
> > wrote:
> > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > On Mon, Oct 2, 2017 at 10:51 PM, Paolo Patierno <
> > > ppatie...@live.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I didn't see any further discussion around this KIP, so I'd
> > like
> > > to
> > > > > > start
> > > > > > > > the vote for it.
> > > > > > > >
> > > > > > > > Just for reference : https://cwiki.apache.org/
> > > > > > > > confluence/display/KAFKA/KIP-204+%3A+adding+records+
> > > > > > > > deletion+operation+to+the+new+Admin+Client+API
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > > Paolo Patierno
> > > > > > > > Senior Software Engineer (IoT) @ Red Hat
> > > > > > > > Microsoft MVP on Azure & IoT
> > > > > > > > Microsoft Azure Advisor
> > > > > > > >
> > > > > > > > Twitter : @ppatierno
> > > > > > > > Linkedin : paolopatierno > linkedin.com/in/paolopatierno
> > > >
> > > > > > > > Blog : DevExperience
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-07 Thread Jeyhun Karimov
Hi Jan,

Thanks for your comments. I agree that the implementation should not
introduce new "bugs" or "known issues" in the future.
I think we can either i) just drop the RecordContext argument for join methods
or ii) introduce binary aggregation logic for RecordContexts for
two-input-stream operators.
Any other comments/suggestions are of course welcome.
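To keep the discussion concrete, the kind of rich function the KIP proposes
would be used roughly like this (a hypothetical sketch following the KIP
draft; RichValueMapper/RecordContext here are not part of any released API):

stream.mapValues(new RichValueMapper<String, String>() {
    @Override
    public String apply(String value, RecordContext ctx) {
        // topic/partition/offset of the record currently being processed
        return ctx.topic() + "@" + ctx.offset() + ":" + value;
    }
});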


Cheers,
Jeyhun

On Tue, Nov 7, 2017 at 1:04 PM Jan Filipiak 
wrote:

>
> On 07.11.2017 12:59, Jan Filipiak wrote:
> >
> > On 07.11.2017 11:20, Matthias J. Sax wrote:
> >> About implementation if we do the KIP as proposed: I agree with Guozhang
> >> that we would need to use the currently processed record's metadata in
> >> the context. This does leak some implementation details, but I
> >> personally don't see a big issue here (at the same time, I am also fine
> >> to remove the RecordContext for joins if people think it's an issue).
> >>
> >> About the API: while I agree with Jan, that having two APIs for input
> >> streams/tables and "derived" streams/table (ie, result of
> >> KStream-KStream join or an aggregation) would be a way to avoid some
> >> semantic issue, I am not sure if it is worth the effort. IMHO, it would
> >> make the API more convoluted and if users access the RecordContext on a
> >> derived stream/table it's a "user error"
> > Why make it easy for the users to make mistakes in order to save some
> > effort? (That I don't quite think is that big actually)
> >> -- it's not really wrong as
> >> users still get the current record's context, but of course, we would
> >> leak implementation details (as above, I don't see a big issue here
> >> though).
> >>
> >> At the same time, I disagree with Jan that "it's not too hard to have a
> >> user keeping track" -- if we apply this argument, we could even argue
> >> that it's not too hard to use a Transformer instead of a map/filter etc.
> >> We want to add "syntactic sugar" with this change and thus should really
> >> provide value and not introduce a half-baked solution for which users
> >> still need to do manual customizing.
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 11/7/17 5:54 AM, Jan Filipiak wrote:
> >>> I agree completely.
> >>>
> >>> Exposing this information in a place where it has no _natural_
> >>> belonging
> >>> might really be a bad blocker in the long run.
> >>>
> >>> Concerning your first point. I would argue it's not too hard to have a
> >>> user keep track of these. If we still don't want the user
> >>> to keep track of these I would argue that all > projection only <
> >>> transformations on a Source-backed KTable/KStream
> >>> could also return a Ktable/KStream instance of the type we return from
> >>> the topology builder.
> >>> Only after any operation that exceeds projection or filter one would
> >>> return a KTable not granting access to this any longer.
> >>>
> >>> Even then it's difficult already: I never ran a topology with caching but
> >>> I am not even 100% sure what the record Context means behind
> >>> a materialized KTable with Caching? Topic and Partition are probably
> >>> with some reasoning but offset is probably only the offset causing the
> >>> flush?
> >>> So one might as well think to drop offsets from this RecordContext.
> >>>
> >>> Best Jan
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On 07.11.2017 03:18, Guozhang Wang wrote:
>  Regarding the API design (the proposed set of overloads v.s. one
>  overload
>  on #map to enrich the record), I think what we have represents a good
>  trade-off between API succinctness and user convenience: on one
>  hand we
>  definitely want to keep as fewer overloaded functions as possible.
>  But on
>  the other hand if we only do that in, say, the #map() function then
>  this
>  enrichment could be an overkill: think of a topology that has 7
>  operators
>  in a chain, where users want to access the record context on
>  operator #2
>  and #6 only, with the "enrichment" manner they need to do the
>  enrichment on
>  operator #2 and keep it that way until #6. In addition, the
>  RecordContext
>  fields (topic, offset, etc) are really orthogonal to the key-value
>  payloads
>  themselves, so I think separating them into this object is a
>  cleaner way.
> 
>  Regarding the RecordContext inheritance, this is actually a good point
>  that
>  have not been discussed thoroughly before. Here are my my two
>  cents: one
>  natural way would be to inherit the record context from the
>  "triggering"
>  record, for example in a join operator, if the record from stream A
>  triggers the join then the record context is inherited from with that
>  record. This is also aligned with the lower-level PAPI interface. A
>  counter
>  argument, though, would be that this is sort of leaking the internal
>  implementations of the DSL, so that moving forward if we did some
>  refactoring to our join implement

Jenkins build is back to normal : kafka-trunk-jdk8 #2194

2017-11-07 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-07 Thread Jan Filipiak


On 07.11.2017 12:59, Jan Filipiak wrote:


On 07.11.2017 11:20, Matthias J. Sax wrote:

About implementation if we do the KIP as proposed: I agree with Guozhang
that we would need to use the currently processed record's metadata in
the context. This does leak some implementation details, but I
personally don't see a big issue here (at the same time, I am also fine
to remove the RecordContext for joins if people think it's an issue).

About the API: while I agree with Jan, that having two APIs for input
streams/tables and "derived" streams/table (ie, result of
KStream-KStream join or an aggregation) would be a way to avoid some
semantic issue, I am not sure if it is worth the effort. IMHO, it would
make the API more convoluted and if users access the RecordContext on a
derived stream/table it's a "user error"
Why make it easy for the users to make mistakes in order to save some effort?
(That I don't quite think is that big actually)

-- it's not really wrong as
users still get the current record's context, but of course, we would
leak implementation details (as above, I don't see a big issue here
though).


At the same time, I disagree with Jan that "it's not too hard to have a
user keeping track" -- if we apply this argument, we could even argue
that it's not too hard to use a Transformer instead of a map/filter etc.
We want to add "syntactic sugar" with this change and thus should really
provide value and not introduce a half-baked solution for which users
still need to do manual customizing.


-Matthias


On 11/7/17 5:54 AM, Jan Filipiak wrote:

I agree completely.

Exposing this information in a place where it has no _natural_ belonging
might really be a bad blocker in the long run.

Concerning your first point. I would argue it's not too hard to have a
user keep track of these. If we still don't want the user
to keep track of these I would argue that all > projection only <
transformations on a Source-backed KTable/KStream
could also return a KTable/KStream instance of the type we return from
the topology builder.
Only after any operation that exceeds projection or filter one would
return a KTable not granting access to this any longer.

Even then it's difficult already: I never ran a topology with caching but
I am not even 100% sure what the record Context means behind
a materialized KTable with Caching? Topic and Partition are probably
with some reasoning but offset is probably only the offset causing the
flush?
So one might as well think to drop offsets from this RecordContext.

Best Jan







On 07.11.2017 03:18, Guozhang Wang wrote:
Regarding the API design (the proposed set of overloads v.s. one overload
on #map to enrich the record), I think what we have represents a good
trade-off between API succinctness and user convenience: on one hand we
definitely want to keep as few overloaded functions as possible. But on
the other hand if we only do that in, say, the #map() function then this
enrichment could be an overkill: think of a topology that has 7 operators
in a chain, where users want to access the record context on operator #2
and #6 only; with the "enrichment" manner they need to do the enrichment on
operator #2 and keep it that way until #6. In addition, the RecordContext
fields (topic, offset, etc) are really orthogonal to the key-value payloads
themselves, so I think separating them into this object is a cleaner way.

Regarding the RecordContext inheritance, this is actually a good point that
has not been discussed thoroughly before. Here are my two cents: one
natural way would be to inherit the record context from the "triggering"
record; for example in a join operator, if the record from stream A
triggers the join then the record context is inherited from that
record. This is also aligned with the lower-level PAPI interface. A counter
argument, though, would be that this is sort of leaking the internal
implementations of the DSL, so that moving forward if we did some
refactoring to our join implementations so that the triggering record can
change, the RecordContext would also be different. I do not know how much
it would really affect end users, but would like to hear your opinions.

Agreed to 100% exposing this information


Guozhang


On Mon, Nov 6, 2017 at 1:00 PM, Jeyhun Karimov 
wrote:


Hi Jan,

Sorry for late reply.


The API Design doesn't look appealing


In terms of API design we tried to preserve the java functional
interfaces.
We applied the same set of rich methods for KTable to make it 
compatible

with the rest of overloaded APIs.

It should be 100% sufficient to offer a KTable + KStream that is directly
fed from a topic with 1 additional overload for the #map() methods to
methods to

cover every usecase while keeping the API in a way better state.

- IMO this seems a workaround, rather than a direct solution.

Perhaps we should continue this discussion in DISCUSS thread.


Cheers,
Jeyhun


On Mon, Nov 6, 2017 at 9:14 PM Jan Filipiak 


wrote:


Hi.

I

Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-07 Thread Jan Filipiak


On 07.11.2017 11:20, Matthias J. Sax wrote:

About implementation if we do the KIP as proposed: I agree with Guozhang
that we would need to use the currently processed record's metadata in
the context. This does leak some implementation details, but I
personally don't see a big issue here (at the same time, I am also fine
to remove the RecordContext for joins if people think it's an issue).

About the API: while I agree with Jan, that having two APIs for input
streams/tables and "derived" streams/table (ie, result of
KStream-KStream join or an aggregation) would be a way to avoid some
semantic issue, I am not sure if it is worth the effort. IMHO, it would
make the API more convoluted and if users access the RecordContext on a
derived stream/table it's a "user error"

Why make it easy for the users to make mistakes in order to save some effort?
(That I don't quite think is that big actually)

-- it's not really wrong as
users still get the current record's context, but of course, we would
leak implementation details (as above, I don't see a big issue here though).

At the same time, I disagree with Jan that "it's not too hard to have a
user keeping track" -- if we apply this argument, we could even argue
that it's not too hard to use a Transformer instead of a map/filter etc.
We want to add "syntactic sugar" with this change and thus should really
provide value and not introduce a half-baked solution for which users
still need to do manual customizing.


-Matthias


On 11/7/17 5:54 AM, Jan Filipiak wrote:

I agree completely.

Exposing this information in a place where it has no _natural_ belonging
might really be a bad blocker in the long run.

Concerning your first point. I would argue it's not too hard to have a
user keep track of these. If we still don't want the user
to keep track of these I would argue that all > projection only <
transformations on a Source-backed KTable/KStream
could also return a Ktable/KStream instance of the type we return from
the topology builder.
Only after any operation that exceeds projection or filter one would
return a KTable not granting access to this any longer.

Even then it's difficult already: I never ran a topology with caching but
I am not even 100% sure what the record Context means behind
a materialized KTable with Caching? Topic and Partition are probably
with some reasoning but offset is probably only the offset causing the
flush?
So one might as well think to drop offsets from this RecordContext.

Best Jan







On 07.11.2017 03:18, Guozhang Wang wrote:

Regarding the API design (the proposed set of overloads v.s. one overload
on #map to enrich the record), I think what we have represents a good
trade-off between API succinctness and user convenience: on one hand we
definitely want to keep as few overloaded functions as possible. But on
the other hand if we only do that in, say, the #map() function then this
enrichment could be an overkill: think of a topology that has 7 operators
in a chain, where users want to access the record context on operator #2
and #6 only, with the "enrichment" manner they need to do the
enrichment on
operator #2 and keep it that way until #6. In addition, the RecordContext
fields (topic, offset, etc) are really orthogonal to the key-value
payloads
themselves, so I think separating them into this object is a cleaner way.

Regarding the RecordContext inheritance, this is actually a good point that
has not been discussed thoroughly before. Here are my two cents: one
natural way would be to inherit the record context from the "triggering"
record; for example in a join operator, if the record from stream A
triggers the join then the record context is inherited from that
record. This is also aligned with the lower-level PAPI interface. A
counter
argument, though, would be that this is sort of leaking the internal
implementations of the DSL, so that moving forward if we did some
refactoring to our join implementations so that the triggering record can
change, the RecordContext would also be different. I do not know how much
it would really affect end users, but would like to hear your opinions.

Agreed to 100% exposing this information


Guozhang


On Mon, Nov 6, 2017 at 1:00 PM, Jeyhun Karimov 
wrote:


Hi Jan,

Sorry for late reply.


The API Design doesn't look appealing


In terms of API design we tried to preserve the java functional
interfaces.
We applied the same set of rich methods for KTable to make it compatible
with the rest of overloaded APIs.

It should be 100% sufficient to offer a KTable + KStream that is directly
fed from a topic with 1 additional overload for the #map() methods to
cover every usecase while keeping the API in a way better state.

- IMO this seems like a workaround rather than a direct solution.

Perhaps we should continue this discussion in DISCUSS thread.


Cheers,
Jeyhun


On Mon, Nov 6, 2017 at 9:14 PM Jan Filipiak 
wrote:


Hi.

I do understand that it might come in handy.
   From my POV in any relati
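To ground the discussion, a minimal sketch of the kind of "rich" API being
debated in this thread. This is not the final KIP-159 proposal; the interface
names and the exact RecordContext accessors below are assumptions for
illustration only:

    // Hypothetical record context exposing the metadata discussed above.
    public interface RecordContext {
        String topic();
        int partition();
        long offset();
        long timestamp();
    }

    // Hypothetical "rich" mapper: like ValueMapper, but it can also see the
    // record's context, so users need not enrich the payload manually.
    public interface RichValueMapper<V, VR> {
        VR apply(V value, RecordContext context);
    }

    // Usage sketch: tag a value with its origin on, say, operator #2 of a
    // longer chain, without carrying the enrichment through operators #3-#5.
    RichValueMapper<String, String> tagger = (value, ctx) ->
        value + " @ " + ctx.topic() + "/" + ctx.partition() + ":" + ctx.offset();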

wiki delete comments permission?

2017-11-07 Thread Glen Mazza
Hi Devs, getting greedy after getting my earlier write access request 
for the Kafka Wiki granted 
(https://www.mail-archive.com/dev@kafka.apache.org/msg16718.html), would 
it be OK if I (mazzag) also had the Wiki "delete comments" right?  I'd 
like to get rid of some of the spam and no-longer-relevant comments.


Looking at the developer setup page 
(https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup), we 
have a 2013 comment noting that the sbt command given is not correct, but 
sbt is not in the instructions anymore (i.e., no longer used/relevant).  
Also, an autogenerated spam comment right under it ("I just wanted to 
comment on your blog and say I really enjoyed reading your blog 
here...[Sound Box Link]"), added purely for the spammer's SEO purposes, 
should also be nuked.


Regards,
Glen



Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-07 Thread Ismael Juma
I changed the Jenkins jobs to use Oracle JDK 9 instead of 9.0.1 until
INFRA-15448 is fixed.

Ismael

On Mon, Nov 6, 2017 at 6:25 PM, Ismael Juma  wrote:

> Thanks!
>
> Ismael
>
> On Mon, Nov 6, 2017 at 3:48 AM, Ted Yu  wrote:
>
>> Logged https://issues.apache.org/jira/browse/INFRA-15448
>>
>> On Thu, Nov 2, 2017 at 11:39 PM, Ismael Juma  wrote:
>>
>> > This looks to be an issue in Jenkins, not in Kafka. Apache Infra updated
>> > Java 9 to 9.0.1 and it seems to have broken some of the Jenkins code.
>> >
>> > Ismael
>> >
>> > On 3 Nov 2017 1:53 am, "Ted Yu"  wrote:
>> >
>> > > Looking at earlier runs, e.g. :
>> > > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2384/console
>> > >
>> > > FAILURE: Build failed with an exception.
>> > >
>> > > * What went wrong:
>> > > Could not determine java version from '9.0.1'.
>> > >
>> > >
>> > > This was the first build with 'out of range of int' exception:
>> > >
>> > >
>> > > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2389/console
>> > >
>> > >
>> > > However, I haven't found the commit which was at the tip of repo at
>> that
>> > > time.
>> > >
>> > >
>> > > On Thu, Nov 2, 2017 at 6:40 PM, Guozhang Wang 
>> > wrote:
>> > >
>> > > > Noticed that as well, could we track down to which git commit /
>> version
>> > > > upgrade caused the issue?
>> > > >
>> > > >
>> > > > Guozhang
>> > > >
>> > > > On Thu, Nov 2, 2017 at 6:25 PM, Ted Yu  wrote:
>> > > >
>> > > > > Hi,
>> > > > > I took a look at recent runs under https://builds.apache.
>> > > > > org/job/kafka-pr-jdk9-scala2.12
>> > > > >
>> > > > > All the recent runs failed with:
>> > > > >
>> > > > > Could not update commit status of the Pull Request on GitHub.
>> > > > > org.kohsuke.github.HttpException: Server returned HTTP response
>> > code:
>> > > > > 201, message: 'Created' for URL:
>> > > > > https://api.github.com/repos/apache/kafka/statuses/
>> > > > > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a
>> > > > > at org.kohsuke.github.Requester.parse(Requester.java:633)
>> > > > > at org.kohsuke.github.Requester.parse(Requester.java:594)
>> > > > > at org.kohsuke.github.Requester._to(Requester.java:272)
>> > > > > at org.kohsuke.github.Requester.to(Requester.java:234)
>> > > > > at org.kohsuke.github.GHRepository.createCommitStatus(
>> > > > > GHRepository.java:1071)
>> > > > >
>> > > > > ...
>> > > > >
>> > > > > Caused by: com.fasterxml.jackson.databind.JsonMappingException:
>> > > > > Numeric value (4298492118) out of range of int
>> > > > >  at [Source: {"url":"https://api.github.com/repos/apache/kafka/
>> > > statuses/
>> > > > > 3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a","id":4298492118,"
>> > > > > state":"pending","description":"Build
>> > > > > started sha1 is
>> > > > > merged.","target_url":"https://builds.apache.org/job/kafka-
>> > > > > pr-jdk9-scala2.12/2397/","context":"JDK
>> > > > > 9 and Scala 2.12",
>> > > > >
>> > > > >
>> > > > > Should we upgrade the version for jackson ?
>> > > > >
>> > > > >
>> > > > > Cheers
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > -- Guozhang
>> > > >
>> > >
>> >
>>
>
>
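The underlying failure is easy to reproduce in isolation: GitHub status IDs had
grown past Integer.MAX_VALUE (2147483647), so any binding that maps the "id"
field to an int can no longer deserialize the response. A minimal sketch (the
class names are illustrative, not the Jenkins GitHub plugin's actual types):

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class StatusIdOverflow {
        // Mapping id to an int reproduces the failure; a long fixes it.
        static class StatusInt  { public int id; }
        static class StatusLong { public long id; }

        public static void main(String[] args) throws Exception {
            String json = "{\"id\":4298492118}";
            ObjectMapper mapper = new ObjectMapper();
            // Works: 4298492118 fits in a long.
            System.out.println(mapper.readValue(json, StatusLong.class).id);
            // Throws JsonMappingException:
            // Numeric value (4298492118) out of range of int
            mapper.readValue(json, StatusInt.class);
        }
    }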


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-07 Thread Paolo Patierno
Congrats Onur! Well deserved!


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Viktor Somogyi 
Sent: Tuesday, November 7, 2017 11:34 AM
To: dev@kafka.apache.org
Subject: Re: [ANNOUNCE] New committer: Onur Karaman

Congrats Onur!

On Tue, Nov 7, 2017 at 9:17 AM, Tom Bentley  wrote:

> Well done Onur.
>
> On 7 November 2017 at 06:52, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Congratulations Onur!!
> > On Tue, 7 Nov 2017 at 06:30, Jaikiran Pai 
> > wrote:
> >
> > > Congratulations Onur!
> > >
> > > -Jaikiran
> > >
> > >
> > > On 06/11/17 10:54 PM, Jun Rao wrote:
> > > > Hi, everyone,
> > > >
> > > > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> > Onur
> > > > Karaman.
> > > >
> > > > Onur's most significant work is the improvement of Kafka controller,
> > > which
> > > > is the brain of a Kafka cluster. Over time, we have accumulated
> quite a
> > > few
> > > > correctness and performance issues in the controller. There have been
> > > > attempts to fix controller issues in isolation, which would make the
> > code
> > > > base more complicated without a clear path of solving all problems.
> > Onur
> > > is
> > > > the one who took a holistic approach, by first documenting all known
> > > > issues, writing down a new design, coming up with a plan to deliver
> the
> > > > changes in phases and executing on it. At this point, Onur has
> > completed
> > > > the two most important phases: making the controller single threaded
> > and
> > > > changing the controller to use the async ZK api. The former fixed
> > > multiple
> > > > deadlocks and race conditions. The latter significantly improved the
> > > > performance when there are many partitions. Experimental results show
> > > that
> > > > Onur's work reduced the controlled shutdown time by a factor of 100
> > times
> > > > and the controller failover time by a factor of 3 times.
> > > >
> > > > Congratulations, Onur!
> > > >
> > > > Thanks,
> > > >
> > > > Jun (on behalf of the Apache Kafka PMC)
> > > >
> > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-07 Thread Viktor Somogyi
Congrats Onur!

On Tue, Nov 7, 2017 at 9:17 AM, Tom Bentley  wrote:

> Well done Onur.
>
> On 7 November 2017 at 06:52, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Congratulations Onur!!
> > On Tue, 7 Nov 2017 at 06:30, Jaikiran Pai 
> > wrote:
> >
> > > Congratulations Onur!
> > >
> > > -Jaikiran
> > >
> > >
> > > On 06/11/17 10:54 PM, Jun Rao wrote:
> > > > Hi, everyone,
> > > >
> > > > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> > Onur
> > > > Karaman.
> > > >
> > > > Onur's most significant work is the improvement of Kafka controller,
> > > which
> > > > is the brain of a Kafka cluster. Over time, we have accumulated
> quite a
> > > few
> > > > correctness and performance issues in the controller. There have been
> > > > attempts to fix controller issues in isolation, which would make the
> > code
> > > > base more complicated without a clear path of solving all problems.
> > Onur
> > > is
> > > > the one who took a holistic approach, by first documenting all known
> > > > issues, writing down a new design, coming up with a plan to deliver
> the
> > > > changes in phases and executing on it. At this point, Onur has
> > completed
> > > > the two most important phases: making the controller single threaded
> > and
> > > > changing the controller to use the async ZK api. The former fixed
> > > multiple
> > > > deadlocks and race conditions. The latter significantly improved the
> > > > performance when there are many partitions. Experimental results show
> > > that
> > > > Onur's work reduced the controlled shutdown time by a factor of 100
> > times
> > > > and the controller failover time by a factor of 3 times.
> > > >
> > > > Congratulations, Onur!
> > > >
> > > > Thanks,
> > > >
> > > > Jun (on behalf of the Apache Kafka PMC)
> > > >
> > >
> > >
> >
>


[jira] [Created] (KAFKA-6180) AbstractConfig.getList() should never return null

2017-11-07 Thread Charly Molter (JIRA)
Charly Molter created KAFKA-6180:


 Summary: AbstractConfig.getList() should never return null
 Key: KAFKA-6180
 URL: https://issues.apache.org/jira/browse/KAFKA-6180
 Project: Kafka
  Issue Type: Improvement
  Components: config
Affects Versions: 1.0.0, 0.11.0.0, 0.10.2.1, 0.10.2.0, 0.10.1.1, 0.10.1.0
Reporter: Charly Molter
Priority: Trivial


AbstractConfig.getList returns null if the property is unset and there's no 
default.

This creates a lot of cases where we need to do null checks (and remember to do them).
It's good practice to just return an empty list instead, as code usually 
handles empty lists naturally.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
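A short sketch of the difference this makes for calling code; "some.prop" is a
placeholder property name and process(v) stands in for arbitrary per-entry
handling:

    // Today: callers must remember the null check when the property is
    // unset and has no default.
    List<String> values = config.getList("some.prop");
    if (values != null) {
        for (String v : values) {
            process(v);
        }
    }

    // With the proposed behavior (empty list instead of null), the
    // natural loop is already safe:
    for (String v : config.getList("some.prop")) {
        process(v);
    }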


Build failed in Jenkins: kafka-trunk-jdk7 #2952

2017-11-07 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Eliminate unnecessary Topic(And)Partition allocations in

--
[...truncated 1.40 MB...]

org.apache.kafka.common.utils.UtilsTest > testAbs STARTED

org.apache.kafka.common.utils.UtilsTest > testAbs PASSED

org.apache.kafka.common.utils.UtilsTest > testMin STARTED

org.apache.kafka.common.utils.UtilsTest > testMin PASSED

org.apache.kafka.common.utils.UtilsTest > toArray STARTED

org.apache.kafka.common.utils.UtilsTest > toArray PASSED

org.apache.kafka.common.utils.UtilsTest > utf8ByteBufferSerde STARTED

org.apache.kafka.common.utils.UtilsTest > utf8ByteBufferSerde PASSED

org.apache.kafka.common.utils.UtilsTest > testJoin STARTED

org.apache.kafka.common.utils.UtilsTest > testJoin PASSED

org.apache.kafka.common.utils.UtilsTest > 
testReadFullyOrFailWithPartialFileChannelReads STARTED

org.apache.kafka.common.utils.UtilsTest > 
testReadFullyOrFailWithPartialFileChannelReads PASSED

org.apache.kafka.common.utils.UtilsTest > toArrayDirectByteBuffer STARTED

org.apache.kafka.common.utils.UtilsTest > toArrayDirectByteBuffer PASSED

org.apache.kafka.common.utils.UtilsTest > testReadBytes STARTED

org.apache.kafka.common.utils.UtilsTest > testReadBytes PASSED

org.apache.kafka.common.utils.UtilsTest > testGetHost STARTED

org.apache.kafka.common.utils.UtilsTest > testGetHost PASSED

org.apache.kafka.common.utils.UtilsTest > testGetPort STARTED

org.apache.kafka.common.utils.UtilsTest > testGetPort PASSED

org.apache.kafka.common.utils.UtilsTest > 
testReadFullyWithPartialFileChannelReads STARTED

org.apache.kafka.common.utils.UtilsTest > 
testReadFullyWithPartialFileChannelReads PASSED

org.apache.kafka.common.utils.UtilsTest > testRecursiveDelete STARTED

org.apache.kafka.common.utils.UtilsTest > testRecursiveDelete PASSED

org.apache.kafka.common.utils.UtilsTest > testReadFullyOrFailWithRealFile 
STARTED

org.apache.kafka.common.utils.UtilsTest > testReadFullyOrFailWithRealFile PASSED

org.apache.kafka.common.utils.UtilsTest > writeToBuffer STARTED

org.apache.kafka.common.utils.UtilsTest > writeToBuffer PASSED

org.apache.kafka.common.utils.UtilsTest > testReadFullyIfEofIsReached STARTED

org.apache.kafka.common.utils.UtilsTest > testReadFullyIfEofIsReached PASSED

org.apache.kafka.common.utils.UtilsTest > testFormatBytes STARTED

org.apache.kafka.common.utils.UtilsTest > testFormatBytes PASSED

org.apache.kafka.common.utils.UtilsTest > utf8ByteArraySerde STARTED

org.apache.kafka.common.utils.UtilsTest > utf8ByteArraySerde PASSED

org.apache.kafka.common.utils.UtilsTest > testCloseAll STARTED

org.apache.kafka.common.utils.UtilsTest > testCloseAll PASSED

org.apache.kafka.common.utils.UtilsTest > testFormatAddress STARTED

org.apache.kafka.common.utils.UtilsTest > testFormatAddress PASSED

org.apache.kafka.common.utils.SanitizerTest > testSanitize STARTED

org.apache.kafka.common.utils.SanitizerTest > testSanitize PASSED

org.apache.kafka.common.utils.SanitizerTest > testJmxSanitize STARTED

org.apache.kafka.common.utils.SanitizerTest > testJmxSanitize PASSED

org.apache.kafka.common.utils.JavaTest > testLoadKerberosLoginModule STARTED

org.apache.kafka.common.utils.JavaTest > testLoadKerberosLoginModule PASSED

org.apache.kafka.common.utils.JavaTest > testIsIBMJdk STARTED

org.apache.kafka.common.utils.JavaTest > testIsIBMJdk PASSED

org.apache.kafka.common.utils.JavaTest > testJavaVersion STARTED

org.apache.kafka.common.utils.JavaTest > testJavaVersion PASSED

org.apache.kafka.common.utils.ChecksumsTest > testUpdateInt STARTED

org.apache.kafka.common.utils.ChecksumsTest > testUpdateInt PASSED

org.apache.kafka.common.utils.ChecksumsTest > testUpdateLong STARTED

org.apache.kafka.common.utils.ChecksumsTest > testUpdateLong PASSED

org.apache.kafka.common.utils.ChecksumsTest > 
testUpdateByteBufferWithOffsetPosition STARTED

org.apache.kafka.common.utils.ChecksumsTest > 
testUpdateByteBufferWithOffsetPosition PASSED

org.apache.kafka.common.utils.ChecksumsTest > testUpdateByteBuffer STARTED

org.apache.kafka.common.utils.ChecksumsTest > testUpdateByteBuffer PASSED

org.apache.kafka.common.utils.ByteUtilsTest > testInvalidVarlong STARTED

org.apache.kafka.common.utils.ByteUtilsTest > testInvalidVarlong PASSED

org.apache.kafka.common.utils.ByteUtilsTest > testVarlongSerde STARTED

org.apache.kafka.common.utils.ByteUtilsTest > testVarlongSerde PASSED

org.apache.kafka.common.utils.ByteUtilsTest > testReadUnsignedInt STARTED

org.apache.kafka.common.utils.ByteUtilsTest > testReadUnsignedInt PASSED

org.apache.kafka.common.utils.ByteUtilsTest > testWriteUnsignedIntLEToArray 
STARTED

org.apache.kafka.common.utils.ByteUtilsTest > testWriteUnsignedIntLEToArray 
PASSED

org.apache.kafka.common.utils.ByteUtilsTest > testInvalidVarint STARTED

org.apache.kafka.common.utils.ByteUtilsTest > testInvalidVarint PASSED

org.apache.kafka.common.utils.ByteUtilsTest > t

Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-11-07 Thread Ismael Juma
The idea is that you only depend on a Java jar. The core jar includes the
Scala version in the name and you should not care about that when
implementing a Java interface.

Ismael

On Tue, Nov 7, 2017 at 10:37 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Thanks !
>
> How about a java folder package in the core then ? It's not a separate jar
> and it's still java?
>
> Nonetheless I agree these are details. I just got really confused when
> trying to write my policy and would hope that confusion is not shared by
> others because it's a "client " class although should only reside within a
> broker
>
> On 7 Nov. 2017 9:04 pm, "Ismael Juma"  wrote:
>
> The location of the policies is fine. Note that the package _does not_
> include clients in the name. If we ever have enough server side only code
> to merit a separate JAR, we can do that and it's mostly compatible (users
> would only have to update their build dependency). Generally, all public
> APIs going forward will be in Java.
>
> Ismael
>
> On Tue, Nov 7, 2017 at 9:44 AM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi Tom,
> >
> > Regarding the java / scala compilation, I believe this is fine (the
> > compiler will know), but any reason why you don't want the policy to be
> > implemented using Scala ? (like the Authorizer)
> > It's usually not best practice to mix in scala / java code.
> >
> > Thanks!
> > Stephane
> >
> > Kind regards,
> > Stephane
> >
> > [image: Simple Machines]
> >
> > Stephane Maarek | Developer
> >
> > +61 416 575 980
> > steph...@simplemachines.com.au
> > simplemachines.com.au
> > Level 2, 145 William Street, Sydney NSW 2010
> >
> > On 7 November 2017 at 20:27, Tom Bentley  wrote:
> >
> > > Hi Stephane,
> > >
> > > The vote on this KIP is on-going.
> > >
> > > I think it would be OK to make minor changes, but Edoardo and Mickael
> > would
> > > have to not disagree with them.
> > >
> > > The packages have not been brought up as a problem before now. I don't
> > know
> > > the reason they're in the client's package, but I agree that it's not
> > > ideal. To me the situation with the policies is analogous to the
> > situation
> > > with the Authorizer which is in core: They're both broker-side
> extension
> > > points which users can provide their own implementations of. I don't
> know
> > > whether the scala compiler is OK compiling interdependent scala and
> java
> > > code (maybe Ismael knows?), but if it is, I would be happy if these
> > > server-side policies were moved.
> > >
> > > Cheers,
> > >
> > > Tom
> > >
> > > On 7 November 2017 at 08:45, Stephane Maarek <
> > steph...@simplemachines.com.
> > > au
> > > > wrote:
> > >
> > > > Hi Tom,
> > > >
> > > > What's the status of this? I was about to create a KIP to implement a
> > > > SimpleCreateTopicPolicy
> > > > (and Alter, etc...)
> > > > These policies would have some basic parameters to check for
> > > > replication factor and min insync replicas (mostly) so that end users
> > can
> > > > leverage them out of the box. This KIP obviously changes the
> interface
> > so
> > > > I'd like this to be in before I propose my KIP
> > > >
> > > > I'll add my +1 to this, and hopefully we get quick progress so I can
> > > > propose my KIP.
> > > >
> > > > Finally, have the packages been discussed?
> > > > I find it extremely awkward to have the current CreateTopicPolicy
> part
> > of
> > > > the kafka-clients package, and would love to see the next classes
> > you're
> > > > implementing appear in core/src/main/scala/kafka/policy or
> > > server/policy.
> > > > Unless I'm missing something?
> > > >
> > > > Thanks for driving this
> > > > Stephane
> > > >
> > > > Kind regards,
> > > > Stephane
> > > >
> > > > [image: Simple Machines]
> > > >
> > > > Stephane Maarek | Developer
> > > >
> > > > +61 416 575 980
> > > > steph...@simplemachines.com.au
> > > > simplemachines.com.au
> > > > Level 2, 145 William Street, Sydney NSW 2010
> > > >
> > > > On 25 October 2017 at 19:45, Tom Bentley 
> > wrote:
> > > >
> > > > > It's been two weeks since I started the vote on this KIP and
> although
> > > > there
> > > > > are two votes so far there are no binding votes. Any feedback from
> > > > > committers would be appreciated.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Tom
> > > > >
> > > > > On 12 October 2017 at 10:09, Edoardo Comar 
> > wrote:
> > > > >
> > > > > > Thanks Tom with the last additions (changes to the protocol) it
> now
> > > > > > supersedes KIP-170
> > > > > >
> > > > > > +1 non-binding
> > > > > > --
> > > > > >
> > > > > > Edoardo Comar
> > > > > >
> > > > > > IBM Message Hub
> > > > > >
> > > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > > >
> > > > > >
> > > > > >
> > > > > > From:   Tom Bentley 
> > > > > > To: dev@kafka.apache.org
> > > > > > Date:   11/10/2017 09:21
> > > > > > Subject:[VOTE] KIP-201: Rationalising policy interfaces
> > > > > >
> > > > > >
> >

Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-11-07 Thread Stephane Maarek
Thanks!

How about a Java package folder in core, then? It's not a separate jar,
and it's still Java.

Nonetheless, I agree these are details. I just got really confused when
trying to write my policy, and I hope that confusion is not shared by
others, because it's a "client" class although it should only reside
within a broker.

On 7 Nov. 2017 9:04 pm, "Ismael Juma"  wrote:

The location of the policies is fine. Note that the package _does not_
include clients in the name. If we ever have enough server side only code
to merit a separate JAR, we can do that and it's mostly compatible (users
would only have to update their build dependency). Generally, all public
APIs going forward will be in Java.

Ismael

On Tue, Nov 7, 2017 at 9:44 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi Tom,
>
> Regarding the java / scala compilation, I believe this is fine (the
> compiler will know), but any reason why you don't want the policy to be
> implemented using Scala ? (like the Authorizer)
> It's usually not best practice to mix in scala / java code.
>
> Thanks!
> Stephane
>
> Kind regards,
> Stephane
>
> [image: Simple Machines]
>
> Stephane Maarek | Developer
>
> +61 416 575 980
> steph...@simplemachines.com.au
> simplemachines.com.au
> Level 2, 145 William Street, Sydney NSW 2010
>
> On 7 November 2017 at 20:27, Tom Bentley  wrote:
>
> > Hi Stephane,
> >
> > The vote on this KIP is on-going.
> >
> > I think it would be OK to make minor changes, but Edoardo and Mickael
> would
> > have to not disagree with them.
> >
> > The packages have not been brought up as a problem before now. I don't
> know
> > the reason they're in the client's package, but I agree that it's not
> > ideal. To me the situation with the policies is analogous to the
> situation
> > with the Authorizer which is in core: They're both broker-side
extension
> > points which users can provide their own implementations of. I don't
know
> > whether the scala compiler is OK compiling interdependent scala and java
> > code (maybe Ismael knows?), but if it is, I would be happy if these
> > server-side policies were moved.
> >
> > Cheers,
> >
> > Tom
> >
> > On 7 November 2017 at 08:45, Stephane Maarek <
> steph...@simplemachines.com.
> > au
> > > wrote:
> >
> > > Hi Tom,
> > >
> > > What's the status of this? I was about to create a KIP to implement a
> > > SimpleCreateTopicPolicy
> > > (and Alter, etc...)
> > > These policies would have some basic parameters to check for
> > > replication factor and min insync replicas (mostly) so that end users
> can
> > > leverage them out of the box. This KIP obviously changes the interface
> so
> > > I'd like this to be in before I propose my KIP
> > >
> > > I'll add my +1 to this, and hopefully we get quick progress so I can
> > > propose my KIP.
> > >
> > > Finally, have the packages been discussed?
> > > I find it extremely awkward to have the current CreateTopicPolicy part
> of
> > > the kafka-clients package, and would love to see the next classes
> you're
> > > implementing appear in core/src/main/scala/kafka/policy or
> > server/policy.
> > > Unless I'm missing something?
> > >
> > > Thanks for driving this
> > > Stephane
> > >
> > > Kind regards,
> > > Stephane
> > >
> > > [image: Simple Machines]
> > >
> > > Stephane Maarek | Developer
> > >
> > > +61 416 575 980
> > > steph...@simplemachines.com.au
> > > simplemachines.com.au
> > > Level 2, 145 William Street, Sydney NSW 2010
> > >
> > > On 25 October 2017 at 19:45, Tom Bentley 
> wrote:
> > >
> > > > It's been two weeks since I started the vote on this KIP and
although
> > > there
> > > > are two votes so far there are no binding votes. Any feedback from
> > > > committers would be appreciated.
> > > >
> > > > Thanks,
> > > >
> > > > Tom
> > > >
> > > > On 12 October 2017 at 10:09, Edoardo Comar 
> wrote:
> > > >
> > > > > Thanks Tom with the last additions (changes to the protocol) it
now
> > > > > supersedes KIP-170
> > > > >
> > > > > +1 non-binding
> > > > > --
> > > > >
> > > > > Edoardo Comar
> > > > >
> > > > > IBM Message Hub
> > > > >
> > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > >
> > > > >
> > > > >
> > > > > From:   Tom Bentley 
> > > > > To: dev@kafka.apache.org
> > > > > Date:   11/10/2017 09:21
> > > > > Subject:[VOTE] KIP-201: Rationalising policy interfaces
> > > > >
> > > > >
> > > > >
> > > > > I would like to start a vote on KIP-201, which proposes to replace
> > the
> > > > > existing policy interfaces with a single new policy interface that
> > also
> > > > > extends policy support to cover new and existing APIs in the
> > > AdminClient.
> > > > >
> > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> > > > > apache.org_confluence_display_KAFKA_KIP-2D201-253A-
> > > > > 2BRationalising-2BPolicy-2Binterfaces&d=DwIBaQ&c=jf_
> > > > iaSHvJObTbx-siA1ZOg&r=
> > > > > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=
> tE3xo2lmmoCoF

Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-07 Thread Matthias J. Sax
About implementation, if we do the KIP as proposed: I agree with Guozhang
that we would need to use the currently processed record's metadata in
the context. This does leak some implementation details, but I
personally don't see a big issue here (at the same time, I am also fine
to remove the RecordContext for joins if people think it's an issue).

About the API: while I agree with Jan that having two APIs for input
streams/tables and "derived" streams/tables (i.e., the result of a
KStream-KStream join or an aggregation) would be a way to avoid some
semantic issues, I am not sure it is worth the effort. IMHO, it would
make the API more convoluted, and if users access the RecordContext on a
derived stream/table it's a "user error" -- it's not really wrong, as
users still get the current record's context, but of course we would
leak implementation details (as above, I don't see a big issue here though).

At the same time, I disagree with Jan that "it's not too hard to have a
user keeping track" -- if we apply this argument, we could even argue
that it's not too hard to use a Transformer instead of a map/filter, etc.
We want to add "syntactic sugar" with this change and thus should really
provide value and not introduce a half-baked solution for which users
still need to do manual customization.


-Matthias


On 11/7/17 5:54 AM, Jan Filipiak wrote:
> I agree completely.
> 
> Exposing this information in a place where it has no _natural_ belonging
> might really be a bad blocker in the long run.
> 
> Concerning your first point: I would argue it's not too hard to have a
> user keep track of these. If we still don't want the user
> to keep track of these I would argue that all > projection only <
> transformations on a Source-backed KTable/KStream
> could also return a KTable/KStream instance of the type we return from
> the topology builder.
> Only after any operation that exceeds projection or filter one would
> return a KTable not granting access to this any longer.
> 
> Even then it's difficult already: I never ran a topology with caching but
> I am not even 100% sure what the record Context means behind
> a materialized KTable with Caching? Topic and Partition are probably
> with some reasoning but offset is probably only the offset causing the
> flush?
> So one might as well think to drop offsets from this RecordContext.
> 
> Best Jan
> 
> 
> 
> 
> 
> 
> 
> On 07.11.2017 03:18, Guozhang Wang wrote:
>> Regarding the API design (the proposed set of overloads v.s. one overload
>> on #map to enrich the record), I think what we have represents a good
>> trade-off between API succinctness and user convenience: on one hand we
>> definitely want to keep as few overloaded functions as possible. But on
>> the other hand if we only do that in, say, the #map() function then this
>> enrichment could be an overkill: think of a topology that has 7 operators
>> in a chain, where users want to access the record context on operator #2
>> and #6 only, with the "enrichment" manner they need to do the
>> enrichment on
>> operator #2 and keep it that way until #6. In addition, the RecordContext
>> fields (topic, offset, etc) are really orthogonal to the key-value
>> payloads
>> themselves, so I think separating them into this object is a cleaner way.
>>
>> Regarding the RecordContext inheritance, this is actually a good point
>> that
>> has not been discussed thoroughly before. Here are my two cents: one
>> natural way would be to inherit the record context from the "triggering"
>> record, for example in a join operator, if the record from stream A
>> triggers the join, then the record context is inherited from that
>> record. This is also aligned with the lower-level PAPI interface. A
>> counter
>> argument, though, would be that this is sort of leaking the internal
>> implementations of the DSL, so that moving forward if we did some
>> refactoring to our join implementations so that the triggering record can
>> change, the RecordContext would also be different. I do not know how much
>> it would really affect end users, but would like to hear your opinions.
> Agreed 100% on exposing this information
>>
>>
>> Guozhang
>>
>>
>> On Mon, Nov 6, 2017 at 1:00 PM, Jeyhun Karimov 
>> wrote:
>>
>>> Hi Jan,
>>>
>>> Sorry for the late reply.
>>>
>>>
>>> The API Design doesn't look appealing
>>>
>>>
>>> In terms of API design we tried to preserve the java functional
>>> interfaces.
>>> We applied the same set of rich methods for KTable to make it compatible
>>> with the rest of overloaded APIs.
>>>
>>> It should be 100% sufficient to offer a KTable + KStream that is
>>> directly
 fed from a topic with one additional overload for the #map() methods to
 cover every use case while keeping the API in a way better state.
>>>
>>> - IMO this seems like a workaround rather than a direct solution.
>>>
>>> Perhaps we should continue this discussion in DISCUSS thread.
>>>
>>>
>>> Cheers,
>>> Jeyhun
>>>
>>>
>>> On Mon, Nov 6, 2017 at 9:14 PM Jan Filipiak 
>>

Re: [DISCUSS] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2017-11-07 Thread Jorge Esteban Quilcate Otoya
Hi Tom,

1. You're right. I've updated the KIP accordingly.
2. Yes, I added it to keep consistency, but I'd like to know what others
think about this too.

Cheers,
Jorge.

El mar., 7 nov. 2017 a las 9:29, Tom Bentley ()
escribió:

> Hi again Jorge,
>
> A couple of minor points:
>
> 1. ConsumerGroupDescription has the member `name`, but everywhere else that
> I've seen the term "group id" is used, so perhaps calling it "id" or
> "groupId" would be more consistent.
> 2. I think you've added ConsumerGroupListing for consistency with
> TopicListing. For topics it makes sense because as well as the name there
> is whether the topic is internal. For consumer groups, though there is just
> the name and having a separate ConsumerGroupListing seems like it doesn't
> add very much, and would mostly get in the way when using the API. I would
> be interested in what others thought about this.
>
> Cheers,
>
> Tom
>
> On 6 November 2017 at 22:16, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Thanks for the feedback!
> >
> > @Ted Yu: Links added.
> >
> > KIP updated. Changes:
> >
> > * `#listConsumerGroups(ListConsumerGroupsOptions options)` added to the
> > API.
> > * `DescribeConsumerGroupResult` and `ConsumerGroupDescription` classes
> > described.
> >
> > Cheers,
> > Jorge.
> >
> >
> >
> >
> > El lun., 6 nov. 2017 a las 20:28, Guozhang Wang ()
> > escribió:
> >
> > > Hi Matthias,
> > >
> > > You meant "list groups" I think?
> > >
> > > Guozhang
> > >
> > > On Mon, Nov 6, 2017 at 11:17 AM, Matthias J. Sax <
> matth...@confluent.io>
> > > wrote:
> > >
> > > > The main goal of this KIP is to enable decoupling StreamsResetter
> from
> > > > core module. For this case (ie, using AdminClient within
> > > > StreamsResetter) we get the group.id from the user as command line
> > > > argument. Thus, I think the KIP is useful without "describe group"
> > > > command to.
> > > >
> > > > I am happy to include "describe group" command in the KIP. Just want
> to
> > > > point out, that there is no reason to insist on it IMHO.
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 11/6/17 7:06 PM, Guozhang Wang wrote:
> > > > > A quick question: I think we do not yet have the `list consumer
> > groups`
> > > > > func as in the old AdminClient. Without this `describe group` given
> > the
> > > > > group id would not be very useful. Could you include this as well
> in
> > > your
> > > > KIP? More specifically, you can look at kafka.admin.AdminClient for
> > more
> > > > > details on the APIs.
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Mon, Nov 6, 2017 at 7:22 AM, Ted Yu 
> wrote:
> > > > >
> > > > >> Please fill out Discussion thread and JIRA fields.
> > > > >>
> > > > >> Thanks
> > > > >>
> > > > >> On Mon, Nov 6, 2017 at 2:02 AM, Tom Bentley <
> t.j.bent...@gmail.com>
> > > > wrote:
> > > > >>
> > > > >>> Hi Jorge,
> > > > >>>
> > > > >>> Thanks for the KIP. A few initial comments:
> > > > >>>
> > > > >>> 1. The AdminClient doesn't have any API like
> `listConsumerGroups()`
> > > > >>> currently, so in general how does a client know the group ids it
> is
> > > > >>> interested in?
> > > > >>> 2. Could you fill in the API of DescribeConsumerGroupResult, just
> > so
> > > > >>> everyone knows exactly what being proposed.
> > > > >>> 3. Can you describe the ConsumerGroupDescription class?
> > > > >>> 4. Probably worth mentioning that this will use
> > > > >>> DescribeGroupsRequest/Response, and also enumerating the error
> > codes
> > > > >> that
> > > > >>> can return (or, equivalently, enumerate the exceptions throw from
> > the
> > > > >>> futures obtained from the DescribeConsumerGroupResult).
> > > > >>>
> > > > >>> Cheers,
> > > > >>>
> > > > >>> Tom
> > > > >>>
> > > > >>> On 6 November 2017 at 08:19, Jorge Esteban Quilcate Otoya <
> > > > >>> quilcate.jo...@gmail.com> wrote:
> > > > >>>
> > > >  Hi everyone,
> > > > 
> > > >  I would like to start a discussion on KIP-222 [1] based on issue
> > > [2].
> > > > 
> > > >  Looking forward to feedback.
> > > > 
> > > >  [1]
> > > >  https://cwiki.apache.org/confluence/pages/viewpage.
> > > > >>> action?pageId=74686265
> > > >  [2] https://issues.apache.org/jira/browse/KAFKA-6058
> > > > 
> > > >  Cheers,
> > > >  Jorge.
> > > > 
> > > > >>>
> > > > >>
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
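To make the shape of the proposal concrete, a rough sketch of how the two calls
would compose from a caller's perspective. The method and class names follow
the KIP draft as quoted above, but none of this is a shipped API, and the
result accessors (all(), description()) are assumptions; props is a placeholder
for the client configuration:

    AdminClient admin = AdminClient.create(props);

    // Discover the group ids first (Tom's point 1), then describe each group.
    for (ConsumerGroupListing listing :
             admin.listConsumerGroups(new ListConsumerGroupsOptions()).all().get()) {
        DescribeConsumerGroupResult result =
            admin.describeConsumerGroup(listing.name(),
                                        new DescribeConsumerGroupOptions());
        ConsumerGroupDescription description = result.description().get();
        System.out.println(description);
    }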


[GitHub] kafka pull request #4187: MINOR: Handle error metrics removal during shutdow...

2017-11-07 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4187

MINOR: Handle error metrics removal during shutdown



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka MINOR-metrics-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4187.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4187


commit 7c282414ae1b1edcd21edbcc3e310684ade034c4
Author: Rajini Sivaram 
Date:   2017-11-07T09:46:59Z

MINOR: Handle error metrics removal during shutdown




---


Re: [DISCUSS] KIP-220: Add AdminClient into Kafka Streams' ClientSupplier

2017-11-07 Thread Matthias J. Sax
I would prefer to keep the current check. We could even improve it, and
do the check against more than one broker (even all provided
bootstrap.servers) or some random servers after we have all the metadata
about the cluster.


-Matthias

On 11/7/17 1:01 AM, Guozhang Wang wrote:
> Hello folks,
> 
> One update I'd like to propose regarding "compatibility checking":
> currently we create a single StreamsKafkaClient at the beginning to issue
> an ApiVersionsRequest to a random broker and then check on its versions,
> and fail if it does not satisfy the version (0.10.1+ without EOS, 0.11.0
> with EOS); after this check we throw this client away. My original plan is
> to replace this logic with the NetworkClient's own apiVersions, but after
> some digging while working on the PR I found that exposing this apiVersions
> variable from NetworkClient through AdminClient is not very
> straightforward, plus it would need an API change on the AdminClient itself as well
> to expose the versions information.
> 
> On the other hand, this one-time compatibility checking's benefit may be
> not significant: 1) it asks a random broker upon starting up, and hence
> does not guarantee all brokers support the corresponding API versions at
> that time; 2) brokers can still be upgraded / downgraded after the streams
> app is up and running, and hence we still need to handle
> UnsupportedVersionExceptions thrown from the producer / consumer / admin
> client during the runtime anyways.
> 
> So I'd like to propose two options in this KIP:
> 
> 1) we remove this one-time compatibility check on Streams starting up in
> this KIP, and solely rely on handling producer / consumer / admin client's
> API UnsupportedVersionException throwable exceptions. Please share your
> thoughts about this.
> 
> 2) we create a one-time NetworkClient upon starting up, send the
> ApiVersionsRequest and get the response and do the checking; after that
> throw this client away.
> 
> Please let me know what do you think. Thanks!
> 
> 
> Guozhang
> 
> 
> On Mon, Nov 6, 2017 at 7:56 AM, Matthias J. Sax 
> wrote:
> 
>> Thanks for the update and clarification.
>>
>> Sounds good to me :)
>>
>>
>> -Matthias
>>
>>
>>
>> On 11/6/17 12:16 AM, Guozhang Wang wrote:
>>> Thanks Matthias,
>>>
>>> 1) Updated the KIP page to include KAFKA-6126.
>>> 2) For passing configs, I agree, will make a pass over the existing
>> configs
>>> passed to StreamsKafkaClient and update the wiki page accordingly, to
>>> capture all changes that would happen for the replacement in this single
>>> KIP.
>>> 3) For internal topic purging, I'm not sure if we need to include this
>> as a
>>> public change since internal topics are meant to be abstracted away from
>> the
>>> Streams users; they should not leverage such internal topics elsewhere
>>> themselves. The only thing I can think of is for Kafka operators this
>> would
>>> mean that such internal topics would be largely reduced in their
>> footprint,
>>> but that would not be needed in the KIP as well.
>>>
>>>
>>> Guozhang
>>>
>>>
>>> On Sat, Nov 4, 2017 at 9:00 AM, Matthias J. Sax 
>>> wrote:
>>>
 I like this KIP. Can you also link to
 https://issues.apache.org/jira/browse/KAFKA-6126 in the KIP?

 What I am wondering though: if we start to partially (ie, step by step)
 replace the existing StreamsKafkaClient with Java AdminClient, don't we
 need more KIPs? For example, if we use purge-api for internal topics, it
 seems like a change that requires a KIP. Similar for passing configs --
 the old client might have different config than the new client? Can we
 double check this?

 Thus, it might make sense to replace the old client with the new one in
 one shot.


 -Matthias

 On 11/4/17 4:01 AM, Ted Yu wrote:
> Looks good overall.
>
> bq. the creation within StreamsPartitionAssignor
>
> Typo above: should be StreamPartitionAssignor
>
> On Fri, Nov 3, 2017 at 4:49 PM, Guozhang Wang 
 wrote:
>
>> Hello folks,
>>
>> I have filed a new KIP on adding AdminClient into Streams for internal
>> topic management.
>>
>> Looking for feedback on
>>
>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 220%3A+Add+AdminClient+into+Kafka+Streams%27+ClientSupplier
>> > 220%3A+Add+AdminClient+into+Kafka+Streams%27+ClientSupplier>*
>>
>> --
>> -- Guozhang
>>
>


>>>
>>>
>>
>>
> 
> 
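For option 1, the runtime handling would look roughly like the sketch below:
the UnsupportedVersionException surfaces as the cause of the ExecutionException
on the admin client's futures. The topic name, partition count, and replication
factor are placeholders, and a surrounding method that throws Exception is
assumed:

    import java.util.Collections;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.UnsupportedVersionException;

    // Sketch: no up-front broker version probe; instead, react to
    // UnsupportedVersionException when an admin request is rejected.
    try (AdminClient admin = AdminClient.create(props)) {
        NewTopic internalTopic = new NewTopic("app-repartition", 4, (short) 3);
        try {
            admin.createTopics(Collections.singleton(internalTopic)).all().get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof UnsupportedVersionException) {
                // Brokers too old for this request: fail fast with a clear
                // message, as the one-time startup check used to do.
                throw new IllegalStateException(
                        "Brokers do not support the required API versions",
                        e.getCause());
            }
            throw e;
        }
    }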





Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-11-07 Thread Ismael Juma
The location of the policies is fine. Note that the package _does not_
include clients in the name. If we ever have enough server side only code
to merit a separate JAR, we can do that and it's mostly compatible (users
would only have to update their build dependency). Generally, all public
APIs going forward will be in Java.

Ismael

On Tue, Nov 7, 2017 at 9:44 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi Tom,
>
> Regarding the java / scala compilation, I believe this is fine (the
> compiler will know), but any reason why you don't want the policy to be
> implemented using Scala ? (like the Authorizer)
> It's usually not best practice to mix in scala / java code.
>
> Thanks!
> Stephane
>
> Kind regards,
> Stephane
>
> [image: Simple Machines]
>
> Stephane Maarek | Developer
>
> +61 416 575 980
> steph...@simplemachines.com.au
> simplemachines.com.au
> Level 2, 145 William Street, Sydney NSW 2010
>
> On 7 November 2017 at 20:27, Tom Bentley  wrote:
>
> > Hi Stephane,
> >
> > The vote on this KIP is on-going.
> >
> > I think it would be OK to make minor changes, but Edoardo and Mickael
> would
> > have to not disagree with them.
> >
> > The packages have not been brought up as a problem before now. I don't
> know
> > the reason they're in the client's package, but I agree that it's not
> > ideal. To me the situation with the policies is analogous to the
> situation
> > with the Authorizer which is in core: They're both broker-side extension
> > points which users can provide their own implementations of. I don't know
> > whether the scala compiler is OK compiling interdependent scala and java
> > code (maybe Ismael knows?), but if it is, I would be happy if these
> > server-side policies were moved.
> >
> > Cheers,
> >
> > Tom
> >
> > On 7 November 2017 at 08:45, Stephane Maarek <
> steph...@simplemachines.com.
> > au
> > > wrote:
> >
> > > Hi Tom,
> > >
> > > What's the status of this? I was about to create a KIP to implement a
> > > SimpleCreateTopicPolicy
> > > (and Alter, etc...)
> > > These policies would have some basic parameters to check for
> > > replication factor and min insync replicas (mostly) so that end users
> can
> > > leverage them out of the box. This KIP obviously changes the interface
> so
> > > I'd like this to be in before I propose my KIP
> > >
> > > I'll add my +1 to this, and hopefully we get quick progress so I can
> > > propose my KIP.
> > >
> > > Finally, have the packages been discussed?
> > > I find it extremely awkward to have the current CreateTopicPolicy part
> of
> > > the kafka-clients package, and would love to see the next classes
> you're
> > > implementing appear in core/src/main/scala/kafka/policy or
> > server/policy.
> > > Unless I'm missing something?
> > >
> > > Thanks for driving this
> > > Stephane
> > >
> > > Kind regards,
> > > Stephane
> > >
> > > [image: Simple Machines]
> > >
> > > Stephane Maarek | Developer
> > >
> > > +61 416 575 980
> > > steph...@simplemachines.com.au
> > > simplemachines.com.au
> > > Level 2, 145 William Street, Sydney NSW 2010
> > >
> > > On 25 October 2017 at 19:45, Tom Bentley 
> wrote:
> > >
> > > > It's been two weeks since I started the vote on this KIP and although
> > > there
> > > > are two votes so far there are no binding votes. Any feedback from
> > > > committers would be appreciated.
> > > >
> > > > Thanks,
> > > >
> > > > Tom
> > > >
> > > > On 12 October 2017 at 10:09, Edoardo Comar 
> wrote:
> > > >
> > > > > Thanks Tom with the last additions (changes to the protocol) it now
> > > > > supersedes KIP-170
> > > > >
> > > > > +1 non-binding
> > > > > --
> > > > >
> > > > > Edoardo Comar
> > > > >
> > > > > IBM Message Hub
> > > > >
> > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > >
> > > > >
> > > > >
> > > > > From:   Tom Bentley 
> > > > > To: dev@kafka.apache.org
> > > > > Date:   11/10/2017 09:21
> > > > > Subject:[VOTE] KIP-201: Rationalising policy interfaces
> > > > >
> > > > >
> > > > >
> > > > > I would like to start a vote on KIP-201, which proposes to replace
> > the
> > > > > existing policy interfaces with a single new policy interface that
> > also
> > > > > extends policy support to cover new and existing APIs in the
> > > AdminClient.
> > > > >
> > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> > > > > apache.org_confluence_display_KAFKA_KIP-2D201-253A-
> > > > > 2BRationalising-2BPolicy-2Binterfaces&d=DwIBaQ&c=jf_
> > > > iaSHvJObTbx-siA1ZOg&r=
> > > > > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=
> tE3xo2lmmoCoFZAX60PBT-
> > > > > J8TBDWcv-tarJyAlgwfJY&s=puFqZ3Ny4Xcdil5A5huwA5WZtS3WZp
> > D9517uJkCgrCk&e=
> > > > >
> > > > >
> > > > > Thanks for your time.
> > > > >
> > > > > Tom
> > > > >
> > > > >
> > > > >
> > > > > Unless stated otherwise above:
> > > > > IBM United Kingdom Limited - Registered in England and Wales with
> > > number
> > > > > 741598.
> > > > > Registered office: PO

Jenkins build is back to normal : kafka-trunk-jdk9 #175

2017-11-07 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #4152: MINOR: Eliminate unnecessary Topic(And)Partition a...

2017-11-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4152


---


Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-11-07 Thread Stephane Maarek
Hi Tom,

Regarding the Java / Scala compilation, I believe this is fine (the
compiler will know), but is there any reason why you don't want the policy to be
implemented using Scala (like the Authorizer)?
It's usually not best practice to mix Scala and Java code.

Thanks!
Stephane

Kind regards,
Stephane

[image: Simple Machines]

Stephane Maarek | Developer

+61 416 575 980
steph...@simplemachines.com.au
simplemachines.com.au
Level 2, 145 William Street, Sydney NSW 2010

On 7 November 2017 at 20:27, Tom Bentley  wrote:

> Hi Stephane,
>
> The vote on this KIP is on-going.
>
> I think it would be OK to make minor changes, but Edoardo and Mickael would
> have to not disagree with them.
>
> The packages have not been brought up as a problem before now. I don't know
> the reason they're in the client's package, but I agree that it's not
> ideal. To me the situation with the policies is analogous to the situation
> with the Authorizer which is in core: They're both broker-side extension
> points which users can provide their own implementations of. I don't know
> whether the scala compiler is OK compiling interdependent scala and java
> code (maybe Ismael knows?), but if it is, I would be happy if these
> server-side policies were moved.
>
> Cheers,
>
> Tom
>
> On 7 November 2017 at 08:45, Stephane Maarek  au
> > wrote:
>
> > Hi Tom,
> >
> > What's the status of this? I was about to create a KIP to implement a
> > SimpleCreateTopicPolicy
> > (and Alter, etc...)
> > These policies would have some basic parameters to check for
> > replication factor and min insync replicas (mostly) so that end users can
> > leverage them out of the box. This KIP obviously changes the interface so
> > I'd like this to be in before I propose my KIP
> >
> > I'll add my +1 to this, and hopefully we get quick progress so I can
> > propose my KIP.
> >
> > Finally, have the packages been discussed?
> > I find it extremely awkward to have the current CreateTopicPolicy part of
> > the kafka-clients package, and would love to see the next classes you're
> > implementing appear in core/src/main/scala/kafka/policy or
> server/policy.
> > Unless I'm missing something?
> >
> > Thanks for driving this
> > Stephane
> >
> > Kind regards,
> > Stephane
> >
> > [image: Simple Machines]
> >
> > Stephane Maarek | Developer
> >
> > +61 416 575 980
> > steph...@simplemachines.com.au
> > simplemachines.com.au
> > Level 2, 145 William Street, Sydney NSW 2010
> >
> > On 25 October 2017 at 19:45, Tom Bentley  wrote:
> >
> > > It's been two weeks since I started the vote on this KIP and although
> > there
> > > are two votes so far there are no binding votes. Any feedback from
> > > committers would be appreciated.
> > >
> > > Thanks,
> > >
> > > Tom
> > >
> > > On 12 October 2017 at 10:09, Edoardo Comar  wrote:
> > >
> > > > Thanks Tom with the last additions (changes to the protocol) it now
> > > > supersedes KIP-170
> > > >
> > > > +1 non-binding
> > > > --
> > > >
> > > > Edoardo Comar
> > > >
> > > > IBM Message Hub
> > > >
> > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > >
> > > >
> > > >
> > > > From:   Tom Bentley 
> > > > To: dev@kafka.apache.org
> > > > Date:   11/10/2017 09:21
> > > > Subject:[VOTE] KIP-201: Rationalising policy interfaces
> > > >
> > > >
> > > >
> > > > I would like to start a vote on KIP-201, which proposes to replace
> the
> > > > existing policy interfaces with a single new policy interface that
> also
> > > > extends policy support to cover new and existing APIs in the
> > AdminClient.
> > > >
> > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> > > > apache.org_confluence_display_KAFKA_KIP-2D201-253A-
> > > > 2BRationalising-2BPolicy-2Binterfaces&d=DwIBaQ&c=jf_
> > > iaSHvJObTbx-siA1ZOg&r=
> > > > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=tE3xo2lmmoCoFZAX60PBT-
> > > > J8TBDWcv-tarJyAlgwfJY&s=puFqZ3Ny4Xcdil5A5huwA5WZtS3WZp
> D9517uJkCgrCk&e=
> > > >
> > > >
> > > > Thanks for your time.
> > > >
> > > > Tom
> > > >
> > > >
> > > >
> > > > Unless stated otherwise above:
> > > > IBM United Kingdom Limited - Registered in England and Wales with
> > number
> > > > 741598.
> > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6
> > > 3AU
> > > >
> > >
> >
>


Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-11-07 Thread Tom Bentley
Hi Stephane,

The vote on this KIP is on-going.

I think it would be OK to make minor changes, but Edoardo and Mickael would
have to not disagree with them.

The packages have not been brought up as a problem before now. I don't know
the reason they're in the clients package, but I agree that it's not
ideal. To me the situation with the policies is analogous to the situation
with the Authorizer, which is in core: they're both broker-side extension
points for which users can provide their own implementations. I don't know
whether the Scala compiler is OK compiling interdependent Scala and Java
code (maybe Ismael knows?), but if it is, I would be happy if these
server-side policies were moved.

Cheers,

Tom

On 7 November 2017 at 08:45, Stephane Maarek  wrote:

> Hi Tom,
>
> What's the status of this? I was about to create a KIP to implement a
> SimpleCreateTopicPolicy
> (and Alter, etc...)
> These policies would have some basic parameters to check for
> replication factor and min insync replicas (mostly) so that end users can
> leverage them out of the box. This KIP obviously changes the interface so
> I'd like this to be in before I propose my KIP
>
> I'll add my +1 to this, and hopefully we get quick progress so I can
> propose my KIP.
>
> Finally, have the packages been discussed?
> I find it extremely awkward to have the current CreateTopicPolicy part of
> the kafka-clients package, and would love to see the next classes you're
> implementing appear in core/src/main/scala/kafka/policy or server/policy.
> Unless I'm missing something?
>
> Thanks for driving this
> Stephane
>
> Kind regards,
> Stephane
>
> [image: Simple Machines]
>
> Stephane Maarek | Developer
>
> +61 416 575 980
> steph...@simplemachines.com.au
> simplemachines.com.au
> Level 2, 145 William Street, Sydney NSW 2010
>
> On 25 October 2017 at 19:45, Tom Bentley  wrote:
>
> > It's been two weeks since I started the vote on this KIP and although
> there
> > are two votes so far there are no binding votes. Any feedback from
> > committers would be appreciated.
> >
> > Thanks,
> >
> > Tom
> >
> > On 12 October 2017 at 10:09, Edoardo Comar  wrote:
> >
> > > Thanks Tom with the last additions (changes to the protocol) it now
> > > supersedes KIP-170
> > >
> > > +1 non-binding
> > > --
> > >
> > > Edoardo Comar
> > >
> > > IBM Message Hub
> > >
> > > IBM UK Ltd, Hursley Park, SO21 2JN
> > >
> > >
> > >
> > > From:   Tom Bentley 
> > > To: dev@kafka.apache.org
> > > Date:   11/10/2017 09:21
> > > Subject:[VOTE] KIP-201: Rationalising policy interfaces
> > >
> > >
> > >
> > > I would like to start a vote on KIP-201, which proposes to replace the
> > > existing policy interfaces with a single new policy interface that also
> > > extends policy support to cover new and existing APIs in the
> AdminClient.
> > >
> > > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> > > apache.org_confluence_display_KAFKA_KIP-2D201-253A-
> > > 2BRationalising-2BPolicy-2Binterfaces&d=DwIBaQ&c=jf_
> > iaSHvJObTbx-siA1ZOg&r=
> > > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=tE3xo2lmmoCoFZAX60PBT-
> > > J8TBDWcv-tarJyAlgwfJY&s=puFqZ3Ny4Xcdil5A5huwA5WZtS3WZpD9517uJkCgrCk&e=
> > >
> > >
> > > Thanks for your time.
> > >
> > > Tom
> > >
> > >
> > >
> > > Unless stated otherwise above:
> > > IBM United Kingdom Limited - Registered in England and Wales with
> number
> > > 741598.
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> > 3AU
> > >
> >
>


Re: [VOTE] KIP-201: Rationalising policy interfaces

2017-11-07 Thread Stephane Maarek
Hi Tom,

What's the status of this? I was about to create a KIP to implement a
SimpleCreateTopicPolicy
(and Alter, etc...)
These policies would have some basic parameters to check for
replication factor and min insync replicas (mostly) so that end users can
leverage them out of the box. This KIP obviously changes the interface, so
I'd like it to be in before I propose my KIP.

I'll add my +1 to this, and hopefully we get quick progress so I can
propose my KIP.

Finally, have the packages been discussed?
I find it extremely awkward to have the current CreateTopicPolicy part of
the kafka-clients package, and would love to see the next classes you're
implementing appear in core/src/main/scala/kafka/policy or server/policy.
Unless I'm missing something?

Thanks for driving this
Stephane

Kind regards,
Stephane

[image: Simple Machines]

Stephane Maarek | Developer

+61 416 575 980
steph...@simplemachines.com.au
simplemachines.com.au
Level 2, 145 William Street, Sydney NSW 2010

On 25 October 2017 at 19:45, Tom Bentley  wrote:

> It's been two weeks since I started the vote on this KIP and although there
> are two votes so far there are no binding votes. Any feedback from
> committers would be appreciated.
>
> Thanks,
>
> Tom
>
> On 12 October 2017 at 10:09, Edoardo Comar  wrote:
>
> > Thanks Tom with the last additions (changes to the protocol) it now
> > supersedes KIP-170
> >
> > +1 non-binding
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> > From:   Tom Bentley 
> > To: dev@kafka.apache.org
> > Date:   11/10/2017 09:21
> > Subject:[VOTE] KIP-201: Rationalising policy interfaces
> >
> >
> >
> > I would like to start a vote on KIP-201, which proposes to replace the
> > existing policy interfaces with a single new policy interface that also
> > extends policy support to cover new and existing APIs in the AdminClient.
> >
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> > apache.org_confluence_display_KAFKA_KIP-2D201-253A-
> > 2BRationalising-2BPolicy-2Binterfaces&d=DwIBaQ&c=jf_
> iaSHvJObTbx-siA1ZOg&r=
> > EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ&m=tE3xo2lmmoCoFZAX60PBT-
> > J8TBDWcv-tarJyAlgwfJY&s=puFqZ3Ny4Xcdil5A5huwA5WZtS3WZpD9517uJkCgrCk&e=
> >
> >
> > Thanks for your time.
> >
> > Tom
> >
> >
> >
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>
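A sketch of what such an out-of-the-box policy could look like against the
existing CreateTopicPolicy interface. The config key name is an invented
placeholder, and the nullable replicationFactor() accessor matches recent
Kafka versions (it is null when explicit replica assignments are given):

    import java.util.Map;
    import org.apache.kafka.common.errors.PolicyViolationException;
    import org.apache.kafka.server.policy.CreateTopicPolicy;

    // Hypothetical SimpleCreateTopicPolicy: reject topics whose replication
    // factor is below a configured minimum.
    public class SimpleCreateTopicPolicy implements CreateTopicPolicy {
        private short minReplicationFactor = 1;

        @Override
        public void configure(Map<String, ?> configs) {
            Object min = configs.get("simple.policy.min.replication.factor");
            if (min != null) {
                minReplicationFactor = Short.parseShort(min.toString());
            }
        }

        @Override
        public void validate(RequestMetadata request) throws PolicyViolationException {
            Short rf = request.replicationFactor();
            if (rf != null && rf < minReplicationFactor) {
                throw new PolicyViolationException("Replication factor " + rf
                        + " is below the required minimum " + minReplicationFactor);
            }
        }

        @Override
        public void close() {}
    }

The broker would pick such a policy up via the existing
create.topic.policy.class.name configuration.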


Re: [DISCUSS] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2017-11-07 Thread Tom Bentley
Hi again Jorge,

A couple of minor points:

1. ConsumerGroupDescription has the member `name`, but everywhere else
I've seen, the term "group id" is used, so perhaps calling it "id" or
"groupId" would be more consistent.
2. I think you've added ConsumerGroupListing for consistency with
TopicListing. For topics it makes sense because, as well as the name, there
is whether the topic is internal. For consumer groups, though, there is just
the name, and having a separate ConsumerGroupListing seems like it doesn't
add very much and would mostly get in the way when using the API. I would
be interested in what others think about this.

Cheers,

Tom

On 6 November 2017 at 22:16, Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Thanks for the feedback!
>
> @Ted Yu: Links added.
>
> KIP updated. Changes:
>
> * `#listConsumerGroups(ListConsumerGroupsOptions options)` added to the
> API.
> * `DescribeConsumerGroupResult` and `ConsumerGroupDescription` classes
> described.
>
> Cheers,
> Jorge.
>
>
>
>
> El lun., 6 nov. 2017 a las 20:28, Guozhang Wang ()
> escribió:
>
> > Hi Matthias,
> >
> > You meant "list groups" I think?
> >
> > Guozhang
> >
> > On Mon, Nov 6, 2017 at 11:17 AM, Matthias J. Sax 
> > wrote:
> >
> > > The main goal of this KIP is to enable decoupling StreamsResetter from
> > > core module. For this case (ie, using AdminClient within
> > > StreamsResetter) we get the group.id from the user as command line
> > > argument. Thus, I think the KIP is useful without "describe group"
> > > command too.
> > >
> > > I am happy to include "describe group" command in the KIP. Just want to
> > > point out, that there is no reason to insist on it IMHO.
> > >
> > >
> > > -Matthias
> > >
> > > On 11/6/17 7:06 PM, Guozhang Wang wrote:
> > > > A quick question: I think we do not yet have the `list consumer
> > > > groups` functionality as in the old AdminClient. Without it,
> > > > `describe group` given a group id would not be very useful. Could you
> > > > include this as well in your KIP? More specifically, you can look at
> > > > kafka.admin.AdminClient for more details on the APIs.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > > On Mon, Nov 6, 2017 at 7:22 AM, Ted Yu  wrote:
> > > >
> > > >> Please fill out Discussion thread and JIRA fields.
> > > >>
> > > >> Thanks
> > > >>
> > > >> On Mon, Nov 6, 2017 at 2:02 AM, Tom Bentley 
> > > wrote:
> > > >>
> > > >>> Hi Jorge,
> > > >>>
> > > >>> Thanks for the KIP. A few initial comments:
> > > >>>
> > > >>> 1. The AdminClient doesn't have any API like `listConsumerGroups()`
> > > >>> currently, so in general how does a client know the group ids it is
> > > >>> interested in?
> > > >>> 2. Could you fill in the API of DescribeConsumerGroupResult, just so
> > > >>> everyone knows exactly what is being proposed.
> > > >>> 3. Can you describe the ConsumerGroupDescription class?
> > > >>> 4. Probably worth mentioning that this will use
> > > >>> DescribeGroupsRequest/Response, and also enumerating the error codes
> > > >>> that can be returned (or, equivalently, enumerate the exceptions
> > > >>> thrown from the futures obtained from the
> > > >>> DescribeConsumerGroupResult).
> > > >>>
> > > >>> Cheers,
> > > >>>
> > > >>> Tom
> > > >>>
> > > >>> On 6 November 2017 at 08:19, Jorge Esteban Quilcate Otoya <
> > > >>> quilcate.jo...@gmail.com> wrote:
> > > >>>
> > >  Hi everyone,
> > > 
> > >  I would like to start a discussion on KIP-222 [1] based on issue
> > [2].
> > > 
> > >  Looking forward to feedback.
> > > 
> > >  [1]
> > >  https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=74686265
> > >  [2] https://issues.apache.org/jira/browse/KAFKA-6058
> > > 
> > >  Cheers,
> > >  Jorge.
> > > 
> > > >>>
> > > >>
> > > >
> > > >
> > > >
> > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-07 Thread Tom Bentley
Well done Onur.

On 7 November 2017 at 06:52, Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Congratulations Onur!!
> On Tue, 7 Nov 2017 at 06:30, Jaikiran Pai 
> wrote:
>
> > Congratulations Onur!
> >
> > -Jaikiran
> >
> >
> > On 06/11/17 10:54 PM, Jun Rao wrote:
> > > Hi, everyone,
> > >
> > > The PMC of Apache Kafka is pleased to announce a new Kafka committer,
> > > Onur Karaman.
> > >
> > > Onur's most significant work is the improvement of the Kafka
> > > controller, which is the brain of a Kafka cluster. Over time, we have
> > > accumulated quite a few correctness and performance issues in the
> > > controller. There have been attempts to fix controller issues in
> > > isolation, which would make the code base more complicated without a
> > > clear path to solving all of them. Onur is the one who took a holistic
> > > approach: first documenting all known issues, writing down a new
> > > design, coming up with a plan to deliver the changes in phases, and
> > > executing on it. At this point, Onur has completed the two most
> > > important phases: making the controller single-threaded and changing
> > > the controller to use the async ZK API. The former fixed multiple
> > > deadlocks and race conditions. The latter significantly improved
> > > performance when there are many partitions. Experimental results show
> > > that Onur's work reduced the controlled shutdown time by a factor of
> > > 100 and the controller failover time by a factor of 3.
> > >
> > > Congratulations, Onur!
> > >
> > > Thanks,
> > >
> > > Jun (on behalf of the Apache Kafka PMC)
> > >
> >
> >
>


[VOTE] KIP-179: Change ReassignPartitionsCommand to use AdminClient

2017-11-07 Thread Tom Bentley
Hi,

I would like to start a vote on KIP-179, which would add an AdminClient API
for partition reassignment and inter-broker replication throttling.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-179+-+Change+ReassignPartitionsCommand+to+use+AdminClient
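
To give a concrete feel for the scope being voted on, here is a speculative
usage sketch. The method, option, and result names below are hypothetical
placeholders, not the exact signatures proposed in the KIP linked above:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

// Hypothetical sketch -- not the exact API proposed in KIP-179.
Map<TopicPartition, List<Integer>> targetReplicas = new HashMap<>();
// Move partition 0 of "my-topic" onto brokers 1, 2 and 3.
targetReplicas.put(new TopicPartition("my-topic", 0), Arrays.asList(1, 2, 3));

ReassignPartitionsResult result = admin.reassignPartitions(targetReplicas);
result.all().get(); // completes once the reassignment request is accepted

// Inter-broker replication throttling would be set similarly, e.g. via an
// options object carrying a bytes-per-second limit for the moving replicas.
```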

Thanks,

Tom