Fwd: Warning from dev@kafka.apache.org

2019-08-16 Thread Manish G
...why do I get this mail?

-- Forwarded message -
From: 
Date: Sat, Aug 17, 2019 at 10:25 AM
Subject: Warning from dev@kafka.apache.org
To: 


Hi! This is the ezmlm program. I'm managing the
dev@kafka.apache.org mailing list.

I'm working for my owner, who can be reached
at dev-ow...@kafka.apache.org.


Messages to you from the dev mailing list seem to
have been bouncing. I've attached a copy of the first bounce
message I received.

If this message bounces too, I will send you a probe. If the probe bounces,
I will remove your address from the dev mailing list,
without further notice.


I've kept a list of which messages from the dev mailing list have
bounced from your address.

Copies of these messages may be in the archive.
To retrieve a set of messages 123-145 (a maximum of 100 per request),
send a short message to:
   

To receive a subject and author list for the last 100 or so messages,
send a short message to:
   

Here are the message numbers:

   106348
   106424
   106493
   106491
   106380

--- Enclosed is a copy of the bounce message I received.

Return-Path: <>
Received: (qmail 61167 invoked for bounce); 6 Aug 2019 23:28:32 -
Date: 6 Aug 2019 23:28:32 -
From: mailer-dae...@apache.org
To: dev-return-1063...@kafka.apache.org
Subject: failure notice


Build failed in Jenkins: kafka-trunk-jdk11 #761

2019-08-16 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-8601: Add UniformStickyPartitioner and tests (#7199)

--
[...truncated 2.59 MB...]
org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store 

Build failed in Jenkins: kafka-trunk-jdk8 #3858

2019-08-16 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-8600: Use RPC generation for DescribeDelegationTokens protocol

[wangguoz] KAFKA-8802: ConcurrentSkipListMap shows performance regression in 
cache

--
[...truncated 6.53 MB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #760

2019-08-16 Thread Apache Jenkins Server
See 


Changes:

[manikumar] KAFKA-8600: Use RPC generation for DescribeDelegationTokens protocol

[wangguoz] KAFKA-8802: ConcurrentSkipListMap shows performance regression in 
cache

[cmccabe] MINOR: Remove Deprecated Scala Procedure Syntax (#7214)

--
[...truncated 2.60 MB...]
org.apache.kafka.trogdor.common.JsonSerializationTest > 
testDeserializationDoesNotProduceNulls PASSED

org.apache.kafka.trogdor.task.TaskSpecTest > testTaskSpecSerialization STARTED

org.apache.kafka.trogdor.task.TaskSpecTest > testTaskSpecSerialization PASSED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testConfigureClose STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.powermock.reflect.internal.WhiteboxImpl 
(file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.powermock/powermock-reflect/2.0.2/79df0e5792fba38278b90f9e22617f5684313017/powermock-reflect-2.0.2.jar)
 to method java.lang.Object.clone()
WARNING: Please consider reporting this to the maintainers of 
org.powermock.reflect.internal.WhiteboxImpl
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release

org.apache.kafka.tools.PushHttpMetricsReporterTest > testConfigureClose PASSED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testConfigureBadUrl STARTED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testConfigureBadUrl PASSED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testConfigureMissingPeriod 
STARTED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testConfigureMissingPeriod 
PASSED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testNoMetrics STARTED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testNoMetrics PASSED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testClientError STARTED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testClientError PASSED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testServerError STARTED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testServerError PASSED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testMetricValues STARTED

org.apache.kafka.tools.PushHttpMetricsReporterTest > testMetricValues PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records PASSED


Jenkins build is back to normal : kafka-2.3-jdk8 #90

2019-08-16 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-504 - Add new Java Authorizer Interface

2019-08-16 Thread Colin McCabe
+1 (binding)

Thanks, Rajini!

best,
Colin

On Fri, Aug 16, 2019, at 09:52, Rajini Sivaram wrote:
> Hi all,
> 
> I would like to start the vote for KIP-504:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-504+-+Add+new+Java+Authorizer+Interface
> 
> This KIP replaces the Scala Authorizer API with a new Java API similar to
> other pluggable APIs in the broker and addresses known limitations in the
> existing API.
> 
> Thanks for all the feedback!
> 
> Regards,
> 
> Rajini
>


Jenkins build is back to normal : kafka-trunk-jdk11 #759

2019-08-16 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-482: The Kafka Protocol should Support Optional Fields

2019-08-16 Thread Colin McCabe
On Tue, Aug 13, 2019, at 10:01, Jason Gustafson wrote:
> > Right, I was planning on doing exactly that for all the auto-generated
> > RPCs. For the manual RPCs, it would be a lot of work. It’s probably a
> > better use of time to convert the manual ones to auto gen first (with the
> > possible exception of Fetch/Produce, where the ROI may be higher for the
> > manual work)
> 
> Yeah, that makes sense. Maybe we can include the version bump for all RPCs
> in this KIP, but we can implement it lazily as the protocols are converted.

Good idea.

best,
Colin

> 
> -Jason
> 
> On Mon, Aug 12, 2019 at 7:16 PM Colin McCabe  wrote:
> 
> > On Mon, Aug 12, 2019, at 11:22, Jason Gustafson wrote:
> > > Hi Colin,
> > >
> > > Thanks for the KIP! This is a significant improvement. One of my personal
> > > interests in this proposal is solving the compatibility problems we have
> > > with the internal schemas used to define consumer offsets and transaction
> > > metadata. Currently we have to guard schema bumps with the inter-broker
> > > protocol format. Once the format is bumped, there is no way to downgrade.
> > > By fixing this, we can potentially begin using the new schemas before the
> > > IBP is bumped while still allowing downgrade.
> > >
> > > There are a surprising number of other situations we have encountered
> > this
> > > sort of problem. We have hacked around it in special cases by allowing
> > > nullable fields to the end of the schema, but this is not really an
> > > extensible approach. I'm looking forward to having a better option.
> >
> > Yeah, this problem keeps coming up.
> >
> > >
> > > With that said, I have a couple questions on the proposal:
> > >
> > > 1. For each request API, we need one version bump to begin support for
> > > "flexible versions." Until then, we won't have the option of using tagged
> > > fields even if the broker knows how to handle them. Does it make sense to
> > > go ahead and do a universal bump of each request API now so that we'll
> > have
> > > this option going forward?
> >
> > Right, I was planning on doing exactly that for all the auto-generated
> > RPCs. For the manual RPCs, it would be a lot of work. It’s probably a
> > better use of time to convert the manual ones to auto gen first (with the
> > possible exception of Fetch/Produce, where the ROI may be higher for the
> > manual work)
> >
> > > 2. The alternating length/tag header encoding lets us save a byte in the
> > > common case. The downside is that it's a bit more complex to specify. It
> > > also has some extra cost if the length exceeds the tag substantially.
> > > Basically we'd have to pad the tag, right? I think I'm wondering if we
> > > should just bite the bullet and use two varints instead.
> >
> > That’s a fair point. It would be shorter on average, but worse for some
> > exceptional cases. Also, the decoding would be more complex, which might be
> > a good reason to go for just having two varints. Yeah, let’s simplify.
> >
> > Regards,
> > Colin
> >
> > >
> > > Thanks,
> > > Jason
> > >
> > >
> > > On Fri, Aug 9, 2019 at 4:31 PM Colin McCabe  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I've made some updates to this KIP. Specifically, I wanted to avoid
> > > > including escape bytes in the serialization format, since it was too
> > > > complex. Also, I think this is a good opportunity to slim down our
> > > > variable length fields.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Thu, Jul 11, 2019, at 20:52, Colin McCabe wrote:
> > > > > On Tue, Jul 9, 2019, at 15:29, Jose Armando Garcia Sancio wrote:
> > > > > > Thanks Colin for the KIP. For my own edification why are we doing
> > this
> > > > > > "Optional fields can have any type, except for an array of
> > > > structures."?
> > > > > > Why can't we have an array of structures?
> > > > >
> > > > > Optional fields are serialized starting with their total length. This
> > > > > is straightforward to calculate for primitive fields like INT32, (or
> > > > > even an array of INT32), but more difficult to calculate for an array
> > > > > of structures. Basically, we'd have to do a two-pass serialization
> > > > > where we first calculate the lengths of everything, and then write it
> > > > > out.
> > > > >
> > > > > The nice thing about this KIP is that there's nothing in the protocol
> > > > > stopping us from adding support for this feature in the future. We
> > > > > wouldn't have to really change the protocol at all to add support.
> > But
> > > > > we'd have to change a lot of serialization code. Given almost all of
> > > > > our use-cases for optional fields are adding an extra field here or
> > > > > there, it seems reasonable not to support it for right now.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > >
> > > > > > --
> > > > > > -Jose
> > > > > >
> > > > >
> > > >
> > >
> >
>
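
A minimal Java sketch of the two-varint layout settled on above: each tagged
(optional) field is written as an unsigned varint tag, an unsigned varint size,
and then the payload bytes. Names here are illustrative, not Kafka's actual
generated protocol code.

{code:java}
import java.io.ByteArrayOutputStream;

public class TaggedFieldSketch {

    // Standard unsigned-varint encoding: 7 bits per byte, high bit set while
    // more bytes follow.
    static void writeUnsignedVarint(int value, ByteArrayOutputStream out) {
        while ((value & 0xFFFFFF80) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // A tagged field = tag varint + size varint + payload. The size is written
    // first, so the payload length must be known up front; this is why the
    // thread notes that an array of structures would need a two-pass
    // serialization and is left unsupported for now.
    static void writeTaggedField(int tag, byte[] payload, ByteArrayOutputStream out) {
        writeUnsignedVarint(tag, out);
        writeUnsignedVarint(payload.length, out);
        out.write(payload, 0, payload.length);
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeTaggedField(0, new byte[] {0x01, 0x02}, out);
        // 1 byte tag + 1 byte size + 2 bytes payload = 4 bytes
        System.out.println("encoded length: " + out.size());
    }
}
{code}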


Re: [DISCUSS] KIP-482: The Kafka Protocol should Support Optional Fields

2019-08-16 Thread Colin McCabe
On Tue, Aug 13, 2019, at 11:19, David Jacot wrote:
> Hi Colin,
> 
> Thank you for the KIP! Things are well explained! It is a huge improvement
> for the Kafka protocol. I have a few comments on the proposal:
> 
> 1. The interleaved tag/length header sounds like a great optimisation as it
> would be shorter on average. The downside, as
> you already pointed out, is that it makes the decoding and the specs more
> complex. Personally, I would also favour using two
> varints in this particular case to keep things simple.

Hi David,

Thanks for the review.

I changed this to be two separate unsigned varints, as you and Jason suggested.
The extra complexity probably just isn't worth it to save a byte here. Using
two varints also saves space when the tag and the size differ significantly in
length (i.e. it improves the worst-case scenario).

> 
> 2. As discussed, I wonder if it would make sense to extend the KIP to also
> support optional fields in the Record Header. I think
> that it could be interesting to have such a capability for common fields
> across all the requests or responses (e.g. tracing id).

Yeah, I think this is a great idea.  I added a section about updating to a new 
version of the request header and response header for message versions in 
flexibleVersions.  This will give us the ability to add optional stuff to the 
headers when needed in the future.  For things that span all requests, like 
ClientType, TraceId, etc., this will be very useful.

best,
Colin

> 
> Regards,
> David
> 
> 
> 
> On Tue, Aug 13, 2019 at 10:00 AM Jason Gustafson  wrote:
> 
> > > Right, I was planning on doing exactly that for all the auto-generated
> > RPCs. For the manual RPCs, it would be a lot of work. It’s probably a
> > better use of time to convert the manual ones to auto gen first (with the
> > possible exception of Fetch/Produce, where the ROI may be higher for the
> > manual work)
> >
> > Yeah, that makes sense. Maybe we can include the version bump for all RPCs
> > in this KIP, but we can implement it lazily as the protocols are converted.
> >
> > -Jason
> >
> > On Mon, Aug 12, 2019 at 7:16 PM Colin McCabe  wrote:
> >
> > > On Mon, Aug 12, 2019, at 11:22, Jason Gustafson wrote:
> > > > Hi Colin,
> > > >
> > > > Thanks for the KIP! This is a significant improvement. One of my
> > personal
> > > > interests in this proposal is solving the compatibility problems we
> > have
> > > > with the internal schemas used to define consumer offsets and
> > transaction
> > > > metadata. Currently we have to guard schema bumps with the inter-broker
> > > > protocol format. Once the format is bumped, there is no way to
> > downgrade.
> > > > By fixing this, we can potentially begin using the new schemas before
> > the
> > > > IBP is bumped while still allowing downgrade.
> > > >
> > > > There are a surprising number of other situations we have encountered
> > > this
> > > > sort of problem. We have hacked around it in special cases by allowing
> > > > nullable fields to the end of the schema, but this is not really an
> > > > extensible approach. I'm looking forward to having a better option.
> > >
> > > Yeah, this problem keeps coming up.
> > >
> > > >
> > > > With that said, I have a couple questions on the proposal:
> > > >
> > > > 1. For each request API, we need one version bump to begin support for
> > > > "flexible versions." Until then, we won't have the option of using
> > tagged
> > > > fields even if the broker knows how to handle them. Does it make sense
> > to
> > > > go ahead and do a universal bump of each request API now so that we'll
> > > have
> > > > this option going forward?
> > >
> > > Right, I was planning on doing exactly that for all the auto-generated
> > > RPCs. For the manual RPCs, it would be a lot of work. It’s probably a
> > > better use of time to convert the manual ones to auto gen first (with the
> > > possible exception of Fetch/Produce, where the ROI may be higher for the
> > > manual work)
> > >
> > > > 2. The alternating length/tag header encoding lets us save a byte in
> > the
> > > > common case. The downside is that it's a bit more complex to specify.
> > It
> > > > also has some extra cost if the length exceeds the tag substantially.
> > > > Basically we'd have to pad the tag, right? I think I'm wondering if we
> > > > should just bite the bullet and use two varints instead.
> > >
> > > That’s a fair point. It would be shorter on average, but worse for some
> > > exceptional cases. Also, the decoding would be more complex, which might
> > be
> > > a good reason to go for just having two varints. Yeah, let’s simplify.
> > >
> > > Regards,
> > > Colin
> > >
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > > >
> > > > On Fri, Aug 9, 2019 at 4:31 PM Colin McCabe 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I've made some updates to this KIP. Specifically, I wanted to avoid
> > > > > including escape bytes in the serialization format, since it was too
> > 

Re: [DISCUSS] KIP-504 - Add new Java Authorizer Interface

2019-08-16 Thread Rajini Sivaram
Hi Don,

That should be fine. I guess Ranger loads policies from the database
synchronously when the authorizer is configured, similar to
SimpleAclAuthorizer loading from ZooKeeper. Ranger can continue to load
synchronously from `configure()` or `start()` and return an empty map from
`start()`. That would retain the existing behaviour. When the same
database stores policies for all listeners and the policies are not stored
in Kafka, there is no value in making the load asynchronous.

Regards,

Rajini


On Fri, Aug 16, 2019 at 6:43 PM Don Bosco Durai  wrote:

> Hi Rajini
>
> Assuming this doesn't affect custom plugins, I don't have any concerns
> regarding this.
>
> I do have one question regarding:
>
> "For authorizers that don’t store metadata in ZooKeeper, ensure that
> authorizer metadata for each listener is available before starting up the
> listener. This enables different authorization metadata stores for
> different listeners."
>
> Since Ranger uses its own database for storing policies, do you anticipate
> any issues?
>
> Thanks
>
> Bosco
>
>
> On 8/16/19, 6:49 AM, "Rajini Sivaram"  wrote:
>
> Hi all,
>
> I made another change to the KIP. The KIP was originally proposing to
> extend SimpleAclAuthorizer to also implement the new API (in addition
> to
> the existing API). But since we use the new API when available, this
> can
> break custom authorizers that extend this class and override specific
> methods of the old API. To avoid breaking any existing custom
> implementations that extend this class, particularly because it is in
> the
> public package kafka.security.auth, the KIP now proposes to retain the
> old
> authorizer as-is.  SimpleAclAuthorizer will continue to implement the
> old
> API, but will be deprecated. A new authorizer implementation
> kafka.security.authorizer.AclAuthorizer will be added for the new API
> (this
> will not be in the public package).
>
> Please let me know if you have any concerns.
>
> Regards,
>
> Rajini
>
>
> On Fri, Aug 16, 2019 at 8:48 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> wrote:
>
> > Thanks Colin.
> >
> > If there are no other concerns, I will start vote later today. Many
> thanks
> > to every one for the feedback.
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Thu, Aug 15, 2019 at 11:57 PM Colin McCabe 
> wrote:
> >
> >> Thanks, Rajini.  It looks good to me.
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Thu, Aug 15, 2019, at 11:37, Rajini Sivaram wrote:
> >> > Hi Colin,
> >> >
> >> > Thanks for the review. I have updated the KIP to move the
> interfaces for
> >> > request context and server info to the authorizer package. These
> are now
> >> > called AuthorizableRequestContext and AuthorizerServerInfo.
> Endpoint is
> >> now
> >> > a class in org.apache.kafka.common to make it reusable since we
> already
> >> > have multiple implementations of it. I have removed requestName
> from the
> >> > request context interface since authorizers can distinguish
> follower
> >> fetch
> >> > and consumer fetch from the operation being authorized. So 16-bit
> >> request
> >> > type should be sufficient for audit logging.  Also replaced
> AuditFlag
> >> with
> >> > two booleans as you suggested.
> >> >
> >> > Can you take another look and see if the KIP is ready for voting?
> >> >
> >> > Thanks for all your help!
> >> >
> >> > Regards,
> >> >
> >> > Rajini
> >> >
> >> > On Wed, Aug 14, 2019 at 8:59 PM Colin McCabe 
> >> wrote:
> >> >
> >> > > Hi Rajini,
> >> > >
> >> > > I think it would be good to rename KafkaRequestContext to
> something
> >> like
> >> > > AuthorizableRequestContext, and put it in the
> >> > > org.apache.kafka.server.authorizer namespace.  If we put it in
> the
> >> > > org.apache.kafka.common namespace, then it's not really clear
> that
> >> it's
> >> > > part of the Authorizer API.  Since this class is purely an
> interface,
> >> with
> >> > > no concrete implementation of anything, there's nothing common
> to
> >> really
> >> > > reuse in any case.  We definitely don't want someone to
> accidentally
> >> add or
> >> > > remove methods because they think this is just another internal
> class
> >> used
> >> > > for requests.
> >> > >
> >> > > The BrokerInfo class is a nice improvement.  It looks like it
> will be
> >> > > useful for passing in information about the context we're
> running
> >> in.  It
> >> > > would be nice to call this class ServerInfo rather than
> BrokerInfo,
> >> since
> >> > > we will want to run the authorizer on controllers as well as on
> >> brokers,
> >> > > and the controller may run as a separate process post KIP-500.
> I also
> >> > > think that this class 
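
A minimal Java sketch of the synchronous-load pattern discussed in this thread:
block in configure(), then return already-completed futures from start() so no
listener waits on asynchronous metadata loading. The Endpoint and ServerInfo
types below are stand-ins for the KIP-504 interfaces, not the real API.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class SynchronousAuthorizerSketch {

    interface Endpoint { }                                 // stand-in for the KIP's Endpoint
    interface ServerInfo { List<Endpoint> endpoints(); }   // stand-in for AuthorizerServerInfo

    public void configure(Map<String, ?> configs) {
        loadPolicies(); // blocking load, as SimpleAclAuthorizer does from ZooKeeper
    }

    // One future per listener endpoint; completing them all up front retains
    // the existing synchronous start-up behaviour.
    public Map<Endpoint, CompletionStage<Void>> start(ServerInfo serverInfo) {
        Map<Endpoint, CompletionStage<Void>> ready = new HashMap<>();
        for (Endpoint endpoint : serverInfo.endpoints()) {
            ready.put(endpoint, CompletableFuture.completedFuture(null));
        }
        return ready;
    }

    private void loadPolicies() { /* fetch ACLs from the policy store */ }
}
{code}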

Re: [DISCUSS] KIP-504 - Add new Java Authorizer Interface

2019-08-16 Thread Don Bosco Durai
Hi Rajini

Assuming this doesn't affect custom plugins, I don't have any concerns 
regarding this.

I do have one question regarding: 

"For authorizers that don’t store metadata in ZooKeeper, ensure that authorizer 
metadata for each listener is available before starting up the listener. This 
enables different authorization metadata stores for different listeners."

Since Ranger uses its own database for storing policies, do you anticipate any 
issues?

Thanks

Bosco


On 8/16/19, 6:49 AM, "Rajini Sivaram"  wrote:

Hi all,

I made another change to the KIP. The KIP was originally proposing to
extend SimpleAclAuthorizer to also implement the new API (in addition to
the existing API). But since we use the new API when available, this can
break custom authorizers that extend this class and override specific
methods of the old API. To avoid breaking any existing custom
implementations that extend this class, particularly because it is in the
public package kafka.security.auth, the KIP now proposes to retain the old
authorizer as-is.  SimpleAclAuthorizer will continue to implement the old
API, but will be deprecated. A new authorizer implementation
kafka.security.authorizer.AclAuthorizer will be added for the new API (this
will not be in the public package).

Please let me know if you have any concerns.

Regards,

Rajini


On Fri, Aug 16, 2019 at 8:48 AM Rajini Sivaram 
wrote:

> Thanks Colin.
>
> If there are no other concerns, I will start vote later today. Many thanks
> to every one for the feedback.
>
> Regards,
>
> Rajini
>
>
> On Thu, Aug 15, 2019 at 11:57 PM Colin McCabe  wrote:
>
>> Thanks, Rajini.  It looks good to me.
>>
>> best,
>> Colin
>>
>>
>> On Thu, Aug 15, 2019, at 11:37, Rajini Sivaram wrote:
>> > Hi Colin,
>> >
>> > Thanks for the review. I have updated the KIP to move the interfaces 
for
>> > request context and server info to the authorizer package. These are 
now
>> > called AuthorizableRequestContext and AuthorizerServerInfo. Endpoint is
>> now
>> > a class in org.apache.kafka.common to make it reusable since we already
>> > have multiple implementations of it. I have removed requestName from 
the
>> > request context interface since authorizers can distinguish follower
>> fetch
>> > and consumer fetch from the operation being authorized. So 16-bit
>> request
>> > type should be sufficient for audit logging.  Also replaced AuditFlag
>> with
>> > two booleans as you suggested.
>> >
>> > Can you take another look and see if the KIP is ready for voting?
>> >
>> > Thanks for all your help!
>> >
>> > Regards,
>> >
>> > Rajini
>> >
>> > On Wed, Aug 14, 2019 at 8:59 PM Colin McCabe 
>> wrote:
>> >
>> > > Hi Rajini,
>> > >
>> > > I think it would be good to rename KafkaRequestContext to something
>> like
>> > > AuthorizableRequestContext, and put it in the
>> > > org.apache.kafka.server.authorizer namespace.  If we put it in the
>> > > org.apache.kafka.common namespace, then it's not really clear that
>> it's
>> > > part of the Authorizer API.  Since this class is purely an interface,
>> with
>> > > no concrete implementation of anything, there's nothing common to
>> really
>> > > reuse in any case.  We definitely don't want someone to accidentally
>> add or
>> > > remove methods because they think this is just another internal class
>> used
>> > > for requests.
>> > >
>> > > The BrokerInfo class is a nice improvement.  It looks like it will be
>> > > useful for passing in information about the context we're running
>> in.  It
>> > > would be nice to call this class ServerInfo rather than BrokerInfo,
>> since
>> > > we will want to run the authorizer on controllers as well as on
>> brokers,
>> > > and the controller may run as a separate process post KIP-500.  I 
also
>> > > think that this class should be in the
>> org.apache.kafka.server.authorizer
>> > > namespace.  Again, it is an interface, not a concrete implementation,
>> and
>> > > it's an interface that is very specifically for the authorizer.
>> > >
>> > > I agree that we probably don't have enough information preserved for
>> > > requests currently to always know what entity made them.  So we can
>> leave
>> > > that out for now (except in the special case of Fetch).  Perhaps we
>> can add
>> > > this later if it's needed.
>> > >
>> > > I understand the intention behind AuthorizationMode (which is now
>> called
>> > > AuditFlag in the latest revision).  But it still feels complex.  What
>> if we
>> > > just had two booleans in Action: logSuccesses and logFailures?  That
>> seems
  

[VOTE] KIP-504 - Add new Java Authorizer Interface

2019-08-16 Thread Rajini Sivaram
Hi all,

I would like to start the vote for KIP-504:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-504+-+Add+new+Java+Authorizer+Interface

This KIP replaces the Scala Authorizer API with a new Java API similar to
other pluggable APIs in the broker and addresses known limitations in the
existing API.

Thanks for all the feedback!

Regards,

Rajini


[jira] [Created] (KAFKA-8810) Add mechanism to detect topology mismatch between streams instances

2019-08-16 Thread Vinoth Chandar (JIRA)
Vinoth Chandar created KAFKA-8810:
-

 Summary: Add mechanism to detect topology mismatch between streams 
instances
 Key: KAFKA-8810
 URL: https://issues.apache.org/jira/browse/KAFKA-8810
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Vinoth Chandar


Noticed this while reading through the StreamsPartitionAssignor-related code.
If a user accidentally deploys a different topology on one of the instances,
there is no mechanism to detect this and refuse the assignment or take action.
Given that Kafka Streams is designed as an embeddable library, I feel this is
an important scenario to handle. For example, Kafka Streams is embedded into a
web front-end tier, and operators deploy a hot fix for a site issue to a few
instances that are leaking memory; the hot fix accidentally also ships some
topology changes with it.


Please feel free to close the issue if it's a duplicate. (Could not find an
existing ticket for this.)
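
One hypothetical shape such a mechanism could take, sketched in Java:
fingerprint each instance's topology by hashing the string form of
Topology#describe() and compare fingerprints (for example via the assignor's
subscription userdata) before accepting an assignment. Purely illustrative,
not a proposal from the ticket.

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class TopologyFingerprint {

    // Hash the described topology; instances whose fingerprints differ are
    // running mismatched topologies.
    static String fingerprint(Topology topology) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(topology.describe().toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");
        System.out.println(fingerprint(builder.build()));
    }
}
{code}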



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Re: Requesting Contribution Access

2019-08-16 Thread Bill Bejeck
Hi Hanumanth,

You'll need to create an account on Apache Jira first, then give us your
username on this mailing list.

Thanks,
Bill

On Fri, Aug 16, 2019 at 11:14 AM bandi soft  wrote:

> Hi Kafka Dev Team,
>
> I am very new to open-source contributions, and I am really interested in
> working on some of the Kafka features.
> Can you please provide me contributor permissions.
>
> Thanks,
> Hanumanth.
>


Build failed in Jenkins: kafka-trunk-jdk11 #758

2019-08-16 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] MINOR: Fix bugs in handling zero-length ImplicitLinkedHashCollections

--
[...truncated 6.28 MB...]

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldDeleteFromStore STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldDeleteFromStore PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldDeleteIfSerializedValueIsNull STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldDeleteIfSerializedValueIsNull PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnPutIfAbsentNullKey STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnPutIfAbsentNullKey PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnPutNullKey STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnPutNullKey PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullToKey STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullToKey PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldNotThrowNullPointerExceptionOnPutAllNullKey STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldNotThrowNullPointerExceptionOnPutAllNullKey PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnGetNullKey STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnGetNullKey PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullFromKey STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnRangeNullFromKey PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnDeleteNullKey STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldThrowNullPointerExceptionOnDeleteNullKey PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldReturnSameResultsForGetAndRangeWithEqualKeys STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
shouldReturnSameResultsForGetAndRangeWithEqualKeys PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryKeyValueLoggedStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldCompareSegmentIdOnly STARTED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldCompareSegmentIdOnly PASSED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldBeEqualIfIdIsEqual STARTED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldBeEqualIfIdIsEqual PASSED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldDeleteStateDirectoryOnDestroy STARTED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldDeleteStateDirectoryOnDestroy PASSED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldHashOnSegmentIdOnly STARTED

org.apache.kafka.streams.state.internals.TimestampedSegmentTest > 
shouldHashOnSegmentIdOnly PASSED

org.apache.kafka.streams.state.StateSerdesTest > 
shouldThrowForUnknownKeyTypeForBuiltinTypes STARTED

org.apache.kafka.streams.state.StateSerdesTest > 
shouldThrowForUnknownKeyTypeForBuiltinTypes PASSED

org.apache.kafka.streams.state.StateSerdesTest > 

Requesting Contribution Access

2019-08-16 Thread bandi soft
Hi Kafka Dev Team,

I am very new to open-source contributions, and I am really interested in
working on some of the Kafka features.
Can you please provide me contributor permissions.

Thanks,
Hanumanth.


[DISCUSS] KIP-508: Make Suppression State Queriable

2019-08-16 Thread Dongjin Lee
Hi all,

I would like to start a discussion of KIP-508, making suppression state
queriable. Please give it a read when you are free and share your feedback.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable

Thanks,
Dongjin

-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*
github: github.com/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin


Re: [DISCUSS] KIP-504 - Add new Java Authorizer Interface

2019-08-16 Thread Rajini Sivaram
Hi all,

I made another change to the KIP. The KIP was originally proposing to
extend SimpleAclAuthorizer to also implement the new API (in addition to
the existing API). But since we use the new API when available, this can
break custom authorizers that extend this class and override specific
methods of the old API. To avoid breaking any existing custom
implementations that extend this class, particularly because it is in the
public package kafka.security.auth, the KIP now proposes to retain the old
authorizer as-is.  SimpleAclAuthorizer will continue to implement the old
API, but will be deprecated. A new authorizer implementation
kafka.security.authorizer.AclAuthorizer will be added for the new API (this
will not be in the public package).

Please let me know if you have any concerns.

Regards,

Rajini


On Fri, Aug 16, 2019 at 8:48 AM Rajini Sivaram 
wrote:

> Thanks Colin.
>
> If there are no other concerns, I will start vote later today. Many thanks
> to every one for the feedback.
>
> Regards,
>
> Rajini
>
>
> On Thu, Aug 15, 2019 at 11:57 PM Colin McCabe  wrote:
>
>> Thanks, Rajini.  It looks good to me.
>>
>> best,
>> Colin
>>
>>
>> On Thu, Aug 15, 2019, at 11:37, Rajini Sivaram wrote:
>> > Hi Colin,
>> >
>> > Thanks for the review. I have updated the KIP to move the interfaces for
>> > request context and server info to the authorizer package. These are now
>> > called AuthorizableRequestContext and AuthorizerServerInfo. Endpoint is
>> now
>> > a class in org.apache.kafka.common to make it reusable since we already
>> > have multiple implementations of it. I have removed requestName from the
>> > request context interface since authorizers can distinguish follower
>> fetch
>> > and consumer fetch from the operation being authorized. So 16-bit
>> request
>> > type should be sufficient for audit logging.  Also replaced AuditFlag
>> with
>> > two booleans as you suggested.
>> >
>> > Can you take another look and see if the KIP is ready for voting?
>> >
>> > Thanks for all your help!
>> >
>> > Regards,
>> >
>> > Rajini
>> >
>> > On Wed, Aug 14, 2019 at 8:59 PM Colin McCabe 
>> wrote:
>> >
>> > > Hi Rajini,
>> > >
>> > > I think it would be good to rename KafkaRequestContext to something
>> like
>> > > AuthorizableRequestContext, and put it in the
>> > > org.apache.kafka.server.authorizer namespace.  If we put it in the
>> > > org.apache.kafka.common namespace, then it's not really clear that
>> it's
>> > > part of the Authorizer API.  Since this class is purely an interface,
>> with
>> > > no concrete implementation of anything, there's nothing common to
>> really
>> > > reuse in any case.  We definitely don't want someone to accidentally
>> add or
>> > > remove methods because they think this is just another internal class
>> used
>> > > for requests.
>> > >
>> > > The BrokerInfo class is a nice improvement.  It looks like it will be
>> > > useful for passing in information about the context we're running
>> in.  It
>> > > would be nice to call this class ServerInfo rather than BrokerInfo,
>> since
>> > > we will want to run the authorizer on controllers as well as on
>> brokers,
>> > > and the controller may run as a separate process post KIP-500.  I also
>> > > think that this class should be in the
>> org.apache.kafka.server.authorizer
>> > > namespace.  Again, it is an interface, not a concrete implementation,
>> and
>> > > it's an interface that is very specifically for the authorizer.
>> > >
>> > > I agree that we probably don't have enough information preserved for
>> > > requests currently to always know what entity made them.  So we can
>> leave
>> > > that out for now (except in the special case of Fetch).  Perhaps we
>> can add
>> > > this later if it's needed.
>> > >
>> > > I understand the intention behind AuthorizationMode (which is now
>> called
>> > > AuditFlag in the latest revision).  But it still feels complex.  What
>> if we
>> > > just had two booleans in Action: logSuccesses and logFailures?  That
>> seems
>> > > to cover all the cases here.  MANDATORY_AUTHORIZE = true, true.
>> > > OPTIONAL_AUTHORIZE = true, false.  FILTER = true, false.
>> LIST_AUTHORIZED =
>> > > false, false.  Would there be anything lost versus having the enum?
>> > >
>> > > best,
>> > > Colin
>> > >
>> > >
>> > > On Wed, Aug 14, 2019, at 06:29, Mickael Maison wrote:
>> > > > Hi Rajini,
>> > > >
>> > > > Thanks for the KIP!
>> > > > I really like that authorize() will be able to take a batch of
>> > > > requests, this will speed up many implementations!
>> > > >
>> > > > On Tue, Aug 13, 2019 at 5:57 PM Rajini Sivaram <
>> rajinisiva...@gmail.com>
>> > > wrote:
>> > > > >
>> > > > > Thanks David! I have fixed the typo.
>> > > > >
>> > > > > Also made a couple of changes to make the context interfaces more
>> > > generic.
>> > > > > KafkaRequestContext now returns the 16-bit API key as Colin
>> suggested
>> > > as
>> > > > > well as the friendly name used in metrics which are useful in
>> audit
>> > > logs.
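
A sketch of the two-boolean Action shape suggested in the quoted discussion,
with the enum-to-boolean mapping listed there. Field and factory names are
illustrative; the final KIP-504 Action class may differ.

{code:java}
public final class Action {
    private final boolean logSuccesses; // audit-log allowed operations
    private final boolean logFailures;  // audit-log denied operations

    private Action(boolean logSuccesses, boolean logFailures) {
        this.logSuccesses = logSuccesses;
        this.logFailures = logFailures;
    }

    // The four AuditFlag cases collapse to boolean pairs:
    static Action mandatoryAuthorize() { return new Action(true, true); }   // MANDATORY_AUTHORIZE
    static Action optionalAuthorize()  { return new Action(true, false); }  // OPTIONAL_AUTHORIZE
    static Action filter()             { return new Action(true, false); }  // FILTER
    static Action listAuthorized()     { return new Action(false, false); } // LIST_AUTHORIZED

    public boolean logSuccesses() { return logSuccesses; }
    public boolean logFailures()  { return logFailures; }
}
{code}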

[jira] [Created] (KAFKA-8809) Infinite retry if secure cluster tried to be reached from non-secure consumer

2019-08-16 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-8809:


 Summary: Infinite retry if secure cluster tried to be reached from 
non-secure consumer
 Key: KAFKA-8809
 URL: https://issues.apache.org/jira/browse/KAFKA-8809
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.3.0, 2.4.0
Reporter: Gabor Somogyi


In this case, the following happens repeatedly, without any exception being thrown:
{code:java}
19/08/15 04:10:44 INFO AppInfoParser: Kafka version: 2.3.0
19/08/15 04:10:44 INFO AppInfoParser: Kafka commitId: fc1aaa116b661c8a
19/08/15 04:10:44 INFO AppInfoParser: Kafka startTimeMs: 1565867444977
19/08/15 04:10:44 INFO KafkaConsumer: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Subscribed to topic(s): topic-68f2c4c2-71a4-4380-a7c4-6fe0b9eea7ef
19/08/15 04:10:44 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:45 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:45 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:45 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:45 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:46 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:46 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:47 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:47 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
19/08/15 04:10:47 WARN NetworkClient: [Consumer clientId=consumer-1, 
groupId=spark-kafka-source-8f333224-77d8-477c-a401-5bd0fce85d69--1266757633-driver-0]
 Bootstrap broker 127.0.0.1:62995 (id: -1 rack: null) disconnected
19/08/15 04:10:47 INFO Selector: [SocketServer brokerId=0] Failed 
authentication with /127.0.0.1 (Unexpected Kafka request of type METADATA 
during SASL handshake.)
{code}
I've tried to find a timeout or retry count but nothing helped.
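
For context, the loop above is what a client with no SASL configuration
produces against a SASL-secured listener. A consumer configured for the secure
listener connects instead of retrying forever; the sketch below uses
illustrative SASL_PLAINTEXT/PLAIN settings with placeholder credentials, while
the ticket itself asks that the mismatch surface as an error rather than this
silent, infinite retry.

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:62995");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // The client-side settings the failing consumer is missing:
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic"));
        }
    }
}
{code}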




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Build failed in Jenkins: kafka-trunk-jdk8 #3857

2019-08-16 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-8592; Fix for resolving variables for dynamic config as 
per

--
[...truncated 2.59 MB...]

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > longToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED


[jira] [Created] (KAFKA-8808) Kafka Inconsistent Retention Period across partitions

2019-08-16 Thread Prashant (JIRA)
Prashant created KAFKA-8808:
---

 Summary: Kafka Inconsistent Retention Period across partitions
 Key: KAFKA-8808
 URL: https://issues.apache.org/jira/browse/KAFKA-8808
 Project: Kafka
  Issue Type: Bug
  Components: log, log cleaner
Affects Versions: 1.0.0
Reporter: Prashant


Our topic is created with a retention period of 3 days. The topic has four 
partitions. The broker-level default is 12 hours. 

Some partitions' segments get deleted even before 3 days. Logs show that the 
segment is marked for deletion because it exceeded *43200000ms* = 12 hours. 

 

"INFO Found deletable segments with base offsets [43275] due to retention time 
*4320ms* breach (kafka.log.Log)"

 

This does not happen for all partitions. After a full cluster bounce, the 
partitions facing this issue change. 

 

*Topic config :* 

Topic:TOPICNAME PartitionCount:4 ReplicationFactor:2 
Configs:retention.ms=259200000,segment.ms=43200000
 Topic: TOPICNAME Partition: 0 Leader: 1 Replicas: 1,8 Isr: 1,8
 Topic: TOPICNAME Partition: 1 Leader: 2 Replicas: 2,9 Isr: 9,2
 Topic: TOPICNAME Partition: 2 Leader: 3 Replicas: 3,1 Isr: 1,3
 Topic: TOPICNAME Partition: 3 Leader: 4 Replicas: 4,2 Isr: 4,2

 

*Broker config :*

log.retention.ms=43200000
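
One way to confirm which retention value each broker is actually applying is
to read back the topic and broker configs through the AdminClient. A minimal
sketch, assuming a reachable bootstrap server at localhost:9092 and the topic
name TOPICNAME (the class and package names here are illustrative only):

package example;

import java.util.Arrays;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "TOPICNAME");
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Map<ConfigResource, Config> configs =
                    admin.describeConfigs(Arrays.asList(topic, broker)).all().get();
            // Expected: retention.ms=259200000 (3 days) on the topic, while the
            // broker default log.retention.ms is 43200000 (12 hours).
            System.out.println(configs.get(topic).get("retention.ms"));
            System.out.println(configs.get(broker).get("log.retention.ms"));
        }
    }
}

If the topic override is present on every broker yet some partitions are still
deleted after 12 hours, that would suggest the broker default is being applied
to those partitions instead of the topic-level override.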

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Build failed in Jenkins: kafka-trunk-jdk8 #3856

2019-08-16 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Use max retries for consumer group tests to avoid flakiness

[cmccabe] MINOR: Fix bugs in handling zero-length ImplicitLinkedHashCollections

--
[...truncated 6.52 MB...]
kafka.server.OffsetsForLeaderEpochRequestTest > 
testOffsetsForLeaderEpochErrorCodes PASSED

kafka.server.OffsetsForLeaderEpochRequestTest > testCurrentEpochValidation 
STARTED

kafka.server.OffsetsForLeaderEpochRequestTest > testCurrentEpochValidation 
PASSED

kafka.network.DynamicConnectionQuotaTest > testDynamicConnectionQuota STARTED

kafka.network.DynamicConnectionQuotaTest > testDynamicConnectionQuota PASSED

kafka.network.DynamicConnectionQuotaTest > testDynamicListenerConnectionQuota 
STARTED

kafka.network.DynamicConnectionQuotaTest > testDynamicListenerConnectionQuota 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testControlPlaneRequest STARTED

kafka.network.SocketServerTest > testControlPlaneRequest PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > testConnectionRateLimit STARTED

kafka.network.SocketServerTest > testConnectionRateLimit PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED

kafka.network.SocketServerTest > processDisconnectedException PASSED

kafka.network.SocketServerTest > sendCancelledKeyException STARTED

kafka.network.SocketServerTest > sendCancelledKeyException PASSED

kafka.network.SocketServerTest > processCompletedReceiveException STARTED

kafka.network.SocketServerTest > processCompletedReceiveException PASSED

kafka.network.SocketServerTest 

Re: [VOTE] KIP-396: Add Commit/List Offsets Operations to AdminClient

2019-08-16 Thread Mickael Maison
Closing this vote as we are well over 72 hours :)

The vote has passed with +6 binding votes (Guozhang, Vahid, Bill,
Harsha, Colin and Jason) and +7 non-binding votes (Gabor, Jungtaek,
Ryanne, Andrew, Eno, Edo and Patrik).

Thanks to everyone who reviewed and helped improve this proposal.
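
For reference, a sketch of how the operations added by KIP-396 are meant to
be used, assuming the method names discussed in this thread (listOffsets and
alterConsumerGroupOffsets); the exact signatures in a released version may
differ:

import java.util.Collections;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class Kip396Sketch {
    // Reset a group's committed offset for one partition to the log end offset.
    static void resetToLatest(Admin admin, String group, TopicPartition tp)
            throws Exception {
        // listOffsets resolves an OffsetSpec (earliest/latest/for-timestamp)
        // per partition.
        long end = admin.listOffsets(Collections.singletonMap(tp, OffsetSpec.latest()))
                .partitionResult(tp).get().offset();
        // alterConsumerGroupOffsets commits the given offsets on behalf of the group.
        admin.alterConsumerGroupOffsets(group,
                Collections.singletonMap(tp, new OffsetAndMetadata(end))).all().get();
    }
}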

On Thu, Aug 15, 2019 at 9:18 PM Guozhang Wang  wrote:
>
> +1 (binding).
>
> Thanks!
>
>
> Guozhang
>
> On Wed, Aug 14, 2019 at 5:18 PM Vahid Hashemian 
> wrote:
>
> > +1 (binding)
> >
> > Thanks Mickael for the suggestion of simplifying offset
> > retrieval/alteration operations.
> >
> > --Vahid
> >
> > On Wed, Aug 14, 2019 at 4:42 PM Bill Bejeck  wrote:
> >
> > > Thanks for the KIP Mickael, looks very useful.
> > > +1 (binding)
> > >
> > > -Bill
> > >
> > > On Wed, Aug 14, 2019 at 6:14 PM Harsha Chintalapani 
> > > wrote:
> > >
> > > > Thanks for the KIP Mickael. LGTM +1 (binding).
> > > > -Harsha
> > > >
> > > >
> > > > On Wed, Aug 14, 2019 at 1:10 PM, Colin McCabe 
> > > wrote:
> > > >
> > > > > Thanks, Mickael. +1 (binding)
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > On Wed, Aug 14, 2019, at 12:07, Gabor Somogyi wrote:
> > > > >
> > > > > +1 (non-binding)
> > > > > I've read it through in depth and as Jungtaek said Spark can make
> > good
> > > > use
> > > > > of it.
> > > > >
> > > > > On Wed, 14 Aug 2019, 17:06 Jungtaek Lim,  wrote:
> > > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > I found it very useful for Spark's case. (Discussion on KIP-505
> > > described
> > > > > it.)
> > > > >
> > > > > Thanks for driving the effort!
> > > > >
> > > > > On Wed, Aug 14, 2019 at 8:49 PM Mickael Maison wrote:
> > > > >
> > > > > Hi Guozhang,
> > > > >
> > > > > Thanks for taking a look.
> > > > >
> > > > > 1. Right, I updated the titles of the code blocks
> > > > >
> > > > > 2. Yes that's a good idea. I've updated the KIP
> > > > >
> > > > > Thank you
> > > > >
> > > > > On Wed, Aug 14, 2019 at 11:05 AM Mickael Maison
> > > > >  wrote:
> > > > >
> > > > > Hi Colin,
> > > > >
> > > > > Thanks for raising these 2 valid points. I've updated the KIP
> > > > >
> > > > > accordingly.
> > > > >
> > > > > On Tue, Aug 13, 2019 at 9:50 PM Guozhang Wang 
> > > > >
> > > > > wrote:
> > > > >
> > > > > Hi Mickael,
> > > > >
> > > > > Thanks for the KIP!
> > > > >
> > > > > Just some minor comments.
> > > > >
> > > > > 1. Java class names are stale, e.g. "CommitOffsetsOptions.java
> > > > > "
> > > > >
> > > > > should
> > > > >
> > > > > be
> > > > >
> > > > > "AlterOffsetsOptions".
> > > > >
> > > > > 2. I'd suggest we change the future structure of "AlterOffsetsResult"
> > > > >
> > > > > to
> > > > >
> > > > > *KafkaFuture<Map<TopicPartition, KafkaFuture<Void>>>*
> > > > >
> > > > > This is because we will have a hierarchy of two-layers of errors
> > > > >
> > > > > since
> > > > >
> > > > > we
> > > > >
> > > > > need to find out the group coordinator first and then issue the
> > > > >
> > > > > commit
> > > > >
> > > > > offset request (see e.g. the ListConsumerGroupOffsetsResult which
> > > > >
> > > > > exclude
> > > > >
> > > > > partitions that have errors, or the DeleteMembersResult as part of
> > > > >
> > > > > KIP-345).
> > > > >
> > > > > If the discover-coordinator returns a non-retriable error, we would set
> > > > >
> > > > > it
> > > > >
> > > > > on
> > > > >
> > > > > the first layer of the KafkaFuture, and the per-partition error would
> > > > >
> > > > > be
> > > > >
> > > > > set on the second layer of the KafkaFuture.
> > > > >
> > > > > Guozhang
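
To make the two-layer shape concrete, a hypothetical sketch of how a caller
would consume such a result (the type shape follows Guozhang's comment; the
KIP may settle on a different structure):

import java.util.Map;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.TopicPartition;

public class TwoLayerResultSketch {
    static void handle(KafkaFuture<Map<TopicPartition, KafkaFuture<Void>>> result)
            throws Exception {
        // First layer: fails if the group coordinator lookup itself failed.
        Map<TopicPartition, KafkaFuture<Void>> perPartition = result.get();
        for (Map.Entry<TopicPartition, KafkaFuture<Void>> entry : perPartition.entrySet()) {
            try {
                // Second layer: fails only for this partition's commit.
                entry.getValue().get();
            } catch (ExecutionException e) {
                System.err.println(entry.getKey() + " failed: " + e.getCause());
            }
        }
    }
}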
> > > > >
> > > > > On Tue, Aug 13, 2019 at 9:36 AM Colin McCabe 
> > > > >
> > > > > wrote:
> > > > >
> > > > > Hi Mickael,
> > > > >
> > > > > Considering that KIP-496, which adds a way of deleting consumer
> > > > >
> > > > > offsets
> > > > >
> > > > > from AdminClient, looks like it is going to get in, this seems like
> > > > > functionality we should definitely have.
> > > > >
> > > > > For alterConsumerGroupOffsets, is the intention to ignore
> > > > >
> > > > > partitions
> > > > >
> > > > > that
> > > > >
> > > > > are not specified in the map? If so, we should specify that in the
> > > > >
> > > > > JavaDoc.
> > > > >
> > > > > isolationLevel seems like it should be an enum rather than a
> > > > >
> > > > > string. The
> > > > >
> > > > > existing enum is in org.apache.kafka.common.requests, so we should
> > > > >
> > > > > probably
> > > > >
> > > > > create a new one which is public in org.apache.kafka.clients.admin.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > On Mon, Mar 25, 2019, at 06:10, Mickael Maison wrote:
> > > > >
> > > > > Bumping this thread once again
> > > > >
> > > > > Ismael, have I answered your questions?
> > > > > While this has received a few non-binding +1s, no committers have
> > voted
> > > > > yet. If you have concerns or questions, please let me know.
> > > > >
> > > > > Thanks
> > > > >
> > > > > On Mon, Feb 11, 2019 at 11:51 AM Mickael Maison
> > > > >  wrote:
> > > > >
> > > > > Bumping this 

Re: [VOTE] KIP-496: Administrative API to delete consumer offsets

2019-08-16 Thread Mickael Maison
+1 (non-binding)
Thanks!

On Thu, Aug 15, 2019 at 11:53 PM Colin McCabe  wrote:
>
> On Thu, Aug 15, 2019, at 11:47, Jason Gustafson wrote:
> > Hey Colin, I think deleting all offsets is equivalent to deleting the
> > group, which can be done with the `deleteConsumerGroups` api. I debated
> > whether there should be a way to delete partitions for all unsubscribed
> > topics, but I decided to start with a simple API.
>
> That's a fair point-- deleting the group covers the main use-case for 
> deleting all offsets.  So we might as well keep it simple for now.
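>
> A sketch of both paths, using the deleteConsumerGroupOffsets operation added
> by KIP-496 together with the pre-existing deleteConsumerGroups (topic and
> group names below are placeholders; exact result types in the released
> AdminClient may differ slightly):
>
> import java.util.Collections;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.common.TopicPartition;
>
> public class Kip496Sketch {
>     static void cleanUp(Admin admin, String group) throws Exception {
>         // Delete the committed offset for one partition the group no
>         // longer consumes.
>         admin.deleteConsumerGroupOffsets(group,
>                 Collections.singleton(new TopicPartition("retired-topic", 0)))
>                 .all().get();
>
>         // Deleting all offsets is equivalent to deleting the group itself.
>         admin.deleteConsumerGroups(Collections.singleton(group)).all().get();
>     }
> }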
>
> cheers,
> Colin
>
> >
> > I'm going to close this vote. The final result is +3 with myself, Guozhang,
> > and Colin voting.
> >
> > -Jason
> >
> > On Tue, Aug 13, 2019 at 9:21 AM Colin McCabe  wrote:
> >
> > > Hi Jason,
> > >
> > > Thanks for the KIP.
> > >
> > > Is there ever a desire to delete all the offsets for a given group?
> > > Should the protocol and tools support this?
> > >
> > > +1 (binding)
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Mon, Aug 12, 2019, at 10:57, Guozhang Wang wrote:
> > > > +1 (binding).
> > > >
> > > > Thanks Jason!
> > > >
> > > > On Wed, Aug 7, 2019 at 11:18 AM Jason Gustafson 
> > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I'd like to start a vote on KIP-496:
> > > > >
> > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-496%3A+Administrative+API+to+delete+consumer+offsets
> > > > > .
> > > > > +1
> > > > > from me of course.
> > > > >
> > > > > -Jason
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >


Re: [DISCUSS] KIP-504 - Add new Java Authorizer Interface

2019-08-16 Thread Rajini Sivaram
Thanks Colin.

If there are no other concerns, I will start the vote later today. Many
thanks to everyone for the feedback.

Regards,

Rajini


On Thu, Aug 15, 2019 at 11:57 PM Colin McCabe  wrote:

> Thanks, Rajini.  It looks good to me.
>
> best,
> Colin
>
>
> On Thu, Aug 15, 2019, at 11:37, Rajini Sivaram wrote:
> > Hi Colin,
> >
> > Thanks for the review. I have updated the KIP to move the interfaces for
> > request context and server info to the authorizer package. These are now
> > called AuthorizableRequestContext and AuthorizerServerInfo. Endpoint is
> now
> > a class in org.apache.kafka.common to make it reusable since we already
> > have multiple implementations of it. I have removed requestName from the
> > request context interface since authorizers can distinguish follower
> fetch
> > and consumer fetch from the operation being authorized. So 16-bit request
> > type should be sufficient for audit logging.  Also replaced AuditFlag
> with
> > two booleans as you suggested.
> >
> > Can you take another look and see if the KIP is ready for voting?
> >
> > Thanks for all your help!
> >
> > Regards,
> >
> > Rajini
> >
> > On Wed, Aug 14, 2019 at 8:59 PM Colin McCabe  wrote:
> >
> > > Hi Rajini,
> > >
> > > I think it would be good to rename KafkaRequestContext to something
> like
> > > AuthorizableRequestContext, and put it in the
> > > org.apache.kafka.server.authorizer namespace.  If we put it in the
> > > org.apache.kafka.common namespace, then it's not really clear that it's
> > > part of the Authorizer API.  Since this class is purely an interface,
> with
> > > no concrete implementation of anything, there's nothing common to
> really
> > > reuse in any case.  We definitely don't want someone to accidentally
> add or
> > > remove methods because they think this is just another internal class
> used
> > > for requests.
> > >
> > > The BrokerInfo class is a nice improvement.  It looks like it will be
> > > useful for passing in information about the context we're running in.
> It
> > > would be nice to call this class ServerInfo rather than BrokerInfo,
> since
> > > we will want to run the authorizer on controllers as well as on
> brokers,
> > > and the controller may run as a separate process post KIP-500.  I also
> > > think that this class should be in the
> org.apache.kafka.server.authorizer
> > > namespace.  Again, it is an interface, not a concrete implementation,
> and
> > > it's an interface that is very specifically for the authorizer.
> > >
> > > I agree that we probably don't have enough information preserved for
> > > requests currently to always know what entity made them.  So we can
> leave
> > > that out for now (except in the special case of Fetch).  Perhaps we
> can add
> > > this later if it's needed.
> > >
> > > I understand the intention behind AuthorizationMode (which is now
> called
> > > AuditFlag in the latest revision).  But it still feels complex.  What
> if we
> > > just had two booleans in Action: logSuccesses and logFailures?  That
> seems
> > > to cover all the cases here.  MANDATORY_AUTHORIZE = true, true.
> > > OPTIONAL_AUTHORIZE = true, false.  FILTER = true, false.
> LIST_AUTHORIZED =
> > > false, false.  Would there be anything lost versus having the enum?
> > >
> > > best,
> > > Colin
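> > >
> > > A hypothetical sketch of what that two-boolean suggestion could look
> > > like on Action (field names here are illustrative; the KIP defines the
> > > final interface):
> > >
> > > package org.apache.kafka.server.authorizer;
> > >
> > > import org.apache.kafka.common.acl.AclOperation;
> > > import org.apache.kafka.common.resource.ResourcePattern;
> > >
> > > // Sketch only: replaces the AuditFlag enum with two booleans.
> > > // MANDATORY_AUTHORIZE = (true, true); OPTIONAL_AUTHORIZE and
> > > // FILTER = (true, false); LIST_AUTHORIZED = (false, false).
> > > public class Action {
> > >     private final AclOperation operation;
> > >     private final ResourcePattern resourcePattern;
> > >     private final boolean logIfAllowed; // audit-log successful authorizations
> > >     private final boolean logIfDenied;  // audit-log denied authorizations
> > >
> > >     public Action(AclOperation operation, ResourcePattern resourcePattern,
> > >                   boolean logIfAllowed, boolean logIfDenied) {
> > >         this.operation = operation;
> > >         this.resourcePattern = resourcePattern;
> > >         this.logIfAllowed = logIfAllowed;
> > >         this.logIfDenied = logIfDenied;
> > >     }
> > >
> > >     public AclOperation operation() { return operation; }
> > >     public ResourcePattern resourcePattern() { return resourcePattern; }
> > >     public boolean logIfAllowed() { return logIfAllowed; }
> > >     public boolean logIfDenied() { return logIfDenied; }
> > > }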
> > >
> > >
> > > On Wed, Aug 14, 2019, at 06:29, Mickael Maison wrote:
> > > > Hi Rajini,
> > > >
> > > > Thanks for the KIP!
> > > > I really like that authorize() will be able to take a batch of
> > > > requests, this will speed up many implementations!
> > > >
> > > > On Tue, Aug 13, 2019 at 5:57 PM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > > > >
> > > > > Thanks David! I have fixed the typo.
> > > > >
> > > > > Also made a couple of changes to make the context interfaces more
> > > generic.
> > > > > KafkaRequestContext now returns the 16-bit API key as Colin
> suggested
> > > as
> > > > > well as the friendly name used in metrics which are useful in audit
> > > logs.
> > > > > `Authorizer#start` is now provided a generic `BrokerInfo` interface
> > > that
> > > > > gives cluster id, broker id and endpoint information. The generic
> > > interface
> > > > > can potentially be used in other broker plugins in future and
> provides
> > > > > dynamically generated configs like broker id and ports which are
> > > currently
> > > > > not available to plugins unless these configs are statically
> > > configured.
> > > > > Please let me know if there are any concerns.
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > > On Tue, Aug 13, 2019 at 4:30 PM David Jacot 
> > > wrote:
> > > > >
> > > > > > Hi Rajini,
> > > > > >
> > > > > > Thank you for the update! It looks good to me. There is a typo
> in the
> > > > > > `AuditFlag` enum: `MANDATORY_AUTHOEIZE` -> `MANDATORY_AUTHORIZE`.
> > > > > >
> > > > > > Regards,
> > > > > > David
> > > > > >
> > > > > > On Mon, Aug 12, 2019 at 2:54 PM Rajini Sivaram <
> > > rajinisiva...@gmail.com>
> > > > > > 

[jira] [Resolved] (KAFKA-8592) Broker Dynamic Configuration fails to resolve variables as per KIP-421

2019-08-16 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8592.
---
Resolution: Fixed

> Broker Dynamic Configuration fails to resolve variables as per KIP-421 
> ---
>
> Key: KAFKA-8592
> URL: https://issues.apache.org/jira/browse/KAFKA-8592
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 2.3.0
>Reporter: TEJAL ADSUL
>Priority: Major
> Fix For: 2.3.1
>
>
> When adding/altering new configs as dynamic configs, they do not go through 
> KafkaConfig.
> e.g.: bin/kafka-configs --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 0 --alter --add-config log.cleaner.threads=2
> Here the bootstrap-server localhost is parsed through KafkaConfig to 
> create the AdminClient, but log.cleaner.threads is not.
> As the configs are not parsed using KafkaConfig, if we pass variables in 
> configs they are not resolved at run time.
> To resolve the variables in the alterConfigs/addConfigs flow, we need to 
> parse the new configs using KafkaConfig before they are applied.
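
For illustration, a sketch of submitting a dynamic broker config whose value
is a KIP-421 variable reference (the file path and key below are
hypothetical); the bug is that such a value is stored without being resolved
through KafkaConfig:

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            // KIP-421 style indirection: ${providerName:path:key}
            ConfigEntry entry = new ConfigEntry("log.cleaner.threads",
                    "${file:/opt/kafka/config.properties:log.cleaner.threads}");
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Collections.singletonMap(broker, Collections.singletonList(
                            new AlterConfigOp(entry, AlterConfigOp.OpType.SET)));
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}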



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-16 Thread Maulin Vasavada
Hi Harsha/Colin

I did the sample with a custom Provider for a TrustManagerFactory and tried
using the ssl.provider Kafka config AND the way KIP-492 is suggesting (by
adding the Provider programmatically instead of relying on
ssl.provider+java.security). The sample below is followed by my detailed
findings. I'd appreciate it if you could go through it carefully and see if
you see my point.

package providertest;

import java.security.Provider;

public class MyProvider extends Provider {

    private static final String name = "MyProvider";
    private static final double version = 1.0d;
    private static final String info = "Maulin's SSL Provider v" + version;

    public MyProvider() {
        super(name, version, info);
        // Register our TrustManagerFactory implementation under the standard
        // PKIX algorithm, so it overrides the default JSSE implementation --
        // but only if this provider is consulted first.
        this.put("TrustManagerFactory.PKIX",
                 "providertest.MyTrustManagerFactory");
    }
}
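
For completeness, the class registered above must extend
TrustManagerFactorySpi. A minimal hypothetical sketch of what
providertest.MyTrustManagerFactory might look like (the actual custom-loading
logic is elided):

package providertest;

import java.security.KeyStore;
import javax.net.ssl.ManagerFactoryParameters;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactorySpi;

public class MyTrustManagerFactory extends TrustManagerFactorySpi {

    @Override
    protected void engineInit(KeyStore ks) {
        // Load trust material from the custom source here, ignoring or
        // augmenting the KeyStore that JSSE passes in.
    }

    @Override
    protected void engineInit(ManagerFactoryParameters spec) {
        // Optionally support algorithm-specific initialization parameters.
    }

    @Override
    protected TrustManager[] engineGetTrustManagers() {
        // Return X509TrustManager instances backed by the custom trust source.
        return new TrustManager[0]; // placeholder
    }
}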



*Details:*

KIP-492 documents that it will use Security.addProvider(), assuming that this
adds the provider at position '0', which is not a correct assumption.
addProvider()'s documentation says it adds the provider at the last available
position. You may want to correct that to say
Security.insertProviderAt(provider, 1).
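
The difference is easy to demonstrate with the standard java.security API (a
small sketch reusing the MyProvider class above):

package providertest;

import java.security.Security;

public class ProviderOrderingSketch {
    public static void main(String[] args) {
        MyProvider mine = new MyProvider();

        // addProvider() APPENDS at the end of the provider list; the default
        // PKIX TrustManagerFactory from an earlier provider still wins.
        Security.addProvider(mine);

        // insertProviderAt() with position 1 (positions are 1-based) puts the
        // provider first, so it is consulted first for TrustManagerFactory.PKIX.
        Security.removeProvider(mine.getName());
        Security.insertProviderAt(mine, 1);
    }
}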

Now coming back to our specific discussion,

1. The SPIFFE example uses a custom algorithm - spiffe. Hence, when you add
that provider to the provider list via Security.addProvider(), the position
where it gets added doesn't matter (even if you don't end up adding it as the
first entry), since it is the ONLY provider for that SPIFFE-specific
algorithm.

We do *not* have a custom algorithm for our key/trust manager factories. That
means we have to use X509, PKIX, etc. "Standard Algorithms" ((
https://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html))
in our provider to override the TrustManagerFactory (see my sample code),
KeyManagerFactory and KeyManager. This creates another challenge, described
in the next point.

2. In order to make our Provider for loading a custom TrustStore work, we
have to add the provider as the 'first' in the list, since there are others
with the same algorithm.

However, the programmatic way of adding a provider
(Security.insertProviderAt()) is *not* deterministic for ordering, since
different code can call the method for a different provider, and depending
upon the order of the calls our provider can be first or pushed down the
list. This can very well happen in any client application using Kafka. It is
especially problematic when you want to guarantee the order for a Provider
implementing "Standard Algorithms".

If we add our provider in the java.security file, that definitely guarantees
the order (unless somebody calls removeProvider(), which is less likely). But
adding our provider in the java.security file would defeat the purpose of
KIP-492.

In short - Apache Kafka must not rely on a non-deterministic mechanism for
Provider ordering.

3. If we just use the existing ssl.provider Kafka configuration, then our
provider will be used in the SSLContext.getInstance(protocol, provider) call
in SslFactory.java, and if our provider does not have an implementation for
SSLContext.TLS/TLSv1.1/TLSv1.2 etc., it breaks (we tested it). Example: in
the MyProvider sample above, you can see that I didn't add SSLContext.TLSv1
as a "Service+Algorithm", and that didn't work for me. The SPIFFE provider
doesn't have this challenge, since you are planning to bypass ssl.provider,
as you mention in KIP-492.
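
A sketch of that failure mode, using only the standard JSSE API (this is
effectively what SslFactory does when ssl.provider is set):

package providertest;

import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;

public class SslContextLookupSketch {
    public static void main(String[] args) {
        try {
            // Fails: MyProvider registers only TrustManagerFactory.PKIX and
            // offers no SSLContext.TLSv1.2 service.
            SSLContext ctx = SSLContext.getInstance("TLSv1.2", new MyProvider());
        } catch (NoSuchAlgorithmException e) {
            e.printStackTrace();
        }
    }
}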


*Overall summary,*

1. Either provider-based mechanism - a) the existing ssl.provider or b)
KIP-492 - does not work for loading key/trust stores using "Standard
Algorithms".

2. The approach suggested in our KIP-486 works without any issue, and it is
*not* specific to our context.

3. Based on the above, we feel KIP-492 and KIP-486 are complementary changes,
not contradictory or redundant.

If you want, we can do a joint session to walk through the sample I have and
the various experiments I did. I would encourage you to do a similar exercise
by writing a Provider for a "Standard Algorithm" for the TrustManagerFactory
(like our needs) and see what you find, since only writing samples can bring
out the complexity/challenges we face.

Thanks
Maulin

On Wed, Aug 14, 2019 at 11:15 PM Maulin Vasavada 
wrote:

> Just to update - still working on it. Get to work only on and off on it :(
>
> On Thu, Aug 8, 2019 at 4:05 PM Maulin Vasavada 
> wrote:
>
>> Hi Harsha
>>
>> Let me try to write samples and will let you know.
>>
>> Thanks
>> Maulin
>>
>> On Thu, Aug 8, 2019 at 4:00 PM Harsha Ch  wrote:
>>
>>> Hi Maulin,
>>> With Java security providers, things can be as custom as you would like
>>> them to be. If you only want to implement a custom way of loading the
>>> keystore and truststore, and not implement any protocol/encryption
>>> handling, you can leave those parts empty - no need to implement them.
>>> Have you looked into the links I pasted before?
>>>
>>> https://github.com/spiffe/spiffe-example/blob/master/java-spiffe/spiffe-security-provider/src/main/java/spiffe/api/provider/SpiffeKeyStore.java
>>>
>>>