Build failed in Jenkins: kafka-2.1-jdk8 #104

2019-01-10 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7741: Streams exclude javax dependency (#6121)

--
[...truncated 916.23 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnWildcardResource STARTED

kafka.sec

Re: [VOTE] KIP-414: Expose Embedded ClientIds in Kafka Streams

2019-01-10 Thread John Roesler
Hi Guozhang,

It sounds reasonable to me. I'm +1 (nonbinding).

-John

On Tue, Jan 8, 2019 at 8:51 PM Guozhang Wang  wrote:

> Hello folks,
>
> I'd like to start a voting process for the following KIP:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-414%3A+Expose+Embedded+ClientIds+in+Kafka+Streams
>
> It is a pretty straightforward and small addition to Streams' public
> ThreadMetadata interface, as an outcome of the discussion on KIP-345. Hence
> I think we can skip the DISCUSS thread and move on to voting directly. If
> people have any questions about the context of KIP-414 or KIP-345, please
> feel free to read these two wiki pages and let me know.
>
>
> -- Guozhang
>
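
(For illustration only: once KIP-414 lands, reading the embedded client IDs from the ThreadMetadata objects might look like the sketch below. The accessor names consumerClientId(), restoreConsumerClientId(), producerClientIds() and adminClientId() are assumptions based on the KIP page, not a released API.)

{code:java}
// Hedged sketch: iterate the per-thread metadata exposed by KafkaStreams and
// print the embedded client IDs. The accessors marked "assumed" come from the
// KIP-414 proposal and are not part of the released 2.1 API.
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.ThreadMetadata;

public class ClientIdLogger {
    public static void printClientIds(KafkaStreams streams) {
        for (ThreadMetadata thread : streams.localThreadsMetadata()) {
            System.out.println("thread: " + thread.threadName());
            System.out.println("  consumer: " + thread.consumerClientId());                 // assumed accessor
            System.out.println("  restore consumer: " + thread.restoreConsumerClientId());  // assumed accessor
            System.out.println("  producers: " + thread.producerClientIds());               // assumed accessor
            System.out.println("  admin: " + thread.adminClientId());                       // assumed accessor
        }
    }
}
{code}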


Jenkins build is back to normal : kafka-trunk-jdk8 #3304

2019-01-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.1-jdk8 #103

2019-01-10 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Remove throwing exception if not found from describe topics

--
[...truncated 914.97 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnWildcardResource STAR

[jira] [Reopened] (KAFKA-7741) Bad dependency via SBT

2019-01-10 Thread John Roesler (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler reopened KAFKA-7741:
-

> Bad dependency via SBT
> --
>
> Key: KAFKA-7741
> URL: https://issues.apache.org/jira/browse/KAFKA-7741
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0, 2.0.1, 2.1.0
> Environment: Windows 10 professional, IntelliJ IDEA 2017.1
>Reporter: sacha barber
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> I am using the Kafka-Streams-Scala 2.1.0 JAR.
> And if I create a new Scala project using SBT with these dependencies 
> {code}
> name := "ScalaKafkaStreamsDemo"
> version := "1.0"
> scalaVersion := "2.12.1"
> libraryDependencies += "org.apache.kafka" %% "kafka" % "2.0.0"
> libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.0.0"
> libraryDependencies += "org.apache.kafka" % "kafka-streams" % "2.0.0"
> libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "2.0.0"
> //TEST
> libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % Test
> libraryDependencies += "org.apache.kafka" % "kafka-streams-test-utils" % 
> "2.0.0" % Test
> {code}
> I get this error
>  
> {code}
> SBT 'ScalaKafkaStreamsDemo' project refresh failed
> Error:Error while importing SBT project:...[info] Resolving 
> jline#jline;2.14.1 ...
> [warn] [FAILED ] 
> javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}: (0ms)
> [warn]  local: tried
> [warn] 
> C:\Users\sacha\.ivy2\local\javax.ws.rs\javax.ws.rs-api\2.1.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
> [warn]  public: tried
> [warn] 
> https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1.1/javax.ws.rs-api-2.1.1.${packaging.type}
> [info] downloading 
> https://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-test-utils/2.1.0/kafka-streams-test-utils-2.1.0.jar
>  ...
> [info] [SUCCESSFUL ] 
> org.apache.kafka#kafka-streams-test-utils;2.1.0!kafka-streams-test-utils.jar 
> (344ms)
> [warn] ::
> [warn] :: FAILED DOWNLOADS ::
> [warn] :: ^ see resolution messages for details ^ ::
> [warn] ::
> [warn] :: javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}
> [warn] ::
> [trace] Stack trace suppressed: run 'last *:ssExtractDependencies' for the 
> full output.
> [trace] Stack trace suppressed: run 'last *:update' for the full output.
> [error] (*:ssExtractDependencies) sbt.ResolveException: download failed: 
> javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}
> [error] (*:update) sbt.ResolveException: download failed: 
> javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}
> [error] Total time: 8 s, completed 16-Dec-2018 19:27:21
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=384M; 
> support was removed in 8.0. See complete log in 
> file:/C:/Users/sacha/.IdeaIC2017.1/system/log/sbt.last.log
> {code}
> This seems to be a common issue with a bad dependency from Kafka to 
> javax.ws.rs-api.
> If I drop the Kafka version down to 2.0.0 and add this line to my SBT file, 
> the error goes away:
> {code}
> libraryDependencies += "javax.ws.rs" % "javax.ws.rs-api" % "2.1" 
> artifacts(Artifact("javax.ws.rs-api", "jar", "jar"))
> {code}
>  
> However, I would like to work with the 2.1.0 version.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7741) Bad dependency via SBT

2019-01-10 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7741.
--
   Resolution: Fixed
Fix Version/s: 2.0.2
   2.1.1
   2.2.0

> Bad dependency via SBT
> --
>
> Key: KAFKA-7741
> URL: https://issues.apache.org/jira/browse/KAFKA-7741
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0, 2.0.1, 2.1.0
> Environment: Windows 10 professional, IntelliJ IDEA 2017.1
>Reporter: sacha barber
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> I am using the Kafka-Streams-Scala 2.1.0 JAR.
> And if I create a new Scala project using SBT with these dependencies 
> {code}
> name := "ScalaKafkaStreamsDemo"
> version := "1.0"
> scalaVersion := "2.12.1"
> libraryDependencies += "org.apache.kafka" %% "kafka" % "2.0.0"
> libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.0.0"
> libraryDependencies += "org.apache.kafka" % "kafka-streams" % "2.0.0"
> libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "2.0.0"
> //TEST
> libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % Test
> libraryDependencies += "org.apache.kafka" % "kafka-streams-test-utils" % 
> "2.0.0" % Test
> {code}
> I get this error
>  
> {code}
> SBT 'ScalaKafkaStreamsDemo' project refresh failed
> Error:Error while importing SBT project:...[info] Resolving 
> jline#jline;2.14.1 ...
> [warn] [FAILED ] 
> javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}: (0ms)
> [warn]  local: tried
> [warn] 
> C:\Users\sacha\.ivy2\local\javax.ws.rs\javax.ws.rs-api\2.1.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
> [warn]  public: tried
> [warn] 
> https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1.1/javax.ws.rs-api-2.1.1.${packaging.type}
> [info] downloading 
> https://repo1.maven.org/maven2/org/apache/kafka/kafka-streams-test-utils/2.1.0/kafka-streams-test-utils-2.1.0.jar
>  ...
> [info] [SUCCESSFUL ] 
> org.apache.kafka#kafka-streams-test-utils;2.1.0!kafka-streams-test-utils.jar 
> (344ms)
> [warn] ::
> [warn] :: FAILED DOWNLOADS ::
> [warn] :: ^ see resolution messages for details ^ ::
> [warn] ::
> [warn] :: javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}
> [warn] ::
> [trace] Stack trace suppressed: run 'last *:ssExtractDependencies' for the 
> full output.
> [trace] Stack trace suppressed: run 'last *:update' for the full output.
> [error] (*:ssExtractDependencies) sbt.ResolveException: download failed: 
> javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}
> [error] (*:update) sbt.ResolveException: download failed: 
> javax.ws.rs#javax.ws.rs-api;2.1.1!javax.ws.rs-api.${packaging.type}
> [error] Total time: 8 s, completed 16-Dec-2018 19:27:21
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=384M; 
> support was removed in 8.0. See complete log in 
> file:/C:/Users/sacha/.IdeaIC2017.1/system/log/sbt.last.log
> {code}
> This seems to be a common issue with a bad dependency from Kafka to 
> javax.ws.rs-api.
> If I drop the Kafka version down to 2.0.0 and add this line to my SBT file, 
> the error goes away:
> {code}
> libraryDependencies += "javax.ws.rs" % "javax.ws.rs-api" % "2.1" 
> artifacts(Artifact("javax.ws.rs-api", "jar", "jar"))
> {code}
>  
> However, I would like to work with the 2.1.0 version.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-349 Priorities for Source Topics

2019-01-10 Thread Colin McCabe
Hi all,

Just as a quick reminder, this is not really a complete proposal.  There are a 
bunch of unresolved issues with this KIP.  One example is how this interacts 
with incremental fetch sessions.  It is not mentioned anywhere in the KIP text. 
 Previously we discussed some approaches, but there was no clear consensus.

Another example is the issue of starvation.  The KIP discusses "an idea" for 
handling starvation, but the details are very sparse -- just a sentence or two.  
At minimum we would need some kind of configuration for the proposed "lag 
deltas".  It's also not clear that the proposed mechanism would work, since we 
don't receive lag metrics for partitions that we don't fetch.  But if we do 
fetch from the partitions, we may receive data, which would cause our policy to 
not be strict priorities.  Keep in mind, even attempting to fetch 1 byte may 
cause us to read an entire message, as described in KIP-74.

It seems that we don't understand the potential use-cases.  The only use-case 
referenced by the KIP is this one, by Bala Prassanna:

 > We use Kafka to process the asynchronous events of our Document Management 
 > System such as preview generation, indexing for search etc.
 > The traffic gets generated via Web and Desktop Sync application. In such 
 > cases, we had to prioritize the traffic from web and consume them first.  
 > But this might lead to the starvation of events from sync if the consumer 
 > speed is slow and the event rate is high from web.  A solution to handle 
 > the starvation with a timeout after which the events are consumed normally 
 > for a specified period of time would be great and help us use our 
 > resources effectively.

Reading this carefully, it seems that the problem is actually starvation, not 
implementing priorities.  Bala already implemented priorities outside of Kafka. 
 If you read the discussion on KAFKA-6690, Bala also makes this comment: "We 
would need this in both Consumer API and Streams API."  The current KIP does 
not discuss adding priorities to Streams-- only to the basic consumer API.  So 
it seems clear that KIP-349 does not address Bala's use-case at all.

Stepping back a little bit, it seems like a few people have spoken up recently 
asking for some way to re-order the messages they receive from the Kafka 
consumer.  For example, ChienHsing Wu has discussed a use-case where he wants 
to receive messages in a "round robin" order.  All of this is possible by doing 
some local buffering and using the pause and resume APIs.  Perhaps we should 
consider better documenting these APIs, and adding some examples.  Or perhaps 
we should consider some kind of API to do pluggable buffering on the client 
side.
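
(As a rough illustration of that buffering approach -- a sketch only, not part of the KIP -- a consumer can pause its lower-priority partitions whenever higher-priority records are still queued locally. The topic names and the backlog check below are made up for the example.)

{code:java}
// Sketch of client-side prioritization with pause()/resume(). Topic names
// "orders-high" and "orders-low" are illustrative only.
import java.time.Duration;
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PriorityPoller {
    public static void pollLoop(KafkaConsumer<String, String> consumer) {
        Deque<ConsumerRecord<String, String>> highBuffer = new ArrayDeque<>();
        consumer.subscribe(Arrays.asList("orders-high", "orders-low"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(200));
            for (ConsumerRecord<String, String> r : records) {
                if (r.topic().equals("orders-high")) {
                    highBuffer.add(r);   // hold high-priority records for immediate processing
                } else {
                    process(r);          // low-priority record, processed as it arrives
                }
            }
            Set<TopicPartition> lowPartitions = consumer.assignment().stream()
                    .filter(tp -> tp.topic().equals("orders-low"))
                    .collect(Collectors.toSet());
            if (!highBuffer.isEmpty()) {
                // Stop fetching low-priority partitions while high-priority work is queued.
                consumer.pause(lowPartitions);
                while (!highBuffer.isEmpty()) {
                    process(highBuffer.poll());
                }
            } else {
                consumer.resume(lowPartitions);
            }
        }
    }

    private static void process(ConsumerRecord<String, String> r) {
        // application logic goes here
    }
}
{code}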

In any case, this needs more discussion.  We need to be clear and definite 
about what use cases we want to solve, and the tradeoffs we're making to solve 
them.  For now, I have to reiterate my -1 (binding).

Colin


On Thu, Jan 10, 2019, at 10:46, Adam Bellemare wrote:
> Looks good to me then!
> 
> +1 non-binding
> 
> 
> 
> > On Jan 10, 2019, at 1:22 PM, Afshartous, Nick  
> > wrote:
> > 
> > 
> > Hi Adam,
> > 
> > 
> > This change is only intended for the basic consumer API.
> > 
> > 
> > Cheers,
> > 
> > --
> > 
> >Nick
> > 
> > 
> > 
> > From: Adam Bellemare 
> > Sent: Sunday, January 6, 2019 11:45 AM
> > To: dev@kafka.apache.org
> > Subject: Re: [VOTE] KIP-349 Priorities for Source Topics
> > 
> > Hi Nick
> > 
> > Is this change only for the basic consumer? How would this affect anything 
> > with Kafka Streams?
> > 
> > Thanks
> > 
> > 
> >> On Jan 5, 2019, at 10:52 PM, n...@afshartous.com wrote:
> >> 
> >> Bumping again for more votes.
> >> --
> >> Nick
> >> 
> >> 
> >>> On Dec 26, 2018, at 12:36 PM, n...@afshartous.com wrote:
> >>> 
> >>> Bumping this thread for more votes
> >>> 
> >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_KIP-2D349-3A-2BPriorities-2Bfor-2BSource-2BTopics&d=DwIFAg&c=-SicqtCl7ffNuxX6bdsSog&r=P28z_ShLjFv5AP-w9-b_auYBx8qTrjk2JPYZKbjmJTs&m=5qg4fCOVMtRYYLu2e8h8KmDyis_uk3aFqT5Eq0x4hN8&s=Sbrd5XSwEZiMc9iTPJjRQafl4ubXwIOnsnFzhBEa0h0&e=
> >>>  
> >>>  >>>  
> >>> 

Build failed in Jenkins: kafka-2.1-jdk8 #102

2019-01-10 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7773; Add end to end system test relying on verifiable consumer

--
[...truncated 914.99 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnWildcardResource STAR

Build failed in Jenkins: kafka-trunk-jdk8 #3303

2019-01-10 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Remove throwing exception if not found from describe topics

--
[...truncated 4.50 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqu

Re: [VOTE] KIP-349 Priorities for Source Topics

2019-01-10 Thread Adam Bellemare
Looks good to me then!

+1 non-binding



> On Jan 10, 2019, at 1:22 PM, Afshartous, Nick  wrote:
> 
> 
> Hi Adam,
> 
> 
> This change is only intended for the basic consumer API.
> 
> 
> Cheers,
> 
> --
> 
>Nick
> 
> 
> 
> From: Adam Bellemare 
> Sent: Sunday, January 6, 2019 11:45 AM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-349 Priorities for Source Topics
> 
> Hi Nick
> 
> Is this change only for the basic consumer? How would this affect anything 
> with Kafka Streams?
> 
> Thanks
> 
> 
>> On Jan 5, 2019, at 10:52 PM, n...@afshartous.com wrote:
>> 
>> Bumping again for more votes.
>> --
>> Nick
>> 
>> 
>>> On Dec 26, 2018, at 12:36 PM, n...@afshartous.com wrote:
>>> 
>>> Bumping this thread for more votes
>>> 
>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_KIP-2D349-3A-2BPriorities-2Bfor-2BSource-2BTopics&d=DwIFAg&c=-SicqtCl7ffNuxX6bdsSog&r=P28z_ShLjFv5AP-w9-b_auYBx8qTrjk2JPYZKbjmJTs&m=5qg4fCOVMtRYYLu2e8h8KmDyis_uk3aFqT5Eq0x4hN8&s=Sbrd5XSwEZiMc9iTPJjRQafl4ubXwIOnsnFzhBEa0h0&e=
>>>  
>>> >>  
>>> >
>> 
>> 
>> 
>> 


Re: [VOTE] KIP-349 Priorities for Source Topics

2019-01-10 Thread Afshartous, Nick

Hi Adam,


This change is only intended for the basic consumer API.


Cheers,

--

Nick



From: Adam Bellemare 
Sent: Sunday, January 6, 2019 11:45 AM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-349 Priorities for Source Topics

Hi Nick

Is this change only for the basic consumer? How would this affect anything with 
Kafka Streams?

Thanks


> On Jan 5, 2019, at 10:52 PM, n...@afshartous.com wrote:
>
> Bumping again for more votes.
> --
>  Nick
>
>
>> On Dec 26, 2018, at 12:36 PM, n...@afshartous.com wrote:
>>
>> Bumping this thread for more votes
>>
>> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_KIP-2D349-3A-2BPriorities-2Bfor-2BSource-2BTopics&d=DwIFAg&c=-SicqtCl7ffNuxX6bdsSog&r=P28z_ShLjFv5AP-w9-b_auYBx8qTrjk2JPYZKbjmJTs&m=5qg4fCOVMtRYYLu2e8h8KmDyis_uk3aFqT5Eq0x4hN8&s=Sbrd5XSwEZiMc9iTPJjRQafl4ubXwIOnsnFzhBEa0h0&e=
>>  
>> >  
>> >
>
>
>
>


Re: [VOTE] KIP-382 MirrorMaker 2.0

2019-01-10 Thread Jun Rao
Hi, Ryanne,

Thanks for the explanation. All make sense to me now. +1 on the KIP from me.

Jun

On Wed, Jan 9, 2019 at 7:16 PM Ryanne Dolan  wrote:

> Thanks Jun.
>
> > 103. My point was that the MirrorMakerConnector can die while the
> Heartbeat connector is still alive. So, one can't solely rely on Heartbeat
> for monitoring?
>
> Each cluster will have a heartbeat topic produced by
> MirrorHeartbeatConnector, which doesn't have an associated "source" other
> than time. This topic gets picked up by downstream MirrorSourceConnectors
> and replicated like A.heartbeat. So the heartbeat topic itself isn't
> particularly useful for monitoring, but the downstream A.heartbeat shows that
> heartbeats are being replicated successfully from A -> B. If a
> MirrorSourceConnector fails while replicating A -> B, you'd still see
> heartbeats in cluster B, but not A.heartbeat.
>
> 105. You're correct, you don't need to know "B" in order to go from A's
> topic1 to B's A.topic1, i.e. migrating downstream. But you need to know "B"
> to go from A's B.topic1 to B's topic1. In the latter case, you are
> consuming a remote topic to begin with, and then migrating to the source
> cluster, i.e. migrating upstream. N.B. you strip the "B" prefix in this
> case, rather than add the "A" prefix. And you can't just strip all
> prefixes, because you could be migrating from e.g. A's C.topic1 to B's
> C.topic1, i.e. migrating "laterally", if you will.
>
> I suppose we could break this out into multiple methods (upstream,
> downstream, lateral etc), but I think that would add a lot more complexity
> and confusion to the API. By providing both A and B, the single method can
> always figure out what to do.
>
> 107. done
>
> Thanks,
> Ryanne
>
>
>
>
> On Wed, Jan 9, 2019 at 6:11 PM Jun Rao  wrote:
>
>> Hi, Ryanne,
>>
>> 103. My point was that the MirrorMakerConnector can die while the
>> Heartbeat connector is still alive. So, one can't solely rely on Heartbeat
>> for monitoring?
>>
>> 105. Hmm, maybe I don't understand how this is done. Let's say we replicate
>> topic1 from cluster A to cluster B. My understanding is that to translate
>> the offset from A to B for a consumer group, we read A.checkpoint file in
>> cluster B to get the timestamp of the last checkpointed offset, call
>> consumer.offsetsForTimes() on A.topic1 in cluster B to translate the
>> timestamp to a local offset, and return .
>> Is that right? If so, in all steps, we don't need to know the
>> targetClusterAlias B. We just need to know the connection string to
>> cluster B, which targetConsumerConfig provides.
>>
>> 107. Thanks. Could you add that description to the KIP?
>>
>> Thanks,
>>
>> Jun
>>
>> On Mon, Jan 7, 2019 at 3:50 PM Ryanne Dolan 
>> wrote:
>>
>>> Thanks Jun, I've updated the KIP as requested. Brief notes below:
>>>
>>> 100. added "...out-of-the-box (without custom handlers)..."
>>>
>>> 101. done. Good idea to include a MessageFormatter.
>>>
>>> 102. done.
>>>
>>> > 103. [...] why is Heartbeat a separate connector?
>>>
>>> Heartbeats themselves are replicated via MirrorSource/SinkConnector, so
>>> if replication stops, you'll stop seeing heartbeats in downstream clusters.
>>> I've updated the KIP to make this clearer and have added a bullet to
>>> Rejected Alternatives.
>>>
>>> 104. added "heartbeat.retention.ms", "checkpoint.retention.ms", thanks.
>>> The heartbeat topic doesn't need to be compacted.
>>>
>>> > 105. [...] I am not sure why targetClusterAlias is useful
>>>
>>> In order to map A's B.topic1 to B's topic1, we need to know B.
>>>
>>> > 106. [...] should the following properties be prefixed with "consumer."
>>>
>>> No, they are part of Connect's worker config.
>>>
>>> > 107. So, essentially it's running multiple logical connect clusters on
>>> the same shared worker nodes?
>>>
>>> Correct. Rather than configure each Connector and Worker and Herder
>>> individually, a single top-level configuration file is used. And instead of
>>> running a bunch of separate worker processes on each node, a single process
>>> runs multiple workers. This is implemented using a separate driver based on
>>> ConnectDistributed, but which runs multiple DistributedHerders. Each
>>> DistributedHerder uses a different Kafka cluster for coordination -- they
>>> are completely separate apart from running in the same process.
>>>
>>> Thanks for helping improve the doc!
>>> Ryanne
>>>
>>> On Fri, Jan 4, 2019 at 10:33 AM Jun Rao  wrote:
>>>
 Hi, Ryanne,

 Thanks for KIP.  Still have a few more comments below.

 100. "This is not possible with MirrorMaker today -- records would be
 replicated back and forth indefinitely, and the topics in either cluster
 would be merged inconsistently between clusters. " This is not 100% true
 since MM can do the topic renaming through MirrorMakerMessageHandler.

 101. For both Heartbeat and checkpoint, could you define the full
 schema,
 including the field type? Also how are they serialized into the Ka
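
(As a rough illustration of the timestamp-based offset translation discussed above -- a sketch, not the MM2 implementation -- the consumer's offsetsForTimes() API maps a checkpointed timestamp from cluster A onto a local offset of the replicated topic in cluster B. The remote topic name "A.topic1" and the way the timestamp is obtained are placeholders.)

{code:java}
// Sketch only: given a timestamp read from A's checkpoint data, find the
// corresponding local offset on the replicated topic "A.topic1" in cluster B.
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetTranslationSketch {
    public static long translate(KafkaConsumer<byte[], byte[]> consumerOnClusterB,
                                 int partition,
                                 long checkpointedTimestamp) {
        TopicPartition remote = new TopicPartition("A.topic1", partition);
        Map<TopicPartition, OffsetAndTimestamp> result =
                consumerOnClusterB.offsetsForTimes(
                        Collections.singletonMap(remote, checkpointedTimestamp));
        OffsetAndTimestamp translated = result.get(remote);
        // offsetsForTimes returns the earliest offset whose timestamp is >= the
        // requested one, or null if no such record exists.
        return translated == null ? -1L : translated.offset();
    }
}
{code}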

Re: [VOTE] - KIP-213 (new vote) - Simplified and revised.

2019-01-10 Thread Bill Bejeck
+1 from me.  Great job on the KIP.

-Bill

On Thu, Jan 10, 2019 at 11:35 AM John Roesler  wrote:

> It's a +1 (nonbinding) from me as well.
>
> Thanks for sticking with this, Adam!
> -John
>
> On Wed, Jan 9, 2019 at 6:22 PM Guozhang Wang  wrote:
>
> > Hello Adam,
> >
> > I'm +1 on the current proposal, thanks!
> >
> >
> > Guozhang
> >
> > On Mon, Jan 7, 2019 at 6:13 AM Adam Bellemare 
> > wrote:
> >
> > > Hi All
> > >
> > > I would like to call a new vote on KIP-213. The design has changed
> > > substantially. Perhaps more importantly, the KIP and associated
> > > documentation has been greatly simplified. I know this KIP has been on
> > the
> > > mailing list for a long time, but the help from John Roesler and
> Guozhang
> > > Wang have helped put it into a much better state. I would appreciate
> any
> > > feedback or votes.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable
> > >
> > >
> > >
> > > Thank you
> > >
> > > Adam Bellemare
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [VOTE] - KIP-213 (new vote) - Simplified and revised.

2019-01-10 Thread John Roesler
It's a +1 (nonbinding) from me as well.

Thanks for sticking with this, Adam!
-John

On Wed, Jan 9, 2019 at 6:22 PM Guozhang Wang  wrote:

> Hello Adam,
>
> I'm +1 on the current proposal, thanks!
>
>
> Guozhang
>
> On Mon, Jan 7, 2019 at 6:13 AM Adam Bellemare 
> wrote:
>
> > Hi All
> >
> > I would like to call a new vote on KIP-213. The design has changed
> > substantially. Perhaps more importantly, the KIP and associated
> > documentation has been greatly simplified. I know this KIP has been on
> the
> > mailing list for a long time, but the help from John Roesler and Guozhang
> > Wang have helped put it into a much better state. I would appreciate any
> > feedback or votes.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable
> >
> >
> >
> > Thank you
> >
> > Adam Bellemare
> >
>
>
> --
> -- Guozhang
>


Build failed in Jenkins: kafka-trunk-jdk8 #3302

2019-01-10 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] KAFKA-5994; Log ClusterAuthorizationException for all 
ClusterAction

--
[...truncated 2.25 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.ka

Re: [DISCUSSION] KIP-412: Extend Admin API to support dynamic application log levels

2019-01-10 Thread Ryanne Dolan
Makes sense, thanks.

Ryanne

On Wed, Jan 9, 2019, 11:28 PM Stanislav Kozlovski wrote:

> Sorry about cutting the last message short. I was meaning to say that in
> the future we would be able to introduce finer-grained logging
> configuration (e.g enable debug logs for operations pertaining to this
> topic) and that would be easier to do if we are to know what the target
> resource of an IncrementalAlterConfig request is.
>
> Separating the resource types also allows us to not return a huge
> DescribeConfigs response on the BROKER resource type - the logging
> configurations can be quite verbose.
>
> I hope that answers your question
>
> Best,
> Stanislav
>
> On Wed, Jan 9, 2019 at 3:26 PM Stanislav Kozlovski  >
> wrote:
>
> > Hey Ryanne, thanks for taking a look at the KIP!
> >
> > I think that it is useful to specify the distinction between a standard
> > Kafka config and the log level configs. The log level can be looked at
> as a
> > separate resource as it does not change the behavior of the Kafka broker
> in
> > any way.
> > In terms of practical benefits, separating the two eases this KIP's
> > implementation and user's implementation of AlterConfigPolicy (e.g deny
> all
> > requests that try to alter log level) significantly. We would also be
> able
> > to introduce a
> >
> > On Wed, Jan 9, 2019 at 1:48 AM Ryanne Dolan 
> wrote:
> >
> >> > To differentiate between the normal Kafka config settings and the
> >> application's log level settings, we will introduce a new resource type
> -
> >> BROKER_LOGGERS
> >>
> >> Stanislav, can you explain why log level wouldn't be a "normal Kafka
> >> config
> >> setting"?
> >>
> >> Ryanne
> >>
> >> On Tue, Jan 8, 2019, 4:26 PM Stanislav Kozlovski <
> stanis...@confluent.io
> >> wrote:
> >>
> >> > Hey there everybody,
> >> >
> >> > I'd like to start a discussion about KIP-412. Please take a look at
> the
> >> KIP
> >> > if you can, I would appreciate any feedback :)
> >> >
> >> > KIP: KIP-412
> >> > <
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels
> >> > >
> >> > JIRA: KAFKA-7800 
> >> >
> >> > --
> >> > Best,
> >> > Stanislav
> >> >
> >>
> >
> >
> > --
> > Best,
> > Stanislav
> >
>
>
> --
> Best,
> Stanislav
>
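
(For illustration only: if the proposed BROKER_LOGGERS resource is exposed through the Admin API roughly as described, raising one logger to DEBUG might look like the sketch below. ConfigResource.Type.BROKER_LOGGER, the broker id "0" and the logger name "kafka.server.ReplicaManager" are assumptions based on the KIP discussion, not a released API.)

{code:java}
// Hedged sketch of dynamically changing a broker logger's level via
// incrementalAlterConfigs(). The BROKER_LOGGER resource type is assumed from
// the KIP-412 proposal.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerLoggerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Identify broker 0 under the proposed logger resource type.
            ConfigResource brokerLoggers =
                    new ConfigResource(ConfigResource.Type.BROKER_LOGGER, "0");
            AlterConfigOp setDebug = new AlterConfigOp(
                    new ConfigEntry("kafka.server.ReplicaManager", "DEBUG"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                    Collections.singletonMap(brokerLoggers, Collections.singleton(setDebug)))
                 .all().get();
        }
    }
}
{code}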


[DISCUSS] KIP-379: Multiple Consumer Group Management

2019-01-10 Thread Alex D
Hi Jason,


Sorry for the late reply too.
All proposed/dismissed changes done.
Please review the latest PR updates: https://github.com/apache/kafka/pull/5726/

#1. Done. Backwards compatibility with the old csv format is preserved.
#2. Not done. Decided not to omit the GROUP column for `--describe`
output of a single `--group`, since this is less important from a
compatibility perspective.
#3. Done. All tables of `--describe` output now start with the GROUP column.
#4. Not done. Regex support can be implemented as a separate feature.

Do you confirm changes?

Thank you,
Alex Dunayevsky

>Hi Alex,
>
>Sorry for the late reply. Your message didn't get attached to the main
>thread and I missed it.
>
>#1. Yeah, I think we should continue to accept the old csv format.
>Compatibility is important for all public APIs.
>#2. I think this is less important from a compatibility perspective. On the
>one hand, it makes the output compatible with currently supported usage. On
>the other, it makes it more annoying to write tools which invoke this
>command because they need to treat the single group case separately. I'm
>probably leaning toward not doing this one, but I don't have a strong
>opinion.
>#3. To clarify, my suggestion was to put the group id first. I think Vahid
>was in agreement. From your comment, it sounds like you agree as well?
>#4. I agree supporting regex can be left for future work.
>
>Thanks,
>Jason
>
>On Mon, Nov 5, 2018 at 7:55 AM Alex D  wrote:
>
>>Hello guys,
>>
>>Thank you for your suggestions!
>>I've made a short resume of all suggestions proposed for further
>>possible code corrections.
>>Since not all opinions match, let's review once again and decide.
>>
>>#1. Support old csv format. Proposed by Jason.
>>Yes: Jason, Vahid
>>
>>If backwards compatibility is important for this specific (and, I
>>believe, infrequent) case, ready to make corrections. Final word?
>>
>>#2. Do not show group name for `--describe` output in case a single
>>`--group` is specified. Proposed by Jason.
>>Yes: Jason
>>
>>Alternatively, the user will always expect the output to be the
>>same
>>for any `--describe` query. Ready to make corrections if this is
>>important. Final word?
>>
>>#3. GROUP column should not be the first in the row. Proposed by Jason.
>>Yes: Jason
>>No:  Vahid
>>
>>For the group offset configuration, the group entity appears to be
>>the top priority and starting a table with a GROUP column makes more
>>sense, I believe. Plus, it's quicker and easier to spot to which group
>>the offsets belong to.
>>Apply corrections or leave as is?
>>
>>#4. Single regex vs multiple `--group` flags. Proposed by eazama..
>>
>>There are a few reasons behind this. Firstly, there are no rules
>>for
>>defining group names unlike for topic names that have their own
>>validation routine according to a simple regex. Group names may
>>contain any symbols possible and simply splitting them by comma won't
>>work, at least without using escape characters maybe. Secondly,
>>repetition of the `--group` flag had already been implemented for the
>>group deletion logic and we don't not want to break the backwards
>>compatibility. Finally, visually, it's a bit easier to read and
>>compose a long query with a large number of groups than throwing
>>everything into one very long string.
>>
>>#5. Valid scenario where we would want to delete all consumer groups.
>>Asked by Vahid.
>>
>>There should be one, I believe ;) Already received a few requests
>>from colleagues.
>>
>># KIP approvals:
>>Suman: +1
>>
>>> Sat, 20 Oct 2018 17:10:16 GMT,  wrote:
>>> Is there a reason for using multiple --group flags over having it accept
>>a regex?
>>>
>>> The topics command currently accepts a regex for most operations and
>>doesn't support using
>>> multiple topics flags. It seems like it would be better to take a more
>>standardized approach
>>> to providing this type of information.
>>>
>>>
 On Oct 19, 2018, at 10:28 AM, Suman B N  wrote:

 This eases debugging metadata information of consumer groups and
>>offsets in
 case of client hangs, which we have been facing frequently.
 +1 from me. Well done Alex!

 -Suman

 On Fri, Oct 19, 2018 at 8:36 PM Vahid Hashemian <
>>vahid.hashem...@gmail.com>
 wrote:

> Thanks for proposing the KIP. Looks good to me overall.
>
> I agree with Jason's suggestion that it would be best to keep the
>>current
> output format when a single '--group' is present. Because otherwise,
>>there
> would be an impact to users who rely on the current output format.
>>Also,
> starting with a GROUP column makes more sense to me.
>
> Also, and for my own info, is there a valid scenario where we would
>>want to
> delete all consumer groups? It sounds to me like a potentially
>>dangerous
> feature. I would imagine that it would help more with dev/test
> environments, where we normally have a 

Re: [DISCUSS] KIP-379: Multiple Consumer Group Management

2019-01-10 Thread Alex D
Hi Jason,

Sorry for the late reply too.
All proposed/dismissed changes done.
Please review the latest PR updates: https://github.com/apache/kafka/pull/5726/

#1. Done. Backwards compatibility with old csv format is preserved.
#2. Not. Considered not to omit the GROUP column for `--describe`
output of a single `--group` since this is less important from a
compatibility perspective.
#3. Done. All tables of --describe output now start from the GROUP column.
#4. Not. Can be implemented as a separate feature.

Do you confirm changes?

Thank you,
Alex Dunayevsky

>Hi Alex,
>
>Sorry for the late reply. Your message didn't get attached to the main
>thread and I missed it.
>
>#1. Yeah, I think we should continue to accept the old csv format.
>Compatibility is important for all public APIs.
>#2. I think this is less important from a compatibility perspective. On the
>one hand, it makes the output compatible with currently supported usage. On
>the other, it makes it more annoying to write tools which invoke this
>command because they need to treat the single group case separately. I'm
>probably leaning toward not doing this one, but I don't have a strong
>opinion.
>#3. To clarify, my suggestion was to put the group id first. I think Vahid
>was in agreement. From your comment, it sounds like you agree as well?
>#4. I agree supporting regex can be left for future work.
>
>Thanks,
>Jason
>
>On Mon, Nov 5, 2018 at 7:55 AM Alex D  wrote:
>
>>Hello guys,
>>
>>Thank you for your suggestions!
>>I've made a short resume of all suggestions proposed for further
>>possible code corrections.
>>Since not all opinions match, let's review once again and decide.
>>
>>#1. Support old csv format. Proposed by Jason.
>>Yes: Jason, Vahid
>>
>>If backwards compatibility is important for this specific (and, I
>>believe, infrequent) case, ready to make corrections. Final word?
>>
>>#2. Do not show group name for `--describe` output in case a single
>>`--group` is specified. Proposed by Jason.
>>Yes: Jason
>>
>>Alternatively, the user will always expect the output to be the
>>same
>>for any `--describe` query. Ready to make corrections if this is
>>important. Final word?
>>
>>#3. GROUP column should not be the first in the row. Proposed by Jason.
>>Yes: Jason
>>No:  Vahid
>>
>>For the group offset configuration, the group entity appears to be
>>the top priority and starting a table with a GROUP column makes more
>>sense, I believe. Plus, it's quicker and easier to spot to which group
>>the offsets belong to.
>>Apply corrections or leave as is?
>>
>>#4. Single regex vs multiple `--group` flags. Proposed by eazama..
>>
>>There are a few reasons behind this. Firstly, there are no rules for
>>defining group names, unlike topic names, which have their own
>>validation routine based on a simple regex. Group names may contain
>>any symbols, so simply splitting them by comma won't work, at least
>>not without escape characters. Secondly, repetition of the `--group`
>>flag had already been implemented for the group deletion logic, and we
>>don't want to break backwards compatibility. Finally, it's visually
>>easier to read and compose a long query with a large number of groups
>>than to throw everything into one very long string.
>>
>>#5. Valid scenario where we would want to delete all consumer groups.
>>Asked by Vahid.
>>
>>There should be one, I believe ;) I have already received a few
>>requests from colleagues.
>>
>># KIP approvals:
>>Suman: +1
>>
>>> Sat, 20 Oct 2018 17:10:16 GMT,  wrote:
>>> Is there a reason for using multiple --group flags over having it accept
>>a regex?
>>>
>>> The topics command currently accepts a regex for most operations and
>>doesn't support using
>>> multiple topics flags. It seems like it would be better to take a more
>>standardized approach
>>> to providing this type of information.
>>>
>>>
 On Oct 19, 2018, at 10:28 AM, Suman B N  wrote:

 This eases debugging of consumer group and offset metadata in case of
 client hangs, which we have been facing frequently.
 +1 from me. Well done Alex!

 -Suman

 On Fri, Oct 19, 2018 at 8:36 PM Vahid Hashemian <
>>vahid.hashem...@gmail.com>
 wrote:

> Thanks for proposing the KIP. Looks good to me overall.
>
> I agree with Jason's suggestion that it would be best to keep the
>>current
> output format when a single '--group' is present. Because otherwise,
>>there
> would be an impact to users who rely on the current output format.
>>Also,
> starting with a GROUP column makes more sense to me.
>
> Also, and for my own info, is there a valid scenario where we would
>>want to
> delete all consumer groups? It sounds to me like a potentially
>>dangerous
> feature. I would imagine that it would help more with dev/test
> environments, where we normally have a f

[jira] [Created] (KAFKA-7811) Avoid unnecessary lock acquire when KafkaConsumer commit offsets

2019-01-10 Thread lambdaliu (JIRA)
lambdaliu created KAFKA-7811:


 Summary: Avoid unnecessary lock acquire when KafkaConsumer commit 
offsets
 Key: KAFKA-7811
 URL: https://issues.apache.org/jira/browse/KAFKA-7811
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.1.0, 2.0.1, 1.1.1, 1.0.2, 0.11.0.3, 0.10.2.2
Reporter: lambdaliu
Assignee: lambdaliu


In KafkaConsumer#commitAsync, we have the following logic:

 
{code:java}
public void commitAsync(OffsetCommitCallback callback) {
    acquireAndEnsureOpen();
    try {
        commitAsync(subscriptions.allConsumed(), callback);
    } finally {
        release();
    }
}
{code}
This overload calls another `commitAsync` overload, which also calls `acquireAndEnsureOpen`, so the consumer's light lock is acquired twice on the same call path.
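
A possible refactoring (sketched below with simplified stand-ins rather than the real KafkaConsumer internals) is to have the public overloads acquire the light lock once and delegate to a private method that assumes the lock is already held. Names such as `CommitAsyncSketch` and `doCommitAsync` are illustrative assumptions, not actual Kafka code.

{code:java}
// Minimal, self-contained sketch of the idea: public overloads acquire once and
// delegate to an internal method that does not re-acquire. Simplified types
// (Runnable instead of OffsetCommitCallback) keep the example standalone.
public class CommitAsyncSketch {

    public void commitAsync(Runnable callback) {
        acquireAndEnsureOpen();
        try {
            doCommitAsync(allConsumedOffsets(), callback); // no nested acquire here
        } finally {
            release();
        }
    }

    public void commitAsync(java.util.Map<String, Long> offsets, Runnable callback) {
        acquireAndEnsureOpen();
        try {
            doCommitAsync(offsets, callback);
        } finally {
            release();
        }
    }

    // Internal path: the caller must already hold the light lock.
    private void doCommitAsync(java.util.Map<String, Long> offsets, Runnable callback) {
        // ... enqueue the commit request; invoke the callback on completion ...
        if (callback != null) {
            callback.run();
        }
    }

    private java.util.Map<String, Long> allConsumedOffsets() {
        return java.util.Collections.emptyMap(); // placeholder for subscriptions.allConsumed()
    }

    private void acquireAndEnsureOpen() { /* e.g. CAS on the owning thread id */ }

    private void release() { /* e.g. clear the owning thread id / decrement refcount */ }
}
{code}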

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5994) Improve transparency of broker user ACL misconfigurations

2019-01-10 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5994.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 5021
[https://github.com/apache/kafka/pull/5021]

> Improve transparency of broker user ACL misconfigurations
> -
>
> Key: KAFKA-5994
> URL: https://issues.apache.org/jira/browse/KAFKA-5994
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.2.1
>Reporter: Dustin Cote
>Priority: Major
> Fix For: 2.2.0
>
>
> When the user for inter broker communication is not a super user and ACLs are 
> configured with allow.everyone.if.no.acl.found=false, the cluster will not 
> serve data. This is extremely confusing to debug because there is no security 
> negotiation problem or indication of an error other than no data can make it 
> in or out of the broker. If one knew to look in the authorizer log, it would 
> be more clear, but that didn't make it into my workflow at least. Here's an 
> example of a problematic debugging scenario:
> * SASL_SSL, SSL, and SASL_PLAINTEXT ports on the brokers
> * SASL user specified in `super.users`
> * SSL specified as the inter-broker protocol
> The only way I could figure out that ACLs were the issue, without gleaning it
> through configuration inspection, was that controlled shutdown indicated that
> a cluster action had failed.
> It would be good if we could be more transparent about the failure here. 
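
For reference, a minimal, illustrative broker configuration of the kind described above (the listener layout and principal names are assumptions, not taken from the report). With `allow.everyone.if.no.acl.found=false` and an inter-broker listener whose principal is not listed in `super.users`, broker-to-broker requests are denied by the authorizer without any obvious client-facing error:

{code}
# Illustrative server.properties sketch; values are assumed for the example.
listeners=SASL_SSL://:9092,SSL://:9093,SASL_PLAINTEXT://:9094
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false

# Inter-broker traffic uses the SSL listener, so the broker acts as the SSL principal.
security.inter.broker.protocol=SSL

# Only the SASL principal is a super user; the SSL (inter-broker) principal is not,
# so replication and controller requests fail authorization unless explicit ACLs exist.
super.users=User:admin
{code}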



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #3301

2019-01-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2019-01-10 Thread Becket Qin
One thing I would like to bring up is the improvements regarding cross-colo
failover. I think it is an important scenario and it would be great if MM2
addressed these needs.

I chatted with Ryanne offline via email about my thoughts. Ideally in a
multi-location Kafka deployment, users would like to see:
1. A global checkpoint for a consumer group
2. On failover, consumers resume consumption at the exact message based on
the last checkpoint.

With timestamp-based failover, we can achieve 1, but not 2. The proposal in
the KIP can also achieve 1, but not 2. However, with some additional help,
such as RecordBatch-level headers, we may be able to achieve 2 on top of
the proposal in this KIP.

I realize that this should probably not be a blocker for the KIP, as the KIP
does a lot of other things. My concern is that if, later on, some other
solution meets both requirements, what we do in MM2 will become thrown-away
work. So it is still worth understanding whether the KIP is working towards
the right final solution, even if we are not doing all the work in this KIP.

Again, I don't see this necessarily as a blocker for this KIP. So I am fine
if people think we should defer the improvements of cross-cluster failover
to a later decision.

Thanks,

Jiangjie (Becket) Qin

On Tue, Jan 8, 2019 at 2:23 AM Ryanne Dolan  wrote:

> Hi Ewen, thanks for the questions.
>
> > On the ACL management, can you explain how things are supposed to work...
>
> There are two types of topics in play here: regular topics and remote
> topics. MM2 replicates regular source topics -> remote topics.
>
> MM2 doesn't create or modify regular topics, but it fully owns and manages
> remote topics, including their ACL. MM2 automatically syncs ACL changes
> from source topic to remote topic (but not the other way around), s.t. if
> an operator changes the ACL on a source topic, the corresponding remote
> topic is updated.
>
> Only MM2 can write to remote topics, and their ACLs are configured to
> enforce this. Additionally, any READ rules for a source topic are
> propagated to the remote topic. This is important for consumer
> migration/failover to work reliably -- a failed-over consumer must have
> access to the replicated data in a foreign cluster. Keep in mind they are
> the same records after all!
>
> Where there is an A -> B replication flow, a principal with read access to
> Cluster A's topic1 will also have read access to Cluster B's A.topic1 (a
> remote topic). However, that does NOT mean that the same principal will
> automatically have access to Cluster B's topic1, since topic1 is not a
> remote topic. This is because the records in Cluster A's topic1 are NOT the
> same as the records in Cluster B's topic1, and in fact they may have vastly
> different access control requirements.
>
> Consider the common arrangement where an organization has multiple Kafka
> clusters for prod vs staging, internet/DMZ vs intranet, etc. You might want
> to use MM2 to replicate a topic "foo" from prod to staging, for example. In
> this case, the topic will show up in the staging cluster as "prod.foo". MM2
> will make sure that any principal that can read "foo" in prod can also read
> "prod.foo" in staging, since it's the same principal and the same data. You
> don't have to manually create or configure "prod.foo" -- you just tell MM2
> to replicate "foo" from prod to staging.
>
> In this example, MM2 does not touch anything in the prod cluster -- it just
> reads from "foo". (In fact, it doesn't write to prod at all, not even
> offsets). And importantly, any changes to staging topics don't affect
> anything in prod.
>
> > is this purely for a mirroring but not DR and failover cases
>
> DR (failover/failback, and client migration in general) is the primary
> motivation for the MM2 design. ACL sync in particular exists to ensure
> clients can migrate between clusters and still have access to the same
> data.
>
> > In particular, the rules outlined state that only MM2 would be able to
> write on the new cluster
>
> Only MM2 can write to _remote topics_ (on any cluster). That says nothing
> of normal topics.
>
> > at some point you need to adjust ACLs for the failed-over apps to write
>
> It depends. WRITE access is not sync'd across clusters by MM2, so you may
> need some other mechanism to manage that. This limitation is by design --
> it's unsafe and generally undesirable to apply write access across
> clusters.
>
> Consider the prod vs staging example again. If you are replicating "foo"
> from prod -> staging, you want app1 to have access to both prod's "foo" and
> staging's "prod.foo", since this is the same principal and the same data,
> just on separate clusters. But that doesn't mean you want prod apps to
> write to staging, nor staging apps to write to prod. This is probably the
> whole reason you have staging vs prod in the first place! Instead, you will
> want to be deliberate when promoting an application from staging to prod,
> which may invo
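
To make the READ-rule propagation described in the quoted message concrete, here is a minimal sketch using the public AdminClient ACL API: a principal that can already read "foo" on the prod cluster is granted the same READ access to the remote topic "prod.foo" on the staging cluster. The principal, host, and bootstrap address are assumed placeholders, and this is a conceptual illustration rather than MM2's actual implementation.

{code:java}
// Illustrative sketch only: propagate a READ ACL from a source topic ("foo" on prod)
// to the corresponding remote topic ("prod.foo" on staging). All values are assumed.
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class RemoteTopicAclSyncSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "staging-kafka:9092");

        try (AdminClient staging = AdminClient.create(props)) {
            // A principal that already has READ on prod's "foo" gets READ on staging's "prod.foo".
            ResourcePattern remoteTopic =
                new ResourcePattern(ResourceType.TOPIC, "prod.foo", PatternType.LITERAL);
            AccessControlEntry allowRead =
                new AccessControlEntry("User:app1", "*", AclOperation.READ, AclPermissionType.ALLOW);

            staging.createAcls(Collections.singleton(new AclBinding(remoteTopic, allowRead)))
                   .all()
                   .get();
        }
    }
}
{code}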

[jira] [Created] (KAFKA-7810) AdminClient not found flink consumer group

2019-01-10 Thread dengjie (JIRA)
dengjie created KAFKA-7810:
--

 Summary: AdminClient not found flink consumer group
 Key: KAFKA-7810
 URL: https://issues.apache.org/jira/browse/KAFKA-7810
 Project: Kafka
  Issue Type: Bug
  Components: admin, clients, consumer
Affects Versions: 2.1.0
Reporter: dengjie


After starting the Flink (v1.7.1) application, calling *listAllConsumerGroups()* on 
the Kafka *AdminClient* class does not find the Flink consumer group name.
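
For cross-checking which group ids the brokers report, a minimal sketch using the Java AdminClient's `listConsumerGroups()` API (the bootstrap address is an assumed placeholder); this is only an illustrative comparison, not a fix for the reported issue:

{code:java}
// Minimal sketch: print the consumer group ids known to the brokers via the
// Java AdminClient.
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ListGroupsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            for (ConsumerGroupListing listing : admin.listConsumerGroups().all().get()) {
                System.out.println(listing.groupId());
            }
        }
    }
}
{code}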



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)