[jira] [Resolved] (KAFKA-3978) Cannot truncate to a negative offset (-1) exception at broker startup

2018-03-13 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-3978.

   Resolution: Fixed
Fix Version/s: 1.2.0

> Cannot truncate to a negative offset (-1) exception at broker startup
> -
>
> Key: KAFKA-3978
> URL: https://issues.apache.org/jira/browse/KAFKA-3978
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
> Environment: 3.13.0-87-generic 
>Reporter: Juho Mäkinen
>Assignee: Dong Lin
>Priority: Critical
>  Labels: reliability, startup
> Fix For: 1.2.0
>
>
> During the broker startup sequence, the broker server.log shows this exception. 
> The problem persists after multiple restarts and also on another broker in the 
> cluster.
> {code}
> INFO [Socket Server on Broker 1002], Started 1 acceptor threads 
> (kafka.network.SocketServer)
> INFO [Socket Server on Broker 1002], Started 1 acceptor threads 
> (kafka.network.SocketServer)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [ExpirationReaper-1002], Starting  
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> INFO [GroupCoordinator 1002]: Starting up. 
> (kafka.coordinator.GroupCoordinator)
> INFO [GroupCoordinator 1002]: Starting up. 
> (kafka.coordinator.GroupCoordinator)
> INFO [GroupCoordinator 1002]: Startup complete. 
> (kafka.coordinator.GroupCoordinator)
> INFO [GroupCoordinator 1002]: Startup complete. 
> (kafka.coordinator.GroupCoordinator)
> INFO [Group Metadata Manager on Broker 1002]: Removed 0 expired offsets in 9 
> milliseconds. (kafka.coordinator.GroupMetadataManager)
> INFO [Group Metadata Manager on Broker 1002]: Removed 0 expired offsets in 9 
> milliseconds. (kafka.coordinator.GroupMetadataManager)
> INFO [ThrottledRequestReaper-Produce], Starting  
> (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> INFO [ThrottledRequestReaper-Produce], Starting  
> (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> INFO [ThrottledRequestReaper-Fetch], Starting  
> (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> INFO [ThrottledRequestReaper-Fetch], Starting  
> (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
> INFO Will not load MX4J, mx4j-tools.jar is not in the classpath 
> (kafka.utils.Mx4jLoader$)
> INFO Will not load MX4J, mx4j-tools.jar is not in the classpath 
> (kafka.utils.Mx4jLoader$)
> INFO Creating /brokers/ids/1002 (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> INFO Creating /brokers/ids/1002 (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
> INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
> INFO Registered broker 1002 at path /brokers/ids/1002 with addresses: 
> PLAINTEXT -> EndPoint(172.16.2.22,9092,PLAINTEXT) (kafka.utils.ZkUtils)
> INFO Registered broker 1002 at path /brokers/ids/1002 with addresses: 
> PLAINTEXT -> EndPoint(172.16.2.22,9092,PLAINTEXT) (kafka.utils.ZkUtils)
> INFO Kafka version : 0.10.0.0 (org.apache.kafka.common.utils.AppInfoParser)
> INFO Kafka commitId : b8642491e78c5a13 
> (org.apache.kafka.common.utils.AppInfoParser)
> INFO [Kafka Server 1002], started (kafka.server.KafkaServer)
> INFO [Kafka Server 1002], started (kafka.server.KafkaServer)
> Error when handling request 
> {controller_id=1004,controller_epoch=1,partition_states=[..REALLY LONG OUTPUT 
> SNIPPED AWAY..], 
> live_leaders=[{id=1004,host=172.16.6.187,port=9092},{id=1003,host=172.16.2.21,port=9092}]}
>  (kafka.server.KafkaApis)
> ERROR java.lang.IllegalArgumentException: Cannot truncate to a negative 
> offset (-1).
> at kafka.log.Log.truncateTo(Log.scala:731)
> at 
> kafka.log.LogManager$$anonfun$truncateTo$2.apply(LogManager.scala:288)
> at 
> kafka.log.LogManager$$anonfun$truncateTo$2.apply(LogManager.scala:280)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
> 

Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2018-03-13 Thread Matthias J. Sax
Warren,

thanks for following up on this KIP. And sorry for the "messy" discussion
thread. Adding this feature is a little tricky. We still hope to get it
into the 1.2 release, but at the moment there is not much progress.

However, for your use case, you can replace .map() with .transform(),
which allows you to access the record's timestamp (via the provided
`context` object) as extracted from the TimestampExtractor. See the docs
for more details:
https://kafka.apache.org/documentation/streams/developer-guide/dsl-api.html#applying-processors-and-transformers-processor-api-integration
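
For illustration, a rough sketch of what that could look like with the
1.0-era Streams API (this is not from the original thread; the class name,
topic name, types, and the value transformation are placeholders):

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class TimestampAccessExample {

    public static KStream<String, String> withRecordTimestamps(final StreamsBuilder builder) {
        return builder.<String, String>stream("sourceTopic")
            .transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
                private ProcessorContext context;

                @Override
                public void init(final ProcessorContext context) {
                    // the context exposes record metadata such as timestamp()
                    this.context = context;
                }

                @Override
                public KeyValue<String, String> transform(final String key, final String value) {
                    // timestamp() returns what the configured TimestampExtractor
                    // produced for the record currently being processed
                    final long recordTimestamp = context.timestamp();
                    return KeyValue.pair(key, value + "@" + recordTimestamp);
                }

                @Override
                public KeyValue<String, String> punctuate(final long timestamp) {
                    return null; // deprecated; no periodic work needed here
                }

                @Override
                public void close() {}
            });
    }
}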


-Matthias

On 3/13/18 12:51 PM, Warren, Brad wrote:
> Hi devs,
> 
>  
> 
> It’s a bit difficult to put all of the pieces together regarding the
> status and API changes around the KIPs dealing with exposing the record
> metadata in the Processor and DSL APIs.  This is a feature that my team
> here at American Airlines is keenly interested in and I’d like to
> provide a real world use case to help move the discussion along:
> 
>  
> 
> I have a source topic that contains a text value that includes datetimes
> without a year.  The desire is to order the records in a stream by an
> extracted timestamp from the record value and we plan to use the
> timestamp from the source topic to provide the year.  We’re hoping to
> use the DSL.  Something like:
> 
>  
> 
> val streamOrderedByMyValueTime = Builder.stream(“sourceTopic”).map( K,V
> -> KeyValue(KR, VR, timestamp) )
> 
>  
> 
> so then I can do
> 
>  
> 
> groupBy(), aggregate(), etc.
> 
>  
> 
> Inside the mapper, my timestamp would be something like
> LocalDateTime.of(yearFromIncomingConsumerRecordTimestamp,
> monthFromValue, dayFromValue, ….)
> 
>  
> 
> Looking at the wiki here
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73637757,
> what is the proposed implementation of RichValueMapper?  Is it going to
> support what I want to do here?
> 
>  
> 
>  
> 
> Thanks,
> 
> Brad
> 
>  
> 
> 
> Brad Warren
> 
> Principal Application Architect
> 
> Airport Technology
> 
> brad.war...@aa.com
> 
> 
> 
> 
>  
> 
> NOTICE: This email and any attachments are for the exclusive and
> confidential use of the intended recipient(s). If you are not an
> intended recipient, please do not read, distribute, or take action in
> reliance upon this message. If you have received this in error, please
> notify me immediately by return email and promptly delete this message
> and its attachments from your computer.
> 



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] 1.1.0 RC2

2018-03-13 Thread Satish Duggana
Hi Damian,
Thanks for starting vote thread for 1.1.0 release.

There may be a typo on the tag to be voted upon for this release candidate.
I guess it should be https://github.com/apache/kafka/tree/1.1.0-rc2 instead
of https://github.com/apache/kafka/tree/1.1.0-rc.

On Wed, Mar 14, 2018 at 8:27 AM, Satish Duggana 
wrote:

> Hi Damian,
> The release plan link given in the earlier mail is about the 1.0 release. You
> may want to replace it with the 1.1.0 release plan link [1].
>
> 1 - https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
>
> Thanks,
> Satish.
>
> On Wed, Mar 14, 2018 at 12:47 AM, Damian Guy  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the third candidate for release of Apache Kafka 1.1.0.
>>
>> This is a minor version release of Apache Kafka. It includes 29 new KIPs.
>> Please see the release plan for more details:
>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
>>
>> A few highlights:
>>
>> * Significant Controller improvements (much faster and session expiration
>> edge cases fixed)
>> * Data balancing across log directories (JBOD)
>> * More efficient replication when the number of partitions is large
>> * Dynamic Broker Configs
>> * Delegation tokens (KIP-48)
>> * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
>>
>> Release notes for the 1.1.0 release:
>> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Friday, March 16, 1pm PDT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/
>>
>> * Javadoc:
>> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
>>
>> * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
>> https://github.com/apache/kafka/tree/1.1.0-rc2
>>
>>
>> * Documentation:
>> http://kafka.apache.org/11/documentation.html
>> 
>>
>> * Protocol:
>> http://kafka.apache.org/11/protocol.html
>> 
>>
>> * Successful Jenkins builds for the 1.1 branch:
>> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/78
>> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/1.1/38/
>>
>> /**
>>
>> Thanks,
>> Damian
>>
>>
>> *
>>
>
>


Jenkins build is back to normal : kafka-trunk-jdk9 #472

2018-03-13 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.1.0 RC2

2018-03-13 Thread Satish Duggana
Hi Damian,
The release plan link given in the earlier mail is about the 1.0 release. You
may want to replace it with the 1.1.0 release plan link [1].

1 -
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546

Thanks,
Satish.

On Wed, Mar 14, 2018 at 12:47 AM, Damian Guy  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 1.1.0.
>
> This is a minor version release of Apache Kafka. It includes 29 new KIPs.
> Please see the release plan for more details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
>
> A few highlights:
>
> * Significant Controller improvements (much faster and session expiration
> edge cases fixed)
> * Data balancing across log directories (JBOD)
> * More efficient replication when the number of partitions is large
> * Dynamic Broker Configs
> * Delegation tokens (KIP-48)
> * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
>
> Release notes for the 1.1.0 release:
> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, March 16, 1pm PDT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/
>
> * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> https://github.com/apache/kafka/tree/1.1.0-rc2
>
>
> * Documentation:
> http://kafka.apache.org/11/documentation.html
> 
>
> * Protocol:
> http://kafka.apache.org/11/protocol.html
> 
>
> * Successful Jenkins builds for the 1.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/78
> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/1.1/38/
>
> /**
>
> Thanks,
> Damian
>
>
> *
>


Build failed in Jenkins: kafka-trunk-jdk7 #3252

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Remove unused server exceptions (#4701)

--
[...truncated 1.54 MB...]

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] PASSED
ERROR: Could not install GRADLE_3_5_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:895)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:458)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:666)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:631)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:391)
at hudson.scm.SCM.poll(SCM.java:408)
at hudson.model.AbstractProject._poll(AbstractProject.java:1384)
at hudson.model.AbstractProject.poll(AbstractProject.java:1287)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:594)
  

Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-03-13 Thread Jun Rao
Hi, Jan,

Thanks for sharing your view.

I agree with you that recopying the data potentially makes the state
management easier since the consumer can just rebuild its state from
scratch (i.e., no need for state reshuffling).

On the flip side, I saw a few disadvantages to the approach that you
suggested. (1) Building the state from the input topic from scratch is in
general less efficient than state reshuffling. Let's say one computes a
count per key from an input topic. The former requires reading all existing
records in the input topic, whereas the latter only requires reading data
proportional to the number of unique keys. (2) Switching the topic requires
modifying the application. If there are many applications on a topic,
coordinating such an effort may not be easy. Also, it's not clear how to
enforce exactly-once semantics during the switch. (3) If a topic doesn't
need any state management, recopying the data seems wasteful. In that case,
in-place partition expansion seems more desirable.

I understand your concern about adding complexity in KStreams. But, perhaps
we could iterate on that a bit more to see if it can be simplified.

Jun


On Mon, Mar 12, 2018 at 11:21 PM, Jan Filipiak 
wrote:

> Hi Jun,
>
> I will focus on point 61, as I think it's _the_ fundamental part that I can't
> get across at the moment.
>
> Kafka is the platform to have state materialized multiple times from one
> input. I emphasize this: It is the building block in architectures that
> allow you to
> have your state maintained multiple times. You put a message in once, and
> you have it pop out as often as you like. I believe you understand this.
>
> Now! The path of thinking goes as follows: I am using Apache Kafka and
> I _want_ my state multiple times. What am I going to do?
>
> A) Am I going to take the state that I built up, put some sort of RPC
> layer on top of it, and use that RPC layer to throw my records across instances?
> B) Am I just going to read the damn message twice?
>
> Approach A is fundamentally flawed and a violation of all that is good and
> holy in Kafka deployments. I cannot understand how this idea can come up in
> the first place.
> (I do understand: IQ in Streams. It polluted the Kafka Streams codebase
> really badly already. It is not funny! I think it is equally flawed as A.)
>
> I say, we do what Kafka is good at. We repartition the topic once. We
> switch the consumers.
> (Those that need more partitions are going to rebuild their state in
> multiple partitions by reading the new topic, those that don't just assign
> the new partitions properly)
> We switch producers. Done!
>
> The best thing! It is trivial; a hipster stream processor will have an easy
> time with that as well. It's so super simple. And simple IS good!
> It is what Kafka was built to do. It is how we do it today. All I am saying
> is that a little broker help doing the producer swap would be super useful.
>
> For everyone interested in why kafka is so powerful with approach B,
> please watch https://youtu.be/bEbeZPVo98c?t=1633
> I already looked up a good point in time; I think after 5 minutes the
> "state" topic is handled, and you should be able to understand me
> an inch better.
>
> Please do not do A to the project, it deserves better!
>
> Best Jan
>
>
>
> On 13.03.2018 02:40, Jun Rao wrote:
>
>> Hi, Jan,
>>
>> Thanks for the reply. A few more comments below.
>>
>> 50. Ok, we can think a bit harder for supporting compacted topics.
>>
>> 51. This is a fundamental design question. In the more common case, the
>> reason why someone wants to increase the number of partitions is that the
>> consumer application is slow and one wants to run more consumer instances
>> to increase the degree of parallelism. So, fixing the number of running
>> consumer instances when expanding the partitions won't help this case. If
>> we do need to increase the number of consumer instances, we need to
>> somehow
>> reshuffle the state of the consumer across instances. What we have been
>> discussing in this KIP is whether we can do this more effectively through
>> the KStream library (e.g. through a 2-phase partition expansion). This
>> will
>> add some complexity, but it's probably better than everyone doing this in
>> the application space. The recopying approach that you mentioned doesn't
>> seem to address the consumer state management issue when the consumer
>> switches from an old to a new topic.
>>
>> 52. As for your example, it depends on whether the join key is the same
>> between (A,B) and (B,C). If the join key is the same, we can do a 2-phase
>> partition expansion of A, B, and C together. If the join keys are
>> different, one would need to repartition the data on a different key for
>> the second join, then the partition expansion can be done independently
>> between (A,B) and (B,C).
>>
>> 53. If you always fix the number of consumer instances, what you described
>> works. However, as I mentioned in #51, I am not sure how your proposal
>> dea

[jira] [Created] (KAFKA-6652) The controller should log failed attempts to transition a replica to OfflineReplica state

2018-03-13 Thread Lucas Wang (JIRA)
Lucas Wang created KAFKA-6652:
-

 Summary: The controller should log failed attempts to transition a 
replica to OfflineReplica state
 Key: KAFKA-6652
 URL: https://issues.apache.org/jira/browse/KAFKA-6652
 Project: Kafka
  Issue Type: Improvement
Reporter: Lucas Wang
Assignee: Lucas Wang


In certain conditions, the controller's attempt to transition a replica to 
OfflineReplica state could fail, e.g. the condition described in 
[KAFKA-6650|https://issues.apache.org/jira/browse/KAFKA-6650]. When that 
happens, there should be logs to indicate the failed state transitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2473

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Remove unused server exceptions (#4701)

--
[...truncated 416.46 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offset

[jira] [Created] (KAFKA-6651) SchemaBuilder should not allow Arrays or Maps to be created by type()

2018-03-13 Thread Jeremy Custenborder (JIRA)
Jeremy Custenborder created KAFKA-6651:
--

 Summary: SchemaBuilder should not allow Arrays or Maps to be 
created by type()
 Key: KAFKA-6651
 URL: https://issues.apache.org/jira/browse/KAFKA-6651
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Jeremy Custenborder


The following code should throw an exception because we cannot set 
valueSchema() or keySchema() once the builder is returned. 
{code:java}
SchemaBuilder.type(Schema.Type.ARRAY);
SchemaBuilder.type(Schema.Type.MAP);{code}
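
For comparison, a minimal sketch of the dedicated factory methods, which do
capture the nested schemas up front (standard Connect API; the variable names
are placeholders):
{code:java}
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

// array(...) and map(...) take the element/key/value schemas at construction
// time, which type(Schema.Type.ARRAY) / type(Schema.Type.MAP) cannot do
Schema arraySchema = SchemaBuilder.array(Schema.STRING_SCHEMA).build();
Schema mapSchema = SchemaBuilder.map(Schema.STRING_SCHEMA, Schema.INT32_SCHEMA).build();{code}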
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6650) The controller should be able to handle a partially deleted topic

2018-03-13 Thread Lucas Wang (JIRA)
Lucas Wang created KAFKA-6650:
-

 Summary: The controller should be able to handle a partially 
deleted topic
 Key: KAFKA-6650
 URL: https://issues.apache.org/jira/browse/KAFKA-6650
 Project: Kafka
  Issue Type: Bug
Reporter: Lucas Wang
Assignee: Lucas Wang


A previous controller could have deleted some partitions of a topic from ZK, 
but not all partitions, and then died.
In that case, the new controller should be able to handle the partially deleted 
topic, and finish the deletion.

In the current code base, if there is no leadership info for a replica's 
partition, the transition to OfflineReplica state for the replica will fail. 
Afterwards the transition to ReplicaDeletionStarted will fail as well since the 
only valid previous state for ReplicaDeletionStarted is OfflineReplica. 
Furthermore, it means the topic deletion will never finish.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #2472

2018-03-13 Thread Apache Jenkins Server
See 




Re: Requesting a minor UI change on the documentation page

2018-03-13 Thread Guozhang Wang
Hello Raman,

Your attached images seem to have been stripped by the Apache mailing list.
Could you upload them to some sharable links and reference them here?


Guozhang


On Tue, Mar 13, 2018 at 2:53 AM, raman bedi  wrote:

>  Hi Team,
> This is a rather insignificant change, but it will help beginners like me
> who come to your page avoid confusion about which links on the page are
> clickable while watching the video:
>
> On the URL:
> https://kafka.apache.org/0110/documentation/streams/
>
>
> As you can see, when I hover over the links, the mouse cursor changes to a
> grab icon rather than a pointer. This makes it hard to recognize the link
> as a clickable area.
>
> By simply changing the following CSS on the page:
>
>
>  You can easily make it look as follows:
>
>
>  Thus resolving the confusion.
>
> Thanks and Regards,
> Raman Bedi
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-6649) ReplicaFetcher stopped after non fatal exception is thrown

2018-03-13 Thread Julio Ng (JIRA)
Julio Ng created KAFKA-6649:
---

 Summary: ReplicaFetcher stopped after non fatal exception is thrown
 Key: KAFKA-6649
 URL: https://issues.apache.org/jira/browse/KAFKA-6649
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 1.1.0
Reporter: Julio Ng


We have seen several under-replicated partitions, usually triggered by topic 
creation. After digging into the logs, we see the following:
{noformat}
[2018-03-12 22:40:17,641] ERROR [ReplicaFetcher replicaId=12, leaderId=0, 
fetcherId=1] Error due to (kafka.server.ReplicaFetcherThread)
kafka.common.KafkaException: Error processing data for partition 
[[TOPIC_NAME_REMOVED]]-84 offset 2098535
 at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:204)
 at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:169)
 at scala.Option.foreach(Option.scala:257)
 at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:169)
 at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:166)
 at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:166)
 at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
 at 
kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
 at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:164)
 at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Caused by: org.apache.kafka.common.errors.OffsetOutOfRangeException: Cannot 
increment the log start offset to 2098535 of partition 
[[TOPIC_NAME_REMOVED]]-84 since it is larger than the high watermark -1
[2018-03-12 22:40:17,641] INFO [ReplicaFetcher replicaId=12, leaderId=0, 
fetcherId=1] Stopped (kafka.server.ReplicaFetcherThread){noformat}
It looks like after the ReplicaFetcherThread is stopped, the replicas 
start to lag behind, presumably because we are not fetching from the leader 
anymore. Examining the ShutdownableThread.scala code further:
{noformat}
override def run(): Unit = {
  info("Starting")
  try {
    while (isRunning)
      doWork()
  } catch {
    case e: FatalExitError =>
      shutdownInitiated.countDown()
      shutdownComplete.countDown()
      info("Stopped")
      Exit.exit(e.statusCode())
    case e: Throwable =>
      if (isRunning)
        error("Error due to", e)
  } finally {
    shutdownComplete.countDown()
  }
  info("Stopped")
}{noformat}
For the Throwable (non-fatal) case, it just exits the while loop and the thread 
stops doing work. I am not sure whether this is the intended behavior of 
ShutdownableThread, or whether the exception should be caught so that we keep 
calling doWork().
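
For illustration, a self-contained sketch (in Java, not the actual Kafka Scala 
code; the class and member names are hypothetical) of the alternative behavior 
described above, where non-fatal errors are caught per iteration so the thread 
keeps working:
{code:java}
import java.util.concurrent.CountDownLatch;

// Hypothetical alternative: swallow non-fatal errors inside the loop so
// that doWork() keeps being called, instead of letting the thread die.
abstract class KeepWorkingThread extends Thread {
    private final CountDownLatch shutdownComplete = new CountDownLatch(1);
    private volatile boolean running = true;

    abstract void doWork() throws Exception;

    @Override
    public void run() {
        try {
            while (running) {
                try {
                    doWork();
                } catch (Throwable t) {
                    // log and continue rather than leaving the loop
                    System.err.println("Error due to: " + t);
                }
            }
        } finally {
            shutdownComplete.countDown();
        }
    }

    void shutdown() throws InterruptedException {
        running = false;
        shutdownComplete.await();
    }
}{code}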

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk7 #3251

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6634: Delay starting new transaction in task.initializeTopology

--
[...truncated 1.91 MB...]
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:895)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:458)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:666)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:631)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:391)
at hudson.scm.SCM.poll(SCM.java:408)
at hudson.model.AbstractProject._poll(AbstractProject.java:1384)
at hudson.model.AbstractProject.poll(AbstractProject.java:1287)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:594)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:640)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldProcessDataFromStoresWithLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldProcessDataFromStoresWithLoggingDisabled PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarlies

Requesting a minor UI change on the documentation page

2018-03-13 Thread raman bedi
 Hi Team,
This is a rather insignificant change, but it will help beginners like me
who come to your page avoid confusion about which links on the page are
clickable while watching the video:

On the URL:
https://kafka.apache.org/0110/documentation/streams/


As you can see, when I hover over the links, the mouse cursor changes to a
grab icon rather than a pointer. This makes it hard to recognize the link
as a clickable area.

By simply changing the following CSS on the page:


 You can easily make it look as follows:


 Thus resolving the confusion.

Thanks and Regards,
Raman Bedi


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2018-03-13 Thread Warren, Brad
Hi devs,

It's a bit difficult to put all of the pieces together regarding the status and 
API changes around the KIPs dealing with exposing the record metadata in the 
Processor and DSL APIs.  This is a feature that my team here at American 
Airlines is keenly interested in and I'd like to provide a real world use case 
to help move the discussion along:

I have a source topic that contains a text value that includes datetimes 
without a year.  The desire is to order the records in a stream by an extracted 
timestamp from the record value and we plan to use the timestamp from the 
source topic to provide the year.  We're hoping to use the DSL.  Something like:

val streamOrderedByMyValueTime = Builder.stream("sourceTopic").map( K,V -> 
KeyValue(KR, VR, timestamp) )

so then I can do

groupBy(), aggregate(), etc.

Inside the mapper, my timestamp would be something like 
LocalDateTime.of(yearFromIncomingConsumerRecordTimestamp, monthFromValue, 
dayFromValue, )

Looking at the wiki here 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73637757, what 
is the proposed implementation of RichValueMapper?  Is it going to support what 
I want to do here?


Thanks,
Brad


Brad Warren
Principal Application Architect
Airport Technology

brad.war...@aa.com






NOTICE: This email and any attachments are for the exclusive and confidential 
use of the intended recipient(s). If you are not an intended recipient, please 
do not read, distribute, or take action in reliance upon this message. If you 
have received this in error, please notify me immediately by return email and 
promptly delete this message and its attachments from your computer.


[VOTE] 1.1.0 RC2

2018-03-13 Thread Damian Guy
Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 1.1.0.

This is a minor version release of Apache Kafka. It includes 29 new KIPs.
Please see the release plan for more details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913

A few highlights:

* Significant Controller improvements (much faster and session expiration
edge cases fixed)
* Data balancing across log directories (JBOD)
* More efficient replication when the number of partitions is large
* Dynamic Broker Configs
* Delegation tokens (KIP-48)
* Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)

Release notes for the 1.1.0 release:
http://home.apache.org/~damianguy/kafka-1.1.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Friday, March 16, 1pm PDT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~damianguy/kafka-1.1.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~damianguy/kafka-1.1.0-rc2/javadoc/

* Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
https://github.com/apache/kafka/tree/1.1.0-rc2


* Documentation:
http://kafka.apache.org/11/documentation.html


* Protocol:
http://kafka.apache.org/11/protocol.html


* Successful Jenkins builds for the 1.1 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/78
System tests: https://jenkins.confluent.io/job/system-test-kafka/job/1.1/38/

/**

Thanks,
Damian


*


Jenkins build is back to normal : kafka-1.1-jdk7 #78

2018-03-13 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6648) Fetcher.getTopicMetadata() only returns "healthy" partitions, not all

2018-03-13 Thread radai rosenblatt (JIRA)
radai rosenblatt created KAFKA-6648:
---

 Summary: Fetcher.getTopicMetadata() only returns "healthy" 
partitions, not all
 Key: KAFKA-6648
 URL: https://issues.apache.org/jira/browse/KAFKA-6648
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.0.1
Reporter: radai rosenblatt
Assignee: radai rosenblatt


{code}
if (!shouldRetry) {
    HashMap<String, List<PartitionInfo>> topicsPartitionInfos = new HashMap<>();
    for (String topic : cluster.topics())
        topicsPartitionInfos.put(topic,
            cluster.availablePartitionsForTopic(topic));
    return topicsPartitionInfos;
}
{code}

This leads to inconsistent behavior upstream, for example in 
KafkaConsumer.partitionsFor(): if there is valid metadata, all partitions are 
returned, whereas if the metadata doesn't exist (or has expired), only a subset 
of partitions (the healthy ones) is returned.
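
A minimal sketch of one possible fix, assuming the intent is to always return 
all known partitions (Cluster.partitionsForTopic() does not filter out 
leaderless partitions, unlike availablePartitionsForTopic()):
{code}
if (!shouldRetry) {
    HashMap<String, List<PartitionInfo>> topicsPartitionInfos = new HashMap<>();
    for (String topic : cluster.topics())
        // partitionsForTopic() returns every known partition, including
        // those currently without an available leader
        topicsPartitionInfos.put(topic, cluster.partitionsForTopic(topic));
    return topicsPartitionInfos;
}
{code}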



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk7 #3250

2018-03-13 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6647) KafkaStreams.cleanUp creates .lock file in directory its trying to clean

2018-03-13 Thread George Bloggs (JIRA)
George Bloggs created KAFKA-6647:


 Summary: KafkaStreams.cleanUp creates .lock file in directory its 
trying to clean
 Key: KAFKA-6647
 URL: https://issues.apache.org/jira/browse/KAFKA-6647
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.1
 Environment: windows 10.
java version "1.8.0_162"
Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
org.apache.kafka:kafka-streams:1.0.1
Reporter: George Bloggs


When calling kafkaStreams.cleanUp() before starting a stream, the 
StateDirectory.cleanRemovedTasks() method contains this check:
{code:java}
// ... Line 240
if (lock(id, 0)) {
    long now = time.milliseconds();
    long lastModifiedMs = taskDir.lastModified();
    if (now > lastModifiedMs + cleanupDelayMs) {
        log.info("{} Deleting obsolete state directory {} for task {} as {}ms has elapsed (cleanup delay is {}ms)",
            logPrefix(), dirName, id, now - lastModifiedMs, cleanupDelayMs);
        Utils.delete(taskDir);
    }
}
{code}
The check for lock(id, 0) will create a .lock file in the directory that is 
subsequently going to be deleted. If the .lock file already exists from a 
previous run, the attempt to delete it fails with an AccessDeniedException.

This leaves the .lock file in the taskDir. Utils.delete(taskDir) will then 
attempt to remove the taskDir path by calling Files.delete(path).

The call to Files.delete(path) in postVisitDirectory will then fail with a 
java.nio.file.DirectoryNotEmptyException, as the failed attempt to delete the 
.lock file left the directory non-empty.

This then seems to cause issues when streaming from a topic into an in-memory 
store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6646) Add a GlobalKStream object type for stream event broadcast

2018-03-13 Thread Antony Stubbs (JIRA)
Antony Stubbs created KAFKA-6646:


 Summary: Add a GlobalKStream object type for stream event broadcast
 Key: KAFKA-6646
 URL: https://issues.apache.org/jira/browse/KAFKA-6646
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Affects Versions: 1.1.0
Reporter: Antony Stubbs


There are some use cases where having a global KStream object is useful. For 
example, where a single event is sent, with very low frequency, to a cluster of 
Kafka stream nodes to trigger all nodes to do some processing of state stored 
on their instance.

The current workaround is either to create a second KStreams app instance, 
being careful to configure it with a different state dir and a unique app name 
per instance, and then create a KStream in each one; or to use the normal 
consumer client inside your KStreams app with unique consumer groups (see the 
sketch below).
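
A minimal sketch of that second workaround (bootstrap servers, topic name, and 
deserializers are placeholders; assumes the standard Java consumer):
{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
// a unique group.id per instance means every instance receives every
// broadcast event on the topic
props.put("group.id", "broadcast-" + UUID.randomUUID());
props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("broadcast-topic"));
ConsumerRecords<String, String> events = consumer.poll(1000);
// ... trigger the per-instance processing for each received event{code}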



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk9 #471

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6634: Delay starting new transaction in task.initializeTopology

--
[...truncated 1.48 MB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateSavedOffsetWhenOffsetToClearToIsBetweenEpochs PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest >

[jira] [Resolved] (KAFKA-4027) Leader for a certain partition unavailable forever

2018-03-13 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4027.
--
Resolution: Duplicate

This behavior was changed in KAFKA-5758. Please reopen if you think the issue 
still exists in newer versions.

> Leader for a certain partition unavailable forever
> -
>
> Key: KAFKA-4027
> URL: https://issues.apache.org/jira/browse/KAFKA-4027
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: tu nguyen khac
>Priority: Major
>
> I have a cluster of 9 brokers, numbered 0 through 8.
> Yesterday some servers went down (hard reset). I restarted these (down) 
> servers, but after that some topics could not assign a leader.
> I checked the server log and retrieved this logging:
> kafka.common.NotAssignedReplicaException: Leader 1 failed to record follower 
> 6's position -1 since the replica is not recognized to be one of the assigned 
> replicas 1 for partition [tos_htv3tv.com,31].
>   at 
> kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:251)
>   at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:864)
>   at 
> kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:861)
>   at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
>   at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>   at 
> kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:861)
>   at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:470)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:496)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:77)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)
> I tried to run Preferred Leader election, but it didn't work (some partitions 
> have no leader) :(



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-257 - Configurable Quota Management

2018-03-13 Thread Viktor Somogyi
Hi Rajini,

Well, I didn't really have specific use cases for having other metadata
(like isr and replicas), just thought it would be a more robust interface.
But yea, since currently there are no specific use cases, it makes sense
not to include them.

Viktor

On Tue, Mar 13, 2018 at 10:47 AM, Rajini Sivaram 
wrote:

> I have submitted a PR with the changes proposed in this KIP (
> https://github.com/apache/kafka/pull/4699). I added an additional method
> to
> the quota callback in the KIP to simplify metrics updates when quotas are
> updated.
>
> Feedback and suggestions are welcome. If there are no other concerns, I
> will start the vote later this week.
>
> Thank you,
>
> Rajini
>
> On Wed, Mar 7, 2018 at 12:34 PM, Rajini Sivaram 
> wrote:
>
> > Hi Viktor,
> >
> > Thanks for reviewing the KIP.
> >
> > 1. Yes, that is correct. Typically quotas would depend only on the
> current
> > partition state. But if you did want to track deleted partitions, you can
> > calculate the diff.
> > 2. I can't think of a use case where ISRs or other replica information
> > would be useful to configure quotas. Since partition leaders process
> > fetch/produce requests, this is clearly useful in terms of setting
> quotas.
> > But I have defined PartitionMetadata trait rather than just using the
> > leader as an int so that we can add additional methods in future if
> > required. This keeps the interface extensible. Did you have any use case
> in
> > mind where additional metadata would be useful?
> >
> > Regards,
> >
> > Rajini
> >
> > On Tue, Mar 6, 2018 at 8:56 AM, Viktor Somogyi 
> > wrote:
> >
> >> Hi Rajini,
> >>
> >> I've read through your KIP and it looks good, I only have two things to
> >> clarify.
> >> 1. How do we detect removed partitions in updatePartitionMetadata? I'm
> >> presuming that the list here is the currently existing map of
> partitions,
> >> so if something is removed it can be calculated as the diff of the
> current
> >> and the previous update. Is that right?
> >> 2. PartitionMetadata contains only the leader at this moment, however
> >> there
> >> are similar classes that contain more information, like the replicas,
> isr,
> >> offline replicas. I think including them might make sense to provide a
> >> more
> >> robust API. What do you think?
> >>
> >> Thanks,
> >> Viktor
> >>
> >> On Wed, Feb 21, 2018 at 7:57 PM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> >> wrote:
> >>
> >> > Hi all,
> >> >
> >> > I have submitted KIP-257 to enable customisation of client quota
> >> > computation:
> >> >
> >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-257+-+Configurable+Quota+Management
> >> >
> >> >
> >> > The KIP proposes to make quota management pluggable to enable
> >> group-based
> >> > and partition-based quotas for clients.
> >> >
> >> > Feedback and suggestions are welcome.
> >> >
> >> > Thank you...
> >> >
> >> > Regards,
> >> >
> >> > Rajini
> >> >
> >>
> >
> >
>
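To make point 1 in the thread above concrete, computing removed partitions as the
diff of two updatePartitionMetadata snapshots is a simple set operation. A minimal
sketch, assuming the callback receives the full current partition set on every
update; the generic element type stands in for whatever partition type the KIP-257
callback actually uses:

import java.util.HashSet;
import java.util.Set;

final class PartitionDiff {
    // Anything present in the previous snapshot but absent from the current
    // one was deleted between the two updates.
    static <P> Set<P> removedPartitions(Set<P> previous, Set<P> current) {
        Set<P> removed = new HashSet<>(previous);
        removed.removeAll(current);
        return removed;
    }
}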


[jira] [Resolved] (KAFKA-6634) Delay initiating the txn on producers until initializeTopology with EOS turned on

2018-03-13 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6634.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

> Delay initiating the txn on producers until initializeTopology with EOS 
> turned on
> -
>
> Key: KAFKA-6634
> URL: https://issues.apache.org/jira/browse/KAFKA-6634
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 1.1.0
>
>
> In the Streams EOS implementation, the producers created for tasks initiate 
> a txn immediately after being created, in the constructor of `StreamTask`. 
> However, because of the restoration process, the task may not process any 
> data, and hence the producer may not send any records for that started txn, 
> for a long time. With the default txn.session.timeout of 60 seconds, this 
> means that if restoration takes longer than that, then upon starting, the 
> producer will immediately get an error that its producer epoch is already 
> old.
> To fix this, we should consider initiating the txn only after the 
> restoration phase is done. This has the caveat that if the producer is 
> already fenced, it will not be notified until then, in initializeTopology. 
> But I think this should not be a correctness issue, since during the 
> restoration process we do not make any changes to the processing state.
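A minimal sketch of the proposed change, with illustrative class and method
names rather than the actual Kafka Streams internals: the txn calls move out of
the constructor and into the topology-initialization step, which runs only after
restoration completes.

import org.apache.kafka.clients.producer.Producer;

public class StreamTaskSketch {
    private final Producer<byte[], byte[]> producer;
    private boolean txnInitialized = false;

    StreamTaskSketch(Producer<byte[], byte[]> producer) {
        // Note: no initTransactions()/beginTransaction() here -- starting
        // the txn in the constructor starts the transaction timer before
        // restoration has even begun.
        this.producer = producer;
    }

    void initializeTopology() {
        if (!txnInitialized) {
            // Initiate the txn only once restoration is complete, so a long
            // restore cannot outlive the transaction timeout.
            producer.initTransactions();
            producer.beginTransaction();
            txnInitialized = true;
        }
        // ... rest of topology initialization ...
    }
}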



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-255: OAuth Authentication via SASL/OAUTHBEARER

2018-03-13 Thread Rajini Sivaram
Hi Ron,

Thanks for the response. That all sounds good; I think the only outstanding
question is around callbacks vs. classes provided through the login context.
As you have pointed out, there are advantages to both approaches. Even
though my preference is for callbacks, it is not a blocker since the
current approach works fine too. I will make the case for callbacks anyway,
using OAuthTokenValidator as an example:


   - As you mentioned, the main advantage of using callbacks is
   consistency. It is the standard plug-in mechanism for SASL implementations
   in Java and keeps code consistent with built-in mechanisms like Kerberos as
   well as our own implementations like PLAIN and SCRAM.
   - With the current approach, there are two classes: OAuthTokenValidator
   and a default implementation, OAuthBearerUnsecuredJwtValidator. I was
   thinking that we would instead have a public callback class,
   OAuthTokenValidatorCallback (see the sketch after this list), and a
   default callback handler, OAuthBearerUnsecuredJwtValidatorCallbackHandler.
   So it would be two classes either way.
   - JAAS config is very opaque; we don't log it because it could contain
   passwords. Your option substitution classes could help here, but this
   opacity has generally made it difficult to diagnose failures in the past.
   Callback handlers, on the other hand, are logged as part of the broker
   configs and can easily be made dynamically updatable.
   - In the current implementation, an instance of OAuthTokenValidator is
   created and configured for every SaslServer, i.e. every connection. We
   create one server callback handler instance per mechanism and cache it,
   which is useful if we need to make an external connection or load trust
   stores, etc.
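As a concrete illustration of the callback approach, here is a minimal sketch of
what the proposed OAuthTokenValidatorCallback could look like. The shape below is
an assumption based on standard javax.security.auth.callback usage, not an actual
class from the KIP or any released Kafka API:

import javax.security.auth.callback.Callback;

// Hypothetical shape of the proposed callback class; names and fields are
// illustrative assumptions, not a released Kafka API.
public class OAuthTokenValidatorCallback implements Callback {
    private final String tokenValue;  // compact serialization of the token to validate
    private Object validatedToken;    // set by the callback handler on success

    public OAuthTokenValidatorCallback(String tokenValue) {
        this.tokenValue = tokenValue;
    }

    public String tokenValue() {
        return tokenValue;
    }

    // The SaslServer passes this callback to the configured callback handler;
    // the handler validates tokenValue and records the result here.
    public void validatedToken(Object token) {
        this.validatedToken = token;
    }

    public Object validatedToken() {
        return validatedToken;
    }
}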

For token retriever, I think either approach is fine, since it is tied in
with login anyway and would benefit from login manager cache either way.

Regards,

Rajini

On Sat, Mar 10, 2018 at 4:19 AM, Ron Dagostino  wrote:

> Hi Rajini.  Thanks for the great feedback.  See below for my
> thoughts/conclusions.  I haven't implemented any of it yet or changed the
> KIP, but I will start to work on the areas where we are in agreement
> immediately, and I will await your feedback on the areas where an
> additional iteration is needed to arrive at a conclusion.
>
> Regarding (1), yes, we can and should eliminate some public API.  See
> below.
>
> Regarding (2), I will change the exception hierarchy so that it is
> unchecked.
>
> Regarding (3) and (4), yes, I agree, the expiring/refresh code can and
> should be simplified.  The name of the Login class (I called it
> ExpiringCredentialRefreshingLogin) must be part of the public API because
> it is the class that must be set via the oauthbearer.sasl.login.class
> property.  Its underlying implementation doesn't have to be public, but the
> fully-qualified name has to be well-known and fixed so that it can be
> associated with that configuration property.  As you point out, we are not
> unifying the refresh logic for OAUTHBEARER and GSSAPI, though it could be
> undertaken at some point in the future; the name "
> ExpiringCredentialRefreshingLogin" should probably be used if/when that
> unification happens.  In the meantime, the class that we expose should
> probably be called "OAuthBearerLogin", and its fully-qualified name and
> the fact that it recognizes several refresh-related property names in the
> config, with certain min/max/default values, are the only things that
> should be public.  I also agree from (4) that we can stipulate that
> SASL/OAUTHBEARER only supports the case where OAUTHBEARER is the only SASL
> mechanism communicated to the code, either because there is only one SASL
> mechanism defined for the cluster or because the config is done via the new
> dynamic functionality from KIP-226 that eliminates the
> mechanism-to-login-module ambiguity associated with declaring multiple SASL
> mechanisms in a single JAAS config file.  Given all of this, everything I
> defined for token refresh could be internal implementation detail except
> for ExpiringCredentialLoginModule, which would no longer be needed, and we
> only have to expose a single class called OAuthBearerLogin.
>
> Regarding (5), I'm glad you agree the substitutable module options
> functionality is generally useful, and I will publish a separate KIP for
> it.  I'm thinking the package will be
> org.apache.kafka.common.security.optionsubs (I'll gladly accept anything
> better if anyone can come up with something -- "optionsubs" is better than
> "smo" but it still isn't that great, and unfortunately it is the best
> relatively short thing I can think of at the moment).  I'll also see what I
> can do to minimize the surface area of the API; that discussion can be done
> separately as part of that KIP's discussion thread.
>
> Regarding (6), I agree that exposing the validated token via a publicly
> defined SaslServer negotiated property name eliminates the need for the
> OAuthBearerSaslServer interface; I will ma

Build failed in Jenkins: kafka-trunk-jdk8 #2471

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: add DEFAULT_PORT for Trogdor Agent and Coordinator 
(#4674)

--
[...truncated 419.52 KB...]

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates PASSED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.a

Build failed in Jenkins: kafka-trunk-jdk9 #470

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: add DEFAULT_PORT for Trogdor Agent and Coordinator 
(#4674)

--
[...truncated 1.48 MB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED
ERROR: Could not install GRADLE_4_4_HOME
java.lang.NullPointerException
at 
hudson

Build failed in Jenkins: kafka-trunk-jdk8 #2470

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6624; Prevent concurrent log flush and log deletion (#4663)

[jason] KAFKA-6024; Move arg validation in KafkaConsumer ahead of

[jason] MINOR: Use large batches in metrics test for conversion time >= 1ms

[github] MINOR: Streams doc example should not close store (#4667)

--
[...truncated 3.09 MB...]

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortTransactionIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortTransactionIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldBeFlushedWithAutoCompleteIfBufferedRecords STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldBeFlushedWithAutoCompleteIfBufferedRecords PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldNotBeFlushedAfterFlush STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldNotBeFlushedAfterFlush PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnBeginTransactionIfTransactionsNotInitialized STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnBeginTransactionIfTransactionsNotInitialized PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendIfProducerGotFenced STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendIfProducerGotFenced PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCommitTransactionIfNoTransactionGotStarted STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCommitTransactionIfNoTransactionGotStarted PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowSendOffsetsToTransactionIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowSendOffsetsToTransactionIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCloseIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCloseIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishLatestAndCumulativeConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled
 STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishLatestAndCumulativeConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled
 PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendOffsetsToTransactionTransactionIfNoTransactionGotStarted 
STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendOffsetsToTransactionTransactionIfNoTransactionGotStarted PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnNullConsumerGroupIdWhenSendOffsetsToTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnNullConsumerGroupIdWhenSendOffsetsToTransaction PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldAddOffsetsWhenSendOffsetsToTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldAddOffsetsWhenSendOffsetsToTransaction PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortForNonAutoCompleteIfTransactionsAreEnabled STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortForNonAutoCompleteIfTransactionsAreEnabled PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldResetSentOffsetsFlagOnlyWhenBeginningNewTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldResetSentOffsetsFlagOnlyWhenBeginningNewTransaction PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnInitTransactionIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnInitTransactionIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPreserveCommittedMessagesOnAbortIfTransactionsAreEnabled STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPreserveCommittedMessagesOnAbortIfTransactionsAreEnabled PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldIgnoreEmptyOffsetsWhenSendOffsetsToTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldIgnoreEmptyOffsetsWhenSendOffsetsToTransaction PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnInitTransactionIfProducerAlreadyInitializedForTransactions STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnInitTransactionIfProducerAlreadyInitializedForTransactions PASSED

org.apache.kafka.clients.prod

Build failed in Jenkins: kafka-1.1-jdk7 #77

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Use large batches in metrics test for conversion time >= 1ms

[matthias] MINOR: Streams doc example should not close store (#4667)

--
[...truncated 1.89 MB...]

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = false] PASSED
ERROR: Could not install GRADLE_3_5_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:895)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:458)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:666)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:631)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:391)
at hudson.scm.SCM.poll(SCM.java:408)
at hudson.model.AbstractProject._poll(AbstractProject.java:1384)
at hudson.model.AbstractProject.poll(AbstractProject.java:1287)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:594)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:640)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration STARTED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration PASSED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled 
STARTED

org.apache.kafka.streams.integration.KTable

Build failed in Jenkins: kafka-trunk-jdk7 #3249

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Streams doc example should not close store (#4667)

--
[...truncated 420.43 KB...]

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginningNewConsumer
 STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginningNewConsumer
 PASSED

kafka.tools.ConsoleConsumerTest > 
shouldSetAutoResetToSmallestWhenFromBeginningConfigured STARTED

kafka.tools.ConsoleConsumerTest > 
shouldSetAutoResetToSmallestWhenFromBeginningConfigured PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewConsumerConfigWithAutoOffsetResetEarliest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewConsumerConfigWithAutoOffsetResetEarliest PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidOldConsumerConfigWithAutoOffsetResetLargest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidOldConsumerConfigWithAutoOffsetResetLargest PASSED

kafka.tools.ConsoleConsumerTest > 
shouldResetUnConsumedOffsetsBeforeExitForNewConsumer STARTED

kafka.tools.ConsoleConsumerTest > 
shouldResetUnConsumedOffsetsBeforeExitForNewConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
STARTED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginningOldConsumer
 STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginningOldConsumer
 PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewConsumerConfigWithAutoOffsetResetLatest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewConsumerConfigWithAutoOffsetResetLatest PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.message.MessageCompressionTest > testCompressSize STARTED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress STARTED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageTest > testChecksum STARTED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp STARTED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable STARTED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination STARTED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping STARTED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues STARTED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte STARTED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality STARTED

kafka.message.MessageTest > testEquality PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq 
STARTED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially STARTED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.m

[jira] [Created] (KAFKA-6645) Sticky Partition Assignment across Kafka Streams application restarts

2018-03-13 Thread Giridhar Addepalli (JIRA)
Giridhar Addepalli created KAFKA-6645:
-

 Summary: Sticky Partition Assignment across Kafka Streams 
application restarts
 Key: KAFKA-6645
 URL: https://issues.apache.org/jira/browse/KAFKA-6645
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Giridhar Addepalli


Since Kafka Streams applications generally have a lot of state in their 
stores, it would be good to remember the assignment of partitions to 
machines, so that when the whole application is restarted for whatever 
reason, the past assignment can be reused. There would then be no need to 
rebuild state by reading the changelog Kafka topic, which would result in 
faster start-up.

Samza has support for Host Affinity 
(https://samza.apache.org/learn/documentation/0.14/yarn/yarn-host-affinity.html)

KIP-54 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-54+-+Sticky+Partition+Assignment+Strategy) 
handles cases where some members of a consumer group go down or come up, and 
ensures there is a minimal diff between the assignments before and after a 
rebalance.

But to handle the whole-restart use case, we need to remember the past 
assignment somewhere and use it after the restart.
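For illustration only, a minimal sketch of "remember the past assignment
somewhere": persist the partition-to-host mapping to a local properties file on
shutdown and consult it on restart. The file name and map shape are made up for
this example; Kafka Streams has no such built-in mechanism as of this discussion.

import java.io.*;
import java.util.Properties;

public class AssignmentSnapshot {
    private static final File SNAPSHOT = new File("partition-assignment.properties");

    // Save the last known assignment, e.g. "topic-3=host-a:4040".
    static void save(Properties partitionToHost) throws IOException {
        try (OutputStream out = new FileOutputStream(SNAPSHOT)) {
            partitionToHost.store(out, "last known partition -> host assignment");
        }
    }

    // Load the previous assignment if one was recorded; empty otherwise.
    static Properties load() throws IOException {
        Properties p = new Properties();
        if (SNAPSHOT.exists()) {
            try (InputStream in = new FileInputStream(SNAPSHOT)) {
                p.load(in);
            }
        }
        return p;
    }
}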

Please let us know if this is an already-solved problem, or whether there is 
a cleaner way of achieving this objective.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk9 #469

2018-03-13 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #3248

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6024; Move arg validation in KafkaConsumer ahead of

[jason] MINOR: Use large batches in metrics test for conversion time >= 1ms

--
[...truncated 1.90 MB...]
org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled PASSED

org.apache.kafka.streams.integration.ResetIntegrationWithSslTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationWithSslTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.ResetIntegrationWithSslTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationWithSslTest > 
testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTable

Build failed in Jenkins: kafka-1.0-jdk7 #167

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: Streams doc example should not close store (#4667)

--
[...truncated 374.99 KB...]
kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress STARTED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq 
STARTED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially STARTED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator STARTED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > t

Re: [DISCUSS] KIP-257 - Configurable Quota Management

2018-03-13 Thread Rajini Sivaram
I have submitted a PR with the changes proposed in this KIP (
https://github.com/apache/kafka/pull/4699). I added an additional method to
the quota callback in the KIP to simplify metrics updates when quotas are
updated.

Feedback and suggestions are welcome. If there are no other concerns, I
will start the vote later this week.

Thank you,

Rajini

On Wed, Mar 7, 2018 at 12:34 PM, Rajini Sivaram 
wrote:

> Hi Viktor,
>
> Thanks for reviewing the KIP.
>
> 1. Yes, that is correct. Typically quotas would depend only on the current
> partition state. But if you did want to track deleted partitions, you can
> calculate the diff.
> 2. I can't think of a use case where ISRs or other replica information
> would be useful to configure quotas. Since partition leaders process
> fetch/produce requests, this is clearly useful in terms of setting quotas.
> But I have defined PartitionMetadata trait rather than just using the
> leader as an int so that we can add additional methods in future if
> required. This keeps the interface extensible. Did you have any use case in
> mind where additional metadata would be useful?
>
> Regards,
>
> Rajini
>
> On Tue, Mar 6, 2018 at 8:56 AM, Viktor Somogyi 
> wrote:
>
>> Hi Rajini,
>>
>> I've read through your KIP and it looks good, I only have two things to
>> clarify.
>> 1. How do we detect removed partitions in updatePartitionMetadata? I'm
>> presuming that the list here is the currently existing map of partitions,
>> so if something is removed it can be calculated as the diff of the current
>> and the previous update. Is that right?
>> 2. PartitionMetadata contains only the leader at this moment, however
>> there
>> are similar classes that contain more information, like the replicas, isr,
>> offline replicas. I think including them might make sense to provide a
>> more
>> robust API. What do you think?
>>
>> Thanks,
>> Viktor
>>
>> On Wed, Feb 21, 2018 at 7:57 PM, Rajini Sivaram 
>> wrote:
>>
>> > Hi all,
>> >
>> > I have submitted KIP-257 to enable customisation of client quota
>> > computation:
>> >
>> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-257+-+Configurable+Quota+Management
>> >
>> >
>> > The KIP proposes to make quota management pluggable to enable
>> group-based
>> > and partition-based quotas for clients.
>> >
>> > Feedback and suggestions are welcome.
>> >
>> > Thank you...
>> >
>> > Regards,
>> >
>> > Rajini
>> >
>>
>
>


[jira] [Created] (KAFKA-6644) Make Server Info more generic for Kafka Interactive Queries

2018-03-13 Thread Mike Graham (JIRA)
Mike Graham created KAFKA-6644:
--

 Summary: Make Server Info more generic for Kafka Interactive 
Queries
 Key: KAFKA-6644
 URL: https://issues.apache.org/jira/browse/KAFKA-6644
 Project: Kafka
  Issue Type: Improvement
  Components: config, streams
Reporter: Mike Graham


When working to implement *Kafka Streams interactive queries*, I see that I 
can set `application.server` to `host:port`.

*I would like a more generic mechanism to set additional properties.*

I'm using Cloud Foundry containers for my Kafka Streams app. I scale out my 
containers using `*cf scale*`, and each container gets its own instance id. 
The *instance id* can be used in an HTTP header to get Cloud Foundry to 
route the request to the correct instance:

https://docs.cloudfoundry.org/concepts/http-routing.html#app-instance-routing


I realize, per Matthias J. Sax, that Kafka Streams only distributes this 
information but does not use it. Thus, I can put the instance id in the 
port slot.
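For reference, a minimal sketch of how `application.server` is set today,
abusing the port slot to carry a Cloud Foundry instance id as described above.
The host name and instance id are illustrative values:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ApplicationServerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "interactive-queries-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        // Kafka Streams only distributes this value to the other instances in
        // the group; it never connects to it, so any "host:port" string the
        // application itself can interpret will do. Here the port slot carries
        // a (hypothetical) Cloud Foundry instance id of 3.
        props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, "my-app.example.com:3");
    }
}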



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS]KIP-235 DNS alias and secured connections

2018-03-13 Thread Rajini Sivaram
Hi Jonathan,

Thanks for the updates. Looks good to me. You could start voting if there
are no other concerns.

Regards,

Rajini

On Fri, Mar 9, 2018 at 6:02 PM, Skrzypek, Jonathan  wrote:

> Hi,
>
> There has been further discussion on the ticket and it seems having an
> additional option to trigger the DNS lookup behaviour would be the best
> approach.
>
> https://issues.apache.org/jira/browse/KAFKA-6195
>
> Updated the KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-235%3A+Add+DNS+alias+support+for+secured+connection
>
> Would value your opinions.
>
> Jonathan Skrzypek
>
>
> -Original Message-
> From: Skrzypek, Jonathan [Tech]
> Sent: 22 February 2018 16:21
> To: 'dev@kafka.apache.org'
> Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections
>
> Hi,
>
> Could anyone take a look at the pull request, so that if it's OK I can
> start a VOTE thread?
>
> Regards,
>
> Jonathan Skrzypek
>
> -Original Message-
> From: Skrzypek, Jonathan [Tech]
> Sent: 09 February 2018 13:57
> To: 'dev@kafka.apache.org'
> Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections
>
> Hi,
>
> I have raised a PR https://github.com/apache/kafka/pull/4485 with
> suggested code changes.
> There are, however, reported failures; I don't understand what the issue
> is, since the tests are passing.
> Any ideas?
>
>
> Jonathan Skrzypek
>
> -Original Message-
> From: Skrzypek, Jonathan [Tech]
> Sent: 29 January 2018 16:51
> To: dev@kafka.apache.org
> Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections
>
> Hi,
>
> Yes I believe this might address what you're seeing as well.
>
> Jonathan Skrzypek
> Middleware Engineering
> Messaging Engineering
> Goldman Sachs International
>
> -Original Message-
> From: Stephane Maarek [mailto:steph...@simplemachines.com.au]
> Sent: 06 December 2017 10:43
> To: dev@kafka.apache.org
> Subject: RE: [DISCUSS]KIP-235 DNS alias and secured connections
>
> Hi Jonathan
>
> I think this will be very useful. I reported something similar here:
> https://issues.apache.org/jira/browse/KAFKA-4781
>
> Can you confirm your KIP will address it?
>
> Stéphane
>
> On 6 Dec. 2017 8:20 pm, "Skrzypek, Jonathan" 
> wrote:
>
> > True, amended the KIP, thanks.
> >
> > Jonathan Skrzypek
> > Middleware Engineering
> > Messaging Engineering
> > Goldman Sachs International
> >
> >
> > -Original Message-
> > From: Tom Bentley [mailto:t.j.bent...@gmail.com]
> > Sent: 05 December 2017 18:19
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS]KIP-235 DNS alias and secured connections
> >
> > Hi Jonathan,
> >
> > It might be worth mentioning in the KIP that this is necessary only
> > for
> > *Kerberos* on SASL, and not other SASL mechanisms. Reading the JIRA it
> > makes sense, but I was confused up until that point.
> >
> > Cheers,
> >
> > Tom
> >
> > On 5 December 2017 at 17:53, Skrzypek, Jonathan
> > 
> > wrote:
> >
> > > Hi,
> > >
> > > I would like to discuss a KIP I've submitted:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-235%3A+Add+DNS+alias+support+for+secured+connection
> > >
> > > Feedback and suggestions welcome!
> > >
> > > Regards,
> > > Jonathan Skrzypek
> > > Middleware Engineering
> > > Messaging Engineering
> > > Goldman Sachs International
> > > Christchurch Court - 10-15 Newgate Street London EC1A 7HD
> > > Tel: +442070512977
> > >
> > >
> >
>
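For context, the lookup behaviour this KIP concerns can be sketched with plain
JDK calls: expand the bootstrap alias into every address behind it, and use
canonical host names so Kerberos principals of the form kafka/<fqdn> match the
actual broker hosts. The alias below is an illustrative value, and this sketch
shows the mechanism only, not the actual client code:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class BootstrapAliasExpansion {
    public static void main(String[] args) throws UnknownHostException {
        // Resolve the single bootstrap alias to all hosts behind it.
        for (InetAddress addr : InetAddress.getAllByName("kafka-cluster.example.com")) {
            // The canonical name (reverse lookup) is what the SASL/GSSAPI
            // client should use, so it requests a ticket for
            // kafka/<canonical-fqdn> rather than for the alias.
            System.out.println(addr.getCanonicalHostName());
        }
    }
}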


Build failed in Jenkins: kafka-trunk-jdk8 #2469

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Fix incorrect JavaDoc (type mismatch) (#4632)

--
[...truncated 3.52 MB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords STARTED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerT

Build failed in Jenkins: kafka-trunk-jdk9 #468

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6624; Prevent concurrent log flush and log deletion (#4663)

--
[...truncated 1.48 MB...]
kafka.controller.PartitionStateMachineTest > testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerConfigWithAutoOffsetResetSmallest STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerConfigWithAutoOffsetResetSmallest PASSED

Build failed in Jenkins: kafka-1.1-jdk7 #76

2018-03-13 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6624; Prevent concurrent log flush and log deletion (#4663)

--
[...truncated 416.73 KB...]

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED