Re: [DISCUSS] KIP-80: Kafka REST Server

2016-10-11 Thread Harsha Chintalapani
Neha,
"But I haven't seen any community emails or patches being submitted by you
guys, so I'm wondering why you are concerned about whether the community is
open to accepting patches or not."

I think you are talking about contributing patches to this repository,
right? https://github.com/confluentinc/kafka-rest . All I am saying is that
the guidelines/governance model for the project is not clear, and I gather
contributions are driven by opening a GitHub issue. The repository is owned
by Confluent, and as much as I appreciate that the features we mentioned are
on the roadmap and that you are welcoming us to contribute to the project, it
doesn't guarantee that what we want to add in the future will be on your
roadmap.

Hence the reason that having it as part of the Kafka community would help a
lot: other users can participate in the discussions. We are happy to drive any
feature additions through KIPs, which gives everyone a chance to participate
and add to the discussions.

Thanks,
Harsha

On Fri, Oct 7, 2016 at 11:52 PM Michael Pearce 
wrote:

> +1
>
> I agree with the governance comments wholeheartedly.
>
> Also I agree with the contribution comments made earlier in the thread. I
> personally am less likely to spend my own time, or give project time within
> my internal projects, to developers contributing to another commercial
> company's project, even if it is technically open source, because that
> company's interests will always prevail and it essentially has the final
> vote wherever there is disagreement. I'm sure they never intend that, but it
> is the reality. This is why we have community open source projects.
>
> I can find many different implementations of a REST endpoint on GitHub,
> Bitbucket, etc. Each has its own benefits and disadvantages in its
> implementation. Making / providing one here would bring these solutions
> together, unifying those developers and combining the best of all of them.
>
> I understand the concern about the community burden of adding/supporting
> more surface area for every client. But something like REST is universal
> and worth being owned by the community.
>
> Mike
>
>
> 
> From: Andrew Schofield 
> Sent: Saturday, October 8, 2016 1:19 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-80: Kafka REST Server
>
> There's a massive difference between the governance of Kafka and the
> governance of the REST proxy.
>
> In Kafka, there is a broad community of people contributing their opinions
> about future enhancements in the form of KIPs. There's some really deep
> consideration that goes into some of the trickier KIPs. There are people
> outside Confluent who are deeply knowledgeable in Kafka and building their
> reputations to become committers. I get the impression that the roadmap of
> Kafka is not really community-owned (what's the big feature for Kafka 0.11,
> for example), but the conveyor belt of smaller features in the form of KIPs
> works nicely. It's a good example of open source working well.
>
> The equivalent for the REST proxy is basically issues on GitHub. The
> roadmap is less clear. There's not really a community properly engaged in
> the way that there is with Kafka. So, you could say that it's clear that
> fewer people are interested, but I think the whole governance thing is a
> big barrier to engagement. And it's looking like it's getting out of date.
>
> In technical terms, I can think of two big improvements to the REST proxy.
> First, it needs to use the new consumer API so that it's possible to secure
> connections between the REST proxy and Kafka. I don't care too much which
> method calls it actually uses to consume messages, but I do care that
> I cannot secure connections because of network security rules. Second,
> there's an affinity between a consumer and the instance of the REST proxy
> to which it first connected. Kafka itself avoids this kind of affinity for
> good reason, and in the name of availability the REST proxy should too.
> These are natural KIPs.
>
> I think it would be good to have the code for the REST proxy contributed
> to Apache Kafka so that it would be able to be developed in the same way.
>
> Andrew Schofield
>
> From: Suresh Srinivas 
> Sent: 07 October 2016 22:41:52
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-80: Kafka REST Server
>
> ASF already gives us a clear framework and governance model for community
> development. This is already understood by the people contributing to
> Apache Kafka project, and they are the same people who want to contribute
> to the REST server capability as well. Everyone is in agreement on the
> need for collaborating on this effort. So why not contribute the code to
> Apache Kafka? This will help avoid duplication of effort and the forks that
> may crop up, hugely benefiting the user community. This will also avoid
> having to define a process similar to ASF's on a GitHub 

[GitHub] kafka pull request #2010: MINOR: Fixed broken links in the documentation

2016-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2010


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Store flushing on commit.interval.ms from KIP-63 introduces aggregation latency

2016-10-11 Thread Guozhang Wang
Hello Greg,

I can share some context of KIP-63 here:

1. Like Eno mentioned, we believe RocksDB's own mem-table already optimizes
a large portion of the IO access for write performance, and adding an extra
caching layer on top of that was mainly for saving ser-de costs (note that
you still need to ser / deser key-value objects into bytes when interacting
with RocksDB). Although it may further help IO, that is not the main
motivation.

2. As part of KIP-63 Bill helped investigate the pros / cons of such
object caching (https://issues.apache.org/jira/browse/KAFKA-3973), and our
conclusion based on that is: although it saves serde costs, it also makes
memory management very hard in the long run, since the caching is bounded by
num.records, not num.bytes. And when you hit an OOM in one of the
instances, it may well result in cascading failures from rebalances and
task migration. Ideally, we want to have a strict memory bound for
better capacity planning and integration with cluster resource managers
(see
https://cwiki.apache.org/confluence/display/KAFKA/Discussion%3A+Memory+Management+in+Kafka+Streams
for more details).

3. So as part of KIP-63, we removed the object-oriented caching and replaced
it with byte-based caches, and in addition added RocksDBConfigSetter to allow
users to configure RocksDB to tune the write / space amplification of its IO.
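
For reference, here is a minimal sketch of such a config setter (an
illustrative example only, not code from this thread: it assumes the
org.rocksdb Options / BlockBasedTableConfig API, and the 100 MB / 32 MB
values are arbitrary):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Example RocksDB tuning hook: sizes the block cache and write buffer per store.
public class CustomRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(100 * 1024 * 1024L);  // 100 MB block cache
        options.setTableFormatConfig(tableConfig);
        options.setWriteBufferSize(32 * 1024 * 1024L);      // 32 MB write buffer (memtable)
    }
}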


With that, I think shutting off caching in your case should not degrade
performance too much, assuming RocksDB itself already does a good job with
write access; it may add extra serde costs, though, depending on your use
case (originally it was about 1000 records per cache, so roughly speaking
you are saving that many serde calls per store). But if you do observe
significant performance degradation I'd personally love to learn more and
help on that end.


Guozhang





On Tue, Oct 11, 2016 at 10:10 AM, Greg Fodor  wrote:

> Thanks Eno -- my understanding is that cache is already enabled to be
> 100MB per rocksdb so it should be on already, but I'll check. I was
> wondering if you could shed some light on the changes between 0.10.0
> and 0.10.1 -- in 0.10.0 there was an intermediate cache within
> RocksDbStore -- presumably this was there to improve performance,
> despite there still being a lower level cache managed by rocksdb. Can
> you shed some light why this cache was needed in 0.10.0? If it sounds
> like our use case won't warrant the same need then we might be OK.
>
> Overall however, this is really problematic for us, since we will have
> to turn off caching for effectively all of our jobs. The way our
> system works is that we have a number of jobs running kafka streams
> that are configured via database tables we change via our web stack.
> For example, when we want to tell our jobs to begin processing data
> for a user, we insert a record for that user into the database which
> gets passed via kafka connect to a kafka topic. The kafka streams job
> is consuming this topic, does some basic group by operations and
> repartitions on it, and joins it against other data streams so that it
> knows what users should be getting processed.
>
> So fundamentally we have two types of aggregations: the typical case
> that was I think the target for the optimizations in KIP-63, where
> latency is less critical since we are counting and emitting counts for
> analysis, etc. And the other type of aggregation is where we are doing
> simple transformations on data coming from the database in a way to
> configure the live behavior of the job. Latency here is very
> sensitive: users expect the job to react and start sending data for a
> user immediately after the database records are changed.
>
> So as you can see, since this is the paradigm we use to operate jobs,
> we're in a bad position if we ever want to take advantage of the work
> in KIP-63. All of our jobs are set up to work in this way, so we will
> either have to maintain our fork or will have to shut off caching for
> all of our jobs, neither of which sounds like a very good path.
>
> On Tue, Oct 11, 2016 at 4:16 AM, Eno Thereska 
> wrote:
> > Hi Greg,
> >
> > An alternative would be to set up RocksDB's cache, while keeping the
> streams cache to 0. That might give you what you need, especially if you
> can work with RocksDb and don't need to change the store.
> >
> > For example, here is how to set the Block Cache size to 100MB and the
> Write Buffer size to 32MB
> >
> > https://github.com/facebook/rocksdb/wiki/Block-Cache <
> https://github.com/facebook/rocksdb/wiki/Block-Cache>
> > https://github.com/facebook/rocksdb/wiki/Basic-Operations#write-buffer <
> https://github.com/facebook/rocksdb/wiki/Basic-Operations#write-buffer>
> >
> > They can override these settings by creating an impl of
> RocksDBConfigSetter and setting 
> StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG
> in Kafka Streams.
> >
> > Hope this helps,
> > Eno
> >
> >> On 10 Oct 2016, at 
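
To make Eno's suggestion above concrete, here is a hedged sketch of the
Streams-side wiring (both config constants appear in the 0.10.1 API;
CustomRocksDBConfig is a placeholder name for a user-supplied
RocksDBConfigSetter implementation, such as the sketch earlier in this
thread):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsCacheWiring {
    public static Properties baseConfig() {
        final Properties props = new Properties();
        // Turn the Streams record cache off entirely, restoring the old
        // immediate-forwarding behaviour described in this thread.
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0L);
        // Hand RocksDB tuning to a user-supplied setter (placeholder class name).
        props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
        return props;
    }
}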

Build failed in Jenkins: kafka-trunk-jdk7 #1623

2016-10-11 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4283: records deleted from CachingKeyValueStore still appear in

--
[...truncated 14115 lines...]
org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekNext PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldPeekAndIterate PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndPeekNextCalled PASSED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled STARTED

org.apache.kafka.streams.state.internals.DelegatingPeekingKeyValueIteratorTest 
> shouldThrowNoSuchElementWhenNoMoreItemsLeftAndNextCalled PASSED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators STARTED

org.apache.kafka.streams.state.internals.MergedSortedCacheWindowStoreIteratorTest
 > shouldIterateOverValueFromBothIterators PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfNoStoresFoundWithName PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfKVStoreClosed PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNotAllStoresAvailable PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierNotLogged PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreatePersistenStoreSupplierWithLoggedConfig PASSED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierNotLogged STARTED

org.apache.kafka.streams.state.StoresTest > 
shouldCreateInMemoryStoreSupplierNotLogged PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED


[GitHub] kafka pull request #2013: MINOR: Code refactor - cleaning up some boolean as...

2016-10-11 Thread imandhan
GitHub user imandhan opened a pull request:

https://github.com/apache/kafka/pull/2013

MINOR: Code refactor - cleaning up some boolean assignments



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/imandhan/kafka KAFKA-refactor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2013


commit a9d3b4369a11943c389e0ccc20ccfbafee0ce6f4
Author: Ishita Mandhan 
Date:   2016-10-12T00:51:29Z

MINOR: Code Refactor

Refactored some if else statements which had a boolean type and a boolean 
in the conditional, to remove explicitly stated booleans; cleaning up the code 
a little. For example - if (x==1) true else false is changed to (x==1)

commit db02aafdfcbc9b37587abd7a7f84f9d4edd66454
Author: Ishita Mandhan 
Date:   2016-10-12T00:51:29Z

MINOR: Code Refactor

Refactored some if else statements which had a boolean type and a boolean 
in the conditional, to remove explicitly stated booleans; cleaning up the code 
a little. For example - if (x==1) true else false is changed to (x==1)

commit 1813f6e759edeb99b8e91fae7e077afa7522fce4
Author: Ishita Mandhan 
Date:   2016-10-12T01:04:37Z

Merge branch 'KAFKA-refactor' of https://github.com/imandhan/kafka into 
KAFKA-refactor






Build failed in Jenkins: kafka-trunk-jdk8 #972

2016-10-11 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4283: records deleted from CachingKeyValueStore still appear in

--
[...truncated 3067 lines...]
kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.consumer.TopicFilterTest > testWhitelists STARTED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson STARTED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists STARTED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression STARTED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.consumer.ConsumerIteratorTest > 
testConsumerIteratorDeduplicationDeepIterator STARTED

kafka.consumer.ConsumerIteratorTest > 
testConsumerIteratorDeduplicationDeepIterator PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure 
STARTED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor STARTED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.log.FileMessageSetTest > testTruncate STARTED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation STARTED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead STARTED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > 
testTruncateNotCalledIfSizeIsBiggerThanTargetSize STARTED

kafka.log.FileMessageSetTest > 
testTruncateNotCalledIfSizeIsBiggerThanTargetSize PASSED

kafka.log.FileMessageSetTest > testFileSize STARTED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits STARTED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testWriteToChannelThatConsumesPartially STARTED

kafka.log.FileMessageSetTest > testWriteToChannelThatConsumesPartially PASSED

kafka.log.FileMessageSetTest > testTruncateNotCalledIfSizeIsSameAsTargetSize 
STARTED

kafka.log.FileMessageSetTest > testTruncateNotCalledIfSizeIsSameAsTargetSize 
PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue STARTED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent STARTED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testTruncateIfSizeIsDifferentToTargetSize STARTED

kafka.log.FileMessageSetTest > testTruncateIfSizeIsDifferentToTargetSize PASSED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage STARTED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition STARTED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead STARTED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo STARTED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse STARTED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown STARTED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion STARTED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch STARTED

kafka.log.FileMessageSetTest > testSearch 

Build failed in Jenkins: kafka-0.10.1-jdk7 #67

2016-10-11 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4283: records deleted from CachingKeyValueStore still appear in

--
[...truncated 426 lines...]
kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals STARTED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonFutureVersion PASSED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri STARTED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromUri PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV1 PASSED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 STARTED

kafka.cluster.BrokerEndPointTest > testFromJsonV2 PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
STARTED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride STARTED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig STARTED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.FetcherTest > testFetcher STARTED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED


[jira] [Created] (KAFKA-4293) ByteBufferMessageSet.deepIterator burns CPU catching EOFExceptions

2016-10-11 Thread radai rosenblatt (JIRA)
radai rosenblatt created KAFKA-4293:
---

 Summary: ByteBufferMessageSet.deepIterator burns CPU catching 
EOFExceptions
 Key: KAFKA-4293
 URL: https://issues.apache.org/jira/browse/KAFKA-4293
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.0.1
Reporter: radai rosenblatt


around line 110:
{noformat}
try {
  while (true)
    innerMessageAndOffsets.add(readMessageFromStream(compressed))
} catch {
  case eofe: EOFException =>
    // we don't do anything at all here, because the finally
    // will close the compressed input stream, and we simply
    // want to return the innerMessageAndOffsets
}
{noformat}

the only indication the code has that the end of the iteration was reached is 
by catching EOFException (which will be thrown inside readMessageFromStream()).

profiling runs performed at LinkedIn show 10% of the total broker CPU time 
taken up by Throwable.fillInStackTrace() because of this behaviour.

unfortunately InputStream.available() cannot be relied upon (concrete example: 
GZIPInputStream will not correctly return 0), so the fix would probably be a 
wire format change to also encode the number of messages.
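
A common JVM-level mitigation (offered here only as an illustrative sketch,
not necessarily the fix this ticket will take) is to signal end-of-stream
with a preallocated exception that skips stack capture, since the stack
trace is never used:

{noformat}
import java.io.EOFException;

// Reusable "stackless" EOF marker: overriding fillInStackTrace() avoids the
// expensive stack capture when the exception is only a control-flow signal.
public class StacklessEofException extends EOFException {
    public static final StacklessEofException INSTANCE = new StacklessEofException();

    private StacklessEofException() {
        super("end of message set");
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
}
{noformat}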



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-47 - Add timestamp-based log deletion policy

2016-10-11 Thread Guozhang Wang
+1.

On Fri, Oct 7, 2016 at 3:35 PM, Gwen Shapira  wrote:

> +1 (binding)
>
> On Wed, Oct 5, 2016 at 1:55 PM, Bill Warshaw  wrote:
> > Bumping for visibility.  KIP is here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-47+-+
> Add+timestamp-based+log+deletion+policy
> >
> > On Wed, Aug 24, 2016 at 2:32 PM Bill Warshaw 
> wrote:
> >
> >> Hello Guozhang,
> >>
> >> KIP-71 seems unrelated to this KIP.  KIP-47 is just adding a new
> deletion
> >> policy (minimum timestamp), while KIP-71 is allowing deletion and
> >> compaction to coexist.
> >>
> >> They both will touch LogManager, but the change for KIP-47 is very
> >> isolated.
> >>
> >> On Wed, Aug 24, 2016 at 2:21 PM Guozhang Wang 
> wrote:
> >>
> >> Hi Bill,
> >>
> >> I would like to reason if there is any correlation between this KIP and
> >> KIP-71
> >>
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-71%3A+
> Enable+log+compaction+and+deletion+to+co-exist
> >>
> >> I feel they are orthogonal but would like to double check with you.
> >>
> >>
> >> Guozhang
> >>
> >>
> >> On Wed, Aug 24, 2016 at 11:05 AM, Bill Warshaw 
> >> wrote:
> >>
> >> > I'd like to re-awaken this voting thread now that KIP-33 has merged.
> >> This
> >> > KIP is now completely unblocked.  I have a working branch off of trunk
> >> with
> >> > my proposed fix, including testing.
> >> >
> >> > On Mon, May 9, 2016 at 8:30 PM Guozhang Wang 
> wrote:
> >> >
> >> > > Jay, Bill:
> >> > >
> >> > > I'm thinking of one general use case of using timestamp rather than
> >> > offset
> >> > > for log deletion, which is that for expiration handling in data
> >> > > replication, when the source data store decides to expire some data
> >> > records
> >> > > based on their timestamps, today we need to configure the
> corresponding
> >> > > Kafka changelog topic for compaction, and actively send a tombstone
> for
> >> > > each expired record. Since expiration usually happens with a bunch
> of
> >> > > records, this could generate large tombstone traffic. For example I
> >> think
> >> > > LI's data replication for Espresso is seeing similar issues and they
> >> are
> >> > > just not sending tombstone at all.
> >> > >
> >> > > With timestamp based log deletion policy, this can be easily
> handled by
> >> > > simply setting the current expiration timestamp; but ideally one
> would
> >> > > prefer to configure this topic to be both log compaction enabled as
> >> well
> >> > as
> >> > > log deletion enabled. From that point of view, I feel that current
> KIP
> >> > > still has value to be accepted.
> >> > >
> >> > > Guozhang
> >> > >
> >> > >
> >> > > On Mon, May 2, 2016 at 2:37 PM, Bill Warshaw 
> >> > wrote:
> >> > >
> >> > > > Yes, I'd agree that offset is a more precise configuration than
> >> > > timestamp.
> >> > > > If there was a way to set a partition-level configuration, I would
> >> > rather
> >> > > > use log.retention.min.offset than timestamp.  If you have an
> approach
> >> > in
> >> > > > mind I'd be open to investigating it.
> >> > > >
> >> > > > On Mon, May 2, 2016 at 5:33 PM, Jay Kreps 
> wrote:
> >> > > >
> >> > > > > Gotcha, good point. But barring that limitation, you agree that
> >> that
> >> > > > makes
> >> > > > > more sense?
> >> > > > >
> >> > > > > -Jay
> >> > > > >
> >> > > > > On Mon, May 2, 2016 at 2:29 PM, Bill Warshaw <
> wdwars...@gmail.com>
> >> > > > wrote:
> >> > > > >
> >> > > > > > The problem with offset as a config option is that offsets are
> >> > > > > > partition-specific, so we'd need a per-partition config.  This
> >> > would
> >> > > > work
> >> > > > > > for our particular use case, where we have single-partition
> >> topics,
> >> > > but
> >> > > > > for
> >> > > > > > multiple-partition topics it would delete from all partitions
> >> based
> >> > > on
> >> > > > a
> >> > > > > > global topic-level offset.
> >> > > > > >
> >> > > > > > On Mon, May 2, 2016 at 4:32 PM, Jay Kreps 
> >> > wrote:
> >> > > > > >
> >> > > > > > > I think you are saying you considered a kind of trim() api
> that
> >> > > would
> >> > > > > > > synchronously chop off the tail of the log starting from a
> >> given
> >> > > > > offset.
> >> > > > > > > That would be one option, but what I was saying was slightly
> >> > > > different:
> >> > > > > > in
> >> > > > > > > the proposal you have where there is a config that controls
> >> > > retention
> >> > > > > > that
> >> > > > > > > the user would update, wouldn't it make more sense for this
> >> > config
> >> > > to
> >> > > > > be
> >> > > > > > > based on offset rather than timestamp?
> >> > > > > > >
> >> > > > > > > -Jay
> >> > > > > > >
> >> > > > > > > On Mon, May 2, 2016 at 12:53 PM, Bill Warshaw <
> >> > wdwars...@gmail.com
> >> > > >
> >> > > > > > wrote:
> >> > > > > > >
> >> > > > > > > > 1.  Initially I looked at using 

[jira] [Commented] (KAFKA-4283) records deleted from CachingKeyValueStore still appear in range and all queries

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15566589#comment-15566589
 ] 

ASF GitHub Bot commented on KAFKA-4283:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2001


> records deleted from CachingKeyValueStore still appear in range and all 
> queries
> ---
>
> Key: KAFKA-4283
> URL: https://issues.apache.org/jira/browse/KAFKA-4283
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> Records deleted from CachingKeyValueStore appear in range and all queries. 
> The deleted record is replaced with an LRUCacheEntry(null), so that when a 
> flush happens the deletion can be sent downstream and removed from the 
> underlying store.
> As this is valid and needed, the iterator used when querying a cached store 
> needs to be aware of the entries where LRUCacheEntry.value == null and skip 
> over them
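
A minimal sketch of the skip-nulls pattern described above (the names here
are illustrative placeholders, not the actual Kafka Streams internals):

{noformat}
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

// Wraps an underlying entry iterator and skips entries whose value is null,
// i.e. cache tombstones kept only so the deletion can be flushed downstream.
public class SkipNullValueIterator<K, V> implements Iterator<Map.Entry<K, V>> {
    private final Iterator<Map.Entry<K, V>> underlying;
    private Map.Entry<K, V> next;

    public SkipNullValueIterator(final Iterator<Map.Entry<K, V>> underlying) {
        this.underlying = underlying;
        advance();
    }

    private void advance() {
        next = null;
        while (underlying.hasNext()) {
            final Map.Entry<K, V> candidate = underlying.next();
            if (candidate.getValue() != null) {
                next = candidate;
                return;
            }
        }
    }

    @Override
    public boolean hasNext() {
        return next != null;
    }

    @Override
    public Map.Entry<K, V> next() {
        if (next == null) {
            throw new NoSuchElementException();
        }
        final Map.Entry<K, V> result = next;
        advance();
        return result;
    }
}
{noformat}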



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4283) records deleted from CachingKeyValueStore still appear in range and all queries

2016-10-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4283:
-
   Resolution: Fixed
Fix Version/s: (was: 0.10.1.1)
   0.10.2.0
   0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2001
[https://github.com/apache/kafka/pull/2001]

> records deleted from CachingKeyValueStore still appear in range and all 
> queries
> ---
>
> Key: KAFKA-4283
> URL: https://issues.apache.org/jira/browse/KAFKA-4283
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> Records deleted from CachingKeyValueStore appear in range and all queries. 
> The deleted record is replaced with an LRUCacheEntry(null), so that when a 
> flush happens the deletion can be sent downstream and removed from the 
> underlying store.
> As this is valid and needed, the iterator used when querying a cached store 
> needs to be aware of the entries where LRUCacheEntry.value == null and skip 
> over them



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2001: KAFKA-4283: records deleted from CachingKeyValueSt...

2016-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2001




Build failed in Jenkins: kafka-trunk-jdk8 #971

2016-10-11 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4130; Link to Varnish architect notes is broken

--
[...truncated 7823 lines...]

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED


[jira] [Resolved] (KAFKA-2552) Certain admin commands such as partition assignment fail on large clusters

2016-10-11 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-2552.

Resolution: Duplicate

> Certain admin commands such as partition assignment fail on large clusters
> --
>
> Key: KAFKA-2552
> URL: https://issues.apache.org/jira/browse/KAFKA-2552
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Abhishek Nigam
>Assignee: Abhishek Nigam
>
> This happens because the json generated is greater than 1 MB and exceeds the 
> default data limit of zookeeper nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-0.10.1-jdk7 #66

2016-10-11 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #970

2016-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (Ubuntu ubuntu ubuntu-eu docker) in workspace 

Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
 > git init  # timeout=10
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not init 

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:652)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:463)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to ubuntu-eu2(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor526.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy177.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1046)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1086)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git init 
 returned status code 1:
stdout: 
stderr: : No space left 
on device

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1723)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1699)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1695)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1317)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:650)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:463)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: null
Retrying after 10 seconds
Cloning the remote Git repository
Cloning repository 

Build failed in Jenkins: kafka-trunk-jdk8 #969

2016-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (Ubuntu ubuntu ubuntu-eu docker) in workspace 

Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
 > git init  # timeout=10
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not init 

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:652)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:463)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to ubuntu-eu2(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor526.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy177.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1046)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1086)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git init 
 returned status code 1:
stdout: 
stderr: : No space left 
on device

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1723)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1699)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1695)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1317)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:650)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:463)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: null
Retrying after 10 seconds
Cloning the remote Git repository
Cloning repository 

Re: Store flushing on commit.interval.ms from KIP-63 introduces aggregation latency

2016-10-11 Thread Greg Fodor
Thanks Eno -- my understanding is that cache is already enabled to be
100MB per rocksdb so it should be on already, but I'll check. I was
wondering if you could shed some light on the changes between 0.10.0
and 0.10.1 -- in 0.10.0 there was an intermediate cache within
RocksDbStore -- presumably this was there to improve performance,
despite there still being a lower level cache managed by rocksdb. Can
you shed some light why this cache was needed in 0.10.0? If it sounds
like our use case won't warrant the same need then we might be OK.

Overall however, this is really problematic for us, since we will have
to turn off caching for effectively all of our jobs. The way our
system works is that we have a number of jobs running kafka streams
that are configured via database tables we change via our web stack.
For example, when we want to tell our jobs to begin processing data
for a user, we insert a record for that user into the database which
gets passed via kafka connect to a kafka topic. The kafka streams job
is consuming this topic, does some basic group by operations and
repartitions on it, and joins it against other data streams so that it
knows what users should be getting processed.

So fundamentally we have two types of aggregations: the typical case
that was I think the target for the optimizations in KIP-63, where
latency is less critical since we are counting and emitting counts for
analysis, etc. And the other type of aggregation is where we are doing
simple transformations on data coming from the database in a way to
configure the live behavior of the job. Latency here is very
sensitive: users expect the job to react and start sending data for a
user immediately after the database records are changed.

So as you can see, since this is the paradigm we use to operate jobs,
we're in a bad position if we ever want to take advantage of the work
in KIP-63. All of our jobs are set up to work in this way, so we will
either have to maintain our fork or will have to shut off caching for
all of our jobs, neither of which sounds like a very good path.

On Tue, Oct 11, 2016 at 4:16 AM, Eno Thereska  wrote:
> Hi Greg,
>
> An alternative would be to set up RocksDB's cache, while keeping the streams 
> cache to 0. That might give you what you need, especially if you can work 
> with RocksDb and don't need to change the store.
>
> For example, here is how to set the Block Cache size to 100MB and the Write 
> Buffer size to 32MB
>
> https://github.com/facebook/rocksdb/wiki/Block-Cache 
> 
> https://github.com/facebook/rocksdb/wiki/Basic-Operations#write-buffer 
> 
>
> They can override these settings by creating an impl of RocksDBConfigSetter 
> and setting StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG in Kafka Streams.
>
> Hope this helps,
> Eno
>
>> On 10 Oct 2016, at 18:19, Greg Fodor  wrote:
>>
>> Hey Eno, thanks for the suggestion -- understood that my patch is not
>> something that could be accepted given the API change, I posted it to help
>> make the discussion concrete and because i needed a workaround. (Likely
>> we'll maintain this patch internally so we can move forward with the new
>> version, since the consumer heartbeat issue is something we really need
>> addressed.)
>>
>> Looking at the code, it seems that setting the cache size to zero will
>> disable all caching. However, the previous version of Kafka Streams had a
>> local cache within the RocksDBStore to reduce I/O. If we were to set the
>> cache size to zero, my guess is we'd see a large increase in I/O relative
>> to the previous version since we would no longer have caching of any kind
>> even intra-store. By the looks of it there isn't an easy way to replicate
>> the same caching behavior as the old version of Kafka Streams in the new
>> system without increasing latency, but maybe I'm missing something.
>>
>>
>> On Oct 10, 2016 3:10 AM, "Eno Thereska"  wrote:
>>
>>> Hi Greg,
>>>
>>> Thanks for trying 0.10.1. The best option you have for your specific app
>>> is to simply turn off caching by setting the cache size to 0. That should
>>> give you the old behaviour:
>>> streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
>>> 0L);
>>>
>>> Your PR is an alternative, but it requires changing the APIs and would
>>> require a KIP.
>>>
>>> Thanks
>>> Eno
>>>
 On 9 Oct 2016, at 23:49, Greg Fodor  wrote:

 JIRA opened here: https://issues.apache.org/jira/browse/KAFKA-4281

 On Sun, Oct 9, 2016 at 2:02 AM, Greg Fodor  wrote:
> I went ahead and did some more testing, and it feels to me one option
> for resolving this issue is having a method on KGroupedStream which
> can be used to configure if the operations on it (reduce/aggregate)
> will forward immediately or not. 

Re: [DISCUSS] KIP-81: Max in-flight fetches

2016-10-11 Thread Mickael Maison
Thanks for the feedback.

Regarding the config name, I agree it's probably best to reuse the
same name as the producer (buffer.memory) whichever implementation we
decide to use.

At first, I opted for limiting the max number of concurrent fetches as
it felt more natural in the Fetcher code. Whereas in the producer we
keep track of the size of the buffer with RecordAccumulator, the
consumer simply stores the completed fetches in a list so we don't
have the used memory size immediately. Also the number of inflight
fetches was already tracked by Fetcher.
That said, it shouldn't be too hard to keep track of the used memory
by the completed fetches collection if we decide to, either way should
work.
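
To make the trade-off concrete, here is a toy sketch of byte-based
accounting for completed fetches (purely illustrative: it is not the
consumer's actual implementation, and the class and method names are made
up):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Tracks how many bytes of completed fetch responses are outstanding and blocks
// new reservations once a configured budget (e.g. a buffer.memory-style limit)
// is exhausted.
public class FetchMemoryBudget {
    private final long capacityBytes;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition released = lock.newCondition();
    private long usedBytes = 0L;

    public FetchMemoryBudget(final long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Reserve space for a fetch response; waits until enough memory is released.
    public void reserve(final long bytes) throws InterruptedException {
        lock.lock();
        try {
            while (usedBytes + bytes > capacityBytes) {
                released.await();
            }
            usedBytes += bytes;
        } finally {
            lock.unlock();
        }
    }

    // Release space once the application has consumed a completed fetch.
    public void release(final long bytes) {
        lock.lock();
        try {
            usedBytes = Math.max(0L, usedBytes - bytes);
            released.signalAll();
        } finally {
            lock.unlock();
        }
    }
}

A real implementation would also need to handle responses larger than the
whole budget and coordinate with the existing fetch size settings, which is
part of what this KIP discussion is about.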

On Mon, Oct 10, 2016 at 3:40 PM, Ismael Juma  wrote:
> Hi Mickael,
>
> Thanks for the KIP. A quick comment on the rejected alternative of using a
> bounded memory pool:
>
> "While this might be the best way to handle that on the server side it's
> unclear if this would suit the client well. Usually the client has a rough
> idea about how many partitions it will be subscribed to so it's easier to
> size the maximum number of concurrent fetches."
>
> I think this should be discussed in more detail. The producer (a client)
> uses a `BufferPool`, so we should also explain why the consumer should
> follow a different approach. Also, with KIP-74, the number of partitions is
> less relevant than the number of brokers with partitions that a consumer is
> subscribed to (which can change as the Kafka cluster size changes).
>
> I think it's also worth separating implementation from the config options.
> For example, one could configure a memory limit with an implementation that
> limits the number of concurrent fetches or uses a bounded memory pool. Are
> there other good reasons to have an explicit concurrent fetches limit per
> consumer config? If so, it would be good to mention them in the KIP.
>
> Ismael
>
> On Mon, Oct 10, 2016 at 2:41 PM, Mickael Maison 
> wrote:
>
>> Hi all,
>>
>> I would like to discuss the following KIP proposal:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-81%3A+
>> Max+in-flight+fetches
>>
>>
>> Feedback and comments are welcome.
>> Thanks !
>>
>> Mickael
>>


[GitHub] kafka-site pull request #24: update htaccess to load images nested inside of...

2016-10-11 Thread derrickdoo
Github user derrickdoo closed the pull request at:

https://github.com/apache/kafka-site/pull/24




[jira] [Resolved] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-10-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4130.

   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1835
[https://github.com/apache/kafka/pull/1835]

> [docs] Link to Varnish architect notes is broken
> 
>
> Key: KAFKA-4130
> URL: https://issues.apache.org/jira/browse/KAFKA-4130
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Stevo Slavic
>Assignee: Andrea Cosentino
>Priority: Trivial
> Fix For: 0.10.1.0
>
>
> Paragraph in Kafka documentation
> {quote}
> This style of pagecache-centric design is described in an article on the 
> design of Varnish here (along with a healthy dose of arrogance). 
> {quote}
> contains a broken link.
> Should probably link to http://varnish-cache.org/wiki/ArchitectNotes





[jira] [Commented] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565915#comment-15565915
 ] 

ASF GitHub Bot commented on KAFKA-4130:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1835


> [docs] Link to Varnish architect notes is broken
> 
>
> Key: KAFKA-4130
> URL: https://issues.apache.org/jira/browse/KAFKA-4130
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Stevo Slavic
>Assignee: Andrea Cosentino
>Priority: Trivial
> Fix For: 0.10.1.0
>
>
> Paragraph in Kafka documentation
> {quote}
> This style of pagecache-centric design is described in an article on the 
> design of Varnish here (along with a healthy dose of arrogance). 
> {quote}
> contains a broken link.
> Should probably link to http://varnish-cache.org/wiki/ArchitectNotes





[GitHub] kafka pull request #1835: [DOCS] KAFKA-4130: Link to Varnish architect notes...

2016-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1835




[jira] [Commented] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565889#comment-15565889
 ] 

ASF GitHub Bot commented on KAFKA-4130:
---

GitHub user oscerd reopened a pull request:

https://github.com/apache/kafka/pull/1835

[DOCS] KAFKA-4130: Link to Varnish architect notes is broken

Hi all,

The PR is related to 

https://issues.apache.org/jira/browse/KAFKA-4130

Thanks,
Andrea

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/oscerd/kafka KAFKA-4130

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1835.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1835


commit d937766d93afe330d742487445baa2e496a08146
Author: Andrea Cosentino 
Date:   2016-09-08T10:13:06Z

KAFKA-4130: [docs] Link to Varnish architect notes is broken




> [docs] Link to Varnish architect notes is broken
> 
>
> Key: KAFKA-4130
> URL: https://issues.apache.org/jira/browse/KAFKA-4130
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Stevo Slavic
>Assignee: Andrea Cosentino
>Priority: Trivial
>
> Paragraph in Kafka documentation
> {quote}
> This style of pagecache-centric design is described in an article on the 
> design of Varnish here (along with a healthy dose of arrogance). 
> {quote}
> contains a broken link.
> Should probably link to http://varnish-cache.org/wiki/ArchitectNotes





[GitHub] kafka pull request #1835: [DOCS] KAFKA-4130: Link to Varnish architect notes...

2016-10-11 Thread oscerd
GitHub user oscerd reopened a pull request:

https://github.com/apache/kafka/pull/1835

[DOCS] KAFKA-4130: Link to Varnish architect notes is broken

Hi all,

The PR is related to 

https://issues.apache.org/jira/browse/KAFKA-4130

Thanks,
Andrea

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/oscerd/kafka KAFKA-4130

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1835.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1835


commit d937766d93afe330d742487445baa2e496a08146
Author: Andrea Cosentino 
Date:   2016-09-08T10:13:06Z

KAFKA-4130: [docs] Link to Varnish architect notes is broken






[jira] [Updated] (KAFKA-4269) Multiple KStream instances with at least one Regex source causes NPE when using multiple consumers

2016-10-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4269:
-
Fix Version/s: (was: 0.10.1.0)
   0.10.1.1

> Multiple KStream instances with at least one Regex source causes NPE when 
> using multiple consumers
> --
>
> Key: KAFKA-4269
> URL: https://issues.apache.org/jira/browse/KAFKA-4269
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
> Fix For: 0.10.1.1
>
>
> I discovered this issue while doing testing for KAFKA-4114. 
> KAFKA-4131 fixed the issue of a _single_ KStream with a regex source on 
> partitioned topics across multiple consumers.
> //KAFKA-4131 fixed this case, assuming the "foo*" topics are partitioned
> KStream kstream = builder.source(Pattern.compile("foo.*"));
> KafkaStreams stream = new KafkaStreams(builder, props);
> stream.start();
> This is a new issue where there are _multiple_
> KStream instances (and one has a regex source) within a single KafkaStreams 
> object. When running the second or "following"
> consumer there are NPE errors generated in the RecordQueue.addRawRecords 
> method when attempting to consume records. 
> For example:
> KStream kstream = builder.source(Pattern.compile("foo.*"));
> KStream kstream2 = builder.source(...); //can be a regex or named topic 
> source
> KafkaStreams stream = new KafkaStreams(builder, props);
> stream.start();
> Adding an additional KStream instance like the above (whether a regex or named 
> topic source) causes an NPE when run as the "follower".
> From my initial debugging I can see the TopicPartition assignments being set 
> on the "follower" KafkaStreams instance, but need to track down why and where 
> all assignments aren't being set.
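
For reference, a self-contained sketch of the reproduction described above
(illustrative only: the application id, bootstrap server, serdes and topic
names are placeholders, assuming the 0.10.1 Streams API):

    import java.util.Properties;
    import java.util.regex.Pattern;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class RegexSourceRepro {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "regex-source-repro");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
            props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

            KStreamBuilder builder = new KStreamBuilder();
            // One KStream with a regex source plus a second KStream (regex or
            // named topic) in the same KafkaStreams instance: the combination
            // reported to trigger the NPE on the "follower" instance.
            KStream<String, String> fooStream = builder.stream(Pattern.compile("foo.*"));
            KStream<String, String> otherStream = builder.stream("some-named-topic");

            KafkaStreams streams = new KafkaStreams(builder, props);
            streams.start();
        }
    }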





Re: Request to get added to contributor list

2016-10-11 Thread Jason Gustafson
Done. Thanks for contributing!

-Jason

On Tue, Oct 11, 2016 at 3:25 AM, HuXi  wrote:

> Hi, can I get added to the contributor list? I'd like to take a crack at
> some issues. Thank you. My JIRA id: huxi_2b
>


[jira] [Commented] (KAFKA-4117) Cleanup StreamPartitionAssignor behavior

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565808#comment-15565808
 ] 

ASF GitHub Bot commented on KAFKA-4117:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/2012

[WIP] KAFKA-4117: Stream partitionassignro cleanup

1. Create a new `ClientMetadata` to collapse `Set consumerMemberIds`, 
`ClientState state`, and `HostInfo hostInfo`.

2. Stop reusing `stateChangelogTopicToTaskIds` and 
`internalSourceTopicToTaskIds` to access the (sub-)topology's internal 
repartition and changelog topics for clarity; also use the source topics 
num.partitions to set the num.partitions for repartition topics, and clarify to 
NOT have cycles since otherwise the while loop will fail.

3. `ensure-copartition` at the end to modify the number of partitions for 
repartition topics if necessary to be equal to other co-partition topics.

4. refactor `ClientState` as well and update the logic of `TaskAssignor` 
for clarity as well.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
K4117-stream-partitionassignro-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2012.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2012


commit fb0ad26e27cf3310d1ff9ad1c08f2d0e4e489496
Author: Guozhang Wang 
Date:   2016-10-07T23:03:52Z

group client metadata

commit 2a81a8eedb9c7b16e1446b580c9c006afa78ffac
Author: Guozhang Wang 
Date:   2016-10-08T03:15:13Z

cleanup co-partitioning process

commit f44d07cffc065f7abaeefcb5344484d44651dfb0
Author: Guozhang Wang 
Date:   2016-10-08T03:52:11Z

re-order assignment logic

commit ec369149b036bfeaff4f743ef7f7bfe93819e641
Author: Guozhang Wang 
Date:   2016-10-08T03:56:28Z

create changelog with compaction

commit 41e8c758390614dd8c66e29bbd92bb8d29bef26b
Author: Guozhang Wang 
Date:   2016-10-08T22:04:24Z

fix unit test

commit 6296da70a17f1c8129a35189890ebb117a130b8f
Author: Guozhang Wang 
Date:   2016-10-11T15:38:04Z

refactor client state




> Cleanup StreamPartitionAssignor behavior
> 
>
> Key: KAFKA-4117
> URL: https://issues.apache.org/jira/browse/KAFKA-4117
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: architecture
>
> I went through the whole assignment logic once again and I feel the logic has 
> now become a bit lossy, and I want to clean it up, probably in another PR, 
> but just dump my thoughts here on the appropriate logic:
> Some background:
> 1. Each {{KafkaStreams}} instance contains a clientId, and if not specified 
> default value is applicationId-1/2/etc if there are multiple instances inside 
> the same JVM. One instance contains multiple threads where the 
> thread-clientId is constructed as clientId-StreamThread-1/2/etc, and the 
> thread-clientId is used as the embedded consumer clientId as well as metrics 
> tag.
> 2. However, since one instance can contain multiple threads, and hence 
> multiple consumers, when considering partition assignment the streams 
> library needs to take the capacity into consideration at the granularity 
> of instances, not threads. Therefore we create a 4byte {{UUID.randomUUID()}} 
> as the processId and encode that in the subscription metadata bytes, and the 
> leader then knows whether multiple consumer members actually belong to the 
> same instance (i.e. belong to threads of that instance), so that when 
> assigning partitions it can balance among instances. NOTE that in production 
> we recommend one thread per instance, so consumersByClient will only have one 
> consumer per client (i.e. instance).
> 3. In addition, historically we hard-code the partition grouper logic, where 
> for each task, it is assigned only with one partition of its subscribed 
> topic. For example, if we have topicA with 5 partitions and topicB with 10 
> partitions, we will create 10 tasks, with the first five tasks containing one 
> of the partitions each, while the last five tasks contain only one partition 
> from topicB. And therefore the TaskId class contains the groupId of the 
> sub-topology and the partition, so that taskId(group, 1) gets partition1 of 
> topicA and partition1 of topicB. We later exposed this to users to customize 
> so that more than one partition of the topic can be assigned to the same 
> task, so that the partition field in the TaskId no 

[GitHub] kafka pull request #2012: [WIP] KAFKA-4117: Stream partitionassignro cleanup

2016-10-11 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/2012

[WIP] KAFKA-4117: Stream partitionassignro cleanup

1. Create a new `ClientMetadata` to collapse `Set consumerMemberIds`, 
`ClientState state`, and `HostInfo hostInfo`.

2. Stop reusing `stateChangelogTopicToTaskIds` and 
`internalSourceTopicToTaskIds` to access the (sub-)topology's internal 
repartition and changelog topics for clarity; also use the source topics 
num.partitions to set the num.partitions for repartition topics, and clarify to 
NOT have cycles since otherwise the while loop will fail.

3. `ensure-copartition` at the end to modify the number of partitions for 
repartition topics if necessary to be equal to other co-partition topics.

4. refactor `ClientState` as well and update the logic of `TaskAssignor` 
for clarity as well.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
K4117-stream-partitionassignro-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2012.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2012


commit fb0ad26e27cf3310d1ff9ad1c08f2d0e4e489496
Author: Guozhang Wang 
Date:   2016-10-07T23:03:52Z

group client metadata

commit 2a81a8eedb9c7b16e1446b580c9c006afa78ffac
Author: Guozhang Wang 
Date:   2016-10-08T03:15:13Z

cleanup co-partitioning process

commit f44d07cffc065f7abaeefcb5344484d44651dfb0
Author: Guozhang Wang 
Date:   2016-10-08T03:52:11Z

re-order assignment logic

commit ec369149b036bfeaff4f743ef7f7bfe93819e641
Author: Guozhang Wang 
Date:   2016-10-08T03:56:28Z

create changelog with compaction

commit 41e8c758390614dd8c66e29bbd92bb8d29bef26b
Author: Guozhang Wang 
Date:   2016-10-08T22:04:24Z

fix unit test

commit 6296da70a17f1c8129a35189890ebb117a130b8f
Author: Guozhang Wang 
Date:   2016-10-11T15:38:04Z

refactor client state






Can broker recover to alive when zk callback miss?

2016-10-11 Thread 涂扬
hi,
we hit an issue where the ephemeral node of a broker in ZooKeeper was lost 
when the network between the broker and the ZooKeeper cluster was not good enough, while the 
broker process itself was still alive. As we know, the controller then considers the broker 
to be offline in Kafka. After enabling the zkClient log, we could see that when the 
connection state between the broker and the ZooKeeper cluster changed from Disconnected to 
Connected, the newSession callback was not invoked, so this
broker cannot recover to alive without a restart.
So we decided to add a heartbeat mechanism at the application layer 
between client and broker, separate from the zkClient heartbeat. Can we 
immediately re-register this broker in ZooKeeper when we detect that its ephemeral node is 
no longer present under the zk path, or how else can we solve it?
The main problem is that the watch callback can be missed; how can we 
handle that?
Thanks.
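
As a rough illustration of the re-registration idea (this is not what the
broker does internally; Apache Curator is used here for brevity, the znode
path and payload are placeholders, and this does not address the root cause
of the missed session callback):

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import org.apache.zookeeper.CreateMode;

    public class BrokerRegistrationChecker {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk1:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            String path = "/brokers/ids/0";            // placeholder broker id
            byte[] registration = "{...}".getBytes();  // placeholder payload

            // Run this check periodically (or from an external heartbeat):
            // if the ephemeral node is gone but the broker is alive, re-create it.
            if (client.checkExists().forPath(path) == null) {
                client.create()
                      .withMode(CreateMode.EPHEMERAL)
                      .forPath(path, registration);
            }
            client.close();
        }
    }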


[jira] [Commented] (KAFKA-4291) TopicCommand --describe shows topics marked for deletion as under-replicated and unavailable

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565750#comment-15565750
 ] 

ASF GitHub Bot commented on KAFKA-4291:
---

GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/2011

KAFKA-4291: TopicCommand --describe shows topics marked for deletion …

…as under-replicated and unavailable

TopicCommand --describe now shows if a topic is marked for deletion.

Developed with @edoardocomar 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-4291

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2011.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2011


commit a849efdad6cd1611580e17e76ccac61c9b503a31
Author: Mickael Maison 
Date:   2016-10-11T15:21:15Z

KAFKA-4291: TopicCommand --describe shows topics marked for deletion as 
under-replicated and unavailable

TopicCommand --describe now shows if a topic is marked for deletion.




> TopicCommand --describe shows topics marked for deletion as under-replicated 
> and unavailable
> 
>
> Key: KAFKA-4291
> URL: https://issues.apache.org/jira/browse/KAFKA-4291
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.0.1
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>
> When using kafka-topics.sh --describe with --under-replicated-partitions or 
> --unavailable-partitions, topics marked for deletion are listed.
> While this is debatable whether we want to list such topics this way, it 
> should at least print that the topic is marked for deletion, like --list 
> does. 





[GitHub] kafka pull request #2011: KAFKA-4291: TopicCommand --describe shows topics m...

2016-10-11 Thread mimaison
GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/2011

KAFKA-4291: TopicCommand --describe shows topics marked for deletion …

…as under-replicated and unavailable

TopicCommand --describe now shows if a topic is marked for deletion.

Developed with @edoardocomar 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka KAFKA-4291

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2011.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2011


commit a849efdad6cd1611580e17e76ccac61c9b503a31
Author: Mickael Maison 
Date:   2016-10-11T15:21:15Z

KAFKA-4291: TopicCommand --describe shows topics marked for deletion as 
under-replicated and unavailable

TopicCommand --describe now shows if a topic is marked for deletion.






Re: [DISCUSS] KIP-84: Support SASL/SCRAM mechanisms

2016-10-11 Thread Rajini Sivaram
I have created KIP-86 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers)
to enable pluggable credential providers.

On Mon, Oct 10, 2016 at 7:40 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> Gwen,
>
> Thank you, will raise a separate KIP for a pluggable interface.
>
> On Mon, Oct 10, 2016 at 5:55 PM, Gwen Shapira  wrote:
>
>> I think it is fine to break the password store to an interface in a
>> separate KIP. I actually love the idea of smaller KIPs dealing with
>> more specific functionality. I just wasn't clear why it was rejected.
>>
>> Thank you for clarifying. I'm happy with current proposal.
>>
>> Gwen
>>
>> On Mon, Oct 10, 2016 at 2:17 AM, Rajini Sivaram
>>  wrote:
>> > Gwen,
>> >
>> > Thank you for reviewing the KIP.
>> >
>> > There has been interest in making the password verification in
>> SASL/PLAIN
>> > more pluggable. So I think it makes sense to have a pluggable interface
>> > that can be adopted for any SASL mechanism rather than just SCRAM. With
>> the
>> > current proposal, you can plugin another Scram SaslServer implementation
>> > with a different password store. This is similar to the current
>> SASL/PLAIN
>> > implementation.
>> >
>> > I agree that it will be good to make password stores more pluggable
>> rather
>> > than require users to override the whole SaslServer. I was going to look
>> > into this later, but I can do it as part of this KIP. Will update the
>> KIP
>> > with a pluggable interface.
>> >
>> > Thank you,
>> >
>> > Rajini
>> >
>> >
>> > On Fri, Oct 7, 2016 at 11:37 PM, Gwen Shapira 
>> wrote:
>> >
>> >> Can you talk more about rejecting the option of making the password
>> >> store pluggable? I am a bit uncomfortable with making ZK the one and
>> >> only password store...
>> >>
>> >> On Tue, Oct 4, 2016 at 6:43 AM, Rajini Sivaram
>> >>  wrote:
>> >> > Hi all,
>> >> >
>> >> > I have just created KIP-84 to add SCRAM-SHA-1 and SCRAM-SHA-256 SASL
>> >> > mechanisms to Kafka:
>> >> >
>> >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> >> 84%3A+Support+SASL+SCRAM+mechanisms
>> >> >
>> >> >
>> >> > Comments and suggestions are welcome.
>> >> >
>> >> > Thank you...
>> >> >
>> >> > Regards,
>> >> >
>> >> > Rajini
>> >>
>> >>
>> >>
>> >> --
>> >> Gwen Shapira
>> >> Product Manager | Confluent
>> >> 650.450.2760 | @gwenshap
>> >> Follow us: Twitter | blog
>> >>
>> >
>> >
>> >
>> > --
>> > Regards,
>> >
>> > Rajini
>>
>>
>>
>> --
>> Gwen Shapira
>> Product Manager | Confluent
>> 650.450.2760 | @gwenshap
>> Follow us: Twitter | blog
>>
>
>
>
> --
> Regards,
>
> Rajini
>



-- 
Regards,

Rajini


[jira] [Created] (KAFKA-4292) Configurable SASL callback handlers

2016-10-11 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-4292:
-

 Summary: Configurable SASL callback handlers
 Key: KAFKA-4292
 URL: https://issues.apache.org/jira/browse/KAFKA-4292
 Project: Kafka
  Issue Type: Improvement
  Components: security
Affects Versions: 0.10.1.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 0.10.2.0


Implementation of KIP-86: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers





[DISCUSS] KIP-86: Configurable SASL callback handlers

2016-10-11 Thread Rajini Sivaram
Hi all,

I have just created KIP-86 to make callback handlers in SASL configurable so
that credential providers for SASL/PLAIN (and SASL/SCRAM when it is
implemented) can be used with custom credential callbacks:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers
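
As a rough illustration only (the interface proposed by KIP-86 is the one
described on the wiki page above), a custom credential callback handler built
on the standard javax.security.auth.callback API could look like the sketch
below; the credential lookup is a placeholder:

    import java.io.IOException;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.callback.UnsupportedCallbackException;

    public class CustomCredentialCallbackHandler implements CallbackHandler {
        @Override
        public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
            String username = null;
            for (Callback callback : callbacks) {
                if (callback instanceof NameCallback) {
                    NameCallback nameCallback = (NameCallback) callback;
                    username = nameCallback.getDefaultName();
                    nameCallback.setName(username);
                } else if (callback instanceof PasswordCallback) {
                    // Placeholder: a real handler would consult an external
                    // credential store (LDAP, database, file, ...).
                    ((PasswordCallback) callback).setPassword(lookupPassword(username));
                } else {
                    throw new UnsupportedCallbackException(callback);
                }
            }
        }

        private char[] lookupPassword(String username) {
            return "not-a-real-secret".toCharArray();  // placeholder credential
        }
    }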

Comments and suggestions are welcome.

Thank you...


Regards,

Rajini


Build failed in Jenkins: kafka-0.10.1-jdk7 #65

2016-10-11 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4290: Fix timeout overflow in WorkerCoordinator.poll

--
[...truncated 4984 lines...]

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreGroupErrorMapping STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreGroupErrorMapping PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup 
PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols STARTED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata STARTED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
STARTED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol STARTED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
STARTED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute STARTED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute STARTED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream STARTED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey STARTED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum STARTED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp STARTED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable STARTED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination STARTED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping STARTED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues STARTED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte STARTED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality STARTED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion STARTED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.MessageCompressionTest > testCompressSize STARTED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testLZ4FramingV0 STARTED

kafka.message.MessageCompressionTest > testLZ4FramingV0 PASSED

kafka.message.MessageCompressionTest > testLZ4FramingV1 STARTED

kafka.message.MessageCompressionTest > testLZ4FramingV1 PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress STARTED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq 
STARTED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testWriteFullyTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteFullyTo PASSED


Re: Store flushing on commit.interval.ms from KIP-63 introduces aggregation latency

2016-10-11 Thread Eno Thereska
Hi Greg,

An alternative would be to set up RocksDB's cache, while keeping the streams 
cache to 0. That might give you what you need, especially if you can work with 
RocksDb and don't need to change the store.

For example, here is how to set the Block Cache size to 100MB and the Write 
Buffer size to 32MB

https://github.com/facebook/rocksdb/wiki/Block-Cache 

https://github.com/facebook/rocksdb/wiki/Basic-Operations#write-buffer 


You can override these settings by creating an implementation of RocksDBConfigSetter and 
setting StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG in Kafka Streams.
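
For example, a minimal sketch of such a config setter (the class name is
illustrative, assuming the 0.10.1 RocksDBConfigSetter signature):

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.Options;

    public class CustomRocksDBConfig implements RocksDBConfigSetter {
        @Override
        public void setConfig(String storeName, Options options, Map<String, Object> configs) {
            // 100MB block cache, set through the table format config.
            BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
            tableConfig.setBlockCacheSize(100 * 1024 * 1024L);
            options.setTableFormatConfig(tableConfig);
            // 32MB write buffer.
            options.setWriteBufferSize(32 * 1024 * 1024L);
        }
    }

You would then set StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG to
CustomRocksDBConfig.class, and keep CACHE_MAX_BYTES_BUFFERING_CONFIG at 0 if
you want the streams cache off, as discussed earlier in the thread.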

Hope this helps,
Eno

> On 10 Oct 2016, at 18:19, Greg Fodor  wrote:
> 
> Hey Eno, thanks for the suggestion -- understood that my patch is not
> something that could be accepted given the API change, I posted it to help
> make the discussion concrete and because i needed a workaround. (Likely
> we'll maintain this patch internally so we can move forward with the new
> version, since the consumer heartbeat issue is something we really need
> addressed.)
> 
> Looking at the code, it seems that setting the cache size to zero will
> disable all caching. However, the previous version of Kafka Streams had a
> local cache within the RocksDBStore to reduce I/O. If we were to set the
> cache size to zero, my guess is we'd see a large increase in I/O relative
> to the previous version since we would no longer have caching of any kind
> even intra-store. By the looks of it there isn't an easy way to replicate
> the same caching behavior as the old version of Kafka Streams in the new
> system without increasing latency, but maybe I'm missing something.
> 
> 
> On Oct 10, 2016 3:10 AM, "Eno Thereska"  wrote:
> 
>> Hi Greg,
>> 
>> Thanks for trying 0.10.1. The best option you have for your specific app
>> is to simply turn off caching by setting the cache size to 0. That should
>> give you the old behaviour:
>> streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
>> 0L);
>> 
>> Your PR is an alternative, but it requires changing the APIs and would
>> require a KIP.
>> 
>> Thanks
>> Eno
>> 
>>> On 9 Oct 2016, at 23:49, Greg Fodor  wrote:
>>> 
>>> JIRA opened here: https://issues.apache.org/jira/browse/KAFKA-4281
>>> 
>>> On Sun, Oct 9, 2016 at 2:02 AM, Greg Fodor  wrote:
 I went ahead and did some more testing, and it feels to me one option
 for resolving this issue is having a method on KGroupedStream which
 can be used to configure if the operations on it (reduce/aggregate)
 will forward immediately or not. I did a quick patch and was able to
 determine that if the records are forwarded immediately it resolves
 the issue I am seeing. Having it be done on a per-KGroupedStream basis
 would provide maximum flexibility.
 
 On Sun, Oct 9, 2016 at 1:06 AM, Greg Fodor  wrote:
> I'm taking 0.10.1 for a spin on our existing Kafka Streams jobs and
> I'm hitting what seems to be a serious issue (at least, for us) with
> the changes brought about in KIP-63. In our job, we have a number of
> steps in the topology where we perform a repartition and aggregation
> on topics that require low latency. These topics have a very low
> message volume but require subsecond latency for the aggregations to
> complete since they are configuration data that drive the rest of the
> job and need to be applied immediately.
> 
> In 0.10.0, we performed a through (for repartitioning) and aggregateBy
> and this resulted in minimal latency as the aggregateBy would just
> result in a consumer attached to the output of the through and the
> processor would consume + aggregate messages immediately passing them
> to the next step in the topology.
> 
> However, in 0.10.1 the aggregateBy API is no longer available and it
> is necessary to pivot the data through a groupByKey and then
> aggregate(). The problem is that this mechanism results in the
> intermediate KTable state store storing the data as usual, but the
> data is not forwarded downstream until the next store flush. (Due to
> the use of ForwardingCacheFlushListener instead of calling forward()
> during the process of the record.)
> 
> As noted in KIP-63 and as I saw in the code, the flush interval of
> state stores is commit.interval.ms. For us, this has been tuned to a
> few seconds, and since we have a number of these aggregations in our
> job sequentially, this now results in many seconds of latency in the
> worst case for a tuple to travel through our topology.
> 
> It seems too inflexible to have the flush interval always be the same
> as the commit interval across all aggregates. For certain aggregations
> 

[jira] [Created] (KAFKA-4291) TopicCommand --describe shows topics marked for deletion as under-replicated and unavailable

2016-10-11 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-4291:
-

 Summary: TopicCommand --describe shows topics marked for deletion 
as under-replicated and unavailable
 Key: KAFKA-4291
 URL: https://issues.apache.org/jira/browse/KAFKA-4291
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Affects Versions: 0.10.0.1
Reporter: Mickael Maison
Assignee: Mickael Maison


When using kafka-topics.sh --describe with --under-replicated-partitions or 
--unavailable-partitions, topics marked for deletion are listed.

While it is debatable whether we want to list such topics this way, it should 
at least print that the topic is marked for deletion, like --list does. 





Request to get added to contributor list

2016-10-11 Thread HuXi
Hi, can I get added to the contributor list? I'd like to take a crack at some 
issues. Thank you. My JIRA id: huxi_2b

[jira] [Commented] (KAFKA-4130) [docs] Link to Varnish architect notes is broken

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565051#comment-15565051
 ] 

ASF GitHub Bot commented on KAFKA-4130:
---

Github user oscerd closed the pull request at:

https://github.com/apache/kafka/pull/1835


> [docs] Link to Varnish architect notes is broken
> 
>
> Key: KAFKA-4130
> URL: https://issues.apache.org/jira/browse/KAFKA-4130
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Stevo Slavic
>Assignee: Andrea Cosentino
>Priority: Trivial
>
> Paragraph in Kafka documentation
> {quote}
> This style of pagecache-centric design is described in an article on the 
> design of Varnish here (along with a healthy dose of arrogance). 
> {quote}
> contains a broken link.
> Should probably link to http://varnish-cache.org/wiki/ArchitectNotes





[jira] [Commented] (KAFKA-3385) Need to log "Rejected connection" as WARNING message

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15565050#comment-15565050
 ] 

ASF GitHub Bot commented on KAFKA-3385:
---

Github user oscerd closed the pull request at:

https://github.com/apache/kafka/pull/1144


> Need to log "Rejected connection" as WARNING message
> 
>
> Key: KAFKA-3385
> URL: https://issues.apache.org/jira/browse/KAFKA-3385
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiaomin Zhang
>Assignee: Andrea Cosentino
>Priority: Minor
>
> We may found below INFO messages in the log due to inappropriate 
> configuration:
> INFO kafka.network. Acceptor: Rejected connection from /, address already 
> has the configured maximum of 10 connections.
> It will make more sense for Kafka to report above message as "WARN", not just 
> "INFO", as it truly indicates something need to check against. 





[GitHub] kafka pull request #1835: [DOCS] KAFKA-4130: Link to Varnish architect notes...

2016-10-11 Thread oscerd
Github user oscerd closed the pull request at:

https://github.com/apache/kafka/pull/1835




[GitHub] kafka pull request #1144: KAFKA-3385: Need to log "Rejected connection" as W...

2016-10-11 Thread oscerd
Github user oscerd closed the pull request at:

https://github.com/apache/kafka/pull/1144




Jenkins build is back to normal : kafka-trunk-jdk8 #968

2016-10-11 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : kafka-trunk-jdk7 #1621

2016-10-11 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-0.10.1-jdk7 #64

2016-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (Ubuntu ubuntu ubuntu-eu docker) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Recording test results
ERROR: Build step failed with exception
 does not exist.
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at 
org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:127)
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:107)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to ubuntu-eu2(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:781)
at hudson.FilePath.act(FilePath.java:1007)
at hudson.FilePath.act(FilePath.java:996)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:103)
at 
hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:128)
at 
hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:149)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:78)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720)
at hudson.model.Build$BuildExecution.post2(Build.java:185)
at 

Build failed in Jenkins: kafka-0.10.1-jdk7 #63

2016-10-11 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Add images missing from documentation

--
[...truncated 2407 lines...]

kafka.api.AdminClientTest > testListGroups STARTED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.ProducerBounceTest > testBrokerFailure STARTED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.ClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.ClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.ClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms STARTED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:931)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:404)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:609)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:574)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1446)
at hudson.model.AbstractProject.poll(AbstractProject.java:1349)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:528)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:557)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #967

2016-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (Ubuntu ubuntu ubuntu-eu docker) in workspace 

Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/kafka.git
 > git init  # timeout=10
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not init 

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:652)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:463)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to ubuntu-eu2(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor526.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy177.execute(Unknown Source)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1046)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1086)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git init 
 returned status code 1:
stdout: 
stderr: : No space left 
on device

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1723)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1699)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1695)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1317)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:650)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:463)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: null
Retrying after 10 seconds
Cloning the remote Git repository
Cloning repository 

Jenkins build is back to normal : kafka-trunk-jdk8 #966

2016-10-11 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk7 #1620

2016-10-11 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (Ubuntu ubuntu ubuntu-eu docker) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1191)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1267)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Recording test results
ERROR: Build step failed with exception
 does not exist.
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:127)
at hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:107)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2772)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
at hudson.remoting.Request$2.run(Request.java:332)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to ubuntu-eu2(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
at hudson.remoting.Channel.call(Channel.java:781)
at hudson.FilePath.act(FilePath.java:1007)
at hudson.FilePath.act(FilePath.java:996)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:103)
at hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:128)
at hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:149)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:78)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:720)
at hudson.model.Build$BuildExecution.post2(Build.java:185)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:665)
at 

Build failed in Jenkins: kafka-trunk-jdk7 #1619

2016-10-11 Thread Apache Jenkins Server
See 

Changes:

[jason] MINOR: Add images missing from documentation

[me] KAFKA-4010: add ConfigDef toEnrichedRst() for additional fields in

--
[...truncated 4982 lines...]

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreGroupErrorMapping STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreGroupErrorMapping PASSED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure STARTED

kafka.coordinator.GroupMetadataManagerTest > testCommitOffsetFailure PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffset PASSED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testExpireOffsetsWithActiveGroup PASSED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup STARTED

kafka.coordinator.GroupMetadataManagerTest > testStoreEmptyGroup PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols STARTED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata STARTED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol STARTED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol STARTED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols STARTED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute STARTED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute STARTED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream STARTED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey STARTED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.MessageTest > testChecksum STARTED

kafka.message.MessageTest > testChecksum PASSED

kafka.message.MessageTest > testInvalidTimestamp STARTED

kafka.message.MessageTest > testInvalidTimestamp PASSED

kafka.message.MessageTest > testIsHashable STARTED

kafka.message.MessageTest > testIsHashable PASSED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination STARTED

kafka.message.MessageTest > testInvalidTimestampAndMagicValueCombination PASSED

kafka.message.MessageTest > testExceptionMapping STARTED

kafka.message.MessageTest > testExceptionMapping PASSED

kafka.message.MessageTest > testFieldValues STARTED

kafka.message.MessageTest > testFieldValues PASSED

kafka.message.MessageTest > testInvalidMagicByte STARTED

kafka.message.MessageTest > testInvalidMagicByte PASSED

kafka.message.MessageTest > testEquality STARTED

kafka.message.MessageTest > testEquality PASSED

kafka.message.MessageTest > testMessageFormatConversion STARTED

kafka.message.MessageTest > testMessageFormatConversion PASSED

kafka.message.MessageCompressionTest > testCompressSize STARTED

kafka.message.MessageCompressionTest > testCompressSize PASSED

kafka.message.MessageCompressionTest > testLZ4FramingV0 STARTED

kafka.message.MessageCompressionTest > testLZ4FramingV0 PASSED

kafka.message.MessageCompressionTest > testLZ4FramingV1 STARTED

kafka.message.MessageCompressionTest > testLZ4FramingV1 PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress STARTED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq STARTED

kafka.message.ByteBufferMessageSetTest > testMessageWithProvidedOffsetSeq PASSED

kafka.message.ByteBufferMessageSetTest > testWriteFullyTo STARTED


[jira] [Updated] (KAFKA-4290) High CPU caused by timeout overflow in WorkerCoordinator

2016-10-11 Thread Ewen Cheslack-Postava (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ewen Cheslack-Postava updated KAFKA-4290:
-
Fix Version/s: 0.10.2.0

> High CPU caused by timeout overflow in WorkerCoordinator
> 
>
> Key: KAFKA-4290
> URL: https://issues.apache.org/jira/browse/KAFKA-4290
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> The timeout passed to {{WorkerCoordinator.poll()}} can overflow if large 
> enough because we add it to the current time in order to calculate the call's 
> deadline. This shortcuts the poll loop and results in a very tight event loop 
> which can saturate a CPU. We hit this case out of the box because Connect 
> uses a default timeout of {{Long.MAX_VALUE}}.
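
To make the failure mode concrete, here is a minimal, self-contained sketch of the overflow described above. The class and method names are illustrative only; this is not the actual {{WorkerCoordinator}} code nor the fix merged in the referenced pull request. It simply shows how adding {{Long.MAX_VALUE}} to the current time wraps to a negative deadline, so a deadline-checked loop falls through immediately, and it includes one possible saturating-add guard.

{code:java}
// Minimal sketch (illustrative names, not the real WorkerCoordinator) of the
// Long.MAX_VALUE timeout overflow and one possible saturating-add guard.
public class TimeoutOverflowSketch {

    // Hypothetical poll loop that keeps working until the computed deadline.
    static int pollUntilDeadline(long timeoutMs) {
        long now = System.currentTimeMillis();
        long deadline = now + timeoutMs; // wraps to a negative value for Long.MAX_VALUE
        int iterations = 0;
        while (System.currentTimeMillis() <= deadline && iterations < 5) {
            iterations++; // never entered once the deadline has overflowed
        }
        return iterations;
    }

    // Saturating addition: clamp to Long.MAX_VALUE instead of wrapping around.
    static long saturatedDeadline(long now, long timeoutMs) {
        return timeoutMs > Long.MAX_VALUE - now ? Long.MAX_VALUE : now + timeoutMs;
    }

    public static void main(String[] args) {
        System.out.println(pollUntilDeadline(Long.MAX_VALUE)); // 0: loop body is skipped
        System.out.println(pollUntilDeadline(1000L));          // 5: behaves as expected
        System.out.println(saturatedDeadline(System.currentTimeMillis(), Long.MAX_VALUE));
    }
}
{code}

A caller that re-invokes such a poll() whenever it returns immediately would spin without ever blocking, which matches the CPU saturation described in the report.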



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4290) High CPU caused by timeout overflow in WorkerCoordinator

2016-10-11 Thread Ewen Cheslack-Postava (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ewen Cheslack-Postava resolved KAFKA-4290.
--
Resolution: Fixed
  Reviewer: Ewen Cheslack-Postava

> High CPU caused by timeout overflow in WorkerCoordinator
> 
>
> Key: KAFKA-4290
> URL: https://issues.apache.org/jira/browse/KAFKA-4290
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> The timeout passed to {{WorkerCoordinator.poll()}} can overflow if large 
> enough because we add it to the current time in order to calculate the call's 
> deadline. This shortcuts the poll loop and results in a very tight event loop 
> which can saturate a CPU. We hit this case out of the box because Connect 
> uses a default timeout of {{Long.MAX_VALUE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4290) High CPU caused by timeout overflow in WorkerCoordinator

2016-10-11 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564640#comment-15564640 ]

ASF GitHub Bot commented on KAFKA-4290:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2009


> High CPU caused by timeout overflow in WorkerCoordinator
> 
>
> Key: KAFKA-4290
> URL: https://issues.apache.org/jira/browse/KAFKA-4290
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> The timeout passed to {{WorkerCoordinator.poll()}} can overflow if large 
> enough because we add it to the current time in order to calculate the call's 
> deadline. This shortcuts the poll loop and results in a very tight event loop 
> which can saturate a CPU. We hit this case out of the box because Connect 
> uses a default timeout of {{Long.MAX_VALUE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2009: KAFKA-4290: Fix timeout overflow in WorkerCoordina...

2016-10-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2009


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes to enable it, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---