Build failed in Jenkins: kafka-trunk-jdk8 #2353

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[becket.qin] KAFKA-6321; Consolidate calls to KafkaConsumer's 
`beginningOffsets()`

--
[...truncated 1.43 MB...]
org.apache.kafka.common.config.ConfigDefTest > testValidate STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidate PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testInternalConfigDoesntShowUpInDocs STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testInternalConfigDoesntShowUpInDocs PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords STARTED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > toRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig STARTED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
PASSED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice STARTED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass STARTED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs STARTED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
STARTED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
STARTED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testEmptyList STARTED

org.apache.kafka.common.config.AbstractConfigTest > testEmptyList PASSED

org.apache.kafka.common.config.AbstractConfigTest > 
testValuesWithPrefixOverride STARTED

org.apache.kafka.common.config.AbstractConfigTest > 
testValuesWithPrefixOverride PA

Build failed in Jenkins: kafka-trunk-jdk7 #3119

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[becket.qin] KAFKA-6321; Consolidate calls to KafkaConsumer's 
`beginningOffsets()`

--
[...truncated 405.44 KB...]
kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesReque

Build failed in Jenkins: kafka-trunk-jdk8 #2352

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-4897; Add pause method to ShutdownableThread (#4393)

--
[...truncated 1.62 MB...]

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled 
STARTED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled PASSED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled 
STARTED

org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosDisabled 
PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldRestoreTransactionalMessages PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldProcessDataFromStoresWithLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldProcessDataFromStoresWithLoggingDisabled PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInne

[jira] [Created] (KAFKA-6489) Fetcher.retrieveOffsetsByTimes() should add all the topics to the metadata refresh topics set.

2018-01-25 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-6489:
---

 Summary: Fetcher.retrieveOffsetsByTimes() should add all the 
topics to the metadata refresh topics set.
 Key: KAFKA-6489
 URL: https://issues.apache.org/jira/browse/KAFKA-6489
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer
Affects Versions: 1.0.0
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Fix For: 1.1.0


Currently, if users call KafkaConsumer.offsetsForTimes() with a large set of 
partitions, the consumer adds one topic at a time to the metadata refresh. 
We should add all the topics to the metadata refresh topics set and do just 
one metadata refresh.
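
For illustration, here is a minimal sketch of the batching idea. The MetadataTracker interface below is a hypothetical stand-in, not the real clients-internal metadata class; it only shows the difference between one refresh per topic and one refresh for all topics.

import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for the consumer's metadata tracker; the real class
// in the Kafka clients differs. This only illustrates the batching idea.
interface MetadataTracker {
    void addTopics(Set<String> topics); // register topics of interest
    void requestUpdateAndAwait();       // trigger one metadata refresh
}

public class OffsetsForTimesSketch {
    // Instead of adding one topic and refreshing per partition (N refreshes),
    // collect every topic referenced by the request and refresh once.
    static void prepareMetadata(MetadataTracker metadata,
                                Map<String, List<Integer>> partitionsByTopic) {
        Set<String> topics = new HashSet<>(partitionsByTopic.keySet());
        metadata.addTopics(topics);       // add all topics up front
        metadata.requestUpdateAndAwait(); // a single refresh covers them all
    }
}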





Re: [DISCUSS] KIP-250 Add Support for Quorum-based Producer Acknowledgment

2018-01-25 Thread Dong Lin
Hey Litao,

Not sure there will be an easy way to select the broker with the highest LEO
without losing acknowledged messages. In case it is useful, here is another
idea: maybe we can have a mechanism to switch between the min.isr and ISR
sets when determining when to acknowledge a message. The controller could
probably use an RPC to ask the current leader to use the ISR set before it
sends the LeaderAndIsrRequest for the leadership change.

Regards,
Dong
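
In case it helps to see the two policies side by side, here is a minimal sketch (a hypothetical helper, not actual broker code) of acknowledging once min.insync.replicas copies exist versus waiting for the full ISR:

import java.util.Set;

// Hypothetical illustration of the two ack policies discussed in this thread;
// the real broker logic lives in Partition/ReplicaManager and is more involved.
public class AckPolicySketch {
    enum Mode { MIN_ISR, FULL_ISR }

    // ackedReplicas: replicas known to have the record at or beyond its offset.
    static boolean canAck(Mode mode, Set<Integer> ackedReplicas,
                          Set<Integer> isr, int minIsr) {
        switch (mode) {
            case MIN_ISR:
                // KIP-250 style: enough copies exist, even if some ISR members lag.
                return ackedReplicas.size() >= minIsr;
            case FULL_ISR:
            default:
                // Current acks=all behavior: every ISR member must have the record.
                return ackedReplicas.containsAll(isr);
        }
    }
}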


On Wed, Jan 24, 2018 at 7:29 PM, Litao Deng 
wrote:

> Thanks Jun for the detailed feedback.
>
> Yes, for #1, I mean the live replicas from the ISR.
>
> Actually, I believe for all of the 4 new leader election strategies
> (offline, reassign, preferred replica and controlled shutdown), we need to
> make corresponding changes. Will document the details in the KIP.
>
> On Wed, Jan 24, 2018 at 3:59 PM, Jun Rao  wrote:
>
> > Hi, Litao,
> >
> > Thanks for the KIP. Good proposal. A few comments below.
> >
> > 1. The KIP says "select the live replica with the largest LEO".  I guess
> > what you meant is selecting the live replicas in ISR with the largest
> LEO?
> >
> > 2. I agree that we can probably just reuse the current min.isr
> > configuration, but with a slightly different semantics. Currently, if
> > min.isr is set, a user expects the record to be in at least min.isr
> > replicas on successful ack. This KIP guarantees this too. Most people are
> > probably surprised that currently the ack is only sent back after all
> > replicas in ISR receive the record. This KIP will change the ack to only
> > wait on min.isr replicas, which matches the user's expectation and gives
> > better latency. Currently, we guarantee no data loss if there are fewer
> > than replication factor failures. The KIP changes that to fewer than
> > min.isr failures. The latter probably matches the user expectation.
> >
> > 3. I agree that the new leader election process is a bit more
> complicated.
> > The controller now needs to contact all replicas in ISR to determine who
> > has the longest log. However, this happens infrequently. So, it's
> probably
> > worth doing for the better latency in #2.
> >
> > 4. We have to think through the preferred leader election process.
> > Currently, the first assigned replica is preferred for load balancing.
> > There is a process to automatically move the leader to the preferred
> > replica when it's in sync. The issue is that the preferred replica may not
> > be the replica with the longest log. Naively switching to the preferred
> > replica may cause data loss when there are actually fewer failures than
> > configured min.isr. One way to address this issue is to do the following
> > steps during preferred leader election: (a) controller sends an RPC
> request
> > to the current leader; (b) the current leader stops taking new writes
> > (sending a new error code to the clients) and returns its LEO (call it L)
> > to the controller; (c) the controller issues an RPC request to the
> > preferred replica and waits for its LEO to reach L; (d) the controller 
> changes
> > the leader to the preferred replica.
> >
> > Jun
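
A compressed sketch of the four-step preferred-leader handover Jun describes above. All interface and method names here are hypothetical and used purely for illustration; this is not controller code.

// Hypothetical controller-side pseudocode for steps (a)-(d) above.
interface LeaderHandoff {
    long freezeWritesAndGetLeo(int currentLeader);         // (a)+(b)
    void waitForLeo(int preferredReplica, long targetLeo); // (c)
    void switchLeader(int newLeader);                      // (d)
}

class PreferredElectionSketch {
    static void elect(LeaderHandoff rpc, int currentLeader, int preferredReplica) {
        long leo = rpc.freezeWritesAndGetLeo(currentLeader); // leader stops new writes, returns its LEO L
        rpc.waitForLeo(preferredReplica, leo);               // preferred replica catches up to L
        rpc.switchLeader(preferredReplica);                  // controller moves leadership
    }
}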
> >
> > On Wed, Jan 24, 2018 at 2:51 PM, Litao Deng
>  > >
> > wrote:
> >
> > > Sorry folks, just realized I didn't use the correct thread format for
> the
> > > discussion. I started this new one and copied all of the responses from
> > the
> > > old one.
> > >
> > > @Dong
> > > It makes sense to just use the min.insync.replicas instead of
> > introducing a
> > > new config, and we must make this change together with the LEO-based
> new
> > > leader election.
> > >
> > > @Xi
> > > I thought about embedding the LEO information into the ControllerContext,
> > > but didn't find a way. Using RPC will make the leader election period
> longer
> > > and this should happen in very rare cases (broker failure, controlled
> > > shutdown, preferred leader election and partition reassignment).
> > >
> > > @Jeff
> > > The current leader election is to pick the first replica from AR which
> > > exists both in the live brokers and ISR sets. I agree with you about
> > > changing the current/default behavior will cause much confusion, and
> > > that's the reason the title is "Add Support ...". In this case, we
> > wouldn't
> > > break any current promises and would provide a separate option for our users.
> > > In terms of KIP-250, I feel it is more like the "Semisynchronous
> > > Replication" in the MySQL world, and yes it is something between acks=1
> > and
> > > acks=insync.replicas. Additionally, I feel KIP-250 and KIP-227 are
> > > two orthogonal improvements. KIP-227 is to improve the replication
> > protocol
> > > (like the introduction of parallel replication in MySQL), and KIP-250
> is
> > an
> > > enhancement for the replication architecture (sync, semi-sync, and
> > async).
> > >
> > >
> > > Dong Lin
> > >
> > > > Thanks for the KIP. I have one quick comment before you provide more
> > > detail
> > > > on how to select the leader with the largest LEO.
> > > > Do you think it w

Build failed in Jenkins: kafka-trunk-jdk7 #3118

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-4897; Add pause method to ShutdownableThread (#4393)

--
[...truncated 1.87 MB...]

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets 
STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTe

Jenkins build is back to normal : kafka-trunk-jdk9 #333

2018-01-25 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-232: Detect outdated metadata using leaderEpoch and partitionEpoch

2018-01-25 Thread Dong Lin
Hey Colin,

Thanks for the comment.

On Thu, Jan 25, 2018 at 4:15 PM, Colin McCabe  wrote:

> On Wed, Jan 24, 2018, at 21:07, Dong Lin wrote:
> > Hey Colin,
> >
> > Thanks for reviewing the KIP.
> >
> > If I understand you right, you may be suggesting that we can use a global
> > metadataEpoch that is incremented every time the controller updates metadata.
> > The problem with this solution is that, if a topic is deleted and created
> > again, the user will not know that the offset which was stored before
> > the topic deletion is no longer valid. This motivates the idea to include
> > a per-partition partitionEpoch. Does this sound reasonable?
>
> Hi Dong,
>
> Perhaps we can store the last valid offset of each deleted topic in
> ZooKeeper.  Then, when a topic with one of those names gets re-created, we
> can start the topic at the previous end offset rather than at 0.  This
> preserves immutability.  It is no more burdensome than having to preserve a
> "last epoch" for the deleted partition somewhere, right?
>

My concern with this solution is that the number of ZooKeeper nodes would keep
growing over time if some users keep deleting and creating topics. Do you think
this could be a problem?


>
> >
> > Then the next question may be whether we should use a global metadataEpoch +
> > per-partition partitionEpoch, instead of a per-partition leaderEpoch +
> > per-partition partitionEpoch. The former solution using metadataEpoch would
> > not work due to the following scenario (provided by Jun):
> >
> > "Consider the following scenario. In metadata v1, the leader for a
> > partition is at broker 1. In metadata v2, leader is at broker 2. In
> > metadata v3, leader is at broker 1 again. The last committed offset in
> v1,
> > v2 and v3 are 10, 20 and 30, respectively. A consumer is started and
> reads
> > metadata v1 and reads messages from offset 0 to 25 from broker 1. My
> > understanding is that in the current proposal, the metadata version
> > associated with offset 25 is v1. The consumer is then restarted and
> fetches
> > metadata v2. The consumer tries to read from broker 2, which is the old
> > leader with the last offset at 20. In this case, the consumer will still
> > get OffsetOutOfRangeException incorrectly."
> >
> > Regarding your comment "For the second purpose, this is "soft state"
> > anyway.  If the client thinks X is the leader but Y is really the leader,
> > the client will talk to X, and X will point out its mistake by sending
> back
> > a NOT_LEADER_FOR_PARTITION.", it is probably not true. The problem here is
> > that the old leader X may still think it is the leader of the partition
> and
> > thus it will not send back NOT_LEADER_FOR_PARTITION. The reason is
> provided
> > in KAFKA-6262. Can you check if that makes sense?
>
> This is solvable with a timeout, right?  If the leader can't communicate
> with the controller for a certain period of time, it should stop acting as
> the leader.  We have to solve this problem, anyway, in order to fix all the
> corner cases.
>

Not sure I fully understand your proposal. It seems to require non-trivial
changes to our existing leadership election mechanism. Could you provide more
detail on how it works? For example, how should the user choose this timeout,
how does the leader determine whether it can still communicate with the
controller, and how does this trigger the controller to elect a new leader?


> best,
> Colin
>
> >
> > Regards,
> > Dong
> >
> >
> > On Wed, Jan 24, 2018 at 10:39 AM, Colin McCabe 
> wrote:
> >
> > > Hi Dong,
> > >
> > > Thanks for proposing this KIP.  I think a metadata epoch is a really
> good
> > > idea.
> > >
> > > I read through the DISCUSS thread, but I still don't have a clear
> picture
> > > of why the proposal uses a metadata epoch per partition rather than a
> > > global metadata epoch.  A metadata epoch per partition is kind of
> > > unpleasant-- it's at least 4 extra bytes per partition that we have to
> send
> > > over the wire in every full metadata request, which could become extra
> > > kilobytes on the wire when the number of partitions becomes large.
> Plus,
> > > we have to update all the auxiliary classes to include an epoch.
> > >
> > > We need to have a global metadata epoch anyway to handle partition
> > > addition and deletion.  For example, if I give you
> > > MetadataResponse{part1,epoch 1, part2, epoch 1} and {part1, epoch1},
> which
> > > MetadataResponse is newer?  You have no way of knowing.  It could be
> that
> > > part2 has just been created, and the response with 2 partitions is
> newer.
> > > Or it could be that part2 has just been deleted, and therefore the
> response
> > > with 1 partition is newer.  You must have a global epoch to
> disambiguate
> > > these two cases.
> > >
> > > Previously, I worked on the Ceph distributed filesystem.  Ceph had the
> > > concept of a map of the whole cluster, maintained by a few servers
> doing
> > > paxos.  This map was versioned by a single 64-bit epoch number which
> > > 

Re: [VOTE] KIP-232: Detect outdated metadata using leaderEpoch and partitionEpoch

2018-01-25 Thread Colin McCabe
On Wed, Jan 24, 2018, at 21:07, Dong Lin wrote:
> Hey Colin,
> 
> Thanks for reviewing the KIP.
> 
> If I understand you right, you may be suggesting that we can use a global
> metadataEpoch that is incremented every time the controller updates metadata.
> The problem with this solution is that, if a topic is deleted and created
> again, the user will not know that the offset which was stored before
> the topic deletion is no longer valid. This motivates the idea to include
> a per-partition partitionEpoch. Does this sound reasonable?

Hi Dong,

Perhaps we can store the last valid offset of each deleted topic in ZooKeeper.  
Then, when a topic with one of those names gets re-created, we can start the 
topic at the previous end offset rather than at 0.  This preserves 
immutability.  It is no more burdensome than having to preserve a "last epoch" 
for the deleted partition somewhere, right?
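
A rough sketch of this idea, with an in-memory map standing in for the real ZooKeeper paths (the actual proposal would persist the offsets in ZooKeeper; the class and method names below are hypothetical):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration of preserving offset monotonicity across topic
// re-creation; the real proposal would persist this in ZooKeeper, not a map.
public class ReCreateOffsetSketch {
    private final Map<String, Long> lastEndOffsetOfDeletedTopic = new ConcurrentHashMap<>();

    void onTopicDeleted(String topic, long lastEndOffset) {
        lastEndOffsetOfDeletedTopic.put(topic, lastEndOffset);
    }

    // When a topic with the same name comes back, start it at the remembered
    // end offset instead of 0, so previously stored offsets stay meaningful.
    long initialOffsetForNewTopic(String topic) {
        return lastEndOffsetOfDeletedTopic.getOrDefault(topic, 0L);
    }
}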

> 
> Then the next question may be whether we should use a global metadataEpoch +
> per-partition partitionEpoch, instead of a per-partition leaderEpoch +
> per-partition partitionEpoch. The former solution using metadataEpoch would
> not work due to the following scenario (provided by Jun):
> 
> "Consider the following scenario. In metadata v1, the leader for a
> partition is at broker 1. In metadata v2, leader is at broker 2. In
> metadata v3, leader is at broker 1 again. The last committed offset in v1,
> v2 and v3 are 10, 20 and 30, respectively. A consumer is started and reads
> metadata v1 and reads messages from offset 0 to 25 from broker 1. My
> understanding is that in the current proposal, the metadata version
> associated with offset 25 is v1. The consumer is then restarted and fetches
> metadata v2. The consumer tries to read from broker 2, which is the old
> leader with the last offset at 20. In this case, the consumer will still
> get OffsetOutOfRangeException incorrectly."
> 
> Regarding your comment "For the second purpose, this is "soft state"
> anyway.  If the client thinks X is the leader but Y is really the leader,
> the client will talk to X, and X will point out its mistake by sending back
> a NOT_LEADER_FOR_PARTITION.", it is probably not true. The problem here is
> that the old leader X may still think it is the leader of the partition and
> thus it will not send back NOT_LEADER_FOR_PARTITION. The reason is provided
> in KAFKA-6262. Can you check if that makes sense?

This is solvable with a timeout, right?  If the leader can't communicate with 
the controller for a certain period of time, it should stop acting as the 
leader.  We have to solve this problem, anyway, in order to fix all the corner 
cases.

best,
Colin

> 
> Regards,
> Dong
> 
> 
> On Wed, Jan 24, 2018 at 10:39 AM, Colin McCabe  wrote:
> 
> > Hi Dong,
> >
> > Thanks for proposing this KIP.  I think a metadata epoch is a really good
> > idea.
> >
> > I read through the DISCUSS thread, but I still don't have a clear picture
> > of why the proposal uses a metadata epoch per partition rather than a
> > global metadata epoch.  A metadata epoch per partition is kind of
> > unpleasant-- it's at least 4 extra bytes per partition that we have to send
> > over the wire in every full metadata request, which could become extra
> > kilobytes on the wire when the number of partitions becomes large.  Plus,
> > we have to update all the auxiliary classes to include an epoch.
> >
> > We need to have a global metadata epoch anyway to handle partition
> > addition and deletion.  For example, if I give you
> > MetadataResponse{part1,epoch 1, part2, epoch 1} and {part1, epoch1}, which
> > MetadataResponse is newer?  You have no way of knowing.  It could be that
> > part2 has just been created, and the response with 2 partitions is newer.
> > Or it could be that part2 has just been deleted, and therefore the response
> > with 1 partition is newer.  You must have a global epoch to disambiguate
> > these two cases.
> >
> > Previously, I worked on the Ceph distributed filesystem.  Ceph had the
> > concept of a map of the whole cluster, maintained by a few servers doing
> > paxos.  This map was versioned by a single 64-bit epoch number which
> > increased on every change.  It was propagated to clients through gossip.  I
> > wonder if something similar could work here?
> >
> > It seems like the Kafka MetadataResponse serves two somewhat unrelated
> > purposes.  Firstly, it lets clients know what partitions exist in the
> > system and where they live.  Secondly, it lets clients know which nodes
> > within the partition are in-sync (in the ISR) and which node is the leader.
> >
> > The first purpose is what you really need a metadata epoch for, I think.
> > You want to know whether a partition exists or not, or you want to know
> > which nodes you should talk to in order to write to a given partition.  A
> > single metadata epoch for the whole response should be adequate here.  We
> > should not change the partition assignment without going through zookeeper
> > (or a

Build failed in Jenkins: kafka-trunk-jdk8 #2351

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix typo in consumer group command error (#4463)

[jason] MINOR: Update consumer group command documentation with additionally

[jason] KAFKA-6429; LogCleanerManager.cleanableOffsets should create objects …

--
[...truncated 1.46 MB...]

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = true] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInnerRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuterRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeft[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testMultiInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] STARTED

org.apache.kafka.streams.integration.StreamStreamJoinIntegrationTest > 
testLeftRepartitioned[caching enabled = false] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceInt

Re: [VOTE] KIP-227: Introduce Fetch Requests that are Incremental to Increase Partition Scalability

2018-01-25 Thread Colin McCabe
On Thu, Jan 25, 2018, at 02:23, Mickael Maison wrote:
> I'm late to the party but +1 and thanks for the KIP

Thanks

> 
> On Thu, Jan 25, 2018 at 12:36 AM, Ismael Juma  wrote:
> > Agreed, Jun.
> >
> > Ismael
> >
> > On Wed, Jan 24, 2018 at 4:08 PM, Jun Rao  wrote:
> >
> >> Since this is a server side metric, it's probably better to use Yammer Rate
> >> (which has count) for consistency.

Good point.

best,
Colin
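
For context on the rate-plus-count point above, here is a minimal sketch of registering a Yammer meter, assuming the metrics-core 2.x API the broker already depends on. The metric name below is made up for illustration and is not the metric defined by KIP-227.

import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Meter;
import java.util.concurrent.TimeUnit;

// Sketch only: a meter exposes both a rate and a cumulative count.
public class EvictionMetricSketch {
    private final Meter evictions = Metrics.newMeter(
            EvictionMetricSketch.class, "IncrementalFetchSessionsEvicted",
            "evictions", TimeUnit.SECONDS);

    void onSessionEvicted() {
        evictions.mark();         // bumps both the rate and the count
    }

    long totalEvictions() {
        return evictions.count(); // the cumulative count comes for free
    }
}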

> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >> On Tue, Jan 23, 2018 at 10:17 PM, Colin McCabe  wrote:
> >>
> >> > On Tue, Jan 23, 2018, at 21:47, Ismael Juma wrote:
> >> > > Colin,
> >> > >
> >> > > You get a cumulative count for rates since we added
> >> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-187+-+Add+cumulative+count+metric+for+all+Kafka+rate+metrics
> >> >
> >> > Oh, good point.
> >> >
> >> > 
> >> >
> >> >
> >> > > Ismael
> >> > >
> >> > > On Tue, Jan 23, 2018 at 4:21 PM, Colin McCabe
> >> > >  wrote:>
> >> > > > On Tue, Jan 23, 2018, at 11:57, Jun Rao wrote:
> >> > > > Hi, Colin,
> >> > > > >
> >> > > > > Thanks for the updated KIP. +1. Just a minor comment. It seems
> >> > > > > that it's> > > better for TotalIncrementalFetchSessionsEvicted to
> >> > be a rate,
> >> > > > > instead of> > > just an ever-growing count.
> >> > > >
> >> > > > Thanks.  Perhaps we can add the rate in addition to the total
> >> > > > eviction> > count?
> >> > > >
> >> > > > best,
> >> > > > Colin
> >> > > >
> >> > > > >
> >> > > > > Jun
> >> > > > >
> >> > > > > On Mon, Jan 22, 2018 at 4:35 PM, Jason Gustafson
> >> > > > > > > wrote:
> >> > > > >
> >> > > > > > >
> >> > > > > > > What if we want to have fetch sessions for non-incremental
> >> > > > > > > fetches> > in the
> >> > > > > > > future, though?  Also, we don't expect this configuration to
> >> > > > > > > be> > changed
> >> > > > > > > often, so it doesn't really need to be short.
> >> > > > > >
> >> > > > > >
> >> > > > > > Hmm.. But in that case, I'm not sure we'd need to distinguish
> >> > > > > > the two> > > > cases. If the non-incremental sessions are
> >> > occupying space
> >> > > > proportional to
> >> > > > > > the fetched partitions, using the same config for both would be>
> >> >
> >> > reasonable.
> >> > > > > > If they are not (which is more likely), we probably wouldn't
> >> > > > > > need a> > config
> >> > > > > > at all. Given that, I'd probably still opt for the more concise
> >> > > > > > name.> > It's
> >> > > > > > not a blocker for me though.
> >> > > > > >
> >> > > > > > +1 on the KIP.
> >> > > > > >
> >> > > > > > -Jason
> >> > > > > >
> >> > > > > > On Mon, Jan 22, 2018 at 3:56 PM, Colin McCabe
> >> > > > > > > > wrote:
> >> > > > > >
> >> > > > > > > On Mon, Jan 22, 2018, at 15:42, Jason Gustafson wrote:
> >> > > > > > > > Hi Colin,
> >> > > > > > > >
> >> > > > > > > > This is looking good to me. A few comments:
> >> > > > > > > >
> >> > > > > > > > 1. The fetch type seems unnecessary in the request and
> >> > > > > > > >response> > schemas
> >> > > > > > > > since it can be inferred by the sessionId/epoch.
> >> > > > > > >
> >> > > > > > > Hi Jason,
> >> > > > > > >
> >> > > > > > > Fair enough... if we need it later, we can always bump the RPC>
> >> > > version.
> >> > > > > > >
> >> > > > > > > > 2. I agree with Jun that a separate array for partitions to
> >> > > > > > > >remove> > > > would
> >> > > > > > > be
> >> > > > > > > > more intuitive.
> >> > > > > > >
> >> > > > > > > OK.  I'll switch it to using a separate array.
> >> > > > > > >
> >> > > > > > > > 3. I'm not super thrilled with the cache configuration since
> >> > > > > > > >it> > seems
> >> > > > > > to
> >> > > > > > > > tie us a bit too closely to the implementation. You've
> >> > > > > > > > mostly> > convinced
> >> > > > > > > me
> >> > > > > > > > on the need for the slots config, but I wonder if we can at
> >> > > > > > > > least> > do
> >> > > > > > > > without "min.incremental.fetch.session.eviction.ms"? For
> >> > > > > > > > one, I> > think
> >> > > > > > > the
> >> > > > > > > > broker should reserve the right to evict sessions at will.
> >> > > > > > > > We> > shouldn't
> >> > > > > > > be
> >> > > > > > > > stuck maintaining a small session at the expense of a much
> >> > > > > > > > larger> > one
> >> > > > > > > just
> >> > > > > > > > to enforce this timeout. Internally, I think having some
> >> > > > > > > > cache> > > > stickiness
> >> > > > > > > > to avoid thrashing makes sense, but I think static values
> >> > > > > > > > are> > likely to
> >> > > > > > > be
> >> > > > > > > > good enough and that lets us retain some flexibility to
> >> > > > > > > > change the> > > > > behavior
> >> > > > > > > > in the future.
> >> > > > > > >
> >> > > > > > > OK.
> >> > > > > > >
> >> > > > > > > > 4. I think the word "incremental" is redundant in the config
> >> > > > > > > >names.> > > > Maybe
> >> > > > > > > > it could just be "max.fetch.session.cache.slots" for
> >> > > > > > > > example?> > > > >
> >> > >

Build failed in Jenkins: kafka-trunk-jdk9 #332

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix typo in consumer group command error (#4463)

[jason] MINOR: Update consumer group command documentation with additionally

--
[...truncated 1.48 MB...]

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldProcessDataFromStoresWithLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldProcessDataFromStoresWithLoggingDisabled PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerLeft[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterInner[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterOuter[caching enabled = true] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = false] PASSED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = false] PASSED

[jira] [Created] (KAFKA-6488) Prevent log corruption in case of OOM

2018-01-25 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-6488:
---

 Summary: Prevent log corruption in case of OOM
 Key: KAFKA-6488
 URL: https://issues.apache.org/jira/browse/KAFKA-6488
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin


Currently we append the message to the log before updating the LEO. However, 
if there is an OOM between these two steps, the KafkaRequestHandler thread can 
append a message to the log without updating the LEO. The next message may then 
be appended with the same offset as the first message. This can prevent the 
broker from being started because two messages have the same offset in the log.
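
A simplified sketch of the ordering problem. The names below are hypothetical (the real broker code is Scala and considerably more involved); the sketch only shows why an OOM between the append and the LEO update can lead to two records with the same offset.

// Hypothetical illustration of why appending before updating the LEO is fragile.
interface Segment { void append(long offset, byte[] record); }

class LeoOrderingSketch {
    private long logEndOffset = 0L;
    private final Segment segment;

    LeoOrderingSketch(Segment segment) { this.segment = segment; }

    void appendUnsafe(byte[] record) {
        segment.append(logEndOffset, record); // step 1: write at the current LEO
        // <-- an OOM here leaves the record on disk but the LEO unchanged,
        //     so the next call appends a different record at the same offset
        logEndOffset += 1;                    // step 2: advance the LEO
    }
}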





[jira] [Resolved] (KAFKA-4897) LogCleaner#cleanSegments should not ignore failures to delete files

2018-01-25 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4897.

Resolution: Fixed

> LogCleaner#cleanSegments should not ignore failures to delete files
> ---
>
> Key: KAFKA-4897
> URL: https://issues.apache.org/jira/browse/KAFKA-4897
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Colin P. McCabe
>Assignee: Manikumar
>Priority: Major
>
> LogCleaner#cleanSegments should not ignore failures to delete files.  
> Currently, it ignores the failure and does not even log an error message.
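
A minimal sketch of the contrast (a hypothetical helper, not the LogCleaner code itself): File.delete() silently returns false on failure, whereas Files.delete surfaces the error so the caller can handle or at least log it.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch only: ignoring the boolean from File.delete() hides failures.
public class DeleteHandlingSketch {
    static void deleteIgnoringFailure(File f) {
        f.delete(); // returns false on failure; nothing is logged or thrown
    }

    static void deleteOrFail(File f) throws IOException {
        // Throws an IOException with the underlying reason if the delete fails,
        // so the caller can log it or abort the cleaning round.
        Files.delete(f.toPath());
    }
}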





Build failed in Jenkins: kafka-trunk-jdk7 #3117

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6429; LogCleanerManager.cleanableOffsets should create objects …

--
[...truncated 404.19 KB...]

kafka.api.PlaintextConsumerTest > testCommitMetadata STARTED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
STARTED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kaf

[jira] [Created] (KAFKA-6487) ChangeLoggingKeyValueBytesStore.all() returns null

2018-01-25 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-6487:
--

 Summary: ChangeLoggingKeyValueBytesStore.all() returns null
 Key: KAFKA-6487
 URL: https://issues.apache.org/jira/browse/KAFKA-6487
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck


The {{ChangeLoggingKeyValueBytesStore}} implements the {{KeyValueStore}} 
interface, which extends the {{ReadOnlyKeyValueStore}} interface. The Javadoc 
for {{ReadOnlyKeyValueStore#all}} states that the method should never return a 
{{null}} value.

However, after deleting a record from the {{ChangeLoggingKeyValueBytesStore}} 
and subsequently calling the {{all}} method, a {{null}} value is returned.
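
For context, a minimal sketch of how an interactive-query caller relies on the 
never-null contract (the store name and key/value types below are assumptions); 
a {{null}} from {{all()}} would surface here as a NullPointerException:

{code:java}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StoreScan {

    // Sketch only: "my-store" and the String/Long types are hypothetical.
    static long countEntries(final KafkaStreams streams) {
        final ReadOnlyKeyValueStore<String, Long> store =
            streams.store("my-store", QueryableStoreTypes.<String, Long>keyValueStore());

        long count = 0;
        // ReadOnlyKeyValueStore#all is documented to never return null
        try (KeyValueIterator<String, Long> iter = store.all()) {
            while (iter.hasNext()) {
                iter.next();
                count++;
            }
        }
        return count;
    }
}
{code}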



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: 1.1 KIPs

2018-01-25 Thread Damian Guy
Thanks Vahid, I've updated the release plan.

On Wed, 24 Jan 2018 at 13:31 Vahid S Hashemian 
wrote:

> Hi Damian,
>
> Could you please add KIP-229 to the list? It was approved earlier this
> week
> https://www.mail-archive.com/dev@kafka.apache.org/msg84851.html
>
> Thanks for running the release.
> --Vahid
>
>
>
>
> From:   Damian Guy 
> To: dev@kafka.apache.org
> Date:   01/24/2018 01:20 PM
> Subject:Re: 1.1 KIPs
>
>
>
> Hi,
>
> The KIP deadline has passed and i've updated the release plan:
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_pages_viewpage.action-3FpageId-3D75957546&d=DwIFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc&m=TVUESwz92IFTGWgOw60U-bc5Fih9bzGtCRahKUZlSTc&s=ZsBKyGkffkjDH0yHsVpxNP-_qhSR681YSL48Bi5cTBs&e=
>
> If there is anything i've missed please let me know.
>
> The feature freeze deadline is January 30th. At this point i'll cut the
> branch for 1.1. So please make sure any major features have been committed
> by then.
>
> Thanks,
> Damian
>
> On Tue, 23 Jan 2018 at 15:29 Damian Guy  wrote:
>
> > Hi,
> >
> > Today is the KIP deadline. Tomorrow I'll update the release page with
> any
> > KIPS that have recently been voted on and accepted according to the
> process
> > that can be found here:
> >
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_Kafka-2BImprovement-2BProposals-23KafkaImprovementProposals-2DProcess&d=DwIFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc&m=TVUESwz92IFTGWgOw60U-bc5Fih9bzGtCRahKUZlSTc&s=JEIfWpaEOFJ8NmyLHUs7NLlcgPgsNEdM1Iy9Y5Tr3Vs&e=
>
> >
> > Thanks,
> > Damian
> >
> > On Thu, 18 Jan 2018 at 12:47 Damian Guy  wrote:
> >
> >> Gentle reminder that KIP deadline is just 5 days away. If there is
> >> anything that wants to be in 1.1 and hasn't been voted on yet, now is
> the
> >> time!
> >>
> >> On Thu, 18 Jan 2018 at 08:49 Damian Guy  wrote:
> >>
> >>> Hi Xavier,
> >>> I'll add it to the plan.
> >>>
> >>> Thanks,
> >>> Damian
> >>>
> >>> On Tue, 16 Jan 2018 at 19:04 Xavier Léauté 
> wrote:
> >>>
>  Hi Damian, I believe the list should also include KAFKA-5886 (KIP-91)
>  which
>  was voted for 1.0 but wasn't ready to be merged in time.
> 
>  On Tue, Jan 16, 2018 at 5:13 AM Damian Guy 
>  wrote:
> 
>  > Hi,
>  >
>  > This is a reminder that we have one week left until the KIP
> deadline
>  of Jan
>  > 23. There are still some KIPs that are under discussion and/or
> being
>  voted
>  > on. Please keep in mind that the voting needs to be complete before
>  the
>  > deadline for the KIP to be added to the release.
>  >
>  >
> 
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_pages_viewpage.action-3FpageId-3D75957546&d=DwIFaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc&m=TVUESwz92IFTGWgOw60U-bc5Fih9bzGtCRahKUZlSTc&s=ZsBKyGkffkjDH0yHsVpxNP-_qhSR681YSL48Bi5cTBs&e=
>
>  >
>  > Thanks,
>  > Damian
>  >
> 
> >>>
>
>
>
>
>


Build failed in Jenkins: kafka-trunk-jdk7 #3116

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix typo in consumer group command error (#4463)

[jason] MINOR: Update consumer group command documentation with additionally

--
[...truncated 406.06 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords STARTED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerT

Expiring record(s) for topic: 30011 ms has passed since last append

2018-01-25 Thread Dan Bress
Hi,

I'm running an app on Kafka Streams 1.0.0, and in the past day a lot of
nodes are failing and I see this in the log.

These appear to be failures when attempting to update the changelog. Any
ideas on what I should do to work around this? Should I configure separate
retries and timeouts for the changelog producer? If so, how do I do that?
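
For what it's worth, one knob that applies to the internal producer (which is 
the one writing the changelog topics) is StreamsConfig.producerPrefix. A 
minimal sketch, assuming Streams 1.0.0; the application id, bootstrap servers, 
and values below are placeholders, not recommendations:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProducerTuning {

    // Sketch only: application id and bootstrap servers are placeholders.
    static Properties streamsConfig() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // producerPrefix(...) scopes a setting to the producer Streams creates internally,
        // which also writes to the changelog topics.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 10);
        props.put(StreamsConfig.producerPrefix(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG), 60000);
        return props;
    }
}
{code}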


org.apache.kafka.streams.errors.StreamsException: stream-thread
[dp-insightstrack-staging1-9cc75371-000c-420a-b9d7-b46b2b4bab5c-StreamThread-3]
Failed to rebalance.
at
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:860)
at
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.StreamsException: stream-thread
[dp-insightstrack-staging1-9cc75371-000c-420a-b9d7-b46b2b4bab5c-StreamThread-3]
failed to suspend stream tasks
at
org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:204)
at
org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsRevoked(StreamThread.java:291)
at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:418)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:359)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:295)
at
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1138)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1103)
at
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
... 3 common frames omitted
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: task
[0_18] Failed to flush state store summarykey-to-summary
at
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:248)
at
org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:196)
at
org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:324)
at
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:304)
at
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
at
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:299)
at
org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:414)
at
org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:396)
at
org.apache.kafka.streams.processor.internals.AssignedTasks.suspendTasks(AssignedTasks.java:219)
at
org.apache.kafka.streams.processor.internals.AssignedTasks.suspend(AssignedTasks.java:184)
at
org.apache.kafka.streams.processor.internals.TaskManager.suspendTasksAndState(TaskManager.java:197)
... 11 common frames omitted
Caused by: org.apache.kafka.streams.errors.StreamsException: task [0_18]
Abort sending since an error caught with a previous record (key
ComparableSummaryKey(SummaryKey(GnipStream(172819),RuleId(951574687410094106),Some(TweetId(954107387677433858
value
TweetSummaryCountsWindow(SummaryKey(GnipStream(172819),RuleId(951574687410094106),Some(TweetId(954107387677433858))),168,1516311902337,WindowCounts(1516742462337,LongHyperLogLogPlus(com.clearspring.analytics.stream.cardinality.HyperLogLogPlus@2e76a045
),LongHyperLogLogPlus(com.clearspring.analytics.stream.cardinality.HyperLogLogPlus@8610f558
),None,-1,false,MetricMap(true,[I@74cc9b69
),com.twitter.datainsights.insightstrack.domain.InteractionSourceSummary@5653a508),None,false)
timestamp 1516742492445) to topic
dp-insightstrack-staging1-summarykey-to-summary-changelog due to
org.apache.kafka.common.errors.TimeoutException: Expiring 7 record(s) for
dp-insightstrack-staging1-summarykey-to-summary-changelog-18: 30011 ms has
passed since last append.
at
org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:118)
at
org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:204)
at
org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:187)
at
org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:627)
at
org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:287)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 7

[jira] [Resolved] (KAFKA-6429) dirtyNonActiveSegments in `cleanableOffsets` should only be created when log.cleaner.min.compaction.lag.ms > 0

2018-01-25 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6429.

Resolution: Fixed

> dirtyNonActiveSegments in `cleanableOffsets` should only be created when 
> log.cleaner.min.compaction.lag.ms > 0
> --
>
> Key: KAFKA-6429
> URL: https://issues.apache.org/jira/browse/KAFKA-6429
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
>Reporter: huxihx
>Assignee: huxihx
>Priority: Minor
>
> LogCleanerManager.cleanableOffsets always created objects to hold all dirty 
> non-active segments, as shown below:
> {code:java}
> // dirty log segments
> val dirtyNonActiveSegments = log.logSegments(firstDirtyOffset, 
> log.activeSegment.baseOffset)
> {code}
> Actually, these objects will not be used when 
> `log.cleaner.min.compaction.lag.ms` is 0, which is already the default value. 
> We could defer the creation. In doing so we can not only reduce the heap size 
> but also avoid the blocking access to the segments incurred by Log.segments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


RE: Build failed in Jenkins: kafka-trunk-jdk9 #331

2018-01-25 Thread Reto Gmür
Hi,

For around two weeks Jenkins has been sending me Kafka build notifications.

To: dev@kafka.apache.org , ja...@confluent.io, m...@farewellutopia.com 

Why?
And how do I stop this?

Please reply directly or CC me; I don't want to have to subscribe to the list 
just to receive fewer mails.

Cheers,
Reto

> -Original Message-
> From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> Sent: Thursday, January 25, 2018 12:45 PM
> To: dev@kafka.apache.org; ja...@confluent.io; m...@farewellutopia.com
> Subject: Build failed in Jenkins: kafka-trunk-jdk9 #331
> 
> See  jdk9/331/display/redirect?page=changes>
> 
> Changes:
> 
> [jason] KAFKA-6180; Add a Validator for NonNull configurations and remove
> 
> --
> [...truncated 1.84 MB...]
> 
> org.apache.kafka.streams.integration.RestoreIntegrationTest >
> shouldRestoreState PASSED
> 
> org.apache.kafka.streams.integration.RestoreIntegrationTest >
> shouldSuccessfullyStartWhenLoggingDisabled STARTED
> 
> org.apache.kafka.streams.integration.RestoreIntegrationTest >
> shouldSuccessfullyStartWhenLoggingDisabled PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryMapValuesState STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryMapValuesState PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldNotMakeStoreAvailableUntilAllStoresAvailable STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldNotMakeStoreAvailableUntilAllStoresAvailable PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> queryOnRebalance STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> queryOnRebalance PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> concurrentAccesses STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> concurrentAccesses PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldAllowToQueryAfterThreadDied STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldAllowToQueryAfterThreadDied PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryStateWithZeroSizedCache STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryStateWithZeroSizedCache PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryMapValuesAfterFilterState STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryMapValuesAfterFilterState PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryFilterState STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryFilterState PASSED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryStateWithNonZeroSizedCache STARTED
> 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest >
> shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED
> 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTe
> st > shouldReduce STARTED
> 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTe
> st > shouldReduce PASSED
> 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTe
> st > shouldGroupByKey STARTED
> 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTe
> st > shouldGroupByKey PASSED
> 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTe
> st > shouldReduceWindowed STARTED
> 
> org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTe
> st > shouldReduceWindowed PASSED
> 
> org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrati
> onTest >
> shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWith
> GlobalAutoOffsetResetLatest STARTED
> 
> org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrati
> onTest >
> shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWith
> GlobalAutoOffsetResetLatest PASSED
> 
> org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrati
> onTest > shouldThrowExceptionOverlappingPattern STARTED
> 
> org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrati
> onTest > shouldThrowExceptionOverlappingPattern PASSED
> 
> org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrati
> onTest > shouldThrowExceptionOverlappingTopic STARTED
> 
> org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrati
> onTest > shouldThrowExceptionOverlappingTopic PASSED
> 
> org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrati
> onTest

[jira] [Resolved] (KAFKA-6091) Authorization API is called hundred's of times when there are no privileges

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6091.
--
Resolution: Fixed

Fixed via KAFKA-5547. Please reopen if you think the issue still exists

> Authorization API is called hundred's of times when there are no privileges
> ---
>
> Key: KAFKA-6091
> URL: https://issues.apache.org/jira/browse/KAFKA-6091
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: kalyan kumar kalvagadda
>Priority: Major
>
> This issue is observed with the Kafka/Sentry integration. When Sentry does not 
> have any permissions for a topic and a producer tries to add a message to 
> that topic, Sentry returns a failure, but Kafka does not handle it properly 
> and ends up invoking the Sentry auth API ~564 times. This will choke the 
> authorization service.
> Here are the list of privileges that are needed for a producer to add a 
> message to a topic
> In this example "192.168.0.3" is hostname and topic name is "tOpIc1"
> {noformat}
> HOST=192.168.0.3->Topic=tOpIc1->action=DESCRIBE
> HOST=192.168.0.3->Cluster=kafka-cluster->action=CREATE
> HOST=192.168.0.3->Topic=tOpIc1->action=WRITE
> {noformat}
> The problem reported in this jira is seen only when there are no permissions. 
> The moment a DESCRIBE permission is added, the issue is not seen: 
> authorization still fails, but Kafka doesn't bombard the authorizer with more requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4185) Abstract out password verifier in SaslServer as an injectable dependency

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4185.
--
Resolution: Duplicate

Closing this in favor of KIP-86/KAFKA-4292. Please reopen if you think otherwise

> Abstract out password verifier in SaslServer as an injectable dependency
> 
>
> Key: KAFKA-4185
> URL: https://issues.apache.org/jira/browse/KAFKA-4185
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.1
>Reporter: Piyush Vijay
>Priority: Major
>
> Kafka comes with a default SASL/PLAIN implementation which assumes that 
> username and password are present in a JAAS
> config file. People often want to use some other way to provide username and 
> password to SaslServer. Their best bet,
> currently, is to have their own implementation of SaslServer (which would be, 
> in most cases, a copied version of PlainSaslServer
> minus the logic where password verification happens). This is not ideal.
> We believe that there exists a better way to structure the current 
> PlainSaslServer implementation which makes it very
> easy for people to plug-in their custom password verifier without having to 
> rewrite SaslServer or copy any code.
> The idea is to have an injectable dependency interface PasswordVerifier which 
> can be re-implemented based on the
> requirements. There would be no need to re-implement or extend 
> PlainSaslServer class.
> Note that this is a commonly requested feature and there have been some 
> attempts in the past to solve this problem:
> https://github.com/apache/kafka/pull/1350
> https://github.com/apache/kafka/pull/1770
> https://issues.apache.org/jira/browse/KAFKA-2629
> https://issues.apache.org/jira/browse/KAFKA-3679
> We believe that this proposed solution does not have the demerits for which 
> the previous proposals were rejected.
> I would be happy to discuss more.
> Please find the link to the PR in the comments.
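
A hypothetical sketch of the proposed hook (this interface does not exist in 
Kafka; the name and signature below are assumptions):

{code:java}
import java.util.Map;

// Hypothetical interface illustrating the proposal above; not part of Kafka.
public interface PasswordVerifier {

    // Receives the same configs the SaslServer sees, e.g. to locate an external credential store.
    void configure(Map<String, ?> configs);

    // Returns true if the password is valid for the given user.
    boolean verify(String username, char[] password);
}
{code}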



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3408) consumer rebalance fail

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3408.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> consumer rebalance fail
> ---
>
> Key: KAFKA-3408
> URL: https://issues.apache.org/jira/browse/KAFKA-3408
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
> Environment: centos linux
>Reporter: zhongkai liu
>Priority: Major
>  Labels: newbie
>
> I use "/bin/kafka-console-consumer" command to start two consumers of group 
> "page_group",then the first conumer console report rebalance failure like 
> this:
> ERROR [page_view_group1_slave2-1458095694092-80c33086], error during 
> syncedRebalance (kafka.consumer.ZookeeperConsumerConnector)
> kafka.common.ConsumerRebalanceFailedException: 
> page_view_group1_slave2-1458095694092-80c33086 can't rebalance after 10 
> retries
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:660)
> at 
> kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:579)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5617) Update to the list of third-party clients

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5617.
--
Resolution: Fixed

> Update to the list of third-party clients
> -
>
> Key: KAFKA-5617
> URL: https://issues.apache.org/jira/browse/KAFKA-5617
> Project: Kafka
>  Issue Type: Wish
>Reporter: Daniel Schierbeck
>Priority: Major
>
> I'd like to have the list of Ruby client libraries updated to reflect the 
> current state of affairs.
> * ruby-kafka is no longer compatible with Kafka 0.8, but is compatible with 
> 0.9+. It should probably be moved to the top, since it's the only actively 
> maintained low-level library – all other libraries are either unmaintained or 
> are opinionated frameworks built on top of ruby-kafka.
> * I'd like to add Racecar (https://github.com/zendesk/racecar), a simple 
> opinionated framework built on top of ruby-kafka. It's an extraction from a 
> production Zendesk code base, and so is already pretty battle tested.
> I'm the author of both ruby-kafka and Racecar.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5500) it is impossible to have custom Login Modules for PLAIN SASL mechanism

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5500.
--
Resolution: Duplicate

Closing in favor of KIP-88/KAFKA-4292.  Please reopen if you think otherwise.

> it is impossible to have custom Login Modules for PLAIN SASL mechanism
> --
>
> Key: KAFKA-5500
> URL: https://issues.apache.org/jira/browse/KAFKA-5500
> Project: Kafka
>  Issue Type: Wish
>Reporter: Anton Patrushev
>Priority: Minor
>
> This change -
>  
> https://github.com/apache/kafka/commit/275c5e1df237808fe72b8d9933f826949d4b5781#diff-3e86ea3ab586f9b6f920c00508a0d5bcR95
>  - makes it impossible to have login modules other than PlainLoginModule used 
> for the PLAIN SASL mechanism. Could it be changed in a way that doesn't 
> depend on a particular login module class name?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6486) TimeWindows causes unordered calls to windowed aggregation functions

2018-01-25 Thread Valentino Proietti (JIRA)
Valentino Proietti created KAFKA-6486:
-

 Summary: TimeWindows causes unordered calls to windowed 
aggregation functions
 Key: KAFKA-6486
 URL: https://issues.apache.org/jira/browse/KAFKA-6486
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Valentino Proietti


This is not a real bug but it causes some weird behaviour, at least in my 
opinion.

TimeWindows has a method called windowsFor() that builds and returns a 
HashMap:

    @Override
    public Map<Long, TimeWindow> windowsFor(final long timestamp) {

        long windowStart = (Math.max(0, timestamp - sizeMs + advanceMs) / advanceMs) * advanceMs;

        final Map<Long, TimeWindow> windows = new HashMap<>();

        ...

The HashMap does not preserve insertion order, and this later surfaces in calls 
to the Streams windowed aggregation functions, which are not ordered by window 
time as I would expect.

A simple solution is to replace the HashMap with a LinkedHashMap, and that's 
what I did.

Anyway, replacing it directly in your code can save hours of debugging to 
understand what's happening.
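
A quick self-contained illustration of the difference (the window start 
timestamps below are made up):

{code:java}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class WindowOrderDemo {
    public static void main(String[] args) {
        // Made-up window start timestamps, inserted in ascending order.
        final long[] windowStarts = {0L, 5_000L, 10_000L, 15_000L};

        final Map<Long, String> hash = new HashMap<>();
        final Map<Long, String> linked = new LinkedHashMap<>();
        for (final long start : windowStarts) {
            hash.put(start, "window@" + start);
            linked.put(start, "window@" + start);
        }

        System.out.println(hash.keySet());   // iteration order is not guaranteed
        System.out.println(linked.keySet()); // always [0, 5000, 10000, 15000]
    }
}
{code}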

Thank you 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Improving ACLs by allowing ip ranges and subnet expressions?

2018-01-25 Thread Gwen Shapira
Regardless of our personal opinions about security, the fact is that Kafka
right now has "limit access by IP" functionality (as does MySQL, for
instance). And the usability of the feature is limited by the fact that you
can only manage one IP at a time, while real-world applications normally
have subnets.

There was a discussion about adding the IP-range functionality way back:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-7+-+Security+-+IP+Filtering
It says the KIP was rejected, but I failed to find the discussion on why it
was rejected.

Since it does not make Kafka any less secure than it currently is, and it
does improve manageability - why not?

Gwen


On Wed, Jan 24, 2018 at 11:37 PM Sönke Liebau
 wrote:

> Hi Colin,
>
> I agree with you on the fact that IP based security is not absolute. I was
> considering it as an additional layer of security to be used in conjunction
> with ssl certificates, so the rule would contain both the principal and
> some hosts. This way if someone manages to obtain the certificate he'd need
> to jump through extra hoops to use it from outside the cluster when it's not
> feasible to lock down Kafka with a firewall.
>
> Mostly though I'd argue the principle that if we consider the feature worth
> having it should be "done right" - otherwise we might as well remove it to
> avoid giving users a false sense of security.
>
> Regarding your suggestion of access control without security, we could
> start honouring the HADOOP_USER_NAME environment variable, many people
> should already be used to that :)
> Not sure if there is a lot of demand for that feature though, I'd consider
> it more dangerous than useful, but that is really just a personal opinion.
>
> Best regards,
> Sönke
>
> On 24.01.2018 at 23:31, "Colin McCabe" wrote:
>
> Hi Sonke,
>
> IP address based security doesn't really work, though.  Users can spoof IP
> addresses.  They can poison the ARP cache on a local network, or
> impersonate a DNS server.
>
> For users who want some access controls, but don't care about security,
> maybe we should make it easier to use and create users without enabling
> kerberos or similar?
>
> best,
> Colin
>
>
> On Wed, Jan 24, 2018, at 12:59, Sönke Liebau wrote:
> > Hi everyone,
> >
> > the current ACL functionality in Kafka is a bit limited concerning
> > host based rules when specifying multiple hosts. A common scenario for
> > this would be that if have a YARN cluster running Spark jobs that
> > access Kafka and want to create ACLs based on the ip addresses of the
> > cluster nodes.
> > Currently kafka-acls only allows to specify individual ips, so this
> > would look like
> >
> > ./kafka-acls --add --producer \
> > --topic test --authorizer-properties zookeeper.connect=localhost:2181 \
> > --allow-principal User:spark \
> > --allow-host 10.0.0.10 \
> > --allow-host 10.0.0.11 \
> > --allow-host ...
> >
> > which can get unwieldy if you have a 200 node cluster. Internally this
> > command would not create a single ACL with multiple host entries, but
> > rather one ACL per host that is specified on the command line, which
> > makes the ACL listing a bit confusing.
> >
> > There are currently a few jiras in various states around this topic:
> > KAFKA-3531 [1], KAFKA-4759 [2], KAFKA-4985 [3] & KAFKA-5713 [4]
> >
> > KAFKA-4759 has a patch available, but would currently only add
> > interpretation of CIDR notation, no specific ranges, which I think
> > could easily be added.
> >
> > Colin McCabe commented in KAFKA-4985 that so far this was not
> > implemented as no standard for expressing ip ranges with a fast
> > implementation had been found so far, the available patch uses the
> > ipmath [5] package for parsing expressions and range checking - which
> > seems fairly small and focused.
> >
> > This would allow for expressions of the following type:
> > 10.0.0.1
> > 10.0.0.1-10.0.0.10
> > 10.0.0.0/24
> >
> > I'd suggest extending this a little to allow a semicolon separated
> > list of values:
> > 10.0.0.1;10.0.0.1-10.0.0.10;10.0.0.0/24
> >
> > Performance considerations
> > Internally the ipmath package represents ip addresses as longs, so if
> > we stick with the example of a 200 node cluster from above, with the
> > current implementation that would be 200 string comparisons for every
> > request, whereas with a range it could potentially come down to two
> > long comparisons. This is of course a back-of-the-envelope calculation
> > at best, but there at least seems to be a case for investigating this
> > a bit further I think.
> >
> >
> > These changes would probably necessitate a KIP - though with some
> > consideration they could be made in a way that no existing public
> > facing functionality is changed, but for transparency and proper
> > documentation I'd say a KIP would be preferable.
> >
> > I'd be happy to draft one if people think this is worthwhile.
> >
> > Let me know what you think.
> >
> > best regards,
> > Sönke
> >
> > [1] https://issues.apache.org/jira/bro

[jira] [Resolved] (KAFKA-1458) kafka hanging on shutdown

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1458.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if you think the issue 
still exists.
 

> kafka hanging on shutdown
> -
>
> Key: KAFKA-1458
> URL: https://issues.apache.org/jira/browse/KAFKA-1458
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: James Blackburn
>Priority: Major
>
> I tried to restart the kafka broker because of KAFKA-1407. However a normal 
> kill wouldn't kill it:
> jstack shows:
> {code}
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode):
> "Attach Listener" daemon prio=10 tid=0x1c2e8800 nid=0x6174 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "SIGTERM handler" daemon prio=10 tid=0x1c377800 nid=0x6076 waiting 
> for monitor entry [0x431f2000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - waiting to lock <0xd042f068> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:724)
> "SIGTERM handler" daemon prio=10 tid=0x1c652800 nid=0x6069 waiting 
> for monitor entry [0x430f1000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - waiting to lock <0xd042f068> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:724)
> "SIGTERM handler" daemon prio=10 tid=0x1c204000 nid=0x6068 waiting 
> for monitor entry [0x44303000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - waiting to lock <0xd042f068> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:724)
> "SIGTERM handler" daemon prio=10 tid=0x1c319000 nid=0x605b waiting 
> for monitor entry [0x409f3000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - waiting to lock <0xd042f068> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:724)
> "SIGTERM handler" daemon prio=10 tid=0x1c625000 nid=0x604c waiting 
> for monitor entry [0x439fa000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - waiting to lock <0xd042f068> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:724)
> "SIGTERM handler" daemon prio=10 tid=0x1c2e9800 nid=0x5d8b waiting 
> for monitor entry [0x438f9000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - waiting to lock <0xd042f068> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:724)
> "Thread-2" prio=10 tid=0x1c31a000 nid=0x3d4f waiting on condition 
> [0x44707000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xd04f28b8> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
> at kafka.utils.Utils$.inLock(Utils.scala:536)
> at 
> kafka.c

[jira] [Resolved] (KAFKA-1475) Kafka consumer stops LeaderFinder/FetcherThreads, but application does not know

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1475.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> Kafka consumer stops LeaderFinder/FetcherThreads, but application does not 
> know
> ---
>
> Key: KAFKA-1475
> URL: https://issues.apache.org/jira/browse/KAFKA-1475
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.8.0
> Environment: linux, rhel 6.4
>Reporter: Hang Qi
>Assignee: Neha Narkhede
>Priority: Major
>  Labels: consumer
> Attachments: 5055aeee-zk.txt
>
>
> We encountered an issue of consumers not consuming messages in production. 
> (This consumer has its own consumer group, and just consumes one topic with 3 
> partitions.)
> Based on the logs, we have the following findings:
> 1. The Zookeeper session expires; the Kafka high-level consumer detects this 
> event, releases the old broker partition ownership, and re-registers the 
> consumer.
> 2. Upon creating the ephemeral path in Zookeeper, it finds that the path 
> still exists and tries to read the content of the node.
> 3. After reading back the content, it finds the content is the same as what 
> it is going to write, so it logs "[ZkClient-EventThread-428-ZK/kafka] 
> (kafka.utils.ZkUtils$) - 
> /consumers/consumerA/ids/consumerA-1400815740329-5055aeee exists with value { 
> "pattern":"static", "subscription":{ "TOPIC": 1}, 
> "timestamp":"1400846114845", "version":1 } during connection loss; this is 
> ok" and does nothing.
> 4. After that, it throws an exception indicating that the cause is 
> "org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /consumers/consumerA/ids/consumerA-1400815740329-5055aeee" during 
> rebalance. 
> 5. After all retries fail, it gives up retrying, and the LeaderFinderThread 
> and FetcherThread are left stopped. 
> Step 3 looks very weird; checking the code, there is a timestamp contained in 
> the stored data, so it may be caused by a Zookeeper issue.
> But what I am wondering is whether it is possible to let the application 
> (Kafka client users) know that the underlying LeaderFinderThread and 
> FetcherThread are stopped, for example by allowing the application to 
> register a callback or by throwing an exception (e.g. by invalidating the 
> KafkaStream iterator). For me, it is not reasonable for the Kafka client to 
> shut down everything and wait for the next rebalance, leaving the application 
> waiting on iterator.hasNext() without knowing that something is wrong 
> underneath.
> I've read the wiki about the Kafka 0.9 consumer rewrite, and there is a 
> ConsumerRebalanceCallback interface, but I am not sure how long it will take 
> to be ready, and how long it will take for us to migrate. 
> Please help to look at this issue. Thanks very much!
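
For reference, the Java (new) consumer already exposes such a callback via 
ConsumerRebalanceListener; a minimal sketch, with the topic name and client 
configs as assumptions:

{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceAwareConsumer {
    public static void main(String[] args) {
        // "my-topic" and these client configs are placeholders.
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "consumerA");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(final Collection<TopicPartition> partitions) {
                // called before the group gives up ownership of these partitions
            }

            @Override
            public void onPartitionsAssigned(final Collection<TopicPartition> partitions) {
                // called once new ownership has been established
            }
        });
        // poll loop and close() omitted
    }
}
{code}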



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4256) Use IP for ZK broker register

2018-01-25 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4256.
--
Resolution: Won't Fix

Closing as per above comment. Pls reopen If the issue still exists.

> Use IP for ZK broker register
> -
>
> Key: KAFKA-4256
> URL: https://issues.apache.org/jira/browse/KAFKA-4256
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Yarek Tyshchenko
>Priority: Minor
>
> Kafka seems to default to using the FQDN when registering itself with 
> Zookeeper, via the call "java.net.InetAddress.getCanonicalHostName()". This 
> means that in an environment where the host's hostname doesn't resolve, the 
> node registered in Zookeeper becomes unreachable.
> Currently there's no way to tell Kafka to just use the IP address; I 
> understand that it would be difficult to know which interface it should use 
> to get the IP from.
> One environment like this is Docker (prior to version 1.11, where networks 
> are available). The only solution right now is to hard-code the IP address in 
> the configuration file.
> It would be nice if there was a configuration option to just use the IP 
> address of a specified interface.
> For reference I'm including my workaround for research:
> https://github.com/YarekTyshchenko/kafka-docker/blob/0d79fa4f1d5089de5ff2b6793f57103d9573fe3b/ip.sh



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6485) 'ConsumerGroupCommand' performance optimization for old consumer describe group

2018-01-25 Thread HongLiang (JIRA)
HongLiang created KAFKA-6485:


 Summary: 'ConsumerGroupCommand' performance optimization for old 
consumer describe group
 Key: KAFKA-6485
 URL: https://issues.apache.org/jira/browse/KAFKA-6485
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Affects Versions: 1.0.0
Reporter: HongLiang
 Attachments: ConsumerGroupCommand.diff

ConsumerGroupCommand describe-group performance optimization: a 3x performance 
improvement compared to trunk (1.0+), and a 10x improvement compared to 
0.10.2.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6484) 'ConsumerGroupCommand' performance optimization for old consumer describe group

2018-01-25 Thread HongLiang (JIRA)
HongLiang created KAFKA-6484:


 Summary: 'ConsumerGroupCommand' performance optimization for old 
consumer describe group
 Key: KAFKA-6484
 URL: https://issues.apache.org/jira/browse/KAFKA-6484
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Affects Versions: 1.0.0
Reporter: HongLiang
 Attachments: ConsumerGroupCommand.diff

ConsumerGroupCommand describe-group performance optimization: a 3x performance 
improvement compared to trunk (1.0+), and a 10x improvement compared to 
0.10.2.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Best practices Partition Key

2018-01-25 Thread Maria Pilar
Hi everyone,

I'm trying to understand the best practice for defining the partition key. I
have defined some topics that are related to entities in a Cassandra data
model; the relationship is one-to-one, one entity - one topic, because I
need to ensure proper ordering of the events. I have created a single
partition for each topic to ensure this as well.

If I use Kafka like a datastore and search through the records, I understand
it could be a best practice to use the Cassandra partition key (e.g.
Customer ID) as the partition key in Kafka.
any comment please ??

thanks


[jira] [Created] (KAFKA-6483) Support ExtendedSerializer in Kafka Streams

2018-01-25 Thread Tamas Cserveny (JIRA)
Tamas Cserveny created KAFKA-6483:
-

 Summary: Support ExtendedSerializer in Kafka Streams
 Key: KAFKA-6483
 URL: https://issues.apache.org/jira/browse/KAFKA-6483
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Tamas Cserveny
Assignee: Dale Peakall
 Fix For: 0.11.0.0


KIP-82 introduced the concept of message headers, along with an 
ExtendedDeserializer interface that allows a Deserializer to access those 
message headers.

Change Kafka Streams to support the use of ExtendedDeserializer, providing 
compatibility with Kafka clients that use the new header functionality.
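
For illustration, a minimal sketch of an ExtendedDeserializer implementation on 
the clients side (the "schema-id" header mentioned below is a made-up example):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.ExtendedDeserializer;

// Sketch only: a String deserializer that could also inspect a hypothetical "schema-id" header.
public class HeaderAwareStringDeserializer implements ExtendedDeserializer<String> {

    @Override
    public void configure(final Map<String, ?> configs, final boolean isKey) { }

    @Override
    public String deserialize(final String topic, final byte[] data) {
        // plain Deserializer path: no headers available
        return deserialize(topic, null, data);
    }

    @Override
    public String deserialize(final String topic, final Headers headers, final byte[] data) {
        if (data == null) {
            return null;
        }
        // A real implementation could branch on a header, e.g. headers.lastHeader("schema-id"),
        // to pick a decoder; this sketch just decodes UTF-8. headers may be null when the
        // plain Deserializer path is used.
        return new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public void close() { }
}
{code}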



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP 226 - Dynamic Broker Configuration

2018-01-25 Thread Viktor Somogyi
Yea, if other commands follow this pattern, I'll update KIP-248 as
well :). Introducing those arguments in the current ConfigCommand also
makes sense from the migration point of view, as it will be introduced
in 1.1, which makes things somewhat easier for KIP-248.

On Wed, Jan 24, 2018 at 6:46 PM, Rajini Sivaram 
wrote:

> Hi Ismael,
>
> Yes, that makes sense. Looking at the command line options for different
> tools, we seem to be using *--command-config  *in the commands
> that currently talk to the new AdminClient (DelegationTokenCommand,
> ConsumerGroupCommand, DeleteRecordsCommand). So perhaps it makes sense to
> do the same for ConfigCommand as well. I will update KIP-226 with the two
> options *--bootstrap-server* and *--command-config*.
>
> Viktor, what do you think?
>
> At the moment, I think many in the community are busy due to the code
> freeze next week, but hopefully we should get more feedback on KIP-248 soon
> after.
>
> Thank you,
>
> Rajini
>
> On Wed, Jan 24, 2018 at 5:41 AM, Viktor Somogyi 
> wrote:
>
> > Hi all,
> >
> > I'd also like to as the community here who were participating the
> > discussion of KIP-226 to take a look at KIP-248 (that is making
> > kafka-configs.sh fully function with AdminClient and a Java based
> > ConfigCommand). It would be much appreciated to get feedback on that as
> it
> > plays an important role for KIP-226 and other long-waited features.
> >
> > Thanks,
> > Viktor
> >
> > On Wed, Jan 24, 2018 at 6:56 AM, Ismael Juma  wrote:
> >
> > > Hi Rajini,
> > >
> > > I think the proposal makes sense. One suggestion: can we just allow the
> > > config to be passed? That is, leave out the properties config for now.
> > >
> > > On Tue, Jan 23, 2018 at 3:01 PM, Rajini Sivaram <
> rajinisiva...@gmail.com
> > >
> > > wrote:
> > >
> > > > Since we are running out of time to get the whole ConfigCommand
> > converted
> > > > to using the new AdminClient for 1.1.0 (KIP-248), we need a way to
> > enable
> > > > ConfigCommand to handle broker config updates (implemented by
> KIP-226).
> > > As
> > > > a simple first step, it would make sense to use the existing
> > > ConfigCommand
> > > > tool to perform broker config updates enabled by this KIP. Since
> config
> > > > validation and password encryption are performed by the broker, this
> > will
> > > > be easier to do with the new AdminClient. To do this, we need to add
> > > > command line options for new admin client to kafka-configs.sh.
> Dynamic
> > > > broker config updates alone will be done under KIP-226 using the new
> > > admin
> > > > client to make this feature usable.. The new command line options
> > > > (consistent with KIP-248) that will be added to ConfigCommand will
> be:
> > > >
> > > >- --bootstrap-server *host:port*
> > > >- --adminclient.config *config-file*
> > > >- --adminclient.properties *k1=v1,k2=v2*
> > > >
> > > > If anyone has any concerns about these options being added to
> > > > kafka-configs.sh, please let me know. Otherwise, I will update
> KIP-226
> > > and
> > > > add the options to one of the KIP-226 PRs.
> > > >
> > > > Thank you,
> > > >
> > > > Rajini
> > > >
> > > > On Wed, Jan 10, 2018 at 5:14 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Thanks Rajini. Sounds good.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Wed, Jan 10, 2018 at 11:41 AM, Rajini Sivaram <
> > > > rajinisiva...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Ismael,
> > > > > >
> > > > > > I have updated the KIP to use AES-256 if available and AES-128
> > > > otherwise
> > > > > > for password encryption. Looking at GCM, it looks like GCM is
> > > typically
> > > > > > used with a variable initialization vector, while we are using a
> > > > random,
> > > > > > but constant IV per-password. Also, AES/GCM is not supported by
> > > Java7.
> > > > > > Since the authentication and performance benefits of GCM are not
> > > > required
> > > > > > for this scenario, I am thinking I will leave the default as CBC,
> > but
> > > > > make
> > > > > > sure we test GCM as well so that users have the choice.
> > > > > >
> > > > > > On Wed, Jan 10, 2018 at 1:01 AM, Colin McCabe <
> cmcc...@apache.org>
> > > > > wrote:
> > > > > >
> > > > > > > Thanks, Rajini.  That makes sense.
> > > > > > >
> > > > > > > regards,
> > > > > > > Colin
> > > > > > >
> > > > > > > On Tue, Jan 9, 2018, at 14:38, Rajini Sivaram wrote:
> > > > > > > > Hi Colin,
> > > > > > > >
> > > > > > > > Thank you for reviewing.
> > > > > > > >
> > > > > > > > Yes, validation is done on the broker, not the client.
> > > > > > > >
> > > > > > > > All configs from ZooKeeper are processed and any config that
> > > could
> > > > > not
> > > > > > be
> > > > > > > > applied are logged as warnings. This includes any configs
> that
> > > are
> > > > > not
> > > > > > > > dynamic in the broker version or any configs that are not
> > > supported
> > > > > in
> > > > > > > the
> > > > > > > > broker version. If you downgrade to a version that 

Build failed in Jenkins: kafka-trunk-jdk9 #331

2018-01-25 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6180; Add a Validator for NonNull configurations and remove

--
[...truncated 1.84 MB...]

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldRestoreState PASSED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled STARTED

org.apache.kafka.streams.integration.RestoreIntegrationTest > 
shouldSuccessfullyStartWhenLoggingDisabled PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldAllowToQueryAfterThreadDied PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryMapValuesAfterFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryFilterState PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithGlobalAutoOffsetResetLatest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets 
STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecifiedWithInvalidCommittedOffsets PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > 
shouldOnlyReadRecordsWhereEarliestSpecifiedWithNoCommittedOffsetsWithDefaultGlobalAutoOffsetResetEarliest
 PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpec

Re: [VOTE] KIP-227: Introduce Fetch Requests that are Incremental to Increase Partition Scalability

2018-01-25 Thread Mickael Maison
I'm late to the party but +1 and thanks for the KIP

On Thu, Jan 25, 2018 at 12:36 AM, Ismael Juma  wrote:
> Agreed, Jun.
>
> Ismael
>
> On Wed, Jan 24, 2018 at 4:08 PM, Jun Rao  wrote:
>
>> Since this is a server side metric, it's probably better to use Yammer Rate
>> (which has count) for consistency.
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Jan 23, 2018 at 10:17 PM, Colin McCabe  wrote:
>>
>> > On Tue, Jan 23, 2018, at 21:47, Ismael Juma wrote:
>> > > Colin,
>> > >
>> > > You get a cumulative count for rates since we added
>> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-187+-+Add+cumulative+count+metric+for+all+Kafka+rate+metrics
>> >
>> > Oh, good point.
>> >
>> > C.
>> >
>> >
>> > > Ismael
>> > >
>> > > On Tue, Jan 23, 2018 at 4:21 PM, Colin McCabe  wrote:
>> > > > On Tue, Jan 23, 2018, at 11:57, Jun Rao wrote:
>> > > > > Hi, Colin,
>> > > > >
>> > > > > Thanks for the updated KIP. +1. Just a minor comment. It seems that
>> > > > > it's better for TotalIncrementalFetchSessionsEvicted to be a rate,
>> > > > > instead of just an ever-growing count.
>> > > >
>> > > > Thanks.  Perhaps we can add the rate in addition to the total eviction
>> > > > count?
>> > > >
>> > > > best,
>> > > > Colin
>> > > >
>> > > > >
>> > > > > Jun
>> > > > >
>> > > > > On Mon, Jan 22, 2018 at 4:35 PM, Jason Gustafson  wrote:
>> > > > >
>> > > > > > >
>> > > > > > > What if we want to have fetch sessions for non-incremental
>> > > > > > > fetches in the future, though?  Also, we don't expect this
>> > > > > > > configuration to be changed often, so it doesn't really need to
>> > > > > > > be short.
>> > > > > >
>> > > > > >
>> > > > > > Hmm.. But in that case, I'm not sure we'd need to distinguish the
>> > > > > > two cases. If the non-incremental sessions are occupying space
>> > > > > > proportional to the fetched partitions, using the same config for
>> > > > > > both would be reasonable. If they are not (which is more likely),
>> > > > > > we probably wouldn't need a config at all. Given that, I'd
>> > > > > > probably still opt for the more concise name. It's not a blocker
>> > > > > > for me though.
>> > > > > >
>> > > > > > +1 on the KIP.
>> > > > > >
>> > > > > > -Jason
>> > > > > >
>> > > > > > On Mon, Jan 22, 2018 at 3:56 PM, Colin McCabe  wrote:
>> > > > > >
>> > > > > > > On Mon, Jan 22, 2018, at 15:42, Jason Gustafson wrote:
>> > > > > > > > Hi Colin,
>> > > > > > > >
>> > > > > > > > This is looking good to me. A few comments:
>> > > > > > > >
>> > > > > > > > 1. The fetch type seems unnecessary in the request and
>> > > > > > > > response schemas since it can be inferred by the
>> > > > > > > > sessionId/epoch.
>> > > > > > >
>> > > > > > > Hi Jason,
>> > > > > > >
>> > > > > > > Fair enough... if we need it later, we can always bump the RPC
>> > > > > > > version.
>> > > > > > >
>> > > > > > > > 2. I agree with Jun that a separate array for partitions to
>> > > > > > > > remove would be more intuitive.
>> > > > > > >
>> > > > > > > OK.  I'll switch it to using a separate array.
>> > > > > > >
>> > > > > > > > 3. I'm not super thrilled with the cache configuration since
>> > > > > > > > it seems to tie us a bit too closely to the implementation.
>> > > > > > > > You've mostly convinced me on the need for the slots config,
>> > > > > > > > but I wonder if we can at least do without
>> > > > > > > > "min.incremental.fetch.session.eviction.ms"? For one, I think
>> > > > > > > > the broker should reserve the right to evict sessions at
>> > > > > > > > will. We shouldn't be stuck maintaining a small session at
>> > > > > > > > the expense of a much larger one just to enforce this
>> > > > > > > > timeout. Internally, I think having some cache stickiness to
>> > > > > > > > avoid thrashing makes sense, but I think static values are
>> > > > > > > > likely to be good enough and that lets us retain some
>> > > > > > > > flexibility to change the behavior in the future.
>> > > > > > >
>> > > > > > > OK.
>> > > > > > >
>> > > > > > > > 4. I think the word "incremental" is redundant in the config
>> > > > > > > > names. Maybe it could just be "max.fetch.session.cache.slots"
>> > > > > > > > for example?
>> > > > > > >
>> > > > > > > What if we want to have fetch sessions for non-incremental
>> > > > > > > fetches in the future, though?  Also, we don't expect this
>> > > > > > > configuration to be changed often, so it doesn't really need to
>> > > > > > > be short.
>> > > > > > >
>> > > > > > > best,
>> > > > > > > Colin
>> > > > > > >
>> > > > > > > >
>> >
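
The metric question above comes down to the fact that a Yammer Meter already
carries both a cumulative count and windowed rates, so registering the
eviction metric as a Meter covers both requests at once. A minimal sketch in
Java against the Yammer 2.x API; the metric name is a placeholder, not
necessarily what KIP-227 ended up shipping:

    import java.util.concurrent.TimeUnit;
    import com.yammer.metrics.Metrics;
    import com.yammer.metrics.core.Meter;

    public class FetchSessionMetricsSketch {
        // A Meter exposes count(), meanRate(), oneMinuteRate(), etc., so one
        // metric yields both the ever-growing eviction count and a rate.
        private static final Meter EVICTIONS = Metrics.newMeter(
                FetchSessionMetricsSketch.class,
                "IncrementalFetchSessionsEvicted",  // placeholder name
                "evictions",
                TimeUnit.SECONDS);

        public static void recordEviction() {
            EVICTIONS.mark();  // bumps the count and all rates together
        }

        public static void main(String[] args) {
            recordEviction();
            System.out.printf("count=%d, mean rate=%.3f evictions/sec%n",
                    EVICTIONS.count(), EVICTIONS.meanRate());
        }
    }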

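The cache configs being debated describe a bounded pool of incremental fetch
session slots on the broker, where admitting a new session may require
evicting an existing one. The sketch below only illustrates that idea under an
assumed policy (evict a smaller session, but not before a minimum age); the
class and field names are invented and the actual broker implementation
differs:

    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    public class FetchSessionCacheSketch {
        static final class Session {
            final int id;
            final long createdMs;
            final int partitionCount;
            Session(int id, long createdMs, int partitionCount) {
                this.id = id;
                this.createdMs = createdMs;
                this.partitionCount = partitionCount;
            }
        }

        private final int maxSlots;          // cf. max.incremental.fetch.session.cache.slots
        private final long minEvictionAgeMs; // cf. min.incremental.fetch.session.eviction.ms
        private final Map<Integer, Session> sessions = new HashMap<>();

        FetchSessionCacheSketch(int maxSlots, long minEvictionAgeMs) {
            this.maxSlots = maxSlots;
            this.minEvictionAgeMs = minEvictionAgeMs;
        }

        // Try to admit a new session; if the cache is full, evict the smallest
        // session that is both older than the minimum age and smaller than the
        // candidate. Returns false if the candidate could not be cached.
        synchronized boolean maybeAdd(Session candidate, long nowMs) {
            if (sessions.size() < maxSlots) {
                sessions.put(candidate.id, candidate);
                return true;
            }
            Optional<Session> victim = sessions.values().stream()
                    .filter(s -> nowMs - s.createdMs >= minEvictionAgeMs)
                    .filter(s -> s.partitionCount < candidate.partitionCount)
                    .min(Comparator.comparingInt((Session s) -> s.partitionCount));
            if (victim.isPresent()) {
                sessions.remove(victim.get().id);
                sessions.put(candidate.id, candidate);
                return true;
            }
            return false;  // caller falls back to a full, sessionless fetch
        }
    }

Whether the broker should honor a minimum eviction age at all, or reserve the
right to evict any session at will, is exactly the point Jason raises above.
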
[jira] [Created] (KAFKA-6482) when a producer sends invalid timestamps, the broker does not delete log segments due for retention

2018-01-25 Thread HongLiang (JIRA)
HongLiang created KAFKA-6482:


 Summary: when a producer sends invalid timestamps, the broker does not 
delete log segments due for retention
 Key: KAFKA-6482
 URL: https://issues.apache.org/jira/browse/KAFKA-6482
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 1.0.0
Reporter: HongLiang


When a producer sends invalid timestamps, the broker does not delete log 
segments that are due for retention.
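
For context: since the record format gained timestamps, time-based retention
is judged against the largest timestamp in a log segment, so a record carrying
a bogus far-future timestamp can keep its segment from ever being deleted. A
minimal way to reproduce that situation from the producer side (topic name and
bootstrap address are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BogusTimestampProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Explicit record timestamp roughly ten years in the future; on a
                // CreateTime topic this ends up as the segment's max timestamp,
                // which time-based retention uses to decide when to delete it.
                long farFuture = System.currentTimeMillis() + 10L * 365 * 24 * 60 * 60 * 1000;
                producer.send(new ProducerRecord<>("test-topic", null, farFuture, "key", "value"));
            }
        }
    }

Brokers can bound how far a client-supplied timestamp may drift from the
append time via log.message.timestamp.difference.max.ms, which is the usual
guard against this class of problem.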

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)