Re: [VOTE] KIP-239 Add queryableStoreName() to GlobalKTable

2018-01-01 Thread Ted Yu
Gentle reminder: one more binding vote is needed for the KIP to pass.

Cheers

On Thu, Dec 21, 2017 at 4:13 AM, Damian Guy  wrote:

> +1
>
> On Wed, 20 Dec 2017 at 21:09 Ted Yu  wrote:
>
> > Ping for more (binding) votes.
> >
> > The pull request is ready.
> >
> > On Fri, Dec 15, 2017 at 12:57 PM, Guozhang Wang 
> > wrote:
> >
> > > +1 (binding), thanks!
> > >
> > > On Fri, Dec 15, 2017 at 11:56 AM, Ted Yu  wrote:
> > >
> > > > Hi,
> > > > Here is the discussion thread:
> > > >
> > > > http://search-hadoop.com/m/Kafka/uyzND12QnH514pPO9?subj=Re+DISCUSS+KIP+239+Add+queryableStoreName+to+GlobalKTable
> > > >
> > > > Please vote on this KIP.
> > > >
> > > > Thanks
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


[jira] [Created] (KAFKA-6414) Inverse replication for replicas that are far behind

2018-01-01 Thread Ivan Babrou (JIRA)
Ivan Babrou created KAFKA-6414:
--

 Summary: Inverse replication for replicas that are far behind
 Key: KAFKA-6414
 URL: https://issues.apache.org/jira/browse/KAFKA-6414
 Project: Kafka
  Issue Type: Bug
  Components: replication
Reporter: Ivan Babrou


Let's suppose the following starting point:

* 1 topic
* 1 partition
* 1 reader
* 24h retention period
* leader outbound bandwidth is 3x the inbound bandwidth (1x replication + 1x 
reader + 1x slack = total outbound)

In this scenario, when a replica fails and needs to be rebuilt from scratch, 
it can catch up at 2x inbound bandwidth (1x regular replication + the 1x slack).

A 2x catch-up speed means the replica will reach the point where the leader is 
now in 24h / 2x = 12h. However, in those 12h the oldest 12h of the topic will 
fall off the retention cliff and be deleted. There is absolutely no use for this 
data; it will never be read from the replica in any scenario. And this is not 
even counting the fact that we still need to replicate 12h more of data that 
accumulated since we started.
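The arithmetic above can be sketched as a small back-of-the-envelope helper 
(the function name and signature are illustrative only, not Kafka code):

```python
def catch_up_estimate(retention_h, catchup_ratio):
    """Estimate recovery cost for an oldest-first (current) refill.

    retention_h   -- topic retention, in hours of data
    catchup_ratio -- catch-up bandwidth as a multiple of inbound (e.g. 2.0)

    Returns (copy_time_h, expired_h): how long copying the current
    retention window takes, and how many hours of the copied data fall
    off the retention cliff while the copy is in progress.
    """
    copy_time_h = retention_h / catchup_ratio
    # Data older than copy_time_h at the start is deleted before the
    # replica finishes catching up, so copying it was wasted work.
    expired_h = min(copy_time_h, retention_h)
    return copy_time_h, expired_h

print(catch_up_estimate(24, 2.0))
```

With the scenario's numbers (24h retention, 2x catch-up) this gives 12h of 
copy time and 12h of data replicated only to be deleted.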

My suggestion is to refill sufficiently out-of-sync replicas backwards from the 
tip: newest segments first, oldest segments last. Then we can stop when we hit 
the retention cliff and replicate far less data. The lower the ratio of catch-up 
bandwidth to inbound bandwidth, the higher the returns. This also sets a hard 
cap on catch-up time: it will be no higher than the retention period if the 
catch-up speed is >1x (if it's less, the replica is forever out of ISR anyway).

What exactly "sufficiently out of sync" means in terms of lag is a topic for 
debate. The default segment size is 1GiB; I'd say that being more than one full 
segment behind probably warrants this.

As of now, the workaround for slow recovery appears to be reducing retention to 
speed up recovery, which doesn't seem very friendly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6413) ReassignPartitionsCommand#parsePartitionReassignmentData() should give better error message when JSON is malformed

2018-01-01 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-6413:
-

 Summary: 
ReassignPartitionsCommand#parsePartitionReassignmentData() should give better 
error message when JSON is malformed
 Key: KAFKA-6413
 URL: https://issues.apache.org/jira/browse/KAFKA-6413
 Project: Kafka
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


In this thread: 
http://search-hadoop.com/m/Kafka/uyzND1J9Hizcxo0X?subj=Partition+reassignment+data+file+is+empty
 , Allen gave an example JSON string with an extra comma, for which the 
partitionsToBeReassigned returned by 
ReassignPartitionsCommand#parsePartitionReassignmentData() was empty.

I tried the following example where a right bracket is removed:
{code}
val (partitionsToBeReassigned, replicaAssignment) =
  ReassignPartitionsCommand.parsePartitionReassignmentData(
    "{\"version\":1,\"partitions\":[{\"topic\":\"metrics\",\"partition\":0,\"replicas\":[1,2]},{\"topic\":\"metrics\",\"partition\":1,\"replicas\":[2,3]},}")
{code}
The returned partitionsToBeReassigned is empty.

The parser should give a better error message for malformed JSON strings.
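For comparison, a strict JSON parser reports the exact position of the problem 
instead of silently returning an empty result. The Scala parser in question is 
not shown here; as an illustration of the kind of error reporting being asked 
for, here is Python's stdlib parser rejecting the same shape of input (the 
closing "]" dropped, as in the snippet above):

```python
import json

# Same malformed reassignment JSON as above, with the "]" removed.
malformed = '{"version":1,"partitions":[{"topic":"metrics","partition":0,"replicas":[1,2]},}'

try:
    json.loads(malformed)
except json.JSONDecodeError as e:
    # A strict parser pinpoints the offending character for the user.
    print(f"malformed reassignment JSON: {e.msg} at line {e.lineno}, column {e.colno}")
```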





Build failed in Jenkins: kafka-trunk-jdk7 #3068

2018-01-01 Thread Apache Jenkins Server
See 


Changes:

[github] Replace Arrays.asList with Collections.singletonList where possible

--
[...truncated 396.71 KB...]
kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] STARTED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
STARTED

kafka.log.ProducerStateManagerTest > testNoVal

Jenkins build is back to normal : kafka-trunk-jdk8 #2303

2018-01-01 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk9 #282

2018-01-01 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6307) mBeanName should be removed before returning from JmxReporter#removeAttribute()

2018-01-01 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6307.

   Resolution: Fixed
Fix Version/s: 1.1.0

> mBeanName should be removed before returning from 
> JmxReporter#removeAttribute()
> ---
>
> Key: KAFKA-6307
> URL: https://issues.apache.org/jira/browse/KAFKA-6307
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: siva santhalingam
> Fix For: 1.1.0
>
>
> JmxReporter$KafkaMbean showed up near the top of the first heap histogram 
> output from KAFKA-6199.
> In JmxReporter#removeAttribute() :
> {code}
> KafkaMbean mbean = this.mbeans.get(mBeanName);
> if (mbean != null)
> mbean.removeAttribute(metricName.name());
> return mbean;
> {code}
> mbeans.remove(mBeanName) should be called before returning.
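The actual fix is in Java's JmxReporter; as an illustration only (all names 
below are hypothetical), the remove-before-return pattern the report asks for 
looks like this:

```python
def remove_attribute(mbeans, mbean_name, attr_name):
    """Hypothetical sketch of the proposed fix: the bean entry is
    dropped from the registry map before the bean is returned, instead
    of being left behind as in the quoted Java snippet."""
    mbean = mbeans.get(mbean_name)
    if mbean is not None:
        mbean.pop(attr_name, None)  # remove the attribute itself
        del mbeans[mbean_name]      # the missing mbeans.remove(mBeanName) step
    return mbean
```

Without that deletion, the stale KafkaMbean entries accumulate in the map, 
which is consistent with them showing up near the top of the heap histogram.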


