Build failed in Jenkins: kafka-trunk-jdk8 #2383

2018-02-01 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update docs for KIP-229 (#4499)

--
[...truncated 410.76 KB...]

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerConfigUpdateTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleansCombinedCompactAndDeleteTopic[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleaningNestedMessagesWithMultipleVersions[3] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] STARTED

kafka.log.LogCleanerIntegrationTest > testCleanerWithMessageFormatV0[3] PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog STARTED

kafka.log.ProducerStateManagerTest > 

[jira] [Resolved] (KAFKA-6370) MirrorMakerIntegrationTest#testCommaSeparatedRegex may fail due to NullPointerException

2018-02-01 Thread huxihx (JIRA)

 [ https://issues.apache.org/jira/browse/KAFKA-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

huxihx resolved KAFKA-6370.
---
Resolution: Cannot Reproduce

Although it's harmless to add some defensive checks, this issue should not have 
happened, based on code review. Since it is not easy to reproduce, I am closing 
this JIRA; feel free to reopen it if you encounter it again.

> MirrorMakerIntegrationTest#testCommaSeparatedRegex may fail due to 
> NullPointerException
> ---
>
> Key: KAFKA-6370
> URL: https://issues.apache.org/jira/browse/KAFKA-6370
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: huxihx
>Priority: Minor
>  Labels: mirror-maker
>
> From https://builds.apache.org/job/kafka-trunk-jdk8/2277/testReport/junit/kafka.tools/MirrorMakerIntegrationTest/testCommaSeparatedRegex/ :
> {code}
> java.lang.NullPointerException
>   at scala.collection.immutable.StringLike.$anonfun$format$1(StringLike.scala:351)
>   at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
>   at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
>   at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:234)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at scala.collection.immutable.StringLike.format(StringLike.scala:351)
>   at scala.collection.immutable.StringLike.format$(StringLike.scala:350)
>   at scala.collection.immutable.StringOps.format(StringOps.scala:29)
>   at kafka.metrics.KafkaMetricsGroup$.$anonfun$toScope$3(KafkaMetricsGroup.scala:170)
>   at scala.collection.immutable.List.map(List.scala:283)
>   at kafka.metrics.KafkaMetricsGroup$.kafka$metrics$KafkaMetricsGroup$$toScope(KafkaMetricsGroup.scala:170)
>   at kafka.metrics.KafkaMetricsGroup.explicitMetricName(KafkaMetricsGroup.scala:67)
>   at kafka.metrics.KafkaMetricsGroup.explicitMetricName$(KafkaMetricsGroup.scala:51)
>   at kafka.network.RequestMetrics.explicitMetricName(RequestChannel.scala:352)
>   at kafka.metrics.KafkaMetricsGroup.metricName(KafkaMetricsGroup.scala:47)
>   at kafka.metrics.KafkaMetricsGroup.metricName$(KafkaMetricsGroup.scala:42)
>   at kafka.network.RequestMetrics.metricName(RequestChannel.scala:352)
>   at kafka.metrics.KafkaMetricsGroup.newHistogram(KafkaMetricsGroup.scala:81)
>   at kafka.metrics.KafkaMetricsGroup.newHistogram$(KafkaMetricsGroup.scala:80)
>   at kafka.network.RequestMetrics.newHistogram(RequestChannel.scala:352)
>   at kafka.network.RequestMetrics.<init>(RequestChannel.scala:364)
>   at kafka.network.RequestChannel$Metrics.$anonfun$new$2(RequestChannel.scala:57)
>   at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.RequestChannel$Metrics.<init>(RequestChannel.scala:56)
>   at kafka.network.RequestChannel.<init>(RequestChannel.scala:243)
>   at kafka.network.SocketServer.<init>(SocketServer.scala:71)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:238)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:135)
>   at kafka.integration.KafkaServerTestHarness.$anonfun$setUp$1(KafkaServerTestHarness.scala:93)
> {code}
> Here is the code from KafkaMetricsGroup.scala:
> {code}
> .map { case (key, value) => "%s.%s".format(key, value.replaceAll("\\.", "_")) }
> {code}
> It seems (some) value was null.
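If the fix were simply the defensive check mentioned in the resolution comment, it might look like the following. This is a hedged Java sketch for illustration only; `scopePart` is a hypothetical helper mirroring the Scala snippet above, not the actual Kafka patch.

```java
public class MetricScope {
    // Hypothetical helper: format "key.value" with dots in the value replaced
    // by underscores, but tolerate a null value instead of letting the
    // replaceAll call throw a NullPointerException as in the trace above.
    static String scopePart(String key, String value) {
        String safe = (value == null) ? "unknown" : value.replaceAll("\\.", "_");
        return String.format("%s.%s", key, safe);
    }

    public static void main(String[] args) {
        System.out.println(scopePart("clientId", "console-consumer.1")); // clientId.console-consumer_1
        System.out.println(scopePart("clientId", null));                 // clientId.unknown
    }
}
```

Whether the null value should map to a placeholder or simply be skipped is a design choice the JIRA left open.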



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-227: Introduce Fetch Requests that are Incremental to Increase Partition Scalability

2018-02-01 Thread Colin McCabe
On Thu, Jan 25, 2018, at 16:10, Colin McCabe wrote:
> On Thu, Jan 25, 2018, at 02:23, Mickael Maison wrote:
> > I'm late to the party but +1 and thanks for the KIP
> 
> Thanks
> 
> > 
> > On Thu, Jan 25, 2018 at 12:36 AM, Ismael Juma  wrote:
> > > Agreed, Jun.
> > >
> > > Ismael
> > >
> > > On Wed, Jan 24, 2018 at 4:08 PM, Jun Rao  wrote:
> > >
> > >> Since this is a server side metric, it's probably better to use Yammer Rate (which has count) for consistency.
> 
> Good point.

I updated the KIP to add an IncrementalFetchSessionEvictionsPerSec metric, as 
suggested here.

Also, I added a table of contents at the beginning, and updated the 
FetchRequest Metadata table.

thanks,
Colin

> 
> best,
> Colin
> 
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >> On Tue, Jan 23, 2018 at 10:17 PM, Colin McCabe  wrote:
> > >>
> > >> > On Tue, Jan 23, 2018, at 21:47, Ismael Juma wrote:
> > >> > > Colin,
> > >> > >
> > >> > > You get a cumulative count for rates since we added
> > >> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-187+-+Add+cumulative+count+metric+for+all+Kafka+rate+metrics
> > >> >
> > >> > Oh, good point.
> > >> >
> > >> > 
> > >> >
> > >> >
> > >> > > Ismael
> > >> > >
> > >> > > On Tue, Jan 23, 2018 at 4:21 PM, Colin McCabe
> > >> > >  wrote:>
> > >> > > > On Tue, Jan 23, 2018, at 11:57, Jun Rao wrote:
> > >> > > > > Hi, Collin,
> > >> > > > >
> > >> > > > > Thanks for the updated KIP. +1. Just a minor comment. It seems that it's
> > >> > > > > better for TotalIncrementalFetchSessionsEvicted to be a rate, instead of
> > >> > > > > just an ever-growing count.
> > >> > > >
> > >> > > > Thanks.  Perhaps we can add the rate in addition to the total eviction count?
> > >> > > >
> > >> > > > best,
> > >> > > > Colin
> > >> > > >
> > >> > > > >
> > >> > > > > Jun
> > >> > > > >
> > >> > > > > On Mon, Jan 22, 2018 at 4:35 PM, Jason Gustafson wrote:
> > >> > > > >
> > >> > > > > > >
> > >> > > > > > > What if we want to have fetch sessions for non-incremental fetches in the
> > >> > > > > > > future, though?  Also, we don't expect this configuration to be changed
> > >> > > > > > > often, so it doesn't really need to be short.
> > >> > > > > >
> > >> > > > > >
> > >> > > > > > Hmm.. But in that case, I'm not sure we'd need to distinguish the two
> > >> > > > > > cases. If the non-incremental sessions are occupying space proportional to
> > >> > > > > > the fetched partitions, using the same config for both would be reasonable.
> > >> > > > > > If they are not (which is more likely), we probably wouldn't need a config
> > >> > > > > > at all. Given that, I'd probably still opt for the more concise name. It's
> > >> > > > > > not a blocker for me though.
> > >> > > > > >
> > >> > > > > > +1 on the KIP.
> > >> > > > > >
> > >> > > > > > -Jason
> > >> > > > > >
> > >> > > > > > On Mon, Jan 22, 2018 at 3:56 PM, Colin McCabe wrote:
> > >> > > > > >
> > >> > > > > > > On Mon, Jan 22, 2018, at 15:42, Jason Gustafson wrote:
> > >> > > > > > > > Hi Colin,
> > >> > > > > > > >
> > >> > > > > > > > This is looking good to me. A few comments:
> > >> > > > > > > >
> > >> > > > > > > > 1. The fetch type seems unnecessary in the request and response schemas
> > >> > > > > > > > since it can be inferred by the sessionId/epoch.
> > >> > > > > > >
> > >> > > > > > > Hi Jason,
> > >> > > > > > >
> > >> > > > > > > Fair enough... if we need it later, we can always bump the RPC version.
> > >> > > > > > >
> > >> > > > > > > > 2. I agree with Jun that a separate array for partitions to remove would
> > >> > > > > > > > be more intuitive.
> > >> > > > > > >
> > >> > > > > > > OK.  I'll switch it to using a separate array.
> > >> > > > > > >
> > >> > > > > > > > 3. I'm not super thrilled with the cache configuration since it seems to
> > >> > > > > > > > tie us a bit too closely to the implementation. You've mostly convinced me
> > >> > > > > > > > on the need for the slots config, but I wonder if we can at least do
> > >> > > > > > > > without "min.incremental.fetch.session.eviction.ms"? For one, I think the
> > >> > > > > > > > broker should reserve the right to evict sessions at will. We shouldn't be
> > >> > > > > > > > stuck maintaining a small session at the expense of a much larger one just
> 
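The metric shape this thread converges on — a per-second rate that, since KIP-187, also carries a cumulative count — can be sketched without the Yammer metrics dependency as follows. This is a hypothetical illustration of the idea, not Kafka's implementation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of a "rate with a cumulative count" metric, the shape
// discussed for IncrementalFetchSessionEvictionsPerSec: callers mark() each
// eviction, and readers can observe both a per-second rate and an
// ever-growing total.
public class EvictionsPerSec {
    private final AtomicLong count = new AtomicLong();
    private final long startMs;

    public EvictionsPerSec(long nowMs) { this.startMs = nowMs; }

    public void mark() { count.incrementAndGet(); }

    // KIP-187 behavior: every rate metric also exposes its cumulative count.
    public long count() { return count.get(); }

    public double ratePerSec(long nowMs) {
        long elapsedMs = Math.max(1, nowMs - startMs);
        return count.get() * 1000.0 / elapsedMs;
    }

    public static void main(String[] args) {
        EvictionsPerSec m = new EvictionsPerSec(0L);
        for (int i = 0; i < 5; i++) m.mark();
        System.out.println(m.count() + " evictions, " + m.ratePerSec(1000L) + "/sec");
    }
}
```

A real broker metric would use a windowed (e.g. exponentially decaying) rate rather than a lifetime average; the point here is only that rate and count come from the same counter.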

Jenkins build is back to normal : kafka-trunk-jdk8 #2382

2018-02-01 Thread Apache Jenkins Server
See 




Re: Please add me to the contributor list

2018-02-01 Thread Gwen Shapira
Found the user ("Robert Yokota") and added to JIRA contributors.

On Thu, Feb 1, 2018 at 4:58 PM Guozhang Wang  wrote:

> Hello Robert,
>
> What's your apache id?
>
>
> Guozhang
>
> On Thu, Feb 1, 2018 at 4:37 PM, Robert Yokota  wrote:
>
> > Hi,
> >
> > Can someone please add me to the contributor list?   I want to assign
> > KAFKA-2925 to myself so I can close it out.
> >
> > Thanks,
> > Robert Yokota
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-6522) Retrying leaderEpoch request for partition xxx as the leader reported an error: UNKNOWN_SERVER_ERROR

2018-02-01 Thread Wang Shuxiao (JIRA)
Wang Shuxiao created KAFKA-6522:
---

 Summary: Retrying leaderEpoch request for partition xxx as the leader reported an error: UNKNOWN_SERVER_ERROR
 Key: KAFKA-6522
 URL: https://issues.apache.org/jira/browse/KAFKA-6522
 Project: Kafka
  Issue Type: New Feature
  Components: core
Affects Versions: 1.0.0
 Environment: Ubuntu 16.04 LTS 64bit-server
Reporter: Wang Shuxiao


We have 3 brokers in a Kafka cluster (broker ids: 401, 402, 403). Broker 403 
fails to fetch data from the leader:
{code:java}
[2018-02-02 08:58:26,861] INFO [ReplicaFetcher replicaId=403, leaderId=401, 
fetcherId=0] Retrying leaderEpoch request for partition sub_payone1hour-0 as 
the leader reported an error: UNKNOWN_SERVER_ERROR 
(kafka.server.ReplicaFetcherThread)
[2018-02-02 08:58:26,865] WARN [ReplicaFetcher replicaId=403, leaderId=401, 
fetcherId=3] Error when sending leader epoch request for Map(sub_myshardSinfo-3 
-> -1, sub_myshardUinfo-1 -> -1, sub_videoOnlineResourceType8Test-0 -> -1, 
pub_videoReportEevent-1 -> 9, sub_StreamNofity-3 -> -1, pub_RsVideoInfo-1 -> 
-1, pub_lidaTopic3-15 -> -1, pub_lidaTopic3-3 -> -1, sub_zwbtest-1 -> -1, 
sub_svAdminTagging-5 -> -1, pub_channelinfoupdate-1 -> -1, pub_RsPlayInfo-4 -> 
-1, sub_tinyVideoWatch-4 -> 14, __consumer_offsets-36 -> -1, 
pub_ybusAuditorChannel3-2 -> -1, pub_vipPush-4 -> -1, sub_LivingNotifyOnline-3 
-> -1, sub_baseonline-4 -> -1, __consumer_offsets-24 -> -1, sub_lidaTopic-3 -> 
-1, sub_mobileGuessGameReward-0 -> -1, pub_lidaTopic-6 -> -1, sub_NewUserAlgo-0 
-> -1, __consumer_offsets-48 -> -1, pub_RsUserBehavior-3 -> -1, 
sub_channelinfoupdate-0 -> -1, pub_tinyVideoComment-1 -> -1, pub_bulletin-2 -> 
-1, pub_RecordCompleteNotifition-6 -> -1, sub_lidaTopic2-3 -> -1, smsgateway-10 
-> -1, __consumer_offsets-0 -> -1, pub_baseonlinetest-1 -> -1, 
__consumer_offsets-12 -> -1, pub_myshardUinfo-0 -> -1, pub_baseonline-3 -> -1, 
smsGatewayMarketDbInfo-6 -> -1, sub_tinyVideoComment-0 -> 14) 
(kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 401 was disconnected before the response was read
 at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:95)
 at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:96)
 at kafka.server.ReplicaFetcherThread.fetchEpochsFromLeader(ReplicaFetcherThread.scala:312)
 at kafka.server.AbstractFetcherThread.maybeTruncate(AbstractFetcherThread.scala:130)
 at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:102)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64){code}
 

on the leader(broker-401) side, the log shows:
{code:java}
[2018-02-02 08:58:26,859] ERROR Closing socket for 116.31.112.9:9099-116.31.121.159:30476 because of error (kafka.network.Processor)
org.apache.kafka.common.errors.InvalidRequestException: Error getting request for apiKey: 23 and apiVersion: 0
Caused by: java.lang.IllegalArgumentException: Unexpected ApiKeys id `23`, it should be between `0` and `20` (inclusive)
 at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:73)
 at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
 at kafka.network.RequestChannel$Request.liftedTree2$1(RequestChannel.scala:96)
 at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:91)
 at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:492)
 at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:487)
 at scala.collection.Iterator$class.foreach(Iterator.scala:893)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
 at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at kafka.network.Processor.processCompletedReceives(SocketServer.scala:487)
 at kafka.network.Processor.run(SocketServer.scala:417)
 at java.lang.Thread.run(Thread.java:745){code}
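The ``Unexpected ApiKeys id `23` `` message suggests the leader is running an older protocol build that predates the OffsetForLeaderEpoch API (key 23), so it rejects the follower's request and the follower surfaces it as UNKNOWN_SERVER_ERROR. A simplified sketch of that range check is shown below; this is an assumed shape for illustration, not the exact Kafka source.

```java
public class ApiKeyCheck {
    static final int MIN_API_KEY = 0;
    static final int MAX_API_KEY = 20; // highest key the older broker build knows about

    // Simplified stand-in for the range check in ApiKeys.forId: api key ids
    // above the build's maximum are rejected with IllegalArgumentException,
    // which the follower then logs as UNKNOWN_SERVER_ERROR and retries.
    static int forId(int id) {
        if (id < MIN_API_KEY || id > MAX_API_KEY) {
            throw new IllegalArgumentException(String.format(
                "Unexpected ApiKeys id `%d`, it should be between `%d` and `%d` (inclusive)",
                id, MIN_API_KEY, MAX_API_KEY));
        }
        return id;
    }
}
```

If that diagnosis is right, the practical fix is version alignment between the brokers rather than a code change.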



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6492) LogSegment.truncateTo() should always resize the index file

2018-02-01 Thread Jason Gustafson (JIRA)

 [ https://issues.apache.org/jira/browse/KAFKA-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gustafson resolved KAFKA-6492.

   Resolution: Fixed
 Assignee: Jason Gustafson
Fix Version/s: (was: 1.2.0)
   1.1.0

> LogSegment.truncateTo() should always resize the index file
> ---
>
> Key: KAFKA-6492
> URL: https://issues.apache.org/jira/browse/KAFKA-6492
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.2, 0.10.1.1, 0.10.2.1, 1.0.0, 0.11.0.2
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 1.1.0
>
>
> The bug is the following:
>  # Initially on a follower broker there are two segments: segment 0 and segment 1. 
> Segment 0 is empty (maybe due to log compaction).
>  # The log is truncated to 0.
>  # LogSegment.truncateTo() will not find a message to truncate in segment 0, so 
> it will skip resizing the index/timeindex files. 
>  # When a new message is fetched, Log.maybeRoll() will try to roll a new 
> segment because the index file of segment 0 is already full (max size is 0).
>  # After creating the new segment 0, the replica fetcher thread finds that 
> segment 0 already exists, so it just throws an exception and dies.
> The fix is to have the broker make sure the index files of active segments 
> are always resized properly.
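The gist of the fix — truncation must restore index capacity even when it removes nothing — can be modeled with a toy sketch. `SegmentIndex` below is a hypothetical stand-in, not Kafka's LogSegment/OffsetIndex classes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the bug: an index whose maxEntries is 0 reports itself "full",
// which forces a roll of a new segment with the same base offset. The fix is
// to restore capacity on truncate unconditionally, even when the truncation
// removes no entries (the empty-segment case in the JIRA).
public class SegmentIndex {
    private final List<Long> entries = new ArrayList<>();
    private int maxEntries;

    public SegmentIndex(int maxEntries) { this.maxEntries = maxEntries; }

    public boolean isFull() { return entries.size() >= maxEntries; }

    public void truncateTo(long offset, int restoredMaxEntries) {
        entries.removeIf(o -> o >= offset); // may remove nothing in an empty segment
        maxEntries = restoredMaxEntries;    // the fix: resize unconditionally
    }
}
```

With the buggy behavior (skipping the resize when nothing is removed), `isFull()` would still be true after truncation and the roll-then-crash sequence in steps 4-5 follows.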



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

2018-02-01 Thread Matthias J. Sax
Thanks.

I updated the KIP accordingly and started work on the PR to see if this
`To` interface work nicely.

-Matthias

On 2/1/18 4:00 PM, Ted Yu wrote:
> Yeah.
> Cleaner in this formation.
> 
> On Thu, Feb 1, 2018 at 3:59 PM, Bill Bejeck  wrote:
> 
>> `To` works for me.
>>
>> Thanks,
>> Bill
>>
>> On Thu, Feb 1, 2018 at 6:47 PM, Matthias J. Sax 
>> wrote:
>>
>>> @Paolo:
>>>
>>> The timestamp will be used to set the message/record metadata timestamp
>>> on `Producer.send(new ProducerRecord(...,timestamp,...))`.
>>>
>>> @Bill,Ted:
>>>
>>> Might be a good idea. I was thinking about the name, and came up with
>> `To`:
>>>
 context.forward(key, value, To.child("processorX").withTimestamp(5));
 context.forward(key, value, To.child(1).withTimestamp(10));
>>>
>>> Without specifying the downstream child processor:
>>>
 context.forward(key, value, To.all().withTimestamp(5));
>>>
>>> WDYT?
>>>
>>>
>>> -Matthias
>>>
>>> On 2/1/18 8:45 AM, Ted Yu wrote:
 I like Bill's idea (pending a better name for the Forwarded).

 Cheers

 On Thu, Feb 1, 2018 at 7:47 AM, Bill Bejeck  wrote:

> Hi Matthias,
>
> Thanks for the KIP!
>
> Could we consider taking an approach similar to what was done in
>> KIP-182
> with regards to overloading?
>
> Meaning we could add a "Forwarded" object (horrible name I know) with
> methods withTimestamp, withChildName, and withChildIndex. To handle
>> the
> case when both a child-name and child-index is provided we could throw
>>> an
> exception.
>
> Then we could reduce the overloaded {{forward}} methods from 6 to 2.
>
> Thanks,
> Bill
>
>
> On Thu, Feb 1, 2018 at 3:49 AM, Paolo Patierno 
>>> wrote:
>
>> Hi Matthias,
>>
>> just a question : what will be the timestamp "type" in the new
>> message
>>> on
>> the wire ?
>>
>> Thanks,
>> Paolo.
>> 
>> From: Matthias J. Sax 
>> Sent: Wednesday, January 31, 2018 2:06 AM
>> To: dev@kafka.apache.org
>> Subject: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor
>>> API
>>
>> Hi,
>>
>> I want to propose a new KIP for Kafka Streams that allows timestamp
>> manipulation at Processor API level.
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 251%3A+Allow+timestamp+manipulation+in+Processor+API
>>
>> Looking forward to your feedback.
>>
>>
>> -Matthias
>>
>>
>

>>>
>>>
>>
> 



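The `To` object proposed in the thread above could plausibly be shaped like this. This is a hedged sketch using only the names from the emails (`child`, `all`, `withTimestamp`); the final Kafka Streams API may differ.

```java
// Hypothetical shape of the proposed `To` parameter object: a small immutable
// value naming the downstream child (by name, by index, or "all") plus an
// optional record timestamp, replacing six forward() overloads with one.
public final class To {
    final String childName;   // null means not addressed by name
    final Integer childIndex; // null means not addressed by index
    final Long timestamp;     // null means "inherit the input timestamp"

    private To(String name, Integer index, Long ts) {
        this.childName = name; this.childIndex = index; this.timestamp = ts;
    }

    public static To child(String name) { return new To(name, null, null); }
    public static To child(int index)   { return new To(null, index, null); }
    public static To all()              { return new To(null, null, null); }

    public To withTimestamp(long ts) {
        return new To(childName, childIndex, ts);
    }
}
```

Usage would then read as in the thread: `context.forward(key, value, To.child("processorX").withTimestamp(5))`.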


Re: Please add me to the contributor list

2018-02-01 Thread Guozhang Wang
Hello Robert,

What's your apache id?


Guozhang

On Thu, Feb 1, 2018 at 4:37 PM, Robert Yokota  wrote:

> Hi,
>
> Can someone please add me to the contributor list?   I want to assign
> KAFKA-2925 to myself so I can close it out.
>
> Thanks,
> Robert Yokota
>



-- 
-- Guozhang


Please add me to the contributor list

2018-02-01 Thread Robert Yokota
Hi,

Can someone please add me to the contributor list?   I want to assign
KAFKA-2925 to myself so I can close it out.

Thanks,
Robert Yokota


Re: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

2018-02-01 Thread Ted Yu
Yeah.
Cleaner in this formation.

On Thu, Feb 1, 2018 at 3:59 PM, Bill Bejeck  wrote:

> `To` works for me.
>
> Thanks,
> Bill
>
> On Thu, Feb 1, 2018 at 6:47 PM, Matthias J. Sax 
> wrote:
>
> > @Paolo:
> >
> > The timestamp will be used to set the message/record metadata timestamp
> > on `Producer.send(new ProducerRecord(...,timestamp,...))`.
> >
> > @Bill,Ted:
> >
> > Might be a good idea. I was thinking about the name, and came up with
> `To`:
> >
> > > context.forward(key, value, To.child("processorX").withTimestamp(5));
> > > context.forward(key, value, To.child(1).withTimestamp(10));
> >
> > Without specifying the downstream child processor:
> >
> > > context.forward(key, value, To.all().withTimestamp(5));
> >
> > WDYT?
> >
> >
> > -Matthias
> >
> > On 2/1/18 8:45 AM, Ted Yu wrote:
> > > I like Bill's idea (pending a better name for the Forwarded).
> > >
> > > Cheers
> > >
> > > On Thu, Feb 1, 2018 at 7:47 AM, Bill Bejeck  wrote:
> > >
> > >> Hi Matthias,
> > >>
> > >> Thanks for the KIP!
> > >>
> > >> Could we consider taking an approach similar to what was done in
> KIP-182
> > >> with regards to overloading?
> > >>
> > >> Meaning we could add a "Forwarded" object (horrible name I know) with
> > >> methods withTimestamp, withChildName, and withChildIndex. To handle
> the
> > >> case when both a child-name and child-index is provided we could throw
> > an
> > >> exception.
> > >>
> > >> Then we could reduce the overloaded {{forward}} methods from 6 to 2.
> > >>
> > >> Thanks,
> > >> Bill
> > >>
> > >>
> > >> On Thu, Feb 1, 2018 at 3:49 AM, Paolo Patierno 
> > wrote:
> > >>
> > >>> Hi Matthias,
> > >>>
> > >>> just a question : what will be the timestamp "type" in the new
> message
> > on
> > >>> the wire ?
> > >>>
> > >>> Thanks,
> > >>> Paolo.
> > >>> 
> > >>> From: Matthias J. Sax 
> > >>> Sent: Wednesday, January 31, 2018 2:06 AM
> > >>> To: dev@kafka.apache.org
> > >>> Subject: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor
> > API
> > >>>
> > >>> Hi,
> > >>>
> > >>> I want to propose a new KIP for Kafka Streams that allows timestamp
> > >>> manipulation at Processor API level.
> > >>>
> > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >>> 251%3A+Allow+timestamp+manipulation+in+Processor+API
> > >>>
> > >>> Looking forward to your feedback.
> > >>>
> > >>>
> > >>> -Matthias
> > >>>
> > >>>
> > >>
> > >
> >
> >
>


Re: Vote for KIP-245: Use Properties instead of StreamsConfig in KafkaStreams constructor

2018-02-01 Thread Matthias J. Sax
I am closing this vote.

The KIP is accepted with 3 non-binding (Ted, Bill, Satish) and 3 binding
(Guozhang, Damian, Matthias) votes.

Thanks a lot for the discussion and voting.


-Matthias

On 1/16/18 10:08 AM, Matthias J. Sax wrote:
> ATM, StreamsConfig is immutable. Thus, it's just boilerplate code for
> the user.
> 
> If we really add a builder pattern or change StreamsConfig, we can
> update the API accordingly. We will need a KIP for those changes anyway.
> But for now, it seems to be best to avoid boiler plate if possible.
> 
> Just my 2 cents.
> 
> -Matthias
> 
> On 1/16/18 9:59 AM, Colin McCabe wrote:
>> Why not just have a StreamsConfig constructor that takes a Properties 
>> object?  This has a few advantages.  Firstly, because it's purely additive, 
>> it doesn't create any deprecated functions or compatibility issues that we 
>> have to clean up later.  Secondly, if we decide to do something interesting 
>> with StreamsConfig later, we still have it.  For example, we could have a 
>> builder, or some interesting ways to serialize it or convert it to a string.
>>
>> best,
>> Colin
>>
>>
>> On Tue, Jan 16, 2018, at 09:54, Matthias J. Sax wrote:
>>> Thanks for updating the KIP.
>>>
>>> I am recasting my vote +1 (binding).
>>>
>>>
>>> -Matthias
>>>
>>> On 1/13/18 4:30 AM, Boyang Chen wrote:
 Hey Matt and Guozhang,


 I have already updated the pull
 request: https://github.com/apache/kafka/pull/4354

 and the
 KIP: 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-245%3A+Use+Properties+instead+of+StreamsConfig+in+KafkaStreams+constructor


 to reflect the change proposed by Guozhang(adding a 4th constructor)


 Let me know your thoughts!


 Best,

 Boyang



 
 *From:* Boyang Chen 
 *Sent:* Saturday, January 13, 2018 9:37 AM
 *To:* Matthias J. Sax
 *Subject:* Re: Vote for KIP-245: Use Properties instead of StreamsConfig
 in KafkaStreams constructor
  

 Sounds good, will do. However I don't receive the +1 emails, interesting...



 
 *From:* Matthias J. Sax 
 *Sent:* Saturday, January 13, 2018 9:32 AM
 *To:* Boyang Chen
 *Subject:* Re: Vote for KIP-245: Use Properties instead of StreamsConfig
 in KafkaStreams constructor
  
 Guozhang left a comment about having a 4th overload. I agree that we
 should add this 4th overload.

 Please update the KIP accordingly and follow up on the mailing list
 thread. Than we can vote it through.

 Thx.

 -Matthias

 On 1/12/18 4:48 PM, Boyang Chen wrote:
> Hey Matt,
>
>
> I haven't received any approval/veto on this KIP. Everything is ready
> but only needs one approval. Any step I should take?
>
> Thanks for the help!
>
> Boyang
>
>
>
> 
> *From:* Matthias J. Sax 
> *Sent:* Saturday, January 13, 2018 3:47 AM
> *To:* dev@kafka.apache.org
> *Subject:* Re: Vote for KIP-245: Use Properties instead of StreamsConfig
> in KafkaStreams constructor
>  
> Boyang,
>
> what is the status of this KIP? The release plan for 1.1 was just
> announced and we like to get this KIP into the release.
>
> Thx.
>
>
> -Matthias
>
> On 1/2/18 11:18 AM, Guozhang Wang wrote:
>> Boyang,
>>
>> Thanks for the proposed change, the wiki page lgtm. One minor comment
>> otherwise I'm +1:
>>
>> For the new API, we now also have a constructor that accepts both a
>> clientSupplier and a Time, so we should consider having four overloads in
>> total:
>>
>>
>> // New API (using Properties)
>> public KafkaStreams(final Topology, final Properties props)
>> public KafkaStreams(final Topology, final Properties props, final Time time)
>> public KafkaStreams(final Topology, final Properties props, final KafkaClientSupplier clientSupplier)
>> public KafkaStreams(final Topology, final Properties props, final KafkaClientSupplier clientSupplier, final Time time)
>>
>> Guozhang
>>
>> On Tue, Dec 26, 2017 at 7:26 PM, Satish Duggana 
>> 
>> wrote:
>>
>>> Thanks for the KIP, +1 from me.
>>>
>>> On Wed, Dec 27, 2017 at 7:42 AM, Bill Bejeck  wrote:
>>>
 Thanks for the KIP.  +1 for me.

 On Tue, Dec 26, 2017 at 6:22 PM Ted Yu  wrote:

> +1 from me as well.
>
> On Tue, Dec 26, 2017 at 10:41 AM, Matthias J. Sax <
>>> matth...@confluent.io
>
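With the accepted KIP, callers hand `java.util.Properties` straight to the constructor instead of first wrapping it in a `StreamsConfig`. The sketch below illustrates the call-site change; the config keys shown are standard Streams settings, and `topology` in the comments stands for an existing Topology instance.

```java
import java.util.Properties;

public class PropsExample {
    // Builds the Properties that the new KafkaStreams(topology, props)
    // overload accepts directly, removing the StreamsConfig boilerplate.
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put("application.id", "wordcount");          // StreamsConfig.APPLICATION_ID_CONFIG
        props.put("bootstrap.servers", "localhost:9092");  // StreamsConfig.BOOTSTRAP_SERVERS_CONFIG
        return props;
        // before: new KafkaStreams(topology, new StreamsConfig(props));
        // after:  new KafkaStreams(topology, props);
    }
}
```

Since StreamsConfig is currently immutable, the wrapper added nothing for users, which is the rationale quoted above.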

Re: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

2018-02-01 Thread Bill Bejeck
`To` works for me.

Thanks,
Bill

On Thu, Feb 1, 2018 at 6:47 PM, Matthias J. Sax 
wrote:

> @Paolo:
>
> The timestamp will be used to set the message/record metadata timestamp
> on `Producer.send(new ProducerRecord(...,timestamp,...))`.
>
> @Bill,Ted:
>
> Might be a good idea. I was thinking about the name, and came up with `To`:
>
> > context.forward(key, value, To.child("processorX").withTimestamp(5));
> > context.forward(key, value, To.child(1).withTimestamp(10));
>
> Without specifying the downstream child processor:
>
> > context.forward(key, value, To.all().withTimestamp(5));
>
> WDYT?
>
>
> -Matthias
>
> On 2/1/18 8:45 AM, Ted Yu wrote:
> > I like Bill's idea (pending a better name for the Forwarded).
> >
> > Cheers
> >
> > On Thu, Feb 1, 2018 at 7:47 AM, Bill Bejeck  wrote:
> >
> >> Hi Matthias,
> >>
> >> Thanks for the KIP!
> >>
> >> Could we consider taking an approach similar to what was done in KIP-182
> >> with regards to overloading?
> >>
> >> Meaning we could add a "Forwarded" object (horrible name I know) with
> >> methods withTimestamp, withChildName, and withChildIndex. To handle the
> >> case when both a child-name and child-index is provided we could throw
> an
> >> exception.
> >>
> >> Then we could reduce the overloaded {{forward}} methods from 6 to 2.
> >>
> >> Thanks,
> >> Bill
> >>
> >>
> >> On Thu, Feb 1, 2018 at 3:49 AM, Paolo Patierno 
> wrote:
> >>
> >>> Hi Matthias,
> >>>
> >>> just a question : what will be the timestamp "type" in the new message
> on
> >>> the wire ?
> >>>
> >>> Thanks,
> >>> Paolo.
> >>> 
> >>> From: Matthias J. Sax 
> >>> Sent: Wednesday, January 31, 2018 2:06 AM
> >>> To: dev@kafka.apache.org
> >>> Subject: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor
> API
> >>>
> >>> Hi,
> >>>
> >>> I want to propose a new KIP for Kafka Streams that allows timestamp
> >>> manipulation at Processor API level.
> >>>
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> 251%3A+Allow+timestamp+manipulation+in+Processor+API
> >>>
> >>> Looking forward to your feedback.
> >>>
> >>>
> >>> -Matthias
> >>>
> >>>
> >>
> >
>
>
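The `To` parameter object floated in this thread could look roughly like the following. This is a hypothetical sketch inferred from the examples above, not the final Kafka Streams API; all names, defaults, and sentinel values are guesses.

```java
// Hypothetical sketch of the proposed `To` parameter object.
// Sentinel values (-1 / null) are assumptions, not the final API.
public class To {
    private final String childName; // null => forward to all children
    private final int childIndex;   // -1 => no child index given
    private long timestamp = -1L;   // -1 => inherit the input record's timestamp

    private To(String childName, int childIndex) {
        this.childName = childName;
        this.childIndex = childIndex;
    }

    public static To child(String name) { return new To(name, -1); }
    public static To child(int index) { return new To(null, index); }
    public static To all() { return new To(null, -1); }

    public To withTimestamp(long timestamp) {
        this.timestamp = timestamp;
        return this;
    }

    public long timestamp() { return timestamp; }
    public String childName() { return childName; }
}
```

With such a class, the six `forward` overloads collapse into the two calls shown in the examples, e.g. `context.forward(key, value, To.child("processorX").withTimestamp(5))`.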


Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2018-02-01 Thread Jeff Widman
Don't forget to update the wiki page now that the vote has passed--it
currently says this KIP is "under discussion":
https://cwiki.apache.org/confluence/display/KAFKA/KIP-222+-+Add+Consumer+Group+operations+to+Admin+API

Also, should the JIRA ticket be tagged with 1.1.0 (provided this is merged
by then)?

On Mon, Jan 22, 2018 at 3:26 AM, Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> My bad, KIP is updated:
>
> ```
> public class MemberDescription {
>     private final String consumerId;
>     private final String clientId;
>     private final String host;
>     private final MemberAssignment assignment;
> }
>
> public class MemberAssignment {
>     private final List<TopicPartition> assignment;
> }
> ```
>
> Cheers,
> Jorge.
>
> On Mon, Jan 22, 2018 at 6:46, Jun Rao () wrote:
>
> > Hi, Jorge,
> >
> > For #3, I wasn't suggesting using the internal Assignment. We can just
> > introduce a new public type that wraps List<TopicPartition>. We can call it
> > sth like MemberAssignment to distinguish it from the internal one. This
> > makes extending the type in the future easier.
> >
> > Thanks,
> >
> > Jun
> >
> > On Sun, Jan 21, 2018 at 3:19 PM, Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> > > Hi all,
> > >
> > > Thanks all for your votes and approving this KIP :)
> > >
> > > @Jun Rao:
> > >
> > > 1. Yes, KIP is updated with MemberDescription.
> > > 2. Changed:
> > > ```
> > > public class ListGroupOffsetsResult {
> > >     final KafkaFuture<Map<TopicPartition, OffsetAndMetadata>> future;
> > > ```
> > > 3. Not sure about this one as Assignment type is part of
> > > o.a.k.clients.consumer.internals. Will we be breaking encapsulation
> if we
> > > expose it as part of AdminClient?
> > > Currently is defined as:
> > > ```
> > > public class MemberDescription {
> > >     private final String consumerId;
> > >     private final String clientId;
> > >     private final String host;
> > >     private final List<TopicPartition> assignment;
> > > }
> > > ```
> > >
> > > BTW: I've created a PR with the work in progress:
> > > https://github.com/apache/kafka/pull/4454
> > >
> > > Cheers,
> > > Jorge.
> > >
> > > On Fri, Jan 19, 2018 at 23:52, Jun Rao ()
> > > wrote:
> > >
> > > > Hi, Jorge,
> > > >
> > > > Thanks for the KIP. Looks good to me overall. A few comments below.
> > > >
> > > > 1. It seems that ConsumerDescription should be MemberDescription?
> > > >
> > > > 2. Each offset can have an optional metadata. So, in
> > > > ListGroupOffsetsResult, perhaps it's better to have
> > > > KafkaFuture<Map<TopicPartition, OffsetAndMetadata>>, where
> > > > OffsetAndMetadata contains an offset and a metadata of String.
> > > >
> > > > 3. As Jason mentioned in the discussion, it would be nice to extend
> > this
> > > > api to support general group management, instead of just the consumer
> > > group
> > > > in the future. For that, it might be better for MemberDescription to
> > have
> > > > assignment of type Assignment, which consists of a list of
> partitions.
> > > > Then, in the future, we can add other fields to Assignment.
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Thu, Jan 18, 2018 at 9:45 AM, Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 (non binding), thanks
> > > > >
> > > > > On Thu, Jan 18, 2018 at 5:41 PM, Colin McCabe 
> > > > wrote:
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > Colin
> > > > > >
> > > > > >
> > > > > > On Thu, Jan 18, 2018, at 07:36, Ted Yu wrote:
> > > > > >> +1
> > > > > >>  Original message From: Bill Bejeck <
> > > > bbej...@gmail.com>
> > > > > >> Date: 1/18/18  6:59 AM  (GMT-08:00) To: dev@kafka.apache.org
> > > Subject:
> > > > > >> Re: [VOTE] KIP-222 - Add "describe consumer group" to
> > > KafkaAdminClient
> > > > > >> Thanks for the KIP
> > > > > >>
> > > > > >> +1
> > > > > >>
> > > > > >> Bill
> > > > > >>
> > > > > >> On Thu, Jan 18, 2018 at 4:24 AM, Rajini Sivaram <
> > > > > rajinisiva...@gmail.com>
> > > > > >> wrote:
> > > > > >>
> > > > > >> > +1 (binding)
> > > > > >> >
> > > > > >> > Thanks for the KIP, Jorge.
> > > > > >> >
> > > > > >> > Regards,
> > > > > >> >
> > > > > >> > Rajini
> > > > > >> >
> > > > > >> > On Wed, Jan 17, 2018 at 9:04 PM, Guozhang Wang <
> > > wangg...@gmail.com>
> > > > > wrote:
> > > > > >> >
> > > > > >> > > +1 (binding). Thanks Jorge.
> > > > > >> > >
> > > > > >> > >
> > > > > >> > > Guozhang
> > > > > >> > >
> > > > > >> > > On Wed, Jan 17, 2018 at 11:29 AM, Gwen Shapira <
> > > g...@confluent.io
> > > > >
> > > > > >> > wrote:
> > > > > >> > >
> > > > > >> > > > Hey, since there were no additional comments in the
> > > discussion,
> > > > > I'd
> > > > > >> > like
> > > > > >> > > to
> > > > > >> > > > resume the voting.
> > > > > >> > > >
> > > > > >> > > > +1 (binding)
> > > > > >> > > >
> > > > > >> > > > On Fri, Nov 17, 2017 at 9:15 AM Guozhang Wang <
> > > > wangg...@gmail.com
> > > > > >
> > > > > >> 
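The result types discussed in this thread can be sketched as plain Java, with the HTML-stripped generics restored and `TopicPartition` stubbed in so the example is self-contained (the real class lives in `org.apache.kafka.common`). This follows the KIP draft at the time of this discussion, not necessarily the final AdminClient API.

```java
import java.util.Collections;
import java.util.List;

// Self-contained sketch of the KIP-222 draft result types.
// TopicPartition is stubbed here for illustration only.
public class GroupDescribeTypes {
    public static class TopicPartition {
        public final String topic;
        public final int partition;
        public TopicPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
    }

    // Wrapping the partition list in its own type (Jun's suggestion) keeps the
    // API extensible: fields can be added to MemberAssignment later without
    // changing MemberDescription or breaking callers.
    public static class MemberAssignment {
        private final List<TopicPartition> topicPartitions;
        public MemberAssignment(List<TopicPartition> topicPartitions) {
            this.topicPartitions = Collections.unmodifiableList(topicPartitions);
        }
        public List<TopicPartition> topicPartitions() { return topicPartitions; }
    }

    public static class MemberDescription {
        private final String consumerId;
        private final String clientId;
        private final String host;
        private final MemberAssignment assignment;
        public MemberDescription(String consumerId, String clientId, String host,
                                 MemberAssignment assignment) {
            this.consumerId = consumerId;
            this.clientId = clientId;
            this.host = host;
            this.assignment = assignment;
        }
        public MemberAssignment assignment() { return assignment; }
    }
}
```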

Re: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

2018-02-01 Thread Matthias J. Sax
@Paolo:

The timestamp will be used to set the message/record metadata timestamp
on `Producer.send(new ProducerRecord(...,timestamp,...))`.

@Bill,Ted:

Might be a good idea. I was thinking about the name, and came up with `To`:

> context.forward(key, value, To.child("processorX").withTimestamp(5));
> context.forward(key, value, To.child(1).withTimestamp(10));

Without specifying the downstream child processor:

> context.forward(key, value, To.all().withTimestamp(5));

WDYT?


-Matthias

On 2/1/18 8:45 AM, Ted Yu wrote:
> I like Bill's idea (pending a better name for the Forwarded).
> 
> Cheers
> 
> On Thu, Feb 1, 2018 at 7:47 AM, Bill Bejeck  wrote:
> 
>> Hi Matthias,
>>
>> Thanks for the KIP!
>>
>> Could we consider taking an approach similar to what was done in KIP-182
>> with regards to overloading?
>>
>> Meaning we could add a "Forwarded" object (horrible name I know) with
>> methods withTimestamp, withChildName, and withChildIndex. To handle the
>> case when both a child name and a child index are provided, we could throw an
>> exception.
>>
>> Then we could reduce the overloaded {{forward}} methods from 6 to 2.
>>
>> Thanks,
>> Bill
>>
>>
>> On Thu, Feb 1, 2018 at 3:49 AM, Paolo Patierno  wrote:
>>
>>> Hi Matthias,
>>>
>>> just a question: what will be the timestamp "type" in the new message on
>>> the wire?
>>>
>>> Thanks,
>>> Paolo.
>>> 
>>> From: Matthias J. Sax 
>>> Sent: Wednesday, January 31, 2018 2:06 AM
>>> To: dev@kafka.apache.org
>>> Subject: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API
>>>
>>> Hi,
>>>
>>> I want to propose a new KIP for Kafka Streams that allows timestamp
>>> manipulation at Processor API level.
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> 251%3A+Allow+timestamp+manipulation+in+Processor+API
>>>
>>> Looking forward to your feedback.
>>>
>>>
>>> -Matthias
>>>
>>>
>>
> 





Build failed in Jenkins: kafka-trunk-jdk8 #2381

2018-02-01 Thread Apache Jenkins Server
See 


Changes:

[becket.qin] KAFKA-6489; Fetcher.retrieveOffsetsByTimes() should batch the 
metadata

--
[...truncated 3.25 MB...]
kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED

kafka.network.SocketServerTest > processDisconnectedException PASSED

kafka.network.SocketServerTest > sendCancelledKeyException STARTED

kafka.network.SocketServerTest > sendCancelledKeyException PASSED

kafka.network.SocketServerTest > processCompletedReceiveException STARTED

kafka.network.SocketServerTest > processCompletedReceiveException PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown STARTED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > pollException STARTED

kafka.network.SocketServerTest > pollException PASSED

kafka.network.SocketServerTest > testSslSocketServer STARTED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization STARTED

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testFetchResponseVersion PASSED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion STARTED

kafka.api.RequestResponseSerializationTest > testProduceResponseVersion PASSED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec STARTED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec PASSED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets STARTED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED


[jira] [Created] (KAFKA-6521) Store record timestamps in KTable stores

2018-02-01 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6521:
--

 Summary: Store record timestamps in KTable stores
 Key: KAFKA-6521
 URL: https://issues.apache.org/jira/browse/KAFKA-6521
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Currently, KTables store plain key-value pairs. However, it is desirable to 
also store a timestamp for the record.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Kafka Consumers not rebalancing.

2018-02-01 Thread satyajit vegesna
Any help would be appreciated!

Hi All,

I was experimenting on the new consumer API and have a question regarding
the rebalance process.

I start a consumer group with a single thread and make the thread sleep while
processing the records retrieved from the first consumer.poll call. I made
sure the Thread.sleep time goes beyond the session timeout and was expecting
to see a rebalance in the consumer group.

But when I try to get the state of the consumer group using the below
command,

/opt/confluent-4.0.0/bin/kafka-consumer-groups  --group
"consumer-grievances-group15" --bootstrap-server xxx:9092,:9092,
:9092  --describe

I get the below result:

Consumer group 'consumer-grievances-group15' has no active members.


TOPIC                            PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG    CONSUMER-ID  HOST  CLIENT-ID
TELMATEQA.grievances.grievances  0          125855          152037          26182  -            -     -

The same happens in the multiple-thread consumer group scenario. Going one
step further, I was able to make the thread go from the sleep state back to
the running state, and could see that the consumer group started off from
where it left off.

My only question is: why isn't the rebalancing happening in this scenario?
My expectation was that the threads would rebalance and start from the
committed offset.

Regards,
Satyajit.


On Mon, Jan 29, 2018 at 3:36 PM, satyajit vegesna 
wrote:

> Hi All,
>
> I was experimenting on the new consumer API and have a question regarding
> the rebalance process.
>
> I start a consumer group with single thread and make the Thread sleep
> while processing the records retrieved from the first consumer.poll call, i
> was making sure the Thread.sleep time goes beyond the session timeout and
> was expecting to see a rebalance on the consumer group.
>
> But when i try to get the state of the consumer group using the below
> command,
>
> /opt/confluent-4.0.0/bin/kafka-consumer-groups  --group
> "consumer-grievances-group15" --bootstrap-server xxx:9092,:9092,
> :9092  --describe
>
> i get below result ,
>
> Consumer group 'consumer-grievances-group15' has no active members.
>
>
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET
> LAGCONSUMER-ID   HOST
>   CLIENT-ID
>
> TELMATEQA.grievances.grievances 0  125855  152037
>   26182  - -
> -
>
> The same happens with multiple thread in the consumer group scenario, and 
> *going
> further one step i was able to make the thread get into running state, from
> sleep state and could see that the consumer group started off from where it
> left.*
>
> My only question is , why isn't the rebalancing happening in this
> scenario. My expectation was that the threads rebalance and start from the
> committed offset.
>
> Regards,
> Satyajit.
>
>
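For context on the behavior described above: since KIP-62 (Kafka 0.10.1+), consumer heartbeats are sent from a background thread, so a processing thread sleeping past `session.timeout.ms` does not by itself trigger a rebalance; the group only rebalances once the gap between `poll()` calls exceeds `max.poll.interval.ms`. A minimal sketch of the two configs involved (the values shown are illustrative defaults, not a recommendation):

```java
import java.util.Properties;

// Sketch of the two consumer configs that govern the observed behavior.
public class RebalanceConfigs {
    public static Properties consumerProps() {
        Properties props = new Properties();
        // session.timeout.ms bounds heartbeat liveness; heartbeats come from a
        // background thread (KIP-62), so Thread.sleep in the processing loop
        // does not expire the session.
        props.put("session.timeout.ms", "10000");
        // max.poll.interval.ms bounds the time between poll() calls; exceeding
        // it is what actually triggers the rebalance this test expected.
        props.put("max.poll.interval.ms", "300000");
        return props;
    }
}
```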


Jenkins build is back to normal : kafka-trunk-jdk7 #3142

2018-02-01 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-01 Thread Vahid S Hashemian
Thanks James for sharing that scenario.

I agree it makes sense to be able to remove offsets for the topics that 
are no longer "active" in the group.
I think it becomes important to determine what constitutes a topic no longer
being active: if we use per-partition expiration, we would manually choose a
retention time that works for the particular scenario.

That works, but since we are manually intervening and specifying a
per-partition retention, why not do the intervention in some other way:

One alternative for this intervention, to favor the simplicity of the 
suggested protocol in the KIP, is to improve upon the just introduced 
DELETE_GROUPS API and allow for deletion of offsets of specific topics in 
the group. This is what the old ZooKeeper based group management supported 
anyway, and we would just be leveling the group deletion features of the 
Kafka-based group management with the ZooKeeper-based one.

So, instead of deciding in advance when the offsets should be removed we 
would instantly remove them when we are sure that they are no longer 
needed.

Let me know what you think.

Thanks.
--Vahid



From:   James Cheng 
To: dev@kafka.apache.org
Date:   02/01/2018 12:37 AM
Subject:Re: [DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer Group Offsets



Vahid,

Under rejected alternatives, we had decided that we did NOT want to do 
per-partition expiration, and instead we wait until the entire group is 
empty and then (after the right time has passed) expire the entire group 
at once.

I thought of one scenario that might benefit from per-partition 
expiration.

Let's say I have topics A B C... Z. So, I have 26 topics, all of them 
single partition, so 26 partitions. Let's say I have mirrormaker mirroring 
those 26 topics. The group will then have 26 committed offsets.

Let's say I then change the whitelist on mirrormaker so that it only 
mirrors topic Z, but I keep the same consumer group name. (I imagine that 
is a common thing to do?)

With the proposed design for this KIP, the committed offsets for topics A 
through Y will stay around as long as this mirroring group name exists.

In the current implementation that already exists (prior to this KIP), I 
believe that committed offsets for topics A through Y will expire.

How much do we care about this case?

-James

> On Jan 23, 2018, at 11:44 PM, Jeff Widman  wrote:
> 
> Bumping this as I'd like to see it land...
> 
> It's one of the "features" that tends to catch Kafka n00bs unawares and
> typically results in message skippage/loss, vs the proposed solution is
> much more intuitive behavior.
> 
> Plus it's more wire efficient because consumers no longer need to commit
> offsets for partitions that have no new messages just to keep those 
offsets
> alive.
> 
> On Fri, Jan 12, 2018 at 10:21 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
> 
>> There has been no further discussion on this KIP for about two months.
>> So I thought I'd provide the scoop hoping it would spark additional
>> feedback and move the KIP forward.
>> 
>> The KIP proposes a method to preserve group offsets as long as the 
group
>> is not in Empty state (even when offsets are committed very rarely), 
and
>> start the offset expiration of the group as soon as the group becomes
>> Empty.
>> It suggests dropping the `retention_time` field from the `OffsetCommit`
>> request and, instead, enforcing it via the broker config
>> `offsets.retention.minutes` for all groups. In other words, all groups
>> will have the same retention time.
>> The KIP presumes that this global retention config would suffice common
>> use cases and does not lead to, e.g., unmanageable offset cache size 
(for
>> groups that don't need to stay around that long). It suggests opening
>> another KIP if this global retention setting proves to be problematic 
in
>> the future. It was suggested earlier in the discussion thread that the 
KIP
>> should propose a per-group retention config to circumvent this risk.
>> 
>> I look forward to hearing your thoughts. Thanks!
>> 
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   "Vahid S Hashemian" 
>> To: dev 
>> Date:   10/18/2017 04:45 PM
>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of 
Consumer
>> Group Offsets
>> 
>> 
>> 
>> Hi all,
>> 
>> I created a KIP to address the group offset expiration issue reported 
in
>> KAFKA-4682:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>> 
>> 
>> Your feedback is welcome!
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> 
> *Jeff Widman*
> jeffwidman.com <
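The expiration rule summarized in the quoted recap (offsets are preserved while the group has live members; the retention clock starts only once the group becomes Empty) can be sketched as follows. The names are illustrative, not the broker's actual code.

```java
// Sketch of the KIP-211 expiration rule for a group's committed offsets.
public class OffsetExpiry {
    public static boolean offsetsExpired(boolean groupIsEmpty,
                                         long becameEmptyAtMs,
                                         long nowMs,
                                         long offsetsRetentionMs) {
        // While the group has live members, offsets never expire --
        // even if they were committed very rarely.
        if (!groupIsEmpty) {
            return false;
        }
        // Once the group is Empty, offsets expire after the broker-wide
        // retention (offsets.retention.minutes) has elapsed.
        return nowMs - becameEmptyAtMs > offsetsRetentionMs;
    }
}
```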

[jira] [Created] (KAFKA-6520) When a Kafka Stream can't communicate with the server, its Status stays RUNNING

2018-02-01 Thread Michael Kohout (JIRA)
Michael Kohout created KAFKA-6520:
-

 Summary: When a Kafka Stream can't communicate with the server, 
its Status stays RUNNING
 Key: KAFKA-6520
 URL: https://issues.apache.org/jira/browse/KAFKA-6520
 Project: Kafka
  Issue Type: Bug
Reporter: Michael Kohout


When you execute the following scenario, the application always stays in the
RUNNING state:
 
1) start Kafka
2) start the app; it connects to Kafka and starts processing
3) kill Kafka (stop the Docker container)
4) the application doesn't give any indication that it's no longer
connected (the stream state is still RUNNING, and the uncaught exception
handler isn't invoked)
 
 
It would be useful if the Stream State had a DISCONNECTED status.
 
see 
[this|https://groups.google.com/forum/#!topic/confluent-platform/nQh2ohgdrIQ] 
for a discussion from the google user forum.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk7 #3141

2018-02-01 Thread Apache Jenkins Server
See 


Changes:

[becket.qin] KAFKA-6489; Fetcher.retrieveOffsetsByTimes() should batch the 
metadata

--
[...truncated 412.13 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs PASSED

kafka.server.MultipleListenersWithAdditionalJaasContextTest > 
testProduceConsume STARTED

kafka.server.MultipleListenersWithAdditionalJaasContextTest > 
testProduceConsume PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets STARTED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload STARTED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.OffsetCommitTest > testOffsetsDeleteAfterTopicDeletion STARTED

kafka.server.OffsetCommitTest > testOffsetsDeleteAfterTopicDeletion PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets STARTED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testOffsetExpiration STARTED

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit STARTED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithLeaderThrottle STARTED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithLeaderThrottle PASSED

kafka.server.ReplicationQuotasTest > shouldThrottleOldSegments STARTED

kafka.server.ReplicationQuotasTest > shouldThrottleOldSegments PASSED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithFollowerThrottle STARTED

kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithFollowerThrottle PASSED

kafka.server.EdgeCaseRequestTest > testInvalidApiVersionRequest STARTED

kafka.server.EdgeCaseRequestTest > testInvalidApiVersionRequest PASSED

kafka.server.EdgeCaseRequestTest > testMalformedHeaderRequest STARTED

kafka.server.EdgeCaseRequestTest > testMalformedHeaderRequest PASSED

kafka.server.EdgeCaseRequestTest > testProduceRequestWithNullClientId STARTED

kafka.server.EdgeCaseRequestTest > testProduceRequestWithNullClientId PASSED

kafka.server.EdgeCaseRequestTest > testInvalidApiKeyRequest STARTED

kafka.server.EdgeCaseRequestTest > testInvalidApiKeyRequest PASSED

kafka.server.EdgeCaseRequestTest > testHeaderOnlyRequest STARTED

kafka.server.EdgeCaseRequestTest > testHeaderOnlyRequest PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.SaslApiVersionsRequestTest > 

[jira] [Created] (KAFKA-6519) Change log level from ERROR to WARN for not leader for this partition exception

2018-02-01 Thread Antony Stubbs (JIRA)
Antony Stubbs created KAFKA-6519:


 Summary: Change log level from ERROR to WARN for not leader for 
this partition exception
 Key: KAFKA-6519
 URL: https://issues.apache.org/jira/browse/KAFKA-6519
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 1.0.0
Reporter: Antony Stubbs


"Not the leader for this partition" is not an operational error; it is in
fact expected and part of the partition discovery / movement system. This
confuses users because they think something is going wrong. I'd suggest at
least changing it to WARN, but is it even something users should be warned
about at all?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6499) Avoid creating dummy checkpoint files with no state stores

2018-02-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6499.
--
   Resolution: Fixed
Fix Version/s: 1.2.0
   1.1.0

> Avoid creating dummy checkpoint files with no state stores
> --
>
> Key: KAFKA-6499
> URL: https://issues.apache.org/jira/browse/KAFKA-6499
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 1.1.0, 1.2.0
>
>
> Today, for a streams task that contains no state stores, its processor state 
> manager would still write a dummy checkpoint file that contains some 
> characters (version, size). This introduces unnecessary disk IOs and should 
> be avoidable.
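The idea of the fix can be sketched as a simple guard: only write the checkpoint file when there is something to checkpoint. This is an illustrative sketch (the `Checkpointer` class and the file format shown are hypothetical), not the actual ProcessorStateManager change.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

// Sketch: skip the checkpoint write entirely when a task has no state
// stores, avoiding the dummy file and its unnecessary disk IO.
public class Checkpointer {
    public static boolean maybeWriteCheckpoint(Path checkpointFile,
                                               Map<String, Long> offsets) throws IOException {
        if (offsets.isEmpty()) {
            return false; // no state stores -> nothing to checkpoint
        }
        // Hypothetical format: version line, size line, then "storeName offset" pairs.
        StringBuilder sb = new StringBuilder("0\n").append(offsets.size()).append('\n');
        for (Map.Entry<String, Long> e : offsets.entrySet()) {
            sb.append(e.getKey()).append(' ').append(e.getValue()).append('\n');
        }
        Files.write(checkpointFile, sb.toString().getBytes());
        return true;
    }
}
```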



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6489) Fetcher.retrieveOffsetsByTimes() should add all the topics to the metadata refresh topics set.

2018-02-01 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved KAFKA-6489.
-
Resolution: Fixed

> Fetcher.retrieveOffsetsByTimes() should add all the topics to the metadata 
> refresh topics set.
> --
>
> Key: KAFKA-6489
> URL: https://issues.apache.org/jira/browse/KAFKA-6489
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 1.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
> Fix For: 1.1.0
>
>
> Currently, if users call KafkaConsumer.offsetsForTimes() with a large set of
> partitions, the consumer will add one topic at a time for the metadata
> refresh. We should add all the topics to the metadata topics and just do one 
> metadata refresh.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2018-02-01 Thread Manikumar
Hi,

There are a few deployments using IPv6. It would be good to support IPv6 as well.

I think KAFKA-5713 is about adding regular expression support to resource
names (topic, consumer, etc.).
Yes, wildcards (*) in hostnames don't make sense. Range and subnet
support will give us the flexibility.

On Thu, Feb 1, 2018 at 5:56 PM, Sönke Liebau <
soenke.lie...@opencore.com.invalid> wrote:

> Hi Manikumar,
>
> the current proposal indeed leaves out IPv6 addresses, as I was unsure
> whether Kafka fully supports that yet to be honest. But it would be
> fairly easy to add these to the proposal - I'll update it over the
> weekend.
>
> Regarding KAFKA-5713, I simply listed it as related, since it is
> similar in spirit, if not exact wording.  Parts of that issue
> (wildcards in hosts) would be covered by this kip - just in a slightly
> different way. Do we really need wildcard support in IP addresses if
> we can specify ranges and subnets? I considered it, but only came up
> with scenarios that seemed fairly academic to me, like allowing the
> same host from multiple subnets (10.0.*.1) for example.
>
> Allowing wildcards has the potential to make the code more complex,
> depending on how we decide to implement this feature, hence I decided
> to leave wildcards out for now.
>
> What do you think?
>
> Best regards,
> Sönke
>
> On Thu, Feb 1, 2018 at 10:14 AM, Manikumar 
> wrote:
> > Hi,
> >
> > 1. Do we support IPv6 CIDR/ranges?
> >
> > 2. KAFKA-5713 is mentioned in Related JIRAs section. But there is no
> > mention of wildcard support in the KIP.
> >
> >
> > Thanks,
> >
> > On Thu, Feb 1, 2018 at 4:05 AM, Sönke Liebau <
> > soenke.lie...@opencore.com.invalid> wrote:
> >
> >> Hey everybody,
> >>
> >> following a brief inital discussion a couple of days ago on this list
> >> I'd like to get a discussion going on KIP-252 which would allow
> >> specifying ip ranges and subnets for the -allow-host and --deny-host
> >> parameters of the acl tool.
> >>
> >> The KIP can be found at
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 252+-+Extend+ACLs+to+allow+filtering+based+on+ip+ranges+and+subnets
> >>
> >> Best regards,
> >> Sönke
> >>
>
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>


Re: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

2018-02-01 Thread Ted Yu
I like Bill's idea (pending a better name for the Forwarded).

Cheers

On Thu, Feb 1, 2018 at 7:47 AM, Bill Bejeck  wrote:

> Hi Matthias,
>
> Thanks for the KIP!
>
> Could we consider taking an approach similar to what was done in KIP-182
> with regards to overloading?
>
> Meaning we could add a "Forwarded" object (horrible name I know) with
> methods withTimestamp, withChildName, and withChildIndex. To handle the
> case when both a child-name and child-index is provided we could throw an
> exception.
>
> Then we could reduce the overloaded {{forward}} methods from 6 to 2.
>
> Thanks,
> Bill
>
>
> On Thu, Feb 1, 2018 at 3:49 AM, Paolo Patierno  wrote:
>
> > Hi Matthias,
> >
> > just a question: what will be the timestamp "type" in the new message on
> > the wire?
> >
> > Thanks,
> > Paolo.
> > 
> > From: Matthias J. Sax 
> > Sent: Wednesday, January 31, 2018 2:06 AM
> > To: dev@kafka.apache.org
> > Subject: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API
> >
> > Hi,
> >
> > I want to propose a new KIP for Kafka Streams that allows timestamp
> > manipulation at Processor API level.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 251%3A+Allow+timestamp+manipulation+in+Processor+API
> >
> > Looking forward to your feedback.
> >
> >
> > -Matthias
> >
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2380

2018-02-01 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

2018-02-01 Thread Bill Bejeck
Hi Matthias,

Thanks for the KIP!

Could we consider taking an approach similar to what was done in KIP-182
with regards to overloading?

Meaning we could add a "Forwarded" object (horrible name I know) with
methods withTimestamp, withChildName, and withChildIndex. To handle the
case when both a child-name and a child-index are provided, we could throw an
exception.

Then we could reduce the overloaded {{forward}} methods from 6 to 2.
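A minimal sketch of what such a parameter object could look like. The name "Forwarded" and the method names come from the suggestion above; they are not a released Kafka API, and the shape here is purely illustrative:

```java
public final class Forwarded {
    private final Long timestamp;
    private final String childName;
    private final Integer childIndex;

    private Forwarded(Long timestamp, String childName, Integer childIndex) {
        // The exceptional case discussed above: child-name and child-index
        // are mutually exclusive.
        if (childName != null && childIndex != null)
            throw new IllegalArgumentException(
                "Specify either a child-name or a child-index, not both");
        this.timestamp = timestamp;
        this.childName = childName;
        this.childIndex = childIndex;
    }

    public static Forwarded with() { return new Forwarded(null, null, null); }

    public Forwarded withTimestamp(long timestamp) { return new Forwarded(timestamp, childName, childIndex); }
    public Forwarded withChildName(String childName) { return new Forwarded(timestamp, childName, childIndex); }
    public Forwarded withChildIndex(int childIndex) { return new Forwarded(timestamp, childName, childIndex); }

    public Long timestamp() { return timestamp; }
    public String childName() { return childName; }
    public Integer childIndex() { return childIndex; }
}
```

With a parameter object like this, the two remaining overloads would be forward(key, value) and forward(key, value, Forwarded), called e.g. as context.forward(key, value, Forwarded.with().withTimestamp(ts)).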

Thanks,
Bill


On Thu, Feb 1, 2018 at 3:49 AM, Paolo Patierno  wrote:

> Hi Matthias,
>
> just a question: what will be the timestamp "type" in the new message on
> the wire?
>
> Thanks,
> Paolo.
> 
> From: Matthias J. Sax 
> Sent: Wednesday, January 31, 2018 2:06 AM
> To: dev@kafka.apache.org
> Subject: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API
>
> Hi,
>
> I want to propose a new KIP for Kafka Streams that allows timestamp
> manipulation at Processor API level.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 251%3A+Allow+timestamp+manipulation+in+Processor+API
>
> Looking forward to your feedback.
>
>
> -Matthias
>
>


Build failed in Jenkins: kafka-trunk-jdk7 #3140

2018-02-01 Thread Apache Jenkins Server
See 


Changes:

[github] Bump trunk versions to 1.2-SNAPSHOT (#4505)

--
[...truncated 44.63 MB...]
at com.sun.proxy.$Proxy74.onOutput(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.results.TestListenerAdapter.output(TestListenerAdapter.java:56)
at sun.reflect.GeneratedMethodAccessor297.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:230)
at 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149)
at 
org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324)
at 
org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:234)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:140)
at 
org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:37)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy75.output(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.results.StateTrackingTestResultProcessor.output(StateTrackingTestResultProcessor.java:87)
at 
org.gradle.api.internal.tasks.testing.results.AttachParentTestResultProcessor.output(AttachParentTestResultProcessor.java:48)
at sun.reflect.GeneratedMethodAccessor301.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.FailureHandlingDispatch.dispatch(FailureHandlingDispatch.java:29)
at 
org.gradle.internal.dispatch.AsyncDispatch.dispatchMessages(AsyncDispatch.java:133)
at 
org.gradle.internal.dispatch.AsyncDispatch.access$000(AsyncDispatch.java:34)
at 
org.gradle.internal.dispatch.AsyncDispatch$1.run(AsyncDispatch.java:73)
at 
org.gradle.internal.operations.BuildOperationIdentifierPreservingRunnable.run(BuildOperationIdentifierPreservingRunnable.java:39)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at 
org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at 
org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:154)
... 54 more
java.io.IOException: No space left on device
com.esotericsoftware.kryo.KryoException: java.io.IOException: No space left on 
device
at com.esotericsoftware.kryo.io.Output.flush(Output.java:156)
at com.esotericsoftware.kryo.io.Output.require(Output.java:134)
at com.esotericsoftware.kryo.io.Output.writeBoolean(Output.java:578)
at 
org.gradle.internal.serialize.kryo.KryoBackedEncoder.writeBoolean(KryoBackedEncoder.java:63)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestOutputStore$Writer.onOutput(TestOutputStore.java:99)
at 
org.gradle.api.internal.tasks.testing.junit.result.TestReportDataCollector.onOutput(TestReportDataCollector.java:141)
at sun.reflect.GeneratedMethodAccessor298.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 

[jira] [Created] (KAFKA-6518) Kafka broker stops after configured log retention time and throws I/O exception in append to log

2018-02-01 Thread Rajendra Jangir (JIRA)
Rajendra Jangir created KAFKA-6518:
--

 Summary: Kafka broker stops after configured log retention time
and throws I/O exception in append to log
 Key: KAFKA-6518
 URL: https://issues.apache.org/jira/browse/KAFKA-6518
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.10.2.0
 Environment: Windows 7, Java version 9.0.4, kafka version- 0.10.2.0, 
zookeeper version -3.3.6
Reporter: Rajendra Jangir
 Fix For: 0.10.0.2
 Attachments: AppendToLogError.PNG

I am facing a serious issue in Kafka. I have multiple Kafka brokers, and I am
producing approximately 100-200 messages per second through a Python script.
Initially it works fine without any exception. When the log retention time
(5 minutes in my case) is reached, it throws an I/O exception in append to log
and finally the Kafka broker stops. It throws the following error:

[2018-02-01 17:18:39,168] FATAL [Replica Manager on Broker 3]: Halting due to 
unrecoverable I/O error while handling produce request: 
(kafka.server.ReplicaManager)
kafka.common.KafkaStorageException: I/O exception in append to log 'rjangir1-22'
 at kafka.log.Log.append(Log.scala:349)
 at kafka.cluster.Partition$$anonfun$10.apply(Partition.scala:443)
 at kafka.cluster.Partition$$anonfun$10.apply(Partition.scala:429)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:240)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
 at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:407)
 at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:393)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:393)
 at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:330)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:416)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
 at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.io.IOException: The requested operation cannot be performed on 
a file with a user-mapped section open
 at java.base/java.io.RandomAccessFile.setLength(Native Method)
 at kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:125)
 at kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:116)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
 at kafka.log.AbstractIndex.resize(AbstractIndex.scala:116)
 at 
kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(AbstractIndex.scala:175)
 at 
kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply(AbstractIndex.scala:175)
 at 
kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply(AbstractIndex.scala:175)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
 at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:174)
 at kafka.log.Log.roll(Log.scala:774)
 at kafka.log.Log.maybeRoll(Log.scala:745)
 at kafka.log.Log.append(Log.scala:405)
 ... 22 more

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6516) KafkaProducer retries indefinitely to authenticate on SaslAuthenticationException

2018-02-01 Thread Edoardo Comar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edoardo Comar resolved KAFKA-6516.
--
Resolution: Won't Fix

Thanks, [~rsivaram] - so my expectation was invalid then.

> KafkaProducer retries indefinitely to authenticate on 
> SaslAuthenticationException
> -
>
> Key: KAFKA-6516
> URL: https://issues.apache.org/jira/browse/KAFKA-6516
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.0.0
>Reporter: Edoardo Comar
>Priority: Major
>
> Even after https://issues.apache.org/jira/browse/KAFKA-5854
> the producer's (background) polling thread keeps retrying to authenticate
> if the future returned by KafkaProducer.send is not resolved.
> The current test
> {{org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest.testProducerWithInvalidCredentials()}}
> passes because it relies on the future being resolved.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6517) ZooKeeperClient holds a lock while waiting for responses, blocking shutdown

2018-02-01 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-6517:
-

 Summary: ZooKeeperClient holds a lock while waiting for responses, 
blocking shutdown
 Key: KAFKA-6517
 URL: https://issues.apache.org/jira/browse/KAFKA-6517
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.0
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 1.1.0


Stack traces from a local test run that was deadlocked because shutdown 
couldn't acquire the lock:
 # kafka-scheduler-7: acquired read lock in 
kafka.zookeeper.ZooKeeperClient.handleRequests
 # Test worker-EventThread waiting for write lock to process SessionExpired in 
kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$.process
 # ForkJoinPool-1-worker-11 processing KafkaServer.shutdown is queued behind 2) 
waiting to acquire read lock for 
kafka.zookeeper.ZooKeeperClient.unregisterStateChangeHandler
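The interplay of the three threads comes down to standard ReentrantReadWriteLock semantics: a writer cannot take the lock while a reader holds it, so a reader that blocks on an unrelated event while holding the read lock stalls every pending writer (and everything queued behind it). A self-contained sketch of that first link in the chain, with made-up names and no Kafka code involved:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {

    // Returns whether a "watcher" could take the write lock while a "request"
    // thread held the read lock and was blocked waiting for a response.
    // This is always false, which is the heart of the deadlock above.
    static boolean writerCanAcquireWhileReaderWaits() throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        CountDownLatch readerHoldsLock = new CountDownLatch(1);
        CountDownLatch responseArrived = new CountDownLatch(1);

        // Plays the role of handleRequests(): take the read lock, then block
        // waiting for an external event (a ZooKeeper response in the real code).
        Thread requestThread = new Thread(() -> {
            lock.readLock().lock();
            try {
                readerHoldsLock.countDown();
                responseArrived.await();
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            } finally {
                lock.readLock().unlock();
            }
        });
        requestThread.start();
        readerHoldsLock.await();

        // Plays the role of the watcher thread needing the write lock to
        // process SessionExpired: it cannot get it while the reader waits.
        boolean acquired = lock.writeLock().tryLock(100, TimeUnit.MILLISECONDS);
        if (acquired)
            lock.writeLock().unlock();

        // Unblock the reader so this demo itself terminates.
        responseArrived.countDown();
        requestThread.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("writer acquired: " + writerCanAcquireWhileReaderWaits());
    }
}
```

In the real bug, shutdown is a third party queued for the read lock behind the blocked writer, so the whole chain waits on a response that only processing the session expiry could short-circuit.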

Stack traces of the relevant threads:

{quote}
"kafka-scheduler-7" daemon prio=5 tid=0x7fade918d800 nid=0xd317 waiting on 
condition [0x7b371000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x0007e4c6e698> (a 
java.util.concurrent.CountDownLatch$Sync)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
    at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
    at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
    at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
    at 
kafka.zookeeper.ZooKeeperClient$$anonfun$handleRequests$1.apply(ZooKeeperClient.scala:146)
    at 
kafka.zookeeper.ZooKeeperClient$$anonfun$handleRequests$1.apply(ZooKeeperClient.scala:126)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
    at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:256)
    at 
kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:125)
    at 
kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1432)
    at 
kafka.zk.KafkaZkClient.kafka$zk$KafkaZkClient$$retryRequestUntilConnected(KafkaZkClient.scala:1425)
    at kafka.zk.KafkaZkClient.conditionalUpdatePath(KafkaZkClient.scala:583)
    at 
kafka.utils.ReplicationUtils$.updateLeaderAndIsr(ReplicationUtils.scala:33)
    at 
kafka.cluster.Partition.kafka$cluster$Partition$$updateIsr(Partition.scala:665)
    at kafka.cluster.Partition$$anonfun$4.apply$mcZ$sp(Partition.scala:509)
    at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:500)
    at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:500)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
    at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
    at kafka.cluster.Partition.maybeShrinkIsr(Partition.scala:499)
    at 
kafka.server.ReplicaManager$$anonfun$kafka$server$ReplicaManager$$maybeShrinkIsr$2.apply(ReplicaManager.scala:1335)
    at 
kafka.server.ReplicaManager$$anonfun$kafka$server$ReplicaManager$$maybeShrinkIsr$2.apply(ReplicaManager.scala:1335)

..

"Test worker-EventThread" daemon prio=5 tid=0x7fade90cf800 nid=0xef13 
waiting on condition [0x7a23f000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x000781847620> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
    at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
    at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
    at 
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:248)
    at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
    at 
kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$.process(ZooKeeperClient.scala:355)
    at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)

 

"ForkJoinPool-1-worker-11" daemon prio=5 tid=0x7fade9a83000 nid=0x17907 
waiting on condition [0x700011eaf000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  

Re: [DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2018-02-01 Thread Sönke Liebau
Hi Manikumar,

the current proposal indeed leaves out IPv6 addresses, as I was unsure
whether Kafka fully supports that yet, to be honest. But it would be
fairly easy to add these to the proposal - I'll update it over the
weekend.

Regarding KAFKA-5713, I simply listed it as related, since it is
similar in spirit, if not exact wording. Parts of that issue
(wildcards in hosts) would be covered by this KIP - just in a slightly
different way. Do we really need wildcard support in IP addresses if
we can specify ranges and subnets? I considered it, but only came up
with scenarios that seemed fairly academic to me, like allowing the
same host from multiple subnets (10.0.*.1), for example.

Allowing wildcards has the potential to make the code more complex,
depending on how we decide to implement this feature, hence I decided
to leave wildcards out for now.

What do you think?

Best regards,
Sönke

On Thu, Feb 1, 2018 at 10:14 AM, Manikumar  wrote:
> Hi,
>
> 1. Do we support IPv6 CIDR/ranges?
>
> 2. KAFKA-5713 is mentioned in Related JIRAs section. But there is no
> mention of wildcard support in the KIP.
>
>
> Thanks,
>
> On Thu, Feb 1, 2018 at 4:05 AM, Sönke Liebau <
> soenke.lie...@opencore.com.invalid> wrote:
>
>> Hey everybody,
>>
>> following a brief initial discussion a couple of days ago on this list
>> I'd like to get a discussion going on KIP-252, which would allow
>> specifying IP ranges and subnets for the --allow-host and --deny-host
>> parameters of the ACL tool.
>>
>> The KIP can be found at
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 252+-+Extend+ACLs+to+allow+filtering+based+on+ip+ranges+and+subnets
>>
>> Best regards,
>> Sönke
>>



-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
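For illustration only: matching an address against a CIDR block can be done on the raw address bytes, which makes IPv4 and IPv6 uniform. This is a sketch of the general technique discussed in the thread, not the implementation proposed in the KIP:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class CidrMatcher {

    // Returns true if addr lies inside the CIDR block, e.g. "10.0.0.0/24".
    // Works for IPv4 (4-byte) and IPv6 (16-byte) addresses alike.
    static boolean matches(String cidr, String addr) throws UnknownHostException {
        String[] parts = cidr.split("/");
        byte[] network = InetAddress.getByName(parts[0]).getAddress();
        byte[] candidate = InetAddress.getByName(addr).getAddress();
        int prefixLen = Integer.parseInt(parts[1]);

        if (network.length != candidate.length)
            return false; // an IPv4 block never matches an IPv6 address, and vice versa

        // Compare the whole bytes covered by the prefix...
        int fullBytes = prefixLen / 8;
        for (int i = 0; i < fullBytes; i++)
            if (network[i] != candidate[i])
                return false;

        // ...then the remaining high-order bits of the next byte, if any.
        int remainingBits = prefixLen % 8;
        if (remainingBits == 0)
            return true;
        int mask = (0xFF << (8 - remainingBits)) & 0xFF;
        return (network[fullBytes] & mask) == (candidate[fullBytes] & mask);
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(matches("10.0.0.0/24", "10.0.0.57"));     // true
        System.out.println(matches("10.0.0.0/24", "10.0.1.57"));     // false
        System.out.println(matches("2001:db8::/32", "2001:db8::1")); // true
    }
}
```

Note that InetAddress.getByName performs no DNS lookup for literal addresses, so a check like this stays purely local.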


[jira] [Created] (KAFKA-6516) KafkaProducer retries indefinitely to authenticate on SaslAuthenticationException

2018-02-01 Thread Edoardo Comar (JIRA)
Edoardo Comar created KAFKA-6516:


 Summary: KafkaProducer retries indefinitely to authenticate on 
SaslAuthenticationException
 Key: KAFKA-6516
 URL: https://issues.apache.org/jira/browse/KAFKA-6516
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.0.0
Reporter: Edoardo Comar


Even after https://issues.apache.org/jira/browse/KAFKA-5854, the producer's
(background) polling thread keeps retrying to authenticate if the future
returned by KafkaProducer.send is not resolved.

The current test

{{org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest.testProducerWithInvalidCredentials()}}

passes because it relies on the future being resolved.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


New release branch 1.1

2018-02-01 Thread Damian Guy
Hello Kafka developers and friends,

We now have a release branch for the 1.1 release (with 1.1.0 as the version).
Trunk has been bumped to 1.2.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release to
the next release.

From this point, most changes should go to trunk.
*Blockers (existing and new that we discover while testing the release)
will be double-committed.* Please discuss with your reviewer whether your
PR should go to trunk or to trunk+release so they can merge accordingly.

*Please help us test the release!*

Thanks!

Damian


Re: 1.1 release progress

2018-02-01 Thread Paolo Patierno
Hi Guozhang and Damian,

I have a couple of issues marked for 1.1.0, but the PRs haven't been reviewed
in a long time and both of them should be ready.

They are :

KAFKA-5919
KAFKA-5532

Maybe you can take a look at them ?

Thanks,
Paolo.

From: Guozhang Wang 
Sent: Saturday, January 27, 2018 4:15 AM
To: dev@kafka.apache.org
Subject: Re: 1.1 release progress

Thanks Damian,

I made a pass over the in-progress / open ones and moved some of them out
of 1.1.


Guozhang

On Fri, Jan 26, 2018 at 11:32 AM, Damian Guy  wrote:

> Hi,
>
> Just a quick updated on the 1.1 release. Feature freeze is on the 30th of
> January, so please ensure any major features have been merged and that any
> minor features have PRs by this time.
>
> We currently have 57 issues in progress and 117 todo:
> https://issues.apache.org/jira/projects/KAFKA/versions/12339769
> If people can start moving stuff out of 1.1 that would be great, otherwise
> anything that is todo will be moved to the next release.
>
> Thanks,
> Damian
>



--
-- Guozhang


Re: [DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2018-02-01 Thread Manikumar
Hi,

1. Do we support IPv6 CIDR/ranges?

2. KAFKA-5713 is mentioned in Related JIRAs section. But there is no
mention of wildcard support in the KIP.


Thanks,

On Thu, Feb 1, 2018 at 4:05 AM, Sönke Liebau <
soenke.lie...@opencore.com.invalid> wrote:

> Hey everybody,
>
> following a brief initial discussion a couple of days ago on this list
> I'd like to get a discussion going on KIP-252, which would allow
> specifying IP ranges and subnets for the --allow-host and --deny-host
> parameters of the ACL tool.
>
> The KIP can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 252+-+Extend+ACLs+to+allow+filtering+based+on+ip+ranges+and+subnets
>
> Best regards,
> Sönke
>


Re: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

2018-02-01 Thread Paolo Patierno
Hi Matthias,

just a question: what will be the timestamp "type" in the new message on the
wire?

Thanks,
Paolo.

From: Matthias J. Sax 
Sent: Wednesday, January 31, 2018 2:06 AM
To: dev@kafka.apache.org
Subject: [DISCUSS] KIP-251: Allow timestamp manipulation in Processor API

Hi,

I want to propose a new KIP for Kafka Streams that allows timestamp
manipulation at Processor API level.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-251%3A+Allow+timestamp+manipulation+in+Processor+API

Looking forward to your feedback.


-Matthias



Re: [DISCUSS] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-02-01 Thread James Cheng
Vahid,

Under rejected alternatives, we had decided that we did NOT want to do 
per-partition expiration, and instead we wait until the entire group is empty 
and then (after the right time has passed) expire the entire group at once.

I thought of one scenario that might benefit from per-partition expiration.

Let's say I have topics A, B, C, ... Z. So I have 26 topics, all of them
single-partition, so 26 partitions. Let's say I have MirrorMaker mirroring those
26 topics. The group will then have 26 committed offsets.

Let's say I then change the whitelist on mirrormaker so that it only mirrors 
topic Z, but I keep the same consumer group name. (I imagine that is a common 
thing to do?)

With the proposed design for this KIP, the committed offsets for topics A 
through Y will stay around as long as this mirroring group name exists.

In the current implementation that already exists (prior to this KIP), I believe
that committed offsets for topics A through Y will expire.

How much do we care about this case?

-James
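The group-level rule the KIP proposes (and that the scenario above probes) reduces to a simple predicate: offsets survive while the group has members, and the retention clock starts only once the group turns Empty. A sketch with made-up names, not actual broker code:

```java
public class GroupOffsetExpiry {

    // Group offsets expire only after the group has been Empty for at least
    // the broker-wide retention (offsets.retention.minutes). While the group
    // has members, nothing expires -- even offsets for topics the group no
    // longer consumes, which is exactly the mirroring case described above.
    static boolean shouldExpire(boolean groupIsEmpty, long emptySinceMs,
                                long nowMs, long retentionMs) {
        return groupIsEmpty && (nowMs - emptySinceMs) >= retentionMs;
    }
}
```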

> On Jan 23, 2018, at 11:44 PM, Jeff Widman  wrote:
> 
> Bumping this as I'd like to see it land...
> 
> It's one of the "features" that tends to catch Kafka n00bs unawares and
> typically results in message skippage/loss, vs the proposed solution is
> much more intuitive behavior.
> 
> Plus it's more wire efficient because consumers no longer need to commit
> offsets for partitions that have no new messages just to keep those offsets
> alive.
> 
> On Fri, Jan 12, 2018 at 10:21 AM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
> 
>> There has been no further discussion on this KIP for about two months.
>> So I thought I'd provide the scoop hoping it would spark additional
>> feedback and move the KIP forward.
>> 
>> The KIP proposes a method to preserve group offsets as long as the group
>> is not in Empty state (even when offsets are committed very rarely), and
>> start the offset expiration of the group as soon as the group becomes
>> Empty.
>> It suggests dropping the `retention_time` field from the `OffsetCommit`
>> request and, instead, enforcing it via the broker config
>> `offsets.retention.minutes` for all groups. In other words, all groups
>> will have the same retention time.
>> The KIP presumes that this global retention config would suffice common
>> use cases and does not lead to, e.g., unmanageable offset cache size (for
>> groups that don't need to stay around that long). It suggests opening
>> another KIP if this global retention setting proves to be problematic in
>> the future. It was suggested earlier in the discussion thread that the KIP
>> should propose a per-group retention config to circumvent this risk.
>> 
>> I look forward to hearing your thoughts. Thanks!
>> 
>> --Vahid
>> 
>> 
>> 
>> 
>> From:   "Vahid S Hashemian" 
>> To: dev 
>> Date:   10/18/2017 04:45 PM
>> Subject:[DISCUSS] KIP-211: Revise Expiration Semantics of Consumer
>> Group Offsets
>> 
>> 
>> 
>> Hi all,
>> 
>> I created a KIP to address the group offset expiration issue reported in
>> KAFKA-4682:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
>> 
>> 
>> Your feedback is welcome!
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> 
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><



[jira] [Created] (KAFKA-6515) Add toString() method to kafka connect Field class

2018-02-01 Thread JIRA
Bartłomiej Tartanus created KAFKA-6515:
--

 Summary: Add toString() method to kafka connect Field class
 Key: KAFKA-6515
 URL: https://issues.apache.org/jira/browse/KAFKA-6515
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Bartłomiej Tartanus


Currently testing is really painful:
{code:java}
org.apache.kafka.connect.data.Field@1d51df1f was not equal to 
org.apache.kafka.connect.data.Field@c0d62cd8{code}
 

A toString() method would fix this, so please add one. :)
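For illustration, here is a minimal stand-in class showing the kind of override requested. The fields are assumptions for the sketch, not the actual org.apache.kafka.connect.data.Field API:

```java
public final class Field {
    private final String name;
    private final int index;

    public Field(String name, int index) {
        this.name = name;
        this.index = index;
    }

    // The requested improvement: a readable toString() so that a failed
    // equality assertion prints the field's contents instead of an identity
    // hash code like Field@1d51df1f.
    @Override
    public String toString() {
        return "Field{name=" + name + ", index=" + index + "}";
    }
}
```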



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)