Re: Unique users per calendar month using kafka streams

2019-11-21 Thread Matthias J. Sax
While Kafka Streams does not support monthly windows out of the box, it
is possible to define your own custom windows.

You can find an example that defines "daily windows", including timezone
support, on GitHub:
https://github.com/confluentinc/kafka-streams-examples/blob/5.3.1-post/src/test/java/io/confluent/examples/streams/window/DailyTimeWindows.java

This should help you to define a custom window based on calendar months.
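For illustration, the core of such a custom window is computing the month
boundaries for an event timestamp. Below is a minimal sketch using only
java.time (class and method names are my own, not taken from the linked
example; a real implementation would extend Kafka Streams' `Windows` the
way `DailyTimeWindows` does):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class MonthWindowBounds {

    // Start of the calendar month containing the given timestamp,
    // as epoch millis in the given time zone.
    static long monthStart(long epochMillis, ZoneId zone) {
        ZonedDateTime t = Instant.ofEpochMilli(epochMillis).atZone(zone);
        return t.withDayOfMonth(1).toLocalDate().atStartOfDay(zone)
                .toInstant().toEpochMilli();
    }

    // Exclusive end of the window: start of the next month.
    static long monthEnd(long epochMillis, ZoneId zone) {
        ZonedDateTime t = Instant.ofEpochMilli(epochMillis).atZone(zone);
        return t.withDayOfMonth(1).plusMonths(1).toLocalDate().atStartOfDay(zone)
                .toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        ZoneId utc = ZoneId.of("UTC");
        long ts = Instant.parse("2019-11-21T12:00:00Z").toEpochMilli();
        System.out.println(Instant.ofEpochMilli(monthStart(ts, utc))); // 2019-11-01T00:00:00Z
        System.out.println(Instant.ofEpochMilli(monthEnd(ts, utc)));   // 2019-12-01T00:00:00Z
    }
}
```

The `windowsFor(timestamp)` method of the custom `Windows` subclass would
then return a single window spanning `[monthStart, monthEnd)` computed this
way.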

Hope this helps.


-Matthias

On 11/21/19 3:51 AM, claude.war...@wipro.com.INVALID wrote:
> A different approach would be to integrate the Apache DataSketches  
> (https://datasketches.apache.org/) which have mathematical proofs behind 
> them.  Using a DataSketch you can capture unique members for any given time 
> period in a very small data object and be able to aggregate them (even though 
> unique counts are not in and of themselves aggregatable).  For example you 
> could take the monthly measures and calculate the unique users per quarter or 
> for the entire year very quickly.  Generally orders of magnitude faster.
> 
> 
> From: Bruno Cadonna 
> Sent: Thursday, November 21, 2019 11:37
> To: Users 
> Subject: Re: Unique users per calendar month using kafka streams
> 
> ** This mail has been sent from an external source. Treat hyperlinks and 
> attachments in this email with caution**
> 
> Hi Chintan,
> 
> You cannot specify time windows based on a calendar object like months.
> 
> In the following, I suppose the keys of your records are user IDs. You
> could extract the months from the timestamps of the events and add
> them to the key of your records. Then you can group the records by key
> and count them. Be aware that your state that stores the counts will
> grow indefinitely and therefore you need to take care how to remove
> counts you do not need anymore from your local state.
> 
> Take a look at the following example of how to deduplicate records
> 
> https://github.com/confluentinc/kafka-streams-examples/blob/5.3.1-post/src/test/java/io/confluent/examples/streams/EventDeduplicationLambdaIntegrationTest.java
> 
> It shows how to avoid indefinite growing of local store in such cases.
> Try to adapt it to solve your problem by extending the key with the
> month and computing the count instead of looking for duplicates.
> 
> Best,
> Bruno
> 
> On Thu, Nov 21, 2019 at 10:28 AM chintan mavawala
>  wrote:
>>
>> Hi,
>>
>> We have a use case to capture number of unique users per month. We planned
>> to use windowing concept for this.
>>
>> For example, group events from input topic by user name and later sub group
>> them based on time window. However i don't see how i can sub group the
>> results based on particular month, say January. The only way is sub group
>> based on time.
>>
>> Any pointers would be appreciated.
>>
>> Regards,
>> Chintan
> The information contained in this electronic message and any attachments to 
> this message are intended for the exclusive use of the addressee(s) and may 
> contain proprietary, confidential or privileged information. If you are not 
> the intended recipient, you should not disseminate, distribute or copy this 
> e-mail. Please notify the sender immediately and destroy all copies of this 
> message and any attachments. WARNING: Computer viruses can be transmitted via 
> email. The recipient should check this email and any attachments for the 
> presence of viruses. The company accepts no liability for any damage caused 
> by any virus transmitted by this email. www.wipro.com
> 





Re: Magic v1 Does not Support Record Headers

2019-11-21 Thread Matthias J. Sax
It's going to be hard to find out which client it is. This is a known
issue in general, and there is a KIP that addresses it:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers

The root cause for the error you see seems to be that the client tries
to write messages including record headers. Record headers were added
in 0.11.0.0, so your brokers do support them.

However, it seems that the topic in question is still on message format
0.10, which does not support record headers. Note that broker version and
message format are independent of each other. You can see from the stack
trace that the broker tries to down-convert the message format (I
assume from 0.11 to 0.10; this down-conversion would succeed if record
headers were not used).

> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:245)

Thus, the client must either stop using record headers, or you need to
upgrade the message format to 0.11. See the docs for details about
upgrading the message format.
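For the second option, a hedged sketch of the per-topic change using the
0.11-era tooling (the ZooKeeper connection string and topic name are
placeholders, not taken from this thread):

```shell
# Raise the per-topic message format to 0.11 so record headers survive
# down-conversion; verify broker compatibility before applying.
bin/kafka-configs.sh --zookeeper zk:2181 \
  --entity-type topics --entity-name my_topic \
  --alter --add-config message.format.version=0.11.0
```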


Hope that helps.


-Matthias


On 11/21/19 12:38 AM, Shalom Sagges wrote:
> Hi Experts,
> 
> I use Kafka 0.11.2
> 
> I have an issue where the Kafka logs are bombarded with the following error:
> ERROR [KafkaApi-14733] Error when handling request
> {replica_id=-1,max_wait_time=0,min_bytes=0,max_bytes=2147483647,topics=[{topic=my_topic,partitions=[{partition=22,fetch_offset=1297798,max_bytes=1048576}]}]}
> (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: *Magic v1 does not support record
> headers*
> at
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:385)
> at
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:568)
> at
> org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:117)
> at
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:98)
> at
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:245)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$5.apply(KafkaApis.scala:523)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$5.apply(KafkaApis.scala:521)
> at scala.Option.map(Option.scala:146)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:521)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:511)
> at scala.Option.flatMap(Option.scala:171)
> at
> kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:511)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:559)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:558)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at
> kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:558)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:579)
> at
> kafka.server.ClientQuotaManager.recordAndThrottleOnQuotaViolation(ClientQuotaManager.scala:196)
> at
> kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2014)
> at
> kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:578)
> at
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:598)
> at
> kafka.server.ClientQuotaManager.recordAndThrottleOnQuotaViolation(ClientQuotaManager.scala:196)
> at
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:188)
> at
> kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:597)
> at
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:614)
> at
> kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:614)
> at
> kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:640)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:606)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:98)
> at
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:66)
> at java.lang.Thread.run(Thread.java:745)
> 
> 
> I understand this is probably related to a client that uses a client
> version that isn't compatible with 0.11, but I don't know how to pinpoint
> the client since the topic is used by multiple consumers.

Re: Broker shutdown slowdown between 1.1.0 and 2.3.1

2019-11-21 Thread Nicholas Feinberg
On Thu, Nov 21, 2019 at 4:25 PM Peter Bukowinski  wrote:

> How many partitions are on each of your brokers? That’s a key factor
> affecting shutdown and startup time.
>

The test hosts run about 384 partitions each (7 topics * 128 partitions
each * 3x replication / 7 brokers). The largest prod cluster has about 1344
partitions/broker; the smallest and slowest has 2560.


> I’m currently doing a rolling restart of a 150-broker cluster running
> kafka 2.3.1. The cluster is very busy (~500k msg/sec, ~1GB/sec). Each
> broker has about 65 partitions. Each broker restart cycle (stop/start,
> rejoin ISR) takes about 90 seconds.
>

In our largest prod cluster (16 d2.8xlarge broker cluster, 200k msg/s, 300
MB/s), our restart cycles take about 3 minutes on 1.1.0 (counting
ISR-rejoin time) and about 30 minutes on 2.3.1. The only other change we
made between versions was increasing heap size from 8G to 16G.

Thanks for the response!


>
> > On Nov 21, 2019, at 3:52 PM, Nicholas Feinberg 
> wrote:
> >
> > I've been looking at upgrading my cluster from 1.1.0 to 2.3.1. While
> > testing, I've noticed that shutting brokers down seems to take
> consistently
> > longer on 2.3.1. Specifically, the process of 'creating snapshots' seems
> to
> > take several times longer than it did on 1.1.0. On a small testing setup,
> > the time needed to create snapshots and shut down goes from ~20s to
> ~120s;
> > with production-scale data, it goes from ~2min to ~30min.
> >
> > To allow myself to roll back, I'm still using the 1.1 versions of the
> > inter-broker protocol and the message format - is it possible that those
> > could slow things down in 2.3.1? If not, any ideas what else could be at
> > fault, or what I could do to narrow down the issue further?
> >
> > Thanks!
> > -Nicholas
>
>


Re: Broker shutdown slowdown between 1.1.0 and 2.3.1

2019-11-21 Thread Peter Bukowinski
How many partitions are on each of your brokers? That’s a key factor affecting 
shutdown and startup time. Even if it is large, though, I’ve seen a notable 
reduction in shutdown and startup times as I’ve moved from kafka 0.11 to 1.x to 
2.x.

I’m currently doing a rolling restart of a 150-broker cluster running kafka 
2.3.1. The cluster is very busy (~500k msg/sec, ~1GB/sec). Each broker has 
about 65 partitions. Each broker restart cycle (stop/start, rejoin ISR) takes 
about 90 seconds.


> On Nov 21, 2019, at 3:52 PM, Nicholas Feinberg  wrote:
> 
> I've been looking at upgrading my cluster from 1.1.0 to 2.3.1. While
> testing, I've noticed that shutting brokers down seems to take consistently
> longer on 2.3.1. Specifically, the process of 'creating snapshots' seems to
> take several times longer than it did on 1.1.0. On a small testing setup,
> the time needed to create snapshots and shut down goes from ~20s to ~120s;
> with production-scale data, it goes from ~2min to ~30min.
> 
> To allow myself to roll back, I'm still using the 1.1 versions of the
> inter-broker protocol and the message format - is it possible that those
> could slow things down in 2.3.1? If not, any ideas what else could be at
> fault, or what I could do to narrow down the issue further?
> 
> Thanks!
> -Nicholas



Broker shutdown slowdown between 1.1.0 and 2.3.1

2019-11-21 Thread Nicholas Feinberg
I've been looking at upgrading my cluster from 1.1.0 to 2.3.1. While
testing, I've noticed that shutting brokers down seems to take consistently
longer on 2.3.1. Specifically, the process of 'creating snapshots' seems to
take several times longer than it did on 1.1.0. On a small testing setup,
the time needed to create snapshots and shut down goes from ~20s to ~120s;
with production-scale data, it goes from ~2min to ~30min.

To allow myself to roll back, I'm still using the 1.1 versions of the
inter-broker protocol and the message format - is it possible that those
could slow things down in 2.3.1? If not, any ideas what else could be at
fault, or what I could do to narrow down the issue further?

Thanks!
-Nicholas


Kafka upgrade from 0.10 to 2.3.x

2019-11-21 Thread Vitalii Stoianov
Hi All,

All the docs that I was able to find describe the rolling upgrade.
But I didn't find any docs that describe how to perform a non-rolling
upgrade.

So if the system can afford downtime, how should Kafka be upgraded from
version 0.10.0 to 2.3.x in this case?
Is it still required to do a few restarts, or can we just upgrade
everything in one go?

For example:
0. Stop producers -> consumer -> brokers.
1. Upgrade brokers, clients.
2. Upgrade brokers, inter.broker.protocol.version.
3. Upgrade brokers, log.message.format.version.
4. Start brokers.
5. Start consumers.
6. Start producers.
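For illustration, steps 2 and 3 above correspond to broker settings along
these lines in server.properties (the values are my assumptions for a
0.10.0 -> 2.3.x path, not from any official upgrade guide):

```properties
# Step 2: after all brokers are running 2.3.x binaries
inter.broker.protocol.version=2.3

# Step 3: raise only once all consumers understand the new format;
# until then this would stay at 0.10.0
log.message.format.version=2.3
```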

For an upgrade from 0.10.0 to 2.3.x, the docs state that the message
format has changed since version 0.10.0:
https://kafka.apache.org/22/documentation.html#messageset (the old format
used by 0.10.0). As I understand it, 0.11.0 introduced a new format that
is still used in 2.3.x.

What will happen to the Kafka brokers' data logs on disk after the
message format change?
And if the data on disk is not changed when the message format changes:
what will happen if a new consumer receives a message in the old message
format because it was not read before the upgrade?

Regards,
Vitalii.


ReplicaMap: Java ConcurrentMap replicated over Kafka

2019-11-21 Thread Sergi Vladykin
Hi,

Maybe this library will be useful for someone:
https://github.com/svladykin/ReplicaMap

Future plans:
- Optimistic transactions: update multiple keys in a single TX
- Sharding: distribute the partitions across multiple clients

Sergi


Re: Unique users per calendar month using kafka streams

2019-11-21 Thread claude.war...@wipro.com.INVALID
A different approach would be to integrate the Apache DataSketches  
(https://datasketches.apache.org/) which have mathematical proofs behind them.  
Using a DataSketch you can capture unique members for any given time period in 
a very small data object and be able to aggregate them (even though unique 
counts are not in and of themselves aggregatable).  For example you could take 
the monthly measures and calculate the unique users per quarter or for the 
entire year very quickly.  Generally orders of magnitude faster.
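To make the mergeability point concrete: exact unique counts do not add up,
but set-like summaries (of which sketches are a compact approximation) merge
cleanly. A toy illustration with plain `HashSet`s standing in for sketches
(a real DataSketches HLL union behaves analogously, in near-constant space):

```java
import java.util.HashSet;
import java.util.Set;

public class MergeableUniques {

    // Naively adding per-month unique counts double-counts users
    // who appear in both months.
    static int naiveSum(Set<String> a, Set<String> b) {
        return a.size() + b.size();
    }

    // Merging the summaries first (here: set union) gives the true
    // number of distinct users, just as a sketch union would.
    static Set<String> union(Set<String> a, Set<String> b) {
        Set<String> u = new HashSet<>(a);
        u.addAll(b);
        return u;
    }

    public static void main(String[] args) {
        Set<String> october = Set.of("alice", "bob");
        Set<String> november = Set.of("bob", "carol");
        System.out.println(naiveSum(october, november));      // 4
        System.out.println(union(october, november).size());  // 3
    }
}
```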


From: Bruno Cadonna 
Sent: Thursday, November 21, 2019 11:37
To: Users 
Subject: Re: Unique users per calendar month using kafka streams


Hi Chintan,

You cannot specify time windows based on a calendar object like months.

In the following, I suppose the keys of your records are user IDs. You
could extract the months from the timestamps of the events and add
them to the key of your records. Then you can group the records by key
and count them. Be aware that your state that stores the counts will
grow indefinitely and therefore you need to take care how to remove
counts you do not need anymore from your local state.

Take a look at the following example of how to deduplicate records

https://github.com/confluentinc/kafka-streams-examples/blob/5.3.1-post/src/test/java/io/confluent/examples/streams/EventDeduplicationLambdaIntegrationTest.java

It shows how to avoid indefinite growing of local store in such cases.
Try to adapt it to solve your problem by extending the key with the
month and computing the count instead of looking for duplicates.

Best,
Bruno

On Thu, Nov 21, 2019 at 10:28 AM chintan mavawala
 wrote:
>
> Hi,
>
> We have a use case to capture number of unique users per month. We planned
> to use windowing concept for this.
>
> For example, group events from input topic by user name and later sub group
> them based on time window. However i don't see how i can sub group the
> results based on particular month, say January. The only way is sub group
> based on time.
>
> Any pointers would be appreciated.
>
> Regards,
> Chintan


Re: Unique users per calendar month using kafka streams

2019-11-21 Thread Bruno Cadonna
Hi Chintan,

You cannot specify time windows based on a calendar object like months.

In the following, I suppose the keys of your records are user IDs. You
could extract the months from the timestamps of the events and add
them to the key of your records. Then you can group the records by key
and count them. Be aware that your state that stores the counts will
grow indefinitely and therefore you need to take care how to remove
counts you do not need anymore from your local state.
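For illustration, the key extension could be as simple as appending a
`yyyy-MM` bucket to the user ID (the helper name and separator are my own;
in the Streams topology this would run in a `selectKey`/`map` step before
`groupByKey().count()`):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class MonthKey {
    private static final DateTimeFormatter MONTH =
        DateTimeFormatter.ofPattern("yyyy-MM").withZone(ZoneOffset.UTC);

    // Extend a user-ID key with the event's month so that counting by
    // key yields per-user-per-month counts.
    static String withMonth(String userId, long eventTimestampMillis) {
        return userId + "@" + MONTH.format(Instant.ofEpochMilli(eventTimestampMillis));
    }

    public static void main(String[] args) {
        long ts = Instant.parse("2019-11-21T10:28:00Z").toEpochMilli();
        System.out.println(withMonth("user-42", ts)); // user-42@2019-11
    }
}
```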

Take a look at the following example of how to deduplicate records

https://github.com/confluentinc/kafka-streams-examples/blob/5.3.1-post/src/test/java/io/confluent/examples/streams/EventDeduplicationLambdaIntegrationTest.java

It shows how to avoid indefinite growing of local store in such cases.
Try to adapt it to solve your problem by extending the key with the
month and computing the count instead of looking for duplicates.

Best,
Bruno

On Thu, Nov 21, 2019 at 10:28 AM chintan mavawala
 wrote:
>
> Hi,
>
> We have a use case to capture number of unique users per month. We planned
> to use windowing concept for this.
>
> For example, group events from input topic by user name and later sub group
> them based on time window. However i don't see how i can sub group the
> results based on particular month, say January. The only way is sub group
> based on time.
>
> Any pointers would be appreciated.
>
> Regards,
> Chintan


Unique users per calendar month using kafka streams

2019-11-21 Thread chintan mavawala
Hi,

We have a use case to capture number of unique users per month. We planned
to use windowing concept for this.

For example, group events from input topic by user name and later sub group
them based on time window. However i don't see how i can sub group the
results based on particular month, say January. The only way is sub group
based on time.

Any pointers would be appreciated.

Regards,
Chintan


Re: Last Stable Offset (LSO) stuck for specific topic partition after Broker issues

2019-11-21 Thread Pieter Hameete
Hello,

A final update on this. I found that there is an open transaction causing the 
LSO to be stuck at offset 10794778. Similar to this stackoverflow issue:

https://stackoverflow.com/questions/56643907/manually-close-old-kafka-transaction

Despite using the same pool of transactional IDs this old transaction was not 
aborted after the brokers and client apps came back online.

Is there any way to abort this defective transaction? Or is the only way to 
migrate all the data from this topic to a new one by using a read_uncommitted 
reader?

Best,

Pieter

Van: Pieter Hameete 
Verzonden: woensdag 20 november 2019 16:33
Aan: Ashutosh singh 
CC: users@kafka.apache.org 
Onderwerp: Re: Last Stable Offset (LSO) stuck for specific topic partition 
after Broker issues

Hi Ashu, others,

I have tested with the latest kafkacat with librdkafka 1.2.2 which can also do 
transactional reading.

Reading the partition with offset reset from beginning will read until offset 
10794778 (this is the offset of the LSO that is stuck)

Reading the partition from any offset after 10794778 (so any specific offset 
greater than 10794778, or auto offset reset to latest) will not read anything 
at all.

Reading in uncommitted mode will read properly from any offset.

I think my only solution would be to somehow get the LSO on the broker side to 
increase again. There's nothing I can do on the consumer side to get this 
working again while keeping read mode read_committed.
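For reference, the kafkacat invocation that reads past the stuck LSO looks
roughly like this (broker address, topic, and partition are placeholders;
`isolation.level` is the librdkafka consumer property passed through with
`-X`):

```shell
# Consume the partition from the beginning in read_uncommitted mode,
# ignoring the stuck LSO (kafkacat with librdkafka >= 1.2).
kafkacat -b broker:9092 -C -t my_topic -p 0 -o beginning \
  -X isolation.level=read_uncommitted
```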

Best,

Pieter

Van: Ashutosh singh 
Verzonden: woensdag 20 november 2019 15:15
Aan: Pieter Hameete 
CC: users@kafka.apache.org 
Onderwerp: Re: Last Stable Offset (LSO) stuck for specific topic partition 
after Broker issues

Alright, got that.
What about resetting or changing the consumer offset? You can try to change 
it to some previous offset and restart the consumer. The consumer may have 
to do duplicate processing, but it should work.

On Wed, Nov 20, 2019 at 7:18 PM Pieter Hameete 
mailto:pieter.hame...@blockbax.com>> wrote:
Hi Ashu,

thanks for the tip. We have tried restarting the consumer, but that did not 
help. All read_committed consumers for this partition (we have multiple) have 
the same issue.

The partition already had different leaders when we performed a 
rolling restart of the brokers. All brokers give the same stuck LSO, so I 
don't think deleting the partition will help? It would then restore the 
partition from another in-sync replica, but that also has the incorrect LSO?

Best,

Pieter

Van: Ashutosh singh mailto:getas...@gmail.com>>
Verzonden: woensdag 20 november 2019 14:43
Aan: users@kafka.apache.org 
mailto:users@kafka.apache.org>>
Onderwerp: Re: Last Stable Offset (LSO) stuck for specific topic partition 
after Broker issues

Hello Pieter,

We had similar issue.

Did you try restarting your consumer?  If that doesn't fix it, you can
try deleting that particular topic partition from the broker and restarting
the broker so that it will get back in sync.  Please make sure that you have
an in-sync replica before deleting the partition.

Thanks
Ashu


On Wed, Nov 20, 2019 at 6:57 PM Pieter Hameete 
mailto:pieter.hame...@blockbax.com>>
wrote:

> Hello,
>
> after having some Broker issues (too many open files) we managed to
> recover our Brokers, but read_committed consumers are stuck for a specific
> topic partition. It seems like the LSO is stuck at a specific offset. The
> transactional producer for the topic partition is working without errors so
> the latest offset is incrementing correctly and so is transactional
> producing.
>
> What could be wrong here? And how can we get this specific LSO to be
> increment again?
>
> Thank you in advance for any advice.
>
> Best,
>
> Pieter
>


--
Thanx & Regard
Ashutosh Singh
08151945559





Magic v1 Does not Support Record Headers

2019-11-21 Thread Shalom Sagges
Hi Experts,

I use Kafka 0.11.2

I have an issue where the Kafka logs are bombarded with the following error:
ERROR [KafkaApi-14733] Error when handling request
{replica_id=-1,max_wait_time=0,min_bytes=0,max_bytes=2147483647,topics=[{topic=my_topic,partitions=[{partition=22,fetch_offset=1297798,max_bytes=1048576}]}]}
(kafka.server.KafkaApis)
java.lang.IllegalArgumentException: *Magic v1 does not support record
headers*
at
org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:385)
at
org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:568)
at
org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:117)
at
org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:98)
at
org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:245)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$5.apply(KafkaApis.scala:523)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$5.apply(KafkaApis.scala:521)
at scala.Option.map(Option.scala:146)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:521)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:511)
at scala.Option.flatMap(Option.scala:171)
at
kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:511)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:559)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:558)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at
kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:558)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:579)
at
kafka.server.ClientQuotaManager.recordAndThrottleOnQuotaViolation(ClientQuotaManager.scala:196)
at
kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2014)
at
kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:578)
at
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:598)
at
kafka.server.ClientQuotaManager.recordAndThrottleOnQuotaViolation(ClientQuotaManager.scala:196)
at
kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:188)
at
kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:597)
at
kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:614)
at
kafka.server.KafkaApis$$anonfun$handleFetchRequest$1.apply(KafkaApis.scala:614)
at
kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:640)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:606)
at kafka.server.KafkaApis.handle(KafkaApis.scala:98)
at
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:66)
at java.lang.Thread.run(Thread.java:745)


I understand this is probably related to a client that uses a client
version that isn't compatible with 0.11, but I don't know how to pinpoint
the client since the topic is used by multiple consumers.
Any idea what this error actually means and how I can find the culprit?
I can't read anything in the logs besides this error  :-S

Thanks a lot!