Prior to KIP-794 it was possible to create a custom Partitioner that
delegated to the DefaultPartitioner. DefaultPartitioner has since been
deprecated, so a custom Partitioner can now only delegate to
BuiltInPartitioner.partitionForKey, which does not handle a non-keyed
message. Hence there is now no mechanism for a custom
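For context, a minimal self-contained sketch of the keyed path that remains available: BuiltInPartitioner.partitionForKey hashes the serialized key with murmur2, makes the result non-negative, and takes it modulo the partition count. The murmur2 hash is reimplemented below from the client's published algorithm so the example runs standalone; class and method names here are illustrative, not part of any Kafka API.

```java
public class KeyedPartitionSketch {
    // Illustrative reimplementation of the murmur2 hash the Java client uses
    // for keyed records (see org.apache.kafka.common.utils.Utils.murmur2).
    static int murmur2(byte[] data) {
        final int m = 0x5bd1e995, r = 24;
        int h = 0x9747b28c ^ data.length;
        int i = 0;
        for (; i + 4 <= data.length; i += 4) {
            int k = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            k *= m; k ^= k >>> r; k *= m;
            h *= m; h ^= k;
        }
        switch (data.length - i) {          // tail bytes; fall-through intended
            case 3: h ^= (data[i + 2] & 0xff) << 16;
            case 2: h ^= (data[i + 1] & 0xff) << 8;
            case 1: h ^= data[i] & 0xff; h *= m;
        }
        h ^= h >>> 13; h *= m; h ^= h >>> 15;
        return h;
    }

    // What BuiltInPartitioner.partitionForKey computes for a keyed record:
    // a non-negative hash modulo the topic's partition count.
    static int partitionForKey(byte[] serializedKey, int numPartitions) {
        return (murmur2(serializedKey) & 0x7fffffff) % numPartitions;
    }

    // A custom Partitioner can delegate keyed records here, but for a null
    // key there is no public built-in left to delegate to -- which is the
    // gap described above.
    public static void main(String[] args) {
        System.out.println(partitionForKey("MyKey".getBytes(), 6));
    }
}
```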
coming soon.
Thank you.
Luke
On Fri, Apr 29, 2022 at 8:53 AM James Olsen <ja...@inaseq.com> wrote:
Luke,
Do you know if 2.8.2 will be released anytime soon? It appears to be waiting
on https://issues.apache.org/jira/browse/KAFKA-13805 for which fixes are
available.
Regards, James.
This looks like the known issue KAFKA-13636
<https://issues.apache.org/jira/browse/KAFKA-13636>, which should be fixed
in the newer version.
Thank you.
Luke
On Mon, Apr 11, 2022 at 9:18 AM James Olsen <ja...@inaseq.com> wrote:
I recently observed the following series of events for a particular partition
(MyTopic-6):
2022-03-18 03:18:28,562 INFO
[org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
'executor-thread-2' [Consumer clientId=consumer-MyTopicService-group-3,
groupId=MyTopicService-group]
not occur with 2.5.1 and 2.7.0 Clients. This makes me
suspect that https://issues.apache.org/jira/browse/KAFKA-10793 introduced this
issue.
Regards, James.
On 24/11/2021, at 14:35, James Olsen <ja...@inaseq.com> wrote:
Luke,
We did not upgrade to resolve the issue. We simply rest
10:27 AM James Olsen <ja...@inaseq.com> wrote:
We had a 2.5.1 Broker/Client system running for some time with regular rolling
OS upgrades to the Brokers without any problems. A while ago we upgraded both
Broker and Clients to 2.7.1 and now on the first rolling OS upgrade to the
2.7.1 Brokers we encountered some Consumer issues. We have a
If it's of any value to you, we use the following test to check that we have a
well balanced set of consumer group ids. Note that in the code,
ConsumerGroups.ALL_GROUPS is simply a list of all our consumer group ids.
Spreading the offset commit load across these partitions evenly helps in
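As a sketch of what such a test can check: the broker assigns a group's offset commits to the __consumer_offsets partition given by the group id's hash, made non-negative, modulo offsets.topic.num.partitions (50 by default). The following is a self-contained approximation; the group ids are hypothetical stand-ins for ConsumerGroups.ALL_GROUPS.

```java
import java.util.Arrays;
import java.util.List;

public class OffsetsPartitionCheck {
    // Mirrors the broker-side mapping of groupId -> __consumer_offsets
    // partition: non-negative hashCode modulo the offsets partition count.
    static int partitionFor(String groupId, int numOffsetsPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
    }

    public static void main(String[] args) {
        // Hypothetical group ids standing in for ConsumerGroups.ALL_GROUPS.
        List<String> groups = Arrays.asList(
                "orders-group", "ledger-group", "billing-group");
        int[] load = new int[50];
        for (String g : groups)
            load[partitionFor(g, 50)]++;
        // A balance test would assert no single partition carries too many
        // groups; here we just print the occupied partitions.
        for (int p = 0; p < 50; p++)
            if (load[p] > 0)
                System.out.println("__consumer_offsets-" + p + ": " + load[p] + " group(s)");
    }
}
```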
these networking issues has to do with
one of the brokers being unavailable -- something that is not supposed to
happen.
Thanks,
-- Ricardo
On 6/18/20 9:18 PM, James Olsen wrote:
We are using AWS MSK with Kafka 2.4.1 (and same client version), 3 Brokers. We
are seeing fairly frequent consumer offset commit fails as shown in the example
logs below. Things continue working as they are all retriable, however I would
like to improve this situation.
The issue occurs most
org.apache.kafka.clients.producer.ProducerConfig
org.apache.kafka.clients.consumer.ConsumerConfig
> On 3/04/2020, at 04:30, 一直以来 <279377...@qq.com> wrote:
>
> Properties props = new Properties();
> props.put("bootstrap.servers", "localhost:9092");
> props.put("acks", "all");
te.fr>> wrote:
There are serious latency issues when mixing different client and server versions
Could you be more specific? A link to any issue?
Thanks in advance!
Christophe
____
From: James Olsen <ja...@inaseq.com>
Sent: Friday, 27 March 202
Resolved by downgrading the Client to 2.2.2 and implementing an application-level
heartbeat on every Producer to avoid the UNKNOWN_PRODUCER_ID issue.
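We haven't posted the implementation, but the idea can be sketched as follows. `send` is a stand-in for a real `producer.send(new ProducerRecord<>(heartbeatTopic, payload))`; all names here are illustrative. The point is simply to keep each producer's id warm on the broker by producing a tiny record at a fixed interval well inside the broker's producer-id retention window.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class ProducerHeartbeat {
    // Task that pushes one tiny record through the producer so the broker
    // keeps refreshing the producer-id state instead of expiring it.
    static Runnable heartbeatTask(Consumer<String> send) {
        return () -> send.accept("hb"); // payload content is irrelevant
    }

    // Schedule the heartbeat at a period well inside the broker's
    // producer-id retention window.
    static ScheduledExecutorService start(Runnable task, long periodMs) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(task, periodMs, periodMs, TimeUnit.MILLISECONDS);
        return timer;
    }

    public static void main(String[] args) {
        Runnable hb = heartbeatTask(payload -> System.out.println("would send: " + payload));
        hb.run(); // in production this runs on the scheduler instead
    }
}
```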
> On 9/03/2020, at 16:08, James Olsen wrote:
>
> P.S. I guess the big question is what is the best way to handle or avoid
> UNKNOWN_PROD
Also check your Kafka Client and Server versions. There are serious latency
issues when mixing different client and server versions IF your consumers
handle multiple partitions.
> On 27/03/2020, at 12:59, Chris Larsen wrote:
>
> Hi Vidhya,
>
> How many tasks are you running against the
P.S. I guess the big question is what is the best way to handle or avoid
UNKNOWN_PRODUCER_ID when running versions that don’t include KAFKA-7190 /
KAFKA-8710 ?
We are using non-transactional idempotent Producers.
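For reference, a minimal sketch of that configuration. The config keys are the standard producer ones; the bootstrap address is illustrative. The distinguishing feature is enable.idempotence with no transactional.id, which is exactly the combination where an idle producer's id can be expired broker-side on versions predating KAFKA-7190 / KAFKA-8710.

```java
import java.util.Properties;

public class IdempotentProducerProps {
    static Properties build(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        // Idempotent, but deliberately no transactional.id: the producer is
        // non-transactional, so on affected versions the broker may expire
        // its producer id on idle topics -> UNKNOWN_PRODUCER_ID on resume.
        props.put("enable.idempotence", "true");
        props.put("acks", "all"); // required (or implied) with idempotence
        props.put("retries", Integer.toString(Integer.MAX_VALUE));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build("localhost:9092"));
    }
}
```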
> On 9/03/2020, at 12:59 PM, James Olsen wrote:
>
> For completene
We can choose 2.2.1 or 2.3.1 for the Broker (AWS recommends 2.2.1, although they
don't state why). Based on the experiences below, I would then go with the
corresponding 2.2.2 or 2.3.1 Client version.
Which combo would people recommend?
> On 9/03/2020, at 12:03 PM, James Olsen wrote:
>
data available to be read instantly?
Thanks,
Jamie
On Sunday, 8 March 2020, James Olsen <ja...@inaseq.com> wrote:
Using 2.3.1 Brokers makes things worse. There are now 2 fetch.max.wait
=consumer-LedgerService-group-1, groupId=LedgerService-group] Sending
READ_UNCOMMITTED IncrementalFetchRequest(toSend=(Ledger-1), toForget=(),
implied=(Ledger-0)) to broker localhost:9093 (id: 1001 rack: null)
> On 9/03/2020, at 10:48 AM, James Olsen wrote:
>
> Thanks for your respo
you are having 20 partitions per consumer (as per your 60-partition, single
consumer group setup); with 5 consumers that means 12 each. There's nothing
special about these numbers, as you also noticed.
Have you tried setting fetch.max.wait.ms = 0 to see whether that makes
a difference for you?
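The suggestion above amounts to something like the following (group id and bootstrap address are illustrative). fetch.max.wait.ms=0 tells the broker to answer fetch requests immediately rather than waiting up to the default 500 ms for fetch.min.bytes of data to accumulate, which trades extra request traffic for lower latency.

```java
import java.util.Properties;

public class LowLatencyConsumerProps {
    static Properties build(String bootstrap, String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("group.id", groupId);
        // Return fetches immediately instead of blocking on fetch.min.bytes.
        props.put("fetch.max.wait.ms", "0");
        props.put("fetch.min.bytes", "1"); // the default; shown for clarity
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build("localhost:9092", "example-group"));
    }
}
```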
Thanks,
On Thu, 5 Mar 2020 at 03
I’m seeing behaviour that I don’t understand when I have Consumers fetching
from multiple Partitions from the same Topic. There are two different
conditions arising:
1. A subset of the Partitions allocated to a given Consumer not being consumed
at all. The Consumer appears healthy, the