Hi,
When using MirrorMaker for a forwarding flow like A->B, B->C, what are the
options for handling failure of B so that C could then mirror from A?
From what I can see, MirrorMaker has no concept of failover, so C cannot
fail over to A. Can you please confirm this is correct?
An option could be ha
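For context, both flows can be declared side by side in a MirrorMaker 2 properties file (a sketch, assuming MM2 from Kafka 2.4+; the cluster aliases and hosts are placeholders). There is no automatic failover; the standby flow has to be switched on manually if B fails:

```properties
# Placeholder cluster aliases and bootstrap servers
clusters = A, B, C
A.bootstrap.servers = a-host:9092
B.bootstrap.servers = b-host:9092
C.bootstrap.servers = c-host:9092

# Normal forwarding flow
A->B.enabled = true
B->C.enabled = true

# Standby flow: flip to true (and restart MM2) if B fails; not automatic failover
A->C.enabled = false
```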
What is the status of support for Java 17 in Kafka for both brokers and
clients?
The docs for Kafka 3.0.0 state that Java 8 and Java 11 are supported.
Thanks,
Mark
Jackson was updated to 2.10 in the latest Kafka release. The method
mentioned no longer exists in 2.10.
Do you have multiple versions of Jackson on the classpath?
On Thu, 12 Dec 2019, 11:09 Charles Bueche, wrote:
> Hello again,
>
> spending hours debugging this and having no clue...
>
> * Kaf
le to import 2.3.1-rc2 in spring boot?
>
> Thanks!
>
> On Thu, Oct 24, 2019 at 4:21 PM Mark Anderson
> wrote:
> >
> > Are you using Spring Boot?
> >
> > I know that the recent Spring Boot 2.2.0 release specifically updates
> their
> > Kafka depe
Are you using Spring Boot?
I know that the recent Spring Boot 2.2.0 release specifically updates their
Kafka dependency to 2.3.0. Previous versions used Kafka 2.1.x, though I've
used 2.2.x with it.
Maybe running mvn dependency:tree would help to see if there are multiple
Kafka versions that could conflict.
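The dependency-tree check above can be narrowed to just the Kafka and Jackson artifacts (a sketch; run from the project root):

```shell
# List the dependency tree, filtered to Kafka and Jackson core artifacts,
# to spot multiple versions being pulled in transitively
mvn dependency:tree -Dincludes=org.apache.kafka,com.fasterxml.jackson.core
```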
The first thing I would do is update to the latest Java 8 release. Just in
case you are hitting any G1GC bugs in such an old version.
Mark
On Thu, 22 Aug 2019, 07:17 Xiaobing Bu, wrote:
> it's not a network issue, since I captured the network packets.
> when the GC remark and unloading class,
We have a different use case where we stop consuming due to connection to
an external system being down.
In this case we sleep for the same period as our poll timeout would be and
recommit the previous offset. This stops the consumer from going stale and
avoids increasing the max interval.
Perhaps you
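A stdlib-only sketch of that keep-alive loop (the OffsetCommitter is a hypothetical stand-in for the real commitSync call on the consumer, and the timings are compressed for illustration):

```java
import java.time.Duration;

public class KeepAliveLoop {
    // Hypothetical stand-in for the real commitSync call on the consumer.
    interface OffsetCommitter { void commitPrevious(); }

    // While the external system is down, sleep for the same period as the
    // poll timeout and re-commit the previous offsets, as described above.
    static int keepAlive(Duration pollTimeout, int downCycles, OffsetCommitter committer)
            throws InterruptedException {
        int commits = 0;
        for (int i = 0; i < downCycles; i++) {
            Thread.sleep(pollTimeout.toMillis()); // same period a poll would block for
            committer.commitPrevious();           // re-commit the last committed offsets
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) throws InterruptedException {
        // three simulated "down" cycles with a 10ms poll timeout
        System.out.println(keepAlive(Duration.ofMillis(10), 3, () -> {})); // prints 3
    }
}
```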
Kafka has its own version of the zookeeper client libraries that are still
3.4.13.
I'd be interested to know if it is compatible with 3.5.x now that it has a
stable release.
Mark
On Wed, 5 Jun 2019, 21:27 Sebastian Schmitz, <
sebastian.schm...@propellerhead.co.nz> wrote:
> Hi,
>
> I am currently t
Further investigation has uncovered a defect when resolving a hostname
fails - https://issues.apache.org/jira/browse/KAFKA-8182
Looks like it has been present since support for resolving all DNS IPs was
added.
On Mon, 1 Apr 2019 at 15:55, Mark Anderson wrote:
> Hi list,
>
> I'
Hi list,
I've a question regarding a stack trace I see with the 2.2.0 consumer
java.lang.IllegalStateException: No entry found for connection 0
at
org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:339)
at
org.apache.kafka.clients.ClusterConnectionStates
Hi all
From reading the javadoc I've made the assumption that all exceptions
thrown by KafkaConsumer.poll() are unrecoverable.
What exactly does this mean with regards to the consumer instance itself?
Can it be used again in any way e.g. close then call subscribe again?
Or do you need to repla
Hi all,
I've a question regarding the following warning when sending data
"Received invalid metadata error in produce request on partition topic-3
due to org.apache.kafka.common.errors.NetworkException: The server
disconnected before a response was received.. Going to request metadata
update now
Hi
Reviewing the javadoc for KafkaConsumer.poll() I'd like to confirm the
status of the consumer after the poll method throws an exception.
I assume for an unrecoverable KafkaException you could not re-use the
consumer and would need to create a brand new KafkaConsumer object?
Are there any case
I'm sure I initially made this assumption when trying to read all records
from a compacted topic on application startup and it was incorrect.
Due to latency, threading, GC pauses, etc. it would return 0 when there were
still records on the topic.
Mark
On Mon, 4 Feb 2019, 18:02 Pere Urbón Bayes wrote:
> Hi,
Hi all,
Since Kafka 2.1 supports Java 11, we are considering moving to it to take
advantage of the performance improvements since Java 8.
One issue that isn't clear to me is whether the latest Zookeeper 3.4.x
release supports Java 11.
Does anyone know? And if so are there any issues to watch out for?
Than
Hi,
After reading http://www.evanjones.ca/jvm-mmap-pause.html and
https://bugs.openjdk.java.net/browse/JDK-8076103 (alongside the linked
e-mail trail) I'm considering adding this flag when running Kafka.
I'm assuming this is safe to use and there are no unintended side effects?
Does anyone have
Hi,
I've been investigating application pauses on my Kafka broker and sometimes
see large (high hundreds to one second) safepoint times for revoking bias.
Does anyone have any experience of running Kafka brokers with the
-XX:-UseBiasedLocking flag to remove these pauses?
If so does it impact per
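For anyone wanting to try these, both flags can be passed via the environment variable that kafka-run-class.sh reads (a sketch; the mmap flag is the one discussed in the linked article, and neither flag is an official recommendation):

```shell
# -XX:+PerfDisableSharedMem avoids the mmap-related safepoint pauses from the
# linked article; -XX:-UseBiasedLocking removes the revoke-bias safepoints.
# Note: setting this variable replaces the script's default performance opts.
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+PerfDisableSharedMem -XX:-UseBiasedLocking"
bin/kafka-server-start.sh config/server.properties
```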
> The 'readTimeout' is defined as:
>
> readTimeout = sessionTimeout * 2 / 3;
>
> Thus, the 'actual' sessionTimeout is 1333ms while
> config:zookeeper.session.timeout=2000ms
>
>
> >-Original Message-
> >From: Mark Anderson [mailto:manderso...@gm
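The arithmetic quoted above can be checked directly (a minimal sketch of the derived read timeout, using integer division as in the quoted definition):

```java
public class ReadTimeoutCalc {
    // ZooKeeper's client derives its read timeout from the session timeout
    // as quoted above: readTimeout = sessionTimeout * 2 / 3 (integer division)
    static int readTimeout(int sessionTimeoutMs) {
        return sessionTimeoutMs * 2 / 3;
    }

    public static void main(String[] args) {
        // with zookeeper.session.timeout=2000ms the effective read timeout is 1333ms
        System.out.println(readTimeout(2000)); // 1333
    }
}
```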
Hi,
I'm experimenting with the value of zookeeper.session.timeout.ms in Kafka
2.0.1.
In my broker logs I see the following message
[2019-01-09 15:12:01,246] WARN Client session timed out, have not heard
from server in 1369ms for sessionid 0x200d78d415e0002
(org.apache.zookeeper.ClientCnxn)
Howe
ml
> >
> > I think making zookeeper.session.timeout.ms smaller will result in
> faster
> > detection of a dead node, but the downside is that a leader election
> might
> > get triggered by network blips or other cases where your broker is not
> > actually dead.
>
Hi,
I'm currently testing how Kafka reacts in cases of broker failure due to
process failure or network timeout.
I'd like to have the election of a new leader for a topic partition happen
as quickly as possible but it is unclear from the documentation or broker
configuration what the key paramete
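For reference, the broker settings that typically govern failure detection and leader election can be sketched as follows (a hedged list; the values shown are the 2.x-era defaults, so check your version's documentation):

```properties
# How long ZooKeeper waits without heartbeats before declaring a broker dead;
# lower values detect failure faster but risk spurious elections on blips
zookeeper.session.timeout.ms=6000
# How far a follower may lag before being dropped from the ISR
replica.lag.time.max.ms=10000
# Whether a non-ISR replica may be elected leader (faster recovery, data loss risk)
unclean.leader.election.enable=false
```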
che/kafka/pull/6005/files
>
>
> Guozhang
>
> On Wed, Dec 5, 2018 at 6:54 AM Mark Anderson
> wrote:
>
> > Hi,
> >
> > I'm periodically seeing ConcurrentModificationExceptions in the producer
> > when records are expired e.g.
> >
> > E
Hi,
I'm periodically seeing ConcurrentModificationExceptions in the producer
when records are expired e.g.
ERROR Dec 05 11:56:13.033 388753 [kafka-producer-network-thread |
analogDataProducer] com.x.AnalogMessageBundler Exception
org.apache.kafka.common.errors.TimeoutException:
Expiring 1 record
Hi,
I'm currently testing Kafka Producers in cases of broker connection failure
due to the broker process dying or network connection timeout. I'd like to
make sure that I understand how the Producer buffer functions in this case.
Note that I have retries set to 0.
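For reference, these are the producer settings that govern buffering while the broker is unreachable (a sketch; the values are illustrative defaults, not recommendations):

```properties
# No retries, as in the scenario described above
retries=0
# Total memory for unsent records; send() blocks once this buffer is full
buffer.memory=33554432
# How long send() may block on a full buffer before raising TimeoutException
max.block.ms=60000
# How long a record may sit in the accumulator before being expired
# (delivery.timeout.ms was introduced in Kafka 2.1)
delivery.timeout.ms=120000
```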
From what I can see when send
Hi,
Do you have a roadmap detailing how long each Kafka version will be
supported with bug fixes?
We are currently running 1.1.x and given we may need to support this system
for a number of years I'd be interested to know what the support period is.
Especially given 2.0.x has been released and 2.
Have you reviewed
https://www.confluent.io/blog/getting-started-apache-kafka-kubernetes/ as a
starting point?
On Mon, 22 Oct 2018, 18:07 M. Manna, wrote:
> Thanks a lot for your prompt answer. This is what I was expecting.
>
> So, if we had three pods where volumes are mapped as the following
>
Also, in this case will it fall back to a full request?
So no data is lost, but it might increase latency?
Thanks
Mark
On Thu, 26 Jul 2018, 12:28 Mark Anderson, wrote:
> Ted,
>
> Below are examples of the DEBUG entries from FetchSession
>
> [2018-07-26 11:14:43,461] DEBUG Crea
just turn on DEBUG
>> for this package.
>>
>> FYI
>>
>>
>> On Wed, Jun 13, 2018 at 9:08 AM, Mark Anderson
>> wrote:
>>
>>> Ted
>>>
>>> I don't see any other INFO log messages so I assume that means it is the
>>>
n.id}: expected " +
>
> s"epoch ${session.epoch}, but got epoch $
> {reqMetadata.epoch()}.")
>
> new SessionErrorContext(Errors.INVALID_FETCH_SESSION_EPOCH,
> reqMetadata)
>
> Can you pastebin the log line preceding what you pas
We recently updated our Kafka brokers and clients to 1.1.0. Since the
upgrade we periodically see INFO log entries such as
INFO Jun 08 08:30:20.335 61161458 [KafkaRecordConsumer-0]
org.apache.kafka.clients.FetchSessionHandler [Consumer clientId=consumer-1,
groupId=group_60_10] Node 3 was unable to