Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 2.3.1, which
includes many bug fixes for Apache Kafka 2.3.
Release notes for the 2.3.1 release:
https://home.apache.org/~davidarthur/kafka-2.3.1-rc0/RELEASE_NOTES.html
*** Please downl
Gwen Shapira published a great whitepaper with Reference Architectures for
all Kafka and Confluent components in big and small environments, and for
bare metal, VMs, and all 3 major public clouds.
https://www.confluent.io/resources/apache-kafka-confluent-enterprise-reference-architecture/
On Fri
Hi all,
I have a small question. In my company we would like to use Apache Kafka
with KSQL.
My question is: what are the hardware requirements to run Kafka
and KSQL in small and big environments?
Best regards,
Peter
Kind regards,
Peter Menapace
Senior IT Architect
Just bring a new broker up and give it the id of the lost one. It will sync
itself.
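A minimal sketch of the idea (the broker id and paths are illustrative, adjust to your setup; note this only recovers data that still exists on other replicas, i.e. RF > 1):

```
# server.properties on the replacement machine
broker.id=2                 # the id of the lost broker
log.dirs=/var/kafka-logs    # empty on the fresh machine

# then start the broker; it rejoins the cluster and
# re-replicates its partitions from the other replicas
bin/kafka-server-start.sh config/server.properties
```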
/svante
Den fre 13 sep. 2019 kl 13:51 skrev saurav suman :
> Hi,
>
> When the old data is lost and another broker is added to the cluster then
> it is a new fresh broker with no data. You can reassign the partitio
Hi,
When the old data is lost and another broker is added to the cluster,
it is a fresh broker with no data. You can reassign the partitions of
the topics to it using the kafka-reassign-partitions.sh script.
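For example (the topic name, broker ids, and ZooKeeper address are illustrative; run against your own cluster):

```
# topics-to-move.json
{"version": 1, "topics": [{"topic": "my-topic"}]}

# generate a candidate assignment over the brokers you want to use,
# including the replacement broker
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list "1,2,4" --generate

# save the proposed plan as reassignment.json, then execute and verify
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --execute
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --verify
```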
Please check the below links for more details.
https://blog.imaginea.com/how-to-rebalance-
Not sure what you mean by 1 topic with 3 partitions and replication-factor =
2. You may need to revisit the concepts of partitions and replication factor
in the docs.
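To make the arithmetic concrete, here is a small Python sketch (a simple round-robin placement for illustration only, not Kafka's actual assignment algorithm) showing how 3 partitions with replication factor 2 spread over 3 brokers:

```python
from collections import Counter

def assign_replicas(num_partitions, replication_factor, brokers):
    """Round-robin placement: partition p's replicas go to brokers
    p, p+1, ... modulo the broker count (illustration only)."""
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [brokers[(p + r) % len(brokers)]
                         for r in range(replication_factor)]
    return assignment

# 1 topic, 3 partitions, replication factor 2, brokers B1..B3
assignment = assign_replicas(3, 2, ["B1", "B2", "B3"])
print(assignment)  # {0: ['B1', 'B2'], 1: ['B2', 'B3'], 2: ['B3', 'B1']}

# 3 partitions x RF 2 = 6 replicas, so each broker holds 2
counts = Counter(b for replicas in assignment.values() for b in replicas)
print(counts)      # Counter({'B1': 2, 'B2': 2, 'B3': 2})
```

So "2 partitions on each broker" in the scenario below really means 2 partition replicas per broker.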
Thanks,
On Fri, 13 Sep 2019 at 11:51, Alok Dwivedi wrote:
> Hi All
> Can someone please advise what is the recommended practice for rep
Hi All
Can someone please advise on the recommended practice for replacing a lost
broker in a Kafka cluster? Let's consider this sequence of events:
1. A 3-node cluster, say with 1 topic of 3 partitions and RF of 2.
2. That gives 2 partitions on each Broker (B1,B2,B3)
3. Let's say we los
Hi all,
Currently, with the default
min.cleanable.dirty.ratio / log.cleaner.min.cleanable.ratio setting (0.5),
the __consumer_offsets partition size grows up to 80 MB, and this seems to
slow down metadata-related operations (notably commits / join group).
What are the normal/expected partition avg size for