Hello,
Last week I upgraded one relatively large Kafka (EC2, 10 brokers, ~30 TB
data, 100-300 Mbps in/out per instance) 0.10.0.1 cluster to 1.0, and saw
some issues.
Out of ~100 topics with 2..20 partitions each, 9 partitions in 8 topics
became "unavailable" across 3 brokers. The leader was shown
Hi,
The V1 message format is
v1 (supported since 0.10.0)
Message => Crc MagicByte Attributes Timestamp Key Value
Crc => int32
MagicByte => int8
Attributes => int8
Timestamp => int64
Key => bytes
Value => bytes
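As a rough illustration of that layout, here is a minimal encoder/decoder
sketch in Python. It is not a Kafka client: the function names are made up,
and it only mirrors the field order and widths listed above (Crc int32,
MagicByte int8, Attributes int8, Timestamp int64, then length-prefixed Key
and Value, with -1 encoding null). The CRC is computed over the bytes after
the Crc field, as the v0/v1 formats do, using plain CRC-32.

```python
import struct
import zlib


def pack_v1_message(key, value, timestamp_ms, attributes=0):
    """Encode one v1 message (sketch only, not a real client)."""

    def _bytes(b):
        # bytes fields are int32-length-prefixed; -1 means null
        if b is None:
            return struct.pack(">i", -1)
        return struct.pack(">i", len(b)) + b

    magic = 1  # MagicByte for the v1 format
    body = (struct.pack(">bb", magic, attributes)
            + struct.pack(">q", timestamp_ms)
            + _bytes(key) + _bytes(value))
    # Crc covers everything after the Crc field itself
    crc = zlib.crc32(body) & 0xFFFFFFFF
    return struct.pack(">I", crc) + body


def unpack_v1_message(buf):
    """Decode a buffer produced by pack_v1_message."""
    crc, magic, attributes = struct.unpack_from(">Ibb", buf, 0)
    (timestamp_ms,) = struct.unpack_from(">q", buf, 6)
    off = 14
    (klen,) = struct.unpack_from(">i", buf, off); off += 4
    key = None if klen == -1 else buf[off:off + klen]
    off += max(klen, 0)
    (vlen,) = struct.unpack_from(">i", buf, off); off += 4
    value = None if vlen == -1 else buf[off:off + vlen]
    return crc, magic, attributes, timestamp_ms, key, value
```

The fixed header is 14 bytes (4 + 1 + 1 + 8), which is why the key length
is read at offset 14.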
Would it be a good suggestion to h
We have also created simple wrapper scripts for common operations.
On Sat, Apr 21, 2018 at 2:20 AM, Peter Bukowinski wrote:
> One solution is to build wrapper scripts around the standard kafka
> scripts. You’d put your relevant cluster parameters (brokers, zookeepers)
> in a single config file (
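A wrapper along those lines might look like the sketch below. The file path
and variable names (CLUSTER_CONF, BROKERS, ZOOKEEPERS) are assumptions, not
from this thread, and the final command is echoed rather than executed so
the sketch runs without a live cluster; in a real wrapper you would replace
`echo` with `exec`.

```shell
#!/bin/sh
# Hypothetical wrapper around the standard Kafka scripts: cluster
# parameters live in one config file so callers never repeat them.
CLUSTER_CONF="${CLUSTER_CONF:-/etc/kafka/cluster.env}"
[ -f "$CLUSTER_CONF" ] && . "$CLUSTER_CONF"

# Placeholder defaults so the sketch is runnable stand-alone.
BROKERS="${BROKERS:-broker1:9092,broker2:9092}"
ZOOKEEPERS="${ZOOKEEPERS:-zk1:2181}"

ktopics() {
    # Prepend the connection flag; 0.10/1.0-era kafka-topics.sh
    # talks to ZooKeeper. Echoed here instead of executed.
    echo kafka-topics.sh --zookeeper "$ZOOKEEPERS" "$@"
}

ktopics --list
```

The same pattern extends to the other scripts (kafka-console-consumer.sh,
kafka-reassign-partitions.sh, and so on), each picking the flag it needs
from the shared config.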
I have a few questions on the behavior of Kafka w.r.t. broker life-cycle
changes.
1. When a new broker is added to the cluster, do we need to invoke
rebalancing manually, or does Kafka automatically rebalance based on load?
2. When an active broker goes down, does Kafka move the partitio