[https://issues.apache.org/jira/browse/KAFKA-19704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18020392#comment-18020392]
MithaJoseph commented on KAFKA-19704:
-------------------------------------
[~beregon87]
I understand that Kafka 4.1.0 enforces a minimum value of *1 MB* (1,048,576
bytes) for {{segment.bytes}}, and that topics configured with a lower value
(e.g. 75,000 bytes) now cause broker startup to fail.
Quick question: for topics that were created under earlier Kafka versions with
{{segment.bytes}} below 1 MB, shouldn't the upgrade to 4.1.0 preserve backward
compatibility? That is, should there be a mechanism so that existing topics
with smaller segment sizes do not block the upgrade, or at least one that lets
the broker start (perhaps with a warning) so the topic config can be migrated
or increased later?
If Kafka does *not* currently support that, could we explore options like:
* Allowing the validation to be overridden or bypassed for existing topics on upgrade
* Providing an offline tool or snapshot mechanism to patch the topic configs before startup
* Documenting a recommended procedure for safely migrating topics to meet the new minimum (see the sketch below)
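To make the third option concrete, here is a rough sketch of what a pre-upgrade migration pass could look like with the Java {{Admin}} client: list all topics, describe their configs, and raise any {{segment.bytes}} below the new floor to 1 MiB. The bootstrap address, the class name, and the bulk-raise policy are illustrative assumptions on my part, not an endorsed procedure, and it only works while the pre-4.1 broker can still run:
{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;

// Illustrative sketch only: raise any topic-level segment.bytes below the
// 4.1.0 minimum (1,048,576 bytes) before attempting the upgrade.
public class RaiseSegmentBytesBeforeUpgrade {
    private static final long MIN_SEGMENT_BYTES = 1_048_576L;

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; point this at the still-running pre-4.1 cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            for (String topic : admin.listTopics().names().get()) {
                ConfigResource res = new ConfigResource(ConfigResource.Type.TOPIC, topic);
                Config config = admin.describeConfigs(List.of(res)).all().get().get(res);
                ConfigEntry entry = config.get("segment.bytes");
                // describeConfigs also reports broker defaults, which are far
                // above the floor, so only explicitly lowered topics match here.
                if (entry != null && Long.parseLong(entry.value()) < MIN_SEGMENT_BYTES) {
                    Map<ConfigResource, Collection<AlterConfigOp>> change = Map.of(
                            res, List.of(new AlterConfigOp(
                                    new ConfigEntry("segment.bytes", String.valueOf(MIN_SEGMENT_BYTES)),
                                    AlterConfigOp.OpType.SET)));
                    admin.incrementalAlterConfigs(change).all().get();
                    System.out.println("Raised segment.bytes on topic " + topic);
                }
            }
        }
    }
}
{code}
The equivalent per-topic change can also be made with the stock CLI, e.g. {{kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name <topic> --add-config segment.bytes=1048576}}. Either way, the window for doing this is before the upgrade, which is exactly what makes the startup-failure scenario below so awkward.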
I’d appreciate your thoughts on whether backward compatibility was considered
here, or if there is an existing plan to support this scenario.
> Kafka Broker Fails to Start After Upgrade to 4.1.0 Due to Invalid
> segment.bytes Configuration
> ---------------------------------------------------------------------------------------------
>
> Key: KAFKA-19704
> URL: https://issues.apache.org/jira/browse/KAFKA-19704
> Project: Kafka
> Issue Type: Bug
> Reporter: MithaJoseph
> Priority: Major
>
> After upgrading our Kafka brokers to version {*}4.1.0{*}, the broker fails to
> start due to a topic-level {{segment.bytes}} setting that the new version
> rejects. Kafka 4.1.0 enforces a *minimum value of 1 MB (1,048,576 bytes)* for
> this configuration, and any value below this threshold causes the broker to
> terminate during startup.
> *Error Details:*
>
> {code:java}
> [2025-09-12 14:39:51,285] ERROR Encountered fatal fault: Error starting LogManager (org.apache.kafka.server.fault.ProcessTerminatingFaultHandler)
> org.apache.kafka.common.config.ConfigException: Invalid value 75000 for configuration segment.bytes: Value must be at least 1048576
>     at org.apache.kafka.common.config.ConfigDef$Range.ensureValid(ConfigDef.java:989) ~[kafka-clients-4.1.0.jar:?]
> {code}
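> For what it's worth, the failing check can be reproduced in isolation. Below is a minimal stand-alone sketch of the range validator the stack trace points at (assuming only {{kafka-clients}} on the classpath; the class name and the hand-wired 1,048,576 floor are illustrative):
>
> {code:java}
> import org.apache.kafka.common.config.ConfigDef;
> import org.apache.kafka.common.config.ConfigException;
>
> public class SegmentBytesCheckDemo {
>     public static void main(String[] args) {
>         // Stand-alone reproduction of the range check from the stack trace:
>         // ConfigDef$Range.ensureValid rejects values below the configured floor.
>         ConfigDef.Range atLeastOneMiB = ConfigDef.Range.atLeast(1_048_576);
>         try {
>             atLeastOneMiB.ensureValid("segment.bytes", 75000);
>         } catch (ConfigException e) {
>             // Prints: Invalid value 75000 for configuration segment.bytes:
>             // Value must be at least 1048576
>             System.out.println(e.getMessage());
>         }
>     }
> }
> {code}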
> In our setup, some topics were previously configured with a lower
> {{segment.bytes}} value (e.g., 75000), which was allowed in earlier Kafka
> versions but is now invalid.
> As a result, the Kafka broker cannot start, leading to downtime and
> unavailability. No snapshot file exists yet, so the {{kafka-metadata-shell}}
> tool cannot be used to patch the config offline.
> We would appreciate your guidance on the following:
> * Are there any supported methods in Kafka 4.1.0 to override or bypass
> this validation at startup, so we can recover without losing data?
> * If not, is there a documented approach to fixing such configuration
> issues when snapshots are not yet available?
>
>