[
https://issues.apache.org/jira/browse/KAFKA-19704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18019930#comment-18019930
]
MithaJoseph commented on KAFKA-19704:
-------------------------------------
Hi [~brandboat], thank you for the suggestion.
Unfortunately, reverting to a previous Kafka version to update the
{{segment.bytes}} configuration is not a viable option for us. Kafka is
embedded in our product, which follows a bi-weekly release cycle. Even if we
were to fix the configuration in one release and upgrade Kafka in a subsequent
one, there's no guarantee that all customers will upgrade sequentially. As a
result, we require a mechanism to bypass this validation during the Kafka
upgrade itself, ensuring compatibility regardless of the upgrade path.
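For reference, the per-topic fix itself is straightforward while the brokers
are still running; below is a minimal sketch using the Java AdminClient (the
topic name and bootstrap address are placeholders, not our real values):
{code:java}
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class FixSegmentBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder address

        try (Admin admin = Admin.create(props)) {
            // Raise segment.bytes on the affected topic to the 4.1.0 minimum (1 MiB).
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "affected-topic");
            AlterConfigOp raise = new AlterConfigOp(
                    new ConfigEntry("segment.bytes", "1048576"), AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> changes =
                    Map.of(topic, List.of(raise));
            admin.incrementalAlterConfigs(changes).all().get();
        }
    }
}
{code}
The difficulty is not the fix itself but that we cannot guarantee it has been
applied on every customer installation before the 4.1.0 upgrade runs.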
As an additional note, we had previously tested an upgrade to Kafka 4.0.0 and
did not encounter this {{segment.bytes}} issue; the brokers started successfully.
However, we had to roll back due to KAFKA-19427, which caused out-of-memory
errors. This prompted us to move directly to 4.1.0, where the stricter
enforcement of the {{segment.bytes}} minimum led to this startup failure.
We’re looking for guidance on how to handle this scenario cleanly, ideally
without requiring a rollback or risking data loss.
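For anyone facing the same situation, a pre-flight audit along these lines can
flag affected topics before attempting the upgrade (again a rough sketch, with
a placeholder bootstrap address):
{code:java}
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AuditSegmentBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder address

        try (Admin admin = Admin.create(props)) {
            // Build one ConfigResource per topic in the cluster.
            Set<ConfigResource> resources = admin.listTopics().names().get().stream()
                    .map(name -> new ConfigResource(ConfigResource.Type.TOPIC, name))
                    .collect(Collectors.toSet());

            // Flag every topic whose segment.bytes is below the new 4.1.0 minimum.
            admin.describeConfigs(resources).all().get().forEach((resource, config) -> {
                ConfigEntry entry = config.get("segment.bytes");
                if (entry != null && Long.parseLong(entry.value()) < 1048576L) {
                    System.out.printf("Topic %s: segment.bytes=%s is below 1048576%n",
                            resource.name(), entry.value());
                }
            });
        }
    }
}
{code}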
> Kafka Broker Fails to Start After Upgrade to 4.1.0 Due to Invalid
> segment.bytes Configuration
> ---------------------------------------------------------------------------------------------
>
> Key: KAFKA-19704
> URL: https://issues.apache.org/jira/browse/KAFKA-19704
> Project: Kafka
> Issue Type: Bug
> Reporter: MithaJoseph
> Priority: Major
>
> After upgrading our Kafka brokers to version {*}4.1.0{*}, the broker fails to
> start due to a misconfigured topic-level {{segment.bytes}} setting. The new
> version enforces a *minimum value of 1MB (1048576 bytes)* for this
> configuration, and any value below this threshold causes the broker to
> terminate during startup.
> *Error Details:*
>
> {code:java}
> [2025-09-12 14:39:51,285] ERROR Encountered fatal fault: Error starting LogManager (org.apache.kafka.server.fault.ProcessTerminatingFaultHandler)
> org.apache.kafka.common.config.ConfigException: Invalid value 75000 for configuration segment.bytes: Value must be at least 1048576
> 	at org.apache.kafka.common.config.ConfigDef$Range.ensureValid(ConfigDef.java:989) ~[kafka-clients-4.1.0.jar:?]
>
> {code}
> In our setup, some topics were previously configured with a lower
> {{segment.bytes}} value (e.g., 75000), which was allowed in earlier Kafka
> versions but is now invalid.
> As a result, the Kafka broker cannot start, leading to downtime and
> unavailability. No snapshot file exists yet, so the {{kafka-metadata-shell}}
> tool cannot be used to patch the configuration offline.
> We would appreciate your guidance on the following:
> * Are there any supported methods in Kafka 4.1.0 to override or bypass
> this validation at startup so we can recover without losing data?
> * If not, is there a documented approach to fix such configuration issues
> when snapshots are not yet available?
>
>