mumrah commented on code in PR #12596:
URL: https://github.com/apache/kafka/pull/12596#discussion_r966484578


##########
core/src/main/scala/kafka/server/metadata/BrokerMetadataListener.scala:
##########
@@ -307,11 +322,18 @@ class BrokerMetadataListener(
 
   private def publish(publisher: MetadataPublisher): Unit = {
     val delta = _delta
-    _image = _delta.apply()
+    try {
+      _image = _delta.apply()

Review Comment:
   Rewinding and re-applying does sound useful for some kind of automatic error 
mitigation, but I think it would be quite a bit of work. As it stands, I 
believe the broker can only process metadata going forward. 
   
   I can think of a degenerate case we have today where `loadBatches` is able 
to process all but one record, but `delta.apply` cannot complete and so we 
can't publish any new metadata. Like you mention, I think the only way to 
mitigate a situation like this would be to produce smaller deltas to reduce the 
blast radius of a bad record.
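
   For illustration, here's a minimal, self-contained sketch of that degenerate 
case. The types and the `publish` helper below are simplified stand-ins for the 
real `BrokerMetadataListener` machinery, not the actual patch:

```scala
// Sketch only: MetadataImage, MetadataDelta, and publish are simplified
// stand-ins, not the real kafka.server.metadata classes.
object PublishSketch {
  final case class MetadataImage(epoch: Long)

  final class MetadataDelta(base: MetadataImage, records: Seq[String]) {
    // apply() can still fail on a single bad record even after every
    // record was accepted into the delta -- the degenerate case above.
    def apply(): MetadataImage = {
      records.foreach { r =>
        if (r == "bad-record")
          throw new IllegalStateException(s"cannot apply record: $r")
      }
      MetadataImage(base.epoch + 1)
    }
  }

  def publish(image: MetadataImage, delta: MetadataDelta): MetadataImage = {
    try {
      delta.apply()
    } catch {
      case t: Throwable =>
        // Keep the old image: no new metadata is published until a
        // later delta applies cleanly.
        println(s"failed to apply metadata delta: ${t.getMessage}")
        image
    }
  }

  def main(args: Array[String]): Unit = {
    val image = MetadataImage(epoch = 0L)
    val good  = new MetadataDelta(image, Seq("recordA", "recordB"))
    val bad   = new MetadataDelta(image, Seq("recordA", "bad-record"))
    println(publish(image, good)) // MetadataImage(1): image advances
    println(publish(image, bad))  // MetadataImage(0): stuck on the old image
  }
}
```

   In the bad-delta run, `publish` returns the old image untouched, which is 
exactly the "can't publish any new metadata" situation: the broker keeps 
serving stale metadata until a later delta applies cleanly.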


