[https://issues.apache.org/jira/browse/KAFKA-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047417#comment-16047417]
ASF GitHub Bot commented on KAFKA-5438:
---------------------------------------
GitHub user apurvam opened a pull request:
https://github.com/apache/kafka/pull/3313
KAFKA-5438: Fix UnsupportedOperationException in writeTxnMarkersRequest
Before this patch, the `partitionErrors` map was immutable. As a result,
if a single producer had a marker for multiple partitions, and there were
multiple response callbacks for a single append, we would get an
`UnsupportedOperationException` in the `writeTxnMarker` handler.
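The failure mode can be sketched in plain Java (names are hypothetical, not the actual KafkaApis code; `Collections.unmodifiableMap` stands in for whatever immutable map the handler built). `AbstractMap.putAll` delegates to `put`, which throws on an immutable map, so the second callback's merge blows up:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class PartitionErrorsDemo {
    // Simulates two response callbacks merging errors for one producer.
    static int mergeErrors(boolean useMutableMap) {
        Map<String, String> firstAppend = new HashMap<>();
        firstAppend.put("topic-0", "NONE");

        Map<String, String> partitionErrors = useMutableMap
                ? new HashMap<>(firstAppend)                 // the fix: mutable map
                : Collections.unmodifiableMap(firstAppend);  // the bug: immutable map

        Map<String, String> secondAppend = new HashMap<>();
        secondAppend.put("topic-1", "NONE");

        // The second callback merges its errors in; throws if the map is immutable.
        partitionErrors.putAll(secondAppend);
        return partitionErrors.size();
    }

    public static void main(String[] args) {
        try {
            mergeErrors(false);
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable map: " + e.getClass().getSimpleName());
        }
        System.out.println("mutable map, merged entries: " + mergeErrors(true));
    }
}
```

Making `partitionErrors` a mutable map (as this patch does) lets each callback merge its partition errors without the wrapper throwing.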
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/apurvam/kafka KAFKA-5438-fix-unsupportedoperationexception-in-writetxnmarker
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/kafka/pull/3313.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #3313
----
commit 0fc523aefb8a30f4fdebf166f904a780b40f6965
Author: Apurva Mehta <[email protected]>
Date: 2017-06-13T04:47:59Z
Make partitionErrors a mutable map in KafkaApis.handleWriteTxnMarkerRequest
----
> UnsupportedOperationException in WriteTxnMarkers handler
> --------------------------------------------------------
>
> Key: KAFKA-5438
> URL: https://issues.apache.org/jira/browse/KAFKA-5438
> Project: Kafka
> Issue Type: Sub-task
> Components: clients, core, producer
> Reporter: Jason Gustafson
> Assignee: Apurva Mehta
> Priority: Blocker
> Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> {code}
> [2017-06-10 19:16:36,280] ERROR [KafkaApi-2] Error when handling request
> {replica_id=1,max_wait_time=500,min_bytes=1,max_bytes=10485760,isolation_level=0,topics=[{topic=__transaction_state,partitions=[{partition=45,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=15,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=20,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=3,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=21,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=29,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=39,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=38,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=9,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=33,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=32,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=17,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=23,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=47,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=26,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=5,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=27,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=41,fetch_offset=48,log_start_offset=0,max_bytes=1048576},{partition=35,fetch_offset=0,log_start_offset=0,max_bytes=1048576}]},{topic=__consumer_offsets,partitions=[{partition=8,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=21,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=27,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=9,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=41,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=33,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=23,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=47,fetch_offset=41,log_start_offset=0,max_bytes=1048576},{partition=3,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=15,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=17,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=11,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=14,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=20,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=39,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=45,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=5,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=26,fetch_offset=0,log_start_offset=0,max_bytes=1048576},{partition=29,fetch_offset=0,log_start_offset=0,max_bytes=1048576}]},{topic=output-topic,partitions=[{partition=1,fetch_offset=4522,log_start_offset=0,max_bytes=1048576}]}]}
> (kafka.server.KafkaApis)
> java.lang.UnsupportedOperationException
> at java.util.AbstractMap.put(AbstractMap.java:203)
> at java.util.AbstractMap.putAll(AbstractMap.java:273)
> at kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$13(KafkaApis.scala:1509)
> at kafka.server.KafkaApis$$anonfun$handleWriteTxnMarkersRequest$2$$anonfun$apply$20.apply(KafkaApis.scala:1571)
> at kafka.server.KafkaApis$$anonfun$handleWriteTxnMarkersRequest$2$$anonfun$apply$20.apply(KafkaApis.scala:1571)
> at kafka.server.DelayedProduce.onComplete(DelayedProduce.scala:131)
> at kafka.server.DelayedOperation.forceComplete(DelayedOperation.scala:66)
> at kafka.server.DelayedProduce.tryComplete(DelayedProduce.scala:113)
> at kafka.server.DelayedProduce.safeTryComplete(DelayedProduce.scala:76)
> at kafka.server.DelayedOperationPurgatory$Watchers.tryCompleteWatched(DelayedOperation.scala:338)
> at kafka.server.DelayedOperationPurgatory.checkAndComplete(DelayedOperation.scala:244)
> at kafka.server.ReplicaManager.tryCompleteDelayedProduce(ReplicaManager.scala:250)
> at kafka.cluster.Partition.tryCompleteDelayedRequests(Partition.scala:418)
> at kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:269)
> at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1091)
> at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1088)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:1088)
> at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:622)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:606)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:98)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:66)
> at java.lang.Thread.run(Thread.java:745)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)