kevin-pan-skydio opened a new issue, #1423:
URL: https://github.com/apache/pulsar-client-go/issues/1423
# Description
We observed a significant increase in memory usage after enabling key-based
batching in the Pulsar Go client. With the default batch builder, memory usage
remained stable, but with the key-based batch builder, memory grew
monotonically, never plateauing, until the pods were OOM-killed.
# Environment
- Deployment: Kubernetes (statefulset, 4 pods)
- Producers per pod: ~60 message keys, ~15 topics
- Throughput: ~700 messages / second
- Pulsar Go Client version: 0.16.0 (latest)
# Producer configuration
```go
producer, err := p.producers.FailableGetOrSet(topic, func() (pulsar.Producer, error) {
    producer, err := p.nativeClient.CreateProducer(pulsar.ProducerOptions{
        // general
        Topic:                   topic,
        DisableBlockIfQueueFull: true, // drop if queue is full
        // batching
        DisableBatching:         false,
        BatchingMaxPublishDelay: 1 * time.Second / 30, // ~30 Hz
        BatchingMaxMessages:     800,
        BatcherBuilderType:      pulsar.KeyBasedBatchBuilder, // required for key-shared subscriptions
        MaxPendingMessages:      1000, // explicitly set (default is 1000)
        // compression
        CompressionType:  pulsar.ZSTD,
        CompressionLevel: pulsar.Faster,
    })
    if err != nil {
        return nil, fmt.Errorf("failed to create producer for topic %s: %w", topic, err)
    }
    return producer, nil
})
```
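To narrow down where the retained memory lives, a heap profile from an affected pod is probably the most useful data point. A minimal sketch of how such a profile can be exposed, using only the standard-library `net/http/pprof` package (the port is arbitrary and not part of our actual setup):

```go
package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
    // In the real service this would run alongside the producers; here it just
    // serves pprof so a heap profile can be pulled from the pod, e.g.:
    //   go tool pprof -inuse_space http://<pod>:6060/debug/pprof/heap
    log.Println(http.ListenAndServe(":6060", nil))
}
```

Comparing `-inuse_space` profiles between a DefaultBatchBuilder run and a KeyBasedBatchBuilder run should show whether the growth sits in the key-based batch containers.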
# Observed behavior
- With DefaultBatchBuilder: memory usage stabilizes between ~0.2 GB and ~1.5 GB per pod under load.
- With KeyBasedBatchBuilder: memory usage grows steadily from ~5 GB to ~15 GB per pod within a day, with no ceiling.
- Even when no messages are being published, memory remains held and does not drop (see the idle-memory check sketched below).
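To distinguish whether the idle memory is live Go heap (i.e. the batch containers are still referenced) or runtime-held memory that simply has not been returned to the OS, a check along these lines can be run while the producers are idle (a minimal sketch using only the standard runtime packages):

```go
package main

import (
    "log"
    "runtime"
    "runtime/debug"
    "time"
)

// logMemStats periodically logs live heap vs. memory obtained from the OS,
// forcing a GC plus scavenge first so "held but unreleased" memory stands out.
func logMemStats(interval time.Duration) {
    for range time.Tick(interval) {
        runtime.GC()
        debug.FreeOSMemory() // return as much unused memory to the OS as possible

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        log.Printf("heap_inuse=%d MiB heap_idle=%d MiB sys=%d MiB",
            m.HeapInuse>>20, m.HeapIdle>>20, m.Sys>>20)
    }
}

func main() {
    logMemStats(30 * time.Second)
}
```

If `heap_inuse` stays high while nothing is being published, the key-based batches (or their buffers) are still reachable; if only `sys` stays high, the memory is free from Go's point of view and just has not been returned to the kernel yet.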
# Expected behavior
Memory usage should remain bounded and proportional to batching/pending
limits when using key-based batching, similar to default batching.
# Additional context
- The issue is reproducible under consistent traffic patterns (~700 msg/s).
- The problem seems related to how key-based batches are retained or cleaned up in the producer.
- It appears memory allocated for batching is not released when idle (a standalone reproduction sketch follows below).
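For reference, the reproduction can probably be reduced to a standalone program along these lines (a minimal sketch, not our production code; the broker URL, topic name, payload size, and key count are placeholders chosen to mimic our traffic):

```go
package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/apache/pulsar-client-go/pulsar"
)

func main() {
    // Placeholder broker URL; adjust to the target cluster.
    client, err := pulsar.NewClient(pulsar.ClientOptions{URL: "pulsar://localhost:6650"})
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    producer, err := client.CreateProducer(pulsar.ProducerOptions{
        Topic:                   "persistent://public/default/key-batch-repro", // placeholder topic
        DisableBlockIfQueueFull: true,
        DisableBatching:         false,
        BatchingMaxPublishDelay: time.Second / 30, // ~30 Hz, as in our config
        BatchingMaxMessages:     800,
        BatcherBuilderType:      pulsar.KeyBasedBatchBuilder,
        MaxPendingMessages:      1000,
        CompressionType:         pulsar.ZSTD,
        CompressionLevel:        pulsar.Faster,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer producer.Close()

    payload := make([]byte, 512)                // arbitrary payload size
    ticker := time.NewTicker(time.Second / 700) // ~700 msg/s
    defer ticker.Stop()

    for i := 0; ; i++ {
        <-ticker.C
        // Rotate through ~60 keys, mirroring the production traffic pattern.
        key := fmt.Sprintf("key-%02d", i%60)
        producer.SendAsync(context.Background(),
            &pulsar.ProducerMessage{Payload: payload, Key: key},
            func(_ pulsar.MessageID, _ *pulsar.ProducerMessage, err error) {
                if err != nil {
                    log.Printf("send failed: %v", err)
                }
            })
    }
}
```

Running this with `BatcherBuilderType` switched between `pulsar.DefaultBatchBuilder` and `pulsar.KeyBasedBatchBuilder`, while watching RSS or the heap profile, should reproduce the divergence described above.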