[
https://issues.apache.org/jira/browse/QPID-5880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14053707#comment-14053707
]
Pavel Moravec commented on QPID-5880:
-------------------------------------
Just FYI, another attempt, adding a message annotation, does not work either:
adding an annotation makes a new copy of the whole message, so the
SharedState is decoupled immediately (and moreover, each individual message
carries one extra annotation). See https://reviews.apache.org/r/11329/ for some
relevant background.
> [C++ broker] Make memory usage consistent after broker restart
> --------------------------------------------------------------
>
> Key: QPID-5880
> URL: https://issues.apache.org/jira/browse/QPID-5880
> Project: Qpid
> Issue Type: Improvement
> Components: C++ Broker
> Affects Versions: 0.28
> Reporter: Pavel Moravec
> Assignee: Pavel Moravec
> Priority: Minor
> Attachments: QPID-5880.patch
>
>
> In the scenario "send messages via a (fanout) exchange into multiple (tens,
> hundreds of) queues", the qpid broker keeps the SharedState just once per
> received message, to reduce RAM requirements.
> Assuming the messages are durable and stored to disk, broker restart and
> journal recovery create one SharedState for each message copy in each queue.
> This has a negative impact on the memory consumed by the broker, and it could
> potentially prevent the broker from starting up successfully due to RAM
> exhaustion, even though the broker was running fine before the restart attempt.
> Some "unifying" mechanism is needed in journal recovery that creates just one
> SharedState for all identical copies of a message spread across multiple
> queues.
> Ideally, this could be resolved by durable topics, but that would require a
> much bigger change.
> My proposal (to be discussed on ReviewBoard) is rather a quick fix meeting
> the original requirement.