> On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
> > Having the broker set the persistence id in some cases, where the current 
> > expectation is that the store sets it, makes me nervous. How does this 
> > affect the windows store(s) for example? Does any store make any 
> > assumptions about the id that would no longer hold?

Good point.

Neither linearstore nor legacystore makes any such assumption. They just set 
(and require) a unique ID for each record within one journal, but don't care 
about its value otherwise. In legacystore, there is an int comparison within 
the loadContent method (used only for journal recovery) that can make the 
function less efficient if unordered persistence IDs are encountered.

Anyway, it would be great if Kim could confirm that I am right.

Both Windows stores overwrite the persistence ID with their own value, so the 
patch won't have any impact at all on the ms-clfs and ms-sql stores.


> On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
> > /trunk/qpid/cpp/src/qpid/broker/QueueSettings.h, line 81
> > <https://reviews.apache.org/r/23305/diff/2/?file=624628#file624628line81>
> >
> >     Should be camel case rather than underscore, like other option members.

Fixed.


> On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
> > /trunk/qpid/cpp/src/qpid/broker/QueueSettings.cpp, line 61
> > <https://reviews.apache.org/r/23305/diff/2/?file=624629#file624629line61>
> >
> >     Odd option name, qpid.store_msg_id would be neater. I don't feel the 
> > option name really properly describes the function though. It's really more 
> > something like 'share recovered messages'

I did not like the option name either; it was just a first draft. Thanks for 
the suggestion, SHARE_RECOVERED_MSGS("qpid.share_recovered_msgs") is used now.


> On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
> > /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.cpp, line 987
> > <https://reviews.apache.org/r/23305/diff/2/?file=624631#file624631line987>
> >
> >     Would be nicer to contain this all in the broker code, e.g. perhaps in 
> > qpid/broker/RecoveryManagerImpl.cpp?

That would make sense from a logical point of view, as the broker should be 
responsible for re-coupling the messages in its memory. But..

It would have to be in the RecoveryManagerImpl class so that the message_ids 
map is broker-wide.

RecoveryManagerImpl::recoverMessage is aware of one message only, without 
context (such as the queue it belongs to or what persistency ID the message 
should have). I would have to extend the method's arguments with the 
persistencyID and a RecoverableQueue::shared_ptr to move that functionality 
there. The queue pointer is necessary so that the message_ids map keeps only 
pointers to messages from qpid.share_recovered_msgs queues; otherwise the map 
would cache redundant pointers as well.

Is it worth adding those two arguments to a method that is called for every 
message recovery, when the feature will be applied only to some queues?

Or is there some other option I overlooked?
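To make the trade-off concrete, here is a minimal sketch of the ID-keyed 
dedup map the store-side approach relies on. The types and names 
(RecoveredMessage, MessageMap, recoverShared) are hypothetical stand-ins for 
illustration only, not the actual qpid classes:

```cpp
// Sketch only: during recovery, messages carry a persistencyID that was set
// at enqueue time to a value shared by all instances of one logical message
// (e.g. the SharedState pointer). A broker-wide map lets recovery reuse the
// first instance seen for each ID instead of allocating a duplicate per queue.
#include <cassert>
#include <cstdint>
#include <map>
#include <memory>
#include <string>

struct RecoveredMessage {          // stand-in for the broker's message type
    std::string payload;
};

using MessageMap = std::map<std::uint64_t, std::shared_ptr<RecoveredMessage>>;

// Return the shared instance for persistencyId, creating it on first sight.
std::shared_ptr<RecoveredMessage>
recoverShared(MessageMap& seen, std::uint64_t persistencyId,
              const std::string& payload)
{
    MessageMap::iterator it = seen.find(persistencyId);
    if (it != seen.end())
        return it->second;         // already recovered once: share it
    std::shared_ptr<RecoveredMessage> msg(new RecoveredMessage{payload});
    seen[persistencyId] = msg;
    return msg;
}
```

Moving this into RecoveryManagerImpl would mean passing both the ID and the 
queue into recoverShared's real-world equivalent, which is exactly the extra 
per-message cost questioned above.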


- Pavel


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23305/#review47397
-----------------------------------------------------------


On July 8, 2014, 12:53 p.m., Pavel Moravec wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23305/
> -----------------------------------------------------------
> 
> (Updated July 8, 2014, 12:53 p.m.)
> 
> 
> Review request for qpid, Gordon Sim and Kim van der Riet.
> 
> 
> Bugs: QPID-5880
>     https://issues.apache.org/jira/browse/QPID-5880
> 
> 
> Repository: qpid
> 
> 
> Description
> -------
> 
> Simple idea:
> - in Queue::enqueue, set PersistableMessage::persistencyID "manually" to some 
> unique number that is identical for all message instances that share a common 
> SharedState - e.g. the pointer to the SharedState
> - during journal recovery, if we recover a message with an already seen 
> persistencyID, use the previously seen instance instead, with its SharedState 
> and PersistableMessage bits
> 
> Known limitation:
> - message annotations added on some queue (e.g. due to queue sequencing being 
> enabled) will be either over-written or shared with all other queues during 
> recovery
> 
> The patch contains a new QueueSettings option to enable this feature on a 
> per-queue basis (it is disabled by default). This somewhat mitigates the 
> limitation above.
> 
> Isn't storing a pointer to SharedState on disk (via persistencyID) some 
> sort of security breach? (I don't think so, but it is worth asking.)
> 
> Can't manual setup of the persistencyID break something in the store? (AFAIK 
> no, as the uniqueness of the ID is assured: 1) a new / different message with 
> the same persistencyID can appear only after the previous instance is gone 
> from memory, and 2) only queues with the option enabled are checked for 
> message coupling.)
> 
> Will it help in a cluster? No, it won't. When the primary broker gets 1 
> message to an exchange that distributes it to 100 queues, the broker updates 
> the backup brokers via 100 individual "enqueue 1 message to queue q[1-100]" 
> events. So backup brokers consume more memory than the primary - the same 
> amount as if the primary did not share SharedState at all.
> 
> So it is reasonable for standalone brokers only.
> 
> 
> Diffs
> -----
> 
>   /trunk/qpid/cpp/src/qpid/broker/Queue.cpp 1608083 
>   /trunk/qpid/cpp/src/qpid/broker/QueueSettings.h 1608083 
>   /trunk/qpid/cpp/src/qpid/broker/QueueSettings.cpp 1608083 
>   /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.h 1608083 
>   /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.cpp 1608083 
>   /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.h 1608083 
>   /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.cpp 1608083 
> 
> Diff: https://reviews.apache.org/r/23305/diff/
> 
> 
> Testing
> -------
> 
> No significant difference in memory consumption before & after restart (in a 
> setup of 500 queues with qpid.store_msgID=true and thousands of messages sent 
> via a fanout exchange to all of them).
> 
> Automated tests passed.
> 
> 
> Thanks,
> 
> Pavel Moravec
> 
>