Thank you so much for sharing these results with us...

I'm trying to improve paging a little further: stop using records to
chase the counters and optimize transaction usage (if a sender uses a
TX to send a single message, it should be sent non-transacted... that's
what I'm planning, at least).
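
For reference, this is roughly the sender pattern that optimization targets.
A plain JMS sketch; the connection URL, queue name and the javax.jms API here
are just illustrative, not code from the broker itself:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class SingleMessageTx {
        public static void main(String[] args) throws Exception {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            try (Connection connection = cf.createConnection()) {
                // A transacted session that only ever sends one message before
                // committing. The idea is that the broker could handle this commit
                // the same way as a plain non-transacted send.
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageProducer producer = session.createProducer(session.createQueue("Queue.0"));
                producer.send(session.createTextMessage("single message"));
                session.commit();
            }
        }
    }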


Thank you

On Mon, Sep 19, 2022 at 7:46 AM Roskvist Anton <anton.roskv...@volvo.com> wrote:
>
> Hi,
>
> I've had some issues with the Artemis broker when it accidentally runs into
> paging mode on multiple addresses at once, so I have been working a bit on
> measuring and mitigating this issue for my use case.
> I'm glad to say that as of the latest build I made against the 2.x-branch (so 
> against artemis-2.25.1/2.26.0 or whatever the final version might be) I'm 
> seeing a very noticeable improvement in this area.
>
> The test as I have designed it connects to a single broker and spawns 10
> consumer+producer pairs, each pair connecting to its own Queue.
> Each pair processes N messages and then disconnects.
> So I'll have:
> p0 -> Queue.0 -> c0
> p1 -> Queue.1 -> c1
> ....
> p9 -> Queue.9 -> c9
>
> I set the address "max-size-bytes" to an unreasonably low value to make sure
> all of these queues hit paging ASAP.
> The producers get a few seconds of head start for the same reason.
> Then the producer+consumer pairs simply send and receive messages at 3K each
> as fast as they can, with storage bandwidth as the limiting factor
> (this limit is verified).
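>
> For reference, a rough sketch of one producer+consumer pair as I run it. This
> is a plain-JMS approximation, so the connection URL, payload, message count
> and thread handling are illustrative rather than the exact test code; paging
> itself is triggered by the low max-size-bytes address setting on the broker,
> not by anything the client does:
>
>     import javax.jms.*;
>     import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
>
>     public class PagingPairTest {
>
>         // One producer+consumer pair bound to its own queue (Queue.0 .. Queue.9).
>         static void runPair(String queueName, int messageCount) throws Exception {
>             ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
>
>             Thread producer = new Thread(() -> {
>                 try (Connection c = cf.createConnection()) {
>                     Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
>                     MessageProducer p = s.createProducer(s.createQueue(queueName));
>                     for (int i = 0; i < messageCount; i++) {
>                         p.send(s.createTextMessage("payload-" + i));
>                     }
>                 } catch (JMSException e) {
>                     throw new RuntimeException(e);
>                 }
>             });
>
>             Thread consumer = new Thread(() -> {
>                 try (Connection c = cf.createConnection()) {
>                     c.start();
>                     Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
>                     MessageConsumer mc = s.createConsumer(s.createQueue(queueName));
>                     for (int i = 0; i < messageCount; i++) {
>                         mc.receive();
>                     }
>                 } catch (JMSException e) {
>                     throw new RuntimeException(e);
>                 }
>             });
>
>             producer.start();
>             Thread.sleep(5_000);   // producer head start so the address is paging
>             consumer.start();
>             producer.join();
>             consumer.join();
>         }
>
>         public static void main(String[] args) throws Exception {
>             Thread[] pairs = new Thread[10];
>             for (int i = 0; i < pairs.length; i++) {
>                 String queue = "Queue." + i;
>                 pairs[i] = new Thread(() -> {
>                     try {
>                         runPair(queue, 3_000);   // "3K each" per pair
>                     } catch (Exception e) {
>                         throw new RuntimeException(e);
>                     }
>                 });
>                 pairs[i].start();
>             }
>             for (Thread t : pairs) {
>                 t.join();
>             }
>         }
>     }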
>
> I ran the same test multiple times, with the broker version being the only
> variable in the setup: artemis-2.22.0 and artemis-2.x-latest (built around
> a week ago).
>
> I could observe two distinct differences between the versions:
> 1. Overall paging throughput is around 10% higher with the latest version of 
> the broker... even though storage throughput is capped out in both cases, 
> which I found quite interesting.
>
> 2. Resource distribution is a lot more even between the queues in the latest
> version. To elaborate, the previous behavior was:
>   All 10 destinations hit paging pretty much as soon as the test starts.
>   Throughput is roughly evenly distributed between all queues, i.e. they each
> process about 1/10th of the total throughput.
>   After something like 1-2 minutes this shifts and one destination is given
> all the bandwidth, so its throughput dramatically increases. Meanwhile the
> other destinations are basically at a standstill.
>   This holds until processing is done for the "fast" queue. Then the other
> destinations proceed evenly for about 1-2 minutes, and then another queue gets
> to ride in the "fast lane" while the rest have to wait.
>
> In the latest version the processing is even throughout the duration of the
> test. Adding an 11th pair (also paging) during the run sees resources divided
> evenly across all 11 queues, as expected. Nice!
>
> I think the main benefit of this would be with patterns where a client picks
> up messages from an IN queue within a transaction, does some processing and
> sends them to an OUT queue before committing the transaction.
> In this scenario, if both the IN and OUT queues enter paging and the IN queue
> gets the "fast lane", I suspect the client could basically get deadlocked in
> its message processing, or at least have _very_ degraded performance. This is
> very similar to what I have observed "in the wild" and, from what I can tell,
> the changes to paging remedy the issue.
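>
> In code, the pattern I mean looks roughly like this (a plain-JMS sketch; the
> queue names, receive timeout and processing step are illustrative):
>
>     import javax.jms.*;
>     import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
>
>     public class InOutBridge {
>         public static void main(String[] args) throws Exception {
>             ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
>             try (Connection connection = cf.createConnection()) {
>                 connection.start();
>                 // One transacted session: the receive from IN and the send to OUT
>                 // are committed (or rolled back) together.
>                 Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
>                 MessageConsumer in = session.createConsumer(session.createQueue("IN"));
>                 MessageProducer out = session.createProducer(session.createQueue("OUT"));
>
>                 Message msg;
>                 while ((msg = in.receive(1_000)) != null) {
>                     // ... some processing happens here ...
>                     out.send(msg);
>                     session.commit();   // acks IN and publishes to OUT atomically
>                     // If IN gets the "fast lane" while OUT is starved (the old
>                     // behavior), this commit is what slows to a crawl.
>                 }
>             }
>         }
>     }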
>
> In any case, just thought I'd share my findings. I think this is looking very 
> good!
>
> Br,
> Anton
>



-- 
Clebert Suconic
