> 1) Paging should really be considered palliative. In other words, it's
just meant to mask the fact that the broker has run out of memory.
Performance will drop considerably when paging. You can certainly rely on
it to do its job (i.e. keep the broker from crashing while steps are taken
to restore message flow and/or client performance to normal levels).
However, it's not something I would really recommend building a solution to
specifically leverage. Lots of users think paging is kind of a general use
feature rather than a palliative and therefore build solutions to use a
message queue like a database. This is a classic anti-pattern [1]. I'm not
saying you're guilty of this; I'm just trying to be clear.

I actually share this view; in enterprise messaging it is typically true.

My goal wasn't to page consistently, though, since consumers will likely be
up to date and therefore consuming from memory.
I was hoping I could instruct Artemis to keep the messages on a topic for as
long as possible even after they are consumed, so that in the rare case where
a consumer is newly instantiated it would get all the existing history via
retroactive consumption. I can see now that this is not possible, thanks.

FWIW, this is just an experiment for a server-metrics use case: I want to
move metrics to multiple backend services, and if a new backend arrives I'd
like a best-effort send of the history.
This can be done relatively easily with other commit-log-based brokers, but
I'm not keen on adding more software if I can avoid it, hence this little
playground experiment.

> 2) You have observed correctly that dropping messages is supported when
an address reaches its memory limit, but only blocking is supported when
the broker reaches its disk limit.
> 3) Retroactive consumers are not implemented.

Got it.

Thanks!
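
For anyone finding this thread later, here is a minimal broker.xml sketch of
how I understand point 2 (the metrics.# match and the size value are
placeholders I made up, not something from this thread): the DROP policy only
applies at the per-address memory limit, while hitting max-disk-usage always
blocks producers.

   <core xmlns="urn:activemq:core">
      <!-- once this percentage of disk is used, producers are blocked (never dropped) -->
      <max-disk-usage>90</max-disk-usage>

      <address-settings>
         <address-setting match="metrics.#">
            <!-- per-address memory limit; once exceeded, the policy below applies -->
            <max-size-bytes>104857600</max-size-bytes>
            <!-- DROP discards further messages instead of paging them to disk -->
            <address-full-policy>DROP</address-full-policy>
         </address-setting>
      </address-settings>
   </core>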


2017-05-01 16:09 GMT-07:00 Justin Bertram <jbert...@apache.org>:

> Couple of things:
>
>   1) Paging should really be considered palliative. In other words, it's
> just meant to mask the fact that the broker has run out of memory.
> Performance will drop considerably when paging. You can certainly rely on
> it to do its job (i.e. keep the broker from crashing while steps are taken
> to restore message flow and/or client performance to normal levels).
> However, it's not something I would really recommend building a solution to
> specifically leverage. Lots of users think paging is kind of a general use
> feature rather than a palliative and therefore build solutions to use a
> message queue like a database. This is a classic anti-pattern [1]. I'm not
> saying you're guilty of this; I'm just trying to be clear.
>
>   2) You have observed correctly that dropping messages is supported when
> an address reaches its memory limit, but only blocking is supported when
> the broker reaches its disk limit.
>
>   3) Retroactive consumers are not implemented.
>
>
> Justin
>
> [1] http://sensatic.net/messaging/messaging-anti-patterns-part-1.html
>
> ----- Original Message -----
> From: "Victor" <victor.rom...@gmail.com>
> To: users@activemq.apache.org
> Sent: Monday, May 1, 2017 4:06:21 PM
> Subject: Re: Ring buffer with Artemis: paging mode
>
> Sorry, forgot to mention that I was planning to combine it with retroactive
> consumers too, but the only thing I can find is inconclusive:
> https://issues.apache.org/jira/browse/ARTEMIS-402
>
>
> 2017-05-01 14:04 GMT-07:00 Victor <victor.rom...@gmail.com>:
>
> > Hi all,
> >
> > I have been considering implementing something similar to a ring buffer
> > with Artemis to deliver server metrics to a number of backends:
> >
> > I was hoping to be able to:
> >
> > - have a topic √
> > - limit paging with a max disk usage √
> > - drop messages when the disk page limit is reached ?
> > - higher priority for metric messages coming from the servers where the
> > brokers run √
> >
> > Doing a quick review and Google search, I can't find other people having
> > similar experiences.
> >
> > I also find that I can only drop messages when the memory limit
> > <https://activemq.apache.org/artemis/docs/2.0.0/paging.html> is reached,
> > not the disk limit.
> >
> > Any ideas?
> >
>
