Hi Kristinn,

the only way to set a STOMP consumer window size on ActiveMQ Artemis is to
set the property `stompConsumerCredits` on the STOMP acceptor, i.e. the
following acceptor will set consumerWindowSize[1] = 1024 bytes for all
connected STOMP clients:

<acceptor
name="stomp-acceptor">tcp://localhost:61613?protocols=STOMP;stompConsumerCredits=1024</acceptor>
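
Nothing changes on the client side: with the credits set on the acceptor, a
subscriber that acks each message individually still uses the plain STOMP 1.2
frames, for example (destination and ids are just placeholders, and `^@`
denotes the NUL byte as in the STOMP spec):

```
SUBSCRIBE
id:0
destination:EXAMPLE.1
ack:client-individual

^@
```

and each MESSAGE is then confirmed with an ACK frame whose `id` header echoes
the `ack` header of the received MESSAGE:

```
ACK
id:<ack-value-from-the-message>

^@
```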

Note that stompConsumerCredits cannot be set to 0, because that would
prevent the server from sending any messages to the client.
I would move from the STOMP client to the ActiveMQ JMS client, as in the
example at
https://stackoverflow.com/questions/62701025/activemq-artemis-handle-messages-sequentially/62705413#62705413
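
With the JMS client the window size can be set directly on the connection
URL, so no broker-side acceptor tuning is needed. A minimal sketch, assuming
a broker on the default core port 61616 and a queue named EXAMPLE.1 (both
placeholders, adjust to your setup):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SequentialConsumer {
    public static void main(String[] args) throws Exception {
        // consumerWindowSize=0 disables client-side buffering, so the broker
        // only dispatches the next message when this consumer asks for one
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?consumerWindowSize=0");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            // CLIENT_ACKNOWLEDGE: the message stays unacked until we say so
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(
                    session.createQueue("EXAMPLE.1"));
            while (true) {
                Message message = consumer.receive();
                // ... process the message (may take a while) ...
                message.acknowledge();
            }
        }
    }
}
```

This is only a sketch of the approach from the stackoverflow answer, not a
tested implementation; it needs the artemis-jms-client dependency and a
running broker.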

[1]
https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html

Regards,
Domenico


On Wed, Jul 8, 2020 at 7:35 PM Kristinn Thor Johannsson <
k...@skalar.no> wrote:

> In my test-case I had my consumers set up with stomp (I wrote it in
> typescript), and used the CLIENT_INDIVIDUAL mode (
> https://stomp.github.io/stomp-specification-1.2.html#SUBSCRIBE_ack_Header
> ).
> So I thought that might delay messages from a specific queue until the
> current one was acked, but with my consumers waiting 500ms before acking,
> they still received new messages from the queues immediately.
>
> And in that test-case I also tried consumerWindowSize=0 as a query-param on
> the connection string, and also as a default address setting for all
> queues.
>
> On Wed, Jul 8, 2020 at 5:01 PM Tim Bain <tb...@alumni.duke.edu> wrote:
>
> > Would CLIENT_ACKNOWLEDGE (or INDIVIDUAL_ACKNOWLEDGE, though I don't think
> > it buys you anything over CLIENT_ACKNOWLEDGE) delivery modes address your
> > use case, by allowing your clients control over when messages are acked?
> >
> > Tim
> >
> > On Wed, Jul 8, 2020, 5:57 AM Domenico Francesco Bruscino <
> > bruscin...@gmail.com> wrote:
> >
> > > Hi Kristinn,
> > >
> > > the prefetch limit is for consumers, and the current implementation of
> > > the activemq-client 5 automatically sends a delivery acknowledgement
> > > when the unacknowledged messages are more than half of the prefetch
> > > limit, so I don't think it can help.
> > >
> > > Regards,
> > > Domenico
> > >
> > > On Tue, Jul 7, 2020 at 10:11 AM Kristinn Thor Johannsson <
> > > k...@skalar.no> wrote:
> > >
> > > > The use case can be explained like this:
> > > >
> > > > We are doing some work based on git commits, and for each target
> > > > (either a commit or a user-defined branch name) we only want to be
> > > > handling the first workload before starting the next.
> > > > So we're probably going to have a high number of topics (one per
> > > > unique commit), but within each of those topics there might only be
> > > > 1-3 messages to be handled (with the exception of the user-defined
> > > > branch name targets).
> > > >
> > > > So we were thinking that an MQ might be able to help us do this in a
> > > > performant way, where our consumers are only informed of messages
> > > > that are ready and valid to start (no active workload for that
> > > > specific commit/branch name, and it's the next within that topic).
> > > >
> > > > I've read the doc page you linked to about the prefetch limit; could
> > > > I use the prefetch limit set to 1 for topics to achieve this?
> > > >
> > > > On Mon, Jul 6, 2020 at 6:14 PM Domenico Francesco Bruscino <
> > > > bruscin...@gmail.com> wrote:
> > > >
> > > > > Hi Kristinn,
> > > > >
> > > > > acknowledgements confirm to the server the successful consumption
> > > > > of a message; generally they don't directly control the flow of
> > > > > data between the server and the client[1].
> > > > >
> > > > > Using message grouping suitably, only one consumer will receive
> > > > > the messages of a queue, and if your consumer is slow you could
> > > > > set consumerWindowSize to 0 (for no buffer at all)[2] to limit
> > > > > redeliveries on disconnection.
> > > > > So using message grouping and setting consumerWindowSize to 0, the
> > > > > server sends all the messages of a queue to one consumer, and that
> > > > > consumer doesn't receive the next message before invoking the
> > > > > `receive()` method or, asynchronously, via a message listener.
> > > > >
> > > > > If this solution doesn't match your requirements, the
> > > > > activemq-client 5 offers a way to control delivery with
> > > > > acknowledgements using the prefetch limit [3].
> > > > > Why do you need to block the delivery of the next message until
> > > > > the acknowledgement of the previous one?
> > > > >
> > > > > [1]
> > > > > https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html
> > > > > [2]
> > > > > https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html#slow-consumers
> > > > > [3] https://activemq.apache.org/what-is-the-prefetch-limit-for
> > > > >
> > > > > Regards,
> > > > > Domenico
> > > > >
> > > > > On Mon, Jul 6, 2020 at 9:50 AM Kristinn Thor Johannsson <
> > > > > k...@skalar.no> wrote:
> > > > >
> > > > > > Thanks a lot for the response Domenico.
> > > > > >
> > > > > > I see that what I failed to mention in my original question is
> > > > > > that each consumer might need a while to process a message from
> > > > > > each queue it's handling.
> > > > > > In your example you had auto-acknowledge, making each message be
> > > > > > handled instantly by the consumer.
> > > > > >
> > > > > > Is your example modifiable so that a message takes about 1
> > > > > > second to be handled and acknowledged, and the consumer doesn't
> > > > > > receive the next message for the same queue before that happens?
> > > > > > If so, I'm certain our use case can be handled by Artemis.
> > > > > >
> > > > > >
> > > > > > On Fri, Jul 3, 2020 at 1:02 PM Domenico Francesco Bruscino <
> > > > > > bruscin...@gmail.com> wrote:
> > > > > >
> > > > > > > Hi Kristinn,
> > > > > > >
> > > > > > > I have just answered the same question on stackoverflow:
> > > > > > >
> > > > > > > https://stackoverflow.com/questions/62701025/activemq-artemis-handle-messages-sequentially/62705413#62705413
> > > > > > >
> > > > > > > Regards,
> > > > > > > Domenico
> > > > > > >
> > > > > > > On Thu, Jun 18, 2020 at 10:51 AM Kristinn Thor Johannsson <
> > > > > > > k...@skalar.no> wrote:
> > > > > > >
> > > > > > > > Hi, I've tried to ensure that a consumer on a queue (with
> > > > > > > > message grouping) will only receive one message at a time
> > > > > > > > from each queue it's handling, until the consumer
> > > > > > > > acknowledges said message.
> > > > > > > >
> > > > > > > > For a test I've set up ActiveMQ Artemis and have 3 consumers
> > > > > > > > on a wildcard EXAMPLE.*, and one publisher posting 10
> > > > > > > > messages to each of 5 queues: EXAMPLE.1 - EXAMPLE.5. What
> > > > > > > > I'm seeing is that each of the consumers receives messages
> > > > > > > > from the queues immediately. I've tried using the consumer
> > > > > > > > window size setting (as 0), as I thought that would help me
> > > > > > > > deliver only one message at a time from each queue, but that
> > > > > > > > doesn't seem to work. Have I misunderstood that setting? If
> > > > > > > > so, are there any other settings I should be looking at to
> > > > > > > > help me get this working?
> > > > > > > >
> > > > > > > > The particular use case I'm trying to achieve is that I'll
> > > > > > > > possibly have many queues and a couple of consumers. And
> > > > > > > > it's important that messages in each of the queues are
> > > > > > > > handled sequentially, but all queues can be handled in
> > > > > > > > parallel.
> > > > > > > >
> > > > > > > > --
> > > > > > > > -Kristinn Thor
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -Kristinn Thor
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > -Kristinn Thor
> > > >
> > >
> >
>
>
> --
> -Kristinn Thor
>
