Kim,

We have only one main queue and one dead-letter queue, with 10 producers
and 12 consumers; the producers pump about 1k messages/sec. Below is the
qpid-stat -q output from when the broker stopped accepting any more
messages.

bash-4.1# qpid-stat -q
Queues
  queue                                      dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  =======================================================================================================================
  ax-q-axgroup-001-consumer-group-001        Y                   0    114k   114k    0      5.88g    5.88g     11    2
  ax-q-axgroup-001-consumer-group-001-dl     Y                   0    0      0       0      0        0         0     2
  d72e183e-f0df-457c-89a7-81a2cff509c8:0.0        Y        Y     0    0      0       0      0        0         1     2
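
For reference, here is roughly what the store-related part of our qpidd.conf
looks like (a sketch from memory; only wcache-page-size=128 and
log-enable=info+ are settings we actually discussed in this thread, the
commented lines are placeholders rather than values confirmed on the box):

  # qpidd.conf (sketch, not copied verbatim from the server)
  log-enable=info+           # INFO+ logging so store start/recovery details show up
  wcache-page-size=128       # write-cache page size, set per your earlier suggestion
  #wcache-num-pages=...      # left at default; per-queue buffer = page size x num pages
  #tpl-wcache-page-size=...  # not set; only relevant if we used transactions

With only the main queue, the dead-letter queue and the temporary reply queue,
the per-queue write-buffer memory (page size x number of pages) should not be
a concern here, if I understood your earlier note correctly.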


Thanks,
Ram




On Fri, Dec 7, 2018 at 9:00 AM Kim van der Riet <kvand...@redhat.com> wrote:

> This looks like a bug to me, and that is why I am keen to see a
> reproducer if you can find one. How many queues are there? How many
> producers and consumers are there for each queue? How are the consumers
> working? Are they configured as listeners, or do they poll for new
> messages? How frequently? How long does it take under these conditions
> for the error to occur typically? If I can get some kind of idea what
> the runtime conditions are, it will give me some idea where to look.
>
> If you set the broker to use INFO+ logging (log-enable=info+), then you
> should see some detail about the starting and recovery of the store when
> the broker starts, which should include this info. The store settings in
> the config file are global, so when you set a particular buffer
> configuration, all queues will use this. It should be reported during
> startup when using INFO+ level logging. Watch your log size, however, as
> using this level will make the logs big.
>
> On 12/5/18 5:06 PM, rammohan ganapavarapu wrote:
> > Kim,
> >
> > We set wcache-page-size=128 in qpidd.conf, restarted the broker, and let
> > the client recreate the queues from scratch, but we are still getting this
> > error. How do we verify that the queues created by the client actually use
> > wcache-page-size=128?
> >
> > 2018-12-05 21:18:16 [Protocol] error Connection
> > qpid.<server>:5672-<client>:17769 closed by error: Queue <queue-name>:
> > MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue() threw
> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue returned
> > partly completed (state ENQ_PART). (This data_tok: id=456535 state=NONE)
> > (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
> >
> > Thanks,
> > Ram
> >
> >
> >
> > On Tue, Dec 4, 2018 at 8:18 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> Kim,
> >>
> >> Thank you, I will play with that setting. Please let me know if any other
> >> tunings would help.
> >>
> >> Ram
> >>
> >> On Wed, Nov 28, 2018 at 8:04 AM Kim van der Riet <kvand...@redhat.com>
> >> wrote:
> >>
> >>> The answer to your first question depends on what is more important to
> >>> you - low latency or high throughput. Messages to be persisted will
> >>> accumulate in a buffer page until it is full or until a timer is
> >>> triggered, then it will be written to disk. It is not until this happens
> >>> that the message will be acknowledged by the broker. If low latency is
> >>> important, then having smaller but more numerous buffer pages will mean
> >>> the messages will not wait for very long before being written to disk
> >>> and acknowledged as received. However this occurs at the cost of some
> >>> efficiency, which can affect throughput. If you have large volumes of
> >>> messages and the throughput is more important, then using fewer but
> >>> larger buffer pages will help you.
> >>>
> >>> Be aware, however, that the product of the size and number of pages is
> >>> the total memory that will be consumed and held by the broker for
> >>> buffering *per queue*. If you have a very large number of queues, then
> >>> you must watch out that you don't over-size your write buffers or else
> >>> you will run out of memory.
> >>>
> >>> While I cannot give you specific answers, as these depend on your
> >>> performance priorities, I suggest some trial-and-error if you want to
> >>> adjust these values.
> >>>
> >>> The Transaction Prepared List (TPL) is a special global queue for
> >>> persisting transaction boundaries. As this info is usually small and
> >>> relatively infrequent, the tpl-* settings apply to this queue only and
> >>> the user has the option to use different values than the regular queues.
> >>> If you don't use transactions, then this can be ignored. It is not a
> >>> queue that can be written to directly, but the store creates its own
> >>> data that is saved in this queue. Adjusting the tpl-* settings depends
> >>> only on the frequency of transactions in the user's application or
> >>> use-case.
> >>>
> >>> Hope that helps,
> >>>
> >>> Kim van der Riet
> >>>
> >>> On 11/27/18 4:44 PM, rammohan ganapavarapu wrote:
> >>>> Kim,
> >>>>
> >>>> 1. My message size is around 80 KB, so what would be the suggested
> >>>> values for the properties below?
> >>>>
> >>>>
> >>>> wcache-page-size
> >>>> wcache-num-pages
> >>>> tpl-wcache-num-pages
> >>>> tpl-wcache-page-size
> >>>>
> >>>> Right now I have all defaults, so I am trying to see whether I can tune
> >>>> these values for my message size to avoid those AIO-busy cases. I have
> >>>> tried defining those properties/options in qpidd.conf, but when I run
> >>>> qpid-config queues it does not show those values on the queues created by
> >>>> the client application. Do I have to define those options when I create a
> >>>> queue instead of keeping them in qpidd.conf?
> >>>>
> >>>> 2. What is the difference between tpl-wcache-page-size and
> >>>> wcache-page-size?
> >>>>
> >>>> Thanks,
> >>>> Ram
> >>>>
> >>>> On Fri, Nov 16, 2018 at 9:26 AM Kim van der Riet <kvand...@redhat.com>
> >>>> wrote:
> >>>>
> >>>>> There is little documentation on linearstore. Certainly, the Apache docs
> >>>>> don't contain much. I think this is an oversight, but it won't get fixed
> >>>>> anytime soon.
> >>>>>
> >>>>> Kim
> >>>>>
> >>>>> On 11/16/18 12:11 PM, rammohan ganapavarapu wrote:
> >>>>>> Can anyone point me to a doc where I can read about the internals of
> >>>>>> how linearstore works and how qpid uses it?
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Ram
> >>>>>>
> >>>>>> On Mon, Nov 12, 2018 at 8:43 AM rammohan ganapavarapu <
> >>>>>> rammohanga...@gmail.com> wrote:
> >>>>>>
> >>>>>>> Kim,
> >>>>>>>
> >>>>>>> Thanks for clearing that up for me. Does it support SAN storage
> >>>>>>> blocks? Where can I read more about linearstore if I want to know the
> >>>>>>> low-level internals?
> >>>>>>>
> >>>>>>> Ram
> >>>>>>>
> >>>>>>> On Mon, Nov 12, 2018 at 8:32 AM Kim van der Riet <kvand...@redhat.com>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> The linearstore relies on using libaio for its async disk writes. The
> >>>>>>>> O_DIRECT flag is used, and this requires a block of aligned memory to
> >>>>>>>> serve as a memory buffer for disk write operations. To my knowledge,
> >>>>>>>> this technique only works with local disks and controllers. NFS does
> >>>>>>>> not allow for DMA memory writes to disk AFAIK, and for as long as I can
> >>>>>>>> remember, has been a problem for the linearstore. With some work it
> >>>>>>>> might be possible to make it work using another write technique though.
> >>>>>>>> NFS has never been a "supported" medium for linearstore.
> >>>>>>>>
> >>>>>>>> On 11/9/18 4:28 PM, rammohan ganapavarapu wrote:
> >>>>>>>>> But how would NFS cause this issue? I am interested because we are
> >>>>>>>>> using NFS (v4) in some environments, so I would like to learn the
> >>>>>>>>> appropriate tunings for when we use NFS.
> >>>>>>>>>
> >>>>>>>>> Thanks,
> >>>>>>>>> Ram
> >>>>>>>>>
> >>>>>>>>> On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
> >>>>>>>>> rammohanga...@gmail.com> wrote:
> >>>>>>>>>
> >>>>>>>>>> Sorry, I thought it was NFS, but it is actually a SAN storage volume.
> >>>>>>>>>>
> >>>>>>>>>> Thanks,
> >>>>>>>>>> Ram
> >>>>>>>>>>
> >>>>>>>>>> On Fri, Nov 9, 2018, 2:10 AM Gordon Sim <g...@redhat.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> On 08/11/18 16:56, rammohan ganapavarapu wrote:
> >>>>>>>>>>>> I was wrong about NFS for the qpid journal files; it looks like
> >>>>>>>>>>>> they are on NFS after all. So does NFS cause this issue?
> >>>>>>>>>>> Yes, I believe it does. What version of NFS are you using?