Re: Migrate from CPP Qpid to Java Qpid broker

2024-02-19 Thread rammohan ganapavarapu
Thank you.

On Mon, Feb 19, 2024, 4:38 AM Robbie Gemmell 
wrote:

> The two brokers use different stores, so you can't move the data files
> between them. I'm not aware of any tooling either, as it's not been a
> move many were typically ever interested in, having had reason to use
> one or the other originally. If you have content stored in one that you
> want to migrate, you would have to consume it and produce it into the
> other.
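
(For illustration, a minimal "drain and refill" sketch of consuming from one
broker and producing into the other, using the Qpid Proton Python client. It
assumes both brokers accept AMQP 1.0 connections; the hostnames, port and
queue name are placeholders, not defaults of either broker.)

    # Hypothetical migration shovel: receive from the old broker and forward
    # to the new one. Stop it once qpid-stat -q shows the source queue empty.
    from proton.handlers import MessagingHandler
    from proton.reactor import Container

    class Shovel(MessagingHandler):
        def __init__(self, source, target, address):
            super(Shovel, self).__init__()
            self.source, self.target, self.address = source, target, address

        def on_start(self, event):
            # Open the sender first so messages can be forwarded as they arrive.
            self.sender = event.container.create_sender(
                "%s/%s" % (self.target, self.address))
            event.container.create_receiver(
                "%s/%s" % (self.source, self.address))

        def on_message(self, event):
            # Forward each message; the original is settled automatically once
            # this handler returns (MessagingHandler auto-accept).
            self.sender.send(event.message)

    Container(Shovel("amqp://old-cpp-broker:5672",
                     "amqp://new-java-broker:5672",
                     "my-queue")).run()
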
>
> On Fri, 16 Feb 2024 at 17:00, rammohan ganapavarapu
>  wrote:
> >
> > Hi,
> >
> > Is there any way to migrate cpp-qpid broker to java qpid broker with out
> losing the messages in the queue? Does a Java broker use the same store as
> the cpp broker? If they are different are there any tools to migrate data
> from cpp store to java broker store?
> >
> > Thanks,
> > Ram
>


Migrate from CPP Qpid to Java Qpid broker

2024-02-16 Thread rammohan ganapavarapu
Hi,

Is there any way to migrate from the cpp qpid broker to the java qpid broker
without losing the messages in the queue? Does the Java broker use the same
store as the cpp broker? If they are different, are there any tools to
migrate data from the cpp store to the java broker store?

Thanks,
Ram


Re: Qpid broker EOL

2023-12-15 Thread rammohan ganapavarapu
Thank you!

On Fri, Dec 15, 2023 at 1:43 AM Robbie Gemmell 
wrote:

> We don't keep specific 'EOL details', so I can't refer you to any,
> even if you had indicated which version you are interested in.
>
> Essentially, if a version isn't on the website as a current release it
> is definitely EOL, though being on the website doesn't mean it isn't
> also effectively so; e.g. the cpp broker hasn't had a release in 5
> years.
>
> On Thu, 14 Dec 2023 at 23:18, rammohan ganapavarapu
>  wrote:
> >
> > Hi,
> >
> > Where can I find the EOL details for a specific Qpid broker version?
> >
> > Ram
>


Qpid broker EOL

2023-12-14 Thread rammohan ganapavarapu
Hi,

Where can I find the EOL details for a specific Qpid broker version?

Ram


Re: Benchmark comparison between Java broker and Cpp broker

2022-10-06 Thread rammohan ganapavarapu
Thank you Robbie!

Ram

On Thu, Oct 6, 2022 at 1:56 AM Robbie Gemmell 
wrote:

> I'm not aware of any comparison you could look at, but since messaging
> benchmarks are typically so dependent on the specific messaging use case
> and environment, if you want a comparison you should really run one
> with/in your own representative use case and environment.
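
(To make that concrete, here is a rough single-producer throughput probe,
sketched with the Qpid Proton Python client, that you could adapt to your own
representative use case. The broker URL, queue name, message count and 1 KiB
payload are placeholder assumptions, not a recommended benchmark.)

    # Hypothetical throughput probe; point it at either broker and compare.
    import time

    from proton import Message
    from proton.handlers import MessagingHandler
    from proton.reactor import Container

    class Probe(MessagingHandler):
        def __init__(self, url, count):
            super(Probe, self).__init__()
            self.url, self.count, self.sent, self.confirmed = url, count, 0, 0

        def on_start(self, event):
            self.start = time.time()
            event.container.create_sender(self.url)

        def on_sendable(self, event):
            # Keep the link full while credit is available.
            while event.sender.credit and self.sent < self.count:
                event.sender.send(Message(body="x" * 1024))  # 1 KiB payload
                self.sent += 1

        def on_accepted(self, event):
            self.confirmed += 1
            if self.confirmed == self.count:
                rate = self.count / (time.time() - self.start)
                print("%.0f msgs/s" % rate)
                event.connection.close()

    Container(Probe("amqp://localhost:5672/perf-test", 10000)).run()
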
>
> On Wed, 5 Oct 2022 at 22:39, rammohan ganapavarapu
>  wrote:
> >
> > Anyone have any ideas?
> >
> > On Tue, Sep 20, 2022 at 10:52 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I have been using a CPP based qpid broker for a while and i am trying
> to
> > > move to Java based broker, i just wanted to see if there are any
> > > perf/benchmarking comparison between these two.
> > >
> > > Thanks,
> > > Ram
> > >
>


Re: Benchmark comparison between Java broker and Cpp broker

2022-10-05 Thread rammohan ganapavarapu
Anyone have any ideas?

On Tue, Sep 20, 2022 at 10:52 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Hi,
>
> I have been using a CPP based qpid broker for a while and i am trying to
> move to Java based broker, i just wanted to see if there are any
> perf/benchmarking comparison between these two.
>
> Thanks,
> Ram
>


Benchmark comparison between Java broker and Cpp broker

2022-09-20 Thread rammohan ganapavarapu
Hi,

I have been using a CPP-based qpid broker for a while and I am trying to
move to the Java-based broker. I just wanted to see if there are any
perf/benchmarking comparisons between the two.

Thanks,
Ram


How to increase the limits on existing queue

2022-04-14 Thread rammohan ganapavarapu
Hi,

I have a queue created with default limits and now I want to increase the
queue limits to hold more messages.

test-queue  --durable --file-size=2000
--file-count=24 --max-queue-size=2147483647 --max-queue-count=100
--limit-policy=flow-to-disk --argument no-local=False

Is it possible to adjust the --max-queue-size and --max-queue-count on an
existing queue that is holding messages, without losing any of those
messages?


My broker version is qpid-cpp-server-1.39.

Thanks,
Ram


Re: qpid-cpp-server 1.36 for CentOS6/RHEL6

2020-06-02 Thread rammohan ganapavarapu
thank you

On Mon, Jun 1, 2020 at 6:07 PM Virgilio Fornazin 
wrote:

> https://www.dropbox.com/s/by03lcmhbirqx9m/qpid136rhel7.tar?dl=0
>
> it expires in 24 hours
>
> On Mon, Jun 1, 2020 at 2:08 PM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Virrgilio,
> >
> > The link you have provided is still exist? i am getting 404, can you
> please
> > share again?
> >
> > Ram
> >
> > On Sat, May 30, 2020 at 3:50 AM Virgilio Fornazin <
> > virgilioforna...@gmail.com> wrote:
> >
> > > I've built by myself for rhel6, 7, 8 ...
> > >
> > > https://www.dropbox.com/s/kawv8dfrf3ez4x8/qpid136rhel6.tar?dl=0
> > >
> > >
> > > On Sat, May 30, 2020 at 1:18 AM rammohan ganapavarapu <
> > > rammohanga...@gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > I am looking for the qpid-cpp-server 1.36 and dependent rpms  for
> > > > CentOS6/RHEL6, can some one suggest where can i download them?
> > > >
> > > > Thanks,
> > > > Ram
> > > >
> > >
> >
>


Re: qpid-cpp-server 1.36 for CentOS6/RHEL6

2020-06-01 Thread rammohan ganapavarapu
Virgilio,

Does the link you provided still exist? I am getting a 404; can you please
share it again?

Ram

On Sat, May 30, 2020 at 3:50 AM Virgilio Fornazin <
virgilioforna...@gmail.com> wrote:

> I've built it myself for rhel6, 7, 8 ...
>
> https://www.dropbox.com/s/kawv8dfrf3ez4x8/qpid136rhel6.tar?dl=0
>
>
> On Sat, May 30, 2020 at 1:18 AM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Hi,
> >
> > I am looking for the qpid-cpp-server 1.36 and dependent rpms  for
> > CentOS6/RHEL6, can some one suggest where can i download them?
> >
> > Thanks,
> > Ram
> >
>


Re: qpid-cpp-server 1.36 for CentOS6/RHEL6

2020-05-30 Thread rammohan ganapavarapu
Thank you

Ram

On Sat, May 30, 2020, 3:50 AM Virgilio Fornazin 
wrote:

> I've built it myself for rhel6, 7, 8 ...
>
> https://www.dropbox.com/s/kawv8dfrf3ez4x8/qpid136rhel6.tar?dl=0
>
>
> On Sat, May 30, 2020 at 1:18 AM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Hi,
> >
> > I am looking for the qpid-cpp-server 1.36 and dependent rpms  for
> > CentOS6/RHEL6, can some one suggest where can i download them?
> >
> > Thanks,
> > Ram
> >
>


qpid-cpp-server 1.36 for CentOS6/RHEL6

2020-05-29 Thread rammohan ganapavarapu
Hi,

I am looking for the qpid-cpp-server 1.36 and dependent rpms for
CentOS6/RHEL6; can someone suggest where I can download them?

Thanks,
Ram


Re: limit policy in cpp broker 1.3*

2020-03-04 Thread rammohan ganapavarapu
Thank you. Are there any guidelines or best practices on using paging?

Thanks,
Ram

On Wed, Mar 4, 2020 at 1:56 PM Gordon Sim  wrote:

> On 04/03/2020 9:46 pm, rammohan ganapavarapu wrote:
> > Gordon,
> >
> > Thanks for quick response, so what are the available limit-policis? is
> > there any config option to make messages to keep on disk when queue
> > limit is reached?
>
> There is a paging option, see
>
> http://qpid.2158936.n2.nabble.com/Flow-to-disk-functionality-and-its-replacement-in-0-24-tp7597697p7597698.html
> for details
>
> > Also since flow-to-disk is not available what will happen to the
> > incoming messages when queue limit is reached? also what is the default
> > limit-policy if nothing configured?
>
> The default is to reject messages over the limit. It can also be
> configured to either drop messages or delete the queue (which is mostly
> used to limit depth of subscription queues).
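
(As an aside, this is roughly how a producer might observe that default
reject behaviour over AMQP 1.0, sketched with the Qpid Proton Python client.
The URL, queue name and message count are placeholders, and with the AMQP
0-10 protocol the failure may instead surface as a session exception rather
than a per-message rejection.)

    # Hypothetical producer that keeps sending until the broker starts
    # rejecting deliveries because the queue has reached its limit.
    from proton import Message
    from proton.handlers import MessagingHandler
    from proton.reactor import Container

    class Producer(MessagingHandler):
        def __init__(self, url, count):
            super(Producer, self).__init__()
            self.url, self.count, self.sent = url, count, 0

        def on_start(self, event):
            event.container.create_sender(self.url)

        def on_sendable(self, event):
            while event.sender.credit and self.sent < self.count:
                event.sender.send(Message(body="payload %d" % self.sent))
                self.sent += 1

        def on_rejected(self, event):
            # With the default limit policy, messages over the limit are
            # rejected; the producer sees the rejected delivery here.
            print("delivery rejected - queue is at its configured limit")
            event.connection.close()

    Container(Producer("amqp://localhost:5672/test-queue", 1000000)).run()
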
>
>


Re: limit policy in cpp broker 1.3*

2020-03-04 Thread rammohan ganapavarapu
Gordon,

Thanks for the quick response. What are the available limit policies? Is
there any config option to keep messages on disk when the queue limit is
reached?

Also, since flow-to-disk is not available, what will happen to incoming
messages when the queue limit is reached? And what is the default
limit-policy if nothing is configured?

Thanks,
Ram

On Wed, Mar 4, 2020 at 12:56 PM Gordon Sim  wrote:

> On 04/03/2020 5:49 pm, rammohan ganapavarapu wrote:
> > Hi,
> >
> > What is  the queue limit policy in qpid-1.3* version? is
> > --limit-policy=flow-to-disk
> > still valid? as per the below doc it seems like it is still valid but i
> am
> > getting warning message in broker logs
> >
> > [Broker] warning Unrecognised policy option: flow_to_disk
> >
> >
> https://qpid.apache.org/releases/qpid-cpp-1.39.0/cpp-broker/book/chapter-Managing-CPP-Broker.html
>
> Flow to disk has not been supported for quite some time[1]. The doc is
> out of date, but usage of qpid-config should now be accurate.
>
> [1]
>
> http://qpid.2158936.n2.nabble.com/Flow-to-disk-functionality-and-its-replacement-in-0-24-tp7597697p7597698.html
>
>


limit policy in cpp broker 1.3*

2020-03-04 Thread rammohan ganapavarapu
Hi,

What is the queue limit policy in the qpid-1.3* versions? Is
--limit-policy=flow-to-disk still valid? As per the doc below it seems like
it is, but I am getting this warning message in the broker logs:

[Broker] warning Unrecognised policy option: flow_to_disk

https://qpid.apache.org/releases/qpid-cpp-1.39.0/cpp-broker/book/chapter-Managing-CPP-Broker.html

Thanks,
Ram


Re: qpid-cpp-0.35 errors

2018-12-28 Thread rammohan ganapavarapu
Kim,

Are there any tools to read or dump messages from the journal files?

Thanks,
Ram

On Fri, Dec 7, 2018 at 11:32 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Kim,
>
> We have only one main queue and one dead letter queue, and we have 10
> produces and 12 consumers and producers pump 1k messages/sec. Below is
> qpid-stat -q output when it stopped taking any more messages.
>
> bash-4.1# qpid-stat -q
> Queues
>   queue dur  autoDel  excl  msg
>  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
>
> =
>   ax-q-axgroup-001-consumer-group-001   Y  0
>  114k   114k  0   5.88g5.88g   11 2
>   ax-q-axgroup-001-consumer-group-001-dlY  0
>  0  0   0  00 0 2
>   d72e183e-f0df-457c-89a7-81a2cff509c8:0.0   YY0
>  0  0   0  00 1 2
>
>
> Thanks,
> Ram
>
>
>
>
> On Fri, Dec 7, 2018 at 9:00 AM Kim van der Riet 
> wrote:
>
>> This looks like a bug to me, and that is why I am keen to see a
>> reproducer if you can find one. How many queues are there? How many
>> producers and consumers are there for each queue? How are the consumers
>> working? Are they configured as listeners, or do they poll for new
>> messages? How frequently? How long does it take under these conditions
>> for the error to occur typically? If I can get some kind of idea what
>> the runtime conditions are, it will give me some idea where to look.
>>
>> If you set the broker to use INFO+ logging (log-enable=info+), then you
>> should see some detail about the starting and recovery of the store when
>> the broker starts, which should include this info. The store settings in
>> the config file are global, so when you set a particular buffer
>> configuration, all queues will use this. It should be reported during
>> startup when using INFO+ level logging. Watch your log size, however, as
>> using this level will make the logs big.
>>
>> On 12/5/18 5:06 PM, rammohan ganapavarapu wrote:
>> > Kim,
>> >
>> > We have set wcache-page-size=128 in qpidd.conf, restarted broker and let
>> > client recreated the queues fresh, we still getting this error, how do
>> we
>> > find if queues created by client actually have this
>> wcache-page-size=128?
>> >
>> > 2018-12-05 21:18:16 [Protocol] error Connection
>> > qpid.:5672-:17769 closed by error: Queue :
>> > MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue()
>> threw
>> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue returned
>> > partly completed (state ENQ_PART). (This data_tok: id=456535 state=NONE)
>> >
>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>> >
>> > Thanks,
>> > Ram
>> >
>> >
>> >
>> > On Tue, Dec 4, 2018 at 8:18 AM rammohan ganapavarapu <
>> > rammohanga...@gmail.com> wrote:
>> >
>> >> Kim,
>> >>
>> >> Thank you, i will play with that setting, please let me know if any
>> other
>> >> tunings will help.
>> >>
>> >> Ram
>> >>
>> >> On Wed, Nov 28, 2018 at 8:04 AM Kim van der Riet 
>> >> wrote:
>> >>
>> >>> The answer to your first question depends on what is more important to
>> >>> you - low latency or high throughput. Messages to be persisted will
>> >>> accumulate in a buffer page until it is full or until a timer is
>> >>> triggered, then it will be written to disk. It is not until this
>> happens
>> >>> that the message will be acknowledged by the broker. If low latency is
>> >>> important, then having smaller but more numerous buffer pages will
>> mean
>> >>> the messages will not wait for very long before being written to disk
>> >>> and acknowledged as received. However this occurs at the cost of some
>> >>> efficiency, which can affect throughput. If you have large volumes of
>> >>> messages and the throughput is more important, then using fewer but
>> >>> larger buffer pages will help you.
>> >>>
>> >>> Be aware, however, that the product of the size and number of pages is
>> >>> the total memory that will be consumed and held by the brok

Re: qpid-cpp-0.35 errors

2018-12-07 Thread rammohan ganapavarapu
Kim,

We have only one main queue and one dead-letter queue, with 10 producers and
12 consumers; the producers pump 1k messages/sec. Below is the qpid-stat -q
output from when the broker stopped taking any more messages.

bash-4.1# qpid-stat -q
Queues
  queue                                       dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  ========================================================================================================================
  ax-q-axgroup-001-consumer-group-001         Y                   0    114k   114k    0      5.88g    5.88g     11    2
  ax-q-axgroup-001-consumer-group-001-dl      Y                   0    0      0       0      0        0         0     2
  d72e183e-f0df-457c-89a7-81a2cff509c8:0.0         Y        Y     0    0      0       0      0        0         1     2


Thanks,
Ram
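
(For anyone trying to put together the kind of reproducer Kim asks about
below, here is a minimal load-generator sketch using the Qpid Proton Python
client that approximates the workload described above, with the roughly 80 kb
message size mentioned later in this thread. The broker URL, queue name,
payload size and message count are assumptions, not taken from the original
setup.)

    # Hypothetical load generator; run ~10 copies to mimic the producers above.
    from proton import Message
    from proton.handlers import MessagingHandler
    from proton.reactor import Container

    PAYLOAD = "x" * 80 * 1024  # ~80 kb body

    class LoadGen(MessagingHandler):
        def __init__(self, url, count):
            super(LoadGen, self).__init__()
            self.url, self.count, self.sent, self.confirmed = url, count, 0, 0

        def on_start(self, event):
            event.container.create_sender(self.url)

        def on_sendable(self, event):
            while event.sender.credit and self.sent < self.count:
                event.sender.send(Message(body=PAYLOAD, durable=True))
                self.sent += 1

        def on_accepted(self, event):
            self.confirmed += 1
            if self.confirmed == self.count:
                event.connection.close()

    Container(LoadGen(
        "amqp://localhost:5672/ax-q-axgroup-001-consumer-group-001",
        100000)).run()
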




On Fri, Dec 7, 2018 at 9:00 AM Kim van der Riet  wrote:

> This looks like a bug to me, and that is why I am keen to see a
> reproducer if you can find one. How many queues are there? How many
> producers and consumers are there for each queue? How are the consumers
> working? Are they configured as listeners, or do they poll for new
> messages? How frequently? How long does it take under these conditions
> for the error to occur typically? If I can get some kind of idea what
> the runtime conditions are, it will give me some idea where to look.
>
> If you set the broker to use INFO+ logging (log-enable=info+), then you
> should see some detail about the starting and recovery of the store when
> the broker starts, which should include this info. The store settings in
> the config file are global, so when you set a particular buffer
> configuration, all queues will use this. It should be reported during
> startup when using INFO+ level logging. Watch your log size, however, as
> using this level will make the logs big.
>
> On 12/5/18 5:06 PM, rammohan ganapavarapu wrote:
> > Kim,
> >
> > We have set wcache-page-size=128 in qpidd.conf, restarted broker and let
> > client recreated the queues fresh, we still getting this error, how do we
> > find if queues created by client actually have this wcache-page-size=128?
> >
> > 2018-12-05 21:18:16 [Protocol] error Connection
> > qpid.:5672-:17769 closed by error: Queue :
> > MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue() threw
> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue returned
> > partly completed (state ENQ_PART). (This data_tok: id=456535 state=NONE)
> >
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
> >
> > Thanks,
> > Ram
> >
> >
> >
> > On Tue, Dec 4, 2018 at 8:18 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> Kim,
> >>
> >> Thank you, i will play with that setting, please let me know if any
> other
> >> tunings will help.
> >>
> >> Ram
> >>
> >> On Wed, Nov 28, 2018 at 8:04 AM Kim van der Riet 
> >> wrote:
> >>
> >>> The answer to your first question depends on what is more important to
> >>> you - low latency or high throughput. Messages to be persisted will
> >>> accumulate in a buffer page until it is full or until a timer is
> >>> triggered, then it will be written to disk. It is not until this
> happens
> >>> that the message will be acknowledged by the broker. If low latency is
> >>> important, then having smaller but more numerous buffer pages will mean
> >>> the messages will not wait for very long before being written to disk
> >>> and acknowledged as received. However this occurs at the cost of some
> >>> efficiency, which can affect throughput. If you have large volumes of
> >>> messages and the throughput is more important, then using fewer but
> >>> larger buffer pages will help you.
> >>>
> >>> Be aware, however, that the product of the size and number of pages is
> >>> the total memory that will be consumed and held by the broker for
> >>> buffering *per queue*. If you have a very large number of queues, then
> >>> you must watch out that you don't over-size your write buffers or else
> >>> you will run out of memory.
> >>>
> >>> While I cannot give you specific answers, as these depend on your
> >>> performance priorities, I suggest some trial-and-error if you want to
> >>> adjust these values.
> >>>
> >>> The Transaction Prepared List (TPL) is a special global queue for
> >>> persisting transaction boundar

Re: qpid-cpp-0.35 errors

2018-12-05 Thread rammohan ganapavarapu
Kim,

We have set wcache-page-size=128 in qpidd.conf, restarted the broker, and let
the client recreate the queues fresh, but we are still getting this error. How
do we find out whether the queues created by the client actually have
wcache-page-size=128?

2018-12-05 21:18:16 [Protocol] error Connection
qpid.:5672-:17769 closed by error: Queue :
MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue() threw
JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue returned
partly completed (state ENQ_PART). (This data_tok: id=456535 state=NONE)
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)

Thanks,
Ram



On Tue, Dec 4, 2018 at 8:18 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Kim,
>
> Thank you, i will play with that setting, please let me know if any other
> tunings will help.
>
> Ram
>
> On Wed, Nov 28, 2018 at 8:04 AM Kim van der Riet 
> wrote:
>
>> The answer to your first question depends on what is more important to
>> you - low latency or high throughput. Messages to be persisted will
>> accumulate in a buffer page until it is full or until a timer is
>> triggered, then it will be written to disk. It is not until this happens
>> that the message will be acknowledged by the broker. If low latency is
>> important, then having smaller but more numerous buffer pages will mean
>> the messages will not wait for very long before being written to disk
>> and acknowledged as received. However this occurs at the cost of some
>> efficiency, which can affect throughput. If you have large volumes of
>> messages and the throughput is more important, then using fewer but
>> larger buffer pages will help you.
>>
>> Be aware, however, that the product of the size and number of pages is
>> the total memory that will be consumed and held by the broker for
>> buffering *per queue*. If you have a very large number of queues, then
>> you must watch out that you don't over-size your write buffers or else
>> you will run out of memory.
>>
>> While I cannot give you specific answers, as these depend on your
>> performance priorities, I suggest some trial-and-error if you want to
>> adjust these values.
>>
>> The Transaction Prepared List (TPL) is a special global queue for
>> persisting transaction boundaries. As this info is usually small and
>> relatively infrequent, the tpl-* settings apply to this queue only and
>> the user has the option to use different values than the regular queues.
>> If you don't use transactions, then this can be ignored. It is not a
>> queue that can be written to directly, but the store creates its own
>> data that is saved in this queue. Adjusting the tpl-* settings depends
>> only on the frequency of transactions in the user's application or
>> use-case.
>>
>> Hope that helps,
>>
>> Kim van der Riet
>>
>> On 11/27/18 4:44 PM, rammohan ganapavarapu wrote:
>> > Kim,
>> >
>> > 1. My message size is around 80kb, so what would be suggested values for
>> > the blow properties?
>> >
>> >
>> > wcache-page-size
>> > wcache-num-pages
>> > tpl-wcache-num-pages
>> > tpl-wcache-page-size
>> >
>> > right now i have all defaults, so i am trying to see if i can tune these
>> > values for my messages size to avoid those AIO busy cases.  I have try
>> to
>> > define those properties/options in qpidd.conf file but when i run
>> > qpid-config queues its not showing those values on my queues created by
>> > client application, do i have to define those options when i create
>> queue
>> > instead of keep them in qpidd.conf?
>> >
>> > 2. What is difference b/w tpl-wcache-page-size and wcache-page-size
>> >
>> > Thanks,
>> > Ram
>> >
>> > On Fri, Nov 16, 2018 at 9:26 AM Kim van der Riet 
>> > wrote:
>> >
>> >> There is little documentation on linearstore. Certainly, the Apache
>> docs
>> >> don't contain much. I think this is an oversight, but it won't get
>> fixed
>> >> anytime soon.
>> >>
>> >> Kim
>> >>
>> >> On 11/16/18 12:11 PM, rammohan ganapavarapu wrote:
>> >>> Any one point me to the doc where i can read internals about how
>> >>> linearstore works and how qpid uses it?
>> >>>
>> >>> Thanks,
>> >>> Ram
>> >>>
>> >>> On Mon, Nov 12, 2018 at 8:43 AM rammohan ganapavarapu <
>> >>> rammohanga...@gmail.com> wrote:
>> >>>
&

Re: qpid-cpp-0.35 errors

2018-12-04 Thread rammohan ganapavarapu
Kim,

Thank you, I will play with that setting. Please let me know if any other
tunings would help.

Ram

On Wed, Nov 28, 2018 at 8:04 AM Kim van der Riet 
wrote:

> The answer to your first question depends on what is more important to
> you - low latency or high throughput. Messages to be persisted will
> accumulate in a buffer page until it is full or until a timer is
> triggered, then it will be written to disk. It is not until this happens
> that the message will be acknowledged by the broker. If low latency is
> important, then having smaller but more numerous buffer pages will mean
> the messages will not wait for very long before being written to disk
> and acknowledged as received. However this occurs at the cost of some
> efficiency, which can affect throughput. If you have large volumes of
> messages and the throughput is more important, then using fewer but
> larger buffer pages will help you.
>
> Be aware, however, that the product of the size and number of pages is
> the total memory that will be consumed and held by the broker for
> buffering *per queue*. If you have a very large number of queues, then
> you must watch out that you don't over-size your write buffers or else
> you will run out of memory.
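
(A quick back-of-envelope check of that per-queue memory cost. It assumes
wcache-page-size is expressed in KiB and that the write buffer is allocated
per durable queue as described above; the page count and queue count are
made-up values for illustration, not the broker's defaults.)

    # Write-buffer memory = page size * number of pages, per queue.
    wcache_page_size_kib = 128  # the value set in qpidd.conf in this thread
    wcache_num_pages = 16       # assumed page count, not the actual default
    num_queues = 3              # e.g. main queue, DLQ and one temporary queue

    per_queue_mib = wcache_page_size_kib * wcache_num_pages / 1024.0
    print("per-queue write buffer: %.1f MiB" % per_queue_mib)  # 2.0 MiB
    print("all write buffers: %.1f MiB" % (per_queue_mib * num_queues))  # 6.0 MiB
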
>
> While I cannot give you specific answers, as these depend on your
> performance priorities, I suggest some trial-and-error if you want to
> adjust these values.
>
> The Transaction Prepared List (TPL) is a special global queue for
> persisting transaction boundaries. As this info is usually small and
> relatively infrequent, the tpl-* settings apply to this queue only and
> the user has the option to use different values than the regular queues.
> If you don't use transactions, then this can be ignored. It is not a
> queue that can be written to directly, but the store creates its own
> data that is saved in this queue. Adjusting the tpl-* settings depends
> only on the frequency of transactions in the user's application or
> use-case.
>
> Hope that helps,
>
> Kim van der Riet
>
> On 11/27/18 4:44 PM, rammohan ganapavarapu wrote:
> > Kim,
> >
> > 1. My message size is around 80kb, so what would be suggested values for
> > the blow properties?
> >
> >
> > wcache-page-size
> > wcache-num-pages
> > tpl-wcache-num-pages
> > tpl-wcache-page-size
> >
> > right now i have all defaults, so i am trying to see if i can tune these
> > values for my messages size to avoid those AIO busy cases.  I have try to
> > define those properties/options in qpidd.conf file but when i run
> > qpid-config queues its not showing those values on my queues created by
> > client application, do i have to define those options when i create queue
> > instead of keep them in qpidd.conf?
> >
> > 2. What is difference b/w tpl-wcache-page-size and wcache-page-size
> >
> > Thanks,
> > Ram
> >
> > On Fri, Nov 16, 2018 at 9:26 AM Kim van der Riet 
> > wrote:
> >
> >> There is little documentation on linearstore. Certainly, the Apache docs
> >> don't contain much. I think this is an oversight, but it won't get fixed
> >> anytime soon.
> >>
> >> Kim
> >>
> >> On 11/16/18 12:11 PM, rammohan ganapavarapu wrote:
> >>> Any one point me to the doc where i can read internals about how
> >>> linearstore works and how qpid uses it?
> >>>
> >>> Thanks,
> >>> Ram
> >>>
> >>> On Mon, Nov 12, 2018 at 8:43 AM rammohan ganapavarapu <
> >>> rammohanga...@gmail.com> wrote:
> >>>
> >>>> Kim,
> >>>>
> >>>> Thanks for clearing that up for me, does it support SAN storage
> blocks.
> >>>> Where can i read more about linearstore if i want to know the low
> level
> >>>> internals?
> >>>>
> >>>> Ram
> >>>>
> >>>> On Mon, Nov 12, 2018 at 8:32 AM Kim van der Riet  >
> >>>> wrote:
> >>>>
> >>>>> The linearstore relies on using libaio for its async disk writes. The
> >>>>> O_DIRECT flag is used, and this requires a block of aligned memory to
> >>>>> serve as a memory buffer for disk write operations. To my knowledge,
> >>>>> this technique only works with local disks and controllers. NFS does
> >> not
> >>>>> allow for DMA memory writes to disk AFAIK, and for as long as I can
> >>>>> remember, has been a problem for the linearstore. With some work it
> >>>>> might be possible to make it work using another 

Re: qpid-cpp-0.35 errors

2018-11-27 Thread rammohan ganapavarapu
Kim,

1. My message size is around 80kb, so what would be the suggested values for
the below properties?

wcache-page-size
wcache-num-pages
tpl-wcache-num-pages
tpl-wcache-page-size

Right now I have all defaults, so I am trying to see if I can tune these
values for my message size to avoid those AIO busy cases. I have tried to
define those properties/options in the qpidd.conf file, but when I run
qpid-config queues it does not show those values on the queues created by the
client application. Do I have to define those options when I create the queue
instead of keeping them in qpidd.conf?

2. What is the difference between tpl-wcache-page-size and wcache-page-size?

Thanks,
Ram

On Fri, Nov 16, 2018 at 9:26 AM Kim van der Riet 
wrote:

> There is little documentation on linearstore. Certainly, the Apache docs
> don't contain much. I think this is an oversight, but it won't get fixed
> anytime soon.
>
> Kim
>
> On 11/16/18 12:11 PM, rammohan ganapavarapu wrote:
> > Any one point me to the doc where i can read internals about how
> > linearstore works and how qpid uses it?
> >
> > Thanks,
> > Ram
> >
> > On Mon, Nov 12, 2018 at 8:43 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> Kim,
> >>
> >> Thanks for clearing that up for me, does it support SAN storage blocks.
> >> Where can i read more about linearstore if i want to know the low level
> >> internals?
> >>
> >> Ram
> >>
> >> On Mon, Nov 12, 2018 at 8:32 AM Kim van der Riet 
> >> wrote:
> >>
> >>> The linearstore relies on using libaio for its async disk writes. The
> >>> O_DIRECT flag is used, and this requires a block of aligned memory to
> >>> serve as a memory buffer for disk write operations. To my knowledge,
> >>> this technique only works with local disks and controllers. NFS does
> not
> >>> allow for DMA memory writes to disk AFAIK, and for as long as I can
> >>> remember, has been a problem for the linearstore. With some work it
> >>> might be possible to make it work using another write technique though.
> >>> NFS has never been a "supported" medium for linearstore.
> >>>
> >>> On 11/9/18 4:28 PM, rammohan ganapavarapu wrote:
> >>>> But how does NFS will cause this issue, i am interested to see because
> >>> we
> >>>> are using NFS (V4 version) in some environments, so wanted to learn
> >>> tunings
> >>>> when we use NFS.
> >>>>
> >>>> Thanks,
> >>>> Ram
> >>>>
> >>>> On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
> >>>> rammohanga...@gmail.com> wrote:
> >>>>
> >>>>> Sorry, i thought it's NFS but it's actually SAN storage volume.
> >>>>>
> >>>>> Thanks,
> >>>>> Ram
> >>>>>
> >>>>> On Fri, Nov 9, 2018, 2:10 AM Gordon Sim  >>>>>
> >>>>>> On 08/11/18 16:56, rammohan ganapavarapu wrote:
> >>>>>>> I was wrong about the NFS for qpid journal files, looks like they
> >>> are on
> >>>>>>> NFS, so does NFS cause this issue?
> >>>>>> Yes, I believe it does. What version of NFS are you using?
> >>>>>>
> >>>>>>


Re: qpid-cpp-0.35 errors

2018-11-16 Thread rammohan ganapavarapu
Kim,

Actually we are using qpid as part of our application, and a customer is
using that application. They are facing this issue intermittently, but we
still don't know in what scenario it happens. I tried the same application
with NFS but still couldn't reproduce it. We took a tcpdump, and I see that
the TCP trace doesn't contain the full message; it gets truncated before the
broker closes the TCP connection.

Thanks,
Ram

On Fri, Nov 16, 2018 at 8:33 AM Kim van der Riet 
wrote:

> Did you find a reproducer at all?
>
> Kim
>
> On 11/12/18 11:43 AM, rammohan ganapavarapu wrote:
> > Kim,
> >
> > Thanks for clearing that up for me, does it support SAN storage blocks.
> > Where can i read more about linearstore if i want to know the low level
> > internals?
> >
> > Ram
> >
> > On Mon, Nov 12, 2018 at 8:32 AM Kim van der Riet 
> > wrote:
> >
> >> The linearstore relies on using libaio for its async disk writes. The
> >> O_DIRECT flag is used, and this requires a block of aligned memory to
> >> serve as a memory buffer for disk write operations. To my knowledge,
> >> this technique only works with local disks and controllers. NFS does not
> >> allow for DMA memory writes to disk AFAIK, and for as long as I can
> >> remember, has been a problem for the linearstore. With some work it
> >> might be possible to make it work using another write technique though.
> >> NFS has never been a "supported" medium for linearstore.
> >>
> >> On 11/9/18 4:28 PM, rammohan ganapavarapu wrote:
> >>> But how does NFS will cause this issue, i am interested to see because
> we
> >>> are using NFS (V4 version) in some environments, so wanted to learn
> >> tunings
> >>> when we use NFS.
> >>>
> >>> Thanks,
> >>> Ram
> >>>
> >>> On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
> >>> rammohanga...@gmail.com> wrote:
> >>>
> >>>> Sorry, i thought it's NFS but it's actually SAN storage volume.
> >>>>
> >>>> Thanks,
> >>>> Ram
> >>>>
> >>>> On Fri, Nov 9, 2018, 2:10 AM Gordon Sim  >>>>
> >>>>> On 08/11/18 16:56, rammohan ganapavarapu wrote:
> >>>>>> I was wrong about the NFS for qpid journal files, looks like they
> are
> >> on
> >>>>>> NFS, so does NFS cause this issue?
> >>>>> Yes, I believe it does. What version of NFS are you using?
> >>>>>


Re: qpid-cpp-0.35 errors

2018-11-16 Thread rammohan ganapavarapu
Can anyone point me to a doc where I can read about the internals of how
linearstore works and how qpid uses it?

Thanks,
Ram

On Mon, Nov 12, 2018 at 8:43 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Kim,
>
> Thanks for clearing that up for me, does it support SAN storage blocks.
> Where can i read more about linearstore if i want to know the low level
> internals?
>
> Ram
>
> On Mon, Nov 12, 2018 at 8:32 AM Kim van der Riet 
> wrote:
>
>> The linearstore relies on using libaio for its async disk writes. The
>> O_DIRECT flag is used, and this requires a block of aligned memory to
>> serve as a memory buffer for disk write operations. To my knowledge,
>> this technique only works with local disks and controllers. NFS does not
>> allow for DMA memory writes to disk AFAIK, and for as long as I can
>> remember, has been a problem for the linearstore. With some work it
>> might be possible to make it work using another write technique though.
>> NFS has never been a "supported" medium for linearstore.
>>
>> On 11/9/18 4:28 PM, rammohan ganapavarapu wrote:
>> > But how does NFS will cause this issue, i am interested to see because
>> we
>> > are using NFS (V4 version) in some environments, so wanted to learn
>> tunings
>> > when we use NFS.
>> >
>> > Thanks,
>> > Ram
>> >
>> > On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
>> > rammohanga...@gmail.com> wrote:
>> >
>> >> Sorry, i thought it's NFS but it's actually SAN storage volume.
>> >>
>> >> Thanks,
>> >> Ram
>> >>
>> >> On Fri, Nov 9, 2018, 2:10 AM Gordon Sim > >>
>> >>> On 08/11/18 16:56, rammohan ganapavarapu wrote:
>> >>>> I was wrong about the NFS for qpid journal files, looks like they
>> are on
>> >>>> NFS, so does NFS cause this issue?
>> >>> Yes, I believe it does. What version of NFS are you using?
>> >>>


Re: qpid-cpp-0.35 errors

2018-11-12 Thread rammohan ganapavarapu
Kim,

Thanks for clearing that up for me. Does it support SAN storage blocks?
Where can I read more about linearstore if I want to know the low-level
internals?

Ram

On Mon, Nov 12, 2018 at 8:32 AM Kim van der Riet 
wrote:

> The linearstore relies on using libaio for its async disk writes. The
> O_DIRECT flag is used, and this requires a block of aligned memory to
> serve as a memory buffer for disk write operations. To my knowledge,
> this technique only works with local disks and controllers. NFS does not
> allow for DMA memory writes to disk AFAIK, and for as long as I can
> remember, has been a problem for the linearstore. With some work it
> might be possible to make it work using another write technique though.
> NFS has never been a "supported" medium for linearstore.
>
> On 11/9/18 4:28 PM, rammohan ganapavarapu wrote:
> > But how does NFS will cause this issue, i am interested to see because we
> > are using NFS (V4 version) in some environments, so wanted to learn
> tunings
> > when we use NFS.
> >
> > Thanks,
> > Ram
> >
> > On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> Sorry, i thought it's NFS but it's actually SAN storage volume.
> >>
> >> Thanks,
> >> Ram
> >>
> >> On Fri, Nov 9, 2018, 2:10 AM Gordon Sim  >>
> >>> On 08/11/18 16:56, rammohan ganapavarapu wrote:
> >>>> I was wrong about the NFS for qpid journal files, looks like they are
> on
> >>>> NFS, so does NFS cause this issue?
> >>> Yes, I believe it does. What version of NFS are you using?
> >>>


Re: qpid-cpp-0.35 errors

2018-11-09 Thread rammohan ganapavarapu
But how would NFS cause this issue? I am interested because we are using NFS
(v4) in some environments, so I wanted to learn what tunings apply when we
use NFS.

Thanks,
Ram

On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Sorry, i thought it's NFS but it's actually SAN storage volume.
>
> Thanks,
> Ram
>
> On Fri, Nov 9, 2018, 2:10 AM Gordon Sim 
>> On 08/11/18 16:56, rammohan ganapavarapu wrote:
>> > I was wrong about the NFS for qpid journal files, looks like they are on
>> > NFS, so does NFS cause this issue?
>>
>> Yes, I believe it does. What version of NFS are you using?
>>


Re: qpid-cpp-0.35 errors

2018-11-09 Thread rammohan ganapavarapu
Sorry, I thought it was NFS but it's actually a SAN storage volume.

Thanks,
Ram

On Fri, Nov 9, 2018, 2:10 AM Gordon Sim wrote:
> On 08/11/18 16:56, rammohan ganapavarapu wrote:
> > I was wrong about the NFS for qpid journal files, looks like they are on
> > NFS, so does NFS cause this issue?
>
> Yes, I believe it does. What version of NFS are you using?
>


Re: qpid-cpp-0.35 errors

2018-11-08 Thread rammohan ganapavarapu
Kim/Gordon,

I was wrong about the storage for the qpid journal files; it looks like they
are on NFS after all. So does NFS cause this issue?

Ram

On Wed, Nov 7, 2018 at 12:18 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Kim,
>
> Ok, i am still trying to see what part of my java application is causing
> that issue, yes that issue is happening intermittently. Regarding
> "JERR_WMGR_ENQDISCONT" error, may be they are chained exceptions from the
> previous error JERR_JCNTL_AIOCMPLWAIT?
>
> Does message size contribute to this issue?
>
> Thanks,
> Ram
>
> On Wed, Nov 7, 2018 at 11:37 AM Kim van der Riet 
> wrote:
>
>> No, they are not.
>>
>> These two defines govern the number of sleeps and the sleep time while
>> waiting for before throwing an exception during recovery only. They do
>> not play a role during normal operation.
>>
>> If you are able to compile the broker code, you can try playing with
>> these values. But I don't think they will make much difference to the
>> overall problem. I think some of the other errors you have been seeing
>> prior to this one are closer to where the real problem lies - such as
>> the JRNL_WMGR_ENQDISCONT error.
>>
>> Do you have a reproducer of any kind? Does this error occur predictably
>> under some or other conditions?
>>
>> Thanks,
>>
>> Kim van der Riet
>>
>> On 11/7/18 12:51 PM, rammohan ganapavarapu wrote:
>> > Kim,
>> >
>> > I see these two settings from code, can these be configurable?
>> >
>> > #define MAX_AIO_SLEEPS 10 // tot: ~1 sec
>> >
>> > #define AIO_SLEEP_TIME_US  10 // 0.01 ms
>> >
>> >
>> > Ram
>> >
>> > On Wed, Nov 7, 2018 at 7:04 AM rammohan ganapavarapu <
>> > rammohanga...@gmail.com> wrote:
>> >
>> >> Thank you Kim, i will try your suggestions.
>> >>
>> >> On Wed, Nov 7, 2018, 6:58 AM Kim van der Riet > wrote:
>> >>
>> >>> This error is a linearstore issue. It looks as though there is a
>> single
>> >>> write operation to disk that has become stuck, and is holding up all
>> >>> further write operations. This happens because there is a fixed
>> circular
>> >>> pool of memory pages used for the AIO operations to disk, and when one
>> >>> of these is "busy" (indicated by the A letter in the  page state map),
>> >>> write operations cannot continue until it is cleared. It it does not
>> >>> clear within a certain time, then an exception is thrown, which
>> usually
>> >>> results in the broker closing the connection.
>> >>>
>> >>> The events leading up to a "stuck" write operation are complex and
>> >>> sometimes difficult to reproduce. If you have a reproducer, then I
>> would
>> >>> be interested to see it! Even so, the ability to reproduce on another
>> >>> machine is hard as it depends on such things as disk write speed, the
>> >>> disk controller characteristics, the number of threads in the thread
>> >>> pool (ie CPU type), memory and other hardware-related things.
>> >>>
>> >>> There are two linearstore parameters that you can try playing with to
>> >>> see if you can change the behavior of the store:
>> >>>
>> >>> wcache-page-size: This sets the size of each page in the write buffer.
>> >>> Larger page size is good for large messages, a smaller size will help
>> if
>> >>> you have small messages.
>> >>>
>> >>> wchache-num-pages: The total number of pages in the write buffer.
>> >>>
>> >>> Use the --help on the broker with the linearstore loaded to see more
>> >>> details on this. I hope that helps a little.
>> >>>
>> >>> Kim van der Riet
>> >>>
>> >>> On 11/6/18 2:12 PM, rammohan ganapavarapu wrote:
>> >>>> Any help in understand why/when broker throws those errors and stop
>> >>>> receiving message would be appreciated.
>> >>>>
>> >>>> Not sure if any kernel tuning or broker tuning needs to be done to
>> >>>> solve this issue.
>> >>>>
>> >>>> Thanks in advance,
>> >>>> Ram
>> >>>>
>> >>>> On Tue, Nov 6, 2018 at 8:35 AM rammohan ganapavarapu <
>> >>>> rammo

Re: qpid-cpp-0.35 errors

2018-11-08 Thread rammohan ganapavarapu
Do you have any kernel (net/disk) tuning recommendations for qpid-cpp with
linearstore?

Ram

On Thu, Nov 8, 2018 at 8:56 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Kim/Gordon,
>
> I was wrong about the NFS for qpid journal files, looks like they are on
> NFS, so does NFS cause this issue?
>
> Ram
>
> On Wed, Nov 7, 2018 at 12:18 PM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> Kim,
>>
>> Ok, i am still trying to see what part of my java application is causing
>> that issue, yes that issue is happening intermittently. Regarding
>> "JERR_WMGR_ENQDISCONT" error, may be they are chained exceptions from the
>> previous error JERR_JCNTL_AIOCMPLWAIT?
>>
>> Does message size contribute to this issue?
>>
>> Thanks,
>> Ram
>>
>> On Wed, Nov 7, 2018 at 11:37 AM Kim van der Riet 
>> wrote:
>>
>>> No, they are not.
>>>
>>> These two defines govern the number of sleeps and the sleep time while
>>> waiting for before throwing an exception during recovery only. They do
>>> not play a role during normal operation.
>>>
>>> If you are able to compile the broker code, you can try playing with
>>> these values. But I don't think they will make much difference to the
>>> overall problem. I think some of the other errors you have been seeing
>>> prior to this one are closer to where the real problem lies - such as
>>> the JRNL_WMGR_ENQDISCONT error.
>>>
>>> Do you have a reproducer of any kind? Does this error occur predictably
>>> under some or other conditions?
>>>
>>> Thanks,
>>>
>>> Kim van der Riet
>>>
>>> On 11/7/18 12:51 PM, rammohan ganapavarapu wrote:
>>> > Kim,
>>> >
>>> > I see these two settings from code, can these be configurable?
>>> >
>>> > #define MAX_AIO_SLEEPS 10 // tot: ~1 sec
>>> >
>>> > #define AIO_SLEEP_TIME_US  10 // 0.01 ms
>>> >
>>> >
>>> > Ram
>>> >
>>> > On Wed, Nov 7, 2018 at 7:04 AM rammohan ganapavarapu <
>>> > rammohanga...@gmail.com> wrote:
>>> >
>>> >> Thank you Kim, i will try your suggestions.
>>> >>
>>> >> On Wed, Nov 7, 2018, 6:58 AM Kim van der Riet >> wrote:
>>> >>
>>> >>> This error is a linearstore issue. It looks as though there is a
>>> single
>>> >>> write operation to disk that has become stuck, and is holding up all
>>> >>> further write operations. This happens because there is a fixed
>>> circular
>>> >>> pool of memory pages used for the AIO operations to disk, and when
>>> one
>>> >>> of these is "busy" (indicated by the A letter in the  page state
>>> map),
>>> >>> write operations cannot continue until it is cleared. It it does not
>>> >>> clear within a certain time, then an exception is thrown, which
>>> usually
>>> >>> results in the broker closing the connection.
>>> >>>
>>> >>> The events leading up to a "stuck" write operation are complex and
>>> >>> sometimes difficult to reproduce. If you have a reproducer, then I
>>> would
>>> >>> be interested to see it! Even so, the ability to reproduce on another
>>> >>> machine is hard as it depends on such things as disk write speed, the
>>> >>> disk controller characteristics, the number of threads in the thread
>>> >>> pool (ie CPU type), memory and other hardware-related things.
>>> >>>
>>> >>> There are two linearstore parameters that you can try playing with to
>>> >>> see if you can change the behavior of the store:
>>> >>>
>>> >>> wcache-page-size: This sets the size of each page in the write
>>> buffer.
>>> >>> Larger page size is good for large messages, a smaller size will
>>> help if
>>> >>> you have small messages.
>>> >>>
>>> >>> wchache-num-pages: The total number of pages in the write buffer.
>>> >>>
>>> >>> Use the --help on the broker with the linearstore loaded to see more
>>> >>> details on this. I hope that helps a little.
>>> >>>
>>> >>> Kim van der Riet
>>> >>>
>>> >&g

Re: qpid-cpp-0.35 errors

2018-11-07 Thread rammohan ganapavarapu
Kim,

OK, I am still trying to see what part of my Java application is causing
that issue; yes, it is happening intermittently. Regarding the
"JERR_WMGR_ENQDISCONT" error, maybe these are chained exceptions from the
previous error, JERR_JCNTL_AIOCMPLWAIT?

Does message size contribute to this issue?

Thanks,
Ram

On Wed, Nov 7, 2018 at 11:37 AM Kim van der Riet 
wrote:

> No, they are not.
>
> These two defines govern the number of sleeps and the sleep time to wait
> before throwing an exception, and they apply during recovery only. They do
> not play a role during normal operation.
>
> If you are able to compile the broker code, you can try playing with
> these values. But I don't think they will make much difference to the
> overall problem. I think some of the other errors you have been seeing
> prior to this one are closer to where the real problem lies - such as
> the JRNL_WMGR_ENQDISCONT error.
>
> Do you have a reproducer of any kind? Does this error occur predictably
> under some or other conditions?
>
> Thanks,
>
> Kim van der Riet
>
> On 11/7/18 12:51 PM, rammohan ganapavarapu wrote:
> > Kim,
> >
> > I see these two settings from code, can these be configurable?
> >
> > #define MAX_AIO_SLEEPS 10 // tot: ~1 sec
> >
> > #define AIO_SLEEP_TIME_US  10 // 0.01 ms
> >
> >
> > Ram
> >
> > On Wed, Nov 7, 2018 at 7:04 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> Thank you Kim, i will try your suggestions.
> >>
> >> On Wed, Nov 7, 2018, 6:58 AM Kim van der Riet  wrote:
> >>
> >>> This error is a linearstore issue. It looks as though there is a single
> >>> write operation to disk that has become stuck, and is holding up all
> >>> further write operations. This happens because there is a fixed
> circular
> >>> pool of memory pages used for the AIO operations to disk, and when one
> >>> of these is "busy" (indicated by the A letter in the  page state map),
> >>> write operations cannot continue until it is cleared. It it does not
> >>> clear within a certain time, then an exception is thrown, which usually
> >>> results in the broker closing the connection.
> >>>
> >>> The events leading up to a "stuck" write operation are complex and
> >>> sometimes difficult to reproduce. If you have a reproducer, then I
> would
> >>> be interested to see it! Even so, the ability to reproduce on another
> >>> machine is hard as it depends on such things as disk write speed, the
> >>> disk controller characteristics, the number of threads in the thread
> >>> pool (ie CPU type), memory and other hardware-related things.
> >>>
> >>> There are two linearstore parameters that you can try playing with to
> >>> see if you can change the behavior of the store:
> >>>
> >>> wcache-page-size: This sets the size of each page in the write buffer.
> >>> Larger page size is good for large messages, a smaller size will help
> if
> >>> you have small messages.
> >>>
> >>> wchache-num-pages: The total number of pages in the write buffer.
> >>>
> >>> Use the --help on the broker with the linearstore loaded to see more
> >>> details on this. I hope that helps a little.
> >>>
> >>> Kim van der Riet
> >>>
> >>> On 11/6/18 2:12 PM, rammohan ganapavarapu wrote:
> >>>> Any help in understand why/when broker throws those errors and stop
> >>>> receiving message would be appreciated.
> >>>>
> >>>> Not sure if any kernel tuning or broker tuning needs to be done to
> >>>> solve this issue.
> >>>>
> >>>> Thanks in advance,
> >>>> Ram
> >>>>
> >>>> On Tue, Nov 6, 2018 at 8:35 AM rammohan ganapavarapu <
> >>>> rammohanga...@gmail.com> wrote:
> >>>>
> >>>>> Also from this log message (store level) it seems like waiting for
> AIO
> >>> to
> >>>>> complete.
> >>>>>
> >>>>> 2018-10-28 12:27:01 [Store] critical Linear Store: Journal " >>>>> name>": get_events() returned JERR_JCNTL_AIOCMPLWAIT;
> >>>>> wmgr_status: wmgr: pi=25 pc=8 po=0 aer=1 edac=TFFF
> >>>>> ps=[-A--]
> >>>>>
> >>>>> page_state ps=[-A--]  where A is
> >

Re: qpid-cpp-0.35 errors

2018-11-07 Thread rammohan ganapavarapu
Kim,

I see these two settings in the code; can they be made configurable?

#define MAX_AIO_SLEEPS 10 // tot: ~1 sec

#define AIO_SLEEP_TIME_US  10 // 0.01 ms


Ram

On Wed, Nov 7, 2018 at 7:04 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Thank you Kim, i will try your suggestions.
>
> On Wed, Nov 7, 2018, 6:58 AM Kim van der Riet 
>> This error is a linearstore issue. It looks as though there is a single
>> write operation to disk that has become stuck, and is holding up all
>> further write operations. This happens because there is a fixed circular
>> pool of memory pages used for the AIO operations to disk, and when one
>> of these is "busy" (indicated by the A letter in the  page state map),
>> write operations cannot continue until it is cleared. It it does not
>> clear within a certain time, then an exception is thrown, which usually
>> results in the broker closing the connection.
>>
>> The events leading up to a "stuck" write operation are complex and
>> sometimes difficult to reproduce. If you have a reproducer, then I would
>> be interested to see it! Even so, the ability to reproduce on another
>> machine is hard as it depends on such things as disk write speed, the
>> disk controller characteristics, the number of threads in the thread
>> pool (ie CPU type), memory and other hardware-related things.
>>
>> There are two linearstore parameters that you can try playing with to
>> see if you can change the behavior of the store:
>>
>> wcache-page-size: This sets the size of each page in the write buffer.
>> Larger page size is good for large messages, a smaller size will help if
>> you have small messages.
>>
>> wchache-num-pages: The total number of pages in the write buffer.
>>
>> Use the --help on the broker with the linearstore loaded to see more
>> details on this. I hope that helps a little.
>>
>> Kim van der Riet
>>
>> On 11/6/18 2:12 PM, rammohan ganapavarapu wrote:
>> > Any help in understand why/when broker throws those errors and stop
>> > receiving message would be appreciated.
>> >
>> > Not sure if any kernel tuning or broker tuning needs to be done to
>> > solve this issue.
>> >
>> > Thanks in advance,
>> > Ram
>> >
>> > On Tue, Nov 6, 2018 at 8:35 AM rammohan ganapavarapu <
>> > rammohanga...@gmail.com> wrote:
>> >
>> >> Also from this log message (store level) it seems like waiting for AIO
>> to
>> >> complete.
>> >>
>> >> 2018-10-28 12:27:01 [Store] critical Linear Store: Journal "> >> name>": get_events() returned JERR_JCNTL_AIOCMPLWAIT;
>> >> wmgr_status: wmgr: pi=25 pc=8 po=0 aer=1 edac=TFFF
>> >> ps=[-A--]
>> >>
>> >> page_state ps=[-A--]  where A is
>> AIO_PENDING
>> >> aer=1 _aio_evt_rem;  ///< Remaining AIO events
>> >>
>> >> When there is or there are pending AIO, does broker close the
>> connection?
>> >> is there any tuning that can be done to resolve this?
>> >>
>> >> Thanks,
>> >> Ram
>> >>
>> >>
>> >>
>> >>
>> >> On Mon, Nov 5, 2018 at 8:55 PM rammohan ganapavarapu <
>> >> rammohanga...@gmail.com> wrote:
>> >>
>> >>> I was check the code and i see these lines for that AIO timeout.
>> >>>
>> >>>case
>> qpid::linearstore::journal::RHM_IORES_PAGE_AIOWAIT:
>> >>>      if (++aio_sleep_cnt > MAX_AIO_SLEEPS)
>> >>>  THROW_STORE_EXCEPTION("Timeout waiting for AIO in
>> >>> MessageStoreImpl::recoverMessages()");
>> >>>  ::usleep(AIO_SLEEP_TIME_US);
>> >>>  break;
>> >>>
>> >>> And these are the defaults
>> >>>
>> >>> #define MAX_AIO_SLEEPS 10 // tot: ~1 sec
>> >>>
>> >>> #define AIO_SLEEP_TIME_US  10 // 0.01 ms
>> >>>
>> >>>
>> >>>RHM_IORES_PAGE_AIOWAIT, ///< IO operation suspended - next page is
>> >>> waiting for AIO.
>> >>>
>> >>>
>> >>>
>> >>> So does page got blocked and its waiting for page availability?
>> >>>
>> >>>
>> >>> Ram
>> >>>
>> >>> On Mon

Re: qpid-cpp-0.35 errors

2018-11-07 Thread rammohan ganapavarapu
Thank you Kim, I will try your suggestions.

On Wed, Nov 7, 2018, 6:58 AM Kim van der Riet wrote:
> This error is a linearstore issue. It looks as though there is a single
> write operation to disk that has become stuck, and is holding up all
> further write operations. This happens because there is a fixed circular
> pool of memory pages used for the AIO operations to disk, and when one
> of these is "busy" (indicated by the A letter in the  page state map),
> write operations cannot continue until it is cleared. If it does not
> clear within a certain time, then an exception is thrown, which usually
> results in the broker closing the connection.
>
> The events leading up to a "stuck" write operation are complex and
> sometimes difficult to reproduce. If you have a reproducer, then I would
> be interested to see it! Even so, the ability to reproduce on another
> machine is hard as it depends on such things as disk write speed, the
> disk controller characteristics, the number of threads in the thread
> pool (ie CPU type), memory and other hardware-related things.
>
> There are two linearstore parameters that you can try playing with to
> see if you can change the behavior of the store:
>
> wcache-page-size: This sets the size of each page in the write buffer.
> Larger page size is good for large messages, a smaller size will help if
> you have small messages.
>
> wcache-num-pages: The total number of pages in the write buffer.
>
> Use the --help on the broker with the linearstore loaded to see more
> details on this. I hope that helps a little.
>
> Kim van der Riet
>
> On 11/6/18 2:12 PM, rammohan ganapavarapu wrote:
> > Any help in understand why/when broker throws those errors and stop
> > receiving message would be appreciated.
> >
> > Not sure if any kernel tuning or broker tuning needs to be done to
> > solve this issue.
> >
> > Thanks in advance,
> > Ram
> >
> > On Tue, Nov 6, 2018 at 8:35 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> Also from this log message (store level) it seems like waiting for AIO
> to
> >> complete.
> >>
> >> 2018-10-28 12:27:01 [Store] critical Linear Store: Journal " >> name>": get_events() returned JERR_JCNTL_AIOCMPLWAIT;
> >> wmgr_status: wmgr: pi=25 pc=8 po=0 aer=1 edac=TFFF
> >> ps=[-A--]
> >>
> >> page_state ps=[-A--]  where A is AIO_PENDING
> >> aer=1 _aio_evt_rem;  ///< Remaining AIO events
> >>
> >> When there is or there are pending AIO, does broker close the
> connection?
> >> is there any tuning that can be done to resolve this?
> >>
> >> Thanks,
> >> Ram
> >>
> >>
> >>
> >>
> >> On Mon, Nov 5, 2018 at 8:55 PM rammohan ganapavarapu <
> >> rammohanga...@gmail.com> wrote:
> >>
> >>> I was check the code and i see these lines for that AIO timeout.
> >>>
> >>>case qpid::linearstore::journal::RHM_IORES_PAGE_AIOWAIT:
> >>>  if (++aio_sleep_cnt > MAX_AIO_SLEEPS)
> >>>  THROW_STORE_EXCEPTION("Timeout waiting for AIO in
> >>> MessageStoreImpl::recoverMessages()");
> >>>  ::usleep(AIO_SLEEP_TIME_US);
> >>>  break;
> >>>
> >>> And these are the defaults
> >>>
> >>> #define MAX_AIO_SLEEPS 100000 // tot: ~1 sec
> >>>
> >>> #define AIO_SLEEP_TIME_US  10 // 0.01 ms
> >>>
> >>>
> >>>RHM_IORES_PAGE_AIOWAIT, ///< IO operation suspended - next page is
> >>> waiting for AIO.
> >>>
> >>>
> >>>
> >>> So does page got blocked and its waiting for page availability?
> >>>
> >>>
> >>> Ram
> >>>
> >>> On Mon, Nov 5, 2018 at 8:00 PM rammohan ganapavarapu <
> >>> rammohanga...@gmail.com> wrote:
> >>>
> >>>> Actually we have upgraded from qpid-cpp 0.28 to 1.35 and after that we
> >>>> see this message
> >>>>
> >>>> 2018-10-27 18:58:25 [Store] warning Linear Store: Journal
> >>>> "": Bad record alignment found at fid=0x4605b
> offs=0x107680
> >>>> (likely journal overwrite boundary); 19 filler record(s) required.
> >>>> 2018-10-27 18:58:25 [Store] notice Linear Store: Journal
> >>>> "": Recover phase writ

Re: qpid-cpp-0.35 errors

2018-11-06 Thread rammohan ganapavarapu
Any help in understanding why/when the broker throws those errors and
stops receiving messages would be appreciated.

Not sure if any kernel tuning or broker tuning needs to be done to
solve this issue.
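
On the kernel side, one thing worth checking when AIO completions appear
to stall is the system-wide asynchronous I/O limit and the journal disk
itself (only a guess at this stage; the sysctl value below is an example,
not a recommendation):

  # in-flight AIO requests vs. the system-wide cap
  cat /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr
  # raise the cap if aio-nr sits close to aio-max-nr
  sysctl -w fs.aio-max-nr=1048576
  # watch whether the journal disk is keeping up while the error occurs
  iostat -xm 5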

Thanks in advance,
Ram

On Tue, Nov 6, 2018 at 8:35 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Also from this log message (store level) it seems like waiting for AIO to
> complete.
>
> 2018-10-28 12:27:01 [Store] critical Linear Store: Journal "<queue name>": get_events() returned JERR_JCNTL_AIOCMPLWAIT;
> wmgr_status: wmgr: pi=25 pc=8 po=0 aer=1 edac=TFFF
> ps=[-A--]
>
> page_state ps=[-A--]  where A is AIO_PENDING
> aer=1 _aio_evt_rem;  ///< Remaining AIO events
>
> When there is or there are pending AIO, does broker close the connection?
> is there any tuning that can be done to resolve this?
>
> Thanks,
> Ram
>
>
>
>
> On Mon, Nov 5, 2018 at 8:55 PM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> I was check the code and i see these lines for that AIO timeout.
>>
>>   case qpid::linearstore::journal::RHM_IORES_PAGE_AIOWAIT:
>> if (++aio_sleep_cnt > MAX_AIO_SLEEPS)
>> THROW_STORE_EXCEPTION("Timeout waiting for AIO in
>> MessageStoreImpl::recoverMessages()");
>> ::usleep(AIO_SLEEP_TIME_US);
>> break;
>>
>> And these are the defaults
>>
>> #define MAX_AIO_SLEEPS 100000 // tot: ~1 sec
>>
>> #define AIO_SLEEP_TIME_US  10 // 0.01 ms
>>
>>
>>   RHM_IORES_PAGE_AIOWAIT, ///< IO operation suspended - next page is
>> waiting for AIO.
>>
>>
>>
>> So does page got blocked and its waiting for page availability?
>>
>>
>> Ram
>>
>> On Mon, Nov 5, 2018 at 8:00 PM rammohan ganapavarapu <
>> rammohanga...@gmail.com> wrote:
>>
>>>
>>> Actually we have upgraded from qpid-cpp 0.28 to 1.35 and after that we
>>> see this message
>>>
>>> 2018-10-27 18:58:25 [Store] warning Linear Store: Journal
>>> "": Bad record alignment found at fid=0x4605b offs=0x107680
>>> (likely journal overwrite boundary); 19 filler record(s) required.
>>> 2018-10-27 18:58:25 [Store] notice Linear Store: Journal
>>> "": Recover phase write: Wrote filler record: fid=0x4605b
>>> offs=0x107680
>>> 2018-10-27 18:58:25 [Store] notice Linear Store: Journal
>>> "": Recover phase write: Wr... few more Recover phase logs
>>>
>>> It worked fine for a day and started throwing this message:
>>>
>>> 2018-10-28 12:27:01 [Store] critical Linear Store: Journal "":
>>> get_events() returned JERR_JCNTL_AIOCMPLWAIT; wmgr_status: wmgr: pi=25 pc=8
>>> po=0 aer=1 edac=TFFF ps=[-A--]
>>> 2018-10-28 12:27:01 [Broker] warning Exchange  cannot deliver to
>>> queue : Queue : MessageStoreImpl::store() failed:
>>> jexception 0x0202 jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT:
>>> Timeout waiting for AIOs to complete.
>>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
>>> 2018-10-28 12:27:01 [Broker] error Connection exception: framing-error:
>>> Queue : MessageStoreImpl::store() failed: jexception 0x0202
>>> jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
>>> AIOs to complete.
>>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
>>> 2018-10-28 12:27:01 [Protocol] error Connection
>>> qpid.server-ip:5672-client-ip:44457 closed by error: Queue :
>>> MessageStoreImpl::store() failed: jexception 0x0202
>>> jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
>>> AIOs to complete.
>>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>>> 2018-10-28 12:27:01 [Protocol] error Connection
>>> qpid.server-ip:5672-client-ip:44457 closed by error: illegal-argument:
>>> Value for replyText is too large(320)
>>>
>>> Thanks,
>>> Ram
>>>
>>>
>>> On Mon, Nov 5, 2018 at 3:34 PM rammohan ganapavarapu <
>>> rammohanga...@gmail.com> wrote:
>>>
>>>> No, local disk.
>>>>
>>>> On Mon, Nov 5, 2018 at 3:26 PM Gordon Sim  wrote:
>>>>
>>>>> On 05/11/18 22:58, rammohan ganapavarapu wrote:
>>>>> > Gordon,
>>>>> >
>>>>> > We are using java client 0.28 version and qpidd-cpp 1.35 version
>>>>> > (qpid-cpp-server-1.35.0-1.el7.x86_64), i dont know at what scenario
>>>>> its
>>>>> > happening but after i restart broker and if we wait for few days its
>>>>> > happening again. From the above logs do you have any pointers to
>>>>> check?
>>>>>
>>>>> Are you using NFS?
>>>>>
>>>>>
>>>>>
>>>>> -
>>>>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>>>>> For additional commands, e-mail: users-h...@qpid.apache.org
>>>>>
>>>>>


Re: qpid-cpp-0.35 errors

2018-11-06 Thread rammohan ganapavarapu
Also, from this log message (store level) it seems like the broker is
waiting for AIO to complete.

2018-10-28 12:27:01 [Store] critical Linear Store: Journal "<queue name>": get_events() returned JERR_JCNTL_AIOCMPLWAIT;
wmgr_status: wmgr: pi=25 pc=8 po=0 aer=1 edac=TFFF
ps=[-A--]

page_state ps=[-A--]  where A is AIO_PENDING
aer=1 _aio_evt_rem;  ///< Remaining AIO events

When there are one or more pending AIOs, does the broker close the
connection? Is there any tuning that can be done to resolve this?

Thanks,
Ram




On Mon, Nov 5, 2018 at 8:55 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> I was check the code and i see these lines for that AIO timeout.
>
>   case qpid::linearstore::journal::RHM_IORES_PAGE_AIOWAIT:
> if (++aio_sleep_cnt > MAX_AIO_SLEEPS)
> THROW_STORE_EXCEPTION("Timeout waiting for AIO in
> MessageStoreImpl::recoverMessages()");
> ::usleep(AIO_SLEEP_TIME_US);
> break;
>
> And these are the defaults
>
> #define MAX_AIO_SLEEPS 100000 // tot: ~1 sec
>
> #define AIO_SLEEP_TIME_US  10 // 0.01 ms
>
>
>   RHM_IORES_PAGE_AIOWAIT, ///< IO operation suspended - next page is
> waiting for AIO.
>
>
>
> So does page got blocked and its waiting for page availability?
>
>
> Ram
>
> On Mon, Nov 5, 2018 at 8:00 PM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>>
>> Actually we have upgraded from qpid-cpp 0.28 to 1.35 and after that we
>> see this message
>>
>> 2018-10-27 18:58:25 [Store] warning Linear Store: Journal
>> "": Bad record alignment found at fid=0x4605b offs=0x107680
>> (likely journal overwrite boundary); 19 filler record(s) required.
>> 2018-10-27 18:58:25 [Store] notice Linear Store: Journal
>> "": Recover phase write: Wrote filler record: fid=0x4605b
>> offs=0x107680
>> 2018-10-27 18:58:25 [Store] notice Linear Store: Journal
>> "": Recover phase write: Wr... few more Recover phase logs
>>
>> It worked fine for a day and started throwing this message:
>>
>> 2018-10-28 12:27:01 [Store] critical Linear Store: Journal "":
>> get_events() returned JERR_JCNTL_AIOCMPLWAIT; wmgr_status: wmgr: pi=25 pc=8
>> po=0 aer=1 edac=TFFF ps=[-A--]
>> 2018-10-28 12:27:01 [Broker] warning Exchange  cannot deliver to
>> queue : Queue : MessageStoreImpl::store() failed:
>> jexception 0x0202 jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT:
>> Timeout waiting for AIOs to complete.
>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
>> 2018-10-28 12:27:01 [Broker] error Connection exception: framing-error:
>> Queue : MessageStoreImpl::store() failed: jexception 0x0202
>> jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
>> AIOs to complete.
>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
>> 2018-10-28 12:27:01 [Protocol] error Connection
>> qpid.server-ip:5672-client-ip:44457 closed by error: Queue :
>> MessageStoreImpl::store() failed: jexception 0x0202
>> jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
>> AIOs to complete.
>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>> 2018-10-28 12:27:01 [Protocol] error Connection
>> qpid.server-ip:5672-client-ip:44457 closed by error: illegal-argument:
>> Value for replyText is too large(320)
>>
>> Thanks,
>> Ram
>>
>>
>> On Mon, Nov 5, 2018 at 3:34 PM rammohan ganapavarapu <
>> rammohanga...@gmail.com> wrote:
>>
>>> No, local disk.
>>>
>>> On Mon, Nov 5, 2018 at 3:26 PM Gordon Sim  wrote:
>>>
>>>> On 05/11/18 22:58, rammohan ganapavarapu wrote:
>>>> > Gordon,
>>>> >
>>>> > We are using java client 0.28 version and qpidd-cpp 1.35 version
>>>> > (qpid-cpp-server-1.35.0-1.el7.x86_64), i dont know at what scenario
>>>> its
>>>> > happening but after i restart broker and if we wait for few days its
>>>> > happening again. From the above logs do you have any pointers to
>>>> check?
>>>>
>>>> Are you using NFS?
>>>>
>>>>
>>>>
>>>> -
>>>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>>>> For additional commands, e-mail: users-h...@qpid.apache.org
>>>>
>>>>


Re: qpid-cpp-0.35 errors

2018-11-05 Thread rammohan ganapavarapu
I was checking the code and I see these lines for that AIO timeout.

  case qpid::linearstore::journal::RHM_IORES_PAGE_AIOWAIT:
if (++aio_sleep_cnt > MAX_AIO_SLEEPS)
THROW_STORE_EXCEPTION("Timeout waiting for AIO in
MessageStoreImpl::recoverMessages()");
::usleep(AIO_SLEEP_TIME_US);
break;

And these are the defaults

#define MAX_AIO_SLEEPS 100000 // tot: ~1 sec

#define AIO_SLEEP_TIME_US  10 // 0.01 ms


  RHM_IORES_PAGE_AIOWAIT, ///< IO operation suspended - next page is
waiting for AIO.



So does the page get blocked, and is it waiting for page availability?


Ram

On Mon, Nov 5, 2018 at 8:00 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

>
> Actually we have upgraded from qpid-cpp 0.28 to 1.35 and after that we see
> this message
>
> 2018-10-27 18:58:25 [Store] warning Linear Store: Journal
> "": Bad record alignment found at fid=0x4605b offs=0x107680
> (likely journal overwrite boundary); 19 filler record(s) required.
> 2018-10-27 18:58:25 [Store] notice Linear Store: Journal "":
> Recover phase write: Wrote filler record: fid=0x4605b offs=0x107680
> 2018-10-27 18:58:25 [Store] notice Linear Store: Journal "":
> Recover phase write: Wr... few more Recover phase logs
>
> It worked fine for a day and started throwing this message:
>
> 2018-10-28 12:27:01 [Store] critical Linear Store: Journal "":
> get_events() returned JERR_JCNTL_AIOCMPLWAIT; wmgr_status: wmgr: pi=25 pc=8
> po=0 aer=1 edac=TFFF ps=[-A--]
> 2018-10-28 12:27:01 [Broker] warning Exchange  cannot deliver to
> queue : Queue : MessageStoreImpl::store() failed:
> jexception 0x0202 jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT:
> Timeout waiting for AIOs to complete.
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
> 2018-10-28 12:27:01 [Broker] error Connection exception: framing-error:
> Queue : MessageStoreImpl::store() failed: jexception 0x0202
> jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
> AIOs to complete.
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
> 2018-10-28 12:27:01 [Protocol] error Connection
> qpid.server-ip:5672-client-ip:44457 closed by error: Queue :
> MessageStoreImpl::store() failed: jexception 0x0202
> jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
> AIOs to complete.
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
> 2018-10-28 12:27:01 [Protocol] error Connection
> qpid.server-ip:5672-client-ip:44457 closed by error: illegal-argument:
> Value for replyText is too large(320)
>
> Thanks,
> Ram
>
>
> On Mon, Nov 5, 2018 at 3:34 PM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> No, local disk.
>>
>> On Mon, Nov 5, 2018 at 3:26 PM Gordon Sim  wrote:
>>
>>> On 05/11/18 22:58, rammohan ganapavarapu wrote:
>>> > Gordon,
>>> >
>>> > We are using java client 0.28 version and qpidd-cpp 1.35 version
>>> > (qpid-cpp-server-1.35.0-1.el7.x86_64), i dont know at what scenario its
>>> > happening but after i restart broker and if we wait for few days its
>>> > happening again. From the above logs do you have any pointers to check?
>>>
>>> Are you using NFS?
>>>
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>>> For additional commands, e-mail: users-h...@qpid.apache.org
>>>
>>>


Re: qpid-cpp-0.35 errors

2018-11-05 Thread rammohan ganapavarapu
Actually we have upgraded from qpid-cpp 0.28 to 1.35 and after that we see
this message

2018-10-27 18:58:25 [Store] warning Linear Store: Journal "<queue name>":
Bad record alignment found at fid=0x4605b offs=0x107680 (likely journal
overwrite boundary); 19 filler record(s) required.
2018-10-27 18:58:25 [Store] notice Linear Store: Journal "<queue name>":
Recover phase write: Wrote filler record: fid=0x4605b offs=0x107680
2018-10-27 18:58:25 [Store] notice Linear Store: Journal "<queue name>":
Recover phase write: Wr... few more Recover phase logs

It worked fine for a day and started throwing this message:

2018-10-28 12:27:01 [Store] critical Linear Store: Journal "<queue name>":
get_events() returned JERR_JCNTL_AIOCMPLWAIT; wmgr_status: wmgr: pi=25 pc=8
po=0 aer=1 edac=TFFF ps=[-A--]
2018-10-28 12:27:01 [Broker] warning Exchange <exchange name> cannot deliver to
queue <queue name>: Queue <queue name>: MessageStoreImpl::store() failed:
jexception 0x0202 jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT:
Timeout waiting for AIOs to complete.
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
2018-10-28 12:27:01 [Broker] error Connection exception: framing-error:
Queue <queue name>: MessageStoreImpl::store() failed: jexception 0x0202
jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
AIOs to complete.
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
2018-10-28 12:27:01 [Protocol] error Connection
qpid.server-ip:5672-client-ip:44457 closed by error: Queue <queue name>:
MessageStoreImpl::store() failed: jexception 0x0202
jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
AIOs to complete.
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
2018-10-28 12:27:01 [Protocol] error Connection
qpid.server-ip:5672-client-ip:44457 closed by error: illegal-argument:
Value for replyText is too large(320)

Thanks,
Ram


On Mon, Nov 5, 2018 at 3:34 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> No, local disk.
>
> On Mon, Nov 5, 2018 at 3:26 PM Gordon Sim  wrote:
>
>> On 05/11/18 22:58, rammohan ganapavarapu wrote:
>> > Gordon,
>> >
>> > We are using java client 0.28 version and qpidd-cpp 1.35 version
>> > (qpid-cpp-server-1.35.0-1.el7.x86_64), i dont know at what scenario its
>> > happening but after i restart broker and if we wait for few days its
>> > happening again. From the above logs do you have any pointers to check?
>>
>> Are you using NFS?
>>
>>
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>> For additional commands, e-mail: users-h...@qpid.apache.org
>>
>>


Re: qpid-cpp-0.35 errors

2018-11-05 Thread rammohan ganapavarapu
No, local disk.

On Mon, Nov 5, 2018 at 3:26 PM Gordon Sim  wrote:

> On 05/11/18 22:58, rammohan ganapavarapu wrote:
> > Gordon,
> >
> > We are using java client 0.28 version and qpidd-cpp 1.35 version
> > (qpid-cpp-server-1.35.0-1.el7.x86_64), i dont know at what scenario its
> > happening but after i restart broker and if we wait for few days its
> > happening again. From the above logs do you have any pointers to check?
>
> Are you using NFS?
>
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid-cpp-0.35 errors

2018-11-05 Thread rammohan ganapavarapu
Gordon,

We are using Java client version 0.28 and qpid-cpp version 1.35
(qpid-cpp-server-1.35.0-1.el7.x86_64). I don't know in what scenario it is
happening, but after I restart the broker and we wait for a few days it
happens again. From the above logs, do you have any pointers to check?

We are using the linear store, not the legacy store.

When I do netstat -an | grep 5672, I see two ESTABLISHED connections for a
client host.

This is the queue config:

qpid-config queues
Queue NameAttributes
=
1cdfd9cd-227d-4b84-9539-ab4d67ee5a1f:0.0  auto-del excl
q-001   --durable --file-size=2000 --file-count=24
--max-queue-size=1073741824 --max-queue-count=100
--limit-policy=flow-to-disk --argument no-local=False
q-001-dl--durable --file-size=6000 --file-count=4
--max-queue-size=52428800 --max-queue-count=10
--limit-policy=flow-to-disk --argument no-local=False

Someone posted a similar issue long back, but I don't see any solution:
http://qpid.2158936.n2.nabble.com/RE-qpid-Java-client-unable-to-send-messages-td7613136.html

Thanks
Ram

On Mon, Nov 5, 2018 at 2:32 PM Gordon Sim  wrote:

> On 05/11/18 20:57, rammohan ganapavarapu wrote:
> > Actually there are no messages in queue, all they messages got consumed
> by
> > consumer.
>
> But it still will not enqueue any further messages? Can you reproduce
> this easily?
>
> One other suggestion is to try with the linear store rather than the
> legacy store if possible.
>
> > I also observe two tcp connections to each client and for this
> > client only one tcp connection. Why does qpid creates two connections?
>
> I don't think it does. Which client and version are you using? How are
> you observing the two connections?
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid-cpp-0.35 errors

2018-11-05 Thread rammohan ganapavarapu
I also see this in the qpidd logs for broker, store and protocol. Please
help me understand what it means. Why am I getting "Timeout waiting for
AIOs to complete"? Does it mean something is wrong with the journal files?

2018-02-28 13:19:00 [Store] critical Journal "q-001": get_events() returned
JERR_JCNTL_AIOCMPLWAIT; wmgr_status: wmgr: pi=2 pc=45 po=0 aer=1 edac:TFFF
ps=[--A-] wrfc: state: Active fcntl[1]: pfid=1
ws=11524 wc=11268 rs=0 rc=0 ec=6 ac=1
2018-02-28 13:19:00 [Broker] warning Exchange ex-001 cannot deliver to
queue q-001: Queue q-001: MessageStoreImpl::store() failed: jexception
0x0202 jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout
waiting for AIOs to complete.
(/home/fliu/rpmbuild/BUILD/qpid-0.28/cpp/src/qpid/legacystore/MessageStoreImpl.cpp:1357)
2018-02-28 13:19:00 [Broker] error Connection exception: framing-error:
Queue q-001: MessageStoreImpl::store() failed: jexception 0x0202
jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
AIOs to complete.
(/home/fliu/rpmbuild/BUILD/qpid-0.28/cpp/src/qpid/legacystore/MessageStoreImpl.cpp:1357)
2018-02-28 13:19:00 [Protocol] error Connection
qpid.1.2.3.4:5672-1.2.3.5:28188 closed by error: Queue q-001:
MessageStoreImpl::store() failed: jexception 0x0202
jcntl::handle_aio_wait() threw JERR_JCNTL_AIOCMPLWAIT: Timeout waiting for
AIOs to complete.
(/home/fliu/rpmbuild/BUILD/qpid-0.28/cpp/src/qpid/legacystore/MessageStoreImpl.cpp:1357)(501)

On Mon, Nov 5, 2018 at 12:57 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Actually there are no messages in queue, all they messages got consumed by
> consumer. I also observe two tcp connections to each client and for this
> client only one tcp connection. Why does qpid creates two connections?
>
> Ram
>
On Mon, Nov 5, 2018, 11:09 AM Gordon Sim wrote:
>> Can you drain the queue?
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>> For additional commands, e-mail: users-h...@qpid.apache.org
>>
>>


Re: qpid-cpp-0.35 errors

2018-11-05 Thread rammohan ganapavarapu
Actually there are no messages in the queue; all the messages got consumed
by the consumer. I also observe two TCP connections for each client, and
for this client only one TCP connection. Why does qpid create two
connections?
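
For comparison, the broker's own view of the connections can be listed
next to the netstat output (assuming the qpid-stat tool is installed;
host/port defaults may need adjusting):

  qpid-stat -c                  # broker-side list of client connections
  netstat -an | grep 5672       # OS-side view, as above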

Ram

On Mon, Nov 5, 2018, 11:09 AM Gordon Sim wrote:
> Can you drain the queue?
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid-cpp-0.35 errors

2018-11-05 Thread rammohan ganapavarapu
I am using Java client version 0.28 and qpid-cpp version 1.35, and I see
this error on the client side (producer).

2018-11-02 00:00:46,376  IoSender - /1.2.3.4:5672 INFO
o.a.q.t.n.io.IoSender - Logger.info() : Exception in thread sending to '/
1.2.3.4:5672': java.net.SocketException: Broken pipe (Write failed)
2018-11-02 00:00:46,377  IoReceiver - /1.2.3.4:5672 ERROR
o.a.q.c.AMQConnectionDelegate_0_10 - AMQConnectionDelegate_0_10.exception()
: previous exception
org.apache.qpid.transport.ConnectionException: java.net.SocketException:
Broken pipe (Write failed)
at org.apache.qpid.transport.Connection.exception(Connection.java:546)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.Assembler.exception(Assembler.java:107)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.InputHandler.exception(InputHandler.java:199)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.io.IoReceiver.run(IoReceiver.java:217)
~[qpid-common-0.28.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.qpid.transport.SenderException:
java.net.SocketException: Broken pipe (Write failed)
at org.apache.qpid.transport.network.io.IoSender.close(IoSender.java:229)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.io.IoSender.close(IoSender.java:199)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.Disassembler.close(Disassembler.java:88)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.sendConnectionCloseOkAndCloseSender(ConnectionDelegate.java:82)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:74)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:40)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionClose.dispatch(ConnectionClose.java:91)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:49)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:40)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.Method.delegate(Method.java:163)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.Connection.received(Connection.java:392)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.Connection.received(Connection.java:62)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.emit(Assembler.java:97)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.assemble(Assembler.java:183)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.frame(Assembler.java:131)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Frame.delegate(Frame.java:128)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.received(Assembler.java:102)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.received(Assembler.java:44)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.InputHandler.next(InputHandler.java:189)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:105)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:44)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.io.IoReceiver.run(IoReceiver.java:161)
~[qpid-common-0.28.jar:na]
... 1 common frames omitted
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[na:1.8.0_121]
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
~[na:1.8.0_121]
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
~[na:1.8.0_121]
at org.apache.qpid.transport.network.io.IoSender.run(IoSender.java:308)
~[qpid-common-0.28.jar:na]
... 1 common frames omitted

Thanks,
Ram

On Sun, Nov 4, 2018 at 7:50 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Hi,
>
> Any one else saw this error before?  after this error broker stop taking
> any messages, not sure what is causing this error.
>
> Thanks,
> Ram
>
> On Fri, Nov 2, 2018 at 4:24 PM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> Kim/Gordon,
>>
>> After this message broker is not accepting any more messages and keep
>> throwing this message.
>>
>> Thanks,
>> Ram
>>
>> On Fri, Nov 2, 2018 at 8:59 AM rammohan ganapavarapu <
>> rammohanga...@gmail.com> wrote:
>>
>>> Any help in understating this error message would be appreciated.
>>>
>>> Ram
>>>
>>> On Wed, Oct 31, 2018 at 5:47 AM rammohan ganapavarapu <
>>> rammohanga...@gmail.com> wrote:
>>>
>>>> Kim,
>>>>
>>>> Any idea about this error?
>>>>

Re: qpid-cpp-0.35 errors

2018-11-04 Thread rammohan ganapavarapu
Hi,

Has anyone else seen this error before? After this error the broker stops
taking any messages; I am not sure what is causing it.

Thanks,
Ram

On Fri, Nov 2, 2018 at 4:24 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Kim/Gordon,
>
> After this message broker is not accepting any more messages and keep
> throwing this message.
>
> Thanks,
> Ram
>
> On Fri, Nov 2, 2018 at 8:59 AM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> Any help in understating this error message would be appreciated.
>>
>> Ram
>>
>> On Wed, Oct 31, 2018 at 5:47 AM rammohan ganapavarapu <
>> rammohanga...@gmail.com> wrote:
>>
>>> Kim,
>>>
>>> Any idea about this error?
>>>
>>> Thanks,
>>> Ram
>>>
>>> On Tue, Oct 30, 2018, 2:13 PM Gordon Sim  wrote:
>>>
>>>> On 30/10/18 18:59, rammohan ganapavarapu wrote:
>>>> > There are two more error from my original post, can some one help me
>>>> to
>>>> > understand when qpid throws these error?
>>>> >
>>>> >
>>>> > 1. 1. 2018-10-22 08:05:30 [Broker] error Channel exception:
>>>> > not-attached: Channel 0 is not attached
>>>> >
>>>>  
>>>> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
>>>>
>>>> The one above is comon when you are sending asynchronously, and a
>>>> previous message caused the session to be ended with an exception
>>>> frame.
>>>> Any subsequent messages that were sent before the client received the
>>>> exception frame result in above error.
>>>>
>>>> > 2. 2018-10-30 14:30:36 [Broker] error Connection exception:
>>>> > framing-error: Queue ax-q-axgroup-001-consumer-group-001:
>>>> > MessageStoreImpl::store() failed: jexception 0x0803
>>>> wmgr::enqueue() threw
>>>> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue
>>>> returned
>>>> > partly completed (state ENQ_PART). (This data_tok: id=1714315
>>>> state=NONE)
>>>> >
>>>>  
>>>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
>>>> > 3. 2018-10-30 14:30:36 [Protocol] error Connection
>>>> > qpid.10.68.94.134:5672-10.68.94.127:39458 closed by error: Queue
>>>> > ax-q-axgroup-001-consumer-group-001: MessageStoreImpl::store()
>>>> failed:
>>>> > jexception 0x0803 wmgr::enqueue() threw JERR_WMGR_ENQDISCONT:
>>>> Enqueued new
>>>> > dtok when previous enqueue returned partly completed (state
>>>> ENQ_PART).
>>>> > (This data_tok: id=1714315 state=NONE)
>>>> >
>>>>  
>>>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>>>>
>>>> Not sure what the 'partly completed state' means here. Kim, any
>>>> thoughts?
>>>>
>>>> -
>>>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>>>> For additional commands, e-mail: users-h...@qpid.apache.org
>>>>
>>>>


Re: qpid-cpp-0.35 errors

2018-11-02 Thread rammohan ganapavarapu
Kim/Gordon,

After this message, the broker is not accepting any more messages and
keeps throwing this message.

Thanks,
Ram

On Fri, Nov 2, 2018 at 8:59 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Any help in understating this error message would be appreciated.
>
> Ram
>
> On Wed, Oct 31, 2018 at 5:47 AM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> Kim,
>>
>> Any idea about this error?
>>
>> Thanks,
>> Ram
>>
>> On Tue, Oct 30, 2018, 2:13 PM Gordon Sim  wrote:
>>
>>> On 30/10/18 18:59, rammohan ganapavarapu wrote:
>>> > There are two more error from my original post, can some one help me to
>>> > understand when qpid throws these error?
>>> >
>>> >
>>> > 1. 1. 2018-10-22 08:05:30 [Broker] error Channel exception:
>>> > not-attached: Channel 0 is not attached
>>> >
>>>  
>>> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
>>>
>>> The one above is comon when you are sending asynchronously, and a
>>> previous message caused the session to be ended with an exception frame.
>>> Any subsequent messages that were sent before the client received the
>>> exception frame result in above error.
>>>
>>> > 2. 2018-10-30 14:30:36 [Broker] error Connection exception:
>>> > framing-error: Queue ax-q-axgroup-001-consumer-group-001:
>>> > MessageStoreImpl::store() failed: jexception 0x0803
>>> wmgr::enqueue() threw
>>> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue
>>> returned
>>> > partly completed (state ENQ_PART). (This data_tok: id=1714315
>>> state=NONE)
>>> >
>>>  
>>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
>>> > 3. 2018-10-30 14:30:36 [Protocol] error Connection
>>> > qpid.10.68.94.134:5672-10.68.94.127:39458 closed by error: Queue
>>> > ax-q-axgroup-001-consumer-group-001: MessageStoreImpl::store()
>>> failed:
>>> > jexception 0x0803 wmgr::enqueue() threw JERR_WMGR_ENQDISCONT:
>>> Enqueued new
>>> > dtok when previous enqueue returned partly completed (state
>>> ENQ_PART).
>>> > (This data_tok: id=1714315 state=NONE)
>>> >
>>>  
>>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>>>
>>> Not sure what the 'partly completed state' means here. Kim, any thoughts?
>>>
>>> -
>>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>>> For additional commands, e-mail: users-h...@qpid.apache.org
>>>
>>>


Re: qpid-cpp-0.35 errors

2018-11-02 Thread rammohan ganapavarapu
Any help in understanding this error message would be appreciated.

Ram

On Wed, Oct 31, 2018 at 5:47 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Kim,
>
> Any idea about this error?
>
> Thanks,
> Ram
>
> On Tue, Oct 30, 2018, 2:13 PM Gordon Sim  wrote:
>
>> On 30/10/18 18:59, rammohan ganapavarapu wrote:
>> > There are two more error from my original post, can some one help me to
>> > understand when qpid throws these error?
>> >
>> >
>> > 1. 1. 2018-10-22 08:05:30 [Broker] error Channel exception:
>> > not-attached: Channel 0 is not attached
>> >
>>  
>> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
>>
>> The one above is comon when you are sending asynchronously, and a
>> previous message caused the session to be ended with an exception frame.
>> Any subsequent messages that were sent before the client received the
>> exception frame result in above error.
>>
>> > 2. 2018-10-30 14:30:36 [Broker] error Connection exception:
>> > framing-error: Queue ax-q-axgroup-001-consumer-group-001:
>> > MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue()
>> threw
>> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue
>> returned
>> > partly completed (state ENQ_PART). (This data_tok: id=1714315
>> state=NONE)
>> >
>>  
>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
>> > 3. 2018-10-30 14:30:36 [Protocol] error Connection
>> > qpid.10.68.94.134:5672-10.68.94.127:39458 closed by error: Queue
>> > ax-q-axgroup-001-consumer-group-001: MessageStoreImpl::store()
>> failed:
>> > jexception 0x0803 wmgr::enqueue() threw JERR_WMGR_ENQDISCONT:
>> Enqueued new
>> > dtok when previous enqueue returned partly completed (state
>> ENQ_PART).
>> > (This data_tok: id=1714315 state=NONE)
>> >
>>  
>> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>>
>> Not sure what the 'partly completed state' means here. Kim, any thoughts?
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>> For additional commands, e-mail: users-h...@qpid.apache.org
>>
>>


Re: qpid-cpp-0.35 errors

2018-10-31 Thread rammohan ganapavarapu
Kim,

Any idea about this error?

Thanks,
Ram

On Tue, Oct 30, 2018, 2:13 PM Gordon Sim  wrote:

> On 30/10/18 18:59, rammohan ganapavarapu wrote:
> > There are two more error from my original post, can some one help me to
> > understand when qpid throws these error?
> >
> >
> > 1. 1. 2018-10-22 08:05:30 [Broker] error Channel exception:
> > not-attached: Channel 0 is not attached
> >
>  
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
>
> The one above is comon when you are sending asynchronously, and a
> previous message caused the session to be ended with an exception frame.
> Any subsequent messages that were sent before the client received the
> exception frame result in above error.
>
> > 2. 2018-10-30 14:30:36 [Broker] error Connection exception:
> > framing-error: Queue ax-q-axgroup-001-consumer-group-001:
> > MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue()
> threw
> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue
> returned
> > partly completed (state ENQ_PART). (This data_tok: id=1714315
> state=NONE)
> >
>  
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
> > 3. 2018-10-30 14:30:36 [Protocol] error Connection
> > qpid.10.68.94.134:5672-10.68.94.127:39458 closed by error: Queue
> > ax-q-axgroup-001-consumer-group-001: MessageStoreImpl::store()
> failed:
> > jexception 0x0803 wmgr::enqueue() threw JERR_WMGR_ENQDISCONT:
> Enqueued new
> > dtok when previous enqueue returned partly completed (state
> ENQ_PART).
> > (This data_tok: id=1714315 state=NONE)
> >
>  
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>
> Not sure what the 'partly completed state' means here. Kim, any thoughts?
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid-cpp-0.35 errors

2018-10-30 Thread rammohan ganapavarapu
Gordon,

Thank you.


On Tue, Oct 30, 2018 at 2:13 PM Gordon Sim  wrote:

> On 30/10/18 18:59, rammohan ganapavarapu wrote:
> > There are two more error from my original post, can some one help me to
> > understand when qpid throws these error?
> >
> >
> > 1. 1. 2018-10-22 08:05:30 [Broker] error Channel exception:
> > not-attached: Channel 0 is not attached
> >
>  
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
>
> The one above is comon when you are sending asynchronously, and a
> previous message caused the session to be ended with an exception frame.
> Any subsequent messages that were sent before the client received the
> exception frame result in above error.
>
> > 2. 2018-10-30 14:30:36 [Broker] error Connection exception:
> > framing-error: Queue ax-q-axgroup-001-consumer-group-001:
> > MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue()
> threw
> > JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue
> returned
> > partly completed (state ENQ_PART). (This data_tok: id=1714315
> state=NONE)
> >
>  
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
> > 3. 2018-10-30 14:30:36 [Protocol] error Connection
> > qpid.10.68.94.134:5672-10.68.94.127:39458 closed by error: Queue
> > ax-q-axgroup-001-consumer-group-001: MessageStoreImpl::store()
> failed:
> > jexception 0x0803 wmgr::enqueue() threw JERR_WMGR_ENQDISCONT:
> Enqueued new
> > dtok when previous enqueue returned partly completed (state
> ENQ_PART).
> > (This data_tok: id=1714315 state=NONE)
> >
>  
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
>
> Not sure what the 'partly completed state' means here. Kim, any thoughts?
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid-cpp-0.35 errors

2018-10-30 Thread rammohan ganapavarapu
Just to add to this, the store-level error corresponding to the errors
above is:

2018-02-28 13:19:00 [Store] critical Journal
"ax-q-axgroup-001-consumer-group-001": get_events() returned
JERR_JCNTL_AIOCMPLWAIT; wmgr_status: wmgr: pi=2 pc=45 po=0 aer=1 edac:TFFF
ps=[--A-] wrfc: state: Active fcntl[1]: pfid=1
ws=11524 wc=11268 rs=0 rc=0 ec=6 ac=1

Thanks,
Ram

On Mon, Oct 22, 2018 at 2:20 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Hi,
>
> I am seeing lot of these messages in my qpidd logs and i am not sure why
> am i seeing these, can some one explain?
>
> 2018-10-22 07:39:02 [Broker] warning Exchange ex-group-1 cannot deliver
> to  queue prod-queue-01: resource-limit-exceeded: Maximum depth exceeded on
> prod-queue-01: current=[count: 12567, size: 1073741816], max=[count:
> 100, size: 1073741824]
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
>
> 2018-10-22 07:39:02 [Broker] error Execution exception:
> resource-limit-exceeded: Maximum depth exceeded on prod-queue-01:
> current=[count: 12567, size: 1073741816], max=[count: 100, size:
> 1073741824]
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
>
> 2018-10-22 08:05:30 [Broker] error Channel exception: not-attached:
> Channel 0 is not attached
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
>
>
> 2018-10-22 07:37:02 [Broker] error Connection exception: framing-error:
> Queue prod-queue-01: MessageStoreImpl::store() failed: jexception 0x0803
> wmgr::enqueue() threw JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous
> enqueue returned partly completed (state ENQ_PART). (This data_tok:
> id=4682049 state=NONE)
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
> 2018-10-22 07:37:02 [Protocol] error Connection
> qpid.10.68.94.117:5672-10.66.244.23:46574 closed by error: Queue
> prod-queue-01: MessageStoreImpl::store() failed: jexception 0x0803
> wmgr::enqueue() threw JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous
> enqueue returned partly completed (state ENQ_PART). (This data_tok:
> id=4682049 state=NONE)
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
> 2018-10-22 07:37:02 [Protocol] error Connection
> qpid.10.68.94.117:5672-10.66.244.23:46574 closed by error:
> illegal-argument: Value for replyText is too large(320)
> 2018-10-22 07:37:05 [Broker] notice Broker (pid=35293) shut-down
>
>
> Thanks,
> Ram
>
>


Re: qpid-cpp-0.35 errors

2018-10-30 Thread rammohan ganapavarapu
There are two more errors from my original post; can someone help me to
understand when qpid throws these errors?


   1. 1. 2018-10-22 08:05:30 [Broker] error Channel exception:
   not-attached: Channel 0 is not attached
   
(/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
   2. 2018-10-30 14:30:36 [Broker] error Connection exception:
   framing-error: Queue ax-q-axgroup-001-consumer-group-001:
   MessageStoreImpl::store() failed: jexception 0x0803 wmgr::enqueue() threw
   JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous enqueue returned
   partly completed (state ENQ_PART). (This data_tok: id=1714315 state=NONE)
   
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
   3. 2018-10-30 14:30:36 [Protocol] error Connection
   qpid.10.68.94.134:5672-10.68.94.127:39458 closed by error: Queue
   ax-q-axgroup-001-consumer-group-001: MessageStoreImpl::store() failed:
   jexception 0x0803 wmgr::enqueue() threw JERR_WMGR_ENQDISCONT: Enqueued new
   dtok when previous enqueue returned partly completed (state ENQ_PART).
   (This data_tok: id=1714315 state=NONE)
   
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
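
To capture more detail around these errors, the trace logging suggested
below can be combined with store-level debug output; a sketch (the Store
selector is an assumption based on the [Store] lines above, and the file
path is just an example):

  qpidd --log-enable notice+ \
        --log-enable debug+:Store \
        --log-enable trace+:Network \
        --log-to-file /var/log/qpid/qpidd-trace.log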

Thanks,
Ram

On Mon, Oct 29, 2018 at 2:48 PM Gordon Sim  wrote:

> On 29/10/18 21:45, rammohan ganapavarapu wrote:
> > I will try to produce, do have any suggestions on what to check in trace
> > logs?
>
> for qpidd, adding --log-enable trace+:Network to whatever other logging
> options you have should do it
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid-cpp-0.35 errors

2018-10-29 Thread rammohan ganapavarapu
I will try to reproduce it; do you have any suggestions on what to check
in the trace logs?

On Fri, Oct 26, 2018 at 2:15 AM Gordon Sim  wrote:

> On 26/10/18 00:44, rammohan ganapavarapu wrote:
> > Any idea when do we see this error?
>
> In AMQP 0-10, the connection-close frame allows an informational message
> (replyText) to be included. This must be less than 256 bytes. It sounds
> like something is either trying to exceed that, or succeeding and
> violating the protocol.
>
> If you can reproduce with a protocol trace, that will speed up resolution.
>
> >
> > On Tue, Oct 23, 2018 at 10:42 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> That error i get it but what is this error i am seeing in client ?
> >>
> >>
> >> 2018-10-23 17:10:57,997  IoReceiver - /10.68.94.134:5672 ERROR
> >> o.a.q.c.AMQConnectionDelegate_0_10 -
> AMQConnectionDelegate_0_10.closed() :
> >> connection exception: conn:5668fdd1
> >> org.apache.qpid.transport.ConnectionException: illegal-argument: Value
> for
> >> replyText is too large
> >> at org.apache.qpid.transport.Connection.closeCode(Connection.java:556)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:75)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:40)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.ConnectionClose.dispatch(ConnectionClose.java:91)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:49)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:40)
> >> ~[qpid-common-0.28.jar:na]
> >> at org.apache.qpid.transport.Method.delegate(Method.java:163)
> >> ~[qpid-common-0.28.jar:na]
> >> at org.apache.qpid.transport.Connection.received(Connection.java:392)
> >> ~[qpid-common-0.28.jar:na]
> >> at org.apache.qpid.transport.Connection.received(Connection.java:62)
> >> ~[qpid-common-0.28.jar:na]
> >> at org.apache.qpid.transport.network.Assembler.emit(Assembler.java:97)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >> org.apache.qpid.transport.network.Assembler.assemble(Assembler.java:183)
> >> ~[qpid-common-0.28.jar:na]
> >> at org.apache.qpid.transport.network.Assembler.frame(Assembler.java:131)
> >> ~[qpid-common-0.28.jar:na]
> >> at org.apache.qpid.transport.network.Frame.delegate(Frame.java:128)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >> org.apache.qpid.transport.network.Assembler.received(Assembler.java:102)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> org.apache.qpid.transport.network.Assembler.received(Assembler.java:44)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.network.InputHandler.next(InputHandler.java:189)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:105)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >>
> org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:44)
> >> ~[qpid-common-0.28.jar:na]
> >> at
> >> org.apache.qpid.transport.network.io.IoReceiver.run(IoReceiver.java:161)
> >> ~[qpid-common-0.28.jar:na]
> >> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
> >>
> >> Ram
> >>
> >> On Mon, Oct 22, 2018 at 3:41 PM Chester  wrote:
> >>
> >>> 2018-10-22 07:39:02 [Broker] warning Exchange ex-group-1 cannot
> deliver to
> >>>
> >>> queue prod-queue-01: resource-limit-exceeded: Maximum depth exceeded on
> >>>
> >>> prod-queue-01: current=[count: 12567, size: 1073741816], max=[count:
> >>>
> >>> 100, size: 1073741824]
> >>>
> >>> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
> >>>
> >>>
> >>> Looks like you're hitting a queue size (bytes) limit of ~1GB. See the
> >>> Broker Book for the configuration option that sets this limit [1].
> >>>
> >>> [1]
> >>>
> >>>
> https://qpid.apache.org/releases/qpid-cpp-1.38.0/cpp-broker/book/ch01s02.html#CheatSheetforconfiguringQueueOptions-ApplyingQueueSizingConstraints
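
Putting the cheat sheet above into practice, the depth limits are set when
the queue is created; a minimal sketch (queue name and numbers are
illustrative only):

  qpid-config add queue my-queue --durable \
      --max-queue-size=2147483648 \
      --max-queue-count=2000000 \
      --limit-policy=flow-to-disk

A ring policy (--limit-policy=ring) discards the oldest messages instead
of rejecting new ones once the limit is hit, if that suits the use case
better.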

Re: queue paging

2018-10-29 Thread rammohan ganapavarapu
Gordon,

Yes, I am using the AMQP 1.0 client. I am OK if the journals grow as I
have enough disk space, but one question: once the queue depth is cleared
(messages got consumed), the disk space will be retained, right?

Now I understand your earlier statement "*Messages that are larger than a
single page are not supported at present*": since paging uses the OS page
size (4096) and the default page factor is 1, a message bigger than this
will trigger the error "Message is larger than page size". At that point,
does the broker accept other incoming messages that are not larger than
the page size, or does the broker just stop there?
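
To avoid the "Message is larger than page size" error for bigger payloads,
the page factor can be raised when the queue is declared; a minimal sketch
(queue name and numbers are illustrative, and the 4096-byte platform page
size is the assumption discussed above):

  qpid-config add queue my-paged-queue --durable \
      --argument qpid.paging=True \
      --argument qpid.max_pages_loaded=10 \
      --argument qpid.page_factor=16   # max encoded message ~16 x 4096 = 64 KiB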

Thanks a lot for your help in answering my questions.

Ram

On Mon, Oct 29, 2018 at 11:55 AM Gordon Sim  wrote:

> On 29/10/18 18:21, rammohan ganapavarapu wrote:
> > On Mon, Oct 29, 2018 at 10:00 AM Gordon Sim  wrote:
> >
> >> On 28/10/18 02:23, rammohan ganapavarapu wrote:
> >>> Hi,
> >>>
> >>> I have create a durable queue with following arguments:
> >>> qpid.paging=True
> >>> durable=true
> >>> qpid.max_pages_loaded=10
> >>> qpid.page_factor=1
> >>>
> >>> so this means my broker will keep 10 * 4028 = 4kb worth of messages (un
> >>> consumed) in the memory and offload the incoming messages to disk
> right?
> >>>
> >>> So if when broker restarts, first it will load the same amount (size)
> fo
> >>> messages from disk to memory and processes them?
> >>
> >> Paging is distinct from durability. They use different storage on disk.
> >> The paging store is not recovered, the journal is. On recovery however
> >> the same paging limits should apply.
> >>
> >
> > So what ever the messages i have in paging store will be in journal as
> well
> > ? for durable queues unconsumed messages should be persist right? if
> paging
> > store is not recovered how do they get persist? i assume from journals,
> > isn't it?
>
>
> correct
>
> >>> I do see my msgDepth is growing when i stop consumer but i do see this
> in
> >>> logs, so i am not sure if it is really offloading messages to disk or
> >> not.
> >>
> >> The message depth will indeed grow when messages are not being consumed,
> >> but the memroy growth should be limited by the paging configuration.
> >>
> >
> > So my question here is, after the page is full, remaining incoming
> messages
> > will off load to disk/journals?
>
> yes
>
> > can i ignore the below error?
>
> It means the message you sent was larger (when encoded) than a single
> page. If you want to send messages of that size you should increase the
> size of the page.
>
> >>> 2018-10-27 20:44:12 [Broker] error Message is larger than page size for
> >>> queue ax-q-group-001
> >>>
> >>> 2018-10-27 20:44:12 [System] debug Exception constructed: Message is
> >> larger
> >>> than page size for queue ax-q-group-001
> >>>
> >>
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/PagedQueue.cpp:137)
> >>
> >> At present it is not possible to have a message that spans multiple
> >> pages. Therefor the page factor used will determine the maximum message
> >> size the queue will accept.
> >>
> >
> > So once it reaches the configured page size, queue won't accept any
> > messages?
>
> No, once it reaches the maximum queue depth.
>
> >  but i do see my queue depth was increasing. I assume this page
> > size is for the group of messages in the queue not just to hold one
> > message, right?
>
> Yes, as many messages are written to a page as will fit. (Messages that
> are larger than a single page are not supported at present, as above).
>
> >>> Also how does these --max-queue-size=1073741824
> --max-queue-count=100
> >>> settings will affect for paged queues? i mean for paged queues it wont
> >>> reach the max-queue-size?
> >>
> >> No, the size limit is also distinct from paging. When it reaches the
> >> maximum size it will reject or drop messages depending on policy chosen.
> >> You can set the queue to be unlimited.
> >>
> >
> > Any side affects with unlimited queue size with paging enabled ( to
> control
> > the memory usage) ?
>
> The journal will continue to grow with queue depth, as will the number
> of pages written out by the paging function.
>
> >>> I have created paged queues using following arguments, qpid-config
> shows
> >>> its paged but i

Re: queue paging

2018-10-29 Thread rammohan ganapavarapu
Thank you Gordon, please check my inline comments.

Ram


On Mon, Oct 29, 2018 at 10:00 AM Gordon Sim  wrote:

> On 28/10/18 02:23, rammohan ganapavarapu wrote:
> > Hi,
> >
> > I have create a durable queue with following arguments:
> > qpid.paging=True
> > durable=true
> > qpid.max_pages_loaded=10
> > qpid.page_factor=1
> >
> > so this means my broker will keep 10 * 4028 = 4kb worth of messages (un
> > consumed) in the memory and offload the incoming messages to disk right?
> >
> > So if when broker restarts, first it will load the same amount (size) fo
> > messages from disk to memory and processes them?
>
> Paging is distinct from durability. They use different storage on disk.
> The paging store is not recovered, the journal is. On recovery however
> the same paging limits should apply.
>

So whatever messages I have in the paging store will be in the journal as
well? For durable queues, unconsumed messages should be persisted, right?
If the paging store is not recovered, how do they get persisted? I assume
from the journals, isn't it?

>
> > I do see my msgDepth is growing when i stop consumer but i do see this in
> > logs, so i am not sure if it is really offloading messages to disk or
> not.
>
> The message depth will indeed grow when messages are not being consumed,
> but the memroy growth should be limited by the paging configuration.
>

So my question here is: after the page is full, will the remaining
incoming messages be offloaded to disk/journals? Can I ignore the below
error?

>
> > 2018-10-27 20:44:12 [Broker] error Message is larger than page size for
> > queue ax-q-group-001
> >
> > 2018-10-27 20:44:12 [System] debug Exception constructed: Message is
> larger
> > than page size for queue ax-q-group-001
> >
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/PagedQueue.cpp:137)
>
> At present it is not possible to have a message that spans multiple
> pages. Therefor the page factor used will determine the maximum message
> size the queue will accept.
>

So once it reaches the configured page size, the queue won't accept any
messages? But I do see my queue depth was increasing. I assume this page
size is for the group of messages in the queue, not just to hold one
message, right?

>
> > Also how does these --max-queue-size=1073741824 --max-queue-count=100
> > settings will affect for paged queues? i mean for paged queues it wont
> > reach the max-queue-size?
>
> No, the size limit is also distinct from paging. When it reaches the
> maximum size it will reject or drop messages depending on policy chosen.
> You can set the queue to be unlimited.
>

Any side effects with an unlimited queue size with paging enabled (to
control the memory usage)?

>
> > I have created paged queues using following arguments, qpid-config shows
> > its paged but in debug logs its says paging=false
> >
> >
> > queue.argument.qpid.policy_type=flow_to_disk
> > queue.argument.qpid.paging=true
> > queue.argument.qpid.max_pages_loaded=500
> > queue.argument.qpid.page_factor=5
> > queue.argument.qpid.max_size=1073741824
> > queue.argument.qpid.max_count=100
> > queue.argument.qpid.file_count=24
> > queue.argument.qpid.file_size=2000
> >
> > [rganapavarapu@ip-10-17-8-245 ~]$ qpid-config queues
> > Queue NameAttributes
> > =
> > ax-q-axgroup-001-consumer-group-001   --durable --file-size=2000
> > --file-count=24 --max-queue-size=1073741824 --max-queue-count=100
> > --limit-policy=flow-to-disk --argument qpid.page_factor=5--argument
> no-local=False --argument
> > qpid.max_pages_loaded=500
> >
> >
> > Here is the log it says true for the first time and then it says false,
> > also the other arguments seems to be different than what i have passed.
> >
> > 2018-10-27 22:10:22 [Security] debug ACL: Lookup for id:anonymous
> > action:create objectType:queue name:ax-q-axgroup-001-consumer-group-001
> > with params { durable=true autodelete=false exclusive=false alternate=
> > policytype=reject paging=true maxpages=500 maxpagefactor=5
> > maxqueuesize=1073741824 maxqueuecount=100 maxfilesize=2000
> > maxfilecount=24 }
> >
> >
> > 2018-10-27 22:10:22 [Security] debug ACL: Lookup for id:anonymous
> > action:create objectType:queue name:ax-q-axgroup-001-consumer-group-001
> > with params { durable=true autodelete=false exclusive=false alternate=
> > policytype=reject paging=false maxqueuesize=104857600 maxfilesize=24
> > maxfilecount=8 }
>
> Are you saying these two log messages occur together? Can you describe a
> reproducer for seeing that?
>

Yes, I have created a queue with the above arguments using the Java
client, and I see that message only in trace mode on qpidd.

>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: queue paging

2018-10-29 Thread rammohan ganapavarapu
Any help?

Thanks,
Ram

On Sat, Oct 27, 2018 at 7:23 PM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Hi,
>
> I have create a durable queue with following arguments:
> qpid.paging=True
> durable=true
> qpid.max_pages_loaded=10
> qpid.page_factor=1
>
> so this means my broker will keep 10 * 4028 = 4kb worth of messages (un
> consumed) in the memory and offload the incoming messages to disk right?
>
> So if when broker restarts, first it will load the same amount (size) fo
> messages from disk to memory and processes them?
>
> I do see my msgDepth is growing when i stop consumer but i do see this in
> logs, so i am not sure if it is really offloading messages to disk or not.
>
> 2018-10-27 20:44:12 [Broker] error Message is larger than page size for
> queue ax-q-group-001
>
> 2018-10-27 20:44:12 [System] debug Exception constructed: Message is
> larger than page size for queue ax-q-group-001
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/PagedQueue.cpp:137)
>
> Also how does these --max-queue-size=1073741824 --max-queue-count=100
> settings will affect for paged queues? i mean for paged queues it wont
> reach the max-queue-size?
>
> I have created paged queues using following arguments, qpid-config shows
> its paged but in debug logs its says paging=false
>
>
> queue.argument.qpid.policy_type=flow_to_disk
> queue.argument.qpid.paging=true
> queue.argument.qpid.max_pages_loaded=500
> queue.argument.qpid.page_factor=5
> queue.argument.qpid.max_size=1073741824
> queue.argument.qpid.max_count=100
> queue.argument.qpid.file_count=24
> queue.argument.qpid.file_size=2000
>
> [rganapavarapu@ip-10-17-8-245 ~]$ qpid-config queues
> Queue NameAttributes
> =
> ax-q-axgroup-001-consumer-group-001   --durable --file-size=2000
> --file-count=24 --max-queue-size=1073741824 --max-queue-count=100
> --limit-policy=flow-to-disk --argument qpid.page_factor=5 --argument
> qpid.paging=true --argument no-local=False --argument
> qpid.max_pages_loaded=500
>
>
> Here is the log it says true for the first time and then it says false,
> also the other arguments seems to be different than what i have passed.
>
> 2018-10-27 22:10:22 [Security] debug ACL: Lookup for id:anonymous
> action:create objectType:queue name:ax-q-axgroup-001-consumer-group-001
> with params { durable=true autodelete=false exclusive=false alternate=
> policytype=reject paging=true maxpages=500 maxpagefactor=5
> maxqueuesize=1073741824 maxqueuecount=100 maxfilesize=2000
> maxfilecount=24 }
>
>
> 2018-10-27 22:10:22 [Security] debug ACL: Lookup for id:anonymous
> action:create objectType:queue name:ax-q-axgroup-001-consumer-group-001
> with params { durable=true autodelete=false exclusive=false alternate=
> policytype=reject paging=false maxqueuesize=104857600 maxfilesize=24
> maxfilecount=8 }
>
> How to make sure my queue is paged and the messages are offloading to disk
> once the page size reached to configured?
>
> Thanks,
> Ram
>
>
>
>
>
>
>


queue paging

2018-10-27 Thread rammohan ganapavarapu
Hi,

I have created a durable queue with the following arguments:
qpid.paging=True
durable=true
qpid.max_pages_loaded=10
qpid.page_factor=1

so this means my broker will keep 10 * 4096 bytes (roughly 40 KB) worth of
unconsumed messages in memory and offload the incoming messages to disk, right?

So when the broker restarts, will it first load the same amount (size) of
messages from disk into memory and process them?
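
(Rough math, assuming the usual 4 KiB platform page size: the in-memory window
per paged queue is about qpid.max_pages_loaded x qpid.page_factor x 4096 bytes,
i.e. 10 x 1 x 4096 = roughly 40 KB with the settings above.)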

I do see my msgDepth growing when I stop the consumer, but I also see this in
the logs, so I am not sure if it is really offloading messages to disk or not.

2018-10-27 20:44:12 [Broker] error Message is larger than page size for
queue ax-q-group-001

2018-10-27 20:44:12 [System] debug Exception constructed: Message is larger
than page size for queue ax-q-group-001
(/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/PagedQueue.cpp:137)

Also, how do the --max-queue-size=1073741824 --max-queue-count=100 settings
affect paged queues? I mean, a paged queue won't ever reach the
max-queue-size, will it?

I have created paged queues using the following arguments; qpid-config shows
the queue as paged, but in the debug logs it says paging=false:


queue.argument.qpid.policy_type=flow_to_disk
queue.argument.qpid.paging=true
queue.argument.qpid.max_pages_loaded=500
queue.argument.qpid.page_factor=5
queue.argument.qpid.max_size=1073741824
queue.argument.qpid.max_count=100
queue.argument.qpid.file_count=24
queue.argument.qpid.file_size=2000

[rganapavarapu@ip-10-17-8-245 ~]$ qpid-config queues
Queue NameAttributes
=
ax-q-axgroup-001-consumer-group-001   --durable --file-size=2000
--file-count=24 --max-queue-size=1073741824 --max-queue-count=100
--limit-policy=flow-to-disk --argument qpid.page_factor=5 --argument
qpid.paging=true --argument no-local=False --argument
qpid.max_pages_loaded=500
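
For comparison, creating the same queue directly from the command line
(assuming the qpid-tools package is installed; flags taken from the attributes
shown above) would be something like:

  qpid-config add queue ax-q-axgroup-001-consumer-group-001 --durable \
      --max-queue-size=1073741824 --max-queue-count=100 \
      --file-size=2000 --file-count=24 \
      --argument qpid.paging=true \
      --argument qpid.max_pages_loaded=500 \
      --argument qpid.page_factor=5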


Here is the log: it says true the first time and then it says false; also,
the other arguments seem to be different from what I passed.

2018-10-27 22:10:22 [Security] debug ACL: Lookup for id:anonymous
action:create objectType:queue name:ax-q-axgroup-001-consumer-group-001
with params { durable=true autodelete=false exclusive=false alternate=
policytype=reject paging=true maxpages=500 maxpagefactor=5
maxqueuesize=1073741824 maxqueuecount=100 maxfilesize=2000
maxfilecount=24 }


2018-10-27 22:10:22 [Security] debug ACL: Lookup for id:anonymous
action:create objectType:queue name:ax-q-axgroup-001-consumer-group-001
with params { durable=true autodelete=false exclusive=false alternate=
policytype=reject paging=false maxqueuesize=104857600 maxfilesize=24
maxfilecount=8 }

How can I make sure my queue is paged and that messages are offloaded to disk
once the configured number of in-memory pages is reached?

Thanks,
Ram


Messages in system queues

2018-10-27 Thread rammohan ganapavarapu
Hi,

Sometimes I see some of the messages going to system queues (qmfc*,
reply-ip*, topic-ip*); what are these queues and what are those messages?


[rganapavarapu@ip-10-17-8-245 ~]$ qpid-stat -q
Queues
  queue                                               dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
  =================================================================================================================================
  ax-q-001                                            Y                   0    0      0       0      0        0         0     2
  ax-q-001-dl                                         Y                   0    0      0       0      0        0         0     2
  ax-q-group-001                                      Y                   559  682    123     1.26m  1.52m    261k      0     2
  ax-q-group-001-dl                                   Y                   0    0      0       0      0        0         0     2
  ec8457f8-8517-411d-ae4d-94ef040d93f3:0.0                 Y        Y     0    0      0       0      0        0         1     2
  qmfc-v2-hb-ip-10-17-8-245.aeip.apigee.net.31883.1        Y        Y     0    184    184     0      88.9k    88.9k     1     2
  qmfc-v2-ip-10-17-8-245.aeip.apigee.net.31883.1           Y        Y     0    2      2       0      758      758       1     2
  qmfc-v2-ui-ip-10-17-8-245.aeip.apigee.net.31883.1        Y        Y     0    3.58k  3.58k   0      6.82m    6.82m     1     3
  reply-ip-10-17-8-245.aeip.apigee.net.31883.1             Y        Y     0    100    100     0      48.4k    48.4k     1     2
  topic-ip-10-17-8-245.aeip.apigee.net.31883.1             Y        Y     0    0      0       0      0        0         1     3


Thanks,
Ram


Re: qpid-cpp-0.35 errors

2018-10-25 Thread rammohan ganapavarapu
Any idea when we would see this error?

On Tue, Oct 23, 2018 at 10:42 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> That error i get it but what is this error i am seeing in client ?
>
>
> 2018-10-23 17:10:57,997  IoReceiver - /10.68.94.134:5672 ERROR
> o.a.q.c.AMQConnectionDelegate_0_10 - AMQConnectionDelegate_0_10.closed() :
> connection exception: conn:5668fdd1
> org.apache.qpid.transport.ConnectionException: illegal-argument: Value for
> replyText is too large
> at org.apache.qpid.transport.Connection.closeCode(Connection.java:556)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:75)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:40)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.ConnectionClose.dispatch(ConnectionClose.java:91)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:49)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:40)
> ~[qpid-common-0.28.jar:na]
> at org.apache.qpid.transport.Method.delegate(Method.java:163)
> ~[qpid-common-0.28.jar:na]
> at org.apache.qpid.transport.Connection.received(Connection.java:392)
> ~[qpid-common-0.28.jar:na]
> at org.apache.qpid.transport.Connection.received(Connection.java:62)
> ~[qpid-common-0.28.jar:na]
> at org.apache.qpid.transport.network.Assembler.emit(Assembler.java:97)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.network.Assembler.assemble(Assembler.java:183)
> ~[qpid-common-0.28.jar:na]
> at org.apache.qpid.transport.network.Assembler.frame(Assembler.java:131)
> ~[qpid-common-0.28.jar:na]
> at org.apache.qpid.transport.network.Frame.delegate(Frame.java:128)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.network.Assembler.received(Assembler.java:102)
> ~[qpid-common-0.28.jar:na]
> at org.apache.qpid.transport.network.Assembler.received(Assembler.java:44)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.network.InputHandler.next(InputHandler.java:189)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:105)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:44)
> ~[qpid-common-0.28.jar:na]
> at
> org.apache.qpid.transport.network.io.IoReceiver.run(IoReceiver.java:161)
> ~[qpid-common-0.28.jar:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
>
> Ram
>
> On Mon, Oct 22, 2018 at 3:41 PM Chester  wrote:
>
>> 2018-10-22 07:39:02 [Broker] warning Exchange ex-group-1 cannot deliver to
>>
>> queue prod-queue-01: resource-limit-exceeded: Maximum depth exceeded on
>>
>> prod-queue-01: current=[count: 12567, size: 1073741816], max=[count:
>>
>> 100, size: 1073741824]
>>
>> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
>>
>>
>> Looks like you're hitting a queue size (bytes) limit of ~1GB. See the
>> Broker Book for the configuration option that sets this limit [1].
>>
>> [1]
>>
>> https://qpid.apache.org/releases/qpid-cpp-1.38.0/cpp-broker/book/ch01s02.html#CheatSheetforconfiguringQueueOptions-ApplyingQueueSizingConstraints
>>
>>
>> On Mon, Oct 22, 2018 at 5:21 PM rammohan ganapavarapu <
>> rammohanga...@gmail.com> wrote:
>>
>> > Hi,
>> >
>> > I am seeing lot of these messages in my qpidd logs and i am not sure
>> why am
>> > i seeing these, can some one explain?
>> >
>> > 2018-10-22 07:39:02 [Broker] warning Exchange ex-group-1 cannot deliver
>> to
>> > queue prod-queue-01: resource-limit-exceeded: Maximum depth exceeded on
>> > prod-queue-01: current=[count: 12567, size: 1073741816], max=[count:
>> > 100, size: 1073741824]
>> > (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
>> >
>> > 2018-10-22 07:39:02 [Broker] error Execution exception:
>> > resource-limit-exceeded: Maximum depth exceeded on prod-queue-01:
>> > current=[count: 12567, size: 1073741816], max=[count: 100, size:
>> > 1073741824]
>> > (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
>> >
>> > 2018-10-22 08:05:30 [Broker] error Channel exception: not-attached:
>> Channel
>> > 0 is not attached
>> >
>> >
>> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
>> >
>> 

Re: qpid-cpp-0.35 errors

2018-10-23 Thread rammohan ganapavarapu
That error I understand, but what is this error I am seeing in the client?


2018-10-23 17:10:57,997  IoReceiver - /10.68.94.134:5672 ERROR
o.a.q.c.AMQConnectionDelegate_0_10 - AMQConnectionDelegate_0_10.closed() :
connection exception: conn:5668fdd1
org.apache.qpid.transport.ConnectionException: illegal-argument: Value for
replyText is too large
at org.apache.qpid.transport.Connection.closeCode(Connection.java:556)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:75)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.connectionClose(ConnectionDelegate.java:40)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionClose.dispatch(ConnectionClose.java:91)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:49)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.ConnectionDelegate.control(ConnectionDelegate.java:40)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.Method.delegate(Method.java:163)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.Connection.received(Connection.java:392)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.Connection.received(Connection.java:62)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.emit(Assembler.java:97)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.assemble(Assembler.java:183)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.frame(Assembler.java:131)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Frame.delegate(Frame.java:128)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.received(Assembler.java:102)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.Assembler.received(Assembler.java:44)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.InputHandler.next(InputHandler.java:189)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:105)
~[qpid-common-0.28.jar:na]
at
org.apache.qpid.transport.network.InputHandler.received(InputHandler.java:44)
~[qpid-common-0.28.jar:na]
at org.apache.qpid.transport.network.io.IoReceiver.run(IoReceiver.java:161)
~[qpid-common-0.28.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]

Ram

On Mon, Oct 22, 2018 at 3:41 PM Chester  wrote:

> 2018-10-22 07:39:02 [Broker] warning Exchange ex-group-1 cannot deliver to
>
> queue prod-queue-01: resource-limit-exceeded: Maximum depth exceeded on
>
> prod-queue-01: current=[count: 12567, size: 1073741816], max=[count:
>
> 100, size: 1073741824]
>
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
>
>
> Looks like you're hitting a queue size (bytes) limit of ~1GB. See the
> Broker Book for the configuration option that sets this limit [1].
>
> [1]
>
> https://qpid.apache.org/releases/qpid-cpp-1.38.0/cpp-broker/book/ch01s02.html#CheatSheetforconfiguringQueueOptions-ApplyingQueueSizingConstraints
>
>
> On Mon, Oct 22, 2018 at 5:21 PM rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Hi,
> >
> > I am seeing lot of these messages in my qpidd logs and i am not sure why
> am
> > i seeing these, can some one explain?
> >
> > 2018-10-22 07:39:02 [Broker] warning Exchange ex-group-1 cannot deliver
> to
> > queue prod-queue-01: resource-limit-exceeded: Maximum depth exceeded on
> > prod-queue-01: current=[count: 12567, size: 1073741816], max=[count:
> > 100, size: 1073741824]
> > (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
> >
> > 2018-10-22 07:39:02 [Broker] error Execution exception:
> > resource-limit-exceeded: Maximum depth exceeded on prod-queue-01:
> > current=[count: 12567, size: 1073741816], max=[count: 100, size:
> > 1073741824]
> > (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)
> >
> > 2018-10-22 08:05:30 [Broker] error Channel exception: not-attached:
> Channel
> > 0 is not attached
> >
> >
> (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)
> >
> >
> > 2018-10-22 07:37:02 [Broker] error Connection exception: framing-error:
> > Queue prod-queue-01: MessageStoreImpl::store() failed: jexception 0x0803
> > wmgr::enqueue() threw JERR_WMGR_ENQDISCONT: Enqueued new dtok when
> previous
> > enqueue returned partly completed (state ENQ_PART). (This data_tok:
> > id=4682049 state=NONE)
> >
> >
> (/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
> > 2018-10-22 07:37:02 [Protocol] error Connection
> > qpid.10.68.94.117:5672-10.66.244.23:46574 closed by error: Queue
> >

qpid-cpp-0.35 errors

2018-10-22 Thread rammohan ganapavarapu
Hi,

I am seeing a lot of these messages in my qpidd logs and I am not sure why I
am seeing them; can someone explain?

2018-10-22 07:39:02 [Broker] warning Exchange ex-group-1 cannot deliver to
queue prod-queue-01: resource-limit-exceeded: Maximum depth exceeded on
prod-queue-01: current=[count: 12567, size: 1073741816], max=[count:
100, size: 1073741824]
(/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)

2018-10-22 07:39:02 [Broker] error Execution exception:
resource-limit-exceeded: Maximum depth exceeded on prod-queue-01:
current=[count: 12567, size: 1073741816], max=[count: 100, size:
1073741824]
(/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)

2018-10-22 08:05:30 [Broker] error Channel exception: not-attached: Channel
0 is not attached
(/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/amqp_0_10/SessionHandler.cpp:39)


2018-10-22 07:37:02 [Broker] error Connection exception: framing-error:
Queue prod-queue-01: MessageStoreImpl::store() failed: jexception 0x0803
wmgr::enqueue() threw JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous
enqueue returned partly completed (state ENQ_PART). (This data_tok:
id=4682049 state=NONE)
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)
2018-10-22 07:37:02 [Protocol] error Connection
qpid.10.68.94.117:5672-10.66.244.23:46574 closed by error: Queue
prod-queue-01: MessageStoreImpl::store() failed: jexception 0x0803
wmgr::enqueue() threw JERR_WMGR_ENQDISCONT: Enqueued new dtok when previous
enqueue returned partly completed (state ENQ_PART). (This data_tok:
id=4682049 state=NONE)
(/home/rganapavarapu/rpmbuild/BUILD/qpid-cpp-1.35.0/src/qpid/linearstore/MessageStoreImpl.cpp:1211)(501)
2018-10-22 07:37:02 [Protocol] error Connection
qpid.10.68.94.117:5672-10.66.244.23:46574 closed by error:
illegal-argument: Value for replyText is too large(320)
2018-10-22 07:37:05 [Broker] notice Broker (pid=35293) shut-down


Thanks,
Ram


Re: qpid journal files

2018-01-30 Thread rammohan ganapavarapu
Any help understanding this?

On Jan 29, 2018 9:34 AM, "rammohan ganapavarapu" <rammohanga...@gmail.com>
wrote:

> Hi,
>
> I am using qpid-cpp-server-1.35 version and i see lot journal files in
> "/data/qls/p001/efp/2048k" dir and some files under
> "/data/qls/p001/efp/2048k/in_use" also i see symlinks under
> "/data/qls/jrnl2/*/*", can some one explain what are these different
> locations and what are the current serving file(s)? how to set limit on the
> number of those files and how to purge unused files with out stoping
> processes.
>
> Thanks,
> Ram
>


qpid journal files

2018-01-29 Thread rammohan ganapavarapu
Hi,

I am using qpid-cpp-server 1.35 and I see a lot of journal files in the
"/data/qls/p001/efp/2048k" dir and some files under
"/data/qls/p001/efp/2048k/in_use"; I also see symlinks under
"/data/qls/jrnl2/*/*". Can someone explain what these different locations are
and which are the currently active file(s)? How do I set a limit on the number
of those files, and how do I purge unused files without stopping the processes?

Thanks,
Ram


Re: How to resize queue store in qpid-cpp-1.35.0

2017-03-09 Thread rammohan ganapavarapu
Jakub,

in older versions there was a qpid/libexec/resize tool, right? Do we need, or
is there, a similar tool in the 1.35 version?

Ram

On Mar 9, 2017 3:33 AM, "Jakub Scholz" <ja...@scholz.cz> wrote:

> Hi,
>
> What do you mean with resizing queue store? With linear store, the journal
> files should be created "on demand", you don't have to do any resizing. The
> only limits for your queue are the size and count limits. There are no
> store limits like they used to be with the old legacy store.
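>
> For example, the per-queue limits can be set when the queue is created,
> something like this (the queue name here is just a placeholder):
>
>   qpid-config add queue my-queue --durable --max-queue-size=1073741824 --max-queue-count=1000000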
>
> Jakub
>
> On Thu, Mar 9, 2017 at 12:17 AM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Hi,
> >
> > I dont see resize tool in linearstore, how do i resize queue store in
> > qpid-cpp-1.35 version?
> >
> > Thanks,
> > Ram
> >
>


How to resize queue store in qpid-cpp-1.35.0

2017-03-08 Thread rammohan ganapavarapu
Hi,

I don't see a resize tool in linearstore; how do I resize a queue store in
qpid-cpp 1.35?

Thanks,
Ram


Re: qpid-cpp-1.35 linearstore build error

2016-12-08 Thread rammohan ganapavarapu
Chris,

Thank you, I have tried that but it did not work on SLES 11, so this is what
I did to make it work.

It looks like finddb.cmake is looking for the file `/usr/lib64/libdb_cxx.so`,
but my db4 installation only has "/usr/lib64/libdb_cxx-4.3.so", so I created
a symlink pointing at `/usr/lib64/libdb_cxx-4.3.so`.
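
In case anyone else hits this, the workaround was roughly (paths are from my
SLES 11 box, adjust as needed):

  sudo ln -s /usr/lib64/libdb_cxx-4.3.so /usr/lib64/libdb_cxx.so
  # then clear the CMake cache / re-run cmake so the BerkeleyDB check runs again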

Ram

On Thu, Dec 8, 2016 at 5:39 PM, Chris Richardson <c...@fourc.eu> wrote:

> Since this is more a question about cmake/c++ and not strictly a question
> about Qpid I'll break userlist etiquette (again) and, with apologies to the
> real Qpid team, stick my nose in where it doesn't belong...
>
> I've had this same problem installing on Gentoo since if your BerkeleyDB is
> not installed in one of the following paths (as yours is not) it will not
> be found by the qpid-cpp build system:
>/usr/local/include/db4
>/usr/local/include/libdb4
>/usr/local/include
>/usr/include/db4
>/usr/include/libdb4
>/usr/include
>
> You can see how I've solved this on Gentoo here:
> https://github.com/fourceu/qpid-portage-overlay/blob/
> master/net-misc/qpid-cpp/qpid-cpp-0.34-r1.ebuild
> Line 56 begins a block which finds the relevant headers and passes them to
> the Qpid build system.
>
> In your case a similar solution might be to add
> "-DDB_CXX_INCLUDE_DIR=/usr/include/db43"
> to your cmake command line, or set the variable in the CMake GUI if that's
> what you're using.
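>
> For example (paths are illustrative, adjust to your qpid-cpp source and
> build directories):
>
>   cmake ../qpid-cpp-1.35.0 -DDB_CXX_INCLUDE_DIR=/usr/include/db43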
>
> HTH
>
> /Chris
>
>
> On 7 December 2016 at 00:53, rammohan ganapavarapu <
> rammohanga...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am trying to build linear-store for qpid-cpp-1.35  version and i am
> > getting bellow error.
> >
> > -- Legacystore requires BerkeleyDB for C++ which is absent.
> > -- Legacystore is excluded from build.
> > -- Linearstore requires BerkeleyDB for C++ which is absent.
> > CMake Error at src/linearstore.cmake:65 (message):
> >   Linearstore requires BerkeleyDB for C++ which is absent.
> > Call Stack (most recent call first):
> >   src/CMakeLists.txt:1274 (include)
> >
> >
> >
> > I have installed necessary rpms (db4), is there a way for cmake to tell
> > where to find db4 libraries? any options i can use to make it work?
> >
> >
> > ec2-user@ip-10-17-8-126:~/files/rpmbuild/SPECS> rpm -ql
> db43-4.3.29-125.17
> > /usr/lib64/libdb-4.3.so
> > /usr/lib64/libdb_cxx-4.3.so
> > /usr/share/doc/packages/db43
> > /usr/share/doc/packages/db43/LICENSE
> > /usr/share/doc/packages/db43/README
> > /usr/share/doc/packages/db43/images
> > /usr/share/doc/packages/db43/images/api.gif
> > /usr/share/doc/packages/db43/images/next.gif
> > /usr/share/doc/packages/db43/images/prev.gif
> > /usr/share/doc/packages/db43/images/ps.gif
> > /usr/share/doc/packages/db43/images/ref.gif
> > /usr/share/doc/packages/db43/images/sleepycat.gif
> > /usr/share/doc/packages/db43/index.html
> > /usr/share/doc/packages/db43/sleepycat
> > /usr/share/doc/packages/db43/sleepycat/legal.html
> > /usr/share/doc/packages/db43/sleepycat/license.html
> > ec2-user@ip-10-17-8-126:~/files/rpmbuild/SPECS> rpm -ql
> > db43-devel-4.3.29-125.17
> > /usr/include/db43
> > /usr/include/db43/db.h
> > /usr/include/db43/db_185.h
> > /usr/include/db43/db_cxx.h
> > /usr/lib64/libdb-4.3.a
> > /usr/lib64/libdb_cxx-4.3.a
> >
> > Thanks,
> > Ram
> >
>
>
>
> --
>
> *Chris Richardson*, System Architect
> c...@fourc.eu
>
>
> *FourC AS, Vestre Rosten 81, Trekanten, NO-7075 Tiller, Norwaywww.fourc.eu
> <http://www.fourc.eu/>*
>
> *Follow us on LinkedIn <http://bit.ly/fourcli>, Facebook
> <http://bit.ly/fourcfb>, Google+ <http://bit.ly/fourcgp> and Twitter
> <http://bit.ly/fourctw>!*
>


qpid-cpp-1.35 linearstore build error

2016-12-06 Thread rammohan ganapavarapu
Hi,

I am trying to build linearstore for qpid-cpp 1.35 and I am getting the
error below.

-- Legacystore requires BerkeleyDB for C++ which is absent.
-- Legacystore is excluded from build.
-- Linearstore requires BerkeleyDB for C++ which is absent.
CMake Error at src/linearstore.cmake:65 (message):
  Linearstore requires BerkeleyDB for C++ which is absent.
Call Stack (most recent call first):
  src/CMakeLists.txt:1274 (include)



I have installed the necessary RPMs (db4). Is there a way to tell cmake where
to find the db4 libraries? Are there any options I can use to make it work?


ec2-user@ip-10-17-8-126:~/files/rpmbuild/SPECS> rpm -ql db43-4.3.29-125.17
/usr/lib64/libdb-4.3.so
/usr/lib64/libdb_cxx-4.3.so
/usr/share/doc/packages/db43
/usr/share/doc/packages/db43/LICENSE
/usr/share/doc/packages/db43/README
/usr/share/doc/packages/db43/images
/usr/share/doc/packages/db43/images/api.gif
/usr/share/doc/packages/db43/images/next.gif
/usr/share/doc/packages/db43/images/prev.gif
/usr/share/doc/packages/db43/images/ps.gif
/usr/share/doc/packages/db43/images/ref.gif
/usr/share/doc/packages/db43/images/sleepycat.gif
/usr/share/doc/packages/db43/index.html
/usr/share/doc/packages/db43/sleepycat
/usr/share/doc/packages/db43/sleepycat/legal.html
/usr/share/doc/packages/db43/sleepycat/license.html
ec2-user@ip-10-17-8-126:~/files/rpmbuild/SPECS> rpm -ql
db43-devel-4.3.29-125.17
/usr/include/db43
/usr/include/db43/db.h
/usr/include/db43/db_185.h
/usr/include/db43/db_cxx.h
/usr/lib64/libdb-4.3.a
/usr/lib64/libdb_cxx-4.3.a

Thanks,
Ram


Re: Proton Go client

2016-11-25 Thread rammohan ganapavarapu
Alan,

I have upgraded my Go to 1.5 and I don't see that error now.

Thanks,
Ram

On Fri, Nov 25, 2016 at 12:32 PM, Alan Conway <acon...@redhat.com> wrote:

> On Fri, 2016-11-25 at 14:44 -0500, Alan Conway wrote:
> > On Fri, 2016-11-25 at 11:15 -0800, rammohan ganapavarapu wrote:
> > >
> > > Alan,
> > >
> > > I have the same issue and raised a jira here "
> > > https://issues.apache.org/jira/browse/PROTON-1356; did you get
> > > chance
> > > to
> > > take a look?
> > >
> >
> > Looking now, but not sure what is happening,
>
> I just committed a possible fix, the problem identifier is included via
> a "shim" C header file in the go package. It's possible that older
> versions of go deal with this differently (you are on 1.2, I'm on 1.6).
> The shim is no longer needed so I removed it - please try again and let
> me know if this fixes it.
>
> > I can't reproduce and I
> > can't see any problem with the line that has the error. Can you add
> > the
> > output from:
> >
> > go version
> > cc -v
> >
> > to the JIRA?
> >
> > >
> > > Ram
> > >
> > > On Fri, Nov 25, 2016 at 11:02 AM, Alan Conway <acon...@redhat.com>
> > > wrote:
> > >
> > > >
> > > >
> > > > On Thu, 2016-11-24 at 20:26 +0100, Ulf Lilleengen wrote:
> > > > >
> > > > >
> > > > > On 24. nov. 2016 14:34, Alex Kritikos wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > hello all,
> > > > > >
> > > > > > i am interested in finding out more about the status of the
> > > > > > Go
> > > > > > client, specifically level of completeness and whether it can
> > > > > > be
> > > > > > installed as
> > > > > >
> > > > > > go get qpid.apache.org/amqp
> > > > > > # qpid.apache.org/amqp
> > > > > > GOPATH/src/qpid.apache.org/amqp/error.go:22:11: fatal error:
> > > > > > 'proton/error.h' file not found
> > > > > >  #include 
> > > > > >   ^
> > > > > > 1 error generated.
> > > > > >
> > > > >
> > > > > I just tried the go bindings for the first time today, and I
> > > > > think
> > > > > you
> > > > > need to have the proton c library installed. At least it works
> > > > > for
> > > > > me
> > > > > when I have the 'qpid-proton-c-devel' package installed (on
> > > > > fedora).
> > > >
> > > > That is correct, you need to have the qpid-proton-c libraries
> > > > installed, either from packages or from source at
> > > > qpid.apache.org/proton. The 'go get' code should work with proton
> > > > version 0.10 and higher.
> > > >
> > > > Please mail me, or this list, or raise an issue at
> > > > issues.apache.org if
> > > > you have trouble. The Go library is relatively new but should be
> > > > ready
> > > > to use, and I am keen to get feedback on improvements or issues
> > > > to
> > > > fix.
> > > >
> > > > >
> > > > >
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > Is there a plan to address it?
> > > > > >
> > > > > > Alex
> > > > > >
> > > > > >
> > > > > > -
> > > > > > 
> > > > > > 
> > > > > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > > > > > For additional commands, e-mail: users-h...@qpid.apache.org
> > > > > >
> > > > >
> > > >
> > > >
> > > > -
> > > > 
> > > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > > > For additional commands, e-mail: users-h...@qpid.apache.org
> > > >
> > > >
> >
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: Proton Go client

2016-11-25 Thread rammohan ganapavarapu
Updated the JIRA. Can you share your environment details? Maybe I will try
the same setup and see if it works for me.

Ram

On Fri, Nov 25, 2016 at 11:44 AM, Alan Conway <acon...@redhat.com> wrote:

> On Fri, 2016-11-25 at 11:15 -0800, rammohan ganapavarapu wrote:
> > Alan,
> >
> > I have the same issue and raised a jira here "
> > https://issues.apache.org/jira/browse/PROTON-1356; did you get chance
> > to
> > take a look?
> >
>
> Looking now, but not sure what is happening, I can't reproduce and I
> can't see any problem with the line that has the error. Can you add the
> output from:
>
> go version
> cc -v
>
> to the JIRA?
>
> > Ram
> >
> > On Fri, Nov 25, 2016 at 11:02 AM, Alan Conway <acon...@redhat.com>
> > wrote:
> >
> > >
> > > On Thu, 2016-11-24 at 20:26 +0100, Ulf Lilleengen wrote:
> > > >
> > > > On 24. nov. 2016 14:34, Alex Kritikos wrote:
> > > > >
> > > > >
> > > > > hello all,
> > > > >
> > > > > i am interested in finding out more about the status of the Go
> > > > > client, specifically level of completeness and whether it can
> > > > > be
> > > > > installed as
> > > > >
> > > > > go get qpid.apache.org/amqp
> > > > > # qpid.apache.org/amqp
> > > > > GOPATH/src/qpid.apache.org/amqp/error.go:22:11: fatal error:
> > > > > 'proton/error.h' file not found
> > > > >  #include 
> > > > >   ^
> > > > > 1 error generated.
> > > > >
> > > >
> > > > I just tried the go bindings for the first time today, and I
> > > > think
> > > > you
> > > > need to have the proton c library installed. At least it works
> > > > for
> > > > me
> > > > when I have the 'qpid-proton-c-devel' package installed (on
> > > > fedora).
> > >
> > > That is correct, you need to have the qpid-proton-c libraries
> > > installed, either from packages or from source at
> > > qpid.apache.org/proton. The 'go get' code should work with proton
> > > version 0.10 and higher.
> > >
> > > Please mail me, or this list, or raise an issue at
> > > issues.apache.org if
> > > you have trouble. The Go library is relatively new but should be
> > > ready
> > > to use, and I am keen to get feedback on improvements or issues to
> > > fix.
> > >
> > > >
> > > >
> > > > >
> > > > >
> > > > >
> > > > > Is there a plan to address it?
> > > > >
> > > > > Alex
> > > > >
> > > > >
> > > > > -
> > > > > 
> > > > > 
> > > > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > > > > For additional commands, e-mail: users-h...@qpid.apache.org
> > > > >
> > > >
> > >
> > >
> > > -
> > > 
> > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > > For additional commands, e-mail: users-h...@qpid.apache.org
> > >
> > >
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: Proton Go client

2016-11-25 Thread rammohan ganapavarapu
Alan,

I have the same issue and raised a JIRA here:
https://issues.apache.org/jira/browse/PROTON-1356
Did you get a chance to take a look?

Ram

On Fri, Nov 25, 2016 at 11:02 AM, Alan Conway  wrote:

> On Thu, 2016-11-24 at 20:26 +0100, Ulf Lilleengen wrote:
> > On 24. nov. 2016 14:34, Alex Kritikos wrote:
> > >
> > > hello all,
> > >
> > > i am interested in finding out more about the status of the Go
> > > client, specifically level of completeness and whether it can be
> > > installed as
> > >
> > > go get qpid.apache.org/amqp
> > > # qpid.apache.org/amqp
> > > GOPATH/src/qpid.apache.org/amqp/error.go:22:11: fatal error:
> > > 'proton/error.h' file not found
> > >  #include 
> > >   ^
> > > 1 error generated.
> > >
> >
> > I just tried the go bindings for the first time today, and I think
> > you
> > need to have the proton c library installed. At least it works for
> > me
> > when I have the 'qpid-proton-c-devel' package installed (on fedora).
>
> That is correct, you need to have the qpid-proton-c libraries
> installed, either from packages or from source at
> qpid.apache.org/proton. The 'go get' code should work with proton
> version 0.10 and higher.
>
> Please mail me, or this list, or raise an issue at issues.apache.org if
> you have trouble. The Go library is relatively new but should be ready
> to use, and I am keen to get feedback on improvements or issues to fix.
>
> >
> > >
> > >
> > > Is there a plan to address it?
> > >
> > > Alex
> > >
> > >
> > > -
> > > 
> > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > > For additional commands, e-mail: users-h...@qpid.apache.org
> > >
> >
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid-cpp-1.35 rpm build for SUSE

2016-11-23 Thread rammohan ganapavarapu
Chris,

SLES 12 is working fine; I had issues on SLES 11. Can you try on 11?

Ram

On Nov 23, 2016 5:22 AM, "Chris Richardson" <c...@fourc.eu> wrote:

> Ram,
>
> I've tried to reproduce your problem with no success. Here are the steps I
> took:
> * Created a new EC2 instance of SLES 12 SP1, ami-cae4b7dd on am3.large
> dual core instance type
> * Installed some prerequisites
>   sudo zypper install cmake
>   sudo zypper install gcc-c++
> * Downloaded and built qpid-proton-0-14 as follows:
>   mkdir -p proton/build
>   cd proton
>   wget http://archive.apache.org/dist/qpid/proton/0.14.0/qpid-
> proton-0.14.0.tar.gz
>   tar zxf qpid-proton-0.14.0.tar.gz
>   cd build/
>   cmake ../qpid-proton-0.14.0
>   make -j3
>
> This built with no problems.
>
> Then I noticed from the pathnames in your original post that you're using
> rpmbuild so I tried that too. I haven't used rpmbuild before but it seems
> you need a specfile, so I hashed one up (see attached). Then I ran
>
> $rpmbuild -ba qpid-proton-0.14.0.spec
>
> and the build succeded again, terminating with the messages:
> Wrote: /home/ec2-user/rpmbuild/SRPMS/qpid-proton-0.14.0-1.src.rpm
> Wrote: /home/ec2-user/rpmbuild/RPMS/x86_64/qpid-proton-0.14.0-1.x86_64.rpm
>
> In both cases the compilation of connection.cpp (which failed in your
> build) happened earlier in my output (27%) than in yours so I guess you're
> including some extra cmake settings you haven't mentioned. This could
> affect the build outcome.
>
> Another possibility common to c++ it that your build system is "dirty" in
> some way, for instance if you have a previous version of proton already
> installed. That could result in the compiler picking up a conflicting
> version of a header file from eg: /usr/include instead of the one it should
> find in your build tree. Might be worth trying this again on a clean system.
>
> Hope that helps!
>
> Chris
>
> On 21 November 2016 at 16:32, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> Chris,
>>
>> Thanks for trying to help me, below are my env details.
>>
>> OS: SUSE Linux Enterprise Server 12 SP1  (x86_64) - Kernel \r (\l).
>> cmake -version
>> cmake version 2.8.12.1
>> gcc version 4.8.5 (SUSE Linux)
>>
>>
>>
>> Ram
>>
>> On Fri, Nov 18, 2016 at 6:16 PM, Chris Richardson <c...@fourc.eu> wrote:
>>
>> > Hi Ram,
>> >
>> > It looks like you're not entirely alone with this problem:
>> > http://stackoverflow.com/questions/39708294/error-changes-meaning-when-
>> > installing-apache-qpid
>> > notably also SUSE, unfortunately no solution posted.
>> >
>> > May I suggest you post some more info about your environment,
>> particularly
>> > arch (whether you're on 32 or 64 bit) and what compiler (incl. version)
>> > you're using. Steps to reproduce would also help. Unfortunately I don't
>> > have SUSE but I'd be happy to test on the distros I do have if it's of
>> any
>> > benefit.
>> >
>> > /Chris
>> >
>> >
>> >
>> > On 18 November 2016 at 23:34, rammohan ganapavarapu <
>> > rammohanga...@gmail.com
>> > > wrote:
>> >
>> > > Hi,
>> > >
>> > > If any one built qpid-cpp rpms for suse please share your experience,
>> if
>> > > you already have a rpms to download please share.
>> > >
>> > > Ram
>> > >
>> > > On Tue, Nov 8, 2016 at 8:19 AM, rammohan ganapavarapu <
>> > > rammohanga...@gmail.com> wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > I am trying to build qpid-cpp v1.35 rpms for SUSE, while i am
>> building
>> > > > qpid-proton rpm i am getting this compilation error any idea why i
>> am
>> > > > getting this error and how to fix it?
>> > > >
>> > > > [ 34%] Building CXX object proton-c/bindings/cpp/
>> > > > CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o
>> > > > In file included from /mnt/ec2-user/rpmbuild/BUILD/
>> > > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
>> > > ././link.hpp:31,
>> > > >  from /mnt/ec2-user/rpmbuild/BUILD/
>> > > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
>> > > > ./receiver.hpp:27,
>> > > >  from /mnt/ec2-user/rpmbuild/BUILD/
>> > > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
>> &

Re: qpid-cpp-1.35 rpm build for SUSE

2016-11-21 Thread rammohan ganapavarapu
Chris,

Thanks for trying to help me, below are my env details.

OS: SUSE Linux Enterprise Server 12 SP1  (x86_64) - Kernel \r (\l).
cmake -version
cmake version 2.8.12.1
gcc version 4.8.5 (SUSE Linux)



Ram

On Fri, Nov 18, 2016 at 6:16 PM, Chris Richardson <c...@fourc.eu> wrote:

> Hi Ram,
>
> It looks like you're not entirely alone with this problem:
> http://stackoverflow.com/questions/39708294/error-changes-meaning-when-
> installing-apache-qpid
> notably also SUSE, unfortunately no solution posted.
>
> May I suggest you post some more info about your environment, particularly
> arch (whether you're on 32 or 64 bit) and what compiler (incl. version)
> you're using. Steps to reproduce would also help. Unfortunately I don't
> have SUSE but I'd be happy to test on the distros I do have if it's of any
> benefit.
>
> /Chris
>
>
>
> On 18 November 2016 at 23:34, rammohan ganapavarapu <
> rammohanga...@gmail.com
> > wrote:
>
> > Hi,
> >
> > If any one built qpid-cpp rpms for suse please share your experience, if
> > you already have a rpms to download please share.
> >
> > Ram
> >
> > On Tue, Nov 8, 2016 at 8:19 AM, rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I am trying to build qpid-cpp v1.35 rpms for SUSE, while i am building
> > > qpid-proton rpm i am getting this compilation error any idea why i am
> > > getting this error and how to fix it?
> > >
> > > [ 34%] Building CXX object proton-c/bindings/cpp/
> > > CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o
> > > In file included from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> > ././link.hpp:31,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> > > ./receiver.hpp:27,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> > session.hpp:27,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/
> > connection.hpp:28,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
> > > /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> > > bindings/cpp/include/proton/././././sender_options.hpp:87: error:
> > > declaration of ‘proton::sender_options& proton::sender_options::
> > > delivery_mode(proton::delivery_mode)’
> > > /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> > > bindings/cpp/include/proton/./././././delivery_mode.hpp:30: error:
> > > changes meaning of ‘delivery_mode’ from ‘struct proton::delivery_mode’
> > > In file included from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> > ././link.hpp:32,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> > > ./receiver.hpp:27,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> > session.hpp:27,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/
> > connection.hpp:28,
> > >  from /mnt/ec2-user/rpmbuild/BUILD/
> > > qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
> > > /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> > > bindings/cpp/include/proton/././././receiver_options.hpp:83: error:
> > > declaration of ‘proton::receiver_options& proton::receiver_options::
> > > delivery_mode(proton::delivery_mode)’
> > > /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> > > bindings/cpp/include/proton/./././././delivery_mode.hpp:30: error:
> > > changes meaning of ‘delivery_mode’ from ‘struct proton::delivery_mode’
> > > make[2]: *** [proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.
> > dir/src/connection.cpp.o]
> > > Error 1
> > > make[1]: *** [proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.
> dir/all]
> > > Error 2
> > >
> > >
> > > Ram
> > >
> >
>
>
>
> --
>
> *Chris Richardson*, System Architect
> c...@fourc.eu
>
>
> *FourC AS, Vestre Rosten 81, Trekanten, NO-7075 Tiller, Norwaywww.fourc.eu
> <http://www.fourc.eu/>*
>
> *Follow us on LinkedIn <http://bit.ly/fourcli>, Facebook
> <http://bit.ly/fourcfb>, Google+ <http://bit.ly/fourcgp> and Twitter
> <http://bit.ly/fourctw>!*
>


Re: qpid-cpp-1.35 rpm build for SUSE

2016-11-18 Thread rammohan ganapavarapu
Hi,

If anyone has built qpid-cpp RPMs for SUSE, please share your experience; if
you already have RPMs available for download, please share them.

Ram

On Tue, Nov 8, 2016 at 8:19 AM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Hi,
>
> I am trying to build qpid-cpp v1.35 rpms for SUSE, while i am building
> qpid-proton rpm i am getting this compilation error any idea why i am
> getting this error and how to fix it?
>
> [ 34%] Building CXX object proton-c/bindings/cpp/
> CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o
> In file included from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././link.hpp:31,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> ./receiver.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./session.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/connection.hpp:28,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/././././sender_options.hpp:87: error:
> declaration of ‘proton::sender_options& proton::sender_options::
> delivery_mode(proton::delivery_mode)’
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/./././././delivery_mode.hpp:30: error:
> changes meaning of ‘delivery_mode’ from ‘struct proton::delivery_mode’
> In file included from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././link.hpp:32,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> ./receiver.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./session.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/connection.hpp:28,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/././././receiver_options.hpp:83: error:
> declaration of ‘proton::receiver_options& proton::receiver_options::
> delivery_mode(proton::delivery_mode)’
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/./././././delivery_mode.hpp:30: error:
> changes meaning of ‘delivery_mode’ from ‘struct proton::delivery_mode’
> make[2]: *** 
> [proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o]
> Error 1
> make[1]: *** [proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.dir/all]
> Error 2
>
>
> Ram
>


Re: qpid monitoring tool in GO

2016-11-18 Thread rammohan ganapavarapu
Here is the JIRA with attachments
https://issues.apache.org/jira/browse/PROTON-1356

Ram

On Fri, Nov 18, 2016 at 9:12 AM, Alan Conway <acon...@redhat.com> wrote:

> On Fri, 2016-11-18 at 07:23 -0800, rammohan ganapavarapu wrote:
> > no I was talking about email not able to deliver because the
> > attachment it
> > big, I was trying to send the tar files that you have asked, not sure
> > if
> > you received both the tar files.
>
> I only got one usr_include_proton.tgz. Open a JIRA for this issue and
> attach the files to that, that's better than putting them on the list.
>
> https://issues.apache.org/jira/
>
> That will make it easier to track as well.
>
> >
> > Ram
> >
> > On Nov 17, 2016 5:54 AM, "Alan Conway" <acon...@redhat.com> wrote:
> >
> > >
> > > On Tue, 2016-11-15 at 16:04 -0800, rammohan ganapavarapu wrote:
> > > >
> > > > Alan,
> > > >
> > > > Looks like it didn't deliver the message as its > 1mb, let me
> > > > send
> > > > you as two parts.
> > >
> > > Are you able to send smaller messages?
> > >
> > > If so that sounds like a bug, can you raise a JIRA on https://issue
> > > s.ap
> > > ache.org/jira for project: Qpid Proton, component: go-binding
> > >
> > > Add all the info you have, sample code if possible. I'll get on it
> > > ASAP.
> > >
> > > >
> > > >
> > > >
> > > > Ram
> > > >
> > > > On Tue, Nov 15, 2016 at 3:37 PM, rammohan ganapavarapu
> > > >  > > > @gmail.com> wrote:
> > > > >
> > > > > Alan,
> > > > >
> > > > > Please find the attached.
> > > > >
> > > > > Ram
> > > > >
> > > > > On Tue, Nov 15, 2016 at 8:51 AM, Alan Conway <acon...@redhat.co
> > > > > m>
> > > > > wrote:
> > > > > >
> > > > > > On Tue, 2016-11-15 at 11:48 -0500, Alan Conway wrote:
> > > > > > >
> > > > > > > On Thu, 2016-11-03 at 10:46 -0700, rammohan ganapavarapu
> > > > > > > wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > Alan,
> > > > > > > >
> > > > > > > > I have that qpid-proton-c-devel but still getting this
> > > > > > > > error
> > > > > > when
> > > > > > >
> > > > > > > >
> > > > > > > > do
> > > > > > > > go get
> > > > > > > >
> > > > > > > > # qpid.apache.org/amqp
> > > > > > > > golang/src/qpid.apache.org/amqp/types.go:33:9: expected
> > > > > > > > (unqualified)
> > > > > > > > identifier
> > > > > > > > [root@gohost ~]# rpm -qa |grep qpid-proton-c-devel
> > > > > > > > qpid-proton-c-devel-0.14.0-1.el6.x86_64
> > > > > > > >
> > > > > > > > Ram
> > > > > > > >
> > > > > > >
> > > > > > > Thanks Ram. It works for me on fedora 24 with
> > > > > > >proton-c-devel.x86_64 0.14.0-1.fc24
> > > > > > >
> > > > > > > What does `rpm -q proton-c-devel` say for you? The RHEL
> > > > > > > package
> > > > > > is
> > > > > > >
> > > > > > > probably older, I tested against past proton source
> > > > > > > releases
> > > > > > but the
> > > > > > >
> > > > > > > RHEL package may have something I missed.
> > > > > >
> > > > > > Doh! Sorry, you already did give me that info and it looks
> > > > > > like
> > > > > > the
> > > > > > same proton versin. Not sure what's happening here. Can you
> > > > > > tar
> > > > > > up your
> > > > > > golang/src/qpid.apache.org and /usr/include/proton so I can
> > > > > > look
> > > > > > at the
> > > > > > exact sources you have and see if something went astray?
> > > > > >
> > > > > > >
> > > > > >

Re: qpid monitoring tool in GO

2016-11-18 Thread rammohan ganapavarapu
No, I was talking about the email not being delivered because the attachment
is too big. I was trying to send the tar files you asked for; not sure if you
received both tar files.

Ram

On Nov 17, 2016 5:54 AM, "Alan Conway" <acon...@redhat.com> wrote:

> On Tue, 2016-11-15 at 16:04 -0800, rammohan ganapavarapu wrote:
> > Alan,
> >
> > Looks like it didn't deliver the message as its > 1mb, let me send
> > you as two parts.
>
> Are you able to send smaller messages?
>
> If so that sounds like a bug, can you raise a JIRA on https://issues.ap
> ache.org/jira for project: Qpid Proton, component: go-binding
>
> Add all the info you have, sample code if possible. I'll get on it
> ASAP.
>
> >
> >
> > Ram
> >
> > On Tue, Nov 15, 2016 at 3:37 PM, rammohan ganapavarapu  > @gmail.com> wrote:
> > > Alan,
> > >
> > > Please find the attached.
> > >
> > > Ram
> > >
> > > On Tue, Nov 15, 2016 at 8:51 AM, Alan Conway <acon...@redhat.com>
> > > wrote:
> > > > On Tue, 2016-11-15 at 11:48 -0500, Alan Conway wrote:
> > > > > On Thu, 2016-11-03 at 10:46 -0700, rammohan ganapavarapu wrote:
> > > > > >
> > > > > > Alan,
> > > > > >
> > > > > > I have that qpid-proton-c-devel but still getting this error
> > > > when
> > > > > > do
> > > > > > go get
> > > > > >
> > > > > > # qpid.apache.org/amqp
> > > > > > golang/src/qpid.apache.org/amqp/types.go:33:9: expected
> > > > > > (unqualified)
> > > > > > identifier
> > > > > > [root@gohost ~]# rpm -qa |grep qpid-proton-c-devel
> > > > > > qpid-proton-c-devel-0.14.0-1.el6.x86_64
> > > > > >
> > > > > > Ram
> > > > > >
> > > > >
> > > > > Thanks Ram. It works for me on fedora 24 with
> > > > >proton-c-devel.x86_64 0.14.0-1.fc24
> > > > >
> > > > > What does `rpm -q proton-c-devel` say for you? The RHEL package
> > > > is
> > > > > probably older, I tested against past proton source releases
> > > > but the
> > > > > RHEL package may have something I missed.
> > > >
> > > > Doh! Sorry, you already did give me that info and it looks like
> > > > the
> > > > same proton versin. Not sure what's happening here. Can you tar
> > > > up your
> > > > golang/src/qpid.apache.org and /usr/include/proton so I can look
> > > > at the
> > > > exact sources you have and see if something went astray?
> > > >
> > > > >
> > > > > >
> > > > > > On Wed, Nov 2, 2016 at 10:27 AM, Alan Conway <aconway@redhat.
> > > > com>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Tue, 2016-11-01 at 12:20 -0700, rammohan ganapavarapu
> > > > wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > I have "qpid-proton-c-0.14.0-1.el6.x86_64" installed and
> > > > my
> > > > > > > > borker
> > > > > > > > version
> > > > > > > > is "qpid-cpp-server-1.35.0-2.el6.x86_64" so the proton-c
> > > > rpm
> > > > > > > > version
> > > > > > > > is not
> > > > > > > > right?
> > > > > > > >
> > > > > > >
> > > > > > > You need qpid-proton-c-devel RPM for the header files which
> > > > are
> > > > > > > used to
> > > > > > > compile the Go binding. With that you should be able to
> > > > > > >
> > > > > > >go get qpid.apache.org/electron
> > > > > > >
> > > > > > > Note I updated it yesterday, so if it didn't work before,
> > > > try
> > > > > > > again.
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Ram
> > > > > > > >
> > > > > > > > On Thu, Oct 27, 2016 at 2:05 PM, Alan Conway <aconway@red
> > > > hat.co
> > > > &g

Re: qpid-cpp-1.35 rpm build for SUSE

2016-11-15 Thread rammohan ganapavarapu
Hi,

Has anyone attempted to build RPMs for SUSE 11? Did you face the above issue?
Please help.

Ram

On Tue, Nov 8, 2016 at 8:19 AM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Hi,
>
> I am trying to build qpid-cpp v1.35 rpms for SUSE, while i am building
> qpid-proton rpm i am getting this compilation error any idea why i am
> getting this error and how to fix it?
>
> [ 34%] Building CXX object proton-c/bindings/cpp/
> CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o
> In file included from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././link.hpp:31,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> ./receiver.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./session.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/connection.hpp:28,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/././././sender_options.hpp:87: error:
> declaration of ‘proton::sender_options& proton::sender_options::
> delivery_mode(proton::delivery_mode)’
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/./././././delivery_mode.hpp:30: error:
> changes meaning of ‘delivery_mode’ from ‘struct proton::delivery_mode’
> In file included from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././link.hpp:32,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./
> ./receiver.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./session.hpp:27,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/connection.hpp:28,
>  from /mnt/ec2-user/rpmbuild/BUILD/
> qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/././././receiver_options.hpp:83: error:
> declaration of ‘proton::receiver_options& proton::receiver_options::
> delivery_mode(proton::delivery_mode)’
> /mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/
> bindings/cpp/include/proton/./././././delivery_mode.hpp:30: error:
> changes meaning of ‘delivery_mode’ from ‘struct proton::delivery_mode’
> make[2]: *** 
> [proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o]
> Error 1
> make[1]: *** [proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.dir/all]
> Error 2
>
>
> Ram
>


qpid-cpp-1.35 rpm build for SUSE

2016-11-08 Thread rammohan ganapavarapu
Hi,

I am trying to build qpid-cpp v1.35 RPMs for SUSE. While building the
qpid-proton RPM I am getting the compilation error below; any idea why I am
getting this error and how to fix it?

[ 34%] Building CXX object
proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o
In file included from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././link.hpp:31,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/././receiver.hpp:27,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./session.hpp:27,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/connection.hpp:28,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/././././sender_options.hpp:87:
error: declaration of ‘proton::sender_options&
proton::sender_options::delivery_mode(proton::delivery_mode)’
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././././delivery_mode.hpp:30:
error: changes meaning of ‘delivery_mode’ from ‘struct
proton::delivery_mode’
In file included from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././link.hpp:32,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/././receiver.hpp:27,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./session.hpp:27,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/connection.hpp:28,
 from
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/src/connection.cpp:24:
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/././././receiver_options.hpp:83:
error: declaration of ‘proton::receiver_options&
proton::receiver_options::delivery_mode(proton::delivery_mode)’
/mnt/ec2-user/rpmbuild/BUILD/qpid-proton-0.14.0/proton-c/bindings/cpp/include/proton/./././././delivery_mode.hpp:30:
error: changes meaning of ‘delivery_mode’ from ‘struct
proton::delivery_mode’
make[2]: ***
[proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.dir/src/connection.cpp.o]
Error 1
make[1]: *** [proton-c/bindings/cpp/CMakeFiles/qpid-proton-cpp.dir/all]
Error 2


Ram


Re: qpid monitoring tool in GO

2016-11-03 Thread rammohan ganapavarapu
Alan,

I have qpid-proton-c-devel installed but am still getting this error when I
do go get:

# qpid.apache.org/amqp
golang/src/qpid.apache.org/amqp/types.go:33:9: expected (unqualified)
identifier
[root@gohost ~]# rpm -qa |grep qpid-proton-c-devel
qpid-proton-c-devel-0.14.0-1.el6.x86_64

Ram

On Wed, Nov 2, 2016 at 10:27 AM, Alan Conway <acon...@redhat.com> wrote:

> On Tue, 2016-11-01 at 12:20 -0700, rammohan ganapavarapu wrote:
> > I have "qpid-proton-c-0.14.0-1.el6.x86_64" installed and my borker
> > version
> > is "qpid-cpp-server-1.35.0-2.el6.x86_64" so the proton-c rpm version
> > is not
> > right?
> >
>
> You need qpid-proton-c-devel RPM for the header files which are used to
> compile the Go binding. With that you should be able to
>
>go get qpid.apache.org/electron
>
> Note I updated it yesterday, so if it didn't work before, try again.
>
> > Ram
> >
> > On Thu, Oct 27, 2016 at 2:05 PM, Alan Conway <acon...@redhat.com>
> > wrote:
> >
> > >
> > > On Thu, 2016-10-27 at 10:16 -0700, rammohan ganapavarapu wrote:
> > > >
> > > > Alan,
> > > >
> > > > I was trying to use this one http://godoc.org/github.com/streadwa
> > > > y/am
> > > > qp
> > > >
> > > > but not sure if it will work with any qpid broker or not, i am
> > > > getting
> > > > "2016/10/27 04:09:48 Exception (501) Reason: "Exception (501)
> > > > Reason:
> > > > \"frame could not be parsed\"""
> > > >
> > >
> > > First, make sure your qpidd is built & configured to support AMQP
> > > 1.0.
> > > The go client doesn't speak the older 0-10 version that qpidd also
> > > uses, but qpidd has spoken AMQP 1.0 for a long time so that
> > > shouldn't
> > > be a problem.
> > >
> > > Second: The Go client is a wrapper for a C library, there may be a
> > > mismatch between the Go code and the proton-C library you have
> > > installed. I would suggest downloading the latest proton release
> > > from
> > >
> > >   http://qpid.apache.org/proton/
> > >
> > > and use the matching C and Go code from the release. We will figure
> > > out
> > > a better story for managing that relationship but we haven't yet :(
> > >
> > > Otherwise if you have to work with some specific (reasonably
> > > recent)
> > > older released version of proton I can help you get it working.
> > >
> > > Please let me know how you get on - like I said I'd like to add a
> > > generic request-response pattern to the electron package.
> > >
> > > >
> > > > Thanks,
> > > > Ram
> > > >
> > > >
> > > > On Thu, Oct 27, 2016 at 7:42 AM, Alan Conway <acon...@redhat.com>
> > > > wrote:
> > > >
> > > > >
> > > > >
> > > > > On Wed, 2016-10-26 at 21:05 -0700, rammohan ganapavarapu wrote:
> > > > > >
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I am trying to write a qpid-stat tool in go, any one
> > > > > > attempted to
> > > > > > do
> > > > > > it, if
> > > > > > so can you please share your ideas? or if you have any
> > > > > > documents
> > > > > > on
> > > > > > how to
> > > > > > write please share.
> > > > > >
> > > > > > Thanks,
> > > > > > Ram
> > > > >
> > > > > For a Go client I recommend the qpid.apache.org/electron
> > > > > package
> > > > > which
> > > > > is part of the Proton project.
> > > > >
> > > > > https://github.com/apache/qpid-proton/tree/master/examples/go
> > > > >
> > > > > To work with the latest you should use the code from the proton
> > > > > repo
> > > > > rather than the `go get` version as it is still progressing
> > > > > rapidly.
> > > > >
> > > > > qpid-stat and the other qpid tools are based on the request-
> > > > > response
> > > > > message pattern: send a correctly-formatted QMF request message
> > > > > with
> > > > > correlation-id and then process the response message. Th

Re: qpid monitoring tool in GO

2016-11-02 Thread rammohan ganapavarapu
ok, will try and let you know.

On Nov 2, 2016 10:27 AM, "Alan Conway" <acon...@redhat.com> wrote:

> On Tue, 2016-11-01 at 12:20 -0700, rammohan ganapavarapu wrote:
> > I have "qpid-proton-c-0.14.0-1.el6.x86_64" installed and my borker
> > version
> > is "qpid-cpp-server-1.35.0-2.el6.x86_64" so the proton-c rpm version
> > is not
> > right?
> >
>
> You need qpid-proton-c-devel RPM for the header files which are used to
> compile the Go binding. With that you should be able to
>
>go get qpid.apache.org/electron
>
> Note I updated it yesterday, so if it didn't work before, try again.
>
> > Ram
> >
> > On Thu, Oct 27, 2016 at 2:05 PM, Alan Conway <acon...@redhat.com>
> > wrote:
> >
> > >
> > > On Thu, 2016-10-27 at 10:16 -0700, rammohan ganapavarapu wrote:
> > > >
> > > > Alan,
> > > >
> > > > I was trying to use this one http://godoc.org/github.com/streadwa
> > > > y/am
> > > > qp
> > > >
> > > > but not sure if it will work with any qpid broker or not, i am
> > > > getting
> > > > "2016/10/27 04:09:48 Exception (501) Reason: "Exception (501)
> > > > Reason:
> > > > \"frame could not be parsed\"""
> > > >
> > >
> > > First, make sure your qpidd is built & configured to support AMQP
> > > 1.0.
> > > The go client doesn't speak the older 0-10 version that qpidd also
> > > uses, but qpidd has spoken AMQP 1.0 for a long time so that
> > > shouldn't
> > > be a problem.
> > >
> > > Second: The Go client is a wrapper for a C library, there may be a
> > > mismatch between the Go code and the proton-C library you have
> > > installed. I would suggest downloading the latest proton release
> > > from
> > >
> > >   http://qpid.apache.org/proton/
> > >
> > > and use the matching C and Go code from the release. We will figure
> > > out
> > > a better story for managing that relationship but we haven't yet :(
> > >
> > > Otherwise if you have to work with some specific (reasonably
> > > recent)
> > > older released version of proton I can help you get it working.
> > >
> > > Please let me know how you get on - like I said I'd like to add a
> > > generic request-response pattern to the electron package.
> > >
> > > >
> > > > Thanks,
> > > > Ram
> > > >
> > > >
> > > > On Thu, Oct 27, 2016 at 7:42 AM, Alan Conway <acon...@redhat.com>
> > > > wrote:
> > > >
> > > > >
> > > > >
> > > > > On Wed, 2016-10-26 at 21:05 -0700, rammohan ganapavarapu wrote:
> > > > > >
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I am trying to write a qpid-stat tool in go, any one
> > > > > > attempted to
> > > > > > do
> > > > > > it, if
> > > > > > so can you please share your ideas? or if you have any
> > > > > > documents
> > > > > > on
> > > > > > how to
> > > > > > write please share.
> > > > > >
> > > > > > Thanks,
> > > > > > Ram
> > > > >
> > > > > For a Go client I recommend the qpid.apache.org/electron
> > > > > package
> > > > > which
> > > > > is part of the Proton project.
> > > > >
> > > > > https://github.com/apache/qpid-proton/tree/master/examples/go
> > > > >
> > > > > To work with the latest you should use the code from the proton
> > > > > repo
> > > > > rather than the `go get` version as it is still progressing
> > > > > rapidly.
> > > > >
> > > > > qpid-stat and the other qpid tools are based on the request-
> > > > > response
> > > > > message pattern: send a correctly-formatted QMF request message
> > > > > with
> > > > > correlation-id and then process the response message. There's a
> > > > > "canned" request-response pattern in python at:
> > > > >
> > > > > https://github.com/apache/qpid-proton/blob/master/proton-c/bind
>

Re: qpid monitoring tool in GO

2016-11-01 Thread rammohan ganapavarapu
I have "qpid-proton-c-0.14.0-1.el6.x86_64" installed and my borker version
is "qpid-cpp-server-1.35.0-2.el6.x86_64" so the proton-c rpm version is not
right?

Ram

On Thu, Oct 27, 2016 at 2:05 PM, Alan Conway <acon...@redhat.com> wrote:

> On Thu, 2016-10-27 at 10:16 -0700, rammohan ganapavarapu wrote:
> > Alan,
> >
> > I was trying to use this one http://godoc.org/github.com/streadway/am
> > qp
> >
> > but not sure if it will work with any qpid broker or not, i am
> > getting
> > "2016/10/27 04:09:48 Exception (501) Reason: "Exception (501) Reason:
> > \"frame could not be parsed\"""
> >
>
> First, make sure your qpidd is built & configured to support AMQP 1.0.
> The go client doesn't speak the older 0-10 version that qpidd also
> uses, but qpidd has spoken AMQP 1.0 for a long time so that shouldn't
> be a problem.
>
> Second: The Go client is a wrapper for a C library, there may be a
> mismatch between the Go code and the proton-C library you have
> installed. I would suggest downloading the latest proton release from
>
>   http://qpid.apache.org/proton/
>
> and use the matching C and Go code from the release. We will figure out
> a better story for managing that relationship but we haven't yet :(
>
> Otherwise if you have to work with some specific (reasonably recent)
> older released version of proton I can help you get it working.
>
> Please let me know how you get on - like I said I'd like to add a
> generic request-response pattern to the electron package.
>
> > Thanks,
> > Ram
> >
> >
> > On Thu, Oct 27, 2016 at 7:42 AM, Alan Conway <acon...@redhat.com>
> > wrote:
> >
> > >
> > > On Wed, 2016-10-26 at 21:05 -0700, rammohan ganapavarapu wrote:
> > > >
> > > > Hi,
> > > >
> > > > I am trying to write a qpid-stat tool in go, any one attempted to
> > > > do
> > > > it, if
> > > > so can you please share your ideas? or if you have any documents
> > > > on
> > > > how to
> > > > write please share.
> > > >
> > > > Thanks,
> > > > Ram
> > >
> > > For a Go client I recommend the qpid.apache.org/electron package
> > > which
> > > is part of the Proton project.
> > >
> > > https://github.com/apache/qpid-proton/tree/master/examples/go
> > >
> > > To work with the latest you should use the code from the proton
> > > repo
> > > rather than the `go get` version as it is still progressing
> > > rapidly.
> > >
> > > qpid-stat and the other qpid tools are based on the request-
> > > response
> > > message pattern: send a correctly-formatted QMF request message
> > > with
> > > correlation-id and then process the response message. There's a
> > > "canned" request-response pattern in python at:
> > >
> > > https://github.com/apache/qpid-proton/blob/master/proton-c/bindings
> > > /pyt
> > > hon/proton/utils.py
> > >
> > > QMF messages are maps, so your main work will be to construct and
> > > interpret nested maps with the right names/values. Look at the
> > > qpid-
> > > stat sources and the qmf schemas here:
> > >
> > > /home/aconway/qpid-cpp/src/qpid/acl/management-schema.xml
> > > /home/aconway/qpid-cpp/src/qpid/broker/management-schema.xml
> > > /home/aconway/qpid-cpp/src/qpid/legacystore/management-schema.xml
> > > /home/aconway/qpid-cpp/src/qpid/ha/management-schema.xml
> > > /home/aconway/qpid-cpp/src/qpid/linearstore/management-schema.xml
> > >
> > > You are probably mostly/only interested in the broker schema.
> > >
> > > I am very interested in this as I worked on the Go binding and also
> > > on
> > > schema-driven management tools (in python) for qpid-dispatch. I
> > > would
> > > like to add a canned "request-response" pattern to the Go binding
> > > which
> > > would make your job easier, but I won't get to that in the
> > > immediate
> > > term so maybe you'll do it for me :) I'm happy to help with
> > > pointers
> > > and any problems you find in the Go binding.
> > >
> > > Cheers,
> > > Alan.
> > >
> > >
> > >
> > >
> > >
> > > -
> > > 
> > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > > For additional commands, e-mail: users-h...@qpid.apache.org
> > >
> > >
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>
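
A short sketch of the sequence described above (package and RPM names as used
in this thread; the GOPATH location is an assumption):

    rpm -q qpid-proton-c-devel         # the C headers the Go binding compiles against
    export GOPATH=$HOME/go
    go get qpid.apache.org/electron    # pulls in qpid.apache.org/amqp as well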


Re: qpid monitoring tool in GO

2016-10-27 Thread rammohan ganapavarapu
Alan,

I was trying to use this one http://godoc.org/github.com/streadway/amqp

but I am not sure whether it will work with any Qpid broker; I am getting
"2016/10/27 04:09:48 Exception (501) Reason: "Exception (501) Reason:
\"frame could not be parsed\"""

Thanks,
Ram


On Thu, Oct 27, 2016 at 7:42 AM, Alan Conway <acon...@redhat.com> wrote:

> On Wed, 2016-10-26 at 21:05 -0700, rammohan ganapavarapu wrote:
> > Hi,
> >
> > I am trying to write a qpid-stat tool in go, any one attempted to do
> > it, if
> > so can you please share your ideas? or if you have any documents on
> > how to
> > write please share.
> >
> > Thanks,
> > Ram
>
> For a Go client I recommend the qpid.apache.org/electron package which
> is part of the Proton project.
>
> https://github.com/apache/qpid-proton/tree/master/examples/go
>
> To work with the latest you should use the code from the proton repo
> rather than the `go get` version as it is still progressing rapidly.
>
> qpid-stat and the other qpid tools are based on the request-response
> message pattern: send a correctly-formatted QMF request message with
> correlation-id and then process the response message. There's a
> "canned" request-response pattern in python at:
>
> https://github.com/apache/qpid-proton/blob/master/proton-c/bindings/pyt
> hon/proton/utils.py
>
> QMF messages are maps, so your main work will be to construct and
> interpret nested maps with the right names/values. Look at the qpid-
> stat sources and the qmf schemas here:
>
> /home/aconway/qpid-cpp/src/qpid/acl/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/broker/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/legacystore/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/ha/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/linearstore/management-schema.xml
>
> You are probably mostly/only interested in the broker schema.
>
> I am very interested in this as I worked on the Go binding and also on
> schema-driven management tools (in python) for qpid-dispatch. I would
> like to add a canned "request-response" pattern to the Go binding which
> would make your job easier, but I won't get to that in the immediate
> term so maybe you'll do it for me :) I'm happy to help with pointers
> and any problems you find in the Go binding.
>
> Cheers,
> Alan.
>
>
>
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: qpid monitoring tool in GO

2016-10-27 Thread rammohan ganapavarapu
Alan,

Thanks for the pointers, let's see if I can drive from here :)

Ram

On Oct 27, 2016 7:43 AM, "Alan Conway" <acon...@redhat.com> wrote:

> On Wed, 2016-10-26 at 21:05 -0700, rammohan ganapavarapu wrote:
> > Hi,
> >
> > I am trying to write a qpid-stat tool in go, any one attempted to do
> > it, if
> > so can you please share your ideas? or if you have any documents on
> > how to
> > write please share.
> >
> > Thanks,
> > Ram
>
> For a Go client I recommend the qpid.apache.org/electron package which
> is part of the Proton project.
>
> https://github.com/apache/qpid-proton/tree/master/examples/go
>
> To work with the latest you should use the code from the proton repo
> rather than the `go get` version as it is still progressing rapidly.
>
> qpid-stat and the other qpid tools are based on the request-response
> message pattern: send a correctly-formatted QMF request message with
> correlation-id and then process the response message. There's a
> "canned" request-response pattern in python at:
>
> https://github.com/apache/qpid-proton/blob/master/proton-c/bindings/pyt
> hon/proton/utils.py
>
> QMF messages are maps, so your main work will be to construct and
> interpret nested maps with the right names/values. Look at the qpid-
> stat sources and the qmf schemas here:
>
> /home/aconway/qpid-cpp/src/qpid/acl/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/broker/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/legacystore/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/ha/management-schema.xml
> /home/aconway/qpid-cpp/src/qpid/linearstore/management-schema.xml
>
> You are probably mostly/only interested in the broker schema.
>
> I am very interested in this as I worked on the Go binding and also on
> schema-driven management tools (in python) for qpid-dispatch. I would
> like to add a canned "request-response" pattern to the Go binding which
> would make your job easier, but I won't get to that in the immediate
> term so maybe you'll do it for me :) I'm happy to help with pointers
> and any problems you find in the Go binding.
>
> Cheers,
> Alan.
>
>
>
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


SUSE linux packages for qpid-cpp v1.35

2016-10-25 Thread rammohan ganapavarapu
Hi,

I am looking for qpid-cpp v1.35 packages for openSUSE Linux. If anyone has a
repo, can you please share it?

Thanks,
Ram
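
In case no repo turns up, a rough sketch of building from the source tarball
instead (the download URL and cmake invocation are assumptions, not taken from
this thread):

    wget https://archive.apache.org/dist/qpid/cpp/1.35.0/qpid-cpp-1.35.0.tar.gz
    tar xzf qpid-cpp-1.35.0.tar.gz && cd qpid-cpp-1.35.0
    mkdir build && cd build
    cmake -DCMAKE_INSTALL_PREFIX=/usr .. && make && sudo make install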


Re: qpid-cpp-1.35 broker crashed with resource-limit-exceeded

2016-10-24 Thread rammohan ganapavarapu
Yes, it looks like my system ran out of memory and killed qpidd. I have
restarted the broker with the trace log level; let's see if I can reproduce
the issue.

[45527559.740820] Out of memory: Kill process 8311 (qpidd) score 176 or
sacrifice child
[45527559.740827] Killed process 8311 (qpidd) total-vm:1753484kB,
anon-rss:1356896kB, file-rss:48kB


Thanks,
Ram

On Mon, Oct 24, 2016 at 12:33 PM, Jakub Scholz <ja...@scholz.cz> wrote:

> Can you reproduce it?
>
> Maybe you can try to increase the log levels in the broker to get some more
> information about why it crashed (
> http://qpid.apache.org/releases/qpid-cpp-1.35.0/cpp-
> broker/book/ch01.html#RASC-logging-options).
> If the broker crashed, you can also try to get a core dump for further
> analysis. Following our discussion from the previous thread, it might be
> also interesting to check the system logs. When the broker is killed by the
> kernel because of missing memory, it also doesn't print anything into the
> log.
>
> J.
>
> On Mon, Oct 24, 2016 at 8:46 PM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > on't know why it crashed but I don't see any other message in the log
> > other than that and broker processes is not running, i can run it in
> debug
> > mode and see if i can get any clue. So ideally if the queue siz
> >
>
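
A minimal sketch of turning the broker logging up for the next run; --log-enable
and --log-to-file are standard qpidd logging options, the rule and file path
here are only examples:

    /usr/sbin/qpidd --log-enable trace+ --log-to-file /var/log/qpidd.log ...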


Re: qpid-cpp-1.35 broker crashed with resource-limit-exceeded

2016-10-24 Thread rammohan ganapavarapu
I don't know why it crashed, but I don't see any other message in the log
besides that one and the broker process is not running; I can run it in debug
mode and see if I can get any clue. So ideally, if the queue size limit is
exceeded, the broker just drops the incoming messages until it flushes out the
messages already in the queue?

Thanks,
Ram

On Mon, Oct 24, 2016 at 11:38 AM, Jakub Scholz <ja...@scholz.cz> wrote:

> Hi Ram,
>
> This should normally not crash the broker. This error should be only sent
> to the client which exceeded the max queue size limit and that particular
> client might be kicked out. But the broker should continue running and
> serving other clients. Why do you think the broker crashed because of this?
>
> J.
>
> On Mon, Oct 24, 2016 at 6:04 PM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > -10-24 15:36:21 [Broker] warning Exchange ax-ex-eaxgroup002 cannot
> > deliver to  queue ax-q-eaxgroup002-consumer-group-001:
> > resource-limit-exceeded: Maximum depth exceeded on
> > ax-q-eaxgroup002-consumer-group-001: current=[count: 5843, size:
> > 1073740027], max=[count: 100, size: 1073741824]
> > (/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp
> >
>


qpid-cpp-1.35 broker crashed with resource-limit-exceeded

2016-10-24 Thread rammohan ganapavarapu
Hi

I am running the C++ broker v1.35 and I have producers pumping messages, and
my broker crashed with the below error. I know the messages in the queue
exceeded the configured queue limit, but should that kill the broker? How do I
keep my broker up and running even if it exceeds the configured queue capacity?


2016-10-24 15:36:21 [Broker] warning Exchange ax-ex-eaxgroup002 cannot
deliver to  queue ax-q-eaxgroup002-consumer-group-001:
resource-limit-exceeded: Maximum depth exceeded on
ax-q-eaxgroup002-consumer-group-001: current=[count: 5843, size:
1073740027], max=[count: 100, size: 1073741824]
(/builddir/build/BUILD/qpid-cpp-1.35.0/src/qpid/broker/Queue.cpp:1662)

Current queue stats:

qpid-stat -q
Queues
  queue dur  autoDel  excl  msg
msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind

==
  ax-q-eaxgroup002-consumer-group-001   Y   5.94k
5.94k 01.07g  1.07g   0 0 2
  ax-q-eaxgroup002-consumer-group-001-dlY 68
68  012.9m  12.9m   0 0 2

Queue configuration:

qpid-config queues
Queue NameAttributes
=
4dee1a3f-b4be-4580-9549-670510c0f075:0.0  auto-del excl
ax-q-eaxgroup002-consumer-group-001   --durable --file-size=5120
--file-count=64 --max-queue-size=1073741824 --max-queue-count=100
--limit-policy=flow-to-disk --argument no-local=False
ax-q-eaxgroup002-consumer-group-001-dl--durable --file-size=6000
--file-count=4 --max-queue-size=52428800 --max-queue-count=10
--limit-policy=flow-to-disk --argument no-local=False


Thanks,
Ram
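
A rough sketch, not from this thread, of how the limit behind the "Maximum
depth exceeded" warning is controlled: it comes from the queue's
--max-queue-size / --max-queue-count settings, and --limit-policy decides what
happens once the limit is hit (queue name and numbers below are illustrative
only):

    qpid-config del queue my-queue --force
    qpid-config add queue my-queue --durable \
        --max-queue-size=2147483648 --max-queue-count=2000000 --limit-policy=ring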


Re: How to enable persistent message store in qpid-cpp-1.35.0

2016-10-21 Thread rammohan ganapavarapu
Jakub,

Thanks for clearing up my doubts. Yes, I want to limit memory usage and have
messages survive a broker restart, so in that case I will use option 3.

Yes, in the Java broker the broker is available right away upon restart and it
does message recovery in the background. With the 0.28 C++ version we see the
broker take a lot of time to recover when it has a lot of messages to recover
from disk, and sometimes it times out; I hope the new version has a faster
recovery mechanism.

Ram
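
A rough sketch of that "option 3" setup (durable queues + linearstore + paging).
The paging-related names here (--paging-dir, qpid.paging, qpid.max_pages_loaded)
are assumptions based on the paging feature referenced in this thread, not
something confirmed in it:

    # broker: linearstore for durability, plus a paging directory (illustrative paths)
    /usr/sbin/qpidd --load-module=/usr/lib64/qpid/daemon/linearstore.so \
        --data-dir=/data --paging-dir=/data/paging
    # queue: durable, with paging enabled through queue arguments
    qpid-config add queue my-queue --durable \
        --argument qpid.paging=True --argument qpid.max_pages_loaded=20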

On Fri, Oct 21, 2016 at 11:12 AM, Jakub Scholz <ja...@scholz.cz> wrote:

> These are two separate things.
>
> The linearstore is the persistent message store. When you create a queue as
> durable (and your queues are durable in the email above) and send durable /
> persistent messages into the queue, the linear store will store all these
> messages to the disk for recovery in case of broker restart. However, the
> linear store it self doesn't offload them from memory. So they stay both in
> memory and on disk. When you stop and start the broker, it will read them
> from the disk into the memory and use them. This is what you are seeing
> right now.
>
> Paging will take the messages from the memory and offload some parts of the
> memory to disk (it will use separate files for that - not the same files as
> the linearstore). Paging it self will only make sure that your memory
> consumption is under control. But when you stop and start the broker, the
> paging files are deleted and no message will be recovered from them.
>
> Of course you can combine both features and use them together. So depending
> on what is your usecase and what you actually want to achieve you have to
> configure the broker and the queues, for example:
> 1) "I want the messages to survive the broker restart, but I have more then
> enough memory" -> Durable queues / linerstore ON && Paging OFF
> 2) "I don't need that the messages survive the restart, but I want to use
> only small amounts of memory for looot of messages" -> Durable queues /
> Linearstore OFF && Paging ON
> 3) "I want the messages to survive broker restart and limit the RAM usage"
> -> Durable queues / Linear store ON && Paging ON.
>
> Right now you seem to have the option 1 configured.
>
> I'm not sure what does "background recovery" do in Java broker. Does it
> mean that the broker is available already before loading all the messages?
> In that case no, the C++ broker will always first read all the messages
> from the disk and only once they are loaded, it will be available for
> connections. However, the startup is much faster with the linearstore than
> it used to be several releases ago with the old message store.
>
> J.
>
>
>
> On Fri, Oct 21, 2016 at 7:54 PM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Jakub,
> >
> > If by default its not flowing to disk, every thing should be in memory
> > right and if that is the case on broker restart i should loose messages
> > right? but in my case i do see messages got persisted on broker restart.
> >
> >
> > [root@broker1 ~]# ps -ef |grep linearstore
> > qpidd 1935 1  0 Oct20 ?00:00:54 /usr/sbin/qpidd --config
> > /etc/qpid/qpidd.conf --daemon --module-dir=/usr/lib64/qpid/daemon/
> > --load-module=/usr/lib64/qpid/daemon/linearstore.so
> > --load-module=/usr/lib64/liblinearstoreutils.so --data-dir=/data
> > --close-fd
> > 9 --pidfile /var/run/qpidd.pid
> >
> > root 21233 21143  0 17:43 pts/000:00:00 grep linearstore
> > [root@broker1 ~]# qpid-stat -q
> > Queues
> >   queue dur  autoDel  excl  msg
> > msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
> >
> > 
> > ==
> >   52177446-3a69-4680-b349-d4a5590a6fdd:0.0   YY0
> > 0  0   0  00 1 2
> >   ax-q-eaxgroup002-consumer-group-001   Y   5.47k
> > 5.67k   1961.07g  1.11g38.6m0 2
> >   ax-q-eaxgroup002-consumer-group-001-dlY  0
> > 0  0   0  00 0 2
> >
> >
> > [root@broker1 ~]# /etc/init.d/qpidd stop
> > Stopping Qpid AMQP daemon: [  OK  ]
> >
> > [root@eqp042wo ~]# ps -ef |grep linearstore
> > root 21270 21143  0 17:43 pts/000:00:00 grep linearstore
> >
> > [root@broker1 ~]# /etc/init.d/qpidd start
> > Starting Qpid AMQP daemon: [  OK  ]
>

Re: How to enable persistent message store in qpid-cpp-1.35.0

2016-10-21 Thread rammohan ganapavarapu
Jakub,

If by default it is not flowing to disk, everything should be in memory,
right? And if that is the case, on a broker restart I should lose the messages,
right? But in my case I do see the messages persisted across a broker restart.


[root@broker1 ~]# ps -ef |grep linearstore
qpidd 1935 1  0 Oct20 ?00:00:54 /usr/sbin/qpidd --config
/etc/qpid/qpidd.conf --daemon --module-dir=/usr/lib64/qpid/daemon/
--load-module=/usr/lib64/qpid/daemon/linearstore.so
--load-module=/usr/lib64/liblinearstoreutils.so --data-dir=/data --close-fd
9 --pidfile /var/run/qpidd.pid

root 21233 21143  0 17:43 pts/000:00:00 grep linearstore
[root@broker1 ~]# qpid-stat -q
Queues
  queue dur  autoDel  excl  msg
msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind

==
  52177446-3a69-4680-b349-d4a5590a6fdd:0.0   YY0
0  0   0  00 1 2
  ax-q-eaxgroup002-consumer-group-001   Y   5.47k
5.67k   1961.07g  1.11g38.6m0 2
  ax-q-eaxgroup002-consumer-group-001-dlY  0
0  0   0  00 0 2


[root@broker1 ~]# /etc/init.d/qpidd stop
Stopping Qpid AMQP daemon: [  OK  ]

[root@eqp042wo ~]# ps -ef |grep linearstore
root 21270 21143  0 17:43 pts/000:00:00 grep linearstore

[root@broker1 ~]# /etc/init.d/qpidd start
Starting Qpid AMQP daemon: [  OK  ]
[root@eqp042wo ~]# ps -ef |grep linearstore
qpidd21293 1 70 17:43 ?00:00:21 /usr/sbin/qpidd --config
/etc/qpid/qpidd.conf --daemon --module-dir=/usr/lib64/qpid/daemon/
--load-module=/usr/lib64/qpid/daemon/linearstore.so
--load-module=/usr/lib64/liblinearstoreutils.so --data-dir=/data --close-fd
9 --pidfile /var/run/qpidd.pid

root 21325 21143  0 17:44 pts/000:00:00 grep linearstore
[root@broker1 ~]# qpid-stat -q
Queues
  queue dur  autoDel  excl  msg
msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind

==
  a55ce766-c3f3-43ff-9104-c398a5bc6104:0.0   YY0
0  0   0  00 1 2
  ax-q-eaxgroup002-consumer-group-001   Y   5.47k
5.47k 01.07g  1.07g   0 0 2
  ax-q-eaxgroup002-consumer-group-001-dlY  0
0  0   0  00 0 2


If you observe above, I have 5.47k messages in the queue before and after the
broker restart. If they were not flowing to disk and were kept only in RAM,
then on a broker restart those queues and messages should go away, right? Am I
missing anything here? Below is what my queue configuration looks like.

[root@broker1 ~]# python26 /usr/bin/qpid-config queues
Queue NameAttributes
=
a98385a7-2be9-45bb-a823-5d996e34222d:0.0  auto-del excl
ax-q-eaxgroup002-consumer-group-001   --durable --file-size=5120
--file-count=64 --max-queue-size=1073741824 --max-queue-count=100
--limit-policy=flow-to-disk --argument no-local=False
ax-q-eaxgroup002-consumer-group-001-dl--durable --file-size=6000
--file-count=4 --max-queue-size=52428800 --max-queue-count=10
--limit-policy=flow-to-disk --argument no-local=False

One more question: is the background recovery feature available in the C++
broker as well, like in the Java broker?

Thanks,
Ram


On Fri, Oct 21, 2016 at 12:16 AM, Jakub Scholz <ja...@scholz.cz> wrote:

> I think the docs just list an example of what you might get when running
> qpid-config --help. But your actual qpid-config from 1.35.0 should nto
> contain flow-to-disk anymore.
>
> Jakub
>
> On Fri, Oct 21, 2016 at 12:45 AM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > From the docs i still see "--limit-policy [none | reject | flow-to-disk"
> to
> > configure queues but its not supported in V1.35.0?
> >
> >
> > https://qpid.apache.org/releases/qpid-cpp-1.35.0/cpp-
> > broker/book/chapter-Managing-CPP-Broker.html#MgmtC-2B-2B-Usingqpidconfig
> >
> > --durableQueue is durable
> > --file-count N (8)   Number of files in queue's persistence journal
> > --file-size  N (24)  File size in pages (64Kib/page)
> > --max-queue-size N   Maximum in-memory queue size as bytes
> > --max-queue-count N  Maximum in-memory queue size as a number of
> > messages
> > --limit-policy [none | reject | flow-to-disk | ring | ring-strict]
> >  Action taken when queue limit is reached:
> >  none (default) - 

Re: How to enable persistent message store in qpid-cpp-1.35.0

2016-10-20 Thread rammohan ganapavarapu
>From the docs i still see "--limit-policy [none | reject | flow-to-disk" to
configure queues but its not supported in V1.35.0?


https://qpid.apache.org/releases/qpid-cpp-1.35.0/cpp-broker/book/chapter-Managing-CPP-Broker.html#MgmtC-2B-2B-Usingqpidconfig

--durableQueue is durable
--file-count N (8)   Number of files in queue's persistence journal
--file-size  N (24)  File size in pages (64Kib/page)
--max-queue-size N   Maximum in-memory queue size as bytes
--max-queue-count N  Maximum in-memory queue size as a number of messages
--limit-policy [none | reject | flow-to-disk | ring | ring-strict]
 Action taken when queue limit is reached:
 none (default) - Use broker's default policy
 reject - Reject enqueued messages
 flow-to-disk   - Page messages to disk
 ring   - Replace oldest
unacquired message with new
 ring-strict- Replace oldest message,
reject if oldest is acquired



On Thu, Oct 20, 2016 at 3:20 PM, Jakub Scholz <ja...@scholz.cz> wrote:

> Hi,
>
> Do you have the module installed in the path you mentioned? If yes, then
> you should be able to load the module using the option
> "--load-module=/usr/lib64/qpid/daemon/linearstore.so" or by placing
> "load-module=/usr/lib64/qpid/daemon/linearstore.so" into your config file.
> However, I believe that usually the store should be loaded by default when
> it is installed. That is unless you specified that you want to start the
> broker without any modules with the no-module-dir option. If the module is
> loaded, you should see in your log file something like this:
> 2016-10-20 22:15:25 [Store] notice Linear Store: Store module initialized;
> store-dir=/var/lib/qpidd/store
>
> Additionally to loading the store, you need to have the queues created as
> durable and send the messages as durable/persistent. Otherwise the
> queues/messages will not use the persistent message store.
>
> Flow to disk feature has been removed some time ago. The functionality it
> provided (offloading messages from memory to disk) is now provided by the
> queue paging feature. Strangely, I can't find it described anywhere in the
> Qpid C++ broker documentation. But you can have a look at my answer in this
> thread -
> http://qpid.2158936.n2.nabble.com/How-do-I-create-a-queue-
> larger-than-available-RAM-td7643861.html
> - it describes how to configure it and it should be still valid in 1.35.0.
>
> Regards
> Jakub
>
> On Thu, Oct 20, 2016 at 11:56 PM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Hi,
> >
> > How to enable persistent message store in qpid-cpp-1.35 do i have to
> > install/load "/usr/lib64/qpid/daemon/linearstore.so"? also how to
> > configure
> > flow_to_disk policy?
> >
> >
> > Thanks,
> > Ram
> >
>


Re: How to enable persistent message store in qpid-cpp-1.35.0

2016-10-20 Thread rammohan ganapavarapu
Jakub,

Thanks for pointing me to the right docs. Are there any default settings for
how much data (or how many messages) can be kept in RAM before flushing to disk
if I don't give any options while creating queues? And if I don't create queues
with the "paging" options, what is the behavior of the broker?

Thanks,
Ram

On Thu, Oct 20, 2016 at 3:20 PM, Jakub Scholz <ja...@scholz.cz> wrote:

> Hi,
>
> Do you have the module installed in the path you mentioned? If yes, then
> you should be able to load the module using the option
> "--load-module=/usr/lib64/qpid/daemon/linearstore.so" or by placing
> "load-module=/usr/lib64/qpid/daemon/linearstore.so" into your config file.
> However, I believe that usually the store should be loaded by default when
> it is installed. That is unless you specified that you want to start the
> broker without any modules with the no-module-dir option. If the module is
> loaded, you should see in your log file something like this:
> 2016-10-20 22:15:25 [Store] notice Linear Store: Store module initialized;
> store-dir=/var/lib/qpidd/store
>
> Additionally to loading the store, you need to have the queues created as
> durable and send the messages as durable/persistent. Otherwise the
> queues/messages will not use the persistent message store.
>
> Flow to disk feature has been removed some time ago. The functionality it
> provided (offloading messages from memory to disk) is now provided by the
> queue paging feature. Strangely, I can't find it described anywhere in the
> Qpid C++ broker documentation. But you can have a look at my answer in this
> thread -
> http://qpid.2158936.n2.nabble.com/How-do-I-create-a-queue-
> larger-than-available-RAM-td7643861.html
> - it describes how to configure it and it should be still valid in 1.35.0.
>
> Regards
> Jakub
>
> On Thu, Oct 20, 2016 at 11:56 PM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
> > Hi,
> >
> > How to enable persistent message store in qpid-cpp-1.35 do i have to
> > install/load "/usr/lib64/qpid/daemon/linearstore.so"? also how to
> > configure
> > flow_to_disk policy?
> >
> >
> > Thanks,
> > Ram
> >
>


How to enable persistent message store in qpid-cpp-1.35.0

2016-10-20 Thread rammohan ganapavarapu
Hi,

How do I enable the persistent message store in qpid-cpp-1.35? Do I have to
install/load "/usr/lib64/qpid/daemon/linearstore.so"? Also, how do I configure
the flow_to_disk policy?


Thanks,
Ram
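
A minimal sketch, assuming the module is installed at the path above: the store
is enabled by loading linearstore.so, either on the command line or through the
config file (options as used elsewhere in this thread):

    # command line
    /usr/sbin/qpidd --load-module=/usr/lib64/qpid/daemon/linearstore.so --data-dir=/var/lib/qpidd ...
    # or as a line in /etc/qpid/qpidd.conf
    load-module=/usr/lib64/qpid/daemon/linearstore.so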


Re: java broker 6.0.2 OOM

2016-10-19 Thread rammohan ganapavarapu
Thank you!!

On Wed, Oct 19, 2016 at 4:15 AM, Lorenz Quack <quack.lor...@gmail.com>
wrote:

> Hi Ram, Hi Alex,
>
> thanks for sending me your example code and logs.
> I am now able to reproduce and I am investigating now.
> It currently looks like there is a defect somewhere in the AMQP 0-10 code
> path.
>
> I will keep you posted.
>
> Kind regards,
> Lorenz
>
>
>
>
> On 18/10/16 22:13, rammohan ganapavarapu wrote:
>
>> Lorenz,
>>
>> Alex and i work together, please find the config,logs for the tests he
>> performed.
>>
>> Thanks,
>> Ram
>>
>> On Tue, Oct 18, 2016 at 2:04 PM, alexk <akhim...@apigee.com> wrote:
>>
>> Hi folks,
>>> Here is the config  config.json
>>> <http://qpid.2158936.n2.nabble.com/file/n7652137/config.json>
>>> Here is the program  qpid.java
>>> <http://qpid.2158936.n2.nabble.com/file/n7652137/qpid.java>
>>> Here is the log of qpid java broker  qpid.log
>>> <http://qpid.2158936.n2.nabble.com/file/n7652137/qpid.log>
>>> Here is the java-broker-console output  qpid_broker_console.log
>>> <http://qpid.2158936.n2.nabble.com/file/n7652137/qpid_broker_console.log
>>> >
>>>
>>>
>>> I'm trying to experiment with QPID_JAVA_MEM="-Xmx4G
>>> -XX:MaxDirectMemorySize=1500m" was able to get heap OOM
>>>
>>> Alex
>>>
>>>
>>>
>>>
>>> --
>>> View this message in context: http://qpid.2158936.n2.nabble.
>>> com/java-broker-6-0-2-OOM-tp7651831p7652137.html
>>> Sent from the Apache Qpid users mailing list archive at Nabble.com.
>>>
>>> -
>>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>>> For additional commands, e-mail: users-h...@qpid.apache.org
>>>
>>>
>>>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: java broker 6.0.2 OOM

2016-10-18 Thread rammohan ganapavarapu
Lorenz,

Alex and I work together; please find the config and logs for the tests he
performed.

Thanks,
Ram

On Tue, Oct 18, 2016 at 2:04 PM, alexk <akhim...@apigee.com> wrote:

> Hi folks,
> Here is the config  config.json
> <http://qpid.2158936.n2.nabble.com/file/n7652137/config.json>
> Here is the program  qpid.java
> <http://qpid.2158936.n2.nabble.com/file/n7652137/qpid.java>
> Here is the log of qpid java broker  qpid.log
> <http://qpid.2158936.n2.nabble.com/file/n7652137/qpid.log>
> Here is the java-broker-console output  qpid_broker_console.log
> <http://qpid.2158936.n2.nabble.com/file/n7652137/qpid_broker_console.log>
>
>
> I'm trying to experiment with QPID_JAVA_MEM="-Xmx4G
> -XX:MaxDirectMemorySize=1500m" was able to get heap OOM
>
> Alex
>
>
>
>
> --
> View this message in context: http://qpid.2158936.n2.nabble.
> com/java-broker-6-0-2-OOM-tp7651831p7652137.html
> Sent from the Apache Qpid users mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: java broker 6.0.2 OOM

2016-10-18 Thread rammohan ganapavarapu
Lorenz,

Thanks for the quick test. We also see it flowing to disk, but direct memory
is not leveling off; we will perform a basic test without our application and
share the results.

Do you have any recommendations on heap and direct memory settings? I was
testing with heap: 768m and direct: 2304m.

Ram

On Tue, Oct 18, 2016 at 7:12 AM, Lorenz Quack <quack.lor...@gmail.com>
wrote:

> Hello Ram,
>
> I just tried to reproduce your issue but was not successful.
> I ran a 6.0.2 broker (with default config) and trunk clients.
> I created 30 producers on their own connections and sent 10k persistent
> messages each in its own transaction.
> After hitting 634,583,040 B direct memory usage flow to disk kicked in and
> the direct memory usage leveled off.
> I monitored the log for the QUE-1014 and BRK-1014 messages to verify that
> the broker starts flowing messages to disk and I monitored the direct
> memory usage with jvisualvm.
> I attached my hacked together test client application, the log file and a
> screenshot showing the direct memory usage.
>
> Can you see something that you are using differently? Can you create a
> minimal example that exposes the problem that you could share with me?
>
> Note that monitoring disk usage as a measure of whether flow to disk is
> active is not going to work when you have persistent messages because in
> this case messages are always written to disk regardless of flow to disk.
>
> Kind regards,
> Lorenz
>
>
>
> On 18/10/16 03:10, rammohan ganapavarapu wrote:
>
>> please let me know if u need any thing else.
>>
>> On Oct 17, 2016 11:02 AM, "rammohan ganapavarapu" <
>> rammohanga...@gmail.com>
>> wrote:
>>
>> Lorenz,
>>>
>>>
>>> Actually message size vary between ~ 1kb to 10k
>>>
>>> Thanks,
>>> Ram
>>>
>>> On Mon, Oct 17, 2016 at 10:23 AM, rammohan ganapavarapu <
>>> rammohanga...@gmail.com> wrote:
>>>
>>> Lorenz,
>>>>
>>>> Thanks for trying to help, Please find the below answers for your
>>>> questions.
>>>>
>>>>
>>>> Q:What is the type of your virtualhost (Derby, BDB, ...)?
>>>>
>>>> A: Derby ( i actually wanted to know your recomendation)
>>>>
>>>> Q: How large are your messages? Do they vary in size or all the same
>>>> size?
>>>>
>>>> A: Message size is approximately 1k
>>>>
>>>> Q: How many connections/sessions/producers/consumers are connected to
>>>> the broker?
>>>>
>>>> A: we are using 3 producers and each have 10 connections.
>>>>
>>>> Q: Are there any consumers active while you are testing?
>>>>
>>>> A: No, we blocked all the consumers
>>>> Q: Do you use transactions?
>>>> A: They are transnational but ack is done immediately after accepting,
>>>> if
>>>> fails we push it back to dl queue.
>>>> Q: Are the messages persistent or transient?
>>>> A: They are persistent.
>>>>
>>>> Ram
>>>>
>>>> On Mon, Oct 17, 2016 at 1:15 AM, Lorenz Quack <quack.lor...@gmail.com>
>>>> wrote:
>>>>
>>>> Hello Ram,
>>>>>
>>>>> This seems curious.
>>>>> Yes, the idea behind flow to disk is to prevent the broker from running
>>>>> out of direct memory.
>>>>> The broker does keep a certain representation of the message in memory
>>>>> but that should affect heap and not direct memory.
>>>>>
>>>>> I currently do not understand what is happening here so I raised a JIRA
>>>>> [1].
>>>>>
>>>>> Could you provide some more information about your test case so I can
>>>>> try to reproduce it on my end?
>>>>> What is the type of your virtualhost (Derby, BDB, ...)?
>>>>> How large are your messages? Do they vary in size or all the same size?
>>>>> How many connections/sessions/producers/consumers are connected to the
>>>>> broker?
>>>>> Are there any consumers active while you are testing?
>>>>> Do you use transactions?
>>>>> Are the messages persistent or transient?
>>>>>
>>>>> Kind regards,
>>>>> Lorenz
>>>>>
>>>>> [1] https://issues.apache.org/jira/browse/QPID-7461
>>>>>
>>>>>
>>>>>
>>>>> On 14/10/16 19:14, rammohan ganapavarapu wrot

Re: java broker 6.0.2 OOM

2016-10-17 Thread rammohan ganapavarapu
Lorenz,


Actually, message sizes vary between ~1 KB and 10 KB.

Thanks,
Ram

On Mon, Oct 17, 2016 at 10:23 AM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Lorenz,
>
> Thanks for trying to help, Please find the below answers for your
> questions.
>
>
> Q:What is the type of your virtualhost (Derby, BDB, ...)?
>
> A: Derby ( i actually wanted to know your recomendation)
>
> Q: How large are your messages? Do they vary in size or all the same size?
>
> A: Message size is approximately 1k
>
> Q: How many connections/sessions/producers/consumers are connected to the
> broker?
>
> A: we are using 3 producers and each have 10 connections.
>
> Q: Are there any consumers active while you are testing?
>
> A: No, we blocked all the consumers
> Q: Do you use transactions?
> A: They are transnational but ack is done immediately after accepting, if
> fails we push it back to dl queue.
> Q: Are the messages persistent or transient?
> A: They are persistent.
>
> Ram
>
> On Mon, Oct 17, 2016 at 1:15 AM, Lorenz Quack <quack.lor...@gmail.com>
> wrote:
>
>> Hello Ram,
>>
>> This seems curious.
>> Yes, the idea behind flow to disk is to prevent the broker from running
>> out of direct memory.
>> The broker does keep a certain representation of the message in memory
>> but that should affect heap and not direct memory.
>>
>> I currently do not understand what is happening here so I raised a JIRA
>> [1].
>>
>> Could you provide some more information about your test case so I can try
>> to reproduce it on my end?
>> What is the type of your virtualhost (Derby, BDB, ...)?
>> How large are your messages? Do they vary in size or all the same size?
>> How many connections/sessions/producers/consumers are connected to the
>> broker?
>> Are there any consumers active while you are testing?
>> Do you use transactions?
>> Are the messages persistent or transient?
>>
>> Kind regards,
>> Lorenz
>>
>> [1] https://issues.apache.org/jira/browse/QPID-7461
>>
>>
>>
>> On 14/10/16 19:14, rammohan ganapavarapu wrote:
>>
>>> Hi,
>>>
>>> I am confused with flow to disk context, when direct memory reaches flow
>>> to
>>> disk threshold, broker directly write to disk or it keep in both memory
>>> and
>>> disk? i was in the impression that flow to disk threshold to free up
>>> direct
>>> memory so that broker wont crash, isn't it?
>>>
>>> So i have 1.5gb direct memory and here is my flow to disk threshodl
>>>
>>> "broker.flowToDiskThreshold":"644245094"  (40% as default)
>>>
>>> I am pushing messages and after 40% of direct memory messages are writing
>>> to disk as you can see disk space is going up but my question is when its
>>> writing to disk shouldn't it free up direct memory? but i see direct
>>> memory
>>> usage is also going up, am i missing any thing here?
>>>
>>>
>>> broker1 | success | rc=0 >>
>>> /data   50G  754M   46G   2% /ebs
>>> Fri Oct 14 17:59:25 UTC 2016
>>>"maximumDirectMemorySize" : 1610612736,
>>>      "usedDirectMemorySize" : 840089280,
>>>
>>> broker1 | success | rc=0 >>
>>> /data   50G  761M   46G   2% /ebs
>>> Fri Oct 14 17:59:27 UTC 2016
>>>"maximumDirectMemorySize" : 1610612736,
>>>  "usedDirectMemorySize" : 843497152,
>>>
>>> .
>>> .
>>> .
>>> /data   50G  1.3G   46G   3% /ebs
>>> Fri Oct 14 18:09:08 UTC 2016
>>>"maximumDirectMemorySize" : 1610612736,
>>>  "usedDirectMemorySize" : 889035136,
>>>
>>>
>>> Please help me understand this!
>>>
>>> Thanks,
>>> Ram
>>>
>>>
>>>
>>> On Fri, Oct 14, 2016 at 9:22 AM, rammohan ganapavarapu <
>>> rammohanga...@gmail.com> wrote:
>>>
>>> So i ran the test few more times and it is happening every time, i was
>>>> monitoring direct memory usage and looks like it ran out of direct
>>>> memory.
>>>>
>>>>"maximumDirectMemorySize" : 2415919104,
>>>>  "usedDirectMemorySize" : 2414720896,
>>>>
>>>> Any thoughts guys?
>>>>
>>>> Ram
>>>>
>>>> On Thu, Oct 13, 2016 at 4:37 PM, rammohan ganapavarapu <
>>>> rammohanga...@gmail.com> wrote:

Re: java broker 6.0.2 OOM

2016-10-17 Thread rammohan ganapavarapu
Lorenz,

Thanks for trying to help. Please find below the answers to your questions.


Q:What is the type of your virtualhost (Derby, BDB, ...)?

A: Derby (I actually wanted to know your recommendation)

Q: How large are your messages? Do they vary in size or all the same size?

A: Message size is approximately 1k

Q: How many connections/sessions/producers/consumers are connected to the
broker?

A: We are using 3 producers and each has 10 connections.

Q: Are there any consumers active while you are testing?

A: No, we blocked all the consumers
Q: Do you use transactions?
A: They are transactional, but the ack is done immediately after accepting; if
it fails we push the message back to the DL queue.
Q: Are the messages persistent or transient?
A: They are persistent.

Ram

On Mon, Oct 17, 2016 at 1:15 AM, Lorenz Quack <quack.lor...@gmail.com>
wrote:

> Hello Ram,
>
> This seems curious.
> Yes, the idea behind flow to disk is to prevent the broker from running
> out of direct memory.
> The broker does keep a certain representation of the message in memory but
> that should affect heap and not direct memory.
>
> I currently do not understand what is happening here so I raised a JIRA
> [1].
>
> Could you provide some more information about your test case so I can try
> to reproduce it on my end?
> What is the type of your virtualhost (Derby, BDB, ...)?
> How large are your messages? Do they vary in size or all the same size?
> How many connections/sessions/producers/consumers are connected to the
> broker?
> Are there any consumers active while you are testing?
> Do you use transactions?
> Are the messages persistent or transient?
>
> Kind regards,
> Lorenz
>
> [1] https://issues.apache.org/jira/browse/QPID-7461
>
>
>
> On 14/10/16 19:14, rammohan ganapavarapu wrote:
>
>> Hi,
>>
>> I am confused with flow to disk context, when direct memory reaches flow
>> to
>> disk threshold, broker directly write to disk or it keep in both memory
>> and
>> disk? i was in the impression that flow to disk threshold to free up
>> direct
>> memory so that broker wont crash, isn't it?
>>
>> So i have 1.5gb direct memory and here is my flow to disk threshodl
>>
>> "broker.flowToDiskThreshold":"644245094"  (40% as default)
>>
>> I am pushing messages and after 40% of direct memory messages are writing
>> to disk as you can see disk space is going up but my question is when its
>> writing to disk shouldn't it free up direct memory? but i see direct
>> memory
>> usage is also going up, am i missing any thing here?
>>
>>
>> broker1 | success | rc=0 >>
>> /data   50G  754M   46G   2% /ebs
>> Fri Oct 14 17:59:25 UTC 2016
>>"maximumDirectMemorySize" : 1610612736,
>>  "usedDirectMemorySize" : 840089280,
>>
>> broker1 | success | rc=0 >>
>> /data   50G  761M   46G   2% /ebs
>> Fri Oct 14 17:59:27 UTC 2016
>>"maximumDirectMemorySize" : 1610612736,
>>  "usedDirectMemorySize" : 843497152,
>>
>> .
>> .
>> .
>> /data   50G  1.3G   46G   3% /ebs
>> Fri Oct 14 18:09:08 UTC 2016
>>"maximumDirectMemorySize" : 1610612736,
>>  "usedDirectMemorySize" : 889035136,
>>
>>
>> Please help me understand this!
>>
>> Thanks,
>> Ram
>>
>>
>>
>> On Fri, Oct 14, 2016 at 9:22 AM, rammohan ganapavarapu <
>> rammohanga...@gmail.com> wrote:
>>
>> So i ran the test few more times and it is happening every time, i was
>>> monitoring direct memory usage and looks like it ran out of direct
>>> memory.
>>>
>>>"maximumDirectMemorySize" : 2415919104,
>>>  "usedDirectMemorySize" : 2414720896,
>>>
>>> Any thoughts guys?
>>>
>>> Ram
>>>
>>> On Thu, Oct 13, 2016 at 4:37 PM, rammohan ganapavarapu <
>>> rammohanga...@gmail.com> wrote:
>>>
>>> Guys,
>>>>
>>>> Not sure what i am doing wrong, i have set heap to 1gb and direct mem to
>>>> 2gb after ~150k msgs queuedepth  in the queue i am getting bellow error
>>>> and
>>>> broker is getting killed. Any suggestions?
>>>>
>>>> 2016-10-13 23:27:41,894 ERROR [IO-/10.16.1.34:46096] (o.a.q.s.Main) -
>>>> Uncaught exception, shutting down.
>>>> java.lang.OutOfMemoryError: Direct buffer memory
>>>>  at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_75]
>>>>  at java.nio.DirectByteBuffer.(DirectByteBuffer.java

Re: qpid-java-broker-v6.0.2 exchange getting deleted automatically

2016-10-15 Thread rammohan ganapavarapu
I don't see that message, but I do see a bunch of these messages.

2016-10-10 19:53:42,128 INFO  [IoReceiver - /127.0.0.1:57042]
(queue.deleted) - [con:10,015(ANONYMOUS@/127.0.0.1:57042/default)/ch:1]
[vh(/default)/qu(reply-localhost.16017.1)] QUE-1002 : Deleted
2016-10-10 19:53:42,129 INFO  [IoReceiver - /127.0.0.1:57042]
(queue.deleted) - [con:10,015(ANONYMOUS@/127.0.0.1:57042/default)/ch:1]
[vh(/default)/qu(topic-eqp040wo.16017.1)] QUE-1002 : Deleted
2016-10-10 19:54:25,483 INFO  [IoReceiver - /127.0.0.1:57052]
(binding.deleted) - [Broker]
[vh(/default)/ex(direct/amq.direct)/qu(reply-localhost.16041.1)/rk(reply-localhost.16041.1)]
BND-1002 : Deleted
2016-10-10 19:54:25,483 INFO  [IoReceiver - /127.0.0.1:57052]
(queue.deleted) - [Broker] [vh(/default)/qu(reply-localhost.16041.1)]
QUE-1002 : Deleted
2016-10-10 19:54:25,483 INFO  [IoReceiver - /127.0.0.1:57052]
(queue.deleted) - [Broker] [vh(/default)/qu(topic-localhost.16041.1)]
QUE-1002 : Deleted

Ram

On Sat, Oct 15, 2016 at 9:19 AM, Rob Godfrey <rob.j.godf...@gmail.com>
wrote:

> If the exchange is deleted there should be a log entry for that - look for
> the string EXH-1002 in the logs.  As above the error you gave is simply
> that the client was trying to create an exchange but seemingly not
> providing enough information to do so (missing type).
>
> -- Rob
>
> On 15 October 2016 at 17:01, rammohan ganapavarapu <
> rammohanga...@gmail.com>
> wrote:
>
> > The log is only saying this one, I know there was a exchange we create
> and
> > it was gone, I created exchange again and it looks ok, wondering how that
> > exchange got deleted, the exchange entry in default.json is also
> > disappeared.
> >
> > Ram
> >
> > On Oct 15, 2016 3:02 AM, "Rob Godfrey" <rob.j.godf...@gmail.com> wrote:
> >
> > > The error you get "[con:36(producer11476488793213@/10.16.
> > > 16.241:61948/default)/ch:0] CHN-1003
> > > : Close : 404 - Unknown Exchange Type:" is not saying that the exchange
> > has
> > > been deleted, rather that the client has attempted to declare an
> exchange
> > > but the type of the exchange (e.g. fanout, direct, topic) is unknown.
> If
> > > you have included the whole of the log message then it would seem to
> > imply
> > > that the client is trying to declare an exchange of type "".  Do you
> know
> > > the address this producer is trying to send to?
> > >
> > > -- Rob
> > >
> > > On 15 October 2016 at 01:06, rammohan ganapavarapu <
> > > rammohanga...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I am running java broker v6.0.2 version with derby message
> store,while
> > we
> > > > are doing load testing we see one of the exchanges we created was
> > getting
> > > > deleted  with below error, any idea why its getting deleted( it
> > happened
> > > > twice)? I don't see any log traces for delete operation
> > > >
> > > >
> > > > 2016-10-14 23:50:58,181 INFO  [IO-/10.16.16.241:61948]
> > > > (q.m.c.close_forced)
> > > > - [con:36(ANONYMOUS@/10.16.16.241:61948/default)/ch:0]
> > > > [con:36(producer11476488793213@/10.16.16.241:61948/default)/ch:0]
> > > CHN-1003
> > > > : Close : 404 - Unknown Exchange Type:
> > > >
> > > > These are my exchanges from other broker which is running fine.
> > > >
> > > >   }, {
> > > > "id" : "e299c17c-c15b-47d2-8d04-448eb868f6fe",
> > > > "name" : "ax-ex-mxgroup001",
> > > > "type" : "fanout",
> > > > "durable" : true,
> > > > "lifetimePolicy" : "PERMANENT",
> > > > "lastUpdatedBy" : "ANONYMOUS",
> > > > "lastUpdatedTime" : 1476467100136,
> > > > "createdBy" : "ANONYMOUS",
> > > > "createdTime" : 1476467100136
> > > >   }, {
> > > > "id" : "1c5e229c-19c8-4e0f-aa1e-377189411dab",
> > > > "name" : "ax-ex-mxgroup001-dl",
> > > > "type" : "fanout",
> > > > "durable" : true,
> > > > "lifetimePolicy" : "PERMANENT",
> > > > "lastUpdatedBy" : "ANONYMOUS",
> > > > "lastUpdatedTime" : 1476467100302,
> > > > "createdBy" : "ANONYMOUS",
> > > > "createdTime" : 1476467100302
> > > >   } ],
> > > >
> > > > Thanks,
> > > > Ram
> > > >
> > >
> >
>


Re: qpid-java-broker-v6.0.2 exchange getting deleted automatically

2016-10-15 Thread rammohan ganapavarapu
The log only shows this one message. I know there was an exchange we created
and it was gone; I created the exchange again and it looks OK. I am wondering
how that exchange got deleted; the exchange entry in default.json has also
disappeared.

Ram

On Oct 15, 2016 3:02 AM, "Rob Godfrey" <rob.j.godf...@gmail.com> wrote:

> The error you get "[con:36(producer11476488793213@/10.16.
> 16.241:61948/default)/ch:0] CHN-1003
> : Close : 404 - Unknown Exchange Type:" is not saying that the exchange has
> been deleted, rather that the client has attempted to declare an exchange
> but the type of the exchange (e.g. fanout, direct, topic) is unknown.  If
> you have included the whole of the log message then it would seem to imply
> that the client is trying to declare an exchange of type "".  Do you know
> the address this producer is trying to send to?
>
> -- Rob
>
> On 15 October 2016 at 01:06, rammohan ganapavarapu <
> rammohanga...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am running java broker v6.0.2 version with derby message store,while we
> > are doing load testing we see one of the exchanges we created was getting
> > deleted  with below error, any idea why its getting deleted( it happened
> > twice)? I don't see any log traces for delete operation
> >
> >
> > 2016-10-14 23:50:58,181 INFO  [IO-/10.16.16.241:61948]
> > (q.m.c.close_forced)
> > - [con:36(ANONYMOUS@/10.16.16.241:61948/default)/ch:0]
> > [con:36(producer11476488793213@/10.16.16.241:61948/default)/ch:0]
> CHN-1003
> > : Close : 404 - Unknown Exchange Type:
> >
> > These are my exchanges from other broker which is running fine.
> >
> >   }, {
> > "id" : "e299c17c-c15b-47d2-8d04-448eb868f6fe",
> > "name" : "ax-ex-mxgroup001",
> > "type" : "fanout",
> > "durable" : true,
> > "lifetimePolicy" : "PERMANENT",
> > "lastUpdatedBy" : "ANONYMOUS",
> > "lastUpdatedTime" : 1476467100136,
> > "createdBy" : "ANONYMOUS",
> > "createdTime" : 1476467100136
> >   }, {
> > "id" : "1c5e229c-19c8-4e0f-aa1e-377189411dab",
> > "name" : "ax-ex-mxgroup001-dl",
> > "type" : "fanout",
> > "durable" : true,
> > "lifetimePolicy" : "PERMANENT",
> > "lastUpdatedBy" : "ANONYMOUS",
> > "lastUpdatedTime" : 1476467100302,
> > "createdBy" : "ANONYMOUS",
> > "createdTime" : 1476467100302
> >   } ],
> >
> > Thanks,
> > Ram
> >
>


qpid-java-broker-v6.0.2 exchange getting deleted automatically

2016-10-14 Thread rammohan ganapavarapu
Hi,

I am running Java broker v6.0.2 with the Derby message store. While we are
doing load testing we see one of the exchanges we created getting deleted, with
the below error. Any idea why it is getting deleted (it happened twice)? I
don't see any log traces for a delete operation.


2016-10-14 23:50:58,181 INFO  [IO-/10.16.16.241:61948] (q.m.c.close_forced)
- [con:36(ANONYMOUS@/10.16.16.241:61948/default)/ch:0]
[con:36(producer11476488793213@/10.16.16.241:61948/default)/ch:0] CHN-1003
: Close : 404 - Unknown Exchange Type:

These are my exchanges from other broker which is running fine.

  }, {
"id" : "e299c17c-c15b-47d2-8d04-448eb868f6fe",
"name" : "ax-ex-mxgroup001",
"type" : "fanout",
"durable" : true,
"lifetimePolicy" : "PERMANENT",
"lastUpdatedBy" : "ANONYMOUS",
"lastUpdatedTime" : 1476467100136,
"createdBy" : "ANONYMOUS",
"createdTime" : 1476467100136
  }, {
"id" : "1c5e229c-19c8-4e0f-aa1e-377189411dab",
"name" : "ax-ex-mxgroup001-dl",
"type" : "fanout",
"durable" : true,
"lifetimePolicy" : "PERMANENT",
"lastUpdatedBy" : "ANONYMOUS",
"lastUpdatedTime" : 1476467100302,
"createdBy" : "ANONYMOUS",
"createdTime" : 1476467100302
  } ],

Thanks,
Ram
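
A rough sketch, not from this thread, of recreating the missing exchange
through the Java broker's HTTP management interface; the port, the credentials,
and the virtualhost node/name pair ("default"/"default") are assumptions about
this setup:

    curl -X PUT -u admin:admin -H "Content-Type: application/json" \
        -d '{"type":"fanout","durable":true}' \
        http://localhost:8080/api/latest/exchange/default/default/ax-ex-mxgroup001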


Re: java broker 6.0.2 OOM

2016-10-14 Thread rammohan ganapavarapu
Hi,

I am confused about the flow-to-disk behaviour: when direct memory reaches the
flow-to-disk threshold, does the broker write directly to disk, or does it keep
the messages in both memory and on disk? I was under the impression that the
flow-to-disk threshold is there to free up direct memory so that the broker
won't crash; isn't that right?

So I have 1.5 GB of direct memory, and here is my flow-to-disk threshold:

"broker.flowToDiskThreshold":"644245094"  (40% as default)

I am pushing messages, and after 40% of direct memory is used the messages are
written to disk, as you can see from the disk space going up. But my question
is: when it is writing to disk, shouldn't that free up direct memory? Instead I
see direct memory usage also going up. Am I missing anything here?


broker1 | success | rc=0 >>
/data   50G  754M   46G   2% /ebs
Fri Oct 14 17:59:25 UTC 2016
  "maximumDirectMemorySize" : 1610612736,
"usedDirectMemorySize" : 840089280,

broker1 | success | rc=0 >>
/data   50G  761M   46G   2% /ebs
Fri Oct 14 17:59:27 UTC 2016
  "maximumDirectMemorySize" : 1610612736,
"usedDirectMemorySize" : 843497152,

.
.
.
/data   50G  1.3G   46G   3% /ebs
Fri Oct 14 18:09:08 UTC 2016
  "maximumDirectMemorySize" : 1610612736,
"usedDirectMemorySize" : 889035136,


Please help me understand this!

Thanks,
Ram



On Fri, Oct 14, 2016 at 9:22 AM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> So I ran the test a few more times and it happens every time. I was
> monitoring direct memory usage, and it looks like it ran out of direct memory.
>
>   "maximumDirectMemorySize" : 2415919104,
> "usedDirectMemorySize" : 2414720896,
>
> Any thoughts guys?
>
> Ram
>
> On Thu, Oct 13, 2016 at 4:37 PM, rammohan ganapavarapu <
> rammohanga...@gmail.com> wrote:
>
>> Guys,
>>
>> Not sure what I am doing wrong. I have set the heap to 1 GB and direct
>> memory to 2 GB; after a queue depth of ~150k messages I am getting the
>> below error and the broker is getting killed. Any suggestions?
>>
>> 2016-10-13 23:27:41,894 ERROR [IO-/10.16.1.34:46096] (o.a.q.s.Main) -
>> Uncaught exception, shutting down.
>> java.lang.OutOfMemoryError: Direct buffer memory
>> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_75]
>> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>> ~[na:1.7.0_75]
>> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
>> ~[na:1.7.0_75]
>>
>> 2016-10-13 23:27:41,894 ERROR [IO-/10.16.1.34:46096] (o.a.q.s.Main) -
>> Uncaught exception, shutting down.
>> java.lang.OutOfMemoryError: Direct buffer memory
>> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_75]
>> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>> ~[na:1.7.0_75]
>> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
>> ~[na:1.7.0_75]
>> at 
>> org.apache.qpid.bytebuffer.QpidByteBuffer.allocateDirect(QpidByteBuffer.java:469)
>> ~[qpid-common-6.0.2.jar:6.0.2]
>> at 
>> org.apache.qpid.bytebuffer.QpidByteBuffer.allocateDirect(QpidByteBuffer.java:482)
>> ~[qpid-common-6.0.2.jar:6.0.2]
>> at 
>> org.apache.qpid.server.protocol.v0_10.ServerEncoder.init(ServerEncoder.java:57)
>> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
>> at org.apache.qpid.server.protocol.v0_10.ServerDisassembler.
>> method(ServerDisassembler.java:196) ~[qpid-broker-plugins-amqp-0-1
>> 0-protocol-6.0.2.jar:6.0.2]
>> at org.apache.qpid.server.protocol.v0_10.ServerDisassembler.
>> control(ServerDisassembler.java:185) ~[qpid-broker-plugins-amqp-0-1
>> 0-protocol-6.0.2.jar:6.0.2]
>> at org.apache.qpid.server.protocol.v0_10.ServerDisassembler.
>> control(ServerDisassembler.java:57) ~[qpid-broker-plugins-amqp-0-1
>> 0-protocol-6.0.2.jar:6.0.2]
>> at org.apache.qpid.transport.Method.delegate(Method.java:159)
>> ~[qpid-common-6.0.2.jar:6.0.2]
>> at org.apache.qpid.server.protocol.v0_10.ServerDisassembler.
>> send(ServerDisassembler.java:79) ~[qpid-broker-plugins-amqp-0-1
>> 0-protocol-6.0.2.jar:6.0.2]
>> at org.apache.qpid.transport.Connection.send(Connection.java:415)
>> ~[qpid-common-6.0.2.jar:6.0.2]
>> at 
>> org.apache.qpid.server.protocol.v0_10.ServerConnection.send(ServerConnection.java:497)
>> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
>> at org.apache.qpid.transport.Session.send(Session.java:588)
>> ~[qpid-common-6.0.2.jar:6.0.2]
>> at org.apache.qpid.transport.Session.invoke(Session.java:804)
>> ~[qpid-common-6.0.2.jar:6.0.2]
>> at org.apache.qpid.trans

Re: java broker 6.0.2 OOM

2016-10-14 Thread rammohan ganapavarapu
So I ran the test a few more times and it happens every time. I was
monitoring direct memory usage, and it looks like it ran out of direct memory.

  "maximumDirectMemorySize" : 2415919104,
"usedDirectMemorySize" : 2414720896,

Any thoughts guys?

Ram

On Thu, Oct 13, 2016 at 4:37 PM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Guys,
>
> Not sure what I am doing wrong. I have set the heap to 1 GB and direct
> memory to 2 GB; after a queue depth of ~150k messages I am getting the
> below error and the broker is getting killed. Any suggestions?
>
> 2016-10-13 23:27:41,894 ERROR [IO-/10.16.1.34:46096] (o.a.q.s.Main) -
> Uncaught exception, shutting down.
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_75]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> ~[na:1.7.0_75]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
> ~[na:1.7.0_75]
>
> 2016-10-13 23:27:41,894 ERROR [IO-/10.16.1.34:46096] (o.a.q.s.Main) -
> Uncaught exception, shutting down.
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_75]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> ~[na:1.7.0_75]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
> ~[na:1.7.0_75]
> at 
> org.apache.qpid.bytebuffer.QpidByteBuffer.allocateDirect(QpidByteBuffer.java:469)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.bytebuffer.QpidByteBuffer.allocateDirect(QpidByteBuffer.java:482)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.server.protocol.v0_10.ServerEncoder.init(ServerEncoder.java:57)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.server.protocol.v0_10.
> ServerDisassembler.method(ServerDisassembler.java:196)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.server.protocol.v0_10.
> ServerDisassembler.control(ServerDisassembler.java:185)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.server.protocol.v0_10.
> ServerDisassembler.control(ServerDisassembler.java:57)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Method.delegate(Method.java:159)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.server.protocol.v0_10.ServerDisassembler.send(ServerDisassembler.java:79)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Connection.send(Connection.java:415)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.server.protocol.v0_10.ServerConnection.send(ServerConnection.java:497)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Session.send(Session.java:588)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Session.invoke(Session.java:804)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Session.invoke(Session.java:613)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.transport.SessionInvoker.sessionCompleted(SessionInvoker.java:65)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Session.flushProcessed(Session.java:514)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at org.apache.qpid.server.protocol.v0_10.
> ServerSessionDelegate.command(ServerSessionDelegate.java:119)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.server.protocol.v0_10.
> ServerSessionDelegate.command(ServerSessionDelegate.java:87)
> ~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Method.delegate(Method.java:155)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Session.received(Session.java:582)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at org.apache.qpid.transport.Connection.dispatch(Connection.java:447)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.transport.ConnectionDelegate.handle(ConnectionDelegate.java:65)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.transport.ConnectionDelegate.handle(ConnectionDelegate.java:41)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.transport.MethodDelegate.executionSync(MethodDelegate.java:104)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.transport.ExecutionSync.dispatch(ExecutionSync.java:82)
> ~[qpid-common-6.0.2.jar:6.0.2]
> at 
> org.apache.qpid.transport.ConnectionDelegate.command(ConnectionDelegate.java:55)
> ~[qpid-common-6.0.2.jar:6.0.2]

Re: java broker 6.0.2 OOM

2016-10-13 Thread rammohan ganapavarapu
]
at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_75]
at
org.apache.qpid.server.protocol.v0_10.ServerConnection.received(ServerConnection.java:272)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.protocol.v0_10.ServerAssembler.emit(ServerAssembler.java:122)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.protocol.v0_10.ServerAssembler.assemble(ServerAssembler.java:211)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.protocol.v0_10.ServerAssembler.frame(ServerAssembler.java:151)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.protocol.v0_10.ServerAssembler.received(ServerAssembler.java:79)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.protocol.v0_10.ServerInputHandler.parse(ServerInputHandler.java:175)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.protocol.v0_10.ServerInputHandler.received(ServerInputHandler.java:82)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.protocol.v0_10.AMQPConnection_0_10$3.run(AMQPConnection_0_10.java:156)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at java.security.AccessController.doPrivileged(Native Method)
~[na:1.7.0_75]
at
org.apache.qpid.server.protocol.v0_10.AMQPConnection_0_10.received(AMQPConnection_0_10.java:148)
~[qpid-broker-plugins-amqp-0-10-protocol-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.MultiVersionProtocolEngine.received(MultiVersionProtocolEngine.java:144)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.NonBlockingConnection.processAmqpData(NonBlockingConnection.java:609)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.NonBlockingConnectionPlainDelegate.processData(NonBlockingConnectionPlainDelegate.java:58)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.NonBlockingConnection.doRead(NonBlockingConnection.java:503)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.NonBlockingConnection.doWork(NonBlockingConnection.java:282)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.NetworkConnectionScheduler.processConnection(NetworkConnectionScheduler.java:124)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.SelectorThread$ConnectionProcessor.processConnection(SelectorThread.java:504)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.SelectorThread$SelectionTask.performSelect(SelectorThread.java:337)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.SelectorThread$SelectionTask.run(SelectorThread.java:87)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
org.apache.qpid.server.transport.SelectorThread.run(SelectorThread.java:462)
~[qpid-broker-core-6.0.2.jar:6.0.2]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
~[na:1.7.0_75]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
~[na:1.7.0_75]
at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_75]


Ram

On Thu, Oct 13, 2016 at 1:45 PM, Rob Godfrey <rob.j.godf...@gmail.com>
wrote:

> On 13 October 2016 at 19:54, rammohan ganapavarapu <
> rammohanga...@gmail.com>
> wrote:
>
> > Rob,
> >
> > Understood. We are doing negative testing, i.e. what happens to the
> > broker when all the consumers are down but producers keep pumping
> > messages, so I was under the impression that the flow-to-disk threshold
> > would keep the broker from going bad because of OOM. So I bumped up the
> > heap and direct memory settings of the broker and tried to restart it,
> > but it complained with the error below.
> >
> >
> >
> > *2016-10-13 18:28:46,157 INFO  [Housekeeping[default]]
> > (q.m.q.flow_to_disk_active) - [Housekeeping[default]]
> > [vh(/default)/qu(ax-q-mxgroup001)] QUE-1014 : Message flow to disk
> active
> > :  Message memory use 13124325 kB (13gb) exceeds threshold 168659 kB
> > (168mb)*
> >
> >
> > But the actual flow-to-disk threshold reported by the broker is:
> >
> > *"broker.flowToDiskThreshold" : "858993459"* (which is 40% of the 2 GB of
> > direct memory)
> >
> > I know the size of my messages exceeds the threshold, but I am trying to
> > understand why the log message says 168 MB.
> >
>
> So the broker takes its overall flow to disk "quota" and then divides this
> up between virtual hosts, and for each virtual host divides up between the
> queues on the virtual host.  This allows for some fairness when multiple
> virtual hosts or multiple que

Re: java broker 6.0.2 OOM

2016-10-13 Thread rammohan ganapavarapu
Rob,

Understood. We are doing negative testing, i.e. what happens to the broker
when all the consumers are down but producers keep pumping messages, so I
was under the impression that the flow-to-disk threshold would keep the
broker from going bad because of OOM. So I bumped up the heap and direct
memory settings of the broker and tried to restart it, but it complained
with the error below.



*2016-10-13 18:28:46,157 INFO  [Housekeeping[default]]
(q.m.q.flow_to_disk_active) - [Housekeeping[default]]
[vh(/default)/qu(ax-q-mxgroup001)] QUE-1014 : Message flow to disk active
:  Message memory use 13124325 kB (13gb) exceeds threshold 168659 kB
(168mb)*


But the actual flow-to-disk threshold reported by the broker is:

*"broker.flowToDiskThreshold" : "858993459"* (which is 40% of the 2 GB of
direct memory)

I know the size of my messages exceeds the threshold, but I am trying to
understand why the log message says 168 MB.

So, to get the broker running, I have enabled background recovery and it
seems to be working fine, but I am curious how the broker loads the messages
back from disk into memory: does it load them all at once, or in batches?

Thanks,
Ram

On Thu, Oct 13, 2016 at 11:29 AM, Rob Godfrey <rob.j.godf...@gmail.com>
wrote:

> On 13 October 2016 at 17:36, rammohan ganapavarapu <
> rammohanga...@gmail.com>
> wrote:
>
> > Lorenz,
> >
> > Thank you for the link. So no matter how much heap you have, you will
> > hit the hard limit at some point, right? I thought flow to disk would
> > keep the broker from crashing because of an out-of-memory issue, but it
> > looks like that is not the case.
> >
> > In my environment we will have a dynamic number of producers and
> > consumers, so it is hard to measure in advance how much heap to allocate
> > based on the number of connections/sessions.
> >
> > Ram
> >
> >
> Yeah - currently there is always a hard limit based on the number of "queue
> entries".  Ultimately there's a trade-off to be had with designing a queue
> data structure which is high performing, vs. one which can be offloaded
> onto disk.  This gets even more complicated for queues which are not strict
> FIFO (priority queues, LVQ, etc) or where consumers have selectors.
> Ultimately if you are storing millions of messages in your broker then you
> are probably doing things wrong - we would expect people to enforce queue
> limits and flow control rather than expect the broker to have infinite
> capacity (and even off-loading to disk you will still run out of disk space
> at some point).
>
> -- Rob
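
A back-of-envelope reading of that hard limit, as a sketch only: the per-entry
heap cost below is an assumed figure purely for illustration, and the memory
documentation linked further down in this thread is the place to look for real
guidance.

# assumed: roughly 1 KiB of heap per queue entry, with the -Xmx512m default heap
heap_bytes=$(( 512 * 1024 * 1024 ))
assumed_heap_per_entry=1024
echo $(( heap_bytes / assumed_heap_per_entry ))
# prints 524288, i.e. a ceiling in the low hundreds of thousands of queue entries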
>
>
> >
> >
> > On Thu, Oct 13, 2016 at 9:05 AM, Lorenz Quack <quack.lor...@gmail.com>
> > wrote:
> >
> > > Hello Ram,
> > >
> > > may I refer you to the relevant section of the documentation [1].
> > > As explained there in more detail, the broker keeps a representation of
> > > each message in heap even when flowing the message to disk.
> > > Therefore the amount of JVM heap memory puts a hard limit on the number
> > of
> > > message the broker can hold.
> > >
> > > Kind Regards,
> > > Lorenz
> > >
> > > [1] https://qpid.apache.org/releases/qpid-java-6.0.4/java-broker
> > > /book/Java-Broker-Runtime-Memory.html
> > >
> > >
> > >
> > > On 13/10/16 16:40, rammohan ganapavarapu wrote:
> > >
> > >> Hi,
> > >>
> > >> We are doing some load testing with Java broker 6.0.2 by stopping all
> > >> consumers; the broker crashed at 644359 messages. Even if I try to
> > >> restart the broker, it crashes with the same OOM error.
> > >>
> > >>   "persistentEnqueuedBytes" : 12731167222,
> > >>  "persistentEnqueuedMessages" : 644359,
> > >>  "queueDepthBytes" : 12731167222,
> > >>  "queueDepthMessages" : 644359,
> > >>  "totalDequeuedBytes" : 0,
> > >>  "totalDequeuedMessages" : 0,
> > >>  "totalEnqueuedBytes" : 12731167222,
> > >>  "totalEnqueuedMessages" : 644359,
> > >>
> > >> JVM settings of broker: -Xmx512m -XX:MaxDirectMemorySize=1536m
> > >>
> > >> "broker.flowToDiskThreshold" : "644245094",
> > >>
> > >> So theoretically the broker should flow those messages to disk after
> > >> the threshold, and then it shouldn't have hit an OOM exception, right?
> > >> Do I have to do any other tuning?
> > >>
> > >> Thanks,
> > >> Ram
> > >>
> > >>
> > >
> > > -
> > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > > For additional commands, e-mail: users-h...@qpid.apache.org
> > >
> > >
> >
>


Re: java broker 6.0.2 OOM

2016-10-13 Thread rammohan ganapavarapu
 Local Variable:
org.apache.qpid.server.management.plugin.filter.LoggingFilter#1
at
org.apache.qpid.server.management.plugin.filter.ExceptionHandlingFilter.doFilter(ExceptionHandlingFilter.java:56)
   Local Variable:
org.eclipse.jetty.servlet.ServletHandler$CachedChain#2
   Local Variable: java.lang.String#323536
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
   Local Variable:
org.apache.qpid.server.management.plugin.filter.ExceptionHandlingFilter#1
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
   Local Variable:
org.eclipse.jetty.servlet.ServletHandler$CachedChain#1
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)
   Local Variable: org.eclipse.jetty.servlet.ServletHandler#1
   Local Variable: org.eclipse.jetty.servlet.ServletHolder#53
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
   Local Variable: org.eclipse.jetty.server.session.SessionHandler#1
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
   Local Variable: javax.servlet.DispatcherType#3
   Local Variable: java.lang.String#323962
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
   Local Variable: org.eclipse.jetty.servlet.ServletContextHandler#1
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
   Local Variable: org.eclipse.jetty.server.Response#1
   Local Variable: org.eclipse.jetty.server.Request#1
at
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
   Local Variable: java.lang.String#323533
   Local Variable: java.lang.String#323534
   Local Variable: org.eclipse.jetty.server.Server#1
at
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
at
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
   Local Variable:
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler#1
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
   Local Variable: byte[]#108452
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
   Local Variable: org.eclipse.jetty.http.HttpParser#1
at
org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
   Local Variable: org.eclipse.jetty.server.AsyncHttpConnection#1
at
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
   Local Variable: org.eclipse.jetty.io.nio.SelectChannelEndPoint#1
at
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
   Local Variable: org.eclipse.jetty.io.nio.SelectChannelEndPoint$1#1
at java.lang.Thread.run(Thread.java:745)


Please let me know if you need any more information.

On Thu, Oct 13, 2016 at 8:40 AM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

> Hi,
>
> We are doing some load testing with Java broker 6.0.2 by stopping all
> consumers; the broker crashed at 644359 messages. Even if I try to restart
> the broker, it crashes with the same OOM error.
>
>  "persistentEnqueuedBytes" : 12731167222,
> "persistentEnqueuedMessages" : 644359,
> "queueDepthBytes" : 12731167222,
> "queueDepthMessages" : 644359,
> "totalDequeuedBytes" : 0,
> "totalDequeuedMessages" : 0,
> "totalEnqueuedBytes" : 12731167222,
> "totalEnqueuedMessages" : 644359,
>
> JVM settings of broker: -Xmx512m -XX:MaxDirectMemorySize=1536m
>
> "broker.flowToDiskThreshold" : "644245094",
>
> So theoretically the broker should flow those messages to disk after the
> threshold, and then it shouldn't have hit an OOM exception, right? Do I
> have to do any other tuning?
>
> Thanks,
> Ram
>
>
>


java broker 6.0.2 OOM

2016-10-13 Thread rammohan ganapavarapu
Hi,

We are doing some load testing with Java broker 6.0.2 by stopping all
consumers; the broker crashed at 644359 messages. Even if I try to restart
the broker, it crashes with the same OOM error.

 "persistentEnqueuedBytes" : 12731167222,
"persistentEnqueuedMessages" : 644359,
"queueDepthBytes" : 12731167222,
"queueDepthMessages" : 644359,
"totalDequeuedBytes" : 0,
"totalDequeuedMessages" : 0,
"totalEnqueuedBytes" : 12731167222,
"totalEnqueuedMessages" : 644359,

JVM settings of broker: -Xmx512m -XX:MaxDirectMemorySize=1536m

"broker.flowToDiskThreshold" : "644245094",

So theoretically the broker should flow those messages to disk after the
threshold, and then it shouldn't have hit an OOM exception, right? Do I have
to do any other tuning?

Thanks,
Ram
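
The -Xmx512m / -XX:MaxDirectMemorySize=1536m values above are the qpid-server
script defaults. One way to raise them, as a sketch: it assumes the startup
script honours QPID_JAVA_MEM, as the "QPID_JAVA_MEM not set" startup output
quoted elsewhere in this archive suggests, and the sizes shown are only
examples.

# override the JVM memory defaults before starting the broker
export QPID_JAVA_MEM="-Xmx1g -XX:MaxDirectMemorySize=3g"
./qpid-server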


Re: qpid.work_dir property is not honoring on broker startup

2016-10-12 Thread rammohan ganapavarapu
Ohh, OK. I think the console log is misleading: seeing
"-DQPID_WORK=/Users/rob" I would assume it is using "/Users/rob" as the work
directory. But anyway, thanks for the help.

Ram
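
The two forms that do take effect at startup, per the replies quoted below,
shown as a sketch; the /ebs path is just the one used earlier in this thread.

# form 1: pass the work directory as a command-line property
./qpid-server -prop qpid.work_dir=/ebs
# form 2: export QPID_WORK so the startup script passes -DQPID_WORK to the broker
QPID_WORK=/ebs ./qpid-server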

On Wed, Oct 12, 2016 at 3:50 PM, Rob Godfrey <rob.j.godf...@gmail.com>
wrote:

> On 12 October 2016 at 23:05, rammohan ganapavarapu <
> rammohanga...@gmail.com>
> wrote:
>
> > When you say "prop" config option pass -prop=value option to startup
> script
> > right? if that is the case it didn't work either.
>
>
> What do you mean by "doesn't work"?
>
>
> > I think by default
> > QPID_WORK dir is set to user home.
> >
> > ./qpid-server -os -icp /opt/qpid-java-broker/etc/config.json -props
> > /opt/qpid-java-broker/etc/qpid.properties -prop qpid.work_dir=/ebs -prop
> > qpid.home_dir=/opt/qpid-java-broker/
> > Setting QPID_WORK to /root as default
> > System Properties set to -DQPID_HOME=/opt/qpid-java-broker
> > -DQPID_WORK=/root
> >
> >
> if qpid.work_dir is set in this way, the value of QPID_WORK is never
> actually used by the broker...
>
> So, I started the broker like so:
>
> rob$ bin/qpid-server -prop qpid.work_dir=/tmp/foo
> Setting QPID_WORK to /Users/rob as default
> System Properties set to
> -DQPID_HOME=/Users/rob/qpid-6.0.x/systests/target/qpid-
> broker/6.0.5-SNAPSHOT
> -DQPID_WORK=/Users/rob
> QPID_OPTS set to
> Using QPID_CLASSPATH
> /Users/rob/qpid-6.0.x/systests/target/qpid-broker/6.
> 0.5-SNAPSHOT/lib/*:/Users/rob/qpid-6.0.x/systests/target/
> qpid-broker/6.0.5-SNAPSHOT/lib/plugins/*:/Users/rob/qpid-
> 6.0.x/systests/target/qpid-broker/6.0.5-SNAPSHOT/lib/opt/*
> Info: QPID_JAVA_GC not set. Defaulting to JAVA_GC -XX:+UseConcMarkSweepGC
> -XX:+HeapDumpOnOutOfMemoryError
> Info: QPID_JAVA_MEM not set. Defaulting to JAVA_MEM -Xmx512m
> -XX:MaxDirectMemorySize=1536m
> Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
> [Broker] BRK-1006 : Using configuration : /tmp/foo/config.json
> [Broker] BRK-1001 : Startup : Version: 6.0.5-SNAPSHOT Build: 1763290
> [Broker] BRK-1010 : Platform : JVM : Oracle Corporation version:
> 1.8.0_74-b02 OS : Mac OS X version: 10.11.6 arch: x86_64 cores: 8
> [Broker] BRK-1011 : Maximum Memory : Heap : 518,979,584 bytes Direct :
> 1,610,612,736 bytes
> [Broker] BRK-1017 : Process : PID : 86175
> [Broker] BRK-1002 : Starting : Listening on TCP port 5672
> [Broker] MNG-1001 : Web Management Startup
> [Broker] MNG-1002 : Starting : HTTP : Listening on TCP port 8080
> [Broker] MNG-1004 : Web Management Ready
> [Broker] BRK-1004 : Qpid Broker Ready
>
> So, yes, the shell script is setting QPID_WORK, but the broker is using the
> value in qpid.work_dir, i.e. /tmp/foo
>
> -- Rob
>
> On Wed, Oct 12, 2016 at 1:33 PM, Robbie Gemmell <robbie.gemm...@gmail.com>
> > wrote:
> >
> > > Rob's look suggested it will only pick up the qpid.work_dir value at
> > > startup when using the command line 'prop' config option, so you would
> > > currently have to do that, or set QPID_WORK instead.
> > >
> > > On 12 October 2016 at 17:47, rammohan ganapavarapu
> > > <rammohanga...@gmail.com> wrote:
> > > > So "qpid.work_dir" doesn't take effect at the time of boot up? i have
> > to
> > > > set "
> > > > QPID_WORK" environment variable?
> > > >
> > > > On Tue, Oct 11, 2016 at 2:45 AM, Rob Godfrey <
> rob.j.godf...@gmail.com>
> > > > wrote:
> > > >
> > > >> Looking at the code, in the startup phase the Broker seems to use
> only
> > > >> QPID_WORK as provided in a system property, or qpid.work_dir
> provided
> > > >> on the command line with the -prop qpid.work_dir=foo form.  It seems
> > > >> that it does not try to look for qpid.work_dir in the system
> > > >> properties (or, implicitly, as defined in a properties file).  I
> think
> > > >> this can be considered a bug.
> > > >>
> > > >> -- Rob
> > > >>
> > > >> On 11 October 2016 at 10:25, Lorenz Quack <quack.lor...@gmail.com>
> > > wrote:
> > > >> > Hi Ram,
> > > >> >
> > > >> > Notice that in your example both QPID_WORK and qpid.work_dir are
> > > >> specified.
> > > >> > It seems that currently QPID_WORK take precedence.
> > > >> >
> > > >> > I guess if the environment variable and system property QPID_WORK
> > are
> > > not
> > > >> > set then the broker picks up the qpid.work_
