Re: Qpid broker 6.0.4 performance issues

2016-10-18 Thread Rob Godfrey
On 17 October 2016 at 21:50, Rob Godfrey  wrote:

>
>
> On 17 October 2016 at 21:24, Ramayan Tiwari 
> wrote:
>
>> Hi Rob,
>>
>> We are certainly interested in testing the "multi queue consumers"
>> behavior
>> with your patch in the new broker. We would like to know:
>>
>> 1. What will be the scope of the changes, client or broker or both? We are
>> currently running the 0.16 client, so we would like to make sure that we
>> will be able to use these changes with the 0.16 client.
>>
>>
> There's no change to the client.  I can't remember what was in the 0.16
> client... the only issue would be if there are any bugs in the parsing of
> address arguments.  I can try to test that out tomorrow.
>


OK - with a little bit of care to get round the address parsing issues in
the 0.16 client... I think we can get this to work.  I've created the
following JIRA:

https://issues.apache.org/jira/browse/QPID-7462

and attached to it are a patch which applies against trunk, and a separate
patch which applies against the 6.0.x branch (
https://svn.apache.org/repos/asf/qpid/java/branches/6.0.x - this is 6.0.4
plus a few other fixes which we will soon be releasing as 6.0.5)

To create a consumer which uses this feature (and multi queue consumption)
for the 0.16 client you need to use something like the following as the
address:

queue_01 ; {node : { type : queue }, link : { x-subscribes : {
arguments : { x-multiqueue : [ queue_01, queue_02, queue_03 ],
x-pull-only : true } } } }


Note that the initial queue_01 has to be the name of an actual queue on
the virtual host, but otherwise it is not actually used (if you were
using a 0.32 or later client you could just use '' here).  The actual
queues that are consumed from are those in the list value associated with
x-multiqueue.  For my testing I created a list with 3000 queues here
and this worked fine.
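
As a rough illustration (the JNDI style shown is the usual setup for the
legacy client; the broker URL, lookup names and queue names are placeholders
rather than anything from the patch), the address can be wired into a JMS
consumer along these lines:

// jndi.properties (illustrative values):
//   java.naming.factory.initial = org.apache.qpid.jndi.PropertiesFileInitialContextFactory
//   connectionfactory.qpidConnectionFactory = amqp://guest:guest@clientid/?brokerlist='tcp://localhost:5672'
//   destination.multiQueueDest = queue_01 ; {node : { type : queue }, link : { x-subscribes : { \
//     arguments : { x-multiqueue : [ queue_01, queue_02, queue_03 ], x-pull-only : true } } } }

import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class MultiQueueConsumerExample {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext();   // picks up jndi.properties from the classpath
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("qpidConnectionFactory");
        Destination destination = (Destination) ctx.lookup("multiQueueDest");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(destination);

        // Asynchronous delivery is unchanged: one MessageListener receives
        // messages from every queue listed under x-multiqueue.
        consumer.setMessageListener(message -> {
            try {
                System.out.println("Received " + message.getJMSMessageID());
                message.acknowledge();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();
        System.in.read();   // block so the JVM stays up while messages arrive
    }
}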

Let me know if you have any questions / issues,

Hope this helps,
Rob


>
>
>> 2. My understanding is that the "pull vs push" change is only with respect
>> to the broker and it does not change our architecture, where we use a
>> MessageListener to receive messages asynchronously.
>>
>
> Exactly - this is only a change within the internal broker threading
> model.  The external behaviour of the broker remains essentially unchanged.
>
>
>>
>> 3. Once the I/O refactoring is complete, we would be able to go back to
>> using standard JMS consumers (Destination). What is the timeline and broker
>> release version for the completion of this work?
>>
>
> You might wish to continue to use the "multi queue" model, depending on
> your actual use case, but yeah once the I/O work is complete I would hope
> that you could use the thousands of consumers model should you wish.  We
> don't have a schedule for the next phase of I/O rework right now - about
> all I can say is that it is unlikely to be complete this year.  I'd need to
> talk with Keith (who is currently on vacation) as to when we think we may
> be able to schedule it.
>
>
>>
>> Let me know once you have integrated the patch and I will re-run our
>> performance tests to validate it.
>>
>>
> I'll make a patch for 6.0.x presently (I've been working on a change
> against trunk - the patch will probably have to change a bit to apply to
> 6.0.x).
>
> Cheers,
> Rob
>
> Thanks
>> Ramayan
>>
>> On Sun, Oct 16, 2016 at 3:30 PM, Rob Godfrey 
>> wrote:
>>
>> > OK - so having pondered / hacked around a bit this weekend, I think to
>> get
>> > decent performance from the IO model in 6.0 for your use case we're
>> going
>> > to have to change things around a bit.
>> >
>> > Basically 6.0 is an intermediate step on our IO / threading model
>> journey.
>> > In earlier versions we used 2 threads per connection for IO (one read,
>> one
>> > write) and then extra threads from a pool to "push" messages from
>> queues to
>> > connections.
>> >
>> > In 6.0 we move to using a pool for the IO threads, and also stopped
>> queues
>> > from "pushing" to connections while the IO threads were acting on the
>> > connection.  It's this latter fact which is screwing up performance for
>> > your use case here because what happens is that on each network read we
>> > tell each consumer to stop accepting pushes from the queue until the IO
>> > interaction has completed.  This is causing lots of loops over your 3000
>> > consumers on each session, which is eating up a lot of CPU on every
>> network
>> > interaction.
>> >
>> > In the final version of our IO refactoring we want to remove the
>> "pushing"
>> > from the queue, and instead have the consumers "pull" - so that the only
>> > threads that operate on the queues (outside of housekeeping tasks like
>> > expiry) will be the IO threads.
>> >
>> > So, what we could do (and I have a patch sitting on my laptop for this)
>> is
>> > to look at using the "multi queue consumers" work I did for you guys
>> > before, but augmenting this so that the consumers work using a "pull"
>> model
>> > rather than the push model.  This will guarantee strict fairness between
>> 

Re: java broker 6.0.2 OOM

2016-10-18 Thread Lorenz Quack

Hello Ram,

I just tried to reproduce your issue but was not successful.
I ran a 6.0.2 broker (with default config) and trunk clients.
I created 30 producers on their own connections and sent 10k persistent
messages each, every message in its own transaction.
After direct memory usage hit 634,583,040 B, flow to disk kicked in
and the direct memory usage leveled off.
I monitored the log for the QUE-1014 and BRK-1014 messages to verify 
that the broker starts flowing messages to disk and I monitored the 
direct memory usage with jvisualvm.
I attached my hacked together test client application, the log file and 
a screenshot showing the direct memory usage.
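
For reference, the gist of that client is roughly the following (a
stripped-down sketch, not the attached program; the JNDI lookup names, queue
name and 1 kB payload size are assumptions):

import javax.jms.*;
import javax.naming.InitialContext;

public class OomReproducer {
    static final int PRODUCERS = 30;
    static final int MESSAGES_PER_PRODUCER = 10_000;

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();   // assumes a jndi.properties defining these two names
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("qpidConnectionFactory");
        Queue queue = (Queue) ctx.lookup("testQueue");

        Thread[] producers = new Thread[PRODUCERS];
        for (int i = 0; i < PRODUCERS; i++) {
            producers[i] = new Thread(() -> {
                try {
                    Connection connection = factory.createConnection();  // each producer on its own connection
                    connection.start();
                    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                    MessageProducer producer = session.createProducer(queue);
                    producer.setDeliveryMode(DeliveryMode.PERSISTENT);   // persistent messages

                    byte[] payload = new byte[1024];                     // payload size is an assumption
                    for (int m = 0; m < MESSAGES_PER_PRODUCER; m++) {
                        BytesMessage message = session.createBytesMessage();
                        message.writeBytes(payload);
                        producer.send(message);
                        session.commit();                                // each message in its own transaction
                    }
                    connection.close();
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            producers[i].start();
        }
        for (Thread producer : producers) {
            producer.join();
        }
    }
}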


Can you see anything that you are doing differently? Can you create a
minimal example that exposes the problem and that you could share with me?


Note that monitoring disk usage as a measure of whether flow to disk is 
active is not going to work when you have persistent messages because in 
this case messages are always written to disk regardless of flow to disk.


Kind regards,
Lorenz


On 18/10/16 03:10, rammohan ganapavarapu wrote:

Please let me know if you need anything else.

On Oct 17, 2016 11:02 AM, "rammohan ganapavarapu" 
wrote:


Lorenz,


Actually, message sizes vary between ~1 kB and 10 kB.

Thanks,
Ram

On Mon, Oct 17, 2016 at 10:23 AM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:


Lorenz,

Thanks for trying to help. Please find below the answers to your
questions.


Q:What is the type of your virtualhost (Derby, BDB, ...)?

A: Derby (I actually wanted to know your recommendation)

Q: How large are your messages? Do they vary in size or all the same size?

A: Message size is approximately 1k

Q: How many connections/sessions/producers/consumers are connected to
the broker?

A: We are using 3 producers and each has 10 connections.

Q: Are there any consumers active while you are testing?

A: No, we blocked all the consumers.
Q: Do you use transactions?
A: They are transactional, but the ack is done immediately after accepting;
if it fails we push it back to the DL queue.
Q: Are the messages persistent or transient?
A: They are persistent.

Ram

On Mon, Oct 17, 2016 at 1:15 AM, Lorenz Quack 
wrote:


Hello Ram,

This seems curious.
Yes, the idea behind flow to disk is to prevent the broker from running
out of direct memory.
The broker does keep a certain representation of the message in memory
but that should affect heap and not direct memory.

I currently do not understand what is happening here so I raised a JIRA
[1].

Could you provide some more information about your test case so I can
try to reproduce it on my end?
What is the type of your virtualhost (Derby, BDB, ...)?
How large are your messages? Do they vary in size or all the same size?
How many connections/sessions/producers/consumers are connected to the
broker?
Are there any consumers active while you are testing?
Do you use transactions?
Are the messages persistent or transient?

Kind regards,
Lorenz

[1] https://issues.apache.org/jira/browse/QPID-7461



On 14/10/16 19:14, rammohan ganapavarapu wrote:


Hi,

I am confused about flow to disk: when direct memory reaches the
flow-to-disk threshold, does the broker write directly to disk, or does it
keep messages in both memory and disk? I was under the impression that the
flow-to-disk threshold is there to free up direct memory so that the broker
won't crash, isn't it?

So I have 1.5 GB of direct memory and here is my flow-to-disk threshold:

"broker.flowToDiskThreshold":"644245094"  (40% as default)

I am pushing messages, and after 40% of direct memory is used, messages are
being written to disk, as you can see from the disk space going up. But my
question is: when it is writing to disk, shouldn't that free up direct
memory? I see direct memory usage also going up; am I missing anything here?


broker1 | success | rc=0 >>
/data   50G  754M   46G   2% /ebs
Fri Oct 14 17:59:25 UTC 2016
"maximumDirectMemorySize" : 1610612736,
  "usedDirectMemorySize" : 840089280,

broker1 | success | rc=0 >>
/data   50G  761M   46G   2% /ebs
Fri Oct 14 17:59:27 UTC 2016
"maximumDirectMemorySize" : 1610612736,
  "usedDirectMemorySize" : 843497152,

.
.
.
/data   50G  1.3G   46G   3% /ebs
Fri Oct 14 18:09:08 UTC 2016
"maximumDirectMemorySize" : 1610612736,
  "usedDirectMemorySize" : 889035136,


Please help me understand this!

Thanks,
Ram



On Fri, Oct 14, 2016 at 9:22 AM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

So I ran the test a few more times and it is happening every time. I was
monitoring direct memory usage and it looks like it ran out of direct
memory.

"maximumDirectMemorySize" : 2415919104,
  "usedDirectMemorySize" : 2414720896,

Any thoughts guys?

Ram

On Thu, Oct 13, 2016 at 4:37 PM, rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:

Guys,

Not sure what I am doing wrong. I have set heap to 1 GB and direct memory
to 2 GB; after a queue depth of ~150k messages in the queue I am getting the
below error and the broker is getting killed. Any suggestions?

2016-10-13 23:27:41,894 ERROR [IO-/10.1

RE: [Proton-c 0.14.0][Visual Studio 2013] Failing ssl unit test only in Debug mode

2016-10-18 Thread Antoine Chevin
Hello,

We tried to investigate more about the problem today.

From what we understood of the ssl.exe test code:
The test throws an exception in on_transport_error() because the
certificate is wrong. This triggers the destruction of the objects on the
stack.
Apparently, there is memory corruption in the iocpdesc_t structure:
we are calling (selector)->triggered_list_tail->triggered_list_next while
(selector)->triggered_list_tail is null [in triggered_list_add() in
selector.c line 300].
Hence the crash.

Can you help us find more info about the bug? The crash is very low level
and we do not have much experience in the proton-c layer...

Thank you,
Regards,
Antoine



-Original Message-
From: Adel Boutros [mailto:adelbout...@live.com]
Sent: lundi 17 octobre 2016 19:30
To: users@qpid.apache.org
Subject: [Proton-c 0.14.0][Visual Studio 2013] Failing ssl unit test only
in Debug mode

Hello,


We are compiling Proton-c 0.14.0 and its C++ bindings in 2 modes:
RelWithDebInfo and Debug.


For the RelWithDebInfo mode, all tests are green.

For the Debug mode, we have a ssl test failing.


We are using OpenSSL 1.0.2h. SWIG and Cyrus SASL are disabled (not found).


Can you please help us find the issue?


Test output

--

18: ==
18: ERROR: test_ssl_bad_name (__main__.ContainerExampleTest)
18: --
18: Traceback (most recent call last):
18:   File "PATH_TO_PROTON_SOURCE_CODE/examples/cpp/example_test.py", line
375, in test_ssl_bad_name
18: out = self.proc(["ssl", "-a", addr, "-c", self.ssl_certs_dir(),
"-v", "fail"], skip_valgrind=True).wait_exit()
18:   File "PATH_TO_PROTON_SOURCE_CODE/examples/cpp/example_test.py", line
180, in wait_exit
18: self.check_()
18:   File "PATH_TO_PROTON_SOURCE_CODE/examples/cpp/example_test.py", line
164, in check_
18: raise self.error
18: ProcError: ['ssl.exe', '-a', 'amqps://127.0.0.1:12202/examples', '-c',
'PATH_TO_PROTON_SOURCE_CODE\\examples/cpp/ssl_certs', '-v', 'fail'] non-0
exit, code=255
18: 
18: certificate verification failed for host wrong_name_for_server
18:  : The target principal name is incorrect.
18: 



Command

--

build_dir\Debug\examples\cpp\Debug\ssl.exe -c
PATH_TO_PROTON_SOURCE_CODE\\examples/cpp/ssl_certs -v fail


Output

-

certificate verification failed for host wrong_name_for_server
 : The target principal name is incorrect.


Exception in Visual Studio 2013



Unhandled exception at 0x07FB345782B2 (qpid-protond.dll) in ssl.exe:
0xC0000005: Access violation reading location 0x


Stack

--

> qpid-protond.dll!triggered_list_add(pn_selector_t * selector, iocpdesc_t
* iocpd) Line 300 C++
  qpid-protond.dll!pni_events_update(iocpdesc_t * iocpd, int events) Line
324 C++
  qpid-protond.dll!complete_read(read_result_t * result, unsigned long
xfer_count, HRESULT status) Line 666 C++
  qpid-protond.dll!complete(iocp_result_t * result, bool success, unsigned
long num_transferred) Line 868 C++
  qpid-protond.dll!pni_iocp_drain_completions(iocp_t * iocp) Line 888 C++
  qpid-protond.dll!iocp_map_close_all(iocp_t * iocp) Line 1044 C++
  qpid-protond.dll!pni_iocp_finalize(void * obj) Line 1151 C++
  qpid-protond.dll!pn_class_decref(const pn_class_t * clazz, void * object)
Line 98 C++
  qpid-protond.dll!pn_class_free(const pn_class_t * clazz, void * object)
Line 120 C++
  qpid-protond.dll!pn_free(void * object) Line 264 C++
  qpid-protond.dll!pn_io_finalize(void * obj) Line 95 C++
  qpid-protond.dll!pn_class_decref(const pn_class_t * clazz, void * object)
Line 98 C++
  qpid-protond.dll!pn_decref(void * object) Line 254 C++
  qpid-protond.dll!pn_reactor_finalize(pn_reactor_t * reactor) Line 100 C++
  qpid-protond.dll!pn_reactor_finalize_cast(void * object) Line 106 C++
  qpid-protond.dll!pn_class_decref(const pn_class_t * clazz, void * object)
Line 98 C++
  qpid-protond.dll!pn_decref(void * object) Line 254 C++
  qpid-proton-cppd.dll!proton::internal::pn_ptr_base::decref(void * p) Line
32 C++

qpid-proton-cppd.dll!proton::internal::pn_ptr::~pn_ptr()
Line 55 C++
  [External Code]
  qpid-proton-cppd.dll!proton::container_impl::~container_impl() Line 160
C++
  [External Code]
  ssl.exe!hello_world_direct::on_transport_error(proton::transport & t)
Line 134 C++

qpid-proton-cppd.dll!proton::messaging_adapter::on_transport_closed(proton::proton_event
& pe) Line 303 C++

qpid-proton-cppd.dll!proton::proton_event::dispatch(proton::proton_handler
& handler) Line 74 C++
  qpid-proton-cppd.dll!proton::handler_context::dispatch(pn_handler_t *
c_handler, pn_event_t * c_event, pn_event_type_t __formal) Line 74 C++
  qpid-protond.dll!pn_handler_dispatch(pn_handler_t * handler, pn_event_t *
event, pn_event_type_t type) Line 104 C++
  qpid-protond.dll!pn_reactor_process(pn_reactor_

Re: java broker 6.0.2 OOM

2016-10-18 Thread rammohan ganapavarapu
Lorenz,

Thanks for the quick test. We also see it flowing to disk, but direct memory
is not leveling off. We will perform a basic test without our application
and share the results.

Do you have any recommendations on heap and direct memory settings? I was
testing with heap: 768m and direct: 2304m.

Ram

On Tue, Oct 18, 2016 at 7:12 AM, Lorenz Quack 
wrote:

> Hello Ram,
>
> I just tried to reproduce your issue but was not successful.
> I ran a 6.0.2 broker (with default config) and trunk clients.
> I created 30 producers on their own connections and sent 10k persistent
> messages each, every message in its own transaction.
> After direct memory usage hit 634,583,040 B, flow to disk kicked in and
> the direct memory usage leveled off.
> I monitored the log for the QUE-1014 and BRK-1014 messages to verify that
> the broker starts flowing messages to disk and I monitored the direct
> memory usage with jvisualvm.
> I attached my hacked together test client application, the log file and a
> screenshot showing the direct memory usage.
>
> Can you see anything that you are doing differently? Can you create a
> minimal example that exposes the problem and that you could share with me?
>
> Note that monitoring disk usage as a measure of whether flow to disk is
> active is not going to work when you have persistent messages because in
> this case messages are always written to disk regardless of flow to disk.
>
> Kind regards,
> Lorenz
>
>
>
> On 18/10/16 03:10, rammohan ganapavarapu wrote:
>
>> Please let me know if you need anything else.
>>
>> On Oct 17, 2016 11:02 AM, "rammohan ganapavarapu" <
>> rammohanga...@gmail.com>
>> wrote:
>>
>> Lorenz,
>>>
>>>
>>> Actually, message sizes vary between ~1 kB and 10 kB.
>>>
>>> Thanks,
>>> Ram
>>>
>>> On Mon, Oct 17, 2016 at 10:23 AM, rammohan ganapavarapu <
>>> rammohanga...@gmail.com> wrote:
>>>
>>> Lorenz,

 Thanks for trying to help. Please find below the answers to your
 questions.


 Q:What is the type of your virtualhost (Derby, BDB, ...)?

 A: Derby (I actually wanted to know your recommendation)

 Q: How large are your messages? Do they vary in size or all the same
 size?

 A: Message size is approximately 1k

 Q: How many connections/sessions/producers/consumers are connected to
 the broker?

 A: We are using 3 producers and each has 10 connections.

 Q: Are there any consumers active while you are testing?

 A: No, we blocked all the consumers.
 Q: Do you use transactions?
 A: They are transactional, but the ack is done immediately after accepting;
 if it fails we push it back to the DL queue.
 Q: Are the messages persistent or transient?
 A: They are persistent.

 Ram

 On Mon, Oct 17, 2016 at 1:15 AM, Lorenz Quack 
 wrote:

 Hello Ram,
>
> This seems curious.
> Yes, the idea behind flow to disk is to prevent the broker from running
> out of direct memory.
> The broker does keep a certain representation of the message in memory
> but that should affect heap and not direct memory.
>
> I currently do not understand what is happening here so I raised a JIRA
> [1].
>
> Could you provide some more information about your test case so I can
> try to reproduce it on my end?
> What is the type of your virtualhost (Derby, BDB, ...)?
> How large are your messages? Do they vary in size or all the same size?
> How many connections/sessions/producers/consumers are connected to the
> broker?
> Are there any consumers active while you are testing?
> Do you use transactions?
> Are the messages persistent or transient?
>
> Kind regards,
> Lorenz
>
> [1] https://issues.apache.org/jira/browse/QPID-7461
>
>
>
> On 14/10/16 19:14, rammohan ganapavarapu wrote:
>
> Hi,
>>
>> I am confused about flow to disk: when direct memory reaches the
>> flow-to-disk threshold, does the broker write directly to disk, or does
>> it keep messages in both memory and disk? I was under the impression
>> that the flow-to-disk threshold is there to free up direct memory so
>> that the broker won't crash, isn't it?
>>
>> So I have 1.5 GB of direct memory and here is my flow-to-disk threshold:
>>
>> "broker.flowToDiskThreshold":"644245094"  (40% as default)
>>
>> I am pushing messages, and after 40% of direct memory is used, messages
>> are being written to disk, as you can see from the disk space going up.
>> But my question is: when it is writing to disk, shouldn't that free up
>> direct memory? I see direct memory usage also going up; am I missing
>> anything here?
>>
>>
>> broker1 | success | rc=0 >>
>> /data   50G  754M   46G   2% /ebs
>> Fri Oct 14 17:59:25 UTC 2016
>> "maximumDirectMemorySize" : 1610612736,
>>   "usedDirectMemorySize" : 840089280,
>>
>> broker1 | success | rc=0 >>
>> /data   50G  761M   46G   2% /ebs

Re: Qpid broker 6.0.4 performance issues

2016-10-18 Thread Ramayan Tiwari
Thanks so much Rob, I will test the patch against trunk and will update you
with the outcome.

- Ramayan

On Tue, Oct 18, 2016 at 2:37 AM, Rob Godfrey 
wrote:

> On 17 October 2016 at 21:50, Rob Godfrey  wrote:
>
> >
> >
> > On 17 October 2016 at 21:24, Ramayan Tiwari 
> > wrote:
> >
> >> Hi Rob,
> >>
> >> We are certainly interested in testing the "multi queue consumers"
> >> behavior
> >> with your patch in the new broker. We would like to know:
> >>
> >> 1. What will be the scope of the changes, client or broker or both? We are
> >> currently running the 0.16 client, so we would like to make sure that we
> >> will be able to use these changes with the 0.16 client.
> >>
> >>
> > There's no change to the client.  I can't remember what was in the 0.16
> > client... the only issue would be if there are any bugs in the parsing of
> > address arguments.  I can try to test that out tomorrow.
> >
>
>
> OK - with a little bit of care to get round the address parsing issues in
> the 0.16 client... I think we can get this to work.  I've created the
> following JIRA:
>
> https://issues.apache.org/jira/browse/QPID-7462
>
> and attached to it are a patch which applies against trunk, and a separate
> patch which applies against the 6.0.x branch (
> https://svn.apache.org/repos/asf/qpid/java/branches/6.0.x - this is 6.0.4
> plus a few other fixes which we will soon be releasing as 6.0.5)
>
> To create a consumer which uses this feature (and multi queue consumption)
> for the 0.16 client you need to use something like the following as the
> address:
>
> queue_01 ; {node : { type : queue }, link : { x-subscribes : {
> arguments : { x-multiqueue : [ queue_01, queue_02, queue_03 ],
> x-pull-only : true } } } }
>
>
> Note that the initial queue_01 has to be the name of an actual queue on
> the virtual host, but otherwise it is not actually used (if you were
> using a 0.32 or later client you could just use '' here).  The actual
> queues that are consumed from are those in the list value associated with
> x-multiqueue.  For my testing I created a list with 3000 queues here
> and this worked fine.
>
> Let me know if you have any questions / issues,
>
> Hope this helps,
> Rob
>
>
> >
> >
> >> 2. My understanding is that the "pull vs push" change is only with
> >> respect to the broker and it does not change our architecture, where we
> >> use a MessageListener to receive messages asynchronously.
> >>
> >
> > Exactly - this is only a change within the internal broker threading
> > model.  The external behaviour of the broker remains essentially
> unchanged.
> >
> >
> >>
> >> 3. Once the I/O refactoring is complete, we would be able to go back to
> >> using standard JMS consumers (Destination). What is the timeline and
> >> broker release version for the completion of this work?
> >>
> >
> > You might wish to continue to use the "multi queue" model, depending on
> > your actual use case, but yeah once the I/O work is complete I would hope
> > that you could use the thousands of consumers model should you wish.  We
> > don't have a schedule for the next phase of I/O rework right now - about
> > all I can say is that it is unlikely to be complete this year.  I'd need
> to
> > talk with Keith (who is currently on vacation) as to when we think we may
> > be able to schedule it.
> >
> >
> >>
> >> Let me know once you have integrated the patch and I will re-run our
> >> performance tests to validate it.
> >>
> >>
> > I'll make a patch for 6.0.x presently (I've been working on a change
> > against trunk - the patch will probably have to change a bit to apply to
> > 6.0.x).
> >
> > Cheers,
> > Rob
> >
> > Thanks
> >> Ramayan
> >>
> >> On Sun, Oct 16, 2016 at 3:30 PM, Rob Godfrey 
> >> wrote:
> >>
> >> > OK - so having pondered / hacked around a bit this weekend, I think to
> >> get
> >> > decent performance from the IO model in 6.0 for your use case we're
> >> going
> >> > to have to change things around a bit.
> >> >
> >> > Basically 6.0 is an intermediate step on our IO / threading model
> >> journey.
> >> > In earlier versions we used 2 threads per connection for IO (one read,
> >> one
> >> > write) and then extra threads from a pool to "push" messages from
> >> queues to
> >> > connections.
> >> >
> >> > In 6.0 we move to using a pool for the IO threads, and also stopped
> >> queues
> >> > from "pushing" to connections while the IO threads were acting on the
> >> > connection.  It's this latter fact which is screwing up performance
> for
> >> > your use case here because what happens is that on each network read
> we
> >> > tell each consumer to stop accepting pushes from the queue until the
> IO
> >> > interaction has completed.  This is causing lots of loops over your
> 3000
> >> > consumers on each session, which is eating up a lot of CPU on every
> >> network
> >> > interaction.
> >> >
> >> > In the final version of our IO refactoring we want to remove the
> >> "pushing"
> >> > from the queue, and instead have the consumers "pull" - so that the
> only
> >> > threads 

Re: java broker 6.0.2 OOM

2016-10-18 Thread alexk
Hi folks, 
Here is the config: config.json
Here is the program: qpid.java
Here is the log of the qpid java broker: qpid.log
Here is the java-broker-console output: qpid_broker_console.log


I'm trying to experiment with QPID_JAVA_MEM="-Xmx4G
-XX:MaxDirectMemorySize=1500m" and was able to get a heap OOM.

Alex







Proton Python on_sendable behavior

2016-10-18 Thread Justin Ross
https://gist.github.com/ssorj/1ccf4d1499563722bc419f1e1fac11bf

In this example, there is still ample credit on the link after the last
on_sendable() is printed, but on_sendable is never fired again.  Is that
expected behavior?

It appears that on_link_flow is only fired when link credit is updated, and
on_link_flow is the source of all on_sendable events.

https://github.com/apache/qpid-proton/blob/master/proton-c/bindings/python/proton/handlers.py#L36


Re: java broker 6.0.2 OOM

2016-10-18 Thread rammohan ganapavarapu
Lorenz,

Alex and I work together; please find the config and logs for the tests he
performed.

Thanks,
Ram

On Tue, Oct 18, 2016 at 2:04 PM, alexk  wrote:

> Hi folks,
> Here is the config: config.json
> Here is the program: qpid.java
> Here is the log of the qpid java broker: qpid.log
> Here is the java-broker-console output: qpid_broker_console.log
>
>
> I'm trying to experiment with QPID_JAVA_MEM="-Xmx4G
> -XX:MaxDirectMemorySize=1500m" and was able to get a heap OOM.
>
> Alex
>
>
>
>
>
>


Re: java broker 6.0.2 OOM

2016-10-18 Thread alexk
The Java broker sustained longer with [Broker] BRK-1011 : Maximum Memory : Heap :
4,225,236,992 bytes, Direct : 209,715,200 bytes.






Re: Proton Python on_sendable behavior

2016-10-18 Thread Chuck Rolke


- Original Message -
> From: "Justin Ross" 
> To: users@qpid.apache.org
> Sent: Tuesday, October 18, 2016 5:13:27 PM
> Subject: Proton Python on_sendable behavior
> 
> https://gist.github.com/ssorj/1ccf4d1499563722bc419f1e1fac11bf
> 
> In this example, there is still ample credit on the link after the last
> on_sendable() is printed, but on_sendable is never fired again.  Is that
> expected behavior?
> 
> It appears that on_link_flow is only fired when link credit is updated, and
> on_link_flow is the source of all on_sendable events.
> 
> https://github.com/apache/qpid-proton/blob/master/proton-c/bindings/python/proton/handlers.py#L36
> 

When I run this code all messages go to the broker in one frame and the flow 
and dispositions come back in one frame:

 Frame 46  127.0.0.1:60006  -> 127.0.0.1:5672   ->  transfer [0,0] (0..9)
 Frame 47  127.0.0.1:60006 <-  127.0.0.1:5672   <-  flow [0,0] (10,500), disposition [0] (receiver 0-9)

You have credit and just got some on_sendable callbacks. What's wrong with what 
you see?
