Addresses 1088
>
> Connections 1690
>
> Presettled Count 0
>
> Dropped Presettled Count 0
>
> Accepted Count 106722455
>
> Rejected Count 0
>
> Released Count
We had this issue with the Qpid C++ broker; it was related to vectors used in
long-running objects (queues, etc.). C++11 introduced shrink_to_fit(), and we
patched the broker at that time to avoid those memory issues.
On Fri, 15 Dec 2023 at 18:21 Ekta Awasthi
wrote:
> Hello Tod,
>
> while running the
I've managed to build and get all those working on RHEL8 & RHEL9 some time
ago.
Let me search for those files here to check what I did, and I'll bring you
some info on how to proceed.
On Sat, Apr 15, 2023 at 3:41 PM Jiri Daněk wrote:
> I created a Dockerfile/Containerfile that compiles qpid-cpp
MacOS does not have pthread_condattr_setclock, Mach kernel has monotonic
clock with different api sets, look at
https://stackoverflow.com/questions/11680461/monotonic-clock-on-osx
https://chromium.googlesource.com/chromium/src/base/+/master/synchronization/condition_variable_posix.cc
The easy
I used the C++ broker for about a decade; it was able to send/receive 85
messages/second (average 500 bytes, with the lowest latency possible, near a
few milliseconds under high load) on an 8-core (16-thread), 64 GB RAM machine
with SCSI 15k rpm disks, using non-persistent queues and AMQP 0-10. These
performance we are
I've built it. I can share it with you.
On Thu, Apr 1, 2021 at 11:43 AM fostercm2 wrote:
> Hello, I just had a question, hopefully someone could answer. Are there any
> plans to publish rpms to EPEL 8 for qpid-cpp? It looks like the
> qpid-proton packages are available, but not qpid-cpp. Thanks! Sent
> at by using two separate iperf3 sender/receiver pairs, pointing in
> opposite directions.)
>
> I think we were not able to saturate the 40 Gbit link with just one iperf3
> sender/receiver pair because the receiver went to 100% CPU.
>
>
> On Mon, Mar 29, 2021 at 12:
1000 req/s is SOOO slooow . . .
The qpidd C++ broker was able to do 800,000 msg/s in / 800,000 msg/s out on a
12-core Xeon E5690, 32 GB RAM, 2x 10GbE LAN, RHEL 6.x.
The test was run in 2011; current HW should be at least 2 to 3 times
This is a well-known issue... we ran the broker for almost 8 years, and it
never releases cache back to the OS.
It's related to the vector<> (C++) containers used in queue management
structures; they never release RAM back to the OS.
C++11 and beyond implemented the shrink_to_fit() call to release memory back.
We've managed to do ~78 msg/s on qpid-1.36 with 3 qpid broker instances
on an Intel X5670, 64 GB RAM, 4x 1 Gb LAN, with 16 clients flooding and 16
clients receiving (~48,000 msg/s in each direction), in a test run in 2011.
I'm pretty sure the C++ broker is able to do a bit more than your numbers in
Ah, it's using the Artemis broker... you should look at the Artemis list instead.
On Mon, Jun 22, 2020 at 3:01 PM Gordon Sim wrote:
> On 22/06/2020 6:49 pm, Virgilio Fornazin wrote:
> > show source code of qpid-stat tools, it does what you want
>
> Not really. That uses QMF which A
show source code of qpid-stat tools, it does what you want
On Mon, Jun 22, 2020 at 8:59 AM Gordon Sim wrote:
> On 22/06/2020 11:59 am, mohank wrote:
> > Client : QPID C++
> > Broker : ActiveMQ Artemis
> >
> > Is there any possibility to check whether *topic address*/*queue* has any
> >
are again?
>
> Ram
>
> On Sat, May 30, 2020 at 3:50 AM Virgilio Fornazin <
> virgilioforna...@gmail.com> wrote:
>
> > I've built by myself for rhel6, 7, 8 ...
> >
> > https://www.dropbox.com/s/kawv8dfrf3ez4x8/qpid136rhel6.tar?dl=0
> >
> >
I've built by myself for rhel6, 7, 8 ...
https://www.dropbox.com/s/kawv8dfrf3ez4x8/qpid136rhel6.tar?dl=0
On Sat, May 30, 2020 at 1:18 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:
> Hi,
>
> I am looking for the qpid-cpp-server 1.36 and dependent rpms for
> CentOS6/RHEL6, can some
Sure it has; I have this code somewhere. Let me take a look and I'll post it
here.
On Wed, May 27, 2020 at 2:17 AM mohank
wrote:
> >> 3) is there any work around to delete the temp queue programmatically?
>
> >Not sure I understand the question. The temp queue would be deleted by
> >closing the
RHEL is pushing Dispatcher + Broker-J (Artemis based) as the new A-MQ
solution.
With containers and other new things happening, that could be the new way.
But the C++ broker is still the fastest and best broker around, for me.
On Fri, Mar 6, 2020 at 1:33 PM Robbie Gemmell
wrote:
> The C++ broker sees
Use broker exchanges for that.
amq.topic is fair enough for most situations.
There was an XML exchange, if I remember correctly. Not sure about that.
On Fri, 10 Jan 2020 at 16:01 tomt wrote:
> Fair enough. I was hoping to take advantage of the AMQP type system and
> message property filtering
You probably need msgpack; protobuf (and others) are also what you need to
look at.
Also, think of AMQP as a 'low-level message transport layer' and msgpack (or
another) as a 'low-level message codec layer', and develop your app without
knowing how things are done 'under the hood'.
On Fri, Jan 10, 2020
I've used RHEL MRG (Qpid) for the last 8 years, and what I can say is that
the qpidd C++ broker never releases memory back to the OS.
I've found that the queue code uses std::vector in some paths and needs to
call vector.shrink_to_fit(), because std::vector (and some other C++ STL
containers) never
You must use the QMF framework to dynamically bind/unbind topics to
exchanges / queues.
On Fri, Aug 26, 2016 at 9:49 AM, lucas wrote:
> I have bound a queue and topic with code:
> "family ; {create:always,node: {type: topic,durable:True,
> x-bindings:[{queue:lucas}]}}";
You should use QMF calls to check it.
But you can also just catch the exception; it's simpler and faster than a
round-trip to the broker.
On Wed, Jan 13, 2016 at 2:07 PM, rat...@web.de wrote:
> Hi,
> I am using the C++ messaging API for QPID to realize a
> remote-procedure-call
We use it here in our business.
It is the AMQP implementation of choice for throughput & latency (we use the
0.18-20 version, planning to migrate to 0.34-5; we're testing to see if there
are any regressions on this side).
Memory footprint depends on how many messages will stay on the queue; if the
queue doesn't support paging, you
I think MinGW should *do the job*.
You can cross-compile the code on Linux or build on a Windows host.
On Wed, Oct 14, 2015 at 7:09 PM, aconway wrote:
> I have spent a fruitless day trying to get the go binding to work on
> windows. Here's the scoop.
>
> cgo (the Go/C
Gordon
What's the qpidd broker version that started supporting paging on queues?
On Tue, Aug 26, 2014 at 5:53 AM, Gordon Sim g...@redhat.com wrote:
On 08/24/2014 10:23 PM, Graham Leggett wrote:
I have a need to configure a very large queue that is able to store about
1 million messages of
Thanks. I'll take a look at that.
On Thu, Aug 28, 2014 at 1:40 PM, Gordon Sim g...@redhat.com wrote:
On 08/28/2014 05:35 PM, Virgilio Fornazin wrote:
What's the qpidd broker version that started supporting paging on queues?
It was first included in the 0.24 release
Also remember to change /etc/security/limits.conf and add an entry like this:
qpidd - nofile 65535
On Fri, Jul 19, 2013 at 2:20 PM, Chuck Rolke cro...@redhat.com wrote:
You may have too many connections to the broker. Try:
qpid-stat -b localhost:5672 -c
To see your
You raised an interesting question here, Connor: aren't Sender objects
'thread-safe'?
On Thu, Mar 14, 2013 at 3:17 PM, Gordon Sim g...@redhat.com wrote:
On 03/14/2013 05:15 PM, Connor Poske wrote:
Thanks Gordon, I created QPID-4648
We do this using QMF calls... they achieve the same effect as the old C++
client libraries.
On Tue, Dec 11, 2012 at 1:40 PM, Chuck Rolke cro...@redhat.com wrote:
Hi,
In the source tree there is a picture of how the .NET binding fits into
the native C++ messaging world:
We use MRG-M here too, and we sometimes run into trouble with this
confusing flow-to-disk implementation.
What we would like to replace it with is something like a real
'queue-on-disk', with parameters like the current flow-to-disk
implementation has (max messages/bytes in memory, max
We used to create a private reply queue here for each request/reply (and we
also discovered leaks, with Ted Ross, in the 0.10 version at that time).
After that, we changed our code to pre-create a private reply queue for
each connection that performs request/reply operations, using message.
This issue happens because Microsoft doesn't allow you to distribute the
debug libraries of the C runtime.
See
http://social.technet.microsoft.com/Forums/zh/itproxpsp/thread/dbb413ef-3782-4a26-b540-8f3b3269cbe5
Also, VC90.DebugCRT relates to a VS 2008-compiled file, not a VS 2010 one.
On Mon, Apr
Gordon,
If I have the Sender capacity set to 1000 messages, for example, and the queue
has a limit of 800 messages, then after I send 801 messages the session is
closed, after the
exception raised while trying to deliver the message to the queue.
But the 800 messages sent previously are guaranteed to be
To be safe, since my app is multithreaded, the best I can do is serialize
access to the same Sender/Session object on each send() call and check for
errors
to try to handle such a situation.
On Thu, Mar 22, 2012 at 14:32, Gordon Sim g...@redhat.com wrote:
On 03/22/2012 04:58 PM, Virgilio
A little snippet that may help:
void qpid_connection::send(const destination destination,
    const message_interface message_to_send, const bool durable)
{
    context * qpid_context = reinterpret_cast<context *>(m_qpid_context);
We built our own qpid client libraries, since we need to use the same boost
version in the project.
That's a mess, but it's the only option you have.
On Thu, Sep 22, 2011 at 17:47, yuriygeorge yuriygeo...@gmail.com wrote:
I had the same problem, and then I realized I wasn't linking against to