Re: [QPID PROTON PYTHON] How to send messages in loop within the same link or connection.

2024-03-14 Thread Ted Ross
Richard,

If you look in the Python examples directory, you'll find an example called
recurring_timer.py.  This shows how you can set up a recurring timer that
fires every 4 seconds.  When the timer fires, you can send your 30 messages
in a loop.  I would recommend checking the available credit on the sender
(sender.credit, I think) and not sending any more messages than you have
credit for.

You should never write an infinite loop, nor should you ever call a
blocking "sleep" function.

HTH
-Ted

On Thu, Mar 14, 2024 at 10:48 AM Richard Sylvain <
sylvain.rich...@skyguide.ch> wrote:

> Hi all.
> I would like to know how to send messages in an infinite loop within the
> same producer (link), or at least the same connection.
>
> I would like to send 30 messages, then sleep 4 seconds, in a loop.
>
> I am not a specialist in event programming. Could you share a sample,
> please?
>
> As in the Qpid Proton Python examples, I start my Proton container with a
> sender MessagingHandler. I create my AMQP 1.0 connection and my sender
> (link) in the on_start method. Then I send all the messages in the
> on_sendable method.
>
> I observe that the events on_accepted or on_rejected are not called until
> the on_sendable method is completed.
>
> So I don't know how to do an infinite loop!
>
> Thank you for helping me.
>


Re: High Memory consumption with Qpid Dispatch 1.19.0

2023-12-15 Thread Ted Ross
Ekta,

You can get more granular memory-use data by using the "qdstat -m" command
against the router when its memory footprint is larger than you think it
should be.
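
For example (a sketch; point -b at your router's listener address):

    qdstat -b localhost:5672 -m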

I assume you've been using this version for some time.  It might be helpful
to look into what other things changed right before the memory consumption
problem started.

-Ted

On Fri, Dec 15, 2023 at 12:08 PM Ekta Awasthi 
wrote:

> Hi All & Ted,
>
> We are currently encountering elevated memory consumption with qpid
> dispatch version 1.19.0. Although the memory is released upon restarting
> qpid, it gradually accumulates again, surpassing 80% memory usage. QPID in
> our case serves as a routing mechanism, handling traffic from the NLB to
> QPID and then to the broker. While investigating the cause of this
> behavior, we examined memory usage in New Relic (NR); the graph indicates
> that the qdrouterd process is responsible for the memory consumption. We
> are seeking insights into the root cause of this issue and whether it may
> be related to the version (1.19.0). Please find additional information
> below.
>
> *Architecture*:
> NLB --> QPID(2 qpids acting as consumers) --> BROKER (Total of 3 pairs.
> Master/Slave configuration for HA)
>
> *Qpids* were restarted on 12-10-23; as you can see below, the gradual
> increase has been happening ever since.
>
> *Ekta Awasthi*
>


Re: More than configured consumers count

2023-01-16 Thread Ted Ross
On Fri, Jan 13, 2023 at 2:21 PM Ekta Awasthi
 wrote:

> Hello Ted,
>
> I did go through both of the qpid logs and saw the following.
>
> I could see that when the second qpid initiated the extra consumer link on
> 1/9/2023 at 4:24 PM EST, there were 2775 events stating "Auto Link
> deactivated", and then shortly after, about 5550 events stating "Auto Link
> Activated". That is exactly double the links which the second qpid
> activated, and therefore we could see two consumer links coming from qpid-2
> and 1 consumer link from qpid-1 for each queue.
>
> Why it activated/created double the links for all the queues is beyond me,
> and that too in the middle of the day, when there was no maintenance or
> anything else happening.
>

Deactivation happens when:
A) The configuration is removed (not the case here)
B) The connection to the broker is dropped

Check that log to see if the deactivations are a result of a dropped
connection to the broker.

Similarly, the activations occur as a result of a new connection being
established with the broker.

Ask yourself: How many auto-links do you expect to have?  How many
connections do you expect to see auto-links on?  Do you see multiple
activations of the same auto-link on the same connection?
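
For a quick check against the router log, something like this (a sketch;
substitute the actual path of your log file, and note the match strings are
the ones quoted above):

    grep -c 'Auto Link Activated'   /var/log/qdrouterd.log
    grep -c 'Auto Link deactivated' /var/log/qdrouterd.log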


>
> Why, all of a sudden, did it deactivate all the links it was attached to?
> Do we know what triggered that?
>
> Ekta Awasthi,
> Engineer, EAI Operations & Support | Office Depot, Inc.
> 6600 North Military Trail | Boca Raton, FL 33496-2434
> Office: 561-438-3552 | Mobile: 206-966-5577 | ekta.awas...@officedepot.com
>
> 
> From: Ted Ross 
> Sent: Thursday, January 12, 2023 4:32 PM
> To: Ekta Awasthi 
> Cc: users@qpid.apache.org ; Ajit Tathawade
> (CompuCom) ; Nilesh Khokale (Contractor) <
> nilesh.khok...@officedepot.com>; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> Subject: Re: More than configured consumers count
>
>
> Ekta,
>
> Either there's a not-yet discovered (or fixed since 1.16) software defect
> causing this problem that you are seeing or there's some kind of
> misconfiguration in your setup.  If the software requires patching to fix a
> bug, you will need to go through an upgrade to deploy it.
>
> I would suggest looking at the logs from the router(s) that are involved in
> the issues you are seeing to find some clues as to what is going on.
>
> For example, look for the string "Auto Link Activated" in the logs.  This
> is logged every time the router sets up a producer or consumer on a broker
> connection.  You should see exactly one of these for each auto-link you
> have configured whenever a new broker connection is established.  This
> should shed some light on whether there's a configuration issue or a code
> issue.
>
> -Ted
>
>
> On Wed, Jan 11, 2023 at 1:35 PM Ekta Awasthi 
> wrote:
>
> > Hello Ted,
> >
> > I agree with you, we can certainly try upgrading but since this issue is
> > happening *directly* in *PROD* and so *frequently*, we need a quick
> > solution to solve these issues. Any help/suggestions will be greatly
> > appreciated.
> >
> > *Ekta Awasthi*,
> >
> > Engineer, EAI Operations & Support | Office Depot, Inc.
> > 6600 North Military Trail | Boca Raton, FL 33496-2434
> > Office: 561-438-3552 | Mobile: 206-966-5577 |
> ekta.awas...@officedepot.com
> >
> >
> >

Re: More than configured consumers count

2023-01-12 Thread Ted Ross
Ekta,

Either there's a not-yet discovered (or fixed since 1.16) software defect
causing this problem that you are seeing or there's some kind of
misconfiguration in your setup.  If the software requires patching to fix a
bug, you will need to go through an upgrade to deploy it.

I would suggest looking at the logs from the router(s) that are involved in
the issues you are seeing to find some clues as to what is going on.

For example, look for the string "Auto Link Activated" in the logs.  This
is logged every time the router sets up a producer or consumer on a broker
connection.  You should see exactly one of these for each auto-link you
have configured whenever a new broker connection is established.  This
should shed some light on whether there's a configuration issue or a code
issue.

-Ted


On Wed, Jan 11, 2023 at 1:35 PM Ekta Awasthi 
wrote:

> Hello Ted,
>
> I agree with you, we can certainly try upgrading but since this issue is
> happening *directly* in *PROD* and so *frequently*, we need a quick
> solution to solve these issues. Any help/suggestions will be greatly
> appreciated.
>
> *Ekta Awasthi*,
>
> Engineer, EAI Operations & Support | Office Depot, Inc.
> 6600 North Military Trail | Boca Raton, FL 33496-2434
> Office: 561-438-3552 | Mobile: 206-966-5577 | ekta.awas...@officedepot.com
>
>
>
>
> --
> *From:* Ted Ross 
> *Sent:* Friday, January 6, 2023 3:37 PM
> *To:* Ekta Awasthi 
> *Cc:* users@qpid.apache.org ; Ajit Tathawade
> (CompuCom) ; Nilesh Khokale (Contractor) <
> nilesh.khok...@officedepot.com>; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> *Subject:* Re: More than configured consumers count
>
> Ekta,
>
> One thing you should consider is to bring your Qpid Dispatch Router code
> up to the latest version.  You are running 1.16.0 which is pretty old.
> Many bugs have been fixed since 1.16.0.
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 5:03 PM Ekta Awasthi 
> wrote:
>
> Hello Ted,
>
> Regarding "Are the od-broker-1-[ms] connections configured via connectors
> or listeners?" ---> They are configured via connectors only; below is an
> example.
>
> #od-broker-1-m connector
> connector {
>     name: od-broker-1-m
>     host: activemq-test-1.odprivatecloud.com
>     port: 61618
>     role: route-container
>     linkCapacity: 40
>     sslProfile: od-router-2-test-ssl-profile
>     verifyHostName: no  #Since we have cert/hostname mismatches
>     #saslMechanisms: PLAIN
> }
>
> *Ekta Awasthi*,
>
> --
> *From:* Ted Ross 
> *Sent:* Tuesday, January 3, 2023 10:43 AM
> *To:* Ekta Awasthi 
> *Cc:* users@qpid.apache.org ; Ajit Tathawade
> (CompuCom) ; Nilesh Khokale (Contractor) <
> nilesh.khok...@officedepot.com>; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> *Subject:* Re: More than configured consumers count
>
> Another question...
>
> Are the od-broker-1-[ms] connections configured via connectors or
> listeners?
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 10:14 AM Ted Ross  wrote:
>
> Happy New Year Ekta,
>
> Are your micro-services message producers or consumers?
>
> When you see the more-than-expected consumers in Hawtio, get the link
> status from the routers using "qdstat -l".  There _should_ be one link for
> each auto-link as long as the targeted broker is reachable.  This should
> provide some clue as to what is happening.
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 10:01 AM Ekta Awasthi 
> wrote:
>
> Hello Ted,
>
> Thank you for your response.
>
> We are using AutoLinks for our addresses. Below is an example of an
> autolink queue in qdrouterd.conf for one activemq broker pair.

Re: More than configured consumers count

2023-01-06 Thread Ted Ross
Ekta,

One thing you should consider is to bring your Qpid Dispatch Router code up
to the latest version.  You are running 1.16.0 which is pretty old.  Many
bugs have been fixed since 1.16.0.

-Ted


On Tue, Jan 3, 2023 at 5:03 PM Ekta Awasthi 
wrote:

> Hello Ted,
>
> Regarding "Are the od-broker-1-[ms] connections configured via connectors
> or listeners?" ---> They are configured via connectors only; below is an
> example.
>
> #od-broker-1-m connector
> connector {
>     name: od-broker-1-m
>     host: activemq-test-1.odprivatecloud.com
>     port: 61618
>     role: route-container
>     linkCapacity: 40
>     sslProfile: od-router-2-test-ssl-profile
>     verifyHostName: no  #Since we have cert/hostname mismatches
>     #saslMechanisms: PLAIN
> }
>
> *Ekta Awasthi*,
>
> --
> *From:* Ted Ross 
> *Sent:* Tuesday, January 3, 2023 10:43 AM
> *To:* Ekta Awasthi 
> *Cc:* users@qpid.apache.org ; Ajit Tathawade
> (CompuCom) ; Nilesh Khokale (Contractor) <
> nilesh.khok...@officedepot.com>; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> *Subject:* Re: More than configured consumers count
>
> Another question...
>
> Are the od-broker-1-[ms] connections configured via connectors or
> listeners?
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 10:14 AM Ted Ross  wrote:
>
> Happy New Year Ekta,
>
> Are your micro-services message producers or consumers?
>
> When you see the more-than-expected consumers in Hawtio, get the link
> status from the routers using "qdstat -l".  There _should_ be one link for
> each auto-link as long as the targeted broker is reachable.  This should
> provide some clue as to what is happening.
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 10:01 AM Ekta Awasthi 
> wrote:
>
> Hello Ted,
>
> Thank you for your response.
>
> We are using AutoLinks for our addresses. Below is an example of an
> autolink queue in qdrouterd.conf for one activemq broker pair. Thanks
>
> address {
>   prefix: test-queue
>   waypoint: yes
> }
>
> autoLink {
>   connection: od-broker-1-m
>   addr: test-queue
>   dir: in
> }
>
> autoLink {
>   connection: od-broker-1-m
>   addr: test-queue
>   dir: out
> }
>
> autoLink {
>   connection: od-broker-1-s
>   addr: test-queue
>   dir: in
> }
>
> autoLink {
>   connection: od-broker-1-s
>   addr: test-queue
>   dir: out
> }
>
> *Ekta Awasthi*,
>
> --
> *From:* Ted Ross 
> *Sent:* Tuesday, January 3, 2023 8:47 AM
> *To:* users@qpid.apache.org 
> *Cc:* Ajit Tathawade (CompuCom) ; Nilesh
> Khokale (Contractor) ; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> *Subject:* Re: More than configured consumers count
>
> Hi Ekta,
>
> Can you tell us how you have configured the qdrouters to act as
> consumers?  Are you using auto-links or are you using link-routed addresses?
>
> -Ted
>
>
> On Wed, Dec 28, 2022 at 11:20 AM Ekta Awasthi
>  wrote:
>
> Hello,
>
> Is this the right mailing DL for QPID-related queries?
>
> Just wondering, as I have not heard back; but given it is the holiday
> season, I will sit back and wait patiently for someone to reply.
>
> Thanks In Advance.
> Ekta Awasthi
>
> 
> From: Ekta Awasthi 
> Sent: Thursday, December 22, 2022 1:00 AM
> To: users@qpid.apache.org 
> Cc: Ajit Tathawade (CompuCom) ; Nilesh
> Khokale (Contractor) ; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> Subject: More than configured consumers count
>
> Hi There,
>
> We have a few issues with the qpid-dispatch router, where we are seeing
> multiple consumers getting created randomly by our qpid dispatch routers.
> We are unable to trace this issue, nor able to replicate it in our lower
> envs. Below is the architecture diagram.
>
> I apologize if this isn't the right place to ask this question. In case
> there is another place to communicate, do let me know, as this is my first
> time reaching out regarding a qpid query. Please provide any suggestions or
> feedback to help us understand this issue better. Thanks
>
> Flow:
> Microservice (Kubernetes) ---> NLB (Load balancer) ---> 2 Qpid-Dispatch
> Routers acting as consumers in front of brokers (1.16 version) ---> 4
> Broker pairs of activemq (2.18 version) masters and slaves (independent
> pairs)
>
> Problem statement # 1
> We are seeing more than the configured number of consumers in our activemq
> hawtio console, causing messages to sit in delivering count; those messages
> are un-browsable since they are currently being delivered to their
> consumers.

Re: More than configured consumers count

2023-01-03 Thread Ted Ross
Thanks for the info.

There should be no more than (the number of configured "in" auto-links)
consumers on the set of brokers.  Each "in" (inbound from the router's
perspective) link listed in "qdstat -l" represents one consumer.  If your
brokers are reporting more consumers than there are "in" links, it's
possible that there are other (non-qpid-router) consumers attached to your
brokers.
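
One rough way to count them from the router side (a sketch; assumes the
router is reachable at localhost:5672 and matches on the link-direction
column):

    qdstat -b localhost:5672 -l | grep -c ' in '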

-Ted


On Tue, Jan 3, 2023 at 5:03 PM Ekta Awasthi 
wrote:

> Hello Ted,
>
> Regarding "Are the od-broker-1-[ms] connections configured via connectors
> or listeners?" ---> They are configured via connectors only; below is an
> example.
>
> #od-broker-1-m connector
> connector {
>     name: od-broker-1-m
>     host: activemq-test-1.odprivatecloud.com
>     port: 61618
>     role: route-container
>     linkCapacity: 40
>     sslProfile: od-router-2-test-ssl-profile
>     verifyHostName: no  #Since we have cert/hostname mismatches
>     #saslMechanisms: PLAIN
> }
>
> *Ekta Awasthi*,
>
> --
> *From:* Ted Ross 
> *Sent:* Tuesday, January 3, 2023 10:43 AM
> *To:* Ekta Awasthi 
> *Cc:* users@qpid.apache.org ; Ajit Tathawade
> (CompuCom) ; Nilesh Khokale (Contractor) <
> nilesh.khok...@officedepot.com>; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> *Subject:* Re: More than configured consumers count
>
> Another question...
>
> Are the od-broker-1-[ms] connections configured via connectors or
> listeners?
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 10:14 AM Ted Ross  wrote:
>
> Happy New Year Ekta,
>
> Are your micro-services message producers or consumers?
>
> When you see the more-than-expected consumers in Hawtio, get the link
> status from the routers using "qdstat -l".  There _should_ be one link for
> each auto-link as long as the targeted broker is reachable.  This should
> provide some clue as to what is happening.
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 10:01 AM Ekta Awasthi 
> wrote:
>
> Hello Ted,
>
> Thank you for your response.
>
> We are using AutoLinks for our addresses. Below is an example of an
> autolink queue in qdrouterd.conf for one activemq broker pair. Thanks
>
> address {
>   prefix: test-queue
>   waypoint: yes
> }
>
> autoLink {
>   connection: od-broker-1-m
>   addr: test-queue
>   dir: in
> }
>
> autoLink {
>   connection: od-broker-1-m
>   addr: test-queue
>   dir: out
> }
>
> autoLink {
>   connection: od-broker-1-s
>   addr: test-queue
>   dir: in
> }
>
> autoLink {
>   connection: od-broker-1-s
>   addr: test-queue
>   dir: out
> }
>
> *Ekta Awasthi*,
>
> --
> *From:* Ted Ross 
> *Sent:* Tuesday, January 3, 2023 8:47 AM
> *To:* users@qpid.apache.org 
> *Cc:* Ajit Tathawade (CompuCom) ; Nilesh
> Khokale (Contractor) ; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> *Subject:* Re: More than configured consumers count
>
> Hi Ekta,
>
> Can you tell us how you have configured the qdrouters to act as
> consumers?  Are you using auto-links or are you using link-routed addresses?
>
> -Ted
>
>
> On Wed, Dec 28, 2022 at 11:20 AM Ekta Awasthi
>  wrote:
>
> Hello,
>
> Is this the right mailing DL for QPID-related queries?
>
> Just wondering, as I have not heard back; but given it is the holiday
> season, I will sit back and wait patiently for someone to reply.
>
> Thanks In Advance.
> Ekta Awasthi
>
> 
> From: Ekta Awasthi 
> Sent: Thursday, December 22, 2022 1:00 AM
> To: users@qpid.apache.org 
> Cc: Ajit Tathawade (CompuCom) ; Nilesh
> Khokale (Contractor) ; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> Subject: More than configured consumers count
>
> Hi There,
>
> We have a few issues with the qpid-dispatch router, where we are seeing
> multiple consumers getting created randomly by our qpid dispatch routers.
> We are unable to trace this issue, nor able to replicate it in our lower
> envs. Below is the architecture diagram.
>
> I apologize if this isn't the right place to ask this question. In case
> there is another place to communicate, do let me know, as this is my first
> time reaching out regarding a qpid query. Please provide any suggestions or
> feedback to help us understand this issue better. Thanks
>
> Flow:
> Microservice (Kubernetes) ---> NLB (Load balancer) ---> 2 Qpid-Dispatch
> Routers acting as consumers in front of brokers (1.16 version) ---> 4
> Broker pairs of activemq (2.18 version) masters and slaves (independent
> pairs)

Re: More than configured consumers count

2023-01-03 Thread Ted Ross
Another question...

Are the od-broker-1-[ms] connections configured via connectors or listeners?

-Ted


On Tue, Jan 3, 2023 at 10:14 AM Ted Ross  wrote:

> Happy New Year Ekta,
>
> Are your micro-services message producers or consumers?
>
> When you see the more-than-expected consumers in Hawtio, get the link
> status from the routers using "qdstat -l".  There _should_ be one link for
> each auto-link as long as the targeted broker is reachable.  This should
> provide some clue as to what is happening.
>
> -Ted
>
>
> On Tue, Jan 3, 2023 at 10:01 AM Ekta Awasthi 
> wrote:
>
>> Hello Ted,
>>
>> Thank you for your response.
>>
>> We are using AutoLinks for our addresses. Below is an example of an
>> autolink queue in qdrouterd.conf for one activemq broker pair. Thanks
>>
>> address {
>>   prefix: test-queue
>>   waypoint: yes
>> }
>>
>> autoLink {
>>   connection: od-broker-1-m
>>   addr: test-queue
>>   dir: in
>> }
>>
>> autoLink {
>>   connection: od-broker-1-m
>>   addr: test-queue
>>   dir: out
>> }
>>
>> autoLink {
>>   connection: od-broker-1-s
>>   addr: test-queue
>>   dir: in
>> }
>>
>> autoLink {
>>   connection: od-broker-1-s
>>   addr: test-queue
>>   dir: out
>> }
>>
>> *Ekta Awasthi*,
>>
>> --
>> *From:* Ted Ross 
>> *Sent:* Tuesday, January 3, 2023 8:47 AM
>> *To:* users@qpid.apache.org 
>> *Cc:* Ajit Tathawade (CompuCom) ; Nilesh
>> Khokale (Contractor) ; EAIOpsSupport <
>> eaiopssupp...@officedepot.com>
>> *Subject:* Re: More than configured consumers count
>>
>> Hi Ekta,
>>
>> Can you tell us how you have configured the qdrouters to act as
>> consumers?  Are you using auto-links or are you using link-routed addresses?
>>
>> -Ted
>>
>>
>> On Wed, Dec 28, 2022 at 11:20 AM Ekta Awasthi
>>  wrote:
>>
>> Hello,
>>
>> Is this the right mailing DL for QPID-related queries?
>>
>> Just wondering, as I have not heard back; but given it is the holiday
>> season, I will sit back and wait patiently for someone to reply.
>>
>> Thanks In Advance.
>> Ekta Awasthi
>>
>> 
>> From: Ekta Awasthi 
>> Sent: Thursday, December 22, 2022 1:00 AM
>> To: users@qpid.apache.org 
>> Cc: Ajit Tathawade (CompuCom) ; Nilesh
>> Khokale (Contractor) ; EAIOpsSupport <
>> eaiopssupp...@officedepot.com>
>> Subject: More than configured consumers count
>>
>> Hi There,
>>
>> We have a few issues with the qpid-dispatch router, where we are seeing
>> multiple consumers getting created randomly by our qpid dispatch routers.
>> We are unable to trace this issue, nor able to replicate it in our lower
>> envs. Below is the architecture diagram.
>>
>> I apologize if this isn't the right place to ask this question. In case
>> there is another place to communicate, do let me know, as this is my first
>> time reaching out regarding a qpid query. Please provide any suggestions or
>> feedback to help us understand this issue better. Thanks
>>
>> Flow:
>> Microservice (Kubernetes) ---> NLB (Load balancer) ---> 2 Qpid-Dispatch
>> Routers acting as consumers in front of brokers (1.16 version) ---> 4
>> Broker pairs of activemq (2.18 version) masters and slaves (independent
>> pairs)
>>
>> Problem statement # 1
>> We are seeing more than the configured number of consumers in our activemq
>> hawtio console, causing messages to sit in delivering count; those messages
>> are un-browsable since they are currently being delivered to their
>> consumers. Having only two qpid dispatch routers (acting as consumers) in
>> front of our activemq brokers, the count should always remain 2, but at
>> times it goes to 3, sometimes 4. To resolve this issue, we are having to
>> bounce the qpids to release the stuck/bad consumer so that messages can be
>> processed/consumed.
>>
>> Problem statement # 2
>> At times we see messages going to delivering count even when there are
>> only two configured consumers (2 qpids) showing in the activemq hawtio
>> console. We don't know why the messages get stuck in delivering count. To
>> resolve this, we tried restarting the consumer service, but that did not
>> help. Next we tried restarting the brokers; that did not help either, and
>> we noticed that the broker slowly replayed back all the stuck delivering
>> messages, which eventually came back to delivering count after the broker
>> restart.

Re: More than configured consumers count

2023-01-03 Thread Ted Ross
Happy New Year Ekta,

Are your micro-services message producers or consumers?

When you see the more-than-expected consumers in Hawtio, get the link
status from the routers using "qdstat -l".  There _should_ be one link for
each auto-link as long as the targeted broker is reachable.  This should
provide some clue as to what is happening.

-Ted


On Tue, Jan 3, 2023 at 10:01 AM Ekta Awasthi 
wrote:

> Hello Ted,
>
> Thank you for your response.
>
> We are using AutoLinks for our addresses. Below is an example of an
> autolink queue in qdrouterd.conf for one activemq broker pair. Thanks
>
> address {
>   prefix: test-queue
>   waypoint: yes
> }
>
> autoLink {
>   connection: od-broker-1-m
>   addr: test-queue
>   dir: in
> }
>
> autoLink {
>   connection: od-broker-1-m
>   addr: test-queue
>   dir: out
> }
>
> autoLink {
>   connection: od-broker-1-s
>   addr: test-queue
>   dir: in
> }
>
> autoLink {
>   connection: od-broker-1-s
>   addr: test-queue
>   dir: out
> }
>
> *Ekta Awasthi*,
>
> --
> *From:* Ted Ross 
> *Sent:* Tuesday, January 3, 2023 8:47 AM
> *To:* users@qpid.apache.org 
> *Cc:* Ajit Tathawade (CompuCom) ; Nilesh
> Khokale (Contractor) ; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> *Subject:* Re: More than configured consumers count
>
> Hi Ekta,
>
> Can you tell us how you have configured the qdrouters to act as
> consumers?  Are you using auto-links or are you using link-routed addresses?
>
> -Ted
>
>
> On Wed, Dec 28, 2022 at 11:20 AM Ekta Awasthi
>  wrote:
>
> Hello,
>
> Is this the right mailing DL for QPID-related queries?
>
> Just wondering, as I have not heard back; but given it is the holiday
> season, I will sit back and wait patiently for someone to reply.
>
> Thanks In Advance.
> Ekta Awasthi
>
> 
> From: Ekta Awasthi 
> Sent: Thursday, December 22, 2022 1:00 AM
> To: users@qpid.apache.org 
> Cc: Ajit Tathawade (CompuCom) ; Nilesh
> Khokale (Contractor) ; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> Subject: More than configured consumers count
>
> Hi There,
>
> We have a few issues with the qpid-dispatch router, where we are seeing
> multiple consumers getting created randomly by our qpid dispatch routers.
> We are unable to trace this issue, nor able to replicate it in our lower
> envs. Below is the architecture diagram.
>
> I apologize if this isn't the right place to ask this question. In case
> there is another place to communicate, do let me know, as this is my first
> time reaching out regarding a qpid query. Please provide any suggestions or
> feedback to help us understand this issue better. Thanks
>
> Flow:
> Microservice (Kubernetes) ---> NLB (Load balancer) ---> 2 Qpid-Dispatch
> Routers acting as consumers in front of brokers (1.16 version) ---> 4
> Broker pairs of activemq (2.18 version) masters and slaves (independent
> pairs)
>
> Problem statement # 1
> We are seeing more than the configured number of consumers in our activemq
> hawtio console, causing messages to sit in delivering count; those messages
> are un-browsable since they are currently being delivered to their
> consumers. Having only two qpid dispatch routers (acting as consumers) in
> front of our activemq brokers, the count should always remain 2, but at
> times it goes to 3, sometimes 4. To resolve this issue, we are having to
> bounce the qpids to release the stuck/bad consumer so that messages can be
> processed/consumed.
>
> Problem statement # 2
> At times we see messages going to delivering count even when there are
> only two configured consumers (2 qpids) showing in the activemq hawtio
> console. We don't know why the messages get stuck in delivering count. To
> resolve this, we tried restarting the consumer service, but that did not
> help. Next we tried restarting the brokers; that did not help either, and
> we noticed that the broker slowly replayed back all the stuck delivering
> messages, which eventually came back to delivering count after the broker
> restart. To resolve this issue, we are having to bounce the qpids to
> release the messages, and that fixes the issue.
>
> Version of qpid-dispatch router
> qpid-dispatch-router-1.16.0-1.el7.x86_64.rpm
> qpid-dispatch-tools-1.16.0-1.el7.noarch.rpm
>
>
> Ekta Awasthi
>
>
>


Re: More than configured consumers count

2023-01-03 Thread Ted Ross
Hi Ekta,

Can you tell us how you have configured the qdrouters to act as consumers?
Are you using auto-links or are you using link-routed addresses?

-Ted


On Wed, Dec 28, 2022 at 11:20 AM Ekta Awasthi
 wrote:

> Hello,
>
> Is this the right mailing DL for QPID-related queries?
>
> Just wondering, as I have not heard back; but given it is the holiday
> season, I will sit back and wait patiently for someone to reply.
>
> Thanks In Advance.
> Ekta Awasthi
>
> 
> From: Ekta Awasthi 
> Sent: Thursday, December 22, 2022 1:00 AM
> To: users@qpid.apache.org 
> Cc: Ajit Tathawade (CompuCom) ; Nilesh
> Khokale (Contractor) ; EAIOpsSupport <
> eaiopssupp...@officedepot.com>
> Subject: More than configured consumers count
>
> Hi There,
>
> We have a few issues with the qpid-dispatch router, where we are seeing
> multiple consumers getting created randomly by our qpid dispatch routers.
> We are unable to trace this issue, nor able to replicate it in our lower
> envs. Below is the architecture diagram.
>
> I apologize if this isn't the right place to ask this question. In case
> there is another place to communicate, do let me know, as this is my first
> time reaching out regarding a qpid query. Please provide any suggestions or
> feedback to help us understand this issue better. Thanks
>
> Flow:
> Microservice (Kubernetes) ---> NLB (Load balancer) ---> 2 Qpid-Dispatch
> Routers acting as consumers in front of brokers (1.16 version) ---> 4
> Broker pairs of activemq (2.18 version) masters and slaves (independent
> pairs)
>
> Problem statement # 1
> We are seeing more than the configured number of consumers in our activemq
> hawtio console, causing messages to sit in delivering count; those messages
> are un-browsable since they are currently being delivered to their
> consumers. Having only two qpid dispatch routers (acting as consumers) in
> front of our activemq brokers, the count should always remain 2, but at
> times it goes to 3, sometimes 4. To resolve this issue, we are having to
> bounce the qpids to release the stuck/bad consumer so that messages can be
> processed/consumed.
>
> Problem statement # 2
> At times we see messages going to delivering count even when there are
> only two configured consumers (2 qpids) showing in the activemq hawtio
> console. We don't know why the messages get stuck in delivering count. To
> resolve this, we tried restarting the consumer service, but that did not
> help. Next we tried restarting the brokers; that did not help either, and
> we noticed that the broker slowly replayed back all the stuck delivering
> messages, which eventually came back to delivering count after the broker
> restart. To resolve this issue, we are having to bounce the qpids to
> release the messages, and that fixes the issue.
>
> Version of qpid-dispatch router
> qpid-dispatch-router-1.16.0-1.el7.x86_64.rpm
> qpid-dispatch-tools-1.16.0-1.el7.noarch.rpm
>
>
> Ekta Awasthi
>
>


Re: [C++] QPID Connection Closing after 60s

2022-06-06 Thread Ted Ross
On Mon, Jun 6, 2022 at 2:17 PM Arjee Jacob  wrote:

> Does that mean I have to close the socket  after sending each message?
> This is basically running in one device between 2 processes, and messages
> come every 15-60 seconds.
>

No.  I think what Gordon was suggesting is that this has nothing to do with
your clients but is something else in your environment that is opening
connections to your server.  I've seen this happen in different cloud
environments, like MS Azure.


>
> On Mon, 6 Jun 2022 at 13:50, Gordon Sim  wrote:
>
> > On a server, that log suggests that something is opening a socket to
> > the 5672 port, but then not actually transmitting anything over it.
> > E.g. it could be some kind of L4 probe.
> >
> > On Mon, Jun 6, 2022 at 8:35 AM Arjee Jacob 
> > wrote:
> > >
> > > Hey there,
> > >
> > > I am getting a message in my logs that says
> > > "[System] error Connection qpid.5672-No protocol received after 60s,
> > > closing"
> > >
> > > Any idea what this means? How to rectify it?
> > >
> > > Warm Regards,
> > > Jacob
> >
> >
> >
> >
>


Re: Listen performance

2022-06-02 Thread Ted Ross
On Thu, Jun 2, 2022 at 9:06 AM Fredrik Hallenberg 
wrote:

> Hi, my application tends to get a lot of short-lived incoming connections.
> Messages are very short sync messages that can usually be responded to with
> very little processing on the server side. It works fine but I feel
> that the performance is a bit lacking when many connections happen at the
> same time and would like advice on how to improve it. I am using qpid
> proton c++ 0.37 with epoll proactor.
> My current design uses a single thread for the listener but it will
> immediately push incoming messages in on_message to a queue that is handled
> elsewhere. I can see that clients have to wait for a long time (up to a
> minute) until they get a response, but I don't believe there is an issue on
> my end, as I will quickly deal with any client messages as soon as they
> show up. Rather, the issue seems to be that messages are not pushed into
> the queue quickly enough.
> I have noticed that pn_proactor_listen is hardcoded to use a backlog of
> 16 in the default container implementation; this seems low, but I am not
> sure if it is correct to change it.
> Any advice appreciated. My goal is that a client should never need to wait
> more than a few seconds for a response even under reasonably high load,
> maybe a few hundred connections per second.
>

I would try increasing the backlog.  16 seems low to me as well.  Do you
know if any of your clients are re-trying the connection setup because they
overran the server's backlog?

-Ted


[NOTICE/DISCUSS] Dispatch Router status + fork

2022-02-28 Thread Ted Ross
Most, if not all, of the individuals currently developing and maintaining
Qpid Dispatch Router will be redirecting their focus onto a fork of the
software within the Skupper project. The reason for doing this is that we
want to take the codebase in a decidedly different direction in support of
multi-protocol interconnect as opposed to middleware messaging.  We will be
adding features to support these goals and removing other features and
complexities not needed.  These changes will not be backwards compatible
and will remove support for some of the existing messaging use cases.

The Qpid Dispatch 1.19.0 release will proceed as planned and previously
outlined.  After this release, feature development for Dispatch will slow
significantly unless new volunteers join the project to help evolve the
codebase and drive it forward.


-Ted


Re: [VOTE] Release Qpid Dispatch Router 1.17.0 (RC1)

2021-08-18 Thread Ted Ross
Is this vote still open?

+1 from me.

On Wed, Aug 11, 2021 at 4:17 PM Chuck Rolke  wrote:

> +1
>
> Verified checksum
> Tested with proton 0.35 debug build
> Debug build
> Passes self tests
>
>
> On Tue, Aug 10, 2021 at 6:00 PM Ken Giusti  wrote:
>
> > Folks,
> >
> > Please cast your vote on this thread to release RC1 as the official
> > Qpid Dispatch Router version  1.17.0.
> >
> > RC1 of Qpid Dispatch Router version 1.17.0 can be found here:
> > https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.17.0-rc1/
> >
> > To validate the integrity and signature of the tar file please follow
> > the instructions on the Downloads page from the qpid.apache.org
> > website:
> > http://qpid.apache.org/download.html#verify-what-you-download
> >
> > The JIRAs assigned are:
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315321=12349700
> >
> > It is tagged as 1.17.0-rc1.
> >
> > Thanks
> >
> >
> >
>


Re: [Dispatch] what's the purpose of `type_registered` flag in router_node.c

2021-07-16 Thread Ted Ross
On Fri, Jul 16, 2021 at 10:36 AM Ken Giusti  wrote:

> On Fri, Jul 16, 2021 at 9:39 AM Ted Ross  wrote:
>
> > Hi Jiri,
> >
> > In normal operation of the router, there is only one call to 'qd_router',
> > so I'm not sure we really need 'type_registered'.  That code has been
> > there from the very beginning, so I don't think it was added to fix a
> > particular problem.  I think it could safely be removed.
> >
> >
> Hey Ted - do you think there'd be a benefit in terms of reduced code if we
> refactor out the old generic node-based dispatcher layer?
>

Probably.  There's been talk in the past about eliminating the "container"
layer altogether thus removing one level of callbacks.  I think the right
approach would be to refactor the Proton interface to be a protocol
adaptor.  This would have the added benefit of allowing management hooks
for AMQP access via Skupper.

But, like you said, probably not a high priority.


>
> Not a high priority of course, but if there are ways to simplify the codebase
> that would be great.
>
> -K
>
>
>
> > -Ted
> >
> > On Fri, Jul 16, 2021 at 5:56 AM Jiri Daněk  wrote:
> >
> > > Hello folks
> > >
> > > ```
> > > static int type_registered = 0;
> > >
> > > qd_router_t *qd_router(qd_dispatch_t *qd, qd_router_mode_t mode,
> > >                        const char *area, const char *id)
> > > {
> > >     if (!type_registered) {
> > >         type_registered = 1;
> > >         qd_container_register_node_type(qd, &router_node);
> > >     }
> > > ```
> > >
> > >
> >
> https://github.com/apache/qpid-dispatch/blob/d8800269d3a80225794be9914b5fc9f8d6118d04/src/router_node.c#L1623-L1630
> > >
> > > I'd like to understand the motivation behind this code better.
> > >
> > > Some parts of the codebase assume that there may be many qd_dispatch_t
> > > instances around, while others assume there is only one. There is the
> > > `dispatch` global variable in python_embedded.c, there is the global
> > > flag `type_registered` here, but the `qd_dispatch_t` pointer is usually
> > > passed as a function argument (as if there could be more than one).
> > >
> > > Having this check for `type_registered` prevents me from stopping and
> > > freeing one instance of a router, then immediately starting another in
> > > the same process. I want to do this for testing purposes. What happens
> > > now is that the second router I start will not function correctly;
> > > deleting this type_registered logic makes it work right (as far as my
> > > tests so far are concerned).
> > >
> > > It seems to me that it should be perfectly OK to have multiple dispatch
> > > instances in a single process, as long as there is only one at a time.
> > > --
> > > Mit freundlichen Grüßen / Kind regards
> > > Jiri Daněk
> > >
> >
>
>
> --
> -K
>


Re: [Dispatch] what's the purpose of `type_registered` flag in router_node.c

2021-07-16 Thread Ted Ross
Hi Jiri,

In normal operation of the router, there is only one call to 'qd_router',
so I'm not sure we really need 'type_registered'.  That code has been there
from the very beginning so I don't think it was added to fix a particular
problem.  I think it could safely be removed.

-Ted

On Fri, Jul 16, 2021 at 5:56 AM Jiri Daněk  wrote:

> Hello folks
>
> ```
> static int type_registered = 0;
>
> qd_router_t *qd_router(qd_dispatch_t *qd, qd_router_mode_t mode,
>                        const char *area, const char *id)
> {
>     if (!type_registered) {
>         type_registered = 1;
>         qd_container_register_node_type(qd, &router_node);
>     }
> ```
>
> https://github.com/apache/qpid-dispatch/blob/d8800269d3a80225794be9914b5fc9f8d6118d04/src/router_node.c#L1623-L1630
>
> I'd like to understand the motivation behind this code better.
>
> Some parts of the codebase assume that there may be many qd_dispatch_t
> instances around, while others assume there is only one. There is the
> `dispatch` global variable in python_embedded.c, there is the global flag
> `type_registered` here, but the `qd_dispatch_t` pointer is usually passed
> as a function argument (as if there could be more than one).
>
> Having this check for `type_registered` prevents me from stopping and
> freeing one instance of a router, then immediately starting another in the
> same process. I want to do this for testing purposes. What happens now is
> that the second router I start will not function correctly; deleting this
> type_registered logic makes it work right (as far as my tests so far are
> concerned).
>
> It seems to me that it should be perfectly OK to have multiple dispatch
> instances in a single process, as long as there is only one at a time.
> --
> Mit freundlichen Grüßen / Kind regards
> Jiri Daněk
>


Re: [Qpid Dispatch] link's owning_addr

2021-04-06 Thread Ted Ross
On Tue, Apr 6, 2021 at 3:28 PM Ganesh Murthy  wrote:

> On Tue, Apr 6, 2021 at 2:57 PM Ted Ross  wrote:
>
> > Hi Ganesh,
> >
> > Yes, multiple links can share the same owning_addr.  It looks, from a
> > reading of the backtrace, that it might not be the address that's double
> > freed, but it might be the outstanding_deliveries field of the address
> > that's being freed here.
> >
> It does look like the crash occurs due to the double freeing of
> outstanding_deliveries, but grep-ing for outstanding_deliveries, it is
> freed only in that code and nowhere else. That is what leads me to think
> that the address itself is being double-freed.
>
> If multiple link->owning_addr(s) can point to the same addr, should we use
> the qdr_address_t's ref_count field to avoid such crashes? Increase the
> ref_count when an addr is assigned to a link->owning_addr and decrease
> the ref_count when the link->owning_addr is set to zero? This ref_count is
> already used when deleting qdr_address_t objects.
>

qdr_check_addr_CT already looks at the number of rlinks and inlinks for the
address.  I believe that every link that claims the address as its
owning_addr should be listed in one of those two lists.  That should
protect against address-double-frees on link detach.


>
> Thanks.
>
> >
> > -Ted
> >
> > On Tue, Apr 6, 2021 at 12:16 PM Ganesh Murthy 
> wrote:
> >
> > > I have a quick question about qdr_link_t's owning_addr field (
> > >
> > >
> >
> https://github.com/apache/qpid-dispatch/blob/1.15.0/src/router_core/router_core_private.h#L437
> > > )
> > >
> > > Can the owning_addr on many links point to the same address ?
> > >
> > > For example, can the following be true?
> > >
> > > link1->owning_addr = my_addr
> > > link2->owning_addr = my_addr
> > >
> > > The reason I ask is because of the ASAN crash seen here -
> > >
> > >
> >
> https://issues.apache.org/jira/browse/DISPATCH-2019?focusedCommentId=17314238=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17314238
> > > (you will have to "git checkout
> > > ead503c94926f732fba7ddd5ee0826aa3bcd2c79"
> > > for the line numbers on the backtrace to match up).
> > >
> > > Looking at that backtrace, it *seems* like two links got detached and
> > > both links point to the same owning_addr. The first detach call to the
> > > core frees the owning_addr object while the second detach on a different
> > > link with the same owning_addr causes a double free to happen.
> > > The reason I ask is because I have been unable to reproduce this crash,
> > > so I am left to guess that this might be the reason for the crash.
> > >
> > > Thanks.
> > >
> >
>


Re: [Qpid Dispatch] link's owning_addr

2021-04-06 Thread Ted Ross
Hi Ganesh,

Yes, multiple links can share the same owning_addr.  It looks, from a
reading of the backtrace, that it might not be the address that's double
freed, but it might be the outstanding_deliveries field of the address
that's being freed here.

-Ted

On Tue, Apr 6, 2021 at 12:16 PM Ganesh Murthy  wrote:

> I have a quick question about qdr_link_t's owning_addr field (
>
> https://github.com/apache/qpid-dispatch/blob/1.15.0/src/router_core/router_core_private.h#L437
> )
>
> Can the owning_addr on many links point to the same address ?
>
> For example, can the following be true?
>
> link1->owning_addr = my_addr
> link2->owning_addr = my_addr
>
> The reason I ask is because of the ASAN crash seen here -
>
> https://issues.apache.org/jira/browse/DISPATCH-2019?focusedCommentId=17314238=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17314238
> (you will have to "git checkout ead503c94926f732fba7ddd5ee0826aa3bcd2c79"
> for the line numbers on the backtrace to match up).
>
> Looking at that backtrace, it *seems* like two links got detached and both
> links point to the same owning_addr. The first detach call to the core
> frees the owning_addr object while the second detach on a different link
> with the same owning_addr causes a double free to happen.
> The reason I ask is because I have been unable to reproduce this crash so I
> am left to guess that this might be the reason for the crash.
>
> Thanks.
>


Re: Inter-cloud linkrouting (over TLS route)

2021-03-09 Thread Ted Ross
Hi Andre,

I'm not very clear on exactly what you're trying to do, but a few thoughts
and questions come to mind.

Did you consider joining the two zones into a single network using an
inter-router connection instead of a route-container connection?  This
would provide link-route access from both zones to your destinations.
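
On the Zone A side, that could look something like your existing connector
with the role changed (a sketch; TLS/SASL details as in your current config,
and note the far end would then need a listener with role: inter-router as
well):

    connector {
        name: remote-amq-mesh
        host: zone-b.example.azure.address.io
        port: 443
        role: inter-router
        sslProfile: client_tls
        saslMechanisms: EXTERNAL PLAIN
        saslUsername: remote_connection@amq-interconnect-mesh
        saslPassword: *
    }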

Assuming you have good reasons for using a route-container connection
between the zones:

Does your remote-amq-mesh connection successfully connect?

If so, did you create a link-route configuration in _both_ zones?

I don't believe that the router normally issues an "amqp:not-found" error.
Is it possible that the "address:queue" node does not exist on your
destination and that your destination (broker?) is issuing the error?

-Ted

On Tue, Mar 9, 2021 at 2:48 PM André van der Heijden <
vanderheijde...@gmail.com> wrote:

> Dear qpid developers,
>
> I am wondering if it's possible to use linkrouting when you connect a mesh
> in ZoneA (for example in Azure) to a mesh in ZoneB (for example in AWS). We
> have established such a network and connect the two meshes via a connector
> that looks like this:
>
> connector {
>     name: remote-amq-mesh
>     host: zone-b.example.azure.address.io
>     port: 443
>     saslMechanisms: EXTERNAL PLAIN
>
>     sslProfile: client_tls
>     saslUsername: remote_connection@amq-interconnect-mesh
>     saslPassword: *
>
>     role: route-container
>     verifyHostname: false
>     idleTimeoutSeconds: 0
>     messageLoggingComponents: all
> }
>
> When trying to connect to an address:queue that is in the other Zone and is
> exposed via linkrouting, we get the following error:
>
> java.util.concurrent.ExecutionException:
> javax.jms.InvalidDestinationException: Node not found [condition =
> amqp:not-found]
> at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
>
> So at first sight, it seems that the above solution isn't working and we
> cannot use linkrouting in an inter-cloud setup with the meshes connected
> over a TLS route. Do you think this is plausible or do you have other
> experiences?
>
> If the above is indeed impossible, would it be a good idea to create some
> routes that expose AMQPS directly between the different cloud environments,
> so that we can connect the meshes inter-cloud via AMQPS directly? Are there
> any obvious downsides to this approach?
>
> Thanks a lot for your time.
>
> Kind regards,
>
> André van der Heijden
>


Re: Router Throughput vs Buffer Size -- 11 data points (2nd try)

2021-02-16 Thread Ted Ross
Thanks for this Mick,

The original context of the request for this graph was the AMQP
large-message testing.  Was this test run using iperf over the TCP
adaptor?  If so, it's possible that the buffering between the adaptor and
Proton is obscuring the results.

I suspect that in the absence of vectored IO, the optimal Proton
raw-buffer-size is larger than the optimal internal-buffer-size.  This is a
conjecture not based on science.

-Ted

On Tue, Feb 16, 2021 at 1:53 PM Michael Goulish  wrote:

> I keep forgetting that I can't include images.
>
> 
> Here it is.
> 
>
> I'm afraid there's not much of any knee in this curve.
>
> By the way, CPU was almost unaffected.
> Actually it improved slightly, from 208% at 512 byte buffer,
> to 195 for buf=2048  to 191 for buf=3072 and above.
>


Re: Dispatch Router: Wow. Large message test with different buffer sizes

2021-02-13 Thread Ted Ross
On Fri, Feb 12, 2021 at 1:47 PM Michael Goulish  wrote:

> Well, *this* certainly made a difference!
> I tried this test:
>
> *message size:*  200K bytes
> *client-pairs:*  10
> *sender pause between messages:* 10 msec
> *messages per sender:*   10,000
> *credit window:* 1000
>
>
>
>
>   *Results:*
>
>                  router buffer size
>              512 bytes         4K bytes
>   --------------------------------------------
>   CPU           517%             102%
>   Mem           711 MB           59 MB
>   Latency       26.9 *seconds*   2.486 *msec*
>
>
> So with the large messages and our normal buffer size of 1/2 K, the router
> just got overwhelmed. What I recorded was average memory usage, but looking
> at the time sequence I see that its memory kept increasing steadily until
> the end of the test.
>

With the large messages, the credit window is not sufficient to protect the
memory of the router.  I think this test needs to use a limited session
window as well.  This will put back-pressure on the senders much earlier in
the test.  With 200Kbyte messages x 1000 credits x 10 senders, there's a
theoretical maximum of 2Gig of proton buffer memory that can be consumed
before the router core ever moves any data.  It's interesting that in the
4K-buffer case, the router core keeps up with the flow and in the 512-byte
case, it does not.

It appears that increasing the buffer size is a good idea.  I don't think
we've figured out how much the increase should be, however.  We should look
at interim sizes:  1K, 2K, maybe 1.5K and 3K.  We want the smallest buffer
size that gives us acceptable performance.  If throughput, CPU, and memory
use improve sharply with buffer size then level off, let's identify the
"knee of the curve" and see what buffer size that represents.


>
> Messages just sat there waiting to get processed, which is maybe why their
> average latency was *10,000 times longer* than when I used the large
> buffers.
>
> And Nothing Bad Happened in the 4K buffer test. No crash, all messages
> delivered, normal shutdown.
>
> Now I will try a long-duration test to see if it survives that while using
> the large buffers.
>
> If it does survive OK, we need to see what happens with large buffers as
> message size varies from small to large.
>


Re: Dispatch Router: Changing buffer size in buffer.c blows up AMQP.

2021-02-12 Thread Ted Ross
Another thing to consider when increasing the buffer size is how to adjust
the "Q2" limit.  This limits the number of buffers that will be stored in a
streaming message at any one time.  By increasing the buffer size by 8x,
the Q2 limit in bytes is also increased 8x.

This won't have any effect on your small-message test, but it will affect
the router's memory consumption with the transfer of large or streaming
messages.
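
(Concretely: if Q2 allows N buffers per streaming message, the in-flight
limit grows from N x 512 bytes to N x 4096 bytes for the same N.)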

-Ted

On Fri, Feb 12, 2021 at 11:56 AM Ted Ross  wrote:

>
>
> On Fri, Feb 12, 2021 at 2:44 AM Michael Goulish 
> wrote:
>
>>
>>
>> *Can you explain how you are measuring AMQP throughput?  What message
>> sizes are you using?  Credit windows?  How many senders and receivers?
>> Max frame size?*
>>
>> Oops! Good point. Describe the Test!
>>
>> 100 senders, 100 receivers, 100 unique addresses -- each sender sends to
>> one receiver.
>> Each sender is throttled to 100 messages per second (Apparently I Really
>> Like the number 100).
>> And message size is ... wait for it ... 100. (Payload size, so really
>> 139 or something like that.)
>>
>
> Though I'm not sure exactly what's causing the strange things you are
> seeing, this is not a good test to evaluate the effect of the larger buffer
> size.
>
> Since the message sizes are so small, they will be using the same number
> of buffers in both size cases (512 and 4096).  The 100 byte messages fit
> into both buffer sizes.  The router will not place multiple messages into
> the same buffer.  So, with 512 byte buffers, this test leaves ~400 bytes
> unused per buffer.  With 4096 byte buffers, it leaves ~4000 bytes unused
> per buffer.  You are allocating a lot more buffer space for no benefit.
>
> A better test would involve much larger messages, maybe 64K, 128K, or more.
>
>
>>
>> Credit window is 1000.
>>
>> I can't find anything in my router config nor in my C client code about
>> max frame size.   What do I get by default? Or, how can I check that?
>>
>> The way I measured throughput was that -- first -- I noticed that when I
>> made the test go longer, i.e. send 20 million total messages instead of the
>> original 1 million -- it was taking much longer than I expected. So I had
>> each receiver log a message every time its total received messages was
>> divisible by 1000.
>>
>> What I saw was that the first thousand came after 11 seconds (just about
>> as expected because of sender-throttle to 100/sec) but that later thousands
>> became slower. By the time I stopped the test -- after more than 50,000
>> messages per receiver -- each thousand was taking ... well ... look at this
>> very interesting graph that I made of one receiver's behavior.
>>
>> This graph is made by just noting the time when you receive each
>> thousandth message (time since test started) and graphing that -- so we
>> expect to see an upward-sloping straight line whose slope is determined by
>> how long it takes to receive each 1000 messages (should be close to 10
>> seconds).
>>
>> [image: messages_vs_time.jpg]
>>
>> I'm glad I graphed this! This inflection point was a total shock to me.
>> NOTE TO SELF: always graph everything from now on forever.
>>
>> I guess Something Interesting happened at about 28 seconds!
>>
>> Maybe what I need ... is a reading from "qdstat -m" just before and after
>> that inflection point !?!??
>>
>>
>>
>> On Thu, Feb 11, 2021 at 5:37 PM Ted Ross  wrote:
>>
>>> On Thu, Feb 11, 2021 at 2:08 PM Michael Goulish 
>>> wrote:
>>>
>>> > OK, so in the Dispatch Router file src/buffer.c I changed this:
>>> >   size_t BUFFER_SIZE = 512;
>>> > to this:
>>> >   size_t BUFFER_SIZE = 4096;
>>> >
>>> > Gordon tells me that's like 8 times bigger.
>>> >
>>> >
>>> > It makes a terrific difference in throughput in the TCP adapter, and
>>> if you
>>> > limit the sender to the throughput that the receiver can accept, it
>>> can go
>>> > Real Fast with no memory bloat.  ( Like 15 Gbit/sec )
>>> >
>>> > But.
>>> > AMQP throughput is Not Happy with this change.
>>> >
>>> > Some of the managed fields grow rapidly (although not enough to
>>> account for
>>> > total memory growth) -- and throughput gradually drops to a crawl.
>>> >
>>> > Here are the fields that increase dramatically (like 10x or more) --
>>> and
> the ones that don't much change.

Re: Dispatch Router: Changing buffer size in buffer.c blows up AMQP.

2021-02-12 Thread Ted Ross
On Fri, Feb 12, 2021 at 2:44 AM Michael Goulish  wrote:

>
>
> *Can you explain how you are measuring AMQP throughput?  What message
> sizes are you using?  Credit windows?  How many senders and receivers?  Max
> frame*
> * size?*
>
> Oops! Good point. Describe the Test!
>
> 100 senders, 100 receivers, 100 unique addresses -- each sender sends to
> one receiver.
> Each sender is throttled to 100 messages per second (Apparently I Really
> Like the number 100).
> And message size is  wait for it ...   100.(payload size .. so
> really 139 or something like that.)
>

Though I'm not sure exactly what's causing the strange things you are
seeing, this is not a good test to evaluate the effect of the larger buffer
size.

Since the message sizes are so small, they will be using the same number of
buffers in both size cases (512 and 4096).  The 100 byte messages fit into
both buffer sizes.  The router will not place multiple messages into the
same buffer.  So, with 512 byte buffers, this test leaves ~400 bytes unused
per buffer.  With 4096 byte buffers, it leaves ~4000 bytes unused per
buffer.  You are allocating a lot more buffer space for no benefit.

A better test would involve much larger messages, maybe 64K, 128K, or more.
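
To put rough numbers on the waste: a ~139-byte encoded message occupies
exactly one buffer in either case, leaving ~373 of 512 bytes (73%) unused
versus ~3957 of 4096 bytes (97%).  With 100 senders and a credit window of
1000, a worst case of ~100,000 unsettled deliveries needs about 50MB of
buffer space at 512 bytes but about 400MB at 4096 bytes, for the same
payload.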


>
> Credit window is 1000.
>
> I can't find anything in my router config nor in my C client code about
> max frame size.   What do I get by default? Or, how can I check that?
>
> The way I measured throughput was that -- first -- I noticed that when I
> made the test go longer, i.e. send 20 million total messages instead of the
> original 1 million -- it was taking much longer than I expected. So I had
> each receiver log a message every time its total received messages was
> divisible by 1000.
>
> What I saw was that the first thousand came after 11 seconds (just about
> as expected because of sender-throttle to 100/sec) but that later thousands
> became slower. By the time I stopped the test -- after more than 50,000
> messages per receiver -- each thousand was taking ... well ... look at this
> very interesting graph that I made of one receiver's behavior.
>
> This graph is made by just noting the time when you receive each
> thousandth message (time since test started) and graphing that -- so we
> expect to see an upward-sloping straight line whose slope is determined by
> how long it takes to receive each 1000 messages (should be close to 10
> seconds).
>
> [image: messages_vs_time.jpg]
>
> I'm glad I graphed this! This inflection point was a total shock to me.
> NOTE TO SELF: always graph everything from now on forever.
>
> I guess Something Interesting happened at about 28 seconds!
>
> Maybe what I need ... is a reading from "qdstat -m" just before and after
> that inflection point !?!??
>
>
>
> On Thu, Feb 11, 2021 at 5:37 PM Ted Ross  wrote:
>
>> On Thu, Feb 11, 2021 at 2:08 PM Michael Goulish 
>> wrote:
>>
>> > OK, so in the file Dispatch Router file src/buffer.c I changed this:
>> >   size_t BUFFER_SIZE = 512;
>> > to this:
>> >   size_t BUFFER_SIZE = 4096;
>> >
>> > Gordon tells me that's like 8 times bigger.
>> >
>> >
>> > It makes a terrific difference in throughput in the TCP adapter, and if
>> you
>> > limit the sender to the throughput that the receiver can accept, it can
>> go
>> > Real Fast with no memory bloat.  ( Like 15 Gbit/sec )
>> >
>> > But.
>> > AMQP throughput is Not Happy with this change.
>> >
>> > Some of the managed fields grow rapidly (although not enough to account
>> for
>> > total memory growth) -- and throughput gradually drops to a crawl.
>> >
>> > Here are the fields that increase dramatically (like 10x or more) -- and
>> > the ones that don't much change.
>> >
>> >   qd_bitmask_t
>> >   *qd_buffer_t   *
>> >   qd_composed_field_t
>> >   qd_composite_t
>> >   qd_connection_t
>> >   qd_hash_handle_t
>> >   qd_hash_item_t
>> >   qd_iterator_t
>> >   *qd_link_ref_t*
>> >   qd_link_t
>> >   qd_listener_t
>> >   qd_log_entry_t
>> >   qd_management_context_t
>> >   *qd_message_content_t*
>> >   *qd_message_t*
>> >   qd_node_t
>> >   qd_parse_node_t
>> >   qd_parse_tree_t
>> >   qd_parsed_field_t
>> >   qd_session_t
>> >   qd_timer_t
>> >   *qdr_action_t*
>> >   qdr_address_config_t
>> >   qdr_address_t
>> >   qdr_connection_info_t
>> >   qdr_connection_t
>> >   qdr_connection_work_t
>> >   qdr_core_timer_t
>> >   qdr_delivery_cleanup_t
>> >   *qdr_delivery_ref_t*
>> >   *qdr_delivery_t*
>> >   qdr_field_t
>> >   qdr_general_work_t
>> >   qdr_link_ref_t
>> >   qdr_link_t
>> >   qdr_link_work_t
>> >   qdr_query_t
>> >   qdr_terminus_t
>> >
>> >
>> > Does anyone have a great idea about any experiment I could do,
>> > instrumentation I could add, whatever -- that might help to further
>> > diagnose what is going on?
>> >
>>
>> Can you explain how you are measuring AMQP throughput?  What message sizes
>> are you using?  Credit windows?  How many senders and receivers?  Max
>> frame
>> size?
>>
>


Re: Dispatch Router: Changing buffer size in buffer.c blows up AMQP.

2021-02-11 Thread Ted Ross
On Thu, Feb 11, 2021 at 2:08 PM Michael Goulish  wrote:

> OK, so in the file Dispatch Router file src/buffer.c I changed this:
>   size_t BUFFER_SIZE = 512;
> to this:
>   size_t BUFFER_SIZE = 4096;
>
> Gordon tells me that's like 8 times bigger.
>
>
> It makes a terrific difference in throughput in the TCP adapter, and if you
> limit the sender to the throughput that the receiver can accept, it can go
> Real Fast with no memory bloat.  ( Like 15 Gbit/sec )
>
> But.
> AMQP throughput is Not Happy with this change.
>
> Some of the managed fields grow rapidly (although not enough to account for
> total memory growth) -- and throughput gradually drops to a crawl.
>
> Here are the fields that increase dramatically (like 10x or more) -- and
> the ones that don't much change.
>
>   qd_bitmask_t
>   *qd_buffer_t   *
>   qd_composed_field_t
>   qd_composite_t
>   qd_connection_t
>   qd_hash_handle_t
>   qd_hash_item_t
>   qd_iterator_t
>   *qd_link_ref_t*
>   qd_link_t
>   qd_listener_t
>   qd_log_entry_t
>   qd_management_context_t
>   *qd_message_content_t*
>   *qd_message_t*
>   qd_node_t
>   qd_parse_node_t
>   qd_parse_tree_t
>   qd_parsed_field_t
>   qd_session_t
>   qd_timer_t
>   *qdr_action_t*
>   qdr_address_config_t
>   qdr_address_t
>   qdr_connection_info_t
>   qdr_connection_t
>   qdr_connection_work_t
>   qdr_core_timer_t
>   qdr_delivery_cleanup_t
>   *qdr_delivery_ref_t*
>   *qdr_delivery_t*
>   qdr_field_t
>   qdr_general_work_t
>   qdr_link_ref_t
>   qdr_link_t
>   qdr_link_work_t
>   qdr_query_t
>   qdr_terminus_t
>
>
> Does anyone have a great idea about any experiment I could do,
> instrumentation I could add, whatever -- that might help to further
> diagnose what is going on?
>

Can you explain how you are measuring AMQP throughput?  What message sizes
are you using?  Credit windows?  How many senders and receivers?  Max frame
size?


Re: Edge router on the client side

2020-11-20 Thread Ted Ross
On Fri, Nov 20, 2020 at 5:32 AM Petrenko, Vadim 
wrote:

> Hi Qpid developers,
>
> We’re considering this possibility:
>
> Containerize a preconfigured Edge router (possibly together with Artemis)
> and give it to an application team.
>
> The application team will then deploy this container in their environment
> -> the Edge router will connect to a couple Interior routers in our Core
> network -> the client application will connect to the Edge router in the
> container using standard libraries like Qpid-JMS.
>
> We expect this to allow easy scaling up of clients. We also want to attach
> a broker to the edge router in case messages need to be buffered (but this
> is client specific and does not belong to the generic core network setup).
>
> Does this setup look reasonable from a Qpid developer’s point of view?
> Maybe there are some pitfalls to watch out for? Especially exposing
> Interior routers to the world.
>

This is a good use case, and one that I think is appropriate for edge
routers.

If you are going to deploy your interior routers in a public place, I think
you would want strong security (mutual TLS) on those open ports.  Can you
issue certificates to your application teams in the form of secrets so they
can securely connect to your network?
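
As a sketch of what that could look like (attribute names per the 1.x
qdrouterd.conf schema; paths and port are placeholders for your
deployment):

    sslProfile {
        name: inter-router-tls
        caCertFile: /etc/pki/ca.crt           # CA that signs the team certs
        certFile: /etc/pki/router.crt
        privateKeyFile: /etc/pki/router.key
    }

    listener {
        role: inter-router
        host: 0.0.0.0
        port: 55672
        sslProfile: inter-router-tls
        requireSsl: yes
        saslMechanisms: EXTERNAL
        authenticatePeer: yes                 # demand a valid client cert
    }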


>
>
> Thanks!
>
>
>


Re: Connection aborted with amqp:connection:framing-error

2020-11-05 Thread Ted Ross
On Wed, Nov 4, 2020 at 2:00 PM Gordon Sim  wrote:

> On 04/11/2020 6:06 pm, Maheedhar Gunnam wrote:
> > Hello Team,
> >
> > I am running into a weird scenario where I have a python client
> connected as
> > a receiver, killing this client leads to a framing error as below
> >
> > 2020-11-03 11:25:34.249356 -0500 PROTOCOL (trace) [5]:FRAME: 0 ->
> @close(24)
> > [error=@error(29) [condition=:"amqp:connection:framing-error",
> > description="connection aborted"]] (distro/src/server.c:112)
> > 2020-11-03 11:25:34.252048 -0500 PROTOCOL (trace) [5]:RAW: RAW:
> >
> "\x00\x00\x00S\x02\x00\x00\x00\x00S\x18\xd0\x00\x00\x00C\x00\x00\x00\x01\x00S\x1d\xd0\x00\x00\x007\x00\x00\x00\x02\xa3\x1damqp:connection:framing-error\xa1\x12connection
> > aborted" (distro/src/server.c:112)
> >
> > Furthermore, the control flow fails at the assertion `qd_conn->n_senders >= 0`
> > but execution continues regardless.
> >
> > Below is the backtrace on gdb:
> > #1  0x00080126b6bf in __assert (func=,
> file=0x8004eb973
> > "distro/src/container.c", line=714, failedexpr=0x8004ea63c
> > "qd_conn->n_senders >= 0") at
> ../../../../../../src/lib/libc/gen/assert.c:55
> > #2  0x00080052f6ea in qd_container_handle_event
> (container=0x801b19020,
> > event=0x80559fb10, conn=0x8044dd510, qd_conn=0x8044ee090) at
> > distro/src/container.c:714
> > #3  0x00080059cc28 in handle (qd_server=0x801a66240, e=0x80559fb10,
> > pn_conn=0x8044dd510, ctx=0x8044ee090) at distro/src/server.c:1041
> > #4  0x00080059afbd in thread_run (arg=0x801a66240) at
> > distro/src/server.c:1066
> > #5  0x0008005a210a in _thread_init (arg=0x801b790a0) at
> > distro/src/posix/threading.c:172
> > #6  0x000800b6a8c5 in thread_start (curthread=0x801a34e00) at
> > ../../../../../../src/lib/libthr/thread/thr_create.c:299
> >
> >
> > I am not really sure whether this is an indication of an existing
> problem.
> > Can someone please shed some light on this?
> >
> > I am using qpid-c++ broker and python client as the receiver.
>
> The trace above is for the router (not the broker). If the assert is
> failing it certainly suggests a bug. Do you have a reproducer?
>

Also, can you tell us which version of the router you are running?

Thanks,
-Ted


>
>
>
>


Re: Inter-router routing protocol over AMQP?

2020-10-26 Thread Ted Ross
On Mon, Oct 26, 2020 at 12:09 PM Petrenko, Vadim 
wrote:

> Dear Qpid developers,
>
> Interior routers use their specific Routing protocol to exchange routing
> information and discover each other.
> Does this protocol use the regular AMQP as the underlying transport or is
> it a separate protocol running on the same port (like Artemis that can
> listen to CORE, AMQP, MQTT, etc. on the same port)?
>

The inter-router routing protocol runs over the same inter-router
connection that carries the routed traffic.  The protocol runs over AMQP,
using AMQP encoding and is encrypted in the same way as all other
inter-router traffic.


>
> While the documentation already states that: “Connections between the
> interior routers are encrypted (with SSL/TLS)", I also wanted to double
> check whether this encryption applies to the Routing protocol too?
>

Yes.  It is very important that the routing protocol be secured with
encryption and secure cryptographic authentication to prevent unauthorized
"routers" from joining the network.


> And to complete the question: Are there any other (technical) protocols on
> the same port that are possibly not encrypted?
>

No.  The encryption is applied at the connection level.  All interactions
that are multiplexed over those connections are encrypted.

All of this assumes that the inter-router listeners are configured to
require encryption.  It is possible to configure them to run in-the-clear
or with optional encryption.


> Thanks!
>
>
>


Re: C++ QPID Messaging : CPU high issue

2020-09-23 Thread Ted Ross
Another thing to try is to replace "fetch" with "get".  Since you have set
the capacity of the receiver, you don't need fetch to actively poll the
server.
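
A rough sketch of the receive loop using get (acknowledgement and error
handling elided; get() drains the local prefetch that setCapacity() keeps
filled, instead of asking the broker on every call the way fetch() does):

    qpid::messaging::Message request;
    receiver.setCapacity(512);
    while (true) {
        // returns false on timeout with no locally available message
        if (receiver.get(request, qpid::messaging::Duration::SECOND * 5)) {
            // ... process request, then session.acknowledge() ...
        }
    }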

On Wed, Sep 23, 2020 at 10:29 AM Gordon Sim  wrote:

> On 23/09/2020 3:13 pm, umohank wrote:
> > receiver.fetch() waits indefinitely if no duration is specified.
> > As I have specified 5 seconds, it waits for 5 seconds and returns if there
> > is no message in the queue.
> >
> > Note: I am seeing high CPU even when there is no message in the queue to
> > consume.
> >   I have tried receiver.fetch with and without a duration; in both
> > cases I see high CPU.
>
> Get a protocol trace (set env var QPID_LOG_ENABLE=trace=+Protocol) and
> get a pstack thread dump for the client process. That might give you
> more information.
>
>
>
>


Re: C++ QPID Messaging : CPU high issue

2020-09-23 Thread Ted Ross
On Wed, Sep 23, 2020 at 8:10 AM umohank  wrote:

> Hi,
>
>   Client :  C++ QPID Messaging
>   Broker : ActiveMQ Artemis
>
>   I am facing a high CPU issue.
>   I am creating a temp queue and waiting for a message in the
> receiver.fetch() call.
>   I am seeing *20+% CPU in a 32-bit release build* for the code below.
>
>
>   try {
> connection.open();
> Session session = connection.createTransactionalSession();
> Receiver receiver = session.createReceiver("Temp");
> char szControlPlane[512] = { 0x00 };
>
> receiver.setCapacity(512);
> session.sync();
> while (true)
>{
> Message request;
> if (receiver.fetch(request, Duration::SECOND * 5))
> {
> }
>   }
>  }
>
> Is there anything I am missing, or does waiting for a message in
> receiver.fetch take more CPU?
>

I believe that the receiver.fetch function will return immediately with
'false' if the receiver is closed.  It's possible that there was some error
in the attaching of the receiver to the broker.


>
> Thanks,
> mohan
>
>
>
> --
> Sent from:
> http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html
>
>
>


Re: Qpid proton c -- pn_message_send

2020-06-18 Thread Ted Ross
If you look at the examples supplied with Proton, you will see simple
applications that behave as you desire.  Sends are immediate.

Changing your idle timeout is only altering the timing of the bad behavior
of your app.  You need to find a way to incorporate pn_proactor_wait into
your logic.
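
A minimal shape for that logic (a sketch only: connection and link setup,
SSL, and error handling are omitted, and send_pending() stands in for
application code) is to let the proactor own the loop and do the sending
when credit is available:

    #include <proton/proactor.h>
    #include <proton/event.h>
    #include <proton/link.h>
    #include <stdbool.h>

    static void run(pn_proactor_t *proactor) {
        bool finished = false;
        while (!finished) {
            /* blocks, but this is where proton does the actual I/O */
            pn_event_batch_t *batch = pn_proactor_wait(proactor);
            pn_event_t *e;
            while ((e = pn_event_batch_next(batch)) != NULL) {
                switch (pn_event_type(e)) {
                case PN_LINK_FLOW: {
                    /* credit has arrived: the right moment to send */
                    pn_link_t *sender = pn_event_link(e);
                    if (pn_link_is_sender(sender) && pn_link_credit(sender) > 0) {
                        /* send_pending(sender);  -- application-supplied */
                    }
                    break;
                }
                case PN_PROACTOR_INACTIVE:
                    finished = true;   /* nothing left to drive */
                    break;
                default:
                    break;
                }
            }
            pn_proactor_done(proactor, batch);  /* hand the batch back */
        }
    }

Application work that used to live in your own loop can be scheduled with
pn_proactor_set_timeout() or triggered from another thread with
pn_proactor_interrupt(), so the proactor never loses control of the
connection.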

On Thu, Jun 18, 2020 at 1:07 AM Adrian Florea  wrote:

> So, based on this email chain and looking at what the idle timeout is
> intended for, I think that is true ... proton is "woke up" by these
> heartbeats, like you said. Playing with transport timeout values, just
> increased their frequency.
>
> I will look at other possibilities to obtain an "immediate send" effect.
>
>
> On Wed, Jun 17, 2020, 3:26 PM Adrian Florea  wrote:
>
> > Some news.
> >
> > After setting up the transport (SSL and all), I added a call to
> > pn_transport_set_idle_timeout, with 2ms.
> >
> > This provides great improvement, as now I can see my messages going out
> > every few seconds, definitely sooner than 20s.
> >
> > As a side note, I tried to set the timeout to a subsecond value, doesn't
> > work.
> > Said it must be min 1. Setting it to 1 is causing a subsequent
> > error with the connection timeout. The connection timeout becomes 5000
> ...
> > so I ended up setting transport timeout to 2 to achieve a connection
> > timeout of 1.
> >
> > As I said, this provides great improvement but it would be nice if the
> > send can be "flushed" immediately.
> >
> > Adrian
> >
> > On Wed, Jun 17, 2020, 2:40 PM Ted Ross  wrote:
> >
> >> Proactor is a single-threaded, event-driven API for messaging.  It owns
> >> the
> >> main execution loop and uses the pn_proactor_wait() execution to do
> >> background work like sending your message out the connection.
> >>
> >> I don't know what your application looks like, but I assume that you
> have
> >> your own main loop and you don't ever give proactor a chance to run.
> Your
> >> message is probably being sent when a heartbeat frame arrives from
> >> whatever
> >> you're connected to.  This is the PN_TRANSPORT event you are seeing.
> >>
> >> -Ted
> >>
> >> On Wed, Jun 17, 2020 at 3:00 PM Adrian Florea 
> >> wrote:
> >>
> >> > Yeah... forget my last mention. Looking at what pn_proactor_done does,
> >> it
> >> > doesn't make sense to call it when the batch of events is null.
> >> >
> >> > On Wed, Jun 17, 2020, 1:50 PM Adrian Florea 
> >> wrote:
> >> >
> >> > > Yes.
> >> > > I don't call it when the pn_proactor_get() returns null.
> >> > >
> >> > > I should probably call it in this case as well..
> >> > >
> >> > >
> >> > > On Wed, Jun 17, 2020, 1:30 PM Ted Ross  wrote:
> >> > >
> >> > >> On Wed, Jun 17, 2020 at 2:19 PM Adrian Florea <
> florea@gmail.com>
> >> > >> wrote:
> >> > >>
> >> > >> > Hi, thanks.
> >> > >> > I am using the proactor.
> >> > >> > I need a way to clearly send a message out.
> >> > >> > My program has a loop and everytime it loops, I tried this:
> >> > >> >
> >> > >> > - call pn_proactor_wait  --> this ends up blocking my loop, which
> >> is
> >> > not
> >> > >> > good.
> >> > >> >
> >> > >> > - call pn_proactor_get -- this does not block and returns no
> event
> >> > for a
> >> > >> > long while, when suddenly it gets a PN_TRANSPORT event and all my
> >> > >> messages
> >> > >> > are really sent out.
> >> > >> >
> >> > >>
> >> > >> Are you calling pn_proactor_done() after processing the batch of
> >> events
> >> > >> from pn_proactor_get()?
> >> > >>
> >> > >>
> >> > >> >
> >> > >> > Adrian
> >> > >> >
> >> > >> > On Wed, Jun 17, 2020, 12:36 PM Ted Ross 
> wrote:
> >> > >> >
> >> > >> > > Hi Adrian,
> >> > >> > >
> >> > >> > > What is your program doing after it calls pn_message_send?
> That
> >> > >> function
> >> > >> > > queues the message for delivery but the delivery isn't actually
> >> > >> > > transferred until the application yields control back to the
> >> > >> > > Proton reactor (via pn_proactor_wait).

Re: Qpid proton c -- pn_message_send

2020-06-17 Thread Ted Ross
Proactor is a single-threaded, event-driven API for messaging.  It owns the
main execution loop and uses the pn_proactor_wait() call to do background
work like sending your message out on the connection.

I don't know what your application looks like, but I assume that you have
your own main loop and you don't ever give proactor a chance to run.  Your
message is probably being sent when a heartbeat frame arrives from whatever
you're connected to.  This is the PN_TRANSPORT event you are seeing.

-Ted

On Wed, Jun 17, 2020 at 3:00 PM Adrian Florea  wrote:

> Yeah... forget my last mention. Looking at what pn_proactor_done does, it
> doesn't make sense to call it when the batch of events is null.
>
> On Wed, Jun 17, 2020, 1:50 PM Adrian Florea  wrote:
>
> > Yes.
> > I don't call it when the pn_proactor_get() returns null.
> >
> > I should probably call it in this case as well..
> >
> >
> > On Wed, Jun 17, 2020, 1:30 PM Ted Ross  wrote:
> >
> >> On Wed, Jun 17, 2020 at 2:19 PM Adrian Florea 
> >> wrote:
> >>
> >> > Hi, thanks.
> >> > I am using the proactor.
> >> > I need a way to clearly send a message out.
> >> > My program has a loop and everytime it loops, I tried this:
> >> >
> >> > - call pn_proactor_wait  --> this ends up blocking my loop, which is
> not
> >> > good.
> >> >
> >> > - call pn_proactor_get -- this does not block and returns no event
> for a
> >> > long while, when suddenly it gets a PN_TRANSPORT event and all my
> >> messages
> >> > are really sent out.
> >> >
> >>
> >> Are you calling pn_proactor_done() after processing the batch of events
> >> from pn_proactor_get()?
> >>
> >>
> >> >
> >> > Adrian
> >> >
> >> > On Wed, Jun 17, 2020, 12:36 PM Ted Ross  wrote:
> >> >
> >> > > Hi Adrian,
> >> > >
> >> > > What is your program doing after it calls pn_message_send?  That
> >> function
> >> > > queues the message for delivery but the delivery isn't actually
> >> > transferred
> >> > > until the application yields the control back to the Proton reactor
> >> (via
> >> > > pn_proactor_wait).  If the application is doing other processing or
> >> > waiting
> >> > > on a condition or mutex, the delivery won't go out the door
> >> immediately.
> >> > >
> >> > > -Ted
> >> > >
> >> > > On Wed, Jun 17, 2020 at 1:11 PM Adrian Florea  >
> >> > > wrote:
> >> > >
> >> > > > Hi,
> >> > > >
> >> > > > Any idea is welcome on this one.
> >> > > >
> >> > > > I am trying to send messages (via a sender link) at various
> moments
> >> in
> >> > > the
> >> > > > life of a program. I am using pn_message_send.
> >> > > >
> >> > > > I have set the outgoing window size to 1, on the session.
> >> > > >
> >> > > > The current behavior is:
> >> > > >
> >> > > > 1. pn_message_send completes OK
> >> > > > 2. nothing is actually sent
> >> > > > 3. after a while (I guess this is where I miss something) I see
> that
> >> > the
> >> > > > proactor gets an event of type PN_TRANSPORT and I can see all
> >> messages
> >> > > > being really sent.
> >> > > >
> >> > > > Is there a way to achieve a "send immediate" behavior ?
> >> > > >
> >> > > > When a message send is invoked, I need it to really go out.
> >> > > >
> >> > > > many thanks for pointing me in the right direction,
> >> > > >
> >> > > > Adrian
> >> > > >
> >> > >
> >> >
> >>
> >
>


Re: Qpid proton c -- pn_message_send

2020-06-17 Thread Ted Ross
On Wed, Jun 17, 2020 at 2:19 PM Adrian Florea  wrote:

> Hi, thanks.
> I am using the proactor.
> I need a way to clearly send a message out.
> My program has a loop and everytime it loops, I tried this:
>
> - call pn_proactor_wait  --> this ends up blocking my loop, which is not
> good.
>
> - call pn_proactor_get -- this does not block and returns no event for a
> long while, when suddenly it gets a PN_TRANSPORT event and all my messages
> are really sent out.
>

Are you calling pn_proactor_done() after processing the batch of events
from pn_proactor_get()?
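
For reference, a sketch of the non-blocking pattern -- every non-NULL
batch must be handed back, or the proactor cannot make progress:

    pn_event_batch_t *batch = pn_proactor_get(proactor); /* NULL if nothing ready */
    if (batch) {
        pn_event_t *e;
        while ((e = pn_event_batch_next(batch)) != NULL) {
            /* handle_event(e);  -- application-supplied */
        }
        pn_proactor_done(proactor, batch); /* releases the batch so I/O proceeds */
    }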


>
> Adrian
>
> On Wed, Jun 17, 2020, 12:36 PM Ted Ross  wrote:
>
> > Hi Adrian,
> >
> > What is your program doing after it calls pn_message_send?  That function
> > queues the message for delivery but the delivery isn't actually
> transferred
> > until the application yields the control back to the Proton reactor (via
> > pn_proactor_wait).  If the application is doing other processing or
> waiting
> > on a condition or mutex, the delivery won't go out the door immediately.
> >
> > -Ted
> >
> > On Wed, Jun 17, 2020 at 1:11 PM Adrian Florea 
> > wrote:
> >
> > > Hi,
> > >
> > > Any idea is welcome on this one.
> > >
> > > I am trying to send messages (via a sender link) at various moments in
> > the
> > > life of a program. I am using pn_message_send.
> > >
> > > I have set the outgoing window size to 1, on the session.
> > >
> > > The current behavior is:
> > >
> > > 1. pn_message_send completes OK
> > > 2. nothing is actually sent
> > > 3. after a while (I guess this is where I miss something) I see that
> the
> > > proactor gets an event of type PN_TRANSPORT and I can see all messages
> > > being really sent.
> > >
> > > Is there a way to achieve a "send immediate" behavior ?
> > >
> > > When a message send is invoked, I need it to really go out.
> > >
> > > many thanks for pointing me in the right direction,
> > >
> > > Adrian
> > >
> >
>


Re: Qpid proton c -- pn_message_send

2020-06-17 Thread Ted Ross
Hi Adrian,

What is your program doing after it calls pn_message_send?  That function
queues the message for delivery but the delivery isn't actually transferred
until the application yields control back to the Proton reactor (via
pn_proactor_wait).  If the application is doing other processing or waiting
on a condition or mutex, the delivery won't go out the door immediately.

-Ted

On Wed, Jun 17, 2020 at 1:11 PM Adrian Florea  wrote:

> Hi,
>
> Any idea is welcome on this one.
>
> I am trying to send messages (via a sender link) at various moments in the
> life of a program. I am using pn_message_send.
>
> I have set the outgoing window size to 1, on the session.
>
> The current behavior is:
>
> 1. pn_message_send completes OK
> 2. nothing is actually sent
> 3. after a while (I guess this is where I miss something) I see that the
> proactor gets an event of type PN_TRANSPORT and I can see all messages
> being really sent.
>
> Is there a way to achieve a "send immediate" behavior ?
>
> When a message send is invoked, I need it to really go out.
>
> many thanks for pointing me in the right direction,
>
> Adrian
>


Re: multicast without consumers

2019-11-01 Thread Ted Ross
On Fri, Nov 1, 2019 at 10:46 AM Robbie Gemmell 
wrote:

> On Thu, 31 Oct 2019 at 17:31, Ted Ross  wrote:
> >
> > On Thu, Oct 31, 2019 at 6:57 AM Robbie Gemmell  >
> > wrote:
> >
> > > With the below, are you saying that previously the router would always
> > > immediately accept an unsettled message sent to a multicast address,
> > > and then either send it on (pre-settled) or drop it if there was
> > > nowhere to direct it
> >
> >
> > Yes.
> >
> >
> > > but in the latter case, now it would release
> > > it instead?
> >
> >
> > Yes.
> >
> >
> > > If so, is it possible to configure the old behaviour, for
> > > folks that actually wanted that?
> > >
> >
> > Perhaps.  Does anyone want it?  It _would_ prevent the looping behavior.
> >
>
> Who knows, did we ask? I don't think it is at all out of the realms of
> possibility that some do, especially given it is essentially how a
> brokered topic might often work, and how dispatch behaved until now.
>
> My main point is just that it was a fairly large change in behaviour
> given it worked the way it had for some time deliberately (as opposed
> to some bug), and now all of a sudden it isn't possible to get that
> behaviour at all. I don't think it's a very nice change to have made in
> a minor release without providing any option to keep things working as
> they had. As suggested, sending pre-settled is the closest equivalent
> now (though not quite identical), but that unfortunately may require
> updating all your senders too.
>

I agree with you in general about the change in behavior; however, the old
behavior in this case was never desirable.

A little history on this feature:

When we first supported multicast distribution, the default behavior was to
reject all unsettled deliveries to multicast addresses.  This was
technically correct in that multicast was only appropriate for pre-settled
QoS.  In practice, however, this was a pretty serious problem.  Not reading
the fine print, developers would send unsettled deliveries to multicast
addresses and get rejections and not understand why their communication
didn't work.

Because of this, we changed the behavior to accept unsettled deliveries
even if they were not delivered to any consumer.  I believe there was a
detailed discussion of this on this email list.

Now, we have a much more proper way to handle unsettled deliveries on
multicast addresses.  The question that arose was how to handle the
situation when there are no consumers.  We could behave like anycast
addresses and throttle the senders, or we could behave more like a topic
and allow the sending of messages, but release the deliveries if there are
no consumers.

In light of the current conversation, I would propose that the behavior be
made the same as anycast distribution without any configurable options.  I
don't think that releasing deliveries is all that "topic-like" and
accepting them is incorrect and misleading.

What would a broker do if messages were sent to a non-persistent topic for
which there were no subscribers?


>
> >
> > >
> > > Or does it still accept in that case and you just meant that the end
> > > receiver outcomes are now interpreted to decide the response to the
> > > sender? What if the different multicast points return different
> > > outcomes?
> >
> >
> > If there are different dispositions, there is a priority list that
> > determines which of the provided dispositions will be returned to the
> > sender.
> >
> >
> > > What if one recipient doesn't provide an outcome for ages?
> >
> >
> > Then that delivery will not be settled for ages.
> >
> >
> > > Or
> > > goes away without providing one?
> > >
> >
> > Then the outcome from that recipient will be MODIFIED at the time the
> > receiver detaches.
> >
> >
> > >
> > > Or maybe it was both of the above? :)
> > >
> > > In some ways either of these seem like odd changes for a minor release
> > > unless it's possible to toggle the previous long-standing behaviour,
> > > and not say have to switch all your senders to pre-settled to mimic
> > > but still not quite match it (since at least with the unsettled case
> > > before, you'd at least know whether a [first, if multiple hops] router
> > > processed the message at all).
> > >
> > > Robbie
> > >
> > > On Wed, 30 Oct 2019 at 13:54, Ken Giusti  wrote:
> > > >
> > > > On Tue, Oct 29, 2019 at 6:23 PM VERMEULEN Olivier <
> > > > olivier.vermeu

Re: multicast without consumers

2019-10-31 Thread Ted Ross
On Thu, Oct 31, 2019 at 2:07 PM Gordon Sim  wrote:

> On 31/10/2019 5:38 pm, Ted Ross wrote:
> > I think the choice that we would consider making configurable
> > would be:  Release all multicast deliveries for which there is no
> consumer
> > (the present behavior); or Withhold credit for sender links on multicast
> > addresses for which there is no consumer.
>
>
> Isn't that choice better stated as always grant credit for multicast
> addresses or withhold credit when there are no consumers? In the latter
> case you would presumably still release a message that was sent because
> there was a consumer when the link attached but it has gone away now.
>

Yes, but there's no infinite cycling scenario since the router will drain
the credit and stop the sender in this case.  This is the same as the
anycast behavior.


>
>
>
>


Re: multicast without consumers

2019-10-31 Thread Ted Ross
On Thu, Oct 31, 2019 at 1:31 PM Ted Ross  wrote:

>
>
> On Thu, Oct 31, 2019 at 6:57 AM Robbie Gemmell 
> wrote:
>
>> With the below, are you saying that previously the router would always
>> immediately accept an unsettled message sent to a multicast address,
>> and then either send it on (pre-settled) or drop it if there was
>> nowhere to direct it
>
>
> Yes.
>
>
>> but in the latter case, now it would release
>> it instead?
>
>
> Yes.
>
>
>> If so, is it possible to configure the old behaviour, for
>> folks that actually wanted that?
>>
>
> Perhaps.  Does anyone want it?  It _would_ prevent the looping behavior.
>

Actually, I think the choice that we would consider making configurable
would be:  Release all multicast deliveries for which there is no consumer
(the present behavior); or Withhold credit for sender links on multicast
addresses for which there is no consumer.

The accept-and-possibly-drop is not, in my opinion, a desirable behavior.
If someone wants that behavior, they would be better off sending their
deliveries pre-settled.


>
>
>>
>> Or does it still accept in that case and you just meant that the end
>> receiver outcomes are now interpreted to decide the response to the
>> sender? What if the different multicast points return different
>> outcomes?
>
>
> If there are different dispositions, there is a priority list that
> determines which of the provided dispositions will be returned to the
> sender.
>
>
>> What if one recipient doesn't provide an outcome for ages?
>
>
> Then that delivery will not be settled for ages.
>
>
>> Or
>> goes away without providing one?
>>
>
> Then the outcome from that recipient will be MODIFIED at the time the
> receiver detaches.
>
>
>>
>> Or maybe it was both of the above? :)
>>
>> In some ways either of these seem like odd changes for a minor release
>> unless it's possible to toggle the previous long-standing behaviour,
>> and not say have to switch all your senders to pre-settled to mimic
>> but still not quite match it (since at least with the unsettled case
>> before, you'd at least know whether a [first, if multiple hops] router
>> processed the message at all).
>>
>> Robbie
>>
>> On Wed, 30 Oct 2019 at 13:54, Ken Giusti  wrote:
>> >
>> > On Tue, Oct 29, 2019 at 6:23 PM VERMEULEN Olivier <
>> > olivier.vermeu...@murex.com> wrote:
>> >
>> > > Hello,
>> > >
>> > > Yes the waypoint address (in from broker) is using a multicast
>> > > distribution.
>> > > Unfortunately skipping the broker is not an option for us right now.
>> > > Our whole architecture relies on the broker to guarantee that no
>> messages
>> > > will ever be lost...
>> > >
>> >
>> > That won't be the case for multicast actually.  Prior to release 1.9.0
>> of
>> > the router multicast messages would be dropped without notification when
>> > under load.
>> >
>> > This relates to the issue you're experiencing now I believe.  In 1.9.0
>> we
>> > fixed this via
>> >
>> > https://issues.apache.org/jira/browse/DISPATCH-1266
>> >
>> > Previously multicast messages were marked as pre-settled on entry to the
>> > router and an "accepted" status was returned to the sender _before_ the
>> > multicast was forwarded at all.  Since the message was marked
>> pre-settled
>> > the router mesh will be more likely to drop it should congestion occur.
>> > (Note this not the case with unsettled anycast messages - the router
>> will
>> > send a "release" status should the message need be discarded).
>> >
>> > This auto-settle behavior was undesirable for a number of reasons as you
>> > can imagine, so in 1.9.0 we changed the behavior of multicast
>> deliveries:
>> >
>> > *unsettled* messages sent to multicast addresses are no longer
>> pre-settled
>> > by the router.  The router will send back the final acknowledgement
>> > (accepted, release, etc) once all *present* subscribers for the
>> multicast
>> > address return acknowledgements.
>> >
>> > The behavior of pre-settled multicast did not change.
>> >
>> > So you can probably restore the original behavior by reverting back to a
>> > pre-1.9.0 release; however, be aware that even then there's no guarantee the
>> > message won't be dropped.  In fact it's _more_ likely to be dropped (and
>> > signalled as accepted) in pre-1.9.0 releases.

Re: multicast without consumers

2019-10-31 Thread Ted Ross
On Thu, Oct 31, 2019 at 6:57 AM Robbie Gemmell 
wrote:

> With the below, are you saying that previously the router would always
> immediately accept an unsettled message sent to a multicast address,
> and then either send it on (pre-settled) or drop it if there was
> nowhere to direct it


Yes.


> but in the latter case, now it would release
> it instead?


Yes.


> If so, is it possible to configure the old behaviour, for
> folks that actually wanted that?
>

Perhaps.  Does anyone want it?  It _would_ prevent the looping behavior.


>
> Or does it still accept in that case and you just meant that the end
> receiver outcomes are now interpreted to decide the response to the
> sender? What if the different multicast points return different
> outcomes?


If there are different dispositions, there is a priority list that
determines which of the provided dispositions will be returned to the
sender.


> What if one recipient doesn't provide an outcome for ages?


Then that delivery will not be settled for ages.


> Or
> goes away without providing one?
>

Then the outcome from that recipient will be MODIFIED at the time the
receiver detaches.


>
> Or maybe it was both of the above? :)
>
> In some ways either of these seem like odd changes for a minor release
> unless it's possible to toggle the previous long-standing behaviour,
> and not say have to switch all your senders to pre-settled to mimic
> but still not quite match it (since at least with the unsettled case
> before, you'd at least know whether a [first, if multiple hops] router
> processed the message at all).
>
> Robbie
>
> On Wed, 30 Oct 2019 at 13:54, Ken Giusti  wrote:
> >
> > On Tue, Oct 29, 2019 at 6:23 PM VERMEULEN Olivier <
> > olivier.vermeu...@murex.com> wrote:
> >
> > > Hello,
> > >
> > > Yes the waypoint address (in from broker) is using a multicast
> > > distribution.
> > > Unfortunately skipping the broker is not an option for us right now.
> > > Our whole architecture relies on the broker to guarantee that no
> messages
> > > will ever be lost...
> > >
> >
> > That won't be the case for multicast actually.  Prior to release 1.9.0 of
> > the router multicast messages would be dropped without notification when
> > under load.
> >
> > This relates to the issue you're experiencing now I believe.  In 1.9.0 we
> > fixed this via
> >
> > https://issues.apache.org/jira/browse/DISPATCH-1266
> >
> > Previously multicast messages were marked as pre-settled on entry to the
> > router and an "accepted" status was returned to the sender _before_ the
> > multicast was forwarded at all.  Since the message was marked pre-settled
> > the router mesh will be more likely to drop it should congestion occur.
> > (Note this not the case with unsettled anycast messages - the router will
> > send a "release" status should the message need be discarded).
> >
> > This auto-settle behavior was undesirable for a number of reasons as you
> > can imagine, so in 1.9.0 we changed the behavior of multicast deliveries:
> >
> > *unsettled* messages sent to multicast addresses are no longer
> pre-settled
> > by the router.  The router will send back the final acknowledgement
> > (accepted, release, etc) once all *present* subscribers for the multicast
> > address return acknowledgements.
> >
> > The behavior of pre-settled multicast did not change.
> >
> > So you can probably restore the original behavior by reverting back to a
> > pre-1.9.0 release; however, be aware that even then there's no guarantee the
> > message won't be dropped.  In fact it's _more_ likely to be dropped (and
> > signalled as accepted) in pre-1.9.0 releases.
> >
> >
> >
> > > For information we're asking for a quick workaround because we're
> facing
> > > this problem on a client production environment...
> > >
> > > Thanks,
> > > Olivier
> > >
> > > -Original Message-
> > > From: Ken Giusti 
> > > Sent: mardi 29 octobre 2019 18:07
> > > To: users 
> > > Subject: Re: multicast without consumers
> > >
> > > On Tue, Oct 29, 2019 at 11:54 AM jeremy  wrote:
> > >
> > > > Hello Gordon,
> > > >
> > > > We debugged the dispatch router, and fell on the code which releases
> > > > undeliverable messages(
> > > > https://github.com/apache/qpid-dispatch/blob/1.5.0/src/router_core/transfer.c#L869
> > > > ).
> > > >
> > > > Check the comment on line 879. It states that if the distribution is
> > > > multicast, the credit will be replenished after the release. The
> issue
> > > > that introduced this behavior is:
> > > > https://issues.apache.org/jira/browse/DISPATCH-1012
> > > >
> > > >
> > > Is the waypoint address (in from broker) using multicast distribution?
> > >
> > > The router treats multicast addresses like topics - you can publish to a
> > > multicast address (topic) regardless of the presence of consumers.  That's
> > > the reason credit is being replenished even when no consumers are present.
> > >
> > > That's probably what's happening here - broker sends first queued message
> > > to the router, which attempts to send it to the topic.

Re: multicast without consumers

2019-10-30 Thread Ted Ross
On Tue, Oct 29, 2019 at 6:23 PM VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:

> Hello,
>
> Yes the waypoint address (in from broker) is using a multicast
> distribution.
> Unfortunately skipping the broker is not an option for us right now.
> Our whole architecture relies on the broker to guarantee that no messages
> will ever be lost...
> For information we're asking for a quick workaround because we're facing
> this problem on a client production environment...
>

Are you looking for a patch you can apply locally to work around your issue?


>
> Thanks,
> Olivier
>
> -Original Message-
> From: Ken Giusti 
> Sent: mardi 29 octobre 2019 18:07
> To: users 
> Subject: Re: multicast without consumers
>
> On Tue, Oct 29, 2019 at 11:54 AM jeremy  wrote:
>
> > Hello Gordon,
> >
> > We debugged the dispatch router, and fell on the code which releases
> > undeliverable messages(
> > https://github.com/apache/qpid-dispatch/blob/1.5.0/src/router_core/transfer.c#L869
> > ).
> >
> > Check the comment on line 879. It states that if the distribution is
> > multicast, the credit will be replenished after the release. The issue
> > that introduced this behavior is:
> > https://issues.apache.org/jira/browse/DISPATCH-1012
> >
> >
> Is the waypoint address (in from broker) using multicast distribution?
>
> The router treats multicast addresses like topics - you can publish to a
> multicast address (topic) regardless of the presence of consumers.  That's
> the reason credit is being replenished even when no consumers are present.
>
> That's probably what's happening here - broker sends first queued message
> to the router, which attempts to send it to the topic.   Since there are no
> consumers (and the message is sent from the broker as unsettled) the
> router cannot deliver it so it returns the released status.  The released
> status causes the broker to redeliver the message. Repeat.
>
>
>
>
> > In fact, we need an urgent fix/workaround for this. Perhaps there is a
> > quick workaround, awaiting the full analysis of this problem?
> >
> >
> As a workaround, can you avoid sending these multicast messages to the
> broker queue?  In other words send them directly to the router instead of
> using a waypoint?
>
>
>
> > Thanks
> >
> >
> >
> >
> > -
> > Cheers,
> > Jeremy
> > --
> > Sent from:
> > http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html
> >
> >
> >
>
> --
> -K
>


Re: QDR between brokers

2019-10-01 Thread Ted Ross
I believe the issue you are having is related to the fact that you
configured the address as a waypoint.

With a waypoint, there are two effective addresses in the router:
examples(phase0) and examples(phase1).  The phase0 address is used to route
messages from senders (connected to the router) to the brokers.  The phase1
address is used to route messages from the brokers to receivers (connected
to the router).

This configuration will not result in message transfer from one broker to
the other.  Any message placed on a broker (not using the router) will be
delivered via phase1 to consumers connected to the router but not to the
other broker, which would be on phase0.
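
For reference, the pattern being described looks roughly like this in
qdrouterd.conf (a sketch; the connection name is a placeholder and
attribute names follow the 1.x schema):

    address {
        prefix: examples
        waypoint: yes
    }

    autoLink {                  # phase0: senders -> router -> broker
        address: examples
        connection: broker1
        direction: out
    }

    autoLink {                  # phase1: broker -> router -> receivers
        address: examples
        connection: broker1
        direction: in
    }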

I'm not sure I'm completely clear on what it is you are trying to
accomplish and why you expect messages sent directly to one broker to be
transferred to the other broker.  You could set up autolinks specifically
for transferring messages from one broker to the other, but you will run
into looping and duplication issues.  What exactly are you trying to do
here?

-Ted

On Tue, Oct 1, 2019 at 8:20 AM Gavrila, Daniel 
wrote:

> Hi all,
>
>
>
> I've the following configuration   Broker1 <-> QDR <-> Broker2  (archived
> in attachment):
>
>
>
>
>
> The applications are started as daemons on the localhost; the broker version
> is 1.38 and the QPID dispatch router (QDR) version is 1.9.
>
>
>
> Each broker contains the exchange topic "examples" and the QDR contains
> the mobile address "examples" with the distribution set to multicast. QDR
> uses the default "message routing", has a symmetrical configuration
> regarding the brokers; more precisely, it contains two autolinks (in, out) to
> the exchange "examples" of Broker1 and the same for Broker2
>
>
>
> Using the command
>
> spout -b localhost:PortBroker1 examples --content brk1
>
> the message "arrives" in the QDR (i.e. it can be seen with drain -b
> localhost:PortQDR examples), but not in Broker2
>
>
>
> Using instead the command
>
> spout -b localhost:PortQDR examples --content router
>
>
>
> the message arrives on both brokers
>
>
>
>
>
> It seems to me that if a message arrives in the QDR from one broker, the
> message cannot be dispatched further. If the message is sent directly from
> a peer to the QDR, the message is dispatched further.
>
> I would like to configure the QDR in such a way that a message coming from
> a broker is also dispatched further.
>
>
>
>
>
> Many thanks,
>
> Daniel
>
>
> LEONARDO Germany GmbH
> Sitz der Gesellschaft / Registered Office: Neuss
> Registergericht / Register Court: Neuss HRB 17453
> Geschäftsführer / Managing Director: Ulrich Nellen
>


Re: Uneven distribution of messages

2019-07-19 Thread Ted Ross
On Fri, Jul 19, 2019 at 7:50 AM Rabih M  wrote:

> Hello,
>
> Yes this is one of our production use cases, where we have low throughput
> but the processing of the messages can take from seconds to minutes
> (financial calculations). This is why it was bothering us to have an idle
> consumer while another is overloaded.
>

You should consider using link-routes for your consumers.  Link-routes
trade off the load balancing for tight control of credit end-to-end.  If a
consumer is link-routed to a broker, the broker won't deliver a message on
that link until the consumer issues a credit, and then it will only
deliver as many messages as credits were issued.
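
A sketch of that configuration (attribute names per the 1.x schema; the
connection name is a placeholder):

    linkRoute {                 # routes consumers' receiving links to the broker
        prefix: examples
        connection: broker1
        direction: out
    }

    linkRoute {                 # routes producers' sending links to the broker
        prefix: examples
        connection: broker1
        direction: in
    }

With direction: out, each consumer's credit flows end-to-end to the broker,
so the broker dispatches a message only when that particular consumer has
asked for one.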


>
> I understand that for the performance, we need to prefetch messages from
> broker to make them available in the dispatch, but what i did not
> understand is why inside a router we need to assign the prefetched messages
> to a consumer connection and not waiting until the connected consumer
> issues a credit, knowing that from a performance point of view, the costly
> action of "prefetching the messages through IO calls" was made.
> Is it because the complexity of the routing algorithm and the
> communications between the dispatch routers will increase?
>
>
When messages are sent into the router network, they are immediately routed
to a destination.  The routers don't hold messages for later routing.
Also, the synchronization of real-time credit state across a network for
all addresses is not practical or scalable.


> Last point, we did the following test:
> A dispatch router have a connector with a LinkCapacity=250 connected to a
> broker and a listener with a LinkCapacity=1 connected to a consumer.
>

The important link capacity in this scenario is the 250 as it controls the
router pre-fetch.  The consumer's link capacity of 1 is not relevant to
this case.


>
> [image: Untitled Diagram.jpg]
>

I can't see the diagram, but I think I get the idea.


>
>
> 1- the router 1 prefetches 250 message from the broker
> 2- the consumer issues a credit
> 3- the consumer receives a message from the router but does not acknowledge
> 4- the consumer issues another credit
> 5- the consumer receives a message from the router but does not
> acknowledge again
> Steps 4 and 5 can be repeated until the 250 msgs are all transferred to
> the consumer
>

This is consistent with there being only one consumer for the address on
the network at the time the broker sent the 250 messages.


>
> Is this the expected behavior? Shouldn't consumer 1 have to acknowledge
> before it can receive another message, given that the link capacity of the
> listener is 1?
>

Best practice for acknowledgement is for the consumer to acknowledge
(settle) immediately after finishing the processing of the message (i.e.
once that message is no longer consuming memory or compute resources on the
host).  This causes the settlement state of deliveries to be directly
related to consumer resources.  Again, the link capacity of 1 is not having
any effect on the behavior of this scenario.


>
> Thanks for your explanations and help,
>

Am I to understand that your case is this?  You have a distributed work
queue in which the time-to-process is highly variable.  Some messages are
processed quickly and others take much longer.  You don't want to incur the
longer latency on messages that can be handled quickly if there are many
more fast messages than slow messages.

Is it possible to know beforehand which messages are going to take long?
Could you put these on a different queue with a different address?


> Best regards,
> Rabih
>
>
> On Wed, Jul 17, 2019 at 6:46 PM Ted Ross  wrote:
>
>>
>>
>> On Wed, Jul 17, 2019 at 12:00 PM Rabih M  wrote:
>>
>>> Hello,
>>>
>>> We tested with LinkCapacity equal to 1 on the "normal" listener with
>>> debug level trace+, here are our findings:
>>> Our Cluster:
>>> [image: Diagram.jpg]
>>> We are using the broker-j; the consumers are connected to the dispatch
>>> routers before we start sending.
>>>
>>> For use case 1:
>>> 1- the producer sends a message.
>>> 2- Consumer 1 issues one credit, receives the message without
>>> acknowledging.
>>> 3- the producer sends another message.
>>> 4- Consumer 2 in auto-ack mode issues one credit and receives the
>>> message.
>>> 5- we repeated steps 3 and 4 ten times.
>>> 6- Consumer 1 acknowledges.
>>> The results were correct: all the messages were correctly distributed to
>>> the idle consumer.
>>>
>>> For use case 2:
>>> 1- the producer sends 10 messages while no credits were issued yet by the
>>> consumers.
>>> 2- Consumer 1 issues one credit, receives a message without acknowledging.

Re: Uneven distribution of messages

2019-07-17 Thread Ted Ross
On Wed, Jul 17, 2019 at 12:00 PM Rabih M  wrote:

> Hello,
>
> We tested with LinkCapacity equal to 1 on the "normal" listener with debug
> level trace+, here are our findings:
> Our Cluster:
> [image: Diagram.jpg]
> We are using the broker-j; the consumers are connected to the dispatch
> routers before we start sending.
>
> For use case 1:
> 1- the producer sends a message.
> 2- Consumer 1 issues one credit, receives the message without
> acknowledging.
> 3- the producer sends another message.
> 4- Consumer 2 in auto-ack mode issues one credit and receives the message.
> 5- we repeated steps 3 and 4 ten times.
> 6- Consumer 1 acknowledges.
> The results were correct: all the messages were correctly distributed to
> the idle consumer.
>
> For use case 2:
> 1- the producer sends 10 messages while no credits were issued yet by the
> consumers.
> 2- Consumer 1 issues one credit, receives a message without acknowledging.
> 3- Consumer 2 in auto-ack mode issues one credit and times out after 5
> seconds if nothing is received.
> 4- we repeated step 3 eight times.
> 5- Consumer 1 acknowledges.
> The results were not as expected: 4 messages were blocked in the outbound
> queue of the consumer 1 and consumer 2 was able to receive only 5 messages.
> We analysed the traces to follow the messages. We found that 4 messages
> were blocked in the dispatch 1.
> Conclusion: if no consumers are issuing credits (are busy) then the
> incoming messages will be pre-assigned automatically by the dispatch router
> to the listeners (in a round robin way?).
>
> Is it an expected behavior in the dispatch router? is it not supposed to
> wait for a credit to be issued before binding the message to an outbound
> queue?
>

Yes, this is the expected behavior.  The router does not propagate each
individual credit from receiver to sender.  It would be impractical to do
so, would not scale well, and probably still wouldn't provide the behavior
you expect.  What the router does is to use delivery settlement as the way
to control credit flow to producers.  If the link capacity is 250, each
producer will be limited to 250 unsettled deliveries at any time.  As the
deliveries are settled, more credit is issued.  This scales well in large
networks, keeps a limit on the memory consumed by deliveries, and allows
for high delivery rates.
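
As a concrete illustration: with a link capacity of 250, a producer that
has 250 deliveries outstanding and then receives settlement for 40 of them
is topped back up with 40 credits; the window of unsettled deliveries
slides as settlements arrive rather than being negotiated per message.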

Is this a real use case or are you experimenting to learn how the router
works?

Under steady state flow, the messages will be delivered to the consumers in
proportion to the rate at which the consumers acknowledge (settle)
deliveries.  If a consumer attaches a receiving link but withholds credit,
the router network will route deliveries to that consumer in anticipation
of credit being issued.  It is an anti-pattern to attach a receiving link
and stop issuing credit.  Credit should be used to control the rate of
delivery.  If you want to stop delivery, detach the receiver.

If you really want a completely synchronous transfer across your network,
you can set the link capacity on the broker connections to 1.  This will
limit the number of in-flight unsettled deliveries on each incoming
auto-link to 1.  It will be slow and will be prone to stalling, especially
if your consumers withhold acknowledgement.
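
A sketch of that setting (host and port are placeholders):

    connector {
        name: broker1
        host: broker1.example.com
        port: 5672
        role: route-container
        linkCapacity: 1         # at most one unsettled delivery per auto-link
    }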

> If you want to check the paths of the messages, I attached the routers' logs
> of the use case 2.
>
> Best regards,
> Rabih
>
> On Mon, Jul 15, 2019 at 8:14 PM Ganesh Murthy  wrote:
>
>> On Mon, Jul 15, 2019 at 1:26 PM Rabih M  wrote:
>>
>> > Hello,
>> >
>> > We are testing with the trace+ log. We will brief you of the results
>> when
>> > they are ready.
>> >
>> > Our goal is not to load balance equally the messages between the
>> consumers
>> > but we would like the dispatch router to send the message to the free
>> > consumer.
>> >
>>
>> What you did (setting the linkCapacity on the producer and consumer
>> listeners to 1) is the right thing to do if you want the router
>> to send the message to *any* free consumer.
>>
>> If Consumer C1 is processing the first message and does not issue the next
>> credit until it finishes processing the first message and if the second
>> message arrives to Router 1 when C1 is
>> still processing the message, then Router 1 will definitely forward the
>> message to Router 2. BUT if C1 is fast enough and ends up processing the
>> first message and immediately issues credit, the second message if it
>> arrives in Router 1 will also
>> be sent to C1 (because the router prefers local consumers).
>>
>> Remember that the key here is which Router ends up getting the message
>> since each broker has two autoLinks to both routers and we don't know
>> which
>> autoLink the broker will choose.
>>
>> But overall, with your new configuration, the router will send the message
>> to a consumer that is not busy; no further configuration is necessary.
>>
>> Especially if the consumer does a long calculation, we do not want to block
>> > the message in the outbound queue knowing there is another idle

Re: Qpid Proton C++ 32-bit Support?

2019-06-25 Thread Ted Ross
Hi Matt,

There are downstream distributions of Proton, including the C++ client,
that are still being built for 32-bit architectures (i686).  There's
nothing in the code that prevents a 32-bit build.  From where are you
getting your RPMs?
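
For what it's worth, a 32-bit build on a 64-bit host is usually just a matter
of passing -m32, assuming 32-bit versions of the toolchain and of dependencies
such as openssl are installed.  A sketch:

    cmake ../qpid-proton -DCMAKE_C_FLAGS=-m32 -DCMAKE_CXX_FLAGS=-m32
    make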

-Ted

On Tue, Jun 25, 2019 at 3:20 PM MattR  wrote:

> Hi All,
>
> I was wondering, does Qpid Proton C++ still support 32-bit builds? We have
> a couple of legacy applications that are 32-bit using the old (OLD) Qpid
> (0.32 if I remember correctly) while the rest are based on 64-bit. Currently
> I can only find x86_64 based rpms, so I'm assuming that is now the be-all
> end-all. Is there a way to build/install the Qpid Proton C++ lib as 32-bit
> or do I need to convince the higher-ups that we need to switch solely to
> 64-bit support?
>
> Thanks,
>
> Matt R.
>
>
>
> --
> Sent from:
> http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: [VOTE] Release Qpid Dispatch Router 1.6.0 (RC2)

2019-03-28 Thread Ted Ross
+1

Built and tested on Fedora 29 and Centos 7.  Reviewed the configuration
schema for backward compatibility.

-Ted

On Tue, Mar 26, 2019 at 7:43 PM Ken Giusti  wrote:

> +1
>
> Tested interoperability between RC and 1.5.0 using oslo.messaging smoke
> tests.
> No memory pool leaks observed
>
> Built and ran unit tests successfully on Ubuntu 18
>
> On Mon, Mar 25, 2019 at 12:41 PM Ganesh Murthy  wrote:
>
> > Hello All,
> >
> >  Please cast your vote on this thread to release RC2 as the
> > official Qpid Dispatch Router version  1.6.0.
> >
> > RC2 of Qpid Dispatch Router version 1.6.0 can be found here:
> >
> > https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.6.0-rc2/
> >
> > The following features, improvements, and bug fixes are introduced in
> > 1.6.0:
> >
> > Features -
> > DISPATCH-1278 - Add support for prometheus metrics export
> >
> > Improvements -
> > DISPATCH-1243 - Change Valgrind configuration to run only
> > qdrouterd under valgrind
DISPATCH-1251 - Allow traffic animations to run simultaneously on
> > console's topology page.
> > DISPATCH-1255 - Test execution with python3 modifies files in
> > source directory
> > DISPATCH-1269 - Improve error handling for remote_sasl.c plugin
> > DISPATCH-1281 - Performance - Batch the freeing of messages from
> > the core thread
> > DISPATCH-1289 - Logging and Management Enhancements
> > DISPATCH-1290 - expose simple http base health check
> > DISPATCH-1291 - Update console to use router-generated link
> settlement
> > rates
> > DISPATCH-1296 - Change use of
> > pn_ssl_domain_allow_unsecured_client() to
> > pn_transport_require_encryption()
> > DISPATCH-1299 - Provide API access to the already existing
> > safe-reference capability in alloc-pool
> >
> > Bug fixes -
> > DISPATCH-1242 - [tools] Scraper does not highlight incomplete
> transfers
> > DISPATCH-1244 - New senders/receivers/edge routers first appear
> > too close to router in console
> > DISPATCH-1245 - Console build generates new warnings about
> > potential vulnerabilities
> > DISPATCH-1247 - Leak of bitmask during message annotation
> > DISPATCH-1248 - leak of core timers on shutdown
> > DISPATCH-1252 - Display connect page if console is disconnected
> > DISPATCH-1254 - qdstat sometimes raises "TypeError: 'NoneType'
> > object is not iterable"
> > DISPATCH-1257 - qdstat & qdmanage send bad initial response for
> > EXTERNAL if --sasl-mechanisms is specified
> > DISPATCH-1260 - Closing traffic animation doesn't always work
> > DISPATCH-1261 - Builds failing on CentOS7
> > DISPATCH-1262 - GCC 8.2 format-truncation error in router/src/main.c
> > DISPATCH-1263 - Symbol for sender/receiver is incorrect on
> > console's topology page
> > DISPATCH-1265 - Delivery_abort test causes inter-router session error
> > DISPATCH-1267 - Bad_configuration test fails intermittently
> > DISPATCH-1272 - Router crashes when detach from receiver and
> > detach from broker arrive at the same time on a link route
> > DISPATCH-1273 - 'to' field not authorized against valid targets
> > for anonymous sender
> > DISPATCH-1275 - Enable deletion of connections based on connection id
> > DISPATCH-1276 - Spontaneous drop of client connection causes crash
> > on edge router
> > DISPATCH-1277 - max-frame-size defaults to 2147483647 if it is not
> > specified in the policy
> > DISPATCH-1285 - Router crashes occasionally on
> > system_tests_delivery_abort
> > DISPATCH-1287 - router gets confused by clients response to drain
> > and subsequently issue too little credit
> > DISPATCH-1288 - Optionally enforce access policy on connections
> > established by the router
> > DISPATCH-1292 - Coverity issues on master branch
> > DISPATCH-1293 - Show traffic for stand-alone router
> > DISPATCH-1297 - Fix buffer reference counting for multiframe fanout
> > messages
> > DISPATCH-1301 - Management messages lost
> >
> > Thanks
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
> >
>
> --
> -K
>


Re: Dispatch Router prefetch

2019-03-19 Thread Ted Ross
Yes, please do.  Jira is the right place to capture the requirements for
this feature.

-Ted

On Tue, Mar 19, 2019 at 9:27 AM HADI Ali  wrote:

> Hello,
>
> Concerning handling the TTL at the level of the Dispatch Router, should we
> open a Jira ticket to track the issue and continue the discussion ?
> Depending on the priority of this issue on both sides, we are open to
> contribute if needed.
>
> Regards,
> Ali
>
> -Original Message-
> From: HADI Ali
> Sent: mercredi 13 mars 2019 11:14
> To: users@qpid.apache.org
> Subject: RE: Dispatch Router prefetch
>
> We support both, it depends on the use case (we have multiple services
> using the messaging).
>
> -Original Message-
> From: Robbie Gemmell 
> Sent: mardi 12 mars 2019 15:15
> To: users@qpid.apache.org
> Subject: Re: Dispatch Router prefetch
>
> What acknowledgement mode mode are you using?
>
> On Tue, 12 Mar 2019 at 13:22, HADI Ali  wrote:
> >
> > Hello,
> >
> > In our use case we have polling consumers with a prefetch policy of zero
> that issues one credit at a time every few seconds. Between two receive,
> the consumer will be attached with zero credit.
> > Thus, not considering a consumer to be a routable destination until it
> issues initial credit would address the problem only for the first message,
> because the dispatch will still prefetch possibly expired messages as soon
> as the destination is considered routable.
> >
> > In this use case we are consuming a few messages per minute and TTLs
> are between 2 to 5 seconds. Concerning the granularity, one second should
> be sufficient for us.
> >
> > We also noticed that the broker is not forwarding the TTL set at the
> level of the queue. Is this an expected behavior?
> >
> > Thanks,
> > Ali
> >
> > -Original Message-
> > From: Ted Ross 
> > Sent: lundi 11 mars 2019 15:32
> > To: users@qpid.apache.org
> > Subject: Re: Dispatch Router prefetch
> >
> > On Fri, Mar 8, 2019 at 9:19 AM Gordon Sim  wrote:
> >
> > > On 08/03/2019 2:12 pm, Gordon Sim wrote:
> > > > On 08/03/2019 12:59 pm, HADI Ali wrote:
> > > >> Hello,
> > > >>
> > > >> We are actually using in our cluster multiple brokers and thus we
> > > >> need to define the same address on multiple brokers.
> > > >> For this, we cannot use linkroutes as suggested, but we still
> > > >> need to have the correct behavior of the TTL in our cluster.
> > > >>
> > > >> Is it an option to manage the TTL of the message at the level of
> > > >> the dispatch router since we have all of the information needed
> > > >> in the message headers?
> > > >
> > > > It doesn't do that at present, but it doesn't seem like an
> > > > reasonable enhancement to me.
> > >
> > > Sorry, meant to say it doesn't seem like an *un*reasonable enhancement!
> > >
> >
> > I'd like to better understand the use case here.  We've avoided adding
> any kind of TTL support in Dispatch Router up to this point.
> >
> > I assume, based on the fact that prefetch-1 didn't solve your problem,
> that you have consumers that are attached but don't issue credit for long
> periods of time.  Is this accurate?
> >
> > What is the pattern of your consumers?  Do they attach, then later issue
> credit to process a message?  How many messages per second/minute/hour do
> your consumers handle?  Do they issue one credit at a time?
> >
> > What are the typical TTLs in your messages?  How granular does the
> expiration need to be (i.e. how accurate of a timer would need to be used
> to tag each incoming delivery)?  Would one-second granularity be
> sufficient, or do you need milliseconds?
> >
> > An alternate approach would be to not consider a consumer to be a
> routable destination until it issues initial credit.  Would this address
> your problem?
> >
> >
> > >
> > > >> In Internet Protocol, ipv4 for example, the routers manage the
> > > >> TTL and discard any expired messages.
> > > >>
> > > >> Or make it feasible to have the autolinks propagate the credit
> > > >> directly from consumers?
> > > >
> > > > This isn't really possible when you have autolinks for same
> > > > address to multiple brokers. If the consumer gives 10 credits, how
> > > > do you propagate that to two brokers?  5 each? What if they don't
> both have 5 messages?
> > > > 10 each? Then you are 

Re: Dispatch Router prefetch

2019-03-11 Thread Ted Ross
On Mon, Mar 11, 2019 at 11:26 AM Gordon Sim  wrote:

> On 11/03/2019 2:32 pm, Ted Ross wrote:
> > We've avoided adding any kind of TTL support in Dispatch Router up to
> > this point.
>
> Dropping an expired message is less work than delivering it, even
> delivering pre-settled, I suspect.
>
> The ttl is defined in the header of the message, before annotations, so
> in the case of message routing at least we will always have read past
> that point. Parsing out the ttl will have some cost, but I suspect not
> a huge one.
>
> I wouldn't advocate setting up timers to trigger the processing of
> expired messages, but wherever it makes sense it could be part of the
> processing of deliveries, as a way of saving effort as much as anything.
>

Agreed.  We would need to store with the delivery an arrival timestamp of
sufficient granularity to satisfy the requirements.  If the granularity is
large (one second), this will have very little impact on the performance of
the router.
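
For context, the ttl in question is set by the sending client.  In Proton
Python, for example (body and value illustrative):

    from proton import Message

    # ttl rides in the message header; the Python binding takes seconds
    msg = Message(body="price update", ttl=5.0)

A router-side check of the kind discussed here would then amount to dropping
any delivery for which arrival_time + ttl < now, with arrival_time stamped at
the agreed granularity.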


>
> I probably wouldn't bother too much initially with adjusting the ttl
> either, as generally messages flowing into the router network will
> pretty quickly reach the egress router (which is where at present they
> are 'delayed' until there is credit).
>
> Anyway, not advocating that this is needed, just commenting that I don't
> think it needs to have a negative effect on efficiency (indeed it could
> even be seen as an optimisation).
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: Dispatch Router prefetch

2019-03-11 Thread Ted Ross
On Fri, Mar 8, 2019 at 9:19 AM Gordon Sim  wrote:

> On 08/03/2019 2:12 pm, Gordon Sim wrote:
> > On 08/03/2019 12:59 pm, HADI Ali wrote:
> >> Hello,
> >>
> >> We are actually using in our cluster multiple brokers and thus we need
> >> to define the same address on multiple brokers.
> >> For this, we cannot use linkroutes as suggested, but we still need to
> >> have the correct behavior of the TTL in our cluster.
> >>
> >> Is it an option to manage the TTL of the message at the level of the
> >> dispatch router since we have all of the information needed in the
> >> message headers?
> >
> > It doesn't do that at present, but it doesn't seem like an reasonable
> > enhancement to me.
>
> Sorry, meant to say it doesn't seem like an *un*reasonable enhancement!
>

I'd like to better understand the use case here.  We've avoided adding any
kind of TTL support in Dispatch Router up to this point.

I assume, based on the fact that prefetch-1 didn't solve your problem, that
you have consumers that are attached but don't issue credit for long
periods of time.  Is this accurate?

What is the pattern of your consumers?  Do they attach, then later issue
credit to process a message?  How many messages per second/minute/hour do
your consumers handle?  Do they issue one credit at a time?

What are the typical TTLs in your messages?  How granular does the
expiration need to be (i.e. how accurate of a timer would need to be used
to tag each incoming delivery)?  Would one-second granularity be
sufficient, or do you need milliseconds?

An alternate approach would be to not consider a consumer to be a routable
destination until it issues initial credit.  Would this address your
problem?


>
> >> In Internet Protocol, ipv4 for example, the routers manage the TTL and
> >> discard any expired messages.
> >>
> >> Or make it feasible to have the autolinks propagate the credit
> >> directly from consumers?
> >
> > This isn't really possible when you have autolinks for same address to
> > multiple brokers. If the consumer gives 10 credits, how do you propagate
> > that to two brokers?  5 each? What if they don't both have 5 messages?
> > 10 each? Then you are back to the situation where you have more credit
> > issued at source than the consumer has granted.
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: [VOTE] Release Apache Qpid Proton 0.27.0

2019-02-07 Thread Ted Ross
+1 (assuming the below warning is not a blocker)
Installed and ran tests on Fedora 29
Tested against Qpid Dispatch Router
-Ted

On Thu, Feb 7, 2019 at 1:58 PM Ted Ross  wrote:

> Should have added that I'm running on Fedora 29.  The swig version is
> swig-3.0.12-21.fc29.
>
> On Thu, Feb 7, 2019 at 1:56 PM Ted Ross  wrote:
>
>> Still testing, but I saw the following warning during the build.  It
>> appears to be in SWIG-generated code, so this might not be easy to address.
>>
>> [100%] Building C object
>> python/CMakeFiles/_cproton.dir/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c.o
>> /home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:
>> In function ‘SWIG_Python_addvarlink’:
>> /home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:23177:9:
>> warning: ‘strncpy’ specified bound depends on the length of the source
>> argument [-Wstringop-overflow=]
>>  strncpy(gv->name,name,size);
>>  ^~~
>> /home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:23174:21:
>> note: length computed here
>>size_t size = strlen(name)+1;
>>  ^~~~
>> This warning did not halt the build.
>>
>> -Ted
>>
>> On Thu, Feb 7, 2019 at 12:41 PM Roddie Kieley  wrote:
>>
>>> +1
>>>
>>> I checked out the 0.27.0-rc1 tag on a Fedora 29 box
>>> - default cmake ../qpid-proton && cmake --build . && ctest -VV
>>> - no failures
>>> - ran all c examples
>>> - ran all cpp examples w/exception of service_bus
>>>
>>> I checked out the 0.27.0-rc1 tag on a 10.11.6 OSX box with Xcode 7.3.1
>>> - cmake -DCMAKE_OSX_DEPLOYMENT_TARGET=10.11 -DBUILD_RUBY=NO -DBUILD_GO=NO
>>> -DRUNTIME_CHECK=OFF ../qpid-proton && cmake --build . && ctest -VV
>>> - no failures
>>> - ran all c examples
>>> - ran a selection of cpp examples
>>>
>>> On Wed, Feb 6, 2019 at 10:32 AM Robbie Gemmell >> >
>>> wrote:
>>>
>>> > Hi folks,
>>> >
>>> > I have put together a spin for a Qpid Proton 0.27.0 release, please
>>> > give it a test out and vote accordingly.
>>> >
>>> > The files can be grabbed from:
>>> > https://dist.apache.org/repos/dist/dev/qpid/proton/0.27.0-rc1/
>>> >
>>> > The JIRAs assigned are:
>>> >
>>> >
>>> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313720&version=12344242
>>> >
>>> > It is tagged as 0.27.0-rc1.
>>> >
>>> > Regards,
>>> > Robbie
>>> >
>>> > -
>>> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>>> > For additional commands, e-mail: users-h...@qpid.apache.org
>>> >
>>> >
>>>
>>


Re: [VOTE] Release Apache Qpid Proton 0.27.0

2019-02-07 Thread Ted Ross
Should have added that I'm running on Fedora 29.  The swig version is
swig-3.0.12-21.fc29.

On Thu, Feb 7, 2019 at 1:56 PM Ted Ross  wrote:

> Still testing, but I saw the following warning during the build.  It
> appears to be in SWIG-generated code, so this might not be easy to address.
>
> [100%] Building C object
> python/CMakeFiles/_cproton.dir/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c.o
> /home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:
> In function ‘SWIG_Python_addvarlink’:
> /home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:23177:9:
> warning: ‘strncpy’ specified bound depends on the length of the source
> argument [-Wstringop-overflow=]
>  strncpy(gv->name,name,size);
>  ^~~
> /home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:23174:21:
> note: length computed here
>size_t size = strlen(name)+1;
>  ^~~~
> This warning did not halt the build.
>
> -Ted
>
> On Thu, Feb 7, 2019 at 12:41 PM Roddie Kieley  wrote:
>
>> +1
>>
>> I checked out the 0.27.0-rc1 tag on a Fedora 29 box
>> - default cmake ../qpid-proton && cmake --build . && ctest -VV
>> - no failures
>> - ran all c examples
>> - ran all cpp examples w/exception of service_bus
>>
>> I checked out the 0.27.0-rc1 tag on a 10.11.6 OSX box with Xcode 7.3.1
>> - cmake -DCMAKE_OSX_DEPLOYMENT_TARGET=10.11 -DBUILD_RUBY=NO -DBUILD_GO=NO
>> -DRUNTIME_CHECK=OFF ../qpid-proton && cmake --build . && ctest -VV
>> - no failures
>> - ran all c examples
>> - ran a selection of cpp examples
>>
>> On Wed, Feb 6, 2019 at 10:32 AM Robbie Gemmell 
>> wrote:
>>
>> > Hi folks,
>> >
>> > I have put together a spin for a Qpid Proton 0.27.0 release, please
>> > give it a test out and vote accordingly.
>> >
>> > The files can be grabbed from:
>> > https://dist.apache.org/repos/dist/dev/qpid/proton/0.27.0-rc1/
>> >
>> > The JIRAs assigned are:
>> >
>> >
>> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313720&version=12344242
>> >
>> > It is tagged as 0.27.0-rc1.
>> >
>> > Regards,
>> > Robbie
>> >
>> > -
>> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>> > For additional commands, e-mail: users-h...@qpid.apache.org
>> >
>> >
>>
>


Re: [VOTE] Release Apache Qpid Proton 0.27.0

2019-02-07 Thread Ted Ross
Still testing, but I saw the following warning during the build.  It
appears to be in SWIG-generated code, so this might not be easy to address.

[100%] Building C object
python/CMakeFiles/_cproton.dir/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c.o
/home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:
In function ‘SWIG_Python_addvarlink’:
/home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:23177:9:
warning: ‘strncpy’ specified bound depends on the length of the source
argument [-Wstringop-overflow=]
 strncpy(gv->name,name,size);
 ^~~
/home/ross/tmp/proton/qpid-proton-0.27.0/build/python/CMakeFiles/_cproton.dir/cprotonPYTHON_wrap.c:23174:21:
note: length computed here
   size_t size = strlen(name)+1;
 ^~~~
This warning did not halt the build.

-Ted

On Thu, Feb 7, 2019 at 12:41 PM Roddie Kieley  wrote:

> +1
>
> I checked out the 0.27.0-rc1 tag on a Fedora 29 box
> - default cmake ../qpid-proton && cmake --build . && ctest -VV
> - no failures
> - ran all c examples
> - ran all cpp examples w/exception of service_bus
>
> I checked out the 0.27.0-rc1 tag on a 10.11.6 OSX box with Xcode 7.3.1
> - cmake -DCMAKE_OSX_DEPLOYMENT_TARGET=10.11 -DBUILD_RUBY=NO -DBUILD_GO=NO
> -DRUNTIME_CHECK=OFF ../qpid-proton && cmake --build . && ctest -VV
> - no failures
> - ran all c examples
> - ran a selection of cpp examples
>
> On Wed, Feb 6, 2019 at 10:32 AM Robbie Gemmell 
> wrote:
>
> > Hi folks,
> >
> > I have put together a spin for a Qpid Proton 0.27.0 release, please
> > give it a test out and vote accordingly.
> >
> > The files can be grabbed from:
> > https://dist.apache.org/repos/dist/dev/qpid/proton/0.27.0-rc1/
> >
> > The JIRAs assigned are:
> >
> >
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313720&version=12344242
> >
> > It is tagged as 0.27.0-rc1.
> >
> > Regards,
> > Robbie
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
> >
>


Re: [VOTE] Release Qpid Dispatch Router 1.5.0 (RC2)

2019-01-14 Thread Ted Ross
+1
Tested compatibility with version 1.4.1

On Mon, Jan 14, 2019 at 2:27 PM Ernest Allen  wrote:

> +1
>
> Fedora 29
> Tested build. Noticed that there is a new warning when building the
> console. I'll add a jira for that, but I don't believe that it is a
> blocker.
> Started routers
> Started console
> Started some messaging traffic
>
>
> On Mon, Jan 14, 2019 at 1:22 PM Ken Giusti  wrote:
>
> > +1
> >
> > Tested on both Ubuntu 16.04 and 18.04:
> > all unit tests pass
> > oslo.messaging smoke tests pass
> > - additional run of 1.4.1<-->1.5.0-rc2 configuration
> >
> > On Mon, Jan 14, 2019 at 12:11 PM Chuck Rolke  wrote:
> > >
> > > +1
> > >
> > > * Fedora 29, Python 2.7.15, qpid-proton master, openssl v1.1.1
> > > * Verified signatures
> > > * Built from source, ran self tests (observed issue with ssl self test
> > as noted)
> > > * Ran cursory local network tests - all normal
> > >
> > >
> > > - Original Message -
> > > > From: "Ganesh Murthy" 
> > > > To: users@qpid.apache.org
> > > > Sent: Friday, January 11, 2019 3:57:57 PM
> > > > Subject: [VOTE] Release Qpid Dispatch Router 1.5.0 (RC2)
> > > >
> > > > Hello All,
> > > >
> > > >  Please cast your vote on this thread to release RC2 as the
> > > > official Qpid Dispatch Router version  1.5.0.
> > > >
> > > > RC2 of Qpid Dispatch Router version 1.5.0 can be found here:
> > > >
> > > > https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.5.0-rc2/
> > > >
> > > > The following features, improvements, and bug fixes are introduced in
> > 1.5.0:
> > > >
> > > > Features -
> > > >
> > > > DISPATCH-1142 - Edge Router Module - Connection manager to select
> > > > the active uplink
> > > > DISPATCH-1143 - Connection-scoped link routes
> > > > DISPATCH-1145 - Edge Router - Implement address proxy component
> > > > DISPATCH-1150 - A request/response message client API for core
> > > > DISPATCH-1154 - Synchronize routed link configurations on edge
> > > > router with interior router
> > > > DISPATCH-1156 - Delivery echo-prevention for edge routers
> > > > DISPATCH-1194 - Asynchronous address lookup on attach for Edge to
> > > > determine if there are link-route destinations
> > > > DISPATCH-1224 - Waypoints may be attached by external containers
> > > > without using auto-links
> > > >
> > > > Improvements -
> > > > DISPATCH-1141 - Add an event API in the router core to more
> > > > cleanly support module interactions
> > > > DISPATCH-1147 - Expose address priority
> > > > DISPATCH-1152 - Improvements to the core-endpoint API
> > > > DISPATCH-1158 - Add background map to console's topology page
> > > > DISPATCH-1159 - Remove the term "uplink" from the edge router -
> > > > It's confusing
> > > > DISPATCH-1160 - Add edge address tracking module to interior
> > > > routers which will inform edges of mobile address receiver changes
> > > > DISPATCH-1161 - Handle edge routers in the console
> > > > DISPATCH-1162 - Documentation updates related to Edge Router
> > > > DISPATCH-1165 - Generate egress-link histograms for more kinds of
> > > > connections
> > > > DISPATCH-1166 - Expand the mouseover area for connections on the
> > > > console's topology page
> > > > DISPATCH-1168 - Display additional detail for end-point
> > > > connections, and edge-routers on the console's topology page
> > > > DISPATCH-1178 - Allow unspecified router-id in configuration -
> > > > select a random ID
> > > > DISPATCH-1191 - Log files could use some analysis and summary
> tools
> > > > DISPATCH-1193 - Smoothly transition colors on console's traffic
> > > > congestion view
> > > > DISPATCH-1195 - Continually update detail info on conole topology
> > page
> > > > DISPATCH-1199 - [tools] Log scraper tool should be moved to tools
> > > > directory
> > > > DISPATCH-1200 - [Test] system_tests_edge_router must import 're'
> > > > DISPATCH-1201 - [tools] Scraper is mishandling transfers with no
> > > > AMQP properties
> > > > DISPATCH-1202 - [tools] Scraper README is stale
> > > > DISPATCH-1204 - Add console tests for edge router
> > > > DISPATCH-1205 - Allow signed int values >= 0 be parsed as
> unsigned
> > int
> > > > DISPATCH-1206 - Consolidate similar HTML templates into an
> > > > angularjs directive
> > > > DISPATCH-1207 - [tools] Scraper does not handle session
> recreation
> > > > over same connection
> > > > DISPATCH-1208 - [tools] Scraper is slow with large number of
> links
> > > DISPATCH-1209 - Add an enabling gate to control the
> > > > initialization of core modules
> > > > DISPATCH-1210 - [tools] Scraper could find and show unsettled
> > transfers
> > > > DISPATCH-1211 - Show rate of acceptedDeliveries in console detail
> > > > for edge routers
> > > > DISPATCH-1216 - [tools] Scraper should sort links by
> source/target
> > > > address
> > > > DISPATCH-1227 - Add a policy setting to allow or 

Re: [Dispatch Router] Wrong IDs in the logs?

2018-11-02 Thread Ted Ross
I tried reproducing your symptom with multiple connectors and multiple auto
links to the same broker.  I see what I (and you) expect, an activation on
each individual connection, with distinct connection identifiers.  I agree
with Ganesh, a simple reproducer would be helpful.
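
For reference, the shape of configuration under discussion is roughly the
following, with each autolink bound to its connector by name (names, host,
and port are illustrative):

    connector {
        name: broker_1
        host: broker-host
        port: 5672
        role: route-container
    }
    connector {
        name: broker_2
        host: broker-host
        port: 5672
        role: route-container
    }
    autoLink {
        addr: myQueue
        connection: broker_1
        dir: in
    }
    autoLink {
        addr: myQueue
        connection: broker_2
        dir: in
    }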

-Ted

On Fri, Nov 2, 2018 at 6:51 AM VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:

> Hello,
>
> @Ted We are using the connector name.
>
> @Ganesh We are using a JMS client to send the management requests after
> the dispatch-router has started. I'll try to reproduce with a script using
> qdmanage.
>
> Olivier
>
> -Original Message-
> From: Ted Ross 
> Sent: mercredi 31 octobre 2018 18:35
> To: users@qpid.apache.org
> Subject: Re: [Dispatch Router] Wrong IDs in the logs?
>
> Olivier,
>
> How do you specify the connection in your autolinks?  Are you using
> container-id or connector name?
>
> -Ted
>
> On Wed, Oct 31, 2018 at 10:30 AM VERMEULEN Olivier <
> olivier.vermeu...@murex.com> wrote:
>
> > Hello,
> >
> > We're currently using the version 1.3.0 of the dispatch-router.
> > We are creating 10 connectors to our broker and 10 autolinks per
> > topic/queue (1 for each connector).
> > This "connection pool" allows us to greatly improve the performances but
> > we noticed something strange in the logs.
> > The connectors are named green-lx-slave1_47341_X where X is a number from
> > 1 to 10.
> > But if you look at the last part of the logs, the "Auto Link Activated"
> > part is listing 10 times green-lx-slave1_47341_1 ...
> >
> > When checking the dispatch-router with qdstat everything looks fine.
> > Is it just a problem with the logs?
> >
> > Thanks,
> > Olivier
> >
> >
> >
> > [2018-11-01T02:07:13.552Z] - INFO Removing the existing work-dir:
> >
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router
> > [2018-11-01T02:07:13.554Z] - INFO Creating Dispatch-Router work-dir:
> >
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router
> > [2018-11-01T02:07:13.556Z] - INFO Setting PYTHON and OPENSSL
> > [2018-11-01T02:07:13.557Z] - INFO Generating new config file from
> template
> > file
> > [2018-11-01T02:07:13.563Z] - INFO Starting Dispatch-Router
> > 'messaging-dispatch-router' with work dir
> >
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router
> > and config file
> >
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router/dispatch.conf
> > [2018-11-01T02:07:13.564Z] - INFO Wait for Dispatch-Router instance to
> > start
> > 2018-11-01 03:07:13.655961 +0100 AGENT (debug) Add entity:
> > LogEntity(enable=debug+, identity=log/DEFAULT, includeSource=False,
> > includeTimestamp=True, module=DEFAULT, name=log/DEFAULT,
> >
> outputFile=/data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/logs//messaging/messaging-dispatch-router/messaging-dispatch-router_AGENT.DEFAULT_c34196c3-7808-41b4-b19f-403bd7bbd4f5.log,
> > type=org.apache.qpid.dispatch.log)
> > 2018-11-01 03:07:13.656311 +0100 AGENT (debug) Add entity:
> > LogEntity(identity=log/HTTP, module=HTTP, name=log/HTTP,
> > type=org.apache.qpid.dispatch.log)
> > 2018-11-01 03:07:13.656592 +0100 AGENT (debug) Add entity:
> > LogEntity(identity=log/ROUTER_LS, module=ROUTER_LS, name=log/ROUTER_LS,
> > type=org.apache.qpid.dispatch.log)
> > 2018-11-01 03:07:13.656831 +0100 AGENT (debug) Add entity:
> > LogEntity(identity=log/PYTHON, module=PYTHON, name=log/PYTHON,
> > type=org.apache.qpid.dispatch.log)
> > 2018-11-01 03:07:13.657110 +0100 AGENT (debug) Add entity:
> > LogEntity(identity=log/ROUTER_MA, module=ROUTER_MA, name=log/ROUTER_MA,
> > type=org.apache.qpid.dispatch.log)
> > 2018-11-01 03:07:13.657379 +0100 AGENT (debug) Add entity:
> > LogEntity(identity=log/CONN_MGR, module=CONN_MGR, name=log/CONN_MGR,
> > type=org.apache.qpid.dispatc

Re: [Dispatch Router] Wrong IDs in the logs?

2018-10-31 Thread Ted Ross
Olivier,

How do you specify the connection in your autolinks?  Are you using
container-id or connector name?

-Ted

On Wed, Oct 31, 2018 at 10:30 AM VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:

> Hello,
>
> We're currently using the version 1.3.0 of the dispatch-router.
> We are creating 10 connectors to our broker and 10 autolinks per
> topic/queue (1 for each connector).
> This "connection pool" allows us to greatly improve the performances but
> we noticed something strange in the logs.
> The connectors are named green-lx-slave1_47341_X where X is a number from
> 1 to 10.
> But if you look at the last part of the logs, the "Auto Link Activated"
> part is listing 10 times green-lx-slave1_47341_1 ...
>
> When checking the dispatch-router with qdstat everything looks fine.
> Is it just a problem with the logs?
>
> Thanks,
> Olivier
>
>
>
> [2018-11-01T02:07:13.552Z] - INFO Removing the existing work-dir:
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router
> [2018-11-01T02:07:13.554Z] - INFO Creating Dispatch-Router work-dir:
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router
> [2018-11-01T02:07:13.556Z] - INFO Setting PYTHON and OPENSSL
> [2018-11-01T02:07:13.557Z] - INFO Generating new config file from template
> file
> [2018-11-01T02:07:13.563Z] - INFO Starting Dispatch-Router
> 'messaging-dispatch-router' with work dir
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router
> and config file
> /data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/components/messaging/messaging-dispatch-router/work-messaging-dispatch-router/dispatch.conf
> [2018-11-01T02:07:13.564Z] - INFO Wait for Dispatch-Router instance to
> start
> 2018-11-01 03:07:13.655961 +0100 AGENT (debug) Add entity:
> LogEntity(enable=debug+, identity=log/DEFAULT, includeSource=False,
> includeTimestamp=True, module=DEFAULT, name=log/DEFAULT,
> outputFile=/data/src/new/v3.1.build.dev.60387.amber.pi11.2/Messaging/bugfix_messaging_random-failures/component/messaging/messaging-packaging/target/test-classes/logs//messaging/messaging-dispatch-router/messaging-dispatch-router_AGENT.DEFAULT_c34196c3-7808-41b4-b19f-403bd7bbd4f5.log,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.656311 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/HTTP, module=HTTP, name=log/HTTP,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.656592 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/ROUTER_LS, module=ROUTER_LS, name=log/ROUTER_LS,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.656831 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/PYTHON, module=PYTHON, name=log/PYTHON,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.657110 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/ROUTER_MA, module=ROUTER_MA, name=log/ROUTER_MA,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.657379 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/CONN_MGR, module=CONN_MGR, name=log/CONN_MGR,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.657666 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/ROUTER_HELLO, module=ROUTER_HELLO,
> name=log/ROUTER_HELLO, type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.657911 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/SERVER, module=SERVER, name=log/SERVER,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.658174 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/POLICY, module=POLICY, name=log/POLICY,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.658474 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/CONTAINER, module=CONTAINER, name=log/CONTAINER,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.658759 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/AGENT, module=AGENT, name=log/AGENT,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.659041 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/ERROR, module=ERROR, name=log/ERROR,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.659336 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/ROUTER_CORE, module=ROUTER_CORE,
> name=log/ROUTER_CORE, type=org.apache.qpid.dispatch.log)
> 2018-11-01 03:07:13.659613 +0100 AGENT (debug) Add entity:
> LogEntity(identity=log/ROUTER, module=ROUTER, name=log/ROUTER,
> type=org.apache.qpid.dispatch.log)
> 2018-11-01 

Re: [VOTE] Release Qpid Dispatch Router 1.4.1 (RC1)

2018-10-23 Thread Ted Ross
Ok, we seem to have a problem now.  Github claims to be 100% back online
but the qpid-dispatch build still doesn't work.

-Ted

On Mon, Oct 22, 2018 at 11:31 AM Chuck Rolke  wrote:

> conditional +1
>
> * Fedora 27, proton master
> * checksums check
> * build/test *without console* OK
>
> Console requires packages that get downloaded from github,
> github is inaccessible, and so the console build fails.
> Will test again when github is back on line.
>
>
> - Original Message -
> > From: "Ganesh Murthy" 
> > To: users@qpid.apache.org
> > Sent: Friday, October 19, 2018 10:12:13 AM
> > Subject: [VOTE] Release Qpid Dispatch Router 1.4.1 (RC1)
> >
> > Hello All,
> >
> >  Please cast your vote on this thread to release RC1 as the
> > official Qpid Dispatch Router version  1.4.1.
> >
> > RC1 of Qpid Dispatch Router version 1.4.1 can be found here:
> >
> > https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.4.1-rc1/
> >
> > The following bugs are fixed in 1.4.1:
> >
> > DISPATCH-1148 - auth plugin should indicate version in open
> properties
> > DISPATCH-1149 - authz plugin can no longer override conf file policy
> >
> > Thanks
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
> >
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: Problems building dispatch-router 1.3.0

2018-09-28 Thread Ted Ross
Were there any core dumps in the test directories?
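
Something along these lines, run against your build tree, should turn them up
(path illustrative):

    find <build-dir>/tests -name 'core*'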

On Fri, Sep 28, 2018 at 6:07 AM Rabih M  wrote:

> Hello,
>
> We are trying to build the latest released version of the dispatch router
> 1.3.0.
> We are using redhat OS (2.6.32-358.el6.x86_64 GNU/Linux), GCC 4.9.2 and
> Python
> 2.7.8.
>
> And we have 2 failing unit tests:
>
> system_tests_two_routers:
>
> 
>
> [linux] 30: FAIL: test_10_propagated_disposition
> (system_tests_two_routers.TwoRouterTest)
>
> [linux] 30: 
>
> [linux] 30: Traceback (most recent call last):
>
> [linux] 30:   File
>
> ".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_tests_two_routers.py",
> line 191, in test_10_propagated_disposition
>
> [linux] 30: test.run()
>
> [linux] 30:   File
>
> ".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_tests_two_routers.py",
> line 1229, in run
>
> [linux] 30: self.test.assertEqual(['accept', 'reject'],
> sorted(self.settled))
>
> [linux] 30: AssertionError: Lists differ: [u'accept', u'reject'] !=
> [u'reject']
>
> [linux] 30:
>
> [linux] 30: First differing element 0:
>
> [linux] 30: accept
>
> [linux] 30: reject
>
> [linux] 30:
>
> [linux] 30: First list contains 1 additional elements.
>
> [linux] 30: First extra element 1:
>
> [linux] 30: reject
>
> [linux] 30:
>
> [linux] 30: - [u'accept', u'reject']
>
> [linux] 30: + [u'reject']
>
> [linux] 30:
>
>
> And system_tests_console :
>
>
>
> Test command: /opt/rh/python27/root/usr/bin/python
> ".../dispatch-workspace/build-dir/qpid-dispatch/tests/run.py" "-x"
> "unit2" "-v" "system_tests_console"
>
> [linux] 48: Test timeout computed to be: 1500
>
> [linux] 48: ERROR
>
> [linux] 48:
>
> [linux] 48: =
>
> [linux] 48: ERROR: setUpClass (system_tests_console.ConsoleTest)
>
> [linux] 48: -
>
> [linux] 48: Traceback (most recent call last):
>
> [linux] 48:   File
>
> "/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_tests_console.py",
> line 45, in setUpClass
>
> [linux] 48: cls.router = cls.tester.qdrouterd('test-router', config)
>
> [linux] 48:   File
> ".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
> line 557, in qdrouterd
>
> [linux] 48: return self.cleanup(Qdrouterd(*args, **kwargs))
>
> [linux] 48:   File
> ".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
> line 352, in __init__
>
> [linux] 48: self.wait_ready()
>
> [linux] 48:   File
>
> "/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
> line 478, in wait_ready
>
> [linux] 48: self.wait_ports(**retry_kwargs)
>
> [linux] 48:   File
>
> "/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
> line 463, in wait_ports
>
> [linux] 48: wait_ports(self.ports_family, **retry_kwargs)
>
> [linux] 48:   File
>
> "/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
> line 185, in wait_ports
>
> [linux] 48: wait_port(port=port, protocol_family=protocol_family,
> **retry_kwargs)
>
> [linux] 48:   File
>
> "/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
> line 177, in wait_port
>
> [linux] 48: raise Exception("wait_port timeout on host %s port %s:
> %s"%(host, port, e))
>
> [linux] 48: Exception: wait_port timeout on host 127.0.0.1 port 27703:
> [Errno 111] Connection refused
>
>
> Any idea why this is happening?
>
> Best regards,
> Rabih
>


Re: Building qpid proton proactor on linux 2.6...

2018-09-06 Thread Ted Ross
timerfd can be fairly easily replaced with a socketpair, where one
socket is registered in the poll/select and the other is used to signal
activation.  To wake up the poll/select, write a character into the
signal socket.  Be sure to read the characters back out of the socket on
the select side, so the buffer doesn't fill up and apply back-pressure.
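
A minimal sketch of the idea, in Python for brevity (the same shape works in
C with socketpair(2) and poll(2)):

    import select
    import socket

    # one end is registered in the select; writing to the other wakes it up
    wake_recv, wake_send = socket.socketpair()

    def wake():
        wake_send.send(b"x")              # signal activation

    def run(watched):
        while True:
            readable, _, _ = select.select([wake_recv] + watched, [], [])
            if wake_recv in readable:
                wake_recv.recv(4096)      # drain so the buffer never fills
                # ... handle the activation here ...
            # ... service the other ready sockets ...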

-Ted

On Thu, Sep 6, 2018 at 7:26 AM,   wrote:
>
> Hi,
>
> I am trying to port a project which was implemented using qpid-proton-cpp 
> from a fairly recent Fedora linux kernel to an older RHEL linux 2.6.
>
> The older kernel is required because the end user unfortunately only runs the 
> older linux kernel, and due to company policy they cannot upgrade in the near 
> future.
>
> Unfortunately when we try to build on linux kernel 2.6 there are dependency 
> issues which relate to the older kernel not including support for timerfd 
> which is used in qpid-proton-proactor.
>
> We did also encounter some other build issues with some of the cpp templates 
> in proton-cpp due to bugs in the older GNU compiler version which we fixed 
> using workarounds, but getting the proactor to work on the older kernel looks 
> like it will be less than trivial without timerfd.
>
> I do see what appears to be an older implementation for IO etc named reactor 
> as opposed to proactor in the proton c implementations, but unfortunately the 
> cpp container implementation which we are using does not seem to have an 
> implementation using reactor instead of proactor?
>
> Is there any way around this - is it possible to use container without having 
> timerfd support in the kernel?
>
> Performance isn't important for us in this so a simple socket read loop would 
> be fine - just I am not sure how to shoehorn that into what we have already 
> implemented using container...?
>
> Thanks
> N
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Re: [VOTE] Release Qpid Dispatch Router 1.3.0 (RC1)

2018-08-09 Thread Ted Ross
+1

* Tested against master Proton-c on Fedora 27.
* Verified backward compatibility of the configuration schema

-Ted

On Wed, Aug 8, 2018 at 8:19 AM, Ganesh Murthy  wrote:
> +1
> * Validated signatures and checksums
> * Checked for presence of LICENSE and NOTICE files
> * Ran mvn apache-rat:check, no files with missing license headers found.
> * Built from source against Proton master in Fedora 27 and ran system
> tests. All tests passed
>
>
> On Tue, Aug 7, 2018 at 3:07 PM, Ganesh Murthy  wrote:
>
>> Hello All,
>>
>>  Please cast your vote on this thread to release RC1 as the
>> official Qpid Dispatch Router version  1.3.0.
>>
>> RC1 of Qpid Dispatch Router version 1.3.0 can be found here:
>>
>> https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.3.0-rc1/
>>
>> The following improvements and bug fixes are introduced in 1.3.0:
>>
>> Improvements -
>>
>> DISPATCH-977 - Document transaction support
>> DISPATCH-1038 - Console should prevent the deletion of an http listener
>> DISPATCH-1054 - Add console test to make test
>> DISPATCH-1059 - Force Overview and Entities tree to be full page height
>> DISPATCH-1064 - Doc link route reconnect behavior
>> DISPATCH-1065 - Doc new router statistics
>> DISPATCH-1066 - Document capability to restrict TLS and SSL
>> protocol versions used in connections
>> DISPATCH-1067 - Doc improvements for router policies
>> DISPATCH-1070 - Use patternfly cards on overview page
>> DISPATCH-1075 - Dropdown list of routers on the console's Entities
>> page should be sorted
>> DISPATCH-1076 - Don't concat console's source files into a single file
>>
>> Bug fixes -
>> DISPATCH-322 - Graph icon is missing when browser window is narrow
>> DISPATCH-1008 - Router should preserve original connection
>> information when attempting to make failover connections
>> DISPATCH-1061 - Clear popups on console's topology page
>> DISPATCH-1062 - Link address can be reported incorrectly as
>> mobile+phase-0
>> DISPATCH-1063 - Receiver unable to receive messages on waypoint
>> address with external-address in two router case
>> DISPATCH-1069 - memory grows on a long-lived connection when links
>> are opened and closed
>> DISPATCH-1071 - Switching between traffic visualizations sometimes
>> shows both
>> DISPATCH-1072 - Number of clients doesn't always update on topology
>> page
>> DISPATCH-1074 - Fix mouseover on an address on console's Chord page
>> DISPATCH-1077 - Reported rate of message traffic is incorrect on
>> console's 'message traffic' page
>> DISPATCH-1078 - Tab bar icon changes for topology page
>> DISPATCH-1080 - system_tests_ssl failing consistently on Travis
>> DISPATCH-1083 - File console/stand-alone/package-lock.json
>> constantly regenerated
>> DISPATCH-1084 - The color for new addresses on topology
>> visualizations is incorrect
>> DISPATCH-1085 - When sender closes connection after sending a
>> large streaming message, receiver gets aborted message
>> DISPATCH-1087 - qdstat and qdmanage dont run on environments that
>> have only python3
>> DISPATCH-1089 - Dispatch creates sender autolinks with null source
>> terminus and receiver autolinks with null target terminus
>> DISPATCH-1091 - name collision with 'builtins' library in python2
>> DISPATCH-1092 - in some cases qdrouterd crashes due to stale
>> pn_session_t
>> DISPATCH-1093 - adding connectors dynamically causes extra
>> connections for existing connectors
>> DISPATCH-1094 - Log file messages out of order according to time stamps
>> DISPATCH-1095 - Skipped system tests are marked as failed on rhel6
>> DISPATCH-1097 - Fix Coverity issue on master branch
>>
>> Thanks
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>> For additional commands, e-mail: users-h...@qpid.apache.org
>>
>>

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Re: Inter-connected dispatch-routers

2018-07-25 Thread Ted Ross
On Wed, Jul 25, 2018 at 5:48 AM, VERMEULEN Olivier
 wrote:
> Hello
>
> Thanks for your replies
>
> @Ganesh, I'm trying to setup the same use case with a more recent 
> dispatch-router (1.0.0) but so far I can't even make the dispatch-router 
> work. My autolinks are in failed state with the following error: "received 
> Attach with remote null terminus". Do you have an idea?
>
> @Ted, shouldn't the configuration be symmetric? If for example I have 3 
> dispatch-routers (A, B, C) with A having connectors on B and C and the other 
> two having a listener, what happens if A crashes? B and C won't be able to 
> communicate no?

You can set up a full mesh of the three routers by adding a connector
from B to C.  A two-router mesh requires one connector; A three-router
mesh requires three connectors (one for each edge in the graph).
Having two inter-router connections between any pair of routers is a
misconfiguration.
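
In your A/B/C example, that means one extra stanza, e.g. on B, pointing at
C's inter-router listener (host and port illustrative):

    connector {
        name: router.C
        host: c-host
        port: 10205
        role: inter-router
    }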

>
> Thanks,
> Olivier
>
> -Original Message-
> From: Ted Ross 
> Sent: lundi 23 juillet 2018 18:20
> To: users@qpid.apache.org
> Subject: Re: Inter-connected dispatch-routers
>
> Unrelated to your symptom, there is a misconfiguration in your setup.
> You have two inter-router connections between your routers, one established 
> in each direction (i.e. each router has an inter-router listener _and_ 
> connector).  You only need one inter-router connection.
>
> -Ted
>
> On Mon, Jul 23, 2018 at 10:18 AM, VERMEULEN Olivier 
>  wrote:
>> Hello,
>>
>> I started 2 Dispatch-Routers (version 0.7.0) and 1 Broker-J (version
>> 7.0.3) My first Dispatch-Router has an out autolink on a topic:
>>
>> router {
>> id: router.10104
>> mode: interior
>> worker-threads: 4
>> }
>> listener {
>> host : 0.0.0.0
>> port: 10104
>> role: normal
>> authenticatePeer: no
>> }
>> listener {
>> host : 0.0.0.0
>> port: 10204
>> role: inter-router
>> authenticatePeer: no
>> }
>> connector {
>> name: router.10205
>> host: dell440srv
>> port: 10205
>> role: inter-router
>> }
>> address {
>> prefix: myTopic
>> waypoint: yes
>> }
>> address {
>> prefix: myQueue
>> waypoint: yes
>> }
>> connector {
>> name: broker.5673
>> host: dell440srv
>> port: 5673
>> role: route-container
>> }
>> autoLink {
>> name: broker.5673.myTopic
>> addr: myTopic
>> connection: broker.5673
>> dir: out
>> }
>>
>> My second Dispatch-Router has an in autolink on a queue bound to the 
>> previous topic:
>>
>> router {
>> id: router.10105
>> mode: interior
>> worker-threads: 4
>> }
>> listener {
>> host : 0.0.0.0
>> port: 10105
>> role: normal
>> authenticatePeer: no
>> }
>> listener {
>> host : 0.0.0.0
>> port: 10205
>> role: inter-router
>> authenticatePeer: no
>> }
>> connector {
>> name: router.10204
>> host: dell440srv
>> port: 10204
>> role: inter-router
>> }
>> address {
>> prefix: myTopic
>> waypoint: yes
>> }
>> address {
>> prefix: myQueue
>> waypoint: yes
>> }
>> connector {
>> name: broker.5673
>> host: dell440srv
>> port: 5673
>> role: route-container
>> }
>> autoLink {
>> name: broker.5673.myQueue
>> addr: myQueue
>> connection: broker.5673
>> dir: in
>> }
>>
>> Both Dispatch-Routers are inter-connected.
>> Now when I connect to the second dispatch-router I can send and receive 
>> messages.
>> But when I connect to the first one, I can only send... the receive does not 
>> find anything...
>> Did I miss something in the configuration?
>>
>> Thanks,
>> Olivier
>>
>> ***
>>
>> This e-mail contains information for the intended recipient only. It may 
>> contain proprietary material or confidential information. If you are not the 
>> intended recipient you are not authorised to distribute, copy or use this 
>> e-mail or any attachment to it. Murex cannot guarantee that it is virus free 
>> and accepts no responsibility for any loss or damage arising from its use. 
>> If you have received this e-mail in error please notify immediately the 
>> sender and delete the original email received, any attachments and all 
>> cop

Re: Inter-connected dispatch-routers

2018-07-23 Thread Ted Ross
Unrelated to your symptom, there is a misconfiguration in your setup.
You have two inter-router connections between your routers, one
established in each direction (i.e. each router has an inter-router
listener _and_ connector).  You only need one inter-router connection.

-Ted

On Mon, Jul 23, 2018 at 10:18 AM, VERMEULEN Olivier
 wrote:
> Hello,
>
> I started 2 Dispatch-Routers (version 0.7.0) and 1 Broker-J (version 7.0.3)
> My first Dispatch-Router has an out autolink on a topic:
>
> router {
> id: router.10104
> mode: interior
> worker-threads: 4
> }
> listener {
> host : 0.0.0.0
> port: 10104
> role: normal
> authenticatePeer: no
> }
> listener {
> host : 0.0.0.0
> port: 10204
> role: inter-router
> authenticatePeer: no
> }
> connector {
> name: router.10205
> host: dell440srv
> port: 10205
> role: inter-router
> }
> address {
> prefix: myTopic
> waypoint: yes
> }
> address {
> prefix: myQueue
> waypoint: yes
> }
> connector {
> name: broker.5673
> host: dell440srv
> port: 5673
> role: route-container
> }
> autoLink {
> name: broker.5673.myTopic
> addr: myTopic
> connection: broker.5673
> dir: out
> }
>
> My second Dispatch-Router has an in autolink on a queue bound to the previous 
> topic:
>
> router {
> id: router.10105
> mode: interior
> worker-threads: 4
> }
> listener {
> host : 0.0.0.0
> port: 10105
> role: normal
> authenticatePeer: no
> }
> listener {
> host : 0.0.0.0
> port: 10205
> role: inter-router
> authenticatePeer: no
> }
> connector {
> name: router.10204
> host: dell440srv
> port: 10204
> role: inter-router
> }
> address {
> prefix: myTopic
> waypoint: yes
> }
> address {
> prefix: myQueue
> waypoint: yes
> }
> connector {
> name: broker.5673
> host: dell440srv
> port: 5673
> role: route-container
> }
> autoLink {
> name: broker.5673.myQueue
> addr: myQueue
> connection: broker.5673
> dir: in
> }
>
> Both Dispatch-Routers are inter-connected.
> Now when I connect to the second dispatch-router I can send and receive 
> messages.
> But when I connect to the first one, I can only send... the receive does not 
> find anything...
> Did I miss something in the configuration?
>
> Thanks,
> Olivier
>
> ***
>
> This e-mail contains information for the intended recipient only. It may 
> contain proprietary material or confidential information. If you are not the 
> intended recipient you are not authorised to distribute, copy or use this 
> e-mail or any attachment to it. Murex cannot guarantee that it is virus free 
> and accepts no responsibility for any loss or damage arising from its use. If 
> you have received this e-mail in error please notify immediately the sender 
> and delete the original email received, any attachments and all copies from 
> your system.

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Re: [Dispatch Router] non-destructive consumers

2018-07-03 Thread Ted Ross
On Tue, Jul 3, 2018 at 9:34 AM, VERMEULEN Olivier
 wrote:
> Hello,
>
> I've been playing with the non-destructive consumers for the past few days.
>
> First I will explain the use case that works.
> If I create a queue on a Broker-J (7.0.3) with ensureNonDestructiveConsumers 
> set to true and I put a message in it, then any consumer I create for this 
> queue will read this single message, as expected from non-destructive 
> consumers.
>
> Now if I add a Dispatch-Router (0.7.0) in front of my broker, then the first 
> consumer I create will receive the message but the second one will receive 
> null...
> Did I miss some configuration on the Dispatch-Router side?

I assume that you used an auto-link to connect to the broker queue.
In this case, there is only one consumer on the queue, the router, and
it gets a copy of the message to distribute accordingly.

If you used a link-route instead, then each remote consumer would have
its own subscription on the queue and get its own copy of the message.
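
As a sketch, assuming a route-container connector named broker and the queue
address myQueue (both illustrative), the link-route variant would look like:

    linkRoute {
        prefix: myQueue
        connection: broker
        dir: in
    }
    linkRoute {
        prefix: myQueue
        connection: broker
        dir: out
    }

With that in place, each consumer's attach is propagated to the broker, so
the broker sees the consumers individually rather than one router consumer.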

Let me know if you'd like a more specific answer with regard to the
configuration.

-Ted

>
> Thanks,
> Olivier
> ***
>
> This e-mail contains information for the intended recipient only. It may 
> contain proprietary material or confidential information. If you are not the 
> intended recipient you are not authorised to distribute, copy or use this 
> e-mail or any attachment to it. Murex cannot guarantee that it is virus free 
> and accepts no responsibility for any loss or damage arising from its use. If 
> you have received this e-mail in error please notify immediately the sender 
> and delete the original email received, any attachments and all copies from 
> your system.

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



Re: [VOTE] Release Qpid Dispatch Router 1.2.0 (RC2)

2018-07-02 Thread Ted Ross
+1

Tested against Proton 0.23 and 0.24 (Fedora 27)

On Mon, Jul 2, 2018 at 10:55 AM, Chuck Rolke  wrote:
> +1
>
> * Verified download
> * Fedora 27
> * Built against proton 0.24-rc1 tag
> * Self tests pass
>
>
> - Original Message -
>> From: "Ganesh Murthy" 
>> To: users@qpid.apache.org
>> Sent: Friday, June 29, 2018 3:21:48 PM
>> Subject: [VOTE] Release Qpid Dispatch Router 1.2.0 (RC2)
>>
>> Hello All,
>>
>>  Please cast your vote on this thread to release RC2 as the
>> official Qpid Dispatch Router version  1.2.0.
>>
>> RC2 of Qpid Dispatch Router version 1.2.0 can be found here:
>>
>> https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.2.0-rc2/
>>
>> The following features, improvements, and bug fixes are introduced in 1.2.0:
>>
>> Features -
>> DISPATCH-970 - Add a chord view of message flow to console
>> DISPATCH-980 - Allow address translation on link routes
>> DISPATCH-1014 - Visualize link congestion on topology page
>>
>> Improvements -
>> DISPATCH-965 - Python 3 compatibility
>> DISPATCH-982 - Handle small form-factor screens
>> DISPATCH-1002 - Animate message flow on the console's topology page.
>> DISPATCH-1013 - Enable vhost policies to be used in the router
>> config file (not just through separate JSON files)
>> DISPATCH-1015 - Improve visualization of connection and link info
>> on console's topology page.
>> DISPATCH-1016 - Consolidate console style sheets to improve load time
>> DISPATCH-1017 - Use a javascript build system for the console
>> DISPATCH-1020 - Detach expiring links with closed=true when peer
>> connectivity lost
>> DISPATCH-1024 - Latest version of qpid-proton is causing build
>> issues on Travis, due to system tests using incorrect url with user and
>> password
>> DISPATCH-1049 - Add console tests
>> DISPATCH-1053 - Allow deliveries to be constrained to
>> router-control links by address state
>>
>> Bug fixes -
>> DISPATCH-969 - Dropdown menu doesn't work when browser is narrow
>> DISPATCH-976 - Allow policy for sources and targets to handle
>> multiple wildcards
>> DISPATCH-979 - self test mock policy manager does not forward
>> policy warnings
>> DISPATCH-984 - Json config file processing clobbers files with '#'
>> character in strings
>> DISPATCH-985 - Policy username substitution token is documented
>> incorrectly
>> DISPATCH-988 - Documentation of policy default vhost is wrong
>> DISPATCH-990 - Use patterns for policy vhost hostnames
>> DISPATCH-998 - Parse tree does not have remove function that takes
>> a string pattern
>> DISPATCH-1003 - Enable console support for connecting to listener
>> configured with saslMechanisms other than ANONYMOUS
>> DISPATCH-1008 - Router should preserve original connection
>> information when attempting to make failover connections
>> DISPATCH-1011 - Policy username substitution fails to match
>> certain user names
>> DISPATCH-1025 - User token not being replaced properly on a vhost
>> policy when defined in the prefix or suffix
>> DISPATCH-1026 - Router crashing when using
>> sourcePattern/targetPattern with multiple patterns and one of them
>> being user token when trying to open an unauthorized address
>> DISPATCH-1029 - State is not retained on Entities tree for console
>> DISPATCH-1030 - Empty table on Entities page of console
>> DISPATCH-1031 - Remove the links associated with a console from
>> the console's overview page
>> DISPATCH-1033 - Incorrect location for legend on Message flow page
>> in console
>> DISPATCH-1034 - saslPlugin option does not work with http option in
>> listener
>> DISPATCH-1036 - Dropdown lists on the Entity page are the wrong color
>> DISPATCH-1037 - Listeners with http enabled are not being shutdown
>> after they are deleted
>> DISPATCH-1041 - Add new test to validate global delivery counts
>> provided by the router
>> DISPATCH-1043 - In a two router network, qdstat -g is showing
>> non-zero values for "Ingress Count" even when no messages are sent
>> DISPATCH-1044 - Link routed deliveries not included in the global
>> transit and egress counts
>> DISPATCH-1045 - Sometimes close connection after releasing partial
>> multi-frame message
>> DISPATCH-1046 - system_tests_policy fail in python3 environment
>> DISPATCH-1047 - system_tests_ssl fail when running under python3
>> environment
>> DISPATCH-1048 - system_tests_http fail when run under python3 environment
>> DISPATCH-1050 - sasl delegation plugin should set SNI to match auth
>> address
>> DISPATCH-1051 - Python memory leak via PyLong_FromLong
>> DISPATCH-1052 - minor lock leak in policy code
>> DISPATCH-1056 - Build fails making docs on python-3-only fedora 28
>> DISPATCH-1058 - Fix leaks/other code issues found by Coverity
>>
>> Thanks.
>>

Re: [VOTE] Release Apache Qpid Proton 0.24.0

2018-06-26 Thread Ted Ross
+1

On Tue, Jun 26, 2018 at 3:22 PM, Justin Ross  wrote:
> +1.  Built on Fedora 26.  Built Dispatch against it.  Ran the Proton and
> Dispatch tests (no failures).  Ran Qpid JMS 0.24.0 RC1 against it in some
> benchmarks, both peer to peer and client server with Dispatch.
>
> On Tue, Jun 26, 2018 at 7:56 AM Robbie Gemmell 
> wrote:
>
>> Hi folks,
>>
>> I have put together a spin for a Qpid Proton 0.24.0 release, please
>> give it a test out and vote accordingly.
>>
>> The files can be grabbed from:
>> https://dist.apache.org/repos/dist/dev/qpid/proton/0.24.0-rc1/
>>
>> The JIRAs assigned are:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313720&version=12343063
>>
>> It is tagged as 0.24.0-rc1.
>>
>> Regards,
>> Robbie
>>




Re: Dispatch Config Store (was: Re: [Broker-J 7.0.3] Memory configuration store and messages recovery)

2018-06-26 Thread Ted Ross
On Mon, Jun 25, 2018 at 4:19 AM, Rob Godfrey  wrote:
> Hi Olivier,
>
> On Mon, 25 Jun 2018 at 09:40, VERMEULEN Olivier 
> wrote:
>
>> Ok so I must keep a persistent config store for the Broker.
> But if I'm not mistaken the Dispatch-Router only supports a non-persistent
>> config store.
>> So are there any plans to implement new config stores for the
>> dispatch-router?
>>
>>
> Updated the subject in the hopes that someone more familiar with the
> current and future capabilities of Dispatch can answer your questions.
>
> -- Rob

Thanks Rob,

Olivier is correct.  The configuration for Dispatch can be fully
captured in a configuration file.  However, changes made to the
configuration via the management protocol are not persistent.  The
router has no means for generating a configuration file based on its
current configuration.
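
As a hypothetical illustration of the kind of runtime change that is lost
today, an address created through qdmanage (the prefix and name are made
up):

    qdmanage create type=address prefix=news distribution=multicast name=news-addr

The change takes effect immediately via the management protocol, but after
a restart of qdrouterd it is gone unless the same entity is also added to
the configuration file.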

I would be interested in gathering requirements for persistent
configuration for Qpid Dispatch Router.  This is something that I've
given a bit of thought to.  It should be noted that the Enmasse
project (Messaging as a Service in Kubernetes/OpenShift) addresses
this in a limited way.

I believe a persistent configuration solution for Qpid Dispatch Router
should provide the following:

  - Synchronization of configuration across a set of routers in a network;
  - Multi-tenant isolation (i.e. configuration access/storage
separated by tenant/vhost);

-Ted

>
>
>> Thanks,
>> Olivier
>>
>> -Original Message-
>> From: Rob Godfrey 
>> Sent: vendredi 22 juin 2018 13:20
>> To: users@qpid.apache.org
>> Cc: Keith Wall 
>> Subject: Re: [Broker-J 7.0.3] Memory configuration store and messages
>> recovery
>>
>> Sure - but at that point you really need to provide the vhost config
>> before the vhost itself starts up... would providing all the config
>> (including the queue UUIDs) in a virtualHostInitialConfiguration provided
>> to the VirtualHostNode on creation help I wonder...
>>
>> -- Rob
>>
>> On Fri, 22 Jun 2018 at 12:14, VERMEULEN Olivier <
>> olivier.vermeu...@murex.com>
>> wrote:
>>
>> > Hello Rob,
>> >
>> > The main problem is to keep my version of the config in sync with the
>> > one of each broker.
>> > And knowing that all our brokers have the same config the added
>> > complexity seems a bit overkill.
>> > Especially when you start thinking about broker restart, scale up,
>> > scale down...
>> >
>> > Olivier
>> >
>> > -Original Message-
>> > From: Rob Godfrey 
>> > Sent: vendredi 22 juin 2018 11:27
>> > To: users@qpid.apache.org
>> > Cc: Keith Wall 
>> > Subject: Re: [Broker-J 7.0.3] Memory configuration store and messages
>> > recovery
>> >
>> > In general, in order to be able to recover the queues from a message
>> > store, the broker needs to know details of the queue - not only its
>> > name, but also the type of the queue (standard, priority, LVQ, etc) as
>> > well as other queue properties... this is precisely the data which is
>> > stored in the configuration store, I don't think there's really any
>> > way around that problem.  What sort of problems are you experiencing
>> > by using a non-Memory configuration store in your setup?
>> >
>> > -- Rob
>> >
>> > On Fri, 22 Jun 2018 at 10:47, VERMEULEN Olivier <
>> > olivier.vermeu...@murex.com>
>> > wrote:
>> >
>> > > Hello Keith,
>> > >
>> > > Thanks for the quick answer.
>> > > Our target is a cluster of brokers and dispatch-routers.
>> > > To configure it we created a management component that centralizes
>> > > the configuration of all the Qpid components.
>> > > With this approach, having the broker store its own configuration is
>> > > redundant with what our management component does and causes some
>> > problems.
>> > > That's why I was looking into the memory configuration store which
>> > > would have made the broker behave like the dispatch-router regarding
>> > > dynamic configuration.
>> > >
>> > > Olivier
>> > >
>> > > -Original Message-
>> > > From: Keith W 
>> > > Sent: vendredi 22 juin 2018 10:14
>> > > To: users@qpid.apache.org
>> > > Subject: Re: [Broker-J 7.0.3] Memory configuration store and
>> > > messages recovery
>> > >
>> > > Hello Olivier
>> > >
>> > > Let me say first that the memory store's primary use-case is to aid
>> > development and testing of Broker-J, by speeding up the test cycle.
>> > It also has utility when embedding Broker-J in the unit tests
>> > > of another project and you want to discard all messaging state
>> > > between
>> > tests.
>> > >
>> > > You are correct in your analysis.   Queues (like all other objects)
>> > > are allocated a random UUID on creation.  This UUID is stored in the
>> > > configuration store together with the queue's name and all other
>> > > attributes.  Within the message store, the message instance records
>> > > refer to the queue's UUID rather than its name.  So if your set-up
>> > > is a persistent message store and volatile configuration store, yes,
>> > > you will lose messages.  This is expected.  The recovery phase of
>> 

Re: [dispatch] Seeking comments before landing a change to the docs tree

2018-06-26 Thread Ted Ross
On Tue, Jun 26, 2018 at 8:32 AM, Justin Ross  wrote:
> https://github.com/apache/qpid-dispatch/pull/296
>
> Ted, Ganesh, is this okay to merge?

Yes, do you wish to do it yourself?




Re: [VOTE] Release Qpid Dispatch Router 1.1.0 (RC4)

2018-05-29 Thread Ted Ross
+1

- Built on Fedora 27 against Proton 0.23.0.  Ran the test suite.
- Reviewed the configuration file format for backward-incompatible changes.

On Mon, May 28, 2018 at 8:44 PM, Chuck Rolke  wrote:
> +1
>
> * Built from source with Proton at d28fecf, three commits after 0.23.0.
> * Ran self tests
>
>
> - Original Message -
>> From: "Ganesh Murthy" 
>> To: users@qpid.apache.org
>> Sent: Friday, May 25, 2018 5:45:30 PM
>> Subject: [VOTE] Release Qpid Dispatch Router 1.1.0 (RC4)
>>
>> Hello All,
>>
>>  Please cast your vote on this thread to release RC4 as the
>> official Qpid Dispatch Router version  1.1.0.
>>
>> RC4 of Qpid Dispatch Router version 1.1.0 can be found here:
>>
>> https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.1.0-rc4/
>>
>> The following features, improvements, and bug fixes are introduced in 1.1.0:
>>
>> Features -
>> DISPATCH-89 - Model the legacy topic exchange behavior of qpidd
>> DISPATCH-834 - Create config tool to create/read/update/delete
>> router config files
>> DISPATCH-856 - Return router's hostname as a read-only attribute
>> on the router entity
>> DISPATCH-892 - Support code coverage testing
>> DISPATCH-911 - Add link and address level counters at the global
>> router level
>> DISPATCH-932 - Provide per-ingress router counts for deliveries on
>> egress links
>>
>> Improvements -
>> DISPATCH-859 - Introduce SYSTEMD and SYSVINIT cmake switches to
>> install files accordingly
>> DISPATCH-861 - Update to recent rhea.js
>> DISPATCH-864 - Remove the SYSTEMD and SYSVINIT flags introduced by
>> DISPATCH-859
>> DISPATCH-872 - Add a counter for dropped-presettleds on links
>> DISPATCH-878 - qdrouterd should log real port if port 0 was
>> specified for the listener port property in qdrouterd.conf
>> DISPATCH-884 - Add schema property to allow configurable TLS
>> protocol versions
>> DISPATCH-885 - Modify qd_compose_insert_[string,symbol,binary] to
>> add zero-length [string, symbol, binary] for null input
>> DISPATCH-888 - Balanced distribution algorithm visits each link to
>> determine the best_eligible_link
>> DISPATCH-901 - add authz support to auth service plugin
>> DISPATCH-904 - Add charts to overview page
>> DISPATCH-918 - Improve router config consistency and metadata
>> DISPATCH-921 - Install console dependencies with npm during make install
>> DISPATCH-923 - Clean up javascript to pass eslint tests without errors
>> DISPATCH-925 - Doc: Update anchor name format
>> DISPATCH-938 - Doc: Remove the "Configuration Reference"
>> DISPATCH-942 - allow resumable link routes to be refused
>> DISPATCH-946 - Detect if npm install needs to be executed and
>> display a message
>> DISPATCH-951 - log details for the proton found during build
>> DISPATCH-963 - Router crash during shutdown in system_tests_distribution
>> DISPATCH-971 - Revert DISPATCH-744 - Don't reject unsettled multicasts
>> DISPATCH-972 - Dispatch Router doc should be consistent with "sudo" usage
>>
>> Bug fixes -
>> DISPATCH-580 - Log stats should be graphable
>> DISPATCH-590 - List of log modules on the overview page is
>> occasionally doubled
>> DISPATCH-801 - Stand-alone version of the console does not open at
>> all when running offline
>> DISPATCH-831 - Change connector.cost default value to 1 instead of '1'
>> DISPATCH-869 - Multiple brokers in a topology are displayed as a
>> single broker
>> DISPATCH-875 - Document address and link route wildcards
>> DISPATCH-876 - config file linkRoute should use connection instead
>> of connector
>> DISPATCH-877 - Document how to configure TLS ciphers
>> DISPATCH-879 - Document how Dispatch Router uses alternate failover URLs
>> DISPATCH-880 - Document how Dispatch Router disconnects connections
>> DISPATCH-886 - Console does not properly escape HTML in entity names
>> DISPATCH-891 - Router incref assert in system_tests_delivery_abort
>> DISPATCH-893 - Compile fails using libwebsockets 7
>> DISPATCH-894 - Unable to run system tests on CentOS 6 (Python 2.6)
>> DISPATCH-902 - Intermittent crash with link to broker when broker closed
>> DISPATCH-905 - Dispatch Router not failing over to slave broker
>> when master broker goes away
>> DISPATCH-907 - cannot set address phase via qdmanage tool
>> DISPATCH-910 - Inter-router connections with dir 'in' have no host name
>> DISPATCH-912 - system_tests_user_id_proxy and system_tests_policy failing
>> DISPATCH-915 - connection rhost  not calculated soon enough
>> DISPATCH-916 - qdmanage get-attributes and get-operations not
>> taking into account passed in type
>> DISPATCH-919 - Display a warning when running Dispatch tests if
>> python-unittest2 is not installed
>> DISPATCH-922 - Subsecond timestamps improperly formatted
>> DISPATCH-927 - detach not echoed back on multi-hop link route
>> DISPATCH-928 - calling 

Re: Handling of undeliverable messages in Dispatch Router

2018-05-29 Thread Ted Ross
 >> >
>> >> > +1 - I've looked at revoking credit before. AMQP does allow it but we
>> >> don't
>> >> > test our clients for it and I'd be surprised if it actually worked.
>> >> > Proton-C models it as negative credit, which is surprising and wrong -
>> >> IMO
>> >> > to correctly handle revoked credit the client needs to keep a stack of
>> >> past
>> >> > credit history, pushing onto the stack each time it loses credit and
>> >> > popping when the consequences of that batch of "lost credit" have been
>> >> > worked out. The simple algorithm of a single credit number works
>> properly
>> >> > only if credit is never revoked. Proton's negative credit doesn't give
>> >> you
>> >> > enough info to handle in-flight messages except in simple cases, and a
>> >> > signed/unsigned conversion slip turns revoked credit into
>> near-infinite
>> >> > credit. Wahey!
>> >> >
>> >> > It's probably something we should address and test across the board
>> since
>> >> > it is a feature of AMQP, but we haven't needed it so far and I would
>> >> avoid
>> >> > it if there's another solution as even if we do put our house in order
>> >> > it'll be a backwards interop issue for a long time.
>> >> >
>> >> >
>> >> >> Overall, I also like the idea of releasing the messages. Keeping the
>> >> >> messages around increases the router's
>> >> >> memory footprint. The messages could stay forever in the sender's
>> >> >> undelivered FIFO if the sender does not disconnect
>> >> >> and a receiver never shows up.
>> >> >>
>> >> >> >
>> >> >> > - Original Message -
>> >> >> > > From: "Ted Ross" 
>> >> >> > > To: users@qpid.apache.org
>> >> >> > > Sent: Thursday, May 24, 2018 11:59:32 AM
>> >> >> > > Subject: Re: Handling of undeliverable messages in Dispatch
>> Router
>> >> >> > >
>> >> >> > > I've given this a bit more thought and I think that the second
>> >> option
>> >> >> > > is the correct one.  Philosophically, Qpid Dispatch Router is
>> about
>> >> >> > > minimizing the number of deliveries in flight.  This reduces
>> >> latency,
>> >> >> > > reduces memory use, and increases aggregate capacity and scale.
>> >> >> > >
>> >> >> > > Releasing rather than holding undeliverable messages is more
>> in-line
>> >> >> > > with this philosophy.  TTL should be implemented in brokers that
>> >> hold
>> >> >> > > messages in queues for extended periods of time.
>> >> >> > >
>> >> >> > > I'll raise a Jira for this.
>> >> >> > >
>> >> >> > > -Ted
>> >> >> > >
>> >> >> > > On Thu, May 24, 2018 at 7:56 AM, Ted Ross 
>> wrote:
>> >> >> > > > Hi Kai,
>> >> >> > > >
>> >> >> > > > What you describe is the current behavior of the router.  When
>> the
>> >> >> > > > consumer detaches, the router does not revoke the credit
>> already
>> >> >> given
>> >> >> > > > to the producer.  There are two ways we can address this issue
>> (I
>> >> >> > > > agree that the current behavior is not optimal).
>> >> >> > > >
>> >> >> > > > We could implement time-to-live expiration so the delivery
>> would
>> >> be
>> >> >> > > > rejected if it sits in the buffer longer than the specified
>> TTL.
>> >> >> > > >
>> >> >> > > > Alternatively, we could release deliveries for which there is
>> no
>> >> >> > > > longer a valid destination.  This leaves the "retry or not"
>> >> decision
>> >> >> > > > up to the producer.
>> >> >> > > >
>> >> >> > > > Thoughts?
>> >> >> > > >
>> >> >> > > > -Ted
>>

Re: Handling of undeliverable messages in Dispatch Router

2018-05-25 Thread Ted Ross
https://issues.apache.org/jira/browse/DISPATCH-1012

On Fri, May 25, 2018 at 8:20 AM, Hudalla Kai (INST/ECS4)
<kai.huda...@bosch-si.com> wrote:
> I agree, releasing undeliverable messages sounds like the reasonable thing to 
> do and would indeed solve our problem.
>
>
> @Ted: can you post the JIRA's URL so that we can track it?
>
> ________
> From: Ted Ross <tr...@redhat.com>
> Sent: Thursday, May 24, 2018 5:59:32 PM
> To: users@qpid.apache.org
> Subject: Re: Handling of undeliverable messages in Dispatch Router
>
> I've given this a bit more thought and I think that the second option
> is the correct one.  Philosophically, Qpid Dispatch Router is about
> minimizing the number of deliveries in flight.  This reduces latency,
> reduces memory use, and increases aggregate capacity and scale.
>
> Releasing rather than holding undeliverable messages is more in-line
> with this philosophy.  TTL should be implemented in brokers that hold
> messages in queues for extended periods of time.
>
> I'll raise a Jira for this.
>
> -Ted
>
> On Thu, May 24, 2018 at 7:56 AM, Ted Ross <tr...@redhat.com> wrote:
>> Hi Kai,
>>
>> What you describe is the current behavior of the router.  When the
>> consumer detaches, the router does not revoke the credit already given
>> to the producer.  There are two ways we can address this issue (I
>> agree that the current behavior is not optimal).
>>
>> We could implement time-to-live expiration so the delivery would be
>> rejected if it sits in the buffer longer than the specified TTL.
>>
>> Alternatively, we could release deliveries for which there is no
>> longer a valid destination.  This leaves the "retry or not" decision
>> up to the producer.
>>
>> Thoughts?
>>
>> -Ted
>>
>> On Thu, May 24, 2018 at 4:53 AM, Hudalla Kai (INST/ECS4)
>> <kai.huda...@bosch-si.com> wrote:
>>> Hi,
>>>
>>>
>>> we are experiencing some unwanted/unexpected behavior when using message 
>>> routing in Dispatch Router 1.0.1.
>>>
>>>
>>>   1.  Receiver opens a receiver link on control/my-tenant/my-device
>>>   2.  Sender opens a sender link on control/my-tenant/my-device
>>>   3.  Sender gets credit from the router
>>>   4.  Receiver closes its link with the router
>>>   5.  Sender sends an unsettled message on its sender link
>>>   6.  dispatch router neither accepts nor rejects the message; in fact, the
>>> sender does not get any disposition at all
>>>   7.  As soon as the receiver opens a new link on the address, it gets the 
>>> message
>>>
>>> Is this the intended behavior? The Dispatch Router book states in section 
>>> 4.2 [1]:
>>>
>>>
>>> Address semantics include the following considerations:
>>>
>>>   *   Routing pattern - direct, multicast, balanced
>>>
>>>   *   Undeliverable action - drop, hold and retry, redirect
>>>
>>>   *   Reliability - N destinations, etc.
>>>
>>> In particular, the "undeliverable action" seems to be of importance here 
>>> (the default seems to be "hold and retry"). Is this configurable? In our 
>>> case it would be more desirable to have the router reject the message 
>>> instead.
>>>
>>>
>>> [1] 
>>> https://qpid.apache.org/releases/qpid-dispatch-1.0.1/book/index.html#addressing
>>>
>>>
>>> Mit freundlichen Grüßen / Best regards
>>>
>>> Kai Hudalla
>>> Chief Software Architect
>>>
>>> Bosch Software Innovations GmbH
>>> Ullsteinstraße 128
>>> 12109 Berlin
>>> GERMANY
>>> www.bosch-si.com
>>>
>




Re: [VOTE] Release Apache Qpid Proton 0.23.0

2018-05-24 Thread Ted Ross
+1

- Built from sources and successfully ran the tests (on Fedora 27)
- Built Dispatch Router 1.1.x against 0.23.0 and successfully ran the tests
- Built Dispatch Router master against 0.23.0 and successfully ran the tests

On Thu, May 24, 2018 at 6:27 AM, Timothy Bish  wrote:
> On 05/23/2018 03:53 PM, Robbie Gemmell wrote:
>>
>> I have put together a spin for a Qpid Proton 0.23.0 release, please
>> give it a test out and vote accordingly.
>>
>> The source archive can be grabbed from:
>> https://dist.apache.org/repos/dist/dev/qpid/proton/0.23.0-rc1/
>>
>> The JIRAs assigned are:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313720&version=12342974
>>
>> It is tagged as 0.23.0-rc1.
>>
>> NOTE: Dispatch 1.0.1 compiles and runs against 0.23.0, but many of its
>> tests will fail due to the removal of Messenger. Dispatch 1.1.0 will
>> follow from its master soon and resolve that.
>>
>> Regards,
>> Robbie
>>
> +1
>
> * Validated signatures and checksums
> * Checked for license and notice files
> * Built from source and ran the tests
> * Ran C, CPP, and Python examples against an Artemis broker install
>
>
> --
> Tim Bish
> twitter: @tabish121
> blog: http://timbish.blogspot.com/
>
>
>




Re: Handling of undeliverable messages in Dispatch Router

2018-05-24 Thread Ted Ross
I've given this a bit more thought and I think that the second option
is the correct one.  Philosophically, Qpid Dispatch Router is about
minimizing the number of deliveries in flight.  This reduces latency,
reduces memory use, and increases aggregate capacity and scale.

Releasing rather than holding undeliverable messages is more in-line
with this philosophy.  TTL should be implemented in brokers that hold
messages in queues for extended periods of time.

I'll raise a Jira for this.

-Ted

On Thu, May 24, 2018 at 7:56 AM, Ted Ross <tr...@redhat.com> wrote:
> Hi Kai,
>
> What you describe is the current behavior of the router.  When the
> consumer detaches, the router does not revoke the credit already given
> to the producer.  There are two ways we can address this issue (I
> agree that the current behavior is not optimal).
>
> We could implement time-to-live expiration so the delivery would be
> rejected if it sits in the buffer longer than the specified TTL.
>
> Alternatively, we could release deliveries for which there is no
> longer a valid destination.  This leaves the "retry or not" decision
> up to the producer.
>
> Thoughts?
>
> -Ted
>
> On Thu, May 24, 2018 at 4:53 AM, Hudalla Kai (INST/ECS4)
> <kai.huda...@bosch-si.com> wrote:
>> Hi,
>>
>>
>> we are experiencing some unwanted/unexpected behavior when using message 
>> routing in Dispatch Router 1.0.1.
>>
>>
>>   1.  Receiver opens a receiver link on control/my-tenant/my-device
>>   2.  Sender opens a sender link on control/my-tenant/my-device
>>   3.  Sender gets credit from the router
>>   4.  Receiver closes its link with the router
>>   5.  Sender sends an unsettled message on its sender link
>>   6.  dispatch router neither accepts nor rejects the message; in fact, the
>> sender does not get any disposition at all
>>   7.  As soon as the receiver opens a new link on the address, it gets the 
>> message
>>
>> Is this the intended behavior? The Dispatch Router book states in section 
>> 4.2 [1]:
>>
>>
>> Address semantics include the following considerations:
>>
>>   *   Routing pattern - direct, multicast, balanced
>>
>>   *   Undeliverable action - drop, hold and retry, redirect
>>
>>   *   Reliability - N destinations, etc.
>>
>> In particular, the "undeliverable action" seems to be of importance here 
>> (the default seems to be "hold and retry"). Is this configurable? In our 
>> case it would be more desirable to have the router reject the message 
>> instead.
>>
>>
>> [1] 
>> https://qpid.apache.org/releases/qpid-dispatch-1.0.1/book/index.html#addressing
>>
>>
>> Mit freundlichen Grüßen / Best regards
>>
>> Kai Hudalla
>> Chief Software Architect
>>
>> Bosch Software Innovations GmbH
>> Ullsteinstraße 128
>> 12109 Berlin
>> GERMANY
>> www.bosch-si.com
>>




Re: Handling of undeliverable messages in Dispatch Router

2018-05-24 Thread Ted Ross
Hi Kai,

What you describe is the current behavior of the router.  When the
consumer detaches, the router does not revoke the credit already given
to the producer.  There are two ways we can address this issue (I
agree that the current behavior is not optimal).

We could implement time-to-live expiration so the delivery would be
rejected if it sits in the buffer longer than the specified TTL.

Alternatively, we could release deliveries for which there is no
longer a valid destination.  This leaves the "retry or not" decision
up to the producer.
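
To sketch what the second option looks like from the producer's side, a
minimal Proton Python handler (the URL, address, and retry policy are
illustrative, not part of the router):

from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container

class RetryingSender(MessagingHandler):
    def __init__(self, url, address):
        super(RetryingSender, self).__init__()
        self.url = url
        self.address = address
        self.sent = False
        self.retries = 0

    def on_start(self, event):
        conn = event.container.connect(self.url)
        self.sender = event.container.create_sender(conn, self.address)

    def on_sendable(self, event):
        if not self.sent:
            self.sender.send(Message(body="hello"))
            self.sent = True

    def on_accepted(self, event):
        event.connection.close()

    def on_released(self, event):
        # The router released the delivery (no valid destination).
        # Retrying, or giving up, is now the producer's decision.
        if self.retries < 3:
            self.retries += 1
            self.sender.send(Message(body="hello"))
        else:
            event.connection.close()

Container(RetryingSender("localhost:5672", "control/my-device")).run()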

Thoughts?

-Ted

On Thu, May 24, 2018 at 4:53 AM, Hudalla Kai (INST/ECS4)
 wrote:
> Hi,
>
>
> we are experiencing some unwanted/unexpected behavior when using message 
> routing in Dispatch Router 1.0.1.
>
>
>   1.  Receiver opens a receiver link on control/my-tenant/my-device
>   2.  Sender opens a sender link on control/my-tenant/my-device
>   3.  Sender gets credit from the router
>   4.  Receiver closes its link with the router
>   5.  Sender sends an unsettled message on its sender link
>   6.  dispatch router neither accepts nor rejects the message; in fact, the
> sender does not get any disposition at all
>   7.  As soon as the receiver opens a new link on the address, it gets the 
> message
>
> Is this the intended behavior? The Dispatch Router book states in section 4.2 
> [1]:
>
>
> Address semantics include the following considerations:
>
>   *   Routing pattern - direct, multicast, balanced
>
>   *   Undeliverable action - drop, hold and retry, redirect
>
>   *   Reliability - N destinations, etc.
>
> In particular, the "undeliverable action" seems to be of importance here (the 
> default seems to be "hold and retry"). Is this configurable? In our case it 
> would be more desirable to have the router reject the message instead.
>
>
> [1] 
> https://qpid.apache.org/releases/qpid-dispatch-1.0.1/book/index.html#addressing
>
>
> Mit freundlichen Grüßen / Best regards
>
> Kai Hudalla
> Chief Software Architect
>
> Bosch Software Innovations GmbH
> Ullsteinstraße 128
> 12109 Berlin
> GERMANY
> www.bosch-si.com
>




Re: Service Bus, connecting services behind firewall

2018-05-15 Thread Ted Ross
On Tue, May 15, 2018 at 2:38 AM, Gordon Sim  wrote:
> On 15/05/18 04:49, Mansour Al Akeel wrote:
>>
>> Gordon,
>> Thank you for replying. I am sorry I didn't explain it well. Maybe I was
>> relying on the link on ServiceMix mailing list. I will try to explain it
>> again here.
>>
>> We have two systems, on separate networks. System A and System B. Each of
>> them provide different services. Some services in System A, need to call
>> services on System B. However, since each of them on a different network,
>> we have to rely on opening holes in the firewalls.
>>
>> I think we can replace the existing setup with AMQP. For example, a
>> service on system A (Service A) can call a service on System B (Service
>> B) by connecting to an instance of QPID, and send a message to a queue
>> called INPUT_A. Service B would get this message, and process it, then
>> reply on queue OUTPUT_B. Service A would then select the reply based on
>> CorrelationId, and match it with the request.

There's a much better pattern for request/reply that uses temporary
addresses for the client.  This causes replies to be sent directly to
the clients without the need for a selector.
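
Roughly: the client attaches a dynamic (temporary) receiver and uses its
address as the reply-to. A minimal Proton Python sketch along the lines of
the client example (the URL and request address are made up):

from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container

class RequestClient(MessagingHandler):
    def __init__(self, url, address):
        super(RequestClient, self).__init__()
        self.url = url
        self.address = address

    def on_start(self, event):
        conn = event.container.connect(self.url)
        self.sender = event.container.create_sender(conn, self.address)
        # dynamic=True asks the router to allocate a temporary address
        self.receiver = event.container.create_receiver(conn, None, dynamic=True)

    def on_link_opened(self, event):
        if event.receiver == self.receiver:
            # the temporary address is our reply-to; no selector is needed
            self.sender.send(Message(body="request",
                                     reply_to=self.receiver.remote_source.address))

    def on_message(self, event):
        print("reply:", event.message.body)
        event.connection.close()

Container(RequestClient("localhost:5672", "INPUT_A")).run()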

>
>
> Yes, you can use queues. You can also communicate end-to-end using the
> dispatch router.

+1..  Queues are not a real benefit for the Request/Reply pattern
unless the clients and servers are not present at the same time.
Using the router allows you to get the cross-datacenter benefit
reliably without the need for clustered brokers or backing stores.

>
>> Therefore we can establish a connection between those two services.
>> This is summarized in the request/response pattern. I was wondering if
>> there's a library that allows us to do this without having to deal with the
>> details. After searching, I found something close to what I have in mind
>> (and you replied to the user).
>>
>>
>> http://qpid.2158936.n2.nabble.com/RPC-over-AMQP-with-Hessian-td5103076.html
>>
>> Here's an RPC over messaging, (or Request/Reply) implementation by
>> RabbitMQ, https://www.rabbitmq.com/tutorials/tutorial-six-java.html
>
>
> There is a simple client/server (i.e. request/response) example included in
> the qpid jms examples[1][2]
>
>> I will check out the project "https://github.com/ebourg/qpid-hessian" and
>> test it out. If there's nothing similar in Qpid, it will be nice to
>> have it.
>
>
> That looks pretty old, I suspect you will have to tweak things to get it to
> work at all.
>
>
> [1]
> https://git1-us-west.apache.org/repos/asf?p=qpid-jms.git;a=blob;f=qpid-jms-examples/src/main/java/org/apache/qpid/jms/example/Client.java;h=482af7954c28d4e99c0bafc7b689e97fe17869b6;hb=HEAD
> [2]
> https://git1-us-west.apache.org/repos/asf?p=qpid-jms.git;a=blob;f=qpid-jms-examples/src/main/java/org/apache/qpid/jms/example/Server.java;h=ef12d6b6d67fcf654ebb49af9a49e30aad5a61c3;hb=HEAD
>
>




Re: [VOTE] Release Qpid Dispatch Router 1.1.0 (RC1)

2018-04-25 Thread Ted Ross
-1 from me based on Gordon's observation.  This should be an easy fix.

-Ted

On Wed, Apr 25, 2018 at 5:45 PM, Gordon Sim  wrote:
> On 25/04/18 15:28, Ganesh Murthy wrote:
>>
>> Hello All,
>>
>>   Please cast your vote on this thread to release RC1 as the
>> official Qpid Dispatch Router version  1.1.0.
>>
>> RC1 of Qpid Dispatch Router version 1.1.0 can be found here:
>>
>> https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.1.0-rc1/
>
>
> Minor (but potentially annoying) backwards compatibility issue: there are
> some attributes that used to be on the router entity type that are now only
> on the new routerStats (connectionCount etc). It would be nice to retain
> backward compat if possible, so tentative -1 if that would be relatively
> easy to do.
>




Re: Proposed Feature Removal from Dispatch Router

2018-04-12 Thread Ted Ross
For the record, here is the Jira for the feature in question:

https://issues.apache.org/jira/browse/DISPATCH-744

On Thu, Apr 12, 2018 at 6:20 PM, Ted Ross <tr...@redhat.com> wrote:
> We added a feature back in 1.0.0 to reject unsettled deliveries to
> multicast addresses by default.  This can be disabled through
> configuration but is on by default.
>
> The rationale was that the router would accept and settle unsettled
> multicasts even though it might not have delivered the messages to any
> consumer.  The rejection with error code was intended to inform users
> that they should pre-settle deliveries to multicast addresses in
> keeping with the best-effort nature of multicast routing.
>
> In practice, this is more of an annoyance because none of the example
> clients (and apparently the users' clients) actually do anything with
> the error code in the rejected delivery.  The router appears to
> silently drop such messages for no good reason and good will is wasted
> in chasing down the issue to "oh, you should turn off this handy
> feature".
>
> The recently raised https://issues.apache.org/jira/browse/DISPATCH-966
> is caused by this feature as well.  This is because the router can
> stream large messages in multiple transfers.  The first transfer is
> used for routing and the last transfer should be used to determine the
> settlement status of the delivery.  It is not a trivial fix to make
> this work correctly.
>
> For the above two reasons, I propose that we back out this feature and
> allow multicasting with unsettled deliveries.  We should add a clear
> note in the documentation that states that multicast is best-effort,
> regardless of the settlement status of the deliveries.
>
> Any objections from the users?
>
> -Ted




Proposed Feature Removal from Dispatch Router

2018-04-12 Thread Ted Ross
We added a feature back in 1.0.0 to reject unsettled deliveries to
multicast addresses by default.  This can be disabled through
configuration but is on by default.

The rationale was that the router would accept and settle unsettled
multicasts even though it might not have delivered the messages to any
consumer.  The rejection with error code was intended to inform users
that they should pre-settle deliveries to multicast addresses in
keeping with the best-effort nature of multicast routing.
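
For reference, one way a Proton Python client pre-settles its deliveries
is the AtMostOnce sender option (a sketch; the URL and address are made
up):

from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import AtMostOnce, Container

class MulticastSender(MessagingHandler):
    def __init__(self, url, address):
        super(MulticastSender, self).__init__()
        self.url = url
        self.address = address

    def on_start(self, event):
        conn = event.container.connect(self.url)
        # AtMostOnce negotiates snd-settle-mode "settled", so every
        # delivery is pre-settled -- matching multicast's best effort
        self.sender = event.container.create_sender(
            conn, self.address, options=AtMostOnce())

    def on_sendable(self, event):
        event.sender.send(Message(body="fire and forget"))
        event.connection.close()

Container(MulticastSender("localhost:5672", "multicast/events")).run()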

In practice, this is more of an annoyance because none of the example
clients (and apparently the users' clients) actually do anything with
the error code in the rejected delivery.  The router appears to
silently drop such messages for no good reason and good will is wasted
in chasing down the issue to "oh, you should turn off this handy
feature".

The recently raised https://issues.apache.org/jira/browse/DISPATCH-966
is caused by this feature as well.  This is because the router can
stream large messages in multiple transfers.  The first transfer is
used for routing and the last transfer should be used to determine the
settlement status of the delivery.  It is not a trivial fix to make
this work correctly.

For the above two reasons, I propose that we back out this feature and
allow multicasting with unsettled deliveries.  We should add a clear
note in the documentation that states that multicast is best-effort,
regardless of the settlement status of the deliveries.

Any objections from the users?

-Ted




Re: Qpid Dispatch Router release (1.1.0)

2018-04-05 Thread Ted Ross
On Tue, Apr 3, 2018 at 12:24 PM, Ted Ross <tr...@redhat.com> wrote:
> Since the release of Qpid Proton 0.22.0, Qpid Dispatch Router's tests
> fail against the latest released Proton because of the removal of the
> deprecated Messenger API.  We are planning to push some of the
> in-progress Jiras to the next release so we can produce version 1.1.0
> soon.
>
> Please look for a release candidate for 1.1.0 this week.

This will not arrive until next week.  We haven't yet closed out all the issues.

>
> -Ted




Qpid Dispatch Router release (1.1.0)

2018-04-03 Thread Ted Ross
Since the release of Qpid Proton 0.22.0, Qpid Dispatch Router's tests
fail against the latest released Proton because of the removal of the
deprecated Messenger API.  We are planning to push some of the
in-progress Jiras to the next release so we can produce version 1.1.0
soon.

Please look for a release candidate for 1.1.0 this week.

-Ted




Re: qpidd C++ broker memory leak on Redhat 6.8?

2018-03-05 Thread Ted Ross
The problem that I'm familiar with is specific to RHEL6.  I'm not
aware of any available patch for this issue (if that's even what you
are experiencing).  Do you have the ability to test using RHEL7?

On Mon, Mar 5, 2018 at 3:09 PM, jbelch  wrote:
> Any other thoughts?  It doesn't seem to be an issue on earlier versions of
> glibc.  Is that correct?  If so, is there an OS patch or something?  We may
> have to abandon the qpid broker and go to something else if we can't figure
> this out.
>
>
>
> --
> Sent from: http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html
>




Re: [VOTE] Release Apache Qpid Proton 0.21.0

2018-02-28 Thread Ted Ross
+1
 - Built and Installed on Fedora 27
 - Proton tests all pass
 - Built qpid-dispatch master against 0.21.0, no build issues
 - Dispatch tests all pass


On Wed, Feb 28, 2018 at 2:33 PM, Robbie Gemmell
 wrote:
> Hi folks,
>
> I have put together a spin for a Qpid Proton 0.21.0 release, please
> give it a test out and vote accordingly.
>
> The source archive can be grabbed from:
> https://dist.apache.org/repos/dist/dev/qpid/proton/0.21.0-rc1/
>
> The JIRAs assigned are:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313720&version=12342274
>
> It is tagged as 0.21.0-rc1.
>
> Regards,
> Robbie
>




Re: qpidd C++ broker memory leak on Redhat 6.8?

2018-02-27 Thread Ted Ross
This is an _old_ post that addresses a similar issue and proposes a
solution.  I believe this was specific to RHEL 6.

http://qpid.2158936.n2.nabble.com/qpidd-using-approx-10x-memory-tp6730073p6775634.html
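
For convenience, the gist of that thread (as I recall it) is glibc's
per-thread malloc arenas on RHEL6. One commonly cited mitigation, shown
here as an illustrative sketch to verify for your own environment:

    # cap the number of glibc malloc arenas before starting the broker
    export MALLOC_ARENA_MAX=2
    qpidd --config /etc/qpid/qpidd.conf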

-Ted

On Tue, Feb 27, 2018 at 9:49 AM, jbelch  wrote:
> I have used the qpidd C++ broker on several projects over the years.  I
> started using 0.7 about 6 years ago, used 0.20 about 4 years ago, and I am
> currently using 0.34.  I have noticed a memory leak when running on the
> system.  qpidd starts out at about 10mb resident memory and the resident
> memory usage increases about 30mb each day.  After 75 days, we are using
> resident memory in the 3gb range.  We have a mix of durable queues and
> topics, probably about 20 total.  When I use the qpid-stat script, there
> don't appear to be any queues or topics with messages hanging around.  When
> I run valgrind, it gives me a report that states there are no memory leaks.
> Does anyone have any idea?
>
>
>
> --
> Sent from: http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html
>




[ANNOUNCE] Apache Qpid Dispatch 1.0.1 released

2018-02-26 Thread Ted Ross
The Apache Qpid (http://qpid.apache.org) community is pleased to
announce the immediate availability of Apache Qpid Dispatch 1.0.1.

Qpid Dispatch is a router for the Advanced Message Queuing Protocol 1.0
(AMQP 1.0, ISO/IEC 19464, http://www.amqp.org). It provides a flexible
and scalable interconnect between AMQP endpoints, whether they be clients,
brokers, or other AMQP-enabled services.

The release is available now from our website:
https://qpid.apache.org/releases/qpid-dispatch-1.0.1/index.html

Release notes can be found at:
http://qpid.apache.org/releases/qpid-dispatch-1.0.1/release-notes.html

Thanks to all involved.




[VOTE] Release Qpid Dispatch Router 1.0.1 (RC1)

2018-02-20 Thread Ted Ross
Please vote on this thread to release qpid-dispatch 1.0.1-rc1 as the
official 1.0.1.

The release can be found here:

https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.0.1-rc1/

The following defects were fixed in this release:

DISPATCH-874 - unable to load .json or .woff2 files from local
file system from http port
DISPATCH-881 - Inbound pre-settled messages cause memory leak of deliveries
DISPATCH-882 - router buffers messages for slow presettled receiver
DISPATCH-883 - Router crashes when it processes management request
for connections
DISPATCH-887 - Dispatch reestablishes connection in spite of
deleting the connector
DISPATCH-889 - linkRoute patterns beginning with #/string match
substrings after the /
DISPATCH-895 - qpid-dispatch crashes with a SEGFAULT in libqpid-proton
DISPATCH-900 - Memory leak when repeatedly opening and closing connections
DISPATCH-908 - Router loses dispositions over receive link on
qpid-interop-test 2-node test
DISPATCH-914 - qd_connector_t leaks mutexes
DISPATCH-920 - Enabled policy blocks inter-router links




Re: Dispatch router 1.0.0 seems to not always propagate link credit when using link routing

2018-02-20 Thread Ted Ross
Marcel,

I don't see anything that you are doing wrong in this case.  I will
see if I can reproduce your result.

-Ted

On Tue, Feb 20, 2018 at 9:57 AM, Marcel Meulemans
 wrote:
> I am experiencing a problem while using the dispatch router as a proxy to
> the artemis broker. I have done some investigation and it looks like the
> dispatch router is not always forwarding link credit to the broker.
>
> I have the following setup:
> proton clients 0.9.0 (old, I know :p) <---> qdrouterd 1.0.0 <---> artemis
> 2.4.0
>
> The clients are doing some anycast messaging between themselves and artemis
> is providing some queues through which these messages are flowing. Qpid
> dispatch is sitting in between and doing link routing (see the
> qdrouterd.conf below). At around 100 clients (all connecting at the same
> time) things start to go wrong, as in some messages do not reach their
> destinations (about 10% of the clients are affected). The messages actually
> "hang" in artemis because artemis cannot deliver the messages because the
> link the message should be delivered on has no credit. After some
> investigation it looks like qdrouterd is not always forwarding the credit
> given by the clients to the link-routing link it set up with artemis. I have
> attached two trace logs from qdrouterd (formatted a bit so you can put them
> side by side) illustrating the problem. The flow.log is the filtered trace
> output showing a successful session, the no-flow.log is a failing session.
> You can see in the failing session that the flow frames for two newly
> attached links are not "forwarded" over the link-route link as I expect they
> should (and which is the case in the successful session).
>
> Am I doing something wrong or is this a bug?
>
> Cheers,
> Marcel
>
>
> // qdrouterd.conf
>
> router {
> mode: standalone
> id: arcbus.proxy
> }
>
> listener {
> host: 0.0.0.0
> port: 5672
> authenticatePeer: no
> saslMechanisms: ANONYMOUS
> }
>
> connector {
> name: broker-service
> host: 127.0.0.1
> port: 5670
> role: route-container
> saslMechanisms: ANONYMOUS
> }
>
> linkRoute {
> name: anycast-to-broker
> prefix: anycast.
> dir: out
> connection: broker-service
> }
>
> linkRoute {
> name: anycast-from-broker
> prefix: anycast.
> dir: in
> connection: broker-service
> }
>
>




Re: Significant memory leak in qpid-dispatch 1.0.0

2018-02-12 Thread Ted Ross
I've moved eleven jiras into 1.0.1 for consideration.  These are all
resolved and are candidates for backport into 1.0.1.

DISPATCH-874 - unable to load .json or .woff2 files from local
file system from http port
DISPATCH-881 - Inbound pre-settled messages cause memory leak of deliveries
DISPATCH-882 - router buffers messages for slow presettled receiver
DISPATCH-883 - Router crashes when it processes management request
for connections
DISPATCH-887 - Dispatch reestablishes connection in spite of
deleting the connector
DISPATCH-889 - linkRoute patterns beginning with #/string match
substrings after the /
DISPATCH-895 - qpid-dispatch crashes with a SEGFAULT in libqpid-proton
DISPATCH-900 - Memory leak when repeatedly opening and closing connections
DISPATCH-908 - Router loses dispositions over receive link on
qpid-interop-test 2-node test
DISPATCH-914 - qd_connector_t leaks mutexes
DISPATCH-920 - Enabled policy blocks inter-router links

-Ted

On Fri, Feb 9, 2018 at 8:03 AM, Ted Ross <tr...@redhat.com> wrote:
> I agree.  Let's do a 1.0.1 with this fix.
>
> -Ted
>
> On Wed, Feb 7, 2018 at 2:54 PM, Ken Giusti <kgiu...@redhat.com> wrote:
>> Folks,
>>
>> Can we get a fix release for qpid-dispatch containing the fix to
>> https://issues.apache.org/jira/browse/DISPATCH-881 ?
>>
>> This fixes a per-delivery leak that's present in the 1.0.0 release.  This
>> is a per-message leak that causes the router's memory use to climb linearly
>> with the amount of traffic handled.
>>
>> thanks
>>
>> --
>> -K




Re: Significant memory leak in qpid-dispatch 1.0.0

2018-02-09 Thread Ted Ross
I agree.  Let's do a 1.0.1 with this fix.

-Ted

On Wed, Feb 7, 2018 at 2:54 PM, Ken Giusti  wrote:
> Folks,
>
> Can we get a fix release for qpid-dispatch containing the fix to
> https://issues.apache.org/jira/browse/DISPATCH-881 ?
>
> This fixes a per-delivery leak that's present in the 1.0.0 release.  This
> is a per-message leak that causes the router's memory use to climb linearly
> with the amount of traffic handled.
>
> thanks
>
> --
> -K




Re: [VOTE] Release Apache Qpid Proton 0.20.0

2018-01-25 Thread Ted Ross
+1

 - Installed and ran tests on Fedora 27 - no failures
 - Tested against Qpid Dispatch Router (master) - no failures

-Ted

On Thu, Jan 25, 2018 at 7:46 AM, Robbie Gemmell 
wrote:

> Hi folks,
>
> I have put together a spin for a Qpid Proton 0.20.0 release, please
> give it a test out and vote accordingly.
>
> The source archive can be grabbed from:
> https://dist.apache.org/repos/dist/dev/qpid/proton/0.20.0-rc1/
>
> The JIRAs currently assigned are:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313720&version=12342219
>
> It is tagged as 0.20.0-rc1.
>
> Regards,
> Robbie
>
>


[RESULT] [VOTE] Release Qpid Dispatch Router 1.0.0 (RC3)

2017-11-17 Thread Ted Ross
[resending with the proper subject]

This vote is now closed.  There were 5 binding +1s and no other votes.  The
release is approved.

I will update the tags and the distributions shortly.  The website will be
updated after the mirrors have synced.

Thanks all,
-Ted


[RESULT][VOTE] Release Qpid Dispatch Router 1.0.0 (RC3)

2017-11-17 Thread Ted Ross
This vote is now closed.  There were 5 binding +1s and no other votes.  The
release is approved.

I will update the tags and the distributions shortly.  The website will be
updated after the mirrors have synced.

Thanks all,
-Ted


Re: [VOTE] Release Qpid Dispatch Router 1.0.0 (RC3)

2017-11-17 Thread Ted Ross
+1

On Thu, Nov 16, 2017 at 3:59 PM, Ernest Allen <eal...@redhat.com> wrote:

> +1
>
> * tested the stand-alone console
>
> On Tue, Nov 14, 2017 at 9:01 AM, Ted Ross <tr...@redhat.com> wrote:
>
> > Please cast your vote on this thread to release RC3 as the official Qpid
> > Dispatch Router version 1.0.0.
> >
> > Qpid Dispatch Router 1.0.0 RC3 can be found here:
> >
> > https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.0.0-rc3/
> >
> > The following issues have been fixed since RC1:
> >
> > DISPATCH-767 - Message Cut-Through/Streaming for efficient handling
> of
> > large messages
> > DISPATCH-847 - Fix issues discovered by Coverity
> > DISPATCH-858 - Simplify hard to follow LICENSE file
> > DISPATCH-865 - Segmentation fault while running 2-node Artemis tests
> > DISPATCH-867 - Messages stuck going through link route
> > DISPATCH-870 - connection improperly reopened from closed connector
> > DISPATCH-873 - new routes calculated wrongly after connector deletion
> >
> > The following features, improvements, and bug fixes are introduced in
> > 1.0.0:
> >
> > Features:
> >
> > DISPATCH-390 - New pn_proactor-based IO driver
> > DISPATCH-731 - Support wildcard tenant vhosts in address prefix
> > configuration
> > DISPATCH-767 - Message Cut-Through/Streaming for efficient handling
> of
> > large messages
> > DISPATCH-775 - allow authentication against a remote server
> > DISPATCH-803 - refuse attach to undefined addresses
> > DISPATCH-813 - Support wildcard format for link-routes
> > DISPATCH-818 - Honor failoverList provided by connected brokers
> > DISPATCH-844 - Make TLS cipher suites configurable
> >
> > Improvements:
> >
> > DISPATCH-209 - Three+ router test is needed in the system test suite.
> > DISPATCH-525 - What are proper names and units for protocol
> > configuration settings?
> > DISPATCH-551 - disconnect connections that do not complete initial
> > protocol handshake within a given time
> > DISPATCH-584 - Large, highly redundant router networks generate
> > excessive inter-router message traffic
> > DISPATCH-744 - Reject unsettled deliveries to multicast addresses by
> > default
> > DISPATCH-770 - Show error if creating an entity using web console
> fails
> > DISPATCH-771 - Mark mandatory fields when creating a new entity in
> the
> > web console
> > DISPATCH-788 - Create peer linkage for presettled deliveries so we
> can
> > use this to handle multicast dispositions
> > DISPATCH-795 - Sort entity names on Schema page to make them easier
> to
> > find
> > DISPATCH-796 - The Python and the C management agents do not have an
> > AMQP header in their responses.
> > DISPATCH-809 - Add options to enable Sanitizers to CMake build
> > DISPATCH-827 - Large message discard buffer too small
> > DISPATCH-828 - Discarded message processing does not close callback
> > window
> > DISPATCH-839 - Improve the batching of allocated objects
> > DISPATCH-858 - Simplify hard to follow LICENSE file
> >
> > Bugs Fixed:
> >
> > DISPATCH-421 - Toast messages are not logged in the rolldown
> "logging
> > console"
> > DISPATCH-430 - Cursor snaps way above peaks in a rate chart
> > DISPATCH-571 - Driver spins when a listener accepts a socket while
> FDs
> > are all in use
> > DISPATCH-737 - qdstat and qdmanage always force sasl exchange
> > DISPATCH-741 - Coverity scan reported errors in Qpid Dispatch master
> > DISPATCH-743 - Intermittent SSL Failure
> > DISPATCH-747 - Console does not handle connection errors well
> > DISPATCH-748 - Error message shown when rapidly clicking treeview on
> > left side of hawtio console: Uncaught TypeError: Cannot read property
> > 'height' of null
> > DISPATCH-749 - unmapping all link-routing addresses leaves half of
> > addresses mapped
> > DISPATCH-750 - Missing icons and bad rendering of dynatree treeviews
> in
> > Microsoft Edge 14
> > DISPATCH-752 - With more than one outbound SSL connections, failure
> in
> > one affects all others
> > DISPATCH-753 - Neither version of console is usable on Internet
> > Explorer 10 or 11
> > DISPATCH-754 - Output of qdstat shows authentication on client SSL
> > connections as anonymous (x.509)
> > DISPATCH-756 - Fix Fedora and Ubuntu docker files to use libuv and
> > libwebsockets
> >

[VOTE] Release Qpid Dispatch Router 1.0.0 (RC3)

2017-11-14 Thread Ted Ross
Please cast your vote on this thread to release RC3 as the official Qpid
Dispatch Router version 1.0.0.

Qpid Dispatch Router 1.0.0 RC3 can be found here:

https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.0.0-rc3/

The following issues have been fixed since RC1:

DISPATCH-767 - Message Cut-Through/Streaming for efficient handling of
large messages
DISPATCH-847 - Fix issues discovered by Coverity
DISPATCH-858 - Simplify hard to follow LICENSE file
DISPATCH-865 - Segmentation fault while running 2-node Artemis tests
DISPATCH-867 - Messages stuck going through link route
DISPATCH-870 - connection improperly reopened from closed connector
DISPATCH-873 - new routes calculated wrongly after connector deletion

The following features, improvements, and bug fixes are introduced in 1.0.0:

Features:

DISPATCH-390 - New pn_proactor-based IO driver
DISPATCH-731 - Support wildcard tenant vhosts in address prefix
configuration
DISPATCH-767 - Message Cut-Through/Streaming for efficient handling of
large messages
DISPATCH-775 - allow authentication against a remote server
DISPATCH-803 - refuse attach to undefined addresses
DISPATCH-813 - Support wildcard format for link-routes
DISPATCH-818 - Honor failoverList provided by connected brokers
DISPATCH-844 - Make TLS cipher suites configurable

Improvements:

DISPATCH-209 - Three+ router test is needed in the system test suite.
DISPATCH-525 - What are proper names and units for protocol
configuration settings?
DISPATCH-551 - disconnect connections that do not complete initial
protocol handshake within a given time
DISPATCH-584 - Large, highly redundant router networks generate
excessive inter-router message traffic
DISPATCH-744 - Reject unsettled deliveries to multicast addresses by
default
DISPATCH-770 - Show error if creating an entity using web console fails
DISPATCH-771 - Mark mandatory fields when creating a new entity in the
web console
DISPATCH-788 - Create peer linkage for presettled deliveries so we can
use this to handle multicast dispositions
DISPATCH-795 - Sort entity names on Schema page to make them easier to
find
DISPATCH-796 - The Python and the C management agents do not have an
AMQP header in their responses.
DISPATCH-809 - Add options to enable Sanitizers to CMake build
DISPATCH-827 - Large message discard buffer too small
DISPATCH-828 - Discarded message processing does not close callback
window
DISPATCH-839 - Improve the batching of allocated objects
DISPATCH-858 - Simplify hard to follow LICENSE file

Bugs Fixed:

DISPATCH-421 - Toast messages are not logged in the rolldown "logging
console"
DISPATCH-430 - Cursor snaps way above peaks in a rate chart
DISPATCH-571 - Driver spins when a listener accepts a socket while FDs
are all in use
DISPATCH-737 - qdstat and qdmanage always force sasl exchange
DISPATCH-741 - Coverity scan reported errors in Qpid Dispatch master
DISPATCH-743 - Intermittent SSL Failure
DISPATCH-747 - Console does not handle connection errors well
DISPATCH-748 - Error message shown when rapidly clicking treeview on
left side of hawtio console: Uncaught TypeError: Cannot read property
'height' of null
DISPATCH-749 - unmapping all link-routing addresses leaves half of
addresses mapped
DISPATCH-750 - Missing icons and bad rendering of dynatree treeviews in
Microsoft Edge 14
DISPATCH-752 - With more than one outbound SSL connections, failure in
one affects all others
DISPATCH-753 - Neither version of console is usable on Internet
Explorer 10 or 11
DISPATCH-754 - Output of qdstat shows authentication on client SSL
connections as anonymous (x.509)
DISPATCH-756 - Fix Fedora and Ubuntu docker files to use libuv and
libwebsockets
DISPATCH-757 - Qpid Dispatch does not compile under Ubuntu
DISPATCH-758 - test_listen_error() in system_tests_one_router.py and
system_tests_http.py hang inside a docker environment
DISPATCH-759 - Core thread consumed deleting deliveries
DISPATCH-761 - Router crash on abrupt close of sender/receiver
connections
DISPATCH-762 - Hawtio console does not show details about a connection
whereas stand-alone console does
DISPATCH-763 - Router crashes when config file defines listener { addr:
} instead of { host: }
DISPATCH-765 - Three unit tests failing under travis on trusty
DISPATCH-766 - Update Dockerfile-ubuntu to include libwebsockets
DISPATCH-768 - On topology page, show connections that go to more than
one router
DISPATCH-769 - Links popup on topology page only shows a single link
DISPATCH-772 - Buttons to Create and Delete entity disappear after
navigation in either version of console
DISPATCH-777 - [system_tests_drain] pn_object_free: corrupted
double-linked list
DISPATCH-779 - Credit is not issued for multicast address when no
receiver is connected
DISPATCH-780 - When link is disallowed due to target/source name it
should not return amqp:resource-limit-exceeded

Re: Routing messages between two brokers

2017-10-27 Thread Ted Ross
Hi Steve,

Setting up the addresses as waypoints configures the router to properly
route producers _to_ the broker and consumers _from_ the broker.  Your case
is a little different.  Try this alternative configuration:

autoLink {
addr: to.myapp
connection: appbroker
dir: in
phase: 0
}

autoLink {
addr: to.myapp
connection: otherbroker
dir: out
phase: 0
}

autoLink {
addr: from.myapp
connection: appbroker
dir: out
phase: 0
}

autoLink {
addr: from.myapp
connection: otherbroker
dir: in
phase: 0
}

By default, _out_ autolinks take phase-0 to match directly attached
producers and _in_ autolinks take phase-1 to match directly attached
consumers.  Identifying the address as "waypoint: yes" causes consumers
(normally attached listener links) to assume phase-1.

In your case, there is no multi-phase path (i.e. producer-to-broker;
broker-to-consumer), so you can dispense with the waypoint addresses and
explicitly set your autolink phases to 0.

In case it's not clear:  Dispatch has a notion of "address phase" which is
a way of dividing a single address into multiple sub-addresses for
routing.  If the address is for a broker/queue, the routing from producer
to broker is separate and independent from the routing from broker to
consumer.  From outside of the router, there appears to be only one
address, but in the router's forwarding table, there are multiple distinct
addresses.
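
If you want to confirm what actually got configured, the management tools
will show you the phases directly.  A quick sketch (the exact columns vary
a bit between versions, so treat the output as illustrative):

$ qdmanage query --type=autoLink
$ qdstat -a

The autolink query lists addr, dir, and phase for each autolink, and the
qdstat address listing includes class and phase columns for each mobile
address, so you can verify that both ends of each path landed on phase 0.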

Let me know if you have any other problems with this.  It should work fine.

-Ted

On Thu, Oct 26, 2017 at 6:26 PM, Steve Huston  wrote:

> I am bootstrapping my dispatch router knowledge on a little project driven
> by the need to feed messages through:
>
> - AMQP 0-10 client sends messages to a broker that speaks both 0-10 and 1.0
> - dispatch router takes messages out of that dual-protocol broker and
> routes them to another broker that speaks only 1.0
>
> ... and back the other way, with a different named queue.
>
> So here are the qdrouterd.conf pieces for this goal:
>
> router {
> mode: standalone
> id: Router.A
> }
>
> #Listener for the dispatch-router management connections - qdstat
> listener {
> host: 0.0.0.0
> port: amqp
> authenticatePeer: no
> }
>
> connector {
> name: otherbroker
> host: otherhost
> port: 5672
> role: route-container
> }
>
> connector {
> name: appbroker
> host: 0.0.0.0
> port: 10053
> role: route-container
> }
>
> address {
> prefix: to.myapp
> waypoint: yes
> }
>
> autoLink {
> addr: to.myapp
> connection: appbroker
> dir: in
> }
> autoLink {
> addr: to.myapp
> connection: otherbroker
> dir: out
> }
>
> address {
> prefix: from.myapp
> waypoint: yes
> }
>
> autoLink {
> addr: from.myapp
> connection: appbroker
> dir: out
> }
> autoLink {
> addr: from.myapp
> connection: otherbroker
> dir: in
> }
>
>
> When qdrouterd runs, it appears to set up the connections and links, but
> no messages are retrieved from 'appbroker' 'to.myapp' - and there are many
> messages sitting in that queue waiting.
>
> Is there something I'm missing to actually get the messages to flow?
>
> Thanks,
> -Steve
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: [VOTE] Release Apache Qpid Proton 0.18.0

2017-10-19 Thread Ted Ross
Adel,

I saw the same error.  I then noticed that I had not installed the
cyrus-sasl packages.  Once I installed these packages, the pthread-once
error went away.  Not that I understand why, but that's what I saw.
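
For reference, the packages I installed on my CentOS 7 box were along these
lines (names from memory, so double-check against your distro):

$ sudo yum install cyrus-sasl cyrus-sasl-devel cyrus-sasl-plain

I then re-ran cmake in a clean build directory before rebuilding.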

-Ted

On Thu, Oct 19, 2017 at 11:53 AM, Adel Boutros  wrote:

> Hello again,
>
>
> I will have to vote -1 here. I am having issues with openssl on Linux (I
> am using openssl 1.0.2h)
>
>
> [ 31%] Building C object proton-c/CMakeFiles/qpid-proton.dir/src/reactor/
> connection.c.o
> CMakeFiles/qpid-proton-core.dir/src/ssl/openssl.c.o: In function
> `ensure_initialized':
> /data/jenkins-slave/home/workspace/proton-acceptance/
> proton-workspace/qpid-proton-0.18.0-rc1/proton-c/src/ssl/openssl.c:1505:
> undefined reference to `pthread_once'
> collect2: error: ld returned 1 exit status
>
>
> Regards,
>
> Adel
>
> 
> From: Adel Boutros
> Sent: Thursday, October 19, 2017 5:42:01 PM
> To: users@qpid.apache.org
> Subject: Re: [VOTE] Release Apache Qpid Proton 0.18.0
>
>
> Hello,
>
>
> I am trying to build proton 0.18.0 but I am facing errors with ruby-gem.
> Do you know how I can deactivate this target?
>
>
> [  5%] Generating qpid_proton-0.18.0.gem
> /usr/bin/gem:8:in `require': no such file to load -- rubygems (LoadError)
> from /usr/bin/gem:8
> make[2]: *** [proton-c/bindings/ruby/qpid_proton-0.18.0.gem] Error 1
> make[1]: *** [proton-c/bindings/ruby/CMakeFiles/ruby-gem.dir/all] Error 2
>
>
> Regards,
>
> Adel
>
> 
> From: Jakub Scholz 
> Sent: Thursday, October 19, 2017 5:04:59 PM
> To: users@qpid.apache.org
> Subject: Re: [VOTE] Release Apache Qpid Proton 0.18.0
>
> +1. I build it from source code and used it with Qpid C++ broker (master)
> and Qpid Dispatch (1.0.0 RC1) and run my tests against them. All seems to
> work fine.
>
> On Wed, Oct 18, 2017 at 7:29 PM, Robbie Gemmell 
> wrote:
>
> > Hi folks,
> >
> > I have put together a first spin for a Qpid Proton 0.18.0 release,
> > please test it and vote accordingly.
> >
> > The source archive can be grabbed from:
> > https://dist.apache.org/repos/dist/dev/qpid/proton/0.18.0-rc1/
> >
> > The JIRAs currently assigned are:
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12313720=12338903
> >
> > It is tagged as 0.18.0-rc1.
> >
> > Regards,
> > Robbie
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
> >
>


Re: [VOTE] Release Apache Qpid Proton 0.18.0

2017-10-19 Thread Ted Ross
+1

- Built, installed, and ran the tests on Centos7
- Built the qpid-dispatch 1.0.0 RC against 0.18.0 and ran the full test
suite


On Wed, Oct 18, 2017 at 5:22 PM, Timothy Bish  wrote:

> +1
>
> * Validated signature and checksum
> * Verified License and Notice files present
> * Built from source and ran self tests.
>
>
> On 10/18/2017 01:29 PM, Robbie Gemmell wrote:
>
>> Hi folks,
>>
>> I have put together a first spin for a Qpid Proton 0.18.0 release,
>> please test it and vote accordingly.
>>
>> The source archive can be grabbed from:
>> https://dist.apache.org/repos/dist/dev/qpid/proton/0.18.0-rc1/
>>
>> The JIRAs currently assigned are:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?proje
>> ctId=12313720=12338903
>>
>> It is tagged as 0.18.0-rc1.
>>
>> Regards,
>> Robbie
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>> For additional commands, e-mail: users-h...@qpid.apache.org
>>
>>
>>
> --
> Tim Bish
> twitter: @tabish121
> blog: http://timbish.blogspot.com/
>
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Qpid Dispatch Router 1.0.0 Release Candidate

2017-10-18 Thread Ted Ross
Team,

RC1 of Qpid Dispatch Router version 1.0.0 can be found here:

https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.0.0-rc1/

Please use this for testing and evaluation.  I will hold off the vote until
there has been some testing done.  There have already been some minor
console issues identified, so there will be a second release candidate.

Thanks,
-Ted


Features:

DISPATCH-390 - New pn_proactor-based IO driver
DISPATCH-731 - Support wildcard tenant vhosts in address prefix
configuration
DISPATCH-767 - Message Cut-Through/Streaming for efficient handling of
large messages
DISPATCH-775 - allow authentication against a remote server
DISPATCH-803 - refuse attach to undefined addresses
DISPATCH-813 - Support wildcard format for link-routes
DISPATCH-818 - Honor failoverList provided by connected brokers
DISPATCH-844 - Make TLS cipher suites configurable

Improvements:

DISPATCH-209 - Three+ router test is needed in the system test suite.
DISPATCH-525 - What are proper names and units for protocol
configuration settings?
DISPATCH-551 - disconnect connections that do not complete initial
protocol handshake within a given time
DISPATCH-584 - Large, highly redundant router networks generate
excessive inter-router message traffic
DISPATCH-744 - Reject unsettled deliveries to multicast addresses by
default
DISPATCH-770 - Show error if creating an entity using web console fails
DISPATCH-771 - Mark mandatory fields when creating a new entity in the
web console
DISPATCH-788 - Create peer linkage for presettled deliveries so we can
use this to handle multicast dispositions
DISPATCH-795 - Sort entity names on Schema page to make them easier to
find
DISPATCH-796 - The Python and the C management agents do not have an
AMQP header in their responses.
DISPATCH-809 - Add options to enable Sanitizers to CMake build
DISPATCH-827 - Large message discard buffer too small
DISPATCH-828 - Discarded message processing does not close callback
window
DISPATCH-839 - Improve the batching of allocated objects

Bugs Fixed:

DISPATCH-421 - Toast messages are not logged in the rolldown "logging
console"
DISPATCH-430 - Cursor snaps way above peaks in a rate chart
DISPATCH-571 - Driver spins when a listener accepts a socket while FDs
are all in use
DISPATCH-737 - qdstat and qdmanage always force sasl exchange
DISPATCH-741 - Coverity scan reported errors in Qpid Dispatch master
DISPATCH-743 - Intermittent SSL Failure
DISPATCH-747 - Console does not handle connection errors well
DISPATCH-748 - Error message shown when rapidly clicking treeview on
left side of hawtio console: Uncaught TypeError: Cannot read property
'height' of null
DISPATCH-749 - unmapping all link-routing addresses leaves half of
addresses mapped
DISPATCH-752 - With more than one outbound SSL connections, failure in
one affects all others
DISPATCH-754 - Output of qdstat shows authentication on client SSL
connections as anonymous (x.509)
DISPATCH-756 - Fix Fedora and Ubuntu docker files to use libuv and
libwebsockets
DISPATCH-757 - Qpid Dispatch does not compile under Ubuntu
DISPATCH-758 - test_listen_error() in system_tests_one_router.py and
system_tests_http.py hang inside a docker environment
DISPATCH-759 - Core thread consumed deleting deliveries
DISPATCH-765 - Three unit tests failing under travis on trusty
DISPATCH-766 - Update Dockerfile-ubuntu to include libwebsockets
DISPATCH-768 - On topology page, show connections that go to more than
one router
DISPATCH-769 - Links popup on topology page only shows a single link
DISPATCH-777 - [system_tests_drain] pn_object_free: corrupted
double-linked list
DISPATCH-779 - Credit is not issued for multicast address when no
receiver is connected
DISPATCH-780 - When link is disallowed due to target/source name it
should not return amqp:resource-limit-exceeded
DISPATCH-784 - Delivery annotations are not propagated by the router
DISPATCH-787 - c epoll proactor can raise SIGPIPE
DISPATCH-789 - Console breaks when quickly moving between tabs (both
hawtio and stand-alone)
DISPATCH-790 - The right-mouse-click menu on Topology tab appears
off-center in the stand-alone console
DISPATCH-791 - The node representing Console in Topology tab is not
displayed
DISPATCH-792 - Freezing and moving nodes is somewhat broken (in either
version of console)
DISPATCH-794 - Arrows in topology graph for IE 11 are hollow
DISPATCH-798 - Policy description in Dispatch Router book seems to be
incorrect
DISPATCH-799 - Using USE_VALGRIND does not invoke valgrind when running
tests
DISPATCH-800 - Hawtio version of the console is unable to connect to a
router when running offline
DISPATCH-802 - refuse transaction coordination links if they can't be
routed to a coordinator
DISPATCH-804 - connectors ignore addr
DISPATCH-805 - System 

Re: Plans for the Qpid Dispatch Router 1.0.0

2017-09-20 Thread Ted Ross
Hi Adel,

I have some thoughts about this request but I'll put them on the Jira for
posterity.

-Ted

On Tue, Sep 19, 2017 at 6:26 AM, Adel Boutros <adelbout...@live.com> wrote:

> Hello Ted,
>
>
> A nice early Christmas gift for me would be https://issues.apache.org/
> jira/browse/DISPATCH-773 (You don't have to be that generous though if it
> is not possible).
>
>
> Regards,
>
> Adel
>
> 
> From: Ted Ross <tr...@redhat.com>
> Sent: Monday, September 18, 2017 10:08:18 PM
> To: users@qpid.apache.org
> Subject: Plans for the Qpid Dispatch Router 1.0.0
>
> Folks,
>
> We have a good slate of resolved or almost-resolved issues for the next
> Dispatch Router release.  I would like to put out a release candidate at
> the end of the month.  If anyone has any needs or priorities for this
> release, please discuss on this thread.
>
> Regards,
> -Ted
>


Re: Dispatch router discovery

2017-09-20 Thread Ted Ross
Hi Thomas,

Thanks for the note.  We've heard several requests for similar discovery
features, but specific requirements have been somewhat elusive.  The
EnMasse project (Messaging as a service in Kubernetes/OpenShift) has a
platform-specific solution to this problem.  Other environments would
require other solutions.

Are you suggesting that a protocol be established for "finding my closest
messaging access point"?  Perhaps something similar to DHCP or ARP.  For
example, the AMQP clients could be modified to send a broadcast query that
any listening Dispatch Router (or broker) would respond to.  The client
would then use the hostname/IP-address from the first response it receives
to establish an AMQP connection.

Other possibilities include extensions to DNS (similar to MX records for
email) or DHCP (adding the host's configured messaging access point).
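
To make the broadcast idea concrete, here is a rough sketch of what the
client side of such a probe could look like in Python.  Everything in it is
invented for illustration -- the port, the payload, and the reply format are
not part of any existing Dispatch feature:

import socket

DISCOVERY_PORT = 5670   # hypothetical well-known discovery port

def find_router(timeout=2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.settimeout(timeout)
    # Broadcast a query; any listening router would answer with its AMQP URL
    s.sendto(b"AMQP-DISCOVER", ("255.255.255.255", DISCOVERY_PORT))
    try:
        reply, _peer = s.recvfrom(1024)   # e.g. b"amqp://10.0.0.7:5672"
        return reply.decode()
    except socket.timeout:
        return None

print(find_router())

The client would hand the returned URL to its normal AMQP connection setup.
A real design would also need to deal with security (anyone can answer a
broadcast) and with arbitrating among multiple responders.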

-Ted

On Wed, Sep 20, 2017 at 4:27 AM, Thomas Hartwig 
wrote:

> Hi,
>
> I really like the dispatch router project. I only have one wish remain: Is
> it somehow possible to "detect" a router present in the local network? Once
> a router is detected it can be used to establish high level connections by
> any client without the need of configure it to a static address.
> I think of something like broadcast or multicast announcement mechanism.
> Did you ever consider a solution for this?
>
> Thanks
> Thomas
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: Plans for the Qpid Dispatch Router 1.0.0

2017-09-19 Thread Ted Ross
Kai,

DISPATCH-775 is in.  The Fix-Version was mistakenly not set on the Jira.

-Ted

On Tue, Sep 19, 2017 at 8:27 AM, Kai <sophokles...@gmail.com> wrote:

> Would be great to have https://issues.apache.org/jira/browse/DISPATCH-775
> in there.
>
> Regards,
> Kai
>
> On Tue, Sep 19, 2017 at 12:26 PM Adel Boutros <adelbout...@live.com>
> wrote:
>
> > Hello Ted,
> >
> >
> > A nice early Christmas gift for me would be
> > https://issues.apache.org/jira/browse/DISPATCH-773 (You don't have to be
> > that generous though if it is not possible).
> >
> >
> > Regards,
> >
> > Adel
> >
> > 
> > From: Ted Ross <tr...@redhat.com>
> > Sent: Monday, September 18, 2017 10:08:18 PM
> > To: users@qpid.apache.org
> > Subject: Plans for the Qpid Dispatch Router 1.0.0
> >
> > Folks,
> >
> > We have a good slate of resolved or almost-resolved issues for the next
> > Dispatch Router release.  I would like to put out a release candidate at
> > the end of the month.  If anyone has any needs or priorities for this
> > release, please discuss on this thread.
> >
> > Regards,
> > -Ted
> >
>


Re: Dispatch Router load balancing config questions

2017-08-01 Thread Ted Ross
Dan,

There's one issue with your configuration which doesn't affect the load
balancing but will cause problems with receiving messages from the
brokers.  In the address.prefix, you use "foo.#".  This is a pure prefix
and it should simply be "foo".  The wildcards are coming in the next
release but are not implemented in the code you are using.
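
Concretely, the corrected entity (a sketch against your posted config) would
be:

address {
    prefix: foo
    waypoint: yes
    distribution: balanced
}

Because it is a prefix, "foo" matches foo, foo.bar, and anything else below
foo without any wildcard syntax.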

Regarding your actual question.  I assume that you are testing this
configuration under light load (i.e. sending one message at a time).

The way that the balancing works is that it will route to the consumer
(broker) with the fewest outstanding deliveries + inter-router cost.  This
means that it will favor the local broker over the remote one if there are
no in-flight deliveries.  The default (and minimum) cost for an
inter-router connection is 1.  You can set it to a higher value in the
listener or connector.

If you are sending one-at-a-time synchronous sends, they will always go to
the local broker because the broker's zero outstanding deliveries will
always be less than the inter-router cost of 1.  If you send multiple
deliveries asynchronously, you will see them being distributed to both
brokers in the network.  You can make the local-affinity stronger by
increasing the inter-router cost.
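
As a sketch, the cost attribute goes on the inter-router connector (or on
the matching listener) -- e.g. in your L-03-A config:

connector {
    name: routerb
    role: inter-router
    host: L-05-B
    port: 6671
    cost: 5
    saslMechanisms: ANONYMOUS
}

Roughly speaking, with a cost of 5 the remote broker only starts receiving
deliveries once the local broker already has that many outstanding.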

-Ted

On Tue, Aug 1, 2017 at 3:47 AM, Dan Langford <danlangf...@gmail.com> wrote:

> Last week I had a thread with lots of little questions around Dispatch
> Routers. Ted Ross has been awesome to answer most of those. As a result I
> feel like I have my QDR config shaping up a bit better. HOWEVER with some
> more very focused questions I thought it would be best to start a new
> thread. The problem I am seeing is that the routers are not distributing
> the message load across other brokers on the network. Here is a little
> diagram:
>
>
> RouterA-03 > ArtemisBrokerA
>  |
>  |
>  |
> \/
> RouterB-05 > ArtemisBrokerB
>
>
> NOTE: I am currently using Qpid Dispatch Router from a RHEL repo, v0.7.0
>
>
> Connections from clients come in through an F5 VIP which forwards those
> connections to either host L-03-A or L-05-B. Each of those hosts has a
> Qpid Dispatch Router installed in front of an Artemis broker. The dispatch
> router on L-05-B is listening on an additional port that dispatch router
> L-03-A connects to for inter-router communication.
>
> If I go around my F5 VIP so i know I am connecting straight to L-05-B and I
> send any number of messages into the router there, all of those messages end
> up in ArtemisBrokerB. I was hoping that some would go to BrokerB and some
> would go via RouterA over to BrokerA. Now when BrokerB is taken down
> CURRENTLY the messages sent to Router L-05-B ARE routed through Router
> L-03-A and then out to ArtemisBrokerA. (Currently receiving messages pull
> them in from both hosts, so my question lies only in message production at
> the moment.)
>
> Do you agree that with this configuration you would expect messages to be
> load balanced between the two routes? I would like to paste in some of our
> config and results from running qdstat. If you observe something that is
> misconfigured and are able to highlight it i would be very appreciative.
> First the config files for QDR instances. I tried my best to scrub them of
> specific IPs and hostnames.
>
> L-03-A qdrouterd.conf
>
> router {
>     mode: interior
>     id: Router.A
> }
>
> log {
>     module: DEFAULT
>     enable: debug+
>     timestamp: yes
> }
>
> sslProfile {
>     name: my-ssl
>     certFile: /opt/org/my-ssl-info.pem
>     keyFile: /opt/org/my-ssl-info.pem
>     password: hellokitty42
> }
>
> listener {
>     role: normal
>     host: 0.0.0.0
>     port: 5671
>     authenticatePeer: no
>     saslMechanisms: ANONYMOUS
>     sslProfile: my-ssl
> }
>
> connector {
>     name: local-artemis
>     role: route-container
>     host: L-03-A
>     port: 61616
>     saslMechanisms: ANONYMOUS
> }
>
> connector {
>     name: routerb
>     role: inter-router
>     host: L-05-B
>     port: 6671
>     saslMechanisms: ANONYMOUS
> }
>
> address {
>     prefix: foo.#
>     waypoint: yes
>     distribution: balanced
> }
>
> autoLink {
>     addr: foo.bar
>     dir: in
>     connection: local-artemis
> }
>
> autoLink {
>     addr: foo.bar
>     dir: out
>     connection: local-artemis
> }
>
>
> L-05-B qdrouterd.conf
>

Re: Dispatch Router questions

2017-07-26 Thread Ted Ross
On Fri, Jul 21, 2017 at 7:12 PM, Dan Langford <danlangf...@gmail.com> wrote:

> On Thu, Jul 20, 2017 at 9:58 AM Ted Ross <tr...@redhat.com> wrote:
>
> > On Wed, Jul 19, 2017 at 7:36 PM, Dan Langford <danlangf...@gmail.com>
> > wrote:
> >
> > > > - Can I configure QDR to autoLink in and out ANY/ALL addresses?
> > > No.  There is no way currently for QDR to know what queues are present
> on
> > > its connected brokers.  It would not be difficult to write a program to
> > > synchronize autolinks to existing queues.
> >
>
> You are right it wouldn't be that difficult. Also with artemis I can turn
> on autocreation of queues and then use QDR as the spot to manage what
> queues can exist. Not bad. What about synchronizing autoLink config across
> routers in a QDR network? are messages to the $management queue broadcast
> throughout the cluster? i could always resend the necessary messages
> through the _topo address namespace to get it to the other routers.
>
>
> > > > - Artemis doesn't support vhosts. Can I configure connections to
> > vhost:Foo
> > > > address:bar actually be address:Foo.bar when the message goes back to
> > the
> > > > broker?
> > > Yes.  There is a multi-tenancy feature for listeners that does exactly
> > what
> > > you are asking for.  If you add the attribute "multiTenant: yes" to the
> > > configuration of a listener in the qdrouterd.conf file, clients
> connected
> > > via that listener will have their addresses annotated as vhost/addr in
> > the
> > > router.
> >
>
> ok this is going to be perfect.  i am starting to feel more comfortable
> with everything in this config file
>
> > > - Can I configure QDR to pass auth through to the broker and let the
> > broker
> > > > decide if the user is authenticated and authorized? Inversely can I
> > > > configure QDR to be the only determinant of auth?
> > > Presently, QDR expects to be the sole determiner of authentic identity.
>
> > There is an open request to add a SASL proxy that might be used to allow
> > > the broker to do authentication on behalf of the router, but that
> hasn't
> > > made it into master yet.
> >
>
> this is one part that has me a little stuck. QDR is the sole determiner of
> auth identity. but QDR delegates to a cyrus sasl config right? and cyrus
> sasl has some local DB options or sql or ldap or it can delegate to
> kerberos or pam and i am just starting to feel a little lost in all my auth
> option because its been a long time since i have been through all that. i
> will figure it out well enough. i kind of wish there was a way i could send
> a message in through $management to add a new user/pass to the sasldb but
> ill figure something out.
>
> also, in regards to auth where is it that i specify what users have access
> to what addresses? it looks like that might be in the config in
> vhost>groups but then i see a policy area of the config. ill start in the
> vhost>groups area and see how far i get
>
>
> > > > I think depending on what I learn on these topics I will likely have
> > more
> > > > questions. Thank you to anybody who is able to give me a lead or
> point
> > me
> > > > to a config that may serve as an example. I really do appreciate it.
> > > Please don't hesitate to ask more questions or point out where there is
> > > lack of documentation.  We appreciate it as well.
> >
>
> so i had another question come up in my research today. i have a single F5
> BIG IP VIP that sits in front of all my VMs that are across two different
> geographic locations. due to the two locations i want, well, two of
> everything in a way that i can use all the resources at my disposal but
> still function if one location goes offline. So here are (R)outers and
> (B)rokers in locations (a) and (b)
>
> in order for me to be able to produce messages into Ba and Bb i found that
> each one of my Routers needed a connection to each one of my Brokers.
>
> Essentially:
> Ra --> Ba
> Ra --> Bb
> Rb --> Ba
> Rb --> Bb
>
> Graphically:
> Ra --- Ba
>    \  /
>     \/
>     /\
>    /  \
> Rb --- Bb
>
> it was really cool that i could send messages to Ra and see them fill up
> both Ba and Bb. Receiving across both brokers also worked. But i was hoping
> for more of a configuration where the Routers were only connected to a
> single Broker and all the Routers knew about each other.
>
> Essentially:
> Ra --> Ba
> Rb --> Bb
> Ra <-> Rb
>
> Graphically:
> Ra --- Ba
> |
> Rb --- Bb

Re: Meaning of linkCapacity

2017-07-24 Thread Ted Ross
On Mon, Jul 24, 2017 at 10:36 AM, Hudalla Kai (INST/ECS4) <
kai.huda...@bosch-si.com> wrote:

> Ted,
>
> thank you for your hints and the pointer to the issue regarding the new
> concept for handing out credits.
> However, with the current mechanism in place, i.e. the Dispatch Router
> flowing credits to senders based on the "linkCapacity" property, I find it
> hard to believe that it is completely unrelated to the session window size.
> Consider the following example:
>
> We have configured the router with a session window of 200 frames of 1000
> bytes each.
> We now connect a slow consumer to the router which, lets say, can process
> 50 messages of 1000 bytes per second and flows 100 credits to the router on
> link establishment. We then connect a fast sender to the router which is
> capable of sending 500 messages of 1000 bytes per sec to the router.
>
> Now the router flows 250 credits (the default) to the sender and the
> sender immediately sends its first 200 messages to the router which now
> needs to buffer most of these messages because of the slow consumer. I
> reckon that the router does now adapts its flowing of new credit to the
> sender according to the rate at which the buffered messages get settled
> instead of simply issuing another fixed amount of credits to the sender,
> i.e. the buffer's fill ratio should be a factor in determining how many
> credits are flown to the sender(s), or am I mistaken?
>

I think your understanding is mostly correct.  The router provides 250
(linkCapacity) credits to a newly attached sender and that sender can then
immediately transfer 250 deliveries.  The rate at which the credit is
replenished will match the rate at which the slower consumer settles
deliveries.  The link capacity only affects the number of outstanding
deliveries per sender that can be buffered in the router network.

In your example, the output buffer on the consumer's link is feeding the
consumer and may be throttled by the session window or the credits supplied
by the receiver, whichever is slowest.  In either case, the actual flow
from the sender is tied to the settlement rate of the consumer(s).

The new scheme is only different in that it might issue fewer than
linkCapacity credits if there is limited output capacity for all the
producers for an address.
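
For completeness, this is where the knob lives: linkCapacity can be set on a
listener (or connector) in qdrouterd.conf, e.g. to shrink the per-sender
buffer in front of slow consumers:

listener {
    host: 0.0.0.0
    port: amqp
    linkCapacity: 50
}

This only resizes the local credit loop; the steady-state rate is still
governed by the consumers' settlement rate as described above.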

-Ted


>
> Mit freundlichen Grüßen / Best regards
>
> Kai Hudalla
> Chief Software Architect
>
> Bosch Software Innovations GmbH
> Schöneberger Ufer 89-91
> 10785 Berlin
> GERMANY
> www.bosch-si.com
>
> Registered office: Berlin, Register court: Amtsgericht Charlottenburg, HRB
> 148411 B;
> Executives: Dr.-Ing. Rainer Kallenbach, Michael Hahn
>
> 
> From: Ted Ross <tr...@redhat.com>
> Sent: Friday, July 21, 2017 17:13
> To: users@qpid.apache.org
> Subject: Re: Meaning of linkCapacity
>
> Kai,
>
> Please take a look at https://issues.apache.org/jira/browse/DISPATCH-781
> for insight into how link capacity is used for end-to-end flow control and
> a prototyped new way to use it.
>
> A quick summary is that synchronizing credit across a network is
> impractical at scale.  Instead, we use link capacity to establish local
> credit loops (between endpoint and connected router) and then use delivery
> settlement as the control for end-to-end flow control (and load balancing).
>
> maxFrameSize and maxSessionFrames are for session flow control and are
> completely unrelated to link/credit flow control.  linkCapacity is
> therefore not related to maxFrameSize/maxSessionFrames.
>
> -Ted
>
> On Fri, Jul 21, 2017 at 10:27 AM, Hudalla Kai (INST/ECS4) <
> kai.huda...@bosch-si.com> wrote:
>
> > Hi,
> >
> > I am wondering what the (practical) meaning of the "linkCapacity"
> > configuration property on listeners is and in particular, how it is
> related
> > to the "maxFrameSize" and "maxSessionFrames" properties. Do the latter
> ones
> > pose a limit on the former one? My experience is that if I do not specify
> > any value for "linkCapacity" then the Dispatch Router flows 250 credits
> to
> > a sender link connecting to it (assuming that a consumer has connected
> with
> > a receiver link interested in the relevant address). This number,
> however,
> > seems to be unrelated to the number of credits the consumer has flowed to
> > Dispatch Router, which lets me assume that the credits to flow to a
> sender
> > are mainly related to, well, what exactly?
> >
> > Can somebody shed some light on this?
> >
> > Mit freundlichen Grüßen / Best regards
> >
> > Kai Hudalla
> > Chief Software Architect
> >

Re: Meaning of linkCapacity

2017-07-21 Thread Ted Ross
Kai,

Please take a look at https://issues.apache.org/jira/browse/DISPATCH-781
for insight into how link capacity is used for end-to-end flow control and
a prototyped new way to use it.

A quick summary is that synchronizing credit across a network is
impractical at scale.  Instead, we use link capacity to establish local
credit loops (between endpoint and connected router) and then use delivery
settlement as the control for end-to-end flow control (and load balancing).

maxFrameSize and maxSessionFrames are for session flow control and are
completely unrelated to link/credit flow control.  linkCapacity is
therefore not related to maxFrameSize/maxSessionFrames.

-Ted

On Fri, Jul 21, 2017 at 10:27 AM, Hudalla Kai (INST/ECS4) <
kai.huda...@bosch-si.com> wrote:

> Hi,
>
> I am wondering what the (practical) meaning of the "linkCapacity"
> configuration property on listeners is and in particular, how it is related
> to the "maxFrameSize" and "maxSessionFrames" properties. Do the latter ones
> pose a limit on the former one? My experience is that if I do not specify
> any value for "linkCapacity" then the Dispatch Router flows 250 credits to
> a sender link connecting to it (assuming that a consumer has connected with
> a receiver link interested in the relevant address). This number, however,
> seems to be unrelated to the number of credits the consumer has flowed to
> Dispatch Router, which lets me assume that the credits to flow to a sender
> are mainly related to, well, what exactly?
>
> Can somebody shed some light on this?
>
> Mit freundlichen Grüßen / Best regards
>
> Kai Hudalla
> Chief Software Architect
>
> Bosch Software Innovations GmbH
> Schöneberger Ufer 89-91
> 10785 Berlin
> GERMANY
> www.bosch-si.com
>
> Registered office: Berlin, Register court: Amtsgericht Charlottenburg, HRB
> 148411 B;
> Executives: Dr.-Ing. Rainer Kallenbach, Michael Hahn
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: Dispatch Router questions

2017-07-20 Thread Ted Ross
On Wed, Jul 19, 2017 at 7:36 PM, Dan Langford  wrote:

> So I am struggling to wrap my head around some dispatch Router concepts and
> was wondering if somebody would be willing to point me in the right
> direction on one or more of my idea.
>
> Background: I am doing some due diligence at my place of employment
> regarding AMQP1.0 brokers and currently I am trying to see what Artemis w/
> HA, Colocation, and Replication looks like. Artemis does not currently
> support load-balancing AMQP messages through its cluster and they suggested
> I use QDR for that.
>
> So as I tried to jump into QDR I just found myself lost on some of these
> concepts and terms and I struggled finding examples, guides, or tutorials.
> I am just wanting load balancing of incoming messages to two brokers. For
> HA reasons I want 2 QDR nodes able to "front" these two brokers.  As it
> currently stands here are my questions:
>
> - Can I configure QDR to autoLink in and out ANY/ALL addresses?
>

No.  There is no way currently for QDR to know what queues are present on
its connected brokers.  It would not be difficult to write a program to
synchronize autolinks to existing queues.
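
As a sketch of what I mean -- assuming you can obtain the queue list from the
broker's own management API (not shown here), something like the following
could push matching autolinks into the router via qdmanage.  The queue list
and connector name are placeholders:

import subprocess

QUEUES = ["foo.bar", "foo.baz"]   # placeholder: fetched from the broker
CONNECTION = "local-artemis"      # name of the route-container connector

def create_autolink(addr, direction):
    # Create one autoLink entity; the attributes mirror the conf-file fields
    subprocess.check_call([
        "qdmanage", "create", "--type=autoLink",
        "addr=%s" % addr,
        "dir=%s" % direction,
        "connection=%s" % CONNECTION,
    ])

for queue in QUEUES:
    create_autolink(queue, "in")    # consume from the broker queue
    create_autolink(queue, "out")   # deliver into the broker queue

A complete version would also query the existing autolinks (qdmanage query
--type=autoLink) and remove the ones whose queues have gone away.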


>
> - Artemis doesn't support vhosts. Can I configure connections to vhost:Foo
> address:bar actually be address:Foo.bar when the message goes back to the
> broker?
>

Yes.  There is a multi-tenancy feature for listeners that does exactly what
you are asking for.  If you add the attribute "multiTenant: yes" to the
configuration of a listener in the qdrouterd.conf file, clients connected
via that listener will have their addresses annotated as vhost/addr in the
router.
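
A minimal sketch of such a listener:

listener {
    host: 0.0.0.0
    port: 5672
    multiTenant: yes
}

A client that opens its connection with hostname Foo and attaches to address
bar would then be routed under Foo/bar inside the router.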


>
> - Can I configure QDR to pass auth through to the broker and let the broker
> decide if the user is authenticated and authorized? Inversely can I
> configure QDR to be the only determinant of auth?
>

Presently, QDR expects to be the sole determiner of authentic identity.
There is an open request to add a SASL proxy that might be used to allow
the broker to do authentication on behalf of the router, but that hasn't
made it into master yet.


>
> I think depending on what I learn on these topics I will likely have more
> questions. Thank you to anybody who is able to give me a lead or point me
> to a config that may serve as an example. I really do appreciate it.
>

Please don't hesitate to ask more questions or point out where there is
lack of documentation.  We appreciate it as well.

-Ted


Re: Dispatch Router throughput

2017-07-17 Thread Ted Ross
When doing throughput benchmarking, I would strongly recommend using
unsettled deliveries.  With pre-settled deliveries, there is no effective
end-to-end flow control.  If there is congestion (senders faster than
receivers), the router will aggressively discard excess pre-settled
deliveries.  If you use unsettled deliveries, the router will be able to
provide smooth end-to-end flow control.

It is my understanding from the benchmarking that we have done that
unsettled deliveries perform as well or better than pre-settled deliveries.
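
For anyone putting a benchmark sender together, a minimal unsettled
(at-least-once) sender in Proton Python looks roughly like this -- the URL,
address, and message count are placeholders:

from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container

class UnsettledSender(MessagingHandler):
    def __init__(self, url, address, count):
        super(UnsettledSender, self).__init__()  # auto-settles on disposition
        self.url, self.address, self.count = url, address, count
        self.sent = self.confirmed = 0

    def on_start(self, event):
        conn = event.container.connect(self.url)
        event.container.create_sender(conn, self.address)

    def on_sendable(self, event):
        # Send unsettled; settlement by the receiver drives credit replenishment
        while event.sender.credit and self.sent < self.count:
            event.sender.send(Message(body="test-%d" % self.sent))
            self.sent += 1

    def on_accepted(self, event):
        self.confirmed += 1
        if self.confirmed == self.count:
            event.connection.close()

Container(UnsettledSender("amqp://localhost:5672", "examples", 10000)).run()

Timing from first send to last accepted disposition then measures settled
end-to-end throughput rather than fire-and-forget throughput.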

-Ted

On Mon, Jul 17, 2017 at 3:23 AM, Hudalla Kai (INST/ECS4) <
kai.huda...@bosch-si.com> wrote:

> Thanks everybody for your help with this problem. I think that we will
> first need to isolate the problem further down to the lower layers.
> Currently, there are too many components involved AFAIC. We will now first
> create a test client just based on vertx-proton in order to rule out our
> Hono specific code on top of it.
>
> We will come back with the outcome of this once we have run some tests
> using the plain client.
>
> Mit freundlichen Grüßen / Best regards
>
> Kai Hudalla
> Chief Software Architect
>
> Bosch Software Innovations GmbH
> Schöneberger Ufer 89-91
> 10785 Berlin
> GERMANY
> www.bosch-si.com
>
> Registered office: Berlin, Register court: Amtsgericht Charlottenburg, HRB
> 148411 B;
> Executives: Dr.-Ing. Rainer Kallenbach, Michael Hahn
>
> 
> From: Ganesh Murthy 
> Sent: Friday, July 14, 2017 15:14
> To: users@qpid.apache.org
> Subject: Re: Dispatch Router throughput
>
> On Fri, Jul 14, 2017 at 8:43 AM, Gordon Sim  wrote:
>
> > On 14/07/17 13:27, Hudalla Kai (INST/ECS4) wrote:
> >
> >> Not yet, but I can certainly turn it on.
> >>
> >
> > No! I was checking it was off as the full logging can slow things down a
> > lot (especially logging to a terminal).
> >
> > Is your test client available somewhere? I'm a bit puzzled as to what is
> > going on and perhaps if I run it we can at least figure out whether its
> the
> > client or the router env that is causing the issue.
>
> Along with the test client, can you also please send your router config
> file? Thanks.
>
> >
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
> >
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>

