[jira] [Commented] (PROTON-1800) BlockingConnection descriptor leak

2018-04-03 Thread Cliff Jansen (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16425053#comment-16425053
 ] 

Cliff Jansen commented on PROTON-1800:
--

As a workaround, adding

    client.receiver = None

in the finally clause works for me.
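For context, a minimal stand-in for why clearing the attribute in a finally clause can matter (this is not the real proton API; `Client` and `Receiver` here are hypothetical, and the assumption that the leak stems from a reference cycle is mine): if the client and its receiver reference each other, the descriptor-holding objects survive until a cycle-collector pass instead of being freed promptly by CPython's reference counting.

```python
import gc
import weakref

class Receiver:
    """Hypothetical stand-in for the object that holds the descriptor."""
    def __init__(self, client):
        self.client = client  # back-reference: creates a reference cycle

class Client:
    """Hypothetical stand-in for the sync_client wrapper."""
    def __init__(self):
        self.receiver = Receiver(self)

def run_once(apply_workaround):
    """Return True if the Client was freed as soon as it went out of scope."""
    gc.disable()  # keep the cycle collector out so the outcome is deterministic
    try:
        client = Client()
        probe = weakref.ref(client)
        try:
            pass  # ... perform the blocking request here ...
        finally:
            if apply_workaround:
                client.receiver = None  # break the cycle (the workaround above)
        del client
        return probe() is None  # freed immediately by reference counting?
    finally:
        gc.collect()  # reclaim any cycle left behind by the no-workaround case
        gc.enable()
```

Without the workaround, `run_once(False)` reports the object still alive after `del`, so cleanup of anything it holds (such as a descriptor) is deferred to the collector; `run_once(True)` reports prompt reclamation.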

> BlockingConnection descriptor leak
> --
>
> Key: PROTON-1800
> URL: https://issues.apache.org/jira/browse/PROTON-1800
> Project: Qpid Proton
>  Issue Type: Bug
>Affects Versions: proton-c-0.21.0
>Reporter: Andy Smith
>Priority: Major
> Attachments: sync_client.py
>
>
> Modified a collectd Python plugin from using a persistent connection to a 
> connection per read. Following the change, detected a file descriptor leak.
> The attached modification to sync_client.py exhibits the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-918) Improve router config consistency and metadata

2018-04-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424798#comment-16424798
 ] 

ASF subversion and git services commented on DISPATCH-918:
--

Commit f36c90335c3fdaff49472ade1a7acd3077aaa56b in qpid-dispatch's branch 
refs/heads/master from [~ganeshmurthy]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=f36c903 ]

DISPATCH-918 - Deprecated some attributes of the sslProfile entity and 
introduced replacements with clearer names


> Improve router config consistency and metadata
> --
>
> Key: DISPATCH-918
> URL: https://issues.apache.org/jira/browse/DISPATCH-918
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Management Agent
>Reporter: Justin Ross
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> Proposed changes from review.  The items marked PRIO1 are more important.  
> All changes must be backward-compatible.
> [https://docs.google.com/spreadsheets/d/14ugjxlc-ETYZXwN9eWD-D1YWrRAfydj9EJNmyUaZrD0/edit?usp=sharing]
> This also includes flags we'd like to get added to the metadata so we can 
> generate better docs from it.






[jira] [Commented] (DISPATCH-952) qdrouterd seg fault after reporting "too many sessions"

2018-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424768#comment-16424768
 ] 

ASF GitHub Bot commented on DISPATCH-952:
-

Github user ganeshmurthy closed the pull request at:

https://github.com/apache/qpid-dispatch/pull/274


> qdrouterd seg fault after reporting "too many sessions"
> ---
>
> Key: DISPATCH-952
> URL: https://issues.apache.org/jira/browse/DISPATCH-952
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Alan Conway
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> Reported at [https://bugzilla.redhat.com/show_bug.cgi?id=1561876]
>  
> {code:java}
> Currently running Satellite 6.3 with 5K clients. The clients are managed by 2 
> capsules:
> Capsule 1: 3K clients
> Capsule 2: 2K clients
> Logs from Capsule 1:
> [root@c02-h10-r620-vm1 ~]# journalctl | grep qdrouterd
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: group added to /etc/group: name=qdrouterd, GID=993
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: group added to /etc/gshadow: name=qdrouterd
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: new group: name=qdrouterd, GID=993
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> useradd[19145]: new user: name=qdrouterd, UID=996, GID=993, 
> home=/var/lib/qdrouterd, shell=/sbin/nologin
> Mar 28 10:39:06 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> qdrouterd[16084]: [0x7fe3f0016aa0]:pn_session: too many sessions: 32768  
> channel_max is 32767
> Mar 28 10:39:06 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com kernel: 
> qdrouterd[16087]: segfault at 88 ip 7fe40b79d820 sp 7fe3fd5f9298 
> error 6 in libqpid-proton.so.10.0.0[7fe40b77f000+4b000]
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service: main process exited, code=killed, 
> status=11/SEGV
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: Unit qdrouterd.service entered failed state.
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service failed.
> Mar 29 01:02:09 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> /usr/sbin/katello-service[1740]: *** status failed: qdrouterd ***
> Logs from Capsule 2:
> [root@c02-h10-r620-vm2 ~]# systemctl status qdrouterd
> ● qdrouterd.service - Qpid Dispatch router daemon
>Loaded: loaded (/usr/lib/systemd/system/qdrouterd.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /etc/systemd/system/qdrouterd.service.d
>└─limits.conf
>Active: failed (Result: signal) since Wed 2018-03-28 10:58:02 EDT; 14h ago
>   Process: 1158 ExecStart=/usr/sbin/qdrouterd -c 
> /etc/qpid-dispatch/qdrouterd.conf (code=killed, signal=SEGV)
>  Main PID: 1158 (code=killed, signal=SEGV)
> Mar 28 07:38:46 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Started Qpid Dispatch router daemon.
> Mar 28 07:38:46 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Starting Qpid Dispatch router daemon...
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:unable to find an open available channel 
> within limit of 32767
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:process error -2
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:pn_session: too many sessions: 32768  
> channel_max is 32767
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service: main process exited, code=killed, 
> status=11/SEGV
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Unit qdrouterd.service entered failed state.
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service failed.
> {code}






[GitHub] qpid-dispatch pull request #274: DISPATCH-952 - Limit number of sessions to ...

2018-04-03 Thread ganeshmurthy
Github user ganeshmurthy closed the pull request at:

https://github.com/apache/qpid-dispatch/pull/274





[jira] [Created] (QPID-8154) Website updates Q2 2018

2018-04-03 Thread Justin Ross (JIRA)
Justin Ross created QPID-8154:
-

 Summary: Website updates Q2 2018
 Key: QPID-8154
 URL: https://issues.apache.org/jira/browse/QPID-8154
 Project: Qpid
  Issue Type: Task
  Components: Website
Reporter: Justin Ross
Assignee: Justin Ross









[jira] [Closed] (QPID-8015) Website updates Q4 2017

2018-04-03 Thread Justin Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Ross closed QPID-8015.
-
Resolution: Done

> Website updates Q4 2017
> ---
>
> Key: QPID-8015
> URL: https://issues.apache.org/jira/browse/QPID-8015
> Project: Qpid
>  Issue Type: Task
>  Components: Website
>Reporter: Justin Ross
>Assignee: Justin Ross
>Priority: Major
>







[jira] [Closed] (QPID-8048) Qpid C++ 1.38.0 release tasks

2018-04-03 Thread Justin Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Ross closed QPID-8048.
-
Resolution: Done

> Qpid C++ 1.38.0 release tasks
> -
>
> Key: QPID-8048
> URL: https://issues.apache.org/jira/browse/QPID-8048
> Project: Qpid
>  Issue Type: Task
>  Components: C++ Broker, C++ Client
>Reporter: Justin Ross
>Assignee: Justin Ross
>Priority: Major
> Fix For: qpid-cpp-1.38.0
>
>







[jira] [Resolved] (PROTON-1806) Release qpid_proton gem with heartbeating implemented

2018-04-03 Thread Justin Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Ross resolved PROTON-1806.
-
Resolution: Done

> Release qpid_proton gem with heartbeating implemented
> -
>
> Key: PROTON-1806
> URL: https://issues.apache.org/jira/browse/PROTON-1806
> Project: Qpid Proton
>  Issue Type: Wish
>  Components: ruby-binding
>Reporter: Miha Plesko
>Assignee: Justin Ross
>Priority: Major
>
> We use the qpid_proton gem in Red Hat CloudForms to capture events from an ActiveMQ 
> queue. Recently we discovered a blocking bug in this gem which results in the 
> client connection being disconnected every 2-3 minutes, accompanied by a file 
> descriptor leak as described here:
> https://issues.apache.org/jira/browse/PROTON-1782 and
> https://issues.apache.org/jira/browse/PROTON-1791
> Both issues are fixed by now and merged into qpid_proton's master branch; many 
> thanks to [~aconway], [~astitcher], [~jdanek] for the amazing response time.
>  
> I'm opening this ticket to discuss release plans for the qpid_proton gem. 
> CloudForms is about to be released the first week of April, and our initial plan 
> was to monkey-patch the gem to include those fixes. It turns out, however, 
> that the changes modify the gem's core too much to allow monkey-patching, hence we 
> decided to wait for you to perform the next release.
>  
> Q: Do you happen to know the specific date when the new gem version is 
> supposed to be released? I remember we mentioned "within a couple of weeks" here 
> https://issues.apache.org/jira/browse/PROTON-1782?focusedCommentId=16401841&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16401841
>  , but we need as good an estimate as possible since we need to 
> communicate to customers how long they will need to wait.
>  
> Thanks for your great work; looking forward to hearing from you!






[jira] [Updated] (PROTON-1806) Release qpid_proton gem with heartbeating implemented

2018-04-03 Thread Justin Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Ross updated PROTON-1806:

Fix Version/s: proton-c-0.22.0

> Release qpid_proton gem with heartbeating implemented
> -
>
> Key: PROTON-1806
> URL: https://issues.apache.org/jira/browse/PROTON-1806
> Project: Qpid Proton
>  Issue Type: Wish
>  Components: ruby-binding
>Reporter: Miha Plesko
>Assignee: Justin Ross
>Priority: Major
> Fix For: proton-c-0.22.0
>
>






[jira] [Commented] (DISPATCH-952) qdrouterd seg fault after reporting "too many sessions"

2018-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424576#comment-16424576
 ] 

ASF GitHub Bot commented on DISPATCH-952:
-

Github user ted-ross commented on the issue:

https://github.com/apache/qpid-dispatch/pull/274
  
@ganeshmurthy I agree with @alanconway.  Let's simplify this and use a single 
session for all outgoing links for all connection roles.


> qdrouterd seg fault after reporting "too many sessions"
> ---
>
> Key: DISPATCH-952
> URL: https://issues.apache.org/jira/browse/DISPATCH-952
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Alan Conway
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>






[GitHub] qpid-dispatch issue #274: DISPATCH-952 - Limit number of sessions to one on ...

2018-04-03 Thread ted-ross
Github user ted-ross commented on the issue:

https://github.com/apache/qpid-dispatch/pull/274
  
@ganeshmurthy I agree with @alanconway.  Let's simplify this and use a single 
session for all outgoing links for all connection roles.





[jira] [Commented] (DISPATCH-918) Improve router config consistency and metadata

2018-04-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424535#comment-16424535
 ] 

ASF subversion and git services commented on DISPATCH-918:
--

Commit 81fdd61fd2bf06884ce903055a1dce16dc91cf3d in qpid-dispatch's branch 
refs/heads/master from [~ganeshmurthy]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=81fdd61 ]

DISPATCH-918 - Deprecated some attributes of the log entities and 
introduced replacements with clearer names


> Improve router config consistency and metadata
> --
>
> Key: DISPATCH-918
> URL: https://issues.apache.org/jira/browse/DISPATCH-918
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Management Agent
>Reporter: Justin Ross
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> Proposed changes from review.  The items marked PRIO1 are more important.  
> All changes must be backward-compatible.
> [https://docs.google.com/spreadsheets/d/14ugjxlc-ETYZXwN9eWD-D1YWrRAfydj9EJNmyUaZrD0/edit?usp=sharing]
> This also includes flags we'd like to get added to the metadata so we can 
> generate better docs from it.






[jira] [Commented] (DISPATCH-872) Add a counter for dropped-presettleds on links

2018-04-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424518#comment-16424518
 ] 

ASF subversion and git services commented on DISPATCH-872:
--

Commit 45f8833a0b079afd8e54db9d92a6f877ce5a78c8 in qpid-dispatch's branch 
refs/heads/master from [~tr...@redhat.com]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=45f8833 ]

DISPATCH-872 - Fixed misalignment of column names/values in qdstat -lv


> Add a counter for dropped-presettleds on links
> --
>
> Key: DISPATCH-872
> URL: https://issues.apache.org/jira/browse/DISPATCH-872
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Reporter: Ted Ross
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> When the router drops pre-settled deliveries during congestion, those drops 
> are not counted or reported.  A new counter should be added for links to 
> account for dropped presettled deliveries.






[jira] [Commented] (QPIDJMS-373) Support for OAuth flow and setting of "Authorization" Header for WS upgrade request

2018-04-03 Thread Michael Bolz (JIRA)

[ 
https://issues.apache.org/jira/browse/QPIDJMS-373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424511#comment-16424511
 ] 

Michael Bolz commented on QPIDJMS-373:
--

Hi [~gemmellr],

Thanks a lot for the clarification.

As explained, my first thought was to extend 
{{org.apache.qpid.jms.transports.netty.NettyWsTransport.NettyWebSocketTransportHandler}}
 because the _OAuth+AuthHeader_ combination seems to fit there, and in my case 
OAuth would only be used in combination with WS.
But I agree that the OAuth part could live in its own {{OAuthTransport}} (which 
requests the token for the Authorization header), with {{NettyWsTransport}} 
only extended with an option to set the "Authorization" header value.
As the transport key 
({{org.apache.qpid.jms.transports.TransportFactory#findTransportFactory}}) I 
would propose a new scheme "{{amqpws+oauth}}" (which should be compliant with 
[rfc3986|https://tools.ietf.org/html/rfc3986#section-3.1]), so it is clear 
when the new {{OAuthTransport}} is used.

If this approach sounds reasonable to you, I will create another POC along 
these lines for further discussion and feedback.
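The scheme-keyed lookup described above can be sketched as follows (in Python rather than the client's Java, with a hypothetical registry standing in for {{findTransportFactory}}; the factory names are illustrative only):

```python
from urllib.parse import urlparse

# Hypothetical registry keyed by URI scheme, standing in for
# TransportFactory#findTransportFactory: "amqpws+oauth" selects a
# websocket transport that first obtains an OAuth token, while the
# existing "amqpws" scheme keeps its current behaviour.
TRANSPORTS = {
    "amqpws": "plain websocket transport",
    "amqpws+oauth": "websocket transport with OAuth Authorization header",
}

def find_transport(uri):
    """Pick a transport by URI scheme; RFC 3986 allows '+' inside a scheme."""
    scheme = urlparse(uri).scheme
    try:
        return TRANSPORTS[scheme]
    except KeyError:
        raise ValueError(f"no transport factory registered for {scheme!r}")
```

Because the selection happens purely on the scheme, existing URIs are untouched and opting into the OAuth behaviour is explicit in the connection URI.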


> Support for OAuth flow and setting of "Authorization" Header for WS upgrade 
> request
> ---
>
> Key: QPIDJMS-373
> URL: https://issues.apache.org/jira/browse/QPIDJMS-373
> Project: Qpid JMS
>  Issue Type: New Feature
>  Components: qpid-jms-client
>Reporter: Michael Bolz
>Priority: Major
>
> Add support for an OAuth flow ("client_credentials" and "password") and 
> setting the "Authorization" header during the WebSocket connection handshake.
> The "Authorization" header or OAuth settings should/could be set via the 
> "transport" parameters (TransportOptions).
>  
> As a PoC I created a [Fork|https://github.com/mibo/qpid-jms/tree/ws_add_header] 
> with one commit for [adding the Authorization 
> header|https://github.com/mibo/qpid-jms/commit/711052f0891556db0da6e7d68908b2f9dafadede]
>  and one commit for the [OAuth 
> flow|https://github.com/mibo/qpid-jms/commit/de70f0d3e4441358a239b3e776455201c133895d].
>  
> I hope this feature is not only interesting for me.
> If so, I will add the currently missing tests to my contribution and open a 
> pull request.
>  
> Regards, Michael






[jira] [Commented] (PROTON-1806) Release qpid_proton gem with heartbeating implemented

2018-04-03 Thread Justin Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424496#comment-16424496
 ] 

Justin Ross commented on PROTON-1806:
-

The 0.22.0 GA gem is now available.

https://rubygems.org/gems/qpid_proton/versions/0.22.0

> Release qpid_proton gem with heartbeating implemented
> -
>
> Key: PROTON-1806
> URL: https://issues.apache.org/jira/browse/PROTON-1806
> Project: Qpid Proton
>  Issue Type: Wish
>  Components: ruby-binding
>Reporter: Miha Plesko
>Assignee: Justin Ross
>Priority: Major
>






[jira] [Commented] (DISPATCH-952) qdrouterd seg fault after reporting "too many sessions"

2018-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424358#comment-16424358
 ] 

ASF GitHub Bot commented on DISPATCH-952:
-

Github user ganeshmurthy commented on the issue:

https://github.com/apache/qpid-dispatch/pull/274
  
@ted-ross do you agree with @alanconway's comments? If yes, I will make a 
single session the default.


> qdrouterd seg fault after reporting "too many sessions"
> ---
>
> Key: DISPATCH-952
> URL: https://issues.apache.org/jira/browse/DISPATCH-952
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Alan Conway
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>






[GitHub] qpid-dispatch issue #274: DISPATCH-952 - Limit number of sessions to one on ...

2018-04-03 Thread ganeshmurthy
Github user ganeshmurthy commented on the issue:

https://github.com/apache/qpid-dispatch/pull/274
  
@ted-ross do you agree with @alanconway's comments? If yes, I will make a 
single session the default.





[jira] [Commented] (DISPATCH-952) qdrouterd seg fault after reporting "too many sessions"

2018-04-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424352#comment-16424352
 ] 

ASF GitHub Bot commented on DISPATCH-952:
-

Github user alanconway commented on the issue:

https://github.com/apache/qpid-dispatch/pull/274
  
Why not simply make single-session the default and (for now) the only 
behaviour? I can't see any benefit to multiple sessions, in particular not 
session-per-link: AMQP is designed on the assumption of many links per 
session. So why leave this unhelpful behaviour lying around? If there's some 
future use case that requires it, we can add it then, when we have some 
context where it is useful.


> qdrouterd seg fault after reporting "too many sessions"
> ---
>
> Key: DISPATCH-952
> URL: https://issues.apache.org/jira/browse/DISPATCH-952
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Alan Conway
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> Reported at [https://bugzilla.redhat.com/show_bug.cgi?id=1561876]
>  
> {code:java}
> Currently running Satellite 6.3 with 5K clients. The clients are managed by 2 
> capsules:
> Capsule 1: 3K clients
> Capsule 2: 2K clients
> Logs from Capsule 1:
> [root@c02-h10-r620-vm1 ~]# journalctl | grep qdrouterd
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: group added to /etc/group: name=qdrouterd, GID=993
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: group added to /etc/gshadow: name=qdrouterd
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: new group: name=qdrouterd, GID=993
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> useradd[19145]: new user: name=qdrouterd, UID=996, GID=993, 
> home=/var/lib/qdrouterd, shell=/sbin/nologin
> Mar 28 10:39:06 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> qdrouterd[16084]: [0x7fe3f0016aa0]:pn_session: too many sessions: 32768  
> channel_max is 32767
> Mar 28 10:39:06 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com kernel: 
> qdrouterd[16087]: segfault at 88 ip 7fe40b79d820 sp 7fe3fd5f9298 
> error 6 in libqpid-proton.so.10.0.0[7fe40b77f000+4b000]
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service: main process exited, code=killed, 
> status=11/SEGV
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: Unit qdrouterd.service entered failed state.
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service failed.
> Mar 29 01:02:09 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> /usr/sbin/katello-service[1740]: *** status failed: qdrouterd ***
> Logs from Capsule 2:
> [root@c02-h10-r620-vm2 ~]# systemctl status qdrouterd
> ● qdrouterd.service - Qpid Dispatch router daemon
>Loaded: loaded (/usr/lib/systemd/system/qdrouterd.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /etc/systemd/system/qdrouterd.service.d
>└─limits.conf
>Active: failed (Result: signal) since Wed 2018-03-28 10:58:02 EDT; 14h ago
>   Process: 1158 ExecStart=/usr/sbin/qdrouterd -c 
> /etc/qpid-dispatch/qdrouterd.conf (code=killed, signal=SEGV)
>  Main PID: 1158 (code=killed, signal=SEGV)
> Mar 28 07:38:46 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Started Qpid Dispatch router daemon.
> Mar 28 07:38:46 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Starting Qpid Dispatch router daemon...
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:unable to find an open available channel 
> within limit of 32767
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:process error -2
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:pn_session: too many sessions: 32768  
> channel_max is 32767
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service: main process exited, code=killed, 
> status=11/SEGV
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Unit qdrouterd.service entered failed state.
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service failed.
> {code}
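[Editor's note] The crash above stems from trying to open a 32768th session when AMQP channel numbers are capped at channel-max = 32767. A minimal, hypothetical sketch of channel allocation with an explicit cap (illustrative only, not the Proton implementation; the point is that callers must handle the "no channel available" case instead of assuming a session was created):

```python
CHANNEL_MAX = 32767  # limit reported in the router logs above


def next_channel(in_use, channel_max=CHANNEL_MAX):
    """Return the lowest free channel number, or None when the cap is hit.

    Returning None forces the caller to handle exhaustion explicitly,
    which is the failure mode behind the segfault reported above.
    """
    for ch in range(channel_max + 1):
        if ch not in in_use:
            return ch
    return None
```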









[jira] [Updated] (DISPATCH-843) Support for message groups

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-843:
--
Fix Version/s: (was: 1.2.0)
   Backlog

> Support for message groups
> --
>
> Key: DISPATCH-843
> URL: https://issues.apache.org/jira/browse/DISPATCH-843
> Project: Qpid Dispatch
>  Issue Type: New Feature
>  Components: Router Node
>Affects Versions: 0.8.0
>Reporter: Ken Giusti
>Priority: Major
> Fix For: Backlog
>
>
> Currently dispatch router ignores the group-id, group-sequence, and 
> reply-to-group-id in the Message's property header.  This means the router 
> does not consider grouping constraints when it determines the route for any 
> given message.
> This JIRA tracks the design and implementation of message group aware routing 
> support in the Qpid Dispatch Router.
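[Editor's note] As a sketch of what group-aware routing could mean here, the following pins each group-id to a single consumer so ordering within a group is preserved. This is purely illustrative; the ticket only tracks the design, and a real implementation would also handle consumer loss and group-sequence:

```python
def pick_consumer(group_id, consumers, assignments):
    """Route every message of one group to the same consumer (sticky assignment)."""
    if group_id not in assignments:
        # Simple round-robin placement for groups seen for the first time.
        assignments[group_id] = consumers[len(assignments) % len(consumers)]
    return assignments[group_id]
```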






[jira] [Updated] (DISPATCH-817) Honor TTL on unroutable messages

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-817:
--
Fix Version/s: (was: 1.2.0)
   Backlog

> Honor TTL on unroutable messages
> 
>
> Key: DISPATCH-817
> URL: https://issues.apache.org/jira/browse/DISPATCH-817
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 0.8.0
>Reporter: Ganesh Murthy
>Priority: Major
> Fix For: Backlog
>
>
> Consider the following scenario - 
> - A receiver connects to the router attaching on an address.
> - A sender connects to the router attaching to the same address, the router 
> gives credit so that the sender can start sending messages.
> - The sender starts sending messages and the receiver suddenly drops off.
> - If the message has not been routed (i.e. it's in the link buffer of the 
> sender's link), it will remain in this buffer until there is a consumer 
> available to receive it or until the sender disconnects.
> - If the sender stays connected and the receiver never shows up again, these 
> unroutable messages stay in the sender link's link buffer forever. In this 
> case the router must check for the TTL on these deliveries and when it 
> expires, the delivery should be settled with RELEASED
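[Editor's note] The proposed behaviour can be modelled as a periodic sweep over the link buffer that settles expired deliveries with RELEASED. A hypothetical sketch (the dict shape and field names are illustrative, not the router's data structures):

```python
def sweep_expired(link_buffer, now):
    """Split buffered deliveries into (kept, to_settle_as_RELEASED)."""
    kept, released = [], []
    for d in link_buffer:
        expiry = d.get("expiry")  # absolute expiry computed from TTL at arrival
        if expiry is not None and expiry <= now:
            released.append(d)    # settle with the RELEASED outcome
        else:
            kept.append(d)        # no TTL, or not yet expired
    return kept, released
```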






[jira] [Updated] (DISPATCH-854) Remove shut-down memory leaks

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-854:
--
Fix Version/s: (was: 1.2.0)
   Backlog

> Remove shut-down memory leaks
> -
>
> Key: DISPATCH-854
> URL: https://issues.apache.org/jira/browse/DISPATCH-854
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Ted Ross
>Priority: Major
> Fix For: Backlog
>
>
> Remove the valgrind issues that are caused by allocated memory being left 
> unfreed at process shutdown.  This will be easier when the "container" layer 
> is removed and router-node is integrated directly with the Proton Proactor.






[jira] [Updated] (DISPATCH-863) Excessive locking between message receive and send

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-863:
--
Fix Version/s: (was: 1.2.0)
   Backlog

> Excessive locking between message receive and send
> --
>
> Key: DISPATCH-863
> URL: https://issues.apache.org/jira/browse/DISPATCH-863
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Router Node
>Affects Versions: 1.0.0
>Reporter: Chuck Rolke
>Priority: Major
> Fix For: Backlog
>
>
> Support for streaming messages (commit c9262728) introduced locking between 
> message receive and message send.
> Before streaming:
> * message_receive creates the message buffer list with no locks
> * message_send sends messages with no locks
> * When all copies of the message are sent then the last sender deletes the 
> message content including all buffers
> With streaming:
> * message_receive takes the content lock (per buffer) as each buffer is added
> * message_send takes the content lock (per buffer) as each buffer is 
> consumed. This happens once for each message copy.
> * message_send possibly frees the buffer if all message copies have sent that 
> buffer.
> * When all copies of the message are sent then the last sender deletes the 
> message content. All buffers are already freed.
> There may be a problem with all those locks:
> * *Lock ownership* If the message is being streamed to N destinations then 
> the buffer lock will have contention from N I/O threads and the mutex 
> ownership will surely bounce from thread to thread.
> * *Lock per buffer add/remove* If a 1 Mbyte message is streamed to N 
> destinations then the buffer lock will be taken 2000 times by message_receive 
> and (N * 2000) times by message_send.
> With some careful design lock usage could be greatly reduced or even 
> eliminated.
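[Editor's note] The lock counts quoted above follow directly from the buffer size: a 1 MByte message split into roughly 512-byte buffers gives about 2000 buffers (the 512-byte size is an assumption inferred from the ticket's figures). A small sketch of the arithmetic:

```python
def lock_acquisitions(msg_bytes, buf_bytes, n_destinations):
    """Count content-lock acquisitions under the per-buffer locking scheme."""
    buffers = msg_bytes // buf_bytes
    receive_locks = buffers               # one lock per buffer appended
    send_locks = n_destinations * buffers  # one lock per buffer per message copy
    return receive_locks, send_locks
```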






[jira] [Updated] (DISPATCH-863) Excessive locking between message receive and send

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-863:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Excessive locking between message receive and send
> --
>
> Key: DISPATCH-863
> URL: https://issues.apache.org/jira/browse/DISPATCH-863
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Router Node
>Affects Versions: 1.0.0
>Reporter: Chuck Rolke
>Priority: Major
> Fix For: 1.2.0
>
>
> Support for streaming messages (commit c9262728) introduced locking between 
> message receive and message send.
> Before streaming:
> * message_receive creates the message buffer list with no locks
> * message_send sends messages with no locks
> * When all copies of the message are sent then the last sender deletes the 
> message content including all buffers
> With streaming:
> * message_receive takes the content lock (per buffer) as each buffer is added
> * message_send takes the content lock (per buffer) as each buffer is 
> consumed. This happens once for each message copy.
> * message_send possibly frees the buffer if all message copies have sent that 
> buffer.
> * When all copies of the message are sent then the last sender deletes the 
> message content. All buffers are already freed.
> There may be a problem with all those locks:
> * *Lock ownership* If the message is being streamed to N destinations then 
> the buffer lock will have contention from N I/O threads and the mutex 
> ownership will surely bounce from thread to thread.
> * *Lock per buffer add/remove* If a 1 Mbyte message is streamed to N 
> destinations then the buffer lock will be taken 2000 times by message_receive 
> and (N * 2000) times by message_send.
> With some careful design lock usage could be greatly reduced or even 
> eliminated.






[jira] [Updated] (DISPATCH-854) Remove shut-down memory leaks

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-854:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Remove shut-down memory leaks
> -
>
> Key: DISPATCH-854
> URL: https://issues.apache.org/jira/browse/DISPATCH-854
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Ted Ross
>Priority: Major
> Fix For: 1.2.0
>
>
> Remove the valgrind issues that are caused by allocated memory being left 
> unfreed at process shutdown.  This will be easier when the "container" layer 
> is removed and router-node is integrated directly with the Proton Proactor.






[jira] [Updated] (DISPATCH-845) Allow connecting containers to declare their availability for link routes

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-845:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Allow connecting containers to declare their availability for link routes
> -
>
> Key: DISPATCH-845
> URL: https://issues.apache.org/jira/browse/DISPATCH-845
> Project: Qpid Dispatch
>  Issue Type: New Feature
>  Components: Router Node
>Reporter: Ted Ross
>Assignee: Ted Ross
>Priority: Major
> Fix For: 1.2.0
>
>
> In the case where a container wishes to connect to a router network and 
> accept incoming routed link attaches (i.e. become a destination for link 
> routes), it is now quite complicated to do so.  First, the connected router 
> must be configured with a listener in the route-container role.  Second, 
> there must be linkRoute objects configured for each prefix or pattern for the 
> connected container.
> A more efficient mechanism for dynamic/ephemeral link routes can be supported 
> as follows:
> * A container opening a connection to the router may provide a connection 
> property that contains a list of prefixes and/or patterns for link routes.
> * During the lifecycle of that connection, the router maintains active 
> link-route addresses targeting that container.
> This feature allows for lightweight establishment of link-route destinations 
> without the need for connection roles and configured link-routes with 
> independently managed lifecycles (active, inactive, etc.).
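[Editor's note] A hypothetical shape for the proposed connection property; the key names below are illustrative, since the ticket does not fix them:

```python
# What a connecting container might send in its Open-frame properties
# (key names are assumptions, not a defined Dispatch interface).
link_route_declaration = {
    "qd.link-route-prefixes": ["queue.", "topic."],
    "qd.link-route-patterns": ["orders.*.eu"],
}


def advertised_addresses(props):
    """Collect every prefix and pattern a connecting container declares."""
    return (props.get("qd.link-route-prefixes", [])
            + props.get("qd.link-route-patterns", []))
```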






[jira] [Updated] (DISPATCH-939) Router aborts transfer, closes connection with error on QIT amqp_large_contnet_test

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-939:
--
Component/s: Container

> Router aborts transfer, closes connection with error on QIT 
> amqp_large_contnet_test
> ---
>
> Key: DISPATCH-939
> URL: https://issues.apache.org/jira/browse/DISPATCH-939
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Kim van der Riet
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.2.0
>
> Attachments: dispatch.multiframe.06.pcapng, qdrouterd.conf
>
>
> The Qpid Interop Test large content test repeatedly fails when run against a 
> single-node dispatch router (dispatch config file attached).
> The test that reproduces this most readily but without too much length is the 
> following:
> {noformat}
> python -m qpid_interop_test.amqp_large_content_test --include-shim ProtonCpp 
> --include-type list
> WARNING: Rhea Javascript shims not found
> Test Broker: qpid-dispatch-router v.1.0.0 on 
> test_list_ProtonCpp->ProtonCpp (__main__.ListTestCase) ... ERROR
> ==
> ERROR: test_list_ProtonCpp->ProtonCpp (__main__.ListTestCase)
> --
> Traceback (most recent call last):
> File 
> "/home/kvdr/RedHat/install/lib/python2.7/site-packages/qpid_interop_test/amqp_large_content_test.py",
>  line 196, in inner_test_method
> timeout)
> File 
> "/home/kvdr/RedHat/install/lib/python2.7/site-packages/qpid_interop_test/amqp_large_content_test.py",
>  line 121, in run_test
> raise InteropTestError('Receive shim \'%s\':\n%s' % (receive_shim.NAME, 
> receive_obj))
> InteropTestError: Receive shim 'ProtonCpp':
> amqp_large_content_test receiver error: receiver read failure
> --
> Ran 1 test in 0.801s
> FAILED (errors=1)
> {noformat}
> The router left the following message:
> {noformat}
> SERVER (info) Connection from ::1:57020 (to ::1:amqp) failed: 
> amqp:connection:framing-error connection aborted{noformat}
> The attached capture file shows a typical observable sequence of events on 
> the wire that lead up to the failure. The test that created this error 
> consists of 4 messages:
>  # A 1MB message consisting of a list containing a single 1MB string 
> (delivery-id 0);
>  # A 1MB message consisting of a list containing 16 64kB strings (delivery-id 
> 1);
>  # A 10MB message consisting of a list containing a single 10MB string 
> (delivery-id 2);
>  # A 10MB message consisting of a list containing 16 655kB strings 
> (delivery-id 3).
> The following is a summary of what transpires:
>  * *Frame 1527:* The sender completes sending the last message (delivery-id 
> 3) to the router, and closes the connection (without waiting for 
> dispositions). At this point, the receiver is in the process of being sent 
> message-id 2.
>  * *Frame 1539:* Last transfer for message-id 2 from dispatch to receiver.
>  * *Frame 1545 - 1598:* Message-id 3 starts being sent to receiver.
>  * *Frame 1600:* Dispatch router returns close to sender (initiated in *frame 
> 1527*). No errors or arguments.
>  * *Frame 1605:* Receiver sends flow, disposition for delivery-id 2 
> (completed in *frame 1539*).
>  * *Frame 1607 - 1618:* Continue to send message with delivery-id 3 to 
> receiver.
>  * *Frame 1619:* Transfer aborted. A transfer performative with more=False, 
> Settled=True, Aborted=True.
>  * *Frame 1622:* Close sent from Dispatch to Receiver with Condition: 
> {{amqp:connection:framing-error}}, Description: {{connection aborted}}.
> All instances of this error have occurred between dispatch router and the 
> receiver once the sender has closed its connection.
>  






[jira] [Updated] (DISPATCH-930) Edge Router mode for improved scale and efficiency

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-930:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Edge Router mode for improved scale and efficiency
> --
>
> Key: DISPATCH-930
> URL: https://issues.apache.org/jira/browse/DISPATCH-930
> Project: Qpid Dispatch
>  Issue Type: New Feature
>  Components: Router Node
>Reporter: Ted Ross
>Assignee: Ted Ross
>Priority: Major
> Fix For: 1.2.0
>
> Attachments: EdgeRouter.pdf
>
>
> Introduce a new router operating mode called "edge" that allows a router to 
> join a network with a single uplink to an "interior" router.  Such routers 
> can be proliferated without limit and allow for greatly increased network 
> size.
>  






[jira] [Updated] (DISPATCH-939) Router aborts transfer, closes connection with error on QIT amqp_large_contnet_test

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-939:
--
Fix Version/s: (was: 1.1.0)
   1.2.0

> Router aborts transfer, closes connection with error on QIT 
> amqp_large_contnet_test
> ---
>
> Key: DISPATCH-939
> URL: https://issues.apache.org/jira/browse/DISPATCH-939
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Kim van der Riet
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.2.0
>
> Attachments: dispatch.multiframe.06.pcapng, qdrouterd.conf
>
>
> The Qpid Interop Test large content test repeatedly fails when run against a 
> single-node dispatch router (dispatch config file attached).
> The test that reproduces this most readily but without too much length is the 
> following:
> {noformat}
> python -m qpid_interop_test.amqp_large_content_test --include-shim ProtonCpp 
> --include-type list
> WARNING: Rhea Javascript shims not found
> Test Broker: qpid-dispatch-router v.1.0.0 on 
> test_list_ProtonCpp->ProtonCpp (__main__.ListTestCase) ... ERROR
> ==
> ERROR: test_list_ProtonCpp->ProtonCpp (__main__.ListTestCase)
> --
> Traceback (most recent call last):
> File 
> "/home/kvdr/RedHat/install/lib/python2.7/site-packages/qpid_interop_test/amqp_large_content_test.py",
>  line 196, in inner_test_method
> timeout)
> File 
> "/home/kvdr/RedHat/install/lib/python2.7/site-packages/qpid_interop_test/amqp_large_content_test.py",
>  line 121, in run_test
> raise InteropTestError('Receive shim \'%s\':\n%s' % (receive_shim.NAME, 
> receive_obj))
> InteropTestError: Receive shim 'ProtonCpp':
> amqp_large_content_test receiver error: receiver read failure
> --
> Ran 1 test in 0.801s
> FAILED (errors=1)
> {noformat}
> The router left the following message:
> {noformat}
> SERVER (info) Connection from ::1:57020 (to ::1:amqp) failed: 
> amqp:connection:framing-error connection aborted{noformat}
> The attached capture file shows a typical observable sequence of events on 
> the wire that lead up to the failure. The test that created this error 
> consists of 4 messages:
>  # A 1MB message consisting of a list containing a single 1MB string 
> (delivery-id 0);
>  # A 1MB message consisting of a list containing 16 64kB strings (delivery-id 
> 1);
>  # A 10MB message consisting of a list containing a single 10MB string 
> (delivery-id 2);
>  # A 10MB message consisting of a list containing 16 655kB strings 
> (delivery-id 3).
> The following is a summary of what transpires:
>  * *Frame 1527:* The sender completes sending the last message (delivery-id 
> 3) to the router, and closes the connection (without waiting for 
> dispositions). At this point, the receiver is in the process of being sent 
> message-id 2.
>  * *Frame 1539:* Last transfer for message-id 2 from dispatch to receiver.
>  * *Frame 1545 - 1598:* Message-id 3 starts being sent to receiver.
>  * *Frame 1600:* Dispatch router returns close to sender (initiated in *frame 
> 1527*). No errors or arguments.
>  * *Frame 1605:* Receiver sends flow, disposition for delivery-id 2 
> (completed in *frame 1539*).
>  * *Frame 1607 - 1618:* Continue to send message with delivery-id 3 to 
> receiver.
>  * *Frame 1619:* Transfer aborted. A transfer performative with more=False, 
> Settled=True, Aborted=True.
>  * *Frame 1622:* Close sent from Dispatch to Receiver with Condition: 
> {{amqp:connection:framing-error}}, Description: {{connection aborted}}.
> All instances of this error have occurred between dispatch router and the 
> receiver once the sender has closed its connection.
>  






[jira] [Updated] (DISPATCH-939) Router aborts transfer, closes connection with error on QIT amqp_large_contnet_test

2018-04-03 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-939:
--
Priority: Major  (was: Critical)

> Router aborts transfer, closes connection with error on QIT 
> amqp_large_contnet_test
> ---
>
> Key: DISPATCH-939
> URL: https://issues.apache.org/jira/browse/DISPATCH-939
> Project: Qpid Dispatch
>  Issue Type: Bug
>Reporter: Kim van der Riet
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
> Attachments: dispatch.multiframe.06.pcapng, qdrouterd.conf
>
>
> The Qpid Interop Test large content test repeatedly fails when run against a 
> single-node dispatch router (dispatch config file attached).
> The test that reproduces this most readily but without too much length is the 
> following:
> {noformat}
> python -m qpid_interop_test.amqp_large_content_test --include-shim ProtonCpp 
> --include-type list
> WARNING: Rhea Javascript shims not found
> Test Broker: qpid-dispatch-router v.1.0.0 on 
> test_list_ProtonCpp->ProtonCpp (__main__.ListTestCase) ... ERROR
> ==
> ERROR: test_list_ProtonCpp->ProtonCpp (__main__.ListTestCase)
> --
> Traceback (most recent call last):
> File 
> "/home/kvdr/RedHat/install/lib/python2.7/site-packages/qpid_interop_test/amqp_large_content_test.py",
>  line 196, in inner_test_method
> timeout)
> File 
> "/home/kvdr/RedHat/install/lib/python2.7/site-packages/qpid_interop_test/amqp_large_content_test.py",
>  line 121, in run_test
> raise InteropTestError('Receive shim \'%s\':\n%s' % (receive_shim.NAME, 
> receive_obj))
> InteropTestError: Receive shim 'ProtonCpp':
> amqp_large_content_test receiver error: receiver read failure
> --
> Ran 1 test in 0.801s
> FAILED (errors=1)
> {noformat}
> The router left the following message:
> {noformat}
> SERVER (info) Connection from ::1:57020 (to ::1:amqp) failed: 
> amqp:connection:framing-error connection aborted{noformat}
> The attached capture file shows a typical observable sequence of events on 
> the wire that lead up to the failure. The test that created this error 
> consists of 4 messages:
>  # A 1MB message consisting of a list containing a single 1MB string 
> (delivery-id 0);
>  # A 1MB message consisting of a list containing 16 64kB strings (delivery-id 
> 1);
>  # A 10MB message consisting of a list containing a single 10MB string 
> (delivery-id 2);
>  # A 10MB message consisting of a list containing 16 655kB strings 
> (delivery-id 3).
> The following is a summary of what transpires:
>  * *Frame 1527:* The sender completes sending the last message (delivery-id 
> 3) to the router, and closes the connection (without waiting for 
> dispositions). At this point, the receiver is in the process of being sent 
> message-id 2.
>  * *Frame 1539:* Last transfer for message-id 2 from dispatch to receiver.
>  * *Frame 1545 - 1598:* Message-id 3 starts being sent to receiver.
>  * *Frame 1600:* Dispatch router returns close to sender (initiated in *frame 
> 1527*). No errors or arguments.
>  * *Frame 1605:* Receiver sends flow, disposition for delivery-id 2 
> (completed in *frame 1539*).
>  * *Frame 1607 - 1618:* Continue to send message with delivery-id 3 to 
> receiver.
>  * *Frame 1619:* Transfer aborted. A transfer performative with more=False, 
> Settled=True, Aborted=True.
>  * *Frame 1622:* Close sent from Dispatch to Receiver with Condition: 
> {{amqp:connection:framing-error}}, Description: {{connection aborted}}.
> All instances of this error have occurred between dispatch router and the 
> receiver once the sender has closed its connection.
>  






[jira] [Commented] (DISPATCH-918) Improve router config consistency and metadata

2018-04-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424317#comment-16424317
 ] 

ASF subversion and git services commented on DISPATCH-918:
--

Commit dcae1f4338d4beed3b0b5f1a03a0bd313ba84f6f in qpid-dispatch's branch 
refs/heads/master from [~ganeshmurthy]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=dcae1f4 ]

DISPATCH-918 - Deprecated some listener/connector attributes and introduced 
replacements with much clearer names


> Improve router config consistency and metadata
> --
>
> Key: DISPATCH-918
> URL: https://issues.apache.org/jira/browse/DISPATCH-918
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Management Agent
>Reporter: Justin Ross
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> Proposed changes from review.  The items marked PRIO1 are more important.  
> All changes must be backward-compatible.
> [https://docs.google.com/spreadsheets/d/14ugjxlc-ETYZXwN9eWD-D1YWrRAfydj9EJNmyUaZrD0/edit?usp=sharing]
> This also includes flags we'd like to get added to the metadata so we can 
> generate better docs from it.






[jira] [Updated] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-8152:
-
Description: 
Newer versions of BDB JE register {{java.nio.file.WatchService}} to detect 
unexpected deletion of the data files owned by JE.  BDB registers a Watcher per 
Environment.  In Broker-J terms this amounts to one watcher per BDB backed 
virtualhost or virtualhost node.

Watchers consume operating system resources.  If the resources are exceeded, 
Broker-J will fail with an exception like this:

{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too 
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted

{noformat}



  was:
Newer versions of BDB JE register {{java.nio.file.WatchService}} to detect 
unexpected deletion of the data files owned by JE.

In some poorly configured environments the user inotify limit can be breached 
on virtual host/virtual host node start-up. This situation ends-up in exception 
like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too 
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted

{noformat}

Though the root of the problem is a poorly configured environment, it would be 
safer to disable file deletion detection watcher by default.


> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.0, qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Minor
>
> Newer versions of BDB JE register {{java.nio.file.WatchService}} to detect 
> unexpected deletion of the data files owned by JE.  BDB registers a Watcher 
> per Environment.  In Broker-J terms this amounts to one watcher per BDB 
> backed virtualhost or virtualhost node.
> Watchers consume operating system resources.  If the resources are exceeded, 
> Broker-J will fail with an exception like this:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or 
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Rob Godfrey (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424222#comment-16424222
 ] 

Rob Godfrey commented on QPID-8152:
---

I agree with [~k-wall]; it would seem better to document this than to turn off
the feature.  If we were trying to be really kind, we could try to identify the
exception when it is thrown and offer guidance in the log/exception message.

> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.0, qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Minor
>
> Newer versions of BDB JE register {{java.nio.file.WatchService}} to detect 
> unexpected deletion of the data files owned by JE.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would 
> be safer to disable file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8152) [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to avoid running into "user inotify limit"

2018-04-03 Thread Alex Rudyy (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424201#comment-16424201
 ] 

Alex Rudyy commented on QPID-8152:
--

The upper limit on the number of inotify instances per real user ID defaults to
128 on many Linux-based operating systems:
$ sysctl fs.inotify.max_user_instances
fs.inotify.max_user_instances = 128

Each BDB virtual host creates its own inotify service to detect unexpectedly
deleted files. Thus, if the number of virtual hosts across all brokers in the
environment exceeds 128, the issue manifests on start-up of the 129th and all
subsequent virtual hosts.

The max_user_instances limit can be raised if required. Alternatively, the BDB
JE watchers can be disabled by running brokers with
-Dje.log.detectFileDelete=false. The latter has no effect on broker
functionality: a broker with watchers disabled continues to operate as before.
Detection of unexpectedly deleted files is an auxiliary feature that allows
failing earlier rather than later. BDB JE files should never be deleted
manually on production systems (apart from cases where data is restored from
backup or the file system gets corrupted).
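The per-virtual-host cost described above can be sketched outside the broker: on Linux, every open {{java.nio.file.WatchService}} consumes one inotify instance, so N BDB-backed virtual hosts need N instances. A minimal illustration (the host count below is arbitrary, not taken from any broker configuration):

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.WatchService;
import java.util.ArrayList;
import java.util.List;

public class InotifyInstanceDemo {
    // Opens one WatchService per simulated virtual host, then releases them.
    // On Linux each open WatchService consumes one inotify instance; going
    // past fs.inotify.max_user_instances fails with "User limit of inotify
    // instances reached or too many open files".
    static int openAndClose(int hostCount) throws IOException {
        List<WatchService> watchers = new ArrayList<>();
        try {
            for (int i = 0; i < hostCount; i++) {
                watchers.add(FileSystems.getDefault().newWatchService());
            }
            return watchers.size();
        } finally {
            for (WatchService ws : watchers) {
                ws.close();  // releases the underlying inotify instance
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Created " + openAndClose(3) + " watch services");
    }
}
```

Closing each WatchService promptly, as in the finally block above, is what keeps the instance count bounded by the number of live virtual hosts.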



> [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to 
> avoid running into "user inotify limit" 
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-8152:
-
Priority: Minor  (was: Major)

> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.0, qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Minor
>
> Newer versions of BDB JE register {{java.nio.file.WatchService}} to detect 
> unexpected deletion of the data files owned by JE.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would 
> be safer to disable file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-8152:
-
Description: 
Newer versions of BDB JE register {{java.nio.file.WatchService}} to detect 
unexpected deletion of the data files owned by JE.

In some poorly configured environments the user inotify limit can be breached
on virtual host/virtual host node start-up. This situation ends up in an
exception like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted

{noformat}

Though the root of the problem is a poorly configured environment, it would be 
safer to disable file deletion detection watcher by default.

  was:
BDB JE registers a watcher service to detect unexpected deletion of its files.
In some poorly configured environments the user inotify limit can be breached
on virtual host/virtual host node start-up. This situation ends up in an
exception like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted

{noformat}

Though the root of the problem is a poorly configured environment, it would be
safer to disable the file deletion detection watcher by default.


> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.0, qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> Newer versions of BDB JE register {{java.nio.file.WatchService}} to detect 
> unexpected deletion of the data files owned by JE.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would 
> be safer to disable file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-8152:
-
Affects Version/s: qpid-java-broker-7.0.0

> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.0, qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Keith Wall (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424205#comment-16424205
 ] 

Keith Wall commented on QPID-8152:
--

Qpid Broker-J v7.0.0 and higher ship Berkeley DB JE 7.4.5.  This JE release
has a safety feature that monitors the data directory owned by BDB, checking
for external modification.  If external modification is detected, JE now fails
early.  To perform this check JE uses a {{WatchService}}, which imposes demands
on the operating system.  Users of Broker-J need to be aware of this new
environmental demand and ensure that the environment provides sufficient
resources.

I am not sure that turning off this useful feature is the right thing to do.
Surely it is better to document it?  If a user wants to turn off the feature,
this is possible via Broker-J by setting {{je.log.detectFileDelete}} to false.
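The safety feature boils down to registering the JE environment directory with a {{WatchService}} and reacting to ENTRY_DELETE events. A rough sketch of that mechanism follows (illustrative only, not JE's actual implementation; the .jdb file name is made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.concurrent.TimeUnit;

public class FileDeletionWatcher {
    // Waits for a deletion in dir and returns the deleted entry's name,
    // or null if nothing is deleted within timeoutMillis.
    static String awaitDeletion(Path dir, long timeoutMillis)
            throws IOException, InterruptedException {
        try (WatchService watcher = dir.getFileSystem().newWatchService()) {
            dir.register(watcher, StandardWatchEventKinds.ENTRY_DELETE);
            WatchKey key = watcher.poll(timeoutMillis, TimeUnit.MILLISECONDS);
            if (key == null) {
                return null;  // timed out, no deletion observed
            }
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.ENTRY_DELETE) {
                    return event.context().toString();  // relative file name
                }
            }
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("je-watch-demo");
        Path data = Files.createFile(dir.resolve("00000000.jdb"));  // made-up name
        // Delete the file from another thread while the watcher waits,
        // mimicking an external actor removing a JE data file.
        new Thread(() -> {
            try {
                Thread.sleep(200);
                Files.delete(data);
            } catch (Exception ignored) {
                // best effort in this sketch
            }
        }).start();
        System.out.println("Deleted: " + awaitDeletion(dir, 30_000));
    }
}
```

In JE's case the reaction to such an event is to invalidate the Environment and fail early, which is why the watcher is useful even though it costs an inotify instance.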

> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8152:
-
Summary: [Broker-J][BDB] Virtual host start-up fails with IOException: User 
limit of inotify instances reached or too many open files  (was: 
[Broker-J][BDB] Virtual host start-up fails with .IOException: User limit of 
inotify instances reached or too many open files)

> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8152.
--
Resolution: Won't Fix

> [Broker-J][BDB] Virtual host start-up fails with IOException: User limit of 
> inotify instances reached or too many open files
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Virtual host start-up fails with .IOException: User limit of inotify instances reached or too many open files

2018-04-03 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8152:
-
Issue Type: Bug  (was: Improvement)
   Summary: [Broker-J][BDB] Virtual host start-up fails with .IOException: 
User limit of inotify instances reached or too many open files  (was: 
[Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to 
avoid running into "user inotify limit" )

> [Broker-J][BDB] Virtual host start-up fails with .IOException: User limit of 
> inotify instances reached or too many open files
> -
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to avoid running into "user inotify limit"

2018-04-03 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8152:
-
Attachment: (was: inotifytest.tar.gz)

> [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to 
> avoid running into "user inotify limit" 
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPID-8153) [JMS AMQP 0-x] JMS AMQP 0-x should be able to send SNI as part of TLS handshake

2018-04-03 Thread Alex Rudyy (JIRA)
Alex Rudyy created QPID-8153:


 Summary: [JMS AMQP 0-x] JMS AMQP 0-x should be able to send SNI as 
part of TLS handshake
 Key: QPID-8153
 URL: https://issues.apache.org/jira/browse/QPID-8153
 Project: Qpid
  Issue Type: Improvement
  Components: JMS AMQP 0-x
Affects Versions: qpid-java-client-0-x-6.3.0
Reporter: Alex Rudyy
 Fix For: qpid-java-client-0-x-6.3.1


The Qpid JMS AMQP 0-x client does not provide SNI as part of the TLS handshake.
The client should be able to indicate which hostname it is attempting to
connect to by using the SNI TLS extension.
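For reference, with the standard JDK API the SNI host_name extension is attached via {{SSLParameters.setServerNames}} before the handshake starts; something along these lines would be needed inside the client (the hostname below is a placeholder, not an actual broker address):

```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import java.util.Collections;
import java.util.List;

public class SniDemo {
    // Returns SSLParameters carrying the SNI host_name extension value.
    static SSLParameters withSni(SSLParameters params, String hostname) {
        params.setServerNames(
                Collections.<SNIServerName>singletonList(new SNIHostName(hostname)));
        return params;
    }

    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLSocket socket = (SSLSocket) ctx.getSocketFactory().createSocket();
        // SNI must be set on the socket before the handshake begins;
        // "broker.example.com" is a placeholder hostname.
        socket.setSSLParameters(withSni(socket.getSSLParameters(), "broker.example.com"));
        List<SNIServerName> names = socket.getSSLParameters().getServerNames();
        System.out.println("SNI: " + ((SNIHostName) names.get(0)).getAsciiName());
        socket.close();
    }
}
```

The key constraint is ordering: the parameters must be applied before {{startHandshake()}} (or the first read/write), otherwise the extension is not sent.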



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to avoid running into "user inotify limit"

2018-04-03 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8152:
-
Description: 
BDB JE registers a watcher service to detect unexpected deletion of its files.
In some poorly configured environments the user inotify limit can be breached
on virtual host/virtual host node start-up. This situation ends up in an
exception like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted
{noformat}

Though the root of the problem is a poorly configured environment, it would be
safer to disable the file deletion detection watcher by default.

  was:
BDB JE registers a watcher service to detect unexpected deletion of its files.
In some poorly configured environments the user inotify limit can be breached
on virtual host/virtual host node start-up. This situation ends up in an
exception like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted
{noformat}

Though the root of the problem is a poorly configured environment, it would be
safer to disable the file deletion detection watcher by default.


> [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to 
> avoid running into "user inotify limit" 
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
> Attachments: inotifytest.tar.gz
>
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to avoid running into "user inotify limit"

2018-04-03 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8152:
-
Attachment: inotifytest.tar.gz

> [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to 
> avoid running into "user inotify limit" 
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
> Attachments: inotifytest.tar.gz
>
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8152) [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to avoid running into "user inotify limit"

2018-04-03 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8152:
-
Description: 
BDB JE registers a watcher service to detect unexpected deletion of its files.
In some poorly configured environments the user inotify limit can be breached
on virtual host/virtual host node start-up. This situation ends up in an
exception like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted
{noformat}

Though the root of the problem is a poorly configured environment, it would be
safer to disable the file deletion detection watcher by default.

  was:
BDB JE registers a watcher service to detect unexpected deletion of its files.
In some poorly configured environments the user inotify limit can be breached
on virtual host/virtual host node start-up. This situation ends up in an
exception like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too
many open files
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted
{noformat}

Though the root of the problem is a poorly configured environment, it would be
safer to disable the file deletion detection watcher by default.


> [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to 
> avoid running into "user inotify limit" 
> 
>
> Key: QPID-8152
> URL: https://issues.apache.org/jira/browse/QPID-8152
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3
>Reporter: Alex Rudyy
>Priority: Major
>
> BDB JE registers a watcher service to detect unexpected deletion of its files.
> In some poorly configured environments the user inotify limit can be breached
> on virtual host/virtual host node start-up. This situation ends up in an
> exception like the one below:
> {noformat}
> Caused by: java.io.IOException: User limit of inotify instances reached or
> too many open files
> at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
> at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
> at com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
> ... 29 common frames omitted
> {noformat}
> Though the root of the problem is a poorly configured environment, it would
> be safer to disable the file deletion detection watcher by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPID-8152) [Broker-J][BDB] Disable BDB JE file deletion watcher by default in order to avoid running into "user inotify limit"

2018-04-03 Thread Alex Rudyy (JIRA)
Alex Rudyy created QPID-8152:


 Summary: [Broker-J][BDB] Disable BDB JE file deletion watcher by 
default in order to avoid running into "user inotify limit" 
 Key: QPID-8152
 URL: https://issues.apache.org/jira/browse/QPID-8152
 Project: Qpid
  Issue Type: Improvement
  Components: Broker-J
Affects Versions: qpid-java-broker-7.0.3
Reporter: Alex Rudyy


BDB JE registers a watcher service to detect unexpected deletion of its files.
In some poorly configured environments the user inotify limit can be breached 
on virtual host/virtual host node start-up. This situation ends up in an 
exception like the one below:
{noformat}
Caused by: java.io.IOException: User limit of inotify instances reached or too 
many open files
at 
sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at 
sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at 
com.sleepycat.je.log.FileDeletionDetector.<init>(FileDeletionDetector.java:85)
... 29 common frames omitted

{noformat}

Though the root of the problem is a poorly configured environment, it would be 
safer to disable the file deletion detection watcher by default.
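The limit being breached here is a Linux kernel tunable. As a sketch (the helper name is illustrative and not part of Broker-J), this is how an operator could read the per-user inotify instance budget that each file-deletion watcher consumes:

```python
# Illustrative helper (not Broker-J code): read the Linux per-user limit on
# inotify instances, the resource that each WatchService instance consumes.
def inotify_instance_limit(proc_path="/proc/sys/fs/inotify/max_user_instances"):
    """Return the per-user inotify instance limit, or None when unavailable."""
    try:
        with open(proc_path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None
```

Raising this limit (e.g. via `sysctl fs.inotify.max_user_instances`) is the operational alternative to disabling the watcher by default.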






[jira] [Assigned] (QPIDIT-120) ProtonCpp LargeContentTest Sender closes connection too soon

2018-04-03 Thread Kim van der Riet (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPIDIT-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kim van der Riet reassigned QPIDIT-120:
---

Assignee: Kim van der Riet

> ProtonCpp LargeContentTest Sender closes connection too soon
> 
>
> Key: QPIDIT-120
> URL: https://issues.apache.org/jira/browse/QPIDIT-120
> Project: Apache QPID Interoperability Test Suite
>  Issue Type: Bug
>  Components: Proton C++ Shim
>Affects Versions: 0.1.0
>Reporter: Chuck Rolke
>Assignee: Kim van der Riet
>Priority: Major
>
> In researching the bug reported in DISPATCH-939 (router closes connection 
> with error) a problem in the test has emerged.
> The Cpp shim uses on_tracker_accept and then closes the connection when 
> the number of confirmed messages equals the total number of messages. In 
> theory, then, the test should never close the connection before the router 
> has confirmed and accepted all the messages. From the trace in DISPATCH-939 
> the connection is closed about 2 ms after the fourth Sender message has gone 
> over the wire to Dispatch. Not enough dispositions have been received to 
> cover the number of messages sent, so why has the connection been closed?
> Adding some print debugging reveals the issue. In the test as it is today the 
> _totalMessages_ is 2. However, in the send loop the actual number of messages 
> sent is 2 times the number of elements in the incoming list of test values. 
> In today's case the values list has four elements so a total of 8 messages 
> should go over the wire.
> A hack '* 4' is added to the on_tracker_accept to make the test work:
> {{if (_msgsConfirmed >= _totalMsgs * 4) {}}
> A print-debugging session shows:
> {{InteropTestError: Send shim 'ProtonCpp':}}
> {{on_container_start: _totalMsgs: 2}}
> {{on_sendable: msgsSent: 1}}
> {{on_sendable: msgsSent: 2}}
> {{on_sendable: msgsSent: 3}}
> {{on_sendable: msgsSent: 4}}
> {{on_sendable: msgsSent: 5}}
> {{on_sendable: msgsSent: 6}}
> {{on_sendable: msgsSent: 7}}
> {{on_sendable: msgsSent: 8}}
> {{on_tracker_accept: msgsConfirmed: 1}}
> {{on_tracker_accept: msgsConfirmed: 2}}
> {{on_tracker_accept: msgsConfirmed: 3}}
> {{on_tracker_accept: msgsConfirmed: 4}}
> {{on_tracker_accept: msgsConfirmed: 5}}
> {{on_tracker_accept: msgsConfirmed: 6}}
> {{on_tracker_accept: msgsConfirmed: 7}}
> {{on_tracker_accept: msgsConfirmed: 8}}
>  The test needs to factor the size of the test values list back into the 
> tracker logic so it can decide correctly when to close the connection.
> This test has still been valuable in pointing out an issue in Dispatch that 
> needs some attention.
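The corrected close condition described above can be sketched as follows (function names are hypothetical, not the shim's actual API): the sender must expect one disposition per message per element of the test values list, not just totalMsgs.

```python
# Hypothetical sketch of the counting fix: the shim sends total_msgs messages
# for every element of the test values list, so it must wait for that many
# dispositions before closing the connection.
def expected_confirmations(total_msgs, test_values):
    return total_msgs * len(test_values)

def should_close(msgs_confirmed, total_msgs, test_values):
    return msgs_confirmed >= expected_confirmations(total_msgs, test_values)
```

With totalMsgs = 2 and a four-element values list this yields 8, matching the trace above and explaining where the hard-coded '* 4' hack came from.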






[jira] [Updated] (QPIDIT-120) ProtonCpp LargeContentTest Sender closes connection too soon

2018-04-03 Thread Kim van der Riet (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPIDIT-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kim van der Riet updated QPIDIT-120:

Affects Version/s: 0.1.0

> ProtonCpp LargeContentTest Sender closes connection too soon
> 
>
> Key: QPIDIT-120
> URL: https://issues.apache.org/jira/browse/QPIDIT-120
> Project: Apache QPID Interoperability Test Suite
>  Issue Type: Bug
>  Components: Proton C++ Shim
>Affects Versions: 0.1.0
>Reporter: Chuck Rolke
>Priority: Major
>
> In researching the bug reported in DISPATCH-939 (router closes connection 
> with error) a problem in the test has emerged.
> The Cpp shim uses on_tracker_accept and then closes the connection when 
> the number of confirmed messages equals the total number of messages. In 
> theory, then, the test should never close the connection before the router 
> has confirmed and accepted all the messages. From the trace in DISPATCH-939 
> the connection is closed about 2 ms after the fourth Sender message has gone 
> over the wire to Dispatch. Not enough dispositions have been received to 
> cover the number of messages sent, so why has the connection been closed?
> Adding some print debugging reveals the issue. In the test as it is today the 
> _totalMessages_ is 2. However, in the send loop the actual number of messages 
> sent is 2 times the number of elements in the incoming list of test values. 
> In today's case the values list has four elements so a total of 8 messages 
> should go over the wire.
> A hack '* 4' is added to the on_tracker_accept to make the test work:
> {{if (_msgsConfirmed >= _totalMsgs * 4) {}}
> A print-debugging session shows:
> {{InteropTestError: Send shim 'ProtonCpp':}}
> {{on_container_start: _totalMsgs: 2}}
> {{on_sendable: msgsSent: 1}}
> {{on_sendable: msgsSent: 2}}
> {{on_sendable: msgsSent: 3}}
> {{on_sendable: msgsSent: 4}}
> {{on_sendable: msgsSent: 5}}
> {{on_sendable: msgsSent: 6}}
> {{on_sendable: msgsSent: 7}}
> {{on_sendable: msgsSent: 8}}
> {{on_tracker_accept: msgsConfirmed: 1}}
> {{on_tracker_accept: msgsConfirmed: 2}}
> {{on_tracker_accept: msgsConfirmed: 3}}
> {{on_tracker_accept: msgsConfirmed: 4}}
> {{on_tracker_accept: msgsConfirmed: 5}}
> {{on_tracker_accept: msgsConfirmed: 6}}
> {{on_tracker_accept: msgsConfirmed: 7}}
> {{on_tracker_accept: msgsConfirmed: 8}}
>  The test needs to factor the size of the test values list back into the 
> tracker logic so it can decide correctly when to close the connection.
> This test has still been valuable in pointing out an issue in Dispatch that 
> needs some attention.






[jira] [Commented] (DISPATCH-918) Improve router config consistency and metadata

2018-04-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424055#comment-16424055
 ] 

ASF subversion and git services commented on DISPATCH-918:
--

Commit 535afa8dd1c910d6269efc2d320cd2abf3154d19 in qpid-dispatch's branch 
refs/heads/master from [~ganeshmurthy]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=535afa8 ]

DISPATCH-918 - Deprecated some router attributes and introduced replacements 
with much clearer names


> Improve router config consistency and metadata
> --
>
> Key: DISPATCH-918
> URL: https://issues.apache.org/jira/browse/DISPATCH-918
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Management Agent
>Reporter: Justin Ross
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> Proposed changes from review.  The items marked PRIO1 are more important.  
> All changes must be backward-compatible.
> [https://docs.google.com/spreadsheets/d/14ugjxlc-ETYZXwN9eWD-D1YWrRAfydj9EJNmyUaZrD0/edit?usp=sharing]
> This also includes flags we'd like to get added to the metadata so we can 
> generate better docs from it.






[jira] [Resolved] (PROTON-1792) clean out older releases from the dist repo

2018-04-03 Thread Robbie Gemmell (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell resolved PROTON-1792.

Resolution: Done

> clean out older releases from the dist repo
> ---
>
> Key: PROTON-1792
> URL: https://issues.apache.org/jira/browse/PROTON-1792
> Project: Qpid Proton
>  Issue Type: Task
>  Components: release
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
>
> The Python bindings for proton-c are published at PyPi, and in order to allow 
> a seamless install there, a system was added such that if the relevant 
> proton-c version couldn't be located on the system already then the setup
> would implicitly compile and install a local version for use.
> Unfortunately, when this functionality was added in earlier releases, around 
> 0.9, reference was made to apache.org/dist/qpid/proton/ to download the 
> matching release archive for installation.
> The 0.15.0 package at PyPi instead got the files via the GitHub mirror for 
> the source repo, and so wasn't affected.
> The issue was addressed in proton-0.16.0 via PROTON-1330, to distribute the 
> needed proton-c files within the generated python binding source bundle.
> As a result of the issue, older releases have been left at 
> apache.org/dist/qpid/proton/ longer than they should have been, and also on 
> the mirrors as a side effect, to avoid breaking things for anyone actually 
> relying on the bindings' implicit proton-c install functionality.
> Now that significant time has passed since a release was last affected by the 
> issue, and from chatting with [~kgiusti] and [~jr...@redhat.com] we aren't 
> aware of significant remaining usage of these older versions by other 
> projects, it seems a reasonable time to now clear them out.
> Anyone affected can still use the older python binding versions by grabbing 
> the appropriate proton release from the 
> [archives|https://archive.apache.org/dist/qpid/proton/] and installing from 
> source, if there is a reason they can't simply upgrade to a newer version.






[jira] [Commented] (PROTON-1792) clean out older releases from the dist repo

2018-04-03 Thread Robbie Gemmell (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16423920#comment-16423920
 ] 

Robbie Gemmell commented on PROTON-1792:


Old releases removed from 
https://dist.apache.org/repos/dist/release/qpid/proton/ in r26102:
{noformat}
Date: Tue Apr  3 10:51:04 2018
New Revision: 26102

Log:
PROTON-1792: clean out old proton releases from the dist repo/mirrors, left 
behind for historic PyPi usage
{noformat}

> clean out older releases from the dist repo
> ---
>
> Key: PROTON-1792
> URL: https://issues.apache.org/jira/browse/PROTON-1792
> Project: Qpid Proton
>  Issue Type: Task
>  Components: release
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
>
> The Python bindings for proton-c are published at PyPi, and in order to allow 
> a seamless install there, a system was added such that if the relevant 
> proton-c version couldn't be located on the system already then the setup
> would implicitly compile and install a local version for use.
> Unfortunately, when this functionality was added in earlier releases, around 
> 0.9, reference was made to apache.org/dist/qpid/proton/ to download the 
> matching release archive for installation.
> The 0.15.0 package at PyPi instead got the files via the GitHub mirror for 
> the source repo, and so wasn't affected.
> The issue was addressed in proton-0.16.0 via PROTON-1330, to distribute the 
> needed proton-c files within the generated python binding source bundle.
> As a result of the issue, older releases have been left at 
> apache.org/dist/qpid/proton/ longer than they should have been, and also on 
> the mirrors as a side effect, to avoid breaking things for anyone actually 
> relying on the bindings' implicit proton-c install functionality.
> Now that significant time has passed since a release was last affected by the 
> issue, and from chatting with [~kgiusti] and [~jr...@redhat.com] we aren't 
> aware of significant remaining usage of these older versions by other 
> projects, it seems a reasonable time to now clear them out.
> Anyone affected can still use the older python binding versions by grabbing 
> the appropriate proton release from the 
> [archives|https://archive.apache.org/dist/qpid/proton/] and installing from 
> source, if there is a reason they can't simply upgrade to a newer version.






[jira] [Assigned] (PROTON-1792) clean out older releases from the dist repo

2018-04-03 Thread Robbie Gemmell (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell reassigned PROTON-1792:
--

Assignee: Robbie Gemmell

> clean out older releases from the dist repo
> ---
>
> Key: PROTON-1792
> URL: https://issues.apache.org/jira/browse/PROTON-1792
> Project: Qpid Proton
>  Issue Type: Task
>  Components: release
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
>
> The Python bindings for proton-c are published at PyPi, and in order to allow 
> a seamless install there, a system was added such that if the relevant 
> proton-c version couldn't be located on the system already then the setup
> would implicitly compile and install a local version for use.
> Unfortunately, when this functionality was added in earlier releases, around 
> 0.9, reference was made to apache.org/dist/qpid/proton/ to download the 
> matching release archive for installation.
> The 0.15.0 package at PyPi instead got the files via the GitHub mirror for 
> the source repo, and so wasn't affected.
> The issue was addressed in proton-0.16.0 via PROTON-1330, to distribute the 
> needed proton-c files within the generated python binding source bundle.
> As a result of the issue, older releases have been left at 
> apache.org/dist/qpid/proton/ longer than they should have been, and also on 
> the mirrors as a side effect, to avoid breaking things for anyone actually 
> relying on the bindings' implicit proton-c install functionality.
> Now that significant time has passed since a release was last affected by the 
> issue, and from chatting with [~kgiusti] and [~jr...@redhat.com] we aren't 
> aware of significant remaining usage of these older versions by other 
> projects, it seems a reasonable time to now clear them out.
> Anyone affected can still use the older python binding versions by grabbing 
> the appropriate proton release from the 
> [archives|https://archive.apache.org/dist/qpid/proton/] and installing from 
> source, if there is a reason they can't simply upgrade to a newer version.






[jira] [Commented] (DISPATCH-952) qdrouterd seg fault after reporting "too many sessions"

2018-04-03 Thread Robbie Gemmell (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16423764#comment-16423764
 ] 

Robbie Gemmell commented on DISPATCH-952:
-

{quote}...and connections to brokers, which could have > 32k links. I think the 
fix is to use a single shared session on such connections. Since traffic on a 
single connection gets serialized in any case, there's no real benefit to 
multiple sessions on a connection.
{quote}
It can benefit some peers, e.g. Broker-J will see improved performance from 
multiple sessions when persistent messages are involved.

It's also worth noting that the protocol actually allows 65k sessions, so 
proton-c is artificially limiting things there it seems (presumably due to 
magic-bit usage).
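The arithmetic behind these limits is simple (a sketch, not proton code): channel numbers run from 0 to channel-max inclusive, so a channel-max of 32767 caps a connection at 32768 sessions, while the 16-bit channel field itself would allow 65536.

```python
# Sketch of the session-limit arithmetic: AMQP channel numbers are unsigned
# 16-bit values running 0..channel_max inclusive.
def max_sessions(channel_max):
    return channel_max + 1

PROTON_CHANNEL_MAX = 32767  # channel_max reported in the qdrouterd log above
WIRE_CHANNEL_MAX = 0xFFFF   # largest value a 16-bit channel field can carry
```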

> qdrouterd seg fault after reporting "too many sessions"
> ---
>
> Key: DISPATCH-952
> URL: https://issues.apache.org/jira/browse/DISPATCH-952
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Reporter: Alan Conway
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.1.0
>
>
> Reported at [https://bugzilla.redhat.com/show_bug.cgi?id=1561876]
>  
> {code:java}
> Currently running Satellite 6.3 with 5K clients. The clients are managed by 2 
> capsules:
> Capsule 1: 3K clients
> Capsule 2: 2K clients
> Logs from Capsule 1:
> [root@c02-h10-r620-vm1 ~]# journalctl | grep qdrouterd
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: group added to /etc/group: name=qdrouterd, GID=993
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: group added to /etc/gshadow: name=qdrouterd
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> groupadd[19140]: new group: name=qdrouterd, GID=993
> Mar 26 03:00:47 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> useradd[19145]: new user: name=qdrouterd, UID=996, GID=993, 
> home=/var/lib/qdrouterd, shell=/sbin/nologin
> Mar 28 10:39:06 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> qdrouterd[16084]: [0x7fe3f0016aa0]:pn_session: too many sessions: 32768  
> channel_max is 32767
> Mar 28 10:39:06 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com kernel: 
> qdrouterd[16087]: segfault at 88 ip 7fe40b79d820 sp 7fe3fd5f9298 
> error 6 in libqpid-proton.so.10.0.0[7fe40b77f000+4b000]
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service: main process exited, code=killed, 
> status=11/SEGV
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: Unit qdrouterd.service entered failed state.
> Mar 28 10:39:07 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service failed.
> Mar 29 01:02:09 c02-h10-r620-vm1.rdu.openstack.engineering.redhat.com 
> /usr/sbin/katello-service[1740]: *** status failed: qdrouterd ***
> Logs from Capsule 2:
> [root@c02-h10-r620-vm2 ~]# systemctl status qdrouterd
> ● qdrouterd.service - Qpid Dispatch router daemon
>Loaded: loaded (/usr/lib/systemd/system/qdrouterd.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /etc/systemd/system/qdrouterd.service.d
>└─limits.conf
>Active: failed (Result: signal) since Wed 2018-03-28 10:58:02 EDT; 14h ago
>   Process: 1158 ExecStart=/usr/sbin/qdrouterd -c 
> /etc/qpid-dispatch/qdrouterd.conf (code=killed, signal=SEGV)
>  Main PID: 1158 (code=killed, signal=SEGV)
> Mar 28 07:38:46 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Started Qpid Dispatch router daemon.
> Mar 28 07:38:46 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Starting Qpid Dispatch router daemon...
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:unable to find an open available channel 
> within limit of 32767
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:process error -2
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> qdrouterd[1158]: [0x7f36a000a170]:pn_session: too many sessions: 32768  
> channel_max is 32767
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service: main process exited, code=killed, 
> status=11/SEGV
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: Unit qdrouterd.service entered failed state.
> Mar 28 10:58:02 c02-h10-r620-vm2.rdu.openstack.engineering.redhat.com 
> systemd[1]: qdrouterd.service failed.
> {code}




[jira] [Commented] (DISPATCH-951) log details for the proton found during build

2018-04-03 Thread Robbie Gemmell (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16423746#comment-16423746
 ] 

Robbie Gemmell commented on DISPATCH-951:
-

Thanks Chuck!

> log details for the proton found during build
> -
>
> Key: DISPATCH-951
> URL: https://issues.apache.org/jira/browse/DISPATCH-951
> Project: Qpid Dispatch
>  Issue Type: Improvement
>Affects Versions: 1.0.1, 1.1.0
>Reporter: Robbie Gemmell
>Assignee: Chuck Rolke
>Priority: Minor
> Fix For: 1.1.0
>
>
> The router build logs info about various things it finds when running cmake, 
> e.g Python and libwebsockets, but it does not emit the details for proton, 
> e.g the way qpid-cpp does. It would be useful if it did.






[jira] [Commented] (QPIDJMS-373) Support for OAuth flow and setting of "Authorization" Header for WS upgrade request

2018-04-03 Thread Robbie Gemmell (JIRA)

[ 
https://issues.apache.org/jira/browse/QPIDJMS-373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16423736#comment-16423736
 ] 

Robbie Gemmell commented on QPIDJMS-373:


There isn't a way to add additional headers in the existing NettyWsTransport; 
as I say, adding that seems reasonable.

The reason for suggesting the added ws prefix is that such options would be 
more tied to a specific transport (ws) than the TCP or SSL/TLS options that 
apply to everything that uses those respectively, and so it would be good to 
group them out for clarity.

The existing NettyWsTransport and NettyTcpTransport are themselves examples of 
how you could add your own. Try following things from 
TransportFactory.findTransportFactory(String).
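The suggested grouping of WebSocket-specific options under a prefix could be sketched like this (a hypothetical helper, not qpid-jms code): options carrying the ws prefix are split out from the common transport options.

```python
# Hypothetical sketch of prefix-grouped transport options: keys under
# "transport.ws." apply only to the WebSocket transport; everything else
# remains a common transport option.
def split_transport_options(options, prefix="transport.ws."):
    ws = {k[len(prefix):]: v for k, v in options.items() if k.startswith(prefix)}
    common = {k: v for k, v in options.items() if not k.startswith(prefix)}
    return common, ws
```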

> Support for OAuth flow and setting of "Authorization" Header for WS upgrade 
> request
> ---
>
> Key: QPIDJMS-373
> URL: https://issues.apache.org/jira/browse/QPIDJMS-373
> Project: Qpid JMS
>  Issue Type: New Feature
>  Components: qpid-jms-client
>Reporter: Michael Bolz
>Priority: Major
>
> Add support for OAuth flow ("client_credentials" and "password") and setting 
> of "Authorization" Header during WebSocket connection handshake.
> Used "Authorization" Header or OAuth settings should/could be set via the 
> "transport" parameters (TransportOptions).
>  
> As PoC I created a [Fork|https://github.com/mibo/qpid-jms/tree/ws_add_header] 
> and have done one commit for the [add of the Authorization 
> Header|https://github.com/mibo/qpid-jms/commit/711052f0891556db0da6e7d68908b2f9dafadede]
>  and one commit for the [OAuth 
> flow|https://github.com/mibo/qpid-jms/commit/de70f0d3e4441358a239b3e776455201c133895d].
>  
> Hope this feature is not only interesting for me.
> If yes, I will add the currently missing tests to my contribution and do a 
> pull request.
>  
> Regards, Michael






[jira] [Resolved] (QPID-6825) AmqPorts should stop performing a TCP accept once the port has reached maximum concurrent connections

2018-04-03 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall resolved QPID-6825.
--
Resolution: Invalid

> AmqPorts should stop performing a TCP accept once the port has reached 
> maximum concurrent connections
> -
>
> Key: QPID-6825
> URL: https://issues.apache.org/jira/browse/QPID-6825
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Reporter: Keith Wall
>Priority: Major
>
> The Java Broker's AmqPort model object has the concept of a maximum number of 
> concurrent connections; however, this check is enforced only after the socket 
> accept has been performed.  This means that system resources are already 
> consumed.  The check should be changed so that before the port accepts a 
> connection, it checks first to ensure that the port has capacity for the new 
> connection.






[jira] [Commented] (QPID-6825) AmqPorts should stop performing a TCP accept once the port has reached maximum concurrent connections

2018-04-03 Thread Keith Wall (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16423658#comment-16423658
 ] 

Keith Wall commented on QPID-6825:
--

I think my original analysis was flawed.  I was under the impression that when 
a port reached its maximum connections, it would still allocate a network 
buffer, only then deciding that the port was full and the connection needed to 
be closed.  I see this is not the case.  I think the code's current behaviour, 
accepting the socket and then closing it after logging a PRT-1005, is 
reasonable.  Marking invalid.
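The accept-then-close behaviour described above can be sketched as follows (hypothetical names, not the Broker-J implementation): accept the socket so the kernel's backlog entry is consumed, then close it immediately if the port is already at its limit.

```python
import socket

# Hypothetical sketch of the accept-then-close pattern the comment describes.
# Broker-J logs an operational message (PRT-1005) at the point of the close.
def handle_accept(server_sock, open_connections, max_connections):
    conn, _addr = server_sock.accept()
    if len(open_connections) >= max_connections:
        conn.close()  # port is full: reject by closing the accepted socket
        return None
    open_connections.append(conn)
    return conn
```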

> AmqPorts should stop performing a TCP accept once the port has reached 
> maximum concurrent connections
> -
>
> Key: QPID-6825
> URL: https://issues.apache.org/jira/browse/QPID-6825
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Reporter: Keith Wall
>Priority: Major
>
> The Java Broker's AmqPort model object has the concept of a maximum number of 
> concurrent connections; however, this check is enforced only after the socket 
> accept has been performed.  This means that system resources are already 
> consumed.  The check should be changed so that before the port accepts a 
> connection, it checks first to ensure that the port has capacity for the new 
> connection.






[jira] [Updated] (QPID-8141) [JMS AMQP 0-x] Cannot publish message with address based destinations falsely identified as resolved due to unset routing key and exchange name

2018-04-03 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-8141:
-
Fix Version/s: qpid-java-client-0-x-6.3.1

> [JMS AMQP 0-x] Cannot publish message with address based destinations falsely 
> identified as resolved due to unset routing key and exchange name
> ---
>
> Key: QPID-8141
> URL: https://issues.apache.org/jira/browse/QPID-8141
> Project: Qpid
>  Issue Type: Bug
>  Components: JMS AMQP 0-x
>Affects Versions: qpid-java-client-0-x-6.3.0
>Reporter: Alex Rudyy
>Priority: Major
> Fix For: qpid-java-client-0-x-6.3.1
>
>
> The address based destination resolution functionality 
> {{AMQSession#resolveAddress}} sets a number of fields like {{routing key}}, 
> {{exchange name}}, etc. as part of the invocation of 
> {{AMQSession#setLegacyFieldsForQueueType}}. If the destination object is 
> identified as resolved ({{AMQSession#isResolved}}), these essential fields 
> are not set on the destination object. As a result, publishing attempts with 
> such destination objects will fail due to routing issues like the one 
> reported below:
> {noformat}
> Caused by: org.apache.qpid.AMQConnectionClosedException: Error: No route for 
> message with exchange 'null' and routing key 'null' [error code: 312(no 
> route)]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.qpid.AMQException.cloneForCurrentThread(AMQException.java:81)
>   at 
> org.apache.qpid.AMQException.cloneForCurrentThread(AMQException.java:24)
>   at 
> org.apache.qpid.client.AMQProtocolHandler.writeCommandFrameAndWaitForReply(AMQProtocolHandler.java:638)
>   at 
> org.apache.qpid.client.AMQProtocolHandler.syncWrite(AMQProtocolHandler.java:675)
>   at 
> org.apache.qpid.client.AMQProtocolHandler.syncWrite(AMQProtocolHandler.java:669)
>   at 
> org.apache.qpid.client.AMQSession_0_8.commitImpl(AMQSession_0_8.java:271)
>   at org.apache.qpid.client.AMQSession.commit(AMQSession.java:913)
>   ... 6 more
> Caused by: org.apache.qpid.AMQConnectionClosedException: Error: No route for 
> message with exchange 'null' and routing key 'null' [error code: 312(no 
> route)]
>   at 
> org.apache.qpid.client.handler.ConnectionCloseMethodHandler.methodReceived(ConnectionCloseMethodHandler.java:90)
>   at 
> org.apache.qpid.client.handler.ClientMethodDispatcherImpl.dispatchConnectionClose(ClientMethodDispatcherImpl.java:227)
>   at 
> org.apache.qpid.framing.ConnectionCloseBody.execute(ConnectionCloseBody.java:105)
>   at 
> org.apache.qpid.client.state.AMQStateManager.methodReceived(AMQStateManager.java:118)
>   at 
> org.apache.qpid.client.AMQProtocolHandler.methodBodyReceived(AMQProtocolHandler.java:531)
>   at 
> org.apache.qpid.client.protocol.AMQProtocolSession.methodFrameReceived(AMQProtocolSession.java:460)
>   at 
> org.apache.qpid.framing.AMQMethodBodyImpl.handle(AMQMethodBodyImpl.java:66)
>   at 
> org.apache.qpid.client.AMQProtocolHandler.received(AMQProtocolHandler.java:480)
>   at 
> org.apache.qpid.client.AMQConnectionDelegate_8_0$ReceiverClosedWaiter.received(AMQConnectionDelegate_8_0.java:549)
>   at 
> org.apache.qpid.transport.network.io.IoReceiver.run(IoReceiver.java:164)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Client applications that create new destination objects on every publishing 
> attempt on the same session are impacted by the issue.
> The following code snippet demonstrates the problem:
> {code}
> MessageProducer messageProducer = session.createProducer(null);
> for (int i = 0; i < messageCount; i++) {
>     Message message = session.createTextMessage("Test");
>     messageProducer.send(session.createQueue(String.format(
>         "ADDR:test;{\"create\":\"always\",\"node\":{\"type\":\"queue\",\"durable\":true}}")),
>         message);
> }
> session.commit();
> {code}
> The workaround is to avoid creating destination objects every time. For 
> example, Qpid JNDI properties can be used to declare and cache the 
> destination objects. 
> {code}
> destination.queue=ADDR:test;{"create":"always","node":{"type":"queue","durable":true}}
> {code}
> {code}
> Destination destination = (Destination) context.lookup("queue");
> {code}
>   
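The caching idea behind this workaround can be sketched language-neutrally (a hypothetical helper, not the client's API): resolve each address once and reuse the resulting destination object for every subsequent send.

```python
# Hypothetical sketch of the workaround: resolve each address-based destination
# once and reuse it, rather than re-creating it for every send on the session.
_destination_cache = {}

def destination_for(address, create=lambda addr: object()):
    # `create` stands in for session.createQueue(address) in a real client
    if address not in _destination_cache:
        _destination_cache[address] = create(address)
    return _destination_cache[address]
```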


