[jira] [Updated] (PROTON-2344) memory leak and close_waits in qpid-proton-c / python2-qpid-proton when dropping timeouted connection

2021-03-11 Thread Pavel Moravec (Jira)


 [ 
https://issues.apache.org/jira/browse/PROTON-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated PROTON-2344:
--
Attachment: mimic_goferd_consumer-with-receive-noSSL.py

> memory leak and close_waits in qpid-proton-c / python2-qpid-proton when 
> dropping timeouted connection
> -
>
> Key: PROTON-2344
> URL: https://issues.apache.org/jira/browse/PROTON-2344
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.33.0
>Reporter: Pavel Moravec
>Priority: Major
> Attachments: mimic_goferd_consumer-with-receive-noSSL.py
>
>
> Packages used from EPEL:
>  * qpid-proton-c-0.33.0-1.el7.x86_64
>  * python2-qpid-proton-0.33.0-1.el7.x86_64
>  
> Reproducer idea: connect a consumer with a heartbeat set to a broker, and 
> mimic packet drops until the client drops the connection.
>  
> Particular reproducer:
>  * create a queue pulp.agent.TEST.0 in a broker
>  * run the reproducer script below (it can be simplified further; just 
> modify the ROUTER_ADDRESS)
>  * mimic packet drops on outgoing traffic:
>  
> {noformat}
> port=5647
> a="-I"
> while true; do
>   echo "$(date): setting $a"
>   iptables $a OUTPUT -p tcp --dport $port -j DROP
>   if [ $a = "-I" ]; then
>     a="-D"
>   else
>     a="-I"
>   fi
>   sleep 5
> done{noformat}
>  
> ... then monitor memory usage and CLOSE_WAIT connections.
>  
> The issue appears to be a regression since python-qpid-proton-0.28.0-3, with 
> which I can't reproduce it.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-2344) memory leak and close_waits in qpid-proton-c / python2-qpid-proton when dropping timeouted connection

2021-03-11 Thread Pavel Moravec (Jira)
Pavel Moravec created PROTON-2344:
-

 Summary: memory leak and close_waits in qpid-proton-c / 
python2-qpid-proton when dropping timeouted connection
 Key: PROTON-2344
 URL: https://issues.apache.org/jira/browse/PROTON-2344
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: proton-c-0.33.0
Reporter: Pavel Moravec
 Attachments: mimic_goferd_consumer-with-receive-noSSL.py

Packages used from EPEL:
 * qpid-proton-c-0.33.0-1.el7.x86_64
 * python2-qpid-proton-0.33.0-1.el7.x86_64

 

Reproducer idea: connect a consumer with a heartbeat set to a broker, and 
mimic packet drops until the client drops the connection.

 

Particular reproducer:
 * create a queue pulp.agent.TEST.0 in a broker
 * run the reproducer script below (it can be simplified further; just modify 
the ROUTER_ADDRESS)
 * mimic packet drops on outgoing traffic:

 
{noformat}
port=5647
a="-I"
while true; do
  echo "$(date): setting $a"
  iptables $a OUTPUT -p tcp --dport $port -j DROP
  if [ $a = "-I" ]; then
    a="-D"
  else
    a="-I"
  fi
  sleep 5
done{noformat}
 

... then monitor memory usage and CLOSE_WAIT connections.
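Monitoring the CLOSE_WAIT count can be scripted; here is a minimal sketch (Linux-only, assumes the `ss` tool from iproute2 is installed; the helper name and port are illustrative, not part of the reproducer):

```python
import subprocess

def count_close_wait(ss_output, port):
    """Count CLOSE-WAIT sockets whose peer address ends with the given port,
    given the text output of `ss -tan`."""
    count = 0
    for line in ss_output.splitlines():
        fields = line.split()
        # `ss -tan` columns: State Recv-Q Send-Q Local:Port Peer:Port
        if len(fields) >= 5 and fields[0] == "CLOSE-WAIT" \
                and fields[4].endswith(":%d" % port):
            count += 1
    return count

def close_wait_snapshot(port):
    """Run `ss -tan` and return the current CLOSE-WAIT count (Linux only)."""
    out = subprocess.check_output(["ss", "-tan"]).decode()
    return count_close_wait(out, port)
```

Calling close_wait_snapshot(5647) periodically alongside the iptables loop above would show whether dropped connections accumulate.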

 

The issue appears to be a regression since python-qpid-proton-0.28.0-3, with 
which I can't reproduce it.

 

 






[jira] [Commented] (PROTON-1870) better logging for ssl

2019-08-07 Thread Pavel Moravec (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901790#comment-16901790
 ] 

Pavel Moravec commented on PROTON-1870:
---

If this Jira is meant to be a placeholder for use cases where SSL response 
codes should be improved: when a remote peer closes an established SSL+AMQP 
connection/session by dropping the TCP connection with a FIN+ACK packet, 
[https://github.com/apache/qpid-proton/blob/master/c/src/ssl/openssl.c#L215] 
generates the string "SSL Failure: Unknown error" - too generic, as it 
suggests an error occurred when in fact an ordinary connection closure 
happened.

> better logging for ssl
> --
>
> Key: PROTON-1870
> URL: https://issues.apache.org/jira/browse/PROTON-1870
> Project: Qpid Proton
>  Issue Type: Improvement
>Reporter: Gordon Sim
>Priority: Major
>  Labels: logging
>
> Would be nice to have better logging for ssl connections, particularly where 
> they fail, e.g. the SNI used, the CA the peer cert is signed with, etc.






[jira] [Commented] (PROTON-905) Long-lived connections leak sessions and links

2018-07-04 Thread Pavel Moravec (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532420#comment-16532420
 ] 

Pavel Moravec commented on PROTON-905:
--

I think this Jira is behind the qdrouterd memory accumulation described in 
[https://issues.jboss.org/browse/ENTMQIC-2023] - copying the (simpler, slower) 
reproducer of a client connecting to qdrouterd here:

 
{code}
import random
from proton.utils import BlockingConnection
from time import sleep
from uuid import uuid4

ROUTER_ADDRESS = "proton+amqp://0.0.0.0:5672"
ADDRESS = "test.address"
HEARTBEAT = 5
SLEEP_MIN = 0.1
SLEEP_MAX = 0.2

conn = BlockingConnection(ROUTER_ADDRESS, ssl_domain=None, heartbeat=HEARTBEAT)

while True:
    recv = conn.create_receiver('%s' % (ADDRESS), name=str(uuid4()),
                                dynamic=False, options=None)
    sleep(random.uniform(SLEEP_MIN, SLEEP_MAX))
    recv.close()
    sleep(random.uniform(SLEEP_MIN, SLEEP_MAX))
{code}

> Long-lived connections leak sessions and links
> --
>
> Key: PROTON-905
> URL: https://issues.apache.org/jira/browse/PROTON-905
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: 0.9.1, 0.10
>Reporter: Ken Giusti
>Priority: Minor
>  Labels: leak
> Fix For: proton-c-future
>
> Attachments: test-send.py
>
>
> I found this issue while debugging a crash dump of qpidd.
> Long-lived connections do not free their sessions/links.
> This only applies when NOT using the event model.  The version of qpidd I 
> tested against (0.30) still uses the iterative model.  As a point to 
> consider, I don't know why this is the case.
> Details:  I have a test script that opens a single connection, then 
> continually creates sessions/links over that connection, sending one message 
> before closing and freeing the sessions/links.  See attached.
> Over time the qpidd run time consumes all memory on the system and is killed 
> by OOM.  To be clear, I'm using drain to remove all sent messages - there is 
> no message build up.
> On debugging this, I'm finding thousands of session objects on the 
> connection's free-sessions weakref list.  Every one of those sessions has a 
> refcount of one.
> Once the connection is finalized, all session objects are freed.  But until 
> then, freed sessions continue to accumulate indefinitely.






[jira] [Updated] (QPID-8184) [linearstore] Recovery intermittently produces JERR_EFP_BADEFPDIRNAME error followed by core

2018-05-10 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-8184:

Description: 
Some users are experiencing difficulty recovering the store, especially when 
there are a large number of queues (several thousand). The log files show the 
following pattern:

A {{JERR_EFP_BADEFPDIRNAME}} error in which some arbitrary number that is not 
divisible by 4 is used as the EFP file size (called the EFP directory in the 
log), followed by a segfault:
{noformat}
May 4 18:55:00 somehostname qpidd[6240]: 2018-05-04 18:55:00 [Store] warning 
Linear Store: EmptyFilePool create failed: jexception 0x0d03 
EmptyFilePool::fileSizeKbFromDirName() threw JERR_EFP_BADEFPDIRNAME: Bad Empty 
File Pool directory name (must be 'NNNk', where NNN is a number which is a 
multiple of 4) (Partition: 1; EFP directory: '9k')
May 4 18:55:00 somehostname kernel: qpidd[6240]: segfault at 10 ip 
7f4219af8e19 sp 7ffc227a6350 error 4 in 
linearstore.so[7f4219ac4000+bd000]{noformat}
 In the event that the random number _is_ divisible by 4, a randomly sized 
directory containing no files may appear in the partition EFP.

  was:
Some users are experiencing difficulty recovering the store, especially when 
there are a large  number of queues (several thousand). The log files show the 
following pattern:

{{JERR_EFP_BADEFPDIRNAME}} in which some arbitrary number which is not 
divisible by 4 is being used as the EFP file size (called EFP directory in the 
log), followed by a segfault:
{noformat}
May 4 18:55:00 prodrhs1l qpidd[6240]: 2018-05-04 18:55:00 [Store] warning 
Linear Store: EmptyFilePool create failed: jexception 0x0d03 
EmptyFilePool::fileSizeKbFromDirName() threw JERR_EFP_BADEFPDIRNAME: Bad Empty 
File Pool directory name (must be 'NNNk', where NNN is a number which is a 
multiple of 4) (Partition: 1; EFP directory: '9k')
May 4 18:55:00 prodrhs1l kernel: qpidd[6240]: segfault at 10 ip 
7f4219af8e19 sp 7ffc227a6350 error 4 in 
linearstore.so[7f4219ac4000+bd000]{noformat}
 In the event that the random number _is_ divisible by 4, a randomly sized 
directory containing no files may appear in the partition EFP.


> [linearstore] Recovery intermittently produces JERR_EFP_BADEFPDIRNAME error 
> followed by core
> 
>
> Key: QPID-8184
> URL: https://issues.apache.org/jira/browse/QPID-8184
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Reporter: Kim van der Riet
>Assignee: Kim van der Riet
>Priority: Major
>
> Some users are experiencing difficulty recovering the store, especially when 
> there are a large  number of queues (several thousand). The log files show 
> the following pattern:
> {{JERR_EFP_BADEFPDIRNAME}} in which some arbitrary number which is not 
> divisible by 4 is being used as the EFP file size (called EFP directory in 
> the log), followed by a segfault:
> {noformat}
> May 4 18:55:00 somehostname qpidd[6240]: 2018-05-04 18:55:00 [Store] warning 
> Linear Store: EmptyFilePool create failed: jexception 0x0d03 
> EmptyFilePool::fileSizeKbFromDirName() threw JERR_EFP_BADEFPDIRNAME: Bad 
> Empty File Pool directory name (must be 'NNNk', where NNN is a number which 
> is a multiple of 4) (Partition: 1; EFP directory: '9k')
> May 4 18:55:00 somehostname kernel: qpidd[6240]: segfault at 10 ip 
> 7f4219af8e19 sp 7ffc227a6350 error 4 in 
> linearstore.so[7f4219ac4000+bd000]{noformat}
>  In the event that the random number _is_ divisible by 4, a randomly sized 
> directory containing no files may appear in the partition EFP.
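The naming rule quoted above ('NNNk', where NNN must be a multiple of 4) can be sketched as a standalone parser; this helper is purely illustrative and is not the broker's actual `EmptyFilePool::fileSizeKbFromDirName()` code:

```python
def efp_file_size_kb(dir_name):
    """Parse an EFP directory name of the form 'NNNk', where NNN must be a
    non-zero multiple of 4; raises ValueError otherwise (mirrors the rule
    behind the JERR_EFP_BADEFPDIRNAME message)."""
    if not dir_name.endswith("k") or not dir_name[:-1].isdigit():
        raise ValueError("bad EFP directory name: %r" % dir_name)
    size_kb = int(dir_name[:-1])
    if size_kb == 0 or size_kb % 4 != 0:
        raise ValueError("EFP file size must be a multiple of 4: %r" % dir_name)
    return size_kb
```

Under this rule, a directory like '2048k' parses cleanly, while the '9k' seen in the log above is rejected, which is the error path the report describes.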






[jira] [Created] (QPID-8095) ssl_skip_hostname_check behaves like having True as default

2018-02-05 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-8095:
---

 Summary: ssl_skip_hostname_check behaves like having True as 
default
 Key: QPID-8095
 URL: https://issues.apache.org/jira/browse/QPID-8095
 Project: Qpid
  Issue Type: Bug
  Components: Python Client
Reporter: Pavel Moravec


Although the python client connection option "ssl_skip_hostname_check" has the 
default value False, hostname verification is skipped when this option is not 
specified. In effect, the evaluation logic of this option overrides the 
default to True.

 

Given the option name, and the natural expectation to be more secure by 
default (weakening security only when specifically asked for), I suggest 
changing the evaluation logic to honour the default of False. I.e. when the 
option is not specified, the SSL hostname check is _not_ skipped / is performed.

 

Proposed patch:

 

 
{code:java}
--- /usr/lib/python2.7/site-packages/qpid/messaging/transports.py    2018-02-05 
08:34:22.008242874 +0100
+++ /usr/lib/python2.7/site-packages/qpid/messaging/transports.py    2018-02-05 
09:03:22.232313386 +0100
@@ -111,7 +111,7 @@ else:
 
   # if user manually set flag to false then require cert
   actual = getattr(conn, "_ssl_skip_hostname_check_actual", None)
-  if actual is not None and conn.ssl_skip_hostname_check is False:
+  if actual is not True:
 validate = CERT_REQUIRED
 
   self.tls = wrap_socket(self.socket, keyfile=conn.ssl_keyfile,
{code}
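To see why the original condition skips verification by default, here is a hedged, standalone sketch of both evaluations (illustrative only, not the actual transports.py code; the constants stand in for the ssl module's):

```python
CERT_NONE, CERT_REQUIRED = 0, 2  # stand-ins for the ssl module constants

def validate_original(actual, ssl_skip_hostname_check):
    """Original logic: require the cert only when the user explicitly set
    the flag to False (actual is not None)."""
    if actual is not None and ssl_skip_hostname_check is False:
        return CERT_REQUIRED
    return CERT_NONE

def validate_patched(actual):
    """Patched logic: require the cert unless the check was explicitly
    skipped (actual is True)."""
    if actual is not True:
        return CERT_REQUIRED
    return CERT_NONE

# When the option is not specified, `actual` stays None while the flag keeps
# its default False: the original logic skips verification, the patched logic
# performs it.
```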
 






[jira] [Resolved] (QPID-7788) Linearstore doesnt move to EFP latest journal files when deleting a durable queue

2017-05-18 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-7788.
-
   Resolution: Fixed
Fix Version/s: qpid-cpp-1.37.0

> Linearstore doesnt move to EFP latest journal files when deleting a durable 
> queue
> -
>
> Key: QPID-7788
> URL: https://issues.apache.org/jira/browse/QPID-7788
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Reporter: Pavel Moravec
>Assignee: Pavel Moravec
> Fix For: qpid-cpp-1.37.0
>
>
> When deleting an empty durable queue, the last empty page that it held is not 
> moved to EFP but dropped.
> Moreover, when deleting a non-empty durable queue whose content spans more 
> than the last file, no `jrnl` file is returned to EFP at all.
> That makes `/var/lib/qpidd/.qpidd/qls/p001/efp/2048k/in_use` (or a similar 
> directory, depending on the configuration) grow over time.






[jira] [Created] (QPID-7788) Linearstore doesnt move to EFP latest journal files when deleting a durable queue

2017-05-18 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7788:
---

 Summary: Linearstore doesnt move to EFP latest journal files when 
deleting a durable queue
 Key: QPID-7788
 URL: https://issues.apache.org/jira/browse/QPID-7788
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Reporter: Pavel Moravec
Assignee: Pavel Moravec


When deleting an empty durable queue, the last empty page that it held is not 
moved to EFP but dropped.

Moreover, when deleting a non-empty durable queue whose content spans more than 
the last file, no `jrnl` file is returned to EFP at all.

That makes `/var/lib/qpidd/.qpidd/qls/p001/efp/2048k/in_use` (or a similar 
directory, depending on the configuration) grow over time.






[jira] [Resolved] (QPID-7786) qpidd segfaults during startup when SSL certificate cant be read

2017-05-18 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-7786.
-
   Resolution: Fixed
Fix Version/s: qpid-cpp-1.37.0

> qpidd segfaults during startup when SSL certificate cant be read
> 
>
> Key: QPID-7786
> URL: https://issues.apache.org/jira/browse/QPID-7786
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Reporter: Pavel Moravec
>Assignee: Pavel Moravec
> Fix For: qpid-cpp-1.37.0
>
>
> When qpidd can't read the NSS password file, or when the SSL certificate name 
> is not found or not readable in the NSS database, qpidd segfaults at startup 
> with this backtrace:
> {code}
> (gdb) bt
> #0  0x7f3010a4f704 in qpid::sys::SocketAddress::nextAddress 
> (this=this@entry=0x7ffe36dc0570) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/posix/SocketAddress.cpp:321
> #1  0x7f301113ec17 in qpid::sys::SocketAcceptor::listen 
> (this=this@entry=0x29bf500, interfaces=..., port=port@entry=5671, 
> backlog=backlog@entry=10, factory=...)
> at /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/SocketTransport.cpp:150
> #2  0x7f3010fdfdbb in qpid::sys::SslPlugin::initialize 
> (this=0x7f3011407180 , target=...) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/SslPlugin.cpp:126
> #3  0x7f3010a876af in operator() (a1=..., p=, 
> this=) at /usr/include/boost/bind/mem_fn_template.hpp:165
> #4  operator(), 
> boost::_bi::list1 > (a=, 
> f=, 
> this=) at /usr/include/boost/bind/bind.hpp:313
> #5  operator() (a1=@0x2488ce0: 0x7f3011407180 
> , this=) at 
> /usr/include/boost/bind/bind_template.hpp:47
> #6  for_each<__gnu_cxx::__normal_iterator std::vector >, boost::_bi::bind_t qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2, 
> boost::reference_wrapper > > > (__f=..., __last=..., 
> __first=) at /usr/include/c++/4.8.2/bits/stl_algo.h:4417
> #7  qpid::(anonymous namespace)::each_plugin boost::_mfi::mf1, 
> boost::_bi::list2, 
> boost::reference_wrapper > > > (f=...) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/Plugin.cpp:73
> #8  0x7f3010a877a2 in qpid::Plugin::initializeAll (t=...) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/Plugin.cpp:91
> #9  0x7f3010ffc99a in qpid::broker::Broker::Broker (this=0x249bae0, 
> conf=...) at /usr/src/debug/qpid-cpp-0.34/src/qpid/broker/Broker.cpp:376
> #10 0x00405c82 in qpid::broker::QpiddBroker::execute 
> (this=this@entry=0x7ffe36dc284e, options=0x24909a0) at 
> /usr/src/debug/qpid-cpp-0.34/src/posix/QpiddBroker.cpp:229
> #11 0x00409d04 in qpid::broker::run_broker (argc=3, 
> argv=0x7ffe36dc2be8, hidden=) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpidd.cpp:108
> #12 0x7f300fb0db35 in __libc_start_main (main=0x404ce0  char**)>, argc=3, ubp_av=0x7ffe36dc2be8, init=, 
> fini=, rtld_fini=, 
> stack_end=0x7ffe36dc2bd8) at ../csu/libc-start.c:274
> #13 0x00404f51 in _start ()
> (gdb) list
> 316   (void) getAddrInfo(*this);
> 317   }
> 318   }
> 319   
> 320   bool SocketAddress::nextAddress() const {
> 321   bool r = currentAddrInfo->ai_next != 0;
> 322   if (r)
> 323   currentAddrInfo = currentAddrInfo->ai_next;
> 324   return r;
> 325   }
> (gdb) p currentAddrInfo
> $2 = (addrinfo *) 0x0
> (gdb) 
> {code}
> It is OK if the broker won't start, but it should not segfault.






[jira] [Updated] (QPID-7786) qpidd segfaults during startup when SSL certificate cant be read

2017-05-18 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-7786:

Summary: qpidd segfaults during startup when SSL certificate cant be read  
(was: [qpid-cpp] qpidd segfaults during startup when SSL certificate cant be 
read)

> qpidd segfaults during startup when SSL certificate cant be read
> 
>
> Key: QPID-7786
> URL: https://issues.apache.org/jira/browse/QPID-7786
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Reporter: Pavel Moravec
>Assignee: Pavel Moravec
>
> When qpidd can't read the NSS password file, or when the SSL certificate name 
> is not found or not readable in the NSS database, qpidd segfaults at startup 
> with this backtrace:
> {code}
> (gdb) bt
> #0  0x7f3010a4f704 in qpid::sys::SocketAddress::nextAddress 
> (this=this@entry=0x7ffe36dc0570) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/posix/SocketAddress.cpp:321
> #1  0x7f301113ec17 in qpid::sys::SocketAcceptor::listen 
> (this=this@entry=0x29bf500, interfaces=..., port=port@entry=5671, 
> backlog=backlog@entry=10, factory=...)
> at /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/SocketTransport.cpp:150
> #2  0x7f3010fdfdbb in qpid::sys::SslPlugin::initialize 
> (this=0x7f3011407180 , target=...) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/SslPlugin.cpp:126
> #3  0x7f3010a876af in operator() (a1=..., p=, 
> this=) at /usr/include/boost/bind/mem_fn_template.hpp:165
> #4  operator(), 
> boost::_bi::list1 > (a=, 
> f=, 
> this=) at /usr/include/boost/bind/bind.hpp:313
> #5  operator() (a1=@0x2488ce0: 0x7f3011407180 
> , this=) at 
> /usr/include/boost/bind/bind_template.hpp:47
> #6  for_each<__gnu_cxx::__normal_iterator std::vector >, boost::_bi::bind_t qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2, 
> boost::reference_wrapper > > > (__f=..., __last=..., 
> __first=) at /usr/include/c++/4.8.2/bits/stl_algo.h:4417
> #7  qpid::(anonymous namespace)::each_plugin boost::_mfi::mf1, 
> boost::_bi::list2, 
> boost::reference_wrapper > > > (f=...) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/Plugin.cpp:73
> #8  0x7f3010a877a2 in qpid::Plugin::initializeAll (t=...) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpid/Plugin.cpp:91
> #9  0x7f3010ffc99a in qpid::broker::Broker::Broker (this=0x249bae0, 
> conf=...) at /usr/src/debug/qpid-cpp-0.34/src/qpid/broker/Broker.cpp:376
> #10 0x00405c82 in qpid::broker::QpiddBroker::execute 
> (this=this@entry=0x7ffe36dc284e, options=0x24909a0) at 
> /usr/src/debug/qpid-cpp-0.34/src/posix/QpiddBroker.cpp:229
> #11 0x00409d04 in qpid::broker::run_broker (argc=3, 
> argv=0x7ffe36dc2be8, hidden=) at 
> /usr/src/debug/qpid-cpp-0.34/src/qpidd.cpp:108
> #12 0x7f300fb0db35 in __libc_start_main (main=0x404ce0  char**)>, argc=3, ubp_av=0x7ffe36dc2be8, init=, 
> fini=, rtld_fini=, 
> stack_end=0x7ffe36dc2bd8) at ../csu/libc-start.c:274
> #13 0x00404f51 in _start ()
> (gdb) list
> 316   (void) getAddrInfo(*this);
> 317   }
> 318   }
> 319   
> 320   bool SocketAddress::nextAddress() const {
> 321   bool r = currentAddrInfo->ai_next != 0;
> 322   if (r)
> 323   currentAddrInfo = currentAddrInfo->ai_next;
> 324   return r;
> 325   }
> (gdb) p currentAddrInfo
> $2 = (addrinfo *) 0x0
> (gdb) 
> {code}
> It is OK if the broker won't start, but it should not segfault.






[jira] [Created] (QPID-7786) [qpid-cpp] qpidd segfaults during startup when SSL certificate cant be read

2017-05-18 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7786:
---

 Summary: [qpid-cpp] qpidd segfaults during startup when SSL 
certificate cant be read
 Key: QPID-7786
 URL: https://issues.apache.org/jira/browse/QPID-7786
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Reporter: Pavel Moravec
Assignee: Pavel Moravec


When qpidd can't read the NSS password file, or when the SSL certificate name 
is not found or not readable in the NSS database, qpidd segfaults at startup 
with this backtrace:

{code}
(gdb) bt
#0  0x7f3010a4f704 in qpid::sys::SocketAddress::nextAddress 
(this=this@entry=0x7ffe36dc0570) at 
/usr/src/debug/qpid-cpp-0.34/src/qpid/sys/posix/SocketAddress.cpp:321
#1  0x7f301113ec17 in qpid::sys::SocketAcceptor::listen 
(this=this@entry=0x29bf500, interfaces=..., port=port@entry=5671, 
backlog=backlog@entry=10, factory=...)
at /usr/src/debug/qpid-cpp-0.34/src/qpid/sys/SocketTransport.cpp:150
#2  0x7f3010fdfdbb in qpid::sys::SslPlugin::initialize (this=0x7f3011407180 
, target=...) at 
/usr/src/debug/qpid-cpp-0.34/src/qpid/sys/SslPlugin.cpp:126
#3  0x7f3010a876af in operator() (a1=..., p=, 
this=) at /usr/include/boost/bind/mem_fn_template.hpp:165
#4  operator(), 
boost::_bi::list1 > (a=, f=, 
this=) at /usr/include/boost/bind/bind.hpp:313
#5  operator() (a1=@0x2488ce0: 0x7f3011407180 
, this=) at 
/usr/include/boost/bind/bind_template.hpp:47
#6  for_each<__gnu_cxx::__normal_iterator >, boost::_bi::bind_t, boost::_bi::list2, 
boost::reference_wrapper > > > (__f=..., __last=..., 
__first=) at /usr/include/c++/4.8.2/bits/stl_algo.h:4417
#7  qpid::(anonymous namespace)::each_plugin, 
boost::_bi::list2, boost::reference_wrapper 
> > > (f=...) at /usr/src/debug/qpid-cpp-0.34/src/qpid/Plugin.cpp:73
#8  0x7f3010a877a2 in qpid::Plugin::initializeAll (t=...) at 
/usr/src/debug/qpid-cpp-0.34/src/qpid/Plugin.cpp:91
#9  0x7f3010ffc99a in qpid::broker::Broker::Broker (this=0x249bae0, 
conf=...) at /usr/src/debug/qpid-cpp-0.34/src/qpid/broker/Broker.cpp:376
#10 0x00405c82 in qpid::broker::QpiddBroker::execute 
(this=this@entry=0x7ffe36dc284e, options=0x24909a0) at 
/usr/src/debug/qpid-cpp-0.34/src/posix/QpiddBroker.cpp:229
#11 0x00409d04 in qpid::broker::run_broker (argc=3, 
argv=0x7ffe36dc2be8, hidden=) at 
/usr/src/debug/qpid-cpp-0.34/src/qpidd.cpp:108
#12 0x7f300fb0db35 in __libc_start_main (main=0x404ce0 , 
argc=3, ubp_av=0x7ffe36dc2be8, init=, fini=, 
rtld_fini=, 
stack_end=0x7ffe36dc2bd8) at ../csu/libc-start.c:274
#13 0x00404f51 in _start ()
(gdb) list
316 (void) getAddrInfo(*this);
317 }
318 }
319 
320 bool SocketAddress::nextAddress() const {
321 bool r = currentAddrInfo->ai_next != 0;
322 if (r)
323 currentAddrInfo = currentAddrInfo->ai_next;
324 return r;
325 }
(gdb) p currentAddrInfo
$2 = (addrinfo *) 0x0
(gdb) 
{code}

It is OK if the broker won't start, but it should not segfault.






[jira] [Assigned] (DISPATCH-749) unmapping all link-routing addresses leaves half of addresses mapped

2017-04-22 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec reassigned DISPATCH-749:
--

Assignee: Pavel Moravec

> unmapping all link-routing addresses leaves half of addresses mapped
> 
>
> Key: DISPATCH-749
> URL: https://issues.apache.org/jira/browse/DISPATCH-749
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Router Node
>Reporter: Pavel Moravec
>Assignee: Pavel Moravec
>
> Setup:
> qpidd < - > qdrouterd(S) < - > qdrouterd(C) < - clients
> where clients are link-routing via the qdrouterd network to qpidd.
> Under specific situations (see 
> https://bugzilla.redhat.com/show_bug.cgi?id=1426242 for details), when 
> qdrouterd(S) is not available for some time, qdrouterd(C) returns 
> "qd:no-route-to-dest" to its clients - so far so good.
> But the error persists even after qdrouterd(S) is up, connected from 
> qdrouterd(C), all links established and addresses mapped.
> The cause is that 
> https://github.com/apache/qpid-dispatch/blob/master/python/qpid_dispatch_internal/router/node.py#L536-L537
>  does _not_ unmap all addresses:
> {code}
> $ python
> Python 2.7.5 (default, Aug  2 2016, 04:20:16) 
> [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> mobile_addresses = ['a.', 'b.', 'c.', 'd.']
> >>> for addr in mobile_addresses:
> ...   mobile_addresses.remove(addr)
> ... 
> >>> print mobile_addresses
> ['b.', 'd.']
> >>> 
> {code}
> We can't iterate over a list that way while removing items from it.
> A trivial fix is to iterate over a copy of the list:
> {code}
> for addr in mobile_addresses[:]:
> {code}






[jira] [Created] (DISPATCH-749) unmapping all link-routing addresses leaves half of addresses mapped

2017-04-22 Thread Pavel Moravec (JIRA)
Pavel Moravec created DISPATCH-749:
--

 Summary: unmapping all link-routing addresses leaves half of 
addresses mapped
 Key: DISPATCH-749
 URL: https://issues.apache.org/jira/browse/DISPATCH-749
 Project: Qpid Dispatch
  Issue Type: Bug
  Components: Router Node
Reporter: Pavel Moravec


Setup:

qpidd < - > qdrouterd(S) < - > qdrouterd(C) < - clients

where clients are link-routing via the qdrouterd network to qpidd.

Under specific situations (see 
https://bugzilla.redhat.com/show_bug.cgi?id=1426242 for details), when 
qdrouterd(S) is not available for some time, qdrouterd(C) returns 
"qd:no-route-to-dest" to its clients - so far so good.

But the error persists even after qdrouterd(S) is up, connected from 
qdrouterd(C), all links established and addresses mapped.

The cause is that 
https://github.com/apache/qpid-dispatch/blob/master/python/qpid_dispatch_internal/router/node.py#L536-L537
 does _not_ unmap all addresses:

{code}
$ python
Python 2.7.5 (default, Aug  2 2016, 04:20:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> mobile_addresses = ['a.', 'b.', 'c.', 'd.']
>>> for addr in mobile_addresses:
...   mobile_addresses.remove(addr)
... 
>>> print mobile_addresses
['b.', 'd.']
>>> 
{code}


We can't iterate over a list that way while removing items from it.

A trivial fix is to iterate over a copy of the list:

{code}
for addr in mobile_addresses[:]:
{code}
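The copy-slice fix can be verified in isolation; this standalone snippet (not the router code itself) demonstrates both behaviours:

```python
# Removing items while iterating over the same list skips every other
# element, because removal shifts the remaining items left under the
# advancing iterator index:
addrs = ['a.', 'b.', 'c.', 'd.']
for addr in addrs:
    addrs.remove(addr)
assert addrs == ['b.', 'd.']  # half the addresses stay mapped

# Iterating over a shallow copy (addrs[:]) leaves the iterator unaffected
# by the removals, so every element is removed:
addrs = ['a.', 'b.', 'c.', 'd.']
for addr in addrs[:]:
    addrs.remove(addr)
assert addrs == []  # all addresses unmapped
```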







[jira] [Created] (QPID-7677) [C++ broker] broker requires much more memory for AMQP1.0 subscription than for 0-10 one

2017-02-19 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7677:
---

 Summary: [C++ broker] broker requires much more memory for AMQP1.0 
subscription than for 0-10 one
 Key: QPID-7677
 URL: https://issues.apache.org/jira/browse/QPID-7677
 Project: Qpid
  Issue Type: Improvement
  Components: C++ Broker
Affects Versions: 0.32
Reporter: Pavel Moravec


Having an AMQP 1.0 consumer of a queue increases the memory usage of qpidd by 
much more than having an AMQP 0-10 consumer - the difference is approx. 0.5 MB.

This affects scalable usage of the broker, where having thousands of 1.0 
consumers requires gigabytes of memory.


Trivial test:

{code}
qpid-config add queue qqq

for i in $(seq 1 10); do qpid-receive -a qqq -f & sleep 0.1; done

# this added small 26kB VSZ and 75kB RSS per one consumer/receiver

for i in $(seq 1 10); do qpid-receive -a qqq -f --connection-options 
"{protocol:'amqp1.0'}" & sleep 0.1; done

# this added 640kB VSZ and 805kB RSS per one consumer/receiver
{code}

Better test: have multiple consumers on the same AMQP connection (I can 
provide such a program on request).
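Measuring the per-consumer memory delta can be automated; a minimal sketch (Linux-only, reads /proc; the helper names are illustrative, not part of any reproducer):

```python
def parse_vm_rss_kb(status_text):
    """Extract VmRSS (resident set size, in kB) from the text of
    /proc/<pid>/status."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # e.g. "VmRSS:     805 kB"
    raise ValueError("VmRSS not found")

def rss_kb(pid):
    """Return the current resident set size of a process, in kB (Linux)."""
    with open("/proc/%d/status" % pid) as f:
        return parse_vm_rss_kb(f.read())

# Usage idea: snapshot rss_kb(qpidd_pid) before and after attaching N
# consumers, then divide the delta by N to get the per-consumer cost.
```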


Running such scenario under valgrind/massif showed the extra memory is consumed 
by:

->91.90% (143,361,792B) 0x5D544F1: 
qpid::broker::amqp::OutgoingFromQueue::OutgoingFromQueue(qpid::broker::Broker&, 
std::string const&, std::string const&, boost::shared_ptr, 
pn_link_t*, qpid::broker::amqp::Session&, qpid::sys::OutputControl&, 
qpid::broker::SubscriptionType, bool, bool) (Outgoing.h:57)

And indeed, when I changed "deliveries(5000)" to "deliveries(50)" in:

https://github.com/apache/qpid-cpp/blob/master/src/qpid/broker/amqp/Outgoing.cpp#L68

memory consumption decreased dramatically.


I don't understand this part of the code, but please make the 
"CircularArray deliveries(5000)" more memory-efficient.







[jira] [Created] (QPID-7325) [C++ broker] Active HA broker memory leak when creating from an autoDel queue

2016-06-27 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7325:
---

 Summary: [C++ broker] Active HA broker memory leak when 
creating from an autoDel queue
 Key: QPID-7325
 URL: https://issues.apache.org/jira/browse/QPID-7325
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: qpid-cpp-0.34
Reporter: Pavel Moravec


With a consumer that (in a loop):
- creates an autoDelete queue
- subscribes to it
- unsubscribes
against an HA cluster, the primary broker's memory consumption grows over time.

Steps to Reproduce:
1. Start 3 brokers in an HA cluster (my reproducer uses these options:

qpidd --port=5672 --store-dir=_5672 --log-to-file=qpidd.5672.log 
--data-dir=_5672 --auth=no --log-to-stderr=no --ha-cluster=yes 
--ha-brokers-url=localhost:5672,localhost:5673,localhost:5674 --ha-replicate=all

)

2. Run simple qpid-receive in a loop:
while true; do
  qpid-receive -a "autoDelQueue;  {create:always, node:{ 
x-declare:{auto-delete:True}}}"
  sleep 0.1
done

3. Monitor memory usage of primary broker.


Observations from variants of the reproducer:

- a standalone broker does not exhibit the memory leak

- even a standalone broker in an HA cluster does not - backup brokers are mandatory 
for the leak

- nevertheless, the replicator bridge queues on the primary are almost always empty; 
no bursts of messages occur there

- only the primary broker is affected, backups are OK

- the amount of leaked memory does not depend on the number of backups (very similar 
memory usage when having 1, 2 or 4 backups)

- valgrind does not show any leaked or excessive "still reachable" memory, even 
after a 1-hour test where memory consumption grew evidently

- curiously, every run of "qpid-stat -q" causes _additional_ memory to be leaked 
- maybe because it uses an auxiliary autoDel queue as well (but just one, 
while the leak is much bigger)?

- the bug is not present in 0.30

- --ha-replicate=all is crucial; even --ha-replicate=configuration does _not_ 
trigger the leak

- adding --enable-qmf2=no prevents most of the leak - memory still grows (sic!) 
but evidently more slowly







[jira] [Resolved] (QPID-7182) [C++ broker] high CPU usage on backup brokers following QPID-7149 scenario

2016-04-06 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-7182.
-
   Resolution: Fixed
Fix Version/s: qpid-cpp-next

> [C++ broker] high CPU usage on backup brokers following QPID-7149 scenario
> --
>
> Key: QPID-7182
> URL: https://issues.apache.org/jira/browse/QPID-7182
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Clustering
>Affects Versions: qpid-cpp-next
>Reporter: Pavel Moravec
>Assignee: Alan Conway
> Fix For: qpid-cpp-next
>
>
> Following scenario from QPID-7149 with --ha-replicate=all, with whatever 
> patch fixing it applied or not, CPU usage of backup brokers grow over the 
> time.
> gdb shows one active thread always with backtrace:
> {noformat}
> #0  0x7f9295fc9c98 in find (this=0x7f9270055840, data= out>)
> at 
> /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/tr1_impl/hashtable:786
> #1  qpid::ha::QueueReplicator::dequeueEvent (this=0x7f9270055840, data= optimized out>)
> at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:306
> #2  0x7f9295fca82b in operator() (this=0x7f9270055840, deliverable= optimized out>)
> at /usr/include/boost/function/function_template.hpp:1013
> #3  qpid::ha::QueueReplicator::route (this=0x7f9270055840, deliverable= optimized out>)
> at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:329
> #4  0x7f9296b9b854 in qpid::broker::SemanticState::route 
> (this=0x7f927001d088, msg=..., strategy=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/SemanticState.cpp:506
> #5  0x7f9296bb8ab7 in qpid::broker::SessionState::handleContent 
> (this=0x7f927001cec0, frame=)
> at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:233
> #6  0x7f9296bb90a1 in qpid::broker::SessionState::handleIn 
> (this=0x7f927001cec0, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:293
> #7  0x7f92965d4c31 in qpid::amqp_0_10::SessionHandler::handleIn 
> (this=0x7f927002fbb0, f=...)
> at /data_xfs/qpid/cpp/src/qpid/amqp_0_10/SessionHandler.cpp:93
> #8  0x7f9296b29a2b in operator() (this=0x7f9270002060, frame=...) at 
> /data_xfs/qpid/cpp/src/qpid/framing/Handler.h:39
> #9  qpid::broker::ConnectionHandler::handle (this=0x7f9270002060, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/ConnectionHandler.cpp:93
> #10 0x7f9296b247e8 in qpid::broker::amqp_0_10::Connection::received 
> (this=0x7f9270001e80, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/amqp_0_10/Connection.cpp:198
> #11 0x7f9296ab2863 in qpid::amqp_0_10::Connection::decode 
> (this=0x7f92700018b0, buffer=, 
> size=) at 
> /data_xfs/qpid/cpp/src/qpid/amqp_0_10/Connection.cpp:59
> #12 0x7f92965fdca0 in qpid::sys::AsynchIOHandler::readbuff 
> (this=0x7f9279b0, buff=0x7f9270001880)
> at /data_xfs/qpid/cpp/src/qpid/sys/AsynchIOHandler.cpp:138
> #13 0x7f929657be89 in operator() (this=0x7f927a50, h=...) at 
> /usr/include/boost/function/function_template.hpp:1013
> #14 qpid::sys::posix::AsynchIO::readable (this=0x7f927a50, h=...) at 
> /data_xfs/qpid/cpp/src/qpid/sys/posix/AsynchIO.cpp:453
> #15 0x7f92966025b3 in boost::function1 qpid::sys::DispatchHandle&>::operator() (this=, 
> a0=) at 
> /usr/include/boost/function/function_template.hpp:1013
> #16 0x7f9296601246 in qpid::sys::DispatchHandle::processEvent 
> (this=0x7f927a58, type=qpid::sys::Poller::READABLE)
> at /data_xfs/qpid/cpp/src/qpid/sys/DispatchHandle.cpp:280
> #17 0x7f92965a1d1d in process (this=0x7961c0) at 
> /data_xfs/qpid/cpp/src/qpid/sys/Poller.h:131
> ..
> {noformat}
> or with:
> {noformat}
> #0  0x0032c4c0a7b0 in pthread_mutex_unlock () from /lib64/libpthread.so.0
> #1  0x7fb0958038fa in qpid::sys::Mutex::unlock (this= out>) at /data_ext4/qpid/cpp/src/qpid/sys/posix/Mutex.h:120
> #2  0x7fb095840628 in ~ScopedLock (this=0x112cfd0, data= out>) at /data_ext4/qpid/cpp/src/qpid/sys/Mutex.h:34
> #3  qpid::ha::QueueReplicator::dequeueEvent (this=0x112cfd0, data= optimized out>)
> at /data_ext4/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:308
> ..
> {noformat}
> Not sure where the busy loop originates.






[jira] [Commented] (QPID-7182) [C++ broker] high CPU usage on backup brokers following QPID-7149 scenario

2016-04-06 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228796#comment-15228796
 ] 

Pavel Moravec commented on QPID-7182:
-

I agree the CPU increase over time is gone on current trunk. Printing 
"e.ids.size()" shows a stable value of 1 that does not grow.

It seems the patch for QPID-7149 fixed this as well.

> [C++ broker] high CPU usage on backup brokers following QPID-7149 scenario
> --
>
> Key: QPID-7182
> URL: https://issues.apache.org/jira/browse/QPID-7182
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Clustering
>Affects Versions: qpid-cpp-next
>Reporter: Pavel Moravec
>Assignee: Alan Conway
> Fix For: qpid-cpp-next
>
>
> Following scenario from QPID-7149 with --ha-replicate=all, with whatever 
> patch fixing it applied or not, CPU usage of backup brokers grow over the 
> time.
> gdb shows one active thread always with backtrace:
> {noformat}
> #0  0x7f9295fc9c98 in find (this=0x7f9270055840, data= out>)
> at 
> /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/tr1_impl/hashtable:786
> #1  qpid::ha::QueueReplicator::dequeueEvent (this=0x7f9270055840, data= optimized out>)
> at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:306
> #2  0x7f9295fca82b in operator() (this=0x7f9270055840, deliverable= optimized out>)
> at /usr/include/boost/function/function_template.hpp:1013
> #3  qpid::ha::QueueReplicator::route (this=0x7f9270055840, deliverable= optimized out>)
> at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:329
> #4  0x7f9296b9b854 in qpid::broker::SemanticState::route 
> (this=0x7f927001d088, msg=..., strategy=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/SemanticState.cpp:506
> #5  0x7f9296bb8ab7 in qpid::broker::SessionState::handleContent 
> (this=0x7f927001cec0, frame=)
> at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:233
> #6  0x7f9296bb90a1 in qpid::broker::SessionState::handleIn 
> (this=0x7f927001cec0, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:293
> #7  0x7f92965d4c31 in qpid::amqp_0_10::SessionHandler::handleIn 
> (this=0x7f927002fbb0, f=...)
> at /data_xfs/qpid/cpp/src/qpid/amqp_0_10/SessionHandler.cpp:93
> #8  0x7f9296b29a2b in operator() (this=0x7f9270002060, frame=...) at 
> /data_xfs/qpid/cpp/src/qpid/framing/Handler.h:39
> #9  qpid::broker::ConnectionHandler::handle (this=0x7f9270002060, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/ConnectionHandler.cpp:93
> #10 0x7f9296b247e8 in qpid::broker::amqp_0_10::Connection::received 
> (this=0x7f9270001e80, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/amqp_0_10/Connection.cpp:198
> #11 0x7f9296ab2863 in qpid::amqp_0_10::Connection::decode 
> (this=0x7f92700018b0, buffer=, 
> size=) at 
> /data_xfs/qpid/cpp/src/qpid/amqp_0_10/Connection.cpp:59
> #12 0x7f92965fdca0 in qpid::sys::AsynchIOHandler::readbuff 
> (this=0x7f9279b0, buff=0x7f9270001880)
> at /data_xfs/qpid/cpp/src/qpid/sys/AsynchIOHandler.cpp:138
> #13 0x7f929657be89 in operator() (this=0x7f927a50, h=...) at 
> /usr/include/boost/function/function_template.hpp:1013
> #14 qpid::sys::posix::AsynchIO::readable (this=0x7f927a50, h=...) at 
> /data_xfs/qpid/cpp/src/qpid/sys/posix/AsynchIO.cpp:453
> #15 0x7f92966025b3 in boost::function1 qpid::sys::DispatchHandle&>::operator() (this=, 
> a0=) at 
> /usr/include/boost/function/function_template.hpp:1013
> #16 0x7f9296601246 in qpid::sys::DispatchHandle::processEvent 
> (this=0x7f927a58, type=qpid::sys::Poller::READABLE)
> at /data_xfs/qpid/cpp/src/qpid/sys/DispatchHandle.cpp:280
> #17 0x7f92965a1d1d in process (this=0x7961c0) at 
> /data_xfs/qpid/cpp/src/qpid/sys/Poller.h:131
> ..
> {noformat}
> or with:
> {noformat}
> #0  0x0032c4c0a7b0 in pthread_mutex_unlock () from /lib64/libpthread.so.0
> #1  0x7fb0958038fa in qpid::sys::Mutex::unlock (this= out>) at /data_ext4/qpid/cpp/src/qpid/sys/posix/Mutex.h:120
> #2  0x7fb095840628 in ~ScopedLock (this=0x112cfd0, data= out>) at /data_ext4/qpid/cpp/src/qpid/sys/Mutex.h:34
> #3  qpid::ha::QueueReplicator::dequeueEvent (this=0x112cfd0, data= optimized out>)
> at /data_ext4/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:308
> ..
> {noformat}
> Not sure where the busy loop originates.






[jira] [Commented] (QPID-7182) [C++ broker] high CPU usage on backup brokers following QPID-7149 scenario

2016-04-04 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15223921#comment-15223921
 ] 

Pavel Moravec commented on QPID-7182:
-

Good catch:

Adding a warning to QueueReplicator.cpp:

{noformat}
QPID_LOG(trace, logPrefix << "Dequeue " << e.ids);
//TODO: should be able to optimise the following
QPID_LOG(warning, "PavelM: e.ids.size()=" << e.ids.size() << " positions.size()=" << positions.size());
for (ReplicationIdSet::iterator i = e.ids.begin(); i != e.ids.end(); ++i) {
{noformat}

I see e.ids.size() growing over time. That means the for loop below takes 
longer and longer on each call of the QueueReplicator::dequeueEvent method.
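
A minimal Python sketch of the suspected behaviour (illustrative names, not the broker's actual C++ code): if the accumulated ID set is never cleared, each dequeueEvent call re-scans every ID seen so far, so per-call work grows linearly with history:

```python
def process_dequeues_buggy(event_ids, accumulated, positions):
    """Toy model of the bug: `accumulated` is never cleared, so the
    work done per call grows without bound over the broker's lifetime."""
    accumulated |= event_ids          # new IDs merge into the old ones
    work = 0
    for rid in accumulated:           # iterates everything seen so far
        positions.discard(rid)
        work += 1
    return work

acc, pos = set(), set()
assert process_dequeues_buggy({1}, acc, pos) == 1
assert process_dequeues_buggy({2}, acc, pos) == 2  # previous IDs re-scanned
assert process_dequeues_buggy({3}, acc, pos) == 3  # ...and again, ever longer
```

This matches the observation: e.ids.size() keeps growing, so both CPU per call and memory rise steadily.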

> [C++ broker] high CPU usage on backup brokers following QPID-7149 scenario
> --
>
> Key: QPID-7182
> URL: https://issues.apache.org/jira/browse/QPID-7182
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Clustering
>Affects Versions: qpid-cpp-next
>Reporter: Pavel Moravec
>Assignee: Alan Conway
>
> Following scenario from QPID-7149 with --ha-replicate=all, with whatever 
> patch fixing it applied or not, CPU usage of backup brokers grow over the 
> time.
> gdb shows one active thread always with backtrace:
> {noformat}
> #0  0x7f9295fc9c98 in find (this=0x7f9270055840, data= out>)
> at 
> /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/tr1_impl/hashtable:786
> #1  qpid::ha::QueueReplicator::dequeueEvent (this=0x7f9270055840, data= optimized out>)
> at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:306
> #2  0x7f9295fca82b in operator() (this=0x7f9270055840, deliverable= optimized out>)
> at /usr/include/boost/function/function_template.hpp:1013
> #3  qpid::ha::QueueReplicator::route (this=0x7f9270055840, deliverable= optimized out>)
> at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:329
> #4  0x7f9296b9b854 in qpid::broker::SemanticState::route 
> (this=0x7f927001d088, msg=..., strategy=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/SemanticState.cpp:506
> #5  0x7f9296bb8ab7 in qpid::broker::SessionState::handleContent 
> (this=0x7f927001cec0, frame=)
> at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:233
> #6  0x7f9296bb90a1 in qpid::broker::SessionState::handleIn 
> (this=0x7f927001cec0, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:293
> #7  0x7f92965d4c31 in qpid::amqp_0_10::SessionHandler::handleIn 
> (this=0x7f927002fbb0, f=...)
> at /data_xfs/qpid/cpp/src/qpid/amqp_0_10/SessionHandler.cpp:93
> #8  0x7f9296b29a2b in operator() (this=0x7f9270002060, frame=...) at 
> /data_xfs/qpid/cpp/src/qpid/framing/Handler.h:39
> #9  qpid::broker::ConnectionHandler::handle (this=0x7f9270002060, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/ConnectionHandler.cpp:93
> #10 0x7f9296b247e8 in qpid::broker::amqp_0_10::Connection::received 
> (this=0x7f9270001e80, frame=...)
> at /data_xfs/qpid/cpp/src/qpid/broker/amqp_0_10/Connection.cpp:198
> #11 0x7f9296ab2863 in qpid::amqp_0_10::Connection::decode 
> (this=0x7f92700018b0, buffer=, 
> size=) at 
> /data_xfs/qpid/cpp/src/qpid/amqp_0_10/Connection.cpp:59
> #12 0x7f92965fdca0 in qpid::sys::AsynchIOHandler::readbuff 
> (this=0x7f9279b0, buff=0x7f9270001880)
> at /data_xfs/qpid/cpp/src/qpid/sys/AsynchIOHandler.cpp:138
> #13 0x7f929657be89 in operator() (this=0x7f927a50, h=...) at 
> /usr/include/boost/function/function_template.hpp:1013
> #14 qpid::sys::posix::AsynchIO::readable (this=0x7f927a50, h=...) at 
> /data_xfs/qpid/cpp/src/qpid/sys/posix/AsynchIO.cpp:453
> #15 0x7f92966025b3 in boost::function1 qpid::sys::DispatchHandle&>::operator() (this=, 
> a0=) at 
> /usr/include/boost/function/function_template.hpp:1013
> #16 0x7f9296601246 in qpid::sys::DispatchHandle::processEvent 
> (this=0x7f927a58, type=qpid::sys::Poller::READABLE)
> at /data_xfs/qpid/cpp/src/qpid/sys/DispatchHandle.cpp:280
> #17 0x7f92965a1d1d in process (this=0x7961c0) at 
> /data_xfs/qpid/cpp/src/qpid/sys/Poller.h:131
> ..
> {noformat}
> or with:
> {noformat}
> #0  0x0032c4c0a7b0 in pthread_mutex_unlock () from /lib64/libpthread.so.0
> #1  0x7fb0958038fa in qpid::sys::Mutex::unlock (this= out>) at /data_ext4/qpid/cpp/src/qpid/sys/posix/Mutex.h:120
> #2  0x7fb095840628 in ~ScopedLock (this=0x112cfd0, data= out>) at /data_ext4/qpid/cpp/src/qpid/sys/Mutex.h:34
> #3  qpid::ha::QueueReplicator::dequeueEvent (this=0x112cfd0, data= optimized out>)
> at /data_ext4/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:308
> ..
> {noformat}
> Not sure where the busy loop originates.





[jira] [Created] (QPID-7182) [C++ broker] high CPU usage on backup brokers following QPID-7149 scenario

2016-04-04 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7182:
---

 Summary: [C++ broker] high CPU usage on backup brokers following 
QPID-7149 scenario
 Key: QPID-7182
 URL: https://issues.apache.org/jira/browse/QPID-7182
 Project: Qpid
  Issue Type: Bug
  Components: C++ Clustering
Affects Versions: qpid-cpp-next
Reporter: Pavel Moravec
Assignee: Alan Conway


Following the scenario from QPID-7149 with --ha-replicate=all, with or without any 
patch fixing it applied, the CPU usage of backup brokers grows over time.

gdb always shows one active thread with this backtrace:

{noformat}
#0  0x7f9295fc9c98 in find (this=0x7f9270055840, data=<value optimized out>)
    at /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/tr1_impl/hashtable:786
#1  qpid::ha::QueueReplicator::dequeueEvent (this=0x7f9270055840, data=<value optimized out>)
    at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:306
#2  0x7f9295fca82b in operator() (this=0x7f9270055840, deliverable=<value optimized out>)
    at /usr/include/boost/function/function_template.hpp:1013
#3  qpid::ha::QueueReplicator::route (this=0x7f9270055840, deliverable=<value optimized out>)
    at /data_xfs/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:329
#4  0x7f9296b9b854 in qpid::broker::SemanticState::route (this=0x7f927001d088, msg=..., strategy=...)
    at /data_xfs/qpid/cpp/src/qpid/broker/SemanticState.cpp:506
#5  0x7f9296bb8ab7 in qpid::broker::SessionState::handleContent (this=0x7f927001cec0, frame=<value optimized out>)
    at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:233
#6  0x7f9296bb90a1 in qpid::broker::SessionState::handleIn (this=0x7f927001cec0, frame=...)
    at /data_xfs/qpid/cpp/src/qpid/broker/SessionState.cpp:293
#7  0x7f92965d4c31 in qpid::amqp_0_10::SessionHandler::handleIn (this=0x7f927002fbb0, f=...)
    at /data_xfs/qpid/cpp/src/qpid/amqp_0_10/SessionHandler.cpp:93
#8  0x7f9296b29a2b in operator() (this=0x7f9270002060, frame=...)
    at /data_xfs/qpid/cpp/src/qpid/framing/Handler.h:39
#9  qpid::broker::ConnectionHandler::handle (this=0x7f9270002060, frame=...)
    at /data_xfs/qpid/cpp/src/qpid/broker/ConnectionHandler.cpp:93
#10 0x7f9296b247e8 in qpid::broker::amqp_0_10::Connection::received (this=0x7f9270001e80, frame=...)
    at /data_xfs/qpid/cpp/src/qpid/broker/amqp_0_10/Connection.cpp:198
#11 0x7f9296ab2863 in qpid::amqp_0_10::Connection::decode (this=0x7f92700018b0, buffer=<value optimized out>, size=<value optimized out>)
    at /data_xfs/qpid/cpp/src/qpid/amqp_0_10/Connection.cpp:59
#12 0x7f92965fdca0 in qpid::sys::AsynchIOHandler::readbuff (this=0x7f9279b0, buff=0x7f9270001880)
    at /data_xfs/qpid/cpp/src/qpid/sys/AsynchIOHandler.cpp:138
#13 0x7f929657be89 in operator() (this=0x7f927a50, h=...)
    at /usr/include/boost/function/function_template.hpp:1013
#14 qpid::sys::posix::AsynchIO::readable (this=0x7f927a50, h=...)
    at /data_xfs/qpid/cpp/src/qpid/sys/posix/AsynchIO.cpp:453
#15 0x7f92966025b3 in boost::function1<void, qpid::sys::DispatchHandle&>::operator() (this=<value optimized out>, a0=<value optimized out>)
    at /usr/include/boost/function/function_template.hpp:1013
#16 0x7f9296601246 in qpid::sys::DispatchHandle::processEvent (this=0x7f927a58, type=qpid::sys::Poller::READABLE)
    at /data_xfs/qpid/cpp/src/qpid/sys/DispatchHandle.cpp:280
#17 0x7f92965a1d1d in process (this=0x7961c0)
    at /data_xfs/qpid/cpp/src/qpid/sys/Poller.h:131
..
{noformat}

or with:

{noformat}
#0  0x0032c4c0a7b0 in pthread_mutex_unlock () from /lib64/libpthread.so.0
#1  0x7fb0958038fa in qpid::sys::Mutex::unlock (this=<value optimized out>)
    at /data_ext4/qpid/cpp/src/qpid/sys/posix/Mutex.h:120
#2  0x7fb095840628 in ~ScopedLock (this=0x112cfd0, data=<value optimized out>)
    at /data_ext4/qpid/cpp/src/qpid/sys/Mutex.h:34
#3  qpid::ha::QueueReplicator::dequeueEvent (this=0x112cfd0, data=<value optimized out>)
    at /data_ext4/qpid/cpp/src/qpid/ha/QueueReplicator.cpp:308
..
{noformat}

Not sure where the busy loop originates.






[jira] [Commented] (QPID-7149) [HA] active HA broker memory leak when ring queue discards overflow messages

2016-03-29 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15215607#comment-15215607
 ] 

Pavel Moravec commented on QPID-7149:
-

It allows everything:

acl allow all all

I set that explicitly to override the default ACL file, which contains a rule 
disabling federation links (that would _prevent_ the memory leak).

> [HA] active HA broker memory leak when ring queue discards overflow messages
> 
>
> Key: QPID-7149
> URL: https://issues.apache.org/jira/browse/QPID-7149
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
> Environment: RHEL6
> qpid trunk svn rev. 1735384
> - issue seen in very old releases (since active-passive HA cluster initial 
> implementation, most probably)
> libstdc++-devel-4.4.7-4.el6.x86_64
> gcc-c++-4.4.7-4.el6.x86_64
> libgcc-4.4.7-4.el6.x86_64
> libstdc++-4.4.7-4.el6.x86_64
> gcc-4.4.7-4.el6.x86_64
>Reporter: Pavel Moravec
>Assignee: Alan Conway
>
> There is a memory leak on active HA broker, triggered most probably by 
> purging overflow message from a ring queue. Basic scenario is to setup HA 
> cluster, promote to primary and feed forever a ring queue with messages.
> Detailed scenario:
> 1) Start brokers and promote one to primary:
> {noformat}
> start_broker() {
>   port=$1
>   shift
>   rm -rf _${port}
>   mkdir _${port}
>   nohup qpidd --load-module=ha.so --port=$port 
> --log-to-file=qpidd.$port.log --data-dir=_${port} --auth=no 
> --log-to-stderr=no --ha-cluster=yes 
> --ha-brokers-url="$(hostname):5672,$(hostname):5673,$(hostname):5674" 
> --ha-replicate=all --acl-file=/root/qpidd.acl "$@" > /dev/null 2>&1 &
>   sleep 1
> }
> killall qpidd qpid-receive 2> /dev/null
> rm -f qpidd.*.log
> start_broker 5672
> sleep 1
> qpid-ha promote -b $(hostname):5672 --cluster-manager
> sleep 1
> start_broker 5673
> sleep 1
> start_broker 5674
> {noformat}
> 2) Create ring queues and send there messages (it is enough to have 1 queue, 
> having more should show the leak faster):
> {noformat}
> for i in $(seq 0 9); do
>   qpid-config add queue FromKeyServer_$i --max-queue-size=1 
> --max-queue-count=10 --limit-policy=ring --argument=x-qpid-priorities=10
> done
> while true; do
>   for j in $(seq 1 10); do
>   for i in $(seq 1 10); do
>   for k in $(seq 0 9); do
>   qpid-send -a FromKeyServer_$k -m 100 
> --send-rate=50 -- priority=$(($((RANDOM))%10)) &
>   done
>   done
>   wait
>   while [ $(qpid-stat -q | grep broker-replicator | sed "s/Y//g" 
> | awk '{ print $2 }' | sort -n | tail -n1) != "0" ]; do
>   sleep 1
>   done
>   done
>   date
>   ps aux | grep qpidd | grep "port=5672" | awk -F "--store-dir" '{ print 
> $1 }'
> done
> {noformat}
> (the "while [ $(qpid-stat -q | .." cycle is there just to slow down the 
> message enqueues to ensure the replication federation queues don't build a big 
> backlog - that would interfere with observing memory consumption)
> 3) Run those scripts and monitor memory consumption.
> - without using priority queues and sending messages without priorities, leak 
> is evident as well - sometimes smaller, sometimes the same
> - valgrind (on some older versions I tested before more thoroughly) detects 
> nothing (neither leaked memory or reachable at shutdown)
> - same leak is evident even with --ha-replicate=none
> - number of backup brokers does not affect the memory leak






[jira] [Commented] (QPID-7149) [HA] active HA broker memory leak when ring queue discards overflow messages

2016-03-20 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15200045#comment-15200045
 ] 

Pavel Moravec commented on QPID-7149:
-

The leak is present even if

{noformat}
--ha-cluster=yes 
--ha-brokers-url="$(hostname):5672,$(hostname):5673,$(hostname):5674" 
--ha-replicate=none
{noformat}

is used in the reproducer and _no_ backup broker is running.

Removing the above options makes the leak disappear.

> [HA] active HA broker memory leak when ring queue discards overflow messages
> 
>
> Key: QPID-7149
> URL: https://issues.apache.org/jira/browse/QPID-7149
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
> Environment: RHEL6
> qpid trunk svn rev. 1735384
> - issue seen in very old releases (since active-passive HA cluster initial 
> implementation, most probably)
> libstdc++-devel-4.4.7-4.el6.x86_64
> gcc-c++-4.4.7-4.el6.x86_64
> libgcc-4.4.7-4.el6.x86_64
> libstdc++-4.4.7-4.el6.x86_64
> gcc-4.4.7-4.el6.x86_64
>Reporter: Pavel Moravec
>
> There is a memory leak on active HA broker, triggered most probably by 
> purging overflow message from a ring queue. Basic scenario is to setup HA 
> cluster, promote to primary and feed forever a ring queue with messages.
> Detailed scenario:
> 1) Start brokers and promote one to primary:
> {noformat}
> start_broker() {
>   port=$1
>   shift
>   rm -rf _${port}
>   mkdir _${port}
>   nohup qpidd --load-module=ha.so --port=$port 
> --log-to-file=qpidd.$port.log --data-dir=_${port} --auth=no 
> --log-to-stderr=no --ha-cluster=yes 
> --ha-brokers-url="$(hostname):5672,$(hostname):5673,$(hostname):5674" 
> --ha-replicate=all --acl-file=/root/qpidd.acl "$@" > /dev/null 2>&1 &
>   sleep 1
> }
> killall qpidd qpid-receive 2> /dev/null
> rm -f qpidd.*.log
> start_broker 5672
> sleep 1
> qpid-ha promote -b $(hostname):5672 --cluster-manager
> sleep 1
> start_broker 5673
> sleep 1
> start_broker 5674
> {noformat}
> 2) Create ring queues and send there messages (it is enough to have 1 queue, 
> having more should show the leak faster):
> {noformat}
> for i in $(seq 0 9); do
>   qpid-config add queue FromKeyServer_$i --max-queue-size=1 
> --max-queue-count=10 --limit-policy=ring --argument=x-qpid-priorities=10
> done
> while true; do
>   for j in $(seq 1 10); do
>   for i in $(seq 1 10); do
>   for k in $(seq 0 9); do
>   qpid-send -a FromKeyServer_$k -m 100 
> --send-rate=50 -- priority=$(($((RANDOM))%10)) &
>   done
>   done
>   wait
>   while [ $(qpid-stat -q | grep broker-replicator | sed "s/Y//g" 
> | awk '{ print $2 }' | sort -n | tail -n1) != "0" ]; do
>   sleep 1
>   done
>   done
>   date
>   ps aux | grep qpidd | grep "port=5672" | awk -F "--store-dir" '{ print 
> $1 }'
> done
> {noformat}
> (the "while [ $(qpid-stat -q | .." cycle is there just to slow down the 
> message enqueues to ensure the replication federation queues don't build a big 
> backlog - that would interfere with observing memory consumption)
> 3) Run those scripts and monitor memory consumption.
> - without using priority queues and sending messages without priorities, leak 
> is evident as well - sometimes smaller, sometimes the same
> - valgrind (on some older versions I tested before more thoroughly) detects 
> nothing (neither leaked memory or reachable at shutdown)
> - same leak is evident even with --ha-replicate=none
> - number of backup brokers does not affect the memory leak






[jira] [Commented] (QPID-7150) HA memory leak in primary broker when overwriting messages in a ring queue

2016-03-19 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201143#comment-15201143
 ] 

Pavel Moravec commented on QPID-7150:
-

This does not fully fix https://bugzilla.redhat.com/show_bug.cgi?id=1318180. 
See https://issues.apache.org/jira/browse/QPID-7149, where:

- the leak is also present with --ha-replicate=none, where the modified code 
isn't executed
- testing upstream qpid with this fix, the leak is still present (I will check 
whether it is at least smaller, i.e. whether the code change fixes something - 
I expect so)

> HA memory leak in primary broker when overwriting messages in a ring queue
> --
>
> Key: QPID-7150
> URL: https://issues.apache.org/jira/browse/QPID-7150
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Clustering
>Affects Versions: 0.32
>Reporter: Alan Conway
>Assignee: Alan Conway
> Fix For: qpid-cpp-next
>
>
> From https://bugzilla.redhat.com/show_bug.cgi?id=1318180
> HA memory leak in primary broker when overwriting messages in a ring queue
> ReplicatingSubscription accumulates IDs of dequeued messages to send on
> dispatch. It should clear the accumulated IDs once sent. Due to a merge error,
> since:
> 014f0f3 QPID-4327: HA TX transactions: basic replication.
> The ID set is not cleared, causing it to accumulate memory slowly.
> This leak would be particularly noticeable on a busy ring-queue since a
> ring-queue generates a dequeue event for every enqueue once it reaches its max
> size.
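
The fix described above can be sketched in Python (illustrative names; the real code is C++ inside ReplicatingSubscription): accumulate dequeued IDs until dispatch, then clear the set so it cannot grow without bound:

```python
class ReplicatingSubscriptionSketch:
    """Toy model of the QPID-7150 fix: accumulate dequeued message IDs
    until they are sent to the backup, then clear them so the broker's
    memory stays bounded. Names are illustrative, not qpidd's API."""
    def __init__(self):
        self.dequeued_ids = []

    def on_dequeue(self, rid):
        # Record an ID to replicate to the backup on next dispatch.
        self.dequeued_ids.append(rid)

    def dispatch(self):
        # Send the accumulated batch and - the step lost in the buggy
        # merge - reset the accumulator afterwards.
        batch = self.dequeued_ids
        self.dequeued_ids = []
        return batch

sub = ReplicatingSubscriptionSketch()
for rid in range(3):
    sub.on_dequeue(rid)
assert sub.dispatch() == [0, 1, 2]
assert sub.dispatch() == []   # cleared: no unbounded growth between dispatches
```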






[jira] [Created] (QPID-7149) [HA] active HA broker memory leak when ring queue discards overflow messages

2016-03-19 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7149:
---

 Summary: [HA] active HA broker memory leak when ring queue 
discards overflow messages
 Key: QPID-7149
 URL: https://issues.apache.org/jira/browse/QPID-7149
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
 Environment: RHEL6

qpid trunk svn rev. 1735384
- issue seen in very old releases (since active-passive HA cluster initial 
implementation, most probably)

libstdc++-devel-4.4.7-4.el6.x86_64
gcc-c++-4.4.7-4.el6.x86_64
libgcc-4.4.7-4.el6.x86_64
libstdc++-4.4.7-4.el6.x86_64
gcc-4.4.7-4.el6.x86_64

Reporter: Pavel Moravec


There is a memory leak on the active HA broker, most probably triggered by purging 
overflow messages from a ring queue. The basic scenario is to set up an HA cluster, 
promote one broker to primary, and feed a ring queue with messages indefinitely.
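
The ring-queue behaviour driving this scenario can be sketched in Python (a toy model, not qpidd code): once the queue reaches its limit, every enqueue discards the oldest message, i.e. generates one dequeue event per enqueue - exactly the event stream the HA replication layer must then handle:

```python
from collections import deque

def ring_enqueue(queue, msg, max_count):
    """Ring-queue policy sketch: once the queue is full, each enqueue
    discards the oldest message, producing one dequeue event per enqueue."""
    dequeue_events = []
    if len(queue) >= max_count:
        dequeue_events.append(queue.popleft())  # overflow discard
    queue.append(msg)
    return dequeue_events

q = deque()
events = []
for m in range(5):
    events += ring_enqueue(q, m, max_count=3)
assert list(q) == [2, 3, 4]
assert events == [0, 1]   # one discard per enqueue after the queue filled
```

A busy ring queue therefore stresses any per-dequeue bookkeeping on the primary continuously, which is why this reproducer exposes the leak so quickly.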

Detailed scenario:

1) Start brokers and promote one to primary:

start_broker() {
port=$1
shift
rm -rf _${port}
mkdir _${port}
nohup qpidd --load-module=ha.so --port=$port 
--log-to-file=qpidd.$port.log --data-dir=_${port} --auth=no --log-to-stderr=no 
--ha-cluster=yes 
--ha-brokers-url="$(hostname):5672,$(hostname):5673,$(hostname):5674" 
--ha-replicate=all --acl-file=/root/qpidd.acl "$@" > /dev/null 2>&1 &
sleep 1
}


killall qpidd qpid-receive 2> /dev/null
rm -f qpidd.*.log
start_broker 5672
sleep 1
qpid-ha promote -b $(hostname):5672 --cluster-manager
sleep 1
start_broker 5673
sleep 1
start_broker 5674


2) Create ring queues and send there messages (it is enough to have 1 queue, 
having more should show the leak faster):

for i in $(seq 0 9); do
qpid-config add queue FromKeyServer_$i --max-queue-size=1 
--max-queue-count=10 --limit-policy=ring --argument=x-qpid-priorities=10
done

while true; do
for j in $(seq 1 10); do
for i in $(seq 1 10); do
for k in $(seq 0 9); do
qpid-send -a FromKeyServer_$k -m 100 
--send-rate=50 -- priority=$(($((RANDOM))%10)) &
done
done
wait
while [ $(qpid-stat -q | grep broker-replicator | sed "s/Y//g" 
| awk '{ print $2 }' | sort -n | tail -n1) != "0" ]; do
sleep 1
done
done
date
ps aux | grep qpidd | grep "port=5672" | awk -F "--store-dir" '{ print 
$1 }'
done

(the "while [ $(qpid-stat -q | .." cycle is there just to slow down the message 
enqueues to ensure the replication federation queues don't build a big backlog - that 
would interfere with observing memory consumption)


3) Run those scripts and monitor memory consumption.

- without using priority queues and sending messages without priorities, leak 
is evident as well - but much smaller
- valgrind (on some older versions I tested before more thoroughly) detects 
nothing (neither leaked memory or reachable at shutdown)








[jira] [Updated] (QPID-7149) [HA] active HA broker memory leak when ring queue discards overflow messages

2016-03-19 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-7149:

Description: 
There is a memory leak on active HA broker, triggered most probably by purging 
overflow message from a ring queue. Basic scenario is to setup HA cluster, 
promote to primary and feed forever a ring queue with messages.

Detailed scenario:

1) Start brokers and promote one to primary:

start_broker() {
    port=$1
    shift
    rm -rf _${port}
    mkdir _${port}
    nohup qpidd --load-module=ha.so --port=$port \
        --log-to-file=qpidd.$port.log --data-dir=_${port} --auth=no --log-to-stderr=no \
        --ha-cluster=yes \
        --ha-brokers-url="$(hostname):5672,$(hostname):5673,$(hostname):5674" \
        --ha-replicate=all --acl-file=/root/qpidd.acl "$@" > /dev/null 2>&1 &
    sleep 1
}


killall qpidd qpid-receive 2> /dev/null
rm -f qpidd.*.log
start_broker 5672
sleep 1
qpid-ha promote -b $(hostname):5672 --cluster-manager
sleep 1
start_broker 5673
sleep 1
start_broker 5674


2) Create ring queues and send messages to them (one queue is enough; having 
more should show the leak faster):

for i in $(seq 0 9); do
    qpid-config add queue FromKeyServer_$i --max-queue-size=1 \
        --max-queue-count=10 --limit-policy=ring --argument=x-qpid-priorities=10
done

while true; do
    for j in $(seq 1 10); do
        for i in $(seq 1 10); do
            for k in $(seq 0 9); do
                qpid-send -a FromKeyServer_$k -m 100 \
                    --send-rate=50 -- priority=$((RANDOM % 10)) &
            done
        done
        wait
        while [ "$(qpid-stat -q | grep broker-replicator | sed "s/Y//g" | awk '{ print $2 }' | sort -n | tail -n1)" != "0" ]; do
            sleep 1
        done
    done
    date
    ps aux | grep qpidd | grep "port=5672" | awk -F "--store-dir" '{ print $1 }'
done

(the "while [ $(qpid-stat -q | .." loop is there just to slow down the message 
enqueues and ensure the replication federation queues don't build a big backlog - 
that would interfere with the memory consumption observation)


3) Run those scripts and monitor memory consumption.

- without using priority queues (sending messages without priorities), the leak 
is evident as well - sometimes smaller, sometimes the same
- valgrind (on some older versions I tested before more thoroughly) detects 
nothing (neither leaked memory nor memory still reachable at shutdown)
- the same leak is evident even with `--ha-replicate=none`




[jira] [Commented] (QPID-7150) HA memory leak in primary broker when overwriting messages in a ring queue

2016-03-19 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15201320#comment-15201320
 ] 

Pavel Moravec commented on QPID-7150:
-

The commit r1735439 reduces the memory leak but does not prevent it.

> HA memory leak in primary broker when overwriting messages in a ring queue
> --
>
> Key: QPID-7150
> URL: https://issues.apache.org/jira/browse/QPID-7150
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Clustering
>Affects Versions: 0.32
>Reporter: Alan Conway
>Assignee: Alan Conway
> Fix For: qpid-cpp-next
>
>
> From https://bugzilla.redhat.com/show_bug.cgi?id=1318180
> HA memory leak in primary broker when overwriting messages in a ring queue
> ReplicatingSubscription accumulates IDs of dequeued messages to send on
> dispatch. It should clear the accumulated IDs once sent. Due to a merge error,
> since:
> 014f0f3 QPID-4327: HA TX transactions: basic replication.
> The ID set is not cleared, causing it to accumulate memory slowly.
> This leak would be particularly noticeable on a busy ring-queue since a
> ring-queue generates a dequeue event for every enqueue once it reaches its max
> size.






[jira] [Updated] (QPID-7149) [HA] active HA broker memory leak when ring queue discards overflow messages

2016-03-19 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-7149:

Description: 
There is a memory leak on active HA broker, triggered most probably by purging 
overflow message from a ring queue. Basic scenario is to setup HA cluster, 
promote to primary and feed forever a ring queue with messages.

Detailed scenario:

1) Start brokers and promote one to primary:

{noformat}
start_broker() {
    port=$1
    shift
    rm -rf _${port}
    mkdir _${port}
    nohup qpidd --load-module=ha.so --port=$port \
        --log-to-file=qpidd.$port.log --data-dir=_${port} --auth=no --log-to-stderr=no \
        --ha-cluster=yes \
        --ha-brokers-url="$(hostname):5672,$(hostname):5673,$(hostname):5674" \
        --ha-replicate=all --acl-file=/root/qpidd.acl "$@" > /dev/null 2>&1 &
    sleep 1
}


killall qpidd qpid-receive 2> /dev/null
rm -f qpidd.*.log
start_broker 5672
sleep 1
qpid-ha promote -b $(hostname):5672 --cluster-manager
sleep 1
start_broker 5673
sleep 1
start_broker 5674
{noformat}

2) Create ring queues and send messages to them (one queue is enough; having 
more should show the leak faster):

{noformat}
for i in $(seq 0 9); do
    qpid-config add queue FromKeyServer_$i --max-queue-size=1 \
        --max-queue-count=10 --limit-policy=ring --argument=x-qpid-priorities=10
done

while true; do
    for j in $(seq 1 10); do
        for i in $(seq 1 10); do
            for k in $(seq 0 9); do
                qpid-send -a FromKeyServer_$k -m 100 \
                    --send-rate=50 -- priority=$((RANDOM % 10)) &
            done
        done
        wait
        while [ "$(qpid-stat -q | grep broker-replicator | sed "s/Y//g" | awk '{ print $2 }' | sort -n | tail -n1)" != "0" ]; do
            sleep 1
        done
    done
    date
    ps aux | grep qpidd | grep "port=5672" | awk -F "--store-dir" '{ print $1 }'
done
{noformat}

(the "while [ $(qpid-stat -q | .." loop is there just to slow down the message 
enqueues and ensure the replication federation queues don't build a big backlog - 
that would interfere with the memory consumption observation)


3) Run those scripts and monitor memory consumption.

- without using priority queues (sending messages without priorities), the leak 
is evident as well - sometimes smaller, sometimes the same
- valgrind (on some older versions I tested before more thoroughly) detects 
nothing (neither leaked memory nor memory still reachable at shutdown)
- the same leak is evident even with --ha-replicate=none
- the number of backup brokers does not affect the memory leak



[jira] [Created] (QPID-7134) [C++ client] Message::setContent("") does not work

2016-03-10 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7134:
---

 Summary: [C++ client] Message::setContent("") does not work
 Key: QPID-7134
 URL: https://issues.apache.org/jira/browse/QPID-7134
 Project: Qpid
  Issue Type: Bug
  Components: C++ Client
Affects Versions: qpid-cpp-0.34
Reporter: Pavel Moravec
Assignee: Gordon Sim


Message::setContent internally updates only the "bytes" property of the Message 
object, but not the "content" object. That causes problems when trying to reset 
the content to an empty one - the original content is still stored in the 
"content" property, and e.g. an attempt to send this "empty" message sends the 
message with its original content.

Reproducer (runs without a broker; feel free to add receiving the initial 
message from a broker, or sending the result back to it):

{noformat}
#include <qpid/messaging/Message.h>
#include <iostream>

using namespace std;
using namespace qpid::messaging;

int main(int argc, char* argv[])
{
    qpid::types::Variant content("some content");
    Message m1(content);
    cout << "Message 1: initial content set to \"" << m1.getContent()
         << "\", contentSize = " << m1.getContentSize() << endl;
    m1.setContent("message 1");
    cout << "Message 1: after content set to \"" << m1.getContent()
         << "\", contentSize = " << m1.getContentSize() << endl;
    m1.setContent(std::string());
    cout << "Message 1: after content set to an empty string, the content is still \""
         << m1.getContent() << "\" and contentSize = " << m1.getContentSize() << endl;

    return 0;
}
{noformat}

That returns:

{noformat}
Message 1: initial content set to "some content", contentSize = 12
Message 1: after content set to "message 1", contentSize = 9
Message 1: after content set to an empty string, the content is still "some 
content" and contentSize = 12
{noformat}

Note that "some content" of size 12 is returned after the attempt to empty the 
message content - i.e. the _original_ content from before the first setContent call.






[jira] [Resolved] (QPID-7127) [C++ broker] Setting large idle timeout cause confuses timers in the C++ broker

2016-03-07 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-7127.
-
   Resolution: Fixed
 Assignee: Pavel Moravec
Fix Version/s: qpid-cpp-next

> [C++ broker] Setting large idle timeout cause confuses timers in the C++ 
> broker
> ---
>
> Key: QPID-7127
> URL: https://issues.apache.org/jira/browse/QPID-7127
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-0.34
>Reporter: Jakub Scholz
>Assignee: Pavel Moravec
> Fix For: qpid-cpp-next
>
>
> I run into following problem. When I try to connect with SwiftMQ AMQP client 
> (http://www.swiftmq.com/) to the Qpid C++ broker and don't specify idle 
> timeout, it will use int64_max. The Qpid broker seems to be fine with it and 
> opens the connection:
> ConnectionDispatcher, , visit, po=[POOpen, 
> containerId=356a476d-4678-4cfa-9680-8bf648b808d2@schojak, 
> maxFrameSize=2147483647, maxChannel=255, idleTimeout=9223372036854775807]
> ConnectionDispatcher, , visit, po=[POConnectionFrameReceived, frame=[Open 
> containerId=91655fa5-80d3-4cd1-9a72-51b82e36de00, maxFrameSize=4294967295, 
> channelMax=255, idleTimeOut=2147483647, 
> offeredCapabilities=[ANONYMOUS-RELAY], properties=[product=qpid-cpp, 
> platform=Linux, host=6a2d20e32f38, version=0.35]], sassl=false]
> However, the timers in the broker get crazy from it and start raising millions 
> of errors like this:
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.8867ms[0ns]
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.8937ms[0ns]
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.9006ms[0ns]
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.9076ms[0ns]
> This seems to go on until the client disconnects or until the disk goes full. 
> IT also seems to cause some secondary problems (perfomance, the affected 
> client cannot close producer etc.)
> The int64 idle timeout seems to be bug in the SwiftMQ client. But Qpid should 
> definitely handle this better:
> - Not open the connection when the idle timeout is invalid
> - Make sure that the timer errors don't appear
> The problem seems to be present in both 0.34 as well as in trunk.






[jira] [Commented] (QPID-7127) [C++ broker] Setting large idle timeout cause confuses timers in the C++ broker

2016-03-07 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15183539#comment-15183539
 ] 

Pavel Moravec commented on QPID-7127:
-

Reproducer: set idle timeout to . Then:

http://svn.apache.org/viewvc/qpid/trunk/qpid/cpp/src/qpid/broker/amqp/Connection.cpp?annotate=1713529#l334
 :

uint32_t timeout = pn_transport_get_remote_idle_timeout(transport);
if (timeout) {
    // if idle generate empty frames at 1/2 the timeout interval as keepalives:
    ticker = boost::intrusive_ptr<ConnectionTickerTask>(
        new ConnectionTickerTask((timeout+1)/2,
                                 getBroker().getTimer(),
                                 *this));

I.e. the ConnectionTickerTask is then scheduled with a zero timeout, so the task 
is always behind its schedule and spams the logs.

> [C++ broker] Setting large idle timeout cause confuses timers in the C++ 
> broker
> ---
>
> Key: QPID-7127
> URL: https://issues.apache.org/jira/browse/QPID-7127
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-0.34
>Reporter: Jakub Scholz
>
> I run into following problem. When I try to connect with SwiftMQ AMQP client 
> (http://www.swiftmq.com/) to the Qpid C++ broker and don't specify idle 
> timeout, it will use int64_max. The Qpid broker seems to be fine with it and 
> opens the connection:
> ConnectionDispatcher, , visit, po=[POOpen, 
> containerId=356a476d-4678-4cfa-9680-8bf648b808d2@schojak, 
> maxFrameSize=2147483647, maxChannel=255, idleTimeout=9223372036854775807]
> ConnectionDispatcher, , visit, po=[POConnectionFrameReceived, frame=[Open 
> containerId=91655fa5-80d3-4cd1-9a72-51b82e36de00, maxFrameSize=4294967295, 
> channelMax=255, idleTimeOut=2147483647, 
> offeredCapabilities=[ANONYMOUS-RELAY], properties=[product=qpid-cpp, 
> platform=Linux, host=6a2d20e32f38, version=0.35]], sassl=false]
> However, the timers in the broker get crazy from it and start raising millions 
> of errors like this:
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.8867ms[0ns]
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.8937ms[0ns]
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.9006ms[0ns]
> 2016-03-04 15:55:40 [System] error ConnectionTicker couldn't setup next timer 
> firing: 33.9076ms[0ns]
> This seems to go on until the client disconnects or until the disk goes full. 
> It also seems to cause some secondary problems (performance, the affected 
> client cannot close producer etc.)
> The int64 idle timeout seems to be bug in the SwiftMQ client. But Qpid should 
> definitely handle this better:
> - Not open the connection when the idle timeout is invalid
> - Make sure that the timer errors don't appear
> The problem seems to be present in both 0.34 as well as in trunk.






[jira] [Resolved] (QPID-7020) uint16 AMQP0-10 message properties decoded as uint8

2016-01-24 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-7020.
-
   Resolution: Fixed
Fix Version/s: qpid-cpp-next

> uint16 AMQP0-10 message properties decoded as uint8
> ---
>
> Key: QPID-7020
> URL: https://issues.apache.org/jira/browse/QPID-7020
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-0.34
>Reporter: Pavel Moravec
> Fix For: qpid-cpp-next
>
>
> Description of problem:
> MessageTransfer::processProperties has a trivial typo, decoding the uint16 message 
> property type as uint8 variant:
> void MessageTransfer::processProperties(qpid::amqp::MapHandler& handler) const
> {
> ..
> switch (v.getType()) {
> case qpid::types::VAR_VOID:
> handler.handleVoid(key); break;
> case qpid::types::VAR_BOOL:
> handler.handleBool(key, v); break;
> case qpid::types::VAR_UINT8:
> handler.handleUint8(key, v); break;
> case qpid::types::VAR_UINT16:
> handler.handleUint8(key, v); break;
> ..
> See the last line.
> Any attempt to call that line raises error:
> invalid conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> One reproducer provided below.
> Version-Release number of selected component (if applicable):
> qpid-cpp-server-0.34-5.el6.x86_64
> How reproducible:
> 100%
> Steps to Reproduce:
> 1. Have this trivial program that creates queue message_queue, subscribes to 
> the queue, bind to amq.match with x-match:any,number:10809 matcher rule:
> $ cat send_uint16_t.cpp 
> #include 
> #include 
> #include 
> #include 
> #include 
> #include 
> #include 
> #include 
> using namespace qpid::messaging;
> using namespace qpid::types;
> using std::stringstream;
> using std::string;
> int main(int argc, char** argv) {
> const char* url = argc>1 ? argv[1] : "amqp:tcp:127.0.0.1:5672";
> 
> Connection connection(url, "");
> try {
> connection.open();
> Session session = connection.createSession();
> Receiver receiver = session.createReceiver("message_queue; {create: 
> always, node:{type:queue, durable:false, x-bindings:[{exchange:'amq.match', 
> queue:'message_queue', key:'key', arguments:{x-match:any,number:10809}}]}}");
> Sender sender = session.createSender("amq.match/key");
> Message msg("Some content");
> uint16_t number=10809;
> msg.setProperty("number", number);
> sender.send(msg);
> Message msg2 = receiver.fetch();
> std::cout << "Properties: " << msg2.getProperties() << std::endl
>   << "Content: " << msg.getContent() << std::endl;
> session.close();
> connection.close();
> return 0;
> } catch(const std::exception& error) {
> std::cout << error.what() << std::endl;
> connection.close();
> }
> return 1;   
> }
> 2. Compile it and run against a broker:
> g++ -Wall -lqpidclient -lqpidcommon -lqpidmessaging -lqpidtypes 
> send_uint16_t.cpp -o send_uint16_t
> ./send_uint16_t
> 3. Check output and also qpid logs
> Actual results:
> output:
> 2016-01-24 13:46:30 [Client] warning Broker closed connection: 501, invalid 
> conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> framing-error: invalid conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> qpid error:
> 2016-01-24 13:46:30 [Broker] error Connection exception: framing-error: 
> invalid conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> 2016-01-24 13:46:30 [Protocol] error Connection 
> qpid.127.0.0.1:5672-127.0.0.1:33825 closed by error: invalid conversion: 
> Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)(501)
> Expected results:
> output:
> a message is received and printed to stdout
> qpid logs:
> no error






[jira] [Closed] (QPID-7020) uint16 AMQP0-10 message properties decoded as uint8

2016-01-24 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-7020.
---
Assignee: Pavel Moravec

> uint16 AMQP0-10 message properties decoded as uint8
> ---
>
> Key: QPID-7020
> URL: https://issues.apache.org/jira/browse/QPID-7020
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-0.34
>Reporter: Pavel Moravec
>Assignee: Pavel Moravec
> Fix For: qpid-cpp-next
>
>
> Description of problem:
> MessageTransfer::processProperties has a trivial typo, decoding the uint16 message 
> property type as uint8 variant:
> void MessageTransfer::processProperties(qpid::amqp::MapHandler& handler) const
> {
> ..
> switch (v.getType()) {
> case qpid::types::VAR_VOID:
> handler.handleVoid(key); break;
> case qpid::types::VAR_BOOL:
> handler.handleBool(key, v); break;
> case qpid::types::VAR_UINT8:
> handler.handleUint8(key, v); break;
> case qpid::types::VAR_UINT16:
> handler.handleUint8(key, v); break;
> ..
> See the last line.
> Any attempt to call that line raises error:
> invalid conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> One reproducer provided below.
> Version-Release number of selected component (if applicable):
> qpid-cpp-server-0.34-5.el6.x86_64
> How reproducible:
> 100%
> Steps to Reproduce:
> 1. Have this trivial program that creates queue message_queue, subscribes to 
> the queue, bind to amq.match with x-match:any,number:10809 matcher rule:
> $ cat send_uint16_t.cpp 
> #include 
> #include 
> #include 
> #include 
> #include 
> #include 
> #include 
> #include 
> using namespace qpid::messaging;
> using namespace qpid::types;
> using std::stringstream;
> using std::string;
> int main(int argc, char** argv) {
> const char* url = argc>1 ? argv[1] : "amqp:tcp:127.0.0.1:5672";
> 
> Connection connection(url, "");
> try {
> connection.open();
> Session session = connection.createSession();
> Receiver receiver = session.createReceiver("message_queue; {create: 
> always, node:{type:queue, durable:false, x-bindings:[{exchange:'amq.match', 
> queue:'message_queue', key:'key', arguments:{x-match:any,number:10809}}]}}");
> Sender sender = session.createSender("amq.match/key");
> Message msg("Some content");
> uint16_t number=10809;
> msg.setProperty("number", number);
> sender.send(msg);
> Message msg2 = receiver.fetch();
> std::cout << "Properties: " << msg2.getProperties() << std::endl
>   << "Content: " << msg.getContent() << std::endl;
> session.close();
> connection.close();
> return 0;
> } catch(const std::exception& error) {
> std::cout << error.what() << std::endl;
> connection.close();
> }
> return 1;   
> }
> 2. Compile it and run against a broker:
> g++ -Wall -lqpidclient -lqpidcommon -lqpidmessaging -lqpidtypes 
> send_uint16_t.cpp -o send_uint16_t
> ./send_uint16_t
> 3. Check output and also qpid logs
> Actual results:
> output:
> 2016-01-24 13:46:30 [Client] warning Broker closed connection: 501, invalid 
> conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> framing-error: invalid conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> qpid error:
> 2016-01-24 13:46:30 [Broker] error Connection exception: framing-error: 
> invalid conversion: Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
> 2016-01-24 13:46:30 [Protocol] error Connection 
> qpid.127.0.0.1:5672-127.0.0.1:33825 closed by error: invalid conversion: 
> Cannot convert from uint16 to uint8 
> (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)(501)
> Expected results:
> output:
> a message is received and printed to stdout
> qpid logs:
> no error






[jira] [Created] (QPID-7020) uint16 AMQP0-10 message properties decoded as uint8

2016-01-24 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-7020:
---

 Summary: uint16 AMQP0-10 message properties decoded as uint8
 Key: QPID-7020
 URL: https://issues.apache.org/jira/browse/QPID-7020
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: qpid-cpp-0.34
Reporter: Pavel Moravec


Description of problem:
MessageTransfer::processProperties has a trivial typo, decoding the uint16 
message property type as the uint8 variant:

void MessageTransfer::processProperties(qpid::amqp::MapHandler& handler) const
{
    ..
    switch (v.getType()) {
    case qpid::types::VAR_VOID:
        handler.handleVoid(key); break;
    case qpid::types::VAR_BOOL:
        handler.handleBool(key, v); break;
    case qpid::types::VAR_UINT8:
        handler.handleUint8(key, v); break;
    case qpid::types::VAR_UINT16:
        handler.handleUint8(key, v); break;
    ..

See the last line.

Any attempt to call that line raises error:

invalid conversion: Cannot convert from uint16 to uint8 
(/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)

One reproducer provided below.


Version-Release number of selected component (if applicable):
qpid-cpp-server-0.34-5.el6.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Have this trivial program that creates the queue message_queue, subscribes to 
the queue, and binds it to amq.match with an x-match:any,number:10809 matcher rule:

$ cat send_uint16_t.cpp 

#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Session.h>
#include <qpid/messaging/Receiver.h>
#include <qpid/messaging/Sender.h>
#include <qpid/messaging/Message.h>

#include <iostream>
#include <sstream>

#include <string>

using namespace qpid::messaging;
using namespace qpid::types;

using std::stringstream;
using std::string;

int main(int argc, char** argv) {
    const char* url = argc > 1 ? argv[1] : "amqp:tcp:127.0.0.1:5672";

    Connection connection(url, "");
    try {
        connection.open();
        Session session = connection.createSession();
        Receiver receiver = session.createReceiver(
            "message_queue; {create: always, node:{type:queue, durable:false, "
            "x-bindings:[{exchange:'amq.match', queue:'message_queue', key:'key', "
            "arguments:{x-match:any,number:10809}}]}}");
        Sender sender = session.createSender("amq.match/key");
        Message msg("Some content");
        uint16_t number = 10809;
        msg.setProperty("number", number);
        sender.send(msg);
        Message msg2 = receiver.fetch();
        std::cout << "Properties: " << msg2.getProperties() << std::endl
                  << "Content: " << msg2.getContent() << std::endl;
        session.close();
        connection.close();
        return 0;
    } catch (const std::exception& error) {
        std::cout << error.what() << std::endl;
        connection.close();
    }
    return 1;
}

2. Compile it and run against a broker:
g++ -Wall -lqpidclient -lqpidcommon -lqpidmessaging -lqpidtypes send_uint16_t.cpp -o send_uint16_t
./send_uint16_t

3. Check output and also qpid logs


Actual results:
output:
2016-01-24 13:46:30 [Client] warning Broker closed connection: 501, invalid 
conversion: Cannot convert from uint16 to uint8 
(/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
framing-error: invalid conversion: Cannot convert from uint16 to uint8 
(/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)

qpid error:
2016-01-24 13:46:30 [Broker] error Connection exception: framing-error: invalid 
conversion: Cannot convert from uint16 to uint8 
(/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)
2016-01-24 13:46:30 [Protocol] error Connection 
qpid.127.0.0.1:5672-127.0.0.1:33825 closed by error: invalid conversion: Cannot 
convert from uint16 to uint8 
(/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/types/Variant.cpp:280)(501)


Expected results:
output:
a message is received and printed to stdout

qpid logs:
no error







[jira] [Updated] (QPID-6966) C++ broker and client to support TLS1.1 and TLS1.2 by default

2016-01-05 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-6966:

Summary: C++ broker and client to support TLS1.1 and TLS1.2 by default  
(was: C++ broker and client to support TLS1.1 and TLS1.2)

> C++ broker and client to support TLS1.1 and TLS1.2 by default
> -
>
> Key: QPID-6966
> URL: https://issues.apache.org/jira/browse/QPID-6966
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker, C++ Client
>Affects Versions: qpid-cpp-0.34
>Reporter: Pavel Moravec
>Assignee: Pavel Moravec
>
> Description of problem:
> Currently, neither the C++ client nor the broker allows the TLS1.1 or TLS1.2 
> protocol versions. Please enable them, especially since the Java client 6.1 will 
> disable TLS1.0 and use only 1.1 and 1.2.
> Version-Release number of selected component (if applicable):
> qpid-cpp-server-0.34-5.el6.x86_64
> qpid-cpp-client-0.34-5.el6.x86_64
> How reproducible:
> 100%
> Steps to Reproduce:
> 1. Start qpid broker with SSL configured
> 2. openssl s_client -tls1_1 -connect localhost:5671
> 3. openssl s_client -tls1_2 -connect localhost:5671
> Actual results:
> Both 2 and 3 fails with:
> {noformat}
> 139817551390536:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version 
> number:s3_pkt.c:337:
> {noformat}
> Expected results:
> Both should return something like:
> {noformat}
> CONNECTED(0003)
> depth=0 CN = localhost
> verify error:num=18:self signed certificate
> verify return:1
> depth=0 CN = localhost
> verify return:1
> 140319888385864:error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad 
> certificate:s3_pkt.c:1256:SSL alert number 42
> 140319888385864:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake 
> failure:s3_pkt.c:596:
> ---
> Certificate chain
>  0 s:/CN=localhost
>i:/CN=localhost
> ---
> Server certificate
> -BEGIN CERTIFICATE-
> MIIBoDCCAQmgAwIBAgIFAKUDcMswDQYJKoZIhvcNAQEFBQAwFDESMBAGA1UEAxMJ
> bG9jYWxob3N0MB4XDTE1MTIzMDExMDYwN1oXDTE2MDMzMDExMDYwN1owFDESMBAG
> A1UEAxMJbG9jYWxob3N0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCgCq6w
> o6FW7gIpAQu8y74wuREH6aGo6hc6YVfATz503o7dxqmUUKs6+DkqbEiDu43r51QL
> Sb7oduLMmrvC5TfhWEZGe3PYPOuCBbpqDxXs5kKlqSCuIbvDv1ua1WXdqb27/jGr
> d6Lf+DsnU+GXrGwLY1W1zchagmFU1P2dLh8JhQIDAQABMA0GCSqGSIb3DQEBBQUA
> A4GBACUauXrJB/P0za8mPj5As4uQ3kr7CHIAtFBEAd3MvVmf9RHniMU/resXeE1B
> CBOZ4kXmTvVQ+/kDxYTXO/pLq0wh4HHuZC4LrmlIHG2WagEskVnYgqJiHUchKi+8
> URu/CX4rW6/EdcAHhPsKX6nlHFFKYg5u9b9ZtQHYMrfryStZ
> -END CERTIFICATE-
> subject=/CN=localhost
> issuer=/CN=localhost
> ---
> Acceptable client certificate CA names
> /CN=dummy
> ---
> SSL handshake has read 565 bytes and written 202 bytes
> ---
> New, TLSv1/SSLv3, Cipher is AES128-GCM-SHA256
> Server public key is 1024 bit
> Secure Renegotiation IS supported
> Compression: NONE
> Expansion: NONE
> SSL-Session:
> Protocol  : TLSv1.2
> Cipher: AES128-GCM-SHA256
> Session-ID: 
> 7D6C1CB53B37700F2BF007D0D079AB72F26A9D289BCA8D98B5B3F1E283311991
> Session-ID-ctx: 
> Master-Key: 
> 448215BEAADBFF90B82B421D182F8AD7174426D9292835775C405A7C3AEC2763E5F2A1127E5AE210ADC6B7335EE1F6FA
> Key-Arg   : None
> Krb5 Principal: None
> PSK identity: None
> PSK identity hint: None
> Start Time: 1451483784
> Timeout   : 7200 (sec)
> Verify return code: 18 (self signed certificate)
> ---
> {noformat}
> Additional info:
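The protocol pinning that `openssl s_client -tls1_1` / `-tls1_2` does on the command line can also be scripted from Python's standard `ssl` module, which makes it easier to repeat the check above against a broker. A minimal sketch (it only builds the version-pinned client contexts; the broker host/port and the self-signed-certificate handling from this report are assumptions, and no connection is made here):

```python
import ssl

def pinned_context(version):
    # Build a client context restricted to exactly one TLS version,
    # mirroring `openssl s_client -tls1_1` / `-tls1_2`.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = version
    ctx.maximum_version = version
    # The broker in this report uses a self-signed certificate, so
    # verification is disabled for this protocol-only check (test setup only).
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

tls11 = pinned_context(ssl.TLSVersion.TLSv1_1)
tls12 = pinned_context(ssl.TLSVersion.TLSv1_2)
# tls12.wrap_socket(socket.create_connection(("localhost", 5671)), ...)
# would then succeed only if the broker negotiates TLS 1.2.
```

Wrapping a socket to localhost:5671 with each context in turn reproduces steps 2 and 3 of the reproducer.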






[jira] [Resolved] (QPID-6966) C++ broker and client to support TLS1.1 and TLS1.2 by default

2016-01-05 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-6966.
-
   Resolution: Fixed
Fix Version/s: qpid-cpp-next

> C++ broker and client to support TLS1.1 and TLS1.2 by default
> -
>
> Key: QPID-6966
> URL: https://issues.apache.org/jira/browse/QPID-6966
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker, C++ Client
>Affects Versions: qpid-cpp-0.34
>Reporter: Pavel Moravec
>Assignee: Pavel Moravec
> Fix For: qpid-cpp-next
>
>
> Description of problem:
> Currently, neither the C++ client nor the broker allows the TLS1.1 or TLS1.2 
> protocol versions. Please enable them, especially since Java client 6.1 will 
> disable TLS1.0 and use 1.1 and 1.2 only.
> Version-Release number of selected component (if applicable):
> qpid-cpp-server-0.34-5.el6.x86_64
> qpid-cpp-client-0.34-5.el6.x86_64
> How reproducible:
> 100%
> Steps to Reproduce:
> 1. Start qpid broker with SSL configured
> 2. openssl s_client -tls1_1 -connect localhost:5671
> 3. openssl s_client -tls1_2 -connect localhost:5671
> Actual results:
> Both 2 and 3 fail with:
> {noformat}
> 139817551390536:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version 
> number:s3_pkt.c:337:
> {noformat}
> Expected results:
> Both should return something like:
> {noformat}
> CONNECTED(0003)
> depth=0 CN = localhost
> verify error:num=18:self signed certificate
> verify return:1
> depth=0 CN = localhost
> verify return:1
> 140319888385864:error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad 
> certificate:s3_pkt.c:1256:SSL alert number 42
> 140319888385864:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake 
> failure:s3_pkt.c:596:
> ---
> Certificate chain
>  0 s:/CN=localhost
>i:/CN=localhost
> ---
> Server certificate
> -BEGIN CERTIFICATE-
> MIIBoDCCAQmgAwIBAgIFAKUDcMswDQYJKoZIhvcNAQEFBQAwFDESMBAGA1UEAxMJ
> bG9jYWxob3N0MB4XDTE1MTIzMDExMDYwN1oXDTE2MDMzMDExMDYwN1owFDESMBAG
> A1UEAxMJbG9jYWxob3N0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCgCq6w
> o6FW7gIpAQu8y74wuREH6aGo6hc6YVfATz503o7dxqmUUKs6+DkqbEiDu43r51QL
> Sb7oduLMmrvC5TfhWEZGe3PYPOuCBbpqDxXs5kKlqSCuIbvDv1ua1WXdqb27/jGr
> d6Lf+DsnU+GXrGwLY1W1zchagmFU1P2dLh8JhQIDAQABMA0GCSqGSIb3DQEBBQUA
> A4GBACUauXrJB/P0za8mPj5As4uQ3kr7CHIAtFBEAd3MvVmf9RHniMU/resXeE1B
> CBOZ4kXmTvVQ+/kDxYTXO/pLq0wh4HHuZC4LrmlIHG2WagEskVnYgqJiHUchKi+8
> URu/CX4rW6/EdcAHhPsKX6nlHFFKYg5u9b9ZtQHYMrfryStZ
> -END CERTIFICATE-
> subject=/CN=localhost
> issuer=/CN=localhost
> ---
> Acceptable client certificate CA names
> /CN=dummy
> ---
> SSL handshake has read 565 bytes and written 202 bytes
> ---
> New, TLSv1/SSLv3, Cipher is AES128-GCM-SHA256
> Server public key is 1024 bit
> Secure Renegotiation IS supported
> Compression: NONE
> Expansion: NONE
> SSL-Session:
> Protocol  : TLSv1.2
> Cipher: AES128-GCM-SHA256
> Session-ID: 
> 7D6C1CB53B37700F2BF007D0D079AB72F26A9D289BCA8D98B5B3F1E283311991
> Session-ID-ctx: 
> Master-Key: 
> 448215BEAADBFF90B82B421D182F8AD7174426D9292835775C405A7C3AEC2763E5F2A1127E5AE210ADC6B7335EE1F6FA
> Key-Arg   : None
> Krb5 Principal: None
> PSK identity: None
> PSK identity hint: None
> Start Time: 1451483784
> Timeout   : 7200 (sec)
> Verify return code: 18 (self signed certificate)
> ---
> {noformat}
> Additional info:






[jira] [Created] (DISPATCH-197) Provide list of current link routing

2015-12-01 Thread Pavel Moravec (JIRA)
Pavel Moravec created DISPATCH-197:
--

 Summary: Provide list of current link routing
 Key: DISPATCH-197
 URL: https://issues.apache.org/jira/browse/DISPATCH-197
 Project: Qpid Dispatch
  Issue Type: Improvement
  Components: Management Agent
Affects Versions: 0.5
Reporter: Pavel Moravec


Dispatch offers link routing, but there is no way to get a list of the links 
currently routed via linkRouting. Such a list would be beneficial for 
troubleshooting link-routing issues - e.g. in Red Hat Satellite 6, which uses 
qdrouterd primarily for this feature.

Please add such management objects and make them query-able, e.g. by extending 
"qdstat -l" or adding some other qdstat option.






[jira] [Commented] (QPID-6822) Failure on: qpid-config add queue --durable "queue-id"

2015-11-04 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14990387#comment-14990387
 ] 

Pavel Moravec commented on QPID-6822:
-

The upgrade procedure is described in 
https://issues.apache.org/jira/browse/QPID-6822?focusedCommentId=14985205=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14985205
 (in the downstream MRG documentation).

The problem Peter hit was two-fold: on upgraded systems, the procedure had not 
been executed; on a fresh 0.34 installation, the SELinux policy was too old.

Upstream, only the upgrade procedure is missing; I have asked [~kpvdr] to copy 
it from the downstream docs. This can be done within this JIRA.

> Failure on: qpid-config add queue --durable "queue-id"
> --
>
> Key: QPID-6822
> URL: https://issues.apache.org/jira/browse/QPID-6822
> Project: Qpid
>  Issue Type: Bug
>Affects Versions: 0.32
> Environment: RedHat 7.1
>Reporter: Peter Lacko
>Assignee: Kim van der Riet
>
> Attempt to create durable queue fails with:
> {noformat}
> $ qpid-config add queue --durable "queue"
> Failed: Exception: Exception from Agent: {u'error_code': 7, u'error_text': 
> 'Queue queue: create() failed: jexception 0x010c 
> EmptyFilePool::createSymLink() threw JERR__SYMLINK: Symbolic link 
> operation failed (file=/var/lib/qpidd/.qpidd/qls/p001/efp/2048k/in_use
> /85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl symlink=/var/lib/qpidd/.qpidd
> /qls/jrnl2/queue/85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl errno=13 
> (Permission denied)) (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/linearstore
> /MessageStoreImpl.cpp:425)'}
> {noformat}
> The queue is created successfully without the {{--durable}} parameter.
> The problem occurs after doing {{yum update}}. All qpid package versions are 
> listed below, before and after performing the update:
> {noformat}
> $ diff qpid-before-update.txt qpid-after-update.txt
> 1,10c1,10
> < python-gofer-qpid.noarch   2.6.6-1.git.48.3141846.el7
> < python-qpid.x86_64 0.32-3.el7  @epel
> < python-qpid-common.x86_64  0.32-3.el7  @epel
> < python-qpid-qmf.x86_64 0.28-29.el7 @epel
> < qpid-cpp-client.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server-store.x86_64
> < qpid-proton-c.x86_64   0.9-3.el7   @epel
> < qpid-qmf.x86_640.28-29.el7 @epel
> < qpid-tools.x86_64  0.32-3.el7  @epel
> ---
> > python-gofer-qpid.noarch   2.6.6-2.el7 @pulp-2.7-beta
> > python-qpid.noarch 0.32-9.el7  @epel
> > python-qpid-common.noarch  0.32-9.el7  @epel
> > python-qpid-qmf.x86_64 0.32-1.el7  @epel
> > qpid-cpp-client.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server-linearstore.x86_64
> > qpid-proton-c.x86_64   0.10-2.el7  @epel
> > qpid-qmf.x86_640.32-1.el7  @epel
> > qpid-tools.noarch  0.32-9.el7  @epel
> {noformat}







[jira] [Updated] (QPID-6822) Failure on: qpid-config add queue --durable "queue-id"

2015-11-03 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-6822:

Assignee: Kim van der Riet

> Failure on: qpid-config add queue --durable "queue-id"
> --
>
> Key: QPID-6822
> URL: https://issues.apache.org/jira/browse/QPID-6822
> Project: Qpid
>  Issue Type: Bug
>Affects Versions: 0.32
> Environment: RedHat 7.1
>Reporter: Peter Lacko
>Assignee: Kim van der Riet
>
> Attempt to create durable queue fails with:
> {noformat}
> $ qpid-config add queue --durable "queue"
> Failed: Exception: Exception from Agent: {u'error_code': 7, u'error_text': 
> 'Queue queue: create() failed: jexception 0x010c 
> EmptyFilePool::createSymLink() threw JERR__SYMLINK: Symbolic link 
> operation failed (file=/var/lib/qpidd/.qpidd/qls/p001/efp/2048k/in_use
> /85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl symlink=/var/lib/qpidd/.qpidd
> /qls/jrnl2/queue/85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl errno=13 
> (Permission denied)) (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/linearstore
> /MessageStoreImpl.cpp:425)'}
> {noformat}
> The queue is created successfully without the {{--durable}} parameter.
> The problem occurs after doing {{yum update}}. All qpid package versions are 
> listed below, before and after performing the update:
> {noformat}
> $ diff qpid-before-update.txt qpid-after-update.txt
> 1,10c1,10
> < python-gofer-qpid.noarch   2.6.6-1.git.48.3141846.el7
> < python-qpid.x86_64 0.32-3.el7  @epel
> < python-qpid-common.x86_64  0.32-3.el7  @epel
> < python-qpid-qmf.x86_64 0.28-29.el7 @epel
> < qpid-cpp-client.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server-store.x86_64
> < qpid-proton-c.x86_64   0.9-3.el7   @epel
> < qpid-qmf.x86_640.28-29.el7 @epel
> < qpid-tools.x86_64  0.32-3.el7  @epel
> ---
> > python-gofer-qpid.noarch   2.6.6-2.el7 @pulp-2.7-beta
> > python-qpid.noarch 0.32-9.el7  @epel
> > python-qpid-common.noarch  0.32-9.el7  @epel
> > python-qpid-qmf.x86_64 0.32-1.el7  @epel
> > qpid-cpp-client.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server-linearstore.x86_64
> > qpid-proton-c.x86_64   0.10-2.el7  @epel
> > qpid-qmf.x86_640.32-1.el7  @epel
> > qpid-tools.noarch  0.32-9.el7  @epel
> {noformat}






[jira] [Commented] (QPID-6822) Failure on: qpid-config add queue --durable "queue-id"

2015-11-03 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986970#comment-14986970
 ] 

Pavel Moravec commented on QPID-6822:
-

Kim,
could you please add the upgrade procedure to upstream documentation?

> Failure on: qpid-config add queue --durable "queue-id"
> --
>
> Key: QPID-6822
> URL: https://issues.apache.org/jira/browse/QPID-6822
> Project: Qpid
>  Issue Type: Bug
>Affects Versions: 0.32
> Environment: RedHat 7.1
>Reporter: Peter Lacko
>Assignee: Kim van der Riet
>
> Attempt to create durable queue fails with:
> {noformat}
> $ qpid-config add queue --durable "queue"
> Failed: Exception: Exception from Agent: {u'error_code': 7, u'error_text': 
> 'Queue queue: create() failed: jexception 0x010c 
> EmptyFilePool::createSymLink() threw JERR__SYMLINK: Symbolic link 
> operation failed (file=/var/lib/qpidd/.qpidd/qls/p001/efp/2048k/in_use
> /85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl symlink=/var/lib/qpidd/.qpidd
> /qls/jrnl2/queue/85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl errno=13 
> (Permission denied)) (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/linearstore
> /MessageStoreImpl.cpp:425)'}
> {noformat}
> The queue is created successfully without the {{--durable}} parameter.
> The problem occurs after doing {{yum update}}. All qpid package versions are 
> listed below, before and after performing the update:
> {noformat}
> $ diff qpid-before-update.txt qpid-after-update.txt
> 1,10c1,10
> < python-gofer-qpid.noarch   2.6.6-1.git.48.3141846.el7
> < python-qpid.x86_64 0.32-3.el7  @epel
> < python-qpid-common.x86_64  0.32-3.el7  @epel
> < python-qpid-qmf.x86_64 0.28-29.el7 @epel
> < qpid-cpp-client.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server-store.x86_64
> < qpid-proton-c.x86_64   0.9-3.el7   @epel
> < qpid-qmf.x86_640.28-29.el7 @epel
> < qpid-tools.x86_64  0.32-3.el7  @epel
> ---
> > python-gofer-qpid.noarch   2.6.6-2.el7 @pulp-2.7-beta
> > python-qpid.noarch 0.32-9.el7  @epel
> > python-qpid-common.noarch  0.32-9.el7  @epel
> > python-qpid-qmf.x86_64 0.32-1.el7  @epel
> > qpid-cpp-client.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server-linearstore.x86_64
> > qpid-proton-c.x86_64   0.10-2.el7  @epel
> > qpid-qmf.x86_640.32-1.el7  @epel
> > qpid-tools.noarch  0.32-9.el7  @epel
> {noformat}






[jira] [Commented] (QPID-6822) Failure on: qpid-config add queue --durable "queue-id"

2015-11-02 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985205#comment-14985205
 ] 

Pavel Moravec commented on QPID-6822:
-

There was some linearstore directory structure change between 0.32 and 0.34 
([~kpvdr] / Kim van der Riet would know the details).

Just a guess - have you restarted the broker after the upgrade? And did you 
follow the upgrade steps? (I can currently find them only in the Red Hat manuals: 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_MRG/3/html-single/Messaging_Installation_and_Configuration_Guide/index.html#Manually_Upgrade_Linearstore
 ).
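The errno=13 in the jexception quoted below is a plain EACCES from the symlink step, so the permission side of the failure can be checked outside the broker. A hedged diagnostic sketch (for a real diagnosis you would run it as the qpidd user against the broker's store directories rather than a scratch directory; the paths here are illustrative only):

```python
import os
import tempfile

def can_create_symlink(directory):
    """Return True if the current user can create a symlink in `directory`.

    Mirrors the EmptyFilePool::createSymLink step that fails with
    errno=13 (EACCES) when the store directories have wrong ownership
    or permissions.
    """
    target = os.path.join(directory, "target.jrnl")
    link = os.path.join(directory, "link.jrnl")
    open(target, "w").close()
    try:
        os.symlink(target, link)
    except OSError:
        return False
    finally:
        for path in (link, target):
            try:
                os.remove(path)
            except OSError:
                pass
    return True

# Scratch-directory check; point this at the broker's qls store path
# (e.g. under /var/lib/qpidd) for the actual diagnosis.
with tempfile.TemporaryDirectory() as scratch:
    print(can_create_symlink(scratch))
```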

> Failure on: qpid-config add queue --durable "queue-id"
> --
>
> Key: QPID-6822
> URL: https://issues.apache.org/jira/browse/QPID-6822
> Project: Qpid
>  Issue Type: Bug
>Affects Versions: 0.32
> Environment: RedHat 7.1
>Reporter: Peter Lacko
>
> Attempt to create durable queue fails with:
> {noformat}
> $ qpid-config add queue --durable "queue"
> Failed: Exception: Exception from Agent: {u'error_code': 7, u'error_text': 
> 'Queue queue: create() failed: jexception 0x010c 
> EmptyFilePool::createSymLink() threw JERR__SYMLINK: Symbolic link 
> operation failed (file=/var/lib/qpidd/.qpidd/qls/p001/efp/2048k/in_use
> /85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl symlink=/var/lib/qpidd/.qpidd
> /qls/jrnl2/queue/85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl errno=13 
> (Permission denied)) (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/linearstore
> /MessageStoreImpl.cpp:425)'}
> {noformat}
> The queue is created successfully without the {{--durable}} parameter.
> The problem occurs after doing {{yum update}}. All qpid package versions are 
> listed below, before and after performing the update:
> {noformat}
> $ diff qpid-before-update.txt qpid-after-update.txt
> 1,10c1,10
> < python-gofer-qpid.noarch   2.6.6-1.git.48.3141846.el7
> < python-qpid.x86_64 0.32-3.el7  @epel
> < python-qpid-common.x86_64  0.32-3.el7  @epel
> < python-qpid-qmf.x86_64 0.28-29.el7 @epel
> < qpid-cpp-client.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server-store.x86_64
> < qpid-proton-c.x86_64   0.9-3.el7   @epel
> < qpid-qmf.x86_640.28-29.el7 @epel
> < qpid-tools.x86_64  0.32-3.el7  @epel
> ---
> > python-gofer-qpid.noarch   2.6.6-2.el7 @pulp-2.7-beta
> > python-qpid.noarch 0.32-9.el7  @epel
> > python-qpid-common.noarch  0.32-9.el7  @epel
> > python-qpid-qmf.x86_64 0.32-1.el7  @epel
> > qpid-cpp-client.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server-linearstore.x86_64
> > qpid-proton-c.x86_64   0.10-2.el7  @epel
> > qpid-qmf.x86_640.32-1.el7  @epel
> > qpid-tools.noarch  0.32-9.el7  @epel
> {noformat}






[jira] [Comment Edited] (QPID-6822) Failure on: qpid-config add queue --durable "queue-id"

2015-11-02 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985205#comment-14985205
 ] 

Pavel Moravec edited comment on QPID-6822 at 11/2/15 1:26 PM:
--

There was some linearstore directory structure change between 0.32 and 0.34 ( 
kpvdr  / Kim van Der Riet would know).

Just a guess - have you restarted the broker after the upgrade? And did you 
follow the upgrade steps (I can find it now only in Red Hat manuals 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_MRG/3/html-single/Messaging_Installation_and_Configuration_Guide/index.html#Manually_Upgrade_Linearstore
 )?


was (Author: pmoravec):
There was some linearstore directory structure change between 0.32 and 0.34 ( 
kpvdr  / Kim van Der Riet would know).

Just a guess - have you restarted the broker after the upgrade? And did you 
follow the upgrade steps (I can find it now only in Red Hat manuals 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_MRG/3/html-single/Messaging_Installation_and_Configuration_Guide/index.html#Manually_Upgrade_Linearstore
 ).

> Failure on: qpid-config add queue --durable "queue-id"
> --
>
> Key: QPID-6822
> URL: https://issues.apache.org/jira/browse/QPID-6822
> Project: Qpid
>  Issue Type: Bug
>Affects Versions: 0.32
> Environment: RedHat 7.1
>Reporter: Peter Lacko
>
> Attempt to create durable queue fails with:
> {noformat}
> $ qpid-config add queue --durable "queue"
> Failed: Exception: Exception from Agent: {u'error_code': 7, u'error_text': 
> 'Queue queue: create() failed: jexception 0x010c 
> EmptyFilePool::createSymLink() threw JERR__SYMLINK: Symbolic link 
> operation failed (file=/var/lib/qpidd/.qpidd/qls/p001/efp/2048k/in_use
> /85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl symlink=/var/lib/qpidd/.qpidd
> /qls/jrnl2/queue/85570c5b-b2ca-4883-bf86-8df746ac6ee4.jrnl errno=13 
> (Permission denied)) (/builddir/build/BUILD/qpid-cpp-0.34/src/qpid/linearstore
> /MessageStoreImpl.cpp:425)'}
> {noformat}
> The queue is created successfully without the {{--durable}} parameter.
> The problem occurs after doing {{yum update}}. All qpid package versions are 
> listed below, before and after performing the update:
> {noformat}
> $ diff qpid-before-update.txt qpid-after-update.txt
> 1,10c1,10
> < python-gofer-qpid.noarch   2.6.6-1.git.48.3141846.el7
> < python-qpid.x86_64 0.32-3.el7  @epel
> < python-qpid-common.x86_64  0.32-3.el7  @epel
> < python-qpid-qmf.x86_64 0.28-29.el7 @epel
> < qpid-cpp-client.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server.x86_64 0.32-3.el7  @epel
> < qpid-cpp-server-store.x86_64
> < qpid-proton-c.x86_64   0.9-3.el7   @epel
> < qpid-qmf.x86_640.28-29.el7 @epel
> < qpid-tools.x86_64  0.32-3.el7  @epel
> ---
> > python-gofer-qpid.noarch   2.6.6-2.el7 @pulp-2.7-beta
> > python-qpid.noarch 0.32-9.el7  @epel
> > python-qpid-common.noarch  0.32-9.el7  @epel
> > python-qpid-qmf.x86_64 0.32-1.el7  @epel
> > qpid-cpp-client.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server.x86_64 0.34-4.el7  @epel
> > qpid-cpp-server-linearstore.x86_64
> > qpid-proton-c.x86_64   0.10-2.el7  @epel
> > qpid-qmf.x86_640.32-1.el7  @epel
> > qpid-tools.noarch  0.32-9.el7  @epel
> {noformat}






[jira] [Closed] (DISPATCH-156) [dispatch-tools] qdstat fails to connect to localhost via IPv4

2015-10-08 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed DISPATCH-156.
--
Resolution: Cannot Reproduce

It sounds like it is fixed upstream; it just does not work in my productized 
version, qpid-dispatch-router-0.4-10.el7.x86_64.

Let's close this upstream JIRA then (and I really should clone the upstream 
dispatch sources and build from master before filing an upstream JIRA).

> [dispatch-tools] qdstat fails to connect to localhost via IPv4
> --
>
> Key: DISPATCH-156
> URL: https://issues.apache.org/jira/browse/DISPATCH-156
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Management Agent
>Affects Versions: 0.4
>Reporter: Pavel Moravec
>Assignee: Ganesh Murthy
>Priority: Minor
> Fix For: 0.6
>
> Attachments: qdrouterd.conf
>
>
> Calling "qdstat -b localhost:5672 -c" tries to communicate to the dispatch 
> router via IPv6 only. If that fails, no attempt on IPv4 is done.
> Just for reference, /etc/hosts has:
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6






[jira] [Commented] (DISPATCH-156) [dispatch-tools] qdstat fails to connect to localhost via IPv4

2015-10-07 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947446#comment-14947446
 ] 

Pavel Moravec commented on DISPATCH-156:


Hi Ganesh,
use:

{code}
listener {
    addr: 0.0.0.0
    port: amqp
    authenticatePeer: no
}
{code}

It is my understanding that 0.0.0.0 means listening on any interface, right? But 
"qdstat -b localhost:5672 -c" won't work (the other two options - 127.0.0.1 or 
0.0.0.0 - work well).
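The underlying behaviour is visible with a plain getaddrinfo lookup: `localhost` typically resolves to both `::1` and `127.0.0.1`, and a client that only tries the first result fails against a listener bound to the IPv4-only `0.0.0.0`. A small sketch of the robust pattern (iterate over every resolved address instead of only the first; the port is the AMQP default from this report):

```python
import socket

# `localhost` usually resolves to both ::1 (IPv6) and 127.0.0.1 (IPv4);
# which one comes first depends on /etc/hosts and getaddrinfo sorting.
infos = socket.getaddrinfo("localhost", 5672, type=socket.SOCK_STREAM)
for family, socktype, proto, _canon, sockaddr in infos:
    print(family, sockaddr)

def connect_any(host, port, timeout=1.0):
    """Try every resolved address in turn, as a robust client should."""
    last_err = None
    for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock  # first address that accepts the connection
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err if last_err else OSError("no addresses resolved")
```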

> [dispatch-tools] qdstat fails to connect to localhost via IPv4
> --
>
> Key: DISPATCH-156
> URL: https://issues.apache.org/jira/browse/DISPATCH-156
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Management Agent
>Affects Versions: 0.4
>Reporter: Pavel Moravec
>Assignee: Ganesh Murthy
>Priority: Minor
> Fix For: 0.6
>
>
> Calling "qdstat -b localhost:5672 -c" tries to communicate to the dispatch 
> router via IPv6 only. If that fails, no attempt on IPv4 is done.
> Just for reference, /etc/hosts has:
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6






[jira] [Commented] (QPID-6740) [C++ broker, Python client] Message not delivered after "empty" exception

2015-09-17 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14791695#comment-14791695
 ] 

Pavel Moravec commented on QPID-6740:
-

Interesting use case. Per my testing on upstream qpid (with the 0.30 python 
client, though the issue sounds to be on the broker side, imho), I have a 
reproducer for the same using a single Python script:
{code}
from qpid.messaging import Connection
import threading

class ReceiverThread(threading.Thread):
    def __init__(self, receiver):
        super(ReceiverThread, self).__init__()
        self.receiver = receiver

    def run(self):
        print self.receiver.fetch()

session1 = Connection.establish('amqp://127.0.0.1').session()
sender = session1.sender(
    'mytestqueue; {create: always, node: {type: queue}}')
cleaner = session1.receiver('mytestqueue')
try:
    print cleaner.fetch(timeout=0)  # raises qpid.messaging.exceptions.Empty
except Exception, e:
    print e

session2 = Connection.establish('amqp://127.0.0.1').session()
receiver = session2.receiver('mytestqueue')

thread1 = ReceiverThread(receiver)
thread1.start()
sender.send({'msg': 'test'})
thread1.join()
{code}

In either case (using the original reproducer or mine), the broker does not 
send the message anywhere; see e.g. "qpid-stat -q mytestqueue", which shows 
zero acquired messages.

> [C++ broker, Python client] Message not delivered after "empty" exception
> -
>
> Key: QPID-6740
> URL: https://issues.apache.org/jira/browse/QPID-6740
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker, Python Client
>Affects Versions: 0.32
> Environment: CentOS 7.1
> qpid (C++) 0.32, qpid-python 0.32
>Reporter: Viktor Horvath
>
> If this is not a bug, the documentation might need an update, as I'm not 
> aware of what I did wrong.
> Here is a test case, you will need two ipython consoles.
> {code:title=console 1}
> from qpid.messaging import Connection
> session = Connection.establish('amqp://127.0.0.1').session()
> sender = session.sender(
> 'mytestqueue; {create: always, node: {durable: True, type: queue}}')
> cleaner = session.receiver('mytestqueue')
> cleaner.fetch(timeout=0) # will raise a qpid.messaging.exceptions.Empty
> {code}
> (The receiver is named _cleaner_ because it is only supposed to "free" the 
> queue from any old messages.)
> {code:title=console 2}
> from qpid.messaging import Connection
> session = Connection.establish('amqp://127.0.0.1').session()
> receiver = session.receiver('mytestqueue')
> receiver.fetch()
> {code}
> Back to the first console:
> {code:title=console 1}
> sender.send({'msg': 'test'})
> {code}
> *This message does not arrive at the second console. The receiver.fetch() 
> still blocks.*
> I have the following observations about this situation:
> # One workaround is to call cleaner.close(); the blocking receiver will 
> immediately return the message.
> # Another workaround is to specify a time-out in the receiver.fetch() call 
> (test it with timeout=60). The message will be returned, though only at the 
> end of the time-out!
> # Sending a second message will result in immediate delivery of the first 
> message.
> # Once the situation is unblocked, you have to start anew in order to 
> experience the blocking situation again. Don't forget to call 
> session.acknowledge() before ending the console 2, or use a fresh queue.
> # The problem only manifests itself when receiver.fetch() starts before the 
> message is sent.






[jira] [Commented] (DISPATCH-162) Dispatch sends Close with framing error over closed TCP connection

2015-09-03 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728614#comment-14728614
 ] 

Pavel Moravec commented on DISPATCH-162:


Maybe related https://issues.apache.org/jira/browse/DISPATCH-158 ?

> Dispatch sends Close with framing error over closed TCP connection
> --
>
> Key: DISPATCH-162
> URL: https://issues.apache.org/jira/browse/DISPATCH-162
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Router Node
>Affects Versions: 0.4
> Environment: Fedora Linux 4.0.5-200.fc21.x86_64 #1 SMP
> make test - built in self tests
>Reporter: Chuck Rolke
>
> From an Adverb network trace of self tests a repeating pattern is evident in 
> maybe a dozen instances:
> {noformat}
> ◊  ◊◊ 16.415479  Frame 4035  127.0.0.1:59760  -> 127.0.0.1:26330  ->   init 
> AMQP (0): (1.0.0), open [0]
> ◊  ◊◊ 16.418421  Frame 4037  127.0.0.1:59760 <-  127.0.0.1:26330 <-init 
> AMQP (0): (1.0.0)
> ◊  ◊◊ 16.418561  Frame 4039  127.0.0.1:59760 <-  127.0.0.1:26330 <-open 
> [0]
> ◊  ◊◊ 16.419732  Frame 4041  127.0.0.1:59760  -> 127.0.0.1:26330  ->   begin 
> [0,null], attach [0,0] sender link_104 (source: null, target: $management)
> ◊  ◊◊ 16.420060  Frame 4042  127.0.0.1:59760 <-  127.0.0.1:26330 <-begin 
> [0,0], attach [0,0] receiver link_104 (source: null, target: $management), 
> flow [0,0] (0,1000)
> ◊  ◊◊ 16.421171  Frame 4043  127.0.0.1:59760  -> 127.0.0.1:26330  ->   attach 
> [0,1] receiver link_105 (source: null, target: null)
> ◊  ◊◊ 16.421417  Frame 4044  127.0.0.1:59760 <-  127.0.0.1:26330 <-attach 
> [0,1] sender link_105 (source: endpoint_67, target: null)
> ◊  ◊◊ 16.422406  Frame 4045  127.0.0.1:59760  -> 127.0.0.1:26330  ->   flow 
> [0,1] (0,1), transfer [0,0] (0)
> ◊  ◊◊ 16.538553  Frame 4047  127.0.0.1:59760 <-  127.0.0.1:26330 <-flow 
> [0,0] (1,1000), transfer [0,1] (0), disposition [0] (receiver 0-0)
> ◊  ◊◊ 16.545259  Frame 4049  127.0.0.1:59760 <-  127.0.0.1:26330 <-close 
> [0]
>◊  close [0]
> Length: 84
> Doff: 2
> Type: AMQP (0)
> Channel: 0
> Performative: close (24)
> Arguments (list of 1 element)
>  Error (list of 3 elements)
>   Condition: amqp:connection:framing-error
>   Description: connection aborted
>   Info
> {noformat}
> The close performatives stand out: they are highlighted because they contain 
> an error.
> Going back to the raw Wireshark trace data, some TCP activity is revealed:
> * Frame 4048 is a TCP \[FIN, ACK\] from the client closing its connection. 
> The client is done and does not perform an orderly AMQP shutdown.
> * Frame 4049 is the frame in question. The client connection is closed and 
> Dispatch should not send the Close frame.
> * Frame 4050 is a TCP \[RST\] from the now-closed client socket. Wireshark 
> highlights this in red indicating a protocol violation.
> The original trace analysis is available at 
> http://people.apache.org/~chug/blog/qdr/q2.html  It's a big, interesting view 
> of dispatch router doing its thing.






[jira] [Created] (DISPATCH-158) Dispatch should ignore frames after receiving close frame

2015-08-27 Thread Pavel Moravec (JIRA)
Pavel Moravec created DISPATCH-158:
--

 Summary: Dispatch should ignore frames after receiving close frame
 Key: DISPATCH-158
 URL: https://issues.apache.org/jira/browse/DISPATCH-158
 Project: Qpid Dispatch
  Issue Type: Bug
  Components: Router Node
Affects Versions: 0.4
Reporter: Pavel Moravec
Priority: Minor


(not sure if component matches, or if the bug isn't in proton, please change 
accordingly)

When a client / mobile subscriber receiving messages suddenly closes the 
AMQP 1.0 connection by sending a close performative, the dispatch router has to 
ignore any further performatives on that connection (ideally raising a 
warning), and it should not forward them.

See https://issues.jboss.org/browse/ENTMQCL-14 for an example of a 
malfunctioning client that sends a close frame followed by a flow frame 
(against the AMQP spec). Dispatch then forwards the flow frame (with invalid 
channel number 65534 and invalid handle 4294967294) before closing the AMQP 
connection. This flow frame should not be forwarded but ignored - don't 
propagate misbehaviour in the AMQP 1.0 communication.

(The dispatch router itself does not violate the AMQP 1.0 spec; it merely 
reacts to such a violation by sending a wrong and nonsensical flow frame.)
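The requested behaviour boils down to a small per-connection state machine. A hedged sketch (names are illustrative, not the Dispatch internals):

```python
class ConnectionState:
    """Tracks whether a close performative was already received on a connection."""

    def __init__(self):
        self.closed = False
        self.warnings = []

    def on_frame(self, frame, forward):
        if self.closed:
            # AMQP 1.0: nothing may follow close; drop and warn, never forward.
            self.warnings.append("frame after close ignored: %s" % frame)
            return "ignored"
        if frame == "close":
            self.closed = True
            return "closed"
        forward(frame)
        return "forwarded"
```

In the ENTMQCL-14 scenario, the flow frame arriving after close would then be dropped with a warning instead of being forwarded with a bogus channel and handle.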






[jira] [Created] (DISPATCH-156) [dispatch-tools] qdstat fails to connect to localhost via IPv4

2015-08-24 Thread Pavel Moravec (JIRA)
Pavel Moravec created DISPATCH-156:
--

 Summary: [dispatch-tools] qdstat fails to connect to localhost via 
IPv4
 Key: DISPATCH-156
 URL: https://issues.apache.org/jira/browse/DISPATCH-156
 Project: Qpid Dispatch
  Issue Type: Bug
  Components: Management Agent
Affects Versions: 0.4
Reporter: Pavel Moravec
Priority: Minor


Calling qdstat -b localhost:5672 -c tries to communicate with the dispatch 
router via IPv6 only. If that fails, no attempt over IPv4 is made.

Just for reference, /etc/hosts has:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
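The usual fix for this class of bug is to iterate over all getaddrinfo() results instead of stopping at the first (IPv6) one. A generic sketch, not the actual qdstat code:

```python
import socket

def connect_any(host, port):
    # Try every address family getaddrinfo() offers (IPv6 and IPv4),
    # falling through to the next entry on connection failure.
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err if last_err else OSError("no addresses for %s" % host)
```

With /etc/hosts as above, getaddrinfo() returns both the ::1 and 127.0.0.1 entries for localhost, so the IPv4 address is tried when the IPv6 connect fails.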







[jira] [Commented] (QPID-6698) [amqp1.0] connections drop when heartbeat is used and the time of day changes

2015-08-19 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14702986#comment-14702986
 ] 

Pavel Moravec commented on QPID-6698:
-

It sounds like this also fixes the same issue on AMQP 0-10 - that's great.
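The underlying fix pattern - measuring idle timeouts against a monotonic clock rather than wall-clock time - can be sketched as follows (illustrative only, not the qpidd code):

```python
import time

class IdleTimer:
    """Heartbeat deadline immune to wall-clock adjustments (NTP, date changes)."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        # time.monotonic() never jumps when the system clock is adjusted.
        self.deadline = time.monotonic() + timeout_s

    def touch(self):
        # Any traffic on the connection pushes the deadline forward.
        self.deadline = time.monotonic() + self.timeout_s

    def expired(self):
        return time.monotonic() >= self.deadline
```

A deadline computed this way cannot "expire" merely because the time of day changed, which is exactly the failure mode described in the issue.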

 [amqp1.0] connections drop when heartbeat is used and the time of day changes
 -

 Key: QPID-6698
 URL: https://issues.apache.org/jira/browse/QPID-6698
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.31, 0.32, qpid-cpp-0.34
Reporter: Ken Giusti
Assignee: Ken Giusti
 Fix For: qpid-cpp-next


 If heartbeats are enabled on a connection using AMQP 1.0, the broker will 
 drop the connection if the system clock is adjusted.  Qpidd thinks the 
 connection's idle timeout expired, when it actually hasn't.






[jira] [Resolved] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-08-02 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-6491.
-
Resolution: Fixed

 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: qpid-tools-next

 Attachments: QPID-6491.patch


 qpid-route route map, while generating the federation topology, connects to 
 each and every broker in the federation to query its federation peers. All 
 such connections (except for the very first broker) are made as the anonymous 
 user only.
 It is requested that the tool pass username, password and optionally also the 
 --client-sasl-mechanism parameter to all other brokers as well.
 (another option would be for the tool to get the credentials info from the 
 broker, but currently the QMF response to links does not contain such info. 
 This option would need much more code change also on the broker side)






[jira] [Commented] (QPID-4272) Large amounts of code are duplicated between the SSL and TCP transports

2015-07-07 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14616348#comment-14616348
 ] 

Pavel Moravec commented on QPID-4272:
-

Ah sorry, I didn't know about the new JIRA.

 Large amounts of code are duplicated between the SSL and TCP transports
 ---

 Key: QPID-4272
 URL: https://issues.apache.org/jira/browse/QPID-4272
 Project: Qpid
  Issue Type: Improvement
  Components: C++ Broker, C++ Client
Reporter: Andrew Stitcher
Assignee: Andrew Stitcher
 Fix For: 0.19









[jira] [Reopened] (QPID-4272) Large amounts of code are duplicated between the SSL and TCP transports

2015-07-04 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec reopened QPID-4272:
-

The duplicate code removal is fine, but it hides the error code / description 
coming from SSL. Originally:

{code}
cpp/src/qpid/sys/ssl/SslIo.cpp:
QPID_LOG(error, "Error reading socket: " << getErrorString(PR_GetError()));
{code}

now:
{code}
cpp/src/qpid/sys/posix/AsynchIO.cpp:
QPID_LOG(error, "Error reading socket: " << qpid::sys::strError(errno) << "(" 
 << errno << ")");
{code}

That means most of the nicely described SSL errors, like:

{code}
Error reading socket: SSL peer cannot verify your certificate. [-12271]
{code}

are hidden behind a generic:
{code}
Error reading socket: Success(0)
{code}


Could these SSL errors be put back?

 Large amounts of code are duplicated between the SSL and TCP transports
 ---

 Key: QPID-4272
 URL: https://issues.apache.org/jira/browse/QPID-4272
 Project: Qpid
  Issue Type: Improvement
  Components: C++ Broker, C++ Client
Reporter: Andrew Stitcher
Assignee: Andrew Stitcher
 Fix For: 0.19









[jira] [Commented] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-06-10 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14580098#comment-14580098
 ] 

Pavel Moravec commented on QPID-6491:
-

You mean that saving the credentials in the command output is bad practice? I 
agree with that, just wanted to clarify.

 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: 0.33

 Attachments: QPID-6491.patch


 qpid-route route map during generating the federation topology connects to 
 each and every broker in the federation to query it's federation peers. All 
 such connections (except for the very first broker) are made as anonymous 
 user only.
 It is requested the tool passes username, password and optionally also 
 --client-sasl-mechanism parameter to all other brokers as well.
 (another option to this would be the tool gets the credentials info from the 
 broker, but currently QMF response to links does not contain such info. This 
 option would need much more code change also on broker side)






[jira] [Comment Edited] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-06-06 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575720#comment-14575720
 ] 

Pavel Moravec edited comment on QPID-6491 at 6/6/15 1:41 PM:
-

I realized the patch is wrong:

self.broker.saslUser is not the username and self.broker.authUser is not its 
password.

I.e. the commit should be:
{code}
<  url = BrokerURL(host=link.host, port=link.port, 
user=self.broker.saslUser, password=self.broker.authUser)
---
>  url = BrokerURL(host=link.host, port=link.port, 
 user=self.broker.authUser, password=self.broker.authPass)
{code}

(In my defense, it can only be spotted when using credentials with 
username != password.)


Further, is printing credentials desired, or rather disturbing? I.e. should 
the output be:

{code}
Finding Linked Brokers:
company_B/password_B@localhost:6001... Ok
company_B/password_B@localhost:6002... Ok
company_B/password_B@localhost:6003... Ok
{code}

(that's current), or rather just:

{code}
Finding Linked Brokers:
localhost:6001... Ok
localhost:6002... Ok
localhost:6003... Ok
{code}

? I would vote for the second (without credentials), as 1) it's shorter and the 
user knows the credentials anyway, and 2) it can be shared with others without 
a potential credentials leak.

Any objections if I remove the credentials in the commit fixing the 
user/pass?



was (Author: pmoravec):
I realized the patch is wrong:

self.broker.saslUser is not the username and self.broker.authUser is not its 
password.

I.e. the there should be commit:

{quote}
 url = BrokerURL(host=link.host, port=link.port, 
user=self.broker.saslUser, password=self.broker.authUser)
---
 url = BrokerURL(host=link.host, port=link.port, 
 user=self.broker.authUser, password=self.broker.authPass)
{quote}

(to my defense, it can be spotted only when using credentials with 
username!=password).


Further, is printing credentials desired or rather disturbing? I.e. should be 
the output be:

{quote}
Finding Linked Brokers:
company_B/password_B@localhost:6001... Ok
company_B/password_B@localhost:6002... Ok
company_B/password_B@localhost:6003... Ok
{quote}

(that's current), or rather just:

{quote}
Finding Linked Brokers:
localhost:6001... Ok
localhost:6002... Ok
localhost:6003... Ok
{quote}

? I would vote for the second (without credentials), as 1) it's shorter and the 
user knows the credentials, 2) it can be shared with others without potential 
credentials leak.

Any objections if I would remove the credentials in the commit fixing the 
user/pass ?


 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: 0.33

 Attachments: QPID-6491.patch


 qpid-route route map during generating the federation topology connects to 
 each and every broker in the federation to query it's federation peers. All 
 such connections (except for the very first broker) are made as anonymous 
 user only.
 It is requested the tool passes username, password and optionally also 
 --client-sasl-mechanism parameter to all other brokers as well.
 (another option to this would be the tool gets the credentials info from the 
 broker, but currently QMF response to links does not contain such info. This 
 option would need much more code change also on broker side)






[jira] [Comment Edited] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-06-06 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575720#comment-14575720
 ] 

Pavel Moravec edited comment on QPID-6491 at 6/6/15 1:38 PM:
-

I realized the patch is wrong:

self.broker.saslUser is not the username and self.broker.authUser is not its 
password.

I.e. the there should be commit:

{quote}
 url = BrokerURL(host=link.host, port=link.port, 
user=self.broker.saslUser, password=self.broker.authUser)
---
 url = BrokerURL(host=link.host, port=link.port, 
 user=self.broker.authUser, password=self.broker.authPass)
{quote}

(to my defense, it can be spotted only when using credentials with 
username!=password).


Further, is printing credentials desired or rather disturbing? I.e. should be 
the output be:

{quote}
Finding Linked Brokers:
company_B/password_B@localhost:6001... Ok
company_B/password_B@localhost:6002... Ok
company_B/password_B@localhost:6003... Ok
{quote}

(that's current), or rather just:

{quote}
Finding Linked Brokers:
localhost:6001... Ok
localhost:6002... Ok
localhost:6003... Ok
{quote}

? I would vote for the second (without credentials), as 1) it's shorter and the 
user knows the credentials, 2) it can be shared with others without potential 
credentials leak.

Any objections if I would remove the credentials in the commit fixing the 
user/pass ?



was (Author: pmoravec):
I realized the patch is wrong:

self.broker.saslUser is not the username and self.broker.authUser is not its 
password.

I.e. the there should be commit:

{quote}
 url = BrokerURL(host=link.host, port=link.port, 
user=self.broker.saslUser, password=self.broker.authUser)
---
 url = BrokerURL(host=link.host, port=link.port, 
 user=self.broker.authUser, password=self.broker.authPass)
{quote}

(to my defense, it can be spotted only when using credentials with 
username!=password).


Further, is printing credentials desired or rather disturbing? I.e. should be 
the output be:

{quote}
Finding Linked Brokers:
company_B/password_B@localhost:6001... Ok
company_B/password_B@localhost:6002... Ok
company_B/password_B@localhost:6003... Ok
{quote}

(that's current), or rather just:

Finding Linked Brokers:
localhost:6001... Ok
localhost:6002... Ok
localhost:6003... Ok

? I would vote for the second (without credentials), as 1) it's shorter and the 
user knows the credentials, 2) it can be shared with others without potential 
credentials leak.

Any objections if I would remove the credentials in the commit fixing the 
user/pass ?


 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: 0.33

 Attachments: QPID-6491.patch


 qpid-route route map during generating the federation topology connects to 
 each and every broker in the federation to query it's federation peers. All 
 such connections (except for the very first broker) are made as anonymous 
 user only.
 It is requested the tool passes username, password and optionally also 
 --client-sasl-mechanism parameter to all other brokers as well.
 (another option to this would be the tool gets the credentials info from the 
 broker, but currently QMF response to links does not contain such info. This 
 option would need much more code change also on broker side)






[jira] [Reopened] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-06-06 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec reopened QPID-6491:
-

I realized the patch is wrong:

self.broker.saslUser is not the username and self.broker.authUser is not its 
password.

I.e. the commit should be:

{quote}
 url = BrokerURL(host=link.host, port=link.port, 
user=self.broker.saslUser, password=self.broker.authUser)
---
 url = BrokerURL(host=link.host, port=link.port, 
 user=self.broker.authUser, password=self.broker.authPass)
{quote}

(In my defense, it can only be spotted when using credentials with 
username != password.)


Further, is printing credentials desired, or rather disturbing? I.e. should 
the output be:

{quote}
Finding Linked Brokers:
company_B/password_B@localhost:6001... Ok
company_B/password_B@localhost:6002... Ok
company_B/password_B@localhost:6003... Ok
{quote}

(that's current), or rather just:

Finding Linked Brokers:
localhost:6001... Ok
localhost:6002... Ok
localhost:6003... Ok

? I would vote for the second (without credentials), as 1) it's shorter and the 
user knows the credentials anyway, and 2) it can be shared with others without 
a potential credentials leak.

Any objections if I remove the credentials in the commit fixing the 
user/pass?
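The second option could look like this hypothetical formatting helper (not the actual qpid-route code; the show_credentials flag is an assumption):

```python
def format_broker(host, port, user=None, password=None, show_credentials=False):
    # Default to the short form so "qpid-route map" output can be shared
    # without leaking passwords; opt in explicitly to see credentials.
    if show_credentials and user is not None:
        return "%s/%s@%s:%s" % (user, password, host, port)
    return "%s:%s" % (host, port)
```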


 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: 0.33

 Attachments: QPID-6491.patch


 qpid-route route map during generating the federation topology connects to 
 each and every broker in the federation to query it's federation peers. All 
 such connections (except for the very first broker) are made as anonymous 
 user only.
 It is requested the tool passes username, password and optionally also 
 --client-sasl-mechanism parameter to all other brokers as well.
 (another option to this would be the tool gets the credentials info from the 
 broker, but currently QMF response to links does not contain such info. This 
 option would need much more code change also on broker side)






[jira] [Updated] (QPID-5866) [C++ client] AMQP 1.0 closing session without closing receiver first marks further messages as redelivered

2015-06-06 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-5866:

Description: 
Having a C++ AMQP 1.0 consumer with prefetch and closing its session without 
closing the receiver first, the client does not send back to the broker 
dispositions for unconsumed messages (those that were buffered by the client 
due to prefetch but not offered to the application).

This causes the next consumer to get messages with the redelivered flag 
enabled / delivery count incremented.

Reproducer:

{code}
$ qpid-send --messages 3 --address "q;{create:sender}"

$ qpid-receive --connection-options "{protocol:amqp1.0}" --print-headers true 
--messages 1 --address q
Properties: {sn:1, ts:1395841514445073615}

$ qpid-receive --connection-options "{protocol:amqp1.0}" --print-headers true 
--messages 1 --address q
Redelivered: true
Properties: {sn:2, ts:1395841514445244860, x-amqp-delivery-count:1}

$
{code}

  was:
Having a C++ AMQP 1.0 consumer with prefetch and closing its session without 
closing receiver first, the client does not send back to the broker disposition 
about unconsumed messages (that were buffered by the client due to prefetch but 
not offered to the application).

This causes next consumer to get messages with redelivered flag enabled / 
delivery count incremented.

Reproducer:

$ qpid-send --messages 3 --address q;{create:sender}

$ qpid-receive --connection-options {protocol:amqp1.0} --print-headers true 
--messages 1 --address q
Properties: {sn:1, ts:1395841514445073615}

$ qpid-receive --connection-options {protocol:amqp1.0} --print-headers true 
--messages 1 --address q
Redelivered: true
Properties: {sn:2, ts:1395841514445244860, x-amqp-delivery-count:1}

$



 [C++ client] AMQP 1.0 closing session without closing receiver first marks 
 further messages as redelivered
 --

 Key: QPID-5866
 URL: https://issues.apache.org/jira/browse/QPID-5866
 Project: Qpid
  Issue Type: Bug
  Components: C++ Client
Affects Versions: 0.28
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: Future


 Having a C++ AMQP 1.0 consumer with prefetch and closing its session without 
 closing receiver first, the client does not send back to the broker 
 disposition about unconsumed messages (that were buffered by the client due 
 to prefetch but not offered to the application).
 This causes next consumer to get messages with redelivered flag enabled / 
 delivery count incremented.
 Reproducer:
 {code}
 $ qpid-send --messages 3 --address q;{create:sender}
 $ qpid-receive --connection-options {protocol:amqp1.0} --print-headers true 
 --messages 1 --address q
 Properties: {sn:1, ts:1395841514445073615}
 $ qpid-receive --connection-options {protocol:amqp1.0} --print-headers true 
 --messages 1 --address q
 Redelivered: true
 Properties: {sn:2, ts:1395841514445244860, x-amqp-delivery-count:1}
 $
 {code}






[jira] [Resolved] (QPID-6551) [C++ broker]: linearstore raising JERR_LFCR_SEQNUMNOTFOUND after sending many DTX transactions

2015-05-21 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-6551.
-
   Resolution: Fixed
Fix Version/s: 0.33

Fixed by Kim's patch.

Verified this fixes the original problem with 2^16 journal files for tpl. 
Haven't been patient enough to verify the 2^32 journal files fix :).

 [C++ broker]: linearstore raising JERR_LFCR_SEQNUMNOTFOUND after sending many 
 DTX transactions
 --

 Key: QPID-6551
 URL: https://issues.apache.org/jira/browse/QPID-6551
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Reporter: Pavel Moravec
Assignee: Pavel Moravec
 Fix For: 0.33

 Attachments: JERR_LFCR_SEQNUMNOTFOUND.patch


 Sending many DTX transactions (such that tpl journal requires 64k journal 
 files) causes a transaction fails with JERR_LFCR_SEQNUMNOTFOUND journal error:
 jexception 0x0500 LinearFileController::find() threw 
 JERR_LFCR_SEQNUMNOTFOUND: File sequence number not found (fileSeqNumber=0)
 Reproducer:
 nohup ./src/qpidd --load-module=src/linearstore.so --efp-file-size=32 
 --log-to-file=/tmp/qpidd.log 
 # the --efp-file-size parameter is just for faster reproducer
 ./src/tests/qpid-txtest --dtx=yes --check=no --init=yes --tx-count=10 
 --total-messages=1000 --size=1
 nohup ./src/tests/qpid-txtest --dtx=yes --check=no --init=no 
 --tx-count=20 --size=1 
 After a (longer) while, linearstore raises JERR_LFCR_SEQNUMNOTFOUND and 
 subsequently various other exceptions/errors.
 The root cause is:
 - qpid/linearstore/journal/txn_map.h declares uint16_t pfid_
 - but it needs to store file sequence number stored as uint64_t elsewhere
 - for 65536th journal file (of tpl journal), re-casting uint64_t to uint16_t 
 returns obvious zero
 - but there is no file number zero in the journal 
 Fix just being tested:
 $ svn diff
 Index: qpid/linearstore/journal/txn_map.cpp
 ===
 --- qpid/linearstore/journal/txn_map.cpp  (revision 1680527)
 +++ qpid/linearstore/journal/txn_map.cpp  (working copy)
 @@ -36,7 +36,7 @@
  
  txn_data_t::txn_data_t(const uint64_t rid,
 const uint64_t drid,
 -   const uint16_t pfid,
 +   const uint64_t pfid,
 const uint64_t foffs,
 const bool enq_flag,
 const bool tpc_flag,
 Index: qpid/linearstore/journal/txn_map.h
 ===
 --- qpid/linearstore/journal/txn_map.h(revision 1680527)
 +++ qpid/linearstore/journal/txn_map.h(working copy)
 @@ -39,7 +39,7 @@
  {
  uint64_t rid_;  /// Record id for this operation
  uint64_t drid_; /// Dequeue record id for this operation
 -uint16_t pfid_; /// Physical file id, to be used when 
 transferring to emap on commit
 +uint64_t pfid_; /// Physical file id, to be used when 
 transferring to emap on commit
  uint64_t foffs_;/// Offset in file for this record
  bool enq_flag_; /// If true, enq op, otherwise deq op
  bool tpc_flag_; /// 2PC transaction if true
 @@ -47,7 +47,7 @@
  bool aio_compl_;/// Initially false, set to true when record 
 AIO returns
  txn_data_t(const uint64_t rid,
 const uint64_t drid,
 -   const uint16_t pfid,
 +   const uint64_t pfid,
 const uint64_t foffs,
 const bool enq_flag,
 const bool tpc_flag,
 $






[jira] [Closed] (QPID-6551) [C++ broker]: linearstore raising JERR_LFCR_SEQNUMNOTFOUND after sending many DTX transactions

2015-05-21 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6551.
---

 [C++ broker]: linearstore raising JERR_LFCR_SEQNUMNOTFOUND after sending many 
 DTX transactions
 --

 Key: QPID-6551
 URL: https://issues.apache.org/jira/browse/QPID-6551
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Reporter: Pavel Moravec
Assignee: Pavel Moravec
 Fix For: 0.33

 Attachments: JERR_LFCR_SEQNUMNOTFOUND.patch


 Sending many DTX transactions (such that tpl journal requires 64k journal 
 files) causes a transaction fails with JERR_LFCR_SEQNUMNOTFOUND journal error:
 jexception 0x0500 LinearFileController::find() threw 
 JERR_LFCR_SEQNUMNOTFOUND: File sequence number not found (fileSeqNumber=0)
 Reproducer:
 nohup ./src/qpidd --load-module=src/linearstore.so --efp-file-size=32 
 --log-to-file=/tmp/qpidd.log 
 # the --efp-file-size parameter is just for faster reproducer
 ./src/tests/qpid-txtest --dtx=yes --check=no --init=yes --tx-count=10 
 --total-messages=1000 --size=1
 nohup ./src/tests/qpid-txtest --dtx=yes --check=no --init=no 
 --tx-count=20 --size=1 
 After a (longer) while, linearstore raises JERR_LFCR_SEQNUMNOTFOUND and 
 subsequently various other exceptions/errors.
 The root cause is:
 - qpid/linearstore/journal/txn_map.h declares uint16_t pfid_
 - but it needs to store file sequence number stored as uint64_t elsewhere
 - for 65536th journal file (of tpl journal), re-casting uint64_t to uint16_t 
 returns obvious zero
 - but there is no file number zero in the journal 
 Fix just being tested:
 $ svn diff
 Index: qpid/linearstore/journal/txn_map.cpp
 ===
 --- qpid/linearstore/journal/txn_map.cpp  (revision 1680527)
 +++ qpid/linearstore/journal/txn_map.cpp  (working copy)
 @@ -36,7 +36,7 @@
  
  txn_data_t::txn_data_t(const uint64_t rid,
 const uint64_t drid,
 -   const uint16_t pfid,
 +   const uint64_t pfid,
 const uint64_t foffs,
 const bool enq_flag,
 const bool tpc_flag,
 Index: qpid/linearstore/journal/txn_map.h
 ===
 --- qpid/linearstore/journal/txn_map.h(revision 1680527)
 +++ qpid/linearstore/journal/txn_map.h(working copy)
 @@ -39,7 +39,7 @@
  {
  uint64_t rid_;  /// Record id for this operation
  uint64_t drid_; /// Dequeue record id for this operation
 -uint16_t pfid_; /// Physical file id, to be used when 
 transferring to emap on commit
 +uint64_t pfid_; /// Physical file id, to be used when 
 transferring to emap on commit
  uint64_t foffs_;/// Offset in file for this record
  bool enq_flag_; /// If true, enq op, otherwise deq op
  bool tpc_flag_; /// 2PC transaction if true
 @@ -47,7 +47,7 @@
  bool aio_compl_;/// Initially false, set to true when record 
 AIO returns
  txn_data_t(const uint64_t rid,
 const uint64_t drid,
 -   const uint16_t pfid,
 +   const uint64_t pfid,
 const uint64_t foffs,
 const bool enq_flag,
 const bool tpc_flag,
 $






[jira] [Created] (QPID-6551) [C++ broker]: linearstore raising JERR_LFCR_SEQNUMNOTFOUND after sending many DTX transactions

2015-05-20 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6551:
---

 Summary: [C++ broker]: linearstore raising 
JERR_LFCR_SEQNUMNOTFOUND after sending many DTX transactions
 Key: QPID-6551
 URL: https://issues.apache.org/jira/browse/QPID-6551
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Reporter: Pavel Moravec
Assignee: Pavel Moravec


Sending many DTX transactions (such that the tpl journal requires 64k journal 
files) causes a transaction to fail with a JERR_LFCR_SEQNUMNOTFOUND journal error:

jexception 0x0500 LinearFileController::find() threw JERR_LFCR_SEQNUMNOTFOUND: 
File sequence number not found (fileSeqNumber=0)

Reproducer:
nohup ./src/qpidd --load-module=src/linearstore.so --efp-file-size=32 
--log-to-file=/tmp/qpidd.log 
# the --efp-file-size parameter is just for faster reproducer

./src/tests/qpid-txtest --dtx=yes --check=no --init=yes --tx-count=10 
--total-messages=1000 --size=1
nohup ./src/tests/qpid-txtest --dtx=yes --check=no --init=no --tx-count=20 
--size=1 

After a (longer) while, linearstore raises JERR_LFCR_SEQNUMNOTFOUND and 
subsequently various other exceptions/errors.

The root cause is:
- qpid/linearstore/journal/txn_map.h declares uint16_t pfid_
- but it needs to store the file sequence number, which is a uint64_t elsewhere
- for the 65536th journal file (of the tpl journal), re-casting the uint64_t to
uint16_t obviously returns zero
- but there is no file number zero in the journal


Fix just being tested:

$ svn diff
Index: qpid/linearstore/journal/txn_map.cpp
===
--- qpid/linearstore/journal/txn_map.cpp(revision 1680527)
+++ qpid/linearstore/journal/txn_map.cpp(working copy)
@@ -36,7 +36,7 @@
 
 txn_data_t::txn_data_t(const uint64_t rid,
const uint64_t drid,
-   const uint16_t pfid,
+   const uint64_t pfid,
const uint64_t foffs,
const bool enq_flag,
const bool tpc_flag,
Index: qpid/linearstore/journal/txn_map.h
===
--- qpid/linearstore/journal/txn_map.h  (revision 1680527)
+++ qpid/linearstore/journal/txn_map.h  (working copy)
@@ -39,7 +39,7 @@
 {
 uint64_t rid_;  /// Record id for this operation
 uint64_t drid_; /// Dequeue record id for this operation
-    uint16_t pfid_; /// Physical file id, to be used when transferring to emap on commit
+    uint64_t pfid_; /// Physical file id, to be used when transferring to emap on commit
 uint64_t foffs_;/// Offset in file for this record
 bool enq_flag_; /// If true, enq op, otherwise deq op
 bool tpc_flag_; /// 2PC transaction if true
@@ -47,7 +47,7 @@
 bool aio_compl_;/// Initially false, set to true when record AIO 
returns
 txn_data_t(const uint64_t rid,
const uint64_t drid,
-   const uint16_t pfid,
+   const uint64_t pfid,
const uint64_t foffs,
const bool enq_flag,
const bool tpc_flag,
$







[jira] [Resolved] (QPID-5107) Trace queue&session deletion statistics show zero values for some counters everytime

2015-04-30 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-5107.
-
   Resolution: Fixed
Fix Version/s: 0.30

OK, closing it as fixed in (I suppose) 0.28, where all the stats except the 
queue ones are fixed properly, and opening a new issue for the queue stats.

 Trace queue&session deletion statistics show zero values for some counters 
 everytime
 

 Key: QPID-5107
 URL: https://issues.apache.org/jira/browse/QPID-5107
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.22
Reporter: Pavel Moravec
Priority: Minor
  Labels: easyfix, easytest, patch
 Fix For: 0.30

 Attachments: QPID-5107.patch, QPID-5107_debugStats.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 Description of problem:
 qpid logs (at trace level) statistics about object deletion. However, some of
 these data are wrong. In particular, msgDepth for a queue is always zero (and
 msgTotalDequeues equals msgTotalEnqueues even though no consumer was
 subscribed to the queue), and unackedMessages for a session is always zero
 as well.
 Version-Release number of selected component (if applicable):
 qpid 0.22
 How reproducible:
 100%
 Steps to Reproduce:
 1) msgDepth:0 for queue:
 echo auth=no >> /etc/qpid/qpidd.conf
 echo trace=yes >> /etc/qpid/qpidd.conf
 echo log-to-file=/tmp/qpidd.log >> /etc/qpid/qpidd.conf
 rm -rf /var/lib/qpidd/* /tmp/qpidd.log
 service qpidd restart
 qpid-send -m 123 -a "testQueue; {create:always, delete:always}"
 sleep 10  # just to let periodic processing run and print out the stats
 grep "Mgmt delete queue" /tmp/qpidd.log
 Actual results:
 2013-08-29 14:05:38 [Model] trace Mgmt delete queue. id:testQueue Statistics: 
 {acquires:123, bindingCount:0, bindingCountHigh:0, bindingCountLow:0, 
 byteDepth:0, byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, 
 bytePersistDequeues:0, bytePersistEnqueues:0, byteTotalDequeues:0, 
 byteTotalEnqueues:0, byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, 
 consumerCountHigh:0, consumerCountLow:0, discardsLvq:0, discardsOverflow:0, 
 discardsPurge:0, discardsRing:0, discardsSubscriber:0, discardsTtl:0, 
 flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
 messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, 
 msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
 msgPersistEnqueues:0, msgTotalDequeues:123, msgTotalEnqueues:123, 
 msgTxnDequeues:0, msgTxnEnqueues:0, releases:0, reroutes:0, 
 unackedMessages:0, unackedMessagesHigh:0, unackedMessagesLow:0}
 Expected results:
 acquires:0
 msgTotalDequeues:0
 (several other counters, like byteFtdDequeues, are presumably wrong as well)
 2) Reproducer for unackedMessages:0 for session:
 qpid-send -m 11 -a "myQueue; {create:always}"
 qpid-receive -m 100 -a "myQueue; {create:always}" -f
 (in another terminal)
 qpid-tool
 list connection
 call ID_of_qpid-receive-connection close
 and now check result:
 grep Tx /tmp/qpidd.log | grep session
 should return unackedMessages:11 but returns zero.






[jira] [Updated] (QPID-5107) Trace queue&session deletion statistics show zero values for some counters everytime

2015-04-30 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-5107:

Fix Version/s: (was: 0.30)
   0.28

 Trace queue&session deletion statistics show zero values for some counters 
 everytime
 

 Key: QPID-5107
 URL: https://issues.apache.org/jira/browse/QPID-5107
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.22
Reporter: Pavel Moravec
Priority: Minor
  Labels: easyfix, easytest, patch
 Fix For: 0.28

 Attachments: QPID-5107.patch, QPID-5107_debugStats.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 Description of problem:
 qpid logs (at trace level) statistics about object deletion. However, some of
 these data are wrong. In particular, msgDepth for a queue is always zero (and
 msgTotalDequeues equals msgTotalEnqueues even though no consumer was
 subscribed to the queue), and unackedMessages for a session is always zero
 as well.
 Version-Release number of selected component (if applicable):
 qpid 0.22
 How reproducible:
 100%
 Steps to Reproduce:
 1) msgDepth:0 for queue:
 echo auth=no >> /etc/qpid/qpidd.conf
 echo trace=yes >> /etc/qpid/qpidd.conf
 echo log-to-file=/tmp/qpidd.log >> /etc/qpid/qpidd.conf
 rm -rf /var/lib/qpidd/* /tmp/qpidd.log
 service qpidd restart
 qpid-send -m 123 -a "testQueue; {create:always, delete:always}"
 sleep 10  # just to let periodic processing run and print out the stats
 grep "Mgmt delete queue" /tmp/qpidd.log
 Actual results:
 2013-08-29 14:05:38 [Model] trace Mgmt delete queue. id:testQueue Statistics: 
 {acquires:123, bindingCount:0, bindingCountHigh:0, bindingCountLow:0, 
 byteDepth:0, byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, 
 bytePersistDequeues:0, bytePersistEnqueues:0, byteTotalDequeues:0, 
 byteTotalEnqueues:0, byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, 
 consumerCountHigh:0, consumerCountLow:0, discardsLvq:0, discardsOverflow:0, 
 discardsPurge:0, discardsRing:0, discardsSubscriber:0, discardsTtl:0, 
 flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
 messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, 
 msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
 msgPersistEnqueues:0, msgTotalDequeues:123, msgTotalEnqueues:123, 
 msgTxnDequeues:0, msgTxnEnqueues:0, releases:0, reroutes:0, 
 unackedMessages:0, unackedMessagesHigh:0, unackedMessagesLow:0}
 Expected results:
 acquires:0
 msgTotalDequeues:0
 (several other counters, like byteFtdDequeues, are presumably wrong as well)
 2) Reproducer for unackedMessages:0 for session:
 qpid-send -m 11 -a "myQueue; {create:always}"
 qpid-receive -m 100 -a "myQueue; {create:always}" -f
 (in another terminal)
 qpid-tool
 list connection
 call ID_of_qpid-receive-connection close
 and now check result:
 grep Tx /tmp/qpidd.log | grep session
 should return unackedMessages:11 but returns zero.






[jira] [Created] (QPID-6524) [C++ broker]: Fix for QPID-5107 incomplete for queues

2015-04-30 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6524:
---

 Summary: [C++ broker]: Fix for QPID-5107 incomplete for queues
 Key: QPID-6524
 URL: https://issues.apache.org/jira/browse/QPID-6524
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Priority: Minor


QPID-5107 fixes everything except the very basic scenario where the queue is 
deleted before broker shutdown, i.e. in this use case:

{quote}
qpid-config add queue testQueue1
qpid-send --address testQueue1 -m 13 --content-size=1024
qpid-config del queue testQueue1 --force
{quote}

the statistics show:

{quote}
trace Mgmt destroying queue. id:testQueue1 Statistics: {acquires:0, 
bindingCount:0, bindingCountHigh:1, bindingCountLow:0, byteDepth:0, 
byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, bytePersistDequeues:0, 
bytePersistEnqueues:0, byteTotalDequeues:14183, byteTotalEnqueues:14183, 
byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, consumerCountHigh:0, 
consumerCountLow:0, creator:anonymous, discardsLvq:0, discardsOverflow:0, 
discardsPurge:0, discardsRing:0, discardsSubscriber:0, discardsTtl:0, 
flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, 
msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
msgPersistEnqueues:0, msgTotalDequeues:13, msgTotalEnqueues:13, 
msgTxnDequeues:0, msgTxnEnqueues:0, redirectPeer:, redirectSource:False, 
releases:0, reroutes:0, unackedMessages:0, unackedMessagesHigh:0, 
unackedMessagesLow:0}
{quote}

See e.g. msgDepth:0 or msgTotalDequeues:13.

Those values are correct from a technical point of view (the broker can delete 
only an empty queue, after purging all its messages and deleting all bindings, 
etc.), but they are not right from the perspective of an end user, who saw that 
the broker had some messages in the queue, yet the log for deleting the queue 
does not mention them.






[jira] [Commented] (QPID-5107) Trace queue&session deletion statistics show zero values for some counters everytime

2015-04-30 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521249#comment-14521249
 ] 

Pavel Moravec commented on QPID-5107:
-

Though the above commit fixed the problem at the time it was committed, it now 
seems to be broken in the same way. Reopening it.

 Trace queue&session deletion statistics show zero values for some counters 
 everytime
 

 Key: QPID-5107
 URL: https://issues.apache.org/jira/browse/QPID-5107
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.22
Reporter: Pavel Moravec
Priority: Minor
  Labels: easyfix, easytest, patch
 Attachments: QPID-5107.patch, QPID-5107_debugStats.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 Description of problem:
 qpid logs (at trace level) statistics about object deletion. However, some of
 these data are wrong. In particular, msgDepth for a queue is always zero (and
 msgTotalDequeues equals msgTotalEnqueues even though no consumer was
 subscribed to the queue), and unackedMessages for a session is always zero
 as well.
 Version-Release number of selected component (if applicable):
 qpid 0.22
 How reproducible:
 100%
 Steps to Reproduce:
 1) msgDepth:0 for queue:
 echo auth=no >> /etc/qpid/qpidd.conf
 echo trace=yes >> /etc/qpid/qpidd.conf
 echo log-to-file=/tmp/qpidd.log >> /etc/qpid/qpidd.conf
 rm -rf /var/lib/qpidd/* /tmp/qpidd.log
 service qpidd restart
 qpid-send -m 123 -a "testQueue; {create:always, delete:always}"
 sleep 10  # just to let periodic processing run and print out the stats
 grep "Mgmt delete queue" /tmp/qpidd.log
 Actual results:
 2013-08-29 14:05:38 [Model] trace Mgmt delete queue. id:testQueue Statistics: 
 {acquires:123, bindingCount:0, bindingCountHigh:0, bindingCountLow:0, 
 byteDepth:0, byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, 
 bytePersistDequeues:0, bytePersistEnqueues:0, byteTotalDequeues:0, 
 byteTotalEnqueues:0, byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, 
 consumerCountHigh:0, consumerCountLow:0, discardsLvq:0, discardsOverflow:0, 
 discardsPurge:0, discardsRing:0, discardsSubscriber:0, discardsTtl:0, 
 flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
 messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, 
 msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
 msgPersistEnqueues:0, msgTotalDequeues:123, msgTotalEnqueues:123, 
 msgTxnDequeues:0, msgTxnEnqueues:0, releases:0, reroutes:0, 
 unackedMessages:0, unackedMessagesHigh:0, unackedMessagesLow:0}
 Expected results:
 acquires:0
 msgTotalDequeues:0
 (several other counters, like byteFtdDequeues, are presumably wrong as well)
 2) Reproducer for unackedMessages:0 for session:
 qpid-send -m 11 -a "myQueue; {create:always}"
 qpid-receive -m 100 -a "myQueue; {create:always}" -f
 (in another terminal)
 qpid-tool
 list connection
 call ID_of_qpid-receive-connection close
 and now check result:
 grep Tx /tmp/qpidd.log | grep session
 should return unackedMessages:11 but returns zero.






[jira] [Commented] (QPID-6524) [C++ broker]: Fix for QPID-5107 incomplete for queues

2015-04-30 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14521348#comment-14521348
 ] 

Pavel Moravec commented on QPID-6524:
-

Currently, the debugStats method responsible for the log is called from the 
Queue class destructor. That is necessary for queues deleted during broker 
shutdown, when no other method like Queue::destroyed() is called. So, unless I 
see a simple way to detect a shutting-down broker, the current log should 
remain as is.

To limit user confusion, I suggest adding a new log based on this patch:

{noformat}
Index: qpid/broker/Queue.cpp
===
--- qpid/broker/Queue.cpp   (revision 1676937)
+++ qpid/broker/Queue.cpp   (working copy)
@@ -1136,6 +1136,8 @@
 
 void Queue::destroyed()
 {
+    if (mgmtObject != 0)
+        mgmtObject->debugStats("deleting");
     unbind(broker->getExchanges());
     remove(0, 0, boost::bind(&Queue::abandoned, this, _1), REPLICATOR/*even 
acquired message are treated as abandoned*/, false);
     if (alternateExchange.get()) {
{noformat}

In the case of a queue deleted during broker runtime, the trace logs will be:

{noformat}
2015-04-30 12:36:05 [Model] trace Mgmt deleting queue. id:testQueue1 
Statistics: {acquires:0, bindingCount:1, bindingCountHigh:1, bindingCountLow:0, 
byteDepth:14183, byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, 
bytePersistDequeues:0, bytePersistEnqueues:0, byteTotalDequeues:0, 
byteTotalEnqueues:14183, byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, 
consumerCountHigh:0, consumerCountLow:0, creator:anonymous, discardsLvq:0, 
discardsOverflow:0, discardsPurge:0, discardsRing:0, discardsSubscriber:0, 
discardsTtl:0, flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:13, 
msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
msgPersistEnqueues:0, msgTotalDequeues:0, msgTotalEnqueues:13, 
msgTxnDequeues:0, msgTxnEnqueues:0, redirectPeer:, redirectSource:False, 
releases:0, reroutes:0, unackedMessages:0, unackedMessagesHigh:0, 
unackedMessagesLow:0}
2015-04-30 12:36:05 [Model] trace Mgmt destroying queue. id:testQueue1 
Statistics: {acquires:0, bindingCount:0, bindingCountHigh:1, bindingCountLow:0, 
byteDepth:0, byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, 
bytePersistDequeues:0, bytePersistEnqueues:0, byteTotalDequeues:14183, 
byteTotalEnqueues:14183, byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, 
consumerCountHigh:0, consumerCountLow:0, creator:anonymous, discardsLvq:0, 
discardsOverflow:0, discardsPurge:0, discardsRing:0, discardsSubscriber:0, 
discardsTtl:0, flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, 
msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
msgPersistEnqueues:0, msgTotalDequeues:13, msgTotalEnqueues:13, 
msgTxnDequeues:0, msgTxnEnqueues:0, redirectPeer:, redirectSource:False, 
releases:0, reroutes:0, unackedMessages:0, unackedMessagesHigh:0, 
unackedMessagesLow:0}
{noformat}

I.e. the 1st one with proper counters (e.g. msgDepth:13, msgTotalDequeues:0), 
the 2nd one with counters after the purge.


In the case of a queue deleted during broker shutdown, the trace log will be:

{noformat}
2015-04-30 12:36:10 [Model] trace Mgmt destroying queue. id:testQueue1 
Statistics: {acquires:0, bindingCount:0, bindingCountHigh:1, bindingCountLow:0, 
byteDepth:14183, byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, 
bytePersistDequeues:0, bytePersistEnqueues:0, byteTotalDequeues:0, 
byteTotalEnqueues:14183, byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, 
consumerCountHigh:0, consumerCountLow:0, creator:anonymous, discardsLvq:0, 
discardsOverflow:0, discardsPurge:0, discardsRing:0, discardsSubscriber:0, 
discardsTtl:0, flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:13, 
msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
msgPersistEnqueues:0, msgTotalDequeues:0, msgTotalEnqueues:13, 
msgTxnDequeues:0, msgTxnEnqueues:0, redirectPeer:, redirectSource:False, 
releases:0, reroutes:0, unackedMessages:0, unackedMessagesHigh:0, 
unackedMessagesLow:0}
{noformat}

I.e. with proper counters.




 [C++ broker]: Fix for QPID-5107 incomplete for queues
 -

 Key: QPID-6524
 URL: https://issues.apache.org/jira/browse/QPID-6524
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Priority: Minor

 QPID-5107 fixes everything except the very basic scenario where the queue is 
 deleted before broker shutdown, i.e. in this use case:

[jira] [Resolved] (QPID-6524) [C++ broker]: Fix for QPID-5107 incomplete for queues

2015-04-30 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-6524.
-
   Resolution: Fixed
Fix Version/s: Future
 Assignee: Pavel Moravec

Thanks Gordon for the mgmtObject idea.

 [C++ broker]: Fix for QPID-5107 incomplete for queues
 -

 Key: QPID-6524
 URL: https://issues.apache.org/jira/browse/QPID-6524
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: Future


 QPID-5107 fixes everything except the very basic scenario where the queue is 
 deleted before broker shutdown, i.e. in this use case:
 {quote}
 qpid-config add queue testQueue1
 qpid-send --address testQueue1 -m 13 --content-size=1024
 qpid-config del queue testQueue1 --force
 {quote}
 the statistics show:
 {quote}
 trace Mgmt destroying queue. id:testQueue1 Statistics: {acquires:0, 
 bindingCount:0, bindingCountHigh:1, bindingCountLow:0, byteDepth:0, 
 byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, bytePersistDequeues:0, 
 bytePersistEnqueues:0, byteTotalDequeues:14183, byteTotalEnqueues:14183, 
 byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, consumerCountHigh:0, 
 consumerCountLow:0, creator:anonymous, discardsLvq:0, discardsOverflow:0, 
 discardsPurge:0, discardsRing:0, discardsSubscriber:0, discardsTtl:0, 
 flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
 messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, 
 msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
 msgPersistEnqueues:0, msgTotalDequeues:13, msgTotalEnqueues:13, 
 msgTxnDequeues:0, msgTxnEnqueues:0, redirectPeer:, redirectSource:False, 
 releases:0, reroutes:0, unackedMessages:0, unackedMessagesHigh:0, 
 unackedMessagesLow:0}
 {quote}
 See e.g. msgDepth:0 or msgTotalDequeues:13.
 Those values are correct from a technical point of view (the broker can delete 
 only an empty queue, after purging all its messages and deleting all bindings, 
 etc.), but they are not right from the perspective of an end user, who saw 
 that the broker had some messages in the queue, yet the log for deleting the 
 queue does not mention them.






[jira] [Closed] (QPID-6524) [C++ broker]: Fix for QPID-5107 incomplete for queues

2015-04-30 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6524.
---

 [C++ broker]: Fix for QPID-5107 incomplete for queues
 -

 Key: QPID-6524
 URL: https://issues.apache.org/jira/browse/QPID-6524
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: Future


 QPID-5107 fixes everything except the very basic scenario where the queue is 
 deleted before broker shutdown, i.e. in this use case:
 {quote}
 qpid-config add queue testQueue1
 qpid-send --address testQueue1 -m 13 --content-size=1024
 qpid-config del queue testQueue1 --force
 {quote}
 the statistics show:
 {quote}
 trace Mgmt destroying queue. id:testQueue1 Statistics: {acquires:0, 
 bindingCount:0, bindingCountHigh:1, bindingCountLow:0, byteDepth:0, 
 byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, bytePersistDequeues:0, 
 bytePersistEnqueues:0, byteTotalDequeues:14183, byteTotalEnqueues:14183, 
 byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, consumerCountHigh:0, 
 consumerCountLow:0, creator:anonymous, discardsLvq:0, discardsOverflow:0, 
 discardsPurge:0, discardsRing:0, discardsSubscriber:0, discardsTtl:0, 
 flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, 
 messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, 
 msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, 
 msgPersistEnqueues:0, msgTotalDequeues:13, msgTotalEnqueues:13, 
 msgTxnDequeues:0, msgTxnEnqueues:0, redirectPeer:, redirectSource:False, 
 releases:0, reroutes:0, unackedMessages:0, unackedMessagesHigh:0, 
 unackedMessagesLow:0}
 {quote}
 See e.g. msgDepth:0 or msgTotalDequeues:13.
 Those values are correct from a technical point of view (the broker can delete 
 only an empty queue, after purging all its messages and deleting all bindings, 
 etc.), but they are not right from the perspective of an end user, who saw 
 that the broker had some messages in the queue, yet the log for deleting the 
 queue does not mention them.






[jira] [Resolved] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-04-21 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec resolved QPID-6491.
-
   Resolution: Fixed
Fix Version/s: 0.33
 Assignee: Pavel Moravec

 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: 0.33

 Attachments: QPID-6491.patch


 qpid-route "route map", while generating the federation topology, connects to 
 each and every broker in the federation to query its federation peers. All 
 such connections (except for the very first broker) are made as the anonymous 
 user only.
 It is requested that the tool pass the username, password and optionally also 
 the --client-sasl-mechanism parameter to all other brokers as well.
 (Another option would be for the tool to get the credentials info from the 
 broker, but currently the QMF response to links does not contain such info. This 
 option would also need much more code change on the broker side.)






[jira] [Closed] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-04-21 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6491.
---

 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: 0.33

 Attachments: QPID-6491.patch


 qpid-route "route map", while generating the federation topology, connects to 
 each and every broker in the federation to query its federation peers. All 
 such connections (except for the very first broker) are made as the anonymous 
 user only.
 It is requested that the tool pass the username, password and optionally also 
 the --client-sasl-mechanism parameter to all other brokers as well.
 (Another option would be for the tool to get the credentials info from the 
 broker, but currently the QMF response to links does not contain such info. This 
 option would also need much more code change on the broker side.)






[jira] [Created] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-04-13 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6491:
---

 Summary: qpid-route map does not use any authentication when 
querying other brokers
 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Priority: Minor


qpid-route "route map", while generating the federation topology, connects to 
each and every broker in the federation to query its federation peers. All 
such connections (except for the very first broker) are made as the anonymous 
user only.

It is requested that the tool pass the username, password and optionally also 
the --client-sasl-mechanism parameter to all other brokers as well.

(Another option would be for the tool to get the credentials info from the 
broker, but currently the QMF response to links does not contain such info. This 
option would also need much more code change on the broker side.)






[jira] [Updated] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-04-13 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-6491:

Attachment: QPID-6491.patch

Proposed patch: for the connection to every broker in the federation, use the 
username and password from the broker URL of the 1st broker. I.e. when running:

qpid-route route map guest/guest@mrg7001:7001

the guest/guest credentials will be passed.

If running:

qpid-route route map guest/guest@mrg7001:7001 --client-sasl-mechanism=PLAIN

the SASL mechanism will be tried against all the brokers as well.

 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Priority: Minor
 Attachments: QPID-6491.patch


 qpid-route "route map", while generating the federation topology, connects to 
 each and every broker in the federation to query its federation peers. All 
 such connections (except for the very first broker) are made as the anonymous 
 user only.
 It is requested that the tool pass the username, password and optionally also 
 the --client-sasl-mechanism parameter to all other brokers as well.
 (Another option would be for the tool to get the credentials info from the 
 broker, but currently the QMF response to links does not contain such info. This 
 option would also need much more code change on the broker side.)






[jira] [Comment Edited] (QPID-6491) qpid-route map does not use any authentication when querying other brokers

2015-04-13 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492010#comment-14492010
 ] 

Pavel Moravec edited comment on QPID-6491 at 4/13/15 10:13 AM:
---

Proposed patch: for the connection to every broker in the federation, use the 
username and password from the broker URL of the 1st broker. I.e. when running:

qpid-route route map guest/guest@mrg7001:7001

the guest/guest credentials will be passed.

If running:

qpid-route route map guest/guest@mrg7001:7001 --client-sasl-mechanism=PLAIN

the SASL mechanism will be tried against all the brokers as well.

Note the username and password would be shown in all the "Finding Linked 
Brokers" output, which might not be desired, e.g.:

# qpid-route route map guest/guest@mrg7001:7001 --client-sasl-mechanism=PLAIN

Finding Linked Brokers:
guest/guest@mrg7001:7001... Ok
guest/guest@mrg6001:6001... Ok
guest/guest@mrg5001:5001... Ok
guest/guest@mrg4001:4001... Ok

Dynamic Routes:
..

On the other hand, the current behaviour prints the credentials for the 1st 
broker anyway.
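For illustration, extracting the credentials from a broker URL of the form user/password@host:port, as the patch reuses them for every hop, might look like the following sketch (the parse_url helper and its return shape are hypothetical, not qpid-route's actual code):

```python
# Hypothetical sketch: split "user/password@host:port" so the same
# credentials can be reused for every broker in the federation.
# Not qpid-route's actual implementation.
def parse_url(url):
    creds, _, hostport = url.rpartition("@")   # credentials are optional
    user = password = None
    if creds:
        user, _, password = creds.partition("/")
    host, _, port = hostport.partition(":")
    return user, password, host, int(port or 5672)  # 5672 = default AMQP port

print(parse_url("guest/guest@mrg7001:7001"))
```

A URL without credentials, e.g. "mrg1:5672", would yield None for both user and password, which corresponds to the current anonymous connections.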


was (Author: pmoravec):
Proposed patch: for connection to every broker in federation, use the username 
and password from brokerURL of the 1st broker. I.e. when running:

qpid-route route map guest/guest@mrg7001:7001

guest/guest credentials will be passed.

If running:

qpid-route route map guest/guest@mrg7001:7001 --client-sasl-mechanism=PLAIN

The SASL mechanism will be used for all the other brokers as well.

 qpid-route map does not use any authentication when querying other brokers
 --

 Key: QPID-6491
 URL: https://issues.apache.org/jira/browse/QPID-6491
 Project: Qpid
  Issue Type: Bug
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Priority: Minor
 Attachments: QPID-6491.patch


 When generating the federation topology, qpid-route route map connects to 
 each and every broker in the federation to query its federation peers. All 
 such connections (except to the very first broker) are made as the anonymous 
 user only.
 It is requested that the tool pass the username, password and optionally also 
 the --client-sasl-mechanism parameter to all the other brokers as well.
 (Another option would be for the tool to get the credentials from the broker 
 itself, but currently the QMF response for links does not contain such info; 
 this option would also need much more code change on the broker side.)






[jira] [Closed] (QPID-6397) [C++ broker] segfault when processing QMF method during periodic processing

2015-03-05 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6397.
---

 [C++ broker] segfault when processing QMF method during periodic processing
 ---

 Key: QPID-6397
 URL: https://issues.apache.org/jira/browse/QPID-6397
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.31
Reporter: Pavel Moravec
Assignee: Pavel Moravec
 Fix For: Future


 There is a race condition causing segfault when:
 - one thread executes periodic processing with traces enabled (at least for 
 qpid::management::ManagementAgent::debugSnapshot)
 - second thread is just processing QMF method from a client
 The root cause is that the first thread iterates through the managementObjects 
 map in dumpMap (or through newManagementObjects in dumpVector) while the 
 second thread executes moveNewObjects, which moves entries from 
 newManagementObjects to managementObjects.
 See backtraces hit (dumpMap shown, hit also dumpVector):
 (gdb) bt # of thread 1
 #0  0x003f0c632885 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
 #1  0x003f0c634065 in abort () at abort.c:92
 #2  0x003f0c62b9fe in __assert_fail_base (fmt=<value optimized out>, 
     assertion=0x7ff117bd5c94 "px != 0", 
     file=0x7ff117bd5c68 "/usr/include/boost/smart_ptr/shared_ptr.hpp", 
     line=<value optimized out>, function=<value optimized out>) at assert.c:96
 #3  0x003f0c62bac0 in __assert_fail (assertion=0x7ff117bd5c94 "px != 0", 
     file=0x7ff117bd5c68 "/usr/include/boost/smart_ptr/shared_ptr.hpp", line=418, 
     function=0x7ff117bd5e80 "T* boost::shared_ptr<template-parameter-1-1>::operator->() const [with T = qpid::management::ManagementObject]") at assert.c:105
 #4  0x7ff117947139 in boost::shared_ptr<qpid::management::ManagementObject>::operator->() const () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #5  0x7ff117bafcd9 in qpid::management::(anonymous namespace)::dumpMap(std::map<qpid::management::ObjectId, boost::shared_ptr<qpid::management::ManagementObject>, std::less<qpid::management::ObjectId>, std::allocator<std::pair<qpid::management::ObjectId const, boost::shared_ptr<qpid::management::ManagementObject> > > > const&) () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #6  0x7ff117bb06ba in qpid::management::ManagementAgent::debugSnapshot(char const*) () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #7  0x7ff117b954ed in qpid::management::ManagementAgent::periodicProcessing() () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #8  0x7ff117bc5c1f in boost::_mfi::mf0<void, qpid::management::ManagementAgent>::operator()(qpid::management::ManagementAgent*) const () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #9  0x7ff117bc4384 in void boost::_bi::list1<boost::_bi::value<qpid::management::ManagementAgent*> >::operator()<boost::_mfi::mf0<void, qpid::management::ManagementAgent>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, qpid::management::ManagementAgent>&, boost::_bi::list0&, int) () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #10 0x7ff117bc1269 in boost::_bi::bind_t<void, boost::_mfi::mf0<void, qpid::management::ManagementAgent>, boost::_bi::list1<boost::_bi::value<qpid::management::ManagementAgent*> > >::operator()() () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #11 0x7ff117bbc1e0 in boost::detail::function::void_function_obj_invoker0<boost::_bi::bind_t<void, boost::_mfi::mf0<void, qpid::management::ManagementAgent>, boost::_bi::list1<boost::_bi::value<qpid::management::ManagementAgent*> > >, void>::invoke(boost::detail::function::function_buffer&) () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #12 0x7ff117a5e2af in boost::function0<void>::operator()() const () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #13 0x7ff117b8e400 in qpid::management::(anonymous namespace)::Periodic::fire() () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidbroker.so.2
 #14 0x7ff1173b518f in qpid::sys::TimerTask::fireTask() () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidcommon.so.2
 ---Type <return> to continue, or q <return> to quit---
 #15 0x7ff1173b63e9 in qpid::sys::Timer::fire(boost::intrusive_ptr<qpid::sys::TimerTask>) () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidcommon.so.2
 #16 0x7ff1173b5d1d in qpid::sys::Timer::run() () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidcommon.so.2
 #17 0x7ff1173280ef in qpid::sys::(anonymous namespace)::runRunnable(void*) () from /root/qpid-trunk/qpid/cpp/BLD/src/libqpidcommon.so.2
 #18 0x003f0ce077f1 in start_thread (arg=0x7ff116c8f700) at pthread_create.c:301
 #19 0x003f0c6e570d in 

[jira] [Assigned] (QPID-6397) [C++ broker] segfault when processing QMF method during periodic processing

2015-03-05 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec reassigned QPID-6397:
---

Assignee: Pavel Moravec

 [C++ broker] segfault when processing QMF method during periodic processing
 ---

 Key: QPID-6397
 URL: https://issues.apache.org/jira/browse/QPID-6397
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.31
Reporter: Pavel Moravec
Assignee: Pavel Moravec

 There is a race condition causing segfault when:
 - one thread executes periodic processing with traces enabled (at least for 
 qpid::management::ManagementAgent::debugSnapshot)
 - second thread is just processing QMF method from a client
 The root cause is that the first thread iterates through the managementObjects 
 map in dumpMap (or through newManagementObjects in dumpVector) while the 
 second thread executes moveNewObjects, which moves entries from 
 newManagementObjects to managementObjects.

[jira] [Commented] (QPID-6397) [C++ broker] segfault when processing QMF method during periodic processing

2015-03-02 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14342987#comment-14342987
 ] 

Pavel Moravec commented on QPID-6397:
-

Review request for a patch: https://reviews.apache.org/r/31619/

 [C++ broker] segfault when processing QMF method during periodic processing
 ---

 Key: QPID-6397
 URL: https://issues.apache.org/jira/browse/QPID-6397
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.31
Reporter: Pavel Moravec

 There is a race condition causing segfault when:
 - one thread executes periodic processing with traces enabled (at least for 
 qpid::management::ManagementAgent::debugSnapshot)
 - second thread is just processing QMF method from a client
 The root cause is that the first thread iterates through the managementObjects 
 map in dumpMap (or through newManagementObjects in dumpVector) while the 
 second thread executes moveNewObjects, which moves entries from 
 newManagementObjects to managementObjects.

[jira] [Created] (QPID-6397) [C++ broker] segfault when processing QMF method during periodic processing

2015-02-18 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6397:
---

 Summary: [C++ broker] segfault when processing QMF method during 
periodic processing
 Key: QPID-6397
 URL: https://issues.apache.org/jira/browse/QPID-6397
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.31
Reporter: Pavel Moravec


There is a race condition causing segfault when:
- one thread executes periodic processing with traces enabled (at least for 
qpid::management::ManagementAgent::debugSnapshot)
- second thread is just processing QMF method from a client

The root cause is that the first thread iterates through the managementObjects map 
in dumpMap (or through newManagementObjects in dumpVector) while the second thread 
executes moveNewObjects, which moves entries from newManagementObjects to 
managementObjects.
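The iterate-while-moving pattern can be demonstrated in miniature. The following is a hedged Python stand-in for the C++ race, not broker code: one thread walks a map the way dumpMap() walks managementObjects, while another thread moves entries into it the way moveNewObjects() does.

```python
# Python stand-in for the race: reader iterates a dict while a writer
# moves entries into it. Names are illustrative, not broker code.
import threading

managed = {i: "obj%d" % i for i in range(1000)}          # ~managementObjects
new_objects = {i: "new%d" % i for i in range(1000, 2000)}  # ~newManagementObjects

def move_new_objects():
    # ~moveNewObjects(): drain new_objects into managed with no lock held
    while new_objects:
        k, v = new_objects.popitem()
        managed[k] = v        # mutates the map the dumper is walking

def dump_map():
    # ~dumpMap(): walk the map; a concurrent insert may invalidate iteration
    try:
        for k in managed:
            _ = managed[k]
        return None
    except RuntimeError as e:  # "dictionary changed size during iteration"
        return e

t = threading.Thread(target=move_new_objects)
t.start()
err = dump_map()
t.join()
```

Python detects the conflict and raises; the C++ std::map iteration has no such check, so the same interleaving dereferences an invalid entry and trips the shared_ptr assertion instead. Whether err is actually raised depends on thread timing, just as the broker crash does.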

See backtraces hit (dumpMap shown, hit also dumpVector; the thread 1 backtrace is 
identical to the one quoted in the messages above):
(gdb) bt  # of thread 2
#0  0x7ff116ca99c7 in qpid::types::VariantImpl::~VariantImpl() () from 
/root/qpid-trunk/qpid/cpp/BLD/src/libqpidtypes.so.1
#1  0x7ff116caf2e9 in qpid::types::Variant::~Variant() () from 

[jira] [Commented] (QPID-6397) [C++ broker] segfault when processing QMF method during periodic processing

2015-02-18 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14325884#comment-14325884
 ] 

Pavel Moravec commented on QPID-6397:
-

What helped with troubleshooting is to add some debugs:

Index: cpp/src/qpid/management/ManagementAgent.cpp
===================================================================
--- cpp/src/qpid/management/ManagementAgent.cpp (revision 1660046)
+++ cpp/src/qpid/management/ManagementAgent.cpp (working copy)
@@ -1263,7 +1263,9 @@
 
 void ManagementAgent::handleMethodRequest(Buffer& inBuffer, const string& replyToKey, uint32_t sequence, const string& userId)
 {
+    QPID_LOG(trace, "PavelM: hMR before mNO, nMO.size=" << newManagementObjects.size());
     moveNewObjects();
+    QPID_LOG(trace, "PavelM: hMR after mNO, nMO.size=" << newManagementObjects.size());
 
     string   methodName;
     string   packageName;
@@ -1358,7 +1360,9 @@
 void ManagementAgent::handleMethodRequest (const string& body, const string& rte, const string& rtk,
                                            const string& cid, const string& userId, bool viaLocal)
 {
+    QPID_LOG(trace, "PavelM: hMR before mNO, nMO.size=" << newManagementObjects.size());
     moveNewObjects();
+    QPID_LOG(trace, "PavelM: hMR after mNO, nMO.size=" << newManagementObjects.size());
 
     string   methodName;
     Variant::Map inMap;
@@ -2711,8 +2715,10 @@
                << pendingDeletedObjs.size() << " pending deletes"
                << summarizeAgents());
 
+    QPID_LOG(trace, "PavelM: debugSnapshot before dump, mO.size=" << managementObjects.size());
     QPID_LOG_IF(trace, managementObjects.size(),
                 title << ": objects" << dumpMap(managementObjects));
+    QPID_LOG(trace, "PavelM: debugSnapshot before dump, mO.size=" << managementObjects.size());
     QPID_LOG_IF(trace, newManagementObjects.size(),
                 title << ": new objects" << dumpVector(newManagementObjects));
 }


And of course add logging:

log-enable=trace+:debugSnapshot
log-enable=trace+:handleMethodRequest
log-enable=notice+


 [C++ broker] segfault when processing QMF method during periodic processing
 ---

 Key: QPID-6397
 URL: https://issues.apache.org/jira/browse/QPID-6397
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.31
Reporter: Pavel Moravec

 There is a race condition causing segfault when:
 - one thread executes periodic processing with traces enabled (at least for 
 qpid::management::ManagementAgent::debugSnapshot)
 - second thread is just processing QMF method from a client
 The root cause is that the first thread iterates through the managementObjects 
 map in dumpMap (or through newManagementObjects in dumpVector) while the 
 second thread executes moveNewObjects, which moves entries from 
 newManagementObjects to managementObjects.

[jira] [Updated] (QPID-6297) Python client (qpid.messaging) raises KeyError instead of reconnecting

2015-01-11 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-6297:

Attachment: goferBug.cap

tcpdump from the error.

(I changed SSL to non-SSL traffic on purpose, for easy following of the tcpdump)

What happens:
- I bounced qpidd at the beginning to see goferd reconnect & reestablish 
sessions
- two TCP connections were established (ports 42292 and 42293); the 2nd one 
creates an AMQP session named 42d8f2cf-.. (important below)
- at 14:37:49 GMT, I blocked goferd-qpidd traffic (iptables -j DROP) for 14 
seconds; since the client machine had 4 TCP retries set, that was enough
- retries on the TCP connection on port 42293 cause goferd to detect the 
connection loss - but only _this_ connection seems to be lost, not the 42292 one
- at 14:38:04 GMT, one new connection was established from port 42296. It tried 
to attach to the session named 42d8f2cf-.. and was rejected by the broker with 
session-busy / session-already-attached. The TCP connection closed, with no 
further goferd activity on the wire

netstat outputs after the test:
goferd client:
[root@localhost ~]# netstat -anp | grep 5672
tcp0  0 10.34.84.221:42292  10.34.84.76:5672ESTABLISHED 
44483/python
tcp0  0 10.34.84.221:42296  10.34.84.76:5672TIME_WAIT   
-   
[root@localhost ~]# 

qpidd server:
[root@pmoravec-sat6 ~]# netstat -anp | grep 5672
tcp0  0 0.0.0.0:56720.0.0.0:*   
LISTEN  119588/qpidd
tcp0  0 10.34.84.76:567210.34.84.221:42292  
ESTABLISHED 119588/qpidd
tcp0  0 10.34.84.76:567210.34.84.221:42288  
TIME_WAIT   -   
tcp0  0 10.34.84.76:567210.34.84.221:42289  
TIME_WAIT   -   
tcp0  0 10.34.84.76:567210.34.84.221:42293  
ESTABLISHED 119588/qpidd
tcp0  0 :::5672 :::*
LISTEN  119588/qpidd
[root@pmoravec-sat6 ~]#

gofer logs from the test:
Jan 11 15:37:40 localhost goferd: [WARNING][Thread-2] qpid.messaging:453 - 
recoverable error[attempt 1]: connection aborted
Jan 11 15:37:40 localhost goferd: [WARNING][Thread-2] qpid.messaging:455 - 
sleeping 1 seconds
Jan 11 15:37:40 localhost goferd: [WARNING][Thread-2] qpid.messaging:453 - 
recoverable error[attempt 1]: connection aborted
Jan 11 15:37:40 localhost goferd: [WARNING][Thread-2] qpid.messaging:455 - 
sleeping 1 seconds
Jan 11 15:37:41 localhost goferd: [WARNING][Thread-2] qpid.messaging:537 - 
trying: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:37:41 localhost goferd: [WARNING][Thread-2] qpid.messaging:453 - 
recoverable error[attempt 2]: [Errno 111] Connection refused
Jan 11 15:37:41 localhost goferd: [WARNING][Thread-2] qpid.messaging:455 - 
sleeping 2 seconds
Jan 11 15:37:41 localhost goferd: [WARNING][Thread-2] qpid.messaging:537 - 
trying: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:37:41 localhost goferd: [WARNING][Thread-2] qpid.messaging:453 - 
recoverable error[attempt 2]: [Errno 111] Connection refused
Jan 11 15:37:41 localhost goferd: [WARNING][Thread-2] qpid.messaging:455 - 
sleeping 2 seconds
Jan 11 15:37:43 localhost goferd: [WARNING][Thread-2] qpid.messaging:537 - 
trying: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:37:43 localhost goferd: [WARNING][Thread-2] qpid.messaging:537 - 
trying: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:37:43 localhost goferd: [WARNING][Thread-2] qpid.messaging:407 - 
reconnect succeeded: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:37:43 localhost goferd: [WARNING][Thread-2] qpid.messaging:407 - 
reconnect succeeded: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:37:57 localhost goferd: [WARNING][Thread-2] qpid.messaging:453 - 
recoverable error[attempt 0]: [Errno 111] Connection refused
Jan 11 15:37:57 localhost goferd: [WARNING][Thread-2] qpid.messaging:455 - 
sleeping 1 seconds
Jan 11 15:37:58 localhost goferd: [WARNING][Thread-2] qpid.messaging:537 - 
trying: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:37:58 localhost goferd: [WARNING][Thread-2] qpid.messaging:453 - 
recoverable error[attempt 1]: [Errno 111] Connection refused
Jan 11 15:37:58 localhost goferd: [WARNING][Thread-2] qpid.messaging:455 - 
sleeping 2 seconds
Jan 11 15:38:00 localhost goferd: [WARNING][Thread-2] qpid.messaging:537 - 
trying: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:38:00 localhost goferd: [WARNING][Thread-2] qpid.messaging:453 - 
recoverable error[attempt 2]: [Errno 111] Connection refused
Jan 11 15:38:00 localhost goferd: [WARNING][Thread-2] qpid.messaging:455 - 
sleeping 4 seconds
Jan 11 15:38:04 localhost goferd: [WARNING][Thread-2] qpid.messaging:537 - 
trying: pmoravec-sat6.gsslab.brq.redhat.com:5672
Jan 11 15:38:04 localhost goferd: 
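The sleep intervals in the log above (1, 2, 4 seconds) show the client's doubling reconnect backoff. A minimal sketch of that schedule follows; the function name and the cap are illustrative, not qpid.messaging's actual code.

```python
# Doubling reconnect backoff, as suggested by the "sleeping N seconds"
# log lines (1, 2, 4, ...). Illustrative only.
def backoff_delays(attempts, initial=1, maximum=64):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, maximum)  # double up to an assumed cap

print(list(backoff_delays(4)))
```

Reading the schedule this way also explains why the final failed attempt at 15:38:04 comes 4 seconds after the 15:38:00 one.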

[jira] [Commented] (QPID-6297) Python client (qpid.messaging) raises KeyError instead of reconnecting

2015-01-11 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272940#comment-14272940
 ] 

Pavel Moravec commented on QPID-6297:
-

Backtrace in human-readable form:

File "/usr/lib/python2.7/site-packages/gofer/transport/qpid/consumer.py", line 116, in get
    return self.__receiver.fetch(timeout=timeout)
File "<string>", line 6, in fetch
File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 1041, in fetch
    self._ecwait(lambda: not self.draining)
File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 50, in _ecwait
    result = self._ewait(lambda: self.closed or predicate(), timeout)
File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 993, in _ewait
    result = self.session._ewait(lambda: self.error or predicate(), timeout)
File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 580, in _ewait
    result = self.connection._ewait(lambda: self.error or predicate(), timeout)
File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 219, in _ewait
    self.check_error()
File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 212, in check_error
    raise e
InternalError: Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/qpid/messaging/driver.py", line 663, in write
    op.dispatch(self)
File "/usr/lib/python2.7/site-packages/qpid/ops.py", line 84, in dispatch
    getattr(target, handler)(self, *args)
File "/usr/lib/python2.7/site-packages/qpid/messaging/driver.py", line 888, in do_session_detached
    sst = self._sessions.pop(dtc.channel)
KeyError: 0

Potential cause:
1) the client calls receiver.fetch with a high timeout (here 10 seconds) - no 
message is available, so the library waits on the broker or for the timeout
2) library detects connection drop, so it detaches the session (with traceback:
[('/usr/lib64/python2.7/threading.py', 784, '__bootstrap', 
'self.__bootstrap_inner()'), ('/usr/lib64/python2.7/threading.py', 811, 
'__bootstrap_inner', 'self.run()'), ('/usr/lib64/python2.7/threading.py', 764, 
'run', 'self.__target(*self.__args, **self.__kwargs)'), 
('/usr/lib/python2.7/site-packages/qpid/selector.py', 141, 'run', 
'sel.readable()'), ('string', 6, 'readable', None), 
('/usr/lib/python2.7/site-packages/qpid/messaging/driver.py', 422, 'readable', 
'self.engine.write(data)'), 
('/usr/lib/python2.7/site-packages/qpid/messaging/driver.py', 664, 'write', 
'op.dispatch(self)'), ('/usr/lib/python2.7/site-packages/qpid/ops.py', 84, 
'dispatch', 'getattr(target, handler)(self, *args)'), 
('/usr/lib/python2.7/site-packages/qpid/messaging/driver.py', 886, 
'do_session_detached', 'sss = removing dtc.channel= + str(dtc.channel) + 
\\n + str(traceback.extract_stack()) + \\n')]
)

3) on the fetch timeout, an internal exception is raised about the detached 
session, so the connection driver is asked to remove the session (which has 
already been removed)
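The failure in step 3 can be sketched with a minimal, hypothetical Python example (not the library's actual code): `dict.pop(key)` raises `KeyError` on a duplicate removal, while `dict.pop(key, None)` would make the detach handler idempotent.

```python
# Minimal sketch of the double-detach, assuming a driver that tracks
# sessions in a dict keyed by channel (as driver.py's do_session_detached
# does with self._sessions.pop(dtc.channel)). Names are hypothetical.

class Driver(object):
    def __init__(self):
        self._sessions = {0: "session-0"}

    def do_session_detached_fragile(self, channel):
        # Mirrors the reported behaviour: raises KeyError when the
        # session was already removed by the connection-drop handling.
        return self._sessions.pop(channel)

    def do_session_detached_tolerant(self, channel):
        # One possible hardening: a default makes the second removal a no-op.
        return self._sessions.pop(channel, None)

d = Driver()
d.do_session_detached_fragile(0)        # first detach removes the session
try:
    d.do_session_detached_fragile(0)    # second detach raises, as in the bug
except KeyError:
    pass
d.do_session_detached_tolerant(0)       # second detach handled gracefully
```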


This should have a trivial reproducer (I hope), something like:

qpid-receive.py --timeout=10 -a "testQueue; {create:always}" -m10

and blocking iptables after a while (receiver should cycle)

(will test it later on)

 Python client (qpid.messaging) raises KeyError instead of reconnecting
 -

 Key: QPID-6297
 URL: https://issues.apache.org/jira/browse/QPID-6297
 Project: Qpid
  Issue Type: Bug
  Components: Python Client
Affects Versions: 0.22
 Environment: EL6
Reporter: Jeff Ortel
 Attachments: goferBug.cap


 Description of problem:
 After a temporary network outage causes gofer to lose its TCP connection to 
 the AMQP broker, it does not try to reconnect.
 How reproducible:
 100%
 Steps to Reproduce:
 1. Just to speedup reproducer, lower kernel tunable net.ipv4.tcp_retries2 to 
 e.g. 4:
 echo 4 > /proc/sys/net/ipv4/tcp_retries2
 2. Have consumer connected (with auto-reconnect enabled and heartbeats not 
 enabled) and receiver open on a queue address and check its TCP connections 
 to AMQP broker:
 netstat -anp | grep 5671
 (there should be 2 TCP connections)
 3. Emulate network outage via iptables:
 iptables -A OUTPUT -p tcp --dport 5671 -j REJECT
 4. Monitor /var/log/messages; once it logs WARNING recoverable error, flush 
 iptables (iptables -F).
 5. Wait few seconds.
 6. Check gofer TCP connections:
 netstat -anp | grep 5671
 Actual results:
 6. shows just 1 TCP connection
 /var/log/messages repeatedly logs:
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - 046d2084-b0f1-4de4-a039-89499d9e680d
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - Traceback (most recent call last): File 
 /usr/lib/python2.6/site-packages/gofer/transport/qpid/consumer.py, line 
 

[jira] [Commented] (QPID-6297) Python client (qpid.messaging) raises KeyError instead of reconnecting

2015-01-11 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273060#comment-14273060
 ] 

Pavel Moravec commented on QPID-6297:
-

Trivial reproducer:

0) decrease TCP retries:
echo 2 > /proc/sys/net/ipv4/tcp_retries2

1) Run this script that runs receiver.fetch(timeout=10) in a loop:

#!/usr/bin/env python
from qpid.messaging import *
import datetime

conn = Connection("localhost:5672", reconnect=1)
timeout=10

try:
  conn.open()
  sess = conn.session()

  recv = sess.receiver("testQueue; {create:always}")

  while (1):
    print "%s: before fetch, timeout=%s" % (datetime.datetime.now(), timeout)
    msg = Message()
    try:
      msg = recv.fetch(timeout=timeout)
    except ReceiverError, e:
      print e
    print "%s: after fetch, msg=%s" % (datetime.datetime.now(), msg)

  sess.close()

except ReceiverError, e:
  print e
except KeyboardInterrupt:
  pass

conn.close()

2) simulate network outage:
iptables -A OUTPUT -p tcp --dport 5672 -j REJECT; date

3) Once the script logs "No handlers could be found for logger 
qpid.messaging", flush iptables

4) Wait few seconds for the backtrace

 Python client (qpid.messaging) raises KeyError instead of reconnecting
 -

 Key: QPID-6297
 URL: https://issues.apache.org/jira/browse/QPID-6297
 Project: Qpid
  Issue Type: Bug
  Components: Python Client
Affects Versions: 0.22
 Environment: EL6
Reporter: Jeff Ortel
 Attachments: goferBug.cap


 Description of problem:
 After a temporary network outage causes gofer to lose its TCP connection to 
 the AMQP broker, it does not try to reconnect.
 How reproducible:
 100%
 Steps to Reproduce:
 1. Just to speedup reproducer, lower kernel tunable net.ipv4.tcp_retries2 to 
 e.g. 4:
 echo 4 > /proc/sys/net/ipv4/tcp_retries2
 2. Have consumer connected (with auto-reconnect enabled and heartbeats not 
 enabled) and receiver open on a queue address and check its TCP connections 
 to AMQP broker:
 netstat -anp | grep 5671
 (there should be 2 TCP connections)
 3. Emulate network outage via iptables:
 iptables -A OUTPUT -p tcp --dport 5671 -j REJECT
 4. Monitor /var/log/messages; once it logs WARNING recoverable error, flush 
 iptables (iptables -F).
 5. Wait few seconds.
 6. Check gofer TCP connections:
 netstat -anp | grep 5671
 Actual results:
 6. shows just 1 TCP connection
 /var/log/messages repeatedly logs:
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - 046d2084-b0f1-4de4-a039-89499d9e680d
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - Traceback (most recent call last): File 
 /usr/lib/python2.6/site-packages/gofer/transport/qpid/consumer.py, line 
 113, in get return self.__receiver.fetch(timeout=timeout) File string, 
 line 6, in fetch File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 1030, in 
 fetch self._ecwait(lambda: self.linked) File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 50, in 
 _ecwait result = self._ewait(lambda: self.closed or predicate(), timeout) 
 File /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 
 993, in _ewait result = self.session._ewait(lambda: self.error or 
 predicate(), timeout) File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 580, in 
 _ewait result = self.connection._ewait(lambda: self.error or predicate(), 
 timeout) File /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, 
 line 219, in _ewait self.check_error() File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 212, in 
 check_error raise e InternalError: Traceback (most recent call last): File 
 /usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 660, in 
 write op.dispatch(self) File /usr/lib/python2.6/site-packages/qpid/ops.py, 
 line 84, in dispatch getattr(target, handler)(self, *args) File 
 /usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 877, in 
 do_session_detached sst = self._sessions.pop(dtc.channel) KeyError: 'pop(): 
 dictionary is empty'
 Expected results:
 2nd TCP connection re-established, no errors in /var/log/messages



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-6297) Python client (qpid.messaging) raises KeyError instead of reconnecting

2015-01-11 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273068#comment-14273068
 ] 

Pavel Moravec commented on QPID-6297:
-

That was tested on qpid 0.22. Upstream qpid does not behave much better:

Traceback (most recent call last):
  File /root/Python_MRG/test_gofer_like.py, line 18, in module
msg = recv.fetch(timeout=timeout)
  File string, line 6, in fetch
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 1067, in fetch
self._ecwait(lambda: not self.draining)
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 50, in _ecwait
result = self._ewait(lambda: self.closed or predicate(), timeout)
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 1019, in _ewait
result = self.session._ewait(lambda: self.error or predicate(), timeout)
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 595, in _ewait
result = self.connection._ewait(lambda: self.error or predicate(), timeout)
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 234, in _ewait
self.check_error()
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 226, in check_error
self.close()
  File string, line 6, in close
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 345, in close
ssn.close(timeout=timeout)
  File string, line 6, in close
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 777, in close
self.sync(timeout=timeout)
  File string, line 6, in sync
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 768, in sync
if not self._ewait(lambda: not self.outgoing and not self.acked, 
timeout=timeout):
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 595, in _ewait
result = self.connection._ewait(lambda: self.error or predicate(), timeout)
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 234, in _ewait
self.check_error()
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 226, in check_error
self.close()
  File string, line 6, in close
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 345, in close
ssn.close(timeout=timeout)
  File string, line 6, in close
  File /data_xfs/qpid/cpp/BLD/src/tests/python/qpid/messaging/endpoints.py, 
line 777, in close
self.sync(timeout=timeout)
..
RuntimeError: maximum recursion depth exceeded


 Python client (qpid.messaging) raises KeyError instead of reconnecting
 -

 Key: QPID-6297
 URL: https://issues.apache.org/jira/browse/QPID-6297
 Project: Qpid
  Issue Type: Bug
  Components: Python Client
Affects Versions: 0.22
 Environment: EL6
Reporter: Jeff Ortel
 Attachments: goferBug.cap


 Description of problem:
 After a temporary network outage causes gofer to lose its TCP connection to 
 the AMQP broker, it does not try to reconnect.
 How reproducible:
 100%
 Steps to Reproduce:
 1. Just to speedup reproducer, lower kernel tunable net.ipv4.tcp_retries2 to 
 e.g. 4:
 echo 4 > /proc/sys/net/ipv4/tcp_retries2
 2. Have consumer connected (with auto-reconnect enabled and heartbeats not 
 enabled) and receiver open on a queue address and check its TCP connections 
 to AMQP broker:
 netstat -anp | grep 5671
 (there should be 2 TCP connections)
 3. Emulate network outage via iptables:
 iptables -A OUTPUT -p tcp --dport 5671 -j REJECT
 4. Monitor /var/log/messages; once it logs WARNING recoverable error, flush 
 iptables (iptables -F).
 5. Wait few seconds.
 6. Check gofer TCP connections:
 netstat -anp | grep 5671
 Actual results:
 6. shows just 1 TCP connection
 /var/log/messages repeatedly logs:
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - 046d2084-b0f1-4de4-a039-89499d9e680d
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - Traceback (most recent call last): File 
 /usr/lib/python2.6/site-packages/gofer/transport/qpid/consumer.py, line 
 113, in get return self.__receiver.fetch(timeout=timeout) File string, 
 line 6, in fetch File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 1030, in 
 fetch self._ecwait(lambda: self.linked) File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 50, in 
 _ecwait result = self._ewait(lambda: self.closed or predicate(), timeout) 
 File /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 
 993, in _ewait result = 

[jira] [Commented] (QPID-6297) Python client (qpid.messaging) raises KeyError instead of reconnecting

2015-01-07 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268935#comment-14268935
 ] 

Pavel Moravec commented on QPID-6297:
-

Why must the problem be in the Python library and not in goferd? Because the 
client raises an unhandled exception?

Isn't using heartbeats a workaround?



 Python client (qpid.messaging) raises KeyError instead of reconnecting
 -

 Key: QPID-6297
 URL: https://issues.apache.org/jira/browse/QPID-6297
 Project: Qpid
  Issue Type: Bug
  Components: Python Client
Affects Versions: 0.22
 Environment: EL6
Reporter: Jeff Ortel

 Description of problem:
 After a temporary network outage causes gofer to lose its TCP connection to 
 the AMQP broker, it does not try to reconnect.
 How reproducible:
 100%
 Steps to Reproduce:
 1. Just to speedup reproducer, lower kernel tunable net.ipv4.tcp_retries2 to 
 e.g. 4:
 echo 4 > /proc/sys/net/ipv4/tcp_retries2
 2. Have consumer connected (with auto-reconnect enabled and heartbeats not 
 enabled) and receiver open on a queue address and check its TCP connections 
 to AMQP broker:
 netstat -anp | grep 5671
 (there should be 2 TCP connections)
 3. Emulate network outage via iptables:
 iptables -A OUTPUT -p tcp --dport 5671 -j REJECT
 4. Monitor /var/log/messages; once it logs WARNING recoverable error, flush 
 iptables (iptables -F).
 5. Wait few seconds.
 6. Check gofer TCP connections:
 netstat -anp | grep 5671
 Actual results:
 6. shows just 1 TCP connection
 /var/log/messages repeatedly logs:
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - 046d2084-b0f1-4de4-a039-89499d9e680d
 Dec  1 16:39:02 pmoravec-rhel6-3 goferd: 
 [ERROR][pulp.agent.a726580c-5f1e-4a79-9f11-de0adc52c1e9] 
 gofer.transport.qpid.consumer:117 - Traceback (most recent call last): File 
 /usr/lib/python2.6/site-packages/gofer/transport/qpid/consumer.py, line 
 113, in get return self.__receiver.fetch(timeout=timeout) File string, 
 line 6, in fetch File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 1030, in 
 fetch self._ecwait(lambda: self.linked) File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 50, in 
 _ecwait result = self._ewait(lambda: self.closed or predicate(), timeout) 
 File /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 
 993, in _ewait result = self.session._ewait(lambda: self.error or 
 predicate(), timeout) File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 580, in 
 _ewait result = self.connection._ewait(lambda: self.error or predicate(), 
 timeout) File /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, 
 line 219, in _ewait self.check_error() File 
 /usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py, line 212, in 
 check_error raise e InternalError: Traceback (most recent call last): File 
 /usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 660, in 
 write op.dispatch(self) File /usr/lib/python2.6/site-packages/qpid/ops.py, 
 line 84, in dispatch getattr(target, handler)(self, *args) File 
 /usr/lib/python2.6/site-packages/qpid/messaging/driver.py, line 877, in 
 do_session_detached sst = self._sessions.pop(dtc.channel) KeyError: 'pop(): 
 dictionary is empty'
 Expected results:
 2nd TCP connection re-established, no errors in /var/log/messages



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-6213) qpidd misses heartbeats

2014-11-27 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14227458#comment-14227458
 ] 

Pavel Moravec commented on QPID-6213:
-

I have run some more stress tests (basically variants of the generic one) on 
upstream qpid patched by QPID-6213-svn-10.patch and no issue found. Kudos for 
the patch!

(I haven't reviewed the patch from a technical/code point of view, just applied 
it and ran the stress tests)

 qpidd misses heartbeats
 ---

 Key: QPID-6213
 URL: https://issues.apache.org/jira/browse/QPID-6213
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Gordon Sim
Assignee: Gordon Sim
 Fix For: 0.31

 Attachments: 
 0001-QPID-6213-Fix-misuse-of-Timer-in-queue-cleaning-code.patch, 
 QPID-6213-svn-10.patch, QPID-6213_suggested_further_fix.patch, 
 qpid-6213-broker-1.log, qpid-6213-broker.log, qpid-6213-svn-01.patch, 
 qpid-6213-svn-14.patch, qpidd.log.gz


 Caused by https://issues.apache.org/jira/browse/QPID-5758. Reproducer from 
 Pavel Moravec: create many heartbeat enabled connections and queues (e.g. 500 
 idle receivers, each with their own queue) and have the purge interval 
 relatively short (to speed up reproducing).
 The broker misses heartbeats and connections get timed out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-6213) qpidd misses heartbeats

2014-11-25 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-6213:

Attachment: qpidd.log.gz

I tested current upstream with Chuck's patch, but I still see the same 
problem as before (maybe a little more often than without this patch).

Attached are trace logs from the broker.

Reproducer:

./src/qpidd --auth=no --max-connections=1 --queue-purge-interval=60 
--log-to-file=/tmp/qpidd.log --trace --log-to-stderr=no

Then run bash script:
noDelQueues=1000

if [ $# -gt 0 ]; then
noDelQueues=$1
fi

maxIter=180
mySleep=10

echo "$(date): creating $noDelQueues connections.."
for i in $(seq 1 $noDelQueues); do qpid-receive --connection-options "{'heartbeat':5}" -a "autoDelQueueNoBound_${i}; {create:always, node:{x-declare:{auto-delete:True, arguments:{'qpid.auto_delete_timeout':1}}}}" -f --print-content=no > /dev/null 2>&1 & sleep 0.1; done

iter=0
while true; do
iter=$(($((iter))+1))
conns=$(pgrep qpid-receive | wc -w)
echo "$(date): iteration:$iter connections:$conns"
if [ $conns -lt $noDelQueues ]; then
echo "error: found just $conns connections instead of $noDelQueues, in iteration $iter"
break
fi
if [ $iter -eq $maxIter ]; then
echo "no error"
break
fi
sleep $mySleep
done

That usually finishes with the error around the 5th-15th iteration.

 qpidd misses heartbeats
 ---

 Key: QPID-6213
 URL: https://issues.apache.org/jira/browse/QPID-6213
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Gordon Sim
Assignee: Gordon Sim
 Fix For: 0.31

 Attachments: qpid-6213-svn-01.patch, qpidd.log.gz


 Caused by https://issues.apache.org/jira/browse/QPID-5758. Reproducer from 
 Pavel Moravec: create many heartbeat enabled connections and queues (e.g. 500 
 idle receivers, each with their own queue) and have the purge interval 
 relatively short (to speed up reproducing).
 The broker misses heartbeats and connections get timed out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPID-6232) [C++ broker] Linearstore segfaults when ulimit prevents creating new file in EFP

2014-11-18 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6232:
---

 Summary: [C++ broker] Linearstore segfaults when ulimit prevents 
creating new file in EFP
 Key: QPID-6232
 URL: https://issues.apache.org/jira/browse/QPID-6232
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor


When the EFP fails to open a new file (e.g. due to the nofile ulimit), linearstore 
segfaults with this backtrace:

#0  0x0034d429c1b9 in std::basic_stringchar, std::char_traitschar, 
std::allocatorchar ::rfind(char, unsigned long) const () from 
/usr/lib64/libstdc++.so.6
#1  0x7fe466719b50 in 
qpid::linearstore::journal::EmptyFilePool::takeEmptyFile (this=0xddb540, 
destDirectory=/var/lib/qpidd/qls/jrnl/Durable_4_8)
at 
/usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/EmptyFilePool.cpp:109
#2  0x7fe4667327a0 in 
qpid::linearstore::journal::LinearFileController::pullEmptyFileFromEfp 
(this=0x3478288)
at 
/usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/LinearFileController.cpp:239
#3  0x7fe466748bfd in qpid::linearstore::journal::wmgr::flush_check 
(this=0x34784f8, res=@0x7fe462f55f90, cont=@0x7fe462f55f9f, 
done=@0x7fe462f55f9c)
at /usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/wmgr.cpp:651
#4  0x7fe46674c134 in qpid::linearstore::journal::wmgr::enqueue 
(this=0x34784f8, data_buff=value optimized out, tot_data_len=177, 
this_data_len=value optimized out, dtokp=
0x7fe3e583ed60, xid_ptr=0x0, xid_len=0, tpc_flag=false, transient=false, 
external=false) at 
/usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/wmgr.cpp:223

The problem is that EmptyFilePool::overwriteFileContents does not react in any 
way when ofs.good() is false.
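The fix idea can be illustrated with a Python analogue (hypothetical names, not the actual C++ change): check the outcome of the write right away and fail fast, instead of silently handing back a file that was never initialized.

```python
import os
import tempfile

def overwrite_file_contents(path, payload):
    """Python analogue of EmptyFilePool::overwriteFileContents that
    reacts to a failed write instead of ignoring it (the C++ code
    ignores !ofs.good(), which leads to the later segfault)."""
    try:
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # surface I/O errors here, not later
    except OSError as e:
        # Equivalent of checking ofs.good(): report the failure now.
        raise RuntimeError("failed to initialize empty file %s: %s" % (path, e))
    return path

path = os.path.join(tempfile.mkdtemp(), "efp.jrnl")
overwrite_file_contents(path, b"\x00" * 64)
```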



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (QPID-6232) [C++ broker] Linearstore segfaults when ulimit prevents creating new file in EFP

2014-11-18 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6232.
---
   Resolution: Fixed
Fix Version/s: Future

Committed revision 1640357.


 [C++ broker] Linearstore segfaults when ulimit prevents creating new file in 
 EFP
 

 Key: QPID-6232
 URL: https://issues.apache.org/jira/browse/QPID-6232
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
  Labels: patch
 Fix For: Future


 When the EFP fails to open a new file (e.g. due to the nofile ulimit), 
 linearstore segfaults with this backtrace:
 #0  0x0034d429c1b9 in std::basic_stringchar, std::char_traitschar, 
 std::allocatorchar ::rfind(char, unsigned long) const () from 
 /usr/lib64/libstdc++.so.6
 #1  0x7fe466719b50 in 
 qpid::linearstore::journal::EmptyFilePool::takeEmptyFile (this=0xddb540, 
 destDirectory=/var/lib/qpidd/qls/jrnl/Durable_4_8)
 at 
 /usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/EmptyFilePool.cpp:109
 #2  0x7fe4667327a0 in 
 qpid::linearstore::journal::LinearFileController::pullEmptyFileFromEfp 
 (this=0x3478288)
 at 
 /usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/LinearFileController.cpp:239
 #3  0x7fe466748bfd in qpid::linearstore::journal::wmgr::flush_check 
 (this=0x34784f8, res=@0x7fe462f55f90, cont=@0x7fe462f55f9f, 
 done=@0x7fe462f55f9c)
 at /usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/wmgr.cpp:651
 #4  0x7fe46674c134 in qpid::linearstore::journal::wmgr::enqueue 
 (this=0x34784f8, data_buff=value optimized out, tot_data_len=177, 
 this_data_len=value optimized out, dtokp=
 0x7fe3e583ed60, xid_ptr=0x0, xid_len=0, tpc_flag=false, transient=false, 
 external=false) at 
 /usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/wmgr.cpp:223
 The problem is that EmptyFilePool::overwriteFileContents does not react in any 
 way when ofs.good() is false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPID-6182) AMQP 1.0 consumer should be able to get messages from browse-only queue

2014-10-23 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6182:
---

 Summary: AMQP 1.0 consumer should be able to get messages from 
browse-only queue
 Key: QPID-6182
 URL: https://issues.apache.org/jira/browse/QPID-6182
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor


Description of problem:
Qpid broker allows 0-10 consumers on browse-only queues, but does not allow 1.0 
consumers there. It should allow 1.0 consumers as well, marking them internally 
as browsers instead (like it does for 0-10 consumers).


Version-Release number of selected component (if applicable):
0.30


How reproducible:
100%


Steps to Reproduce:
1. service qpidd restart
2. qpid-config add queue q --argument=qpid.browse-only=true --limit-policy=ring 
--max-queue-count=100
3. qpid-send -a q -m1
4. qpid-receive -a q # works fine
5. qpid-receive -a q --connection-option "{protocol: amqp1.0}"


Actual results:
Step 5 returns:
qpid-receive: Link detached by peer with amqp:internal-error: not-allowed: 
Queue q is browse only.  Refusing acquiring consumer. 
(/home/pmoravec/qpid-trunk/qpid/cpp/src/qpid/broker/Queue.cpp:552)


Expected results:
Both 4. and 5. to return one message (one empty line corresponding to empty 
message content).


Additional info:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (QPID-6182) AMQP 1.0 consumer should be able to get messages from browse-only queue

2014-10-23 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6182.
---
   Resolution: Fixed
Fix Version/s: 0.31

Committed revision 1633798.


 AMQP 1.0 consumer should be able to get messages from browse-only queue
 ---

 Key: QPID-6182
 URL: https://issues.apache.org/jira/browse/QPID-6182
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
  Labels: easytest
 Fix For: 0.31


 Description of problem:
 Qpid broker allows 0-10 consumers on browse-only queues, but does not allow 
 1.0 consumers there. It should allow 1.0 consumers as well, marking them 
 internally as browsers instead (like it does for 0-10 consumers).
 Version-Release number of selected component (if applicable):
 0.30
 How reproducible:
 100%
 Steps to Reproduce:
 1. service qpidd restart
 2. qpid-config add queue q --argument=qpid.browse-only=true 
 --limit-policy=ring --max-queue-count=100
 3. qpid-send -a q -m1
 4. qpid-receive -a q # works fine
 5. qpid-receive -a q --connection-option "{protocol: amqp1.0}"
 Actual results:
 Step 5 returns:
 qpid-receive: Link detached by peer with amqp:internal-error: not-allowed: 
 Queue q is browse only.  Refusing acquiring consumer. 
 (/home/pmoravec/qpid-trunk/qpid/cpp/src/qpid/broker/Queue.cpp:552)
 Expected results:
 Both 4. and 5. to return one message (one empty line corresponding to empty 
 message content).
 Additional info:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (QPID-6177) qpid-tool should print warning when initial connection to broker fails

2014-10-23 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6177.
---
   Resolution: Fixed
Fix Version/s: Future

Committed revision 1633818.


 qpid-tool should print warning when initial connection to broker fails
 --

 Key: QPID-6177
 URL: https://issues.apache.org/jira/browse/QPID-6177
 Project: Qpid
  Issue Type: Improvement
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor
 Fix For: Future


 When I mistype brokerURL in e.g. qpid-stat, I get some error (ConnectError or 
 AuthenticationFailure). But when I invoke qpid-tool with invalid credentials 
 or hostname, the tool raises no exception or warning, leaving the user under 
 the wrong assumption the tool has connected successfully.
 qpid-tool should either exit or at least print out some warning message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPID-6177) qpid-tool should print warning when initial connection to broker fails

2014-10-22 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6177:
---

 Summary: qpid-tool should print warning when initial connection to 
broker fails
 Key: QPID-6177
 URL: https://issues.apache.org/jira/browse/QPID-6177
 Project: Qpid
  Issue Type: Improvement
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor


When I mistype brokerURL in e.g. qpid-stat, I get some error (ConnectError or 
AuthenticationFailure). But when I invoke qpid-tool with invalid credentials or 
hostname, the tool raises no exception or warning, leaving the user under the 
wrong assumption the tool has connected successfully.

qpid-tool should either exit or at least print out some warning message



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (QPID-6157) linearstore: segfault when 2 journals request new journal file from empty EFP

2014-10-17 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6157.
---
   Resolution: Fixed
Fix Version/s: 0.31

Committed revision 1632504.


 linearstore: segfault when 2 journals request new journal file from empty EFP
 -

 Key: QPID-6157
 URL: https://issues.apache.org/jira/browse/QPID-6157
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
  Labels: patch
 Fix For: 0.31


 Description of problem:
 Broker using linearstore module can segfault when:
 - EFP is empty
 - 2 journals concurrently request new journal file from EFP
 There is a race condition described in Additional info that leads to segfault.
 Version-Release number of selected component (if applicable):
 any
 How reproducible:
 100% in few minutes (on faster machines)
 Steps to Reproduce:
 Reproducer script:
 topics=10
 queues_per_topic=10
 rm -rf /var/lib/qpidd/* /tmp/qpidd.log
 service qpidd restart
 echo "$(date): creating $(($((topics))*$((queues_per_topic)))) queues"
 for i in $(seq 1 $topics); do
   for j in $(seq 1 $queues_per_topic); do
 qpid-receive -a "Durable_${i}_${j}; {create:always, node:{durable:true, x-bindings:[{exchange:'amq.direct', queue:'Durable_${i}_${j}', key:'${i}'}] }}" &
   done
 done
 wait
 echo $(date): queues created
 while true; do
   echo $(date): publishing messages..
   for i in $(seq 1 $topics); do
 qpid-send -a amq.direct/${i} -m 100 --durable=yes 
 --content-size=1000 
   done
   wait
   echo $(date): consuming messages..
   for i in $(seq 1 $topics); do
 for j in $(seq 1 $queues_per_topic); do
   qpid-receive -a Durable_${i}_${j} -m 100 --print-content=no &
 done
   done
   wait
 done
 #end of the script
 Actual results:
 segfault with bt:
 Thread 1 (Thread 0x7ff85b3f1700 (LWP 17810)):
 #0  0x7ff9927104f3 in std::basic_string<char, std::char_traits<char>, 
 std::allocator<char> >::assign(std::basic_string<char, 
 std::char_traits<char>, std::allocator<char> > const&) () from 
 /usr/lib64/libstdc++.so.6
 No symbol table info available.
 #1  0x7ff98e59d6a1 in operator= (this=0x1ab3480) at 
 /usr/include/c++/4.4.7/bits/basic_string.h:511
 No locals.
 #2  qpid::linearstore::journal::EmptyFilePool::popEmptyFile (this=0x1ab3480)
 at 
 /usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/EmptyFilePool.cpp:213
 l = {_sm = @0x1ab34f8}
 emptyFileName = 
 isEmpty = true
 #3  0x7ff98e59ddec in 
 qpid::linearstore::journal::EmptyFilePool::takeEmptyFile (this=0x1ab3480, 
 destDirectory="/var/lib/qpidd/qls/jrnl/DurableQueue")
 at 
 /usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/EmptyFilePool.cpp:108
 emptyFileName = 
 newFileName = 
 Expected results:
 no segfault
 Additional info:
 Relevant source code:
 std::string EmptyFilePool::popEmptyFile() {
     std::string emptyFileName;
     bool isEmpty = false;
     {
         slock l(emptyFileListMutex_);
         isEmpty = emptyFileList_.empty();
     }
     if (isEmpty) {
         createEmptyFile();
     }
     {
         slock l(emptyFileListMutex_);
         emptyFileName = emptyFileList_.front(); // <-- line 213
         emptyFileList_.pop_front();
     }
     return emptyFileName;
 }
 If two requests (R1 and R2) are made concurrently when the EFP is empty, the 
 following interleaving is possible:
 - R1 runs the function up to line 212 (the second lock), creating one empty 
 file along the way
 - R2 does the same - but the EFP now holds one file, so no new file is created
 - R1 (or R2, it does not matter) continues past line 212 and takes the empty 
 file
 - the second request then tries to take an empty file from the now-empty EFP 
 and triggers the segfault






[jira] [Updated] (QPID-6160) CLONE - [CPP Broker] [CPP Client] Disable SSLv3 support

2014-10-17 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-6160:

Component/s: (was: Java Client)
 (was: Java Broker)
 C++ Client
 C++ Broker
Description: 
SSLv3 is vulnerable to CVE-2014-3566, and will not be fixed. 

Wherever a secure connection is established we should ensure that SSLv3 is not 
in the supported protocols.

  was:

SSLv3 is vulnerable to CVE-2014-3566, and will not be fixed. 

Wherever a secure connection is established we should ensure that SSLv3 is not 
in the supported protocols.


 CLONE - [CPP Broker] [CPP Client] Disable SSLv3 support
 ---

 Key: QPID-6160
 URL: https://issues.apache.org/jira/browse/QPID-6160
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker, C++ Client
Reporter: Ken Giusti
Assignee: Ken Giusti
 Fix For: 0.31


 SSLv3 is vulnerable to CVE-2014-3566, and will not be fixed. 
 Wherever a secure connection is established we should ensure that SSLv3 is not 
 in the supported protocols.






[jira] [Created] (QPID-6157) linearstore: segfault when 2 journals request new journal file from empty EFP

2014-10-16 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6157:
---

 Summary: linearstore: segfault when 2 journals request new journal 
file from empty EFP
 Key: QPID-6157
 URL: https://issues.apache.org/jira/browse/QPID-6157
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec


Description of problem:
A broker using the linearstore module can segfault when:
- the EFP is empty
- 2 journals concurrently request a new journal file from the EFP

There is a race condition, described in Additional info, that leads to the segfault.


Version-Release number of selected component (if applicable):
any


How reproducible:
100% in a few minutes (on faster machines)


Steps to Reproduce:
Reproducer script:

topics=10
queues_per_topic=10

rm -rf /var/lib/qpidd/* /tmp/qpidd.log
service qpidd restart

echo "$(date): creating $((topics*queues_per_topic)) queues"
for i in $(seq 1 $topics); do
  for j in $(seq 1 $queues_per_topic); do
qpid-receive -a "Durable_${i}_${j}; {create:always, node:{durable:true, 
x-bindings:[{exchange:'amq.direct', queue:'Durable_${i}_${j}', key:'${i}'}] }}" &
  done
done
wait

echo $(date): queues created
while true; do
  echo $(date): publishing messages..
  for i in $(seq 1 $topics); do
qpid-send -a amq.direct/${i} -m 100 --durable=yes --content-size=1000 &
  done
  wait
  echo $(date): consuming messages..
  for i in $(seq 1 $topics); do
for j in $(seq 1 $queues_per_topic); do
  qpid-receive -a Durable_${i}_${j} -m 100 --print-content=no &
done
  done
  wait
done

#end of the script


Actual results:
segfault with bt:

Thread 1 (Thread 0x7ff85b3f1700 (LWP 17810)):
#0  0x7ff9927104f3 in std::basic_string<char, std::char_traits<char>, 
std::allocator<char> >::assign(std::basic_string<char, std::char_traits<char>, 
std::allocator<char> > const&) () from /usr/lib64/libstdc++.so.6
No symbol table info available.
#1  0x7ff98e59d6a1 in operator= (this=0x1ab3480) at 
/usr/include/c++/4.4.7/bits/basic_string.h:511
No locals.
#2  qpid::linearstore::journal::EmptyFilePool::popEmptyFile (this=0x1ab3480)
at 
/usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/EmptyFilePool.cpp:213
l = {_sm = @0x1ab34f8}
emptyFileName = 
isEmpty = true
#3  0x7ff98e59ddec in 
qpid::linearstore::journal::EmptyFilePool::takeEmptyFile (this=0x1ab3480, 
destDirectory="/var/lib/qpidd/qls/jrnl/DurableQueue")
at 
/usr/src/debug/qpid-0.22/cpp/src/qpid/linearstore/journal/EmptyFilePool.cpp:108
emptyFileName = 
newFileName = 


Expected results:
no segfault


Additional info:
Relevant source code:

std::string EmptyFilePool::popEmptyFile() {
    std::string emptyFileName;
    bool isEmpty = false;
    {
        slock l(emptyFileListMutex_);
        isEmpty = emptyFileList_.empty();
    }
    if (isEmpty) {
        createEmptyFile();
    }
    {
        slock l(emptyFileListMutex_);
        emptyFileName = emptyFileList_.front(); // <-- line 213
        emptyFileList_.pop_front();
    }
    return emptyFileName;
}

If two requests (R1 and R2) are made concurrently when the EFP is empty, the 
following interleaving is possible:
- R1 runs the function up to line 212 (the second lock), creating one empty 
file along the way
- R2 does the same - but the EFP now holds one file, so no new file is created
- R1 (or R2, it does not matter) continues past line 212 and takes the empty 
file
- the second request then tries to take an empty file from the now-empty EFP 
and triggers the segfault
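One way to close this race - a minimal sketch only, not necessarily what the committed fix (r1632504) does - is to hold the list mutex across the emptiness check, the refill, and the pop, so no second request can observe the pool empty between those steps. The class name and the file-name scheme below are invented for illustration:

```cpp
#include <cassert>
#include <list>
#include <mutex>
#include <string>

// Minimal model of the linearstore EmptyFilePool. The fix idea: re-check
// emptiness and refill under the SAME lock that guards the pop, so front()
// can never run against an empty list.
class EmptyFilePoolModel {
public:
    std::string popEmptyFile() {
        std::lock_guard<std::mutex> l(emptyFileListMutex_);
        while (emptyFileList_.empty()) {
            createEmptyFile();          // refill under the same lock
        }
        std::string emptyFileName = emptyFileList_.front();
        emptyFileList_.pop_front();     // safe: list is non-empty here
        return emptyFileName;
    }
private:
    // Stand-in for the real file-creating routine (which does disk I/O).
    void createEmptyFile() {
        emptyFileList_.push_back("efp-file-" + std::to_string(counter_++));
    }
    std::mutex emptyFileListMutex_;
    std::list<std::string> emptyFileList_;
    int counter_ = 0;
};
```

Note the trade-off: in the real class createEmptyFile() performs file-system work, so holding the mutex across it serializes refills; the original code's split-lock design presumably tried to avoid exactly that, at the cost of this race.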







[jira] [Created] (QPID-6147) [C++ broker linearstore] missing journal id in trace Mgmt create journal. log

2014-10-13 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6147:
---

 Summary:  [C++ broker linearstore] missing journal id in trace 
Mgmt create journal. log
 Key: QPID-6147
 URL: https://issues.apache.org/jira/browse/QPID-6147
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Trivial


Description of problem:
When creating a journal in linearstore, the broker logs:

2014-10-13 10:35:32 [Model] trace Mgmt create journal. id:

without the queue name as the expected id.

The journal name / id in the relevant QMF object is set properly later on (see 
qpid-tool, list journal); it is just missing in the trace log.


How reproducible:
100%


Steps to Reproduce:
1. qpidd 
--log-enable=trace+:qmf::org::apache::qpid::linearstore::Journal::Journal

2. (in 2nd terminal) qpid-config add queue Durable --durable

3. (in 1st terminal): check output; restart broker && check output


Actual results:
2014-10-13 12:55:28 [Model] trace Mgmt create journal. id:

(without Durable as the id)
(both when created via qpid-config and after restart)


Expected results:
2014-10-13 12:55:28 [Model] trace Mgmt create journal. id:Durable

(both when created via qpid-config and after restart)


Additional info:






[jira] [Closed] (QPID-6147) [C++ broker linearstore] missing journal id in trace Mgmt create journal. log

2014-10-13 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6147.
---
   Resolution: Fixed
Fix Version/s: Future

Committed revision 1631360.


  [C++ broker linearstore] missing journal id in trace Mgmt create journal. 
 log
 

 Key: QPID-6147
 URL: https://issues.apache.org/jira/browse/QPID-6147
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Trivial
  Labels: easyfix, easytest
 Fix For: Future


 Description of problem:
 When creating a journal in linearstore, broker logs:
 2014-10-13 10:35:32 [Model] trace Mgmt create journal. id:
 without the queue name as the expected id.
 The journal name / id in the relevant QMF object is set properly later on 
 (see qpid-tool, list journal); it is just missing in the trace log.
 How reproducible:
 100%
 Steps to Reproduce:
 1. qpidd 
 --log-enable=trace+:qmf::org::apache::qpid::linearstore::Journal::Journal
 2. (in 2nd terminal) qpid-config add queue Durable --durable
 3. (in 1st terminal): check output; restart broker && check output
 Actual results:
 2014-10-13 12:55:28 [Model] trace Mgmt create journal. id:
 (without Durable as the id)
 (both when created via qpid-config and after restart)
 Expected results:
 2014-10-13 12:55:28 [Model] trace Mgmt create journal. id:Durable
 (both when created via qpid-config and after restart)
 Additional info:






[jira] [Created] (QPID-6148) purging TTL expired messages via purge task should not increase acquires counters

2014-10-13 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6148:
---

 Summary:  purging TTL expired messages via purge task should not 
increase acquires counters 
 Key: QPID-6148
 URL: https://issues.apache.org/jira/browse/QPID-6148
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Trivial


Description of problem:
When the purge task (which runs every 10 minutes) removes expired messages, 
the per-queue and per-broker acquires counters are increased for every 
message purged this way.

That does not make much sense, as the message is technically not acquired. 
Moreover, purging the same message the other way (removing it when finding 
a message to send/acquire to some consumer) does not increase the counter.


How reproducible:
100%


Steps to Reproduce:
# echo "queue-purge-interval=10" >> /etc/qpid/qpidd.conf
# service qpidd restart
# qpid-send -a "q; {create:always}" -m1000 --ttl=1000
# sleep 10
# qpid-stat -q q | egrep '(acquires|ttl-expired)'; qpid-stat -g | egrep 
'(acquires|ttl-expired)'


Actual results:
  acquires                1000
  discards-ttl-expired    1000
  acquires                1008
  discards-ttl-expired    1000

(the 2nd acquires - brokerwide - should be > 1000 due to the qpid-tool 
acquiring some messages)

Expected results:
  acquires                0
  discards-ttl-expired    1000
  acquires                8
  discards-ttl-expired    1000

(the 2nd acquires - brokerwide - should be > 0 due to qpid-tool acquiring some 
messages, but surely < 1000)
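The counter semantics argued for above can be captured in a small sketch. All names here are invented for illustration, not taken from the broker source: the consumer-driven path is the one that bumps acquires, while the purge-task path should move only the TTL-discard counter:

```cpp
#include <cassert>

// Hypothetical model of the two removal paths for TTL-expired messages.
struct QueueStats {
    unsigned acquires = 0;            // messages handed to consumers
    unsigned discardsTtlExpired = 0;  // messages dropped because TTL expired
};

// Consumer-driven discard: the queue finds the message expired while
// selecting one to deliver; per the report this path already does NOT
// bump acquires, only the discard counter.
void discardOnDelivery(QueueStats& s) {
    s.discardsTtlExpired += 1;
}

// Purge-task path: the message never reaches a consumer, so the fix makes
// this path behave the same way - discard counter only, acquires untouched.
void purgeExpired(QueueStats& s, unsigned n) {
    s.discardsTtlExpired += n;
}
```

With this behaviour, sending 1000 expired messages and letting the purge task run yields acquires=0 and discards-ttl-expired=1000 on the queue, matching the Expected results above.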







[jira] [Closed] (QPID-6148) purging TTL expired messages via purge task should not increase acquires counters

2014-10-13 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec closed QPID-6148.
---
   Resolution: Fixed
Fix Version/s: Future

Committed revision 1631396.


  purging TTL expired messages via purge task should not increase acquires 
 counters 
 ---

 Key: QPID-6148
 URL: https://issues.apache.org/jira/browse/QPID-6148
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Trivial
 Fix For: Future


 Description of problem:
 When the purge task (which runs every 10 minutes) removes expired messages, 
 the per-queue and per-broker acquires counters are increased for every 
 message purged this way.
 That does not make much sense, as the message is technically not acquired. 
 Moreover, purging the same message the other way (removing it when finding 
 a message to send/acquire to some consumer) does not increase the counter.
 How reproducible:
 100%
 Steps to Reproduce:
 # echo "queue-purge-interval=10" >> /etc/qpid/qpidd.conf
 # service qpidd restart
 # qpid-send -a "q; {create:always}" -m1000 --ttl=1000
 # sleep 10
 # qpid-stat -q q | egrep '(acquires|ttl-expired)'; qpid-stat -g | egrep 
 '(acquires|ttl-expired)'
 Actual results:
   acquires                1000
   discards-ttl-expired    1000
   acquires                1008
   discards-ttl-expired    1000
 (the 2nd acquires - brokerwide - should be > 1000 due to the qpid-tool 
 acquiring some messages)
 Expected results:
   acquires                0
   discards-ttl-expired    1000
   acquires                8
   discards-ttl-expired    1000
 (the 2nd acquires - brokerwide - should be > 0 due to qpid-tool acquiring some 
 messages, but surely < 1000)






[jira] [Updated] (QPID-6118) Add qmf shutdown command to the broker

2014-09-26 Thread Pavel Moravec (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Moravec updated QPID-6118:

Attachment: QPID-6118-inspiration.patch

Inspiration for final patch, including ACLs.

The attached patch works well, except that:
- the caller of the shutdown QMF command does not get a response, as the broker 
shuts down before responding. Not sure if that is desired/acceptable behaviour.
- the broker reacts by calling the Broker::shutdown method only. Not sure what 
else to call/clean up/.. when processing the QMF request.

Some testing:
(*) without ACLs preventing shutdown (shutdown_broker is a trivial program that 
just calls the QMF shutdown method against the broker object):
$ ./shutdown_broker
2014-09-26 15:05:39 [Client] warning Connection 
[127.0.0.1:35530-127.0.0.1:5672] closed
Failed to connect (reconnect disabled)
$
(broker traces show normal shutdown)

(*) with ACLs preventing shutdown:
$ cat ~/.qpidd/qpidd.acl
acl deny all shutdown broker
acl allow all all
$ ./shutdown_broker 
fetching response with timeout 1000ms.
Error: �_values��
error_code
error_text�xunauthorized-access: ACL denied broker shutdown from anonymous@QPID 
(/data_xfs/qpid/cpp/src/qpid/broker/Broker.cpp:1305)
$

 Add qmf shutdown command to the broker 
 ---

 Key: QPID-6118
 URL: https://issues.apache.org/jira/browse/QPID-6118
 Project: Qpid
  Issue Type: Improvement
  Components: C++ Broker
Affects Versions: 0.28
Reporter: Alan Conway
Assignee: Alan Conway
 Attachments: QPID-6118-inspiration.patch


 Add a QMF shutdown command to the broker. On receiving this command the 
 broker would shut down in the same way as if it received a kill -TERM.
 The shutdown command must be restricted by a new ACL rule for security 
 purposes.
 Discussed on the qpid user list, all responses to the idea were positive:
 http://qpid.2158936.n2.nabble.com/QPID-C-Dynamically-Managing-Broker-td7613792.html#a7614175






[jira] [Commented] (QPID-6118) Add qmf shutdown command to the broker

2014-09-26 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149166#comment-14149166
 ] 

Pavel Moravec commented on QPID-6118:
-

Good point, Chuck.

So then this patch is sufficient (apart from the missing QMF response):

Index: src/qpid/broker/management-schema.xml
===
--- src/qpid/broker/management-schema.xml   (revision 1627786)
+++ src/qpid/broker/management-schema.xml   (working copy)
@@ -194,6 +194,9 @@
     <arg name="targetQueue" dir="I" type="sstr" desc="Redirect target 
queue. Blank disables redirect."/>
     </method>
 
+    <method name="shutdown" desc="Shutdown the broker">
+    </method>
+
   </class>
 
   <!--
Index: src/qpid/broker/Broker.cpp
===
--- src/qpid/broker/Broker.cpp  (revision 1627786)
+++ src/qpid/broker/Broker.cpp  (working copy)
@@ -689,6 +689,13 @@
         status = queueRedirect(srcQueue, tgtQueue, getCurrentPublisher());
         break;
     }
+    case _qmf::Broker::METHOD_SHUTDOWN :
+    {
+        QPID_LOG (debug, "Broker::shutdown()");
+        status = Manageable::STATUS_OK;
+        shutdown();
+        break;
+    }
     default:
         QPID_LOG (debug, "Broker ManagementMethod not implemented: id=" << 
methodId << "]");
         status = Manageable::STATUS_NOT_IMPLEMENTED;


ACL preventing the shutdown is:
acl deny all access method name=shutdown


FYI: to apply a change in management-schema.xml, "make clean; make" did not help 
me. I had to completely remove the build directory and re-run cmake to 
re-create qmf/org/apache/qpid/broker/Broker.h from the xml file.

 Add qmf shutdown command to the broker 
 ---

 Key: QPID-6118
 URL: https://issues.apache.org/jira/browse/QPID-6118
 Project: Qpid
  Issue Type: Improvement
  Components: C++ Broker
Affects Versions: 0.28
Reporter: Alan Conway
Assignee: Alan Conway
 Attachments: QPID-6118-inspiration.patch


 Add a QMF shutdown command to the broker. On receiving this command the 
 broker would shut down in the same way as if it received a kill -TERM.
 The shutdown command must be restricted by a new ACL rule for security 
 purposes.
 Discussed on the qpid user list, all responses to the idea were positive:
 http://qpid.2158936.n2.nabble.com/QPID-C-Dynamically-Managing-Broker-td7613792.html#a7614175






[jira] [Created] (QPID-6113) qpid-stat -u to show Outgoing objects for AMQP 1.0

2014-09-23 Thread Pavel Moravec (JIRA)
Pavel Moravec created QPID-6113:
---

 Summary: qpid-stat -u to show Outgoing objects for AMQP 1.0
 Key: QPID-6113
 URL: https://issues.apache.org/jira/browse/QPID-6113
 Project: Qpid
  Issue Type: Improvement
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor


Description of problem:
qpid-stat -u serves as a good command for understanding which consumers of 
which queues exist. But for an AMQP 1.0 consumer, the command does not list the 
AMQP 1.0 subscription, as the consumer link is maintained in an Outgoing QMF 
object, and qpid-stat can't cope with outgoing (or incoming) objects.

It is required to:
- either add an option -o (and optionally -i) for listing outgoing (optionally 
incoming) links
- or enhance qpid-stat -u to list AMQP 1.0 outgoing links


Version-Release number of selected component (if applicable):
qpid-cpp-server 0.22-48


How reproducible:
100%


Steps to Reproduce:
1. qpid-receive -a "someQueue; {create:always}" --connection-option 
"{protocol:amqp1.0}" -f &
2. qpid-stat -u 


Actual results:
There is no subscription of someQueue listed. There is no way to see the 
outgoing links via any qpid-stat option.


Expected results:
Some qpid-stat option to list outgoing links.







[jira] [Commented] (QPID-6113) qpid-stat -u to show Outgoing objects for AMQP 1.0

2014-09-23 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14144743#comment-14144743
 ] 

Pavel Moravec commented on QPID-6113:
-

Review request for a patch extending qpid-stat -u to cover Outgoings: 
https://reviews.apache.org/r/25938/

 qpid-stat -u to show Outgoing objects for AMQP 1.0
 --

 Key: QPID-6113
 URL: https://issues.apache.org/jira/browse/QPID-6113
 Project: Qpid
  Issue Type: Improvement
  Components: Python Tools
Affects Versions: 0.30
Reporter: Pavel Moravec
Assignee: Pavel Moravec
Priority: Minor

 Description of problem:
 qpid-stat -u serves as a good command for understanding which consumers of 
 which queues exist. But for an AMQP 1.0 consumer, the command does not list 
 the AMQP 1.0 subscription, as the consumer link is maintained in an Outgoing 
 QMF object, and qpid-stat can't cope with outgoing (or incoming) objects.
 It is required to:
 - either add an option -o (and optionally -i) for listing outgoing 
 (optionally incoming) links
 - or enhance qpid-stat -u to list AMQP 1.0 outgoing links
 Version-Release number of selected component (if applicable):
 qpid-cpp-server 0.22-48
 How reproducible:
 100%
 Steps to Reproduce:
 1. qpid-receive -a "someQueue; {create:always}" --connection-option 
 "{protocol:amqp1.0}" -f &
 2. qpid-stat -u 
 Actual results:
 There is no subscription of someQueue listed. There is no way to see the 
 outgoing links via any qpid-stat option.
 Expected results:
 Some qpid-stat option to list outgoing links.





