Re: Potential qpid cpp 0.28 bug

2014-07-08 Thread Gordon Sim

On 07/08/2014 05:08 AM, Duong Quynh (FSU1.Z8.IP) wrote:

Here's your log Gordon.


[...]

2
2014-07-08 11:06:27 [Messaging] debug wakeupDriver()
2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 11:06:27 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-08 11:06:27 [Protocol] trace [4cc71475-8b5c-4127-b630-7d02c05da20b]: 0 -> @transfer(20) 
[handle=0, delivery-id=0, delivery-tag=b\x00\x00\x00\x00, message-format=0, settled=false, 
more=false] (31) \x00Sp\xc0\x04\x02BP\x00\x00St\xc1\x01\x00\x00Su\xa0\x0bHello news!
2014-07-08 11:06:27 [Network] debug tcp:localhost:5672 encoded 65 bytes from 
65535
2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 11:06:27 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 11:06:27 [Messaging] trace tcp:localhost:5672 decode(48)
2014-07-08 11:06:27 [Protocol] trace [4cc71475-8b5c-4127-b630-7d02c05da20b]: 0 
<- @flow(19) [next-incoming-id=1, incoming-window=1, next-outgoing-id=1, 
outgoing-window=0, handle=0, delivery-count=1, link-credit=100]
2014-07-08 11:06:27 [Protocol] trace [4cc71475-8b5c-4127-b630-7d02c05da20b]: 0 
<- @disposition(21) [role=true, first=0, last=0, settled=true, 
state=@accepted(36) []]
2014-07-08 11:06:27 [Network] debug tcp:localhost:5672 decoded 48 bytes from 48


This looks like the change is not in effect. (I'd expect to see an 
additional log message here if it were). Is it possible that your newly 
patched and built library is not being picked up? Did you install the 
previous version yourself and into the same place?



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



RE: Potential qpid cpp 0.28 bug

2014-07-08 Thread Duong Quynh (FSU1.Z8.IP)
Yes I did. I ran make clean, deleted the CMakeCache.txt, and redid the whole process 
from cmake ..; if that doesn't work then something is wrong with the make 
process. I'll wipe out the build folder and retry.

Quynh

-Original Message-
From: Gordon Sim [mailto:g...@redhat.com] 
Sent: Tuesday, July 08, 2014 3:38 PM
To: dev@qpid.apache.org
Subject: Re: Potential qpid cpp 0.28 bug

On 07/08/2014 05:08 AM, Duong Quynh (FSU1.Z8.IP) wrote:
> Here's your log Gordon.

[...]
> 2
> 2014-07-08 11:06:27 [Messaging] debug wakeupDriver()
> 2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
> 2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
> 2014-07-08 11:06:27 [Messaging] trace tcp:localhost:5672 encode(65535)
> 2014-07-08 11:06:27 [Protocol] trace [4cc71475-8b5c-4127-b630-7d02c05da20b]: 
> 0 -> @transfer(20) [handle=0, delivery-id=0, 
> delivery-tag=b\x00\x00\x00\x00, message-format=0, settled=false, 
> more=false] (31) 
> \x00Sp\xc0\x04\x02BP\x00\x00St\xc1\x01\x00\x00Su\xa0\x0bHello news!
> 2014-07-08 11:06:27 [Network] debug tcp:localhost:5672 encoded 65 
> bytes from 65535
> 2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 
> Sasl::canEncode(): 0 || 0
> 2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 
> Sasl::canEncode(): 0 || 0
> 2014-07-08 11:06:27 [Messaging] trace tcp:localhost:5672 encode(65535)
> 2014-07-08 11:06:27 [Security] trace tcp:localhost:5672 
> Sasl::canEncode(): 0 || 0
> 2014-07-08 11:06:27 [Messaging] trace tcp:localhost:5672 decode(48)
> 2014-07-08 11:06:27 [Protocol] trace 
> [4cc71475-8b5c-4127-b630-7d02c05da20b]: 0 <- @flow(19) 
> [next-incoming-id=1, incoming-window=1, next-outgoing-id=1, 
> outgoing-window=0, handle=0, delivery-count=1, link-credit=100]
> 2014-07-08 11:06:27 [Protocol] trace 
> [4cc71475-8b5c-4127-b630-7d02c05da20b]: 0 <- @disposition(21) 
> [role=true, first=0, last=0, settled=true, state=@accepted(36) []]
> 2014-07-08 11:06:27 [Network] debug tcp:localhost:5672 decoded 48 
> bytes from 48

This looks like the change is not in effect. (I'd expect to see an additional 
log message here if it were). Is it possible that your newly patched and built 
library is not being picked up? Did you install the previous version yourself 
and into the same place?





Re: Potential qpid cpp 0.28 bug

2014-07-08 Thread Gordon Sim

On 07/08/2014 09:40 AM, Duong Quynh (FSU1.Z8.IP) wrote:

Yes I did, I make clean, deleted the CMake_Cache.txt, redid the whole process from 
cmake .. if that doesn't work then something is wrong with the make process. 
I'll wipe out the build folder and retry.


That should certainly have done it. I just don't understand the log 
trace. Could you get a stack trace for all the threads (pstack <pid> of the 
producer process), to be sure we know where it is waiting?


(The protocol trace shows the session window is set to one message by 
the broker. Therefore only one message can initially be sent. The broker 
then does move that window forwards by one, which should allow another 
transfer to be sent. QPID-5737 fixes a case where this doesn't happen if 
the application is already closing the session.)





RE: Potential qpid cpp 0.28 bug

2014-07-08 Thread Duong Quynh (FSU1.Z8.IP)
Deleted all qpid libs in /usr/lib64 (rm -rf *qpid*)
Deleted build folder
Recreated build folder
cmake ..
make
make install
Rebuilt program
Result: same as before

./amq-producer 
2014-07-08 16:05:42 [Messaging] debug Driver started
2014-07-08 16:05:42 [Messaging] info Starting connection to 
amqp:tcp:localhost:5672
2014-07-08 16:05:42 [Messaging] info Connecting to tcp:localhost:5672
2014-07-08 16:05:42 [Messaging] debug tcp:localhost:5672 Connecting ...
2014-07-08 16:05:42 [System] info Connecting: [::1]:5672
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 1 || 0
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 1 || 0
2014-07-08 16:05:42 [Protocol] debug tcp:localhost:5672 writing protocol 
header: 1-0
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::encode(65535): 8
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Messaging] debug tcp:localhost:5672 Connected
2014-07-08 16:05:42 [Messaging] debug wakeupDriver()
2014-07-08 16:05:42 [Messaging] debug tcp:localhost:5672 Waiting to be 
authenticated...
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Protocol] debug tcp:localhost:5672 read protocol header: 
1-0
2014-07-08 16:05:42 [Security] trace Reading SASL frame of size 34
2014-07-08 16:05:42 [Security] trace Reading SASL-MECHANISMS
2014-07-08 16:05:42 [Protocol] debug tcp:localhost:5672 Received 
SASL-MECHANISMS(ANONYMOUS PLAIN )
2014-07-08 16:05:42 [Security] debug CyrusSasl::start(PLAIN )
2014-07-08 16:05:42 [Security] debug min_ssf: 0, max_ssf: 256
2014-07-08 16:05:42 [Security] debug getUserFromSettings(): guest
2014-07-08 16:05:42 [Security] debug CyrusSasl::start(PLAIN ): selected PLAIN 
response: '\x00guest\x00guest'
2014-07-08 16:05:42 [Security] trace Completed encoding of frame of 52 bytes
2014-07-08 16:05:42 [Protocol] debug tcp:localhost:5672 Sent SASL-INIT(PLAIN, 
\x00guest\x00guest, localhost)
2014-07-08 16:05:42 [Messaging] debug wakeupDriver()
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::decode(42): 42
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 
1
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 
1
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::encode(65535): 52
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Security] trace Reading SASL frame of size 16
2014-07-08 16:05:42 [Security] trace Reading SASL-OUTCOME
2014-07-08 16:05:42 [Protocol] debug tcp:localhost:5672 Received 
SASL-OUTCOME(\x00)
2014-07-08 16:05:42 [Messaging] debug wakeupDriver()
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::decode(16): 16
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-08 16:05:42 [Protocol] trace [e0ecca2e-049e-402f-bd0f-575b8efba722]:   
-> AMQP
2014-07-08 16:05:42 [Network] debug tcp:localhost:5672 encoded 8 bytes from 
65535
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Messaging] trace tcp:localhost:5672 decode(8)
2014-07-08 16:05:42 [Protocol] trace [e0ecca2e-049e-402f-bd0f-575b8efba722]:   
<- AMQP
2014-07-08 16:05:42 [Network] debug tcp:localhost:5672 decoded 8 bytes from 8
2014-07-08 16:05:42 [Messaging] debug tcp:localhost:5672 Authenticated
2014-07-08 16:05:42 [Messaging] debug tcp:localhost:5672 Opening...
2014-07-08 16:05:42 [Messaging] debug wakeupDriver()
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-08 16:05:42 [Protocol] trace [e0ecca2e-049e-402f-bd0f-575b8efba722]: 0 
-> @open(16) [container-id=e0ecca2e-049e-402f-bd0f-575b8efba722, 
properties={:qpid.client_process=amq-producer, :qpid.client_pid=48301, 
:qpid.client_ppid=41102}]
2014-07-08 16:05:42 [Network] debug tcp:localhost:5672 encoded 155 bytes from 
65535
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-08 16:05:42 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-08 16:05:42 [Messaging] trace tcp:localhost:5672 decode(23)
2014-07-08 16:05:42 [Protocol] trace [e0ecca2e-049e-402f-bd0f-575b8efba722]: 0 
<- 

RE: Potential qpid cpp 0.28 bug

2014-07-08 Thread Duong Quynh (FSU1.Z8.IP)
Here you go, it's waiting in Sender.close() as seen from the stack.

[root@localhost ~]# ps aux | grep amq
root 48326  0.3  0.4 164584  7880 pts/0Sl+  16:09   0:00 ./amq-producer
root 48347  0.0  0.0 103252   836 pts/2S+   16:10   0:00 grep amq
[root@localhost ~]# pstack 48326
Thread 2 (Thread 0x7f77f570a700 (LWP 48327)):
#0  0x00336d0e9153 in epoll_wait () from /lib64/libc.so.6
#1  0x7f77f5df4ecc in qpid::sys::Poller::wait(qpid::sys::Duration) () from 
/usr/local/lib64/libqpidcommon.so.2
#2  0x7f77f5df4af5 in qpid::sys::Poller::run() () from 
/usr/local/lib64/libqpidcommon.so.2
#3  0x7f77f5de94cb in qpid::sys::(anonymous namespace)::runRunnable(void*) 
() from /usr/local/lib64/libqpidcommon.so.2
#4  0x00336d4079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x00336d0e8b5d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f77f570e860 (LWP 48326)):
#0  0x00336d40b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x7f77f6728523 in qpid::sys::Condition::wait(qpid::sys::Mutex&) () from 
/usr/local/lib64/libqpidmessaging.so.2
#2  0x7f77f672878f in qpid::sys::Monitor::wait() () from 
/usr/local/lib64/libqpidmessaging.so.2
#3  0x7f77f6720ab6 in qpid::messaging::amqp::ConnectionContext::wait() () 
from /usr/local/lib64/libqpidmessaging.so.2
#4  0x7f77f6720b19 in 
qpid::messaging::amqp::ConnectionContext::wait(boost::shared_ptr<qpid::messaging::amqp::SessionContext>)
 () from /usr/local/lib64/libqpidmessaging.so.2
#5  0x7f77f671e7c4 in 
qpid::messaging::amqp::ConnectionContext::detach(boost::shared_ptr<qpid::messaging::amqp::SessionContext>,
 boost::shared_ptr<qpid::messaging::amqp::SenderContext>) () from 
/usr/local/lib64/libqpidmessaging.so.2
#6  0x7f77f673c768 in qpid::messaging::amqp::SenderHandle::close() () from 
/usr/local/lib64/libqpidmessaging.so.2
#7  0x7f77f6797965 in qpid::messaging::Sender::close() () from 
/usr/local/lib64/libqpidmessaging.so.2
#8  0x00403738 in main (argc=1, argv=0x7fff0845f6b8) at 
../amq-producer/main.cpp:43

-Original Message-
From: Gordon Sim [mailto:g...@redhat.com] 
Sent: Tuesday, July 08, 2014 4:09 PM
To: dev@qpid.apache.org
Subject: Re: Potential qpid cpp 0.28 bug

On 07/08/2014 09:40 AM, Duong Quynh (FSU1.Z8.IP) wrote:
 Yes I did, I make clean, deleted the CMake_Cache.txt, redid the whole process 
 from cmake .. if that doesn't work then something is wrong with the make 
 process. I'll wipe out the build folder and retry.

That should certainly have done it. I just don't understand the log trace. 
Could you get a stack trace for all the threads (pstack <pid> of the producer 
process), to be sure we know where it is waiting?

(The protocol trace shows the session window is set to one message by the 
broker. Therefore only one message can initially be sent. The broker then does 
move that window forwards by one, which should allow another transfer to be 
sent. QPID-5737 fixes a case where this doesn't happen if the application is 
already closing the session.)




Re: Potential qpid cpp 0.28 bug

2014-07-08 Thread Gordon Sim

On 07/08/2014 10:11 AM, Duong Quynh (FSU1.Z8.IP) wrote:

Here you go, it's waiting in Sender.close() as seen from the stack.


Ah, your example code must be a little different from what you sent at 
the beginning of the thread, as that did not call sender.close(). I was 
assuming it was blocked in session.close() as per the example.


On the face of it this looks like perhaps a proton bug. There should 
have been a detach emitted, but that seems not to be the case. I wonder 
if perhaps the session window is somehow preventing that.


Could you try this new patch and see if that has any effect?
diff -up ./src/qpid/messaging/amqp/ConnectionContext.cpp.orig ./src/qpid/messaging/amqp/ConnectionContext.cpp
--- ./src/qpid/messaging/amqp/ConnectionContext.cpp.orig	2014-07-08 10:43:01.396355339 +0100
+++ ./src/qpid/messaging/amqp/ConnectionContext.cpp	2014-07-08 10:43:28.566356812 +0100
@@ -321,6 +321,7 @@ void ConnectionContext::detach(boost::sh
     wakeupDriver();
     while (pn_link_state(lnk->sender) & PN_REMOTE_ACTIVE) {
         wait(ssn);
+        wakeupDriver();
     }
     ssn->removeSender(lnk->getName());
 }



[jira] [Created] (QPID-5883) Error message for certain authentication failures is not clear

2014-07-08 Thread Gordon Sim (JIRA)
Gordon Sim created QPID-5883:


 Summary: Error message for certain authentication failures is not 
clear
 Key: QPID-5883
 URL: https://issues.apache.org/jira/browse/QPID-5883
 Project: Qpid
  Issue Type: Bug
  Components: C++ Client
Reporter: Gordon Sim
Assignee: Gordon Sim
 Fix For: 0.29


E.g. when specifying PLAIN but not specifying username and password, or when 
trying to choose an unsupported mechanism.



--
This message was sent by Atlassian JIRA
(v6.2#6252)




[jira] [Commented] (QPID-5883) Error message for certain authentication failures is not clear

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054812#comment-14054812
 ] 

ASF subversion and git services commented on QPID-5883:
---

Commit 1608711 from [~gsim] in branch 'qpid/trunk'
[ https://svn.apache.org/r1608711 ]

QPID-5883: improve error message a little for 'no-mech' sasl error

 Error message for certain authentication failures is not clear
 --

 Key: QPID-5883
 URL: https://issues.apache.org/jira/browse/QPID-5883
 Project: Qpid
  Issue Type: Bug
  Components: C++ Client
Reporter: Gordon Sim
Assignee: Gordon Sim
 Fix For: 0.29


 E.g. when specifying PLAIN but not specifying username and password, or when 
 trying to choose an unsupported mechanism.



--
This message was sent by Atlassian JIRA
(v6.2#6252)




Re: Review Request 23305: [C++ broker] Make memory usage consistent after broker restart

2014-07-08 Thread Pavel Moravec

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23305/
---

(Updated July 8, 2014, 11:56 a.m.)


Review request for qpid, Gordon Sim and Kim van der Riet.


Changes
---

updated diff per Gordon's comments (all except the latest).


Bugs: QPID-5880
https://issues.apache.org/jira/browse/QPID-5880


Repository: qpid


Description
---

Simple idea:
- in Queue::enqueue, set PersistableMessage::persistencyID manually to some 
unique number that is identical across all message instances that have a common 
SharedState - e.g. the pointer to the SharedState
- during journal recovery, if we recover a message with an already seen 
persistencyID, use the previously seen instance instead, with its SharedState and 
PersistableMessage bits

Known limitation:
- message annotations added on some queue (e.g. due to queue sequencing 
being enabled) will be either over-written or shared across all other queues during 
recovery

The patch contains a new QueueSettings option to enable (by default disabled) 
this feature on a per-queue basis. This somewhat mitigates the limitation above.

Isn't storing a pointer to SharedState on disk (via persistencyID) some sort 
of security breach? (I don't think so, but it is worth asking.)

Can't manual setup of persistencyID break something in the store? (AFAIK no, as 
uniqueness of the ID is assured: 1) a new / different message with the same 
persistencyID can appear only after the previous instance is gone from memory, 
and 2) only queues with the option enabled are checked for message coupling)

Will it help in a cluster? No, it won't. When the primary broker gets 1 message to 
an exchange that distributes it to 100 queues, the broker updates backup brokers 
via 100 individual "enqueue 1 message to queue q[1-100]" events. So backup 
brokers consume more memory than the primary - the same amount as if the primary 
did not share SharedState at all.

So it is reasonable for standalone brokers only.


Diffs (updated)
-

  /trunk/qpid/cpp/src/qpid/broker/Queue.cpp 1608083 
  /trunk/qpid/cpp/src/qpid/broker/QueueSettings.h 1608083 
  /trunk/qpid/cpp/src/qpid/broker/QueueSettings.cpp 1608083 
  /trunk/qpid/cpp/src/qpid/broker/RecoveryManagerImpl.h 1608083 
  /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.h 1608083 
  /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.cpp 1608083 
  /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.h 1608083 
  /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.cpp 1608083 

Diff: https://reviews.apache.org/r/23305/diff/


Testing
---

No significant difference in memory consumption before & after restart (in a 
setup of 500 queues with qpid.store_msgID=true and thousands of messages sent 
via a fanout exchange to all of them).

Automated tests passed.


Thanks,

Pavel Moravec



Re: Review Request 23305: [C++ broker] Make memory usage consistent after broker restart

2014-07-08 Thread Pavel Moravec

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23305/
---

(Updated July 8, 2014, 12:53 p.m.)


Review request for qpid, Gordon Sim and Kim van der Riet.


Changes
---

Added a check that the option can't be used with paging, as messages offloaded 
to disk and re-loaded back might have a different pointer to SharedState, making 
the pointer (as the value of persistencyID) not unique.


Bugs: QPID-5880
https://issues.apache.org/jira/browse/QPID-5880


Repository: qpid


Description
---

Simple idea:
- in Queue::enqueue, set PersistableMessage::persistencyID manually to some 
unique number that is identical across all message instances that have a common 
SharedState - e.g. the pointer to the SharedState
- during journal recovery, if we recover a message with an already seen 
persistencyID, use the previously seen instance instead, with its SharedState and 
PersistableMessage bits

Known limitation:
- message annotations added on some queue (e.g. due to queue sequencing 
being enabled) will be either over-written or shared across all other queues during 
recovery

The patch contains a new QueueSettings option to enable (by default disabled) 
this feature on a per-queue basis. This somewhat mitigates the limitation above.

Isn't storing a pointer to SharedState on disk (via persistencyID) some sort 
of security breach? (I don't think so, but it is worth asking.)

Can't manual setup of persistencyID break something in the store? (AFAIK no, as 
uniqueness of the ID is assured: 1) a new / different message with the same 
persistencyID can appear only after the previous instance is gone from memory, 
and 2) only queues with the option enabled are checked for message coupling)

Will it help in a cluster? No, it won't. When the primary broker gets 1 message to 
an exchange that distributes it to 100 queues, the broker updates backup brokers 
via 100 individual "enqueue 1 message to queue q[1-100]" events. So backup 
brokers consume more memory than the primary - the same amount as if the primary 
did not share SharedState at all.

So it is reasonable for standalone brokers only.


Diffs (updated)
-

  /trunk/qpid/cpp/src/qpid/broker/Queue.cpp 1608083 
  /trunk/qpid/cpp/src/qpid/broker/QueueSettings.h 1608083 
  /trunk/qpid/cpp/src/qpid/broker/QueueSettings.cpp 1608083 
  /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.h 1608083 
  /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.cpp 1608083 
  /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.h 1608083 
  /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.cpp 1608083 

Diff: https://reviews.apache.org/r/23305/diff/


Testing
---

No significant difference in memory consumption before & after restart (in a 
setup of 500 queues with qpid.store_msgID=true and thousands of messages sent 
via a fanout exchange to all of them).

Automated tests passed.


Thanks,

Pavel Moravec



[jira] [Resolved] (QPID-5883) Error message for certain authentication failures is not clear

2014-07-08 Thread Gordon Sim (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gordon Sim resolved QPID-5883.
--

Resolution: Fixed

 Error message for certain authentication failures is not clear
 --

 Key: QPID-5883
 URL: https://issues.apache.org/jira/browse/QPID-5883
 Project: Qpid
  Issue Type: Bug
  Components: C++ Client
Reporter: Gordon Sim
Assignee: Gordon Sim
 Fix For: 0.29


 E.g. when specifying PLAIN but not specifying username and password, or when 
 trying to choose an unsupported mechanism.



--
This message was sent by Atlassian JIRA
(v6.2#6252)




[jira] [Commented] (QPID-5872) [C++ client] Memory leak in qpid::messaging::amqp::ConnectionContext::ConnectionContext when using qpid-receive

2014-07-08 Thread Pavel Moravec (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054922#comment-14054922
 ] 

Pavel Moravec commented on QPID-5872:
-

Some other leaks when closing session with open receiver there. To reproduce, 
apply this patch:

{noformat}
Index: ../src/tests/qpid-receive.cpp
===================================================================
--- ../src/tests/qpid-receive.cpp   (revision 1608083)
+++ ../src/tests/qpid-receive.cpp   (working copy)
@@ -195,6 +195,7 @@
         connection = Connection(opts.url, opts.connectionOptions);
         connection.open();
         std::auto_ptr<FailoverUpdates> updates(opts.failoverUpdates ? new FailoverUpdates(connection) : 0);
+        for (int ii = 0; ii < 1000; ii++) {
         Session session = opts.tx ? connection.createTransactionalSession() : connection.createSession();
         Receiver receiver = session.createReceiver(opts.address);
         receiver.setCapacity(opts.capacity);
@@ -287,6 +288,7 @@
             session.acknowledge();
         }
         session.close();
+        }
         connection.close();
         return 0;
 }
{noformat}

And then run:

valgrind --leak-check=full ./qpid-receive --connection-options 
"{protocol:amqp1.0}" --print-headers true --messages 1 --address q

with output like:

{noformat}
==18796== 226,344 (1,024 direct, 225,320 indirect) bytes in 2 blocks are 
definitely lost in loss record 586 of 616
==18796==at 0x4A06409: malloc (in 
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==18796==by 0x3710815868: pni_map_allocate (object.c:425)
==18796==by 0x371081634E: pn_map (object.c:477)
==18796==by 0x3710816A9C: pn_hash (object.c:674)
==18796==by 0x3710821044: pn_session (engine.c:735)
==18796==by 0x4C5A71C: 
qpid::messaging::amqp::ConnectionContext::newSession(bool, std::string const&) 
(in /home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x4C68FBE: 
qpid::messaging::amqp::ConnectionHandle::newSession(bool, std::string const&) 
(in /home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x4CC6F3F: 
qpid::messaging::Connection::createSession(std::string const&) (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x40CDEB: main (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/tests/qpid-receive)
==18796==
==18796== 565,860 (240 direct, 565,620 indirect) bytes in 5 blocks are 
definitely lost in loss record 592 of 616
==18796==at 0x4A06409: malloc (in 
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==18796==by 0x3710815961: pn_new (object.c:41)
==18796==by 0x371081625A: pn_list (object.c:359)
==18796==by 0x3710820F72: pn_session (engine.c:721)
==18796==by 0x4C5A71C: 
qpid::messaging::amqp::ConnectionContext::newSession(bool, std::string const&) 
(in /home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x4C68FBE: 
qpid::messaging::amqp::ConnectionHandle::newSession(bool, std::string const&) 
(in /home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x4CC6F3F: 
qpid::messaging::Connection::createSession(std::string const&) (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x40CDEB: main (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/tests/qpid-receive)
==18796==
==18796== 12,157,572 (57,552 direct, 12,100,020 indirect) bytes in 109 blocks 
are definitely lost in loss record 613 of 616
==18796==at 0x4A06409: malloc (in 
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==18796==by 0x3710815961: pn_new (object.c:41)
==18796==by 0x3710821293: pn_link_new (engine.c:817)
==18796==by 0x4C6BE97: 
qpid::messaging::amqp::ReceiverContext::ReceiverContext(pn_session_t*, 
std::string const&, qpid::messaging::Address const&) (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x4C76BA1: 
qpid::messaging::amqp::SessionContext::createReceiver(qpid::messaging::Address 
const&) (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x4C7DE8B: 
qpid::messaging::amqp::SessionHandle::createReceiver(qpid::messaging::Address 
const&) (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x4CCDC1D: qpid::messaging::Session::createReceiver(std::string 
const&) (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/libqpidmessaging.so.2.0.0)
==18796==by 0x40CE36: main (in 
/home/pmoravec/qpid-trunk/qpid/cpp/BLD/src/tests/qpid-receive)
==18796==
==18796== 63,013,928 (152,864 direct, 62,861,064 indirect) bytes in 562 blocks 
are definitely lost in loss record 616 of 616
==18796==at 0x4A06409: malloc (in 
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==18796==by 0x3710815961: pn_new (object.c:41)
==18796==   

Re: Review Request 23305: [C++ broker] Make memory usage consistent after broker restart

2014-07-08 Thread Alan Conway

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23305/#review47442
---


Using a pointer as an identifier isn't safe. Pointer values can be re-used as 
soon as the in-memory copy of the message is deleted, so they are not unique 
over time. A global atomic counter might be a possibility but we would need to 
benchmark for performance consequences before making this the default behavior.

- Alan Conway


On July 8, 2014, 12:53 p.m., Pavel Moravec wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23305/
 ---
 
 (Updated July 8, 2014, 12:53 p.m.)
 
 
 Review request for qpid, Gordon Sim and Kim van der Riet.
 
 
 Bugs: QPID-5880
 https://issues.apache.org/jira/browse/QPID-5880
 
 
 Repository: qpid
 
 
 Description
 ---
 
 Simple idea:
 - in Queue::enqueue, set PersistableMessage::persistencyID manually to some 
 unique number that is identical to all message instances that has common 
 SharedState - e.g. to the pointer to SharedState
 - during journal recovery, if we recover a message with already seen 
 persistencyID, use the previous seen instead with its SharedState and 
 PersistableMessage bits
 
 Known limitation:
 - message annotations added to some queue (e.g. due to queue sequencing 
 enabled) will be either over-written or shared to all other queues during 
 recovery
 
 The patch contains a new QueueSettings option to enable (by default disabled) 
 this feature on per queue basis. This somehow limits the limitation above.
 
 Isn't storing pointer to SharedState to the disk (via persistencyID) some 
 sort of security breach? (I dont think so, but worth to ask)
 
 Can't manual setup of persistencyID break something in store? (AFAIK no as 
 uniqueness of the ID is assured: 1) a new / different message with the same 
 persistencyID can appear only after the previous instance is gone from 
 memory, and 2) only queues with the option enabled are checked for message 
 coupling)
 
 Will it help in cluster? No, it won't. As when primary broker gets 1 message 
 to an exchange that distributes it to 100 queues, th broker updates backup 
 brokers via 100 individual enqueue 1 message to queue q[1-100] events. So 
 backup brokers consume more memory than primary - the same amount like 
 primary does not share SharedState at all.
 
 So it is reasonable for standalone brokers only.
 
 
 Diffs
 -
 
   /trunk/qpid/cpp/src/qpid/broker/Queue.cpp 1608083 
   /trunk/qpid/cpp/src/qpid/broker/QueueSettings.h 1608083 
   /trunk/qpid/cpp/src/qpid/broker/QueueSettings.cpp 1608083 
   /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.h 1608083 
   /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.cpp 1608083 
   /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.h 1608083 
   /trunk/qpid/cpp/src/qpid/linearstore/MessageStoreImpl.cpp 1608083 
 
 Diff: https://reviews.apache.org/r/23305/diff/
 
 
 Testing
 ---
 
 No significant difference in memory consumption before  after restart (in 
 setup of 500 queues with qpid.store_msgID=true and thousands of messages sent 
 via fanout exchange to all of them).
 
 Automated tests passed.
 
 
 Thanks,
 
 Pavel Moravec
 




Re: Review Request 23305: [C++ broker] Make memory usage consistent after broker restart

2014-07-08 Thread Pavel Moravec


 On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
  Having the broker set the persistence id in some cases, where the current 
  expectation is that the store sets it, makes me nervous. How does this 
  affect the windows store(s) for example? Does any store make any 
  assumptions about the id that would no longer hold?

Good point.

Both linearstore and legacystore make no such assumption. They just set (and 
require) a unique ID for each record within one journal, but don't otherwise care 
about it. In legacystore, there is an int comparison within the loadContent method 
(used only for journal recovery) that can make the function less efficient if 
unordered persistence IDs are found.

Anyway it would be great if Kim could confirm I am right.

Both Windows stores overwrite the persistence ID with their own value, so for the 
ms-clfs and ms-sql stores the patch won't have any impact at all.


 On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
  /trunk/qpid/cpp/src/qpid/broker/QueueSettings.h, line 81
  https://reviews.apache.org/r/23305/diff/2/?file=624628#file624628line81
 
  Should be camel case rather than underscore, like other option members.

Fixed.


 On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
  /trunk/qpid/cpp/src/qpid/broker/QueueSettings.cpp, line 61
  https://reviews.apache.org/r/23305/diff/2/?file=624629#file624629line61
 
  Odd option name, qpid.store_msg_id would be neater. I don't feel the 
  option name really properly describes the function though. It's really more 
  something like 'share recovered messages'

I did not like the option name either; it was just a first draft. Thanks for the 
suggestion, SHARE_RECOVERED_MSGS (qpid.share_recovered_msgs) is used now.


 On July 7, 2014, 5:17 p.m., Gordon Sim wrote:
  /trunk/qpid/cpp/src/qpid/legacystore/MessageStoreImpl.cpp, line 987
  https://reviews.apache.org/r/23305/diff/2/?file=624631#file624631line987
 
  Would be nicer to contain this all in the broker code, e.g. perhaps in 
  qpid/broker/RecoveryManagerImpl.cpp?

That would make sense from a logical point of view, as the broker should be 
responsible for the message re-coupling in its memory. But:

It would have to be in the RecoveryManagerImpl class, to have the message_ids map 
broker-wide.

RecoveryManagerImpl::recoverMessage is aware of one message only, without 
context (like the queue it belongs to or what persistency ID the message should 
have). I would have to extend the method arguments to include the persistencyID 
and a RecoverableQueue::shared_ptr to move that functionality there. The queue 
pointer is necessary so that the message_ids map keeps only pointers to messages 
from qpid.share_recovered_msgs queues; otherwise the map would cache redundant 
pointers as well.

Is it worth adding the two arguments to a method that is called for every 
message recovered, if the feature will be applied only to some queues?

Or is there some other option I overlooked?
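
For concreteness, the extended signature being debated might look roughly like this (Python sketch; the class shape, argument names, and attribute names are invented here, not the actual qpid API):

```python
# Sketch of moving the re-coupling into RecoveryManagerImpl::recoverMessage,
# with the two extra arguments discussed above. All names are illustrative.

class QueueSketch:
    def __init__(self, share_recovered_msgs):
        # Mirrors the per-queue qpid.share_recovered_msgs option.
        self.share_recovered_msgs = share_recovered_msgs

class RecoveryManagerSketch:
    def __init__(self):
        # Broker-wide map: persistency ID -> first recovered message instance.
        self.message_ids = {}

    def recover_message(self, message, persistency_id, queue):
        # Cache pointers only for opted-in queues, so the map does not fill
        # with redundant entries for every recovered message.
        if not queue.share_recovered_msgs:
            return message
        existing = self.message_ids.get(persistency_id)
        if existing is not None:
            return existing  # re-couple to the previously seen instance
        self.message_ids[persistency_id] = message
        return message
```

The queue argument is what keeps the map small: messages recovered for queues without the option never enter it.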


- Pavel


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23305/#review47397
---


On July 8, 2014, 12:53 p.m., Pavel Moravec wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/23305/
 ---
 
 (Updated July 8, 2014, 12:53 p.m.)
 
 
 Review request for qpid, Gordon Sim and Kim van der Riet.
 
 
 Bugs: QPID-5880
 https://issues.apache.org/jira/browse/QPID-5880
 
 
 Repository: qpid
 
 
 Description
 ---
 
 Simple idea:
 - in Queue::enqueue, set PersistableMessage::persistencyID manually to some 
 unique number that is identical to all message instances that has common 
 SharedState - e.g. to the pointer to SharedState
 - during journal recovery, if we recover a message with already seen 
 persistencyID, use the previous seen instead with its SharedState and 
 PersistableMessage bits
 
 Known limitation:
 - message annotations added by some queue (e.g. due to queue sequencing 
 being enabled) will be either overwritten or shared across all other queues 
 during recovery
 
 The patch adds a new QueueSettings option that enables this feature on a 
 per-queue basis (disabled by default). This somewhat mitigates the limitation 
 above.
 
 Isn't storing a pointer to SharedState on disk (via persistencyID) some 
 sort of security risk? (I don't think so, but it is worth asking.)
 
 Can't setting persistencyID manually break something in the store? (AFAIK 
 no, as uniqueness of the ID is assured: 1) a new / different message with the 
 same persistencyID can appear only after the previous instance is gone from 
 memory, and 2) only queues with the option enabled are checked for message 
 coupling.)
 
 Will it help in a cluster? No, it won't. When the primary broker gets one 
 message on an exchange that distributes it to 100 queues, the broker updates 
 the backup brokers via 100 

[jira] [Commented] (DISPATCH-56) Implement Create/Read/Update/Delete operations in the agent

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055282#comment-14055282
 ] 

ASF subversion and git services commented on DISPATCH-56:
-

Commit 1608884 from [~aconway] in branch 'dispatch/trunk'
[ https://svn.apache.org/r1608884 ]

DISPATCH-56: Push configuration from python to C for dispatch router.

Python code parses config files as an entity list and pushes each configuration
entity into the C code.

- libqpid_dispatch.py: ctypes python wrapper around libqpid-dispatch
- Various error handling and memory management fixes.
- Introduce EnumValue, C code can take enum as string or int.
- Generate C source code for schema enum values.

 Implement Create/Read/Update/Delete operations in the agent
 ---

 Key: DISPATCH-56
 URL: https://issues.apache.org/jira/browse/DISPATCH-56
 Project: Qpid Dispatch
  Issue Type: Sub-task
  Components: Management Agent
Reporter: Ted Ross
Assignee: Alan Conway
 Fix For: 0.3


 Implement the CRUD-style commands in the management agent and use them to 
 access waypoints, address-prefixes, self, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-56) Implement Create/Read/Update/Delete operations in the agent

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055281#comment-14055281
 ] 

ASF subversion and git services commented on DISPATCH-56:
-

Commit 1608882 from [~aconway] in branch 'dispatch/trunk'
[ https://svn.apache.org/r1608882 ]

DISPATCH-56: Replace qpid_dispatch_internal.config with 
qpid_dispatch_internal.management.

Replace the old config schema with the new management schema.
- Various error handling improvements.
- Router code calls out to new config parsing/schema code.
- Renamed waypoint.name to address, to avoid conflict with the standard management 
entity name attribute.
- Using new qdrouterd.conf man page, generated from schema.
- Added references and fixed value attributes to schema.

 Implement Create/Read/Update/Delete operations in the agent
 ---

 Key: DISPATCH-56
 URL: https://issues.apache.org/jira/browse/DISPATCH-56
 Project: Qpid Dispatch
  Issue Type: Sub-task
  Components: Management Agent
Reporter: Ted Ross
Assignee: Alan Conway
 Fix For: 0.3


 Implement the CRUD-style commands in the management agent and use them to 
 access waypoints, address-prefixes, self, etc.






[jira] [Commented] (DISPATCH-56) Implement Create/Read/Update/Delete operations in the agent

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055283#comment-14055283
 ] 

ASF subversion and git services commented on DISPATCH-56:
-

Commit 1608889 from [~aconway] in branch 'dispatch/trunk'
[ https://svn.apache.org/r1608889 ]

DISPATCH-56: Implement CREATE AMQP management operations.

agent.py:
- python management agent.
- Handles CREATE request for log, listener, connector, fixed-address, waypoint.
- Listens on $mangement2 for now, old C agent is still in place.

node.py: make CREATE calls

system_tests_management: tests to verify create calls.

 Implement Create/Read/Update/Delete operations in the agent
 ---

 Key: DISPATCH-56
 URL: https://issues.apache.org/jira/browse/DISPATCH-56
 Project: Qpid Dispatch
  Issue Type: Sub-task
  Components: Management Agent
Reporter: Ted Ross
Assignee: Alan Conway
 Fix For: 0.3


 Implement the CRUD-style commands in the management agent and use them to 
 access waypoints, address-prefixes, self, etc.






[jira] [Updated] (QPID-5870) Closing a topic consumer should delete its exclusive auto-delete queue

2014-07-08 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5870:
---

Attachment: QPID-5870.part2.patch

I added a check to ensure that the subscription queue is not deleted for 
durable subscribers.

All existing tests pass including the durable subscription tests.

 Closing a topic consumer should delete its exclusive auto-delete queue
 --

 Key: QPID-5870
 URL: https://issues.apache.org/jira/browse/QPID-5870
 Project: Qpid
  Issue Type: Bug
Reporter: Rajith Attapattu
 Attachments: QPID-5870.part2.patch, QPID-5870.patch


 When a topic consumer is closed, the subscription queue needs to be closed as 
 well.
 Currently this queue is only deleted when the session is closed (due to being 
 marked auto-deleted).






[jira] [Commented] (QPID-5870) Closing a topic consumer should delete its exclusive auto-delete queue

2014-07-08 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055304#comment-14055304
 ] 

Rajith Attapattu commented on QPID-5870:


 but I wonder what happens around e.g. DurableSubscriptions, are they handled 
 appropriately? Needs some tests.

The testDurableSubscription in the AddressBasedDestination test verifies this 
change.
If you run the test without part2 of the patch, it fails.


 Closing a topic consumer should delete its exclusive auto-delete queue
 --

 Key: QPID-5870
 URL: https://issues.apache.org/jira/browse/QPID-5870
 Project: Qpid
  Issue Type: Bug
Reporter: Rajith Attapattu
 Attachments: QPID-5870.part2.patch, QPID-5870.patch


 When a topic consumer is closed, the subscription queue needs to be closed as 
 well.
 Currently this queue is only deleted when the session is closed (due to being 
 marked auto-deleted).






[jira] [Created] (QPID-5884) NullPointerException when using Base64MD5 file for AMQP 1.0 authentication

2014-07-08 Thread Mark Soderquist (JIRA)
Mark Soderquist created QPID-5884:
-

 Summary: NullPointerException when using Base64MD5 file for AMQP 
1.0 authentication
 Key: QPID-5884
 URL: https://issues.apache.org/jira/browse/QPID-5884
 Project: Qpid
  Issue Type: Bug
  Components: Java Broker
Affects Versions: 0.28
 Environment: OpenJDK Runtime Environment (rhel-2.4.7.1.el6_5-x86_64 
u55-b13)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)
Reporter: Mark Soderquist


Received an NPE when using the Base64MD5PasswordFile authentication provider. 
This did not happen when using the PlainPasswordFile. Here is the stack trace:

java.lang.NullPointerException
at 
org.apache.qpid.amqp_1_0.transport.ConnectionEndpoint.receiveSaslInit(ConnectionEndpoint.java:818)
at 
org.apache.qpid.amqp_1_0.type.security.SaslInit.invoke(SaslInit.java:112)
at 
org.apache.qpid.amqp_1_0.transport.ConnectionEndpoint.receive(ConnectionEndpoint.java:737)
at 
org.apache.qpid.amqp_1_0.framing.SASLFrameHandler.parse(SASLFrameHandler.java:240)
at 
org.apache.qpid.server.protocol.v1_0.ProtocolEngine_1_0_0_SASL$3.run(ProtocolEngine_1_0_0_SASL.java:367)
at 
org.apache.qpid.server.protocol.v1_0.ProtocolEngine_1_0_0_SASL$3.run(ProtocolEngine_1_0_0_SASL.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at 
org.apache.qpid.server.protocol.v1_0.ProtocolEngine_1_0_0_SASL.received(ProtocolEngine_1_0_0_SASL.java:362)
at 
org.apache.qpid.server.protocol.v1_0.ProtocolEngine_1_0_0_SASL.received(ProtocolEngine_1_0_0_SASL.java:64)
at 
org.apache.qpid.server.protocol.MultiVersionProtocolEngine.received(MultiVersionProtocolEngine.java:132)
at 
org.apache.qpid.server.protocol.MultiVersionProtocolEngine.received(MultiVersionProtocolEngine.java:48)
at 
org.apache.qpid.transport.network.io.IoReceiver.run(IoReceiver.java:161)
at java.lang.Thread.run(Thread.java:744)






[jira] [Commented] (DISPATCH-56) Implement Create/Read/Update/Delete operations in the agent

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055472#comment-14055472
 ] 

ASF subversion and git services commented on DISPATCH-56:
-

Commit 1608947 from [~aconway] in branch 'dispatch/trunk'
[ https://svn.apache.org/r1608947 ]

DISPATCH-56: Push configuration from python to C for dispatch router.

Python code parses config files as an entity list and pushes each configuration
entity into the C code.

- libqpid_dispatch.py: ctypes python wrapper around libqpid-dispatch
- Various error handling and memory management fixes.
- Introduce EnumValue, C code can take enum as string or int.
- Generate C source code for schema enum values.

 Implement Create/Read/Update/Delete operations in the agent
 ---

 Key: DISPATCH-56
 URL: https://issues.apache.org/jira/browse/DISPATCH-56
 Project: Qpid Dispatch
  Issue Type: Sub-task
  Components: Management Agent
Reporter: Ted Ross
Assignee: Alan Conway
 Fix For: 0.3


 Implement the CRUD-style commands in the management agent and use them to 
 access waypoints, address-prefixes, self, etc.






[jira] [Commented] (DISPATCH-56) Implement Create/Read/Update/Delete operations in the agent

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055471#comment-14055471
 ] 

ASF subversion and git services commented on DISPATCH-56:
-

Commit 1608945 from [~aconway] in branch 'dispatch/trunk'
[ https://svn.apache.org/r1608945 ]

DISPATCH-56: Replace qpid_dispatch_internal.config with 
qpid_dispatch_internal.management.

Replace the old config schema with the new management schema.
- Various error handling improvements.
- Router code calls out to new config parsing/schema code.
- Renamed waypoint.name to address, to avoid conflict with the standard management 
entity name attribute.
- Using new qdrouterd.conf man page, generated from schema.
- Added references and fixed value attributes to schema.

 Implement Create/Read/Update/Delete operations in the agent
 ---

 Key: DISPATCH-56
 URL: https://issues.apache.org/jira/browse/DISPATCH-56
 Project: Qpid Dispatch
  Issue Type: Sub-task
  Components: Management Agent
Reporter: Ted Ross
Assignee: Alan Conway
 Fix For: 0.3


 Implement the CRUD-style commands in the management agent and use them to 
 access waypoints, address-prefixes, self, etc.






[jira] [Commented] (DISPATCH-56) Implement Create/Read/Update/Delete operations in the agent

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-56?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055473#comment-14055473
 ] 

ASF subversion and git services commented on DISPATCH-56:
-

Commit 1608952 from [~aconway] in branch 'dispatch/trunk'
[ https://svn.apache.org/r1608952 ]

DISPATCH-56: Implement CREATE AMQP management operations.

agent.py:
- python management agent.
- Handles CREATE request for log, listener, connector, fixed-address, waypoint.
- Listens on $mangement2 for now, old C agent is still in place.

node.py: make CREATE calls

system_tests_management: tests to verify create calls.

 Implement Create/Read/Update/Delete operations in the agent
 ---

 Key: DISPATCH-56
 URL: https://issues.apache.org/jira/browse/DISPATCH-56
 Project: Qpid Dispatch
  Issue Type: Sub-task
  Components: Management Agent
Reporter: Ted Ross
Assignee: Alan Conway
 Fix For: 0.3


 Implement the CRUD-style commands in the management agent and use them to 
 access waypoints, address-prefixes, self, etc.






[jira] [Created] (QPID-5885) Virtualhostnode to replace real virtualhost with replica virtualhost in the event that the BDB HA goes into detached state

2014-07-08 Thread Keith Wall (JIRA)
Keith Wall created QPID-5885:


 Summary: Virtualhostnode to replace real virtualhost with replica 
virtualhost in the event that the BDB HA goes into detached state
 Key: QPID-5885
 URL: https://issues.apache.org/jira/browse/QPID-5885
 Project: Qpid
  Issue Type: Bug
  Components: Java Broker
Reporter: Keith Wall
 Fix For: 0.29


Currently, if the BDB HA node goes into a detached state (as it does when it 
restarts itself when it detects that quorum has gone), the virtualhost remains 
available.  

If a client attempts a virtualhost operation whilst the node is in this state, 
the operation appears to hang, with timeouts appearing on the client.  This hang 
is owing to the ReplicaConsistencyPolicy, which defaults to 1h.

The virtualhostnode should replace the real virtualhost with the 'replica' 
virtualhost when the node is in detached state.  This will cause existing 
client connections to be disconnected and new connections will be disallowed.  








[jira] [Commented] (QPID-5885) Virtualhostnode to replace real virtualhost with replica virtualhost in the event that the BDB HA goes into detached state

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055578#comment-14055578
 ] 

ASF subversion and git services commented on QPID-5885:
---

Commit 1608956 from [~k-wall] in branch 'qpid/trunk'
[ https://svn.apache.org/r1608956 ]

QPID-5885: [Java Broker] Virtualhostnode to replace real virtualhost with 
replica virtualhost in the event that the BDB HA goes into detached state

 Virtualhostnode to replace real virtualhost with replica virtualhost in the 
 event that the BDB HA goes into detached state
 --

 Key: QPID-5885
 URL: https://issues.apache.org/jira/browse/QPID-5885
 Project: Qpid
  Issue Type: Bug
  Components: Java Broker
Reporter: Keith Wall
Assignee: Keith Wall
 Fix For: 0.29


 Currently, if the BDB HA node goes into a detached state (as it does when it 
 restarts itself when it detects that quorum has gone), the virtualhost 
 remains available.  
 If a client attempts a virtualhost operation whilst the node is in this 
 state, the operation appears to hang, with timeouts appearing on the client.  
 This hang is owing to the ReplicaConsistencyPolicy, which defaults to 1h.
 The virtualhostnode should replace the real virtualhost with the 'replica' 
 virtualhost when the node is in detached state.  This will cause existing 
 client connections to be disconnected and new connections will be disallowed. 
  






[jira] [Assigned] (QPID-5885) Virtualhostnode to replace real virtualhost with replica virtualhost in the event that the BDB HA goes into detached state

2014-07-08 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall reassigned QPID-5885:


Assignee: Keith Wall

 Virtualhostnode to replace real virtualhost with replica virtualhost in the 
 event that the BDB HA goes into detached state
 --

 Key: QPID-5885
 URL: https://issues.apache.org/jira/browse/QPID-5885
 Project: Qpid
  Issue Type: Bug
  Components: Java Broker
Reporter: Keith Wall
Assignee: Keith Wall
 Fix For: 0.29


 Currently, if the BDB HA node goes into a detached state (as it does when it 
 restarts itself when it detects that quorum has gone), the virtualhost 
 remains available.  
 If a client attempts a virtualhost operation whilst the node is in this 
 state, the operation appears to hang, with timeouts appearing on the client.  
 This hang is owing to the ReplicaConsistencyPolicy, which defaults to 1h.
 The virtualhostnode should replace the real virtualhost with the 'replica' 
 virtualhost when the node is in detached state.  This will cause existing 
 client connections to be disconnected and new connections will be disallowed. 
  






[jira] [Updated] (QPID-5876) Java client causes unnecessary rejects after failover when using 0-8..0-9-1 protocols

2014-07-08 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-5876:
-

Status: Open  (was: Reviewable)

Thanks for drawing my attention to this, Robbie.



 Java client causes unnecessary rejects after failover when using 0-8..0-9-1 
 protocols
 

 Key: QPID-5876
 URL: https://issues.apache.org/jira/browse/QPID-5876
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Reporter: Andrew MacBean
Assignee: Keith Wall

 Highest delivery tag variable not reset after failover, causing rejections 
 to be sent unnecessarily.
 The AMQSession.resubscribe() implementation does not reset the 
 _highestDeliveryTag member variable. This means that rejects are sent 
 needlessly/incorrectly after failover occurs, as the value is higher than it 
 should be.
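
A rough model of the bug and the intended fix (illustrative Python, not the actual Java client; only `_highestDeliveryTag` and `resubscribe()` come from the report, the other names are invented):

```python
# Conceptual model of the failover bug: a stale delivery-tag high-water mark
# causes spurious rejects after the broker restarts tag numbering.

class SessionSketch:
    def __init__(self):
        self._highest_delivery_tag = -1

    def on_delivery(self, tag):
        # Track the highest delivery tag seen on the current connection.
        self._highest_delivery_tag = max(self._highest_delivery_tag, tag)

    def needs_reject(self, tag):
        # A reject is only valid for tags at or below the highest tag the
        # current connection has actually seen.
        return tag <= self._highest_delivery_tag

    def resubscribe(self):
        # The fix: the broker restarts delivery tags after failover, so the
        # client must reset its high-water mark too. Without this reset,
        # needs_reject() wrongly fires for fresh, low-numbered tags.
        self._highest_delivery_tag = -1
```

Without the reset, a tag of 1 delivered after failover compares below the stale high-water mark (say, 100 from before failover) and gets rejected needlessly.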






[jira] [Commented] (QPID-5870) Closing a topic consumer should delete its exclusive auto-delete queue

2014-07-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14055652#comment-14055652
 ] 

ASF subversion and git services commented on QPID-5870:
---

Commit 1608971 from [~rajith] in branch 'qpid/trunk'
[ https://svn.apache.org/r1608971 ]

QPID-5870 A Consumer is now marked if it's using a durable subscription.
The topic subscription queue is now deleted when the subscription ends unless 
it's marked as a durable-topic-subscription.

 Closing a topic consumer should delete its exclusive auto-delete queue
 --

 Key: QPID-5870
 URL: https://issues.apache.org/jira/browse/QPID-5870
 Project: Qpid
  Issue Type: Bug
Reporter: Rajith Attapattu
 Attachments: QPID-5870.part2.patch, QPID-5870.patch


 When a topic consumer is closed, the subscription queue needs to be closed as 
 well.
 Currently this queue is only deleted when the session is closed (due to being 
 marked auto-deleted).






qpid cpp cmake bug

2014-07-08 Thread Duong Quynh (FSU1.Z8.IP)
When running make install, the following problems happen:

CMake Error at bindings/qpid/ruby/cmake_install.cmake:44 (file):
  file INSTALL cannot find
  /root/Downloads/qpid-cpp-0.28/BLD/bindings/qpid/ruby/libcqpid_ruby.so.
Call Stack (most recent call first):
  bindings/cmake_install.cmake:37 (include)
  cmake_install.cmake:51 (include)

CMake Error at bindings/qmf2/ruby/cmake_install.cmake:44 (file):
  file INSTALL cannot find
  /root/Downloads/qpid-cpp-0.28/BLD/bindings/qmf2/ruby/libcqmf2_ruby.so.
Call Stack (most recent call first):
  bindings/cmake_install.cmake:38 (include)
  cmake_install.cmake:51 (include)

The actual files produced by make are cqpid_ruby.so and cqmf2_ruby.so, missing 
the lib prefix; that's why make install doesn't find them. This can be solved 
by manually renaming the files.


RE: Potential qpid cpp 0.28 bug

2014-07-08 Thread Duong Quynh (FSU1.Z8.IP)
This patch seems to have fixed the infinite hang. It looks like it was indeed 
stuck in the detach, because the program would hang no matter whether you 
called sender.close() or not.

# ./amq-producer
2014-07-09 09:58:58 [Messaging] debug Driver started
2014-07-09 09:58:58 [Messaging] info Starting connection to 
amqp:tcp:localhost:5672
2014-07-09 09:58:58 [Messaging] info Connecting to tcp:localhost:5672
2014-07-09 09:58:58 [Messaging] debug tcp:localhost:5672 Connecting ...
2014-07-09 09:58:58 [System] info Connecting: [::1]:5672
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 1 || 0
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 1 || 0
2014-07-09 09:58:58 [Protocol] debug tcp:localhost:5672 writing protocol 
header: 1-0
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::encode(65535): 8
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Messaging] debug tcp:localhost:5672 Connected
2014-07-09 09:58:58 [Messaging] debug wakeupDriver()
2014-07-09 09:58:58 [Messaging] debug tcp:localhost:5672 Waiting to be 
authenticated...
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Protocol] debug tcp:localhost:5672 read protocol header: 
1-0
2014-07-09 09:58:58 [Security] trace Reading SASL frame of size 34
2014-07-09 09:58:58 [Security] trace Reading SASL-MECHANISMS
2014-07-09 09:58:58 [Protocol] debug tcp:localhost:5672 Received 
SASL-MECHANISMS(ANONYMOUS PLAIN )
2014-07-09 09:58:58 [Security] debug CyrusSasl::start(PLAIN )
2014-07-09 09:58:58 [Security] debug min_ssf: 0, max_ssf: 256
2014-07-09 09:58:58 [Security] debug getUserFromSettings(): guest
2014-07-09 09:58:58 [Security] debug CyrusSasl::start(PLAIN ): selected PLAIN 
response: '\x00guest\x00guest'
2014-07-09 09:58:58 [Security] trace Completed encoding of frame of 52 bytes
2014-07-09 09:58:58 [Protocol] debug tcp:localhost:5672 Sent SASL-INIT(PLAIN, 
\x00guest\x00guest, localhost)
2014-07-09 09:58:58 [Messaging] debug wakeupDriver()
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::decode(42): 42
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 
1
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 
1
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::encode(65535): 52
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Security] trace Reading SASL frame of size 16
2014-07-09 09:58:58 [Security] trace Reading SASL-OUTCOME
2014-07-09 09:58:58 [Protocol] debug tcp:localhost:5672 Received 
SASL-OUTCOME(\x00)
2014-07-09 09:58:58 [Messaging] debug wakeupDriver()
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::decode(16): 16
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-09 09:58:58 [Protocol] trace [e9eea98e-d0d1-4bf7-b6d3-25748ed4b801]:   
- AMQP
2014-07-09 09:58:58 [Network] debug tcp:localhost:5672 encoded 8 bytes from 
65535
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Messaging] trace tcp:localhost:5672 decode(8)
2014-07-09 09:58:58 [Protocol] trace [e9eea98e-d0d1-4bf7-b6d3-25748ed4b801]:   
- AMQP
2014-07-09 09:58:58 [Network] debug tcp:localhost:5672 decoded 8 bytes from 8
2014-07-09 09:58:58 [Messaging] debug tcp:localhost:5672 Authenticated
2014-07-09 09:58:58 [Messaging] debug tcp:localhost:5672 Opening...
2014-07-09 09:58:58 [Messaging] debug wakeupDriver()
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-09 09:58:58 [Protocol] trace [e9eea98e-d0d1-4bf7-b6d3-25748ed4b801]: 0 
- @open(16) [container-id=e9eea98e-d0d1-4bf7-b6d3-25748ed4b801, 
properties={:qpid.client_process=amq-producer, :qpid.client_pid=54885, 
:qpid.client_ppid=41102}]
2014-07-09 09:58:58 [Network] debug tcp:localhost:5672 encoded 155 bytes from 
65535
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Messaging] trace tcp:localhost:5672 encode(65535)
2014-07-09 09:58:58 [Security] trace tcp:localhost:5672 Sasl::canEncode(): 0 || 0
2014-07-09 09:58:58 [Messaging] trace tcp:localhost:5672 decode(23)
2014-07-09 09:58:58 [Protocol] trace [e9eea98e-d0d1-4bf7-b6d3-25748ed4b801]: