[jira] [Commented] (PROTON-2441) [cpp] Crash upon reconnect when user passed empty vector to connection_options::failover_urls

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445599#comment-17445599
 ] 

ASF GitHub Bot commented on PROTON-2441:


astitcher commented on pull request #338:
URL: https://github.com/apache/qpid-proton/pull/338#issuecomment-972459810


   @DreamPearl also note that I subsequently modified your new test in commit 
56520fbaa00ab1d64b9672b072f7b8b2e88c1916
   to deduplicate the code and to use more C++11 now that we can. If you have 
questions about any of this, just ask.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] Crash upon reconnect when user passed empty vector to 
> connection_options::failover_urls
> -
>
> Key: PROTON-2441
> URL: https://issues.apache.org/jira/browse/PROTON-2441
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
>Affects Versions: proton-c-0.36.0
> Environment: Linux fedora 5.11.12-300.fc34.x86_64 #1 SMP Wed Apr 7 
> 16:31:13 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
> NAME=Fedora
> VERSION="34 (Workstation Edition)"
> ID=fedora
> VERSION_ID=34
> VERSION_CODENAME=""
> PLATFORM_ID="platform:f34"
>Reporter: Rakhi Kumari
>Assignee: Rakhi Kumari
>Priority: Major
> Fix For: proton-c-0.37.0
>
>
> {noformat}
> $ gdb ./reconnect_client
> (gdb) run amqp://127.0.0.1 examples 1
> Starting program: /home/rkumari/repos/qpid-proton/build/cpp/examples/reconnect_client amqp://127.0.0.1 examples 1
> Missing separate debuginfos, use: dnf debuginfo-install glibc-2.33-5.fc34.x86_64
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Retries: 0 Delay: 0 Trying: NO URL@0 SZ: 0
> *Program received signal SIGSEGV, Segmentation fault.*
> 0x77cf1104 in std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std::char_traits<char>, std::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /lib64/libstdc++.so.6
> (gdb) backtrace
> #0  0x77cf1104 in std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std::char_traits<char>, std::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /lib64/libstdc++.so.6
> #1  0x77e984b5 in proton::container::impl::reconnect (this=0x43c640, pnc=0x43df10) at /home/rkumari/repos/qpid-proton/cpp/src/proactor_container_impl.cpp:241
> #2  0x77ead66d in std::__invoke_impl<void, void (proton::container::impl::*&)(pn_connection_t*), proton::container::impl*&, pn_connection_t*&> (__f=@0x446060: (void (proton::container::impl::*)(proton::container::impl * const, pn_connection_t *)) 0x77e97ed4 <proton::container::impl::reconnect(pn_connection_t*)>, __t=@0x446078: 0x43c640) at /usr/include/c++/11/bits/invoke.h:74
> #3  0x77ead1d6 in std::__invoke<void (proton::container::impl::*&)(pn_connection_t*), proton::container::impl*&, pn_connection_t*&> (__fn=@0x446060: (void (proton::container::impl::*)(proton::container::impl * const, pn_connection_t *)) 0x77e97ed4 <proton::container::impl::reconnect(pn_connection_t*)>) at /usr/include/c++/11/bits/invoke.h:96
> #4  0x77eacd73 in std::_Bind<void (proton::container::impl::*(proton::container::impl*, pn_connection_t*))(pn_connection_t*)>::__call<void, 0ul, 1ul>(std::tuple<>&&, std::_Index_tuple<0ul, 1ul>) (this=0x446060, __args=...) at /usr/include/c++/11/functional:420
> #5  0x77eac7a6 in std::_Bind<void (proton::container::impl::*(proton::container::impl*, pn_connection_t*))(pn_connection_t*)>::operator()<, void>() (this=0x446060) at /usr/include/c++/11/functional:503
> #6  0x77eaba21 in std::__invoke_impl<void, std::_Bind<void (proton::container::impl::*(proton::container::impl*, pn_connection_t*))(pn_connection_t*)>&>(std::__invoke_other, std::_Bind<void (proton::container::impl::*(proton::container::impl*, pn_connection_t*))(pn_connection_t*)>&) (__f=...) at /usr/include/c++/11/bits/invoke.h:61
> #7  0x77eaa497 in std::__invoke_r<void, std::_Bind<void (proton::container::impl::*(proton::container::impl*, pn_connection_t*))(pn_connection_t*)>&>(std::_Bind<void (proton::container::impl::*(proton::container::impl*, pn_connection_t*))(pn_connection_t*)>&) (__fn=...) at /usr/include/c++/11/bits/invoke.h:154
> #8  0x77ea8cdf in std::_Function_handler<void (), std::_Bind<void (proton::container::impl::*(proton::container::impl*, pn_connection_t*))(pn_connection_t*)> >::_M_invoke(std::_Any_data const&) (__functor=...) at /usr/include/c++/11/bits/std_function.h:291
> #9  0x77e9f12e in std::function<void ()>::operator()() const (this=0x445fa8) at /usr/include/c++/11/bits/std_function.h:560
> #10 0x77e96050 in proton::internal::v11::work::operator() (this=0x445fa8) at /home/rkumari/repos/qpid-proton/cpp/include/proton/work_queue.hpp:283
> #11 0x77e9ba47 in proton::container::impl::run_timer_jobs (this=0x43c640) at /home/rkumari/repos/qpid-proton/cpp/src/proactor_container_impl.cpp:536
> #12 0x77e9be69 in proton::container::impl::dispatch 
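For context, the crash above can be provoked from a few lines of application code. The sketch below is illustrative only: the option name connection_options::failover_urls comes from the issue title and the handler shape from the reconnect_client example, so the exact API spelling should be treated as an assumption rather than a verified reproducer.

{noformat}
// Minimal sketch (not the project's test) of passing an empty failover list
// and letting the container reconnect, per the issue title. Exact option
// names are assumed from the report.
#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/reconnect_options.hpp>

#include <string>
#include <vector>

class empty_failover_client : public proton::messaging_handler {
    std::string url_;

    void on_container_start(proton::container& c) override {
        proton::connection_options opts;
        opts.reconnect(proton::reconnect_options());     // enable reconnect
        opts.failover_urls(std::vector<std::string>());  // empty vector, as in the report
        c.connect(url_, opts);
    }

  public:
    explicit empty_failover_client(const std::string& url) : url_(url) {}
};

int main() {
    // Connect to a broker that will drop the connection so that the
    // container attempts a reconnect with the (empty) failover list.
    empty_failover_client handler("amqp://127.0.0.1");
    proton::container(handler).run();
    return 0;
}
{noformat}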


[jira] [Commented] (PROTON-2441) [cpp] Crash upon reconnect when user passed empty vector to connection_options::failover_urls

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445595#comment-17445595
 ] 

ASF GitHub Bot commented on PROTON-2441:


astitcher commented on pull request #338:
URL: https://github.com/apache/qpid-proton/pull/338#issuecomment-972457403


   I've merged this PR together with a bug fix for a connection leak in the new 
reconnect test. The leak can be seen in the failing Travis CI job; it's 
unfortunate that the GitHub Actions jobs didn't pick it up as well - are they 
running without valgrind?



[jira] [Commented] (PROTON-2441) [cpp] Crash upon reconnect when user passed empty vector to connection_options::failover_urls

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445596#comment-17445596
 ] 

ASF GitHub Bot commented on PROTON-2441:


astitcher closed pull request #338:
URL: https://github.com/apache/qpid-proton/pull/338


   





[jira] [Commented] (PROTON-2467) Tidy up C++ reconnect test

2021-11-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445594#comment-17445594
 ] 

ASF subversion and git services commented on PROTON-2467:
-

Commit 56520fbaa00ab1d64b9672b072f7b8b2e88c1916 in qpid-proton's branch 
refs/heads/main from Andrew Stitcher
[ https://gitbox.apache.org/repos/asf?p=qpid-proton.git;h=56520fb ]

PROTON-2467: Tidy up C++ reconnect test

More use of C++11 features
Deduplicate code from a couple of tests
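
The commit message names the kinds of changes without showing them. Purely as an illustration of what "deduplicate and use more C++11" tends to look like in test code, here is a sketch with invented names (run_reconnect_case and fake_server are not the real test helpers):

{noformat}
// Invented example; none of these names come from the actual reconnect test.
#include <functional>
#include <string>
#include <vector>

struct fake_server {
    std::string url() const { return "amqp://127.0.0.1:0"; }
};

// Shared driver extracted from previously duplicated test bodies.
void run_reconnect_case(const std::vector<std::string>& failover,
                        const std::function<void(fake_server&)>& scenario) {
    fake_server server;
    for (const auto& url : failover) {   // C++11 range-for and auto
        (void)url;                       // configure the failover list here
    }
    scenario(server);                    // per-test behaviour injected as a lambda
}

void test_simple_reconnect() {
    run_reconnect_case({"amqp://127.0.0.1:5672"},
                       [](fake_server&) { /* drop and re-accept the connection */ });
}

void test_empty_failover_list() {
    run_reconnect_case({}, [](fake_server&) { /* expect reconnect not to crash */ });
}
{noformat}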


> Tidy up C++ reconnect test
> --
>
> Key: PROTON-2467
> URL: https://issues.apache.org/jira/browse/PROTON-2467
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: cpp-binding
>Reporter: Andrew Stitcher
>Assignee: Andrew Stitcher
>Priority: Minor
>
> * Make code a bit simpler by extracting some common code out from duplicated 
> tests
> * More use of C++11 to make the code a bit easier to read.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-2441) [cpp] Crash upon reconnect when user passed empty vector to connection_options::failover_urls

2021-11-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445593#comment-17445593
 ] 

ASF subversion and git services commented on PROTON-2441:
-

Commit 10db7aa3f2ad22c4869ac93d5c7d677f876e92b0 in qpid-proton's branch 
refs/heads/main from Rakhi Kumari
[ https://gitbox.apache.org/repos/asf?p=qpid-proton.git;h=10db7aa ]

PROTON-2441: Fix connection_options failover urls segfault
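
The commit is identified above only by its title. As an illustration of the kind of guard that title implies (falling back to the original connection URL instead of reading from an empty failover list), here is a hedged sketch; it is not the actual patch from commit 10db7aa, and the function name is invented.

{noformat}
// Illustrative guard only; not the code from commit 10db7aa. The gdb log
// above ("Trying: NO URL@0 SZ: 0") suggests the pre-fix reconnect path
// streamed a URL taken from an empty list.
#include <cstddef>
#include <string>
#include <vector>

std::string next_reconnect_url(const std::vector<std::string>& failover,
                               const std::string& original_url,
                               std::size_t attempt) {
    if (failover.empty())
        return original_url;                      // nothing configured: reuse the first URL
    return failover[attempt % failover.size()];   // otherwise rotate through the list
}
{noformat}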



[jira] [Created] (PROTON-2467) Tidy up C++ reconnect test

2021-11-17 Thread Andrew Stitcher (Jira)
Andrew Stitcher created PROTON-2467:
---

 Summary: Tidy up C++ reconnect test
 Key: PROTON-2467
 URL: https://issues.apache.org/jira/browse/PROTON-2467
 Project: Qpid Proton
  Issue Type: Improvement
  Components: cpp-binding
Reporter: Andrew Stitcher
Assignee: Andrew Stitcher


* Make code a bit simpler by extracting some common code out from duplicated 
tests
* More use of C++11 to make the code a bit easier to read.






[jira] [Commented] (PROTON-2396) [cpp] Seed in uuid.cpp can lead to duplicates

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445442#comment-17445442
 ] 

ASF GitHub Bot commented on PROTON-2396:


jiridanek commented on a change in pull request #340:
URL: https://github.com/apache/qpid-proton/pull/340#discussion_r751541523



##
File path: cpp/include/proton/uuid.hpp
##
@@ -35,13 +38,20 @@ namespace proton {
 
 /// A 16-byte universally unique identifier.
 class uuid : public byte_array<16> {
+
+  private:
+thread_local static std::independent_bits_engine
+engine;
+thread_local static std::seed_seq seed;
+
   public:
 /// Make a copy.
 PN_CPP_EXTERN static uuid copy();
 
 /// Return a uuid copied from bytes.  Bytes must point to at least
 /// 16 bytes.  If `bytes == 0` the UUID is zero-initialized.
-PN_CPP_EXTERN static uuid copy(const char* bytes);
+PN_CPP_EXTERN static uuid copy(const char *bytes);

Review comment:
   @DreamPearl Are you still using that clang-format config? You could 
change the setting there and commit that... some time in the future.






> [cpp] Seed in uuid.cpp can lead to duplicates
> -
>
> Key: PROTON-2396
> URL: https://issues.apache.org/jira/browse/PROTON-2396
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
> Environment: RHEL7 running in OpenStack
> docker-ce 19.03.5
> qpid-proton 0.28.0
> qpid-cpp 1.37.0
>Reporter: Ryan Herbert
>Assignee: Rakhi Kumari
>Priority: Major
>
> The random number seed used in qpid-proton/cpp/src/uuid.cpp is based on the 
> current time and the PID of the running process.  When starting multiple 
> proton instances simultaneously in Docker containers via automated 
> deployment, there is a high probability that multiple instances will get the 
> same seed since the PID within the Docker container is consistent and the 
> same across multiple copies of the same Docker container.
> This results in duplicate link names when binding to exchanges. When this 
> happens, the queue gets bound to two different exchanges, and requests sent 
> to one exchange will get responses from both services.
> To work around this error, we are specifying the link name via 
> sender_options/receiver_options every time we open a new sender/receiver, and 
> we also specify the container_id in connection_options.  We are using 
> std::mt19937_64 seeded with 
> std::chrono::system_clock::now().time_since_epoch().count() to generate the 
> random part of our link names, which seems to have enough randomness that it 
> has eliminated the problem for us.
> As pointed out in the Proton user forum, std::random_device is probably a 
> better choice for initializing the seed.
>  
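
The description above contrasts a time-plus-PID seed with std::random_device. As a generic standard-library sketch of the suggested direction (not the change actually made to qpid-proton's uuid.cpp), seeding could look like this:

{noformat}
// Sketch of seeding a per-thread engine from std::random_device instead of
// time+PID, so identical containers started at the same moment do not collide.
// Generic illustration; not the actual qpid-proton change.
#include <array>
#include <cstdint>
#include <random>

std::mt19937_64& uuid_engine() {
    // thread_local avoids locking; each thread gets its own independently
    // seeded engine.
    thread_local std::mt19937_64 engine = [] {
        std::random_device rd;                      // non-deterministic where available
        std::seed_seq seq{rd(), rd(), rd(), rd()};  // widen to a full seed sequence
        return std::mt19937_64(seq);
    }();
    return engine;
}

std::array<std::uint8_t, 16> random_uuid_bytes() {
    std::array<std::uint8_t, 16> bytes{};
    std::uniform_int_distribution<unsigned> dist(0, 255);
    for (auto& b : bytes) b = static_cast<std::uint8_t>(dist(uuid_engine()));
    return bytes;
}
{noformat}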






[jira] [Commented] (PROTON-2396) [cpp] Seed in uuid.cpp can lead to duplicates

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445443#comment-17445443
 ] 

ASF GitHub Bot commented on PROTON-2396:


jiridanek commented on a change in pull request #340:
URL: https://github.com/apache/qpid-proton/pull/340#discussion_r751541924



##
File path: cpp/include/proton/uuid.hpp
##
@@ -35,13 +38,20 @@ namespace proton {
 
 /// A 16-byte universally unique identifier.
 class uuid : public byte_array<16> {
+
+  private:
+thread_local static std::independent_bits_engine
+engine;
+thread_local static std::seed_seq seed;
+

Review comment:
   @DreamPearl Right, ABI. Sorry I forgot to think about that before.







[jira] [Commented] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/QPID-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445379#comment-17445379
 ] 

ASF GitHub Bot commented on QPID-8569:
--

pjfawcett opened a new pull request #28:
URL: https://github.com/apache/qpid-cpp/pull/28


   Queue creation - don't update QMF until all validation done




> Illegal selector results in undeletable queue
> -
>
> Key: QPID-8569
> URL: https://issues.apache.org/jira/browse/QPID-8569
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-1.39.0
>Reporter: Pete Fawcett
>Priority: Major
>
> Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
> specifying a selector.
> If a valid selector is specified then, as expected, a queue is created.  The 
> queue name and properties can be seen using {{qpid-stat}} or similar.  When 
> the receiver is closed the queue disappears.
> If an invalid selector is specified then, again as expected, an error is 
> returned to the client.
> The problem is that it appears that a queue has been created. A new queue can 
> be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" 
> it has not been deleted.
> Furthermore, trying to delete the queue using {{qpid-config}} returns a 
> {{"not-found: Delete failed. No such queue: ..."}} error.
> I don't think all invalid selectors produce this situation, and I think that 
> there is some variation depending on the client being used - which perhaps 
> suggests some validation is being done at the client end.  However, there are 
> certain invalid selectors that produce this error in both Python and C++ 
> client bindings,
> Examples of invalid selector that produce errors are using an invalid 
> operator:
> {{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
> literal or identifier"}}
> or an invalid characters:
> {{"\header='value'"}} which produces an {{"Found illegal character"}}
> both the above result in the creation of an undeletable queue
> (I realise that "\header" isn't a valid value. I came across this error when 
> trying to add double-quotes around the property name and got the wrong number 
> of backslashes)
> A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
> example program in qpid-proton and change the filter string in [line 
> 65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]






[GitHub] [qpid-cpp] pjfawcett opened a new pull request #28: QPID-8569 Illegal selector results in undeletable queue

2021-11-17 Thread GitBox


pjfawcett opened a new pull request #28:
URL: https://github.com/apache/qpid-cpp/pull/28


   Queue creation - don't update QMF until all validation done


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread Pete Fawcett (Jira)


 [ 
https://issues.apache.org/jira/browse/QPID-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pete Fawcett updated QPID-8569:
---
Description: 
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"\header='value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

(I realise that "\header" isn't a valid value. I came across this error when 
trying to add double-quotes around the property name and got the wrong number 
of backslashes)

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]





  was:
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"\header='value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]







[jira] [Updated] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread Pete Fawcett (Jira)


 [ 
https://issues.apache.org/jira/browse/QPID-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pete Fawcett updated QPID-8569:
---
Description: 
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"\header='value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]





  was:
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"header='value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]







[jira] [Updated] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread Pete Fawcett (Jira)


 [ 
https://issues.apache.org/jira/browse/QPID-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pete Fawcett updated QPID-8569:
---
Description: 
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"\\header='value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]





  was:
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"header='\\value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]







[jira] [Updated] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread Pete Fawcett (Jira)


 [ 
https://issues.apache.org/jira/browse/QPID-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pete Fawcett updated QPID-8569:
---
Description: 
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"header='value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]





  was:
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"\\header='value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]







[jira] [Assigned] (DISPATCH-2258) test_25_parallel_waypoint_test failing in system_tests_distribution

2021-11-17 Thread Ganesh Murthy (Jira)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy reassigned DISPATCH-2258:
---

Assignee: Ken Giusti  (was: Ganesh Murthy)

> test_25_parallel_waypoint_test failing in system_tests_distribution
> ---
>
> Key: DISPATCH-2258
> URL: https://issues.apache.org/jira/browse/DISPATCH-2258
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Tests
>Reporter: Ganesh Murthy
>Assignee: Ken Giusti
>Priority: Major
>
> {noformat}
> 37: ==
> 37: FAIL: test_25_parallel_waypoint_test 
> (system_tests_distribution.DistributionTests)
> 37: --
> 37: Traceback (most recent call last):
> 37:   File 
> "/home/travis/build/apache/qpid-dispatch/tests/system_tests_distribution.py", 
> line 1641, in test_25_parallel_waypoint_test
> 37: self.assertIsNone(test.error)
> 37: AssertionError: 'Timeout Expired: n_sent=200 n_rcvd=194 n_thru=198' is 
> not None
> 37: 
> 37: --
> 37: Ran 25 tests in 344.700s
> 37: 
> 37: FAILED (failures=1, skipped=7)
> 37/73 Test #37: system_tests_distribution .***Failed  
> 344.85 sec {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread Pete Fawcett (Jira)


[ 
https://issues.apache.org/jira/browse/QPID-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445323#comment-17445323
 ] 

Pete Fawcett commented on QPID-8569:


I believe the problem is due to an error in the Queue 
[constructor|https://github.com/apache/qpid-cpp/blob/main/src/qpid/broker/Queue.cpp#L190].

In the section beginning at [line 
218|https://github.com/apache/qpid-cpp/blob/main/src/qpid/broker/Queue.cpp#L218], 
the ManagementAgent is informed of the new queue being created by adding a 
{{_qmf::Queue}} object.  However, this happens before the selector is created at 
[line 
234|https://github.com/apache/qpid-cpp/blob/main/src/qpid/broker/Queue.cpp#L234].

An illegal character, and some other parsing errors, cause the Selector 
constructor to throw an exception.
The exception handling aborts the creation of the queue and returns an error to 
the client.

However, the {{_qmf::Queue}} object is never cleaned up.  As a result, the 
ManagementAgent is out of sync with the actual broker: it shows a queue that 
does not exist in the broker, so a {{"No such queue"}} error is returned if an 
attempt is made to delete it.

I think the solution is to process the selector (and do any other related 
processing that might fail) before updating the ManagementAgent.

I intend to submit a Pull Request to this end
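
To make the ordering point concrete, here is a minimal, self-contained C++ 
sketch of the pattern described above. It is not the broker code - 
{{Selector}}, {{ManagementAgent}} and {{createQueue}} here are illustrative 
stand-ins - but it shows why doing the fallible selector parsing before 
registering the management object leaves nothing to clean up when parsing throws:

{noformat}
#include <iostream>
#include <stdexcept>
#include <string>

// Stand-in for the broker's Selector: the constructor throws on a parse error.
struct Selector {
    explicit Selector(const std::string& expr) {
        if (expr.find('\\') != std::string::npos)   // crude stand-in for the real parser
            throw std::runtime_error("Found illegal character");
    }
};

// Stand-in for registering a _qmf::Queue object with the ManagementAgent.
struct ManagementAgent {
    bool registered = false;
    void addObject() { registered = true; std::cout << "mgmt: queue object added\n"; }
};

// Proposed ordering: do the work that can fail first, then tell the agent.
void createQueue(ManagementAgent& agent, const std::string& selectorExpr) {
    Selector selector(selectorExpr);   // may throw; nothing registered yet, so nothing leaks
    (void)selector;
    agent.addObject();                 // only reached when the selector parsed cleanly
}

int main() {
    ManagementAgent agent;
    try {
        createQueue(agent, "\\header='value'");   // one of the invalid selectors above
    } catch (const std::exception& e) {
        std::cout << "queue creation failed: " << e.what() << '\n';
    }
    std::cout << "management object registered? " << std::boolalpha << agent.registered << '\n';
}
{noformat}

With the registration done last, a failed parse leaves the ManagementAgent 
untouched, so {{qpid-stat}} never shows a phantom queue.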



> Illegal selector results in undeletable queue
> -
>
> Key: QPID-8569
> URL: https://issues.apache.org/jira/browse/QPID-8569
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-1.39.0
>Reporter: Pete Fawcett
>Priority: Major
>
> Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
> specifying a selector.
> If a valid selector is specified then, as expected, a queue is created.  The 
> queue name and properties can be seen using {{qpid-stat}} or similar.  When 
> the receiver is closed the queue disappears.
> If an invalid selector is specified then, again as expected, an error is 
> returned to the client.
> The problem is that it appears that a queue has been created. A new queue can 
> be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" 
> it has not been deleted.
> Furthermore, trying to delete the queue using {{qpid-config}} returns a 
> {{"not-found: Delete failed. No such queue: ..."}} error.
> I don't think all invalid selectors produce this situation, and I think that 
> there is some variation depending on the client being used - which perhaps 
> suggests some validation is being done at the client end.  However, there are 
> certain invalid selectors that produce this error in both Python and C++ 
> client bindings,
> Examples of invalid selector that produce errors are using an invalid 
> operator:
> {{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
> literal or identifier"}}
> or an invalid characters:
> {{"header='\\value'"}} which produces an {{"Found illegal character"}}
> both the above result in the creation of an undeletable queue
> A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
> example program in qpid-proton and change the filter string in [line 
> 65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65|]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-2396) [cpp] Seed in uuid.cpp can lead to duplicates

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445320#comment-17445320
 ] 

ASF GitHub Bot commented on PROTON-2396:


astitcher commented on a change in pull request #340:
URL: https://github.com/apache/qpid-proton/pull/340#discussion_r751379226



##
File path: cpp/include/proton/uuid.hpp
##
@@ -35,13 +38,20 @@ namespace proton {
 
 /// A 16-byte universally unique identifier.
 class uuid : public byte_array<16> {
+
+  private:
+thread_local static std::independent_bits_engine
+engine;
+thread_local static std::seed_seq seed;
+

Review comment:
   Please don't add these private members to the class: it changes the ABI 
(and also the API, in a more subtle way).
   Since these are thread_locals anyway they don't need to be per-instance 
variables, so they can just be static to the implementation file itself and not 
be in the class at all.

##
File path: cpp/src/uuid.cpp
##
@@ -38,20 +43,17 @@
 namespace proton {
 
 namespace {
-
-
-// Seed the random number generated once at startup.
-struct seed {
-seed() {
-// A hash of time and PID, time alone is a bad seed as programs started
-// within the same second will get the same seed.
-unsigned long secs = time(0);
-unsigned long pid = GETPID();
-std::srand(((secs*181)*((pid-83)*359))%104729);
-}
-} seed_;
-
-}
+// A hash of time, PID and random_device, time alone is a bad seed as programs
+// started within the same second will get the same seed.
+thread_local unsigned long ticks =
+std::chrono::system_clock::now().time_since_epoch().count();
+unsigned long pid = GETPID();
+unsigned int rd = std::random_device{}();
+} // namespace
+
+thread_local std::seed_seq uuid::seed{ticks, pid, (unsigned long)rd};
+thread_local std::independent_bits_engine
+uuid::engine(seed);

Review comment:
   I like the struct seed construct to initialise the seed once. You can 
shift your code into there and make it thread_local I think which will be a 
neater implementation and a smaller change.
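
   For illustration, a minimal, self-contained sketch of that suggestion is 
below. It is not the actual uuid.cpp patch: it uses {{std::mt19937}} rather 
than the independent_bits_engine from the diff, and it assumes POSIX 
{{getpid()}} where the real code uses its {{GETPID}} macro. The point is only 
the shape: the seeded engine lives in an unnamed namespace in the 
implementation file, one instance per thread, and nothing is added to the class:

{noformat}
#include <algorithm>
#include <array>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <unistd.h>   // getpid() - assumed POSIX here; the real code uses a GETPID macro

namespace {

// Seed once per thread from time, PID and std::random_device, so processes
// (or threads) started within the same second do not share a sequence.
struct seeded_engine {
    std::mt19937 engine;
    seeded_engine() {
        std::seed_seq seq{
            static_cast<unsigned long long>(
                std::chrono::system_clock::now().time_since_epoch().count()),
            static_cast<unsigned long long>(getpid()),
            static_cast<unsigned long long>(std::random_device{}())
        };
        engine.seed(seq);
    }
};

thread_local seeded_engine rng;   // file-local: not in the class, no ABI change

} // unnamed namespace

// Fill a 16-byte buffer with random bytes, in the spirit of uuid::random().
std::array<uint8_t, 16> random_bytes() {
    std::array<uint8_t, 16> bytes;
    std::generate(bytes.begin(), bytes.end(),
                  [] { return static_cast<uint8_t>(rng.engine() & 0xFFu); });
    return bytes;
}

int main() {
    for (uint8_t b : random_bytes()) std::printf("%02x", static_cast<unsigned>(b));
    std::printf("\n");
}
{noformat}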

##
File path: cpp/include/proton/uuid.hpp
##
@@ -35,13 +38,20 @@ namespace proton {
 
 /// A 16-byte universally unique identifier.
 class uuid : public byte_array<16> {
+
+  private:
+thread_local static std::independent_bits_engine
+engine;
+thread_local static std::seed_seq seed;
+
   public:
 /// Make a copy.
 PN_CPP_EXTERN static uuid copy();
 
 /// Return a uuid copied from bytes.  Bytes must point to at least
 /// 16 bytes.  If `bytes == 0` the UUID is zero-initialized.
-PN_CPP_EXTERN static uuid copy(const char* bytes);
+PN_CPP_EXTERN static uuid copy(const char *bytes);

Review comment:
   FWIW the convention used in our C++ is in line with usual C++ conventions 
and has the '*' on the type side, not the name side. Our C code follows the 
more usual C convention and is the other way round. This is probably not 
written down anywhere - sorry!

##
File path: cpp/src/uuid.cpp
##
@@ -70,12 +72,7 @@ uuid uuid::copy(const char* bytes) {
 
 uuid uuid::random() {
 uuid bytes;
-int r = std::rand();
-for (size_t i = 0; i < bytes.size(); ++i ) {
-bytes[i] = r & 0xFF;
-r >>= 8;
-if (!r) r = std::rand();
-}
+std::generate(bytes.begin(), bytes.end(), std::ref(engine));

Review comment:
   In my way of doing this, engine would be seed_::engine - maybe seed_ 
should have a better name?

##
File path: cpp/src/uuid.cpp
##
@@ -38,20 +43,17 @@
 namespace proton {
 
 namespace {
-
-
-// Seed the random number generated once at startup.
-struct seed {
-seed() {
-// A hash of time and PID, time alone is a bad seed as programs started
-// within the same second will get the same seed.
-unsigned long secs = time(0);
-unsigned long pid = GETPID();
-std::srand(((secs*181)*((pid-83)*359))%104729);
-}
-} seed_;
-
-}
+// A hash of time, PID and random_device, time alone is a bad seed as programs
+// started within the same second will get the same seed.

Review comment:
   I think this comment may now be incorrect, as 
std::chrono::system_clock::now() isn't in seconds any more (is it?)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] Seed in uuid.cpp can lead to duplicates
> -
>
> Key: PROTON-2396
> URL: https://issues.apache.org/jira/browse/PROTON-2396
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: 


[jira] [Commented] (DISPATCH-2258) test_25_parallel_waypoint_test failing in system_tests_distribution

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445311#comment-17445311
 ] 

ASF GitHub Bot commented on DISPATCH-2258:
--

kgiusti commented on a change in pull request #1441:
URL: https://github.com/apache/qpid-dispatch/pull/1441#discussion_r751355815



##
File path: tests/system_tests_distribution.py
##
@@ -3884,7 +3882,7 @@ def __init__(self,
 n_links_per_message = 4
 self.n_expected_transitions = len(self.senders) * 
self.messages_per_sender * n_links_per_message
 
-self.debug = False
+self.debug = debug
 
 self.test_name = test_name
 

Review comment:
   Since github won't let me comment on lines that haven't been changed: 
this is in regards to the debug_print() function: add a flush=True to the print 
statement, i.e. print(message, flush=True).  unittest has a tendency not to 
flush stdout if a failure is hit.

##
File path: tests/system_tests_distribution.py
##
@@ -3826,6 +3823,7 @@ def __init__(self,
 self.error= None
 self.messages_per_sender  = 100

Review comment:
   Non-patch observation: is 100 messages *really* necessary?  CI systems 
tend to be painfully slow.  Can the same degree of confidence that this works 
be achieved by say 20 messages?  

##
File path: tests/system_tests_distribution.py
##
@@ -3910,11 +3908,9 @@ def send_from_client(self, sender, n_messages, 
sender_index):
 n_sent += 1
 self.n_sent+= 1
 self.n_transitions += 1
-self.debug_print("send_from_client -- sender: %d n_sent: %d" % 
(sender_index, n_sent))
+self.debug_print("send_from_client -- sender: %d n_sent: %d" % 
(sender_index, n_sent))

Review comment:
   Did you also want to do the same thing (move the debug print out of the 
loop) for the send_from_waypoint() method below?  Just asking...

##
File path: tests/system_tests_distribution.py
##
@@ -3972,19 +3958,25 @@ def on_link_opening(self, event):
 if self.n_waypoint_receivers < 2 :
 self.waypoints[self.n_waypoint_receivers]['receiver'] = 
event.receiver
 self.n_waypoint_receivers += 1
+# Create the senders after the waypoint receiver links have 
been opened.
+if self.n_waypoint_receivers == 2:
+for i in range(len(self.sender_connections)):
+cnx = self.sender_connections[i]
+sender = self.senders[i]
+sender['sender'] = event.container.create_sender(cnx,
+                                                 self.destination,
+                                                 name=link_name())
+sender['to_send'] = self.messages_per_sender
+sender['n_sent'] = 0
 
 def on_sendable(self, event):
-self.debug_print("on_sendable --")
 for index in range(len(self.senders)) :
 sender = self.senders[index]
 if event.sender == sender['sender'] :
-self.debug_print("client sender %d" % index)
 if sender['n_sent'] < sender['to_send'] :
 self.debug_print("sending %d" % sender['to_send'])
 self.send_from_client(sender['sender'], sender['to_send'], 
index)
 sender['n_sent'] = sender['to_send']  # n_sent = n_to_send

Review comment:
   Wait a sec... can't send_from_client exit with < to_send if credit runs 
out?   This seems wrong.  Maybe send_from_client should be incrementing 
sender['n_sent'] rather than having this line assume sender['to_send'] was 
actually sent.
   
   And don't get me started on why the heck this test uses maps 
(sender['n_sent']) rather than a class (sender.n_sent) all over the place.  
It's like this test was purposely written to be easy to break and maximize 
reader confusion.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> test_25_parallel_waypoint_test failing in system_tests_distribution
> ---
>
> Key: DISPATCH-2258
> URL: https://issues.apache.org/jira/browse/DISPATCH-2258
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Tests
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
>Priority: Major
>
> {noformat}
> 37: 


[jira] [Updated] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread Pete Fawcett (Jira)


 [ 
https://issues.apache.org/jira/browse/QPID-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pete Fawcett updated QPID-8569:
---
Description: 
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though it shows it to be "auto-delete", it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and there seems to 
be some variation depending on the client being used - which perhaps suggests 
some validation is being done at the client end.  However, there are certain 
invalid selectors that produce this error in both the Python and C++ client 
bindings.

Examples of invalid selectors that produce errors are: an invalid operator, 
{{"header=='value'"}}, which produces {{"Illegal selector: '=': expected 
literal or identifier"}}; or an invalid character, {{"header='\\value'"}}, 
which produces {{"Found illegal character"}}.
Both of the above result in the creation of an undeletable queue.

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65|https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65].





  was:
Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and I think that 
there is some variation depending on the client being used - which perhaps 
suggests some validation is being done at the client end.  However, there are 
certain invalid selectors that produce this error in both Python and C++ client 
bindings,

Examples of invalid selector that produce errors are using an invalid operator:
{{"header=='value'"}} which produces an {{"Illegal selector: '=': expected 
literal or identifier"}}
or an invalid characters:
{{"header='\\value'"}} which produces an {{"Found illegal character"}}
both the above result in the creation of an undeletable queue

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65](https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65)






> Illegal selector results in undeletable queue
> -
>
> Key: QPID-8569
> URL: https://issues.apache.org/jira/browse/QPID-8569
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-1.39.0
>Reporter: Pete Fawcett
>Priority: Major
>
> Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
> specifying a selector.
> If a valid selector is specified then, as expected, a queue is created.  The 
> queue name and properties can be seen using {{qpid-stat}} or similar.  When 
> the receiver is closed the queue disappears.
> If an invalid selector is specified then, again as expected, an error is 
> returned to the client.
> The problem is that it appears that a queue has been created. A new queue can 
> be seen using {{qpid-stat}} and, even though is shows it to be "auto-delete" 
> it has not been deleted.
> Furthermore, trying to delete the queue using {{qpid-config}} returns a 
> {{"not-found: Delete failed. No such queue: ..."}} error.
> I don't think all invalid selectors produce this situation, and I think that 
> there is some variation depending on the client being used - which perhaps 
> suggests some validation is being done at the client end.  However, there are 
> certain invalid selectors that produce this error in both Python and C++ 
> client bindings,
> Examples of invalid selector that produce errors are using an invalid 
> 

[jira] [Created] (QPID-8569) Illegal selector results in undeletable queue

2021-11-17 Thread Pete Fawcett (Jira)
Pete Fawcett created QPID-8569:
--

 Summary: Illegal selector results in undeletable queue
 Key: QPID-8569
 URL: https://issues.apache.org/jira/browse/QPID-8569
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: qpid-cpp-1.39.0
Reporter: Pete Fawcett


Using AMQP 1.0 and creating a receiver, with an exchange as a source, and 
specifying a selector.

If a valid selector is specified then, as expected, a queue is created.  The 
queue name and properties can be seen using {{qpid-stat}} or similar.  When the 
receiver is closed the queue disappears.

If an invalid selector is specified then, again as expected, an error is 
returned to the client.
The problem is that it appears that a queue has been created. A new queue can 
be seen using {{qpid-stat}} and, even though it shows it to be "auto-delete", it 
has not been deleted.
Furthermore, trying to delete the queue using {{qpid-config}} returns a 
{{"not-found: Delete failed. No such queue: ..."}} error.

I don't think all invalid selectors produce this situation, and there seems to 
be some variation depending on the client being used - which perhaps suggests 
some validation is being done at the client end.  However, there are certain 
invalid selectors that produce this error in both the Python and C++ client 
bindings.

Examples of invalid selectors that produce errors are: an invalid operator, 
{{"header=='value'"}}, which produces {{"Illegal selector: '=': expected 
literal or identifier"}}; or an invalid character, {{"header='\\value'"}}, 
which produces {{"Found illegal character"}}.
Both of the above result in the creation of an undeletable queue.

A minimal way to reproduce these errors is to use the {{selected_recv.cpp}} 
example program in qpid-proton and change the filter string in [line 
65](https://github.com/apache/qpid-proton/blob/main/cpp/examples/selected_recv.cpp#L65)







--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-2466) raw connection posts wake events after disconnect event is handled

2021-11-17 Thread Ken Giusti (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445250#comment-17445250
 ] 

Ken Giusti commented on PROTON-2466:


This is a difficult issue to reproduce.  In my experience it can take a few 
hours and the resulting log files are huge.

To reproduce:
 # check out head of the qdrouter 1.18.x branch
 # back out the pointer clear patch that prevents the crash from occurring:
 ## commit 6734891419fcafdbc87d40eca269d07821c1b813 DISPATCH-2286: reset the 
raw conn context when handling disconnect
 # run two routers using the above configurations:
 ## rm -f qdrouterd-A-log.txt ; qdrouterd -c qdrouterd-A.conf & rm -f 
qdrouterd-B-log.txt ; qdrouterd -c qdrouterd-B.conf &
 # Install iperf3
 # spawn an iperf3 server for the router to connect to:
 ## iperf3 -s -p 8080 &
 # run iperf3 clients to generate traffic in a loop:
 ## while iperf3 -c 127.0.0.1 -p 8000 -t 5 -P 8; do echo "OK"; sleep 2; done
 # wait for crash

> raw connection posts wake events after disconnect event is handled
> --
>
> Key: PROTON-2466
> URL: https://issues.apache.org/jira/browse/PROTON-2466
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.36.0
>Reporter: Ken Giusti
>Priority: Major
> Attachments: qdrouterd-A.conf, qdrouterd-B.conf
>
>
> While running tcp stress tests against qdrouterd a crash occurred.  The crash 
> was due to a stale pointer dereference.
> qdrouterd code has been patched to properly clear the pointer and check for 
> null in the effected codepath.  However...
> ... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
> arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
> previously arrived on the raw connection.
> IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last 
> event generated on a raw connection, and once that event has been handled the 
> raw connection is released.   If that is correct then the arrival of the 
> following WAKE event is a bug.
> Here is the log output from the router just prior to the crash (filtered on 
> the affected connection):
> $ tail C140.txt
> 2021-11-16 17:11:10.925728 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WAKE connector
> 2021-11-16 17:11:10.926990 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WAKE connector
> 2021-11-16 17:11:10.927001 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_READ connector Event
> 2021-11-16 17:11:10.927034 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_READ Read 0 bytes. Total read 0 bytes
> 2021-11-16 17:11:10.927596 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WRITTEN connector pn_raw_connection_take_written_buffers wrote 32768 bytes. Total written 36929573 bytes
> 2021-11-16 17:11:10.928207 -0500 TCP_ADAPTOR (debug) [C140][L322] PN_RAW_CONNECTION_CLOSED_READ connector
> 2021-11-16 17:11:10.928591 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_CLOSED_WRITE connector
> 2021-11-16 17:11:10.929160 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WRITTEN connector pn_raw_connection_take_written_buffers wrote 32768 bytes. Total written 36962341 bytes
> *2021-11-16 17:11:10.929410 -0500 TCP_ADAPTOR (info) [C140] PN_RAW_CONNECTION_DISCONNECTED connector*
> *2021-11-16 17:11:10.929915 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WAKE connector*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (PROTON-2466) raw connection posts wake events after disconnect event is handled

2021-11-17 Thread Ken Giusti (Jira)


 [ 
https://issues.apache.org/jira/browse/PROTON-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ken Giusti updated PROTON-2466:
---
Description: 
While running tcp stress tests against qdrouterd a crash occurred.  The crash 
was due to a stale pointer dereference.

qdrouterd code has been patched to properly clear the pointer and check for 
null in the affected codepath.  However...

... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
previously arrived on the raw connection.

IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last event 
generated on a raw connection, and once that event has been handled the raw 
connection is released.   If that is correct then the arrival of the following 
WAKE event is a bug.

Here is the log output from the router just prior to the crash (filtered on the 
affected connection):

$ tail C140.txt
2021-11-16 17:11:10.925728 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WAKE connector
2021-11-16 17:11:10.926990 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WAKE connector
2021-11-16 17:11:10.927001 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_READ connector Event
2021-11-16 17:11:10.927034 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_READ Read 0 bytes. Total read 0 bytes
2021-11-16 17:11:10.927596 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WRITTEN connector pn_raw_connection_take_written_buffers wrote 32768 bytes. Total written 36929573 bytes
2021-11-16 17:11:10.928207 -0500 TCP_ADAPTOR (debug) [C140][L322] PN_RAW_CONNECTION_CLOSED_READ connector
2021-11-16 17:11:10.928591 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_CLOSED_WRITE connector
2021-11-16 17:11:10.929160 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WRITTEN connector pn_raw_connection_take_written_buffers wrote 32768 bytes. Total written 36962341 bytes
*2021-11-16 17:11:10.929410 -0500 TCP_ADAPTOR (info) [C140] PN_RAW_CONNECTION_DISCONNECTED connector*
*2021-11-16 17:11:10.929915 -0500 TCP_ADAPTOR (debug) [C140] PN_RAW_CONNECTION_WAKE connector*

  was:
While running tcp stress tests against qdrouterd a crash occurred.  The crash 
was due to a stale pointer dereference.

qdrouterd code has been patched to properly clear the pointer and check for 
null in the effected codepath.  However...

... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
previously arrived on the raw connection.

IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last event 
generated on a raw connection, and once that event has been handled the raw 
connection is released.   If that is correct then the arrival of the following 
WAKE event is a bug.


> raw connection posts wake events after disconnect event is handled
> --
>
> Key: PROTON-2466
> URL: https://issues.apache.org/jira/browse/PROTON-2466
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.36.0
>Reporter: Ken Giusti
>Priority: Major
> Attachments: qdrouterd-A.conf, qdrouterd-B.conf
>
>
> While running tcp stress tests against qdrouterd a crash occurred.  The crash 
> was due to a stale pointer dereference.
> qdrouterd code has been patched to properly clear the pointer and check for 
> null in the effected codepath.  However...
> ... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
> arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
> previously arrived on the raw connection.
> IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last 
> event generated on a raw connection, and once that event has been handled the 
> raw connection is released.   If that is correct then the arrival of the 
> following WAKE event is a bug.
> Here is the log output from the router just prior to the crash (filtered on 
> the affected connection):
> $ tail C140.txt                                                               
>                                
> 2021-11-16 17:11:10.925728 -0500 

[jira] [Updated] (PROTON-2466) raw connection posts wake events after disconnect event is handled

2021-11-17 Thread Ken Giusti (Jira)


 [ 
https://issues.apache.org/jira/browse/PROTON-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ken Giusti updated PROTON-2466:
---
Attachment: qdrouterd-B.conf

> raw connection posts wake events after disconnect event is handled
> --
>
> Key: PROTON-2466
> URL: https://issues.apache.org/jira/browse/PROTON-2466
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.36.0
>Reporter: Ken Giusti
>Priority: Major
> Attachments: qdrouterd-A.conf, qdrouterd-B.conf
>
>
> While running tcp stress tests against qdrouterd a crash occurred.  The crash 
> was due to a stale pointer dereference.
> qdrouterd code has been patched to properly clear the pointer and check for 
> null in the effected codepath.  However...
> ... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
> arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
> previously arrived on the raw connection.
> IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last 
> event generated on a raw connection, and once that event has been handled the 
> raw connection is released.   If that is correct then the arrival of the 
> following WAKE event is a bug.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (PROTON-2466) raw connection posts wake events after disconnect event is handled

2021-11-17 Thread Ken Giusti (Jira)


 [ 
https://issues.apache.org/jira/browse/PROTON-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ken Giusti updated PROTON-2466:
---
Attachment: qdrouterd-A.conf

> raw connection posts wake events after disconnect event is handled
> --
>
> Key: PROTON-2466
> URL: https://issues.apache.org/jira/browse/PROTON-2466
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.36.0
>Reporter: Ken Giusti
>Priority: Major
> Attachments: qdrouterd-A.conf
>
>
> While running tcp stress tests against qdrouterd a crash occurred.  The crash 
> was due to a stale pointer dereference.
> qdrouterd code has been patched to properly clear the pointer and check for 
> null in the effected codepath.  However...
> ... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
> arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
> previously arrived on the raw connection.
> IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last 
> event generated on a raw connection, and once that event has been handled the 
> raw connection is released.   If that is correct then the arrival of the 
> following WAKE event is a bug.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (PROTON-2466) raw connection posts wake events after disconnect event is handled

2021-11-17 Thread Ken Giusti (Jira)


 [ 
https://issues.apache.org/jira/browse/PROTON-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ken Giusti updated PROTON-2466:
---
Attachment: (was: qdrouterd-A.conf)

> raw connection posts wake events after disconnect event is handled
> --
>
> Key: PROTON-2466
> URL: https://issues.apache.org/jira/browse/PROTON-2466
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.36.0
>Reporter: Ken Giusti
>Priority: Major
> Attachments: qdrouterd-A.conf
>
>
> While running tcp stress tests against qdrouterd a crash occurred.  The crash 
> was due to a stale pointer dereference.
> qdrouterd code has been patched to properly clear the pointer and check for 
> null in the effected codepath.  However...
> ... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
> arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
> previously arrived on the raw connection.
> IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last 
> event generated on a raw connection, and once that event has been handled the 
> raw connection is released.   If that is correct then the arrival of the 
> following WAKE event is a bug.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-2466) raw connection posts wake events after disconnect event is handled

2021-11-17 Thread Ken Giusti (Jira)
Ken Giusti created PROTON-2466:
--

 Summary: raw connection posts wake events after disconnect event 
is handled
 Key: PROTON-2466
 URL: https://issues.apache.org/jira/browse/PROTON-2466
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-c
Affects Versions: proton-c-0.36.0
Reporter: Ken Giusti
 Attachments: qdrouterd-A.conf

While running tcp stress tests against qdrouterd a crash occurred.  The crash 
was due to a stale pointer dereference.

qdrouterd code has been patched to properly clear the pointer and check for 
null in the affected codepath.  However...

... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
previously arrived on the raw connection.

IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last event 
generated on a raw connection, and once that event has been handled the raw 
connection is released.   If that is correct then the arrival of the following 
WAKE event is a bug.
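
For readers who have not seen the adaptor code, here is a minimal, 
self-contained C++ sketch of the defensive pattern the paragraph above refers 
to: clear the per-connection context when the DISCONNECTED event is handled, 
and ignore any later WAKE whose context is already gone. The types and names 
are illustrative only (this is not proton or qdrouterd code), and it works 
around the late WAKE rather than fixing the underlying event ordering:

{noformat}
#include <cstdio>

// Illustrative stand-ins for the raw-connection events named in the log above.
enum RawEventType { RAW_WAKE, RAW_DISCONNECTED };

struct RawConnection {
    void *context = nullptr;   // adaptor state attached to the raw connection
};

void handle_event(RawConnection &c, RawEventType t) {
    switch (t) {
    case RAW_DISCONNECTED:
        // Expected to be the last event: release adaptor state, clear the pointer.
        c.context = nullptr;
        std::puts("DISCONNECTED: context cleared");
        break;
    case RAW_WAKE:
        if (!c.context) {                 // a late WAKE after disconnect: ignore it
            std::puts("late WAKE ignored");
            break;
        }
        std::puts("WAKE handled");
        break;
    }
}

int main() {
    RawConnection c;
    int state = 0;
    c.context = &state;
    handle_event(c, RAW_WAKE);           // normal wake while connected
    handle_event(c, RAW_DISCONNECTED);   // releases state, clears the pointer
    handle_event(c, RAW_WAKE);           // the unexpected late wake described above
}
{noformat}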



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (PROTON-2466) raw connection posts wake events after disconnect event is handled

2021-11-17 Thread Ken Giusti (Jira)


 [ 
https://issues.apache.org/jira/browse/PROTON-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ken Giusti updated PROTON-2466:
---
Attachment: qdrouterd-A.conf

> raw connection posts wake events after disconnect event is handled
> --
>
> Key: PROTON-2466
> URL: https://issues.apache.org/jira/browse/PROTON-2466
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: proton-c-0.36.0
>Reporter: Ken Giusti
>Priority: Major
> Attachments: qdrouterd-A.conf
>
>
> While running tcp stress tests against qdrouterd a crash occurred.  The crash 
> was due to a stale pointer dereference.
> qdrouterd code has been patched to properly clear the pointer and check for 
> null in the effected codepath.  However...
> ... the access occurred while processing a PN_RAW_CONNECTION_WAKE event that 
> arrived on a raw connection *after* a PN_RAW_CONNECTION_DISCONNECTED event 
> previously arrived on the raw connection.
> IIUC the PN_RAW_CONNECTION_DISCONNECTED event is supposed to be the last 
> event generated on a raw connection, and once that event has been handled the 
> raw connection is released.   If that is correct then the arrival of the 
> following WAKE event is a bug.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-2267) Add a core-thread facility to allow IO modules to subscribe to address-reachability data

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/DISPATCH-2267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445173#comment-17445173
 ] 

ASF GitHub Bot commented on DISPATCH-2267:
--

ted-ross commented on a change in pull request #1435:
URL: https://github.com/apache/qpid-dispatch/pull/1435#discussion_r751254686



##
File path: include/qpid/dispatch/router_core.h
##
@@ -160,6 +169,64 @@ void qdr_send_to2(qdr_core_t *core, qd_message_t *msg, 
const char *addr,
   bool exclude_inprocess, bool control);
 
 
+/**
+ **
+ * Address watch functions
+ **
+ */
+
+typedef uint32_t qdr_watch_handle_t;
+
+/**
+ * Handler for updates on watched addresses.  This function shall be invoked 
on an IO thread.
+ * 
+ * Note:  This function will be invoked when a watched address has a change in 
reachability.
+ * It is possible that the function may be called when no change occurs, 
particularly when an
+ * address is removed from the core address table.
+ *
+ * @param context The opaque context supplied in the call to 
qdr_core_watch_address
+ * @param local_consumers Number of consuming (outgoing) links for this 
address on this router
+ * @param in_proc_consumers Number of in-process consumers for this address on 
this router
+ * @param remote_consumers Number of remote routers with consumers for this 
address
+ * @param local_producers Number of producing (incoming) links for this 
address on this router
+ */
+typedef void (*qdr_address_watch_update_t)(void *context,
+   uint32_t  local_consumers,
+   uint32_t  in_proc_consumers,
+   uint32_t  remote_consumers,
+   uint32_t  local_producers);
+
+/**
+ * qdr_core_watch_address
+ *
+ * Subscribe to watch for changes in the reachability for an address.  It is 
safe to invoke this
+ * function from an IO thread.
+ * 
+ * @param core Pointer to the core module
+ * @param address The address to be watched
+ * @param aclass Address class character
+ * @param phase Address phase character ('0' .. '9')
+ * @param on_watch The handler function
+ * @param context The opaque context sent to the handler on all invocations
+ * @return Watch handle to be used when canceling the watch
+ */
+qdr_watch_handle_t qdr_core_watch_address(qdr_core_t *core,
+  const char *address,
+  char aclass,
+  char phase,
+  qdr_address_watch_update_t  on_watch,
+  void   *context);
+
+/**
+ * qdr_core_unwatch_address
+ * 
+ * Cancel an address watch subscription.  It is safe to invoke this function 
from an IO thread.

Review comment:
   Yes, this is a good catch.  It will require a bit of re-work, but I'll 
make it so that this cannot happen.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add a core-thread facility to allow IO modules to subscribe to 
> address-reachability data
> 
>
> Key: DISPATCH-2267
> URL: https://issues.apache.org/jira/browse/DISPATCH-2267
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Reporter: Ted Ross
>Assignee: Ted Ross
>Priority: Major
> Fix For: 1.19.0
>
>
> Add a facility to the Core Thread that allows an IO-thread module to register 
> for updates about a particular address.  Callbacks into the IO-thread (on an 
> IO thread) shall inform the module about changes to the reachability of an 
> address.
> This can be used by a protocol listener to open or close the listening socket 
> for a protocol listener based on the availability of remote connectors.
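
To make the proposed facility easier to picture, here is a small, 
self-contained C++ model of the watch/unwatch pattern defined in the 
router_core.h review above. It is deliberately not the qpid-dispatch 
implementation - there is no core thread or IO-thread work queue, and the 
class and variable names are invented for this sketch - it only shows the 
register/callback/cancel flow an IO module (for example, a protocol listener 
deciding whether to keep its socket open) would follow:

{noformat}
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>

using watch_handle_t = uint32_t;

// Shape mirrors qdr_address_watch_update_t: consumer/producer counts for the address.
using watch_update_fn =
    std::function<void(void *context,
                       uint32_t local_consumers, uint32_t in_proc_consumers,
                       uint32_t remote_consumers, uint32_t local_producers)>;

class address_watch_registry {
  public:
    watch_handle_t watch(const std::string &address, watch_update_fn fn, void *context) {
        watch_handle_t h = next_handle_++;
        watches_[h] = entry{address, std::move(fn), context};
        return h;
    }

    void unwatch(watch_handle_t h) { watches_.erase(h); }

    // Invoked when the reachability of 'address' changes.
    void trigger(const std::string &address,
                 uint32_t lc, uint32_t ipc, uint32_t rc, uint32_t lp) {
        for (auto &kv : watches_)
            if (kv.second.address == address)
                kv.second.fn(kv.second.context, lc, ipc, rc, lp);
    }

  private:
    struct entry { std::string address; watch_update_fn fn; void *context; };
    std::map<watch_handle_t, entry> watches_;
    watch_handle_t next_handle_ = 0;
};

int main() {
    address_watch_registry reg;
    std::string listener = "tcp listener on :8000";

    watch_handle_t h = reg.watch("examples",
        [](void *ctx, uint32_t lc, uint32_t, uint32_t rc, uint32_t) {
            bool reachable = (lc + rc) > 0;   // e.g. open/close a listening socket on this
            std::cout << *static_cast<std::string *>(ctx)
                      << ": address reachable = " << std::boolalpha << reachable << '\n';
        },
        &listener);

    reg.trigger("examples", 1, 0, 2, 0);   // consumers appear somewhere in the network
    reg.trigger("examples", 0, 0, 0, 0);   // consumers go away
    reg.unwatch(h);
}
{noformat}

In the real code the registration and cancellation are marshalled to the core 
thread as actions, and the updates are posted back to IO threads as general 
work (see {{qdr_action_enqueue}} and {{qdr_post_general_work_CT}} in the 
address_watch.c diff further down).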



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org






[GitHub] [qpid-dispatch] ted-ross commented on a change in pull request #1435: DISPATCH-2267 - IO-thread facility to watch for changes to address reachability

2021-11-17 Thread GitBox


ted-ross commented on a change in pull request #1435:
URL: https://github.com/apache/qpid-dispatch/pull/1435#discussion_r751250679



##
File path: src/router_core/address_watch.c
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "router_core_private.h"
+#include "qpid/dispatch/amqp.h"
+
+struct qdr_address_watch_t {
+    DEQ_LINKS(struct qdr_address_watch_t);
+    qdr_watch_handle_t          watch_handle;
+    char                       *address_hash;
+    qdr_address_watch_update_t  handler;
+    void                       *context;
+};
+
+ALLOC_DECLARE(qdr_address_watch_t);
+ALLOC_DEFINE(qdr_address_watch_t);
+
+static void qdr_watch_invoker(qdr_core_t *core, qdr_general_work_t *work);
+static void qdr_core_watch_address_CT(qdr_core_t *core, qdr_action_t *action, bool discard);
+static void qdr_core_unwatch_address_CT(qdr_core_t *core, qdr_action_t *action, bool discard);
+static void qdr_address_watch_free_CT(qdr_address_watch_t *watch);
+
+//==
+// Core Interface Functions
+//==
+qdr_watch_handle_t qdr_core_watch_address(qdr_core_t                 *core,
+                                          const char                 *address,
+                                          char                        aclass,
+                                          char                        phase,
+                                          qdr_address_watch_update_t  on_watch,
+                                          void                       *context)
+{
+    static sys_atomic_t next_handle;
+    qdr_action_t *action = qdr_action(qdr_core_watch_address_CT, "watch_address");
+
+    action->args.io.address       = qdr_field(address);
+    action->args.io.address_class = aclass;
+    action->args.io.address_phase = phase;
+    action->args.io.watch_handler = on_watch;
+    action->args.io.context       = context;
+    action->args.io.value32_1     = sys_atomic_inc(&next_handle);
+
+    qdr_action_enqueue(core, action);
+    return action->args.io.value32_1;
+}
+
+
+void qdr_core_unwatch_address(qdr_core_t *core, qdr_watch_handle_t handle)
+{
+    qdr_action_t *action = qdr_action(qdr_core_unwatch_address_CT, "unwatch_address");
+
+    action->args.io.value32_1 = handle;
+    qdr_action_enqueue(core, action);
+}
+
+
+//==
+// In-Core API Functions
+//==
+void qdr_trigger_address_watch_CT(qdr_core_t *core, qdr_address_t *addr)
+{
+    const char          *address_hash = (char*) qd_hash_key_by_handle(addr->hash_handle);
+    qdr_address_watch_t *watch        = DEQ_HEAD(core->addr_watches);
+
+    while (!!watch) {
+        if (strcmp(watch->address_hash, address_hash) == 0) {
+            qdr_general_work_t *work = qdr_general_work(qdr_watch_invoker);
+            work->watch_handler     = watch->handler;
+            work->context           = watch->context;
+            work->local_consumers   = DEQ_SIZE(addr->rlinks);
+            work->in_proc_consumers = DEQ_SIZE(addr->subscriptions);
+            work->remote_consumers  = qd_bitmask_cardinality(addr->rnodes);
+            work->local_producers   = DEQ_SIZE(addr->inlinks);
+            qdr_post_general_work_CT(core, work);
+        }
+        watch = DEQ_NEXT(watch);
+    }
+}
+
+void qdr_address_watch_shutdown(qdr_core_t *core)
+{
+    qdr_address_watch_t *watch = DEQ_HEAD(core->addr_watches);
+    while (!!watch) {
+        DEQ_REMOVE(core->addr_watches, watch);
+        qdr_address_watch_free_CT(watch);
+        watch = DEQ_HEAD(core->addr_watches);
+    }
+}
+
+
+//==
+// Local Functions
+//==
+static void qdr_address_watch_free_CT(qdr_address_watch_t *watch)
+{
+    free(watch->address_hash);
+    free_qdr_address_watch_t(watch);
+}
+
+
+static void 

[jira] [Commented] (PROTON-2396) [cpp] Seed in uuid.cpp can lead to duplicates

2021-11-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PROTON-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17445148#comment-17445148
 ] 

ASF GitHub Bot commented on PROTON-2396:


DreamPearl commented on pull request #340:
URL: https://github.com/apache/qpid-proton/pull/340#issuecomment-971547101


   Now the build is passing. @astitcher Can you please take a look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [cpp] Seed in uuid.cpp can lead to duplicates
> -
>
> Key: PROTON-2396
> URL: https://issues.apache.org/jira/browse/PROTON-2396
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
> Environment: RHEL7 running in OpenStack
> docker-ce 19.03.5
> qpid-proton 0.28.0
> qpid-cpp 1.37.0
>Reporter: Ryan Herbert
>Assignee: Rakhi Kumari
>Priority: Major
>
> The random number seed used in qpid-proton/cpp/src/uuid.cpp is based on the
> current time and the PID of the running process.  When multiple proton
> instances are started simultaneously in Docker containers via automated
> deployment, there is a high probability that several instances will get the
> same seed, because the PID inside a Docker container is the same across
> multiple copies of the same container.
> This results in duplicate link names when binding to exchanges. When this 
> happens, the queue gets bound to two different exchanges, and requests sent 
> to one exchange will get responses from both services.
> To work around this error, we are specifying the link name via 
> sender_options/receiver_options every time we open a new sender/receiver, and 
> we also specify the container_id in connection_options.  We are using 
> std::mt19937_64 seeded with 
> std::chrono::system_clock::now().time_since_epoch().count() to generate the 
> random part of our link names, which seems to have enough randomness that it 
> has eliminated the problem for us.
> As pointed out in the Proton user forum, std::random_device is probably a 
> better choice for initializing the seed.
>  
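As a concrete illustration of the workaround described above (not the
reporter's actual code): a wall-clock-seeded std::mt19937_64 can generate
explicit link names with a random suffix, which are then supplied through
sender_options/receiver_options when each sender or receiver is opened. The
helper name unique_link_name below is purely illustrative.

    // Sketch of the reported workaround: explicit link names with a random
    // suffix drawn from a std::mt19937_64 seeded by the wall clock, avoiding
    // the PID-derived component that collides across identical containers.
    #include <chrono>
    #include <cstdint>
    #include <random>
    #include <sstream>
    #include <string>

    std::string unique_link_name(const std::string& prefix) {
        // Seeded once per process from the system clock, as described above.
        static std::mt19937_64 rng(static_cast<std::uint64_t>(
            std::chrono::system_clock::now().time_since_epoch().count()));
        std::ostringstream out;
        out << prefix << '-' << std::hex << rng();
        return out.str();
    }

Each call appends a fresh 64-bit value in hex, so two processes collide only
if both their clock-derived seeds and their draw positions coincide, which the
reporter found sufficient in practice.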
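The std::random_device suggestion in the last paragraph can be sketched as
follows; this is a minimal illustration, not the actual uuid.cpp change, and
the function name make_seeded_engine is invented for the example.

    // Minimal sketch: seed the engine from std::random_device rather than
    // time + PID, so containers started at the same instant with the same
    // PID still receive different seeds (on platforms where random_device
    // is genuinely non-deterministic).
    #include <random>

    std::mt19937_64 make_seeded_engine() {
        std::random_device rd;
        // Draw several words to fill more of the engine's large internal state.
        std::seed_seq seq{rd(), rd(), rd(), rd(), rd(), rd(), rd(), rd()};
        return std::mt19937_64(seq);
    }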



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] [qpid-proton] DreamPearl commented on pull request #340: PROTON-2396: Use random_device for seed initialization in uuid.cpp

2021-11-17 Thread GitBox


DreamPearl commented on pull request #340:
URL: https://github.com/apache/qpid-proton/pull/340#issuecomment-971547101


   Now the build is passing. @astitcher Can you please take a look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org