[ https://issues.apache.org/jira/browse/PROTON-2543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545110#comment-17545110 ]

Fredrik Hallenberg commented on PROTON-2543:
--------------------------------------------

 * CPU hardware type and model: Intel Xeon
 * OS and version: CentOS 7 (Linux 3.10)
 * Compiler (gcc/clang/other): gcc 8
 * Number of concurrent threads servicing proactor event batches: single thread (also tested with various numbers of threads)
 * Number of active proactors in failing process (usually 1): 1
 * Running on bare hardware, VM, container: Docker containers; I believe it also happened on bare hardware
 * Crash occurs during main operation or on shutdown (or both): main operation
 * Types of connections and listeners: many short-lived incoming connections to one listener, usually running on a virtual network between a few containers

> Crash in epoll.c resched_pop_front
> ----------------------------------
>
>                 Key: PROTON-2543
>                 URL: https://issues.apache.org/jira/browse/PROTON-2543
>             Project: Qpid Proton
>          Issue Type: Bug
>          Components: proton-c
>            Reporter: Fredrik Hallenberg
>            Assignee: Clifford Jansen
>            Priority: Major
>         Attachments: qpid-epoll-crash.patch
>
>
> During stress testing it is fairly easy to reproduce a segfault in 
> resched_pop_front. Using gdb it is easy to see that polled_resched_front can 
> be zero when entering this function, which causes the value to wrap and then 
> a crash in later calls.
> polled_resched_front is not checked before calling this function in one 
> instance; the trivial fix of checking this value, seen in the attached 
> patch, seems to work.
> Tested with Qpid Proton C++ 0.37.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
