BewareMyPower opened a new pull request, #168: URL: https://github.com/apache/pulsar-client-cpp/pull/168
Fixes https://github.com/apache/pulsar-client-cpp/issues/167

### Motivation

Here is some debugging info from when the segfault happened in `testCloseClient`. The outputs have been trimmed for clarity.

An example crash at `async_write`:

```
#12 0x00007ffff7496dad in basic_stream_socket<...>::boost::asio::async_write /usr/include/boost/asio/impl/write.hpp:512
#13 0x00007ffff748e003 in ClientConnection::asyncWrite lib/ClientConnection.h:245
#14 0x00007ffff746e0b6 in ClientConnection::handleHandshake (this=0x555555e689d0) lib/ClientConnection.cc:502
```

Another example crash at `async_receive`:

```
#6 0x00007ffff7497247 in basic_stream_socket<...>::async_receive /usr/include/boost/asio/basic_stream_socket.hpp:677
#7 0x00007ffff748e647 in ClientConnection::asyncReceive lib/ClientConnection.h:258
#8 0x00007ffff746fa5d in ClientConnection::readNextCommand lib/ClientConnection.cc:606
```

The frame where it crashed:

```
245    if (descriptor_data->shutdown_)
(gdb) p descriptor_data
$2 = (boost::asio::detail::epoll_reactor::per_descriptor_data &) @0x555555e4a780: 0x0
```

We can see the socket descriptor is `nullptr`. The root cause is that when `async_receive` or `async_write` is called, the `io_service` object might already be closed. This happens when `createProducerAsync` is called: the actual producer creation continues in another thread, while `client.close()` runs in the current thread.

### Modifications

Check whether the `ClientConnection` is closed before calling `async_receive` or `async_write`. To avoid using a lock, change the `state_` field to an atomic.

### Verifications

```bash
./tests/pulsar-tests --gtest_filter='ClientTest.testCloseClient' --gtest_repeat=20
```

It never crashed after applying this patch.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected] For queries about this service, please contact Infrastructure at: [email protected]
