The branch, master has been updated
       via  acd9248b13c tevent: version 0.16.0
       via  407cda2f3b7 tevent: add support for TEVENT_FD_ERROR
       via  55f25eb34bb tevent: add test_event_fd3
       via  a76056fafb4 tevent: add test_fd_speed3
       via  28bf51fc657 tevent: let tevent_epoll.c use new generic mpx infrastructure
       via  b328e990651 tevent: add tevent_common_fd_mpx infrastructure
       via  95d6600a066 tevent: split out a tevent_common_fd_disarm() helper
       via  7672a29febe ldb: sync DLIST_DEMOTE_SHORT() changes to include/dlinklist.h
       via  4fe39d9e7c9 lib/util: sync DLIST_DEMOTE_SHORT() changes to dlinklist.h
       via  30d22631a6b tevent: introduce DLIST_DEMOTE_SHORT()
      from  d895c98c507 wintest: Fix invalid escape sequences
https://git.samba.org/?p=samba.git;a=shortlog;h=master

- Log -----------------------------------------------------------------
commit acd9248b13cba06d5b748f17aa9bc5d62079d9cc
Author: Stefan Metzmacher <me...@samba.org>
Date:   Wed Jul 19 23:04:01 2023 +0200

    tevent: version 0.16.0

    - the epoll backend is no longer limited to 2 event handlers
      per low level fd.
    - finally add support for TEVENT_FD_ERROR

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

    Autobuild-User(master): Ralph Böhme <s...@samba.org>
    Autobuild-Date(master): Fri Oct 13 10:45:51 UTC 2023 on atb-devel-224

commit 407cda2f3b7738d3690daeb8d679898f78ef3b74
Author: Stefan Metzmacher <me...@samba.org>
Date:   Wed Jul 13 09:46:26 2011 +0200

    tevent: add support for TEVENT_FD_ERROR

    After 12 years we finally got TEVENT_FD_ERROR support :-)

    TEVENT_FD_WRITE event handlers never get errors reported;
    instead the event handler is silently disabled. There are
    likely callers relying on that behavior, so we are not able
    to change it.

    Now TEVENT_FD_WRITE can be used together with TEVENT_FD_ERROR
    in order to get errors reported without waiting for
    TEVENT_FD_READ.

    TEVENT_FD_ERROR can also be used alone in order to detect
    errors on sockets and clean up resources.

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit 55f25eb34bb7994e4410899b86cd6df44b2d1fb7
Author: Stefan Metzmacher <me...@samba.org>
Date:   Wed Dec 28 16:54:24 2022 +0100

    tevent: add test_event_fd3

    This tests the interaction of multiple event handlers
    on the same low level fd.

    It shows that the poll and epoll backends behave in the
    same fair way.
    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit a76056fafb489624eb3bb451f373b256b8895ec5
Author: Stefan Metzmacher <me...@samba.org>
Date:   Mon Apr 24 14:37:38 2023 +0000

    tevent: add test_fd_speed3

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit 28bf51fc657179de020716a486aa1651143529a8
Author: Stefan Metzmacher <me...@samba.org>
Date:   Fri Nov 11 22:30:35 2022 +0100

    tevent: let tevent_epoll.c use new generic mpx infrastructure

    This allows any number of event handlers per low level fd.
    It means the epoll backend now behaves like the poll backend.

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit b328e990651a3182bba3e4e8d8b91eed457bd8a1
Author: Stefan Metzmacher <me...@samba.org>
Date:   Wed Nov 9 22:48:10 2022 +0100

    tevent: add tevent_common_fd_mpx infrastructure

    Backends may need to map individual tevent_fd instances to a
    single low level kernel state (e.g. for epoll). This generic
    infrastructure adds helper functions using a generic (sub)part
    of struct tevent_fd.

    The new code will allow us to support more than 2 tevent_fd
    instances per fd, which makes sure all backends can provide
    similar behavior. This will be important when we add
    TEVENT_FD_ERROR as a 3rd kind of fd event.

    The aim is to use this in order to replace the limited
    implementation we already have in tevent_epoll.c.

    As these helpers are typically called from within
    'void tevent_fd_set_flags(struct tevent_fd *fde, uint16_t flags)'
    there's no way to report errors. So in order to avoid additional
    error handling complexity, the helpers try to avoid any
    allocations which may fail. It also means the logic in
    tevent_epoll.c doesn't have to change much.

    These are implemented as static inline functions in order to
    avoid the function call overhead, which showed up in profiles
    of the early implementation.
    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit 95d6600a0668b8abac53cbe2085236b31d652b66
Author: Stefan Metzmacher <me...@samba.org>
Date:   Thu Aug 31 18:09:28 2023 +0200

    tevent: split out a tevent_common_fd_disarm() helper

    It means tevent_trace_fd_callback(TEVENT_EVENT_TRACE_DETACH)
    is always called, and similar future changes are only needed
    in one place.

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit 7672a29febe9151b4435fae9d6b21a82205d911f
Author: Stefan Metzmacher <me...@samba.org>
Date:   Mon Jan 30 16:10:07 2023 +0100

    ldb: sync DLIST_DEMOTE_SHORT() changes to include/dlinklist.h

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit 4fe39d9e7c91c43ab4e1ecd3fbaac09987fea45d
Author: Stefan Metzmacher <me...@samba.org>
Date:   Mon Jan 30 16:10:07 2023 +0100

    lib/util: sync DLIST_DEMOTE_SHORT() changes to dlinklist.h

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit 30d22631a6b2e9d5a8df17e7f3f231cc6020cb30
Author: Stefan Metzmacher <me...@samba.org>
Date:   Mon Jan 30 16:10:07 2023 +0100

    tevent: introduce DLIST_DEMOTE_SHORT()

    It turns out that the overhead of DLIST_DEMOTE(), implemented
    as DLIST_REMOVE(); DLIST_ADD_END(), is very high if the list
    contains only 1 or 2 elements.

    The next commits will make use of DLIST_DEMOTE_SHORT() for
    multiplexing multiple tevent_fd structures for a single fd,
    where the most important and common case is a list with just
    one element.
    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

-----------------------------------------------------------------------

Summary of changes:
 lib/ldb/include/dlinklist.h                        |  21 +
 .../ABI/{tevent-0.15.0.sigs => tevent-0.16.0.sigs} |   0
 lib/tevent/testsuite.c                             | 625 +++++++++++++++++++++
 lib/tevent/tevent.c                                |   5 +-
 lib/tevent/tevent.h                                |  36 +-
 lib/tevent/tevent_dlinklist.h                      |  21 +
 lib/tevent/tevent_epoll.c                          | 608 ++++++++++----------
 lib/tevent/tevent_fd.c                             |  18 +-
 lib/tevent/tevent_internal.h                       | 387 +++++++++++++
 lib/tevent/tevent_poll.c                           |  96 ++--
 lib/tevent/tevent_wrapper.c                        |   4 +-
 lib/tevent/wscript                                 |   2 +-
 lib/util/dlinklist.h                               |  21 +
 13 files changed, 1483 insertions(+), 361 deletions(-)
 copy lib/tevent/ABI/{tevent-0.15.0.sigs => tevent-0.16.0.sigs} (100%)

Changeset truncated at 500 lines:

diff --git a/lib/ldb/include/dlinklist.h b/lib/ldb/include/dlinklist.h
index a775e8dcdc1..49a135a23bd 100644
--- a/lib/ldb/include/dlinklist.h
+++ b/lib/ldb/include/dlinklist.h
@@ -156,6 +156,27 @@ do { \
 	DLIST_ADD_END(list, p); \
 } while (0)
 
+/*
+ * like DLIST_DEMOTE(), but optimized
+ * for short lists with 0, 1 or 2 elements
+ */
+#define DLIST_DEMOTE_SHORT(list, p) \
+do { \
+	if ((list) == NULL) { \
+		/* no reason to demote, just add */ \
+		DLIST_ADD(list, p); \
+	} else if ((list)->prev == (p)) { \
+		/* optimize if p is last */ \
+	} else if ((list) == (p)) { \
+		/* optimize if p is first */ \
+		(list)->prev->next = (p); \
+		(list) = (p)->next; \
+		(p)->next = NULL; \
+	} else { \
+		DLIST_DEMOTE(list, p); \
+	} \
+} while (0)
+
 /*
    concatenate two lists - putting all elements of the 2nd list at the end of the first list.
diff --git a/lib/tevent/ABI/tevent-0.15.0.sigs b/lib/tevent/ABI/tevent-0.16.0.sigs
similarity index 100%
copy from lib/tevent/ABI/tevent-0.15.0.sigs
copy to lib/tevent/ABI/tevent-0.16.0.sigs
diff --git a/lib/tevent/testsuite.c b/lib/tevent/testsuite.c
index c5f7ef32146..e0881661756 100644
--- a/lib/tevent/testsuite.c
+++ b/lib/tevent/testsuite.c
@@ -32,6 +32,7 @@
 #include "system/network.h"
 #include "torture/torture.h"
 #include "torture/local/proto.h"
+#include "lib/util/blocking.h"
 #ifdef HAVE_PTHREAD
 #include "system/threads.h"
 #include <assert.h>
@@ -97,6 +98,22 @@ static void do_write(int fd, void *buf, size_t count)
 	} while (ret == -1 && errno == EINTR);
 }
 
+static void do_fill(int fd)
+{
+	uint8_t buf[1024] = {0, };
+	ssize_t ret;
+
+	set_blocking(fd, false);
+
+	do {
+		do {
+			ret = write(fd, buf, ARRAY_SIZE(buf));
+		} while (ret == -1 && errno == EINTR);
+	} while (ret == ARRAY_SIZE(buf));
+
+	set_blocking(fd, true);
+}
+
 static void fde_handler_write(struct tevent_context *ev_ctx, struct tevent_fd *f,
 			      uint16_t flags, void *private_data)
 {
@@ -367,6 +384,12 @@ static bool test_fd_speed2(struct torture_context *test,
 	return test_fd_speedX(test, test_data, 1);
 }
 
+static bool test_fd_speed3(struct torture_context *test,
+			   const void *test_data)
+{
+	return test_fd_speedX(test, test_data, 2);
+}
+
 struct test_event_fd1_state {
 	struct torture_context *tctx;
 	const char *backend;
@@ -827,6 +850,600 @@ static bool test_event_fd2(struct torture_context *tctx,
 	return true;
 }
 
+struct test_event_fd3_state {
+	struct torture_context *tctx;
+	const char *backend;
+	struct tevent_context *ev;
+	struct timeval start_time;
+	struct tevent_timer *te1, *te2, *te3, *te4, *te5;
+	struct test_event_fd3_sock {
+		struct test_event_fd3_state *state;
+		const char *sock_name;
+		int fd;
+		const char *phase_name;
+		uint64_t iteration_id;
+		uint64_t max_iterations;
+		uint16_t expected_flags;
+		uint8_t expected_count;
+		uint8_t actual_count;
+		struct test_event_fd3_fde {
+			struct test_event_fd3_sock *sock;
+			struct tevent_fd *fde;
+			uint64_t last_iteration_id;
+		} fde1, fde2, fde3, fde4, fde5, fde6, fde7, fde8, fde9;
+		void (*fde_callback)(struct test_event_fd3_fde *tfde,
+				     uint16_t flags);
+	} sock0, sock1;
+	bool finished;
+	const char *error;
+};
+
+static void test_event_fd3_fde_callback(struct test_event_fd3_fde *tfde,
+					uint16_t flags)
+{
+	struct test_event_fd3_sock *sock = tfde->sock;
+	struct test_event_fd3_state *state = sock->state;
+	uint16_t fde_flags = tevent_fd_get_flags(tfde->fde);
+	uint16_t expected_flags = sock->expected_flags & fde_flags;
+
+	if (expected_flags == 0) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	if (flags != expected_flags) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	if (tfde->last_iteration_id == sock->iteration_id) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	tfde->last_iteration_id = sock->iteration_id;
+
+	sock->actual_count += 1;
+
+	if (sock->actual_count > sock->expected_count) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	if (sock->actual_count == sock->expected_count) {
+		sock->actual_count = 0;
+		sock->iteration_id += 1;
+	}
+
+	if (sock->iteration_id > sock->max_iterations) {
+		torture_comment(state->tctx,
+				"%s: phase[%s] finished with %"PRIu64" iterations\n",
+				sock->sock_name,
+				sock->phase_name,
+				sock->max_iterations);
+		tevent_fd_set_flags(sock->fde1.fde, 0);
+		tevent_fd_set_flags(sock->fde2.fde, 0);
+		tevent_fd_set_flags(sock->fde3.fde, 0);
+		tevent_fd_set_flags(sock->fde4.fde, 0);
+		tevent_fd_set_flags(sock->fde5.fde, 0);
+		tevent_fd_set_flags(sock->fde6.fde, 0);
+		tevent_fd_set_flags(sock->fde7.fde, 0);
+		tevent_fd_set_flags(sock->fde8.fde, 0);
+		tevent_fd_set_flags(sock->fde9.fde, 0);
+		sock->fde_callback = NULL;
+	}
+}
+
+static void test_event_fd3_prepare_phase(struct test_event_fd3_sock *sock,
+					 const char *phase_name,
+					 uint64_t max_iterations,
+					 uint16_t expected_flags,
+					 uint8_t expected_count,
+					 uint16_t flags1,
+					 uint16_t flags2,
+					 uint16_t flags3,
+					 uint16_t flags4,
+					 uint16_t flags5,
+					 uint16_t flags6,
+					 uint16_t flags7,
+					 uint16_t flags8,
+					 uint16_t flags9)
+{
+	struct test_event_fd3_state *state = sock->state;
+
+	if (sock->fde_callback != NULL) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	sock->phase_name = phase_name;
+	sock->max_iterations = max_iterations;
+	sock->expected_flags = expected_flags;
+	sock->expected_count = expected_count;
+	sock->iteration_id = 1;
+	sock->actual_count = 0;
+
+	tevent_fd_set_flags(sock->fde1.fde, flags1);
+	sock->fde1.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde2.fde, flags2);
+	sock->fde2.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde3.fde, flags3);
+	sock->fde3.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde4.fde, flags4);
+	sock->fde4.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde5.fde, flags5);
+	sock->fde5.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde6.fde, flags6);
+	sock->fde6.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde7.fde, flags7);
+	sock->fde7.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde8.fde, flags8);
+	sock->fde8.last_iteration_id = 0;
+	tevent_fd_set_flags(sock->fde9.fde, flags9);
+	sock->fde9.last_iteration_id = 0;
+
+	sock->fde_callback = test_event_fd3_fde_callback;
+}
+
+static void test_event_fd3_sock_handler(struct tevent_context *ev_ctx,
+					struct tevent_fd *fde,
+					uint16_t flags,
+					void *private_data)
+{
+	struct test_event_fd3_fde *tfde =
+		(struct test_event_fd3_fde *)private_data;
+	struct test_event_fd3_sock *sock = tfde->sock;
+	struct test_event_fd3_state *state = sock->state;
+
+	if (sock->fd == -1) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	if (sock->fde_callback == NULL) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	sock->fde_callback(tfde, flags);
+	return;
+}
+
+static bool test_event_fd3_assert_timeout(struct test_event_fd3_state *state,
+					  double expected_elapsed,
+					  const char *func)
+{
+	double e = timeval_elapsed(&state->start_time);
+	double max_latency = 0.05;
+
+	if (e < expected_elapsed) {
+		torture_comment(state->tctx,
+				"%s: elapsed=%.6f < expected_elapsed=%.6f\n",
+				func, e, expected_elapsed);
+		state->finished = true;
+		state->error = __location__;
+		return false;
+	}
+
+	if (e > (expected_elapsed + max_latency)) {
+		torture_comment(state->tctx,
+				"%s: elapsed=%.6f > "
+				"(expected_elapsed=%.6f + max_latency=%.6f)\n",
+				func, e, expected_elapsed, max_latency);
+		state->finished = true;
+		state->error = __location__;
+		return false;
+	}
+
+	torture_comment(state->tctx, "%s: elapsed=%.6f\n", __func__, e);
+	return true;
+}
+
+static void test_event_fd3_writeable(struct tevent_context *ev_ctx,
+				     struct tevent_timer *te,
+				     struct timeval tval,
+				     void *private_data)
+{
+	struct test_event_fd3_state *state =
+		(struct test_event_fd3_state *)private_data;
+
+	if (!test_event_fd3_assert_timeout(state, 1, __func__)) {
+		return;
+	}
+
+	test_event_fd3_prepare_phase(&state->sock0,
+				     __func__,
+				     INT8_MAX,
+				     TEVENT_FD_WRITE,
+				     5,
+				     TEVENT_FD_WRITE,
+				     0,
+				     TEVENT_FD_READ,
+				     TEVENT_FD_WRITE,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE,
+				     TEVENT_FD_READ,
+				     TEVENT_FD_WRITE,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE,
+				     0);
+
+	test_event_fd3_prepare_phase(&state->sock1,
+				     __func__,
+				     INT8_MAX,
+				     TEVENT_FD_WRITE,
+				     9,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR);
+}
+
+static void test_event_fd3_readable(struct tevent_context *ev_ctx,
+				    struct tevent_timer *te,
+				    struct timeval tval,
+				    void *private_data)
+{
+	struct test_event_fd3_state *state =
+		(struct test_event_fd3_state *)private_data;
+	uint8_t c = 0;
+
+	if (!test_event_fd3_assert_timeout(state, 2, __func__)) {
+		return;
+	}
+
+	do_write(state->sock0.fd, &c, 1);
+	do_write(state->sock1.fd, &c, 1);
+
+	test_event_fd3_prepare_phase(&state->sock0,
+				     __func__,
+				     INT8_MAX,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE,
+				     9,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR);
+
+	test_event_fd3_prepare_phase(&state->sock1,
+				     __func__,
+				     INT8_MAX,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE,
+				     7,
+				     TEVENT_FD_READ,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     0,
+				     TEVENT_FD_READ,
+				     TEVENT_FD_WRITE,
+				     TEVENT_FD_ERROR,
+				     TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE);
+}
+
+static void test_event_fd3_not_writeable(struct tevent_context *ev_ctx,
+					 struct tevent_timer *te,
+					 struct timeval tval,
+					 void *private_data)
+{
+	struct test_event_fd3_state *state =
+		(struct test_event_fd3_state *)private_data;
+
+	if (!test_event_fd3_assert_timeout(state, 3, __func__)) {
+		return;
+	}
+
+	do_fill(state->sock0.fd);
+	do_fill(state->sock1.fd);
+
+	test_event_fd3_prepare_phase(&state->sock0,
+				     __func__,
+				     INT8_MAX,
+				     TEVENT_FD_READ,
+				     5,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE,
+				     TEVENT_FD_WRITE,
+				     TEVENT_FD_READ,
+				     0,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_ERROR,
+				     TEVENT_FD_ERROR,
+				     TEVENT_FD_READ);
+
+	test_event_fd3_prepare_phase(&state->sock1,
+				     __func__,
+				     INT8_MAX,
+				     TEVENT_FD_READ,
+				     9,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR);
+}
+
+static void test_event_fd3_off(struct tevent_context *ev_ctx,
+			       struct tevent_timer *te,
+			       struct timeval tval,
+			       void *private_data)
+{
+	struct test_event_fd3_state *state =
+		(struct test_event_fd3_state *)private_data;
+
+	if (!test_event_fd3_assert_timeout(state, 4, __func__)) {
+		return;
+	}
+
+	TALLOC_FREE(state->sock0.fde1.fde);
+	state->sock0.fd = -1;
+
+	test_event_fd3_prepare_phase(&state->sock1,
+				     __func__,
+				     INT8_MAX,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     7,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE,
+				     TEVENT_FD_WRITE,
+				     TEVENT_FD_READ,
+				     0,
+				     TEVENT_FD_READ|TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_WRITE|TEVENT_FD_ERROR,
+				     TEVENT_FD_READ|TEVENT_FD_ERROR,
+				     TEVENT_FD_ERROR,
+				     TEVENT_FD_READ);
+}
+
+static void test_event_fd3_finished(struct tevent_context *ev_ctx,
+				    struct tevent_timer *te,
+				    struct timeval tval,
+				    void *private_data)
+{
+	struct test_event_fd3_state *state =
+		(struct test_event_fd3_state *)private_data;
+
+	if (!test_event_fd3_assert_timeout(state, 5, __func__)) {
+		return;
+	}
+
+	/*
+	 * this should never be triggered
+	 */
+	if (state->sock0.fde_callback != NULL) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+	if (state->sock1.fde_callback != NULL) {
+		state->finished = true;
+		state->error = __location__;
+		return;
+	}
+
+	state->finished = true;
+}
+
+static bool test_event_fd3(struct torture_context *tctx,
+			   const void *test_data)
+{
+	struct test_event_fd3_state state = {
+		.tctx = tctx,
+		.backend = (const char *)test_data,
+	};
+	int rc;
+	int sock[2];
+
+	state.ev = test_tevent_context_init_byname(tctx, state.backend);
+	if (state.ev == NULL) {
+		torture_skip(tctx, talloc_asprintf(tctx,
+			     "event backend '%s' not supported\n",
+			     state.backend));
+		return true;
+	}
+
+	torture_comment(tctx, "backend '%s' - %s\n",
+			state.backend, __FUNCTION__);

-- 
Samba Shared Repository