The branch, master has been updated
       via  621349d s3/rpc_server: Character Encode Spotlight Queries
       via  1a9d6ce s3:messages: make the loop in msg_dgm_ref_recv() more robust against stale pointers
       via  0503bba s4:messaging: add local.messaging.multi_ctx.multi_ctx test
       via  d08efa7 python/tests: make the test_assoc_group_fail2() test more resilient against timing
       via  93f6163 ctdb: close the correct pipe fd in a test
      from  b12f6c6 WHATSNEW: add entries for audit logging and lmdb
https://git.samba.org/?p=samba.git;a=shortlog;h=master

- Log -----------------------------------------------------------------
commit 621349d559053e86064d2502d114d107914c189f
Author: Ralph Boehme <s...@samba.org>
Date:   Wed Mar 15 13:38:19 2017 +0100

    s3/rpc_server: Character Encode Spotlight Queries

    Fix path escaping in Spotlight so paths with spaces or special
    characters can be properly matched to tracker paths.

    Bug: https://bugzilla.samba.org/show_bug.cgi?id=12688

    Based-on-a-patch-from: Mike M Pestorich <mmpestor...@gmail.com>
    (similar to github.com/netatalk/netatalk/commit/90aa43d)

    Signed-off-by: Ralph Boehme <s...@samba.org>
    Reviewed-by: Jeremy Allison <j...@samba.org>

    Autobuild-User(master): Jeremy Allison <j...@samba.org>
    Autobuild-Date(master): Tue Jul 10 23:17:20 CEST 2018 on sn-devel-144

commit 1a9d6ce58939678f88b3081fb91c3309ff3cddb7
Author: Stefan Metzmacher <me...@samba.org>
Date:   Mon Jul 9 12:33:34 2018 +0200

    s3:messages: make the loop in msg_dgm_ref_recv() more robust against
    stale pointers

    The interaction between msg_dgm_ref_recv() and msg_dgm_ref_destructor()
    doesn't allow two references from messaging_dgm_ref() to be free'd
    during the loop in msg_dgm_ref_recv().

    In addition to the global 'refs' list, we also need to have a global
    'next_ref' pointer, which can be adjusted in msg_dgm_ref_destructor().

    As AD DC we hit this when using irpc in auth_winbind, which uses
    imessaging_client_init(). In addition to the main messaging_dgm_ref()
    in smbd, source3/auth/auth_samba4.c: prepare_gensec() and
    make_auth4_context_s4() also generate a temporary imessaging_context
    for auth_context->msg_ctx from within auth_generic_prepare().
    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13514

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Jeremy Allison <j...@samba.org>

commit 0503bbab958754bc8ba32da8578602927ebf25c0
Author: Stefan Metzmacher <me...@samba.org>
Date:   Tue Jul 10 16:21:55 2018 +0200

    s4:messaging: add local.messaging.multi_ctx.multi_ctx test

    This tests the usage of multiple imessaging_contexts in one process
    and also freeing two of them during a message handler.

    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13514

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Jeremy Allison <j...@samba.org>

commit d08efa7f90e97f1f797d09f7519076646110cb45
Author: Stefan Metzmacher <me...@samba.org>
Date:   Thu Jun 21 06:31:03 2018 +0200

    python/tests: make the test_assoc_group_fail2() test more resilient
    against timing

    On a busy system [e]poll() on the server will mark both the old
    connection fd and also the listening fd as readable. epoll() returns
    the events in order, so the server processes the disconnect first.
    With poll() we don't have an order of the events and the server is
    likely to process the connect before the disconnect.

    Signed-off-by: Stefan Metzmacher <me...@samba.org>
    Reviewed-by: Ralph Boehme <s...@samba.org>

commit 93f61639a64f0b3156437b75c31330aa189d3db7
Author: Ralph Boehme <s...@samba.org>
Date:   Tue Jun 19 10:35:04 2018 +0200

    ctdb: close the correct pipe fd in a test

    This was discovered in an autobuild with a patched tevent that used
    the "poll" backend by default.

    Test failure:

    $ bin/sock_daemon_test /dev/shm/sock_daemon_test.pid /dev/shm/sock_daemon_test.sock 5
    test5[28011]: daemon started, pid=28011
    test5[28011]: listening on /dev/shm/sock_daemon_test.sock
    sock_daemon_test: ../ctdb/tests/src/sock_daemon_test.c:980: test5: Assertion `ret == i+1' failed.
    Aborted (core dumped)
    metze@SERNOX14:~/devel/samba/4.0/master4-test$ test5[28011]: PID 28010 gone away, exiting
    test5[28011]: Shutting down
    sock_daemon_test: ../ctdb/tests/src/sock_daemon_test.c:964: test5: Assertion `ret == EINTR' failed.

    After an epic debugging session we spotted the problem.

    Signed-off-by: Ralph Boehme <s...@samba.org>
    Reviewed-by: Stefan Metzmacher <me...@samba.org>

-----------------------------------------------------------------------

Summary of changes:
 ctdb/tests/src/sock_daemon_test.c         |   2 +-
 python/samba/tests/dcerpc/raw_protocol.py |   3 +
 source3/lib/messages_dgm_ref.c            |  12 +-
 source3/rpc_server/mdssvc/mdssvc.c        |  16 ++-
 source4/lib/messaging/tests/messaging.c   | 221 ++++++++++++++++++++++++++++++
 5 files changed, 247 insertions(+), 7 deletions(-)


Changeset truncated at 500 lines:

diff --git a/ctdb/tests/src/sock_daemon_test.c b/ctdb/tests/src/sock_daemon_test.c
index 916ba22..80004b4 100644
--- a/ctdb/tests/src/sock_daemon_test.c
+++ b/ctdb/tests/src/sock_daemon_test.c
@@ -759,7 +759,7 @@ static int test5_client(const char *sockpath, int id, pid_t pid_server,
 		tevent_loop_once(ev);
 	}
 
-	close(fd[0]);
+	close(fd[1]);
 	state.fd = -1;
 
 	while (kill(pid_server, 0) == 0 || errno != ESRCH) {
diff --git a/python/samba/tests/dcerpc/raw_protocol.py b/python/samba/tests/dcerpc/raw_protocol.py
index ff815e9..7cc3a4a 100755
--- a/python/samba/tests/dcerpc/raw_protocol.py
+++ b/python/samba/tests/dcerpc/raw_protocol.py
@@ -18,6 +18,7 @@
 
 import sys
 import os
+import time
 
 sys.path.insert(0, "bin/python")
 os.environ["PYTHONUNBUFFERED"] = "1"
@@ -4930,6 +4931,8 @@ class TestDCERPC_BIND(RawDCERPCTest):
         ack = self.do_generic_bind(ctx=ctx)
 
         self._disconnect("test_assoc_group_fail2")
+        self.assertNotConnected()
+        time.sleep(0.5)
         self.connect()
 
         ack2 = self.do_generic_bind(ctx=ctx,assoc_group_id=ack.u.assoc_group_id,
diff --git a/source3/lib/messages_dgm_ref.c b/source3/lib/messages_dgm_ref.c
index 39d2270..470dfbe 100644
--- a/source3/lib/messages_dgm_ref.c
+++ b/source3/lib/messages_dgm_ref.c
@@ -35,6 +35,7 @@ struct msg_dgm_ref {
 
 static pid_t dgm_pid = 0;
 static struct msg_dgm_ref *refs = NULL;
+static struct msg_dgm_ref *next_ref = NULL;
 
 static int msg_dgm_ref_destructor(struct msg_dgm_ref *r);
 static void msg_dgm_ref_recv(struct tevent_context *ev,
@@ -121,16 +122,16 @@ static void msg_dgm_ref_recv(struct tevent_context *ev,
 			     const uint8_t *msg, size_t msg_len,
 			     int *fds, size_t num_fds, void *private_data)
 {
-	struct msg_dgm_ref *r, *next;
+	struct msg_dgm_ref *r;
 
 	/*
 	 * We have to broadcast incoming messages to all refs. The first ref
 	 * that grabs the fd's will get them.
 	 */
-	for (r = refs; r != NULL; r = next) {
+	for (r = refs; r != NULL; r = next_ref) {
 		bool active;
 
-		next = r->next;
+		next_ref = r->next;
 
 		active = messaging_dgm_fde_active(r->fde);
 		if (!active) {
@@ -150,6 +151,11 @@ static int msg_dgm_ref_destructor(struct msg_dgm_ref *r)
 	if (refs == NULL) {
 		abort();
 	}
+
+	if (r == next_ref) {
+		next_ref = r->next;
+	}
+
 	DLIST_REMOVE(refs, r);
 
 	TALLOC_FREE(r->fde);
diff --git a/source3/rpc_server/mdssvc/mdssvc.c b/source3/rpc_server/mdssvc/mdssvc.c
index 9be0cc4..5a63d37 100644
--- a/source3/rpc_server/mdssvc/mdssvc.c
+++ b/source3/rpc_server/mdssvc/mdssvc.c
@@ -1136,6 +1136,8 @@ static bool slrpc_open_query(struct mds_ctx *mds_ctx,
 	struct sl_query *slq = NULL;
 	int result;
 	char *querystring;
+	char *scope = NULL;
+	char *escaped_scope = NULL;
 
 	array = dalloc_zero(reply, sl_array_t);
 	if (array == NULL) {
@@ -1214,12 +1216,20 @@ static bool slrpc_open_query(struct mds_ctx *mds_ctx,
 		goto error;
 	}
 
-	slq->path_scope = dalloc_get(path_scope, "char *", 0);
-	if (slq->path_scope == NULL) {
+	scope = dalloc_get(path_scope, "char *", 0);
+	if (scope == NULL) {
+		goto error;
+	}
+
+	escaped_scope = g_uri_escape_string(scope,
+		G_URI_RESERVED_CHARS_ALLOWED_IN_PATH,
+		TRUE);
+	if (escaped_scope == NULL) {
 		goto error;
 	}
 
-	slq->path_scope = talloc_strdup(slq, slq->path_scope);
+	slq->path_scope = talloc_strdup(slq, escaped_scope);
+	g_free(escaped_scope);
 	if (slq->path_scope == NULL) {
 		goto error;
 	}
diff --git a/source4/lib/messaging/tests/messaging.c b/source4/lib/messaging/tests/messaging.c
index ba58978..80c8583 100644
--- a/source4/lib/messaging/tests/messaging.c
+++ b/source4/lib/messaging/tests/messaging.c
@@ -393,6 +393,226 @@ static bool test_messaging_overflow_check(struct torture_context *tctx)
 	return true;
 }
 
+struct test_multi_ctx {
+	struct torture_context *tctx;
+	struct imessaging_context *server_ctx;
+	struct imessaging_context *client_ctx[4];
+	size_t num_missing;
+	bool got_server;
+	bool got_client_0_1;
+	bool got_client_2_3;
+	bool ok;
+};
+
+static void multi_ctx_server_handler(struct imessaging_context *msg,
+				     void *private_data,
+				     uint32_t msg_type,
+				     struct server_id server_id,
+				     DATA_BLOB *data)
+{
+	struct test_multi_ctx *state = private_data;
+	char *str = NULL;
+
+	torture_assert_goto(state->tctx, state->num_missing >= 1,
+			    state->ok, fail,
+			    "num_missing should be at least 1.");
+	state->num_missing -= 1;
+
+	torture_assert_goto(state->tctx, !state->got_server,
+			    state->ok, fail,
+			    "already got server.");
+	state->got_server = true;
+
+	/*
+	 * We free the context itself and most likely reuse
+	 * the memory immediately.
+	 */
+	TALLOC_FREE(state->server_ctx);
+	str = generate_random_str(state->tctx, 128);
+	torture_assert_goto(state->tctx, str != NULL,
+			    state->ok, fail,
+			    "generate_random_str()");
+
+fail:
+	return;
+}
+
+static void multi_ctx_client_0_1_handler(struct imessaging_context *msg,
+					 void *private_data,
+					 uint32_t msg_type,
+					 struct server_id server_id,
+					 DATA_BLOB *data)
+{
+	struct test_multi_ctx *state = private_data;
+	char *str = NULL;
+
+	torture_assert_goto(state->tctx, state->num_missing >= 2,
+			    state->ok, fail,
+			    "num_missing should be at least 2.");
+	state->num_missing -= 2;
+
+	torture_assert_goto(state->tctx, !state->got_client_0_1,
+			    state->ok, fail,
+			    "already got client_0_1.");
+	state->got_client_0_1 = true;
+
+	/*
+	 * We free two contexts and most likely reuse
+	 * the memory immediately.
+	 */
+	TALLOC_FREE(state->client_ctx[0]);
+	str = generate_random_str(state->tctx, 128);
+	torture_assert_goto(state->tctx, str != NULL,
+			    state->ok, fail,
+			    "generate_random_str()");
+	TALLOC_FREE(state->client_ctx[1]);
+	str = generate_random_str(state->tctx, 128);
+	torture_assert_goto(state->tctx, str != NULL,
+			    state->ok, fail,
+			    "generate_random_str()");
+
+fail:
+	return;
+}
+
+static void multi_ctx_client_2_3_handler(struct imessaging_context *msg,
+					 void *private_data,
+					 uint32_t msg_type,
+					 struct server_id server_id,
+					 DATA_BLOB *data)
+{
+	struct test_multi_ctx *state = private_data;
+	char *str = NULL;
+
+	torture_assert_goto(state->tctx, state->num_missing >= 2,
+			    state->ok, fail,
+			    "num_missing should be at least 2.");
+	state->num_missing -= 2;
+
+	torture_assert_goto(state->tctx, !state->got_client_2_3,
+			    state->ok, fail,
+			    "already got client_2_3.");
+	state->got_client_2_3 = true;
+
+	/*
+	 * We free two contexts and most likely reuse
+	 * the memory immediately.
+	 */
+	TALLOC_FREE(state->client_ctx[2]);
+	str = generate_random_str(state->tctx, 128);
+	torture_assert_goto(state->tctx, str != NULL,
+			    state->ok, fail,
+			    "generate_random_str()");
+	TALLOC_FREE(state->client_ctx[3]);
+	str = generate_random_str(state->tctx, 128);
+	torture_assert_goto(state->tctx, str != NULL,
+			    state->ok, fail,
+			    "generate_random_str()");
+
+fail:
+	return;
+}
+
+static bool test_multi_ctx(struct torture_context *tctx)
+{
+	struct test_multi_ctx state = {
+		.tctx = tctx,
+		.ok = true,
+	};
+	struct timeval tv;
+	NTSTATUS status;
+
+	lpcfg_set_cmdline(tctx->lp_ctx, "pid directory", "piddir.tmp");
+
+	/*
+	 * We use cluster_id(0, 0) as that gets for
+	 * all task ids.
+	 */
+	state.server_ctx = imessaging_init(tctx,
+					   tctx->lp_ctx,
+					   cluster_id(0, 0),
+					   tctx->ev);
+	torture_assert(tctx, state.server_ctx != NULL,
+		       "Failed to init messaging context");
+
+	status = imessaging_register(state.server_ctx, &state,
+				     MSG_TMP_BASE-1,
+				     multi_ctx_server_handler);
+	torture_assert(tctx, NT_STATUS_IS_OK(status), "imessaging_register failed");
+
+	state.client_ctx[0] = imessaging_init(tctx,
+					      tctx->lp_ctx,
+					      cluster_id(0, 0),
+					      tctx->ev);
+	torture_assert(tctx, state.client_ctx[0] != NULL,
+		       "msg_client_ctx imessaging_init() failed");
+	status = imessaging_register(state.client_ctx[0], &state,
+				     MSG_TMP_BASE-1,
+				     multi_ctx_client_0_1_handler);
+	torture_assert(tctx, NT_STATUS_IS_OK(status), "imessaging_register failed");
+	state.client_ctx[1] = imessaging_init(tctx,
+					      tctx->lp_ctx,
+					      cluster_id(0, 0),
+					      tctx->ev);
+	torture_assert(tctx, state.client_ctx[1] != NULL,
+		       "msg_client_ctx imessaging_init() failed");
+	status = imessaging_register(state.client_ctx[1], &state,
+				     MSG_TMP_BASE-1,
+				     multi_ctx_client_0_1_handler);
+	torture_assert(tctx, NT_STATUS_IS_OK(status), "imessaging_register failed");
+	state.client_ctx[2] = imessaging_init(tctx,
+					      tctx->lp_ctx,
+					      cluster_id(0, 0),
+					      tctx->ev);
+	torture_assert(tctx, state.client_ctx[2] != NULL,
+		       "msg_client_ctx imessaging_init() failed");
+	status = imessaging_register(state.client_ctx[2], &state,
+				     MSG_TMP_BASE-1,
+				     multi_ctx_client_2_3_handler);
+	torture_assert(tctx, NT_STATUS_IS_OK(status), "imessaging_register failed");
+	state.client_ctx[3] = imessaging_init(tctx,
+					      tctx->lp_ctx,
+					      cluster_id(0, 0),
+					      tctx->ev);
+	torture_assert(tctx, state.client_ctx[3] != NULL,
+		       "msg_client_ctx imessaging_init() failed");
+	status = imessaging_register(state.client_ctx[3], &state,
+				     MSG_TMP_BASE-1,
+				     multi_ctx_client_2_3_handler);
+	torture_assert(tctx, NT_STATUS_IS_OK(status), "imessaging_register failed");
+
+	/*
+	 * Send one message that need to arrive on 3 ( 5 - 2 ) handlers.
+	 */
+	state.num_missing = 5;
+
+	status = imessaging_send(state.server_ctx,
+				 cluster_id(0, 0),
+				 MSG_TMP_BASE-1, NULL);
+	torture_assert_ntstatus_ok(tctx, status, "msg failed");
+
+	tv = timeval_current();
+	while (timeval_elapsed(&tv) < 30 && state.num_missing > 0 && state.ok) {
+		int ret;
+
+		ret = tevent_loop_once(tctx->ev);
+		torture_assert_int_equal(tctx, ret, 0, "tevent_loop_once()");
+	}
+
+	if (!state.ok) {
+		return false;
+	}
+
+	torture_assert_int_equal(tctx, state.num_missing, 0,
+				 "wrong message count");
+
+	torture_assert(tctx, state.got_client_0_1, "got_client_0_1");
+	torture_assert(tctx, state.got_client_2_3, "got_client_2_3");
+	torture_assert(tctx, state.got_server, "got_server");
+
+	return true;
+}
+
 struct torture_suite *torture_local_messaging(TALLOC_CTX *mem_ctx)
 {
 	struct torture_suite *s = torture_suite_create(mem_ctx, "messaging");
@@ -400,5 +620,6 @@ struct torture_suite *torture_local_messaging(TALLOC_CTX *mem_ctx)
 	torture_suite_add_simple_test(s, "overflow_check",
 				      test_messaging_overflow_check);
 	torture_suite_add_simple_test(s, "ping_speed", test_ping_speed);
+	torture_suite_add_simple_test(s, "multi_ctx", test_multi_ctx);
 	return s;
 }


-- 
Samba Shared Repository