The branch, master has been updated
       via  bb22be08b07 s3: tests: Add samba3.blackbox.force-close-share
       via  6b567e0c138 s3: VFS: vfs_aio_pthread: Make aio opens safe against connection teardown.
       via  e5660666059 s3: VFS: vfs_aio_pthread: Add a talloc context parameter to create_private_open_data().
       via  ddb9038fe77 s3: VFS: vfs_aio_pthread. Move xconn into state struct (opd).
       via  8db831a318c s3: VFS: vfs_aio_pthread: Replace state destructor with explicitly called teardown function.
       via  a1e247c3ba5 s3: VFS: vfs_aio_pthread. Fix leak of state struct on error.
       via  410e7599bd2 s3: smbd: Make sure we correctly reply to outstanding aio requests with an error on SHUTDOWN_CLOSE.
       via  9ecbda263f1 s3: VFS: vfs_glusterfs. Protect vfs_gluster_fsync_done() from accessing a freed req pointer.
       via  cdde55a69d0 s3: VFS: vfs_glusterfs. Pass in struct vfs_gluster_fsync_state as the callback data to the subreq.
       via  c0c088b1b78 s3: VFS: vfs_glusterfs: Add tevent_req pointer to state struct in vfs_gluster_fsync_state.
       via  67910c751c9 s3: VFS: vfs_glusterfs. Protect vfs_gluster_pwrite_done() from accessing a freed req pointer.
       via  3357a77d082 s3: VFS: vfs_glusterfs. Pass in struct vfs_gluster_pwrite_state as the callback data to the subreq.
       via  058a7effd00 s3: VFS: vfs_glusterfs: Add tevent_req pointer to state struct in vfs_gluster_pwrite_state.
       via  99283871c52 s3: VFS: vfs_glusterfs. Protect vfs_gluster_pread_done() from accessing a freed req pointer.
       via  c6c4e2de22c s3: VFS: vfs_glusterfs. Pass in struct vfs_gluster_pread_state as the callback data to the subreq.
       via  0e3dc0078eb s3: VFS: vfs_glusterfs: Add tevent_req pointer to state struct in vfs_gluster_pread_state.
       via  18671534e42 s3: VFS: vfs_default. Protect vfs_fsync_done() from accessing a freed req pointer.
       via  d623779913e s3: VFS: vfs_default. Pass in struct vfswrap_fsync_state as the callback data to the subreq.
       via  4adde71b99d s3: VFS: vfs_default: Add tevent_req pointer to state struct in vfswrap_fsync_state.
       via  c8cd93dd54c s3: VFS: vfs_default. Protect vfs_pwrite_done() from accessing a freed req pointer.
       via  13e25d68385 s3: VFS: vfs_default. Pass in struct vfswrap_pwrite_state as the callback data to the subreq.
       via  86cc7439501 s3: VFS: vfs_default: Add tevent_req pointer to state struct in vfswrap_pwrite_state.
       via  b9ad06079fe s3: VFS: vfs_default. Protect vfs_pread_done() from accessing a freed req pointer.
       via  e102908f112 s3: VFS: vfs_default. Pass in struct vfswrap_pread_state as the callback data to the subreq.
       via  594a435b33e s3: VFS: vfs_default: Add tevent_req pointer to state struct in vfswrap_pread_state.
      from  d2d2329b119 audit_logging tests: Fix timezone validation
https://git.samba.org/?p=samba.git;a=shortlog;h=master

- Log -----------------------------------------------------------------
commit bb22be08b077b7d5911ccdeb1012f4dea85647e5
Author: Jeremy Allison <j...@samba.org>
Date:   Tue Mar 3 13:31:18 2020 -0800

    s3: tests: Add samba3.blackbox.force-close-share

    Checks the server stays up whilst writing to a force-closed share.
    Uses the existing aio_delay_inject share to delay writes while we
    force close the share.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

    Autobuild-User(master): Jeremy Allison <j...@samba.org>
    Autobuild-Date(master): Sun Mar 8 19:34:14 UTC 2020 on sn-devel-184

commit 6b567e0c138d1cf2bcf58c84872ed2b0e89d628d
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Mar 5 10:22:00 2020 -0800

    s3: VFS: vfs_aio_pthread: Make aio opens safe against connection teardown.

    Allocate state off fsp->conn, not NULL, and add a destructor that
    catches deallocation of conn, which happens on connection shutdown
    or force close.

    Note - we don't allocate off fsp, as the passed-in fsp will get
    freed once we return EINPROGRESS/NT_STATUS_MORE_PROCESSING_REQUIRED.
    A new fsp pointer gets allocated on every re-run of the open code
    path.

    The destructor allows us to NULL out the saved conn struct pointer
    when conn is deallocated, so we know not to access deallocated
    memory. This matches the async teardown code changes for bug #14301
    in pread/pwrite/fsync in vfs_default.c and vfs_glusterfs.c. state
    is still correctly deallocated in all code paths, so there are no
    memory leaks.

    This allows us to safely complete when the openat() returns and then
    return the error NT_STATUS_NETWORK_NAME_DELETED to the client open
    request.
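[Editor's note: the destructor trick this commit relies on - a talloc destructor that returns -1 to veto the free, leaving opd alive but with opd->conn cleared - can be modelled in plain C. This is a hedged sketch, not Samba or talloc code: struct aio_open_pd, pd_free() and the other names are invented for illustration, and pd_free() merely imitates talloc_free()'s documented rule that a destructor returning -1 aborts the free.]

```c
#include <stdlib.h>
#include <stdbool.h>

struct connection;                 /* opaque stand-in for connection_struct */

struct aio_open_pd {
	int (*destructor)(struct aio_open_pd *opd);
	struct connection *conn;   /* NULL once the conn was torn down */
};

/* Imitates talloc_free(): a destructor returning -1 vetoes the free. */
static bool pd_free(struct aio_open_pd *opd)
{
	if (opd->destructor != NULL && opd->destructor(opd) == -1) {
		return false;      /* object deliberately kept alive */
	}
	free(opd);
	return true;
}

/*
 * Models the in-flight destructor: runs when the connection (the
 * talloc parent) is torn down while the open thread is still running.
 * It records the teardown and refuses to die.
 */
static int pd_inflight_destructor(struct aio_open_pd *opd)
{
	opd->conn = NULL;          /* completion path sees the teardown */
	return -1;                 /* don't let opd go away */
}
```

When the open thread later completes, the completion handler clears the destructor, notices conn == NULL, and frees opd itself - mirroring the flow aio_open_handle_completion() implements in the patch below.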
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit e566066605981549b670a5392683fbd81ce93d18
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Mar 6 09:30:26 2020 -0800

    s3: VFS: vfs_aio_pthread: Add a talloc context parameter to create_private_open_data().

    Pass in NULL for now so there is no behavior change. We will be
    changing this from NULL to fsp->conn in a later commit.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit ddb9038fe776b1d8239e563a4c9a70b4097645f3
Author: Jeremy Allison <j...@samba.org>
Date:   Wed Mar 4 16:39:39 2020 -0800

    s3: VFS: vfs_aio_pthread. Move xconn into state struct (opd).

    We will need this in future to cause a pending open to be
    rescheduled after the connection struct we're using has been shut
    down with an aio open in flight. This will allow a correct error
    reply to an awaiting client.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 8db831a318cd4a10ec9c1d629ebff4ca35b8acfe
Author: Jeremy Allison <j...@samba.org>
Date:   Wed Mar 4 13:47:13 2020 -0800

    s3: VFS: vfs_aio_pthread: Replace state destructor with explicitly called teardown function.

    This will allow repurposing a real destructor to allow connection
    structs to be freed whilst the aio open request is in flight.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit a1e247c3ba579ecc6ee03f5aad9679ed79fac5ac
Author: Jeremy Allison <j...@samba.org>
Date:   Wed Mar 4 13:29:08 2020 -0800

    s3: VFS: vfs_aio_pthread. Fix leak of state struct on error.
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 410e7599bd2ae9b35429f60a529bb7c4aa88df25
Author: Jeremy Allison <j...@samba.org>
Date:   Mon Mar 2 13:11:06 2020 -0800

    s3: smbd: Make sure we correctly reply to outstanding aio requests with an error on SHUTDOWN_CLOSE.

    SHUTDOWN_CLOSE can be called when smbcontrol close-share is used to
    terminate active connections.

    Previously we just called talloc_free() on the outstanding request,
    but this caused crashes (before the async callback functions were
    fixed not to reference req directly) and also left the SMB2 request
    outstanding on the processing queue.

    Using tevent_req_error() instead causes the outstanding SMB1/2/3
    request to return with NT_STATUS_INVALID_HANDLE and removes it from
    the processing queue. The callback function called from this calls
    talloc_free(req). The destructor will remove itself from the fsp
    and the aio_requests array.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 9ecbda263f102a24257fd47142b7c24d1f429d8d
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 16:01:11 2020 -0800

    s3: VFS: vfs_glusterfs. Protect vfs_gluster_fsync_done() from accessing a freed req pointer.

    If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the request
    is in flight (share forced closed by smbcontrol), then we set
    state->req = NULL in the state destructor.

    The existing state destructor prevents the state memory from being
    freed, so when the thread completes and calls
    vfs_gluster_fsync_done(), just throw away the result if
    state->req == NULL.
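[Editor's note: the protection pattern these related commits describe - the state struct carries a req back-pointer, the state destructor vetoes the free and NULLs req, and the completion callback discards the result when req == NULL - can be sketched in plain C. A toy model under stated assumptions, not Samba code: struct request, struct fsync_state, fsync_state_abandon() and fsync_done() are illustrative names, and talloc's destructor mechanics are replaced by plain function calls.]

```c
#include <stdlib.h>
#include <stdbool.h>

struct request;                    /* opaque; may be freed while in flight */

struct fsync_state {
	struct request *req;       /* NULL once the request was abandoned */
	int ret;                   /* result delivered by the worker thread */
};

/*
 * Models the state destructor: it runs when the caller frees the
 * request during a SHUTDOWN_CLOSE.  In talloc terms it "returns -1"
 * (the state itself survives) and NULLs state->req so the completion
 * function knows nobody wants the result.
 */
static void fsync_state_abandon(struct fsync_state *state)
{
	state->req = NULL;
}

/*
 * Models the _done callback: its callback data is the state, not the
 * request, so it is safe to run even after the request was freed.
 * Returns true if the result was delivered, false if thrown away.
 */
static bool fsync_done(struct fsync_state *state, int job_ret)
{
	if (state->req == NULL) {
		/* Shutdown closed in flight: free state, drop result. */
		free(state);
		return false;
	}
	state->ret = job_ret;      /* complete the waiting request */
	return true;
}
```

In the normal path fsync_done() finds state->req still set and completes the request; after a forced share close, fsync_state_abandon() has already run, so the late completion merely frees the surviving state instead of touching freed memory.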
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit cdde55a69d0dacd2f9939c2f00cd356c0186f791
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:59:16 2020 -0800

    s3: VFS: vfs_glusterfs. Pass in struct vfs_gluster_fsync_state as the callback data to the subreq.

    Find the req we're finishing off by looking inside
    vfs_gluster_fsync_state. In a shutdown close the caller calls
    talloc_free(req), so we can't access it directly as callback data.

    The next commit will NULL out the vfs_gluster_fsync_state->req
    pointer when a caller calls talloc_free(req) while the request is
    still in flight.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit c0c088b1b786f0ba248960114191277e91bbba2f
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:57:20 2020 -0800

    s3: VFS: vfs_glusterfs: Add tevent_req pointer to state struct in vfs_gluster_fsync_state.

    We will need this to detect when this request is outstanding but has
    been destroyed in a SHUTDOWN_CLOSE on this file.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 67910c751c9f5ce8cdd1e57b34e51e5b7163838b
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:55:36 2020 -0800

    s3: VFS: vfs_glusterfs. Protect vfs_gluster_pwrite_done() from accessing a freed req pointer.

    If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the request
    is in flight (share forced closed by smbcontrol), then we set
    state->req = NULL in the state destructor.

    The existing state destructor prevents the state memory from being
    freed, so when the thread completes and calls
    vfs_gluster_pwrite_done(), just throw away the result if
    state->req == NULL.
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 3357a77d0823eddc1b0db68cfa251a0d54058c88
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:53:19 2020 -0800

    s3: VFS: vfs_glusterfs. Pass in struct vfs_gluster_pwrite_state as the callback data to the subreq.

    Find the req we're finishing off by looking inside
    vfs_gluster_pwrite_state. In a shutdown close the caller calls
    talloc_free(req), so we can't access it directly as callback data.

    The next commit will NULL out the vfs_gluster_pwrite_state->req
    pointer when a caller calls talloc_free(req) while the request is
    still in flight.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 058a7effd00b47e4778f7d680cc9c2a7d40d5fa8
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:47:52 2020 -0800

    s3: VFS: vfs_glusterfs: Add tevent_req pointer to state struct in vfs_gluster_pwrite_state.

    We will need this to detect when this request is outstanding but has
    been destroyed in a SHUTDOWN_CLOSE on this file.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 99283871c5230e640a8102943ebed685459ed9af
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:38:04 2020 -0800

    s3: VFS: vfs_glusterfs. Protect vfs_gluster_pread_done() from accessing a freed req pointer.

    If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the request
    is in flight (share forced closed by smbcontrol), then we set
    state->req = NULL in the state destructor.

    The existing state destructor prevents the state memory from being
    freed, so when the thread completes and calls
    vfs_gluster_pread_done(), just throw away the result if
    state->req == NULL.
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit c6c4e2de22cd3d84f45f5c21e6b09b09274f7f7b
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:35:46 2020 -0800

    s3: VFS: vfs_glusterfs. Pass in struct vfs_gluster_pread_state as the callback data to the subreq.

    Find the req we're finishing off by looking inside
    vfs_gluster_pread_state. In a shutdown close the caller calls
    talloc_free(req), so we can't access it directly as callback data.

    The next commit will NULL out the vfs_gluster_pread_state->req
    pointer when a caller calls talloc_free(req) while the request is
    still in flight.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 0e3dc0078ebd6aa79553bf2afa8e72945e23dfb0
Author: Jeremy Allison <j...@samba.org>
Date:   Fri Feb 28 15:33:35 2020 -0800

    s3: VFS: vfs_glusterfs: Add tevent_req pointer to state struct in vfs_gluster_pread_state.

    We will need this to detect when this request is outstanding but has
    been destroyed in a SHUTDOWN_CLOSE on this file.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 18671534e42f66b904e51c3fbe887e998ff79493
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:56:41 2020 -0800

    s3: VFS: vfs_default. Protect vfs_fsync_done() from accessing a freed req pointer.

    If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the request
    is in flight (share forced closed by smbcontrol), then we set
    state->req = NULL in the state destructor.

    The existing state destructor prevents the state memory from being
    freed, so when the thread completes and calls vfs_fsync_done(),
    just throw away the result if state->req == NULL.
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit d623779913e0d4a46d7e299dc41b5c83cb127872
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:54:47 2020 -0800

    s3: VFS: vfs_default. Pass in struct vfswrap_fsync_state as the callback data to the subreq.

    Find the req we're finishing off by looking inside
    vfswrap_fsync_state. In a shutdown close the caller calls
    talloc_free(req), so we can't access it directly as callback data.

    The next commit will NULL out the vfswrap_fsync_state->req pointer
    when a caller calls talloc_free(req) while the request is still in
    flight.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 4adde71b99d4ab09914072458329d5f1008b77e3
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:53:10 2020 -0800

    s3: VFS: vfs_default: Add tevent_req pointer to state struct in vfswrap_fsync_state.

    We will need this to detect when this request is outstanding but has
    been destroyed in a SHUTDOWN_CLOSE on this file.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit c8cd93dd54cb9f78665928d4bc8fcc3baf084b6f
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:51:35 2020 -0800

    s3: VFS: vfs_default. Protect vfs_pwrite_done() from accessing a freed req pointer.

    If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the request
    is in flight (share forced closed by smbcontrol), then we set
    state->req = NULL in the state destructor.

    The existing state destructor prevents the state memory from being
    freed, so when the thread completes and calls vfs_pwrite_done(),
    just throw away the result if state->req == NULL.
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 13e25d68385aa951115e0e063ec6a9a281fea4a4
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:49:38 2020 -0800

    s3: VFS: vfs_default. Pass in struct vfswrap_pwrite_state as the callback data to the subreq.

    Find the req we're finishing off by looking inside
    vfswrap_pwrite_state. In a shutdown close the caller calls
    talloc_free(req), so we can't access it directly as callback data.

    The next commit will NULL out the vfswrap_pwrite_state->req pointer
    when a caller calls talloc_free(req) while the request is still in
    flight.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 86cc7439501ab9b9eb018a18dbbef9567eb9b6f9
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:44:39 2020 -0800

    s3: VFS: vfs_default: Add tevent_req pointer to state struct in vfswrap_pwrite_state.

    We will need this to detect when this request is outstanding but has
    been destroyed in a SHUTDOWN_CLOSE on this file.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit b9ad06079fe362385cc4c77f8e8d54f5f74d6db6
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:40:46 2020 -0800

    s3: VFS: vfs_default. Protect vfs_pread_done() from accessing a freed req pointer.

    If the fsp is forced closed by a SHUTDOWN_CLOSE whilst the request
    is in flight (share forced closed by smbcontrol), then we set
    state->req = NULL in the state destructor.

    The existing state destructor prevents the state memory from being
    freed, so when the thread completes and calls vfs_pread_done(),
    just throw away the result if state->req == NULL.
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit e102908f112866d657b8c0cd6a5b217d070210c8
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:34:51 2020 -0800

    s3: VFS: vfs_default. Pass in struct vfswrap_pread_state as the callback data to the subreq.

    Find the req we're finishing off by looking inside
    vfswrap_pread_state. In a shutdown close the caller calls
    talloc_free(req), so we can't access it directly as callback data.

    The next commit will NULL out the vfswrap_pread_state->req pointer
    when a caller calls talloc_free(req) while the request is still in
    flight.

    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

commit 594a435b33e8447625ca83b50daec2d08cf66d64
Author: Jeremy Allison <j...@samba.org>
Date:   Thu Feb 27 16:30:51 2020 -0800

    s3: VFS: vfs_default: Add tevent_req pointer to state struct in vfswrap_pread_state.

    We will need this to detect when this request is outstanding but has
    been destroyed in a SHUTDOWN_CLOSE on this file.
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Signed-off-by: Jeremy Allison <j...@samba.org>
    Reviewed-by: Volker Lendecke <v...@samba.org>

-----------------------------------------------------------------------

Summary of changes:
 source3/modules/vfs_aio_pthread.c              | 109 ++++++++++++++++++++-----
 source3/modules/vfs_default.c                  |  93 +++++++++++++++++----
 source3/modules/vfs_glusterfs.c                |  93 +++++++++++++++++----
 source3/script/tests/test_force_close_share.sh | 100 +++++++++++++++++++++++
 source3/selftest/tests.py                      |   9 ++
 source3/smbd/close.c                           |  30 +++++--
 6 files changed, 376 insertions(+), 58 deletions(-)
 create mode 100755 source3/script/tests/test_force_close_share.sh


Changeset truncated at 500 lines:

diff --git a/source3/modules/vfs_aio_pthread.c b/source3/modules/vfs_aio_pthread.c
index d13ce2fdc63..1ccf89a6d8c 100644
--- a/source3/modules/vfs_aio_pthread.c
+++ b/source3/modules/vfs_aio_pthread.c
@@ -51,6 +51,7 @@ struct aio_open_private_data {
 	const char *fname;
 	char *dname;
 	connection_struct *conn;
+	struct smbXsrv_connection *xconn;
 	const struct security_unix_token *ux_tok;
 	uint64_t initial_allocation_size;
 	/* Returns. */
@@ -62,6 +63,7 @@ struct aio_open_private_data {
 static struct aio_open_private_data *open_pd_list;
 
 static void aio_open_do(struct aio_open_private_data *opd);
+static void opd_free(struct aio_open_private_data *opd);
 
 /************************************************************************
  Find the open private data by mid.
@@ -90,10 +92,40 @@ static void aio_open_handle_completion(struct tevent_req *subreq)
 		tevent_req_callback_data(subreq,
 		struct aio_open_private_data);
 	int ret;
-	struct smbXsrv_connection *xconn;
 
 	ret = pthreadpool_tevent_job_recv(subreq);
 	TALLOC_FREE(subreq);
+
+	/*
+	 * We're no longer in flight. Remove the
+	 * destructor used to preserve opd so
+	 * a talloc_free actually removes it.
+	 */
+	talloc_set_destructor(opd, NULL);
+
+	if (opd->conn == NULL) {
+		/*
+		 * We were shutdown closed in flight. No one
+		 * wants the result, and state has been reparented
+		 * to the NULL context, so just free it so we
+		 * don't leak memory.
+		 */
+		DBG_NOTICE("aio open request for %s/%s abandoned in flight\n",
+			opd->dname,
+			opd->fname);
+		if (opd->ret_fd != -1) {
+			close(opd->ret_fd);
+			opd->ret_fd = -1;
+		}
+		/*
+		 * Find outstanding event and reschedule so the client
+		 * gets an error message return from the open.
+		 */
+		schedule_deferred_open_message_smb(opd->xconn, opd->mid);
+		opd_free(opd);
+		return;
+	}
+
 	if (ret != 0) {
 		bool ok;
 
@@ -127,15 +159,8 @@ static void aio_open_handle_completion(struct tevent_req *subreq)
 
 	opd->in_progress = false;
 
-	/*
-	 * TODO: In future we need a proper algorithm
-	 * to find the correct connection for a fsp.
-	 * For now we only have one connection, so this is correct...
-	 */
-	xconn = opd->conn->sconn->client->connections;
-
 	/* Find outstanding event and reschedule. */
-	if (!schedule_deferred_open_message_smb(xconn, opd->mid)) {
+	if (!schedule_deferred_open_message_smb(opd->xconn, opd->mid)) {
 		/*
 		 * Outstanding event didn't exist or was
 		 * cancelled. Free up the fd and throw
@@ -145,7 +170,7 @@ static void aio_open_handle_completion(struct tevent_req *subreq)
 			close(opd->ret_fd);
 			opd->ret_fd = -1;
 		}
-		TALLOC_FREE(opd);
+		opd_free(opd);
 	}
 }
 
@@ -207,27 +232,28 @@ static void aio_open_do(struct aio_open_private_data *opd)
 }
 
 /************************************************************************
- Open private data destructor.
+ Open private data teardown.
 ***********************************************************************/
 
-static int opd_destructor(struct aio_open_private_data *opd)
+static void opd_free(struct aio_open_private_data *opd)
 {
 	if (opd->dir_fd != -1) {
 		close(opd->dir_fd);
 	}
 	DLIST_REMOVE(open_pd_list, opd);
-	return 0;
+	TALLOC_FREE(opd);
 }
 
 /************************************************************************
  Create and initialize a private data struct for async open.
 ***********************************************************************/
 
-static struct aio_open_private_data *create_private_open_data(const files_struct *fsp,
+static struct aio_open_private_data *create_private_open_data(TALLOC_CTX *ctx,
+					const files_struct *fsp,
 					int flags,
 					mode_t mode)
 {
-	struct aio_open_private_data *opd = talloc_zero(NULL,
+	struct aio_open_private_data *opd = talloc_zero(ctx,
 					struct aio_open_private_data);
 	const char *fname = NULL;
 
@@ -244,13 +270,19 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
 		.mid = fsp->mid,
 		.in_progress = true,
 		.conn = fsp->conn,
+		/*
+		 * TODO: In future we need a proper algorithm
+		 * to find the correct connection for a fsp.
+		 * For now we only have one connection, so this is correct...
+		 */
+		.xconn = fsp->conn->sconn->client->connections,
 		.initial_allocation_size = fsp->initial_allocation_size,
 	};
 
 	/* Copy our current credentials. */
 	opd->ux_tok = copy_unix_token(opd, get_current_utok(fsp->conn));
 	if (opd->ux_tok == NULL) {
-		TALLOC_FREE(opd);
+		opd_free(opd);
 		return NULL;
 	}
 
@@ -262,12 +294,12 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
 			fsp->fsp_name->base_name,
 			&opd->dname,
 			&fname) == false) {
-		TALLOC_FREE(opd);
+		opd_free(opd);
 		return NULL;
 	}
 	opd->fname = talloc_strdup(opd, fname);
 	if (opd->fname == NULL) {
-		TALLOC_FREE(opd);
+		opd_free(opd);
 		return NULL;
 	}
 
@@ -277,15 +309,30 @@ static struct aio_open_private_data *create_private_open_data(const files_struct
 	opd->dir_fd = open(opd->dname, O_RDONLY);
 #endif
 	if (opd->dir_fd == -1) {
-		TALLOC_FREE(opd);
+		opd_free(opd);
 		return NULL;
 	}
 
-	talloc_set_destructor(opd, opd_destructor);
 	DLIST_ADD_END(open_pd_list, opd);
 	return opd;
 }
 
+static int opd_inflight_destructor(struct aio_open_private_data *opd)
+{
+	/*
+	 * Setting conn to NULL allows us to
+	 * discover the connection was torn
+	 * down which kills the fsp that owns
+	 * opd.
+	 */
+	DBG_NOTICE("aio open request for %s/%s cancelled\n",
+		opd->dname,
+		opd->fname);
+	opd->conn = NULL;
+	/* Don't let opd go away. */
+	return -1;
+}
+
 /*****************************************************************
  Setup an async open.
 *****************************************************************/
@@ -297,7 +344,18 @@ static int open_async(const files_struct *fsp,
 	struct aio_open_private_data *opd = NULL;
 	struct tevent_req *subreq = NULL;
 
-	opd = create_private_open_data(fsp, flags, mode);
+	/*
+	 * Allocate off fsp->conn, not NULL or fsp. As we're going
+	 * async fsp will get talloc_free'd when we return
+	 * EINPROGRESS/NT_STATUS_MORE_PROCESSING_REQUIRED. A new fsp
+	 * pointer gets allocated on every re-run of the
+	 * open code path. Allocating on fsp->conn instead
+	 * of NULL allows use to get notified via destructor
+	 * if the conn is force-closed or we shutdown.
+	 * opd is always safely freed in all codepath so no
+	 * memory leaks.
+	 */
+	opd = create_private_open_data(fsp->conn, fsp, flags, mode);
 	if (opd == NULL) {
 		DEBUG(10, ("open_async: Could not create private data.\n"));
 		return -1;
@@ -308,6 +366,7 @@ static int open_async(const files_struct *fsp,
 				fsp->conn->sconn->pool,
 				aio_open_worker, opd);
 	if (subreq == NULL) {
+		opd_free(opd);
 		return -1;
 	}
 	tevent_req_set_callback(subreq, aio_open_handle_completion, opd);
@@ -317,6 +376,12 @@ static int open_async(const files_struct *fsp,
 		opd->dname,
 		opd->fname));
 
+	/*
+	 * Add a destructor to protect us from connection
+	 * teardown whilst the open thread is in flight.
+	 */
+	talloc_set_destructor(opd, opd_inflight_destructor);
+
 	/* Cause the calling code to reschedule us. */
 	errno = EINPROGRESS; /* Maps to NT_STATUS_MORE_PROCESSING_REQUIRED. */
 	return -1;
@@ -364,7 +429,7 @@ static bool find_completed_open(files_struct *fsp,
 			smb_fname_str_dbg(fsp->fsp_name)));
 
 	/* Now we can free the opd. */
-	TALLOC_FREE(opd);
+	opd_free(opd);
 	return true;
 }
 
diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
index a30f3ba1d31..fac7fa30ab7 100644
--- a/source3/modules/vfs_default.c
+++ b/source3/modules/vfs_default.c
@@ -780,6 +780,7 @@ static ssize_t vfswrap_pwrite(vfs_handle_struct *handle, files_struct *fsp, cons
 }
 
 struct vfswrap_pread_state {
+	struct tevent_req *req;
 	ssize_t ret;
 	int fd;
 	void *buf;
@@ -809,6 +810,7 @@ static struct tevent_req *vfswrap_pread_send(struct vfs_handle_struct *handle,
 		return NULL;
 	}
 
+	state->req = req;
 	state->ret = -1;
 	state->fd = fsp->fh->fd;
 	state->buf = data;
@@ -825,7 +827,7 @@ static struct tevent_req *vfswrap_pread_send(struct vfs_handle_struct *handle,
 	if (tevent_req_nomem(subreq, req)) {
 		return tevent_req_post(req, ev);
 	}
-	tevent_req_set_callback(subreq, vfs_pread_done, req);
+	tevent_req_set_callback(subreq, vfs_pread_done, state);
 
 	talloc_set_destructor(state, vfs_pread_state_destructor);
 
@@ -861,21 +863,40 @@ static void vfs_pread_do(void *private_data)
 
 static int vfs_pread_state_destructor(struct vfswrap_pread_state *state)
 {
+	/*
+	 * This destructor only gets called if the request is still
+	 * in flight, which is why we deny it by returning -1. We
+	 * also set the req pointer to NULL so the _done function
+	 * can detect the caller doesn't want the result anymore.
+	 *
+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
+	 */
+	state->req = NULL;
 	return -1;
 }
 
 static void vfs_pread_done(struct tevent_req *subreq)
 {
-	struct tevent_req *req = tevent_req_callback_data(
-		subreq, struct tevent_req);
-	struct vfswrap_pread_state *state = tevent_req_data(
-		req, struct vfswrap_pread_state);
+	struct vfswrap_pread_state *state = tevent_req_callback_data(
+		subreq, struct vfswrap_pread_state);
+	struct tevent_req *req = state->req;
 	int ret;
 
 	ret = pthreadpool_tevent_job_recv(subreq);
 	TALLOC_FREE(subreq);
 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
 	talloc_set_destructor(state, NULL);
+	if (req == NULL) {
+		/*
+		 * We were shutdown closed in flight. No one
+		 * wants the result, and state has been reparented
+		 * to the NULL context, so just free it so we
+		 * don't leak memory.
+		 */
+		DBG_NOTICE("pread request abandoned in flight\n");
+		TALLOC_FREE(state);
+		return;
+	}
 	if (ret != 0) {
 		if (ret != EAGAIN) {
 			tevent_req_error(req, ret);
@@ -908,6 +929,7 @@ static ssize_t vfswrap_pread_recv(struct tevent_req *req,
 }
 
 struct vfswrap_pwrite_state {
+	struct tevent_req *req;
 	ssize_t ret;
 	int fd;
 	const void *buf;
@@ -937,6 +959,7 @@ static struct tevent_req *vfswrap_pwrite_send(struct vfs_handle_struct *handle,
 		return NULL;
 	}
 
+	state->req = req;
 	state->ret = -1;
 	state->fd = fsp->fh->fd;
 	state->buf = data;
@@ -953,7 +976,7 @@ static struct tevent_req *vfswrap_pwrite_send(struct vfs_handle_struct *handle,
 	if (tevent_req_nomem(subreq, req)) {
 		return tevent_req_post(req, ev);
 	}
-	tevent_req_set_callback(subreq, vfs_pwrite_done, req);
+	tevent_req_set_callback(subreq, vfs_pwrite_done, state);
 
 	talloc_set_destructor(state, vfs_pwrite_state_destructor);
 
@@ -989,21 +1012,40 @@ static void vfs_pwrite_do(void *private_data)
 
 static int vfs_pwrite_state_destructor(struct vfswrap_pwrite_state *state)
 {
+	/*
+	 * This destructor only gets called if the request is still
+	 * in flight, which is why we deny it by returning -1. We
+	 * also set the req pointer to NULL so the _done function
+	 * can detect the caller doesn't want the result anymore.
+	 *
+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
+	 */
+	state->req = NULL;
 	return -1;
 }
 
 static void vfs_pwrite_done(struct tevent_req *subreq)
 {
-	struct tevent_req *req = tevent_req_callback_data(
-		subreq, struct tevent_req);
-	struct vfswrap_pwrite_state *state = tevent_req_data(
-		req, struct vfswrap_pwrite_state);
+	struct vfswrap_pwrite_state *state = tevent_req_callback_data(
+		subreq, struct vfswrap_pwrite_state);
+	struct tevent_req *req = state->req;
 	int ret;
 
 	ret = pthreadpool_tevent_job_recv(subreq);
 	TALLOC_FREE(subreq);
 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
 	talloc_set_destructor(state, NULL);
+	if (req == NULL) {
+		/*
+		 * We were shutdown closed in flight. No one
+		 * wants the result, and state has been reparented
+		 * to the NULL context, so just free it so we
+		 * don't leak memory.
+		 */
+		DBG_NOTICE("pwrite request abandoned in flight\n");
+		TALLOC_FREE(state);
+		return;
+	}
 	if (ret != 0) {
 		if (ret != EAGAIN) {
 			tevent_req_error(req, ret);
@@ -1036,6 +1078,7 @@ static ssize_t vfswrap_pwrite_recv(struct tevent_req *req,
 }
 
 struct vfswrap_fsync_state {
+	struct tevent_req *req;
 	ssize_t ret;
 	int fd;
 
@@ -1060,6 +1103,7 @@ static struct tevent_req *vfswrap_fsync_send(struct vfs_handle_struct *handle,
 		return NULL;
 	}
 
+	state->req = req;
 	state->ret = -1;
 	state->fd = fsp->fh->fd;
 
@@ -1072,7 +1116,7 @@ static struct tevent_req *vfswrap_fsync_send(struct vfs_handle_struct *handle,
 	if (tevent_req_nomem(subreq, req)) {
 		return tevent_req_post(req, ev);
 	}
-	tevent_req_set_callback(subreq, vfs_fsync_done, req);
+	tevent_req_set_callback(subreq, vfs_fsync_done, state);
 
 	talloc_set_destructor(state, vfs_fsync_state_destructor);
 
@@ -1107,21 +1151,40 @@ static void vfs_fsync_do(void *private_data)
 
 static int vfs_fsync_state_destructor(struct vfswrap_fsync_state *state)
 {
+	/*
+	 * This destructor only gets called if the request is still
+	 * in flight, which is why we deny it by returning -1. We
+	 * also set the req pointer to NULL so the _done function
+	 * can detect the caller doesn't want the result anymore.
+	 *
+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
+	 */
+	state->req = NULL;
 	return -1;
 }
 
 static void vfs_fsync_done(struct tevent_req *subreq)
 {
-	struct tevent_req *req = tevent_req_callback_data(
-		subreq, struct tevent_req);
-	struct vfswrap_fsync_state *state = tevent_req_data(
-		req, struct vfswrap_fsync_state);
+	struct vfswrap_fsync_state *state = tevent_req_callback_data(
+		subreq, struct vfswrap_fsync_state);
+	struct tevent_req *req = state->req;
 	int ret;
 
 	ret = pthreadpool_tevent_job_recv(subreq);
 	TALLOC_FREE(subreq);
 	SMBPROFILE_BYTES_ASYNC_END(state->profile_bytes);
 	talloc_set_destructor(state, NULL);
+	if (req == NULL) {
+		/*
+		 * We were shutdown closed in flight. No one
+		 * wants the result, and state has been reparented
+		 * to the NULL context, so just free it so we
+		 * don't leak memory.
+		 */
+		DBG_NOTICE("fsync request abandoned in flight\n");
+		TALLOC_FREE(state);
+		return;
+	}
 	if (ret != 0) {
 		if (ret != EAGAIN) {
 			tevent_req_error(req, ret);
diff --git a/source3/modules/vfs_glusterfs.c b/source3/modules/vfs_glusterfs.c
index d4b68fba376..b5300282b7b 100644
--- a/source3/modules/vfs_glusterfs.c
+++ b/source3/modules/vfs_glusterfs.c
@@ -713,6 +713,7 @@ static ssize_t vfs_gluster_pread(struct vfs_handle_struct *handle,
 }
 
 struct vfs_gluster_pread_state {
+	struct tevent_req *req;
 	ssize_t ret;
 	glfs_fd_t *fd;
 	void *buf;
@@ -748,6 +749,7 @@ static struct tevent_req *vfs_gluster_pread_send(struct vfs_handle_struct
 		return NULL;
 	}
 
+	state->req = req;
 	state->ret = -1;
 	state->fd = glfd;
 	state->buf = data;
@@ -764,7 +766,7 @@ static struct tevent_req *vfs_gluster_pread_send(struct vfs_handle_struct
 	if (tevent_req_nomem(subreq, req)) {
 		return tevent_req_post(req, ev);
 	}
-	tevent_req_set_callback(subreq, vfs_gluster_pread_done, req);
+	tevent_req_set_callback(subreq, vfs_gluster_pread_done, state);
 
 	talloc_set_destructor(state, vfs_gluster_pread_state_destructor);
 
@@ -805,21 +807,40 @@ static void vfs_gluster_pread_do(void *private_data)
 
 static int vfs_gluster_pread_state_destructor(struct vfs_gluster_pread_state *state)
 {
+	/*
+	 * This destructor only gets called if the request is still
+	 * in flight, which is why we deny it by returning -1. We
+	 * also set the req pointer to NULL so the _done function
+	 * can detect the caller doesn't want the result anymore.
+	 *
+	 * Forcing the fsp closed by a SHUTDOWN_CLOSE can cause this.
+	 */
+	state->req = NULL;
 	return -1;
 }
 
-- 
Samba Shared Repository