On 6/2/20 9:33 AM, Richard W.M. Jones wrote:
> On Tue, Jun 02, 2020 at 09:22:49AM -0500, Eric Blake wrote:
>> On 6/2/20 7:27 AM, Richard W.M. Jones wrote:
>>> Use an extensible buffer (a vector<char>) when reading
>>> /proc/self/cmdline.
>>>
>>> Tidy up some error messages.
>>> ---
>>>  plugins/vddk/reexec.c | 57 ++++++++++++++++++++++++++-----------------
>>>  1 file changed, 35 insertions(+), 22 deletions(-)


>> Pre-existing bug, which you did not fix here.  If we failed here, we
>> are leaking fd.  You slightly improved the situation by marking the
>> leaked fd O_CLOEXEC, but that really doesn't matter if we properly
>> fix the code to close(fd) before any early return, at which point
>> the lifetime of fd is only during single-threaded execution and
>> O_CLOEXEC doesn't matter.

> I was wondering if we should define conditions where we want reexec to
> be skipped (probably if /proc/self/... files don't exist, since it's
> likely that it isn't a "sufficiently Linux-like" platform).  We would
> hard fail for all the other errors such as buffer_reserve failing
> above.  What do you think?

A hard fail for things like malloc failure makes sense: if we fail to re-exec but continue on in spite of a malloc failure, we'll probably die very soon at our next malloc anyway, at which point dying as early as possible is saner.

Although vddk already assumes a Linux system, you are correct that /proc/self might not be properly mounted, which is one case where a soft fail makes sense (returning without re-exec, at which point the caller probably exits, but with a nicer error message about being unable to locate the vddk .so libraries than anything we can give here).  So even if we don't fix the fd leak, it's unlikely that the overall nbdkit process will be long-running with the fd left open.  I guess it's mostly a quality-of-error-message situation when re-exec fails.


> Note in the next patch I actually did do some hard failing for
> operations that shouldn't fail and are hard to recover from.

Yes, and those made sense, so adding more here is fine too.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

_______________________________________________
Libguestfs mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/libguestfs
