So the current plan I have for _qmp_timer is:
- As Max suggested, move it to __init__ and check the wrapper contents
there. If we need to block forever (gdb, valgrind), we set it to None;
otherwise to 15 seconds. I think always setting it to None is not
ideal: if you are testing something that deadlocks (see my attempts to
remove/add locks in QEMU multiqueue) and the socket is set to block
forever, you can't tell whether the test is just very slow or has
deadlocked.
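As a minimal sketch of that __init__ idea (the class and the wrapper
check are illustrative stand-ins, not the real QEMUMachine signature):

```python
class MachineSketch:
    """Illustrative stand-in for machine.QEMUMachine's timer setup."""

    def __init__(self, wrapper=()):
        # Hypothetical check: block forever when a wrapper such as gdb
        # or valgrind may legitimately pause the process...
        if any('gdb' in arg or 'valgrind' in arg for arg in wrapper):
            self._qmp_timer = None
        else:
            # ...otherwise bound each QMP command, so a deadlocked test
            # fails instead of hanging indefinitely.
            self._qmp_timer = 15.0
```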
I agree with your concern on rational defaults, let's focus on that
briefly:
Let's have QEMUMachine default to *no timeouts* moving forward, and have
the timeouts be *opt-in*. This keeps the Machine class somewhat pure and
free of opinions. The separation of mechanism and policy.
Next, instead of modifying hundreds of tests to opt in to the timeout,
let's modify the VM class in iotests.py to opt in to it, restoring the
current "safe" behavior of iotests.
The above items can happen in a single commit, preserving behavior in
the bisect.
Finally, we can add a non-private property that individual tests can
re-override to opt BACK out of the default.
Something as simple as:
vm.qmp_timeout = None
would be just fine.
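A sketch of what such a public property could look like (the names
follow the discussion; the plumbing here is illustrative):

```python
class MachineSketch:
    """Stand-in showing a public, re-overridable QMP timeout."""

    def __init__(self):
        # Mechanism, not policy: the base class imposes no timeout.
        self._qmp_timer = None

    @property
    def qmp_timeout(self):
        """Seconds to wait on each QMP command (None = wait forever)."""
        return self._qmp_timer

    @qmp_timeout.setter
    def qmp_timeout(self, value):
        self._qmp_timer = value


# An opinionated caller (e.g. iotests) opts in...
vm = MachineSketch()
vm.qmp_timeout = 15.0
# ...and an individual test can opt back out:
vm.qmp_timeout = None
```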
I applied these suggested changes, will send v4 and we'll see what comes
out of it.
Well, one can argue that in both cases this is not the expected
behavior, but I think having an upper bound on each QMP command
execution would be good.
- pass _qmp_timer directly to self._qmp.accept() in _post_launch(),
leaving _launch() intact. I think this makes sense because, as you also
mentioned, making _post_launch() take a parameter would also require
changing all subclasses and passing values around.
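A sketch of that shape (QMPStub stands in for the QMP connection
object; the real accept() signature may differ):

```python
class QMPStub:
    """Stand-in for the QMP connection; accept() takes a timeout."""

    def accept(self, timeout=None):
        # A real implementation would wait for the monitor connection;
        # here we just echo the timeout to show what gets passed down.
        return timeout


class MachineSketch:
    def __init__(self, qmp_timer=15.0):
        self._qmp_timer = qmp_timer
        self._qmp = QMPStub()

    def _post_launch(self):
        # Read the timer from the instance instead of threading a new
        # parameter through _launch() and every subclass override.
        return self._qmp.accept(self._qmp_timer)
```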
Sounds OK. If we do change the defaults back to "No Timeout" in a way
that allows an override by an opinionated class, we'll already have the
public property, though, so a parameter might not be needed.
(Yes, this is the THIRD time I've changed my mind in 48 hours.)
Any opinion on this is very welcome.
Brave words!
My last thought here is that I still don't like the idea of QEMUMachine
class changing its timeout behavior based on the introspection of
wrapper args.
It feels much more like the caller's responsibility: whoever knowingly
wraps QEMU in a program that delays its execution should adjust the
parameters accordingly, based on what they are trying to accomplish.
Does that make the code too messy? I understand you probably want to
ensure that adding a GDB wrapper is painless and simple, so it might not
be great to always ask a caller to remember to set some timeout value to
use it.
I am not sure I follow you here: where do you want to move this logic?
Can you please elaborate more; I did not understand what you mean.
I understand that tweaking the timers in iotests.py with checks like

if not (qemu_gdb or qemu_valgrind):
    normal timer

may not be the most beautiful piece of code, but as you said, it keeps
things as simple as they can be.
I figure that the right place to do this, though, is wherever the
boilerplate code gets written that knows how to set up the right gdb
args and so on, and not in machine.py. It sounds like iotests.py code to
me, maybe in the VM class.
After the suggested qmp_timeout changes, iotests.py already contains
the only code that performs the right setup for gdb and valgrind, and
machine.py is untouched (except for qmp_timeout). iotests.py will
essentially contain a couple of ifs like the one above, changing the
timer when gdb and valgrind are *not* needed.
Emanuele