On 30/04/2021 13:59, Max Reitz wrote:
On 14.04.21 19:03, Emanuele Giuseppe Esposito wrote:
Attaching a gdbserver implies that the qmp socket
should wait indefinitely for an answer from QEMU.
Signed-off-by: Emanuele Giuseppe Esposito <eespo...@redhat.com>
---
python/qemu/machine.py | 3 +++
tests/qemu-iotests/iotests.py | 10 +++++++++-
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/python/qemu/machine.py b/python/qemu/machine.py
index 12752142c9..d6142271c2 100644
--- a/python/qemu/machine.py
+++ b/python/qemu/machine.py
@@ -409,6 +409,9 @@ def _launch(self) -> None:
stderr=subprocess.STDOUT,
shell=False,
close_fds=False)
+
+ if 'gdbserver' in self._wrapper:
+ self._qmp_timer = None
Why doesn’t __init__() evaluate this? This doesn’t feel like the
right place for it. If we want to evaluate it here, self._qmp_timer
shouldn’t exist, and instead the timeout should be a _post_launch()
parameter. (Which I would have nothing against, by the way.)
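[For illustration, a minimal sketch of the _post_launch()-parameter variant being suggested. The class, method, and attribute names follow machine.py, but the bodies are reduced stand-ins, and qmp_accepted_timeout is a hypothetical attribute added only so the sketch is observable:]

```python
from typing import Optional, Sequence


class QEMUMachine:
    """Reduced sketch; the real class lives in python/qemu/machine.py."""

    def __init__(self, wrapper: Sequence[str] = ()) -> None:
        self._wrapper = wrapper
        # Hypothetical attribute, only here to make the sketch testable:
        self.qmp_accepted_timeout: Optional[float] = None

    def _post_launch(self, qmp_timeout: Optional[float]) -> None:
        # None means: wait indefinitely for the QMP connection.
        self.qmp_accepted_timeout = qmp_timeout

    def _launch(self) -> None:
        # Evaluate the gdbserver condition at the call site and pass the
        # result down, instead of storing a _qmp_timer attribute.
        timeout = None if 'gdbserver' in self._wrapper else 15.0
        self._post_launch(timeout)
```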
Uhm... I got a comment on a previous version saying that, for the
"event" callbacks, a property was better than passing a parameter
around, and I honestly agree.
What should __init__() do? The check here detects whether the invocation
uses gdb (and, a couple of patches ahead, also valgrind), in order to
remove the timer.
If I understand what you mean, you want something like
def __init__(self, timer):
But from my understanding, the class hierarchy is:
QEMUMachine: in machine.py
QEMUQtestMachine(QEMUMachine): in qtest.py
VM(qtest.QEMUQtestMachine): in iotests.py
VM() is then invoked in each test.
So check.py cannot easily reach __init__ to pass the parameter in.
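[To illustrate the plumbing problem: threading a timer argument through that hierarchy would look roughly like this. The class and module names are from the series; the constructors are heavily simplified and the use_gdb parameter is hypothetical:]

```python
from typing import Optional


class QEMUMachine:                        # python/qemu/machine.py
    def __init__(self, qmp_timer: Optional[float] = 15.0) -> None:
        self._qmp_timer = qmp_timer


class QEMUQtestMachine(QEMUMachine):      # python/qemu/qtest.py
    def __init__(self, qmp_timer: Optional[float] = 15.0) -> None:
        # Every intermediate class has to forward the parameter...
        super().__init__(qmp_timer=qmp_timer)


class VM(QEMUQtestMachine):               # tests/qemu-iotests/iotests.py
    def __init__(self, use_gdb: bool = False) -> None:
        # ...all the way down from where the gdb decision is known,
        # which is why a single check in one place is less invasive.
        super().__init__(qmp_timer=None if use_gdb else 15.0)
```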
Also, mypy complains that this variable is a float, so iotest 297 (which
runs mypy) fails.
This and all the other test 297 (mypy) mistakes are my fault: I did not
have pylint-3 installed, so test 297 was skipped when I ran the tests.
self._post_launch()
def _early_cleanup(self) -> None:
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 05d0dc0751..380527245e 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -478,7 +478,10 @@ def log(msg: Msg,
class Timeout:
def __init__(self, seconds, errmsg="Timeout"):
- self.seconds = seconds
+ if qemu_gdb:
+ self.seconds = 3000
+ else:
+ self.seconds = seconds
We might as well completely disable the timeout then, that would be more
honest, I think. (I.e. to make __enter__ and __exit__ no-ops.)
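[A sketch of what the suggested no-op variant could look like. The signal handling is reduced to the relevant calls, and qemu_gdb here is a stand-in for the iotests.py global (non-empty when gdb is in use):]

```python
import signal

# Stand-in for the iotests.py global; non-empty when gdb is attached.
qemu_gdb = []


class Timeout:
    def __init__(self, seconds, errmsg="Timeout"):
        self.seconds = seconds
        self.errmsg = errmsg

    def __enter__(self):
        # With gdb attached, skip arming the alarm entirely instead of
        # picking an arbitrary large value such as 3000 seconds.
        if qemu_gdb:
            return self
        signal.signal(signal.SIGALRM, self.timeout)
        signal.setitimer(signal.ITIMER_REAL, self.seconds)
        return self

    def __exit__(self, exc_type, value, traceback):
        if qemu_gdb:
            return False
        signal.setitimer(signal.ITIMER_REAL, 0)
        return False

    def timeout(self, signum, frame):
        raise Exception(self.errmsg)
```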
self.errmsg = errmsg
def __enter__(self):
signal.signal(signal.SIGALRM, self.timeout)
@@ -684,6 +687,11 @@ def qmp_to_opts(self, obj):
output_list += [key + '=' + obj[key]]
return ','.join(output_list)
+ def get_qmp_events(self, wait: bool = False) -> List[QMPMessage]:
+ if qemu_gdb:
+ wait = 0.0
First, this is a bool. I can see that qmp.py expects a
Union[bool, float], but machine.py expects just a bool. (Also, mypy
complains that this specific `wait` here is a `bool`. You can see that
by running iotest 297.)
I think here machine.py should reflect the qmp.py behavior and have an
Union[bool, float] too.
Second, I don’t understand this. If the caller wants to block waiting
on an event, then that should have nothing to do with whether we have
gdb running or not. As far as I understand, setting wait to 0.0 is the
same as wait = False, i.e. we don’t block and just return None
immediately if there is no pending event.
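[For reference, a reduced model of the wait semantics under discussion. This is a hypothetical sketch, not the real qmp.py code; it only shows why wait=0.0 and wait=False behave identically on the non-blocking path:]

```python
from typing import List, Union


def pull_events(pending: List[dict], wait: Union[bool, float]) -> List[dict]:
    """Model of the Union[bool, float] contract:
    wait=False or wait=0.0: never block, return whatever is pending;
    wait=True: block with no timeout; wait=<float>: block that long.
    """
    if pending or not wait:
        # wait == False and wait == 0.0 both take this branch, so
        # forcing wait = 0.0 is the same as forcing wait = False.
        return list(pending)
    raise NotImplementedError("blocking path omitted in this sketch")
```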
You're right, this might not be needed here. The problem I had was that
calling gdb and pausing at a breakpoint for a while would make the QMP
socket time out, aborting the whole test. To avoid that, I need to stop
or delay the timers.
I can't remember why I added this check here. At some point the test was
definitely failing because of socket timeout expiration, but I cannot
reproduce the problem when commenting out this check in
get_qmp_events. The other check in patch 3 should be enough.
Emanuele
Max
+ return super().get_qmp_events(wait=wait)
+
def get_qmp_events_filtered(self, wait=60.0):
result = []
for ev in self.get_qmp_events(wait=wait):