Nir Soffer has uploaded a new change for review.

Change subject: oop: Explicitly close running ioprocesses
......................................................................

oop: Explicitly close running ioprocesses

The IOProcess client uses __del__ to close automatically when the last
reference to the client disappears. This attempt fails consistently when
__del__ is called during shutdown, since the Python interpreter is
already partly torn down.

In the worst case, the entire application gets stuck during shutdown.
We see this randomly when running the tests in the CI. Looking at
backtraces, all threads are blocked trying to acquire a lock that may
have already been destroyed.

This patch avoids the issue by explicitly closing all running
ioprocesses during application shutdown and when cleaning up after
storage tests. This ensures that IOProcess's close() and __del__() run
in a well defined environment.

Change-Id: Id9fce2bcc8a7916dd9e31a8b23a4199611c7938f
Bug-Url: https://bugzilla.redhat.com/1334274
Signed-off-by: Nir Soffer <[email protected]>
---
M tests/storagetestlib.py
M vdsm/storage/hsm.py
M vdsm/storage/outOfProcess.py
3 files changed, 28 insertions(+), 2 deletions(-)


  git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/30/60830/1

diff --git a/tests/storagetestlib.py b/tests/storagetestlib.py
index 5c72ae7..b448649 100644
--- a/tests/storagetestlib.py
+++ b/tests/storagetestlib.py
@@ -31,6 +31,7 @@
 from vdsm.storage import constants as sc
 
 from storage import sd, blockSD, fileSD, image, blockVolume, volume
+from storage import outOfProcess as oop
 from storage import hsm
 from storage.sdm import volume_artifacts
 
@@ -80,7 +81,10 @@
             [hsm, 'sdCache', fake_sdc],
         ]):
             fake_sdc.domains[sd_manifest.sdUUID] = FakeSD(sd_manifest)
-            yield FakeFileEnv(tmpdir, sd_manifest, fake_sdc)
+            try:
+                yield FakeFileEnv(tmpdir, sd_manifest, fake_sdc)
+            finally:
+                oop.stop()
 
 
 @contextmanager
@@ -98,7 +102,10 @@
         ]):
             sd_manifest = make_blocksd_manifest(tmpdir, lvm)
             fake_sdc.domains[sd_manifest.sdUUID] = FakeSD(sd_manifest)
-            yield FakeBlockEnv(tmpdir, sd_manifest, fake_sdc, lvm)
+            try:
+                yield FakeBlockEnv(tmpdir, sd_manifest, fake_sdc, lvm)
+            finally:
+                oop.stop()
 
 
 class FakeMetadata(dict):
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 9fce6e6..30a3a69 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -3287,6 +3287,7 @@
                                  exc_info=True)
 
             self.taskMng.prepareForShutdown()
+            oop.stop()
         except:
             pass
 
diff --git a/vdsm/storage/outOfProcess.py b/vdsm/storage/outOfProcess.py
index 3fc1970..c6b2cee 100644
--- a/vdsm/storage/outOfProcess.py
+++ b/vdsm/storage/outOfProcess.py
@@ -52,6 +52,24 @@
 log = logging.getLogger('Storage.oop')
 
 
+def stop():
+    """
+    Called during application shutdown to close all running ioprocesses.
+
+    Tests using oop should call this to ensure that stale ioprocesses
+    are not left behind when a test ends.
+    """
+    with _procPoolLock:
+        for name, (eol, proc) in _procPool.items():
+            log.debug("Closing ioprocess %s", name)
+            try:
+                proc._ioproc.close()
+            except Exception:
+                log.exception("Error closing ioprocess %s", name)
+        _procPool.clear()
+        _refProcPool.clear()
+
+
 def cleanIdleIOProcesses(clientName):
     now = elapsed_time()
     for name, (eol, proc) in _procPool.items():


-- 
To view, visit https://gerrit.ovirt.org/60830
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Id9fce2bcc8a7916dd9e31a8b23a4199611c7938f
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <[email protected]>