Nir Soffer has posted comments on this change.

Change subject: lvm: deactivate lvs during bootstrap
......................................................................


Patch Set 5:

(2 comments)

> Nice. if I understand the logic correctly you have my +1. Although, didn't 
> you want to run it only once on first boot?

> if your statement: "After the first service start, logical volumes are 
> typically inactive and this change adds no additional cost." answers my 
> question, I just hope it doesn't take too long in different loads to perform 
> that logic. otherwise, loading the active tasks will be delayed and iirc, we 
> really want to avoid that.

Running only on the first run was important when the operation included finding 
active lvs by accessing the storage, which can be expensive when we have a 
large number of lvs. Now that we use the existing cache, it costs nothing to 
find active lvs.

Even more important, if we do find an lv that was left active by the previous 
vdsm process, because of an unclean shutdown or a bug, running again ensures 
that we start again in a clean state.

....................................................
File vdsm/storage/lvm.py
Line 34: from collections import namedtuple
Line 35: import pprint as pp
Line 36: import threading
Line 37: from itertools import chain
Line 38: from itertools import groupby
This makes it harder to review and less readable.
Line 39: from subprocess import list2cmdline
Line 40: 
Line 41: from vdsm import constants
Line 42: import misc


Line 660:     for (vg, opened), group in groupby(activelvs, lambda triple: 
triple[:2]):
Line 661:         lvs = [triple[2] for triple in group]
Line 662:         if opened:
Line 663:             log.warning("Skipping open lvs: vg=%s lvs=%s", vg, lvs)
Line 664:         else:
There are two edge cases here:

- qemu tries to open the device after we deactivate it - it will fail to open 
the device - safe.
- qemu opened the device after the cache was built but before we try to 
deactivate - the lvm command will fail and leave the other lvs in this vg 
active - unlikely. In this case we can reload the lvs and try to run the 
command again without the newly opened devices.

I discussed this with Ayal and Allon and they did not see this as a problem.
Line 665:             log.info("Deactivating lvs: vg=%s lvs=%s", vg, lvs)
Line 666:             try:
Line 667:                 _setLVAvailability(vg, lvs, "n")
Line 668:             except se.CannotDeactivateLogicalVolume:
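
The reload-and-retry idea for the unlikely second edge case could be sketched 
roughly like this. This is only an illustration of the control flow, not 
vdsm's actual code: the deactivate and reload_lvs callables stand in for 
_setLVAvailability and a cache reload, and I use OSError in place of 
se.CannotDeactivateLogicalVolume.

```python
from itertools import groupby

def deactivate_unused(active_lvs, deactivate, reload_lvs):
    """Deactivate closed lvs, grouped per vg.

    active_lvs: iterable of (vg, opened, lv) triples, sorted by (vg, opened).
    deactivate(vg, lvs): may raise OSError if an lv was opened meanwhile.
    reload_lvs(vg, lvs): returns the subset of lvs that are still closed.
    Returns the list of (vg, lvs) groups that were skipped because open.
    """
    skipped = []
    for (vg, opened), group in groupby(active_lvs, key=lambda t: t[:2]):
        lvs = [t[2] for t in group]
        if opened:
            # Open lvs are in use (e.g. by qemu) - leave them alone.
            skipped.append((vg, lvs))
            continue
        try:
            deactivate(vg, lvs)
        except OSError:
            # Race: an lv was opened after the cache was built. Reload and
            # retry with only the lvs that are still closed.
            still_closed = reload_lvs(vg, lvs)
            if still_closed:
                deactivate(vg, still_closed)
    return skipped
```

If qemu opens a device between the reload and the retry we would fail again, 
but at that point giving up and logging a warning is fine - the lv is in use.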


-- 
To view, visit http://gerrit.ovirt.org/21291
To unsubscribe, visit http://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I5f142ebca7a00d45f2500ad2631fab2366f2f7db
Gerrit-PatchSet: 5
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsof...@redhat.com>
Gerrit-Reviewer: Allon Mureinik <amure...@redhat.com>
Gerrit-Reviewer: Ayal Baron <aba...@redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <dan...@redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimo...@redhat.com>
Gerrit-Reviewer: Nir Soffer <nsof...@redhat.com>
Gerrit-Reviewer: Sergey Gotliv <sgot...@redhat.com>
Gerrit-Reviewer: Yaniv Bronhaim <ybron...@redhat.com>
Gerrit-Reviewer: Yeela Kaplan <ykap...@redhat.com>
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: Yes
_______________________________________________
vdsm-patches mailing list
vdsm-patches@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-patches
