Federico Simoncelli has posted comments on this change.

Change subject: lvm: deactivate lvs during bootstrap
......................................................................


Patch Set 5: Code-Review+1

(3 comments)

A few comments. +1 for now, but I still have doubts.

....................................................
File vdsm/storage/hsm.py
Line 373:         except Exception:
Line 374:             self.log.warn("Failed to clean Storage Repository.", exc_info=True)
Line 375: 
Line 376:         def storageRefresh():
Line 377:             lvm.bootstrap()
I wonder how long this will take on large deployments. Since it runs asynchronously, I am worried that it might get into a more serious race with the activation of the special LVs (e.g. triggered by a connectStoragePool).
Line 378:             sdCache.refreshStorage()
Line 379: 
Line 380:             fileUtils.createdir(self.tasksDir)
Line 381:             # TBD: Should this be run in connectStoragePool? Should tasksDir


....................................................
File vdsm/storage/lvm.py
Line 661:         lvs = [triple[2] for triple in group]
Line 662:         if opened:
Line 663:             log.warning("Skipping open lvs: vg=%s lvs=%s", vg, lvs)
Line 664:         else:
Line 665:             log.info("Deactivating lvs: vg=%s lvs=%s", vg, lvs)
I guess we'll have to accept the massive logging on FCP at boot.
Line 666:             try:
Line 667:                 _setLVAvailability(vg, lvs, "n")
Line 668:             except se.CannotDeactivateLogicalVolume:
Line 669:                 log.error("Error deactivating lvs: vg=%s lvs=%s", vg, lvs)


Line 663:             log.warning("Skipping open lvs: vg=%s lvs=%s", vg, lvs)
Line 664:         else:
Line 665:             log.info("Deactivating lvs: vg=%s lvs=%s", vg, lvs)
Line 666:             try:
Line 667:                 _setLVAvailability(vg, lvs, "n")
This might be safe also for the special LVs (it seems to me that blockSD always assumes they could be inactive), unless we get into some sort of race (see my comment on hsm.py). Maybe the special LVs just deserve a refresh instead of being deactivated.
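To illustrate the suggestion, a sketch of splitting the bootstrap work so special LVs are refreshed while everything else is deactivated. The helper name `plan_bootstrap` is hypothetical, and the special-LV set here is illustrative (the usual blockSD special volumes), not vdsm's actual constant:

```python
# Illustrative set of blockSD special LV names; not vdsm's real constant.
SPECIAL_LVS = frozenset(["metadata", "ids", "inbox", "outbox",
                         "leases", "master"])

def plan_bootstrap(lvs):
    """Split LV names into (to_refresh, to_deactivate).

    Special LVs would get an lvchange --refresh; all other LVs
    would be deactivated with lvchange -an as the patch does now.
    """
    to_refresh = [lv for lv in lvs if lv in SPECIAL_LVS]
    to_deactivate = [lv for lv in lvs if lv not in SPECIAL_LVS]
    return to_refresh, to_deactivate
```

This would keep the special LVs' device-mapper tables current without ever leaving them inactive, sidestepping the race with connectStoragePool.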
Line 668:             except se.CannotDeactivateLogicalVolume:
Line 669:                 log.error("Error deactivating lvs: vg=%s lvs=%s", vg, lvs)
Line 670:             # Some lvs are inactive now
Line 671:             _lvminfo._invalidatelvs(vg, lvs)


-- 
To view, visit http://gerrit.ovirt.org/21291
To unsubscribe, visit http://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I5f142ebca7a00d45f2500ad2631fab2366f2f7db
Gerrit-PatchSet: 5
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Nir Soffer <nsof...@redhat.com>
Gerrit-Reviewer: Allon Mureinik <amure...@redhat.com>
Gerrit-Reviewer: Ayal Baron <aba...@redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <dan...@redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimo...@redhat.com>
Gerrit-Reviewer: Nir Soffer <nsof...@redhat.com>
Gerrit-Reviewer: Sergey Gotliv <sgot...@redhat.com>
Gerrit-Reviewer: Yaniv Bronhaim <ybron...@redhat.com>
Gerrit-Reviewer: Yeela Kaplan <ykap...@redhat.com>
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: Yes
_______________________________________________
vdsm-patches mailing list
vdsm-patches@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-patches
