Arik Hadas has posted comments on this change.
Change subject: RAM snapshots feature
......................................................................
Patch Set 12: (8 inline comments)
....................................................
File vdsm/vm.py
Line 2852: if fromSnapshot:
Line 2853: srcDomXML = self._correctDiskVolumes(srcDomXML)
Line 2854: self._connection.restoreFlags(fname, srcDomXML)
Line 2855: else:
Line 2856: self._connection.restore(fname)
Using restoreFlags with our saved _srcDomXml on restore from hibernation
introduces bugs related to USB. I would prefer not to change the working
restore-from-hibernation flow at this stage, as there is still unfinished
work on the RAM snapshots feature. Replacing the 'restore' call with
'restoreFlags' can be done later in a separate patch if we would like.
I moved the disk correction up so that the updated XML is exposed to the
hook.
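To make the two restore paths concrete, here is a minimal sketch of the branching described above. The function name and arguments are illustrative, not the patch's actual code; `conn` stands for a libvirt `virConnect`, whose `restore`/`restoreFlags` calls the patch uses.

```python
def restore_vm(conn, fname, src_dom_xml=None, from_snapshot=False):
    """Sketch: restore a saved VM state file via libvirt."""
    if from_snapshot:
        # Pass the corrected domain XML so libvirt points the disks at
        # the latest volumes (RAM snapshot path).
        conn.restoreFlags(fname, src_dom_xml, 0)
    else:
        # Plain restore keeps the saved XML untouched, avoiding the USB
        # issue seen with restoreFlags on restore from hibernation.
        conn.restore(fname)
```

This keeps the hibernation flow unchanged while the snapshot path gets the rewritten XML.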
Line 2857: finally:
Line 2858: self.cif.teardownVolumePath(self.conf['restoreState'])
Line 2859:
Line 2860: self._dom = NotifyingVirDomain(
Line 2880: self.setDownStatus(ERROR, 'failed to start libvirt vm')
Line 2881: return
Line 2882: self._domDependentInit()
Line 2883:
Line 2884: def _correctDiskVolumes(self, srcDomXML):
I don't understand why I need a getTopVolume method, so I split it
differently and extracted the different responsibilities of this method
into sub-methods.
Line 2885: """
Line 2886: Replace each volume in the given XML with the latest volume
Line 2887: that the image has.
Line 2888: Each image has a newer volume than the one that appears in the
Line 2892: """
Line 2893: parsedSrcDomXML = _domParseStr(srcDomXML)
Line 2894: diskElements = parsedSrcDomXML.childNodes[0]. \
Line 2895:     getElementsByTagName('devices')[0].getElementsByTagName('disk')
Line 2896: sourceElements = []
Done (is it ok? if not, please suggest names..)
Line 2897:
Line 2898: for diskElement in diskElements:
Line 2899: if diskElement.getAttribute('device') != 'disk':
Line 2900: continue
Line 2894: diskElements = parsedSrcDomXML.childNodes[0]. \
Line 2895:     getElementsByTagName('devices')[0].getElementsByTagName('disk')
Line 2896: sourceElements = []
Line 2897:
Line 2898: for diskElement in diskElements:
Done (is it ok? if not, please suggest names..)
Line 2899: if diskElement.getAttribute('device') != 'disk':
Line 2900: continue
Line 2901:
Line 2902: sourceElement = diskElement.getElementsByTagName('source')[0]
Line 2895:     getElementsByTagName('devices')[0].getElementsByTagName('disk')
Line 2896: sourceElements = []
Line 2897:
Line 2898: for diskElement in diskElements:
Line 2899: if diskElement.getAttribute('device') != 'disk':
Done
Line 2900: continue
Line 2901:
Line 2902: sourceElement = diskElement.getElementsByTagName('source')[0]
Line 2903: if sourceElement:
Line 2907: if vmDrive.device == 'disk':
Line 2908: pathToImageDir = os.path.dirname(vmDrive.path)
Line 2909: for sourceElement in sourceElements:
Line 2910: filePath = sourceElement.getAttribute('file')
Line 2911: if filePath.startswith(pathToImageDir):
Why? Is it OK now?
Line 2912: sourceElement.setAttribute('file', vmDrive.path)
Line 2913: sourceElements.remove(sourceElement)
Line 2914: break
Line 2915:
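The disk-correction idea quoted above can be sketched as follows. This is an illustrative, self-contained approximation, not the patch's code: `drive_paths` (a list of each drive's latest volume path) is an assumption standing in for the VM's drive objects, and matching by image directory mirrors the patch's `startswith` check.

```python
import os
from xml.dom import minidom

def correct_disk_volumes(src_dom_xml, drive_paths):
    """Sketch: point each <disk> <source file=...> at the latest volume."""
    dom = minidom.parseString(src_dom_xml)
    devices = dom.getElementsByTagName('devices')[0]
    for disk in devices.getElementsByTagName('disk'):
        if disk.getAttribute('device') != 'disk':
            continue
        source = disk.getElementsByTagName('source')[0]
        file_path = source.getAttribute('file')
        for new_path in drive_paths:
            # Volumes of the same image share a directory, so match on it.
            if file_path.startswith(os.path.dirname(new_path)):
                source.setAttribute('file', new_path)
                break
    return dom.toxml()
```

The key point is that the saved XML still names the pre-snapshot volume, so the path must be rewritten before it is handed to libvirt (and to the hook).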
Line 3584: # This code should be removed once qemu-img will handle files
Line 3585: # with size that is not multiple of block size correctly
Line 3586: if memoryParams:
Line 3587: sdUUID = memoryVol['domainID']
Line 3588: oop.getProcessPool(sdUUID).fileUtils.write(
OK, for block-based storage there's no problem - we don't need to pad the
file, as its size is always a multiple of the block size. So we're left
with file-based storage. I understand that we don't want to initiate
another process for non-NFS file-based storage, but I don't know whether
it is really that heavy. Anyway, I already implemented the padding to take
place via oop for NFS only, and maybe just for the sake of consistency (as
I've been told that oop is currently used only for NFS) we could leave it
that way. That being said, I don't mind changing it if needed - but let's
decide :)
Line 3589: memoryVolPath, '\0' * 512, 'a')
Line 3590:
Line 3591: for drive in newDrives.values():  # Update the drive information
Line 3592: try:
Line 3585: # with size that is not multiple of block size correctly
Line 3586: if memoryParams:
Line 3587: sdUUID = memoryVol['domainID']
Line 3588: oop.getProcessPool(sdUUID).fileUtils.write(
Line 3589: memoryVolPath, '\0' * 512, 'a')
I can't use it for a couple of reasons:
1. Apparently we can't return references to objects from oop functions, so
we can't use a DirectFile object.
2. It could serve us if we had control over how libvirt writes the file -
if libvirt could use our DirectFile and it did the padding internally,
that would solve the problem; but since that's not the case, I need to do
the padding myself.
Per Saggi's suggestion, I replaced the method in fileUtils with one that
uses 'ftruncate' to pad the file with zeroes to the nearest multiple
of 512.
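The ftruncate-based padding described above can be sketched like this. It is a minimal standalone illustration, not the fileUtils code itself; the function name and the 512-byte constant reflect the block size discussed in this thread.

```python
import os

BLOCK_SIZE = 512  # block size the memory volume must be a multiple of

def pad_to_block_size(path):
    """Sketch: zero-pad a file up to the nearest multiple of BLOCK_SIZE."""
    size = os.stat(path).st_size
    remainder = size % BLOCK_SIZE
    if remainder:
        # Growing a file with ftruncate fills the extension with zeroes,
        # so no explicit write of '\0' bytes is needed.
        with open(path, 'r+b') as f:
            os.ftruncate(f.fileno(), size + (BLOCK_SIZE - remainder))
```

Compared with appending `'\0' * 512` unconditionally, this only grows the file when needed and never overshoots a full block.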
Line 3590:
Line 3591: for drive in newDrives.values():  # Update the drive information
Line 3592: try:
Line 3593: self.updateDriveParameters(drive)
--
To view, visit http://gerrit.ovirt.org/15072
To unsubscribe, visit http://gerrit.ovirt.org/settings
Gerrit-MessageType: comment
Gerrit-Change-Id: I62401940afb0228cbd9dd3611b6ed8e0ff67c82c
Gerrit-PatchSet: 12
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Arik Hadas <[email protected]>
Gerrit-Reviewer: Arik Hadas <[email protected]>
Gerrit-Reviewer: Dan Kenigsberg <[email protected]>
Gerrit-Reviewer: Eduardo <[email protected]>
Gerrit-Reviewer: Igor Lvovsky <[email protected]>
Gerrit-Reviewer: Michal Skrivanek <[email protected]>
Gerrit-Reviewer: Peter V. Saveliev <[email protected]>
Gerrit-Reviewer: Saggi Mizrahi <[email protected]>
Gerrit-Reviewer: Vinzenz Feenstra <[email protected]>
Gerrit-Reviewer: Yeela Kaplan <[email protected]>
Gerrit-Reviewer: oVirt Jenkins CI Server
_______________________________________________
vdsm-patches mailing list
[email protected]
https://lists.fedorahosted.org/mailman/listinfo/vdsm-patches