Nir Soffer has posted comments on this change.

Change subject: Implement SDM.copy_data
......................................................................


Patch Set 3:

(14 comments)

Nice!

https://gerrit.ovirt.org/#/c/60420/3/vdsm/storage/sdm/api/copy_data.py
File vdsm/storage/sdm/api/copy_data.py:

Line 61:     def _abort(self):
Line 62:         if self._operation:
Line 63:             self._operation.abort()
Line 64: 
Line 65:     def _run(self):
We can take the domain lock here.
Line 66:         with self.source.acquire(), self.destination.acquire(writable=True):
Line 67:             with self.source.prepare(), self.destination.prepare(writable=True):
Line 68:                 # Do not start copying if we have already been aborted
Line 69:                 if self._status == jobs.STATUS.ABORTED:


Line 62:         if self._operation:
Line 63:             self._operation.abort()
Line 64: 
Line 65:     def _run(self):
Line 66:         with self.source.acquire(), self.destination.acquire(writable=True):
For locking, "exclusive" may be clearer than "writable".
Line 67:             with self.source.prepare(), self.destination.prepare(writable=True):
Line 68:                 # Do not start copying if we have already been aborted
Line 69:                 if self._status == jobs.STATUS.ABORTED:
Line 70:                     return


Line 82:         interval = config.getint("irs", "progress_interval")
Line 83:         while not self._operation.finished:
Line 84:             self._operation.wait(interval)
Line 85:             self.log.debug('copy_data (job:%s) progress: %s%%',
Line 86:                            self.id, self.progress)
I think we can move this to qemuimg.Operation. The only thing specific to this 
class is the log message, and a generic "qemu-img operation" message would be 
good enough.

This can also clean up the code in image.py.
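A minimal sketch of such a generic helper (the name watch_progress and the
Operation attributes are assumed for illustration, not taken from the patch):

```python
import logging

log = logging.getLogger("qemuimg")


def watch_progress(operation, interval, name="qemu-img operation"):
    """Poll an operation, logging progress until it finishes.

    Generic enough to live on qemuimg.Operation itself, so callers
    such as copy_data and image.py need not duplicate the loop.
    """
    while not operation.finished:
        operation.wait(interval)
        log.debug("%s progress: %s%%", name, operation.progress)
```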
Line 87: 
Line 88: 
Line 89: def _parse_endpoint(params):
Line 90:     if params.get('endpoint_type') == 'div':


Line 85:             self.log.debug('copy_data (job:%s) progress: %s%%',
Line 86:                            self.id, self.progress)
Line 87: 
Line 88: 
Line 89: def _parse_endpoint(params):
Maybe create_endpoint?
Line 90:     if params.get('endpoint_type') == 'div':
Line 91:         return CopyDataDivEndpoint(params)
Line 92:     else:
Line 93:         raise ValueError("Invalid or unsupported endpoint %r" % params)


Line 86:                            self.id, self.progress)
Line 87: 
Line 88: 
Line 89: def _parse_endpoint(params):
Line 90:     if params.get('endpoint_type') == 'div':
Does the endpoint need the endpoint_type property? We have the class of the 
object for identifying the type.

I think the union is needed only for the schema; the actual code can be happy 
without this tag.

So we can do:

    endpoint_type = params.pop('endpoint_type')
    if endpoint_type == 'div':
        return CopyDataDivEndpoint(params)
Line 91:         return CopyDataDivEndpoint(params)
Line 92:     else:
Line 93:         raise ValueError("Invalid or unsupported endpoint %r" % params)
Line 94: 


Line 96: class CopyDataEndpoint(properties.Owner):
Line 97: 
Line 98:     def __init__(self, params):
Line 99:         for k, v in params.items():
Line 100:             setattr(self, k, v)
This is not safe - it will set anything sent in params - bad idea.

Better to accept only the properties declared for this object. This is 
something properties.Owner can probably do.
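A rough sketch of what such validation could look like (the Owner and
DivEndpoint names here are simplified stand-ins for properties.Owner and
CopyDataDivEndpoint, with plain class attributes standing in for the real
property descriptors):

```python
class Owner(object):
    """Sketch: set only parameters matching declared class attributes.

    Unknown keys raise ValueError instead of silently becoming
    instance attributes.
    """

    def __init__(self, params):
        for k, v in params.items():
            if not hasattr(type(self), k):
                raise ValueError("Unsupported parameter %r" % k)
            setattr(self, k, v)


class DivEndpoint(Owner):
    # In the real code these would be properties.UUID(required=True)
    sd_id = None
    img_id = None
    vol_id = None
```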
Line 101: 
Line 102: 
Line 103: class CopyDataDivEndpoint(CopyDataEndpoint):
Line 104:     endpoint_type = properties.Enum(required=True, values=('div',))


Line 100:             setattr(self, k, v)
Line 101: 
Line 102: 
Line 103: class CopyDataDivEndpoint(CopyDataEndpoint):
Line 104:     endpoint_type = properties.Enum(required=True, values=('div',))
This can accept only a single value per class - not a very useful enum.
Line 105:     sd_id = properties.UUID(required=True)
Line 106:     img_id = properties.UUID(required=True)
Line 107:     vol_id = properties.UUID(required=True)
Line 108: 


Line 107:     vol_id = properties.UUID(required=True)
Line 108: 
Line 109:     def __init__(self, params):
Line 110:         super(CopyDataDivEndpoint, self).__init__(params)
Line 111:         self._vol = None
I would just do:

    self.sd_id = params.get('sd_id')
    self.img_id = params.get('img_id')
    self.vol_id = params.get('vol_id')
    self._vol = None

Eliminating this boilerplate would be nice, but let's not waste time on it now. 
Let's do this when we have more code using properties.
Line 112: 
Line 113:     @property
Line 114:     def path(self):
Line 115:         if not self._vol:


Line 112: 
Line 113:     @property
Line 114:     def path(self):
Line 115:         if not self._vol:
Line 116:             return None
Let's let it fail if _vol was not initialized; we are hiding a bug in the code 
by returning this invalid path.
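Something like this (a simplified stand-in for the endpoint class, with the
error message assumed):

```python
class Endpoint(object):
    def __init__(self):
        self._vol = None  # set when the volume is prepared

    @property
    def path(self):
        if self._vol is None:
            # Fail loudly: returning None here would hide a caller bug
            # by handing out an invalid path.
            raise RuntimeError("Volume not prepared")
        return self._vol.getVolumePath()
```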
Line 117:         return self._vol.getVolumePath()
Line 118: 
Line 119:     @property
Line 120:     def qemu_format(self):


Line 119:     @property
Line 120:     def qemu_format(self):
Line 121:         # TODO: Use Image._detect_format to handle broken VM md images.
Line 122:         if not self._vol:
Line 123:             return None
Same, it should fail if you did not prepare the volume.
Line 124:         return  sc.fmt2str(self._vol.getFormat())
Line 125: 
Line 126:     @property
Line 127:     def backing_path(self):


Line 138:         return sc.fmt2str(parent_vol.getFormat())
Line 139: 
Line 140:     def _get_parent_vol(self):
Line 141:         if not self._vol:
Line 142:             return None
Let it fail here too.
Line 143:         return self._vol.getParentVolume()
Line 144: 
Line 145:     @contextmanager
Line 146:     def acquire(self, writable=False):


Line 142:             return None
Line 143:         return self._vol.getParentVolume()
Line 144: 
Line 145:     @contextmanager
Line 146:     def acquire(self, writable=False):
Please document why we are taking the image lock and not the volume lock, and 
add a note about not supporting concurrent copies for now.
Line 147:         res_ns = sd.getNamespace(self.sd_id, IMAGE_NAMESPACE)
Line 148:         lock_type = rm.LockType.exclusive if writable else rm.LockType.shared
Line 149:         with rmanager.acquireResource(res_ns, self.img_id, lock_type):
Line 150:             yield


Line 150:             yield
Line 151: 
Line 152:     @contextmanager
Line 153:     def prepare(self, writable=False):
Line 154:         vars.task.getSharedLock(sc.STORAGE, self.sd_id)
Hiding a lock in prepare is surprising. Locking should be explicit and clear.

Why not keep the lock in hsm?

If you think we should lock only inside the job when the job is running, 
acquire and release this lock in acquire.
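A sketch of that last option, taking both locks in acquire (the locker
callable and lock-mode strings here are illustrative stand-ins for
vars.task.getSharedLock and rmanager.acquireResource, not the real APIs):

```python
from contextlib import contextmanager


class Endpoint(object):
    def __init__(self, locker, sd_id, img_id):
        self._locker = locker  # contextmanager: locker(resource, mode)
        self.sd_id = sd_id
        self.img_id = img_id

    @contextmanager
    def acquire(self, writable=False):
        # Take both locks explicitly here, so prepare() does nothing
        # surprising: the shared domain lock first, then the image lock.
        mode = "exclusive" if writable else "shared"
        with self._locker(self.sd_id, "shared"), \
                self._locker(self.img_id, mode):
            yield
```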
Line 155:         dom = sdCache.produce_manifest(self.sd_id)
Line 156:         self._vol = dom.produceVolume(self.img_id, self.vol_id)
Line 157:         try:
Line 158:             self._vol.prepare(rw=writable, justme=True)


Line 152:     @contextmanager
Line 153:     def prepare(self, writable=False):
Line 154:         vars.task.getSharedLock(sc.STORAGE, self.sd_id)
Line 155:         dom = sdCache.produce_manifest(self.sd_id)
Line 156:         self._vol = dom.produceVolume(self.img_id, self.vol_id)
Why not prepare here? Can it fail, leaving a half-prepared volume?

If it does, add a comment explaining this.
Line 157:         try:
Line 158:             self._vol.prepare(rw=writable, justme=True)
Line 159:             yield
Line 160:         finally:


-- 
To view, visit https://gerrit.ovirt.org/60420
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I30ff635c0c73f67b296033b4a506fc3b9ededfbe
Gerrit-PatchSet: 3
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Adam Litke <ali...@redhat.com>
Gerrit-Reviewer: Adam Litke <ali...@redhat.com>
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Nir Soffer <nsof...@redhat.com>
Gerrit-Reviewer: gerrit-hooks <automat...@ovirt.org>
Gerrit-HasComments: Yes
_______________________________________________
vdsm-patches mailing list
vdsm-patches@lists.fedorahosted.org
https://lists.fedorahosted.org/admin/lists/vdsm-patches@lists.fedorahosted.org
