Nir Soffer has posted comments on this change.

Change subject: v2v: Convert VM from external source to Data Domain
......................................................................
Patch Set 11: (16 comments)

https://gerrit.ovirt.org/#/c/37509/11/tests/v2vTests.py
File tests/v2vTests.py:

Line 110:         return self._disk_count
Line 111: 
Line 112:     @property
Line 113:     def progress(self):
Line 114:         return self._progress

This is a little over-engineered for a mock. It is also not really a mock but a fake:

    class FakeImportVM:
        def __init__(self):
            self.current_disk = None
            self.disk_count = None
            self.progress = None
            self.status_msg = None

        def progress_stage(self, progress):
            self.progress = progress

        def disk_copy_stage(self, msg, disk, count):
            self.status_msg = msg
            self.current_disk = disk
            self.disk_count = count

Line 115: 
Line 116: 
Line 117: class v2vTests(TestCaseBase):
Line 118:     @MonkeyPatch(libvirtconnection, 'open_connection', hypervisorConnect)

Line 153: 13

Please add more progress data; this does not test the progress parsing code. Also add another disk to verify that the code handles multiple disks correctly.

Line 155: 256

What is 256.0 - the time from the start of the operation?

Line 163: 
Line 164:         parser.parse_progress(' (13.16/100%)')
Line 165:         self.assertEquals(importVm.progress, 13)
Line 166:         parser.parse_progress(' (100/100%)')
Line 167:         self.assertEquals(importVm.progress, 100)

We should convert the parser to a generator, accepting lines from the v2v process and generating a stream of events.
The test should then look like this:

    parser = v2v.OutputParser(StringIO(output))
    events = list(parser.parse())
    self.assertEqual(events, [
        (v2v.IMPORT_PROGRESS, 1, 2, "Copying disk 1/2 to /tmp/v2v..."),
        (v2v.DISK_PROGRESS, 0),
        (v2v.DISK_PROGRESS, 50),
        (v2v.DISK_PROGRESS, 100),
        (v2v.IMPORT_PROGRESS, 2, 2, "Copying disk 2/2 to /tmp/v2v..."),
        (v2v.DISK_PROGRESS, 0),
        (v2v.DISK_PROGRESS, 50),
        (v2v.DISK_PROGRESS, 100),
    ])

The parser can be used like this in ImportVM:

    for event in parser.parse():
        if event.type == IMPORT_PROGRESS:
            self._current_disk = event.current_disk
            self._disk_count = event.disk_count
            self._status_msg = event.status_msg
        elif event.type == DISK_PROGRESS:
            self._disk_progress = event.progress
        else:
            raise InvalidEvent(event)

This is much easier to test and use; you don't need to create fake objects and invent an API for setting the info.

You can implement parse() like this:

    for line in self.stdout:
        if 'Copying disk' in line:
            current_disk, disk_count, description = parse line
            yield IMPORT_PROGRESS, current_disk, disk_count, description
            while True:
                read data from self.stdout
                progress = parse buf
                yield DISK_PROGRESS, progress
                if progress == 100:
                    break

https://gerrit.ovirt.org/#/c/37509/11/vdsm/v2v.py
File vdsm/v2v.py:

Line 91: 
Line 92: 
Line 93: class NoSuchOvf(V2VJobError):
Line 94:     ''' Ovf path is not exists in /var/run/vdsm/v2v/ '''
Line 95:     err_name = 'invalidV2VOvfPath'

I think we should also fix the error name.

Line 96: 
Line 97: 
Line 98: class VolumeError(RuntimeError):
Line 99:     def __str__(self):

Line 260:         self._preparedVolumes = []
Line 261:         self._parser = OutputParser(self)
Line 262:         self._pass_file = os.path.join(_V2V_DIR, "%s.tmp" % jobId)
Line 263:         self.ABORT_RETRIES = 3
Line 264:         self.ABORT_DELAY = 30

These should be class variables - why create another instance variable for each job? I would add them to the vdsm conf so we can tune this if needed in the field (in a new patch). Also, do we really want to wait 90 seconds for the process?
I would not wait more than 10 seconds.

Line 265: 
Line 266:     def start(self):
Line 267:         self._thread = threading.Thread(target=self._run)
Line 268:         self._thread.daemon = True

Line 330: 
Line 331:         if self._proc.returncode != 0:
Line 332:             self._status = STATUS.ERROR
Line 333:             error = self._get_process_errors()
Line 334:             self._status_msg = error

You don't need to set self._status_msg here, as it is done in _run. This is the benefit of handling all errors in one place.

Line 335:             raise V2VProcessError(error)
Line 336:         self._status = STATUS.DONE
Line 337: 
Line 338:     def disk_copy_stage(self, status_msg, current_disk, disk_count):

Line 331:         if self._proc.returncode != 0:
Line 332:             self._status = STATUS.ERROR
Line 333:             error = self._get_process_errors()
Line 334:             self._status_msg = error
Line 335:             raise V2VProcessError(error)

So this can be simplified to:

    raise V2VProcessError(self._get_process_errors())

Line 336:         self._status = STATUS.DONE
Line 337: 
Line 338:     def disk_copy_stage(self, status_msg, current_disk, disk_count):
Line 339:         self._status = STATUS.DISK_COPY

Line 346:     def progress_stage(self, progress):
Line 347:         try:
Line 348:             self._disk_progress = int(progress)
Line 349:         except ValueError:
Line 350:             self._abort()

If this exception bubbles up to _run, we can let _run do the abort.

Line 351:             raise JobProgressError("Error parsing progress: %r" % progress)
Line 352: 
Line 353:     def _watch_process_output(self):
Line 354:         while not self._aborted and self._proc.returncode is None:

Line 371:     def _get_process_errors(self):
Line 372:         error = self._proc.stderr.readlines()
Line 373:         if len(error) > 0:
Line 374:             msg = 'error process ended with errors: %s' % error
Line 375:             return msg

This will return None if there are no errors, so we don't get any context in the error. It should be:

    return "Process failed: %s" % self._proc.stderr.read(1024)

We like to limit the amount of data read from the process; I just invented a number, but you probably know what the limit should be.
Since it is not a very useful method, we can inline it at the single call site.

Line 376: 
Line 377:     def _create_v2v_dir(self):
Line 378:         try:
Line 379:             os.mkdir(_V2V_DIR, 0o700)

Line 402:         self._status = STATUS.ABORT
Line 403:         logging.info('aborting job id: %r', self.id)
Line 404:         self._abort()
Line 405: 
Line 406:     def _abort(self):

We should move this to utils in another patch.

Line 407:         self._aborted = True
Line 408:         if self._proc.returncode is None:
Line 409:             for i in range(self.ABORT_RETRIES):
Line 410:                 if self._proc.returncode is not None:

Line 421:                             self._proc.returncode)
Line 422:                         return
Line 423:                     raise
Line 424:                 time.sleep(self.ABORT_DELAY)
Line 425:         logging.error("Error killing virt-v2v (pid: %d)", self._proc.pid)

If we failed, we should pass the pid to zombiereaper, so it will reap the child process when it is finally done. See vdsm/storage/hba.py for example usage.

Line 426: 
Line 427:     def _abort_msg(self, msg):
Line 428:         self._status_msg = msg
Line 429:         logging.error(msg)

Line 429: logging

Why do we need this method now? self._status_msg is set in _run and the message is raised there.

Line 462:         ''' prepareImage returns /prefix/sdUUID/images/imgUUID/volUUID
Line 463:             we need storage domain absolute path so we go up 3 levels '''
Line 464:         path = os.path.split(path)
Line 465:         path = os.path.split(path)
Line 466:         return os.path.split(path)

This is still broken - see my comment for version 10.

Line 467: 
Line 468:     def _teardown_volumes(self):
Line 469:         for drive in self._preparedVolumes:
Line 470:             try:

Line 489: unexpected format in "Copying disk"

Please include the unexpected line in the error message; otherwise we will have a very hard time debugging this failure.

Line 497: JobOutputParsing

Please include the unexpected buf in the error message; otherwise we will have a very hard time debugging this failure.
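To make the generator suggestion above concrete, here is a minimal, self-contained sketch. The event names follow the review; the exact virt-v2v line format (and the assumption that input arrives line by line - the real process emits disk progress without newlines, as the review's pseudocode hints) is assumed for illustration only:

```python
import re
from io import StringIO

IMPORT_PROGRESS = 'import_progress'
DISK_PROGRESS = 'disk_progress'

# Assumed line formats, based on the examples quoted in the review:
#   "Copying disk 1/2 to /tmp/v2v/..."  and  " (13.16/100%)"
_COPY_DISK_RE = re.compile(r'Copying disk (\d+)/(\d+)')
_PROGRESS_RE = re.compile(r'\((\d+)(?:\.\d+)?/100%\)')


class OutputParser(object):
    """Yield event tuples parsed from v2v process output lines."""

    def __init__(self, stream):
        self._stream = stream

    def parse(self):
        for line in self._stream:
            copy = _COPY_DISK_RE.search(line)
            if copy:
                yield (IMPORT_PROGRESS, int(copy.group(1)),
                       int(copy.group(2)), line.strip())
                continue
            progress = _PROGRESS_RE.search(line)
            if progress:
                yield DISK_PROGRESS, int(progress.group(1))


output = StringIO('Copying disk 1/2 to /tmp/v2v/sda\n'
                  ' (13.16/100%)\n'
                  ' (100/100%)\n')
events = list(OutputParser(output).parse())
print(events)
```

With this shape, the test simply compares the event list against the expected tuples, and ImportVM consumes events in a single loop instead of exposing callback methods to the parser.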
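Regarding the still-broken three-levels-up helper quoted above: os.path.split() returns a (head, tail) tuple, so feeding its result back into itself cannot work. A sketch of a fix, using os.path.dirname() instead (get_sd_path is a hypothetical name, not the patch's):

```python
import os


def get_sd_path(vol_path):
    # prepareImage returns /prefix/sdUUID/images/imgUUID/volUUID;
    # the storage domain path is three directory levels up.
    # os.path.dirname() returns only the head, so it chains cleanly,
    # unlike os.path.split(), which returns a (head, tail) tuple.
    return os.path.dirname(os.path.dirname(os.path.dirname(vol_path)))


sd_path = get_sd_path('/prefix/sdUUID/images/imgUUID/volUUID')
print(sd_path)  # /prefix/sdUUID
```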
--
To view, visit https://gerrit.ovirt.org/37509
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I34bd86d5a87ea8c42113c4a732f87ddd4ceab9ea
Gerrit-PatchSet: 11
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Shahar Havivi <[email protected]>
Gerrit-Reviewer: Dan Kenigsberg <[email protected]>
Gerrit-Reviewer: Federico Simoncelli <[email protected]>
Gerrit-Reviewer: Francesco Romani <[email protected]>
Gerrit-Reviewer: Michal Skrivanek <[email protected]>
Gerrit-Reviewer: Nir Soffer <[email protected]>
Gerrit-Reviewer: Piotr Kliczewski <[email protected]>
Gerrit-Reviewer: Saggi Mizrahi <[email protected]>
Gerrit-Reviewer: Shahar Havivi <[email protected]>
Gerrit-Reviewer: Yaniv Bronhaim <[email protected]>
Gerrit-Reviewer: [email protected]
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: Yes

_______________________________________________
vdsm-patches mailing list
[email protected]
https://lists.fedorahosted.org/mailman/listinfo/vdsm-patches
