Nir Soffer has posted comments on this change.

Change subject: gluster: Allow gluster mount with additional servers
......................................................................
Patch Set 4: (10 comments)

https://gerrit.ovirt.org/#/c/40665/4/tests/glusterStorageServerTests.py
File tests/glusterStorageServerTests.py:

Line 41: 'volumeType': 'REPLICATE'}}
Line 42:
Line 43: @MonkeyPatch(storageServer, 'supervdsm', Supervdsm())
Line 44: def testGetVolumeInfo(self):
Line 45: def glusterVolumeInfo(volumeName=None, remoteServer=None):

It would be nice to assert that this volumeName and remoteServer have the correct values.

Line 46: return self.volumeInfo
Line 47:
Line 48: storageServer.supervdsm.glusterVolumeInfo = glusterVolumeInfo
Line 49:

Line 47:
Line 48: storageServer.supervdsm.glusterVolumeInfo = glusterVolumeInfo
Line 49:
Line 50: gluster = GlusterFSConnection("192.168.122.1:/music", "")
Line 51: self.assertEquals(gluster._getGlusterMountOptions(),

Let's not check private functions. Instead, mock the mount command and check that connect() is sending the correct options.

https://gerrit.ovirt.org/#/c/40665/4/vdsm/storage/storageServer.py
File vdsm/storage/storageServer.py:

Line 31
Line 32
Line 33
Line 34
Line 35

Add a blank line before the supervdsm import. It belongs to the next group, not to the vdsm group.

Line 251: DIR = "glusterSD"
Line 252:
Line 253: def connect(self):
Line 254: glusterMountOptions = self._getGlusterMountOptions()
Line 255: self._options += glusterMountOptions

This code means that each time you call connect(), the gluster mount options are appended to self._options. You want to check the options and add them to self._options only once. This could be done by initializing in __init__, but we don't want to do work in __init__; that leads to bad things and to objects that are hard to test. So we can use lazy initialization instead:

    @property
    def options(self):
        if self._gluster_options is None:
            self._gluster_options = self._get_gluster_mount_options()
        return self._options + self._gluster_options

MountConnection should access self.options instead of self._options.
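As a self-contained illustration of the lazy initialization pattern suggested here; the class name, the base-options string, and the hard-coded server addresses are made-up stand-ins, not vdsm's actual code:

```python
class GlusterConnection(object):
    """Sketch of a connection whose extra mount options are computed lazily."""

    def __init__(self, base_options):
        self._options = base_options
        self._gluster_options = None
        self.lookups = 0  # counts option lookups, for demonstration only

    def _get_gluster_mount_options(self):
        # In the real code this would query gluster for the backup servers.
        self.lookups += 1
        return ",backup-volfile-servers=192.168.122.2:192.168.122.3"

    @property
    def options(self):
        # Computed on first access and cached, so calling connect()
        # repeatedly can no longer keep appending options.
        if self._gluster_options is None:
            self._gluster_options = self._get_gluster_mount_options()
        return self._options + self._gluster_options


c = GlusterConnection("rw")
c.options  # first access triggers the lookup
c.options  # second access uses the cached value
assert c.lookups == 1
```

Because options is a property with no setter, callers get a read-only public value, which is exactly what a test should assert against.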
Since you are using a property, this is now a read-only value, and other code cannot change it. With a read-only public property, annoying reviewers will not complain about testing private methods ;-)

Line 256: super(GlusterFSConnection, self).connect()
Line 257:
Line 258: def _getGlusterMountOptions(self):
Line 259: remotePath = self._remotePath.split(":/")

Line 255: self._options += glusterMountOptions
Line 256: super(GlusterFSConnection, self).connect()
Line 257:
Line 258: def _getGlusterMountOptions(self):
Line 259: remotePath = self._remotePath.split(":/")

Are you sure that all volumes are using ":/"? Please see the discussion in https://gerrit.ovirt.org/36237 about this.

Line 260: glusterServer = remotePath[0]
Line 261: volume = remotePath[1]
Line 262: volInfo = supervdsm.getProxy().glusterVolumeInfo(volume,
Line 263: glusterServer)

Line 257:
Line 258: def _getGlusterMountOptions(self):
Line 259: remotePath = self._remotePath.split(":/")
Line 260: glusterServer = remotePath[0]
Line 261: volume = remotePath[1]

The Python way to do this split is tuple unpacking:

    server, volume = remotePath.split(":/", 1)

Note that I'm splitting only once, in case the volume name also contains ":/" - maybe this is impossible, but there is no reason to split more than once anyway.

Line 262: volInfo = supervdsm.getProxy().glusterVolumeInfo(volume,
Line 263: glusterServer)
Line 264: servers = self._getGlusterServers(volume, volInfo)
Line 265: servers.remove(glusterServer)

Line 262: volInfo = supervdsm.getProxy().glusterVolumeInfo(volume,
Line 263: glusterServer)
Line 264: servers = self._getGlusterServers(volume, volInfo)
Line 265: servers.remove(glusterServer)
Line 266: return "backup-volfile-servers=%s:%s" % (servers[0], servers[1])

This assumes replica 3 - we want to support replica 1, replica 3, and maybe other replica counts. The correct way would be:

    return "backup-volfile-servers=" + ":".join(servers)

We also want to reject replica 2, but let's do that in another patch.
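Putting the review suggestions together - mock the mount side and assert on what connect() sends, use tuple unpacking with a single split, and build the option with ":".join() - a test might look like the sketch below. The Connection and FakeSupervdsm classes, the mounter parameter, and the brick addresses are all simplified assumptions for illustration, not vdsm's real API:

```python
from unittest import mock


class FakeSupervdsm(object):
    """Canned replacement for the supervdsm proxy (names are assumptions)."""

    def glusterVolumeInfo(self, volumeName=None, remoteServer=None):
        return {volumeName: {"bricks": ["192.168.122.1:/music/b1",
                                        "192.168.122.2:/music/b2",
                                        "192.168.122.3:/music/b3"]}}


class Connection(object):
    """Hypothetical, simplified stand-in for GlusterFSConnection."""

    def __init__(self, spec, mounter, supervdsm):
        self._spec = spec
        self._mounter = mounter
        self._supervdsm = supervdsm

    def connect(self):
        # Tuple unpacking, splitting only once as suggested in the review.
        server, volume = self._spec.split(":/", 1)
        info = self._supervdsm.glusterVolumeInfo(volume, server)
        # Split bricks by ":" - the volume part is irrelevant here.
        servers = [b.split(":")[0] for b in info[volume]["bricks"]]
        servers.remove(server)
        # ":".join() works for any number of backup servers, not just two.
        self._mounter.mount("backup-volfile-servers=" + ":".join(servers))


# The test asserts on the public effect (the mount call),
# not on private helper methods.
mounter = mock.Mock()
Connection("192.168.122.1:/music", mounter, FakeSupervdsm()).connect()
mounter.mount.assert_called_once_with(
    "backup-volfile-servers=192.168.122.2:192.168.122.3")
```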
Line 267:
Line 268: def _getGlusterServers(self, volume, volInfo):
Line 269: servers = []
Line 270: bricks = volInfo[volume]['bricks']

Line 264: servers = self._getGlusterServers(volume, volInfo)
Line 265: servers.remove(glusterServer)
Line 266: return "backup-volfile-servers=%s:%s" % (servers[0], servers[1])
Line 267:
Line 268: def _getGlusterServers(self, volume, volInfo):

The parameter volume is not very clear - is this a volume object or a volume name? Better to use a more descriptive name.

Line 269: servers = []
Line 270: bricks = volInfo[volume]['bricks']
Line 271: for brick in bricks:
Line 272: server = brick.split(":/")[0]

Line 268: def _getGlusterServers(self, volume, volInfo):
Line 269: servers = []
Line 270: bricks = volInfo[volume]['bricks']
Line 271: for brick in bricks:
Line 272: server = brick.split(":/")[0]

Check here too that volumes and servers are always separated by ":/" - and since we couldn't care less about the volume name in this code, why not split by ":"?

Line 273: servers.append(server)
Line 274:
Line 275: return servers
Line 276:

Line 271: for brick in bricks:
Line 272: server = brick.split(":/")[0]
Line 273: servers.append(server)
Line 274:
Line 275: return servers

The pythonic way to do this is:

    return [brick.split(":/")[0] for brick in volInfo[volume]['bricks']]

And we don't need a function for this; you can simply inline it in the caller:

    servers = [brick.split(":/")[0] for brick in volInfo[volume]['bricks']]

Line 276:
Line 277:
Line 278: class NFSConnection(object):
Line 279: DEFAULT_OPTIONS = ["soft", "nosharecache"]

--
To view, visit https://gerrit.ovirt.org/40665
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I2478a5edc1fc9d24eb96d64a32a98a2467ce2989
Gerrit-PatchSet: 4
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Ala Hino <[email protected]>
Gerrit-Reviewer: Allon Mureinik <[email protected]>
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Nir Soffer <[email protected]>
Gerrit-Reviewer: Sandro Bonazzola <[email protected]>
Gerrit-Reviewer: [email protected]
Gerrit-HasComments: Yes
_______________________________________________
vdsm-patches mailing list
[email protected]
https://lists.fedorahosted.org/mailman/listinfo/vdsm-patches
