On Mon, Sep 23, 2013 at 2:16 PM, Jose A. Lopes <[email protected]> wrote:

> > diff --git a/lib/cmdlib/cluster.py b/lib/cmdlib/cluster.py
> > index e352ce4..52b1cd4 100644
> > --- a/lib/cmdlib/cluster.py
> > +++ b/lib/cmdlib/cluster.py
> > @@ -538,8 +538,6 @@ class LUClusterRepairDiskSizes(NoHooksLU):
> >          continue
> >
> >        newl = [v[2].Copy() for v in dskl]
> > -      for dsk in newl:
> > -        self.cfg.SetDiskID(dsk, node_uuid)
> >        instance = dskl[0][0]
> >        node_name = self.cfg.GetNodeName(node_uuid)
> >        result = self.rpc.call_blockdev_getdimensions(node_uuid,
> > @@ -2697,7 +2695,6 @@ class LUClusterVerifyGroup(LogicalUnit,
> _VerifyErrors):
> >        for (inst_uuid, dev) in disks:
> >          (anno_disk,) = AnnotateDiskParams(instanceinfo[inst_uuid],
> [dev],
> >                                            self.cfg)
> > -        self.cfg.SetDiskID(anno_disk, nuuid)
> >          dev_inst_only.append((anno_disk, instanceinfo[inst_uuid]))
>
> Perhaps a 'map' or list comprehension instead of 'empty list', 'for',
> 'append'.
>

Hmm, maybe, but that wouldn't fit into this patch (this one is only about
removing SetDiskID and physical_id, no refactorings).
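For illustration, the comprehension form Jose suggests would look roughly like the sketch below. The stub `annotate_disk_params` and the toy data are made up to keep the example self-contained; it is not the actual Ganeti code:

```python
def annotate_disk_params(instance, disks, cfg):
    # Stub standing in for Ganeti's AnnotateDiskParams: returns a list
    # with one annotated disk per input disk.
    return [("annotated", d) for d in disks]

instanceinfo = {"uuid-1": "inst-a", "uuid-2": "inst-b"}
disks = [("uuid-1", "disk-0"), ("uuid-2", "disk-1")]
cfg = None  # placeholder for self.cfg

# The for/append form, as in the patch:
dev_inst_only = []
for inst_uuid, dev in disks:
    (anno_disk,) = annotate_disk_params(instanceinfo[inst_uuid], [dev], cfg)
    dev_inst_only.append((anno_disk, instanceinfo[inst_uuid]))

# The equivalent list comprehension:
dev_inst_only_lc = [
    (annotate_disk_params(instanceinfo[inst_uuid], [dev], cfg)[0],
     instanceinfo[inst_uuid])
    for inst_uuid, dev in disks
]

assert dev_inst_only == dev_inst_only_lc
```

Both forms build the same list; the comprehension just avoids the mutable accumulator.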


>
> >
> >        node_disks_dev_inst_only[nuuid] = dev_inst_only
> > diff --git a/lib/cmdlib/common.py b/lib/cmdlib/common.py
> > index d1e428f..80334e9 100644
> > --- a/lib/cmdlib/common.py
> > +++ b/lib/cmdlib/common.py
> > @@ -1036,9 +1036,6 @@ def CheckIAllocatorOrNode(lu, iallocator_slot,
> node_slot):
> >  def FindFaultyInstanceDisks(cfg, rpc_runner, instance, node_uuid,
> prereq):
> >    faulty = []
>
> This can be moved closer to where it's used (i.e., a few lines below).
> And the 'for' code that uses 'faulty' can be rewritten as a list
> comprehension. What do you think?
>

See above.
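For what it's worth, the pattern Jose describes (indices collected via for/append) maps onto a comprehension like this. The constant, class, and payload below are placeholders, not the real `FindFaultyInstanceDisks` body:

```python
LDS_FAULTY = 2  # placeholder for constants.LDS_FAULTY

class BdevStatus(object):
    def __init__(self, ldisk_status):
        self.ldisk_status = ldisk_status

# Toy RPC payload: one status entry per disk, possibly None.
payload = [BdevStatus(0), None, BdevStatus(LDS_FAULTY), BdevStatus(LDS_FAULTY)]

# for/append form:
faulty = []
for idx, bdev_status in enumerate(payload):
    if bdev_status and bdev_status.ldisk_status == LDS_FAULTY:
        faulty.append(idx)

# comprehension form:
faulty_lc = [idx for idx, s in enumerate(payload)
             if s and s.ldisk_status == LDS_FAULTY]

assert faulty == faulty_lc == [2, 3]
```

Moving `faulty = []` next to its consumer (or replacing it with the comprehension) is exactly the kind of cleanup Thomas defers to a separate patch.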


>
> >
> > -  for dev in instance.disks:
> > -    cfg.SetDiskID(dev, node_uuid)
> > -
> >    result = rpc_runner.call_blockdev_getmirrorstatus(
> >               node_uuid, (instance.disks, instance))
> >    result.Raise("Failed to get disk status from node %s" %
> > diff --git a/lib/cmdlib/instance.py b/lib/cmdlib/instance.py
> > index 7d33cbe..005004b 100644
> > --- a/lib/cmdlib/instance.py
> > +++ b/lib/cmdlib/instance.py
> > @@ -1272,7 +1272,6 @@ class LUInstanceCreate(LogicalUnit):
> >          for t_dsk, a_dsk in zip(tmp_disks, self.disks):
> >            rename_to.append(t_dsk.logical_id)
> >            t_dsk.logical_id = (t_dsk.logical_id[0],
> a_dsk[constants.IDISK_ADOPT])
> > -          self.cfg.SetDiskID(t_dsk, self.pnode.uuid)
>
> Given that we remove the call to 'SetDiskID', does the previous
> assignment to 't_dsk' still make sense?
>
> If so, is it possible to make the assignment more clear, that is,
> instead of doing the assignment
>

As far as I can see, the assignment is important. The changes are reflected
in the tmp_disks list, which is then sent to a node via RPC. And I honestly
don't know how to make an assignment more clear than by performing an
assignment...
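To spell out why the assignment still matters: the list holds references to the disk objects, so rewriting an element's attribute is visible to anything that later consumes the list. A minimal sketch with a hypothetical `Disk` stub and made-up logical ids:

```python
class Disk(object):
    def __init__(self, logical_id):
        self.logical_id = logical_id

tmp_disks = [Disk(("vg", "tmp-name-0"))]

# Rewriting the attribute mutates the object the list refers to, so
# the new logical_id is seen by whoever later reads tmp_disks (e.g.
# when the list is serialised for an RPC call).
for t_dsk in tmp_disks:
    t_dsk.logical_id = (t_dsk.logical_id[0], "adopted-name-0")

assert tmp_disks[0].logical_id == ("vg", "adopted-name-0")
```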


>
> > @@ -572,15 +570,10 @@ class ConfigWriter(object):
> >          result.append("duplicate logical id %s" % str(disk.logical_id))
> >        else:
> >          l_ids.append(disk.logical_id)
>
> This piece of code ^ is of the form
>
>   if ... :
>     if ...:
>       ...
>     else:
>       ...
>
> Can we use 'and' ?
>
>   if ... and ...:
>     ...
>   else:
>     ...
>

No.

That would change the logic. And it would not fit into this patch.
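Concretely: in `if a: if b: X else: Y`, the `else` belongs to the inner test, so nothing runs when `a` is false; folding the tests into `if a and b:` makes the `else` branch fire in that case too. A small illustration with placeholder conditions:

```python
def nested(a, b):
    out = []
    if a:
        if b:
            out.append("then")
        else:
            out.append("else")
    return out

def combined(a, b):
    out = []
    if a and b:
        out.append("then")
    else:
        out.append("else")
    return out

# The two forms agree while the outer condition holds...
assert nested(True, True) == combined(True, True) == ["then"]
assert nested(True, False) == combined(True, False) == ["else"]
# ...but diverge when it does not: the nested form does nothing,
# the combined form takes the else branch.
assert nested(False, True) == []
assert combined(False, True) == ["else"]
```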

Cheers,
Thomas


>
> Thanks,
> Jose
>
> --
> Jose Antonio Lopes
> Ganeti Engineering
> Google Germany GmbH
> Dienerstr. 12, 80331, München
>
> Commercial register court and number: Hamburg, HRB 86891
> Registered office: Hamburg
> Managing directors: Graham Law, Christine Elizabeth Flores
> Tax number: 48/725/00206
> VAT identification number: DE813741370
>



-- 
Thomas Thrainer | Software Engineer | [email protected] |

