Hi Drew,

I took a look over the changes you made, and have some comments:

In distro_const/__init__.py:

- Lines 272, 283:

  Should we also check that there is exactly one Filesystem and one
  Zpool, since the subsequent code assumes the fs[0] and zpool[0]
  variables are valid?
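
  Something along these lines would do (treat it as a sketch; the fs
  and zpool names just follow the webrev):

      # sketch: fail loudly rather than index into possibly-empty lists
      if len(fs) != 1 or len(zpool) != 1:
          raise RuntimeError("expected exactly one Filesystem and one "
                             "Zpool in the manifest, got %d and %d" %
                             (len(fs), len(zpool)))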

- Lines 318 to 321:

  Why do you specifically set dry_run here? Shouldn't you be passing
  the value on from self.dry_run or similar? Not doing so makes the
  dry_run parameter to DC next to useless, since the run is still
  destructive.
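
  i.e. something like the following (assuming Filesystem.create()
  takes a dry_run argument, as the webrev suggests):

      # pass the caller's dry_run through instead of hardcoding it
      fs.create(dry_run=self.dry_run)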

- Lines 333 to 339:

  It might be useful to add a specific comment here stating that
  fs.get() is fetching the mountpoint directly from ZFS at this
  point.
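
  Something as simple as:

      # NB: fs.get() fetches the mountpoint directly from ZFS at
      # this point, not from the manifest.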

- Lines 308 and 472:

  Would it be worth providing a single point for constructing these
  paths rather than repeating the code?
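
  e.g. a small hypothetical helper on the checkpoint class (the name
  and attribute below are made up; assumes os is already imported):

      def _dataset_path(self, *parts):
          # single place to build the paths currently duplicated at
          # lines 308 and 472
          return os.path.join(self.base_dataset_mp, *parts)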

In test_distro_const.py:

- Why did you remove the test here? Isn't it still useful to keep
  testing the validation?
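
  A minimal sketch of the kind of test I mean -- the fixture names are
  made up, so adapt to whatever the suite already provides:

      def test_two_filesystems_rejected(self):
          # a manifest with two Filesystem nodes should fail
          # validation rather than silently using fs[0]
          self.zpool.insert_children(Filesystem("tank/extra"))
          self.assertRaises(RuntimeError, self.checkpoint.execute)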

Thanks,

Darren.

On 31/05/2012 20:54, Drew Fisher wrote:
> All:
> 
> I went ahead and implemented the fix from within Filesystem.from_xml() and
> the corresponding fix for DC.  Mary is still sanity testing the images, but
> since I'm out of the office Friday and Monday and Matt is out Monday, Ethan
> and I felt that these changes needed to be reviewed ASAP to make b18's
> closure on Monday.  These changes supersede Matt's changes to
> instantiation_zone.py, which is now left untouched.
> 
> Here's a pointer to the webrev:
> 
> https://cr.opensolaris.org/action/browse/caiman/drewfish/filesystem_from_xml/webrev/
> 
> The code in Filesystem.from_xml() was added way back in b156 for DC to
> handle a target specification in which a zpool/fs dataset didn't mount at
> /zpool/fs on the filesystem.  What DC has been doing since then is passing
> strings around like dataset.mountpoint, dataset.action, dataset.name, etc.,
> rather than simply passing the DOC object.  The changes I made remove
> these strings and instead simply pass the DOC object in and out of the
> various DC functions.
> 
> The fix is to simply let ZFS deal with the mountpoint on its own.  If the
> user supplies it as an attribute on the Filesystem element, we'll explicitly
> set it.  If not, ZFS will handle it for us.  If we do it this way, we can
> remove the need for an entire target-instantiation checkpoint from within
> DC.  We can simply use Filesystem.create() on the 4 datasets we need.
> 
> Matt has test data from various zone installs that I'll ask him to provide as a
> reply to this mail.  Also, I fixed a bug Jack just filed about an invalid
> zpool name on the system.
> 
> Unittests are clean -
> DC: 
> [mox:test] > sudo nosetests -s test_*.py
> Ran 68 tests in 54.928s
> OK
> 
> Target:
> [mox:test] > sudo nosetests -sx test_*.py
> Ran 268 tests in 24.744s
> OK
> 
> I've also tested a DC manifest to make sure ZFS is doing the right thing:
> 
>     <target name="desired">
>       <logical>
>         <zpool name="tank" action="use_existing">
>           <filesystem name="mp/dcmp/ai" action="preserve"/>
>         </zpool>
>       </logical>
>     </target>
> 
> Here the mountpoint for tank/mp is:
> 
> [mox:test] > zfs get mountpoint tank/mp
> NAME     PROPERTY    VALUE                  SOURCE
> tank/mp  mountpoint  /export/home/overhere  local
> 
> The DC datasets are:
> 
> [mox:test] > zfs list -r tank/mp
> NAME                         USED  AVAIL  REFER  MOUNTPOINT
> tank/mp                      320K   648G  36.0K  /export/home/overhere
> tank/mp/dcmp                 284K   648G  36.0K  /export/home/overhere/dcmp
> tank/mp/dcmp/ai              248K   648G  38.6K  /export/home/overhere/dcmp/ai
> tank/mp/dcmp/ai/build_data  78.6K   648G  37.3K  /export/home/overhere/dcmp/ai/build_data
> tank/mp/dcmp/ai/logs        95.9K   648G  95.9K  /export/home/overhere/dcmp/ai/logs
> tank/mp/dcmp/ai/media       34.6K   648G  34.6K  /export/home/overhere/dcmp/ai/media
> 
> As you can see, DC (via zfs create) is doing the correct thing.
> 
> Thanks!
> 
> -Drew (on behalf of Matt and Ethan)
> 

_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
