Dave:

I will do the network text install and send results later.


----- Original Message -----
From: [email protected]
To: [email protected]
Cc: [email protected], [email protected]
Sent: Friday, June 1, 2012 6:47:51 AM GMT -08:00 Tijuana / Baja California
Subject: Re: [caiman-discuss] CR Request 7171068

I'd suggest a network text install be tested as well.

Dave

On 05/31/12 18:49, Mary Ding wrote:
> Drew:
>
> I've done the following install regression testing:
>
> x86:
>
> LiveCD install - works
> Text CD install - works
> AI net install - works
>
> sparc:
>
> AI net install - works
> Text CD install - works
>
>
>
> On 05/31/12 03:42 PM, Matt Keenan wrote:
>> Drew,
>>
>> Thanks for putting this together so quickly.
>>
>> I've carried out the following zone-related tests:
>>
>> 1. Install a zone specifying neither an ai_manifest nor an sc_profile,
>> so it uses the default zone_default.xml manifest and runs sysconfig on
>> first boot.
>>
>> Zone installed, and on first boot it prompted for the root password
>> and initial user account details.
>>
>> Zone booted cleanly; I logged in as the new initial user and verified
>> that dataset rpool/export/home's mountpoint is inherited from its
>> parent rpool/export and set to /export/home.
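>>
>> The mountpoint checks here and in the later tests were of roughly this
>> form (run inside the zone), with the child's SOURCE column expected to
>> read "inherited from rpool/export":
>>
>>   zfs get -o name,value,source mountpoint rpool/export rpool/export/home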
>>
>> 2. Install a zone specifying an sc_profile which automatically sets
>> the root password and creates a new initial user.
>>
>> Zone installed and booted cleanly. Logged in as the initial user and
>> verified that dataset rpool/export/home's mountpoint is inherited from
>> its parent rpool/export and set to /export/home.
>>
>> 3. Install a zone specifying a custom AI manifest which sets
>> rpool/export's mountpoint to "/foo".
>>
>> Zone installed, and on initial boot it prompted only for the root
>> password, not for the initial user details.
>>
>> Zone booted cleanly; I logged in as root and verified that dataset
>> rpool/export was mounted at /foo, and that dataset rpool/export/home's
>> mountpoint is inherited from its parent rpool/export and set to
>> /foo/home.
>>
>> 4. Install a zone specifying a custom AI manifest which sets
>> rpool/export's mountpoint to "/foo", and also specifying an sc_profile
>> to set the root password and create the initial user.
>>
>> Zone booted cleanly; logged in as the initial user and verified that
>> dataset rpool/export was mounted at /foo, and that dataset
>> rpool/export/home's mountpoint is inherited from its parent
>> rpool/export and set to /foo/home.
>>
>>
>> All tests passed successfully. The one quirk was in test 3, where only
>> the root password was prompted for on initial boot, not the initial
>> user details. My guess is that sysconfig checks for the existence of
>> /export/home and, if it is not there, does not prompt for initial user
>> details.
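>>
>> For reference, the four installs were driven by commands of roughly
>> this shape (zone names and paths are placeholders, not the exact ones
>> I used):
>>
>>   # 1. no manifest, no profile
>>   zoneadm -z tzone1 install
>>   # 2. sc_profile only
>>   zoneadm -z tzone2 install -c /tmp/sc_profile.xml
>>   # 3. custom AI manifest only
>>   zoneadm -z tzone3 install -m /tmp/ai_manifest.xml
>>   # 4. both manifest and profile
>>   zoneadm -z tzone4 install -m /tmp/ai_manifest.xml -c /tmp/sc_profile.xml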
>>
>> cheers
>>
>> Matt
>>
>> On 05/31/12 20:54, Drew Fisher wrote:
>>> All:
>>>
>>> I went ahead and implemented the fix from within Filesystem.from_xml()
>>> and the corresponding fix for DC. Mary is still sanity testing the
>>> images, but since I'm out of the office Friday and Monday and Matt is
>>> out Monday, Ethan and I felt that these changes needed to be reviewed
>>> ASAP to make b18's closure on Monday. These changes supersede Matt's
>>> changes to instantiation_zone.py; that file is no longer modified at
>>> all.
>>>
>>> Here's a pointer to the webrev:
>>>
>>> https://cr.opensolaris.org/action/browse/caiman/drewfish/filesystem_from_xml/webrev/
>>>
>>>
>>> The code in Filesystem.from_xml() was added way back in b156 for DC to
>>> handle a target specification in which a zpool/fs dataset didn't mount
>>> at /zpool/fs on the filesystem. What DC has been doing since then is
>>> passing strings around (dataset.mountpoint, dataset.action,
>>> dataset.name, etc.) rather than simply passing the DOC object. The
>>> changes I made remove these strings and instead simply pass the DOC
>>> object in and out of the various DC functions.
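>>>
>>> Conceptually the refactoring looks like this (a hand-written
>>> illustration with made-up names, not the actual diff; see the webrev
>>> for the real code):
>>>
>>>     # before: DC helpers took the dataset unpacked into strings
>>>     def setup_build_dataset(zpool_name, fs_name, fs_action,
>>>                             fs_mountpoint):
>>>         ...
>>>
>>>     # after: they take (and return) the Filesystem DOC object itself
>>>     def setup_build_dataset(build_data_fs):
>>>         build_data_fs.create()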
>>>
>>> The fix is simply to let ZFS deal with the mountpoint on its own. If
>>> the user supplies one as an attribute on the Filesystem element, we'll
>>> explicitly set it. If not, ZFS will handle it for us. If we do it this
>>> way, we can remove the need for an entire target-instantiation
>>> checkpoint from within DC. We can simply use Filesystem.create() on
>>> the 4 datasets we need.
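>>>
>>> In ZFS terms, the create step then reduces to something like this
>>> sketch (illustrative only; "fs" stands for the Filesystem DOC object
>>> and the attribute names here are made up):
>>>
>>>     import subprocess
>>>
>>>     cmd = ["/usr/sbin/zfs", "create", "-p"]
>>>     if fs.mountpoint is not None:
>>>         # explicit mountpoint supplied on the Filesystem element
>>>         cmd += ["-o", "mountpoint=%s" % fs.mountpoint]
>>>     # with no -o mountpoint, ZFS derives it from the parent dataset
>>>     cmd.append(fs.full_name)
>>>     subprocess.check_call(cmd)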
>>>
>>> Matt has test data from various zone installs that I'll ask him to
>>> provide as a reply to this mail. Also, I fixed a bug Jack just filed
>>> about an invalid zpool name on the system.
>>>
>>> Unittests are clean -
>>> DC:
>>> [mox:test] > sudo nosetests -s test_*.py
>>> Ran 68 tests in 54.928s
>>> OK
>>>
>>> Target:
>>> [mox:test] > sudo nosetests -sx test_*.py
>>> Ran 268 tests in 24.744s
>>> OK
>>>
>>> I've also tested a DC manifest to make sure ZFS is doing the right
>>> thing:
>>>
>>> <target name="desired">
>>>   <logical>
>>>     <zpool name="tank" action="use_existing">
>>>       <filesystem name="mp/dcmp/ai" action="preserve"/>
>>>     </zpool>
>>>   </logical>
>>> </target>
>>>
>>> where the mountpoint for tank/mp is:
>>>
>>> [mox:test] > zfs get mountpoint tank/mp
>>> NAME     PROPERTY    VALUE                  SOURCE
>>> tank/mp  mountpoint  /export/home/overhere  local
>>>
>>> The DC datasets are:
>>>
>>> [mox:test] > zfs list -r tank/mp
>>> NAME                         USED  AVAIL  REFER  MOUNTPOINT
>>> tank/mp                      320K   648G  36.0K  /export/home/overhere
>>> tank/mp/dcmp                 284K   648G  36.0K  /export/home/overhere/dcmp
>>> tank/mp/dcmp/ai              248K   648G  38.6K  /export/home/overhere/dcmp/ai
>>> tank/mp/dcmp/ai/build_data  78.6K   648G  37.3K  /export/home/overhere/dcmp/ai/build_data
>>> tank/mp/dcmp/ai/logs        95.9K   648G  95.9K  /export/home/overhere/dcmp/ai/logs
>>> tank/mp/dcmp/ai/media       34.6K   648G  34.6K  /export/home/overhere/dcmp/ai/media
>>>
>>> As you can see, zfs create (and thus DC) is doing the correct thing.
>>>
>>> Thanks!
>>>
>>> -Drew (on behalf of Matt and Ethan)
>>
>

_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
