Ian,

On Sun, Jun 24, 2007 at 03:47:10PM -0400, Ian Murdock wrote:
> One thing I don't see on the requirements list is ZFS as the default
> file system.
> This really needs to be there. It's one of the killer features of
> Solaris, and we should make sure we use it to maximum advantage.

+1  :-)
 
> Of course, to take maximum advantage of ZFS, the entire system needs to
> be on ZFS, which means ZFS boot, which I assume is an installer task? Is
> this on the installer roadmap? What needs to be done here? 
> [...]

The ZFS-boot test implementation I looked at does the initial
ZFS pool and filesystem creation inside a modified pfinstall,
so this should be done in the new installer itself (or delegated to
a sub-process, as a module).
For JumpStart there are some new keywords for ZFS pools/filesystems
to make the install hands-off.
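For illustration, a JumpStart profile using those new keywords might look
roughly like this (keyword syntax as I remember it from the prototype;
pool name, sizes and device are made up):

```
install_type  initial_install
# 'pool' creates the root pool: name, pool size, swap size,
# dump size, and the device (slice notation, so an SMI label is used)
pool          rpool auto auto auto c0d0s0
# 'bootenv installbe' names the initial ZFS boot environment
bootenv       installbe bename snv_base
```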

I would add the GRUB bootloader with ZFS boot support to the
requirements list if it is not already there.

> Does putting the entire system on ZFS mean we can do away with
> slice level partitioning? I found the whole slice thing confusing, and I
> suspect I won't be the only one coming from Linux to be confused. I see
> you can put swap on ZFS, but [2] seems to indicate that ZFS can only
> boot from slices (though [1] says "[i]f you use a whole disk for a
> rootpool, you must use a slice notation (e.g. c0d0s0) so that it is
> labeled with an SMI label"--I'm too much of a noob to follow that).
> 
> [1] http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
> [2] http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling

Using the whole disk for ZFS means the disk gets an EFI label by default.
The nice thing here is that the disk's internal write cache is enabled
(and flushed as needed). The old-style x86 fdisk partitioning is completely
replaced by EFI, and the disk is used 100% for ZFS only.

The machine's BIOS and GRUB would need to be able to boot from
EFI-labeled disks. Relocating GRUB to an old x86 fdisk disk might not be
enough to solve this.
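To illustrate the difference (pool and device names are only examples):

```
# Whole disk: ZFS writes an EFI label and can manage the write cache.
zpool create rpool c0d0

# Single slice: the disk keeps an SMI label (required for booting today),
# but ZFS leaves the write cache alone.
zpool create rpool c0d0s0
```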

Using slice notation leaves the disk's internal write cache turned off
for ZFS.
Booting from a ZFS filesystem on an fdisk/SMI-labeled disk only needs
the GRUB bootloader extended for ZFS boot.  This should run on nearly
any i86pc hardware, and it would run today.

Swap on ZFS is possible, but I would say a plain slice on the
disk, without a ZFS block device, should be faster, since there is no
extra ZFS/zvol layer in that path. We should ask the ZFS people about the
difference in performance between a plain disk slice and a ZFS block device.
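The two variants would be set up like this (sizes and device names are
only examples):

```
# Swap on a ZFS block device (zvol) -- one extra layer:
zfs create -V 2g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap

# Swap on a plain disk slice -- no ZFS in the I/O path:
swap -a /dev/dsk/c0d0s1
```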


> One big win if we can do everything in a ZFS pool is that we don't have
> to worry about partitioning. Big simplification there.
> Presumably, all "partitioning" becomes is resizing existing
> partitions to make room for the Solaris (ZFS) partition,
> and I see parted has been ported to Solaris, so we
> could presumably use libparted in the Solaris installer?

The only partitioning that might remain is that we should consider
making "swap" a real Solaris slice and giving the rest of the space to
the zpool.
The whole set of filesystems would then live inside the zpool without
any partitioning. I see only a small disadvantage in having one
slice for swap and the rest as the zpool.


[...]
> We've already talked about using ZFS snapshots to implement
> rollbacks after package/system upgrades.
> 
> Tim Foster has done a really neat SMF-based automatic snapshot
> service [4].  [......]

I would say: create snapshots not too often, but still automatically,
with a rotation scheme.
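A minimal sketch of such a rotation scheme (snapshot names and the KEEP
count are made up; the 'zfs list' output is simulated and the destroy
commands are only echoed, so the logic can be read on its own):

```shell
# Keep only the newest $KEEP automatic snapshots, expire the rest.
KEEP=3
snapshots='rpool/ROOT@auto-2007-06-20
rpool/ROOT@auto-2007-06-21
rpool/ROOT@auto-2007-06-22
rpool/ROOT@auto-2007-06-23
rpool/ROOT@auto-2007-06-24'

# The names sort chronologically, so everything except the
# last $KEEP entries is expired.
total=$(printf '%s\n' "$snapshots" | wc -l)
expired=$(printf '%s\n' "$snapshots" | sort | head -n $((total - KEEP)))

for snap in $expired; do
    echo "would run: zfs destroy $snap"
done
```

On a real system the snapshot list would of course come from
"zfs list -t snapshot" instead of a fixed variable.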
 
> Tim Foster has also written a script to enable booting into ZFS
> snapshots ([5]). What role might this play in either of the
> ideas above? Do the snapshots/clones created by the package system
> automatically get added via this mechanism? How about booting
> directly into the snapshots to implement a system recovery
> mechanism again without a whole lot of additional programming?

I would strongly suggest doing a snapshot at every install/upgrade/
downgrade/remove operation of the package tools.
This would enable rollbacks for failed operations.

But snapshotting such system change points should not affect
in-use user data, since upgrades are often done during usage hours
(emails arriving, ...), so keep user data and spool areas completely
separate.
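A sketch of what the package tools could do around each operation (the
dataset and snapshot names are made up, and "pkg-op" stands for whatever
the package command will be):

```
# Before the operation:
zfs snapshot rpool/ROOT/sol@pre-pkg-op

# ... run pkg-op; if it fails, roll the root filesystem back:
zfs rollback rpool/ROOT/sol@pre-pkg-op
```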

One more word on Live Upgrade; there is one very important difference
from simple ZFS snapshots:
if a user decides to roll back to an earlier state, or to move to a
third or fourth variant of his installed OSes, Live Upgrade uses
sync rules to resync important files (or even spool areas).
The same should be available with ZFS snapshots *or* alternate boot
environments on ZFS, since only rolling back, and never forward again,
would be less than the features we know from Live Upgrade.
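Live Upgrade drives this resync from /etc/lu/synclist; the entries look
roughly like this (from memory, the exact action keywords may differ):

```
# file or directory        action
/etc/passwd                OVERWRITE
/etc/shadow                OVERWRITE
/var/mail                  OVERWRITE
```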


 

Thomas

--

*********************************************************************
Thomas Wagner                Tel:    +49-(0)-711-720 98-131
Strategic Support Engineer   Fax:    +49-(0)-711-720 98-443
Global Customer Services     Cell:   +49-(0)-175-292 60 64
Sun Microsystems GmbH        E-Mail: [EMAIL PROTECTED]
Zettachring 10A, D-70567 Stuttgart       http://www.sun.de

Sitz der Gesellschaft: Sun Microsystems GmbH, Sonnenallee 1, D-85551 
Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering

_______________________________________________
indiana-discuss mailing list
[email protected]
http://opensolaris.org/mailman/listinfo/indiana-discuss
