> We have a server with a couple of X-25Es and a bunch of larger SATA
> disks.
> 
> To save space, we want to install Solaris 10 (our install is only
> about 1.4GB) to the X-25Es and use the remaining space on the SSDs
> for ZIL attached to a zpool created from the SATA drives.
> 
> Currently we do this by installing the OS using SVM+UFS (to mirror
> the OS between the two SSDs) and then using the remaining space on a
> slice as ZIL for the larger SATA-based zpool.
> 
> However, SVM+UFS is more annoying to work with as far as Live
> Upgrade is concerned.  We'd love to use a ZFS root, but that
> requires that the entire SSD be dedicated to the rpool, leaving no
> space for ZIL.  Or does it?

Every system I have ever done ZFS root on, it has been a slice on a
disk.  As an example, we have an x4500 with 1TB disks.  For that root
config, we are planning on something like 150G on s0 and the rest on
s3: s0 for the rpool, s3 for the qpool.  We didn't want to deal with
the issues around flashing a huge volume, which we ran into with our
other x4500 with 500GB disks.

AFAIK, it's only non-rpool disks that get the "whole disk" treatment,
and I doubt there's anything SSD-specific about it, but I could be
wrong.
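To illustrate what I mean (as I understand it, anyway): the root pool
has to live on a slice with an SMI label, since the boot blocks can't
deal with an EFI label, while a data pool will happily take the whole
disk.  Again, hypothetical device names:

   # root pool: slice only (SMI label)
   zpool create rpool c1t0d0s0
   # data pool: whole disk is fine (ZFS writes an EFI label)
   zpool create tank c1t2d0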

I like your idea of a reasonably sized rpool and the rest used for
the ZIL.  But if you're going to do LU, you should take a good look
at how much space you need for the clones and snapshots on the rpool.
Ben