----- Original Message -----
From: "Dave U. Random" <anonym...@anonymitaet-im-inter.net>
Date: Tuesday, June 21, 2011 18:32
Subject: Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?
To: zfs-discuss@opensolaris.org

> Hello Jim! I understood ZFS doesn't like slices but from your 
> reply maybe I
> should reconsider. I have a few older servers with 4 bays x 73G. 
> If I make a
> root mirror pool and swap on the other 2 as you suggest, then I 
> would have
> about 63G x 4 left over.


For the sake of completeness, I should mention that you can also
create a fast and redundant 4-way mirrored root pool ;)
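For instance (a sketch only, assuming the installer already created 
rpool on c1t0d0s0, with the disk names used in the examples further 
below), the extra slices are simply attached to the existing root 
device one by one:
# zpool attach rpool c1t0d0s0 c1t1d0s0
# zpool attach rpool c1t0d0s0 c1t2d0s0
# zpool attach rpool c1t0d0s0 c1t3d0s0
Each newly attached disk also needs a boot block installed, as 
described further below.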

> If so then I am back to wondering what 
> to do about
> 4 drives. Is raidz1 worthwhile in this scenario? That is less 
> redundancy than a mirror and much less than a 3-way mirror, isn't 
> it? Is it even
> possible to do raidz2 on 4 slices? Or would 2, 2 way mirrors be 
> better? I
> don't understand what RAID10 is, is it simply a stripe of two 
> mirrors? 
Yes, by that I meant striping over two mirrors.
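For example, over the four s3 data slices discussed later in this 
message it would be created like this (the pool name is arbitrary):
# zpool create pool mirror c1t0d0s3 c1t1d0s3 mirror c1t2d0s3 c1t3d0s3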

> Or would it be best to do a 3 way mirror and a hot spare? I would 
> like to be
> able to tolerate losing one drive without loss of integrity.

Any of the scenarios above lets you lose one drive without 
immediately losing data. The rest is a trade-off between 
performance, space and further redundancy (example commands 
follow the list):
* 3- or 4-way mirror: least usable space (25% of total disk capacity),
most redundancy, highest read speeds for concurrent loads
* striping of mirrors (RAID10): average usable space (50%), high 
read speeds for concurrent loads, can tolerate loss of up to 2 drives
(slices) in a "good" scenario (if they are from different mirrors)
* raidz2: average usable space (50%), can tolerate loss of any 2 drives
* raidz1: max usable space (75%), can tolerate loss of any 1 drive
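For comparison, a sketch of how the other layouts would be created 
over the same four s3 data slices (again, the pool name is arbitrary; 
the RAID10 variant is shown above):
4-way mirror:
# zpool create pool mirror c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3
raidz2:
# zpool create pool raidz2 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3
raidz1:
# zpool create pool raidz1 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3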
 
After all the recent discussions about performance on this forum,
I would not try to guess whether raidz1 or raidz2 performs better 
in general (reads, writes, scrubs and resilvers each seem to have 
different preferences regarding disk layout), but with our fairly 
generic workload (serving up zones with some development databases 
and J2SE app servers) it did not seem to matter much. So for us it 
was usually raidz2 for fault tolerance or raidz1 for space.
 

> I will be doing new installs of Solaris 10. Is there an option 
> in the
> installer for me to issue ZFS commands and set up pools or do I 
> need to
> format the disks before installing and if so how do I do that? 
 
Unfortunately, the last release I installed from scratch was 
Solaris 10u7 or so; the others were Live Upgrades of existing 
systems and OpenSolaris machines, so I am not certain.

From what I gather, the text installer is much more powerful
than the graphical one, and its ZFS root setup can include 
creating a root pool in a slice of a given size, and possibly 
mirroring it right away. Maybe you can do likewise with JumpStart, 
but in the end we did not go that route.
 
Anyhow, after you install a ZFS root of sufficient size (e.g. our 
minimalist Solaris 10 installs are often under 1-2 GB per boot 
environment; multiply that for keeping several boot environments 
from Live Upgrade and for snapshot history), you can create a slice
for the data pool component (s3 in our setups), and then 
clone the disk slice layout to the other 3 drives like this:
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
(you might first need to lay down a partition table spanning 100% 
of each drive with the fdisk command, as sketched below).
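On x86, for example, something like this writes a default label 
with a single Solaris partition covering the whole disk (p0 is the 
whole-disk device); repeat for each of the other drives:
# fdisk -B /dev/rdsk/c1t1d0p0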

Then, if the installer did not already do so, you attach one of 
the slices to the ZFS root pool to make a mirror:
# zpool attach rpool c1t0d0s0 c1t1d0s0

If you have several controllers (perhaps even on different PCI buses), 
you might want to pick a drive on a different controller than the first 
one in order to have fewer single points of failure, but make sure that 
the second controller is bootable from the BIOS.

And make that drive bootable:
SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
x86/x86_64:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
 
For the other two drives you just create a new pool in their s0 slices:
# zpool create swappool mirror c1t2d0s0 c1t3d0s0
# zfs create -V 2g swappool/dump
# zfs create -V 6g swappool/swap

The sizes here are arbitrary; they depend on your RAM size.
You can later add more swap from other pools, including the data pool.
The dump device size can be "tested" by configuring dumpadm to
use the new device: it will either refuse a device that is too 
small (then you recreate it bigger) or accept it.
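For example, with the zvol names created above, the new devices 
would be activated like this (dumpadm is where a too-small dump 
zvol gets rejected):
# dumpadm -d /dev/zvol/dsk/swappool/dump
# swap -a /dev/zvol/dsk/swappool/swap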

The installer will probably have created dump and swap devices
in your root pool; you may elect to destroy them, since you now 
have at least one other swap device.
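If you do remove them (a stock install names them rpool/swap and 
rpool/dump; adjust if yours differ), it would look roughly like 
this, once dumpadm and swap already point at the new devices:
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# zfs destroy rpool/dump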

Make sure to update the /etc/vfstab file to reference the swap 
areas which your system should use from now on.
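With the swappool/swap zvol from above, the swap line in /etc/vfstab 
would look along these lines:
/dev/zvol/dsk/swappool/swap   -   -   swap   -   no   -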

After this is all completed, you can create a "data pool" in the
s3 slices with your chosen geometry, e.g.
# zpool create pool raidz2 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3

In our setups this pool holds not only data, but also the zone roots
(each in a dedicated dataset), separately from the root pool.
This allows each zone together with its data (possibly in dedicated 
and delegated sub-datasets) to be a single unit of backup and migration.
AFAIK this is not a Sun-supported configuration (they used to 
require that zone roots be kept on the root FS), but it works
well, apart from confusing Live Upgrade (this depends a lot on 
versions, though). Regarding the latter, we found it faster and
less error-prone to detach the zones before the LU run, LU
just the global zone (a clone of the current BE), and then reattach 
the local zones in update mode, as sketched below. Maybe with 
recent LU versions you don't need trickery like that; I can't say now.
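A rough sketch of that layout and of the detach/reattach dance (the 
zone name and dataset names here are only illustrative):
# zfs create -o mountpoint=/zones pool/zones
# zfs create pool/zones/zone1        (zonepath /zones/zone1)
# zfs create pool/zones/zone1-data   (delegated via zonecfg "add dataset")
# zoneadm -z zone1 detach
(run Live Upgrade on the global zone, activate and boot the new BE)
# zoneadm -z zone1 attach -u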

> Thank you.
 
//Jim Klimov

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
