Piero Gramenzi wrote:
Hi Darren,
I do have a disk array that is providing striped LUNs to my Solaris
box. Hence I'd like to simply concat those LUNs without adding
another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get RAID0-style striping where each data block is split across all
n LUNs.
Individual ZFS blocks don't span a vdev (a LUN, in your case); each block
lives on one disk or another. ZFS will stripe blocks across all available
top-level vdevs, and as additional top-level vdevs are added it will
attempt to rebalance as new writes come in.
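A minimal sketch of that behaviour, assuming hypothetical Solaris device names (c2t0d0 etc.) standing in for your LUNs; these commands need root and real devices to run:

```shell
# Each LUN listed on its own becomes a separate top-level vdev;
# ZFS stripes block allocations across all of them.
zpool create myPool c2t0d0 c2t1d0 c2t2d0

# Adding another top-level vdev later widens the stripe; ZFS biases
# new writes toward the emptier vdev as data comes in.
zpool add myPool c2t3d0

# Show the layout: each LUN appears as its own top-level vdev.
zpool status myPool
```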
That's exactly what I would like to avoid.
Why?
I do have several Solaris servers, each one mounting several LUNs
provided by the same disk-array.
The disk-array is configured to provide RAID5 (7+1) protection so each
LUN is striped across several physical disks.
Make sure you still provide ZFS with redundancy, otherwise you will
regret it later. Ideally do this by giving it the ability to do raidz or
mirroring; failing that, at the very least set copies=2.
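A sketch of the two options above, again with hypothetical device names; requires root and real LUNs:

```shell
# Preferred: let ZFS manage redundancy itself, e.g. two mirror vdevs
# (ZFS still stripes across the two mirrors):
zpool create myPool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

# Or single-parity raidz across the LUNs:
# zpool create myPool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Fallback if the pool has no redundant vdevs: keep two copies of every
# block so ZFS can self-heal checksum errors. Note this does NOT protect
# against losing a whole LUN.
zfs set copies=2 myPool
```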
If I mounted multiple LUNs on the same Solaris box and ZFS used them in
RAID0, the result would be that each filesystem write spans a *huge*
number of physical disks on the disk-array.
That is a *good* thing; it usually helps increase performance.
This would almost certainly impact other LUNs mounted on different
servers.
Why? I don't think it will, unless your disk hardware is really
simplistic; and if it is that simplistic, it might not be able to
keep up regardless.
As the striping is already provided at the hardware level by the disk-array,
I would like to avoid further striping devices that are already striped (i.e. the LUNs).
But why? What is the rationale?
If that's correct, is there a way to avoid that and get ZFS to write
sequentially on the LUNs that are part of myPool?
Why do you want to do that? What do you actually think it gives you,
other than possibly *worse* performance?
ditto.
Of course the alternative is to get the disk-array to provide non-striped
LUNs, but I suspect that hardware striping is way more efficient than
software striping, no matter how good ZFS is.
Ideally, give ZFS access to the "raw" disks and let it do everything.
That's a poor assumption to make; the only way to know is to verify it
for your particular workload.
The other thing to consider is that if you don't give ZFS the ability
to create multiple copies of the data (ideally via one of mirror,
raidz, raidz2, raidz3), you run the risk of losing access to the data
completely, regardless of whether ZFS was striping or not.
Try to work with ZFS not against it.
For more information and suggestions I strongly recommend reading the
following:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide
If you are hosting databases this one too:
http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
and after that, if you still need performance help, read this one (last!):
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
don't jump straight to the ZFS_Evil_Tuning_Guide - seriously!
--
Darren J Moffat
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss