Alderman, Sean wrote:
> Hi all,
>
> I'm new to the list as of today. I've come because I'm fascinated with
> ZFS, and my company has just begun an adventure into the unknown with
> Solaris 10.
>
> We've got a few Sun Fire X4200s and a few Sun Fire V245s that we're
> playing with, and we've come to a decision point about how to configure
> SAN LUNs on these boxes. I'm curious what you all think would be a best
> practice for the relatively simple scenario described below:
>
> Application Use: Oracle 10.2
> Server: Sun Fire V245 w/ Sun-branded Emulex FC HBA
> SAN Storage Allocated: 10 100GB LUNs
>
> I'm not much of an Oracle guy, but I will say we don't have a lot of
> experience running Oracle on file systems; most of our existing
> Oracle servers are RAC-configured with ASM on raw SAN… and we don't
> like this very much.
If you are using RAC, your choices are limited. ZFS will not work with
RAC. You should check out QFS, which does work with RAC and is in the
queue to be open sourced by Sun. Watch
http://www.opensolaris.org/os/project/samqfs/

> I'm wondering what the best way to allocate these LUNs with ZFS would
> be…

Good pointers include:
http://blogs.sun.com/realneel/entry/zfs_and_databases
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

> Configure one zpool with all 10 LUNs and a single file system,
> assigning no special constraints (mirror/striping/raid/raidz) to the
> pool?
>
> Configure a zpool for each of the 10 LUNs with a single file system
> inside each pool?
>
> Configure one zpool with all 10 LUNs and 10 file systems (again, no
> special zpool config)?

In general, the best advice is KISS. For Oracle databases, we also tend
to recommend a separate file system or zpool for the redo logs. To state
this more generally: use separate zpools when you need separate policies
for the data that include zpool-specific settings. Similarly, use
separate file systems when the file-system policies may differ.

> There are some undefined variables, such as the SAN and Oracle
> configurations, but I'm not in a position to control those; I don't
> admin the SAN, nor am I a DBA. Strictly from the system-admin
> perspective, would there be a best solution here? If we were using
> Veritas Volume Manager, and we were to consider a zpool to be
> equivalent to a volume group (and a ZFS file system ~ a VxFS logical
> volume), VVM has limitations where performance becomes bad if LUNs are
> too large, too many, and so forth. Does ZFS have the same constraints?
> Does it follow that allowing ZFS to manage all the LUNs under a single
> pool and file system will perform better, following the idea that the
> lower the level of control, the better the performance, through fewer
> layers of abstraction/overhead?

ZFS seems to scale well, from a management perspective.
VxVM has a bit of a reputation here, due to implementation choices and
patches over the years that impacted its scalability -- I would expect
most of these issues to be solved in modern releases.

> My next question would be to consider those scenarios with the use of
> ZFS mirror or RAID functionality. Does this add unnecessary overhead,
> at the cost of performance, when the SAN may be configured in a RAID 5
> or RAID 10 arrangement?

ZFS can recover from many more faults than your RAID array can
(including faults in the RAID array itself). But it may not be able to
recover if it is not configured for redundancy. I think of this decision
as one of, "where would you like to be able to recover from faults?" The
correct answer being, "as close to the application as possible."
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
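[Editor's illustration of the redundancy point above.] Keeping redundancy at the ZFS level means ZFS can detect checksum errors and repair them itself, even when the SAN already presents RAID-protected LUNs. A minimal sketch, assuming a hypothetical pool name (orapool) and made-up device names for the ten 100GB LUNs:

```shell
# Build the pool as five 2-way mirrors over the ten SAN LUNs. ZFS then
# holds two copies of every block and can self-heal a bad read from one
# side of a mirror, at the cost of half the raw capacity.
zpool create orapool \
    mirror c2t0d0 c2t0d1 \
    mirror c2t0d2 c2t0d3 \
    mirror c2t0d4 c2t0d5 \
    mirror c2t0d6 c2t0d7 \
    mirror c2t0d8 c2t0d9

# Verify the layout and health of the pool.
zpool status orapool
```

Without ZFS-level redundancy (a plain stripe of the ten LUNs), ZFS can still detect corruption via checksums, but it has no second copy to repair from.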