On Tuesday, 16 July 2013, Frank Leonhardt (fra...@fjl.co.uk) wrote:
> On 16/07/2013 10:41, Shane Ambler wrote:
>> On 16/07/2013 14:41, aurfalien wrote:
>>>
>>> On Jul 15, 2013, at 9:23 PM, Warren Block wrote:
>>>
>>>> On Mon, 15 Jul 2013, aurfalien wrote:
>>>>
>>>>> ... that's the question :)
>>>>>
>>>>> At any rate, I'm building a rather large 100+TB NAS using ZFS.
>>>>>
>>>>> However, for my OS, should I also use ZFS, or simply gmirror, as
>>>>> I've a dedicated pair of 256GB SSD drives for it? I didn't ask
>>>>> for SSD sys drives, this system just came with 'em.
>>>>>
>>>>> This is more of a best practices q.
>>>>
>>>> ZFS has data integrity checking, gmirror has low RAM overhead.
>>>> gmirror is, at present, restricted to MBR partitioning due to
>>>> metadata conflicts with GPT, so 2TB is the maximum size.
>>>>
>>>> Best practices... depends on your use. gmirror for the system
>>>> leaves more RAM for ZFS.
>>>
>>> Perfect, thanks Warren.
>>>
>>> Just what I was looking for.
>>
>> I doubt that you would save any RAM by putting the OS on a non-ZFS
>> drive. As you will already be using ZFS, chances are that non-ZFS
>> drives would only increase RAM usage by adding a second cache. ZFS
>> uses its own cache system and isn't going to share its cache with
>> other system-managed drives. I'm not actually certain whether the
>> system cache still sits above the ZFS cache or not; I think I read
>> that ZFS bypasses the traditional drive cache.
>>
>> For the ZFS cache you can set the maximum usage by adjusting
>> vfs.zfs.arc_max. That is a system-wide setting and isn't going to
>> increase if you have two zpools.
>>
>> Tip: set the arc_max value - by default ZFS will use all physical
>> RAM for cache. Set it to be sure you have enough RAM left for any
>> services you want running.
>>
>> Have you considered using one or both SSD drives with ZFS? They can
>> be added as cache or log devices to help performance. See man zpool
>> under Intent Log and Cache Devices.
>
> I agree with the sentiment of using the SSDs as ZFS cache - it's
> possibly the only logical use for them.
>
> I guess that with 100TB worth of Winchesters you're not on a very
> tight budget, and not too tight on RAM for the OS either. If I were
> going to do this I'd stick with the OS on UFS and a gmirror, because
> I simply don't trust ZFS. This is based on pure prejudice and
> inexperience.
>
> I know how to arrange disks on a UNIX file system for performance -
> what to use for swap, where tmp files should go, and so on. I also
> know where every file will be, physically, in the event of trouble.
> And here's the clincher: if the machine blows up, I can simply take
> one of the mirrored drives, slap it into some new hardware, and I've
> got a very reasonable chance that it'll boot. Can I do this with ZFS?
> I get the feeling that the answer is an emphatic "maybe".
>
> So all things considered, I'd need a good reason not to stick with
> what I know works reliably and can be recovered in the event of a
> disaster (UFS), but I'm happy to watch and learn from everyone else's
> experience!
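A quick note on Warren's MBR caveat before I get to my own experience:
it only changes the setup slightly. A rough sketch (gm0 and the
ada0/ada1 device names are placeholders, substitute your own):

    gmirror load                     # or geom_mirror_load="YES" in /boot/loader.conf
    gmirror label -v gm0 ada0 ada1   # mirror the two SSDs
    gpart create -s MBR mirror/gm0   # MBR, not GPT - gmirror metadata clashes with GPT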
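Shane's arc_max tip is a boot-time tunable, so it goes in
/boot/loader.conf. The 16G figure below is only an example; size it to
leave enough RAM for whatever services you run:

    # /boot/loader.conf
    vfs.zfs.arc_max="16G"   # cap the ARC; by default it grows into nearly all physical RAM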
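And if you hand the SSDs to ZFS instead, adding them as log and cache
devices is one command each. This assumes a pool named tank and SSD
partitions ada0p2/ada1p2 and so on (all made-up names):

    zpool add tank log mirror ada0p2 ada1p2   # mirrored intent log (SLOG)
    zpool add tank cache ada0p3 ada1p3        # L2ARC read cache - losing it is non-fatal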
I would use ZFS for the OS. I have a couple of servers that did not
survive a power failure with gmirror. The problem I had: when the
power failed, one disk was left in a rebuilding state, and then, when
the background fsck started or had been busy for some time, the whole
server would crash. Removing the disk that was rebuilding resolved the
issue. This happened to me more than once. Most of the time gmirror
worked as advertised, but not always.

Before people tell me to use a UPS: I used a UPS, but the damn thing
gave way itself. Then, after it came back from the warranty repair, it
gave way again. Sometimes the power came back right away, letting some
servers survive and leaving others in whatever state they were in. It
was hard to find the cause in the beginning, because some servers did
survive the power failures; we did not suspect the UPS at first.
Anyway, gmirror did not work for me in all cases.

I am now running a few servers with a ZFS root. I have not had any
problems with them so far (knock on wood). Since reading that swap on
a ZFS root can cause trouble, I have a separate freebsd-swap partition
for the swap.

Regards,
Johan
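PS: in case anyone hits the same rebuilding state, what got our boxes
booting again was dropping the syncing member. Roughly (gm0 and ada1
are placeholders; gmirror status will show your real names):

    gmirror status gm0        # shows which component is SYNCHRONIZING
    gmirror remove gm0 ada1   # detach the rebuilding disk
    gmirror forget gm0        # if the disk has already been pulled physically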
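The swap partition next to the ZFS root is nothing exotic either: just
an ordinary freebsd-swap partition plus an fstab entry. The size and
label below are examples, and this assumes a GPT-partitioned boot
disk:

    gpart add -t freebsd-swap -s 8G -l swap0 ada0

    # /etc/fstab
    /dev/gpt/swap0  none  swap  sw  0  0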