On Nov 17, 2008, at 3:26 AM, Wes Morgan wrote:

The Areca cards do NOT have the write cache enabled by default. I ordered the optional battery and RAM upgrade for my collection of 1231ML cards, and even with the BBWC installed, the cache stays disabled until you turn it on yourself. I had to go out of my way to enable it on every single controller.

Are you using these areca cards successfully with large arrays?

Yes, if you consider 24 x 1TB large.

I found a 1680i card for a decent price and installed it this weekend, but since then the raidz2 pool running on it hangs so frequently that I can't trust it. The hangs occur on both 7-STABLE and 8-CURRENT with the new ZFS patch. The exact same settings that have been rock solid for me before now don't work at all. The drives are just set as JBOD -- the controller actually defaulted to this, so I didn't have to make any real changes in the BIOS.
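
For reference, there's nothing exotic about the pool itself -- it's a
single raidz2 vdev over the bare drives, created more or less like
this (the da* device names here are just illustrative):

    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7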

Any tips on your setup? Did you have any similar problems?

I talked to a storage vendor of ours who has sold several SuperMicro systems like these to clients running OpenSolaris, and those clients hit stability issues similar to what we see on FreeBSD. It seems to be a lack of maturity in ZFS itself that underlies these problems.

It appears that running ZFS on FreeBSD will either thrill or horrify you. When I tested with modest I/O requirements, it worked great and I was tickled. But when I built these new systems as backup servers, I was generating immensely more disk I/O. I started with 7.0-RELEASE and saw crashes hourly. With tuning, I was only crashing once or twice a day (always memory related), and that was with 16GB of RAM.
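
(The tuning, for anyone curious, amounts to the usual loader.conf
kmem/ARC sizing that gets passed around on the lists. Purely as an
illustration, and with values you would adjust for your own RAM, a
typical set looks something like this:)

    # /boot/loader.conf -- illustrative values only
    vm.kmem_size="1536M"
    vm.kmem_size_max="1536M"
    vfs.zfs.arc_max="512M"
    vfs.zfs.prefetch_disable=1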

I ran for a month with one server on JBOD with RAIDZ2 and another with RAIDZ across two RAID 5 arrays. Then I lost a disk, and with it the whole array, on the JBOD server. Since RAID 5 had proved to run so much faster, I ditched the Marvell cards, installed a pair of 1231MLs and reformatted it with RAID 5. Both 24-disk systems have been running ZFS RAIDZ across two hardware RAID 5 arrays for months since. If I built another system tomorrow, that's exactly how I'd do it.
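
To picture the layout: ZFS sees only the two hardware RAID 5 volumes,
and the pool is a raidz over that pair, so in zpool terms it comes
down to roughly this (device names illustrative):

    zpool create tank raidz da0 da1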

After upgrading to 8-HEAD and applying The Great ZFS Patch, I am content with only having to reboot the systems once every 7-12 days.

I have another system with only 8 disks and 4GB of RAM, with ZFS running on a single RAID 5 array. Under the same workload as the 24-disk systems, it was crashing at least once a day. This was existing hardware, so we were confident it wasn't a hardware issue. I finally resolved it by wiping the disks clean, creating a GPT partition on the array, and switching to UFS. The system hasn't crashed once since, and it is far more responsive under heavy load than my ZFS systems.
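
Roughly speaking, the UFS conversion came down to something like the
following with gpart(8), using an illustrative device name for the
hardware array:

    gpart create -s gpt da0        # put a GPT scheme on the array
    gpart add -t freebsd-ufs da0   # one partition spanning the array
    newfs -U /dev/da0p1            # UFS2 with soft updates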

Of course, all of this might get a fair bit better soon:

http://svn.freebsd.org/viewvc/base?view=revision&revision=185029

Matt
