Hi,

On Friday I received two of my new FC RAIDs, which I intend to use as the devices for my new zpool. They are CiDesign units, model iR16FC4ER. These are FC RAIDs that also allow JBOD operation, which is what I chose: I configured 16 RAID groups on each unit and attached them to their FC channel one by one.

On my Sol11Expr host I created a zpool of mirror vdevs by picking one disk from each RAID, so every mirror spans both units. The resulting pool looks like this (I've also sketched the create command below the output):

r...@solaris11c:~# zpool status newObelixData
  pool: newObelixData
 state: ONLINE
 scan: resilvered 1K in 0h0m with 0 errors on Sat Dec 11 15:25:35 2010
config:

        NAME                        STATE     READ WRITE CKSUM
        newObelixData               ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            c9t2100001378AC02C7d0   ONLINE       0     0     0
            c9t2100001378AC0355d0   ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            c9t2100001378AC02C7d1   ONLINE       0     0     0
            c9t2100001378AC0355d1   ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            c9t2100001378AC02C7d2   ONLINE       0     0     0
            c9t2100001378AC0355d2   ONLINE       0     0     0
          mirror-3                  ONLINE       0     0     0
            c9t2100001378AC02C7d3   ONLINE       0     0     0
            c9t2100001378AC0355d3   ONLINE       0     0     0
          mirror-4                  ONLINE       0     0     0
            c9t2100001378AC02C7d4   ONLINE       0     0     0
            c9t2100001378AC0355d4   ONLINE       0     0     0
          mirror-5                  ONLINE       0     0     0
            c9t2100001378AC02C7d5   ONLINE       0     0     0
            c9t2100001378AC0355d5   ONLINE       0     0     0
          mirror-6                  ONLINE       0     0     0
            c9t2100001378AC02C7d6   ONLINE       0     0     0
            c9t2100001378AC0355d6   ONLINE       0     0     0
          mirror-7                  ONLINE       0     0     0
            c9t2100001378AC02C7d7   ONLINE       0     0     0
            c9t2100001378AC0355d7   ONLINE       0     0     0
          mirror-8                  ONLINE       0     0     0
            c9t2100001378AC02C7d8   ONLINE       0     0     0
            c9t2100001378AC0355d8   ONLINE       0     0     0
          mirror-9                  ONLINE       0     0     0
            c9t2100001378AC02C7d9   ONLINE       0     0     0
            c9t2100001378AC0355d9   ONLINE       0     0     0
          mirror-10                 ONLINE       0     0     0
            c9t2100001378AC02C7d10  ONLINE       0     0     0
            c9t2100001378AC0355d10  ONLINE       0     0     0
          mirror-11                 ONLINE       0     0     0
            c9t2100001378AC02C7d11  ONLINE       0     0     0
            c9t2100001378AC0355d11  ONLINE       0     0     0
          mirror-12                 ONLINE       0     0     0
            c9t2100001378AC02C7d12  ONLINE       0     0     0
            c9t2100001378AC0355d12  ONLINE       0     0     0
          mirror-13                 ONLINE       0     0     0
            c9t2100001378AC02C7d13  ONLINE       0     0     0
            c9t2100001378AC0355d13  ONLINE       0     0     0
          mirror-14                 ONLINE       0     0     0
            c9t2100001378AC02C7d14  ONLINE       0     0     0
            c9t2100001378AC0355d14  ONLINE       0     0     0
          mirror-15                 ONLINE       0     0     0
            c9t2100001378AC02C7d15  ONLINE       0     0     0
            c9t2100001378AC0355d15  ONLINE       0     0     0

errors: No known data errors
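
For reference, I created the pool more or less like this - only the first two and the last mirror are shown here, the device names are the same ones as in the status output above:

zpool create newObelixData \
    mirror c9t2100001378AC02C7d0  c9t2100001378AC0355d0 \
    mirror c9t2100001378AC02C7d1  c9t2100001378AC0355d1 \
    ... \
    mirror c9t2100001378AC02C7d15 c9t2100001378AC0355d15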

At first I disabled all write-cache and read-ahead options for each RAID group on the RAIDs, since I wanted to give ZFS as much control over the drives as possible, but the performance was quite poor. The zpool runs on a Sun Fire X4170 M2 with 32 GB of RAM, so I ran bonnie++ with -s 63356 -n 128 (the invocation is sketched after the results) and got these numbers:

Sequential Output:
char: 51819
block: 50602
rewrite: 28090

Sequential Input:
char: 62562
block: 60979

Random seeks: 510 <- this seems really low to me, doesn't it?

Sequential Create:
create: 27529
read: 172287
delete: 30522

Random Create:
create: 25531
read: 244977
delete: 29423
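
For completeness, the bonnie++ call was basically this - the target directory and the unprivileged user are just placeholders, only -s and -n are the values mentioned above:

bonnie++ -d /newObelixData/bench -s 63356 -n 128 -u nobody

(bonnie++ needs -u when run as root, and -d just points it at a directory on the pool.)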

Since I was curious what would happen if I enabled WriteCache and ReadAhead on the RAID groups, I turned them on for all 32 devices and re-ran bonnie++. To my great dismay, this time ZFS ran into a lot of seemingly random trouble with the drives and kept removing them from the pool because they exceeded the error thresholds. On one run this happened to 4 drives from one FC RAID; on the next run 3 drives from the other RAID got removed from the pool.

I know that I'd better disable all "optimizations" on the RAID side, but the performance is just too poor with those settings. Maybe running 16 mirrors in one zpool is simply not a good idea - but that seems rather unlikely to me.

Is there anything else I can check?
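
If it helps, I can pull more details on the drives that got kicked out, e.g.:

zpool status -v newObelixData    (per-device read/write/cksum error counters)
iostat -En                       (soft/hard/transport error summary per device)
fmdump -e                        (FMA error events logged during the bonnie++ runs)

Just tell me which output would be useful.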

Cheers,
budy

