I'm using OpenIndiana 151a7, zpool v28, zfs v5.

When I bought my storage servers I intentionally left HDD slots empty so
that I could add another vdev when needed and defer the expense.

After reading some posts on the mailing list I'm getting concerned about
degraded performance due to the unequal distribution of data among the
vdevs: as I understand it, ZFS biases new writes toward the emptiest
vdev, so fresh data would land mostly on the new disks and be read back
from fewer spindles. I still have a chance to migrate the data away, add
all the drives, rebuild the pool, and start fresh.
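
As far as I can tell, the per-vdev alloc/free columns from zpool iostat
are the way to see how evenly the data is currently spread across the
three existing vdevs:

# zpool iostat -v pool01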

Before going down that road, I was hoping to hear your opinions on the
best way to handle this.
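
If I do rebuild, my rough plan (assuming a scratch pool with enough
capacity, called backup01 here purely for illustration) would be along
these lines:

# zfs snapshot -r pool01@migrate
# zfs send -R pool01@migrate | zfs recv -Fdu backup01
# zpool destroy pool01

then recreate pool01 with all four raidz2 vdevs and send the data back
the same way.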

System: Supermicro chassis with 36 HDD bays. 28 bays are filled with
3 TB 7.2K RPM SAS enterprise drives, leaving 8 bays free for another
vdev.

Pool configuration:
# zpool status pool01
  pool: pool01
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Nov 21 17:41:52 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        pool01                     ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c8t5000CCA01AA8E3C0d0  ONLINE       0     0     0
            c8t5000CCA01AA8E3F0d0  ONLINE       0     0     0
            c8t5000CCA01AA8E394d0  ONLINE       0     0     0
            c8t5000CCA01AA8E434d0  ONLINE       0     0     0
            c8t5000CCA01AA793A0d0  ONLINE       0     0     0
            c8t5000CCA01AA79380d0  ONLINE       0     0     0
            c8t5000CCA01AA79398d0  ONLINE       0     0     0
            c8t5000CCA01AB56B10d0  ONLINE       0     0     0
          raidz2-1                 ONLINE       0     0     0
            c8t5000CCA01AB56B28d0  ONLINE       0     0     0
            c8t5000CCA01AB56B64d0  ONLINE       0     0     0
            c8t5000CCA01AB56B80d0  ONLINE       0     0     0
            c8t5000CCA01AB56BB0d0  ONLINE       0     0     0
            c8t5000CCA01AB56EA4d0  ONLINE       0     0     0
            c8t5000CCA01ABDAEBCd0  ONLINE       0     0     0
            c8t5000CCA01ABDAED0d0  ONLINE       0     0     0
            c8t5000CCA01ABDAF1Cd0  ONLINE       0     0     0
          raidz2-2                 ONLINE       0     0     0
            c8t5000CCA01ABDAF7Cd0  ONLINE       0     0     0
            c8t5000CCA01ABDAF10d0  ONLINE       0     0     0
            c8t5000CCA01ABDAF40d0  ONLINE       0     0     0
            c8t5000CCA01ABDAF60d0  ONLINE       0     0     0
            c8t5000CCA01ABDAF74d0  ONLINE       0     0     0
            c8t5000CCA01ABDAF80d0  ONLINE       0     0     0
            c8t5000CCA01ABDB04Cd0  ONLINE       0     0     0
            c8t5000CCA01ABDB09Cd0  ONLINE       0     0     0
        logs
          mirror-3                 ONLINE       0     0     0
            c6t0d0                 ONLINE       0     0     0
            c6t1d0                 ONLINE       0     0     0
        cache
          c6t2d0                   ONLINE       0     0     0
          c6t3d0                   ONLINE       0     0     0
        spares
          c8t5000CCA01ABDB020d0    AVAIL
          c8t5000CCA01ABDB060d0    AVAIL

errors: No known data errors
#
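
If I instead grow the pool in place, I assume the command would be along
these lines, with <disk1> ... <disk8> standing in for the 8 new device
names; the -n flag does a dry run first that only prints the resulting
layout:

# zpool add -n pool01 raidz2 <disk1> <disk2> ... <disk8>
# zpool add pool01 raidz2 <disk1> <disk2> ... <disk8>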

Will adding another vdev hurt performance, or would I be better off
rebuilding the pool now while I still can?

Thank you,

-- Peter