<...>
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t001738010140000Bd0 c4t001738010140000Cd0
c4t001738010140001Cd0 c4t0017380101400012d0
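For anyone following along, a sketch of how that single striped pool could be created and verified (device names taken from this thread; standard Solaris/OpenSolaris zpool commands):

```shell
# Create one striped (RAID-0) pool across all four 64 GB LUNs
zpool create sfsrocks \
    c4t001738010140000Bd0 c4t001738010140000Cd0 \
    c4t001738010140001Cd0 c4t0017380101400012d0

# Check the vdev layout and the ~256 GB of raw capacity
zpool status sfsrocks
zpool list sfsrocks
```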

each of those devices is a 64GB lun, right?

I did it - created one pool, 4*64GB in size, and I'm running the benchmark now.
I'll update you on the results, but one pool is definitely not what I need.
My target is SunCluster with HA ZFS, where I need 2 or 4 pools per node.


>
>>
>> Do you know what your limiting factor was for ZFS (CPU, memory,
>> I/O...)?
>
>
> Thanks to George Wilson who pointed me to the fact that the memory was
> fully consumed.
> I removed the line
> "set ncsize = 0x100000" from /etc/system
> and now the host no longer hangs during the test.
> But performance is still an issue.
>

ah, you were limiting the # of dnlc entries... so are you still seeing
ZFS max out at 2000 ops/s?  Let us know what happens when you switch to
1 pool.
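For context, a hedged sketch of the tuning being discussed - the removed /etc/system line and one assumed way to inspect the running value with mdb on Solaris (requires root):

```shell
# Line that was removed from /etc/system.
# 0x100000 hex = 1,048,576 DNLC entries - a large cache, which fits
# George Wilson's observation that memory was fully consumed:
#   set ncsize = 0x100000

# On a live Solaris system, the effective DNLC size can be read
# from the kernel with mdb (prints ncsize in decimal):
echo "ncsize/D" | mdb -k
```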

I'd say "increasing" instead of "limiting".

TIA,
-- Leon
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
