Hello!
Our company has two Sun 7110s with the following configuration:

Primary:
7110 with 2 quad-core 1.9 GHz HE Opterons and 32 GB RAM
16 2.5" 10K rpm SAS disks (2 system, 1 spare)
A pool is configured from the rest, so we have 13 active working disks in
raidz2 (called "main").
A Sun J4200 JBOD with 12x750 GB disks is connected to this device; another
pool (called "JBOD") is configured there with 1 spare and 11 active disks.

Backup:
7110 (converted from an X4240) with 2 quad-core 1.9 GHz HE Opterons and 8 GB RAM
16 2.5" 10K rpm SAS disks (2 system, 1 spare)
A pool is configured from the rest, so we have 13 active working disks in
raidz2 (called "main").
A Promise Vess JBOD with 12x1 TB disks is connected to this device; another
pool (called "JBOD") is configured there with 1 spare and 11 active disks.

The two appliances are linked by periodic (hourly) replication.
All the disks and other hardware are working properly.
The ZIL is turned off and the system is set to async.
The firmware version is Fishworks 2010.Q3.2.0.
The purpose of the system is to provide NFSv3 shares for our mini-cloud.
The system is mission-critical.

Our problem is that both devices experience low performance and high latency
(up to 1.5 s on some operations).
Due to heavy caching, the primary appliance's total input+output bandwidth is
about 5 MB/s with ~2000 NFSv3 ops/s (1950 metadata cache hits/s, 350 data
hits/s, 50 data misses/s).
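
If it helps to compare, the NFS op mix can also be sanity-checked from one of
the clients with plain nfsstat (nothing appliance-specific, just the standard
client-side counters):

  # client-side NFS call counts, to compare against what the appliance reports
  nfsstat -c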

The very strange thing is:
we see very high disk utilization percentages (on every disk) due to
~3200 IOPS at the disks (60 IOPS per data disk), all with 0 size.
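
In case it matters how this could be dug into: a sketch with the generic
DTrace io provider (assuming a root shell is available somewhere to run it)
would count block-layer I/Os by size, so the zero-length ones show up under
the 0 bucket:

  # count physical I/Os by size for 10 seconds; zero-length ones appear
  # under the "0" key in the aggregation
  dtrace -n 'io:::start { @sizes[args[0]->b_bcount] = count(); } tick-10s { exit(0); }'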

If we initiate a sequential read or write from one of the NFS clients, we get
8-15 MB/s from the system.
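
(The sequential test is nothing fancy, just something like the following from
a client; the mount point below is only a placeholder:)

  # rough sequential write over NFS; /mnt/nfs is a made-up mount point
  dd if=/dev/zero of=/mnt/nfs/ddtest bs=1024k count=1024
  # read it back (remount or clear the client cache first to avoid cached reads)
  dd if=/mnt/nfs/ddtest of=/dev/null bs=1024k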

I'd like to know why it is doing this, how an I/O can be 0 bytes long, and
what we can do about it.
Thank you for any help, we really need to solve this.