Sync was disabled on the main pool and then left to inherit to everything
else. The reason for disabling it in the first place was to fix bad NFS write
performance (even with the ZIL on an X25-E SSD it was under 1 MB/s).
I've also tried setting logbias to both throughput and latency, but they
perform at around the same level.
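For the record, this is roughly what was set (the pool and filesystem names
are placeholders, not my actual ones):

# zfs set sync=disabled <pool>                 (child datasets inherit it)
# zfs set logbias=throughput <pool>/<nfs-fs>
# zfs set logbias=latency <pool>/<nfs-fs>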
Thanks
-Matt
I believe you're hitting bug "7000208: Space map trashing affects NFS write
throughput". We hit it too, and it impacted iSCSI as well.

If you have enough RAM you can try enabling metaslab debug (which makes the
problem vanish):

# echo metaslab_debug/W1 | mdb -kw
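Note that writing the variable with mdb only patches the running kernel and
does not survive a reboot; to make it stick you would (I believe) add this
line to /etc/system:

set zfs:metaslab_debug = 1

You can also read the value back to confirm the change took:

# echo metaslab_debug/D | mdb -k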

And to calculate the amount of RAM needed:


/usr/sbin/amd64/zdb -mm <poolname> > /tmp/zdb-mm.out

awk '/segments/ {s+=$2} END {printf("sum=%d\n",s)}' /tmp/zdb-mm.out

93373117   total segments
16         vdevs
116        metaslabs per vdev
1856       metaslabs in total

93373117 / 1856 = 50308 average segments per metaslab

50308 * 1856 * 64 bytes = 5975785472 bytes

5975785472 / 1024 / 1024 / 1024 = 5.56

= 5.56 GB
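The whole calculation can also be folded into one awk pass; this sketch
assumes, as above, roughly 64 bytes of kernel memory per segment:

awk '/segments/ {s+=$2; n++} END {printf("segments=%d metaslabs=%d ram=%.2f GB\n", s, n, s*64/1024/1024/1024)}' /tmp/zdb-mm.out

Multiplying the summed segment count by 64 directly gives the same result as
going through the per-metaslab average, minus the rounding step.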

Yours
Markus Kovero
Out of curiosity, I just tried the command on one of my zpools, which has a number of ZFS volumes, and was presented with the following:

root@solaris11c:/obelixData/99999_Testkunde/01_Etat01# /usr/sbin/amd64/zdb -mm obelixData > /tmp/zdb-mm.out
WARNING: can't open objset for obelixData/15035_RWE
zdb: can't open 'obelixData': I/O error

So one of the volumes actually seems to have a problem I wasn't aware of - but the volume seems to behave perfectly normally and doesn't show any erratic behaviour.

Can anybody maybe shed some light on what might be wrong with that particular volume?
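I guess the obvious first things to check would be something like:

# zpool status -v obelixData       (shows persistent errors per file/dataset)
# zdb -d obelixData/15035_RWE      (tries to open just the failing objset)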

Thanks,
budy