This looks like another instance of
6429205 each zpool needs to monitor its throughput and throttle heavy writers
or at least it is a contributing factor.
Note that your /etc/system entry is misspelled (maybe just in the e-mail).
Didn't you get a console message?
-r
On 24 May 07, at 09:50, Amer Ather wrote:
IHAC complaining about database startup failure after large files are
copied into a ZFS filesystem. If he waits for some time, then it works. It
seems that ZFS is not freeing buffers from its ARC cache fast enough.
Lockstat shows long block events for the lock arc_reclaim_thr_lock:
Adaptive mutex hold: 552045 events in 6.771 seconds (81525 events/sec)
-------------------------------------------------------------------------------
Count indv cuml rcnt      nsec Lock                    Hottest Caller
    5  22%  22% 0.00 310317083 arc_reclaim_thr_lock[8] cv_timedwait+0x74

      nsec ------ Time Distribution ------ count Stack
  16777216 |@@@@@@                             1 arc_reclaim_thread+0x158
  33554432 |                                   0 thread_start+0x4
I suggested that he limit the ZFS ARC size to 100MB in the /etc/system
file and then reboot:
set arch.c_max=0t104857600
and capture the ARC state in mdb using:
> arc::print -tad
{
703a0328 arc_state_t *anon = ARC_anon
703a0330 arc_state_t *mru = ARC_mru
703a0338 arc_state_t *mru_ghost = ARC_mru_ghost
703a0340 arc_state_t *mfu = ARC_mfu
703a0348 arc_state_t *mfu_ghost = ARC_mfu_ghost
703a0350 uint64_t size = 0t705566208
703a0358 uint64_t p = 0t793865216
703a0360 uint64_t c = 0t1587369984
703a0368 uint64_t c_min = 0t67108864
703a0370 uint64_t c_max = 0t1587369984
703a0378 uint64_t hits = 0t47674
703a0380 uint64_t misses = 0t5634
703a0388 uint64_t deleted = 0t530
703a0390 uint64_t skipped = 0
703a0398 uint64_t hash_elements = 0t6079
703a03a0 uint64_t hash_elements_max = 0t6079
703a03a8 uint64_t hash_collisions = 0t845
703a03b0 uint64_t hash_chains = 0t414
703a03b8 uint32_t hash_chain_max = 3
703a03bc int no_grow = 0
}
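As a quick sanity check on the numbers in that dump, the counters can be read off directly; a minimal Python sketch (values copied verbatim from the arc::print output above, where the 0t prefix means decimal) shows the ARC is still far above the intended 100 MB cap:

```python
# All values are copied from the arc::print -tad output above.
size = 705566208            # current ARC size in bytes (~673 MB)
c_max_intended = 104857600  # the 100 MB cap we tried to set
hits, misses = 47674, 5634

print(f"ARC size: {size / (1 << 20):.0f} MB")
print(f"hit ratio: {hits / (hits + misses):.1%}")
print(f"cap honored: {size <= c_max_intended}")  # False: ~673 MB > 100 MB
```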
It still showed the ZFS ARC size over 500MB. I also don't see DNLC buffers
tying up ZFS ARC buffers. The customer even tried setting the following
values manually via mdb:
arc.c_max: 100 MB
arc.c: 100 MB
arc.p: 50 MB
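For reference, those manual pokes were presumably done along these lines (a sketch, not the customer's actual session: the addresses are the struct-member addresses reported by arc::print -tad above, and /Z writes a 64-bit value when mdb is started with -kw):

```
# mdb -kw
> 703a0370/Z 0t104857600
> 703a0360/Z 0t104857600
> 703a0358/Z 0t52428800
```

i.e. c_max and c set to 100 MB, and p to 50 MB.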
It didn't help.
I would like to know:
1- What Sol10 Update contains this workaround?
2- What is the Bug (there are several) that addresses this issue?
3- Is there a long-term fix planned that does not require manually
limiting the ZFS ARC? Please provide bug #.
Thanks,
Amer Ather
PTS-KERNEL
[EMAIL PROTECTED]
408-276-9780 (x19780)
email Pager: [EMAIL PROTECTED]
" In theory, there is no difference between theory and practice,
but in practice, there is."
-Jan L.A. van de Snepscheut
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss