----- Original Message -----
From: "Matthew Ahrens via illumos-zfs" <[email protected]>
With regard to buffers allocated by arc_get_data_buf(), I can't see
a path by which the ARC will prevent a new buffer from being allocated,
even when arc_evict_needed() returns true.
It won't, but it will evict an existing buffer, thus freeing up memory for
the new one.
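A hedged toy model (not the real illumos C code) of the behaviour Matthew describes: under pressure, arc_get_data_buf() does not refuse the new buffer; it evicts a cached buffer of the same size and recycles its memory, so the total ARC footprint does not grow. All names below are illustrative stand-ins, not the actual kernel identifiers:

```python
class MiniArc:
    def __init__(self, limit):
        self.limit = limit    # stand-in for the ARC target size (arc_c)
        self.size = 0         # stand-in for arc_size
        self.cached = []      # sizes of clean, evictable cached buffers

    def evict_needed(self):
        # stand-in for arc_evict_needed(): are we at/over the target?
        return self.size >= self.limit

    def release_to_cache(self, nbytes):
        # a buffer becomes clean cache: still counted in size, but evictable
        self.cached.append(nbytes)

    def get_data_buf(self, nbytes):
        if self.evict_needed() and nbytes in self.cached:
            # the "recycle" path: evict a same-size cached buffer and hand
            # its memory to the new (dirty) buffer; size is unchanged
            self.cached.remove(nbytes)
        else:
            # no pressure (or nothing recyclable): allocate fresh memory
            self.size += nbytes
        return nbytes
```

The point of the model: once the ARC is at its target, a write burst recycles clean cache rather than growing arc_size further.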
So just to confirm, the expected behaviour given a sudden burst of writes
when we're already tight on memory is that arc_get_data_buf calls
arc_evict(...) which removes cached data, which then gets reused
as a dirty data buffer?
If so, we should see a large number of hits on the arc_evict:entry probe
with recycle set, followed by arc__evict hits.
If that's the case, can't we hit the ARC minimum yet still claim new buffers?
If so we could suddenly demand up to 10% of system memory, all of which
may require the VM to page before it can provide said memory.
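If the 10% figure refers to the write throttle's dirty data limit, it matches the default for the zfs_dirty_data_max tunable (10% of physical memory, clamped by zfs_dirty_data_max_max, commonly 4 GiB); a quick sanity calculation, assuming those defaults:

```python
def default_dirty_data_max(physmem_bytes, cap_bytes=4 * 1024**3):
    # assumed defaults: zfs_dirty_data_max = 10% of physical memory,
    # clamped to zfs_dirty_data_max_max (taken here as 4 GiB)
    return min(physmem_bytes // 10, cap_bytes)

# e.g. on a 16 GiB machine a write burst could dirty up to ~1.6 GiB
print(default_dirty_data_max(16 * 1024**3))
```

On a large-memory box the cap kicks in, so the worst case is the clamp value rather than a full 10%.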
Sure, the ARC can grow up to the minimum size without restriction. Is your
ARC below the minimum size?
Not sure; Karl, could you confirm the size of the ARC when the issue triggers?
If the ARC size is at or above the minimum, I'm wondering if we're simply
failing to evict cached data in favour of write data when we expect to,
possibly due to hash lock misses?
Karl, when you see stalls, are the mutex and recycle miss
counters increasing?
sysctl kstat.zfs.misc.arcstats.mutex_miss
sysctl kstat.zfs.misc.arcstats.recycle_miss
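A hedged sketch of how to read those two counters: take two sysctl snapshots a few seconds apart (ideally spanning a stall) and compare the deltas. The parsing below assumes the FreeBSD `name: value` output format shown above; the sample values are made up for illustration:

```python
def parse_arcstats(text):
    """Parse 'sysctl name: value' lines into a dict of ints."""
    stats = {}
    for line in text.strip().splitlines():
        name, _, value = line.partition(":")
        stats[name.strip()] = int(value)
    return stats

# hypothetical before/after snapshots; in practice, capture real
# `sysctl kstat.zfs.misc.arcstats.mutex_miss` output twice
before = parse_arcstats("""
kstat.zfs.misc.arcstats.mutex_miss: 1200
kstat.zfs.misc.arcstats.recycle_miss: 3400
""")
after = parse_arcstats("""
kstat.zfs.misc.arcstats.mutex_miss: 1850
kstat.zfs.misc.arcstats.recycle_miss: 9100
""")

# rapidly growing deltas during a stall would point at hash lock
# contention between direct eviction and the reclaim thread
for key in before:
    print(key.rsplit(".", 1)[-1], after[key] - before[key])
```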
If this is the case, could it be that we're hitting contention between writes
triggering arc_evict() directly and arc_reclaim_thread doing a cleanup?
Regards
Steve
_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer