On Oct 22, 2012, at 6:52 AM, Chris Nagele <nag...@wildbit.com> wrote:
>> If after it decreases in size it stays there it might be similar to:
>>
>> 7111576 arc shrinks in the absence of memory pressure
>
> After it dropped, it did build back up. Today is the first day that
> these servers are working under real production load and it is looking
> much better. arcstat is showing some nice numbers for arc, but l2 is
> still building.
>
>  read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size
>   19K   17K  2.5K    87    2.5K     490    2.0K      19   148G    371G
>   41K   39K  2.3K    94    2.3K     184    2.1K       7   148G    371G
>   34K   34K   694    98     694      17     677       2   148G    371G
>   16K   15K  1.0K    93    1.0K      16    1.0K       1   148G    371G
>   39K   36K  2.3K    94    2.3K      20    2.3K       0   148G    371G
>   23K   22K   746    96     746      76     670      10   148G    371G
>   49K   47K  1.7K    96    1.7K     249    1.5K      14   148G    371G
>   23K   21K  1.4K    93    1.4K      38    1.4K       2   148G    371G
>
> My only guess is that the large zfs send / recv streams were affecting
> the cache when they started and finished.

There are other cases where data is evicted from the ARC, though I don't
have a complete list at my fingertips. For example, if a zvol is closed,
then the data for the zvol is evicted.
 -- richard

> Thanks for the responses and help.
>
> Chris
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
richard.ell...@richardelling.com
+1-760-896-4422
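As a side note on reading the table: the hit% and l2hit% columns are just hits divided by the corresponding read count. A minimal sketch in plain Python (the `hit_pct` helper is hypothetical, and the sample counters are only approximations of the rows above, since arcstat rounds displayed values to K):

```python
# Recompute arcstat-style hit percentages from raw counters.
# Assumption: hit% = hits / reads, truncated to a whole percent,
# which matches how the columns above line up (e.g. 17K hits of
# ~19.5K reads -> 87).

def hit_pct(hits, total):
    """Hit percentage as a truncated integer; 0 when there were no reads."""
    return int(hits / total * 100) if total else 0

# (reads, hits, l2reads, l2hits) -- approximate values from the first row
reads, hits, l2reads, l2hits = 19_500, 17_000, 2_500, 490

print("arc hit%:", hit_pct(hits, reads))      # overall ARC hit rate
print("l2 hit%:", hit_pct(l2hits, l2reads))   # L2ARC hit rate on ARC misses
```

Note that l2read equals the ARC miss count: the L2ARC is only consulted after a primary ARC miss, so a low l2hit% while l2size is still growing (as above) mainly reflects a cache that is still warming.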