On 06.04.2012 12:13, Konstantin Belousov wrote:
On Thu, Apr 05, 2012 at 11:54:53PM +0400, Andrey Zonov wrote:
On 05.04.2012 23:41, Konstantin Belousov wrote:
On Thu, Apr 05, 2012 at 11:33:46PM +0400, Andrey Zonov wrote:
On 05.04.2012 19:54, Alan Cox wrote:
On 04/04/2012 02:17, Konstantin Belousov wrote:
On Tue, Apr 03, 2012 at 11:02:53PM +0400, Andrey Zonov wrote:
[snip]
This is what I expect. But why doesn't this work without reading the
file manually?
The issue seems to be some change in the behaviour of the reservation
(vm_reserv) or physical memory (vm_phys) allocator. I Cc:ed Alan.

I'm pretty sure that the behavior here hasn't significantly changed in
about twelve years. Otherwise, I agree with your analysis.

On more than one occasion, I've been tempted to change:

pmap_remove_all(mt);
if (mt->dirty != 0)
vm_page_deactivate(mt);
else
vm_page_cache(mt);

to:

vm_page_dontneed(mt);


Thanks Alan!  Now it works as I expect!

But I have more questions to you and kib@.  They are in my test below.

So, prepare the file as earlier, and take the memory usage information
from top(1).  After preparation, but before the test:
Mem: 80M Active, 55M Inact, 721M Wired, 215M Buf, 46G Free

First run:
$ ./mmap /mnt/random
mmap:  1 pass took:   7.462865 (none:      0; res: 262144; super:      0; other:      0)

No superpages after the first run, why?..

Mem: 79M Active, 1079M Inact, 722M Wired, 216M Buf, 45G Free

Now the file is in inactive memory, that's good.

Second run:
$ ./mmap /mnt/random
mmap:  1 pass took:   0.004191 (none:      0; res: 262144; super:    511; other:      0)

All super pages are here, nice.

Mem: 1103M Active, 55M Inact, 722M Wired, 216M Buf, 45G Free

Wow, all inactive pages moved to active and sit there even after the
process was terminated; that's not good. What do you think?
Why do you think this is 'not good'? You have plenty of free memory,
there is no memory pressure, and all pages were referenced recently.
There is no reason for them to be deactivated.


I always thought that active memory is the sum of the resident memory of
all processes, that inactive memory holds the disk cache, and that wired
memory is the kernel itself.
So you are wrong. Both active and inactive memory can be mapped or
unmapped, and both can belong to vnode-backed or anonymous objects, etc.
The active/inactive distinction reflects only the number of references
noted by the pagedaemon, or some other page history, such as the way the
page was unwired.

Wired does not necessarily mean kernel-used pages; user processes can
wire their pages as well.

Let's talk about that in detail.

My understanding is the following:

Active memory: the memory that is referenced by an application. An application may get memory only through mmap() (the allocator doesn't use brk()/sbrk() any more). The resident memory of an application is the sum of its physically used memory. So, the sum of all RSS is the active memory.

Inactive memory: the memory that has no references. Once we call read() on a file, the file ends up in inactive memory, because we keep no references to that object; we just read it. This also covers memory released by free().

Cache memory: I don't know what it is. It's always small enough not to worry about.

Wired memory: kernel memory, and yes, an application may get wired memory through mlock()/mlockall(), but I haven't seen any real application that calls mlock().



Read the file:
$ cat /mnt/random > /dev/null

Mem: 79M Active, 55M Inact, 1746M Wired, 1240M Buf, 45G Free

Now the file is in wired memory.  I do not understand why.
You do use UFS, right?

Yes.

There are enough buffer headers and enough buffer KVA to have buffers
allocated for the whole file content. Since buffers wire the
corresponding pages, the pages get migrated to wired.

When buffer pressure appears (i.e., any other I/O is started), the
buffers will be repurposed and the pages moved to inactive.


OK, how can I get the amount of disk cache?
You cannot. At least I am not aware of any counter that keeps track
of the resident pages belonging to the vnode pager.

Buffers should not be thought of as a disk cache; pages are what cache
disk content. VMIO buffers only provide a bread()/bwrite()-compatible
interface to the page cache (*) for filesystems.
(*) 'Cache' is used here in the generic sense, not to be confused with
the cached pages counter from top etc.


Yes, I know that. Let me ask my question about buffers once again: is it reasonable to use 10% of physical memory for them, or could we set a rational upper limit automatically?



Could you please give me an explanation of active/inactive/wired memory?


because I suspect that the current code does more harm than good. In
theory, it saves activations of the page daemon. However, more often
than not, I suspect that we are spending more on page reactivations than
we are saving on page daemon activations. The sequential access
detection heuristic is just too easily triggered. For example, I've seen
it triggered by demand paging of the gcc text segment. Also, I think
that pmap_remove_all() and especially vm_page_cache() are too severe for
a detection heuristic that is so easily triggered.

[snip]

--
Andrey Zonov

_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscr...@freebsd.org"
