> Interesting, sounds reasonable.
>
> Running with absolutely 0 swap however can lead to unexpected problems
> from my experience:

Interesting that the Wikipedia page for swappiness (this kernel parameter
is officially more famous than I am) recommends setting it to at least 1.

    https://en.wikipedia.org/wiki/Swappiness

More on the vm.swappiness parameter.  It's a bit more subtle than I thought.

Some references:

1) http://askubuntu.com/questions/103915/how-do-i-configure-swappiness

This page describes it as controlling how prone Linux is to swap out
processes.  At 0, nothing will be swapped until memory is fully exhausted;
at 100, swapping out of processes will begin almost immediately.  It
indicates that the default setting of 60 means swapping will start when
memory is around "half" full.

So a setting of zero doesn't prevent swapping; it just puts it off until
there is no memory available.  This is the old-school Unix behaviour I'm
used to, and probably best for a VM.
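
For reference, the parameter lives in procfs and can be checked or changed
on the fly.  A minimal sketch in Python (writing needs root, and the change
is lost on reboot unless persisted, e.g. via /etc/sysctl.conf):

    # Read and set vm.swappiness via procfs.
    from pathlib import Path

    SWAPPINESS = Path("/proc/sys/vm/swappiness")

    def get_swappiness() -> int:
        return int(SWAPPINESS.read_text())

    def set_swappiness(value: int) -> None:
        # 0-100 on the kernels discussed here (newer kernels allow up to 200)
        if not 0 <= value <= 100:
            raise ValueError("vm.swappiness must be 0-100")
        SWAPPINESS.write_text(str(value))   # requires root

    print("current vm.swappiness:", get_swappiness())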

2) http://unix.stackexchange.com/questions/88693/why-is-swappiness-set-to-60-by-default

This page talks about it in relation to choosing pages with backing store
(programs, libraries, cached/mem-mapped data files, already-swapped data)
vs. "anonymous" pages of allocated memory.  Cached files have a weight of
200-vm.swappiness, and anonymous pages a weight of vm.swappiness.

That may be saying the same thing as #1 but in a different, possibly more
precise, way, since a setting of 100 gives a 50/50 weight for new page
acquisition between swap and cache.
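
To make that weighting concrete, here's the formula from #2 worked out for
a few settings (a sketch of the arithmetic as described there, not of the
kernel's actual reclaim code):

    # file-backed weight = 200 - vm.swappiness; anonymous weight = vm.swappiness
    def reclaim_weights(swappiness: int) -> tuple[float, float]:
        file_w = 200 - swappiness
        anon_w = swappiness
        return file_w / 200, anon_w / 200   # the two weights always sum to 200

    for s in (0, 1, 60, 100):
        file_frac, anon_frac = reclaim_weights(s)
        print(f"swappiness={s:3}: {file_frac:.1%} cache / {anon_frac:.1%} swap")
    # swappiness=  0: 100.0% cache / 0.0% swap
    # swappiness=  1: 99.5% cache / 0.5% swap
    # swappiness= 60: 70.0% cache / 30.0% swap
    # swappiness=100: 50.0% cache / 50.0% swap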

3) http://www.cloudibee.com/linux-performance-tuning-vm-swappiness/

This one talks about it as a balance between "swapping and freeing cache,"
which is the same, I think.

-----

Any anonymous page is going to need to be written to swap before its page
frame can be given to the VM needing memory (plus a read when the page is
used again in the future).  And writes are usually more expensive than
reads to start with.

A cached file/program/library doesn't need to be written, the page can be
discarded and re-used immediately since it can be retrieved from the
backing file/program/library when needed in the future.

Having a swap/anon page swapped and retrieved has a cost of 1w+1r.

Having a file/prog page discarded and later retrieved has a cost of 1r.

So swapping has an r/w cost of at least 2x that of stealing from the
file-backed cache.  (Writes are usually a bit more costly than reads, as
well.)  Obviously the nature of your machine (server/desktop) affects
things.
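
Putting rough numbers on it (the 1w+1r vs. 1r counts are from the argument
above; the write-costs-1.5x-a-read figure is just an illustrative
assumption):

    # Rough I/O cost of reclaiming one page and touching it again later.
    READ, WRITE = 1.0, 1.5      # illustrative assumption: writes ~1.5x reads

    anon_cost = WRITE + READ    # swap out (1w), then swap back in later (1r)
    file_cost = READ            # discard for free, re-read from backing file

    print(f"anon: {anon_cost}, file-backed: {file_cost}, "
          f"ratio: {anon_cost / file_cost:.1f}x")   # 2.5x under this assumption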

That 60 default setting means file-backed cached pages have a weight of
200-60, or 140, while the anonymous/to-be-swapped pages have a weight of
60: a 70%/30% balance in favour of reusing file-backed cached pages versus
swapping something out to get free pages.

Not a bad compromise for running on bare hardware, or a server; but not
appropriate/necessary for a VM.

With vm.swappiness set to 0 and the same swap space as before, swap can
still get used when needed, and as much as before, but not until memory is
exhausted.

And when free memory is exhausted, that also implies that all of the cache
has been re-allocated as assigned memory.  That's fine, since the VMs
really shouldn't be caching in the first place (double-caching in both
dom0 and the VM has to be slower than just one level of cache).

I'm still looking around for options to disable file caching, but having
vm.swappiness low at least gives any running program priority over the
memory being used as cache.
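
One related knob I do know of: /proc/sys/vm/drop_caches gives a one-shot
flush of the page cache, though it doesn't stop the cache from refilling
afterwards.  A minimal sketch (root only):

    import os
    from pathlib import Path

    os.sync()   # flush dirty pages first so clean cache pages can be dropped
    # "3" drops page cache plus dentries/inodes; "1" is the page cache only
    Path("/proc/sys/vm/drop_caches").write_text("3\n")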

The qmemman load balancer won't consider the memory used for a VM's cache
as part of its "required" memory (but it does include swapped-out data,
giving it a reasonable chance of getting back into memory without
thrashing), so with low vm.swappiness a VM will not be given extra memory
to maintain any significant level of cache, unless there's free memory
around to be doled out among the VMs overall.

I can't help but think the original intent was for vm.swappiness=0 behaviour.

Once vm.swappiness is >0, some level of swapping will occur, resulting in
free pages for the VM, and Linux will then go and use these pages for
additional (and unnecessary) cache space; an all-around waste of disk
access, CPU time, and memory.

Running with vm.swappiness=0 seems to work in practice so far.  I'm still
amazed at the difference in memory/performance I'm seeing.

Zero swap on a system that used to have 40M-ish swapped in all VMs and in
dom0.  And smaller VMs, allowing more to be started.  I'm surprised that
this hasn't been a default, or at least that some similar tuning isn't
done by default.

In the source code, the 350M "dom0 memory boost" is mentioned as being
specifically to give dom0 free memory (that will inherently be used as
cache) beyond its actual needs (used memory+used swap).

So there is intent to let dom0 do the file caching, but no similar effort
to prevent unnecessary caching in the VMs.

Also, it's worth verifying the benefit of a low vm.swappiness in dom0
itself.  Swapping can just kill performance so badly.

(I was going to post a report that the "dom0 memory boost" value in the QM
Global Settings seems to be ignored in the source code and hard-coded to
350M.  But as I was looking at the code, an update finished, the file
reloaded, and the fix was in place.  Instant telepathic bug fix updates. 
Impressive, ITL. :) )

JJ
