On 1 Feb 2008, at 16:09, M. Warner Losh wrote:
In message: <[EMAIL PROTECTED]>
Robert William Fuller <[EMAIL PROTECTED]> writes:
: Avi Kivity wrote:
: > Anthony Liguori wrote:
: >> I think I'll change this too into a single qemu_ram_alloc. That
: >> will fix the bug with KVM when using -kernel and large memory
: >> anyway :-)
: > Won't that cause all of the memory in the hole to be wasted?
: Linux doesn't commit mapped memory until it's faulted. As for other
: platforms, who knows?
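[The Linux behaviour described above can be demonstrated with a short,
Linux-specific sketch: map a large anonymous region, touch one page, and
watch the resident set grow by roughly a page rather than the full mapping.
The `resident_kb` helper and the 1 GiB size are illustrative choices, not
anything from the thread.]

```python
import mmap
import re

SIZE = 1 << 30  # reserve 1 GiB of virtual address space

def resident_kb():
    """Resident set size of this process in kB, read from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            m = re.match(r"VmRSS:\s+(\d+) kB", line)
            if m:
                return int(m.group(1))
    return -1

before = resident_kb()
buf = mmap.mmap(-1, SIZE)   # anonymous mapping, as with mmap(MAP_ANONYMOUS)
buf[0] = 1                  # fault in a single page

after = resident_kb()
# The mapping is 1 GiB, but residency grew by only a handful of pages.
print(f"mapped {SIZE >> 20} MiB, RSS grew by {after - before} kB")
```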
It would appear that modern Windows also overcommits:
"This memory isn’t allocated until the application explicitly uses it.
Once the application uses the page, it becomes committed."
"When an application touches a virtual memory page (reads/writes/
programmatically commits) the page becomes a committed page. It is now
backed by a physical memory page. This will usually be a physical RAM
page, but could eventually be a page in the page file on the hard disk,
or it could be a page in a memory mapped file on the hard disk."
-- http://blogs.msdn.com/ntdebugging/archive/2007/10/10/the-memory-shell-game.aspx
So it looks like you could get away with this on the two big host
platforms.
Most BSDs are also similarly overcommitted. 95% of the users think
this is a feature, but the other 5% sometimes argue 20 times harder :-(
Some of us don't like the idea that our operating systems lie about
how many resources they have available, then have to club innocent
processes over the head when their lies catch up with them. ;)
Phil