On Mon, 25 Jul 2016, George Dunlap wrote:
> On Thu, Jul 21, 2016 at 10:15 PM, Stefano Stabellini
> <sstabell...@kernel.org> wrote:
> >> You are assuming that the guest will map the ACPI blob with the same
> >> attributes as the rest of the superpage.
> >>
> >> IMHO, a sane operating system will want to map the ACPI blob read-only.
> >
> > That's true. But there are other things which might be mapped
> > differently and could shatter a stage-1 superpage mapping (especially on
> > x86 that has a much more complex memory map than ARM). Obviously adding
> > one more is not doing it any good, but it might not make a difference in
> > practice.
> >
> > Anyway, I agree with Julien that his suggestion is the best for ARM. If
> > the libxl maintainers are willing to accept two different code paths for
> > this on ARM and x86, then I am fine with it too.
> 
> Sorry to be a bit late to this thread -- there's an interface principle
> that I think we should at some point have a larger discussion about:
> whether "maxmem" means the amount of RAM which the guest sees as RAM,
> or whether "maxmem" means the amount of RAM that the administrator
> sees as used by the guest.  At the moment there's no consistent
> answer actually; but I am strongly of the opinion that for usability
> the best answer is for "memory" to be the *total* amount of *host*
> memory used by the guest.  In an ideal world, the admin should be able
> to do "xl info", see that there is 3000MiB free, and then start a
> guest with 3000MiB and expect it to succeed.  At the moment he has to
> guess.

I don't want to add to the confusion, but maxmem is often higher than
the actual memory allocated for the guest at any given moment, since it
is the upper limit enforced by the hypervisor (maxmem and mem are often
different; think about ballooning). So how can it be "the amount of RAM
that the administrator sees as used by the guest"? At best it could be
"the amount of RAM that the administrator sees could at most be used by
the guest", or "the amount of RAM that the administrator sees as
allocated on behalf of the guest at boot".
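As a purely illustrative sketch (the variable names are mine, not libxl's,
and the numbers are made up), the two readings above differ for any
ballooned guest:

```python
# Hypothetical illustration, not Xen code: the two readings of "maxmem"
# discussed above, for a guest ballooned down from 1 GiB to 512 MiB.
maxmem_kb = 1024 * 1024   # upper limit enforced by the hypervisor (1 GiB)
mem_kb    = 512 * 1024    # current allocation target after ballooning

# "the amount of RAM that could at most be used by the guest"
could_use_at_most = maxmem_kb

# "the amount of RAM allocated on behalf of the guest right now"
allocated_now = mem_kb

# The two readings only coincide when the guest is not ballooned.
assert allocated_now <= could_use_at_most
```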


> To confirm, do you include memory allocated by the hypervisor to keep
> track of the guest (i.e. struct domain, struct vcpu...)?
> 
> If not, the problem stays the same because the admin will have to know
> how much memory Xen will allocate to keep track of the guest. So if "xl
> info" tells you that 3000MiB is free, you will only be able to use
> 3000MiB minus a few kilobytes.

That's right, unfortunately all those structs allocated by the
hypervisor are completely unknown to the toolstack. However, they should
be an order of magnitude or two smaller than things like the videoram,
the ethernet blob (on x86) or the ACPI blob. So taking the memory for
ACPI and videoram from the existing maxmem pool, without increasing it,
would significantly improve, but not completely solve, the problem
described by George.


Going back to the discussion about how to account for the ACPI blob in
maxmem, let's keep this simple. If we increase maxmem by the size of the
ACPI blob:

- the toolstack allocates more RAM than expected (bad)
- when the admin specifies 1GB of RAM, the guest actually gets 1GB of
  usable RAM (good)
- things are faster as Xen and the guest can exploit superpage mappings
  more easily at stage-1 and stage-2 (good)

Let's call this option A.

If we do not increase maxmem:

- the toolstack allocates less RAM, closer to the size specified in the
  VM config file (good)
- the guest gets less usable memory than expected, less than what was
  specified in the VM config file (bad)
- things get slower as one or two 1GB superpage mappings are going to be
  shattered, almost certainly the stage-1 mapping, probably the stage-2
  mapping too, depending on the guest memory layout which is arch
  specific (bad)

Let's call this option B.
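The trade-off between the two options can be summarized numerically. This
is a hypothetical accounting sketch (the names and the blob size are
illustrative, not taken from libxl):

```python
# Hypothetical sketch of the two accounting options, for a 1 GiB guest
# and a 64 KiB ACPI blob. Sizes are illustrative, not real libxl values.
MiB = 1024 * 1024

config_mem = 1024 * MiB   # "memory" as specified in the VM config file
acpi_blob  = 64 * 1024    # size of the ACPI blob

# Option A: increase maxmem by the size of the blob.
host_used_a    = config_mem + acpi_blob  # toolstack allocates more (bad)
guest_usable_a = config_mem              # guest gets the full 1 GiB (good)

# Option B: keep maxmem unchanged.
host_used_b    = config_mem              # allocation matches the config (good)
guest_usable_b = config_mem - acpi_blob  # guest loses the blob's worth (bad)

# Either way, the difference between the options is exactly the blob size.
assert host_used_a - host_used_b == acpi_blob
assert guest_usable_a - guest_usable_b == acpi_blob
```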

Both have pros and cons. Julien feels strongly about option A. I vote
for option A, but I find option B acceptable too. Let's make a decision
so that Shannon can move forward.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel