On 13/08/18 17:29, Jan Beulich wrote:
>>>> On 13.08.18 at 16:20, <jgr...@suse.com> wrote:
>> On 13/08/18 15:54, Jan Beulich wrote:
>>>>>> On 13.08.18 at 15:06, <jgr...@suse.com> wrote:
>>>> Suggested new interface
>>>> -----------------------
>>>> Hypercalls, memory map(s) and ACPI tables should stay the same (for
>>>> compatibility reasons or because they are architectural interfaces).
>>>>
>>>> As the main confusion in the current interface is related to the
>>>> specification of the target memory size this part of the interface
>>>> should be changed: specifying the size of the ballooned area instead
>>>> is much clearer and will be the same for all guest types (no firmware
>>>> memory or magic additions involved).
>>>
>>> But isn't this backwards? The balloon size is a piece of information
>>> internal to the guest. Why should the outside world know or care?
>>
>> Instead of specifying an absolute value to reach you'd specify how much
>> memory the guest should stay below its maximum. I think this is a valid
>> approach.
> 
> But with your vNUMA model there's no single such value, and nothing
> like a "maximum" (which would need to be per virtual node afaics).

With vNUMA the tools supply a current memory value per node, and a
maximum per node can be calculated the same way. This results in a
balloon size per node.

There is still the option of letting the guest adjust the per-node
balloon sizes after reaching the final memory size, or perhaps at a
certain rate during the ballooning process.

> 
>>>> Any further thoughts on this?
>>>
>>> The other problem we've always had was that address information
>>> could not be conveyed to the driver. The worst example in the past
>>> was that 32-bit PV domains can't run on arbitrarily high underlying
>>> physical addresses, but of course there are other cases where
>>> memory below a certain boundary may be needed. The obvious
>>> problem with directly exposing address information through the
>>> interface is that for HVM guests machine addresses are meaningless.
>>> Hence I wonder whether a dedicated "balloon out this page if you
>>> can" mechanism would be something to consider.
>>
>> Isn't this a problem orthogonal to the one we are discussing here?
> 
> Yes, but I think we shouldn't design a new interface without
> considering all current shortcomings.

I don't think the suggested interface would make it harder to add a way
of requesting that specific pages be preferred in the ballooning
process.

> 
>> I'd rather do a localhost guest migration to free specific pages a
>> guest is owning and tell the Xen memory allocator not to hand them
>> out to the new guest created by the migration.
> 
> There may not be enough memory to do a localhost migration.
> Ballooning, after all, may be done just because of a memory
> shortage.

True.

Still, I believe the tooling needed to identify the domains owning the
required memory pages and to ask them to balloon those pages out, so the
pages can be used to create a special domain, is nothing which is going
to happen soon.

So as long as we are confident that the new interface wouldn't block
such a usage I think we are fine.


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
