Hi,

On 09/05/2024 12:28, Roger Pau Monné wrote:
On Thu, May 09, 2024 at 10:50:56AM +0100, Julien Grall wrote:


On 09/05/2024 09:13, Roger Pau Monné wrote:
On Wed, May 08, 2024 at 11:11:04PM +0100, Julien Grall wrote:
Also the interactions with the remote domain would need to be audited,
as the remote domain shattering the superpage would need to be
replicated in the mapping side in order to account for the changes.

... I don't understand this one. How is this different from today, where a
domain can foreign-map 2MB which may be using a superpage in the remote
domain?

Hm, right, I think I was wrong about that; as long as proper references
are taken for the superpage entries it should be fine.

  Not sure all paths will be easy to
audit for preemption if it's more than relinquish_p2m_mapping() that
you need to adjust.

I thought about it yesterday. But I came to the conclusion that if we have
any concern about removing a 1GB foreign superpage then we would already have
the problem today, as a domain can contiguously map 1GB worth of foreign
mappings using small pages.

Yeah, but in that case addition or removal is done in 4K chunks, and
hence we can preempt during the operation.

I am not entirely sure how that would work. From my understanding, today, most of the users of the P2M code expect the operation to complete in one go, and if preemption is needed then the caller is responsible for handling it by breaking up the work.

With your suggestion, it sounds like you want to rework how preemption is done today and push it into the P2M code. This would mean we would need to modify all the callers to check for -ERESTART (or similar) and also tell them how many pages were handled so the call can be restarted where it stopped. Is that what you had in mind?
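To make the discussion concrete, here is a minimal sketch of what such a preemptible removal inside the P2M code could look like. All names here (p2m_remove_preemptible, PREEMPT_BATCH, the batch size of 512) are hypothetical, not actual Xen code; the point is only the shape of the -ERESTART contract, where the callee reports how many 4K pages it handled so the caller can restart at the right offset:

```c
#include <assert.h>

#define ERESTART      85   /* illustrative errno-style value */
#define PREEMPT_BATCH 512  /* hypothetical: 4K pages handled per batch */

/*
 * Hypothetical sketch: remove 'nr' 4K mappings starting at 'gfn',
 * preempting after every PREEMPT_BATCH pages.  On preemption, *done
 * reports how many pages were removed so the caller can restart at
 * gfn + *done with nr - *done remaining.
 */
static int p2m_remove_preemptible(unsigned long gfn, unsigned long nr,
                                  unsigned long *done)
{
    unsigned long i;

    for ( i = 0; i < nr; i++ )
    {
        /* put_page()/entry teardown for (gfn + i) would happen here. */
        if ( ((i + 1) % PREEMPT_BATCH) == 0 && (i + 1) < nr )
        {
            *done = i + 1;
            return -ERESTART;  /* caller restarts where we stopped */
        }
    }

    *done = nr;
    return 0;
}
```

The cost this sketch illustrates is exactly the one above: every caller has to grow a restart loop around the -ERESTART return, which is the non-trivial churn being questioned.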

I don't expect this work to be trivial, so I wonder whether it is really worth changing the way we preempt.


OTOH for 1GB given the code here the page could be freed in one go,
without a chance of preempting the operation.

Maybe you have to shatter superpages into 4K entries and then remove
them individually, so as to allow preemption by calling put_page() for
each 4K chunk?
This would require allocating some pages from the P2M pool for the page tables. As the pool may be exhausted, this could be problematic when relinquishing the resources.

It may be possible to free up memory by removing other mappings first. But that feels a bit hackish, and I would rather we avoid allocating any memory while relinquishing.
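To illustrate why shattering is problematic here, below is a hypothetical sketch (the names p2m_pool, p2m_pool_alloc and shatter_1g_entry are made up for illustration, not real Xen functions): splitting one 1GB entry into 2MB entries needs a fresh table of 512 entries from the P2M pool, so the teardown path can fail with -ENOMEM precisely when the pool is already exhausted:

```c
#include <assert.h>
#include <stdlib.h>

#define ENOMEM         12
#define PTES_PER_TABLE 512  /* entries per intermediate page table */

/* Hypothetical model of the per-domain P2M page pool. */
struct p2m_pool { unsigned int free_pages; };

static void *p2m_pool_alloc(struct p2m_pool *pool)
{
    if ( !pool->free_pages )
        return NULL;          /* pool exhausted */
    pool->free_pages--;
    return calloc(PTES_PER_TABLE, sizeof(unsigned long));
}

/*
 * Hypothetical sketch: shattering one 1GB entry into 512 2MB entries
 * requires one new table from the pool.  If the allocation fails, the
 * shatter (and hence a preemptible, entry-by-entry teardown) cannot
 * proceed.
 */
static int shatter_1g_entry(struct p2m_pool *pool)
{
    void *table = p2m_pool_alloc(pool);

    if ( !table )
        return -ENOMEM;       /* cannot shatter while relinquishing */

    /* Fill the 512 new 2MB entries covering the old 1GB range here. */
    free(table);              /* sketch only: keep the example leak-free */
    return 0;
}
```

This is why capping foreign mappings at 2MB superpages looks attractive: a 2MB entry can be torn down without allocating further tables.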

So I think we want to allow at most 2MB superpages for foreign mappings.

Cheers,

--
Julien Grall
