On 10/01/2018 02:44 PM, Sergey Dyasli wrote:
> On Mon, 2018-10-01 at 07:38 -0600, Jan Beulich wrote:
>>>>> On 01.10.18 at 15:12, <andrew.coop...@citrix.com> wrote:
>>>
>>> On 01/10/18 12:13, Jan Beulich wrote:
>>>>>>> On 01.10.18 at 11:58, <sergey.dya...@citrix.com> wrote:
>>>>>
>>>>> Having the allocator return unscrubbed pages is a potential security
>>>>> concern: some domain can be given pages with memory contents of another
>>>>> domain. This may happen, for example, if a domain voluntarily releases
>>>>> its own memory (ballooning being the easiest way to do this).
>>>>
>>>> And we've always said that in this case it's the domain's responsibility
>>>> to scrub the memory of secrets it cares about. Therefore I'm at the
>>>> very least missing some background on this change of expectations.
>>>
>>> You were on the call when this was discussed, along with the synchronous
>>> scrubbing in destroydomain.
>>
>> Quite possible, but it has been a while.
>>
>>> Put simply, the current behaviour is not good enough for a number of
>>> security sensitive usecases.
>>
>> Well, I'm looking forward to Sergey expanding on this in the commit
>> message.
>>
>>> The main reason however for doing this is the optimisations it enables,
>>> and in particular, not double scrubbing most of our pages.
>>
>> Well, wait - scrubbing != zeroing (taking into account also what you
>> say further down).
>>
>>>>> Change the allocator to always scrub the pages given to it by:
>>>>>
>>>>> 1. free_xenheap_pages()
>>>>> 2. free_domheap_pages()
>>>>> 3. online_page()
>>>>> 4. init_heap_pages()
>>>>>
>>>>> Performance testing has shown that on multi-node machines bootscrub
>>>>> vastly outperforms idle-loop scrubbing. So instead of marking all pages
>>>>> dirty initially, introduce bootscrub_done to track the completion of
>>>>> the process and eagerly scrub all allocated pages during boot.
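
For illustration, here is a minimal, self-contained sketch of the scheme the
quoted commit message describes. This is not the actual patch: the structure
layout and the names need_scrub, free_page() and alloc_page() are
simplifications invented for this example.

    /*
     * Sketch only, not the real page_alloc.c interfaces.
     */
    #include <stdbool.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    struct page {
        bool need_scrub;               /* may still hold the previous owner's data */
        unsigned char data[PAGE_SIZE];
    };

    static bool bootscrub_done;        /* set once boot-time scrubbing finishes */

    static void free_page(struct page *pg)
    {
        /* The allocator no longer trusts the previous owner to have cleaned
         * the page: every freed page is recorded as needing a scrub. */
        pg->need_scrub = true;
        /* ... return pg to the free lists ... */
    }

    static struct page *alloc_page(struct page *pg)
    {
        /*
         * Before bootscrub_done is set, the page may still hold whatever was
         * in RAM at boot, so scrub eagerly on every allocation.  Afterwards
         * only pages that went through free_page() can be dirty, and
         * background scrubbing normally cleans those before they are
         * allocated again, making this a cheap check in the common case.
         */
        if ( !bootscrub_done || pg->need_scrub )
        {
            memset(pg->data, 0, PAGE_SIZE);   /* "scrub" = clear (or poison) */
            pg->need_scrub = false;
        }
        return pg;
    }

The role of bootscrub_done in this sketch is simply to bound the eager-scrub
window: once boot scrubbing has finished, only explicitly freed pages can be
dirty.
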
>>>>
>>>> I'm afraid I'm somewhat lost: There still is active boot time scrubbing,
>>>> or at least I can't see how that might be skipped (other than due to
>>>> "bootscrub=0"). I was actually expecting this to change at some
>>>> point. Am I perhaps simply mis-reading this part of the description?
>>>
>>> No.  Sergey tried that, and found a massive perf difference between
>>> scrubbing in the idle loop and scrubbing at boot.  (1.2s vs 40s iirc)
>>
>> That's not something you can reasonably compare, imo: For one,
>> it is certainly expected for the background scrubbing to be slower,
>> simply because of other activity on the system. And then 1.2s
>> looks awfully small for a multi-TB system. Yet it is mainly large
>> systems where the synchronous boot time scrubbing is a problem.
> 
> Let me throw in some numbers.
> 
> Performance of the current idle-loop scrubbing is just not good enough:
> on an 8-node, 32-CPU, 512 GB RAM machine it takes ~40 seconds to scrub
> all the memory, versus ~8 seconds with the current bootscrub implementation.
> 
> This was measured while synchronously waiting for the CPUs to scrub all
> the memory in the idle loop. But scrubbing can happen in the background,
> of course.
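
Back-of-envelope, taking those figures at face value and assuming the full
512 GB is scrubbed in both cases:

    512 GB / ~8 s  ≈ 64 GB/s aggregate scrub bandwidth for bootscrub
                     (about 2 GB/s per CPU if all 32 CPUs take part)
    512 GB / ~40 s ≈ 13 GB/s aggregate for idle-loop scrubbing

i.e. the gap under discussion is roughly a factor of five in effective
scrub throughput.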

Right, the whole point of idle-loop scrubbing is that you *don't*
synchronously wait for *all* the memory to finish scrubbing before you
can use part of it.  So why is this an issue for you guys -- what
concrete problem did it cause, that the full amount of memory took 40s
to finish scrubbing rather than only 8s?

 -George
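
To make that point concrete, here is a minimal sketch of the idle-loop
scrubbing model being described. It is illustrative only, not the actual Xen
code (which lives around scrub_free_pages() in page_alloc.c); the names, the
flat free-page array and the batch size are assumptions made for this example.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE   4096
    #define SCRUB_BATCH 64              /* pages cleaned per idle iteration */

    struct page {
        bool need_scrub;
        unsigned char data[PAGE_SIZE];
    };

    static void scrub_one(struct page *pg)
    {
        memset(pg->data, 0, PAGE_SIZE); /* clear (or poison) the contents */
        pg->need_scrub = false;
    }

    /*
     * Runs whenever a CPU has nothing better to do: clean at most
     * SCRUB_BATCH dirty free pages, then re-check for real work.  Nothing
     * ever waits for *all* memory to be scrubbed.
     */
    static void idle_scrub(struct page **free_pages, size_t nr_free,
                           bool (*cpu_has_work)(void))
    {
        size_t i = 0;

        while ( !cpu_has_work() && i < nr_free )
            for ( size_t batch = 0; batch < SCRUB_BATCH && i < nr_free; i++ )
                if ( free_pages[i]->need_scrub )
                {
                    scrub_one(free_pages[i]);
                    batch++;
                }
    }

    /*
     * Allocation can proceed the whole time; a page that is still dirty
     * when it is handed out is scrubbed synchronously here, so a guest only
     * ever pays for the pages it actually receives.
     */
    static struct page *alloc_one(struct page *pg)
    {
        if ( pg->need_scrub )
            scrub_one(pg);
        return pg;
    }

In this model the 40 s figure is the time until the last free page is clean,
not a delay that allocations have to sit behind.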
