On 03.02.2021 15:21, Vitaly Kuznetsov wrote:
> "Maciej S. Szmigiero" writes:
>> On 03.02.2021 00:43, Sean Christopherson wrote:
>>> On Tue, Feb 02, 2021, Maciej S. Szmigiero wrote:
>>>> On 02.02.2021 02:33, Sean Christopherson wrote:
...
>>>> I guess you mean to still turn id_to_index into a hash table,
On 03.02.2021 14:41, Paolo Bonzini wrote:
> On 02/02/21 23:42, Maciej S. Szmigiero wrote:
>> I'm not opposed to using more sophisticated storage for the gfn lookups, but
>> only if there's a good reason for doing so. IMO, the
>> rbtree isn't simpler, just different.
>> And it also has worse cache utilization than an array, due to memory
On 03/02/21 14:52, David Hildenbrand wrote:
> However, note that the TDP MMU does not need an rmap at all. Since that
> one is getting ready to become the default, the benefits of working on
> the rmap would be quite small and only affect nested virtualization.

Right, but we currently always have
"Maciej S. Szmigiero" writes:
> On 03.02.2021 00:43, Sean Christopherson wrote:
>> On Tue, Feb 02, 2021, Maciej S. Szmigiero wrote:
>>> On 02.02.2021 02:33, Sean Christopherson wrote:
...
>>>
>>> I guess you mean to still turn id_to_index into a hash table, since
>>> otherwise a VMM which uses
On 03.02.21 14:46, Paolo Bonzini wrote:
> On 03/02/21 14:44, David Hildenbrand wrote:
>> BTW: what are your thoughts regarding converting the rmap array on
>> x86-64 into some dynamic data structure (xarray etc.)? Has that already
>> been discussed?
> Hasn't been discussed---as always, showing the code would be the best
> way to start a discussion.
The new implementation also uses the standard kernel {rb,interval}-tree
and hash table implementations as its basic data structures, so it
automatically benefits from any generic improvements to these.
All for the low price of just 174 net lines of code added.

I think the best thing to do here is
On 03/02/21 00:43, Sean Christopherson wrote:
> But overall, this solution (and the one with id_to_index moved into the
> main array, too) is still O(n) per memslot operation, as you still need to
> copy the array either to make space for the new memslot or to remove the
> hole left by the removed memslot.

On 02/02/21 23:42, Maciej S. Szmigiero wrote:
> I'm not opposed to using more sophisticated storage for the gfn
> lookups, but only if there's a good reason for doing so. IMO, the
> rbtree isn't simpler, just different.
> And it also has worse cache utilization than an array, due to memory
On 03.02.2021 00:43, Sean Christopherson wrote:
> On Tue, Feb 02, 2021, Maciej S. Szmigiero wrote:
>> On 02.02.2021 02:33, Sean Christopherson wrote:
>>>> Making lookup and memslot management operations O(log(n)) brings
>>>> some performance benefits (tested on a Xeon 8167M machine):
>>>> 509 slots in use:
>>>> Test
On Tue, Feb 02, 2021, Maciej S. Szmigiero wrote:
> On 02.02.2021 02:33, Sean Christopherson wrote:
> > > Making lookup and memslot management operations O(log(n)) brings
> > > some performance benefits (tested on a Xeon 8167M machine):
> > > 509 slots in use:
> > > Test Before
Hi Sean,
Thanks for your quick look at the series.
On 02.02.2021 02:33, Sean Christopherson wrote:
> The patch numbering and/or threading is messed up. This should either be a
> standalone patch or fully incorporated into the same series as the selftests
> changes. But, that's somewhat of a moot point...
On Mon, Feb 01, 2021, Maciej S. Szmigiero wrote:
> From: "Maciej S. Szmigiero"
>
> The
From: "Maciej S. Szmigiero"
The current memslot code uses a (reverse) gfn-ordered memslot array for
keeping track of them.
This only allows a quick binary search by gfn; a quick lookup by hva is not
possible (the implementation has to do a linear scan of the whole memslot
array).
Because the