Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-09-21 Thread Michal Prívozník
On 9/21/22 16:54, David Hildenbrand wrote:
> On 21.09.22 16:44, Michal Prívozník wrote:
>> On 7/21/22 14:07, David Hildenbrand wrote:
>>>
>>
>> Ping? Is there any plan for how to move forward? I have libvirt patches
>> ready to consume this and I'd like to prune my old local branches :-)
> 
> Heh, I was thinking about this series just today. I was distracted by
> all kinds of other stuff.
> 
> I'll move forward with this series later this week/early next week.

No rush, it's only that I don't want this to fall into the void. Let me
know if I can help somehow. Meanwhile, here's my aforementioned branch:

https://gitlab.com/MichalPrivoznik/libvirt/-/tree/qemu_thread_context

I've made it so that a ThreadContext is generated whenever
.prealloc-threads AND .host-nodes are used (i.e. no XML-visible config
knob). And I'm generating ThreadContext objects for each memory backend
separately even though they could be reused, but IMO that's an
optimization that can be done later.
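
For illustration, the generated pair for a single backend looks roughly
like this (ids and sizes made up here; the JSON -object form that libvirt
actually emits is shown in the 2022-08-05 mail further down this listing):

-object thread-context,id=tc-ram-node0,node-affinity=2 \
-object memory-backend-memfd,id=ram-node0,hugetlb=on,hugetlbsize=1G,size=20G,prealloc=on,prealloc-threads=16,host-nodes=2,policy=bind,prealloc-context=tc-ram-node0 \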

Michal




Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-09-21 Thread David Hildenbrand

On 21.09.22 16:44, Michal Prívozník wrote:
> On 7/21/22 14:07, David Hildenbrand wrote:
>>
>
> Ping? Is there any plan for how to move forward? I have libvirt patches
> ready to consume this and I'd like to prune my old local branches :-)

Heh, I was thinking about this series just today. I was distracted by
all kinds of other stuff.

I'll move forward with this series later this week/early next week.

Thanks!

--
Thanks,

David / dhildenb




Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-09-21 Thread Michal Prívozník
On 7/21/22 14:07, David Hildenbrand wrote:
>

Ping? Is there any plan for how to move forward? I have libvirt patches
ready to consume this and I'd like to prune my old local branches :-)

Michal




Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-08-11 Thread Joao Martins
On 8/9/22 19:06, David Hildenbrand wrote:
> On 09.08.22 12:56, Joao Martins wrote:
>> On 7/21/22 13:07, David Hildenbrand wrote:
>>> This is a follow-up on "util: NUMA aware memory preallocation" [1] by
>>> Michal.
>>>
>>> Setting the CPU affinity of threads from inside QEMU usually isn't
>>> easily possible, because we don't want QEMU -- once started and running
>>> guest code -- to be able to mess up the system. QEMU disallows relevant
>>> syscalls using seccomp, such that any such invocation will fail.
>>>
>>> Especially for memory preallocation in memory backends, the CPU affinity
>>> can significantly increase guest startup time, for example, when running
>>> large VMs backed by huge/gigantic pages, because of NUMA effects. For
>>> NUMA-aware preallocation, we have to set the CPU affinity, however:
>>>
>>> (1) Once preallocation threads are created during preallocation, management
>>> tools cannot intercept anymore to change the affinity. These threads
>>> are created automatically on demand.
>>> (2) QEMU cannot easily set the CPU affinity itself.
>>> (3) The CPU affinity derived from the NUMA bindings of the memory backend
>>> might not necessarily be exactly the CPUs we actually want to use
>>> (e.g., CPU-less NUMA nodes, CPUs that are pinned/used for other VMs).
>>>
>>> There is an easy "workaround". If we have a thread with the right CPU
>>> affinity, we can simply create new threads on demand via that prepared
>>> context. So, all we have to do is setup and create such a context ahead
>>> of time, to then configure preallocation to create new threads via that
>>> environment.
>>>
>>> So, let's introduce a user-creatable "thread-context" object that
>>> essentially consists of a context thread used to create new threads.
>>> QEMU can either try setting the CPU affinity itself ("cpu-affinity",
>>> "node-affinity" property), or upper layers can extract the thread id
>>> ("thread-id" property) to configure it externally.
>>>
>>> Make memory-backends consume a thread-context object
>>> (via the "prealloc-context" property) and use it when preallocating to
>>> create new threads with the desired CPU affinity. Further, to make it
>>> easier to use, allow creation of "thread-context" objects, including
>>> setting the CPU affinity directly from QEMU, *before* enabling the
>>> sandbox option.
>>>
>>>
>>> Quick test on a system with 2 NUMA nodes:
>>>
>>> Without CPU affinity:
>>> time qemu-system-x86_64 \
>>> -object 
>>> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind
>>>  \
>>> -nographic -monitor stdio
>>>
>>> real    0m5.383s
>>> real    0m3.499s
>>> real    0m5.129s
>>> real    0m4.232s
>>> real    0m5.220s
>>> real    0m4.288s
>>> real    0m3.582s
>>> real    0m4.305s
>>> real    0m5.421s
>>> real    0m4.502s
>>>
>>> -> It heavily depends on the scheduler CPU selection
>>>
>>> With CPU affinity:
>>> time qemu-system-x86_64 \
>>> -object thread-context,id=tc1,node-affinity=0 \
>>> -object 
>>> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind,prealloc-context=tc1
>>>  \
>>> -sandbox enable=on,resourcecontrol=deny \
>>> -nographic -monitor stdio
>>>
>>> real    0m1.959s
>>> real    0m1.942s
>>> real    0m1.943s
>>> real    0m1.941s
>>> real    0m1.948s
>>> real    0m1.964s
>>> real    0m1.949s
>>> real    0m1.948s
>>> real    0m1.941s
>>> real    0m1.937s
>>>
>>> On reasonably large VMs, the speedup can be quite significant.
>>>
>> Really awesome work!
> 
> Thanks!
> 
>>
>> I am not sure I picked this up well while reading the series, but it seems
>> to me that prealloc is still serialized per memory-backend when solely
>> configured by command line, right?
> 
> I think it's serialized in any case, even when preallocation is
> triggered manually using prealloc=on. I might be wrong, but any kind of
> object creation or property changes should be serialized by the BQL.
> 
> In theory, we can "easily" preallocate in our helper --
> qemu_prealloc_mem() -- concurrently when we don't have to bother about
> handling SIGBUS -- that is, when the kernel supports
> MADV_POPULATE_WRITE. Without MADV_POPULATE_WRITE on older kernels, we'll
> serialize in there as well.
> 
/me nods matches my understanding

>>
>> Meaning, when we start prealloc we wait until the memory-backend
>> thread-context action is completed (per memory-backend), even if other
>> to-be-configured memory-backends will use a thread-context on a separate
>> set of pinned CPUs on another node ... and wouldn't, in theory, "need" to
>> wait until the former prealloc finishes?
> 
> Yes. This series only takes care of NUMA-aware preallocation, but
> doesn't preallocate multiple memory backends in parallel.
> 
> In theory, it would be quite easy 

Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-08-09 Thread Michal Prívozník
On 8/9/22 20:06, David Hildenbrand wrote:
> On 09.08.22 12:56, Joao Martins wrote:
>> On 7/21/22 13:07, David Hildenbrand wrote:
>>> This is a follow-up on "util: NUMA aware memory preallocation" [1] by
>>> Michal.
>>>
>>> Setting the CPU affinity of threads from inside QEMU usually isn't
>>> easily possible, because we don't want QEMU -- once started and running
>>> guest code -- to be able to mess up the system. QEMU disallows relevant
>>> syscalls using seccomp, such that any such invocation will fail.
>>>
>>> Especially for memory preallocation in memory backends, the CPU affinity
>>> can significantly increase guest startup time, for example, when running
>>> large VMs backed by huge/gigantic pages, because of NUMA effects. For
>>> NUMA-aware preallocation, we have to set the CPU affinity, however:
>>>
>>> (1) Once preallocation threads are created during preallocation, management
>>> tools cannot intercept anymore to change the affinity. These threads
>>> are created automatically on demand.
>>> (2) QEMU cannot easily set the CPU affinity itself.
>>> (3) The CPU affinity derived from the NUMA bindings of the memory backend
>>> might not necessarily be exactly the CPUs we actually want to use
>>> (e.g., CPU-less NUMA nodes, CPUs that are pinned/used for other VMs).
>>>
>>> There is an easy "workaround". If we have a thread with the right CPU
>>> affinity, we can simply create new threads on demand via that prepared
>>> context. So, all we have to do is setup and create such a context ahead
>>> of time, to then configure preallocation to create new threads via that
>>> environment.
>>>
>>> So, let's introduce a user-creatable "thread-context" object that
>>> essentially consists of a context thread used to create new threads.
>>> QEMU can either try setting the CPU affinity itself ("cpu-affinity",
>>> "node-affinity" property), or upper layers can extract the thread id
>>> ("thread-id" property) to configure it externally.
>>>
>>> Make memory-backends consume a thread-context object
>>> (via the "prealloc-context" property) and use it when preallocating to
>>> create new threads with the desired CPU affinity. Further, to make it
>>> easier to use, allow creation of "thread-context" objects, including
>>> setting the CPU affinity directly from QEMU, *before* enabling the
>>> sandbox option.
>>>
>>>
>>> Quick test on a system with 2 NUMA nodes:
>>>
>>> Without CPU affinity:
>>> time qemu-system-x86_64 \
>>> -object 
>>> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind
>>>  \
>>> -nographic -monitor stdio
>>>
>>> real    0m5.383s
>>> real    0m3.499s
>>> real    0m5.129s
>>> real    0m4.232s
>>> real    0m5.220s
>>> real    0m4.288s
>>> real    0m3.582s
>>> real    0m4.305s
>>> real    0m5.421s
>>> real    0m4.502s
>>>
>>> -> It heavily depends on the scheduler CPU selection
>>>
>>> With CPU affinity:
>>> time qemu-system-x86_64 \
>>> -object thread-context,id=tc1,node-affinity=0 \
>>> -object 
>>> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind,prealloc-context=tc1
>>>  \
>>> -sandbox enable=on,resourcecontrol=deny \
>>> -nographic -monitor stdio
>>>
>>> real    0m1.959s
>>> real    0m1.942s
>>> real    0m1.943s
>>> real    0m1.941s
>>> real    0m1.948s
>>> real    0m1.964s
>>> real    0m1.949s
>>> real    0m1.948s
>>> real    0m1.941s
>>> real    0m1.937s
>>>
>>> On reasonably large VMs, the speedup can be quite significant.
>>>
>> Really awesome work!
> 
> Thanks!
> 
>>
>> I am not sure I picked this up well while reading the series, but it seems
>> to me that prealloc is still serialized per memory-backend when solely
>> configured by command line, right?
> 
> I think it's serialized in any case, even when preallocation is
> triggered manually using prealloc=on. I might be wrong, but any kind of
> object creation or property changes should be serialized by the BQL.
> 
> In theory, we can "easily" preallocate in our helper --
> qemu_prealloc_mem() -- concurrently when we don't have to bother about
> handling SIGBUS -- that is, when the kernel supports
> MADV_POPULATE_WRITE. Without MADV_POPULATE_WRITE on older kernels, we'll
> serialize in there as well.
> 
>>
>> Meaning, when we start prealloc we wait until the memory-backend
>> thread-context action is completed (per memory-backend), even if other
>> to-be-configured memory-backends will use a thread-context on a separate
>> set of pinned CPUs on another node ... and wouldn't, in theory, "need" to
>> wait until the former prealloc finishes?
> 
> Yes. This series only takes care of NUMA-aware preallocation, but
> doesn't preallocate multiple memory backends in parallel.
> 
> In theory, it would be quite easy to preallocate concurrently: simply

Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-08-09 Thread David Hildenbrand
On 09.08.22 12:56, Joao Martins wrote:
> On 7/21/22 13:07, David Hildenbrand wrote:
>> This is a follow-up on "util: NUMA aware memory preallocation" [1] by
>> Michal.
>>
>> Setting the CPU affinity of threads from inside QEMU usually isn't
>> easily possible, because we don't want QEMU -- once started and running
>> guest code -- to be able to mess up the system. QEMU disallows relevant
>> syscalls using seccomp, such that any such invocation will fail.
>>
>> Especially for memory preallocation in memory backends, the CPU affinity
>> can significantly increase guest startup time, for example, when running
>> large VMs backed by huge/gigantic pages, because of NUMA effects. For
>> NUMA-aware preallocation, we have to set the CPU affinity, however:
>>
>> (1) Once preallocation threads are created during preallocation, management
>> tools cannot intercept anymore to change the affinity. These threads
>> are created automatically on demand.
>> (2) QEMU cannot easily set the CPU affinity itself.
>> (3) The CPU affinity derived from the NUMA bindings of the memory backend
>> might not necessarily be exactly the CPUs we actually want to use
>> (e.g., CPU-less NUMA nodes, CPUs that are pinned/used for other VMs).
>>
>> There is an easy "workaround". If we have a thread with the right CPU
>> affinity, we can simply create new threads on demand via that prepared
>> context. So, all we have to do is setup and create such a context ahead
>> of time, to then configure preallocation to create new threads via that
>> environment.
>>
>> So, let's introduce a user-creatable "thread-context" object that
>> essentially consists of a context thread used to create new threads.
>> QEMU can either try setting the CPU affinity itself ("cpu-affinity",
>> "node-affinity" property), or upper layers can extract the thread id
>> ("thread-id" property) to configure it externally.
>>
>> Make memory-backends consume a thread-context object
>> (via the "prealloc-context" property) and use it when preallocating to
>> create new threads with the desired CPU affinity. Further, to make it
>> easier to use, allow creation of "thread-context" objects, including
>> setting the CPU affinity directly from QEMU, *before* enabling the
>> sandbox option.
>>
>>
>> Quick test on a system with 2 NUMA nodes:
>>
>> Without CPU affinity:
>> time qemu-system-x86_64 \
>> -object 
>> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind
>>  \
>> -nographic -monitor stdio
>>
>> real    0m5.383s
>> real    0m3.499s
>> real    0m5.129s
>> real    0m4.232s
>> real    0m5.220s
>> real    0m4.288s
>> real    0m3.582s
>> real    0m4.305s
>> real    0m5.421s
>> real    0m4.502s
>>
>> -> It heavily depends on the scheduler CPU selection
>>
>> With CPU affinity:
>> time qemu-system-x86_64 \
>> -object thread-context,id=tc1,node-affinity=0 \
>> -object 
>> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind,prealloc-context=tc1
>>  \
>> -sandbox enable=on,resourcecontrol=deny \
>> -nographic -monitor stdio
>>
>> real    0m1.959s
>> real    0m1.942s
>> real    0m1.943s
>> real    0m1.941s
>> real    0m1.948s
>> real    0m1.964s
>> real    0m1.949s
>> real    0m1.948s
>> real    0m1.941s
>> real    0m1.937s
>>
>> On reasonably large VMs, the speedup can be quite significant.
>>
> Really awesome work!

Thanks!

> 
> I am not sure I picked this up well while reading the series, but it seems
> to me that prealloc is still serialized per memory-backend when solely
> configured by command line, right?

I think it's serialized in any case, even when preallocation is
triggered manually using prealloc=on. I might be wrong, but any kind of
object creation or property changes should be serialized by the BQL.

In theory, we can "easily" preallocate in our helper --
qemu_prealloc_mem() -- concurrently when we don't have to bother about
handling SIGBUS -- that is, when the kernel supports
MADV_POPULATE_WRITE. Without MADV_POPULATE_WRITE on older kernels, we'll
serialize in there as well.
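
(For the record, a minimal sketch of that MADV_POPULATE_WRITE idea -- not
qemu_prealloc_mem() itself, just to illustrate why concurrent population is
simple when the kernel reports errors via the return value instead of
SIGBUS; needs Linux 5.14+ and compiling with -pthread:)

#define _GNU_SOURCE
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MADV_POPULATE_WRITE
#define MADV_POPULATE_WRITE 23
#endif

#define NTHREADS 4

struct chunk { char *addr; size_t len; };

/* Populate one chunk; no SIGBUS handler needed, errors come back as -1. */
static void *populate(void *opaque)
{
    struct chunk *c = opaque;

    if (madvise(c->addr, c->len, MADV_POPULATE_WRITE)) {
        perror("madvise(MADV_POPULATE_WRITE)");
    }
    return NULL;
}

int main(void)
{
    const size_t size = 1UL << 30;              /* 1 GiB to fault in */
    pthread_t tids[NTHREADS];
    struct chunk chunks[NTHREADS];
    size_t per = size / NTHREADS;
    char *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (area == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Split the area and populate the chunks concurrently. */
    for (int i = 0; i < NTHREADS; i++) {
        chunks[i] = (struct chunk){ area + i * per, per };
        pthread_create(&tids[i], NULL, populate, &chunks[i]);
    }
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tids[i], NULL);
    }
    return 0;
}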

> 
> Meaning, when we start prealloc we wait until the memory-backend
> thread-context action is completed (per memory-backend), even if other
> to-be-configured memory-backends will use a thread-context on a separate
> set of pinned CPUs on another node ... and wouldn't, in theory, "need" to
> wait until the former prealloc finishes?

Yes. This series only takes care of NUMA-aware preallocation, but
doesn't preallocate multiple memory backends in parallel.

In theory, it would be quite easy to preallocate concurrently: simply
create the memory backend objects passed on the QEMU cmdline
concurrently from multiple threads.

In practice, we have to be careful I think with the BQL. But it doesn't
sound horr

Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-08-09 Thread Joao Martins
On 7/21/22 13:07, David Hildenbrand wrote:
> This is a follow-up on "util: NUMA aware memory preallocation" [1] by
> Michal.
> 
> Setting the CPU affinity of threads from inside QEMU usually isn't
> easily possible, because we don't want QEMU -- once started and running
> guest code -- to be able to mess up the system. QEMU disallows relevant
> syscalls using seccomp, such that any such invocation will fail.
> 
> Especially for memory preallocation in memory backends, the CPU affinity
> can significantly increase guest startup time, for example, when running
> large VMs backed by huge/gigantic pages, because of NUMA effects. For
> NUMA-aware preallocation, we have to set the CPU affinity, however:
> 
> (1) Once preallocation threads are created during preallocation, management
> tools cannot intercept anymore to change the affinity. These threads
> are created automatically on demand.
> (2) QEMU cannot easily set the CPU affinity itself.
> (3) The CPU affinity derived from the NUMA bindings of the memory backend
> might not necessarily be exactly the CPUs we actually want to use
> (e.g., CPU-less NUMA nodes, CPUs that are pinned/used for other VMs).
> 
> There is an easy "workaround". If we have a thread with the right CPU
> affinity, we can simply create new threads on demand via that prepared
> context. So, all we have to do is setup and create such a context ahead
> of time, to then configure preallocation to create new threads via that
> environment.
> 
> So, let's introduce a user-creatable "thread-context" object that
> essentially consists of a context thread used to create new threads.
> QEMU can either try setting the CPU affinity itself ("cpu-affinity",
> "node-affinity" property), or upper layers can extract the thread id
> ("thread-id" property) to configure it externally.
> 
> Make memory-backends consume a thread-context object
> (via the "prealloc-context" property) and use it when preallocating to
> create new threads with the desired CPU affinity. Further, to make it
> easier to use, allow creation of "thread-context" objects, including
> setting the CPU affinity directly from QEMU, *before* enabling the
> sandbox option.
> 
> 
> Quick test on a system with 2 NUMA nodes:
> 
> Without CPU affinity:
> time qemu-system-x86_64 \
> -object 
> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind
>  \
> -nographic -monitor stdio
> 
> real    0m5.383s
> real    0m3.499s
> real    0m5.129s
> real    0m4.232s
> real    0m5.220s
> real    0m4.288s
> real    0m3.582s
> real    0m4.305s
> real    0m5.421s
> real    0m4.502s
> 
> -> It heavily depends on the scheduler CPU selection
> 
> With CPU affinity:
> time qemu-system-x86_64 \
> -object thread-context,id=tc1,node-affinity=0 \
> -object 
> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind,prealloc-context=tc1
>  \
> -sandbox enable=on,resourcecontrol=deny \
> -nographic -monitor stdio
> 
> real    0m1.959s
> real    0m1.942s
> real    0m1.943s
> real    0m1.941s
> real    0m1.948s
> real    0m1.964s
> real    0m1.949s
> real    0m1.948s
> real    0m1.941s
> real    0m1.937s
> 
> On reasonably large VMs, the speedup can be quite significant.
> 
Really awesome work!

I am not sure I picked this up well while reading the series, but it seems
to me that prealloc is still serialized per memory-backend when solely
configured by command line, right?

Meaning, when we start prealloc we wait until the memory-backend
thread-context action is completed (per memory-backend), even if other
to-be-configured memory-backends will use a thread-context on a separate
set of pinned CPUs on another node ... and wouldn't, in theory, "need" to
wait until the former prealloc finishes?

Unless, as you alluded to in one of the last patches: we can pass these
thread-contexts with prealloc=off (and prealloc-context=NNN) while qemu is
paused (-S) and have different QMP clients set prealloc=on, and thus
prealloc would happen concurrently per node?
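
(Concretely, I'd imagine something like the following per backend over QMP
-- hypothetical ids, and assuming the backend's "prealloc" property can be
flipped to true at runtime this way:)

-> { "execute": "qom-set",
     "arguments": { "path": "/objects/ram-node0",
                    "property": "prealloc", "value": true } }
-> { "execute": "qom-set",
     "arguments": { "path": "/objects/ram-node1",
                    "property": "prealloc", "value": true } }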

We were thinking of extending this to leverage per-socket bandwidth,
essentially to parallelize things even further (we saw improvements with
something like that but haven't tried this series yet). Likely this is
already possible with your work and I didn't pick up on it, hence just
making sure this is the case :)



Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-08-05 Thread David Hildenbrand
> 
> I've timed 'virsh start' with a guest that has 47GB worth of 1GB
> hugepages and seen the startup time halved basically (from 10.5s to
> 5.6s). The host has 4 NUMA nodes and I'm pinning the guest onto two nodes.
> 
> I've written the libvirt counterpart (which I'll post as soon as these are
> merged). The way it works is that whenever .prealloc-threads= is to be
> used AND qemu is capable of thread-context, a thread-context object is
> generated before every memory-backend-*, like this:

One interesting corner case might be CPU-less NUMA nodes. Setting
the node-affinity would fail because there are no CPUs. Libvirt could
figure that out by testing whether the selected node(s) have CPUs.
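
(A sketch of such a check with libnuma -- not libvirt's actual helper;
link with -lnuma:)

#include <numa.h>
#include <stdbool.h>
#include <stdio.h>

/* Return true if @node has at least one CPU, i.e. node-affinity can work. */
static bool node_has_cpus(int node)
{
    struct bitmask *cpus = numa_allocate_cpumask();
    bool ret = false;

    if (numa_node_to_cpus(node, cpus) == 0) {
        ret = numa_bitmask_weight(cpus) > 0;
    }
    numa_free_cpumask(cpus);
    return ret;
}

int main(void)
{
    if (numa_available() < 0) {
        return 1;
    }
    printf("node 0 has CPUs: %d\n", node_has_cpus(0));
    return 0;
}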

> 
> -object
> '{"qom-type":"thread-context","id":"tc-ram-node0","node-affinity":[2]}' \
> -object
> '{"qom-type":"memory-backend-memfd","id":"ram-node0","hugetlb":true,"hugetlbsize":1073741824,"share":true,"prealloc":true,"prealloc-threads":16,"size":21474836480,"host-nodes":[2],"policy":"bind","prealloc-context":"tc-ram-node0"}'
> \
> -numa node,nodeid=0,cpus=0,cpus=2,memdev=ram-node0 \
> -object
> '{"qom-type":"thread-context","id":"tc-ram-node1","node-affinity":[3]}' \
> -object
> '{"qom-type":"memory-backend-memfd","id":"ram-node1","hugetlb":true,"hugetlbsize":1073741824,"share":true,"prealloc":true,"prealloc-threads":16,"size":28991029248,"host-nodes":[3],"policy":"bind","prealloc-context":"tc-ram-node1"}'
> \
> 
> 
> Now, it's not visible in this snippet, but my code does not reuse
> thread-context objects. So if there's another memfd, it'll get its own TC:
> 
> -object
> '{"qom-type":"thread-context","id":"tc-memdimm0","node-affinity":[1]}' \
> -object
> '{"qom-type":"memory-backend-memfd","id":"memdimm0","hugetlb":true,"hugetlbsize":1073741824,"share":true,"prealloc":true,"prealloc-threads":16,"size":1073741824,"host-nodes":[1],"policy":"bind","prealloc-context":"tc-memdimm0"}'
> \
> 
> The reason is that logic generating memory-backends is very complex and
> separating out parts of it so that thread-context objects can be
> generated first and reused by those backends would inevitably lead to

Sounds like something we can work on later.

> regression. I guess my question is, whether it's a problem that libvirt
> would leave one additional thread, sleeping in a semaphore, for each
> memory-backend (iff prealloc-threads are used).

I guess in most setups we just don't care. Of course, with 256 DIMMs or
an endless number of nodes, we *might* care.


One optimization for some ordinary setups (not caring about NUMA-aware
preallocation during DIMM hotplug) would be to assign some dummy thread
context once prealloc finished (e.g., once QEMU initialized after
prealloc) and delete the original thread context along with the thread.

> 
> Although, if I read the code correctly, thread-context object can be
> specified AFTER memory backends, because they are parsed and created
> before backends anyway. Well, something to think over the weekend.

Yes, the command line order does not matter.
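
(i.e. something like this, with the context object given last, works just
as well -- ids reused from the memdimm0 example above:)

-object memory-backend-memfd,id=memdimm0,hugetlb=on,hugetlbsize=1G,size=1G,prealloc=on,prealloc-threads=16,host-nodes=1,policy=bind,prealloc-context=tc-memdimm0 \
-object thread-context,id=tc-memdimm0,node-affinity=1 \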

[...]

> 
> Reviewed-by: Michal Privoznik 

Thanks!

-- 
Thanks,

David / dhildenb




Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-08-05 Thread Michal Prívozník
On 7/21/22 14:07, David Hildenbrand wrote:
> This is a follow-up on "util: NUMA aware memory preallocation" [1] by
> Michal.
> 
> Setting the CPU affinity of threads from inside QEMU usually isn't
> easily possible, because we don't want QEMU -- once started and running
> guest code -- to be able to mess up the system. QEMU disallows relevant
> syscalls using seccomp, such that any such invocation will fail.
> 
> Especially for memory preallocation in memory backends, the CPU affinity
> can significantly increase guest startup time, for example, when running
> large VMs backed by huge/gigantic pages, because of NUMA effects. For
> NUMA-aware preallocation, we have to set the CPU affinity, however:
> 
> (1) Once preallocation threads are created during preallocation, management
> tools cannot intercept anymore to change the affinity. These threads
> are created automatically on demand.
> (2) QEMU cannot easily set the CPU affinity itself.
> (3) The CPU affinity derived from the NUMA bindings of the memory backend
> might not necessarily be exactly the CPUs we actually want to use
> (e.g., CPU-less NUMA nodes, CPUs that are pinned/used for other VMs).
> 
> There is an easy "workaround". If we have a thread with the right CPU
> affinity, we can simply create new threads on demand via that prepared
> context. So, all we have to do is setup and create such a context ahead
> of time, to then configure preallocation to create new threads via that
> environment.
> 
> So, let's introduce a user-creatable "thread-context" object that
> essentially consists of a context thread used to create new threads.
> QEMU can either try setting the CPU affinity itself ("cpu-affinity",
> "node-affinity" property), or upper layers can extract the thread id
> ("thread-id" property) to configure it externally.
> 
> Make memory-backends consume a thread-context object
> (via the "prealloc-context" property) and use it when preallocating to
> create new threads with the desired CPU affinity. Further, to make it
> easier to use, allow creation of "thread-context" objects, including
> setting the CPU affinity directly from QEMU, *before* enabling the
> sandbox option.
> 
> 
> Quick test on a system with 2 NUMA nodes:
> 
> Without CPU affinity:
> time qemu-system-x86_64 \
> -object 
> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind
>  \
> -nographic -monitor stdio
> 
> real    0m5.383s
> real    0m3.499s
> real    0m5.129s
> real    0m4.232s
> real    0m5.220s
> real    0m4.288s
> real    0m3.582s
> real    0m4.305s
> real    0m5.421s
> real    0m4.502s
> 
> -> It heavily depends on the scheduler CPU selection
> 
> With CPU affinity:
> time qemu-system-x86_64 \
> -object thread-context,id=tc1,node-affinity=0 \
> -object 
> memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind,prealloc-context=tc1
>  \
> -sandbox enable=on,resourcecontrol=deny \
> -nographic -monitor stdio
> 
> real    0m1.959s
> real    0m1.942s
> real    0m1.943s
> real    0m1.941s
> real    0m1.948s
> real    0m1.964s
> real    0m1.949s
> real    0m1.948s
> real    0m1.941s
> real    0m1.937s
> 
> On reasonably large VMs, the speedup can be quite significant.
> 

I've timed 'virsh start' with a guest that has 47GB worth of 1GB
hugepages and seen the startup time halved basically (from 10.5s to
5.6s). The host has 4 NUMA nodes and I'm pinning the guest onto two nodes.

I've written the libvirt counterpart (which I'll post as soon as these are
merged). The way it works is that whenever .prealloc-threads= is to be
used AND qemu is capable of thread-context, a thread-context object is
generated before every memory-backend-*, like this:

-object
'{"qom-type":"thread-context","id":"tc-ram-node0","node-affinity":[2]}' \
-object
'{"qom-type":"memory-backend-memfd","id":"ram-node0","hugetlb":true,"hugetlbsize":1073741824,"share":true,"prealloc":true,"prealloc-threads":16,"size":21474836480,"host-nodes":[2],"policy":"bind","prealloc-context":"tc-ram-node0"}'
\
-numa node,nodeid=0,cpus=0,cpus=2,memdev=ram-node0 \
-object
'{"qom-type":"thread-context","id":"tc-ram-node1","node-affinity":[3]}' \
-object
'{"qom-type":"memory-backend-memfd","id":"ram-node1","hugetlb":true,"hugetlbsize":1073741824,"share":true,"prealloc":true,"prealloc-threads":16,"size":28991029248,"host-nodes":[3],"policy":"bind","prealloc-context":"tc-ram-node1"}'
\


Now, it's not visible in this snippet, but my code does not reuse
thread-context objects. So if there's another memfd, it'll get its own TC:

-object
'{"qom-type":"thread-context","id":"tc-memdimm0","node-affinity":[1]}' \
-object
'{"qom-type":"memory-backend-memfd","id":"memdimm0","hugetlb":true,"hugetlbsize":1073741824,"share":true,"prealloc":true,"prealloc-thread

Re: [PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-07-25 Thread Michal Prívozník
On 7/21/22 14:07, David Hildenbrand wrote:
> This is a follow-up on "util: NUMA aware memory preallocation" [1] by
> Michal.

I've skimmed through patches and haven't spotted anything obviously
wrong. I'll test these more once I write libvirt support for them (which
I plan to do soon).

Michal




[PATCH RFC 0/7] hostmem: NUMA-aware memory preallocation using ThreadContext

2022-07-21 Thread David Hildenbrand
This is a follow-up on "util: NUMA aware memory preallocation" [1] by
Michal.

Setting the CPU affinity of threads from inside QEMU usually isn't
easily possible, because we don't want QEMU -- once started and running
guest code -- to be able to mess up the system. QEMU disallows relevant
syscalls using seccomp, such that any such invocation will fail.

Especially for memory preallocation in memory backends, the CPU affinity
of the preallocation threads can significantly affect guest startup time,
for example, when running large VMs backed by huge/gigantic pages, because
of NUMA effects. For NUMA-aware preallocation, however, we have to set the
CPU affinity, and:

(1) Once preallocation threads are created during preallocation, management
tools can no longer step in to change their affinity. These threads
are created automatically on demand.
(2) QEMU cannot easily set the CPU affinity itself.
(3) The CPU affinity derived from the NUMA bindings of the memory backend
might not necessarily be exactly the CPUs we actually want to use
(e.g., CPU-less NUMA nodes, CPUs that are pinned/used for other VMs).

There is an easy "workaround". If we have a thread with the right CPU
affinity, we can simply create new threads on demand via that prepared
context. So, all we have to do is set up such a context ahead of time,
and then configure preallocation to create new threads via that
environment.
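
(As a plain-pthread illustration of that trick -- not the ThreadContext
code itself: on Linux, a thread created from an already-pinned thread
simply inherits its creator's affinity mask; build with -pthread:)

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Worker created *by* the context thread: it inherits the creator's
 * CPU affinity, no further affinity syscall needed at this point. */
static void *worker(void *arg)
{
    cpu_set_t set;

    pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
    printf("worker restricted to %d CPUs\n", CPU_COUNT(&set));
    return NULL;
}

/* Long-lived "context" thread: pinned once, up front (i.e. before the
 * seccomp sandbox would forbid it), then it only spawns workers. */
static void *context_thread(void *arg)
{
    cpu_set_t set;
    pthread_t tid;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                   /* e.g. the CPUs of NUMA node 0 */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    pthread_create(&tid, NULL, worker, NULL);   /* inherits the mask */
    pthread_join(tid, NULL);
    return NULL;
}

int main(void)
{
    pthread_t ctx;

    pthread_create(&ctx, NULL, context_thread, NULL);
    pthread_join(ctx, NULL);
    return 0;
}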

So, let's introduce a user-creatable "thread-context" object that
essentially consists of a context thread used to create new threads.
QEMU can either try setting the CPU affinity itself ("cpu-affinity",
"node-affinity" property), or upper layers can extract the thread id
("thread-id" property) to configure it externally.

Make memory-backends consume a thread-context object
(via the "prealloc-context" property) and use it when preallocating to
create new threads with the desired CPU affinity. Further, to make it
easier to use, allow creation of "thread-context" objects, including
setting the CPU affinity directly from QEMU, *before* enabling the
sandbox option.


Quick test on a system with 2 NUMA nodes:

Without CPU affinity:
time qemu-system-x86_64 \
-object 
memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind
 \
-nographic -monitor stdio

real    0m5.383s
real    0m3.499s
real    0m5.129s
real    0m4.232s
real    0m5.220s
real    0m4.288s
real    0m3.582s
real    0m4.305s
real    0m5.421s
real    0m4.502s

-> It heavily depends on the scheduler CPU selection

With CPU affinity:
time qemu-system-x86_64 \
-object thread-context,id=tc1,node-affinity=0 \
-object 
memory-backend-memfd,id=md1,hugetlb=on,hugetlbsize=2M,size=64G,prealloc-threads=12,prealloc=on,host-nodes=0,policy=bind,prealloc-context=tc1
 \
-sandbox enable=on,resourcecontrol=deny \
-nographic -monitor stdio

real    0m1.959s
real    0m1.942s
real    0m1.943s
real    0m1.941s
real    0m1.948s
real    0m1.964s
real    0m1.949s
real    0m1.948s
real    0m1.941s
real    0m1.937s

On reasonably large VMs, the speedup can be quite significant.

While this concept is currently only used for short-lived preallocation
threads, nothing major speaks against reusing the concept for other
threads that are harder to identify/configure -- except that
we need additional (idle) context threads that are otherwise left unused.

[1] 
https://lkml.kernel.org/r/ffdcd118d59b379ede2b64745144165a40f6a813.1652165704.git.mpriv...@redhat.com

Cc: Michal Privoznik 
Cc: Igor Mammedov 
Cc: "Michael S. Tsirkin" 
Cc: Paolo Bonzini 
Cc: "Daniel P. Berrangé" 
Cc: Eduardo Habkost 
Cc: Dr. David Alan Gilbert 
Cc: Eric Blake 
Cc: Markus Armbruster 
Cc: Richard Henderson 
Cc: Stefan Weil 

David Hildenbrand (7):
  util: Cleanup and rename os_mem_prealloc()
  util: Introduce qemu_thread_set_affinity() and
qemu_thread_get_affinity()
  util: Introduce ThreadContext user-creatable object
  util: Add write-only "node-affinity" property for ThreadContext
  util: Make qemu_prealloc_mem() optionally consume a ThreadContext
  hostmem: Allow for specifying a ThreadContext for preallocation
  vl: Allow ThreadContext objects to be created before the sandbox
option

 backends/hostmem.c            |  13 +-
 hw/virtio/virtio-mem.c        |   2 +-
 include/qemu/osdep.h          |  19 +-
 include/qemu/thread-context.h |  58 ++
 include/qemu/thread.h         |   4 +
 include/sysemu/hostmem.h      |   2 +
 meson.build                   |  16 ++
 qapi/qom.json                 |  25 +++
 softmmu/cpus.c                |   2 +-
 softmmu/vl.c                  |  30 ++-
 util/meson.build              |   1 +
 util/oslib-posix.c            |  39 ++--
 util/oslib-win32.c            |   8 +-
 util/qemu-thread-posix.c      |  70 +++
 util/qemu-thread-win32.c      |  12 ++
 util/thread-context.c         | 363 +++