Re: [PATCH 00/10] Introduce virtio-mem model

2021-02-05 Thread Michal Privoznik

On 2/5/21 9:34 AM, Jing Qi wrote:

Thanks Michal. Last time I had only checked for the virtio_mem module on the host.
I tried a new guest image that has the virtio_mem module and started the domain;
now the value of actual is set correctly.


Yeah, it's the guest that needs the module. Glad to hear it's working.

Michal



Re: [PATCH 00/10] Introduce virtio-mem model

2021-02-05 Thread Jing Qi
Thanks Michal. Last time I had only checked for the virtio_mem module on the host.
I tried a new guest image that has the virtio_mem module and started the domain;
now the value of actual is set correctly.

Jing Qi


On Thu, Feb 4, 2021 at 4:52 PM Michal Privoznik  wrote:

> On 2/4/21 3:33 AM, Jing Qi wrote:
> > Michal,
> > I checked the virtio_mem module and it's loaded -
> >
> > lsmod |grep virtio_mem
> > virtio_mem 32768  0
> >
> > And I can't make the actual value change to non-zero.
> > ->  virsh update-memory pc  --requested-size 256M
> > or
> > ->virsh setmem pc 1000M
>
> This is unrelated. 'setmem' modifies the balloon, not the virtio-mem-pci device.
>
> >
> > <memory model='virtio-mem'>
> >   <target>
> >     <size unit='KiB'>524288</size>
> >     <node>0</node>
> >     <block unit='KiB'>2048</block>
> >     <requested unit='KiB'>262144</requested>
> >     <actual unit='KiB'>0</actual>
> >   </target>
> >   <address type='pci' ... function='0x0'/>
> > </memory>
> > Any other suggestions ?
>
>
> Is there something in the guest dmesg? This is what I get:
>
> virsh update-memory-device gentoo --alias ua-virtiomem 256M
>
> [   17.619060] virtio_mem virtio3: plugged size: 0x0
> [   17.619062] virtio_mem virtio3: requested size: 0x1000
> [   17.653072] Built 4 zonelists, mobility grouping on.  Total pages:
> 2065850
> [   17.653074] Policy zone: Normal
>
>
> And I can see actual size updated:
>
> <memory model='virtio-mem'>
>   <source>
>     <pagesize unit='KiB'>2048</pagesize>
>   </source>
>   <target>
>     <size unit='KiB'>4194304</size>
>     <node>0</node>
>     <block unit='KiB'>2048</block>
>     <requested unit='KiB'>262144</requested>
>     <actual unit='KiB'>262144</actual>
>   </target>
>   <address type='pci' ... function='0x0'/>
> </memory>
>
> Maybe David knows the answer.
>
> Michal
>
>

-- 
Thanks & Regards,
Jing,Qi


Re: [PATCH 00/10] Introduce virtio-mem model

2021-02-04 Thread Michal Privoznik

On 2/4/21 3:33 AM, Jing Qi wrote:

Michal,
I checked the virtio_mem module and it's loaded -

lsmod |grep virtio_mem
virtio_mem 32768  0

And I can't make the actual value change to non-zero.
->  virsh update-memory pc  --requested-size 256M
or
->virsh setmem pc 1000M


This is unrelated. 'setmem' modifies the balloon, not the virtio-mem-pci device.



  
   
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>262144</requested>
    <actual unit='KiB'>0</actual>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>
Any other suggestions ?



Is there something in the guest dmesg? This is what I get:

virsh update-memory-device gentoo --alias ua-virtiomem 256M

[   17.619060] virtio_mem virtio3: plugged size: 0x0
[   17.619062] virtio_mem virtio3: requested size: 0x1000
[   17.653072] Built 4 zonelists, mobility grouping on.  Total pages: 
2065850

[   17.653074] Policy zone: Normal


And I can see actual size updated:


  
<memory model='virtio-mem'>
  <source>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>4194304</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>262144</requested>
    <actual unit='KiB'>262144</actual>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>



Maybe David knows the answer.

Michal



Re: [PATCH 00/10] Introduce virtio-mem model

2021-02-03 Thread Jing Qi
Michal,
I checked the virtio_mem module and it's loaded -

lsmod |grep virtio_mem
virtio_mem 32768  0

And I can't make the actual value change to non-zero.
->  virsh update-memory pc  --requested-size 256M
or
->virsh setmem pc 1000M

 
  
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>262144</requested>
    <actual unit='KiB'>0</actual>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>
Any other suggestions ?

> This is a bummer. What this represents is the actual value of memballoon.
> Yes and no. It shows that we need <actual/> because the guest might ignore
> a request to change the requested size. What you're probably experiencing
> is that you haven't loaded the virtio_mem module, and thus the virtio-mem
> device is ignoring requests for change (well, the kernel is ignoring them,
> whatever).

On Thu, Feb 4, 2021 at 12:27 AM Michal Privoznik 
wrote:

> On 2/3/21 7:11 AM, Jing Qi wrote:
> > I did some test for virtio-mem with libvirt upstream version
> > v7.0.0-153-g5ea3ecd07d
> >   & qemu-kvm-5.2.0-0.7.rc2.fc34.x86_64
> >
> > S1. Start domain with memory device
> >
> > 1. Domain configuration-
> > <maxMemory slots='...' unit='KiB'>10485760</maxMemory>
> > <memory unit='KiB'>1572864</memory>
> > <currentMemory unit='KiB'>1572864</currentMemory>
> > ...
> > <memory model='virtio-mem'>
> >   <target>
> >     <size unit='KiB'>524288</size>
> >     <node>0</node>
> >     <block unit='KiB'>2048</block>
> >     <requested unit='KiB'>393216</requested>
> >   </target>
> >   <address type='pci' ... function='0x0'/>
> > </memory>
> > #virsh start pc
> >   Domain 'pc'  started
> >
> > 2. The domain is started and check mem status, the actual size is
> "1048576"
> > #virsh dommemstat pc
> > actual 1048576
>
> This is a bummer. What this represents is the actual value of memballoon.
>
> > swap_in 0
> > swap_out 0
> > major_fault 257
> > minor_fault 130540
> > unused 604064
> > available 761328
> > usable 578428
> > last_update 1612325471
> > disk_caches 49632
> > hugetlb_pgalloc 0
> > hugetlb_pgfail 0
> > rss 460260
> >
> > 3. Then, check the active xml -
> > # virsh dumpxml pc
> > ...
> > <memory unit='KiB'>1572864</memory>
> > <currentMemory unit='KiB'>1048576</currentMemory>
> > ...
> > <memory model='virtio-mem'>
> >   <target>
> >     <size unit='KiB'>524288</size>
> >     <node>0</node>
> >     <block unit='KiB'>2048</block>
> >     <requested unit='KiB'>393216</requested>
> >     <actual unit='KiB'>0</actual>
> >   </target>
> >   <address type='pci' ... function='0x0'/>
> > </memory>
> >
> >   Question 1: the value of actual is "0". Is it expected?
>
> Yes and no. It shows that we need <actual/> because the guest might ignore
> a request to change the requested size. What you're probably experiencing
> is that you haven't loaded the virtio_mem module, and thus the virtio-mem
> device is ignoring requests for change (well, the kernel is ignoring them,
> whatever).
>
> >
> > S2. Also, tried to use hugepage to start the domain -
> >
> > <memory model='virtio-mem'>
> >   <target>
> >     <size unit='KiB'>524288</size>
> >     <node>0</node>
> >     <block unit='KiB'>2048</block>
> >     <requested unit='KiB'>393216</requested>
> >     <actual unit='KiB'>0</actual>
> >   </target>
> >   <address type='pci' ... function='0x0'/>
> > </memory>
> >
> > #virsh start pc
> > error: Failed to start domain 'pc'
> > error: internal error: process exited while connecting to monitor:
> > 2021-02-03T05:50:33.157836Z qemu-system-x86_64: -object
> >
> memory-backend-file,id=memvirtiomem0,mem-path=/dev/hugepages/libvirt/qemu/9-pc,size=536870912,host-nodes=0,policy=bind:
> > can't open backing store /dev/hugepages/libvirt/qemu/9-pc for guest RAM:
> > Permission denied
> >
> > Question 2: any bug here?
>
> Ah, good catch! I'll fix this in v2.
>
> Michal
>
>

-- 
Thanks & Regards,
Jing,Qi


Re: [PATCH 00/10] Introduce virtio-mem model

2021-02-03 Thread Michal Privoznik

On 2/3/21 7:11 AM, Jing Qi wrote:

I did some test for virtio-mem with libvirt upstream version
v7.0.0-153-g5ea3ecd07d
  & qemu-kvm-5.2.0-0.7.rc2.fc34.x86_64

S1. Start domain with memory device

1. Domain configuration-
<maxMemory slots='...' unit='KiB'>10485760</maxMemory>
<memory unit='KiB'>1572864</memory>
<currentMemory unit='KiB'>1572864</currentMemory>
...
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>393216</requested>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>
#virsh start pc
  Domain 'pc'  started

2. The domain is started and check mem status, the actual size is "1048576"
#virsh dommemstat pc
actual 1048576


This is a bummer. What this represents is the actual value of memballoon.


swap_in 0
swap_out 0
major_fault 257
minor_fault 130540
unused 604064
available 761328
usable 578428
last_update 1612325471
disk_caches 49632
hugetlb_pgalloc 0
hugetlb_pgfail 0
rss 460260

3. Then, check the active xml -
# virsh dumpxml pc
...
<memory unit='KiB'>1572864</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
...
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>393216</requested>
    <actual unit='KiB'>0</actual>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>

  Question 1: the value of actual is "0". Is it expected?


Yes and no. It shows that we need <actual/> because the guest might ignore
a request to change the requested size. What you're probably experiencing
is that you haven't loaded the virtio_mem module, and thus the virtio-mem
device is ignoring requests for change (well, the kernel is ignoring them,
whatever).




S2. Also, tried to use hugepage to start the domain -


   
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>393216</requested>
    <actual unit='KiB'>0</actual>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>

#virsh start pc
error: Failed to start domain 'pc'
error: internal error: process exited while connecting to monitor:
2021-02-03T05:50:33.157836Z qemu-system-x86_64: -object
memory-backend-file,id=memvirtiomem0,mem-path=/dev/hugepages/libvirt/qemu/9-pc,size=536870912,host-nodes=0,policy=bind:
can't open backing store /dev/hugepages/libvirt/qemu/9-pc for guest RAM:
Permission denied

Question 2: any bug here?


Ah, good catch! I'll fix this in v2.

Michal



Re: [PATCH 00/10] Introduce virtio-mem model

2021-02-02 Thread Jing Qi
I did some test for virtio-mem with libvirt upstream version
v7.0.0-153-g5ea3ecd07d
 & qemu-kvm-5.2.0-0.7.rc2.fc34.x86_64

S1. Start domain with memory device

1. Domain configuration-
<maxMemory slots='...' unit='KiB'>10485760</maxMemory>
<memory unit='KiB'>1572864</memory>
<currentMemory unit='KiB'>1572864</currentMemory>
...
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>393216</requested>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>
#virsh start pc
 Domain 'pc'  started

2. The domain is started and check mem status, the actual size is "1048576"
#virsh dommemstat pc
actual 1048576
swap_in 0
swap_out 0
major_fault 257
minor_fault 130540
unused 604064
available 761328
usable 578428
last_update 1612325471
disk_caches 49632
hugetlb_pgalloc 0
hugetlb_pgfail 0
rss 460260

3. Then, check the active xml -
# virsh dumpxml pc
...
<memory unit='KiB'>1572864</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
...
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>393216</requested>
    <actual unit='KiB'>0</actual>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>

 Question 1: the value of actual is "0". Is it expected?

S2. Also, tried to use hugepage to start the domain -


  
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>393216</requested>
    <actual unit='KiB'>0</actual>
  </target>
  <address type='pci' ... function='0x0'/>
</memory>

#virsh start pc
error: Failed to start domain 'pc'
error: internal error: process exited while connecting to monitor:
2021-02-03T05:50:33.157836Z qemu-system-x86_64: -object
memory-backend-file,id=memvirtiomem0,mem-path=/dev/hugepages/libvirt/qemu/9-pc,size=536870912,host-nodes=0,policy=bind:
can't open backing store /dev/hugepages/libvirt/qemu/9-pc for guest RAM:
Permission denied

Question 2: any bug here?

On Tue, Feb 2, 2021 at 9:44 PM Peter Krempa  wrote:

> On Fri, Jan 22, 2021 at 13:50:21 +0100, Michal Privoznik wrote:
> > Technically, this is another version of:
> >
> > https://www.redhat.com/archives/libvir-list/2020-December/msg00199.html
> >
> > But since virtio-pmem part is pushed now, I've reworked virtio-mem a bit
> > and sending it as a new series.
> >
> > For curious ones, David summarized behaviour well when implementing
> > virtio-mem support in kernel:
> >
> > https://lwn.net/Articles/755423/
> >
> > For less curious ones:
> >
> >   # virsh update-memory $dom --requested-size 4G
> >
> > adds additional 4GiB of RAM to guest;
> >
> >   # virsh update-memory $dom --requested-size 0
> >
> > removes those 4GiB added earlier.
> >
> > Patches are also available on my GitLab:
> >
> > https://gitlab.com/MichalPrivoznik/libvirt/-/tree/virtio_mem_v3
> >
>
> Patches 1-7,9-10 (but observe some individual comments, including
> the rename of the virsh commands):
>
> Reviewed-by: Peter Krempa 
>
> Patch 8 has severe semantic problems.
>
>

-- 
Thanks & Regards,
Jing,Qi


Re: [PATCH 00/10] Introduce virtio-mem model

2021-02-02 Thread Peter Krempa
On Fri, Jan 22, 2021 at 13:50:21 +0100, Michal Privoznik wrote:
> Technically, this is another version of:
> 
> https://www.redhat.com/archives/libvir-list/2020-December/msg00199.html
> 
> But since virtio-pmem part is pushed now, I've reworked virtio-mem a bit
> and sending it as a new series.
> 
> For curious ones, David summarized behaviour well when implementing
> virtio-mem support in kernel:
> 
> https://lwn.net/Articles/755423/
> 
> For less curious ones:
> 
>   # virsh update-memory $dom --requested-size 4G
> 
> adds additional 4GiB of RAM to guest;
> 
>   # virsh update-memory $dom --requested-size 0
> 
> removes those 4GiB added earlier.
> 
> Patches are also available on my GitLab:
> 
> https://gitlab.com/MichalPrivoznik/libvirt/-/tree/virtio_mem_v3
> 

Patches 1-7,9-10 (but observe some individual comments, including
the rename of the virsh commands):

Reviewed-by: Peter Krempa 

Patch 8 has severe semantic problems.



Re: [PATCH 00/10] Introduce virtio-mem model

2021-01-25 Thread David Hildenbrand
On 22.01.21 23:03, Daniel Henrique Barboza wrote:
> 
> 
> On 1/22/21 6:19 PM, David Hildenbrand wrote:
>>
>>> Out of curiosity: are you aware of anyone working on enabling virtio-mem
>>> for pseries/ppc64? I'm wondering if there's some kind of architecture
>>> limitation in Power or if it's just a lack of interest.
>>
>> I remember there is interest, however:
>>
>> - arm64 and x86-64 are used more frequently in applicable (cloud?) setups, so
>> they have high prio
>> - s390x doesn't have any proper memory hot(un)plug, and as I have a strong
>> s390x background, it's rather easy for me to implement
>> - ppc64 at least supports hot(un)plug of DIMMs
>>
>> There is nothing fundamental speaking against ppc64 support AFAIR.
> 
> That's good to hear.
> 
>> A block size of 16MB should be possible. I'm planning on looking into it,
>> however, there are a lot of other things on my todo list for virtio-mem.
> 
> I'm not familiar with the 'block size' concept of the virtio-mem device that
> would allow for 16MB increments. My understanding of the pseries kernel/QEMU
> is that guest-visible memory must always be 256MiB-aligned due to PAPR
> mechanics that force a memory block to be at least this size. That said, I
> believe nothing prevents the memory this device provides from being counted
> as non-hotpluggable, in which case the alignment shouldn't be needed.

In Linux guests, virtio-mem adds whole memory blocks (e.g., aligned
256MB), but is able to expose only parts of a memory block dynamically
to Linux mm - essentially in 16MB on ppc64 IIRC.

E.g., on x86-64 (and soon arm64), we mostly add 128MB memory blocks, but
can operate on (currently) 4MB blocks (MAX_ORDER - 1) inside these
blocks. A little like memory ballooning ... but also quite different :)

So far the theory on ppc64. I have no prototype on ppc64, so we'll have
to see what's actually possible.
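As a quick way to see the memory-block granularity David is describing, a Linux guest exposes its hotplug section size in sysfs (the path is standard Linux; the fallback value below is just the common x86-64 default, used so the sketch also runs where the file is absent):

```shell
# Read the memory hotplug section ("memory block") size on a Linux guest.
# The file holds a hex byte count, e.g. 8000000 (= 128 MiB on x86-64).
bs_hex=$(cat /sys/devices/system/memory/block_size_bytes 2>/dev/null || echo 8000000)
echo "$(( 0x$bs_hex / 1024 / 1024 )) MiB"
```

On ppc64 with PAPR one would expect this to report 256 MiB, matching the alignment discussed above.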

> 
> But I digress. Thanks for the insights. I'll ping some people inside IBM and
> see if we have a more immediate use case for virtio-mem in Power. Perhaps
> we can do some sort of collaboration with your work.

Sure, I'll be happy to assist.

-- 
Thanks,

David / dhildenb



Re: [PATCH 00/10] Introduce virtio-mem model

2021-01-22 Thread Daniel Henrique Barboza




On 1/22/21 6:19 PM, David Hildenbrand wrote:



Out of curiosity: are you aware of anyone working on enabling virtio-mem
for pseries/ppc64? I'm wondering if there's some kind of architecture
limitation in Power or if it's just a lack of interest.


I remember there is interest, however:

- arm64 and x86-64 are used more frequently in applicable (cloud?) setups, so
they have high prio
- s390x doesn't have any proper memory hot(un)plug, and as I have a strong
s390x background, it's rather easy for me to implement
- ppc64 at least supports hot(un)plug of DIMMs

There is nothing fundamental speaking against ppc64 support AFAIR.


That's good to hear.


A block size of 16MB should be possible. I'm planning on looking into it,
however, there are a lot of other things on my todo list for virtio-mem.


I'm not familiar with the 'block size' concept of the virtio-mem device that
would allow for 16MB increments. My understanding of the pseries kernel/QEMU
is that guest-visible memory must always be 256MiB-aligned due to PAPR
mechanics that force a memory block to be at least this size. That said, I
believe nothing prevents the memory this device provides from being counted
as non-hotpluggable, in which case the alignment shouldn't be needed.

But I digress. Thanks for the insights. I'll ping some people inside IBM and
see if we have a more immediate use case for virtio-mem in Power. Perhaps
we can do some sort of collaboration with your work.



Thanks,


DHB








The QEMU code has an advanced block-size auto-detection code - e.g., querying 
from the kernel but limiting it to sane values (e.g., 512 MB on some arm64 
configurations). Maybe we can borrow some of that or even sense the block size 
via QEMU? Borrowing might be easier. :)


I guess it's a good candidate for a fancy QMP API.



One can at least query the block-size via 'qom-get', but that requires spinning
up a QEMU instance with a virtio-mem device.




On x86-64 we are good to go with a 2MB default.



- in patch 03 it is mentioned that:

"If it wants to give more memory to the guest it changes 'requested-size' to
a bigger value, and if it wants to shrink guest memory it changes the
'requested-size' to a smaller value. Note, value of zero means that guest
should release all memory offered by the device."

Does size zero imply that the virtio-mem device is unplugged? Will the device
still exist in the guest even with zeroed memory, acting as a sort of
'deflated virtio-balloon'?

Yes, the device will still exist, to be grown again later. Hotunplugging the 
device itself is not supported (yet, and also not in the near future).



Assuming that virtio-mem has low overhead in the guest when it's 'deflated',
I don't see any urgency into implementing hotunplug for this device TBH.


There are still things to be optimized in QEMU regarding virtual memory
consumption, but that's more general work to be tackled in the coming months.
After that, not much speaks against just letting the device stick around to
provide more memory later on demand.

Thanks!





Re: [PATCH 00/10] Introduce virtio-mem model

2021-01-22 Thread David Hildenbrand


> Out of curiosity: are you aware of anyone working on enabling virtio-mem
> for pseries/ppc64? I'm wondering if there's some kind of architecture
> limitation in Power or if it's just a lack of interest.

I remember there is interest, however:

- arm64 and x86-64 are used more frequently in applicable (cloud?) setups, so
they have high prio
- s390x doesn't have any proper memory hot(un)plug, and as I have a strong
s390x background, it's rather easy for me to implement
- ppc64 at least supports hot(un)plug of DIMMs

There is nothing fundamental speaking against ppc64 support AFAIR. A block size 
of 16MB should be possible. I'm planning on looking into it, however, there are
a lot of other things on my todo list for virtio-mem.


> 
> 
>> The QEMU code has an advanced block-size auto-detection code - e.g., 
>> querying from the kernel but limiting it to sane values (e.g., 512 MB on 
>> some arm64 configurations). Maybe we can borrow some of that or even sense 
>> the block size via QEMU? Borrowing might be easier. :)
> 
> I guess it's a good candidate for a fancy QMP API.
> 

One can at least query the block-size via 'qom-get', but that requires spinning
up a QEMU instance with a virtio-mem device.
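Such a query could look like the following over QMP (a sketch: the object path is an assumption — with libvirt, a device given the alias ua-virtiomem typically ends up at /machine/peripheral/ua-virtiomem):

```json
{ "execute": "qom-get",
  "arguments": { "path": "/machine/peripheral/ua-virtiomem",
                 "property": "block-size" } }
```

The reply carries the block size in bytes, which a management application could then use to validate requested-size values up front.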

> 
>> On x86-64 we are good to go with a 2MB default.
>>> 
>>> 
>>> - in patch 03 it is mentioned that:
>>> 
>>> "If it wants to give more memory to the guest it changes 'requested-size' to
>>> a bigger value, and if it wants to shrink guest memory it changes the
>>> 'requested-size' to a smaller value. Note, value of zero means that guest
>>> should release all memory offered by the device."
>>> 
>>> Does size zero imply that the virtio-mem device is unplugged? Will the device
>>> still exist in the guest even with zeroed memory, acting as a sort of
>>> 'deflated virtio-balloon'?
>> Yes, the device will still exist, to be grown again later. Hotunplugging the 
>> device itself is not supported (yet, and also not in the near future).
> 
> 
> Assuming that virtio-mem has low overhead in the guest when it's 'deflated',
> I don't see any urgency into implementing hotunplug for this device TBH.

There are still things to be optimized in QEMU regarding virtual memory
consumption, but that's more general work to be tackled in the coming months.
After that, not much speaks against just letting the device stick around to
provide more memory later on demand.

Thanks!




Re: [PATCH 00/10] Introduce virtio-mem model

2021-01-22 Thread Daniel Henrique Barboza




On 1/22/21 4:54 PM, David Hildenbrand wrote:



On 22.01.2021 at 19:53, Daniel Henrique Barboza wrote:




On 1/22/21 9:50 AM, Michal Privoznik wrote:
Technically, this is another version of:
https://www.redhat.com/archives/libvir-list/2020-December/msg00199.html
But since virtio-pmem part is pushed now, I've reworked virtio-mem a bit
and sending it as a new series.
For curious ones, David summarized behaviour well when implementing
virtio-mem support in kernel:
https://lwn.net/Articles/755423/
For less curious ones:
   # virsh update-memory $dom --requested-size 4G
adds additional 4GiB of RAM to guest;
   # virsh update-memory $dom --requested-size 0
removes those 4GiB added earlier.
Patches are also available on my GitLab:
https://gitlab.com/MichalPrivoznik/libvirt/-/tree/virtio_mem_v3


Code LGTM:

Reviewed-by: Daniel Henrique Barboza 



Hi,

Let me answer your questions.


Thanks for the reply!






A few questions about the overall design:

- it is mentioned that 'requested-size' should respect the granularity
of the block unit, but later on the 'actual' attribute is added to track
the size that the device was expanded/shrunk to. What happens if we forfeit
the granularity check of the memory increments? Will QEMU error out because
we're requesting an invalid value, or will it silently size the device to a
plausible size?


QEMU will error out, stating that the requested-size has to be properly aligned
to the block-size.


'requested-size' granularity check stays then :)






- Reading the lwn article I understood that David implemented this support
for s390x as well. If that's the case, then I believe you should double
check later on what's the THP size that Z uses to be sure that it's the
same 2MiB value you're considering in patch 03.


In the near future we might see arm64 and s390x support. The latter will
probably take a bit longer. Neither is supported yet in QEMU/kernel.


Out of curiosity: are you aware of anyone working on enabling virtio-mem
for pseries/ppc64? I'm wondering if there's some kind of architecture
limitation in Power or if it's just a lack of interest.




The QEMU code has an advanced block-size auto-detection code - e.g., querying 
from the kernel but limiting it to sane values (e.g., 512 MB on some arm64 
configurations). Maybe we can borrow some of that or even sense the block size 
via QEMU? Borrowing might be easier. :)


I guess it's a good candidate for a fancy QMP API.




On x86-64 we are good to go with a 2MB default.




- in patch 03 it is mentioned that:

"If it wants to give more memory to the guest it changes 'requested-size' to
a bigger value, and if it wants to shrink guest memory it changes the
'requested-size' to a smaller value. Note, value of zero means that guest
should release all memory offered by the device."

Does size zero imply that the virtio-mem device is unplugged? Will the device
still exist in the guest even with zeroed memory, acting as a sort of
'deflated virtio-balloon'?


Yes, the device will still exist, to be grown again later. Hotunplugging the 
device itself is not supported (yet, and also not in the near future).



Assuming that virtio-mem has low overhead in the guest when it's 'deflated',
I don't see any urgency into implementing hotunplug for this device TBH.



Thanks,


DHB




Thanks!





Re: [PATCH 00/10] Introduce virtio-mem model

2021-01-22 Thread David Hildenbrand


> On 22.01.2021 at 19:53, Daniel Henrique Barboza wrote:
> 
> 
> 
>> On 1/22/21 9:50 AM, Michal Privoznik wrote:
>> Technically, this is another version of:
>> https://www.redhat.com/archives/libvir-list/2020-December/msg00199.html
>> But since virtio-pmem part is pushed now, I've reworked virtio-mem a bit
>> and sending it as a new series.
>> For curious ones, David summarized behaviour well when implementing
>> virtio-mem support in kernel:
>> https://lwn.net/Articles/755423/
>> For less curious ones:
>>   # virsh update-memory $dom --requested-size 4G
>> adds additional 4GiB of RAM to guest;
>>   # virsh update-memory $dom --requested-size 0
>> removes those 4GiB added earlier.
>> Patches are also available on my GitLab:
>> https://gitlab.com/MichalPrivoznik/libvirt/-/tree/virtio_mem_v3
> 
> Code LGTM:
> 
> Reviewed-by: Daniel Henrique Barboza 
> 

Hi,

Let me answer your questions.


> 
> A few questions about the overall design:
> 
> - it is mentioned that 'requested-size' should respect the granularity
> of the block unit, but later on the 'actual' attribute is added to track
> the size that the device was expanded/shrunk to. What happens if we forfeit
> the granularity check of the memory increments? Will QEMU error out because
> we're requesting an invalid value, or will it silently size the device to a
> plausible size?

QEMU will error out, stating that the requested-size has to be properly aligned
to the block-size.
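The alignment requirement can be reproduced with plain shell arithmetic (a sketch; values in KiB are illustrative, with 2048 KiB being the typical x86-64 block-size): rounding a desired size up to the next block-size multiple yields a value QEMU will accept.

```shell
# Round a desired requested-size up to the device block-size (both in KiB).
# 1000000 is deliberately unaligned to show the effect.
block=2048
want=1000000
aligned=$(( (want + block - 1) / block * block ))
echo "$aligned"   # 1001472
```

A management application could do this rounding before issuing the update, instead of letting QEMU reject the request.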

> 
> 
> - Reading the lwn article I understood that David implemented this support
> for s390x as well. If that's the case, then I believe you should double
> check later on what's the THP size that Z uses to be sure that it's the
> same 2MiB value you're considering in patch 03.

In the near future we might see arm64 and s390x support. The latter will
probably take a bit longer. Neither is supported yet in QEMU/kernel.

The QEMU code has an advanced block-size auto-detection code - e.g., querying 
from the kernel but limiting it to sane values (e.g., 512 MB on some arm64 
configurations). Maybe we can borrow some of that or even sense the block size 
via QEMU? Borrowing might be easier. :)

On x86-64 we are good to go with a 2MB default.

> 
> 
> - in patch 03 it is mentioned that:
> 
> "If it wants to give more memory to the guest it changes 'requested-size' to
> a bigger value, and if it wants to shrink guest memory it changes the
> 'requested-size' to a smaller value. Note, value of zero means that guest
> should release all memory offered by the device."
> 
> Does size zero imply that the virtio-mem device is unplugged? Will the device
> still exist in the guest even with zeroed memory, acting as a sort of
> 'deflated virtio-balloon'?

Yes, the device will still exist, to be grown again later. Hotunplugging the 
device itself is not supported (yet, and also not in the near future).

Thanks!




Re: [PATCH 00/10] Introduce virtio-mem model

2021-01-22 Thread Daniel Henrique Barboza




On 1/22/21 9:50 AM, Michal Privoznik wrote:

Technically, this is another version of:

https://www.redhat.com/archives/libvir-list/2020-December/msg00199.html

But since virtio-pmem part is pushed now, I've reworked virtio-mem a bit
and sending it as a new series.

For curious ones, David summarized behaviour well when implementing
virtio-mem support in kernel:

https://lwn.net/Articles/755423/

For less curious ones:

   # virsh update-memory $dom --requested-size 4G

adds additional 4GiB of RAM to guest;

   # virsh update-memory $dom --requested-size 0

removes those 4GiB added earlier.

Patches are also available on my GitLab:

https://gitlab.com/MichalPrivoznik/libvirt/-/tree/virtio_mem_v3


Code LGTM:

Reviewed-by: Daniel Henrique Barboza 


A few questions about the overall design:

- it is mentioned that 'requested-size' should respect the granularity
of the block unit, but later on the 'actual' attribute is added to track
the size that the device was expanded/shrunk to. What happens if we forfeit
the granularity check of the memory increments? Will QEMU error out because
we're requesting an invalid value, or will it silently size the device to a
plausible size?


- Reading the lwn article I understood that David implemented this support
for s390x as well. If that's the case, then I believe you should double
check later on what's the THP size that Z uses to be sure that it's the
same 2MiB value you're considering in patch 03.


- in patch 03 it is mentioned that:

"If it wants to give more memory to the guest it changes 'requested-size' to
a bigger value, and if it wants to shrink guest memory it changes the
'requested-size' to a smaller value. Note, value of zero means that guest
should release all memory offered by the device."

Does size zero imply that the virtio-mem device is unplugged? Will the device
still exist in the guest even with zeroed memory, acting as a sort of
'deflated virtio-balloon'?



Thanks,


DHB






Michal Prívozník (10):
   virhostmem: Introduce virHostMemGetTHPSize()
   qemu_capabilities: Introduce QEMU_CAPS_DEVICE_VIRTIO_MEM_PCI
   conf: Introduce virtio-mem <memory/> model
   qemu: Build command line for virtio-mem
   qemu: Wire up <memory/> live update
   qemu: Wire up MEMORY_DEVICE_SIZE_CHANGE event
   qemu: Refresh the actual size of virtio-mem on monitor reconnect
   qemu: Recalculate balloon on MEMORY_DEVICE_SIZE_CHANGE event and
 reconnect
   virsh: Introduce update-memory command
   news: document recent virtio memory addition

  NEWS.rst  |   7 +
  docs/formatdomain.rst |  42 +++-
  docs/manpages/virsh.rst   |  31 +++
  docs/schemas/domaincommon.rng |  16 ++
  src/conf/domain_conf.c| 100 -
  src/conf/domain_conf.h|  13 ++
  src/conf/domain_validate.c|  39 
  src/libvirt_private.syms  |   3 +
  src/qemu/qemu_alias.c |  10 +-
  src/qemu/qemu_capabilities.c  |   2 +
  src/qemu/qemu_capabilities.h  |   1 +
  src/qemu/qemu_command.c   |  13 +-
  src/qemu/qemu_domain.c|  50 -
  src/qemu/qemu_domain.h|   1 +
  src/qemu/qemu_domain_address.c|  37 ++-
  src/qemu/qemu_driver.c| 211 +-
  src/qemu/qemu_hotplug.c   |  18 ++
  src/qemu/qemu_hotplug.h   |   5 +
  src/qemu/qemu_monitor.c   |  37 +++
  src/qemu/qemu_monitor.h   |  27 +++
  src/qemu/qemu_monitor_json.c  |  94 ++--
  src/qemu/qemu_monitor_json.h  |   5 +
  src/qemu/qemu_process.c   | 101 -
  src/qemu/qemu_validate.c  |   8 +
  src/security/security_apparmor.c  |   1 +
  src/security/security_dac.c   |   2 +
  src/security/security_selinux.c   |   2 +
  src/util/virhostmem.c |  63 ++
  src/util/virhostmem.h |   3 +
  tests/domaincapsmock.c|   9 +
  .../caps_5.1.0.x86_64.xml |   1 +
  .../caps_5.2.0.x86_64.xml |   1 +
  ...mory-hotplug-virtio-mem.x86_64-latest.args |  49 
  .../memory-hotplug-virtio-mem.xml |  66 ++
  tests/qemuxml2argvtest.c  |   1 +
  ...emory-hotplug-virtio-mem.x86_64-latest.xml |   1 +
  tests/qemuxml2xmltest.c   |   1 +
  tools/virsh-domain.c  | 154 +
  38 files changed, 1165 insertions(+), 60 deletions(-)
  create mode 100644 
tests/qemuxml2argvdata/memory-hotplug-virtio-mem.x86_64-latest.args
  create mode 100644 tests/qemuxml2argvdata/memory-hotplug-virtio-mem.xml
  create mode 120000 
tests/qemuxml2xmloutdata/memory-hotplug-virtio-mem.x86_64-latest.xml

Re: [PATCH 00/10] Introduce virtio-mem model

2021-01-22 Thread David Hildenbrand
On 22.01.21 13:50, Michal Privoznik wrote:
> Technically, this is another version of:
> 
> https://www.redhat.com/archives/libvir-list/2020-December/msg00199.html
> 
> But since virtio-pmem part is pushed now, I've reworked virtio-mem a bit
> and sending it as a new series.
> 
> For curious ones, David summarized behaviour well when implementing
> virtio-mem support in kernel:
> 
> https://lwn.net/Articles/755423/

... and for the really curious ones (because some details in that patch
are a little outdated), there is plenty more at:

https://virtio-mem.gitlab.io/

Thanks for all your effort Michal!

-- 
Thanks,

David / dhildenb



[PATCH 00/10] Introduce virtio-mem model

2021-01-22 Thread Michal Privoznik
Technically, this is another version of:

https://www.redhat.com/archives/libvir-list/2020-December/msg00199.html

But since virtio-pmem part is pushed now, I've reworked virtio-mem a bit
and sending it as a new series.

For curious ones, David summarized behaviour well when implementing
virtio-mem support in kernel:

https://lwn.net/Articles/755423/

For less curious ones:

  # virsh update-memory $dom --requested-size 4G

adds additional 4GiB of RAM to guest;

  # virsh update-memory $dom --requested-size 0

removes those 4GiB added earlier.
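For reference, the device these commands operate on is described in the domain XML along these lines (a sketch based on this series; the sizes are illustrative, and the active XML would additionally carry an actual element reporting the size the guest currently exposes):

```xml
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>4194304</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>0</requested>
  </target>
</memory>
```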

Patches are also available on my GitLab:

https://gitlab.com/MichalPrivoznik/libvirt/-/tree/virtio_mem_v3


Michal Prívozník (10):
  virhostmem: Introduce virHostMemGetTHPSize()
  qemu_capabilities: Introduce QEMU_CAPS_DEVICE_VIRTIO_MEM_PCI
  conf: Introduce virtio-mem <memory/> model
  qemu: Build command line for virtio-mem
  qemu: Wire up <memory/> live update
  qemu: Wire up MEMORY_DEVICE_SIZE_CHANGE event
  qemu: Refresh the actual size of virtio-mem on monitor reconnect
  qemu: Recalculate balloon on MEMORY_DEVICE_SIZE_CHANGE event and
reconnect
  virsh: Introduce update-memory command
  news: document recent virtio memory addition

 NEWS.rst  |   7 +
 docs/formatdomain.rst |  42 +++-
 docs/manpages/virsh.rst   |  31 +++
 docs/schemas/domaincommon.rng |  16 ++
 src/conf/domain_conf.c| 100 -
 src/conf/domain_conf.h|  13 ++
 src/conf/domain_validate.c|  39 
 src/libvirt_private.syms  |   3 +
 src/qemu/qemu_alias.c |  10 +-
 src/qemu/qemu_capabilities.c  |   2 +
 src/qemu/qemu_capabilities.h  |   1 +
 src/qemu/qemu_command.c   |  13 +-
 src/qemu/qemu_domain.c|  50 -
 src/qemu/qemu_domain.h|   1 +
 src/qemu/qemu_domain_address.c|  37 ++-
 src/qemu/qemu_driver.c| 211 +-
 src/qemu/qemu_hotplug.c   |  18 ++
 src/qemu/qemu_hotplug.h   |   5 +
 src/qemu/qemu_monitor.c   |  37 +++
 src/qemu/qemu_monitor.h   |  27 +++
 src/qemu/qemu_monitor_json.c  |  94 ++--
 src/qemu/qemu_monitor_json.h  |   5 +
 src/qemu/qemu_process.c   | 101 -
 src/qemu/qemu_validate.c  |   8 +
 src/security/security_apparmor.c  |   1 +
 src/security/security_dac.c   |   2 +
 src/security/security_selinux.c   |   2 +
 src/util/virhostmem.c |  63 ++
 src/util/virhostmem.h |   3 +
 tests/domaincapsmock.c|   9 +
 .../caps_5.1.0.x86_64.xml |   1 +
 .../caps_5.2.0.x86_64.xml |   1 +
 ...mory-hotplug-virtio-mem.x86_64-latest.args |  49 
 .../memory-hotplug-virtio-mem.xml |  66 ++
 tests/qemuxml2argvtest.c  |   1 +
 ...emory-hotplug-virtio-mem.x86_64-latest.xml |   1 +
 tests/qemuxml2xmltest.c   |   1 +
 tools/virsh-domain.c  | 154 +
 38 files changed, 1165 insertions(+), 60 deletions(-)
 create mode 100644 
tests/qemuxml2argvdata/memory-hotplug-virtio-mem.x86_64-latest.args
 create mode 100644 tests/qemuxml2argvdata/memory-hotplug-virtio-mem.xml
 create mode 120000 
tests/qemuxml2xmloutdata/memory-hotplug-virtio-mem.x86_64-latest.xml

-- 
2.26.2