Hi Julien,

> On 13 Jun 2024, at 12:31, Julien Grall <jul...@xen.org> wrote:
> 
> Hi,
> 
> On 11/06/2024 13:42, Michal Orzel wrote:
>>> We would like this series to be in Xen 4.19. There was a misunderstanding 
>>> on our side: because the series was sent before the last posting date, we 
>>> thought it could be a candidate for merging in the new release. After 
>>> speaking with Julien and Oleksii, we are now aware that we need to 
>>> provide a justification for its inclusion.
>>> 
>>> Pros: this series closes the circle for static shared memory, allowing 
>>> it to use memory from the host or from Xen. It is also a feature that is 
>>> not enabled by default, so it should not cause too much disruption in 
>>> case of any bugs that escaped review; we have also tested many 
>>> configurations with/without the feature enabled, if that adds further 
>>> confidence.
>>> 
>>> Cons: we are touching some common code related to the p2m, but even 
>>> there the impact should be minimal because the new code is only 
>>> exercised for foreign mappings (to be confirmed, perhaps by a p2m expert 
>>> like Julien).
>>> 
>>> The comments on patch 3 of this series are addressed by this patch:
>>> https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fance...@arm.com/
>>> and the series is fully reviewed.
>>> 
>>> So our request is to allow this series in 4.19. Oleksii, Arm 
>>> maintainers, do you agree?
>> As the main reviewer of this series, I'm OK to have it in. It is nicely 
>> encapsulated and the feature itself is still in an unsupported state. I 
>> don't foresee any issues with it.
> 
> There are changes in the p2m code and the memory allocation for boot domains. 
> So is it really encapsulated?
> 
> For me there are two risks:
> * p2m (already mentioned by Luca): we modify the code to put in place 
> foreign mappings. The worst that can happen is that we don't release a 
> foreign mapping, which would mean the page will not be freed. AFAIK, we 
> don't exercise this path in the CI.
> * domain allocation: this mainly looks like refactoring, and the path is 
> exercised in the CI.
> 
> So I am not concerned with the domain allocation one. @Luca, would it be 
> possible to detail how you tested that the foreign pages were properly 
> removed?

So at first we tested the code, with/without the static shared memory 
feature enabled, creating/destroying guests from Dom0 and checking that 
everything was OK.
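For those create/destroy checks, a simple (hypothetical) way to confirm that 
guest pages really come back is to compare Xen's free memory before creating 
and after destroying the guest. A sketch, assuming `xl info` reports a 
`free_memory` line in MiB (the `xl` calls themselves need a live Xen host, so 
they appear only as comments; the extraction is demonstrated on a canned 
line):

```shell
#!/bin/sh
# Sketch: compare Xen free memory across a guest create/destroy cycle.
# On a live Xen host one would run something like:
#
#   before=$(xl info | awk '/^free_memory/ {print $3}')
#   xl create linux-ext-arm64-stresstests-rootfs.cfg
#   xl destroy <domid>
#   after=$(xl info | awk '/^free_memory/ {print $3}')
#   [ "$before" -eq "$after" ] && echo "all guest memory reclaimed"
#
# A persistent gap between the two values would hint at foreign mappings
# that were never released (pages not returned to the heap).
# Below, the awk extraction is demonstrated on a canned `xl info` line:
sample="free_memory            : 2048"
echo "$sample" | awk '/^free_memory/ {print $3}'   # prints 2048
```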

After a chat on Matrix, Julien suggested that using a virtio-mmio disk would 
be a better way to stress the foreign mapping path while looking for 
regressions.

Luckily, I found this slide deck from @Oleksandr: 
https://static.linaro.org/connect/lvc21/presentations/lvc21-314.pdf

So I made a setup using fvp-base, with a disk containing two partitions for 
the Dom0 rootfs and the DomU rootfs; Dom0 sees this disk using VirtIO block.

The Dom0 rootfs contains the virtio-disk backend: 
https://github.com/xen-troops/virtio-disk

The DomU XL configuration uses these parameters:

cmdline="console=hvc0 root=/dev/vda rw"
disk = ['/dev/vda2,raw,xvda,w,specification=virtio']
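For reference, and per my reading of the xl disk configuration syntax, the 
fields of that disk line break down as follows (annotated copy, not part of 
the original config):

```
disk = ['/dev/vda2,raw,xvda,w,specification=virtio']
#        ^          ^   ^    ^ ^
#        |          |   |    | use virtio-blk instead of Xen PV block
#        |          |   |    read/write access
#        |          |   virtual device name (guest sees it via virtio as vda)
#        |          raw image format
#        backing block device in Dom0 (second partition of the VirtIO disk)
```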

Running the setup and creating/destroying the guest a couple of times shows 
no regressions; here is an example of the output:

root@fvp-base:/opt/xtp/guests/linux-guests# xl create -c 
linux-ext-arm64-stresstests-rootfs.cfg
Parsing config from linux-ext-arm64-stresstests-rootfs.cfg
main: read frontend domid 2
  Info: connected to dom2

demu_seq_next: >XENSTORE_ATTACHED
demu_seq_next: domid = 2
demu_seq_next: devid = 51712
demu_seq_next: filename[0] = /dev/vda2
demu_seq_next: readonly[0] = 0
demu_seq_next: base[0]     = 0x2000000
demu_seq_next: irq[0]      = 33
demu_seq_next: >XENEVTCHN_OPEN
demu_seq_next: >XENFOREIGNMEMORY_OPEN
demu_seq_next: >XENDEVICEMODEL_OPEN
demu_seq_next: >XENGNTTAB_OPEN
demu_initialize: 1 vCPU(s)
demu_seq_next: >SERVER_REGISTERED
demu_seq_next: ioservid = 0
demu_seq_next: >RESOURCE_MAPPED
demu_seq_next: shared_iopage = 0x7f80c58000
demu_seq_next: >SERVER_ENABLED
demu_seq_next: >PORT_ARRAY_ALLOCATED
demu_seq_next: >EVTCHN_PORTS_BOUND
demu_seq_next: VCPU0: 3 -> 6
demu_register_memory_space: 2000000 - 20001ff
  Info: (virtio/mmio.c) virtio_mmio_init:165: 
virtio-mmio.devices=0x200@0x2000000:33
demu_seq_next: >DEVICE_INITIALIZED
demu_seq_next: >INITIALIZED
IO request not ready
(XEN) d2v0 Unhandled SMC/HVC: 0x84000050
(XEN) d2v0 Unhandled SMC/HVC: 0x8600ff01
(XEN) d2v0: vGICD: RAZ on reserved register offset 0x00000c
(XEN) d2v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
(XEN) d2v0: vGICR: SGI: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd0f0]
[    0.000000] Linux version 6.1.25 (lucfan01@e125770) (aarch64-poky-linux-gcc 
(GCC) 12.2.0, GNU ld (GNU Binutils) 2.40.20230119) #4 SMP PREEMPT Thu Jun 13 
21:55:06 UTC 2024
[    0.000000] Machine model: XENVM-4.19
[    0.000000] Xen 4.19 support found
[    0.000000] efi: UEFI not found.
[    0.000000] NUMA: No NUMA configuration found

[...]

[    0.737758] virtio_blk virtio0: 1/0/0 default/read/poll queues
demu_detect_mappings_model: Use foreign mapping (addr 0x5d660000)
[    0.764258] virtio_blk virtio0: [vda] 747094 512-byte logical blocks (383 
MB/365 MiB)
[    0.781866] Invalid max_queues (4), will use default max: 1.

[...]

INIT: Entering runlevel: 5
Configuring network interfaces... ip: SIOCGIFFLAGS: No such device
Starting syslogd/klogd: done

Poky (Yocto Project Reference Distro) 4.2.1 stressrootfs /dev/hvc0

stressrootfs login: [   62.593440] cfg80211: failed to load regulatory.db

Poky (Yocto Project Reference Distro) 4.2.1 stressrootfs /dev/hvc0

stressrootfs login: root
root@stressrootfs:~# ls /
bin         etc         lost+found  proc        sys         var
boot        home        media       run         tmp
dev         lib         mnt         sbin        usr
root@stressrootfs:~#

[...]

root@fvp-base:/opt/xtp/guests/linux-guests# xl destroy 2
  Error: reading frontend state failed

main: lost connection to dom2
demu_teardown: <INITIALIZED
demu_teardown: <DEVICE_INITIALIZED
demu_deregister_memory_space: 2000000
demu_teardown: <EVTCHN_PORTS_BOUND
demu_teardown: <PORT_ARRAY_ALLOCATED
demu_teardown: VCPU0: 6
demu_teardown: <SERVER_ENABLED
demu_teardown: <RESOURCE_MAPPED
demu_teardown: <SERVER_REGISTERED
demu_teardown: <XENGNTTAB_OPEN
demu_teardown: <XENDEVICEMODEL_OPEN
demu_teardown: <XENFOREIGNMEMORY_OPEN
demu_teardown: <XENEVTCHN_OPEN
demu_teardown: <XENSTORE_ATTACHED
  Info: disconnected from dom2

root@fvp-base:/opt/xtp/guests/linux-guests# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     2     r-----      66.6
root@fvp-base:/opt/xtp/guests/linux-guests#


Cheers,
Luca
