Hi Mohamed,

On 04.11.24 at 06:26, Mohamed Dawod wrote:
> Hi Philipp,
>
> I'm already using *start_vm()* and setting the */cpus/* parameter to the values you mentioned, but it still works only randomly (sometimes it works and sometimes it hangs, or one of the VMs hangs).
>
> Please find attached my ned script:
A note on the script:
You are using "console=hvc0", which is the virtio-console device. It is initialized only late in the Linux boot process. For early console output I use "console=ttyS0 earlyprintk=serial,ttyS0" in the bootargs string.
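For reference, a minimal ned sketch of such a bootargs string, assuming your start_vm() wrapper forwards a bootargs parameter to the guest command line like the snapshot's vmm.lua does (the id, mem and kernel values are just placeholders):

  local L4  = require("L4");
  local vmm = require("vmm");

  vmm.start_vm{
    id       = 1,                  -- placeholder VM id
    mem      = 128,                -- placeholder RAM size
    kernel   = "rom/Image",        -- placeholder kernel image name
    bootargs = "console=ttyS0 earlyprintk=serial,ttyS0",
  };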

Since you are using hvc0, I assume you have changed uvmm/configs/dts/virt-pc.dts and enabled the virtio_uart node. I recommend also changing the uart8250 node to include the l4vmm,vcon_cap line. Then you also need to provide the capability named "uart" to the uvmm. You can do this via the ext_caps parameter. For example:

  ext_caps = { uart = vmm.loader.log_fab:create(L4.Proto.Log, "uart") }

Change the "uart" string in the create() call to something you like, to distinguish the two uvmms, as in the sketch below.
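Spelled out for two uvmms, the calls could look like this sketch (the "vm1-uart"/"vm2-uart" tags and the id/mem values are just illustrative):

  -- each guest gets its own log channel for the uart capability,
  -- so the 8250 output of the two VMs can be told apart
  vmm.start_vm{
    id       = 1,
    mem      = 128,
    ext_caps = { uart = vmm.loader.log_fab:create(L4.Proto.Log, "vm1-uart") },
  };
  vmm.start_vm{
    id       = 2,
    mem      = 128,
    ext_caps = { uart = vmm.loader.log_fab:create(L4.Proto.Log, "vm2-uart") },
  };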


> Also I cannot understand the effect of the */prio/* parameter!
> I have changed it among different values but nothing changed!
> What is the effect of the */prio/* parameter and when/how can I use it?

I assume you are using the 2024-08 snapshot. In this version, both the prio and the cpus parameter need to be given for them to take effect. (A later version makes this more user friendly, see [1].)

In more detail: To assign apps like the uvmm to specific cores, L4Re uses scheduling proxies, which ensure that applications scheduled on top of a scheduling proxy only have access to the resources managed by that proxy. Here, resources means cores and a priority range.

If given the prio and cpus parameters, start_vm() creates a scheduling proxy with these parameters, limiting the priority range to prio+10.
This means:
vm1 runs on cores 0xc with a priority range of [255, 255] (min/max).
vm2 runs on cores 0x3 with a priority range of [3, 13].

3 and 255 are the interesting numbers here: The base priority for L4Re apps is 2, and the given prio parameter just adds to this, so 2+1=3. 2+12345 becomes 255, because 255 is the maximum priority level.

This can be the source of the slowdown behavior you are observing, since this priority level is the same as that of the services vm1 depends upon.
My recommendation would be prio=2 for vm1.
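Put together, a sketch for your two VMs following the arithmetic above (assuming the 2024-08 snapshot's start_vm(); the id/mem values are illustrative):

  -- vm1 on cores 2+3 with a low priority range [4, 14] (2+2, +10),
  -- so it cannot starve the services it depends upon
  vmm.start_vm{ id = 1, mem = 128, cpus = 0xc, prio = 2 };
  -- vm2 on cores 0+1 with priority range [3, 13] as described above
  vmm.start_vm{ id = 2, mem = 128, cpus = 0x3, prio = 1 };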

Cheers,
Philipp


P.S.: I haven't forgotten about your arm64 PCI MSI question; I just need some time to set this up myself to be able to give a good answer.


[1] https://github.com/kernkonzept/uvmm/blob/master/configs/vmm.lua#L40



> Thanks in advance,
> Regards

On Thu, Oct 31, 2024 at 8:47 PM Philipp Eppelt <[email protected]> wrote:

    Hi Mohamed,

    On 31.10.24 at 14:30, Mohamed Dawod wrote:
     > Thanks Philipp,
     >
     > Multiple CPUs worked for virt-arm64 machine
    Yippie!

     > I tried to launch 2 Linux VMs on top of L4 using uvmm, assigning 2 CPUs
     > to one of the two VMs and another 2 CPUs to the other one.
     > I noticed that the Linux booting process becomes slower: the more CPUs
     > are added to qemu with the -smp option and passed to the VMs, the
     > slower the VM booting becomes!
     > Also the VMs work randomly (sometimes they work and sometimes they
     > hang, or one of them hangs).
     >
     > Why does this strange behaviour happen when using uvmm and 2 Linux VMs?
    To make this easier, please show me your ned script starting the VMs.

    If you use start_vm() please note that the `cpus` parameter takes a
    bitmap. So make sure to start VM1 with `cpus=0x3` and VM2 with `cpus=0xc`
    to place them on separate cores of a four core platform (e.g. QEMU with
    -smp 4).
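    In ned terms, a minimal sketch (the id/mem values are illustrative):

        -- bits 0 and 1 set: VM1 may use cores 0 and 1
        vmm.start_vm{ id = 1, mem = 128, cpus = 0x3 };
        -- bits 2 and 3 set: VM2 may use cores 2 and 3
        vmm.start_vm{ id = 2, mem = 128, cpus = 0xc };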

    Cheers,
    Philipp

     >
     > Thanks,
     > Regards
     >
     > On Wed, Oct 30, 2024 at 8:46 PM Philipp Eppelt
     > <[email protected]> wrote:
     >
     >     Hi Mohamed,
     >
     >     On 29.10.24 at 10:43, Mohamed Dawod wrote:
     >      > Hello,
     >      > I'm trying to provide multiple CPUs for a Linux VM on top of L4.
     >      > I'm using the qemu virt machine and building for aarch64, so I
     >      > used the *-smp* option to provide more CPUs.
     >      >
     >      >     $ qemu-system-aarch64 -M virt,virtualization=true -cpu cortex-a57 -smp 4 -m 1024 -kernel ....etc....
     >     I'm not sure which gic version qemu uses. Please try setting it
     >     explicitly with the gic-version=3 argument:
     >     `-M virt,virtualization=true,gic-version=3`
     >
     >      >
     >      > Unfortunately, this didn't work. I tried to add more CPU device
     >      > nodes to the dts file *virt-arm_virt-64.dts*, but it also didn't
     >      > work.
     >      >
     >      > I think that it's because of the interrupt-controller provided
     >      > with *virt-arm_virt-64.dts* in /l4/pkg/uvmm/conf/dts/, whose
     >      > comment mentions that it supports only one CPU.
     >      >
     >      >     icsoc {
     >      >              compatible = "simple-bus";
     >      >              #address-cells = <2>;
     >      >              #size-cells = <2>;
     >      >              ranges;
     >      >
     >      >              /* Uvmm will adapt the compatible string depending on
     >      >               * the present gic version. It expects reg entries that
     >      >               * provide enough space for the Cpu/Dist interface for
     >      >               * gicv2 (at least 0x1000, 0x1000) or the Dist/Redist
     >      >               * interface for gicv3 (0x10000, 0x20000 * number of cpus).
     >
     >     I'm not an expert for ARM64, but judging from the line above I'd
     >     say you have to increase the size of the second reg entry. For
     >     example for four cores (0x20000 * 4 = 0x80000):
     >              reg = <0 0x40000 0 0x10000>,
     >                    <0 0x50000 0 0x80000>;
     >
     >     You should be able to just use the github version of this file; it
     >     has a gic node that is configured for 32 cores and comes with four
     >     CPU nodes:
     >     https://github.com/kernkonzept/uvmm/blob/master/configs/dts/virt-arm_virt-64.dts
     >
     >
     >      >               * The entries provided here support any gicv2 setup
     >      >               * or a gicv3 setup with one Cpu.
     >      >               */
     >      >              gic: interrupt-controller {
     >      >                  compatible = "arm,gic-400", "arm,cortex-a15-gic",
     >      >                               "arm,cortex-a9-gic";
     >      >                  #interrupt-cells = <3>;
     >      >                  #address-cells = <0>;
     >      >                  interrupt-controller;
     >      >                  reg = <0 0x40000 0 0x10000>,
     >      >                        <0 0x50000 0 0x20000>;
     >      >                  };
     >      >          };
     >      >
     >      >
     >      > My question now: is there any workaround to support multiple
     >      > CPUs for the virt machine on arm64?
     >
     >     Multiple CPUs should work. For SMP there are a couple of things to
     >     consider:
     >     - QEMU: the -smp parameter
     >     - The kernel configuration for SMP and the maximum number of cores
     >     - The DTS defines the maximum number of cores the uvmm will set up.
     >     So adding CPU device nodes is the correct path.
     >     - The ned script defines the number of cores available to uvmm at
     >     runtime. No cpus parameter in the start_vm({}) call means the VM
     >     gets access to all cpus (see the sketch below).
     >     - Linux must of course also support SMP, but that's very likely not
     >     the problem here ;-)
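     >     As a sketch of that last point (the id/mem values are illustrative):
     >
     >         -- no cpus parameter: the VM may run on all cores
     >         vmm.start_vm{ id = 1, mem = 128 };
     >         -- cpus bitmap given: the VM is restricted to cores 0 and 1
     >         vmm.start_vm{ id = 2, mem = 128, cpus = 0x3 };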
     >
     >     I hope this sheds some light.
     >
     >     Cheers,
     >     Philipp


--
[email protected] - Tel. 0351-41 883 221
http://www.kernkonzept.com

Kernkonzept GmbH.  Sitz: Dresden.  Amtsgericht Dresden, HRB 31129.
Geschäftsführer: Dr.-Ing. Michael Hohmuth


_______________________________________________
l4-hackers mailing list -- [email protected]
To unsubscribe send an email to [email protected]
