Many thanks Philipp,
It's clear now and the issue has been resolved 😊

On Tue, Nov 5, 2024 at 12:40 PM Philipp Eppelt
<[email protected]> wrote:

> Hi Mohamed,
>
> Am 05.11.24 um 09:52 schrieb Mohamed Dawod:
> > Hi Philipp,
> > Thanks for the clarification
> > I changed the prop and cpus parameters for the two VMs to be like
> > VM1 : prio=1, cpus=0x2,
> > VM2 : prio=100, cpus=0xD,
>
> Why did you choose a priority of 100?
> Be aware that the kernel implements priority-based round robin. Meaning
> the scheduler selects all threads with the highest priority that are
> runnable on a specific core and applies round-robin scheduling among
> them. Only if these threads yield or block do the lower priority levels
> get considered / get computation time.
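> As a toy illustration of that selection rule (plain Python, not kernel
> code; the thread names are made up):

```python
# Toy model of priority-based round robin: the scheduler only ever picks
# among the runnable threads that share the highest priority; round robin
# happens within that set, and lower priorities only get CPU time once
# the top set yields or blocks.

def pick_next(runnable):
    """runnable: list of (name, prio) pairs; returns the eligible set."""
    if not runnable:
        return []
    top = max(prio for _, prio in runnable)
    return [name for name, prio in runnable if prio == top]

threads = [("vm2_vcpu", 100), ("io", 2), ("cons", 2), ("vm1_vcpu", 1)]
print(pick_next(threads))  # ['vm2_vcpu'] -- io/cons starve until it blocks
```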
>
> VM2 is configured to run on cores 0, 2, and 3. In the setup you sent
> last time, core 0 also runs ned, moe, io, cons and the virtio switch. A
> prio of 100 usually means that your VM runs with higher priority than
> the services it depends upon (namely: io, cons, virtio switch).
>
> If a thread of VM2 now requests a service, e.g. from io, while core 0 of
> your VM runs, this request is delayed until core 0 of said VM yields or
> blocks. This is indeed a race condition, and care must be taken when
> assigning priorities.
>
> You can have a look into the kernel debugger JDB to see the priority
> assignments. If JDB is configured in the kernel config, press `ESC` to
> enter the debugger and then `lp` to see the list of threads (press `h`
> for help).
> The non-self-explanatory column names in full are:
> - pr: priority
> - sp: address space
> - wait: ID of the thread this thread waits for
> - to: IPC timeout
>
> All threads of one application / task share the same address space
> number.
> If you want to see the uvmm instances named in this list, pass jdb=1 to
> start_vm().
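> For illustration, such a call could look like this (a sketch only;
> besides prio, cpus and jdb the parameter names here are assumptions from
> memory, please check them against your configs/vmm.lua):
>
>    start_vm({
>      id = 1,
>      mem = 128,
>      prio = 2,
>      cpus = 0x2,
>      jdb = 1,  -- show this uvmm instance by name in JDB's `lp` list
>      bootargs = "console=ttyS0 earlyprintk=serial,ttyS0",
>    });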
>
> To change the priority of a service, assign a new scheduler proxy in the
> ned
> script. E.g. my entry for cons looks like this:
>
>    L4.default_loader:start(
>      {
>        scheduler = vmm.new_sched(0x40),
>        log = L4.Env.log,
>        caps = { cons = vmm.loader.log_fab:svr(), jdb = L4.Env.jdb },
>      }, "rom/cons -k -a");
>
>
> Cheers,
> Philipp
>
> > I think that the slow boot issue has been fixed.
> >
> > But the hanging issue still persists.
> > About 90% of my trials fail because one of the VMs hangs!
> > Sometimes a VM hangs while Linux is booting, and sometimes it hangs
> > after booting and logging in successfully, while I am using the VM!
> >
> > The hanging behaviour looks as if unresolved race conditions are
> > causing a deadlock.
> > Even while it hangs, my host machine is working hard and my laptop
> > fans are loud!
> > My laptop has an 8-core CPU and 32 GB RAM.
> >
> >
> > What could be the problem?
> >
> > On Mon, Nov 4, 2024 at 1:04 PM Philipp Eppelt
> > <[email protected]> wrote:
> >
> >     Hi Mohamed,
> >
> >     a colleague just informed me about a recently fixed cause for a slow
> boot with
> >     multiple VMs.
> >
> >     Try this commit for uvmm, it's not part of the 2024-08 snapshot.
> >
> >
> >     https://github.com/kernkonzept/uvmm/commit/8c6b3080d69e9e2c82211388ba641241f0e1759b
> >
> >
> >     Cheers,
> >     Philipp
> >
> >
> >
> >
> >     Am 04.11.24 um 11:36 schrieb Philipp Eppelt:
> >      > Hi Mohamed,
> >      >
> >      >
> >      > Am 04.11.24 um 06:26 schrieb Mohamed Dawod:
> >      >> Hi Philipp,
> >      >>
> >      >> I'm already using *start_vm()* and setting the */cpus/*
> >      >> parameter to the values you mentioned, but it still behaves
> >      >> randomly (sometimes it works and sometimes it hangs, or one of
> >      >> the VMs hangs).
> >      >>
> >      >> Please find attached my ned script :
> >      > A note on the script:
> >      > You are using "console=hvc0" which is the virtio-console device.
> This is
> >      > initialized only late in the linux boot process. For early
> console output
> >     I use
> >      > "console=ttyS0 earlyprintk=serial,ttyS0" in the bootargs string.
> >      >
> >      > Since you are using hvc0, I assume you have changed
> >      > uvmm/configs/dts/virt-pc.dts and enabled the virtio_uart node. I
> >      > recommend also changing the uart8250 node to include the
> >      > l4vmm,vcon_cap line. Then you also need to provide the
> >      > capability named "uart" to the uvmm. You can do this via the
> >      > ext_caps parameter. For example:
> >      >
> >      >    ext_caps = { uart = vmm.loader.log_fab:create(L4.Proto.Log, "uart") }
> >      >
> >      > Change the "uart" string in the create() function to something
> you like to
> >      > distinguish the two uvmms.
> >      >
> >      >>
> >      >> Also I can not understand the effect of the */prio/* parameter!
> >      >> I have changed it among different values but nothing changed!
> >      >> What is the effect of the */prio/* parameter and when/how can I
> use it ?
> >      >
> >      > I assume you are using the 2024-08 snapshot. In this version,
> >      > both the prio and the cpus parameter must be given for either to
> >      > take effect. (A later version makes this more user friendly, see
> >      > [1].)
> >      >
> >      > In more detail: to assign apps like the uvmm to specific cores,
> >      > L4Re uses scheduling proxies, which ensure that an application
> >      > scheduled on top of a scheduling proxy only has access to the
> >      > resources managed by the proxy. Resources here means cores and a
> >      > priority range.
> >      >
> >      > Given the prio and cpus parameters, start_vm() creates a
> >      > scheduling proxy with these parameters and limits the priority
> >      > range to prio+10.
> >      > This means:
> >      > vm1 runs on cores 0xc with a priority range of [255, 255]
> >      > (min/max).
> >      > vm2 runs on cores 0x3 with a priority range of [3, 13].
> >      >
> >      > 3 and 255 are the interesting numbers here: the base priority
> >      > for L4Re apps is 2 and the given prio parameter is added to it,
> >      > so 2+1=3. 2+12345 is capped to 255, because 255 is the maximum
> >      > priority level.
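> >      > As a quick sanity check (plain Python, not L4Re code; the base
> >      > priority of 2 and the 255 cap are taken from the text above):

```python
BASE_PRIO = 2    # base priority of L4Re apps, per the explanation above
MAX_PRIO = 255   # highest priority level

def effective_prio(prio_param):
    # the prio parameter is added to the base and capped at the maximum
    return min(BASE_PRIO + prio_param, MAX_PRIO)

print(effective_prio(1))      # 3
print(effective_prio(12345))  # 255
```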
> >      >
> >      > This can be a source of the slowdown behavior you are
> >      > observing, since this priority level is the same as that of the
> >      > services vm1 depends upon.
> >      > My recommendation would be prio=2 for vm1.
> >      >
> >      > Cheers,
> >      > Philipp
> >      >
> >      >
> >      > p.s. I haven't forgotten about your arm64 PCI MSI question, I
> just need some
> >      > time to set this up myself to be able to give a good answer.
> >      >
> >      >
> >      > [1] https://github.com/kernkonzept/uvmm/blob/master/configs/vmm.lua#L40
> >      >
> >      >
> >      >>
> >      >> Thanks in advance,
> >      >> Regards
> >      >>
> >      >> On Thu, Oct 31, 2024 at 8:47 PM Philipp Eppelt
> >      >> <[email protected]> wrote:
> >      >>
> >      >>     Hi Mohamed,
> >      >>
> >      >>     Am 31.10.24 um 14:30 schrieb Mohamed Dawod:
> >      >>      > Thanks Philipp,
> >      >>      >
> >      >>      > Multiple CPUs worked for virt-arm64 machine
> >      >>     Yippie!
> >      >>
> >      >>      > I tried to launch 2 linux VMs on top of L4 using uvmm and
> assign
> >     2 CPUs
> >      >>     to one
> >      >>      > of the two VMs and another 2 CPUs to the other one.
> >      >>      > I noticed that the Linux boot process becomes slower:
> >      >>      > the more CPUs I add to qemu with the -smp option and
> >      >>      > pass to the VMs, the slower the VMs boot!
> >      >>      > Also the VMs behave randomly (sometimes they work and
> >      >>      > sometimes they hang, or one of them hangs).
> >      >>      >
> >      >>      > Why does this strange behaviour happen when using uvmm
> >      >>      > and 2 Linux VMs?
> >      >>     To make this easier please show me your ned script starting
> the VMs.
> >      >>
> >      >>     If you use start_vm() please note that the `cpus` parameter
> >      >>     takes a bitmap. So make sure to start VM1 with `cpus=0x3`
> >      >>     and VM2 with `cpus=0xc` to place them on separate cores of
> >      >>     a four-core platform (e.g. QEMU with -smp 4).
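> >      >>     To double-check which cores such a mask selects (plain
> >      >>     Python, just for illustration):

```python
def cores_in_mask(mask):
    """Return the core numbers selected by a cpus= bitmap."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(cores_in_mask(0x3))  # [0, 1] -> first VM on cores 0 and 1
print(cores_in_mask(0xc))  # [2, 3] -> second VM on cores 2 and 3
```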
> >      >>
> >      >>     Cheers,
> >      >>     Philipp
> >      >>
> >      >>      >
> >      >>      > Thanks,
> >      >>      > Regards
> >      >>      >
> >      >>      > On Wed, Oct 30, 2024 at 8:46 PM Philipp Eppelt
> >      >>      > <[email protected]> wrote:
> >      >>      >
> >      >>      >     Hi Mohamed,
> >      >>      >
> >      >>      >     Am 29.10.24 um 10:43 schrieb Mohamed Dawod:
> >      >>      >      > Hello,
> >      >>      >      > I'm trying to provide multiple CPUs for a linux VM
> on top
> >     of L4.
> >      >>      >      > I'm using the qemu virt machine and building for
> aarch64. so I
> >      >>     used *-smp*
> >      >>      >      > option to provide more CPUs.
> >      >>      >      >
> >      >>      >      >     $ qemu-system-aarch64 -M virt,virtualization=true \
> >      >>      >      >       -cpu cortex-a57 -smp 4 -m 1024 -kernel ....etc....
> >      >>      >     I'm not sure which gic version qemu uses. Please try
> >      >>      >     setting it explicitly with the gic-version=3
> >      >>      >     argument: `-M virt,virtualization=true,gic-version=3`
> >      >>      >
> >      >>      >      >
> >      >>      >      > Unfortunately, this didn't work. I tried to add
> >      >>      >      > more CPU device nodes to the dts file
> >      >>      >      > *virt-arm_virt-64.dts* but it also didn't work.
> >      >>      >      >
> >      >>      >      > I think that it's because the interrupt-controller
> >      >>      >      > provided with *virt-arm_virt-64.dts* in
> >      >>      >      > /l4/pkg/uvmm/conf/dts/ mentions that it supports
> >      >>      >      > only one CPU.
> >      >>      >      >
> >      >>      >      >     icsoc {
> >      >>      >      >              compatible = "simple-bus";
> >      >>      >      >              #address-cells = <2>;
> >      >>      >      >              #size-cells = <2>;
> >      >>      >      >              ranges;
> >      >>      >      >
> >      >>      >      >              /* Uvmm will adapt the compatible
> string
> >     depending
> >      >> on the
> >      >>      >     present gic
> >      >>      >      >               * version. It expects reg entries
> that provide
> >      >>     enough space
> >      >>      >     for the
> >      >>      >      >               * Cpu/Dist interface for gicv2 (at
> least 0x1000,
> >      >>     0x1000) or the
> >      >>      >      >               * Dist/Redist interface for gicv3
> (0x10000,
> >     0x20000 *
> >      >>      >     number of cpus).
> >      >>      >
> >      >>      >     I'm not an expert for ARM64, but judging from the
> >      >>      >     line above I'd say you have to increase the size of
> >      >>      >     the second reg entry. For example, for four cores:
> >      >>      >              reg = <0 0x40000 0 0x10000>,
> >      >>      >                    <0 0x50000 0 0x80000>;
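> >      >>      >     (The 0x80000 follows from the 0x20000-per-cpu
> >      >>      >     redistributor size quoted in the dts comment; in
> >      >>      >     sketch form:)

```python
REDIST_STRIDE = 0x20000  # per-cpu gicv3 redistributor size from the dts comment

def redist_size(num_cpus):
    # the second reg entry scales linearly with the number of cpus
    return REDIST_STRIDE * num_cpus

print(hex(redist_size(4)))  # 0x80000 -> second reg entry for four cores
```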
> >      >>      >
> >      >>      >     You should be able to just use the github version of
> >      >>      >     this file; it has a gic node configured for 32 cores
> >      >>      >     and comes with four CPU nodes.
> >      >>      >
> >      >>
> >      >>
> >      >>      >     https://github.com/kernkonzept/uvmm/blob/master/configs/dts/virt-arm_virt-64.dts
> >      >>      >
> >      >>      >
> >      >>      >      >               * *The entries provided here support
> any
> >     gicv2 setup
> >      >>     or a
> >      >>      >     gicv3 setup
> >      >>      >      >               * with one Cpu.*
> >      >>      >      >               */
> >      >>      >      >              gic: interrupt-controller {
> >      >>      >      >                  compatible = "arm,gic-400",
> >     "arm,cortex-a15-gic",
> >      >>      >      >     "arm,cortex-a9-gic";
> >      >>      >      >                  #interrupt-cells = <3>;
> >      >>      >      >                  #address-cells = <0>;
> >      >>      >      >                  interrupt-controller;
> >      >>      >      >                  reg = <0 0x40000 0 0x10000>,
> >      >>      >      >                        <0 0x50000 0 0x20000>;
> >      >>      >      >                  };
> >      >>      >      >          };
> >      >>      >      >
> >      >>      >      >
> >      >>      >      > My question now: is there any workaround to
> >      >>      >      > support multiple CPUs for the virt machine on
> >      >>      >      > arm64?
> >      >>      >
> >      >>      >     Multiple CPUs should work. For SMP there are a couple
> of
> >     things to
> >      >>     consider:
> >      >>      >     - QEMU: -smp parameter
> >      >>      >     - Kernel configuration for SMP and the maximum
> >      >>      >     number of cores
> >      >>      >     - The DTS defines the maximum number of cores for the
> uvmm
> >     will set
> >      >>     up. So
> >      >>      >     adding CPU device nodes is the correct path.
> >      >>      >     - The ned script defines the number of cores
> >      >>      >     available at runtime to uvmm. No cpus parameter in
> >      >>      >     the start_vm({}) call means the VM gets access to
> >      >>      >     all cpus.
> >      >>      >     - Linux must of course also support SMP, but that's
> very
> >     likely not
> >      >>     the problem
> >      >>      >     here ;-)
> >      >>      >
> >      >>      >     I hope this sheds some light.
> >      >>      >
> >      >>      >     Cheers,
> >      >>      >     Philipp
> >      >>      >
> >      >>      >     --
> >      >>      >     [email protected] - Tel. 0351-41 883 221
> >      >>      >     http://www.kernkonzept.com
> >      >>      >
> >      >>      >     Kernkonzept GmbH.  Sitz: Dresden.  Amtsgericht
> Dresden, HRB
> >     31129.
> >      >>      >     Geschäftsführer: Dr.-Ing. Michael Hohmuth
> >      >>      >     _______________________________________________
> >      >>      >     l4-hackers mailing list -- [email protected]
> >      >>      >     To unsubscribe send an email to [email protected]
> >      >>      >
> >      >>      >
> >      >>      >
> >      >>      > *Driving Innovation! Visit our website www.avelabs.com*,
> >      >>      > to read Avelabs Confidentiality Notice, follow this
> >      >>      > link: http://www.avelabs.com/email/disclaimer.html
> >      >>
> >      >>
> >      >>
> >      >>
> >      >
> >      >
> >
> >
> >
> >
>
>


