Hi Richard,

On Mon, 2025-05-05 at 11:25 +0000, Richard Clark wrote:
> But that brings me to a bigger question.
> How do I fetch only Long-Term-Support or Fully-Tested-and-Blessed versions?
> Github is woefully lacking in proper version support.
> I can't send random untested code to my customers.
>
All code we push to GitHub went through our internal QA process, which runs
compile checks for all of our supported architectures as well as an extensive
test suite on different platforms and configurations. So from that point of
view I would say you can consider all code pushed to GitHub as
“Fully-Tested-and-Blessed”.

However, we obviously cannot test every combination of configuration options
on every hardware platform. Kernkonzept offers commercial support for these
cases: we provide a dedicated delivery pipeline that is tailored to the
customer's use case and hardware, with testing that ensures the concrete use
case works flawlessly on the relevant hardware for every software release.
Please contact [email protected] for quotes or to discuss the details of such an
arrangement.
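If you want to hand your customers a reproducible snapshot rather than
whatever is currently at the tip of the repositories, you can also pin your
checkout to a fixed revision. A rough sketch, assuming the kernkonzept/manifest
repository is your entry point and that it carries release tags (the tag name
below is hypothetical; an exact commit hash you have validated yourself works
just as well):

    # See which tags (if any) exist on the manifest repository
    git ls-remote --tags https://github.com/kernkonzept/manifest.git

    # Clone the manifest pinned to a fixed tag (hypothetical tag name)
    git clone --branch l4re-23.10 https://github.com/kernkonzept/manifest.git

    # ...or clone normally and check out an exact commit you have validated
    git clone https://github.com/kernkonzept/manifest.git
    git -C manifest checkout <known-good-commit>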
Best regards,

- Marcus Hähnel
  Principal Engineering Lead

>
> Richard
>
>
> -----Original Message-----
> From: Adam Lackorzynski <[email protected]>
> Sent: Sunday, May 4, 2025 11:11 AM
> To: Richard Clark <[email protected]>; [email protected]
> Cc: Bud Wykoff <[email protected]>; Douglas Schafer <[email protected]>
> Subject: Re: Upgrade issues. VM won't start.
>
> Richard,
>
> is that setup running on Linux QEMU+KVM? If yes, we recently (really a few
> days ago) fixed an issue in this virtualized setup (with regard to
> performance counter handling). It has been on GH since Friday, I believe.
> Otherwise please provide me your fiasco binary, so that I can look up
> fffffffff006a176, as this will point to the location that triggered the
> issue.
>
> Thanks, Adam
>
> On Sun May 04, 2025 at 13:14:00 +0000, Richard Clark wrote:
> > Adam,
> >
> > So I have been using version 23.10.1, built as per the download page, and
> > have gotten a couple of VMs to ping each other and give me a login prompt.
> > I finally realized I was missing the virtio_switch package and grabbed it
> > from GitHub and put it where it is supposed to go. Of course, being a
> > version mismatch now, it did not compile. I decided to bite the bullet and
> > do an upgrade (always a mistake) and use the new build process described
> > on the new website. That went smoothly! I installed ham, ran it, etc.;
> > everything built. I updated my local scripts and links to use the new
> > environment variables instead of being hardcoded, and it all builds and
> > looks good. Except that it doesn't run. The VMs get a memory exception and
> > kick into jdb.
> >
> > I have not changed any of my .cfg or .list files. The VMs and ramdisks are
> > untouched. My local (L4-native) processes start and appear to run, but the
> > VMs crash for some reason. Even the device tree is unchanged. I also tried
> > a newly built Linux (as opposed to a prebuilt one), and that failed as
> > well.
> >
> > Is there some reason for this new crash between version 23.10.1 and the
> > latest version from GitHub? I've attached the VM startup output from both
> > the old and new runs so you can take a look.
> >
> > I'm so close... I see the IP configuration parameter and how to set up
> > lwip and virtio_switch... Then my natives should be able to talk directly
> > to my Linuxes.
> >
> > Your help is greatly appreciated!
> >
> > Richard
> >
> > vm1 | VMM: Created VCPU 0 @ 17000
> > vm1 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM.
> > vm1 | VMM[main]: Hello out there.
> > vm1 | VMM[ASM]: Sys Info:
> > vm1 | vBus: 0
> > vm1 | DMA devs: 0
> > vm1 | IO-MMU: 0
> > vm1 | Identity forced: 0
> > vm1 | DMA phys addr: 0
> > vm1 | DT dma-ranges: 0
> > vm1 | VMM[ASM]: Operating mode: No DMA
> > vm1 | VMM[ram]: RAM not set up for DMA.
> > vm2 | VMM: Created VCPU 0 @ 17000
> > vm2 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM.
> > vm2 | VMM[main]: Hello out there.
> > vm2 | VMM[ASM]: Sys Info:
> > vm2 | vBus: 0
> > vm2 | DMA devs: 0
> > vm2 | IO-MMU: 0
> > vm2 | Identity forced: 0
> > vm1 | VMM[ram]: RAM: @ 0x0 size=0x20000000
> > vm2 | DMA phys addr: 0
> > vm1 | VMM[ram]: RAM: VMM local mapping @ 0x1000000
> > vm2 | DT dma-ranges: 0
> > vm1 | VMM[ram]: RAM: VM offset=0x1000000
> > vm2 | VMM[ASM]: Operating mode: No DMA
> > vm1 | VMM[main]: Loading kernel...
> > vm2 | VMM[ram]: RAM not set up for DMA.
> > vm1 | VMM[loader]: Linux kernel detected
> > vm1 | VMM[file]: load: @ 0xfc400
> > vm1 | VMM[file]: copy in: to offset 0xfc400-0xba989f
> > vm1 | VMM[main]: Loading ram disk...
> > vm1 | VMM[ram]: load: rom/ramdisk1-amd64.rd -> 0x1fc00000
> > vm1 | VMM[file]: load: @ 0x1fc00000
> > vm1 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff
> > vm1 | VMM[main]: Loaded ramdisk image rom/ramdisk1-amd64.rd to 1fc00000 (size: 00400000)
> > vm1 | VMM[PIC]: Hello, Legacy_pic
> > vm2 | VMM[ram]: RAM: @ 0x0 size=0x20000000
> > vm2 | VMM[ram]: RAM: VMM local mapping @ 0x1000000
> > vm2 | VMM[ram]: RAM: VM offset=0x1000000
> > vm2 | VMM[main]: Loading kernel...
> > vm1 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND
> > vm2 | VMM[loader]: Linux kernel detected
> > vm1 | VMM: Creating Acpi_platform
> > vm2 | VMM[file]: load: @ 0xfc400
> > vm2 | VMM[file]: copy in: to offset 0xfc400-0xba989f
> > vm1 | VMM[ACPI]: Acpi timer @ 0xb008
> > vm1 | VMM[RTC]: Hello from RTC. Irq=8
> > vm2 | VMM[main]: Loading ram disk...
> > vm1 | VMM[uart_8250]: Create virtual 8250 console
> > vm2 | VMM[ram]: load: rom/ramdisk2-amd64.rd -> 0x1fc00000
> > vm2 | VMM[file]: load: @ 0x1fc00000
> > vm2 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff
> > vm2 | VMM[main]: Loaded ramdisk image rom/ramdisk2-amd64.rd to 1fc00000 (size: 00400000)
> > vm1 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid.
> > vm1 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time.
> > vm1 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device.
> > vm1 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid.
> > vm1 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device.
> > vm2 | VMM[PIC]: Hello, Legacy_pic
> > vm1 | VMM[PCI bus]: Creating host bridge
> > vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO]
> > vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32]
> > vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64]
> > vm1 | VMM[PCI bus]: Registering PCI device 00:00.0
> > vm2 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND
> > vm2 | VMM: Creating Acpi_platform
> > vm1 | VMM[guest]: New mmio mapping: @ b0000000 10000000
> > vm1 | VMM[PCI bus]: Created & Registered the PCI host bridge
> > vm2 | VMM[ACPI]: Acpi timer @ 0xb008
> > vm2 | VMM[RTC]: Hello from RTC. Irq=8
> > vm1 | VMM[VIO Cons]: Create virtual PCI console
> > vm2 | VMM[uart_8250]: Create virtual 8250 console
> > vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff]
> > vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
> > vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f]
> > vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
> > vm2 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid.
> > vm2 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time.
> > vm1 | VMM[PCI bus]: Registering PCI device 00:01.0
> > vm2 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device.
> > vm1 | VMM[VIO Cons]: Console: 0x186b0
> > vm1 | VMM[VIO proxy]: Creating proxy
> > vm2 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid.
> > vm2 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device.
> > vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff]
> > vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
> > vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff]
> > vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
> > p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0
> > p2p | PORT[0x15d70]: DMA guest [0-1fffffff] local [600000-205fffff] offset 0
> > vm2 | VMM[PCI bus]: Creating host bridge
> > p2p | register client: host IRQ: 420010 config DS: 41d000
> > vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO]
> > vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32]
> > vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64]
> > vm2 | VMM[PCI bus]: Registering PCI device 00:00.0
> > vm1 | VMM[PCI bus]: Registering PCI device 00:02.0
> > vm1 | VMM[VIO proxy]: Creating proxy
> > vm1 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid.
> > vm1 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device.
> > vm2 | VMM[guest]: New mmio mapping: @ b0000000 10000000
> > vm1 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid.
> > vm2 | VMM[PCI bus]: Created & Registered the PCI host bridge
> > vm1 | VMM[ROM]: Missing 'l4vmm,dscap' property!
> > vm1 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device.
> > vm2 | VMM[VIO Cons]: Create virtual PCI console
> > vm1 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid.
> > vm1 | VMM[CFI]: Missing 'l4vmm,dscap' property!
> > vm1 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device.
> > vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff]
> > vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
> > vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f]
> > vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
> > vm2 | VMM[PCI bus]: Registering PCI device 00:01.0
> > vm1 | VMM: Created VCPU 1 @ 23000
> > vm2 | VMM[VIO Cons]: Console: 0x186b0
> > vm1 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000])
> > vm2 | VMM[VIO proxy]: Creating proxy
> > vm1 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0
> > vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff]
> > vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
> > vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff]
> > vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
> > p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0
> > p2p | PORT[0x15e80]: DMA guest [0-1fffffff] local [20600000-405fffff] offset 0
> > p2p | register client: host IRQ: 420010 config DS: 41e000
> > vm1 | VMM[ACPI]: Initialize legacy BIOS ACPI tables.
> > vm2 | VMM[PCI bus]: Registering PCI device 00:02.0
> > vm1 | VMM: Zeropage @ 0x1000, Kernel @ 0xfc400
> > vm2 | VMM[VIO proxy]: Creating proxy
> > vm1 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw
> > vm2 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid.
> > vm1 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw
> > vm2 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device.
> > vm1 | VMM[vmmap]: VM map:
> > vm1 | VMM[vmmap]: [ 0:1fffffff]: Ram
> > vm1 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam
> > vm2 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid.
> > vm1 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic
> > vm2 | VMM[ROM]: Missing 'l4vmm,dscap' property!
> > vm1 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler
> > vm2 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device.
> > vm1 | VMM[main]: Populating guest physical address space
> > vm1 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff]
> > vm2 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid.
> > vm1 | VMM[vmmap]: IOport map:
> > vm2 | VMM[CFI]: Missing 'l4vmm,dscap' property!
> > vm1 | VMM[vmmap]: [ 20: 21]: PIC
> > vm2 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device.
> > vm1 | VMM[vmmap]: [ 40: 43]: PIT
> > vm1 | VMM[vmmap]: [ 61: 61]: PIT port 61
> > vm1 | VMM[vmmap]: [ 70: 71]: RTC
> > vm1 | VMM[vmmap]: [ a0: a1]: PIC
> > vm1 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250
> > vm1 | VMM[vmmap]: [ 510: 51b]: Firmware interface
> > vm1 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg
> > vm1 | VMM[vmmap]: [1800:1808]: ACPI platform
> > vm2 | VMM: Created VCPU 1 @ 23000
> > vm1 | VMM[vmmap]: [b008:b008]: ACPI Timer
> > vm2 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000])
> > vm1 | VMM[guest]: Starting VMM @ 0x100000
> > vm2 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0
> > vm1 | VMM[Cpu_dev]: [ 0] Reset called
> > vm1 | VMM[Cpu_dev]: [ 0] Resetting vCPU.
> > vm1 | VMM: Hello clock source for vCPU 0
> >
> > ---------------------------------------------------------------------
> >
> > CPU 2 [fffffffff006a176]: General Protection (ERR=0000000000000000)
> > CPU(s) 0-5 entered JDB
> > jdb:
>
> _______________________________________________
> l4-hackers mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
> --
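As a side note on Adam's request above: if you want to resolve the faulting
address fffffffff006a176 yourself before sending the binary, binutils can do
the lookup locally. This is just a sketch and assumes an unstripped fiasco ELF
built with debug info; "fiasco" below stands for whatever file name your build
tree actually produced:

    # Map the faulting address to a function name and source line
    addr2line -f -C -i -e fiasco 0xfffffffff006a176

    # Or disassemble a small window around the address to see the
    # instruction that raised the General Protection fault
    objdump -d --start-address=0xfffffffff006a150 \
               --stop-address=0xfffffffff006a1a0 fiasco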
+++ Register now for our workshop “Get to know L4Re in 3 days” on
October 28–30. Learn to design and deploy secure system architectures
for your product with L4Re:
https://www.kernkonzept.com/workshop-getting-started-with-l4re/ +++
---
Kernkonzept GmbH
Sitz: Dresden
HRB 31129
Geschäftsführer: Dr.-Ing. Michael Hohmuth

_______________________________________________
l4-hackers mailing list -- [email protected]
To unsubscribe send an email to [email protected]