On 28/04/2021 7:56 pm, Martin Maurer wrote:
We are proud to announce the general availability of Proxmox Virtual Environment 6.4, our open-source virtualization platform. This version brings unified single-file restore for virtual machine (VM) and container (CT) backup archives stored on a Proxmox Backup Server as well as live restore of VM backup archives located on a Proxmox Backup Server.

Version 6.4 also comes with Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20, many enhancements to KVM/QEMU, and notable bug fixes. Many new Ceph-specific management features have been added to the GUI. We have improved the integration of the placement group (PG) auto-scaler, and you can configure Target Size or Target Ratio settings in the GUI.

I did a rolling upgrade on a 5-node Ceph cluster, 6.3 => latest, and rebooted all nodes. No issues at all.


I did notice that on the dist-upgrade, the grub update generated a number of errors:

The following packages will be REMOVED:
  pve-kernel-5.4.78-2-pve
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 288 MB disk space will be freed.
Do you want to continue? [Y/n]
(Reading database ... 133531 files and directories currently installed.)
Removing pve-kernel-5.4.78-2-pve (5.4.78-2) ...
Examining /etc/kernel/prerm.d.
run-parts: executing /etc/kernel/prerm.d/dkms 5.4.78-2-pve /boot/vmlinuz-5.4.78-2-pve
Examining /etc/kernel/postrm.d.
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 5.4.78-2-pve /boot/vmlinuz-5.4.78-2-pve
update-initramfs: Deleting /boot/initrd.img-5.4.78-2-pve
run-parts: executing /etc/kernel/postrm.d/proxmox-auto-removal 5.4.78-2-pve /boot/vmlinuz-5.4.78-2-pve
run-parts: executing /etc/kernel/postrm.d/zz-proxmox-boot 5.4.78-2-pve /boot/vmlinuz-5.4.78-2-pve
Re-executing '/etc/kernel/postrm.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 5.4.78-2-pve /boot/vmlinuz-5.4.78-2-pve
Generating grub configuration file ...
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
Found linux image: /boot/vmlinuz-5.4.106-1-pve
Found initrd image: /boot/initrd.img-5.4.106-1-pve
Found linux image: /boot/vmlinuz-5.4.98-1-pve
Found initrd image: /boot/initrd.img-5.4.98-1-pve
Found linux image: /boot/vmlinuz-5.3.18-3-pve
Found initrd image: /boot/initrd.img-5.3.18-3-pve
Found linux image: /boot/vmlinuz-4.19.0-14-amd64
Found initrd image: /boot/initrd.img-4.19.0-14-amd64
Found linux image: /boot/vmlinuz-4.19.0-13-amd64
Found initrd image: /boot/initrd.img-4.19.0-13-amd64
  /dev/sdg: open failed: No such device or address
  /dev/sdg: open failed: No such device or address
Found memtest86+ image: /memtest86+.bin
Found memtest86+ multiboot image: /memtest86+_multiboot.bin


But on reboot all devices were available, so I'm not sure if there is any significance to that.
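For what it's worth, those "/dev/sdg: open failed" lines typically come from LVM's device scan, which grub-mkconfig triggers while probing block devices, rather than from grub itself. A quick way to check whether anything still references the device — a rough sketch, assuming the lvm2 tools are installed:

```shell
# Check whether any LVM physical volume still references /dev/sdg
# (stale PV entries for a vanished device produce exactly this kind
# of "open failed" noise during grub-mkconfig).
pvs 2>&1 | grep sdg || echo "no PV references /dev/sdg"

# Confirm whether the kernel currently sees the device at all.
lsblk /dev/sdg 2>/dev/null || echo "/dev/sdg not present"
```

If the device shows up normally in both, as it seems to here after the reboot, the messages were most likely transient noise from the device being briefly unreachable during the upgrade.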


Thanks!

--
Lindsay


_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user