--- Begin Message ---
Hi Uwe,
July 25, 2023 9:24 AM, "Uwe Sauter" <[email protected]> wrote:
> So, I've been looking further into this and indeed, there seem to be very
> strict filters regarding
> the block device names that Proxmox allows to be used.
>
> /usr/share/perl5/PVE/Diskmanage.pm
>
> 512 # whitelisting following devices
> 513 # - hdX ide block device
> 514 # - sdX scsi/sata block device
> 515 # - vdX virtIO block device
> 516 # - xvdX: xen virtual block device
> 517 # - nvmeXnY: nvme devices
> 518 # - cciss!cXnY cciss devices
> 519 print Dumper($dev);
> 520 return if $dev !~ m/^(h|s|x?v)d[a-z]+$/ &&
> 521 $dev !~ m/^nvme\d+n\d+$/ &&
> 522 $dev !~ m/^cciss\!c\d+d\d+$/;
>
> I don't understand all the consequences of allowing ALL ^dm-\d+$ devices,
> but with proper filtering it should be possible to allow multipath devices.
> And given that udev rules may create additional symlinks below /dev, each
> device name should be resolved to its canonical name before checking.
It is also a matter of Ceph support [0]. Aside from the extra complexity,
that many HDDs is not a good use case for virtualization. And HDDs definitely
need the DB/WAL on a separate device (60x disks -> 5x NVMe).
Best to set it up with ceph-volume directly. See the forum post [1] for the
experiences of other users.
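For illustration only, a ceph-volume invocation along those lines might look as follows; the device paths are placeholders for your multipath and NVMe devices, and sizing/partitioning is up to you:

```shell
# Sketch (paths are examples, adjust to your setup): prepare one OSD
# on a multipath device with its DB/WAL on an NVMe partition ...
ceph-volume lvm prepare --data /dev/mapper/mpatha --block.db /dev/nvme0n1p1
# ... then activate all prepared OSDs.
ceph-volume lvm activate --all
```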
Cheers,
Alwin
[0] https://docs.ceph.com/en/latest/ceph-volume/lvm/prepare/#multipath-support
[1] https://forum.proxmox.com/threads/ceph-with-multipath.70813/
--- End Message ---
_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user