The commit message doesn't explain the actual issue that it is trying to solve.
AFAICT we do not need the ceph.conf symlinked right away for normal PVE
operations. If /etc/ceph/ceph.conf is not present, the RBD and CephFS
connections will use the dedicated parameters to connect and
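For illustration, a connection with explicit monitor addresses instead of
the symlinked config could look like this (pool, addresses and keyring
path are made up):

    rbd ls -p mypool -m 192.0.2.1,192.0.2.2 -n client.admin \
        --keyring /etc/pve/priv/ceph/mystorage.keyring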
On 12/19/23 15:03, Hannes Duerr wrote:
for base images we call the volume_import of the parent plugin and pass
it as vm-image instead of base-image, then convert it back as base-image
Signed-off-by: Hannes Duerr
---
src/PVE/Storage/LvmThinPlugin.pm | 50
1 file changed, 50 insertions(+)
diff --git
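A minimal sketch of the described approach (the signature is modeled on
the generic Plugin.pm volume_import; an illustration, not the actual
patch):

    sub volume_import {
        my ($class, $scfg, $storeid, $fh, $volname, $format, $snapshot,
            $base_snapshot, $with_snapshots, $allow_rename) = @_;

        # the parent plugin only knows how to allocate regular
        # vm-images, so import a base-image under a vm- name first ...
        (my $vm_volname = $volname) =~ s/^base-/vm-/;

        my $volid = $class->SUPER::volume_import($scfg, $storeid, $fh,
            $vm_volname, $format, $snapshot, $base_snapshot,
            $with_snapshots, $allow_rename);

        # ... and convert it back into a base-image afterwards
        if ($vm_volname ne $volname) {
            my $newname = $class->create_base($storeid, $scfg, $vm_volname);
            $volid = "$storeid:$newname";
        }

        return $volid;
    }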
if a base-image is to be migrated to a lvm-thin storage, a new
vm-image is allocated on the target side, then the data is written
and afterwards the image is converted to a base-image
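On the LV level the final conversion is essentially a rename plus making
the volume read-only, along these lines (VG and LV names made up):

    lvrename pve vm-100-disk-0 base-100-disk-0
    lvchange --permission r pve/base-100-disk-0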
Changes in V2:
* restructure and remove duplication
* fix deactivation of volumes after migration
Changes in
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
so that we can deactivate the original volumes afterwards.
Signed-off-by: Hannes Duerr
---
PVE/QemuMigrate.pm | 5 +++--
1 file changed, 3
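In Perl terms, the idea is roughly the following (variable names are made
up, not the actual patch):

    # remember the source volids before starting the migration ...
    my $source_volids = [ map { $_->{volid} } @local_volumes ];

    # ... so the original volumes can be deactivated afterwards,
    # even if the target side had to pick new names
    PVE::Storage::deactivate_volumes($storecfg, $source_volids);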
Currently, volume activation, PCI reservation and resetting systemd
scope happen in between, so the 5 second expiretime used for port
reservation is not always enough.
It's possible to defer telling QEMU where it should listen for
migration and do so after it has been started via QMP. Therefore,
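Concretely, QEMU can be started with '-incoming defer' and given the URI
only once everything else is set up, roughly (using the mon_cmd helper;
the exact call site is an illustration):

    # after the VM process is up and the port reservation is done
    mon_cmd($vmid, 'migrate-incoming', uri => "tcp:$ip:$port");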
Although already briefly discussed off-list, here is the summary of the
discussion. v3 coming soon.
On 12/19/23 12:54, Fabian Grünbichler wrote:
> this part is now a lot stricter than before (e.g., if the user has
> added multipath devices or something else to the filter for whatever
> reason, the
Tested-by: Friedrich Weber
Tried a couple of upgrades from PVE 7 to PVE 8 (including pve-manager
with this patch). When upgrading, dpkg asks (in most cases) whether to
keep local /etc/lvm/lvm.conf or install package maintainer version, so I
tried both answers. Results were as I'd expect. I'm
On December 15, 2023 2:51 pm, Stefan Hanreich wrote:
> Since LVM 2.03.15 RBD devices are also scanned by default [1]. This
> can lead to guest volumes being recognized and displayed on the host
> when using KRBD for RBD-backed disks. In order to prevent this we add
> an additional filter to the
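The mentioned filter amounts to a reject rule in /etc/lvm/lvm.conf along
these lines (a sketch, not the exact rule from the patch):

    devices {
        # do not scan KRBD-mapped guest disks on the host
        global_filter = [ "r|/dev/rbd.*|" ]
    }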
On December 19, 2023 11:43 am, Hannes Duerr wrote:
> During migration, the volume names may change if the name is already in
> use at the target location. We therefore want to save the original names
> before the migration so that we can deactivate the original volumes
> afterwards.
we already do
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
before the migration so that we can deactivate the original volumes
afterwards.
Signed-off-by: Hannes Duerr
---
PVE/QemuMigrate.pm | 8 ++--
1
for base images we call the volume_import of the parent plugin and pass
it as vm-image instead of base-image, then convert it back as base-image
Signed-off-by: Hannes Duerr
---
src/PVE/Storage/LvmThinPlugin.pm | 51
1 file changed, 51 insertions(+)
diff --git
Changes in V2:
* restructure and remove duplication
* fix deactivation of volumes after migration
Changes in V3:
* fix nits
* remove unnecessary oldname override
* deactivate not only offline volumes, but all of them
qemu-server:
Hannes Duerr (1):
migration: secure and use source volume
Patch series v7 is available:
https://lists.proxmox.com/pipermail/pve-devel/2023-December/061147.html
On 14/12/2023 12:09, Filip Schauer wrote:
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
Make the default value for 'kvm' consistent and take into account
whether the VM will run on the same CPU architecture as the host. This
is a breaking change for VMs with a different CPU architecture running
on an x86_64 host, since in this case the default CPU type for
CPU hotplug switches from
Signed-off-by: Filip Schauer
---
PVE/QemuServer/CPUConfig.pm | 9 +++--
PVE/QemuServer/Helpers.pm | 10 ++
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index ca2946b..c25c2c8 100644
---
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
supported on 32-bit CPU types.
To obtain a list of 32-bit CPU types, refer to the builtin_x86_defs in
target/i386/cpu.c of QEMU. Exclude any entries
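A sketch of such a check (only a few well-known 32-bit types are listed
here, and the helper name is made up; the real list would be derived from
builtin_x86_defs):

    my $cpu_32bit_re = qr/^(486|pentium[23]?|coreduo|athlon|kvm32|qemu32)$/;

    sub assert_ovmf_support {
        my ($cputype, $bios) = @_;
        die "OVMF (UEFI) is not supported on 32-bit CPU type '$cputype'\n"
            if defined($bios) && $bios eq 'ovmf' && $cputype =~ $cpu_32bit_re;
    }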
This patch series prevents starting a 32-bit VM using a 64-bit OVMF BIOS
and makes the default value for 'kvm' during CPU hotplug consistent with
the rest of the code. This is a breaking change for VMs with a different
CPU architecture running on an x86_64 host.
Changes since v6:
* Skip the CPU
Add an is_native($arch) subroutine to compare a CPU architecture to the
host CPU architecture. This is brought in from PVE::QemuServer.
Signed-off-by: Filip Schauer
---
src/PVE/Tools.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index
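Presumably something along these lines (a sketch based on the description
above, using the existing get_host_arch helper):

    sub is_native {
        my ($arch) = @_;
        # compare the given CPU architecture to the host's
        return get_host_arch() eq $arch;
    }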
Signed-off-by: Filip Schauer
---
PVE/QemuServer.pm | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a7b237e..1a1080d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -45,7 +45,7 @@ use PVE::RPCEnvironment;
use
Previously, PVE (7 and 8) hosts would hang at boot if both ntpsec and
ntpsec-ntpdate were installed. The root cause of the hang is an
unfortunate interaction between ntpsec, ntpsec-ntpdate and the PVE
ifupdown2 package. The hang happens because ntpsec-ntpdate installs a
hook
Signed-off-by: Alexandre Derumier
---
src/services/01-dnsmasq-vrf.conf | 4
src/services/Makefile | 1 +
2 files changed, 5 insertions(+)
create mode 100644 src/services/01-dnsmasq-vrf.conf
diff --git a/src/services/01-dnsmasq-vrf.conf b/src/services/01-dnsmasq-vrf.conf
new
add gateway IP to the vnet and force /32 for IPv4 to avoid ARP
problems, and disable forwarding for security
Signed-off-by: Alexandre Derumier
---
src/PVE/Network/SDN/Zones/QinQPlugin.pm | 32 +
.../zones/qinq/dhcp/expected_sdn_interfaces | 34 +++
Signed-off-by: Alexandre Derumier
---
src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 4 ++--
src/PVE/Network/SDN/Zones/EvpnPlugin.pm | 2 +-
src/PVE/Network/SDN/Zones/Plugin.pm | 2 +-
src/PVE/Network/SDN/Zones/SimplePlugin.pm | 9 +
4 files changed, 13 insertions(+), 4 deletions(-)
Signed-off-by: Alexandre Derumier
---
src/PVE/Network/SDN/Zones/EvpnPlugin.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/PVE/Network/SDN/Zones/EvpnPlugin.pm
b/src/PVE/Network/SDN/Zones/EvpnPlugin.pm
index 3c3278a..26a22c7 100644
--- a/src/PVE/Network/SDN/Zones/EvpnPlugin.pm
+++
add gateway IP to the vnet and force /32 for IPv4 to avoid ARP
problems, and disable forwarding for security
Signed-off-by: Alexandre Derumier
---
src/PVE/Network/SDN/Zones/VxlanPlugin.pm | 32 +++
.../zones/vxlan/dhcp/expected_sdn_interfaces | 19 +++
launch dnsmasq in a vrf context with "ip vrf exec dnsmasq.."
use the "default" vrf if the plugin doesn't return a specific vrf
Signed-off-by: Alexandre Derumier
---
src/PVE/Network/SDN/Dhcp.pm | 3 ++-
src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 3 ++-
src/PVE/Network/SDN/Zones.pm
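Conceptually the wrapper changes the invocation like this (zone/vrf names
and options are made up):

    # plain invocation for zones without a dedicated vrf
    dnsmasq --conf-file=/etc/dnsmasq.d/myzone.conf
    # wrapped invocation for e.g. an evpn zone
    ip vrf exec vrf_myzone dnsmasq --conf-file=/etc/dnsmasq.d/myzone.conf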
add gateway IP to the vnet and force /32 for IPv4 to avoid ARP
problems, and disable forwarding for security
Signed-off-by: Alexandre Derumier
---
src/PVE/Network/SDN/Zones/VlanPlugin.pm | 33 +++
.../zones/vlan/dhcp/expected_sdn_interfaces | 27 +++
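The three "gateway ip" patches above all generate essentially the same
kind of vnet stanza; an ifupdown2 sketch with made-up names and
addresses:

    auto myvnet
    iface myvnet
        address 10.0.0.1/32
        ip-forward off
        ip6-forward off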
This patch series adds DHCP support for all zone types.
Also:
- Exec dnsmasq in a specific vrf if needed (currently only evpn)
- Enable-ra only on layer3 subnets
TO FIX:
- Dnsmasq is currently buggy with ipv6 && vrf (no crash, but it's not
listening) and needs to be patched with: