On 24/02/2023 12:06, Dominik Csapak wrote:
> Otherwise the created vlan bridge has the default MTU, which is
> unexpected when the original bridge has some other MTU configured.
>
> We already do this for the firewall bridges, so we should do so too for
> the vlan bridges.
>
> Signed-off-by: Dominik Csapak
On 17/02/2023 17:08, Fiona Ebner wrote:
> Also, update the section for the SCSI controller to mention the
> current default.
>
> Fiona Ebner (4):
> qm: hard disk controllers: reword and update section
> qm: hard disk controllers: recommend VirtIO controllers
> qm: emulated devices: mention
This should reduce confusion, as it uses the same naming as smartctl
Signed-off-by: Matthias Heiserer
---
Changes from v2:
new change
src/window/DiskSmart.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/window/DiskSmart.js b/src/window/DiskSmart.js
index 3824175..ba64
Signed-off-by: Matthias Heiserer
---
Changes from v2:
Calculate the field in a different way...
Thanks to Dominik for simplifying/fixing the logic!
src/window/DiskSmart.js | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/window/DiskSmart.js b/src/window/DiskSmart.js
On 2/24/23 13:10, Matthias Heiserer wrote:
Signed-off-by: Matthias Heiserer
---
In some nvidia grid drivers (e.g. 14.4 and 15.x), the kernel module tries to
clean up the mdev device when the VM is shut down. If it cannot do that
(e.g. because we already cleaned it up), its removal process aborts with an
error, such that the vgpu still exists in the driver's book-keeping.
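A minimal sketch of the kind of guard this calls for (helper name and call site are assumptions for illustration, not the actual QemuServer code):

use strict;
use warnings;
use PVE::Tools;

# Only remove the mdev if it still exists in sysfs, so we do not race with
# vendor drivers (e.g. nvidia grid 14.4/15.x) that clean it up themselves.
sub cleanup_mdev_device {
    my ($uuid) = @_;
    my $path = "/sys/bus/mdev/devices/$uuid";
    return if !-e $path; # already gone, nothing to do
    PVE::Tools::file_set_contents("$path/remove", "1");
}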
Signed-off-by: Matthias Heiserer
---
Sorry for the long delay and the overall confusion!
This is the v2 of
https://lists.proxmox.com/pipermail/pve-devel/2023-February/055798.html
The second arrow function spans multiple lines because it really does not fit
on a single one.
The default kernel vhost config only supports 64 slots by default, for
performance reasons, since 2015. The original memory hotplug code was written
before that, using the QEMU maximum of 255 supported slots.
To reach the max memory (4TiB), incremental dimm sizes were used.
Instead of dynamic dimm sizes, use one static dimm size, computed from the
max memory. max can only be a multiple of 64GiB; the dimm size is computed
from the max memory such that we always have 64 slots:
64GiB = 64 slots x 1GiB
128GiB = 64 slots x 2GiB
..
4TiB = 64 slots x 64GiB
Also, with NUMA, we need to share the slots between (up to 8) sockets.
64 is a multiple of 8, e.g. 64 slots = 8 sockets x 8 slots.
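For illustration, the dimm size calculation described above boils down to something like this (helper name and MiB units are my assumptions, not the exact Memory.pm code):

use strict;
use warnings;

# With max memory restricted to multiples of 64GiB, a single static dimm size
# covers all 64 slots: dimm size = max / 64
# e.g. 64GiB -> 1GiB dimms, 128GiB -> 2GiB dimms, 4TiB -> 64GiB dimms
sub static_dimm_size {
    my ($max_mem_mb) = @_;
    return $max_mem_mb / 64;
}

printf "dimm size for 128GiB max: %dMiB\n", static_dimm_size(128 * 1024);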
The current qemu_dimm_list() can return any kind of memory device.
Make it more generic, with an optional device type parameter.
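A rough sketch of what such a generic helper could look like (the name, and that it is built on the query-memory-devices QMP call, are assumptions for illustration):

use strict;
use warnings;
use PVE::QemuServer::Monitor qw(mon_cmd);

# List memory devices of a running VM, optionally filtered by type
# ('dimm', 'virtio-mem', ...).
sub qemu_memdevices_list {
    my ($vmid, $type) = @_;
    my $devices = {};
    my $res = mon_cmd($vmid, 'query-memory-devices');
    for my $dev (@$res) {
        next if defined($type) && $dev->{type} ne $type;
        $devices->{$dev->{data}->{id}} = $dev->{data};
    }
    return $devices;
}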
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 12 +---
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 99b7a21..014d19d 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -151,1
Signed-off-by: Alexandre Derumier
---
test/cfg2cmd/memory-virtio-hugepages-1G.conf | 12 +++
.../memory-virtio-hugepages-1G.conf.cmd | 35 +++
test/cfg2cmd/memory-virtio-max.conf | 11 ++
test/cfg2cmd/memory-virtio-max.conf.cmd | 35 +
4GiB of static memory is needed for DMA + boot memory, as this memory almost
always cannot be unplugged.
One virtio-mem PCI device is set up for each NUMA node on the pci.4 bridge.
virtio-mem uses a fixed number of 32000 blocks; the blocksize is computed as
(max memory - 4096) / 32000, with a minimum of 2MiB to
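As an illustration of that calculation (sizes in MiB; a sketch only, without whatever additional rounding the real code may apply):

use strict;
use warnings;

# blocksize = (max memory - 4096MiB static) / 32000 blocks, at least 2MiB
sub virtiomem_blocksize {
    my ($max_mem_mb) = @_;
    my $blocksize = ($max_mem_mb - 4096) / 32000;
    $blocksize = 2 if $blocksize < 2;
    return $blocksize;
}

printf "blocksize for 128GiB max: %.2fMiB\n", virtiomem_blocksize(128 * 1024);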
Signed-off-by: Alexandre Derumier
---
test/cfg2cmd/memory-max-128G.conf | 11
test/cfg2cmd/memory-max-128G.conf.cmd | 86 +++
test/cfg2cmd/memory-max-512G.conf | 11
test/cfg2cmd/memory-max-512G.conf.cmd | 58 ++
4 files changed, 166 inser
Verify that the defined VM max memory is not bigger than the memory supported
by the host CPU.
Add an early check in the update VM API.
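One way to derive the host CPU's memory limit, shown purely as an illustration (reading the physical address bits from /proc/cpuinfo is my assumption about the approach):

use strict;
use warnings;
use PVE::Tools;

# Maximum physical memory (in MiB) the host CPU can address, derived from the
# 'address sizes' line in /proc/cpuinfo.
sub host_max_mem_mb {
    my $cpuinfo = PVE::Tools::file_get_contents('/proc/cpuinfo');
    my ($bits) = $cpuinfo =~ /^address sizes\s*:\s*(\d+)\s+bits physical/m;
    die "unable to detect host physical address bits\n" if !$bits;
    return (2 ** $bits) / (1024 * 1024);
}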
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 35 +--
PVE/QemuServer/Memory.pm | 19 ++-
2 files changed, 43 inse
If some memory cannot be removed on a specific node,
we try to rebalance again on the other nodes.
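A simplified illustration of the rebalancing idea (data structures and helper name are hypothetical, not the actual unplug code):

use strict;
use warnings;

# Spread $to_remove (in MiB) over the numa nodes; whatever a node cannot give
# back is redistributed over the remaining nodes in further rounds.
sub balance_unplug {
    my ($removable_per_node, $to_remove) = @_;
    my %removed = map { $_ => 0 } keys %$removable_per_node;
    while ($to_remove > 0) {
        my @nodes = grep { $removable_per_node->{$_} > $removed{$_} } keys %removed;
        last if !@nodes; # nothing left that can be unplugged
        my $share = int($to_remove / scalar(@nodes)) || $to_remove;
        for my $node (sort { $a <=> $b } @nodes) {
            my $avail = $removable_per_node->{$node} - $removed{$node};
            my $take = $share < $avail ? $share : $avail;
            $take = $to_remove if $take > $to_remove;
            $removed{$node} += $take;
            $to_remove -= $take;
            last if $to_remove <= 0;
        }
    }
    return \%removed;
}

# e.g. node 1 can only give back 1GiB, the rest is taken from node 0:
# balance_unplug({ 0 => 4096, 1 => 1024 }, 4096) -> { 0 => 3072, 1 => 1024 }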
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 59 +---
1 file changed, 43 insertions(+), 16 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 7 ++--
PVE/QemuConfig.pm | 4 +--
PVE/QemuMigrate.pm| 6 ++--
PVE/QemuServer.pm | 27
PVE/QemuServer/Helpers.pm | 3 +-
PVE/QemuServer/Memory.pm | 67 --
Simply use the dimm_list() returned by QEMU.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 76 +---
1 file changed, 25 insertions(+), 51 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 7ad8fcb..0f4229c 100644
This patch series reworks the current memory hotplug + virtiomem.
The memory option now has extra sub-options:
memory: [[current=]] [,max=] [,virtio=<1|0>]
ex: memory: current=1024,max=131072,virtio=1
For classic memory hotplug, when max memory is defined,
we use 64 fixed-size dimms.
The max option is a
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer/Memory.pm | 37 +++--
1 file changed, 19 insertions(+), 18 deletions(-)
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index e1b811f..99b7a21 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/Qem
Otherwise the created vlan bridge has the default MTU, which is
unexpected when the original bridge has some other MTU configured.
We already do this for the firewall bridges, so we should do so too for
the vlan bridges.
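A minimal sketch of the intended behaviour (variable names are illustrative, not the actual pve-common code):

use strict;
use warnings;
use PVE::Tools;

my ($bridge, $vlanbridge) = ('vmbr0', 'vmbr0v100'); # illustrative names

# inherit the MTU of the original bridge instead of keeping the kernel default
my $mtu = PVE::Tools::file_read_firstline("/sys/class/net/$bridge/mtu");
PVE::Tools::run_command(['/sbin/ip', 'link', 'set', $vlanbridge, 'mtu', $mtu]);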
Signed-off-by: Dominik Csapak
---
technically this is a breaking change i t