This series has been superseded by version 4:
https://lore.proxmox.com/pve-devel/20250726010626.1496866-1-a.laute...@proxmox.com/T/#t
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
RFC:
* switch from pve9-storage to pve-storage-90 schema
src/PVE/API2/Storage/Status.pm | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.p
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL:
* rebased on current master
* switch to new, more generic read_cgroup_pressure function
* add pressures to return properties
]
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
v2:
* add return properties for pressures
Signed-off-by: Aaron Lauterer
---
Notes:
currently it checks for lt 9.0.0~12. Should it only be applied in a later
version, don't forget to adapt the version check!
I tested it by bumping the version to 9.0.0~12
upgraded to it -> migration ran
reinstalled -> no migration
Signed-off-by: Aaron Lauterer
---
Notes:
changes since RFC:
* switch from pve9-vm to pve-vm-90 schema
src/PVE/API2/LXC.pm | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index a56c441..eb8873e 100644
--- a/src/PVE
If we see that the migration to the new pve-{type}-9.0 rrd format has been done
or is ongoing (the new dir exists), we collect and send out the new format with
additional columns for nodes and VMs (guests).
Those are:
Nodes:
* memfree
* arcsize
* pressures:
* cpu some
* io some
* io full
* memory some
* memory full
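As a minimal sketch of the format selection (directory path, key names and the helper itself are assumptions for illustration, not the actual implementation):
```
sub rrd_schema_key {
    my ($type, $id) = @_;    # e.g. ('node', $nodename) or ('vm', $vmid)
    # if the new per-type directory exists, the migration ran or is ongoing
    if (-d "/var/lib/rrdcached/db/pve-$type-9.0") {
        return "pve-$type-9.0/$id";    # new format with the additional columns
    }
    return "pve2-$type/$id";           # legacy format understood by PVE8 nodes
}
```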
by adding the new memhost field, which is populated for VMs, and
using it if the guest is of type qemu and the field is numerical.
As a result, if the cluster is in a mixed PVE8 / PVE9 situation, for
example during a migration, we will not report any host memory usage, in
numbers or percent, as we
From: Folke Gleumes
Originally-by: Folke Gleumes
[AL:
* rebased on current master
* switch to new, more generic read_cgroup_pressure function
* add pressures to return properties
]
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
v2:
* added return properties
As the new RRD files are quite a bit larger than the old ones, we should
check if the estimated required space is actually available and let the
users know if not.
Secondly, it is possible that a new resource is added while the
node is migrating the RRD files. Therefore, there could be some
Instead of RSS, let's use the same PSS values as for the specific host
view as default, in case this value is not overwritten by the balloon
info.
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
v2:
* follow reorder of memhost collection, before cpu collection that might
From: Folke Gleumes
Pressures are indications that processes needed to wait for their
resources. While 'some' means that some of the processes on the host
(node summary) or in the guest's cgroup had to wait, 'full' means that
all processes couldn't get the resources fast enough.
We set the colors
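For reference, the kernel exposes these pressures as lines like `some avg10=0.12 avg60=0.05 avg300=0.01 total=12345` in /proc/pressure/* and in a cgroup's cpu.pressure/io.pressure/memory.pressure files. A minimal parsing sketch, purely illustrative and not the read_cgroup_pressure helper itself:
```
sub parse_pressure_file {
    my ($path) = @_;    # e.g. /proc/pressure/io or a cgroup's io.pressure
    my $res = {};
    open(my $fh, '<', $path) or return $res;
    while (my $line = <$fh>) {
        if ($line =~ m/^(some|full)\s+avg10=([\d.]+)\s+avg60=([\d.]+)\s+avg300=([\d.]+)/) {
            $res->{$1} = { avg10 => $2, avg60 => $3, avg300 => $4 };
        }
    }
    close($fh);
    return $res;
}
```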
They were missing and just showed the actual field names.
Signed-off-by: Aaron Lauterer
Reviewed-by: Dominik Csapak
---
www/manager6/node/Summary.js | 1 +
www/manager6/panel/GuestSummary.js | 2 ++
2 files changed, 3 insertions(+)
diff --git a/www/manager6/node/Summary.js b/www/manager6
To display the used memory and the ZFS ARC as separate data points,
keeping the old overlapping filled line graphs won't work anymore. We
therefore switch them to area graphs, which are stacked by default.
The order of the fields is important here as it affects the order in the
stacking. This
With the new memhost field, the vertical space is getting tight. We
therefore reduce the height of the separator boxes.
Signed-off-by: Aaron Lauterer
Reviewed-by: Dominik Csapak
---
www/manager6/panel/GuestStatusView.js | 18 --
1 file changed, 16 insertions(+), 2 deletions(-)
as this will also be displayed in the status of VMs
Signed-off-by: Aaron Lauterer
---
Notes:
this is a dedicated patch that should be applied only for PVE9 as it
adds new data in the result
PVE/API2/Cluster.pm | 7 +++
PVE/API2Tools.pm| 3 +++
2 files changed, 10 insertions(+)
if the new rrd pve-node-9.0 files are present, they contain the current
data and should be used.
'decade' is now possible as timeframe with the new RRD format.
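A hedged sketch of the idea on the API side; the timeframe list is the assumed enum plus the new value, and the directory check is illustrative only:
```
my @timeframes = qw(hour day week month year decade);

sub timeframe_available {
    my ($timeframe) = @_;
    return 1 if $timeframe ne 'decade';
    # 'decade' needs the longer-running aggregation of the new files
    return -d '/var/lib/rrdcached/db/pve-node-9.0' ? 1 : 0;
}
```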
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
RFC:
* switch from pve9- to pve-{type}-9.0 schema
PVE/API2/Nodes.
This way we can define a listener when needed to react to any clicks in
the legend. Usually enabling/disabling some data series.
The event is not documented anywhere, but by using the following snippet,
right where we add the listener, it can be observed to happen.
```
// target object assumed for illustration
Ext.mixin.Observable.capture(this.getLegend(), ev => console.log(ev));
```
This makes targeting the undo button more stable in situations where it
might not be the 0 indexed item in the tools.
Signed-off-by: Aaron Lauterer
Reviewed-by: Dominik Csapak
---
src/panel/RRDChart.js | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/panel/RRDChart.js b
This tool is intended to migrate the Proxmox VE (PVE) RRD data files to
the new schema.
Up until PVE8 the schema had been the same for a long time. With PVE9 we
introduced new columns for guests (vm) and nodes. We also switched all
types (vm, node, storage) to the same aggregation schemas as we do
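As a rough illustration of what a per-type definition looks like in RRD terms; the data sources, step and row counts below are made up and do not reflect the actual schema used by the tool:
```
use RRDs;

RRDs::create(
    '/var/lib/rrdcached/db/pve-node-9.0/examplenode',
    '--step', 60,
    'DS:memfree:GAUGE:120:0:U',
    'DS:arcsize:GAUGE:120:0:U',
    'RRA:AVERAGE:0.5:1:1440',     # ~1 day at 1-minute resolution
    'RRA:AVERAGE:0.5:30:1440',    # ~30 days at 30-minute resolution
    'RRA:MAX:0.5:30:1440',
);
die 'rrd create failed: ' . RRDs::error() . "\n" if RRDs::error();
```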
The mem field itself will switch from the outside view to the "inside"
view if the VM is reporting detailed memory usage information via the
ballooning device.
Since sometimes other processes belong to a VM too, for example swtpm,
we collect all PIDs belonging to the VM cgroup and fetch their PSS data.
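A minimal sketch of that collection, assuming a cgroup v2 layout; the cgroup path is an assumption, the real code resolves it via the existing helpers:
```
sub vm_pss_sum {
    my ($vmid) = @_;
    my $total = 0;
    open(my $procs, '<', "/sys/fs/cgroup/qemu.slice/$vmid.scope/cgroup.procs")
        or return undef;
    while (my $pid = <$procs>) {
        chomp $pid;
        open(my $smaps, '<', "/proc/$pid/smaps_rollup") or next;
        while (my $line = <$smaps>) {
            $total += $1 * 1024 if $line =~ m/^Pss:\s+(\d+)\s*kB/;
        }
        close($smaps);
    }
    close($procs);
    return $total;    # bytes
}
```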
Signed-off-by: Aaron Lauterer
---
Notes:
changes since:
RFC:
* switch from pve9-vm to pve-vm-90 schema
src/PVE/API2/Qemu.pm | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 09d4411b..105dd69e 100644
---
This way, we can provide a bit more context for what the graph is
showing, hopefully making it easier for our users to draw useful
conclusions from the provided information.
Signed-off-by: Aaron Lauterer
---
Notes:
while not available for all graphs for now, this should help users
underst
by utilizing the itemclick event of a chart's legend and storing it as
state.
Signed-off-by: Aaron Lauterer
---
Notes:
this could potentially be squashed with (ui: GuestSummary: memory switch
to stacked and add hostmem)
www/manager6/panel/GuestSummary.js | 12 ++--
1 fil
The new columns we get from RRD are added.
Since we are switching the memory graphs to stacked graphs, we need to
handle them a bit differently because:
* gaps are not possible, we need to have a value, ideally 'null', when
there is no data, which makes it easier to handle in the tooltip
* calculate some
We switch the memory graph to a stacked area graph, similar to what we
have now on the node summary page.
Since the order is important, we need to define the colors manually, as
the default color scheme would swap the colors compared to how we
usually have them.
Additionally we add the host memory view as a
With PVE9 we now have additional fields in the metrics that are
collected and distributed in the cluster. The new fields/columns are
added at the end of the existing ones. This makes it possible for PVE8
installations to still use them by cutting off the new additional data.
To make it more future pro
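Conceptually that compatibility can be sketched like this; the colon separator and the take-the-first-N-fields helper are assumptions for illustration, not the actual status code:
```
sub parse_known_columns {
    my ($data, $known_column_count) = @_;
    my @values = split(/:/, $data);
    # an older node simply ignores the columns appended after the ones it knows
    splice(@values, $known_column_count) if scalar(@values) > $known_column_count;
    return \@values;
}
```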
Signed-off-by: Aaron Lauterer
---
debian/control | 1 +
1 file changed, 1 insertion(+)
diff --git a/debian/control b/debian/control
index ffac171c..d1985f65 100644
--- a/debian/control
+++ b/debian/control
@@ -83,6 +83,7 @@ Depends: apt (>= 1.5~),
postfix | mail-transport-agent,
based on the termproxy packaging. Nothing fancy so far.
Signed-off-by: Aaron Lauterer
---
Notes:
I added the links to the repos even though they don't exist yet, so if
the package and repo name change, make sure to adapt those :)
Cargo.toml | 4 +-
Makefile
to make it more obvious that the legend items can be clicked
Signed-off-by: Aaron Lauterer
---
src/panel/RRDChart.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/panel/RRDChart.js b/src/panel/RRDChart.js
index 35bc186..7f0b923 100644
--- a/src/panel/RRDChart.js
+++ b/src/panel/RRDChart
This patch series does a few things. It expands the RRD format for nodes and
VMs. For all types (nodes, VMs, storage) we adjust the aggregation to align
them with the way they are done on the Backup Server. Therefore, we have new
RRD definitions for all 3 types.
New values are added for nodes and
With PVE9 we introduced a new RRD format that has different aggregation
steps, similar to what we use in the Backup Server.
We therefore need to adapt the functions that get data from RRD
accordingly.
The result is usually a finer resolution for time windows larger than
hourly.
We also introduce d
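For illustration, fetching from one of the new files via the librrds Perl bindings; path and resolution are assumptions, the point being that the same call now returns more rows for a given window:
```
use RRDs;

my ($start, $step, $names, $data) = RRDs::fetch(
    '/var/lib/rrdcached/db/pve-node-9.0/examplenode',
    'AVERAGE',
    '--resolution', 60,
    '--start', 'end-1d',
    '--end', 'now',
);
die RRDs::error() . "\n" if RRDs::error();
```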
this way we can keep the current behavior, but also make it possible to
finely control a series if needed. For example, if we want a stacked
graph, or just a line without fill.
Additionally we need to adjust the tooltip renderer to also gather the
titles from these directly configured series.
We a
--- Begin Message ---
Hi everyone,
While setting up automated testing for PVE9 we noticed that scsi block
devices were missing several udev properties.
After a bit of digging, the root cause turns out to be a regression in
the sg_inq tool (from sg3_utils). This bug slipped in as part of some
refa
Without untainting, offline-deleting a volume-chain snapshot on a
directory storage via the GUI fails with an "Insecure dependency in
exec [...]" error, because volume_snapshot_delete uses the filename in
its qemu-img invocation.
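A minimal untaint sketch for such a case; the pattern is an illustrative assumption, not the one used in the actual fix:
```
sub untaint_volume_path {
    my ($path) = @_;
    # the list assignment yields an empty list if the pattern does not match
    my ($untainted) = $path =~ m!^(/[A-Za-z0-9_./-]+)$!
        or die "unexpected characters in path '$path'\n";
    return $untainted;    # safe to pass to the qemu-img invocation
}
```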
Signed-off-by: Friedrich Weber
---
Notes:
I'm not too familiar wit
On 25.07.25 at 13:24, Adam Kalisz wrote:
> I missed whether the chunk verification speedup when loading chunks got
> applied or whether it was somehow included in the S3-like storage
> option change set.
btw. had a quick talk with Dominik about this topic, while having the
possibility for this wo
--- Begin Message ---
Small correction, the write threshold should be applied to the file
node, since it's the one that will have the correct filesystem
wr_highest_offset.
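In QMP terms this means pointing block-set-write-threshold at the protocol/file node; the node-name scheme and helper below are assumptions for illustration:
```
use PVE::QemuServer::Monitor qw(mon_cmd);

sub set_drive_write_threshold {
    my ($vmid, $drive_id, $threshold_bytes) = @_;
    # target the file/protocol node, not the format node on top of it
    mon_cmd($vmid, 'block-set-write-threshold',
        'node-name' => "drive-$drive_id-file",    # assumed naming scheme
        'write-threshold' => int($threshold_bytes),
    );
}
```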
--- End Message ---
On 25/07/2025 14:23, Mira Limbeck wrote:
>
>
> On 7/25/25 13:50, Friedrich Weber wrote:
>> On 25/07/2025 13:39, Friedrich Weber wrote:
>>> [...]
>>> +Corosync Over Bonds
>>> +~~~
>>> +
>>> +Using a xref:sysadmin_network_bond[bond] as the only Corosync link can be
>>> +problematic
Superseded by:
https://lore.proxmox.com/pve-devel/20250725140312.250936-1-f.we...@proxmox.com/T/
Testing has shown that running corosync (only) over a bond can be
problematic in some failure scenarios and for certain bond modes. The
documentation only discourages bonds for corosync because corosync can
switch between available networks itself, but does not mention other
caveats when using bond
The first button is 'Create', which creates the configured settings and
closes the window.
The second button (currently 'Create another') creates the configured
settings and reopens the edit window so the user can create it for the
next node.
improving the text to better convey what it actually does
On 7/25/25 13:50, Friedrich Weber wrote:
> On 25/07/2025 13:39, Friedrich Weber wrote:
>> [...]
>> +Corosync Over Bonds
>> +~~~
>> +
>> +Using a xref:sysadmin_network_bond[bond] as the only Corosync link can be
>> +problematic in certain failure scenarios. If one of the bonded in
without the last patch, as discussed off-list - it's a bit involved,
we'll see if we can find a better way to handle this, and for the
current stop-gap measure the simple approach is good enough.
added a FIXME instead so we don't forget.
On July 25, 2025 12:50 pm, Fiona Ebner wrote:
> Changes in
On Tue, 22 Jul 2025 11:30:33 +0200, Fiona Ebner wrote:
> The default timeout is not appropriate in all cases, e.g. removing a
> VirtIO SCSI controller can take more than 5 seconds.
>
>
Applied, thanks!
[1/2] qmp: verify device deletion: allow specifying timeout
commit: 6d212deaadb83edaff
On 25/07/2025 13:39, Friedrich Weber wrote:
> [...]
> +Corosync Over Bonds
> +~~~
> +
> +Using a xref:sysadmin_network_bond[bond] as the only Corosync link can be
> +problematic in certain failure scenarios. If one of the bonded interfaces
> fails
> +and stops transmitting packets,
Superseded by:
https://lore.proxmox.com/pve-devel/20250725113922.99886-1-f.we...@proxmox.com/T/
Testing has shown that running corosync (only) over a bond can be
problematic in some failure scenarios and for certain bond modes. The
documentation only discourages bonds for corosync because corosync can
switch between available networks itself, but does not mention other
caveats when using bond
--- Begin Message ---
Hi list,
I missed whether the chunk verification speedup when loading chunks got
applied or whether it was somehow included in the S3-like storage
option change set.
In
https://forum.proxmox.com/threads/abysmally-slow-restore-from-backup.133602/page-7
we have discussed some
The 'lvmqcow2_external_snapshot' test case uses qcow2 on top of LVM
which can only be used with that option currently.
Signed-off-by: Fiona Ebner
Reviewed-by: Fabian Grünbichler
---
src/test/run_qemu_img_convert_tests.pl | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/test/run_qemu_img_
Signed-off-by: Fiona Ebner
Reviewed-by: Fabian Grünbichler
---
src/test/run_qemu_img_convert_tests.pl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/test/run_qemu_img_convert_tests.pl
b/src/test/run_qemu_img_convert_tests.pl
index a3c4cb59..3f7fb98e 100755
--- a/src/tes
Stat-ing /dev/null worked, because the check in the blockdev module is
for block and character devices and then decides based on the
'media=cdrom' flag whether to use the host_cdrom or host_device
driver. But the result should actually be mocked to represent a block
device rather than a character device.
Without the 'discard-no-unref', a qcow2 file can grow beyond what
'qemu-img measure' reports, because of fragmentation. This can lead to
IO errors with qcow2 on top of LVM storages, where the containing LV
is allocated with that size. Guard enabling the option with
having 'snapshot-as-volume-chain'
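Sketched as a guard; the option key follows QEMU's qcow2 blockdev options, while the surrounding structure and storage flag are assumptions:
```
use JSON::PP;

sub maybe_enable_discard_no_unref {
    my ($blockdev, $scfg, $format) = @_;
    # only relevant for qcow2 and only when snapshots are volume chains
    if ($format eq 'qcow2' && $scfg->{'snapshot-as-volume-chain'}) {
        $blockdev->{'discard-no-unref'} = JSON::PP::true();
    }
    return $blockdev;
}
```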
This avoids having the handling for 'discard-no-unref' in two places.
In the tests, rename the relevant target images with a '-target'
suffix to test for them in the mocked volume_snapshot_info() helper.
Suggested-by: Fabian Grünbichler
Signed-off-by: Fiona Ebner
---
Seems quite complex now fo
Signed-off-by: Fiona Ebner
Reviewed-by: Fabian Grünbichler
---
src/test/run_qemu_img_convert_tests.pl | 39 +-
1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/src/test/run_qemu_img_convert_tests.pl
b/src/test/run_qemu_img_convert_tests.pl
index 4bfcf4fb..64
Changes in v2:
* add missing check for qcow2 format for qemu-img convert
* add patch to improve File::stat mocking in tests
* add patch to re-use blockdev infrastructure for qemu-img case
First part is fixing discard in combination with -blockdev. The option
needs to be set for the whole throttle-
Certain options like read-only need to be set on all nodes in the
throttle->fmt->file chain to apply correctly and consistently.
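The propagation itself can be pictured as follows; the chain structure and key names are assumptions, not the actual generator code:
```
sub set_option_on_chain {
    my ($blockdev, $key, $value) = @_;
    # throttle -> format -> file: each layer references the next via 'file'
    while (ref($blockdev) eq 'HASH') {
        $blockdev->{$key} = $value;
        $blockdev = $blockdev->{file};
    }
}
```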
Signed-off-by: Fiona Ebner
Reviewed-by: Fabian Grünbichler
---
src/PVE/QemuServer/Blockdev.pm | 19 +++
1 file changed, 15 insertions(+), 4 deletions(
- Start by mentioning the preconfigured Ceph repository and what options
there are for using Ceph (HCI and external cluster)
- Link to available installation methods (web-based wizard, CLI tool)
- Describe when and how to upgrade
- Add new attributes to avoid manually editing multiple lines
- Creat
On Fri, 25 Jul 2025 10:18:50 +0200, Lukas Wagner wrote:
> The latest updates to the backup-job UI completely drop the term
> "Notification System" from the UI, instead we now use "Global
> notification settings", which should be hopefully a bit clearer to users
> with regards to what this actually
--- Begin Message ---
Hi,
As previously discussed with Alexandre, we talked about an architecture
that enables the use of thin-provisioned LVs with LVM. The idea is to
implement a daemon that processes LV extend requests from a queue.
We considered two possible implementations for the queue a
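As a very rough sketch of the queue-processing idea (one of the options under discussion); the spool directory, request format and error handling are all assumptions:
```
use PVE::Tools;

my $queue_dir = '/run/lvm-extend-queue';    # hypothetical spool directory

sub process_extend_queue_once {
    for my $req_file (sort glob("$queue_dir/*.request")) {
        # assumed request format: "<vg> <lv> <additional bytes>"
        my ($vg, $lv, $bytes) = split(/\s+/, PVE::Tools::file_get_contents($req_file));
        eval { PVE::Tools::run_command(['lvextend', '-L', "+${bytes}b", "$vg/$lv"]) };
        warn "extending $vg/$lv failed: $@" if $@;
        unlink $req_file;
    }
}
```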
Signed-off-by: Fabian Grünbichler
---
Notes:
not sure whether we want to also add a note suggesting to (heavily)
overprovision the LUN/backing device on the storage side, so that there is
enough "space" for creating snapshots?
pve-storage-lvm.adoc | 8 ++--
1 file changed, 6 ins
On 25.07.25 at 9:39 AM, Fabian Grünbichler wrote:
> On July 24, 2025 3:59 pm, Fiona Ebner wrote:
>> diff --git a/src/PVE/QemuServer/QemuImage.pm
>> b/src/PVE/QemuServer/QemuImage.pm
>> index 026c24e9..7f6d5f01 100644
>> --- a/src/PVE/QemuServer/QemuImage.pm
>> +++ b/src/PVE/QemuServer/QemuImage.p
v2:
https://lore.proxmox.com/pve-devel/20250725082046.51199-1-l.wag...@proxmox.com/T/#u
The latest updates to the backup-job UI completely drop the term
"Notification System" from the UI, instead we now use "Global
notification settings", which should be hopefully a bit clearer to users
with regards to what this actually means.
Furthermore, the 'auto' notification mode is not exposed
except for some questions on the last patch, consider this series
Reviewed-by: Fabian Grünbichler
and that last patch Acked-by in principle as well, with those questions
addressed
On July 24, 2025 3:59 pm, Fiona Ebner wrote:
> First part is fixing discard in combination with -blockdev. The opti
On July 24, 2025 3:59 pm, Fiona Ebner wrote:
> Without the 'discard-no-unref', a qcow2 file can grow beyond what
> 'qemu-img measure' reports, because of fragmentation. This can lead to
> IO errors with qcow2 on top of LVM storages, where the containing LV
> is allocated with that size. Guard enabl