OpenSSH 10.0, which is the base version that will be shipped with
Debian 13 trixie, removes support for the DSA signature algorithm [0].
Since DSA has been marked deprecated for some time and generating DSA
signatures with OpenSSH 10.0 will fail, remove it.
[0] https://www.openssh.com/txt/release-10.
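To illustrate the effect, a minimal Perl sketch of dropping DSA from a
managed key type list; the list and paths are illustrative, not the
actual template code:

use strict;
use warnings;

# Hypothetical list of host key types a template manages; with
# OpenSSH 10.0, 'dsa' can no longer be generated, so it is dropped.
my @host_key_types = qw(rsa ecdsa ed25519);    # 'dsa' removed

for my $type (@host_key_types) {
    my $path = "/etc/ssh/ssh_host_${type}_key";
    print "would regenerate $path\n";
}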
Skip rewriting any SSH host keys that are actively marked as ignored by
the container template.
This is done for consistency with remove_existing_ssh_host_keys(), which
skips removing any ignored SSH host keys as well.
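A minimal sketch of the intended skip logic, assuming a hypothetical
is_ignored_file() helper; the real container tooling is structured
differently:

use strict;
use warnings;

# Hypothetical ignore check; real templates mark ignored files differently.
sub is_ignored_file {
    my ($ignore_list, $file) = @_;
    return grep { $_ eq $file } @$ignore_list;
}

sub rewrite_ssh_host_keys {
    my ($ignore_list, @keys) = @_;
    for my $key (@keys) {
        # Consistent with remove_existing_ssh_host_keys(): leave
        # ignored keys untouched instead of rewriting them.
        next if is_ignored_file($ignore_list, $key);
        print "rewriting $key\n";
    }
}

rewrite_ssh_host_keys(['/etc/ssh/ssh_host_rsa_key'],
    '/etc/ssh/ssh_host_rsa_key', '/etc/ssh/ssh_host_ed25519_key');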
Signed-off-by: Daniel Kral
---
Because of the HA Rules stuff, I unfortunately
Remove existing SSH host keys after container creation to prevent
multiple containers sharing the same SSH host keys, especially those
which are not overwritten/generated by rewrite_ssh_host_keys() later.
This is called in the Base's post_create_hook(...) to prevent unwanted
removal for certain ty
Add test cases for strict positive resource affinity rules, i.e. where
resources must be kept on the same node together. These verify the
behavior of the resources in strict positive resource affinity rules in
case of a failover of their assigned nodes in the following scenarios:
1. 2 resources in
Add documentation about HA Resource Affinity rules, what effects those
have on the CRS scheduler, and what users can expect when those are
changed.
There are also a few points on the rule conflicts/errors list which
describe some conflicts that can arise from a mixed usage of HA Node
Affinity rule
Add a migration preconditions API endpoint for containers in a similar
vein to the one which is already present for virtual machines.
This is needed to inform callers about positive and negative HA resource
affinity rules which the container is part of. These inform callers
about any comigrated res
Add information about the positive and negative HA resource affinity
rules, which the VM is part of, to the migration precondition API
endpoint. These inform callers about any comigrated resources or
blocking resources that are caused by the resource affinity rules.
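For illustration, the affinity-related part of such a precondition
result could look roughly like the following; the field names are
assumptions, not the endpoint's actual schema:

use strict;
use warnings;

# Hypothetical shape of the affinity-related precondition data:
# resources migrated along (positive rules) and resources that block
# target nodes (negative rules).
my $preconditions = {
    'comigrated-resources' => ['vm:102', 'ct:103'],
    'blocked-resources'    => ['vm:104'],
};

print "$_ would be migrated along\n"
    for @{ $preconditions->{'comigrated-resources'} };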
Signed-off-by: Daniel Kral
---
sr
Add the resource affinity rule plugin to allow users to specify
inter-resource affinity constraints. Resource affinity rules must
specify two or more resources and one of the affinity types:
* positive: keeping HA resources together, or
* negative: keeping HA resources separate;
The initial i
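For illustration, parsed rules of this type could look roughly like the
following Perl structures; all property names here are assumptions based
on the description above:

use strict;
use warnings;

# Two hypothetical resource affinity rules as they might look after
# parsing the rules config; each rule has at least two resources.
my $rules = {
    'keep-web-together' => {
        type      => 'resource-affinity',
        affinity  => 'positive',    # keep together
        resources => { 'vm:101' => 1, 'vm:102' => 1 },
    },
    'separate-db' => {
        type      => 'resource-affinity',
        affinity  => 'negative',    # keep apart
        resources => { 'vm:201' => 1, 'vm:202' => 1, 'vm:203' => 1 },
    },
};

print "$_: $rules->{$_}{affinity}\n" for sort keys %$rules;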
Extend the container precondition check to show whether a migration of a
container results in any additional migrations because of positive HA
resource affinity rules or if any migrations cannot be completed because
of any negative resource affinity rules.
In the latter case these migrations would
Make any manual user migration of a resource follow the resource
affinity rules it is part of. That is:
- prevent a resource from being manually migrated to a node that
  contains a resource it must be kept separate from (negative resource
  affinity).
- make resources, which must be
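A minimal sketch of the negative-affinity check from the first point;
the helper and data layout are hypothetical:

use strict;
use warnings;

# Deny a manual migration if the target node already runs a resource
# that the migrated resource must be kept separate from.
sub check_negative_affinity {
    my ($target_node, $separate_from, $resource_nodes) = @_;
    for my $other (@$separate_from) {
        die "cannot migrate: $other already on $target_node\n"
            if ($resource_nodes->{$other} // '') eq $target_node;
    }
}

my $resource_nodes = { 'vm:202' => 'node2' };
eval { check_negative_affinity('node2', ['vm:202'], $resource_nodes) };
print $@ if $@;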
Add an option to VirtFail's name to allow the start and migrate fail
counts to apply only on a certain node number with a specific naming
scheme.
This allows a slightly more elaborate test type, e.g. where a service
can start on one node (or any other in that case), but fails to start on
a spe
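A sketch of how such a naming scheme could be parsed; the exact format
used here is an assumption, not VirtFail's real syntax:

use strict;
use warnings;

# Hypothetical name format: fail<start>:<migrate>[:<node-number>], e.g.
# a resource named "fail2:0:3" only fails to start (twice) on node 3.
sub parse_fail_counts {
    my ($name, $current_node_nr) = @_;
    return (0, 0) if $name !~ /^fail(\d+):(\d+)(?::(\d+))?$/;
    my ($start_fail, $migrate_fail, $node_nr) = ($1, $2, $3);
    # The counts only apply if no node number is given or it matches.
    return (0, 0) if defined($node_nr) && $node_nr != $current_node_nr;
    return ($start_fail, $migrate_fail);
}

my ($s, $m) = parse_fail_counts('fail2:0:3', 3);
print "start fails: $s, migrate fails: $m\n";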
Add checks that determine infeasible resource affinity rules, i.e.
rules whose resources are already restricted by their node affinity
rules in such a way that these cannot be satisfied, or cannot
reasonably be proven to be satisfiable.
Node affinity rules restrict resources to certain nodes by the
Add the HA environment's node list information to the feasibility
check/canonicalization stage. This is needed for at least one rule
check for negative resource affinity rules in an upcoming patch, which
verifies that there are enough available nodes to separate the HA
resources on.
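That rule check boils down to a counting argument: a negative affinity
rule needs at least as many available nodes as it has resources. A
sketch with assumed data layouts:

use strict;
use warnings;

# A negative resource affinity rule is only satisfiable if there are
# at least as many available nodes as resources to separate.
sub check_enough_nodes {
    my ($rule, $nodes) = @_;
    my $needed = scalar keys %{ $rule->{resources} };
    return scalar(@$nodes) >= $needed;
}

my $rule = { resources => { 'vm:1' => 1, 'vm:2' => 1, 'vm:3' => 1 } };
print check_enough_nodes($rule, ['node1', 'node2'])
    ? "feasible\n" : "infeasible: not enough nodes\n";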
Signed-off-by
Add test cases where resource affinity rules are used with the static
utilization scheduler and the rebalance on start option enabled. These
verify the behavior in the following scenarios:
- 7 resources with intertwined resource affinity rules in a 3 node
cluster; 1 node failing
- 3 resources in
Add test cases for strict negative resource affinity rules, i.e. where
resources must be kept on separate nodes. These verify the behavior of
the resources in strict negative resource affinity rules in case of a
failover of the node of one or more of these resources in the following
scenarios:
1.
RFC v1:
https://lore.proxmox.com/pve-devel/20250325151254.193177-1-d.k...@proxmox.com/
RFC v2:
https://lore.proxmox.com/pve-devel/20250620143148.218469-1-d.k...@proxmox.com/
HA rules:
https://lore.proxmox.com/pve-devel/20250704181659.465441-1-d.k...@proxmox.com/
This is the other part, where th
Add test cases to verify that the rule checkers correctly identify and
remove HA Resource Affinity rules from the rules to make the rule set
feasible. The added test cases verify:
- Resource Affinity rules retrieve the correct optional default values
- Resource Affinity rules, which state that two
The HA Manager already handles positive and negative resource affinity
rules for individual resource migrations, but the information about
these is only passed on to the HA environment's logger, i.e., for
production usage these messages end up in the HA Manager node's
syslog.
Therefore, a
Extend the VM precondition check to show whether a migration of a VM
results in any additional migrations because of positive HA resource
affinity rules or if any migrations cannot be completed because of any
negative resource affinity rules.
In the latter case these migrations would be blocked wh
Add HA resource affinity rules as a second rule type to the HA Rules'
tab page as a separate grid so that the columns match the content of
these rules better.
Signed-off-by: Daniel Kral
---
www/manager6/Makefile | 2 ++
www/manager6/ha/Rules.js | 12
Add a mechanism to the node selection subroutine which enforces the
resource affinity rules defined in the rules config.
The algorithm makes in-place changes to the set of nodes in such a way
that the final set contains only the nodes where the resource affinity
rules allow the HA resources to r
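The in-place pruning can be sketched as follows, with hypothetical rule
and node-set layouts:

use strict;
use warnings;

# Remove every node from the candidate set where a negative affinity
# partner already runs; the set is modified in place.
sub apply_negative_affinity {
    my ($allowed_nodes, $separate_from, $resource_nodes) = @_;
    for my $other (@$separate_from) {
        my $node = $resource_nodes->{$other} // next;
        delete $allowed_nodes->{$node};
    }
}

my $allowed = { node1 => 1, node2 => 1, node3 => 1 };
apply_negative_affinity($allowed, ['vm:202'], { 'vm:202' => 'node2' });
print join(',', sort keys %$allowed), "\n";    # node1,node3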
This will be used to retrieve the nodes which a service currently puts
load on and whose resources it uses, when dealing with HA resource
affinity rules in select_service_node(...).
For example, a migrating service A in a negative resource affinity with
services B and C will need to block tho
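A sketch of such a helper with an assumed service-data layout: while a
service migrates, both its source and its target node count as used:

use strict;
use warnings;

# Return the nodes a service currently puts load on; a migrating
# service occupies both its source and its target node.
sub get_used_service_nodes {
    my ($sd) = @_;    # hypothetical service data hash
    my @nodes = ($sd->{node});
    push @nodes, $sd->{target}
        if $sd->{state} eq 'migrate' && defined $sd->{target};
    return @nodes;
}

my $sd = { state => 'migrate', node => 'node1', target => 'node2' };
print join(',', get_used_service_nodes($sd)), "\n";    # node1,node2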
These are needed by the resource affinity rule type in an upcoming
patch, which needs to make changes to the existing rule set to properly
synthesize inferred rules after the rule set is already made feasible.
Signed-off-by: Daniel Kral
---
src/PVE/HA/Rules.pm | 18 ++
1 file cha
Migrate the HA groups config to the HA resources and HA rules config
persistently on disk and retry until it succeeds.
The HA group config is already migrated in the HA Manager in-memory, but
to persistently use them as HA node affinity rules, they must be
migrated to the HA rules config.
As the
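The retry-until-success idea can be sketched like this; the code ref
stands in for the real config locking and writing:

use strict;
use warnings;

# Keep retrying the persistent groups -> rules migration in each
# manager round until writing the rules config succeeds once.
my $migrated = 0;

sub try_migrate_groups_to_rules {
    my ($write_rules) = @_;
    return if $migrated;
    eval { $write_rules->(); $migrated = 1; };
    warn "migration failed, retrying next round: $@" if $@;
}

try_migrate_groups_to_rules(sub { die "cfs lock contended\n" });
try_migrate_groups_to_rules(sub { 1 });    # succeeds on a later round
print $migrated ? "migrated\n" : "pending\n";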
Add documentation about HA Node Affinity rules and general documentation
on what HA rules are for, in a format that is extendable with other HA
rule types in the future.
Signed-off-by: Daniel Kral
append to ha intro
Signed-off-by: Daniel Kral
---
Makefile | 2 +
gen-ha
Add a rules base plugin to allow users to specify different kinds of HA
rules in a single configuration file, which put constraints on the HA
Manager's behavior.
Signed-off-by: Daniel Kral
---
debian/pve-ha-manager.install | 1 +
src/PVE/HA/Makefile | 2 +-
src/PVE/HA/Rules.pm
As the HA groups' failback flag is now part of the HA resources
config, it should also be shown there instead of in the previous HA
groups view.
Signed-off-by: Daniel Kral
---
www/manager6/ha/Resources.js | 6 ++
www/manager6/ha/StatusView.js | 4
2 files changed, 10 insertions(+)
Signed-off-by: Daniel Kral
---
PVE/API2/HAConfig.pm | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/HAConfig.pm b/PVE/API2/HAConfig.pm
index 35f49cbb..d29211fb 100644
--- a/PVE/API2/HAConfig.pm
+++ b/PVE/API2/HAConfig.pm
@@ -12,6 +12,7 @@ use PVE::JSONSchema qw
Migrate the currently configured groups to node affinity rules
in-memory, so that they can be applied as such in the next patches and
therefore replace HA groups internally.
HA node affinity rules in their initial implementation are designed to
be as restrictive as HA groups, i.e. only allow a HA
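Conceptually, each group maps onto one node affinity rule; a rough
sketch of that in-memory transformation, with assumed field names:

use strict;
use warnings;

# Map a legacy HA group onto an equivalent node affinity rule.
sub group_to_node_affinity_rule {
    my ($group) = @_;
    return {
        type      => 'node-affinity',
        resources => { map { $_ => 1 } @{ $group->{services} } },
        nodes     => $group->{nodes},
        strict    => $group->{restricted} ? 1 : 0,
    };
}

my $rule = group_to_node_affinity_rule({
    services   => ['vm:101'],
    nodes      => { node1 => { priority => 1 } },
    restricted => 1,
});
print "strict: $rule->{strict}\n";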
Add the failback property in the HA resources config, which is
functionally equivalent to the negation of the HA group's nofailback
property. It will be used to migrate HA groups to HA node affinity
rules.
The 'failback' flag is set to be enabled by default as the HA group's
nofailback property wa
Explicitly state all the parameters at all call sites of
select_service_node(...) to clarify which state each of them is in.
The call site in next_state_recovery(...) sets $best_scored to 1, as it
should find the next best node when recovering from the failed node
$current_node. All references to $bes
Signed-off-by: Daniel Kral
---
src/PVE/Cluster.pm | 1 +
src/pmxcfs/status.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index 3b1de57..9ec4f66 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -69,6 +69,7 @@ my $observed = {
'ha/c
Add test cases to verify that the rule checkers correctly identify and
remove HA rules from the rules to make the rule set feasible. For now,
there only are HA Node Affinity rules, which verify:
- Node Affinity rules retrieve the correct optional default values
- Node Affinity rules, which specify
RFC v1:
https://lore.proxmox.com/pve-devel/20250325151254.193177-1-d.k...@proxmox.com/
RFC v2:
https://lore.proxmox.com/pve-devel/20250620143148.218469-1-d.k...@proxmox.com/
I've separated the core HA Rules module and the transformation from HA
groups to HA Node Affinity rules (formerly known as
Add CRUD API endpoints for HA rules, which assert whether the given
properties for the rules are valid and will not make the existing rule
set infeasible.
Disallowing changes to the rule set via the API that would make this
and other rules infeasible makes it safer for users of the HA Manager
t
Replace the HA group mechanism with the functionally equivalent node
affinity rules' get_node_affinity(...), which enforces the node affinity
rules defined in the rules config.
This allows the $groups parameter to be replaced with the $rules
parameter in select_service_node(...) as all behavior of
Remove the HA group column from the HA Resources grid view and the HA
group selector from the HA Resources edit window, as these will be
replaced by semantically equivalent HA node affinity rules in the next
patch.
Add the field 'failback' that is moved to the HA Resources config as
part of the mi
Introduce the node affinity rule plugin to allow users to specify node
affinity constraints for independent HA resources.
Node affinity rules must specify one or more HA resources, one or more
nodes with optional priorities (the default is 0), and a strictness,
which is either
* 0 (non-strict):
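A sketch of how prioritized nodes could be evaluated, with the
highest-priority group winning; names and layout are assumptions:

use strict;
use warnings;

# Pick the highest-priority group of nodes from a node affinity rule.
sub get_preferred_nodes {
    my ($rule) = @_;
    my %by_prio;
    while (my ($node, $opts) = each %{ $rule->{nodes} }) {
        push @{ $by_prio{ $opts->{priority} // 0 } }, $node;
    }
    my ($highest) = sort { $b <=> $a } keys %by_prio;
    return @{ $by_prio{$highest} };
}

my @pref = get_preferred_nodes({
    nodes => {
        node1 => { priority => 2 },
        node2 => { priority => 2 },
        node3 => {},    # default priority 0
    },
});
print join(',', sort @pref), "\n";    # node1,node2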
Expose the HA rules API endpoints through the CLI in its own subcommand.
The names of the sub-subcommands are chosen to be consistent with the
other commands provided by the ha-manager CLI for HA resources and
groups, but are grouped into a subcommand.
The properties specified for the 'rules config' c
Add methods to the HA environment to read and write the rules
configuration file for the different environment implementations.
The HA Rules are initialized with property isolation since it is
expected that other rule types will use similar property names with
different semantic meanings and/or p
Remove HA resources from the rules in which they are used when these HA
resources are removed by delete_service_from_config(...), which is
called by the HA resources' delete API endpoint and possibly by
external callers, e.g. if the HA resource is removed externally.
If all of the rules' HA resources have b
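A sketch of the cleanup, assuming a hash-of-rules layout: drop the
resource everywhere and delete rules that end up empty:

use strict;
use warnings;

# Remove a deleted HA resource from all rules; drop rules that no
# longer reference any resource afterwards.
sub remove_resource_from_rules {
    my ($rules, $sid) = @_;
    for my $ruleid (keys %$rules) {
        my $resources = $rules->{$ruleid}{resources};
        delete $resources->{$sid};
        delete $rules->{$ruleid} if !%$resources;
    }
}

my $rules = { r1 => { resources => { 'vm:101' => 1 } } };
remove_resource_from_rules($rules, 'vm:101');
print %$rules ? "rules left\n" : "empty rule dropped\n";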
As the signature of select_service_node(...) has become rather long
already, make it more compact by retrieving service- and
affinity-related data directly from the service state in $sd, and by
introducing a $node_preference parameter to distinguish the behaviors of
$try_next and $best_scored, which have
Read the rules configuration in each round and update the canonicalized
rules configuration if there were any changes since the last round, to
reduce the number of times the rule set is verified.
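The change amounts to caching a digest of the rules config and only
re-running canonicalization when it differs; a sketch with a stand-in
digest:

use strict;
use warnings;
use Digest::SHA qw(sha1_hex);

my $last_digest = '';

# Re-canonicalize the rule set only if the raw config changed since
# the last manager round.
sub maybe_update_rules {
    my ($raw_config, $canonicalize) = @_;
    my $digest = sha1_hex($raw_config);
    return if $digest eq $last_digest;
    $last_digest = $digest;
    $canonicalize->();
}

maybe_update_rules("rules v1", sub { print "canonicalized\n" });
maybe_update_rules("rules v1", sub { print "canonicalized\n" });    # skipped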
Signed-off-by: Daniel Kral
---
src/PVE/HA/Manager.pm | 20 +++-
1 file changed, 19 i
diff --git a/pve-rs/src/bindings/sdn/fabrics.rs b/pve-rs/src/bindings/sdn/fabrics.rs
index 099c1a7ab515..f5abb1b72099 100644
--- a/pve-rs/src/bindings/sdn/fabrics.rs
+++ b/pve-rs/src/bindings/sdn/fabrics.rs
@@ -46,6 +46,34 @@ pub mod pve_rs_sdn_fabrics {
perlmod::declare_magic!(Box : &PerlF
SSHFS Example Storage Plugin - v2
=================================
Add a custom storage plugin based on SSHFS [0] to serve as an example
for an upcoming storage plugin development guide. This plugin should
also be ready for production usage, though it would be nice to get some
more testing (and p
Signed-off-by: Max R. Carrara
---
Makefile | 1 +
plugin-sshfs/Makefile | 71 +++
plugin-sshfs/debian/changelog | 5 +++
plugin-sshfs/debian/control | 22 ++
plugin-sshfs/debian/copyright | 20 +
plug
This commit adds an example implementation of a custom storage plugin
that uses SSHFS [0] as the underlying filesystem.
The implementation is very similar to that of the NFS plugin; as a
prerequisite, it is currently necessary to use pubkey auth and have
the host's root user's public key deployed
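As a rough sketch of the core idea, mounting the remote directory via
sshfs with pubkey-only auth; paths and options here are illustrative,
not the plugin's actual code:

use strict;
use warnings;

# Mount a remote directory with sshfs; relies on the host's root
# public key being deployed on the remote side (pubkey auth only).
sub sshfs_mount {
    my ($user, $host, $remote_dir, $mountpoint) = @_;
    my @cmd = (
        'sshfs', "$user\@$host:$remote_dir", $mountpoint,
        '-o', 'PreferredAuthentications=publickey',
    );
    system(@cmd) == 0 or die "sshfs mount failed: $?\n";
}

# Example (not run here):
# sshfs_mount('root', 'backup.example.com', '/srv/storage', '/mnt/sshfs');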
[snip]
diff --git a/pve-rs/src/bindings/sdn/fabrics.rs b/pve-rs/src/bindings/sdn/fabrics.rs
index a7a740f5aac9..099c1a7ab515 100644
--- a/pve-rs/src/bindings/sdn/fabrics.rs
+++ b/pve-rs/src/bindings/sdn/fabrics.rs
@@ -6,6 +6,8 @@ pub mod pve_rs_sdn_fabrics {
//! / writing the configuration,
On 04.07.2025 14:57, Wolfgang Bumiller wrote:
On Wed, Jul 02, 2025 at 04:50:15PM +0200, Gabriel Goller wrote:
From: Stefan Hanreich
The FabricConfig from proxmox-ve-config implements CRUD functionality
for Fabrics and Nodes stored in the section config. We expose them via
perlmod, so they can
According to Apple's App Store review guidelines all apps must include a
link to their privacy policy within the App [0]. To fix the issue add a
new list item in the settings screen that will allow users to access the
privacy policy.
[0] - https://developer.apple.com/app-store/review/guidelines/#l
The `.lock` file was not up to date with the newer flutter version.
Ran `flutter pub get` again to fix the issue. Some of the transitive
dependencies are also updated to support the newer dart SDK constraint.
Signed-off-by: Shan Shaji
---
pubspec.lock | 84 ++--
On 6/20/25 16:31, Daniel Kral wrote:
> Add checks, which determine infeasible colocation rules, because their
> services are already restricted by their location rules in such a way,
> that these cannot be satisfied or are reasonable to be proven to be
> satisfiable.
>
> Positive colocation rule s
[snip]
diff --git a/proxmox-sdn-types/src/net.rs b/proxmox-sdn-types/src/net.rs
new file mode 100644
index 000000000000..78a47983f0c7
--- /dev/null
+++ b/proxmox-sdn-types/src/net.rs
@@ -0,0 +1,329 @@
+use std::{
+    fmt::Display,
+    net::{IpAddr, Ipv4Addr, Ipv6Addr},
+};
+
+use anyhow::{bail,
On Thu Jul 3, 2025 at 4:00 PM CEST, Dominik Csapak wrote:
> > + backgroundColor: Theme.of(context).colorScheme.surfaceContainer,
>
> this change seems to be unrelated?
This was intended. The change was made to match the backgroundColor of
`ProxmoxLoginSelector`. Previously it was using the de
On 7/2/25 16:50, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> There is one endpoint (/all) at the top-level that fetches both types
> of fabric entities (fabrics & nodes) and lists them separately. This
> is used for the main view, in order to avoid having to do two API
> calls. It works
looks like some unrelated formatting changes are here? The previous
commit only contained one line of changes.
https://lore.proxmox.com/pve-devel/20250522161731.537011-35-s.hanre...@proxmox.com/
On 7/2/25 16:50, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> For special types that were encod
On 7/2/25 16:50, Gabriel Goller wrote:
> With the introduction of fabrics, frr configuration generation and
> etc/network/interfaces generation has been reworked and renamed for
> better clarity, since now not only zones / controllers are responsible
> for generating the ifupdown / FRR configura
On Fri Jul 4, 2025 at 3:30 PM CEST, Thomas Lamprecht wrote:
> On 04.07.25 at 14:33, Max Carrara wrote:
> > On Wed Jul 2, 2025 at 10:14 PM CEST, Thomas Lamprecht wrote:
> >> On 16.04.25 at 14:47, Max Carrara wrote:
> >>> +my $CLUSTER_KNOWN_HOSTS = "/etc/pve/priv/known_hosts";
> >>
> >> For intra-c
Just noted a couple of typos inline
On 6/20/25 16:31, Daniel Kral wrote:
> Add a rules base plugin to allow users to specify different kinds of HA
> rules in a single configuration file, which put constraints on the HA
> Manager's behavior.
>
> Rule checkers can be registered for every plugin wit
On 6/20/25 16:31, Daniel Kral wrote:
> +my $check_feasibility = sub {
> +    my ($rules) = @_;
> +
> +    $rules = dclone($rules);
> +
> +    # set optional rule parameter's default values
> +    for my $rule (values %{ $rules->{ids} }) {
> +        PVE::HA::Rules->set_rule_defaults($rule);
> +
On Wed, Jul 02, 2025 at 04:49:54PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> This crate contains SDN specific types, so they can be re-used across
> multiple crates (The initial use-case being shared types between
> proxmox-frr and proxmox-ve-config).
>
> This initial commit conta
applied this one early, thanks
On Wed, Jul 02, 2025 at 04:49:52PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> proxmox_serde provides helpers for parsing optional numbers / booleans
> coming from perl, so move to using them instead of implementing our
> own versions here. No function
applied this one early, thanks
On Wed, Jul 02, 2025 at 04:49:50PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> The API macro required the enum variants to either have a oneOf or
> ObjectSchema, but did not allow allOf schemas. There's not really a
> reason to not allow allOf as well,
We check for the same condition in the wrapping if-block added in the
previous commit, making these redundant.
Signed-off-by: Maximiliano Sandoval
---
src/watchdog-mux.c | 44
1 file changed, 20 insertions(+), 24 deletions(-)
diff --git a/src/watchdog
Without a clear-cut message in the log, it is very hard to provide a definitive
answer to whether a host fenced or not. In some cases the journal on the disk
can be missing up to 2 minutes between its last logged entry and the time when
another node detects that the corosync link is down; with such a gap
Since this journal entry can be logged multiple times in the lifespan of
the process, we double fork to prevent accumulating zombie processes.
Signed-off-by: Maximiliano Sandoval
---
src/watchdog-mux.c | 18 ++
1 file changed, 18 insertions(+)
diff --git a/src/watchdog-mux.c b/s
Without this check, if nfds is zero, the `continue` statement right
before the `break` will prevent breaking out of the loop and
exiting the process.
If a node does not have corosync quorum, then neither the lrm nor the crm
will update the watchdog-mux, and thus the epoll_wait will time out, hence
n
This change allows defining a second constant in terms of this
one.
Signed-off-by: Maximiliano Sandoval
---
src/watchdog-mux.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/watchdog-mux.c b/src/watchdog-mux.c
index e324c20..d38116b 100644
--- a/src/watchdog-
The sole purpose of this commit is to make the following commit's diff
easier to read.
Signed-off-by: Maximiliano Sandoval
---
src/watchdog-mux.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/src/watchdog-mux.c b/src/watchdog-mux.c
index d38116b..2b8cebf 100644
---
Signed-off-by: Maximiliano Sandoval
---
src/watchdog-mux.c | 21 +
1 file changed, 21 insertions(+)
diff --git a/src/watchdog-mux.c b/src/watchdog-mux.c
index 2b8cebf..0518e86 100644
--- a/src/watchdog-mux.c
+++ b/src/watchdog-mux.c
@@ -30,15 +30,23 @@
#define JOURNALCTL_BIN
On Wed, Jul 02, 2025 at 04:50:18PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> In PVE we use the GET /nodes/{node}/network API endpoint to return all
> currently configured network interfaces on a specific node. In order
> to be able to use SDN fabrics in Ceph and the migration setti
On Fri, Jul 04, 2025 at 03:23:14PM +0200, Stefan Hanreich wrote:
> On 7/4/25 15:14, Wolfgang Bumiller wrote:
>
> [snip]
>
> >> +
> >> +/// Class method: Return all FRR daemons that need to be enabled for
> >> this fabric configuration
> >> +/// instance.
> >
> > Method*
> >
> > Daemons
On 04.07.25 at 14:33, Max Carrara wrote:
> On Wed Jul 2, 2025 at 10:14 PM CEST, Thomas Lamprecht wrote:
>> On 16.04.25 at 14:47, Max Carrara wrote:
>>> +my $CLUSTER_KNOWN_HOSTS = "/etc/pve/priv/known_hosts";
>>
>> For intra-cluster ssh this shared known_host file is being deprecated in
>> favor o
On Wed, Jul 02, 2025 at 04:50:17PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> SDN fabrics can be used to configure IP addresses on interfaces
> directly, so we need to generate the respective ifupdown2
> configuration from the fabrics configuration. We also set some
> additional pro
--- Begin Message ---
> + snapext => { optional => 1 },
>>needs to be "fixed", as the code doesn't handle mixing internal
>>and external snapshots on a single storage..
indeed, I'll fix it
>
>
> +my sub alloc_backed_image {
> + my ($class, $storeid, $scfg, $volname, $backing_snap) =
On 7/4/25 15:14, Wolfgang Bumiller wrote:
[snip]
>> +
>> +/// Class method: Return all FRR daemons that need to be enabled for
>> this fabric configuration
>> +/// instance.
>
> Method*
>
> Daemons? Or would "services" make more sense (and a `.service` suffix?)
It's a bit weird with F
On Fri Jul 4, 2025 at 2:33 PM CEST, Max Carrara wrote:
> On Wed Jul 2, 2025 at 10:14 PM CEST, Thomas Lamprecht wrote:
> > On 16.04.25 at 14:47, Max Carrara wrote:
> > > This commit adds an example implementation of a custom storage plugin
> > > that uses SSHFS [0] as the underlying filesystem.
> >
On Wed, Jul 02, 2025 at 04:50:16PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> We use proxmox-ve-config to generate a FRR config and serialize it
> with the proxmox-frr crate in order to return it to perl in its
> internally used format (an array of strings). The Perl SDN module in
>
On Wed, Jul 02, 2025 at 04:50:15PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> The FabricConfig from proxmox-ve-config implements CRUD functionality
> for Fabrics and Nodes stored in the section config. We expose them via
> perlmod, so they can be used in the API endpoints defined in
--- Begin Message ---
-
> + #we skip snapshot for tpmstate
> + return if $deviceid && $deviceid =~ m/tpmstate0/;
>>I think this is wrong.. this should return 'storage' as well?
Ah yes, indeed, I don't know why I was confused, and thought we
couldn't take a storage snapshot of tpmstate when
Thomas Lamprecht writes:
> On 19.05.25 at 15:09, Maximiliano Sandoval wrote:
>> One sync comes after warning that the watchdog is about to expire, and a
>> second right after the watchdog expires.
>>
>> To maximize the chances the log will contain entries relevant to a fence
>> event. This wo
On Wed Jul 2, 2025 at 10:14 PM CEST, Thomas Lamprecht wrote:
> On 16.04.25 at 14:47, Max Carrara wrote:
> > This commit adds an example implementation of a custom storage plugin
> > that uses SSHFS [0] as the underlying filesystem.
> >
> > The implementation is very similar to that of the NFS plu
--- Begin Message ---
>>these should probably stay in Plugin.pm
Ok, will do, no problem (Fiona asked me to move it out of Plugin)
--- End Message ---
--- Begin Message ---
>
> >>cluster_size is set to 128k, as it reduces qcow2 overhead (reduces
> >>disk,
> >>but also memory needed to cache metadata)
>>
>>should we make this configurable?
I'm not sure yet, I have chosen the best balance between memory <->
performance (too big a block reduces perfor
[snip]
# workspace dependencies
-proxmox-access-control = { version = "0.2.5", path = "proxmox-access-control" }
-proxmox-acme = { version = "1.0.0", path = "proxmox-acme", default-features = false }
-proxmox-api-macro = { version = "1.4.0", path = "proxmox-api-macro" }
-proxmox-apt-api-types =
On Wed, Jul 02, 2025 at 04:50:14PM +0200, Gabriel Goller wrote:
> From: Stefan Hanreich
>
> This module exposes the functionality provided by proxmox-ve-config for
> the SDN fabrics to perl. We add initial support for reading and
> writing the section config stored in /etc/pve/sdn/fabrics.cfg as wel
Got it. Thanks for sharing.
On Fri Jul 4, 2025 at 11:34 AM CEST, Thomas Lamprecht wrote:
> On 04.07.25 at 09:31, Dominik Csapak wrote:
> >>
> >> This is just my doubt. Can I ask why we separated the
> >> proxmox_login_manager as a separate package?
> > I'm not sure why we did this, but my guess
--- Begin Message ---
Original message
From: Fabian Grünbichler
To: Proxmox VE development discussion
Cc: Alexandre Derumier , Thomas
Lamprecht
Subject: Re: [pve-devel] [PATCH-SERIES v7 pve-storage/qemu-server] add
external qcow2 snapshot support
Date: 04/07/2025 13:58:38
> Alexand
On 04.07.25 at 13:52, Fabian Grünbichler wrote:
>> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:45 CEST:
>> allow to rename from|to external snapshot volname
>
> we could consider adding a new API method `rename_snapshot` instead:
>
> my ($class, $scfg, $storeid, $volna
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:44 CEST:
> This patch series implements qcow2 external snapshot support for files && lvm
> volumes
>
> The current internal qcow2 snapshots have bad write performance because no
> metadata can be preallocated.
>
> This is
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:45 CEST:
> allow to rename from|to external snapshot volname
we could consider adding a new API method `rename_snapshot` instead:
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
for the two plugins
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:45 CEST:
> This adds a $running param to volume_snapshot,
> it can be used if some extra actions need to be done at the storage
> layer when the snapshot has already been done at qemu level.
>
> Note: zfs && rbd plugins already
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:45 CEST:
> add a snapext option to enable the feature
>
> When a snapshot is taken, the current volume is renamed to snap volname
> and a current image is created with the snap volume as backing file
>
> Signed-off-by: Alex
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:44 CEST:
> Signed-off-by: Alexandre Derumier
> ---
> src/PVE/Storage/Common.pm | 52 +++
> src/PVE/Storage/Plugin.pm | 47 +--
> 2 files changed, 53 insert
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:44 CEST:
> and use it for plugin linked clone
>
> This also enables extended_l2=on, as it's mandatory for backing file
> preallocation.
>
> Preallocation was missing previously, so it should increase performance
> for linked
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:45 CEST:
> ---
> src/PVE/Storage/Common.pm | 28
> 1 file changed, 28 insertions(+)
>
> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
> index c15cc88..e73eeab 100644
> --- a
haven't fully managed to get through the qemu-server part, but one small thing
below..
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:45 CEST:
> fixme:
> - add test for internal (was missing) && external qemu snapshots
> - is it possible to use blockjob transactions for
> Alexandre Derumier via pve-devel wrote on 04.07.2025 08:45 CEST:
> Returns whether the volume supports qemu snapshots:
> 'internal' : do the snapshot with qemu internal snapshot
> 'external' : do the snapshot with qemu external snapshot
> undef : does not support qemu snaps