Re: [pve-devel] [PATCH v4 cluster/network/manager/qemu-server/container/docs 00/33] Add support for DHCP servers to SDN

2023-11-17 Thread DERUMIER, Alexandre
ok, I have fixed IPv6 support (tested with Linux && Windows).

Reservation by MAC seems to work. (Maybe the dbus update is wrong; I
need to test it more deeply.)

I'll send a patch tomorrow.


 Original Message 
From: "DERUMIER, Alexandre" 
Reply-To: Proxmox VE development discussion 
To: pve-devel@lists.proxmox.com ,
s.hanre...@proxmox.com 
Subject: Re: [pve-devel] [PATCH v4 cluster/network/manager/qemu-
server/container/docs 00/33] Add support for DHCP servers to SDN
Date: 17/11/2023 17:09:51


> > I've checked the documentation and it seems to support it (in our
case
> > at least, since we have direct connection - correct me if I am
wrong):

> > From [1], the documentation for dhcp-host:

> > " Note that in IPv6 DHCP, the hardware address may not be
> > available,
> > though it normally is for direct-connected clients, or clients
> > using
> > DHCP relays which support RFC 6939. "

> > Maybe the issue here are the respective fwbr interfaces in between?

Hmm, it seems that OpenStack is doing it with MAC reservation:

https://bugs.launchpad.net/neutron/+bug/1861032


but I think that the current format for the IPv6 address in ether is wrong.
I'll do tests this weekend.

(I'm using dual-stack IPv4/IPv6 in production, so I really need to get
this working ^_^ )


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




[pve-devel] applied: [PATCH installer] tui: fix interface sort order

2023-11-17 Thread Thomas Lamprecht
On 17/11/2023 at 18:30, Stoiko Ivanov wrote:
> currently when multiple nics are present in a system the TUI
> sometimes selects the wrong interface (not the one that has the
> default gateway/dhcp lease)
> 
> I assume this is due to HashMap's values yielding an iterator in
> arbitrary order
> 
> Signed-off-by: Stoiko Ivanov 
> ---
> sadly a bit difficult to test due to the randomness - but at least the 3
> tests on a VM were consistent.
> 
>  proxmox-tui-installer/src/main.rs | 13 ++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
>

applied, thanks!

as talked off-list, I remolded this to avoid a separate vector by re-using the
SelectView's data, which is already sorted:

diff --git a/proxmox-tui-installer/src/main.rs b/proxmox-tui-installer/src/main.rs
index 4c14482..4b6b5b2 100644
--- a/proxmox-tui-installer/src/main.rs
+++ b/proxmox-tui-installer/src/main.rs
@@ -493,13 +493,14 @@ fn network_dialog(siv: &mut Cursive) -> InstallerView {
 .map(|iface| (iface.render(), iface.name.clone()));
 let mut ifaces_selection = SelectView::new().popup().with_all(ifnames.clone());
 
+// sort first to always have stable view
 ifaces_selection.sort();
-ifaces_selection.set_selection(
-ifnames
-.clone()
-.position(|iface| iface.1 == options.ifname)
-.unwrap_or(ifaces.len() - 1),
-);
+let selected = ifaces_selection
+.iter()
+.position(|(_label, iface)| *iface == options.ifname)
+.unwrap_or(ifaces.len() - 1);
+
+ifaces_selection.set_selection(selected);
 
 let inner = FormView::new()
 .child("Management interface", ifaces_selection)





[pve-devel] [PATCH installer] tui: fix interface sort order

2023-11-17 Thread Stoiko Ivanov
currently when multiple nics are present in a system the TUI
sometimes selects the wrong interface (not the one that has the
default gateway/dhcp lease)

I assume this is due to HashMap's values yielding an iterator in
arbitrary order

Signed-off-by: Stoiko Ivanov 
---
sadly a bit difficult to test due to the randomness - but at least the 3
tests on a VM were consistent.

 proxmox-tui-installer/src/main.rs | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/proxmox-tui-installer/src/main.rs b/proxmox-tui-installer/src/main.rs
index 4c14482..85b6811 100644
--- a/proxmox-tui-installer/src/main.rs
+++ b/proxmox-tui-installer/src/main.rs
@@ -488,16 +488,23 @@ fn network_dialog(siv: &mut Cursive) -> InstallerView {
 let state = siv.user_data::().unwrap();
 let options = 
 let ifaces = state.runtime_info.network.interfaces.values();
-let ifnames = ifaces
+let ifname_entries = ifaces
 .clone()
 .map(|iface| (iface.render(), iface.name.clone()));
-let mut ifaces_selection = SelectView::new().popup().with_all(ifnames.clone());
+let mut ifaces_selection = SelectView::new().popup().with_all(ifname_entries.clone());
+
+let mut ifnames = ifaces
+.clone()
+.map(|iface| iface.name.clone())
+.collect::>();
+ifnames.sort();
 
 ifaces_selection.sort();
 ifaces_selection.set_selection(
 ifnames
+.iter()
 .clone()
-.position(|iface| iface.1 == options.ifname)
+.position(|iface| *iface == options.ifname)
 .unwrap_or(ifaces.len() - 1),
 );
 
-- 
2.39.2






[pve-devel] applied: [PATCH installer] tui: bootdisk zfs config: add a maximum value to the `copies` option

2023-11-17 Thread Thomas Lamprecht
On 17/11/2023 at 17:32, Stefan Sterz wrote:
> according to `man zfsprops` the copies option can only be 1, 2, or 3.
> limit the field to 3, as setting higher options can't work anyway.
> 
> Signed-off-by: Stefan Sterz 
> ---
> i would have added a `min_value` of 1 too, but `IntegerEditView` is
> based on `NumericEditView` and that doesn't offer a minimal value.
> 
>  proxmox-tui-installer/src/views/bootdisk.rs | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, folded in a note that we already limit that in the GTK UI
(for the exact reason you reference), thanks!





[pve-devel] applied: [PATCH zfsonlinux] pick bug-fixes staged for 2.2.1

2023-11-17 Thread Thomas Lamprecht
On 17/11/2023 at 15:03, Stoiko Ivanov wrote:
> ZFS 2.2.1 is currently being prepared, but the 3 patches added here
> seem quite relevant, as they might cause data loss/panics on setups
> which run `zpool upgrade`.
> See upstreams discussion for 2.2.1:
> https://github.com/openzfs/zfs/pull/15498/
> and the most critical issue:
> https://github.com/openzfs/zfs/pull/15529
> finally:
> https://github.com/openzfs/zfs/commit/459c99ff2339a4a514abcf2255f9b3e5324ef09e
> should not hurt either
> 
> the change to the UBSAN patch (0013) is unrelated, cosmetic only and
> happened by running export-patchqueue.
> 
> Signed-off-by: Stoiko Ivanov 
> ---
> minimally tested by building our current kernel with this and booting it in
> 2 VMs - the tunable (module parameter) is present and set to 0
>  ...und-UBSAN-errors-for-variable-arrays.patch |   5 +-
>  ...g-between-unencrypted-and-encrypted-.patch |  44 
>  ...Add-a-tunable-to-disable-BRT-support.patch | 201 ++
>  ...2.1-Disable-block-cloning-by-default.patch |  42 
>  debian/patches/series |   3 +
>  5 files changed, 291 insertions(+), 4 deletions(-)
>  create mode 100644 
> debian/patches/0015-Fix-block-cloning-between-unencrypted-and-encrypted-.patch
>  create mode 100644 
> debian/patches/0016-Add-a-tunable-to-disable-BRT-support.patch
>  create mode 100644 
> debian/patches/0017-zfs-2.2.1-Disable-block-cloning-by-default.patch
> 
>

applied, thanks!





[pve-devel] [PATCH installer] tui: bootdisk zfs config: add a maximum value to the `copies` option

2023-11-17 Thread Stefan Sterz
according to `man zfsprops` the copies option can only be 1, 2, or 3.
limit the field to 3, as setting higher options can't work anyway.

Signed-off-by: Stefan Sterz 
---
i would have added a `min_value` of 1 too, but `IntegerEditView` is
based on `NumericEditView` and that doesn't offer a minimal value.

 proxmox-tui-installer/src/views/bootdisk.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/proxmox-tui-installer/src/views/bootdisk.rs b/proxmox-tui-installer/src/views/bootdisk.rs
index 00e6ade..7e13e91 100644
--- a/proxmox-tui-installer/src/views/bootdisk.rs
+++ b/proxmox-tui-installer/src/views/bootdisk.rs
@@ -592,7 +592,7 @@ impl ZfsBootdiskOptionsView {
 .unwrap_or_default(),
 ),
 )
-.child("copies", IntegerEditView::new().content(options.copies))
+.child("copies", IntegerEditView::new().content(options.copies).max_value(3))
 .child_conditional(
 is_pve,
 "ARC max size",
--
2.39.2






Re: [pve-devel] [PATCH v4 cluster/network/manager/qemu-server/container/docs 00/33] Add support for DHCP servers to SDN

2023-11-17 Thread DERUMIER, Alexandre


>>I've checked the documentation and it seems to support it (in our
case
>>at least, since we have direct connection - correct me if I am
wrong):

>>From [1], the documentation for dhcp-host:

>>" Note that in IPv6 DHCP, the hardware address may not be available,
>>though it normally is for direct-connected clients, or clients using
>>DHCP relays which support RFC 6939. "

>>Maybe the issue here are the respective fwbr interfaces in between?

Hmm, it seems that OpenStack is doing it with MAC reservation:

https://bugs.launchpad.net/neutron/+bug/1861032


but I think that the current format for the IPv6 address in ether is wrong.
I'll do tests this weekend.

(I'm using dual-stack IPv4/IPv6 in production, so I really need to get
this working ^_^ )





[pve-devel] applied: [RFC PATCH installer 1/2] ui: stdio: replace newlines with whitespaces in prompt messages

2023-11-17 Thread Thomas Lamprecht
On 17/11/2023 at 14:45, Christoph Heiss wrote:
> The line-based protocol currently used cannot handle this properly, so
> introduce this as a stop-gap measure - otherwise messages might be cut
> off.
> 
> This makes it work for now, and the text is wrapped correctly for the
> screen width in the TUI anyway - which is the only user of this so far.
> 
> Will be reworked properly later on.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  Proxmox/UI/StdIO.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied both patches, good enough as a stop-gap, thanks!





Re: [pve-devel] [PATCH v4 cluster/network/manager/qemu-server/container/docs 00/33] Add support for DHCP servers to SDN

2023-11-17 Thread Stefan Hanreich
On 11/17/23 17:05, Stefan Hanreich wrote:
> Maybe the issue here are the respective fwbr interfaces in between?

I guess that's unlikely, since that would affect VMs as well, I suppose.





Re: [pve-devel] [PATCH v4 cluster/network/manager/qemu-server/container/docs 00/33] Add support for DHCP servers to SDN

2023-11-17 Thread Stefan Hanreich


On 11/17/23 16:47, DERUMIER, Alexandre wrote:
>>>   * dnsmasq and IPv6 (and DHCP in general) do not really play well
>>> together,
>>>     so using subnets with IPv6 configured is wonky
> 
> I haven't tested it yet, but it seems that dnsmasq only supports the
> old classic DUID reservation and not MAC?

I've checked the documentation and it seems to support it (in our case
at least, since we have a direct connection - correct me if I am wrong):

From [1], the documentation for dhcp-host:

" Note that in IPv6 DHCP, the hardware address may not be available,
though it normally is for direct-connected clients, or clients using
DHCP relays which support RFC 6939. "

Maybe the issue here are the respective fwbr interfaces in between?

[1] https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html
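For illustration, here is a minimal, hypothetical dnsmasq sketch of such a MAC-based DHCPv6 reservation — the `dhcp-host` form the quoted man-page paragraph refers to. The addresses and the MAC are placeholders, not taken from the discussed SDN series:

```conf
# Hypothetical /etc/dnsmasq.d/example.conf -- placeholder values only.
# Stateful DHCPv6 range for the subnet:
dhcp-range=2001:db8:1::100,2001:db8:1::1ff,12h
# Reserve an address by hardware address; per the man page this only
# works for direct-connected clients or relays supporting RFC 6939:
dhcp-host=bc:24:11:22:33:44,[2001:db8:1::10]
```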




[pve-devel] applied: [PATCH container v2] setup: fix architecture detection for NixOS containers

2023-11-17 Thread Wolfgang Bumiller
applied, thanks





Re: [pve-devel] [PATCH v4 cluster/network/manager/qemu-server/container/docs 00/33] Add support for DHCP servers to SDN

2023-11-17 Thread DERUMIER, Alexandre
>>  * dnsmasq and IPv6 (and DHCP in general) do not really play well
>>together,
>>    so using subnets with IPv6 configured is wonky

I haven't tested it yet, but it seems that dnsmasq only supports the
old classic DUID reservation and not MAC?


because Kea allows MAC reservations:
https://kea.readthedocs.io/en/kea-2.2.0/arm/dhcp6-srv.html#mac-hardware-addresses-in-dhcpv6

using different techniques to find the MAC.
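For comparison, a rough sketch of what such a MAC-based DHCPv6 reservation could look like in Kea — the interface name, subnet, and addresses below are placeholders, not from any tested setup:

```json
{
  "Dhcp6": {
    "interfaces-config": { "interfaces": [ "eth0" ] },
    "mac-sources": [ "duid", "ipv6-link-local", "client-link-addr-option" ],
    "subnet6": [ {
      "subnet": "2001:db8:1::/64",
      "reservations": [ {
        "hw-address": "bc:24:11:22:33:44",
        "ip-addresses": [ "2001:db8:1::10" ]
      } ]
    } ]
  }
}
```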



But for now, I think it could indeed be better to disable the IPv6 IPAM
reservation in the code.





[pve-devel] applied-series: [PATCH v2 many 00/52] revamp notifications; smtp endpoints; system mail

2023-11-17 Thread Thomas Lamprecht
applied the whole series, much thanks, really nice work here!

Also applied the follow-ups from Dominik, thanks for helping out!





Re: [pve-devel] [PATCH access-control 1/2] allow up to three levels of pool nesting

2023-11-17 Thread Fabian Grünbichler
> Wolfgang Bumiller wrote on 17.11.2023 at 11:00 CET:
> 
>  
> On Thu, Nov 16, 2023 at 04:31:25PM +0100, Fabian Grünbichler wrote:
> > with ACLs being inherited along the pool hierarchy.
> > 
> > Signed-off-by: Fabian Grünbichler 
> > ---
> >  src/PVE/AccessControl.pm | 10 --
> >  src/test/perm-test6.pl   | 16 
> >  src/test/test6.cfg   |  5 +
> >  3 files changed, 29 insertions(+), 2 deletions(-)
> > 
> > diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
> > index 9600e59..d9ae611 100644
> > --- a/src/PVE/AccessControl.pm
> > +++ b/src/PVE/AccessControl.pm
> > @@ -1264,7 +1264,7 @@ sub check_path {
> > |/nodes
> > |/nodes/[[:alnum:]\.\-\_]+
> > |/pool
> > -   |/pool/[[:alnum:]\.\-\_]+
> > +   |/pool/(:?[[:alnum:]\.\-\_]+\/?)+
> 
> Should we incorporate the 3 level limit here?
> eg. [chars]+(?:/[chars]+){0,2}
> Although regex would differ from the one used below (although it could
> use the same with only the `{0,2}` bit removed...).

well, there is no harm in accepting a sub-ACL path that has no effect. we 
usually have the opposite issue (forgetting to add/extend the entries here), 
but I also don't mind adding it here and a reminder comment below where the 
limit is enforced for the config/parameter values.
 
> > |/sdn
> > |/sdn/controllers
> > |/sdn/controllers/[[:alnum:]\_\-]+
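The suggested depth limit can be sketched quickly — a hypothetical bash helper (not the Perl from `check_path`) showing what the `{0,2}` variant would accept and reject:

```shell
# Sketch of the suggested limit: a pool name plus at most two nested
# levels under /pool/, instead of unbounded repetition.
re='^/pool/[[:alnum:]._-]+(/[[:alnum:]._-]+){0,2}$'

matches() { [[ $1 =~ $re ]]; }

matches /pool/top           && echo "1 level: accepted"
matches /pool/top/mid/leaf  && echo "3 levels: accepted"
matches /pool/a/b/c/d       || echo "4 levels: rejected"
```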




Re: [pve-devel] [PATCH access-control 2/2] pools: record parent/subpool information

2023-11-17 Thread Fabian Grünbichler

> Wolfgang Bumiller wrote on 17.11.2023 at 11:10 CET:
> 
>  
> On Thu, Nov 16, 2023 at 04:31:26PM +0100, Fabian Grünbichler wrote:
> > and ensure a missing intermediate pool exists at all times.
> > 
> > Signed-off-by: Fabian Grünbichler 
> > ---
> > 
> > Notes:
> > a "missing link" should never happen when modifying via the API (both 
> > deletion
> > with children and addition without the parent existing is blocked 
> > there), but
> > it could happen when manually editing the config.
> > 
> >  src/PVE/AccessControl.pm  | 14 +-
> >  src/test/parser_writer.pl |  4 
> >  2 files changed, 17 insertions(+), 1 deletion(-)
> > 
> > diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
> > index d9ae611..e33f844 100644
> > --- a/src/PVE/AccessControl.pm
> > +++ b/src/PVE/AccessControl.pm
> > @@ -1529,7 +1529,19 @@ sub parse_user_config {
> > }
> >  
> > # make sure to add the pool (even if there are no members)
> > -   $cfg->{pools}->{$pool} = { vms => {}, storage => {} } if 
> > !$cfg->{pools}->{$pool};
> > +   $cfg->{pools}->{$pool} = { vms => {}, storage => {}, pools => {} } 
> > if !$cfg->{pools}->{$pool};
> > +
> > +   if ($pool =~ m!/!) {
> > +   my $curr = $pool;
> > +   while ($curr =~ m!^(.*)/[^/]+$!) {
> 
> I wonder if we should use `.+` instead of `.*`.
> This way it would work the same even with a leading slash.
> That said, we don't allow leading slashes and there's a verify_poolname
> further up in the function so it doesn't really matter much.
> We just need to be careful that we never allow/introduce leading slashes
> anywhere, otherwise this runs with a final iteration where $parent is an
> empty string.

ack.

> > +   # ensure nested pool info is correctly recorded
> > +   my $parent = $1;
> > +   $cfg->{pools}->{$curr}->{parent} = $parent;
> > +   $cfg->{pools}->{$parent} = { vms => {}, storage => {}, 
> > pools => {} } if !$cfg->{pools}->{$parent};
> 
> (could use //= instead of the suffix if, IMO a bit easier to read (and
> doesn't break the 100 char limit :p)

that style is used across the whole parser here, I am always a bit hesitant to 
mix styles within a sub as IMHO that makes it harder to parse.

how about moving the post-if to its own line, with an optional follow-up to convert the whole 
parser to drop post-ifs for initialization? ;)
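The parent-recording loop being discussed can be sketched in shell terms (a hypothetical helper, not the actual Perl): walk a nested pool path upwards, emitting each ancestor that has to exist — which also exposes Wolfgang's caveat that a leading slash would end with an empty parent.

```shell
# Walk a nested pool path upwards, like the while loop in
# parse_user_config: strip the last /component on each iteration.
ancestors() {
    local curr=$1
    while [[ $curr == */* ]]; do
        curr=${curr%/*}    # shell analogue of m!^(.*)/[^/]+$!
        echo "$curr"
    done
}

ancestors a/b/c   # prints a/b, then a
ancestors /a      # prints an empty line: the leading-slash pitfall
```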




Re: [pve-devel] [PATCH manager 1/2] api: pools: support nested pools

2023-11-17 Thread Fabian Grünbichler
> Wolfgang Bumiller wrote on 17.11.2023 at 12:58 CET:
> minor issue

> > diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm
> > index 51ac71941..54e744558 100644
> > --- a/PVE/API2/Pool.pm
> > +++ b/PVE/API2/Pool.pm
> > @@ -354,6 +476,9 @@ __PACKAGE__->register_method ({
> >  
> > my $pool_config = $usercfg->{pools}->{$pool};
> > die "pool '$pool' does not exist\n" if !$pool_config;
> > +   for my $subpool (sort keys %{$pool_config->{pools}}) {
> 
> would prefer $pool_config->{pools}->%*

I'll add a follow-up patch that changes the whole sub as clean-up; I used the 
old style to be internally consistent with the loops below ;)





[pve-devel] applied: [PATCH manager] pvesh: proxy handler: fix handling array parameters

2023-11-17 Thread Thomas Lamprecht
On 17/11/2023 at 15:58, Fiona Ebner wrote:
> As reported in the community forum and reproduced locally, issuing a
> QEMU guest agent command would lead to an error when proxying to
> another node:
> 
>> root@pve8a2 ~ # pvesh create /nodes/pve8a1/qemu/126/agent/exec --command 
>> 'whoami'
>> Wide character in die at /usr/share/perl5/PVE/RESTHandler.pm line 918.
>> proxy handler failed: Agent error: Guest agent command failed, error was 
>> 'Failed to execute child process “ARRAY(0x55842bb161a0)” (No such file or 
>> directory)'
> 
> Fix it, by splitting up array references correctly.
> 
> [0]: https://forum.proxmox.com/threads/136520/
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Not sure if this is the correct place to fix it?

from a quick check I think this should be fine here, as one should expect
arrays when handling API stuff since pve-common's 07f136d ("JSONSchema: add
support for array parameter in api calls, cli and config"), like the command for
QGA's exec, which is marked as array.
And here it's central enough in pvesh for this use case.

 
>  PVE/CLI/pvesh.pm | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
>

applied, thanks!




Re: [pve-devel] [PATCH v4 pve-manager 20/33] sdn: ipam: add ipam panel

2023-11-17 Thread Stefan Hanreich



On 11/17/23 16:04, DERUMIER, Alexandre wrote:
> I wonder if this panel could be integrated in zone panel (accessible
> from the tree).

I fear that this might overload the panel a bit.

> as it's not related to the SDN configuration itself (and doesn't need an
> SDN reload)
> 
> I think this could allow giving users permission to manage IPs in
> the zone, without needing access to the datacenter panel
I agree with this general sentiment, though I'm not sure whether adding
it to the zone panel is a good solution. Doesn't the user then still require
access to the Datacenter panel (since the zone is included there as well)?

Also, another point was to not add any specific new permissions for now.





Re: [pve-devel] [PATCH manager 2/2] ui: fw: allow selecting network interface for rules using combogrid

2023-11-17 Thread Wolfgang Bumiller
just some thoughts from my side:

On Thu, May 11, 2023 at 11:46:20AM +0200, Christoph Heiss wrote:
> For nodes, VMs and CTs we can show the user a list of available network
> interfaces (as that information is available) when creating a new
> firewall rule, much like it is already done in similar places.
> Adds a lot of convenience when creating new firewall rules if they are
> interface-specific, as you get a nice summary of the available ones and
> can simply select it instead of typing it out each time.
> 
> Nodes can use the new `NetworkInterfaceSelector`, for VMs and CTs a new
> component is needed, as the VM/CT config needs to be parsed
> appropriately. It's mostly modeled after the `NetworkInterfaceSelector`
> component and pretty straight-forward.
> For datacenter rules, the simple textbox is kept.
> 
> Signed-off-by: Christoph Heiss 
> ---
> Note: iptables(8) allows two wildcards for the interface, `!` and `+`.
> For VMs and CTs this cannot be specified currently anyway, as the API
> only allows /^net\d+$/. For nodes, since they accept any arbritrary
> string as interface name, this possibility to specify a wildcard for the
> interface gets essentially lost.
> 
> I guess we could still allow users to input any strings if they want -
> is that something that should be possible (using the GUI)? IOW, do we
> want to allow that?



> 
>  www/manager6/Makefile |  1 +
>  .../form/VMNetworkInterfaceSelector.js| 79 +++
>  www/manager6/grid/FirewallRules.js| 37 -
>  www/manager6/lxc/Config.js|  1 +
>  www/manager6/qemu/Config.js   |  1 +
>  5 files changed, 115 insertions(+), 4 deletions(-)
>  create mode 100644 www/manager6/form/VMNetworkInterfaceSelector.js
> 
> diff --git a/www/manager6/Makefile b/www/manager6/Makefile
> index a2f5116c..57ba331b 100644
> --- a/www/manager6/Makefile
> +++ b/www/manager6/Makefile
> @@ -71,6 +71,7 @@ JSSRC=  
> \
>   form/UserSelector.js\
>   form/VLanField.js   \
>   form/VMCPUFlagSelector.js   \
> + form/VMNetworkInterfaceSelector.js  \
>   form/VMSelector.js  \
>   form/VNCKeyboardSelector.js \
>   form/ViewSelector.js\
> diff --git a/www/manager6/form/VMNetworkInterfaceSelector.js 
> b/www/manager6/form/VMNetworkInterfaceSelector.js
> new file mode 100644
> index ..fbe631ba
> --- /dev/null
> +++ b/www/manager6/form/VMNetworkInterfaceSelector.js
> @@ -0,0 +1,79 @@
> +Ext.define('PVE.form.VMNetworkInterfaceSelector', {
> +extend: 'Proxmox.form.ComboGrid',
> +alias: 'widget.PVE.form.VMNetworkInterfaceSelector',
> +mixins: ['Proxmox.Mixin.CBind'],
> +
> +cbindData: (initialConfig) => ({
> + isQemu: initialConfig.pveSelNode.data.type === 'qemu',
> +}),
> +
> +displayField: 'id',
> +
> +store: {
> + fields: ['id', 'name', 'bridge', 'ip'],

Not a fan of only including the 'ip' field without also including the
'ip6' field. And not sure about the formatting with both included :-)
(They can also be "manual" and "dhcp", and ip6 can additionally be
"auto", so it might look weird.)

In patch 1 the NetworkInterfaceSelector has different fields
('iface', 'active', 'type')

> + filterOnLoad: true,
> + sorters: {
> + property: 'id',
> + direction: 'ASC',
> + },
> +},
> +
> +listConfig: {
> + cbind: {},
> + columns: [
> + {
> + header: 'ID',
> + dataIndex: 'id',
> + hideable: false,
> + width: 80,
> + },
> + {
> + header: gettext('Name'),
> + dataIndex: 'name',
> + flex: 1,
> + cbind: {
> + hidden: '{isQemu}',
> + },
> + },
> + {
> + header: gettext('Bridge'),
> + dataIndex: 'bridge',
> + flex: 1,
> + },
> + {
> + header: gettext('IP address'),
> + dataIndex: 'ip',
> + flex: 1,
> + cbind: {
> + hidden: '{isQemu}',
> + },
> + },
> + ],
> +},
> +
> +initComponent: function() {
> + const { node: nodename, type, vmid } = this.pveSelNode.data;
> +
> + Proxmox.Utils.API2Request({
> + url: `/nodes/${nodename}/${type}/${vmid}/config`,
> + method: 'GET',
> + success: ({ result: { data } }) => {
> + let networks = [];
> + for (const [id, value] of Object.entries(data)) {
> + if (id.match(/^net\d+/)) {
> + const parsed = type === 'lxc'
> + ? PVE.Parser.parseLxcNetwork(value)
> + : PVE.Parser.parseQemuNetwork(id, value);

Re: [pve-devel] [PATCH v4 pve-network 06/33] subnet: vnet: refactor IPAM related methods

2023-11-17 Thread Stefan Hanreich



On 11/17/23 15:13, Stefan Lendl wrote:
> If an IP was found, the loop will just start again.
> This should be (tested):
> 
> last if $ip;

Ah yes, I fixed the symptom of this already elsewhere but here it makes
a lot more sense!





Re: [pve-devel] [PATCH manager v14 5/6] add clipboard checkbox to VM Options

2023-11-17 Thread Dominik Csapak

one minor nit inline, but not a blocker for me

did not test again, but the changes are small enough imho

aside from that, for this patch (and the series, since I already
reviewed the previous versions and nothing changed there):

Reviewed-by: Dominik Csapak 

On 11/14/23 10:22, Markus Frank wrote:

Signed-off-by: Markus Frank 
---
  www/manager6/qemu/DisplayEdit.js |  8 
  www/manager6/qemu/Options.js | 82 
  2 files changed, 90 insertions(+)

diff --git a/www/manager6/qemu/DisplayEdit.js b/www/manager6/qemu/DisplayEdit.js
index 9bb1763e..d7cd51a9 100644
--- a/www/manager6/qemu/DisplayEdit.js
+++ b/www/manager6/qemu/DisplayEdit.js
@@ -4,6 +4,9 @@ Ext.define('PVE.qemu.DisplayInputPanel', {
  onlineHelp: 'qm_display',
  
  onGetValues: function(values) {

+   if (typeof this.originalConfig.clipboard !== 'undefined') {
+   values.clipboard = this.originalConfig.clipboard;
+   }
let ret = PVE.Parser.printPropertyString(values, 'type');
if (ret === '') {
return { 'delete': 'vga' };
@@ -11,6 +14,11 @@ Ext.define('PVE.qemu.DisplayInputPanel', {
return { vga: ret };
  },
  
+onSetValues: function(values) {

+   this.originalConfig = values;
+   return values;
+},
+
  items: [{
name: 'type',
xtype: 'proxmoxKVComboBox',
diff --git a/www/manager6/qemu/Options.js b/www/manager6/qemu/Options.js
index 7b112400..fd76feb8 100644
--- a/www/manager6/qemu/Options.js
+++ b/www/manager6/qemu/Options.js
@@ -154,6 +154,88 @@ Ext.define('PVE.qemu.Options', {
},
} : undefined,
},
+   vga: {
+   header: gettext('Clipboard'),
+   defaultValue: false,
+   renderer: function(value) {
+   let vga = PVE.Parser.parsePropertyString(value, 'type');
+   if (vga.clipboard) {
+   return vga.clipboard.toUpperCase();
+   } else {
+   return Proxmox.Utils.defaultText + ' (SPICE)';
+   }
+   },
+   editor: caps.vms['VM.Config.HWType'] ? {
+   xtype: 'proxmoxWindowEdit',
+   subject: gettext('Clipboard'),
+   onlineHelp: 'qm_display',
+   items: {
+   xtype: 'pveDisplayInputPanel',
+   referenceHolder: true,
+   items: [
+   {
+   xtype: 'proxmoxKVComboBox',
+   name: 'clipboard',
+   reference: 'clipboard',
+   itemId: 'clipboardBox',
+   fieldLabel: gettext('Clipboard'),
+   deleteDefaultValue: true,
+   listeners: {
+   change: function(field, value) {
+   let inputpanel = field.up("inputpanel");
+   let isVnc = value === 'vnc';
+   
inputpanel.lookup('vncHint').setVisible(isVnc);
+   
inputpanel.lookup('defaultHint').setVisible(!isVnc);
+   },
+   },
+   value: '__default__',
+   comboItems: [
+   ['__default__', Proxmox.Utils.defaultText + 
' (SPICE)'],
+   ['vnc', 'VNC'],
+   ],
+   },
+   {
+   itemId: 'vncHint',
+   name: 'vncHint',
+   reference: 'vncHint',
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   hidden: true,
+   value: gettext('You cannot use the default 
SPICE clipboard if the VNC Clipboard is selected.')
+   + ' ' + gettext('VNC Clipboard requires 
spice-tools installed in the Guest-VM.'),
+   },


this



+
+   {
+   itemId: 'defaultHint',
+   name: 'defaultHint',
+   reference: 'defaultHint',
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   hidden: false,
+   value: gettext('This option depends on your 
display type.') + ' ' +
+   gettext('If the display type uses SPICE you 
are able to use the default SPICE Clipboard.'),
+   },


and this should have the same style of 

[pve-devel] applied: [PATCH pve-docs] gitignore

2023-11-17 Thread Thomas Lamprecht
On 17/11/2023 at 15:40, Stefan Lendl wrote:
> build output and temporary intermediate files
> 
> Signed-off-by: Stefan Lendl 
> ---
>  .gitignore | 11 +++
>  1 file changed, 11 insertions(+)
> 
>

applied, same & same as the others, thanks!





Re: [pve-devel] [PATCH v4 pve-manager 20/33] sdn: ipam: add ipam panel

2023-11-17 Thread DERUMIER, Alexandre
I wonder if this panel could be integrated into the zone panel (accessible
from the tree),

as it's not related to the SDN configuration itself (and doesn't need an
SDN reload).

I think this could allow giving users permission to manage IPs in
the zone, without needing access to the datacenter panel.


 Original Message 
From: Stefan Hanreich 
Reply-To: Proxmox VE development discussion 
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v4 pve-manager 20/33] sdn: ipam: add ipam
panel
Date: 17/11/2023 12:39:58

Signed-off-by: Stefan Hanreich 
---
 www/css/ext6-pve.css  |  22 ++-
 www/manager6/Makefile |   1 +
 www/manager6/dc/Config.js |  12 +-
 www/manager6/sdn/IpamEdit.js  |  78 ++
 www/manager6/tree/DhcpTree.js | 267 ++
 5 files changed, 372 insertions(+), 8 deletions(-)
 create mode 100644 www/manager6/sdn/IpamEdit.js
 create mode 100644 www/manager6/tree/DhcpTree.js

diff --git a/www/css/ext6-pve.css b/www/css/ext6-pve.css
index e18b173f5..091855356 100644
--- a/www/css/ext6-pve.css
+++ b/www/css/ext6-pve.css
@@ -510,28 +510,38 @@ div.right-aligned {
 content: ' ';
 }
 
-.fa-sdn:before {
+.x-fa-sdn-treelist:before {
 width: 14px;
 height: 14px;
 position: absolute;
 left: 1px;
 top: 4px;
+}
+
+.fa-sdn:before {
 background-image:url(../images/icon-sdn.svg);
 background-size: 14px 14px;
 content: ' ';
 }
 
 .fa-network-wired:before {
-    width: 14px;
-    height: 14px;
-    position: absolute;
-    left: 1px;
-    top: 4px;
 background-image:url(../images/icon-fa-network-wired.svg);
 background-size: 14px 14px;
 content: ' ';
 }
 
+.x-fa-treepanel:before {
+    width: 16px;
+    height: 24px;
+    display: block;
+    background-repeat: no-repeat;
+    background-position: center;
+}
+
+.x-tree-icon-none {
+    display: none;
+}
+
 .x-treelist-row-over > * > .x-treelist-item-icon,
 .x-treelist-row-over > * > .x-treelist-item-text{
 color: #000;
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 093452cd7..93b4ff155 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -108,6 +108,7 @@
JSSRC=  \
    tree/ResourceTree.js\
    tree/SnapshotTree.js\
    tree/ResourceMapTree.js \
+   tree/DhcpTree.js\
    window/Backup.js\
    window/BackupConfig.js  \
    window/BulkAction.js\
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 7d01da5fb..7c2b7b168 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -185,7 +185,7 @@ Ext.define('PVE.dc.Config', {
    me.items.push({
        xtype: 'pveSDNStatus',
        title: gettext('SDN'),
-       iconCls: 'fa fa-sdn',
+       iconCls: 'fa fa-sdn x-fa-sdn-treelist',
        hidden: true,
        itemId: 'sdn',
        expandedOnInit: true,
@@ -203,7 +203,7 @@ Ext.define('PVE.dc.Config', {
        groups: ['sdn'],
        title: 'VNets',
        hidden: true,
-       iconCls: 'fa fa-network-wired',
+       iconCls: 'fa fa-network-wired x-fa-sdn-treelist',
        itemId: 'sdnvnet',
    },
    {
@@ -213,6 +213,14 @@ Ext.define('PVE.dc.Config', {
        hidden: true,
        iconCls: 'fa fa-gear',
        itemId: 'sdnoptions',
+   },
+   {
+       xtype: 'pveDhcpTree',
+       groups: ['sdn'],
+       title: gettext('IPAM'),
+       hidden: true,
+       iconCls: 'fa fa-map-signs',
+       itemId: 'sdnmappings',
    });
        }
 
diff --git a/www/manager6/sdn/IpamEdit.js
b/www/manager6/sdn/IpamEdit.js
new file mode 100644
index 0..18e22c592
--- /dev/null
+++ b/www/manager6/sdn/IpamEdit.js
@@ -0,0 +1,78 @@
+Ext.define('PVE.sdn.IpamEditInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    isCreate: false,
+
+    onGetValues: function(values) {
+   let me = this;
+
+   if (!values.vmid) {
+       delete values.vmid;
+   }
+
+   return values;
+    },
+
+    items: [
+   {
+       xtype: 'pmxDisplayEditField',
+       name: 'vmid',
+       fieldLabel: gettext('VMID'),
+       allowBlank: false,
+       editable: false,
+       cbind: {
+   hidden: '{isCreate}',
+       },
+   },
+   {
+       xtype: 'pmxDisplayEditField',
+       name: 'mac',
+       fieldLabel: gettext('MAC'),
+       allowBlank: false,
+       cbind: {
+   

[pve-devel] [PATCH manager 2/2] ui: vm wizard: allow second iso for windows vms

2023-11-17 Thread Dominik Csapak
This is useful for adding the virtio-win iso for new installs, and thus
we change the default disk type to scsi and network type to virtio.

Signed-off-by: Dominik Csapak 
---
there is still a bug (can look into it on Monday) where, when the
checkbox is unselected again, the default disk is ide1 instead of ide0
(which won't work because q35 only knows ide0 and ide2)
it seems to be a timing issue of when the vmconfig for the disk panel is
updated and how we generate the baseconfig

 www/manager6/qemu/CreateWizard.js |  3 ++
 www/manager6/qemu/MultiHDEdit.js  |  8 +++-
 www/manager6/qemu/OSTypeEdit.js   | 64 ++-
 3 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/www/manager6/qemu/CreateWizard.js 
b/www/manager6/qemu/CreateWizard.js
index 74b1feb6..443e42c6 100644
--- a/www/manager6/qemu/CreateWizard.js
+++ b/www/manager6/qemu/CreateWizard.js
@@ -161,6 +161,9 @@ Ext.define('PVE.qemu.CreateWizard', {
{
xtype: 'pveQemuOSTypePanel',
insideWizard: true,
+   bind: {
+   nodename: '{nodename}',
+   },
},
],
},
diff --git a/www/manager6/qemu/MultiHDEdit.js b/www/manager6/qemu/MultiHDEdit.js
index caf74fad..27884f3f 100644
--- a/www/manager6/qemu/MultiHDEdit.js
+++ b/www/manager6/qemu/MultiHDEdit.js
@@ -37,11 +37,17 @@ Ext.define('PVE.qemu.MultiHDPanel', {
let me = this;
let vm = me.getViewModel();
 
-   return {
+   let res = {
ide2: 'media=cdrom',
scsihw: vm.get('current.scsihw'),
ostype: vm.get('current.ostype'),
};
+
+   if (vm.get('current.ide0') === "some") {
+   res.ide0 = "media=cdrom";
+   }
+
+   return res;
},
 
diskSorter: {
diff --git a/www/manager6/qemu/OSTypeEdit.js b/www/manager6/qemu/OSTypeEdit.js
index 3332a0bc..b3c8fda8 100644
--- a/www/manager6/qemu/OSTypeEdit.js
+++ b/www/manager6/qemu/OSTypeEdit.js
@@ -14,9 +14,19 @@ Ext.define('PVE.qemu.OSTypeInputPanel', {
afterrender: 'onOSTypeChange',
change: 'onOSTypeChange',
},
+   'checkbox[reference=enableSecondCD]': {
+   change: 'onSecondCDChange',
+   },
},
onOSBaseChange: function(field, value) {
-   this.lookup('ostype').getStore().setData(PVE.Utils.kvm_ostypes[value]);
+   let me = this;
+   me.lookup('ostype').getStore().setData(PVE.Utils.kvm_ostypes[value]);
+   let isWindows = value === 'Microsoft Windows';
+   let enableSecondCD = me.lookup('enableSecondCD');
+   enableSecondCD.setVisible(isWindows);
+   if (!isWindows) {
+   enableSecondCD.setValue(false);
+   }
},
onOSTypeChange: function(field) {
var me = this, ostype = field.getValue();
@@ -42,6 +52,30 @@ Ext.define('PVE.qemu.OSTypeInputPanel', {
// ignore multiple disks, we only want to set the type if there is a single disk
}
},
+   onSecondCDChange: function(widget, value, lastValue) {
+   let me = this;
+   let vm = me.getViewModel();
+   if (value) {
+   // only for windows
+   vm.set('current.ide0', "some");
+   vm.notify();
+   me.setWidget('pveBusSelector', 'scsi');
+   me.setWidget('pveNetworkCardSelector', 'virtio');
+   } else {
+   vm.set('current.ide0', "");
+   vm.notify();
+   console.log('set ide0 to 0');
+   me.setWidget('pveBusSelector', 'scsi');
+   let ostype = me.lookup('ostype').getValue();
+   var targetValues = PVE.qemu.OSDefaults.getDefaults(ostype);
+   me.setWidget('pveBusSelector', targetValues.busType);
+   }
+   },
+},
+
+setNodename: function(nodename) {
+   var me = this;
+   me.lookup('isoSelector').setNodename(nodename);
 },
 
 initComponent: function() {
@@ -92,6 +126,34 @@ Ext.define('PVE.qemu.OSTypeInputPanel', {
},
];
 
+   if (me.insideWizard) {
+   me.items.push(
+   {
+   xtype: 'proxmoxcheckbox',
+   reference: 'enableSecondCD',
+   isFormField: false,
+   hidden: true,
+   checked: false,
+   boxLabel: gettext('Add Second CD/DVD Image file (iso)'),
+   listeners: {
+   change: function(cb, value) {
+   me.lookup('isoSelector').setDisabled(!value);
+   me.lookup('isoSelector').setHidden(!value);
+   },
+   },
+   },
+   {
+   xtype: 'pveIsoSelector',
+   reference: 

[pve-devel] [PATCH manager 1/2] ui: refactor iso selector out of the cd input panel

2023-11-17 Thread Dominik Csapak
and make it into a proper field
it's intended to be used like a single field and exactly as before

Signed-off-by: Dominik Csapak 
---
 www/manager6/Makefile|   1 +
 www/manager6/form/IsoSelector.js | 107 +++
 www/manager6/qemu/CDEdit.js  |  38 ++-
 3 files changed, 115 insertions(+), 31 deletions(-)
 create mode 100644 www/manager6/form/IsoSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index dccd2ba1..56dce790 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -88,6 +88,7 @@ JSSRC=
\
form/TagEdit.js \
form/MultiFileButton.js \
form/TagFieldSet.js \
+   form/IsoSelector.js \
grid/BackupView.js  \
grid/FirewallAliases.js \
grid/FirewallOptions.js \
diff --git a/www/manager6/form/IsoSelector.js b/www/manager6/form/IsoSelector.js
new file mode 100644
index ..632ee7f0
--- /dev/null
+++ b/www/manager6/form/IsoSelector.js
@@ -0,0 +1,107 @@
+Ext.define('PVE.form.IsoSelector', {
+extend: 'Ext.container.Container',
+alias: 'widget.pveIsoSelector',
+mixins: [
+   'Ext.form.field.Field',
+   'Proxmox.Mixin.CBind',
+],
+
+nodename: undefined,
+insideWizard: false,
+
+cbindData: function() {
+   let me = this;
+   return {
+   nodename: me.nodename,
+   insideWizard: me.insideWizard,
+   };
+},
+
+getValue: function() {
+   return this.lookup('file').getValue();
+},
+
+setValue: function(value) {
+   let me = this;
+   if (!value) {
+   me.lookup('file').reset();
+   return;
+   }
+   var match = value.match(/^([^:]+):/);
+   if (match) {
+   me.lookup('storage').setValue(match[1]);
+   me.lookup('file').setValue(value);
+   }
+},
+
+getErrors: function() {
+   let me = this;
+   me.lookup('storage').validate();
+   let file = me.lookup('file');
+   file.validate();
+   let value = file.getValue();
+   if (!value || !value.length) {
+   return [""]; // for validation
+   }
+   return [];
+},
+
+setNodename: function(nodename) {
+   let me = this;
+   me.lookup('storage').setNodename(nodename);
+   me.lookup('file').setStorage(undefined, nodename);
+},
+
+setDisabled: function(disabled) {
+   let me = this;
+   me.lookup('storage').setDisabled(disabled);
+   me.lookup('file').setDisabled(disabled);
+   me.callParent();
+},
+
+referenceHolder: true,
+
+items: [
+   {
+   xtype: 'pveStorageSelector',
+   reference: 'storage',
+   isFormField: false,
+   fieldLabel: gettext('Storage'),
+   labelAlign: 'right',
+   storageContent: 'iso',
+   allowBlank: false,
+   cbind: {
+   nodename: '{nodename}',
+   autoSelect: '{insideWizard}',
+   insideWizard: '{insideWizard}',
+   disabled: '{disabled}',
+   },
+   listeners: {
+   change: function(f, value) {
+   let me = this;
+   let selector = me.up('pveIsoSelector');
+   selector.lookup('file').setStorage(value);
+   selector.checkChange();
+   },
+   },
+   },
+   {
+   xtype: 'pveFileSelector',
+   reference: 'file',
+   isFormField: false,
+   storageContent: 'iso',
+   fieldLabel: gettext('ISO image'),
+   labelAlign: 'right',
+   cbind: {
+   nodename: '{nodename}',
+   disabled: '{disabled}',
+   },
+   allowBlank: false,
+   listeners: {
+   change: function() {
+   this.up('pveIsoSelector').checkChange();
+   },
+   },
+   },
+],
+});
diff --git a/www/manager6/qemu/CDEdit.js b/www/manager6/qemu/CDEdit.js
index fc7a59cc..3cc16205 100644
--- a/www/manager6/qemu/CDEdit.js
+++ b/www/manager6/qemu/CDEdit.js
@@ -43,11 +43,7 @@ Ext.define('PVE.qemu.CDInputPanel', {
values.mediaType = 'none';
} else {
values.mediaType = 'iso';
-   var match = drive.file.match(/^([^:]+):/);
-   if (match) {
-   values.cdstorage = match[1];
-   values.cdimage = drive.file;
-   }
+   values.cdimage = drive.file;
}
 
me.drive = drive;
@@ -58,8 +54,7 @@ Ext.define('PVE.qemu.CDInputPanel', {
 setNodename: function(nodename) {
var me = this;
 
-   me.cdstoragesel.setNodename(nodename);
-   me.cdfilesel.setStorage(undefined, nodename);
+   

[pve-devel] applied: [PATCH pve-container] gitignore

2023-11-17 Thread Thomas Lamprecht
Am 17/11/2023 um 15:30 schrieb Stefan Lendl:
> build output
> 
> Signed-off-by: Stefan Lendl 
> ---
>  .gitignore | 4 
>  1 file changed, 4 insertions(+)
>  create mode 100644 .gitignore
> 
>

applied, same w.r.t. commit subject and glob specificity


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH pve-manager] gitignore

2023-11-17 Thread Thomas Lamprecht
Am 17/11/2023 um 15:26 schrieb Stefan Lendl:
> build outputs
> 
> Signed-off-by: Stefan Lendl 
> ---
>  .gitignore | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
>

applied, same w.r.t. the actual commit message; single words are never
a good idea for them. I also made the globs more specific here.





[pve-devel] [PATCH manager] pvesh: proxy handler: fix handling array parameters

2023-11-17 Thread Fiona Ebner
As reported in the community forum and reproduced locally, issuing a
QEMU guest agent command would lead to an error when proxying to
another node:

> root@pve8a2 ~ # pvesh create /nodes/pve8a1/qemu/126/agent/exec --command 
> 'whoami'
> Wide character in die at /usr/share/perl5/PVE/RESTHandler.pm line 918.
> proxy handler failed: Agent error: Guest agent command failed, error was 
> 'Failed to execute child process “ARRAY(0x55842bb161a0)” (No such file or 
> directory)'

Fix it by splitting up array references correctly.

[0]: https://forum.proxmox.com/threads/136520/
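For illustration, the intended expansion can be sketched outside Perl; this Python sketch (function name hypothetical) mirrors the fixed branch logic, where an array reference is expanded element by element while a plain string is still split on NUL bytes:

```python
def expand_param(key, value):
    # Mirror of the fixed proxy_handler branch: array parameters are
    # expanded element by element, while plain strings are split on "\0"
    # (the separator PVE uses for '-list' format values).
    parts = value if isinstance(value, list) else value.split("\0")
    args = []
    for part in parts:
        args += [f"--{key}", part]
    return args

# Stringifying the list instead would hand the remote command a literal
# "ARRAY(0x...)" string, as seen in the reported guest agent error.
print(expand_param("command", ["whoami"]))  # ['--command', 'whoami']
print(expand_param("net0", "virtio,bridge=vmbr0"))
```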

Signed-off-by: Fiona Ebner 
---

Not sure if this is the correct place to fix it?

 PVE/CLI/pvesh.pm | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/pvesh.pm b/PVE/CLI/pvesh.pm
index 730e09af..44a65213 100755
--- a/PVE/CLI/pvesh.pm
+++ b/PVE/CLI/pvesh.pm
@@ -109,7 +109,11 @@ sub proxy_handler {
 my $args = [];
 foreach my $key (keys %$param) {
next if $key eq 'quiet' || $key eq 'output-format'; # just to  be sure
-   push @$args, "--$key", $_ for split(/\0/, $param->{$key});
+   if (ref($param->{$key}) eq 'ARRAY') {
+   push @$args, "--$key", $_ for $param->{$key}->@*;
+   } else {
+   push @$args, "--$key", $_ for split(/\0/, $param->{$key});
+   }
 }
 
 my @ssh_tunnel_cmd = ('ssh', '-o', 'BatchMode=yes', "root\@$remip");
-- 
2.39.2





[pve-devel] applied: [PATCH pve-network] gitignore

2023-11-17 Thread Thomas Lamprecht
Am 17/11/2023 um 15:23 schrieb Stefan Lendl:
> build outputs
> 
> Signed-off-by: Stefan Lendl 
> ---
>  .gitignore | 5 +
>  1 file changed, 5 insertions(+)
>  create mode 100644 .gitignore
> 
>

applied, but same here, added an actual subject and a more
specific glob for the build-dir...





[pve-devel] applied: [PATCH qemu-server] gitignore

2023-11-17 Thread Thomas Lamprecht
Am 17/11/2023 um 15:20 schrieb Stefan Lendl:
> build output and .vscode
> 
> Signed-off-by: Stefan Lendl 
> ---
>  .gitignore | 2 ++
>  1 file changed, 2 insertions(+)
> 
>

applied, with the build-dir made slightly more specific by using
/qemu-server-[0-9]*/ and the subject changed to something more meaningful than
just a single word (would have been only the second commit consisting of
only one word in the subject), as gitignore, especially if one exists
already, is really not helpful when scrolling e.g., git log --oneline





[pve-devel] [PATCH pve-docs] gitignore

2023-11-17 Thread Stefan Lendl
build output and temporary intermediate files

Signed-off-by: Stefan Lendl 
---
 .gitignore | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/.gitignore b/.gitignore
index 36e2dd5..4f95ee9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -5,3 +5,14 @@
 *.tmp
 *.epub
 *.swp
+/#*#
+/.pve-doc-depends
+/api-viewer/apidoc.js
+/asciidoc-pve
+/chapter-index-table.adoc
+/link-refs.json
+/man1-index-table.adoc
+/man5-index-table.adoc
+/man8-index-table.adoc
+/pve-doc-generator.mk
+/pve-docs-*/
-- 
2.42.0






[pve-devel] [PATCH pve-container] gitignore

2023-11-17 Thread Stefan Lendl
build output

Signed-off-by: Stefan Lendl 
---
 .gitignore | 4 
 1 file changed, 4 insertions(+)
 create mode 100644 .gitignore

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000..e9d7353
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,4 @@
+/pve-container-*/
+/*.deb
+/*.buildinfo
+/*.changes
-- 
2.42.0






[pve-devel] [PATCH pve-manager] gitignore

2023-11-17 Thread Stefan Lendl
build outputs

Signed-off-by: Stefan Lendl 
---
 .gitignore | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/.gitignore b/.gitignore
index a6ab4ea7..a4093add 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,7 +4,8 @@ dest/
 *.buildinfo
 *.changes
 
-www/manager6/OnlineHelpInfo.js
-www/manager6/pvemanagerlib.js
-www/mobile/pvemanager-mobile.js
-www/touch/touch-2.4.2/
\ No newline at end of file
+/www/manager6/OnlineHelpInfo.js
+/www/manager6/pvemanagerlib.js
+/www/mobile/pvemanager-mobile.js
+/www/touch/touch-*/
+/pve-manager-*/
-- 
2.42.0






[pve-devel] [PATCH pve-network] gitignore

2023-11-17 Thread Stefan Lendl
build outputs

Signed-off-by: Stefan Lendl 
---
 .gitignore | 5 +
 1 file changed, 5 insertions(+)
 create mode 100644 .gitignore

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000..6370dfa
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,5 @@
+/libpve-network-perl-*/
+*.deb
+/*.buildinfo
+/*.changes
+/.vscode/
-- 
2.42.0






[pve-devel] [PATCH qemu-server] gitignore

2023-11-17 Thread Stefan Lendl
build output and .vscode

Signed-off-by: Stefan Lendl 
---
 .gitignore | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.gitignore b/.gitignore
index e48cf98..caaef23 100644
--- a/.gitignore
+++ b/.gitignore
@@ -9,3 +9,5 @@ sparsecp
 vmtar
 build
 qm.bash-completion
+/.vscode/
+/qemu-server-*/
-- 
2.42.0






Re: [pve-devel] [PATCH v4 pve-network 06/33] subnet: vnet: refactor IPAM related methods

2023-11-17 Thread Stefan Lendl
Stefan Hanreich  writes:

> @@ -230,10 +227,28 @@ sub next_free_ip {
>   my $plugin_config = $ipam_cfg->{ids}->{$ipamid};
>   my $plugin = 
> PVE::Network::SDN::Ipams::Plugin->lookup($plugin_config->{type});
>   eval {
> - $cidr = $plugin->add_next_freeip($plugin_config, $subnetid, 
> $subnet, $hostname, $mac, $description);
> - ($ip, undef) = split(/\//, $cidr);
> + if ($dhcprange) {
> + my $data = {
> + mac => $mac,
> + hostname => $hostname,
> + vmid => $vmid,
> + };
> +
> + my $dhcp_ranges = 
> PVE::Network::SDN::Subnets::get_dhcp_ranges($subnet);
> +
> + foreach my $range (@$dhcp_ranges) {
> + $ip = $plugin->add_range_next_freeip($plugin_config, 
> $subnet, $range, $data);
> + next if !$ip;

If an IP was found, the loop just continues with the next range.
This should be (tested):

last if $ip;


> + }
> + } else {
> + $ip = $plugin->add_next_freeip($plugin_config, $subnetid, 
> $subnet, $hostname, $mac, $vmid);
> + }
>   };
> +
>   die $@ if $@;
> +
> + eval { PVE::Network::SDN::Ipams::add_cache_mac_ip($mac, $ip); };
> + warn $@ if $@;
>  }
>  
>  eval {
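The control-flow fix suggested above (`last if $ip` instead of `next if !$ip`) can be sketched generically; in this hypothetical Python version of the range loop, stopping on the first hit is what prevents allocating an address from every range:

```python
def first_free_ip(dhcp_ranges, allocate):
    # allocate(rng) returns an IP string, or None when the range is exhausted.
    ip = None
    for rng in dhcp_ranges:
        ip = allocate(rng)
        if ip:  # 'last if $ip': stop scanning once a range produced an address
            break
        # with only 'next if !$ip' the loop would keep iterating even after
        # a hit, calling allocate() once per remaining range
    return ip

calls = []
def fake_allocate(rng):
    calls.append(rng)
    return "10.0.1.10" if rng == "range-b" else None

print(first_free_ip(["range-a", "range-b", "range-c"], fake_allocate))
print(calls)  # ['range-a', 'range-b'] -- range-c is never touched
```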





Re: [pve-devel] [PATCH pve-docs] sdn: update documentation

2023-11-17 Thread Stefan Lendl


sent v2





[pve-devel] [PATCH pve-network] sdn: allow deletion of empty subnet with gateway

2023-11-17 Thread Stefan Lendl
If the gateway IP is the last remaining IP in the subnet (in IPAM), allow
deleting the subnet.
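The intended check can be sketched generically; assuming an `ips` mapping of address to attributes as stored in the PVE IPAM DB, deletion is only permitted when the subnet is empty apart from its own gateway reservation:

```python
def only_gateway_remains(ips):
    # True when exactly one entry is left and it carries the gateway flag.
    if len(ips) == 1:
        (attrs,) = ips.values()
        return attrs.get("gateway") == 1
    return False

def can_delete_subnet(ips):
    # Empty subnet, or only the gateway reservation left -> deletable.
    return len(ips) == 0 or only_gateway_remains(ips)

print(can_delete_subnet({"172.16.0.1": {"gateway": 1}}))       # True
print(can_delete_subnet({"172.16.0.1": {"gateway": 1},
                         "172.16.0.14": {"vmid": 100}}))       # False
```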

Signed-off-by: Stefan Lendl 
---
 src/PVE/Network/SDN/Ipams/PVEPlugin.pm | 16 +++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Network/SDN/Ipams/PVEPlugin.pm 
b/src/PVE/Network/SDN/Ipams/PVEPlugin.pm
index a5b4fe7..270fb04 100644
--- a/src/PVE/Network/SDN/Ipams/PVEPlugin.pm
+++ b/src/PVE/Network/SDN/Ipams/PVEPlugin.pm
@@ -56,6 +56,16 @@ sub add_subnet {
 die "$@" if $@;
 }
 
+sub only_gateway_remains {
+my ($ips) = @_;
+
+if (keys %{$ips} == 1 &&
+   (values %{$ips})[0]->{gateway} == 1) {
+   return 1;
+}
+return 0;
+};
+
 sub del_subnet {
 my ($class, $plugin_config, $subnetid, $subnet) = @_;
 
@@ -71,7 +81,11 @@ sub del_subnet {
my $dbsubnet = $dbzone->{subnets}->{$cidr};
die "subnet '$cidr' doesn't exist in IPAM DB\n" if !$dbsubnet;
 
-   die "cannot delete subnet '$cidr', not empty\n" if keys 
%{$dbsubnet->{ips}} > 0;
+   my $ips = $dbsubnet->{ips};
+
+   if (keys %{$ips} > 0 && !only_gateway_remains($ips)) {
+   die "cannot delete subnet '$cidr', not empty\n";
+   }
 
delete $dbzone->{subnets}->{$cidr};
 
-- 
2.42.0






[pve-devel] [PATCH zfsonlinux] pick bug-fixes staged for 2.2.1

2023-11-17 Thread Stoiko Ivanov
ZFS 2.2.1 is currently being prepared, but the 3 patches added here
seem quite relevant, as the issues they fix might cause data loss/panics
on setups which run `zpool upgrade`.
See upstream's discussion for 2.2.1:
https://github.com/openzfs/zfs/pull/15498/
and the most critical issue:
https://github.com/openzfs/zfs/pull/15529
finally:
https://github.com/openzfs/zfs/commit/459c99ff2339a4a514abcf2255f9b3e5324ef09e
should not hurt either

the change to the UBSAN patch (0013) is unrelated, cosmetic only, and
happened by running export-patchqueue.

Signed-off-by: Stoiko Ivanov 
---
minimally tested by building our current kernel with this and booting it in
2 VMs - the tunable (module parameter) is present and set to 0
 ...und-UBSAN-errors-for-variable-arrays.patch |   5 +-
 ...g-between-unencrypted-and-encrypted-.patch |  44 
 ...Add-a-tunable-to-disable-BRT-support.patch | 201 ++
 ...2.1-Disable-block-cloning-by-default.patch |  42 
 debian/patches/series |   3 +
 5 files changed, 291 insertions(+), 4 deletions(-)
 create mode 100644 
debian/patches/0015-Fix-block-cloning-between-unencrypted-and-encrypted-.patch
 create mode 100644 
debian/patches/0016-Add-a-tunable-to-disable-BRT-support.patch
 create mode 100644 
debian/patches/0017-zfs-2.2.1-Disable-block-cloning-by-default.patch

diff --git 
a/debian/patches/0013-Workaround-UBSAN-errors-for-variable-arrays.patch 
b/debian/patches/0013-Workaround-UBSAN-errors-for-variable-arrays.patch
index 02815311..0b98c42a 100644
--- a/debian/patches/0013-Workaround-UBSAN-errors-for-variable-arrays.patch
+++ b/debian/patches/0013-Workaround-UBSAN-errors-for-variable-arrays.patch
@@ -1,4 +1,4 @@
-From 28be24aefc13b11e4c96e172cf2685994e03150d Mon Sep 17 00:00:00 2001
+From  Mon Sep 17 00:00:00 2001
 From: Tony Hutter 
 Date: Thu, 9 Nov 2023 16:43:35 -0800
 Subject: [PATCH] Workaround UBSAN errors for variable arrays
@@ -70,6 +70,3 @@ index c13217159..b9c284a24 100644
  # Suppress incorrect warnings from versions of objtool which are not
  # aware of x86 EVEX prefix instructions used for AVX512.
  OBJECT_FILES_NON_STANDARD_vdev_raidz_math_avx512bw.o := y
--- 
-2.39.2
-
diff --git 
a/debian/patches/0015-Fix-block-cloning-between-unencrypted-and-encrypted-.patch
 
b/debian/patches/0015-Fix-block-cloning-between-unencrypted-and-encrypted-.patch
new file mode 100644
index ..c2fc506e
--- /dev/null
+++ 
b/debian/patches/0015-Fix-block-cloning-between-unencrypted-and-encrypted-.patch
@@ -0,0 +1,44 @@
+From  Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Martin=20Matu=C5=A1ka?= 
+Date: Tue, 31 Oct 2023 21:49:41 +0100
+Subject: [PATCH] Fix block cloning between unencrypted and encrypted datasets
+
+Block cloning from an encrypted dataset into an unencrypted dataset
+and vice versa is not possible. The current code did allow cloning
+unencrypted files into an encrypted dataset causing a panic when
+these were accessed. Block cloning between encrypted and encrypted
+is currently supported on the same filesystem only.
+
+Reviewed-by: Alexander Motin 
+Reviewed-by: Kay Pedersen 
+Reviewed-by: Rob N 
+Reviewed-by: Brian Behlendorf 
+Signed-off-by: Martin Matuska 
+Closes #15464
+Closes #15465
+(cherry picked from commit 459c99ff2339a4a514abcf2255f9b3e5324ef09e)
+Signed-off-by: Stoiko Ivanov 
+---
+ module/zfs/zfs_vnops.c | 9 +
+ 1 file changed, 9 insertions(+)
+
+diff --git a/module/zfs/zfs_vnops.c b/module/zfs/zfs_vnops.c
+index 40d6c87a7..84e6b10ef 100644
+--- a/module/zfs/zfs_vnops.c
 b/module/zfs/zfs_vnops.c
+@@ -1094,6 +1094,15 @@ zfs_clone_range(znode_t *inzp, uint64_t *inoffp, 
znode_t *outzp,
+ 
+   ASSERT(!outzfsvfs->z_replay);
+ 
++  /*
++   * Block cloning from an unencrypted dataset into an encrypted
++   * dataset and vice versa is not supported.
++   */
++  if (inos->os_encrypted != outos->os_encrypted) {
++  zfs_exit_two(inzfsvfs, outzfsvfs, FTAG);
++  return (SET_ERROR(EXDEV));
++  }
++
+   error = zfs_verify_zp(inzp);
+   if (error == 0)
+   error = zfs_verify_zp(outzp);
diff --git a/debian/patches/0016-Add-a-tunable-to-disable-BRT-support.patch 
b/debian/patches/0016-Add-a-tunable-to-disable-BRT-support.patch
new file mode 100644
index ..53977479
--- /dev/null
+++ b/debian/patches/0016-Add-a-tunable-to-disable-BRT-support.patch
@@ -0,0 +1,201 @@
+From  Mon Sep 17 00:00:00 2001
+From: Rich Ercolani <214141+rincebr...@users.noreply.github.com>
+Date: Thu, 16 Nov 2023 14:35:22 -0500
+Subject: [PATCH] Add a tunable to disable BRT support.
+
+Copy the disable parameter that FreeBSD implemented, and extend it to
+work on Linux as well, until we're sure this is stable.
+
+Reviewed-by: Alexander Motin 
+Reviewed-by: Brian Behlendorf 
+Signed-off-by: Rich Ercolani 
+Closes #15529
+(cherry picked from commit 

[pve-devel] [PATCH v2 pve-docs 6/6] sdn: Examples

2023-11-17 Thread Stefan Lendl
Signed-off-by: Stefan Lendl 
---
 pvesdn.adoc | 486 +++-
 1 file changed, 180 insertions(+), 306 deletions(-)

diff --git a/pvesdn.adoc b/pvesdn.adoc
index 450955d..17135cc 100644
--- a/pvesdn.adoc
+++ b/pvesdn.adoc
@@ -508,74 +508,96 @@ key:: An API access key
 ttl:: The default TTL for records
 
 
+[[pvesdn_setup_examples]]
 Examples
-
+-
 
-[[pvesdn_setup_example_vlan]]
-VLAN Setup Example
-~~
+This section presents multiple configuration examples tailored for common SDN
+use cases. It aims to offer tangible implementations, providing additional
+details to enhance comprehension of the available configuration options.
 
-TIP: While we show plaintext configuration content here, almost everything
-should be configurable using the web-interface only.
 
-Node1: /etc/network/interfaces
+[[pvesdn_setup_example_simple]]
+Simple Zone Example
+~
 
-
-auto vmbr0
-iface vmbr0 inet manual
-   bridge-ports eno1
-   bridge-stp off
-   bridge-fd 0
-   bridge-vlan-aware yes
-   bridge-vids 2-4094
+Simple zone networks create an isolated network for guests on a single host to
+connect to each other.
 
-#management ip on vlan100
-auto vmbr0.100
-iface vmbr0.100 inet static
-   address 192.168.0.1/24
+TIP: Connections between guests are possible if all guests reside on the same
+host, but they cannot be reached from other nodes.
 
-source /etc/network/interfaces.d/*
-
+* Create a simple zone named `simple`.
+* Add a VNet named `vnet1`.
+* Create a Subnet with a gateway and the SNAT option enabled.
+* This creates a network bridge `vnet1` on the node. Assign this bridge to the
+  guests that shall join the network and configure an IP address.
 
-Node2: /etc/network/interfaces
+The network interface configuration in two VMs may look like this, which allows
+them to communicate via the 10.0.1.0/24 network.
 
 
-auto vmbr0
-iface vmbr0 inet manual
-   bridge-ports eno1
-   bridge-stp off
-   bridge-fd 0
-   bridge-vlan-aware yes
-   bridge-vids 2-4094
+allow-hotplug ens19
+iface ens19 inet static
+   address 10.0.1.14/24
+
 
-#management ip on vlan100
-auto vmbr0.100
-iface vmbr0.100 inet static
-   address 192.168.0.2/24
+
+allow-hotplug ens19
+iface ens19 inet static
+   address 10.0.1.15/24
+
 
-source /etc/network/interfaces.d/*
+
+[[pvesdn_setup_example_nat]]
+Source NAT Example
+~
+
+If you want to allow outgoing connections for guests in the simple network
+zone, the simple zone offers a Source NAT (SNAT) option.
+
+Starting from the configuration xref:pvesdn_setup_example_simple[above], add a
+Subnet to the VNet `vnet1`, set a gateway IP and enable the SNAT option.
+
+
+Subnet: 172.16.0.0/24
+Gateway: 172.16.0.1
+SNAT: checked
 
 
-Create a VLAN zone named `myvlanzone':
+In the guests, configure a static IP address inside the subnet's IP range.
+
+The node itself will join this network with the Gateway IP '172.16.0.1' and
+function as the NAT gateway for guests within the subnet range.
+
+
+[[pvesdn_setup_example_vlan]]
+VLAN Setup Example
+~
+
+When VMs on different nodes need to communicate through an isolated network, 
the
+VLAN zone allows network level isolation using VLAN tags.
+
+Create a VLAN zone named `myvlanzone`:
 
 
-id: myvlanzone
-bridge: vmbr0
+ID: myvlanzone
+Bridge: vmbr0
 
 
-Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
-`myvlanzone' as its zone.
+Create a VNet named `myvnet1` with VLAN tag 10 and the previously created
+`myvlanzone`.
 
 
-id: myvnet1
-zone: myvlanzone
-tag: 10
+ID: myvnet1
+Zone: myvlanzone
+Tag: 10
 
 
 Apply the configuration through the main SDN panel, to create VNets locally on
 each node.
 
-Create a Debian-based virtual machine (vm1) on node1, with a vNIC on `myvnet1'.
+Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on 
`myvnet1`.
 
 Use the following network configuration for this VM:
 
@@ -585,8 +607,8 @@ iface eth0 inet static
address 10.0.3.100/24
 
 
-Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet
-`myvnet1' as vm1.
+Create a second virtual machine ('vm2') on node2, with a vNIC on the same VNet
+`myvnet1` as vm1.
 
 Use the following network configuration for this VM:
 
@@ -596,234 +618,124 @@ iface eth0 inet static
address 10.0.3.101/24
 
 
-Following this, you should be able to ping between both VMs over that network.
+Following this, you should be able to ping between both VMs using that network.
 
 
 [[pvesdn_setup_example_qinq]]
 QinQ Setup Example
-~~
-
-TIP: While we show plaintext configuration content here, almost everything
-should be configurable using the web-interface only.
-
-Node1: /etc/network/interfaces
-
-
-auto vmbr0
-iface vmbr0 inet manual
-   bridge-ports eno1
-   bridge-stp off
-   bridge-fd 0
-   

[pve-devel] [PATCH v2 pve-docs 2/6] sdn: Zones

2023-11-17 Thread Stefan Lendl
Signed-off-by: Stefan Lendl 
---
 pvesdn.adoc | 185 ++--
 1 file changed, 93 insertions(+), 92 deletions(-)

diff --git a/pvesdn.adoc b/pvesdn.adoc
index 562e081..8a71c03 100644
--- a/pvesdn.adoc
+++ b/pvesdn.adoc
@@ -86,189 +86,190 @@ in your SDN setup.
   virtual guests' hostname and IP
   addresses
 
-[[pvesdn_config_main_sdn]]
 
+[[pvesdn_config_main_sdn]]
 SDN
-~~~
+~
 
 This is the main status panel. Here you can see the deployment status of zones
 on different nodes.
 
-The 'Apply' button is used to push and reload local configuration on all 
cluster
+Pressing the 'Apply' button reloads the local configuration on all cluster
 nodes.
 
 
-[[pvesdn_local_deployment_monitoring]]
-Local Deployment Monitoring
-~~~
-
-After applying the configuration through the main SDN panel,
-the local network configuration is generated locally on each node in
-the file `/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.
-
-You can monitor the status of local zones and VNets through the main tree.
-
-
 [[pvesdn_config_zone]]
 Zones
--
+-
 
-A zone defines a virtually separated network. Zones can be restricted to
+A zone defines a virtually separated network. Zones are restricted to
 specific nodes and assigned permissions, in order to restrict users to a 
certain
 zone and its contained VNets.
 
 Different technologies can be used for separation:
 
+* Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)
+
 * VLAN: Virtual LANs are the classic method of subdividing a LAN
 
 * QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)
 
-* VXLAN: Layer2 VXLAN
+* VXLAN: Layer 2 VXLAN network via a UDP tunnel
 
-* Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)
+* EVPN (BGP EVPN): VXLAN with BGP to establish Layer 3 routing
 
-* EVPN (BGP EVPN): VXLAN using layer 3 border gateway protocol (BGP) routing
 
+[[pvesdn_config_common_options]]
 Common options
-~~
+~
 
 The following options are available for all zone types:
 
-nodes:: The nodes which the zone and associated VNets should be deployed on
+Nodes:: The nodes which the zone and associated VNets should be deployed on.
 
-ipam:: Optional. Use an IP Address Management (IPAM) tool to manage IPs in the
-  zone.
+IPAM:: Use an IP Address Management (IPAM) tool to manage IPs in the
+  zone. Optional, defaults to `pve`.
 
-dns:: Optional. DNS API server.
+DNS:: DNS API server. Optional.
 
-reversedns:: Optional. Reverse DNS API server.
+ReverseDNS:: Reverse DNS API server. Optional.
 
-dnszone:: Optional. DNS domain name. Used to register hostnames, such as
-  `.`. The DNS zone must already exist on the DNS server.
+DNSZone:: DNS domain name. Used to register hostnames, such as
+  `.`. The DNS zone must already exist on the DNS server. 
Optional.
 
 
 [[pvesdn_zone_plugin_simple]]
 Simple Zones
-
+~
+
+This is the simplest plugin. It will create an isolated VNet bridge.  This
+bridge is not linked to a physical interface, and VM traffic is only local on
+each node.
+It can be used in NAT or routed setups.
 
-This is the simplest plugin. It will create an isolated VNet bridge.
-This bridge is not linked to a physical interface, and VM traffic is only
-local between the node(s).
-It can also be used in NAT or routed setups.
 
 [[pvesdn_zone_plugin_vlan]]
 VLAN Zones
-~~
+~
 
-This plugin reuses an existing local Linux or OVS bridge, and manages the VLANs
-on it. The benefit of using the SDN module is that you can create different
-zones with specific VNet VLAN tags, and restrict virtual machines to separated
-zones.
+The VLAN plugin uses an existing local Linux or OVS bridge to connect to the
+node's physical interface.  It uses VLAN tagging defined in the VNet to isolate
+the network segments.  This allows connectivity of VMs between different nodes.
 
-Specific `VLAN` configuration options:
+VLAN zone configuration options:
+
+Bridge:: The local bridge or OVS switch, already configured on *each* node that
+  allows node-to-node connection.
 
-bridge:: Reuse this local bridge or OVS switch, already configured on *each*
-  local node.
 
 [[pvesdn_zone_plugin_qinq]]
 QinQ Zones
-~~
+~
 
-QinQ also known as VLAN stacking, wherein the first VLAN tag is defined for the
-zone (the 'service-vlan'), and the second VLAN tag is defined for the
-VNets.
+QinQ, also known as VLAN stacking, uses multiple layers of VLAN tags for
+isolation.  The QinQ zone defines the outer VLAN tag (the 'Service VLAN')
+whereas the inner VLAN tag is defined by the VNet.
 
 NOTE: Your physical network switches must support stacked VLANs for this
-configuration!
+configuration.
 
-Below are the configuration options specific to QinQ:
+QinQ zone configuration options:
 
-bridge:: A local, VLAN-aware bridge that is already configured on each local
+Bridge:: A local, VLAN-aware bridge that 

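As an aside, the stacked-VLAN scheme described in the QinQ section above can be sketched as follows. This is illustrative only (not part of the patch, and the names are made up): a network segment is identified by the outer 'Service VLAN' together with the inner VNet tag, and both must be valid 802.1Q tags (1-4094).

```python
def qinq_segment(service_vlan, vnet_tag):
    # A QinQ segment is identified by the (outer, inner) 802.1Q tag pair:
    # the zone supplies the outer 'Service VLAN', the VNet the inner tag.
    for tag in (service_vlan, vnet_tag):
        if not 1 <= tag <= 4094:
            raise ValueError(f"invalid 802.1Q tag: {tag}")
    return (service_vlan, vnet_tag)
```

Two VNets with the same inner tag but different service VLANs are thus distinct segments.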
[pve-devel] [PATCH v2 pve-docs 4/6] sdn: Controllers

2023-11-17 Thread Stefan Lendl
Signed-off-by: Stefan Lendl 
---
 pvesdn.adoc | 56 +
 1 file changed, 31 insertions(+), 25 deletions(-)

diff --git a/pvesdn.adoc b/pvesdn.adoc
index c4b77f0..73d3dee 100644
--- a/pvesdn.adoc
+++ b/pvesdn.adoc
@@ -336,36 +336,41 @@ DNS Zone Prefix:: Add a prefix to the domain 
registration, like
 
 [[pvesdn_config_controllers]]
 Controllers

+-
+
+Some zones implement a separated control and data plane that require an
+external controller to manage the VNet's control plane.
+
+Currently, only the `EVPN` zone requires an external controller.
 
-Some zone types need an external controller to manage the VNet control-plane.
-Currently this is only required for the `bgp-evpn` zone plugin.
 
 [[pvesdn_controller_plugin_evpn]]
 EVPN Controller
-~~~
+~
 
-For `BGP-EVPN`, we need a controller to manage the control plane.
-The currently supported software controller is the "frr" router.
-You may need to install it on each node where you want to deploy EVPN zones.
+The `EVPN` zone requires an external controller to manage the control plane.
+The EVPN controller plugin configures the Free Range Routing (frr) router.
+
+To enable the EVPN controller, you need to install frr on every node that shall
+participate in the EVPN zone.
 
 
 apt install frr frr-pythontools
 
 
-Configuration options:
+EVPN controller configuration options:
 
-asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
+ASN #:: A unique BGP ASN number. It's highly recommended to use a private ASN
  number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up
  breaking global routing by mistake.
 
-peers:: An IP list of all nodes where you want to communicate for the EVPN
-  (could also be external nodes or route reflectors servers)
+Peers:: An IP list of all nodes that are part of the EVPN zone (this can also
+  include external nodes or route reflector servers).
 
 
 [[pvesdn_controller_plugin_BGP]]
 BGP Controller
-~~~
+~
 
 The BGP controller is not used directly by a zone.
 You can use it to configure FRR to manage BGP peers.
@@ -376,20 +381,20 @@ It can also be used to export EVPN routes to an external 
BGP peer.
 NOTE: By default, for a simple full mesh EVPN, you don't need to define a BGP
 controller.
 
-Configuration options:
+BGP controller configuration options:
 
-node:: The node of this BGP controller
+Node:: The node of this BGP controller
 
-asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
+ASN #:: A unique BGP ASN number. It's highly recommended to use a private ASN
  number in the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise
  you could break global routing by mistake.
 
-peers:: A list of peer IP addresses you want to communicate with using the
+Peer:: A list of peer IP addresses you want to communicate with using the
   underlying BGP network.
 
-ebgp:: If your peer's remote-AS is different, this enables EBGP.
+EBGP:: If your peer's remote-AS is different, this enables EBGP.
 
-loopback:: Use a loopback or dummy interface as the source of the EVPN network
+Loopback Interface:: Use a loopback or dummy interface as the source of the EVPN network
   (for multipath).
 
 ebgp-mutltihop:: Increase the number of hops to reach peers, in case they are
@@ -403,21 +408,22 @@ ISIS Controller
 ~~~
 
 The ISIS controller is not used directly by a zone.
-You can use it to configure FRR to export evpn routes to an ISIS domain.
+You can use it to configure FRR to export EVPN routes to an ISIS domain.
 
-Configuration options:
+ISIS controller configuration options:
 
-node:: The node of this ISIS controller.
+Node:: The node of this ISIS controller.
 
-domain:: A unique ISIS domain.
+Domain:: A unique ISIS domain.
 
-network entity title:: A Unique ISIS network address that identifies this node.
+Network Entity Title:: A unique ISIS network address that identifies this node.
 
-interfaces:: A list of physical interface(s) used by ISIS.
+Interfaces:: A list of physical interface(s) used by ISIS.
 
-loopback:: Use a loopback or dummy interface as the source of the EVPN network
+Loopback:: Use a loopback or dummy interface as the source of the EVPN network
   (for multipath).
 
+
 [[pvesdn_config_ipam]]
 IPAMs
 -
-- 
2.42.0
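The ASN recommendation in the controller sections of this patch can be checked mechanically. The following sketch (not part of the patch) encodes the private-use AS number ranges from RFC 6996 that the text recommends:

```python
def is_private_asn(asn):
    # RFC 6996 private-use ASN ranges: 64512-65534 (16-bit)
    # and 4200000000-4294967294 (32-bit).
    return 64512 <= asn <= 65534 or 4_200_000_000 <= asn <= 4_294_967_294
```

Anything outside these ranges is globally assigned and, as the docs warn, using it by mistake could break global routing.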



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 pve-docs 1/6] sdn: installation, overview and main configuration

2023-11-17 Thread Stefan Lendl
refs only work with subsequent patches

Signed-off-by: Stefan Lendl 
---
 pvesdn.adoc | 67 +
 1 file changed, 37 insertions(+), 30 deletions(-)

diff --git a/pvesdn.adoc b/pvesdn.adoc
index b796c5e..562e081 100644
--- a/pvesdn.adoc
+++ b/pvesdn.adoc
@@ -15,18 +15,16 @@ xref:getting_help[mailing lists or in the forum] for 
questions and feedback.
 
 [[pvesdn_installation]]
 Installation
-
+-
 
 To enable the experimental Software-Defined Network (SDN) integration, you need
-to install the `libpve-network-perl` and `ifupdown2` packages on every node:
+to install the `libpve-network-perl` package on every node:
 
 
 apt update
-apt install libpve-network-perl ifupdown2
+apt install libpve-network-perl
 
 
-NOTE: {pve} version 7 and above come installed with ifupdown2.
-
 After this, you need to add the following line to the end of the
 `/etc/network/interfaces` configuration file, so that the SDN configuration 
gets
 included and activated.
@@ -36,47 +34,56 @@ source /etc/network/interfaces.d/*
 
 
 
-Basic Overview
---
+[[pvesdn_overview]]
+Overview
+-
 
 The {pve} SDN allows for separation and fine-grained control of virtual guest
 networks, using flexible, software-controlled configurations.
 
-Separation is managed through zones, where a zone is its own virtual separated
-network area. A 'VNet' is a type of a virtual network connected to a zone.
-Depending on which type or plugin the zone uses, it can behave differently and
-offer different features, advantages, and disadvantages. Normally, a 'VNet'
-appears as a common Linux bridge with either a VLAN or 'VXLAN' tag, however,
-some can also use layer 3 routing for control. 'VNets' are deployed locally on
-each node, after being configured from the cluster-wide datacenter SDN
-administration interface.
+Separation is managed through *zones*, virtual networks (*VNets*), and
+*subnets*.  A zone is its own virtually separated network area.  A VNet is a
+virtual network that belongs to a zone. A subnet is an IP range inside a VNet.
 
+Depending on the type of the zone, the network behaves differently and offers
+specific features, advantages, and limitations.
 
-Main Configuration
-~~
+Use cases for SDN range from an isolated private network on each individual node
+to complex overlay networks across multiple PVE clusters at different locations.
 
-Configuration is done at the datacenter (cluster-wide) level and is saved in
-files located in the shared configuration file system:
-`/etc/pve/sdn`
+After configuring a VNet in the cluster-wide datacenter SDN administration
+interface, it is available as a common Linux bridge, locally on each node, to be
+assigned to VMs and Containers.
+
+
+[[pvesdn_main_configuration]]
+Main Configuration
+-
 
-On the web-interface, SDN features 3 main sections:
+Configuration is done in the web UI at datacenter level and is saved in files
+located in the shared configuration file system at `/etc/pve/sdn`.
 
-* SDN: An overview of the SDN state
+On the web interface, SDN features the following sections:
 
-* Zones: Create and manage the virtually separated network zones
+* xref:pvesdn_config_main_sdn[SDN]: An overview of the SDN state
 
-* VNets: Create virtual network bridges and manage subnets
+* xref:pvesdn_config_zone[Zones]: Create and manage the virtually separated
+  network zones
 
-In addition to this, the following options are offered:
+* xref:pvesdn_config_vnets[VNets]: Create virtual network bridges and
+  manage subnets
 
-* Controller: For controlling layer 3 routing in complex setups
+The Options category allows adding and managing additional services to be used
+in your SDN setup.
 
-* Subnets: Used to defined IP networks on VNets
+* xref:pvesdn_config_controllers[Controllers]: For controlling layer 3 routing
+  in complex setups
 
-* IPAM: Enables the use of external tools for IP address management (guest
-  IPs)
+* xref:pvesdn_config_ipam[IPAM]: Enables external tools for IP address
+  management for guests
 
-* DNS: Define a DNS server API for registering virtual guests' hostname and IP
+* xref:pvesdn_config_dns[DNS]: Define a DNS server integration for registering
+  virtual guests' hostname and IP
   addresses
 
 [[pvesdn_config_main_sdn]]
-- 
2.42.0
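For illustration (this is not part of the patch), after the installation step described above the tail of `/etc/network/interfaces` would look something like:

```
# ... existing interface definitions ...

source /etc/network/interfaces.d/*
```

The `source` line is what makes the generated SDN configuration in `/etc/network/interfaces.d/sdn` take effect on reload.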






[pve-devel] [PATCH v2 pve-docs 3/6] sdn: VNets and Subnets

2023-11-17 Thread Stefan Lendl
Signed-off-by: Stefan Lendl 
---
 pvesdn.adoc | 46 --
 1 file changed, 28 insertions(+), 18 deletions(-)

diff --git a/pvesdn.adoc b/pvesdn.adoc
index 8a71c03..c4b77f0 100644
--- a/pvesdn.adoc
+++ b/pvesdn.adoc
@@ -272,32 +272,40 @@ MTU:: Because VXLAN encapsulation uses 50 bytes, the MTU 
needs to be 50 bytes
   defaults to 1450.
 
 
-[[pvesdn_config_vnet]]
+[[pvesdn_config_vnets]]
 VNets
--
+-
+
+After creating a virtual network (VNet) through the SDN GUI, a local network
+interface with the same name is available on each node. To connect a guest to the
+VNet, assign the interface to the guest and set the IP address accordingly.
+
+Depending on the zone, these options have different meanings and are explained
+in the respective zone section in this document.
 
-A `VNet` is, in its basic form, a Linux bridge that will be deployed locally on
-the node and used for virtual machine communication.
+WARNING: In the current state, some options may have no effect or won't work in
+certain zones.
 
-The VNet configuration properties are:
+VNet configuration options:
 
-ID:: An 8 character ID to name and identify a VNet
+ID:: An up to 8 character ID to identify a VNet
 
-Alias:: Optional longer name, if the ID isn't enough
+Comment:: More descriptive identifier. Assigned as an alias on the interface.
+  Optional.
 
 Zone:: The associated zone for this VNet
 
 Tag:: The unique VLAN or VXLAN ID
 
-VLAN Aware:: Enable adding an extra VLAN tag in the virtual machine or
-container's vNIC configuration, to allow the guest OS to manage the VLAN's tag.
+VLAN Aware:: Enables the VLAN-aware option on the interface, allowing
+  configuration in the guest.
+
 
 [[pvesdn_config_subnet]]
 Subnets
-
+-
 
-A subnetwork (subnet) allows you to define a specific IP network
-(IPv4 or IPv6). For each VNet, you can define one or more subnets.
+A subnet defines a specific IP range, described by the CIDR network address.
+Each VNet can have one or more subnets.
 
 A subnet can be used to:
 
@@ -310,19 +318,21 @@ A subnet can be used to:
 If an IPAM server is associated with the subnet zone, the subnet prefix will be
 automatically registered in the IPAM.
 
-Subnet properties are:
+Subnet configuration options:
 
 ID:: A CIDR network address, for example 10.0.0.0/8
 
 Gateway:: The IP address of the network's default gateway. On layer 3 zones
   (Simple/EVPN plugins), it will be deployed on the VNet.
 
-SNAT:: Optional. Enable SNAT for layer 3 zones (Simple/EVPN plugins), for this
-  subnet. The subnet's source IP will be NATted to server's outgoing 
interface/IP.
-  On EVPN zones, this is only done on EVPN gateway-nodes.
+SNAT:: Enable Source NAT, which allows VMs from inside a
+  VNet to connect to the outside network by forwarding the packets to the node's
+  outgoing interface. On EVPN zones, forwarding is done on EVPN gateway-nodes.
+  Optional.
+
+DNS Zone Prefix:: Add a prefix to the domain registration, like
+  .prefix.  Optional.
 
-Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
-.prefix.
 
 [[pvesdn_config_controllers]]
 Controllers
-- 
2.42.0
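The subnet behaviour described in this patch (a CIDR range, a gateway inside it, and an IPAM finding free addresses for guests) can be sketched with Python's `ipaddress` module. This is illustrative only, not the PVE implementation:

```python
import ipaddress

def first_free_ip(subnet_cidr, gateway, allocated):
    # Walk the usable host addresses of the subnet and return the first
    # one that is neither the gateway nor already allocated.
    net = ipaddress.ip_network(subnet_cidr)
    for host in net.hosts():
        ip = str(host)
        if ip != gateway and ip not in allocated:
            return ip
    return None
```

A real IPAM plugin would persist the allocations; here the `allocated` set is just a stand-in.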






[pve-devel] [PATCH v2 pve-docs 0/6] sdn: Documentation

2023-11-17 Thread Stefan Lendl
* Try to homogenize style and format
* Title case naming conventions for configuration options
* Simplify examples
* Re-phrase descriptions

Changes from v1 -> v2:

* Split changes per chapter.
* Use multi-line heading format like in the original file.
  I used search and replace to "revert" and now the format is identical on all
  headings.

Stefan Lendl (6):
  sdn: installation, overview and main configuration
  sdn: Zones
  sdn: VNets and Subnets
  sdn: Controllers
  sdn: IPAM
  sdn: Examples

 pvesdn.adoc | 906 +++-
 1 file changed, 402 insertions(+), 504 deletions(-)

-- 
2.42.0






[pve-devel] [PATCH v2 pve-docs 5/6] sdn: IPAM

2023-11-17 Thread Stefan Lendl
Signed-off-by: Stefan Lendl 
---
 pvesdn.adoc | 66 ++---
 1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/pvesdn.adoc b/pvesdn.adoc
index 73d3dee..450955d 100644
--- a/pvesdn.adoc
+++ b/pvesdn.adoc
@@ -425,56 +425,56 @@ Loopback:: Use a loopback or dummy interface as the 
source of the EVPN network
 
 
 [[pvesdn_config_ipam]]
-IPAMs
--
+IPAM
+-
+
+IP Address Management (IPAM) tools manage the IP addresses of clients on the
+network. SDN in {pve} uses IPAM for example to find free IP addresses for new
+guests.
 
-IPAM (IP Address Management) tools are used to manage/assign the IP addresses 
of
-guests on the network. It can be used to find free IP addresses when you create
-a VM/CT for example (not yet implemented).
+A single IPAM instance can be associated with one or more zones.
 
-An IPAM can be associated with one or more zones, to provide IP addresses
-for all subnets defined in those zones.
 
 [[pvesdn_ipam_plugin_pveipam]]
-{pve} IPAM Plugin
-~
+PVE IPAM Plugin
+~
 
-This is the default internal IPAM for your {pve} cluster, if you don't have
-external IPAM software.
+The default built-in IPAM for your {pve} cluster.
 
-[[pvesdn_ipam_plugin_phpipam]]
-phpIPAM Plugin
-~~
-https://phpipam.net/
 
-You need to create an application in phpIPAM and add an API token with admin
-privileges.
+[[pvesdn_ipam_plugin_netbox]]
+NetBox IPAM Plugin
+~
 
-The phpIPAM configuration properties are:
+link:https://github.com/netbox-community/netbox[NetBox] is an open-source IP
+Address Management (IPAM) and datacenter infrastructure management (DCIM) tool.
 
-url:: The REST-API endpoint: `http://phpipam.domain.com/api//`
+To integrate NetBox with {pve} SDN, create an API token in NetBox as described
+here: https://docs.netbox.dev/en/stable/integrations/rest-api/#tokens
 
-token:: An API access token
+The NetBox configuration properties are:
 
-section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
-  installations use `sectionid=1` for customers.
+URL:: The NetBox REST API endpoint: `http://yournetbox.domain.com/api`
 
-[[pvesdn_ipam_plugin_netbox]]
-NetBox IPAM Plugin
-~~
+Token:: An API access token
+
+
+[[pvesdn_ipam_plugin_phpipam]]
+phpIPAM Plugin
+~
+
+In link:https://phpipam.net/[phpIPAM] you need to create an "application" and add
+an API token with admin privileges to the application.
 
-NetBox is an IP address management (IPAM) and datacenter infrastructure
-management (DCIM) tool. See the source code repository for details:
-https://github.com/netbox-community/netbox
+The phpIPAM configuration properties are:
 
-You need to create an API token in NetBox to use it:
-https://docs.netbox.dev/en/stable/integrations/rest-api/#tokens
+URL:: The REST-API endpoint: `http://phpipam.domain.com/api//`
 
-The NetBox configuration properties are:
+Token:: An API access token
 
-url:: The REST API endpoint: `http://yournetbox.domain.com/api`
+Section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
+  installations use `sectionid=1` for customers.
 
-token:: An API access token
 
 [[pvesdn_config_dns]]
 DNS
-- 
2.42.0
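The NetBox token authentication mentioned in the patch works by sending an `Authorization: Token ...` header with every REST call. A minimal sketch of building such a request (URL and token are placeholders; the request is only constructed, not sent):

```python
from urllib.request import Request

def netbox_request(base_url, token, path):
    # Build an authenticated NetBox REST API request object.
    req = Request(f"{base_url.rstrip('/')}/{path.lstrip('/')}")
    req.add_header("Authorization", f"Token {token}")
    req.add_header("Accept", "application/json")
    return req
```

The base URL corresponds to the plugin's `URL` property and the token to its `Token` property.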






[pve-devel] applied: [PATCH v4 pve-cluster 01/33] add priv/macs.db

2023-11-17 Thread Thomas Lamprecht
Am 17/11/2023 um 12:39 schrieb Stefan Hanreich:
> From: Alexandre Derumier 
> 
> use to cache mac-ip list association.
> 
> can be use by external ipam, firewall,etc for fast lookup
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
>

I'm currently preparing a version bump, and it might be easier for all
if this is already included, even if we'd drop it again (unlikely), so:

applied this one for now, thanks!





[pve-devel] applied: [PATCH cluster v4 1/1] add profiles.cfg to cluster fs

2023-11-17 Thread Thomas Lamprecht
Am 17/11/2023 um 12:45 schrieb Dominik Csapak:
> Signed-off-by: Dominik Csapak 
> ---
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
>

I'm currently preparing a version bump, and it might be easier for all
if this is already included, even if we'd drop it again (unlikely), so:

applied this one for now, thanks!





[pve-devel] [PATCH installer 2/2] tui: install progress: use ok/cancel as button text for installer prompt

2023-11-17 Thread Christoph Heiss
The GTK installer/UI module in the low-level installer does the same.
Messages used with this are worded accordingly; using yes/no instead can be
quite confusing (e.g.
Proxmox::Install::ask_existing_vg_rename_or_abort()).

Signed-off-by: Christoph Heiss 
---
 proxmox-tui-installer/src/main.rs   | 12 
 proxmox-tui-installer/src/views/install_progress.rs |  6 --
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/proxmox-tui-installer/src/main.rs b/proxmox-tui-installer/src/main.rs
index e1411c6..4c14482 100644
--- a/proxmox-tui-installer/src/main.rs
+++ b/proxmox-tui-installer/src/main.rs
@@ -285,21 +285,23 @@ fn switch_to_prev_screen(siv: &mut Cursive) {
 siv.set_screen(id);
 }
 
-fn yes_no_dialog(
+fn prompt_dialog(
     siv: &mut Cursive,
     title: &str,
     text: &str,
+    yes_text: &str,
     callback_yes: Box<dyn Fn(&mut Cursive)>,
+    no_text: &str,
     callback_no: Box<dyn Fn(&mut Cursive)>,
 ) {
 siv.add_layer(
 Dialog::around(TextView::new(text))
 .title(title)
-.button("No", move |siv| {
+.button(no_text, move |siv| {
 siv.pop_layer();
 callback_no(siv);
 })
-.button("Yes", move |siv| {
+.button(yes_text, move |siv| {
 siv.pop_layer();
 callback_yes(siv);
 }),
@@ -311,11 +313,13 @@ fn trigger_abort_install_dialog(siv: &mut Cursive) {
 siv.quit();
 
 #[cfg(not(debug_assertions))]
-yes_no_dialog(
+prompt_dialog(
 siv,
 "Abort installation?",
 "Are you sure you want to abort the installation?",
+"Yes",
 Box::new(Cursive::quit),
+"No",
 Box::new(|_| {}),
 )
 }
diff --git a/proxmox-tui-installer/src/views/install_progress.rs b/proxmox-tui-installer/src/views/install_progress.rs
index 76dd518..01c9941 100644
--- a/proxmox-tui-installer/src/views/install_progress.rs
+++ b/proxmox-tui-installer/src/views/install_progress.rs
@@ -13,7 +13,7 @@ use cursive::{
 CbSink, Cursive,
 };
 
-use crate::{abort_install_button, setup::InstallConfig, yes_no_dialog, InstallerState};
+use crate::{abort_install_button, prompt_dialog, setup::InstallConfig, InstallerState};
 use proxmox_installer_common::setup::spawn_low_level_installer;
 
 pub struct InstallProgressView {
@@ -189,10 +189,11 @@ impl InstallProgressView {
 }
 
 fn show_prompt(siv: &mut Cursive, text: &str, writer: Arc<Mutex<ChildStdin>>) {
-yes_no_dialog(
+prompt_dialog(
 siv,
 "Prompt",
 text,
+"OK",
 Box::new({
 let writer = writer.clone();
 move |_| {
@@ -201,6 +202,7 @@ impl InstallProgressView {
 }
 }
 }),
+"Cancel",
 Box::new(move |_| {
 if let Ok(mut writer) = writer.lock() {
 let _ = writeln!(writer);
-- 
2.42.0






[pve-devel] [RFC PATCH installer 1/2] ui: stdio: replace newlines with whitespaces in prompt messages

2023-11-17 Thread Christoph Heiss
The line-based protocol currently used cannot handle this properly, so
introduce this as a stop-gap measure - otherwise messages might be cut
off.

This makes it work for now, and the text is wrapped correctly for the
screen width in the TUI anyway - which is the only user of this so far.

Will be reworked properly later on.

Signed-off-by: Christoph Heiss 
---
 Proxmox/UI/StdIO.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Proxmox/UI/StdIO.pm b/Proxmox/UI/StdIO.pm
index 0ee6311..f734c0b 100644
--- a/Proxmox/UI/StdIO.pm
+++ b/Proxmox/UI/StdIO.pm
@@ -36,6 +36,7 @@ sub finished {
 sub prompt {
 my ($self, $query) = @_;

+$query =~ s/\n/ /g;
 print STDOUT "prompt: $query\n";

 my $response = <STDIN> // ''; # FIXME: error handling?
--
2.42.0
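The Perl one-liner above flattens newlines because the stdio protocol frames exactly one message per line. The same stop-gap, restated as a Python sketch for clarity (the `prompt:` framing follows the patch; everything else is illustrative):

```python
def encode_prompt(query):
    # The line-based protocol cannot carry embedded newlines, so flatten
    # them to spaces before framing the message (stop-gap, as in the patch).
    return "prompt: " + query.replace("\n", " ") + "\n"
```

Without the replacement, everything after the first newline would be lost to the line-oriented reader on the other end.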






Re: [pve-devel] [RFC qemu-server 3/4] fix #4474: qemu api: add overrule-shutdown parameter to stop endpoint

2023-11-17 Thread Wolfgang Bumiller
This one LGTM.

On Thu, Jan 26, 2023 at 09:32:13AM +0100, Friedrich Weber wrote:
> The new `overrule-shutdown` parameter is boolean and defaults to 0. If
> it is 1, all active `qmshutdown` tasks by the current user for the same
> VM are aborted before attempting to stop the VM.
> 
> Passing `overrule-shutdown=1` is forbidden for HA resources.
> 
> Signed-off-by: Friedrich Weber 

Acked-by: Wolfgang Bumiller 

> ---
>  PVE/API2/Qemu.pm | 16 +++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index c87602d..b253e1f 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -2799,7 +2799,13 @@ __PACKAGE__->register_method({
>   type => 'boolean',
>   optional => 1,
>   default => 0,
> - }
> + },
> + 'overrule-shutdown' => {
> + description => "Abort any active 'qmshutdown' task by the 
> current user for this VM before stopping",
> + optional => 1,
> + type => 'boolean',
> + default => 0,
> + },
>   },
>  },
>  returns => {
> @@ -2826,10 +2832,13 @@ __PACKAGE__->register_method({
>   raise_param_exc({ migratedfrom => "Only root may use this option." })
>   if $migratedfrom && $authuser ne 'root@pam';
>  
> + my $overrule_shutdown = extract_param($param, 'overrule-shutdown');
>  
>   my $storecfg = PVE::Storage::config();
>  
>   if (PVE::HA::Config::vm_is_ha_managed($vmid) && ($rpcenv->{type} ne 
> 'ha') && !defined($migratedfrom)) {
> + raise_param_exc({ 'overrule-shutdown' => "Not applicable for HA 
> resources." })
> + if $overrule_shutdown;
>  
>   my $hacmd = sub {
>   my $upid = shift;
> @@ -2849,6 +2858,11 @@ __PACKAGE__->register_method({
>  
>   syslog('info', "stop VM $vmid: $upid\n");
>  
> + if ($overrule_shutdown) {
> + my $overruled_tasks = 
> PVE::GuestHelpers::overrule_tasks('qmshutdown', $authuser, $vmid);
> + print "overruled qmshutdown tasks: " . join(", ", 
> $overruled_tasks->@*) . "\n";
> + };
> +
>   PVE::QemuServer::vm_stop($storecfg, $vmid, $skiplock, 0,
>$param->{timeout}, 0, 1, $keepActive, 
> $migratedfrom);
>   return;
> -- 
> 2.30.2





Re: [pve-devel] [RFC container 2/4] fix #4474: lxc api: add overrule-shutdown parameter to stop endpoint

2023-11-17 Thread Wolfgang Bumiller
On Thu, Jan 26, 2023 at 09:32:12AM +0100, Friedrich Weber wrote:
> The new `overrule-shutdown` parameter is boolean and defaults to 0. If
> it is 1, all active `vzshutdown` tasks by the current user for the same
> CT are aborted before attempting to stop the CT.
> 
> Passing `overrule-shutdown=1` is forbidden for HA resources.
> 
> Signed-off-by: Friedrich Weber 
> ---
>  src/PVE/API2/LXC/Status.pm | 16 
>  1 file changed, 16 insertions(+)
> 
> diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm
> index f7e3128..d1d67f4 100644
> --- a/src/PVE/API2/LXC/Status.pm
> +++ b/src/PVE/API2/LXC/Status.pm
> @@ -221,6 +221,12 @@ __PACKAGE__->register_method({
>   node => get_standard_option('pve-node'),
>   vmid => get_standard_option('pve-vmid', { completion => 
> \&PVE::LXC::complete_ctid_running }),
>   skiplock => get_standard_option('skiplock'),
> + 'overrule-shutdown' => {
> + description => "Abort any active 'vzshutdown' task by the 
> current user for this CT before stopping",
> + optional => 1,
> + type => 'boolean',
> + default => 0,
> + }
>   },
>  },
>  returns => {
> @@ -238,10 +244,15 @@ __PACKAGE__->register_method({
>   raise_param_exc({ skiplock => "Only root may use this option." })
>   if $skiplock && $authuser ne 'root@pam';
>  
> + my $overrule_shutdown = extract_param($param, 'overrule-shutdown');
> +
>   die "CT $vmid not running\n" if !PVE::LXC::check_running($vmid);
>  
>   if (PVE::HA::Config::vm_is_ha_managed($vmid) && $rpcenv->{type} ne 
> 'ha') {
>  
> + raise_param_exc({ 'overrule-shutdown' => "Not applicable for HA 
> resources." })
> + if $overrule_shutdown;
> +
>   my $hacmd = sub {
>   my $upid = shift;
>  
> @@ -272,6 +283,11 @@ __PACKAGE__->register_method({
>   return $rpcenv->fork_worker('vzstop', $vmid, $authuser, 
> $realcmd);
>   };
>  
> + if ($overrule_shutdown) {
> + my $overruled_tasks = 
> PVE::GuestHelpers::overrule_tasks('vzshutdown', $authuser, $vmid);
> + syslog('info', "overruled vzshutdown tasks: " . join(", ", 
> $overruled_tasks->@*) . "\n");
> + };
> +

^ So this part is fine (mostly¹)

>   return PVE::LXC::Config->lock_config($vmid, $lockcmd);

^ Here we lock first, then fork the worker, then do `vm_stop` with the
config lock inherited.

This means that creating multiple shutdown tasks before using one with
override=true could cause the override task to cancel the *first* ongoing
shutdown task, then move on to the `lock_config` call - in the meantime
a second shutdown task acquires this very lock and performs another
long-running shutdown, causing the `override` parameter to be
ineffective.

We should switch the ordering here: first fork the worker, then lock.
(¹ And your new chunk would go into the worker as well)

Unless I'm missing something, but AFAICT the current ordering there is
rather ... bad :-)
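The race described above can be put as a toy event sequence (this is a model of the ordering argument, not the real PVE code): if the config lock is only taken after the overrule, any shutdown task already queued on that lock can still run to completion before the stop does, making the overrule ineffective.

```python
def overruled_stop(running, queued):
    # Toy model: the overrule aborts the *currently known* shutdown tasks,
    # but tasks already queued on the config lock may still acquire it and
    # run before vzstop, because the lock is only taken afterwards.
    events = [("abort", t) for t in running]
    events += [("shutdown-runs-anyway", t) for t in queued]
    events.append(("run", "vzstop"))
    return events
```

Forking the worker first and locking inside it removes the window between the overrule and the lock acquisition.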





Re: [pve-devel] [RFC guest-common 4/4] guest helpers: add helper to overrule active tasks of a specific type

2023-11-17 Thread Wolfgang Bumiller
Patch itself LGTM, just a note on sending patch series in general:

If you number patches throughout a whole series rather than the
individual repositories (as in, this one is labeled 4/4 instead of 1/1),
it would be nice if the order also helps determine dependencies.

Since the sub introduced here is used in 2/4 and 3/4, it doesn't make
sense for this to come last.
`qemu-server` and `container` will both want their guest-common
dependency bumped to the version which includes this patch.

On Thu, Jan 26, 2023 at 09:32:14AM +0100, Friedrich Weber wrote:
> This helper is used to abort any active qmshutdown/vzshutdown tasks
> before attempting to stop a VM/CT (if requested).
> 
> Signed-off-by: Friedrich Weber 
> ---
>  src/PVE/GuestHelpers.pm | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/src/PVE/GuestHelpers.pm b/src/PVE/GuestHelpers.pm
> index b4ccbaa..3cdf5d7 100644
> --- a/src/PVE/GuestHelpers.pm
> +++ b/src/PVE/GuestHelpers.pm
> @@ -366,4 +366,22 @@ sub get_unique_tags {
>  return !$no_join_result ? join(';', $res->@*) : $res;
>  }
>  
> +sub overrule_tasks {
> +my ($type, $user, $vmid) = @_;
> +
> +my $active_tasks = PVE::INotify::read_file('active');
> +my $res = [];
> +for my $task (@$active_tasks) {
> + if (!$task->{saved}
> + && $task->{type} eq $type
> + && $task->{user} eq $user
> + && $task->{id} eq $vmid
> + ) {
> + PVE::RPCEnvironment->check_worker($task->{upid}, 1);
> + push @$res, $task->{upid};
> + }
> +}
> +return $res;
> +}
> +
>  1;
> -- 
> 2.30.2
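The Perl helper's selection logic, restated as a standalone Python sketch (the task dicts are simplified stand-ins for the parsed active-task list, and unlike the real helper this version only collects matches rather than aborting them):

```python
def overrule_tasks(active_tasks, task_type, user, vmid):
    # Collect the UPIDs of unfinished tasks that match type, user and
    # guest ID -- mirrors the selection done by the PVE::GuestHelpers sub.
    return [
        task["upid"]
        for task in active_tasks
        if not task.get("saved")
        and task["type"] == task_type
        and task["user"] == user
        and str(task["id"]) == str(vmid)
    ]
```

Tasks that are already finished (`saved`) or belong to another user or guest are left alone, which matches the four conditions in the Perl loop.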





Re: [pve-devel] [PATCH docs 1/2] pci passthrough: mention incompatibility with ballooning

2023-11-17 Thread Friedrich Weber
I took another look at ballooning+PCI passthrough and reporting of
memory usage with Markus. Apparently QEMU does not *always* map the
complete guest memory -- at least it didn't with a passed-through NIC.
So both the warning as well as the docs section may be worded too
strongly ("Ballooning is not possible", "VM will use maximum configured
memory", "QEMU needs to map" ...). I'll have to take another look at
this to see how we can phrase this correctly (and hopefully somewhat
precisely) for v2.

On 14/11/2023 11:20, Friedrich Weber wrote:
> On 14/11/2023 09:30, Fiona Ebner wrote:
>> Am 13.11.23 um 18:09 schrieb Friedrich Weber:
>>>  
>>> +xref:qm_ballooning[Automatic memory allocation (ballooning)] is not 
>>> possible
>>> +when using PCI(e) passthrough. As the PCI device may use DMA (Direct Memory
>>> +Access), QEMU needs to map the complete guest memory on VM startup. Hence, 
>>> the
>>> +QEMU process will use at least the (maximum) configured amount of VM 
>>> memory and
>>> +setting a minimum amount does not have any effect. When using PCI(e)
>>> +passthrough, it is recommended to set memory and minimum memory to the same
>>> +amount and keep the balloning device enabled. However, keep in mind that 
>>> the
>>
>> typo: s/balloning/ballooning/
> 
> Oops, thanks.
> 
>> Is there any advantage to keeping the ballooning device enabled?
>>
>>> +memory consumption reported in the GUI for the VM may be much lower than 
>>> the
>>> +memory consumption of the QEMU process.
>>
>> Maybe mention what the reported value is?
> 
> In my understanding: If the ballooning device is enabled (and a balloon
> driver present in the guest), the VM memory usage numbers are taken as
> reported by the balloon driver [1] (I'd say "as seen from within the
> guest"?). If the ballooning device is disabled, they are inferred from
> the rss and vsize of the QEMU process [2], so I'd say "as seen from the
> host".
> 
> I guess in the end the user has to decide which perspective they care
> about? I'll try to make this clearer in the v2.
> 
> [1]
> https://git.proxmox.com/?p=qemu-server.git;a=blob;f=PVE/QemuServer.pm;h=c465fb6f64ae30dec5112fc4439f9181c2eba4e9;hb=feb51881d#l3013
> [2]
> https://git.proxmox.com/?p=qemu-server.git;a=blob;f=PVE/QemuServer.pm;h=c465fb6f64ae30dec5112fc4439f9181c2eba4e9;hb=feb51881d#l2967
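As a standalone illustration of the "seen from the host" fallback: a minimal sketch (my simplification, not the actual qemu-server code; the helper name and the 4 KiB page size are assumptions) of deriving a process's resident memory from /proc:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper (not qemu-server code): resident memory of a process
# in bytes, read from /proc/<pid>/statm, whose second field is RSS in pages.
sub proc_rss_bytes {
    my ($pid) = @_;
    open(my $fh, '<', "/proc/$pid/statm") or return;
    my (undef, $resident) = split(/\s+/, scalar <$fh>);
    close($fh);
    my $pagesize = 4096; # assumed; POSIX::sysconf(_SC_PAGESIZE) is more portable
    return $resident * $pagesize;
}

printf "own RSS: %.1f MiB\n", proc_rss_bytes($$) / (1024 * 1024);
```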
> 
> 





Re: [pve-devel] applied: [PATCH installer] tui: fix changing between non-LVM and LVM filesystem in bootdisk chooser

2023-11-17 Thread Christoph Heiss


Thanks!

On Fri, Nov 17, 2023 at 01:20:31PM +0100, Thomas Lamprecht wrote:
>
> Am 17/11/2023 um 13:12 schrieb Christoph Heiss:
> > Happens due to a force-unwrap() under the false assumption that the
> > disk for LVM configurations always exists when switching to a LVM
> > filesystem.
> > This fails spectacularly with a panic when switching from e.g. Btrfs to
> > ext4 in the filesystem chooser.
> >
> > Fixes: eda9fa0 ("fix #4856: tui: bootdisk: use correct defaults in advanced 
> > dialog")
> > Reported-by: Christian Ebner 
> > Signed-off-by: Christoph Heiss 
> > ---
> >  proxmox-tui-installer/src/views/bootdisk.rs | 29 +++--
> >  1 file changed, 15 insertions(+), 14 deletions(-)
> >
> >
>
> applied, thanks!
>
> Could be nice to get the installer code unwrap free in the future, or
> at least add a safety comment for each case (but preferably the former)

Definitely! It's on my todo-list, and most occurrences are already gone
since introducing the TUI, fortunately.





Re: [pve-devel] [RFC manager/container/qemu-server/guest-common 0/4] fix #4474: stop tasks may overrule shutdown tasks

2023-11-17 Thread Wolfgang Bumiller
On Wed, Sep 27, 2023 at 11:04:26AM +0200, Friedrich Weber wrote:
> Lost track of this a bit, reviving due to user interest [1].
> 
> As the series does not apply anymore, I'll send a new version in any
> case, but wanted to ask for feedback before I do.
> 
> My questions from the cover letter still apply:
> 
> On 26/01/2023 09:32, Friedrich Weber wrote:
> > * Does it make sense to have overruling optional? Or should "stop"
> >   generally overrule shutdown? This might lead to confusing
> >   interactions, as Thomas noted [0].

Although whenever I ran into that, I had simply misclicked shutdown or
become impatient, and I never had any automated shutdown tasks happen.
Yet I still feel like this should be optional ;-)
(I usually just ended up using `qm stop` on the cli :P)

> > * Backend: Is there a more elegant way to overrule shutdown tasks,
> >   and a better place than pve-guest-common?
> > * Frontend: When stopping a VM/CT, we already ask for confirmation.
> >   Is an (occasional) second modal dialog with a lot of text a good user
> >   experience? Alternatively, I could imagine a checkbox in the first
> >   dialog saying "Overrule any active shutdown tasks".
> 
> Actually I don't really like the second modal dialog. What about the
> following: When the user clicks "Stop" and the frontend detects an
> active shutdown task, the already-existing "Confirm" dialog has an
> additional default-off checkbox "Kill active shutdown tasks" (or
> similar). This way the default behavior does not change, but users do
> not have to kill active shutdown tasks manually anymore.

Sounds good to me.
But maybe don't use the word "kill"; "Replace/Override" should work.

> 
> > * This patch series forbids `overrule-shutdown=1` for HA-managed VMs/CTs
> >   because I didn't know how overruling should work in a HA setting. Do
> >   you have any suggestions?

I think it's okay to disable this for now.




Re: [pve-devel] [PATCH v5 qemu-server 4/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Thomas Lamprecht
No in-depth review, but a meta style issue and some nits inline; the former
could be fixed up on applying, the nits are not really important in general.

Am 17/11/2023 um 13:17 schrieb Hannes Duerr:
> adds vendor and product information for SCSI devices to the json schema and
> checks in the VM create/update API call if it is possible to add these to 
> QEMU as a device option
> 

please keep commit messages below 70 characters per line, where
possible:
https://pve.proxmox.com/wiki/Developer_Documentation#Commits_and_Commit_Messages

> @@ -1011,6 +1038,13 @@ __PACKAGE__->register_method({
>   my $conf = $param;
>   my $arch = PVE::QemuServer::get_vm_arch($conf);
>  
> + for my $opt (sort keys $param->%*) {
> + if ($opt =~ m/^scsi(\d)+$/) {

nit: unnecessary capture group, not costly here but we normally try to
either avoid them or mark them as non-capturing (tiny performance benefit),
i.e., one of:

$opt =~ m/^scsi\d+$/
$opt =~ m/^scsi(?:\d)+$/


and fwiw, this could be made shorter by either

- reversing the match and skip to next loop iteration early:
  next if $opt !~ m/^scsi(\d)+$/;

- use grep 

for my $scsi_disk (grep { /^scsi\d+$/ } keys $param->%*) {
# ...
}
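Sketched standalone (the parameter hash is made up), the capturing/non-capturing difference and the grep variant:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Capturing vs. non-capturing: both match the same strings, but only the
# capturing form populates $1.
'scsi12' =~ m/^scsi(\d+)$/ and print "captured: $1\n";          # captured: 12
'scsi12' =~ m/^scsi(?:\d+)$/ and print "matched, nothing captured\n";

# Filtering hash keys with grep, as suggested above (made-up params):
my %param = (scsi0 => 'x', scsi1 => 'y', net0 => 'z');
for my $scsi_disk (sort grep { /^scsi\d+$/ } keys %param) {
    print "$scsi_disk\n";    # prints scsi0, then scsi1
}
```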


> + assert_scsi_feature_compatibility(
> + $opt, $conf, $storecfg, $param->{$opt});
> + }
> + }
> +
>   $conf->{meta} = PVE::QemuServer::new_meta_info_string();
>  
>   my $vollist = [];
> @@ -1826,6 +1860,11 @@ my $update_vm_api  = sub {
>   PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
> $vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
>   if defined($conf->{pending}->{$opt});
>  
> + if ($opt =~ m/^scsi(\d)+$/) {

same as above w.r.t. capture group

> + assert_scsi_feature_compatibility(
> + $opt, $conf, $storecfg, $param->{$opt});
> + }
> +
>   my (undef, $created_opts) = $create_disks->(
>   $rpcenv,
>   $authuser,






Re: [pve-devel] [PATCH v4 qemu-server 4/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Dürr

New Version can be found here:
https://lists.proxmox.com/pipermail/pve-devel/2023-November/060429.html

On 11/17/23 12:53, Hannes Duerr wrote:

adds vendor and product information for SCSI devices to the json schema and
checks in the VM create/update API call if it is possible to add these to QEMU 
as a device option

Signed-off-by: Hannes Duerr 
---
  PVE/API2/Qemu.pm| 39 +++
  PVE/QemuServer.pm   | 13 -
  PVE/QemuServer/Drive.pm | 24 
  3 files changed, 75 insertions(+), 1 deletion(-)

  __PACKAGE__->register_method({
  name => 'vmlist',
  path => '',
@@ -1011,6 +1038,13 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
  
+		for my $opt (sort keys $param->%*) {

+   if ($opt =~ m/^scsi(\d)+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
  
  		my $vollist = [];

@@ -1826,6 +1860,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, $vmid, 
$conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
  
+		if ($opt =~ m/^scsi(\d)+$/) {

+   PVE::QemuServer::assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});

^ copy paste error

+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,






[pve-devel] applied: [PATCH installer] tui: fix changing between non-LVM and LVM filesystem in bootdisk chooser

2023-11-17 Thread Thomas Lamprecht
Am 17/11/2023 um 13:12 schrieb Christoph Heiss:
> Happens due to a force-unwrap() under the false assumption that the
> disk for LVM configurations always exists when switching to a LVM
> filesystem.
> This fails spectacularly with a panic when switching from e.g. Btrfs to
> ext4 in the filesystem chooser.
> 
> Fixes: eda9fa0 ("fix #4856: tui: bootdisk: use correct defaults in advanced 
> dialog")
> Reported-by: Christian Ebner 
> Signed-off-by: Christoph Heiss 
> ---
>  proxmox-tui-installer/src/views/bootdisk.rs | 29 +++--
>  1 file changed, 15 insertions(+), 14 deletions(-)
> 
>

applied, thanks!

Could be nice to get the installer code unwrap free in the future, or
at least add a safety comment for each case (but preferably the former)





[pve-devel] [PATCH v5 qemu-server 4/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
adds vendor and product information for SCSI devices to the json schema and
checks in the VM create/update API call if it is possible to add these to QEMU 
as a device option

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 39 +++
 PVE/QemuServer.pm   | 13 -
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b9c8f20..75c7161 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -696,6 +696,33 @@ my $check_vm_modify_config_perm = sub {
 return 1;
 };
 
+sub assert_scsi_feature_compatibility {
+my ($opt, $conf, $storecfg, $drive_attributes) = @_;
+
+my $drive = PVE::QemuServer::Drive::parse_drive($opt, $drive_attributes);
+
+my $machine_type = PVE::QemuServer::get_vm_machine($conf, undef, 
$conf->{arch});
+my $machine_version = PVE::QemuServer::extract_version(
+   $machine_type, PVE::QemuServer::kvm_user_version());
+my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($drive->{product}) {
+   raise_param_exc({
+   product => "Passing of product information is only supported 
for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+   if ($drive->{vendor}) {
+   raise_param_exc({
+   vendor => "Passing of vendor information is only supported for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+}
+}
+
 __PACKAGE__->register_method({
 name => 'vmlist',
 path => '',
@@ -1011,6 +1038,13 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+   for my $opt (sort keys $param->%*) {
+   if ($opt =~ m/^scsi(\d)+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
@@ -1826,6 +1860,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
+   if ($opt =~ m/^scsi(\d)+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6090f91..4fbb9b2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1210,7 +1210,8 @@ sub kvm_user_version {
 return $kvm_user_version->{$binary};
 
 }
-my sub extract_version {
+
+our sub extract_version {
 my ($machine_type, $version) = @_;
 $version = kvm_user_version() if !defined($version);
 return PVE::QemuServer::Machine::extract_version($machine_type, $version)
@@ -1404,6 +1405,16 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd and scsi-cd support passing vendor and product 
information
+   if ($devicetype eq 'hd' || $devicetype eq 'cd') {
+   if (my $vendor = $drive->{vendor}) {
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index de62d43..4e1646d 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -161,6 +161,26 @@ my %iothread_fmt = ( iothread => {
optional => 1,
 });
 
+my %product_fmt = (
+product => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'product',
+   description => "The drive's product name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
+my %vendor_fmt = (
+vendor => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'vendor',
+   description => "The drive's vendor name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
 my %model_fmt = (
 model => {
type => 'string',
@@ -278,10 +298,12 @@ PVE::JSONSchema::register_standard_option("pve-qm-ide", 

[pve-devel] [PATCH v5 qemu-server 2/4] Move NEW_DISK_RE to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Move it for better context and in preparation for the fix

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 10 --
 PVE/QemuServer/Drive.pm |  1 +
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..b9c8f20 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -86,8 +86,6 @@ my $foreach_volume_with_alloc = sub {
 }
 };
 
-my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
-
 my $check_drive_param = sub {
 my ($param, $storecfg, $extra_checks) = @_;
 
@@ -98,7 +96,7 @@ my $check_drive_param = sub {
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
 
if ($drive->{'import-from'}) {
-   if ($drive->{file} !~ $NEW_DISK_RE || $3 != 0) {
+   if ($drive->{file} !~ $PVE::QemuServer::Drive::NEW_DISK_RE || $3 != 
0) {
raise_param_exc({
$opt => "'import-from' requires special syntax - ".
"use :0,import-from=",
@@ -142,7 +140,7 @@ my $check_storage_access = sub {
# nothing to check
} elsif ($isCDROM && ($volid eq 'cdrom')) {
$rpcenv->check($authuser, "/", ['Sys.Console']);
-   } elsif (!$isCDROM && ($volid =~ $NEW_DISK_RE)) {
+   } elsif (!$isCDROM && ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE)) 
{
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
$rpcenv->check($authuser, "/storage/$storeid", 
['Datastore.AllocateSpace']);
@@ -365,7 +363,7 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
print "$ds: successfully created disk '$res->{$ds}'\n";
-   } elsif ($volid =~ $NEW_DISK_RE) {
+   } elsif ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
 
@@ -1626,7 +1624,7 @@ my $update_vm_api  = sub {
return if defined($volname) && $volname eq 'cloudinit';
 
my $format;
-   if ($volid =~ $NEW_DISK_RE) {
+   if ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
$storeid = $2;
$format = $drive->{format} || 
PVE::Storage::storage_default_format($storecfg, $storeid);
} else {
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index dce1398..6d94a2f 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -34,6 +34,7 @@ my $MAX_SCSI_DISKS = 31;
 my $MAX_VIRTIO_DISKS = 16;
 our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
+our $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 
 our $drivedesc_hash;
 # Schema when disk allocation is possible.
-- 
2.39.2






[pve-devel] [PATCH v5 qemu-server 3/4] drive: Create get_scsi_devicetype

2023-11-17 Thread Hannes Duerr
Encapsulate the functionality for determining the SCSI device type in a new
function, for reuse in QemuServer/Drive.pm
for reusability in QemuServer/Drive.pm

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 29 -
 PVE/QemuServer/Drive.pm | 35 ++-
 2 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 294702d..6090f91 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1386,31 +1386,10 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $drive->{file};
-   if (my $info = path_is_scsi($path)) {
-   if ($info->{type} == 0 && $drive->{scsiblock}) {
-   $devicetype = 'block';
-   } elsif ($info->{type} == 1) { # tape
-   $devicetype = 'generic';
-   }
-   }
-   } else {
-$path = PVE::Storage::path($storecfg, $drive->{file});
-   }
 
-   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
-   my $version = extract_version($machine_type, kvm_user_version());
-   if ($path =~ m/^iscsi\:\/\// &&
-  !min_version($version, 4, 1)) {
-   $devicetype = 'generic';
-   }
-   }
+   my $machine_version = extract_version($machine_type, 
kvm_user_version());
+   my $devicetype  = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
 
if (!$conf->{scsihw} || $conf->{scsihw} =~ m/^lsi/ || $conf->{scsihw} 
eq 'pvscsi') {
$device = 
"scsi-$devicetype,bus=$controller_prefix$controller.0,scsi-id=$unit";
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 6d94a2f..de62d43 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -15,9 +15,9 @@ is_valid_drivename
 drive_is_cloudinit
 drive_is_cdrom
 drive_is_read_only
+get_scsi_devicetype
 parse_drive
 print_drive
-path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -822,4 +822,37 @@ sub path_is_scsi {
 return $res;
 }
 
+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_version) = @_;
+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } elsif ($drive->{file} =~ $NEW_DISK_RE){
+   # special syntax cannot be parsed to path
+   return $devicetype;
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   if ($path =~ m/^iscsi\:\/\// &&
+  !min_version($machine_version, 4, 1)) {
+   $devicetype = 'generic';
+   }
+}
+
+return $devicetype;
+}
 1;
-- 
2.39.2






[pve-devel] [PATCH v5 qemu-server 1/4] Move path_is_scsi to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Prepare for introduction of new helper

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 62 +
 PVE/QemuServer/Drive.pm | 61 
 2 files changed, 62 insertions(+), 61 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c465fb6..294702d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1342,66 +1342,6 @@ sub pve_verify_hotplug_features {
 die "unable to parse hotplug option\n";
 }
 
-sub scsi_inquiry {
-my($fh, $noerr) = @_;
-
-my $SG_IO = 0x2285;
-my $SG_GET_VERSION_NUM = 0x2282;
-
-my $versionbuf = "\x00" x 8;
-my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
-if (!$ret) {
-   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
-   return;
-}
-my $version = unpack("I", $versionbuf);
-if ($version < 3) {
-   die "scsi generic interface too old\n"  if !$noerr;
-   return;
-}
-
-my $buf = "\x00" x 36;
-my $sensebuf = "\x00" x 8;
-my $cmd = pack("C x3 C x1", 0x12, 36);
-
-# see /usr/include/scsi/sg.h
-my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
-
-my $packet = pack(
-   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
-);
-
-$ret = ioctl($fh, $SG_IO, $packet);
-if (!$ret) {
-   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
-   return;
-}
-
-my @res = unpack($sg_io_hdr_t, $packet);
-if ($res[17] || $res[18]) {
-   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
-   return;
-}
-
-my $res = {};
-$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
-
-$res->{removable} = $res->{removable} & 128 ? 1 : 0;
-$res->{type} &= 0x1F;
-
-return $res;
-}
-
-sub path_is_scsi {
-my ($path) = @_;
-
-my $fh = IO::File->new("+<$path") || return;
-my $res = scsi_inquiry($fh, 1);
-close($fh);
-
-return $res;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e24ba12..dce1398 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -17,6 +17,7 @@ drive_is_cdrom
 drive_is_read_only
 parse_drive
 print_drive
+path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -760,4 +761,64 @@ sub resolve_first_disk {
 return;
 }
 
+sub scsi_inquiry {
+my($fh, $noerr) = @_;
+
+my $SG_IO = 0x2285;
+my $SG_GET_VERSION_NUM = 0x2282;
+
+my $versionbuf = "\x00" x 8;
+my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
+if (!$ret) {
+   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
+   return;
+}
+my $version = unpack("I", $versionbuf);
+if ($version < 3) {
+   die "scsi generic interface too old\n"  if !$noerr;
+   return;
+}
+
+my $buf = "\x00" x 36;
+my $sensebuf = "\x00" x 8;
+my $cmd = pack("C x3 C x1", 0x12, 36);
+
+# see /usr/include/scsi/sg.h
+my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
+
+my $packet = pack(
+   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
+);
+
+$ret = ioctl($fh, $SG_IO, $packet);
+if (!$ret) {
+   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
+   return;
+}
+
+my @res = unpack($sg_io_hdr_t, $packet);
+if ($res[17] || $res[18]) {
+   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
+   return;
+}
+
+my $res = {};
+$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
+
+$res->{removable} = $res->{removable} & 128 ? 1 : 0;
+$res->{type} &= 0x1F;
+
+return $res;
+}
+
+sub path_is_scsi {
+my ($path) = @_;
+
+my $fh = IO::File->new("+<$path") || return;
+my $res = scsi_inquiry($fh, 1);
+close($fh);
+
+return $res;
+}
+
 1;
-- 
2.39.2






[pve-devel] [PATCH v5 qemu-server 0/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
changes in v2:
- when calling the API to create/update a VM, check whether the devices
are "scsi-hd" or "scsi-cd" devices, which support adding vendor and
product information, and error out if not
- change the format in product_fmt and vendor_fmt to a pattern that only
allows 40 characters consisting of upper and lower case letters, numbers and
'-' and '_'.

changes in v3:
- split up into preparation and fix patches
- move get_scsi_devicetype into QemuServer/Drive.pm
- refactor check_scsi_feature_compatibility to assert_scsi_feature_compatibility
- assert_scsi_feature_compatibility before creating the device
- handle 'local-lvm:' syntax in get_scsi_devicetype
- fix style issues

changes in v4:
- create assert_scsi_feature_compatibility() in API2/Qemu.pm
- divide the preparation into smaller steps
- remove or harden brittle regex
- fix wrong storagename assumption

changes in v5:
- fix copy/paste mistake

Hannes Duerr (4):
  Move path_is_scsi to QemuServer/Drive.pm
  Move NEW_DISK_RE to QemuServer/Drive.pm
  drive: Create get_scsi_devicetype
  fix #4957: add vendor and product information passthrough for
SCSI-Disks

 PVE/API2/Qemu.pm|  49 +++--
 PVE/QemuServer.pm   | 100 +
 PVE/QemuServer/Drive.pm | 119 
 3 files changed, 177 insertions(+), 91 deletions(-)

-- 
2.39.2






[pve-devel] [PATCH installer] tui: fix changing between non-LVM and LVM filesystem in bootdisk chooser

2023-11-17 Thread Christoph Heiss
Happens due to a force-unwrap() under the false assumption that the
disk for LVM configurations always exists when switching to a LVM
filesystem.
This fails spectacularly with a panic when switching from e.g. Btrfs to
ext4 in the filesystem chooser.

Fixes: eda9fa0 ("fix #4856: tui: bootdisk: use correct defaults in advanced 
dialog")
Reported-by: Christian Ebner 
Signed-off-by: Christoph Heiss 
---
 proxmox-tui-installer/src/views/bootdisk.rs | 29 +++--
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/proxmox-tui-installer/src/views/bootdisk.rs 
b/proxmox-tui-installer/src/views/bootdisk.rs
index 4bd504b..00e6ade 100644
--- a/proxmox-tui-installer/src/views/bootdisk.rs
+++ b/proxmox-tui-installer/src/views/bootdisk.rs
@@ -49,7 +49,9 @@ impl BootdiskOptionsView {
 target_bootdisk_selectview(
 ,
 advanced_options.clone(),
-options.disks.first(),
+// At least one disk must always exist to even get to this 
point,
+// see proxmox_installer_common::setup::installer_setup()
+[0],
 ),
 )
 .with_name("bootdisk-options-target-disk");
@@ -181,9 +183,14 @@ impl AdvancedBootdiskOptionsView {
 let product_conf = state.setup_info.config.clone();
 
 // Only used for LVM configurations, ZFS and Btrfs do not use the 
target disk selector
+// Must be done here, as we cannot mutable borrow `siv` a second time 
inside the closure
+// below.
 let selected_lvm_disk = siv
 .find_name::("bootdisk-options-target-disk")
-.and_then(|v| v.get_value::, _>(0));
+.and_then(|v| v.get_value::, _>(0))
+// If not defined, then the view was switched from a non-LVM 
filesystem to a LVM one.
+// Just use the first disk is such a case.
+.unwrap_or_else(|| runinfo.disks[0].clone());
 
 // Update the (inner) options view
 siv.call_on_name("advanced-bootdisk-options-dialog", |view:  
Dialog| {
@@ -193,11 +200,8 @@ impl AdvancedBootdiskOptionsView {
 view.remove_child(3);
 match fstype {
 FsType::Ext4 | FsType::Xfs => {
-// Safety: For LVM setups, the bootdisk SelectView 
always exists, thus
-// there will also always be a value.
-let selected_disk = selected_lvm_disk.clone().unwrap();
 
view.add_child(LvmBootdiskOptionsView::new_with_defaults(
-_disk,
+_lvm_disk,
 _conf,
 ));
 }
@@ -222,11 +226,7 @@ impl AdvancedBootdiskOptionsView {
 FsType::Ext4 | FsType::Xfs => {
 view.replace_child(
 0,
-target_bootdisk_selectview(
-,
-options_ref,
-selected_lvm_disk.as_ref(),
-),
+target_bootdisk_selectview(, 
options_ref, _lvm_disk),
 );
 }
 other => view.replace_child(0, 
TextView::new(other.to_string())),
@@ -714,10 +714,11 @@ fn advanced_options_view(
 fn target_bootdisk_selectview(
 avail_disks: &[Disk],
 options_ref: BootdiskOptionsRef,
-selected_disk: Option<>,
+selected_disk: ,
 ) -> SelectView {
-let selected_disk_pos = selected_disk
-.and_then(|disk| avail_disks.iter().position(|d| d.index == 
disk.index))
+let selected_disk_pos = avail_disks
+.iter()
+.position(|d| d.index == selected_disk.index)
 .unwrap_or_default();
 
 SelectView::new()
-- 
2.42.0






Re: [pve-devel] [PATCH manager 1/2] api: pools: support nested pools

2023-11-17 Thread Wolfgang Bumiller
minor issue

On Fri, Nov 17, 2023 at 08:09:10AM +0100, Fabian Grünbichler wrote:
> since poolid can now contain `/`, it's not possible to use it (properly) as
> path parameter anymore.
> 
> accordingly:
> - merge `read_pool` (`GET /pools/{poolid}`) into 'index' (`GET
>   /pools/?poolid={poolid}`) (requires clients to extract the only member of 
> the returned array if they want to query an individual pool)
> - move `update_pool` to `/pools`, deprecating the old variant with path 
> parameter
> - move `delete_pool` to `/pools`, deprecating the old variant with path 
> parameter
> - deprecate `read_pool` API endpoint
> 
> pool creation is blocked for nested pools where the parent does not already
> exist. similarly, the checks for deletion are extended to block deletion if
> sub-pools still exist.
> 
> the old API endpoints continue to work for non-nested pools. `pvesh ls /pools`
> is semi-broken for nested pools, listing the entries, but no methods on them,
> since they reference the old API. fixing this would require extending the REST
> handling to support a new type of child reference.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> 
> Notes:
> requires bumped pve-access-control
> 
>  PVE/API2/Pool.pm | 243 +++
>  1 file changed, 184 insertions(+), 59 deletions(-)
> 
> diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm
> index 51ac71941..54e744558 100644
> --- a/PVE/API2/Pool.pm
> +++ b/PVE/API2/Pool.pm
> @@ -354,6 +476,9 @@ __PACKAGE__->register_method ({
>  
>   my $pool_config = $usercfg->{pools}->{$pool};
>   die "pool '$pool' does not exist\n" if !$pool_config;
> + for my $subpool (sort keys %{$pool_config->{pools}}) {

would prefer $pool_config->{pools}->%*

> + die "pool '$pool' is not empty (contains pool '$subpool')\n";

a 'for' loop for a single error entry, looks like you meant to do
something like this instead:

if (length(my $subpools = join(', ', sort keys $pool_config->{pools}->%*))) {
    die "pool '$pool' is not empty (contains pool '$subpools')\n";
}
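Run standalone against made-up pool data (helper name is mine), the join-based check behaves like this:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical pool config: 'ops' has sub-pools, 'dev' does not.
my $pools = {
    'ops' => { pools => { 'ops/ceph' => 1, 'ops/net' => 1 } },
    'dev' => { pools => {} },
};

sub assert_pool_empty {
    my ($pool) = @_;
    my $pool_config = $pools->{$pool};
    # join() over zero keys yields '', so length() is 0 and nothing is raised;
    # ->%* is postfix dereference, available by default since perl 5.24
    if (length(my $subpools = join(', ', sort keys $pool_config->{pools}->%*))) {
        die "pool '$pool' is not empty (contains pool '$subpools')\n";
    }
}

assert_pool_empty('dev');            # ok, no sub-pools
eval { assert_pool_empty('ops') };
print $@;  # pool 'ops' is not empty (contains pool 'ops/ceph, ops/net')
```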

> + }
>  
>   for my $vmid (sort keys %{$pool_config->{vms}}) {
>   next if !$idlist->{$vmid}; # ignore destroyed guests
> -- 
> 2.39.2





[pve-devel] [PATCH pve-network 0/2] dnsmasq: fix dhcp leases

2023-11-17 Thread Alexandre Derumier


This needs a deployed dbus permission policy for uk.org.thekelleys.dnsmasq.*
in /etc/dbus-1/system.d/dnsmasq-pve.conf.
I haven't added a patch for this yet.

<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">

Alexandre Derumier (2):
  dnsmasq: configure static range for each subnet
  dnsmasq: enable dbus && purge old ip lease on reservation

 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-network 1/2] dnsmasq: configure static range for each subnet

2023-11-17 Thread Alexandre Derumier
We don't want dynamic leases, so simply define each subnet as a static range.

A dhcp-range defined on a subnet is only used by the IPAM plugin.

This also allows using a DHCP subnet without needing to define a range.
That can be useful for external IPAMs like phpIPAM, where you can't define ranges.
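The generated line has the form `dhcp-range=set:<tag>,<network>,static,<netmask>,infinite`. A minimal Python sketch of the conversion (assuming the tag format `<zone>-<network>-<prefix>` used in the patch, and that zone names contain no dashes):

```python
def prefix_to_netmask(prefix: int) -> str:
    # same bit arithmetic as the Perl code: (2**prefix - 1) << (32 - prefix)
    mask = ((1 << prefix) - 1) << (32 - prefix)
    return ".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0))

def dhcp_range_line(tag: str) -> str:
    # tag is assumed to be "<zone>-<network>-<prefix>", e.g. "zone0-10.0.0.0-24"
    zone, network, prefix = tag.split("-")
    return (f"dhcp-range=set:{tag},{network},static,"
            f"{prefix_to_netmask(int(prefix))},infinite")

print(dhcp_range_line("zone0-10.0.0.0-24"))
# → dhcp-range=set:zone0-10.0.0.0-24,10.0.0.0,static,255.255.255.0,infinite
```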

Signed-off-by: Alexandre Derumier 
---
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm 
b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
index 46172c5..2db7f4f 100644
--- a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
+++ b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
@@ -112,11 +112,18 @@ sub configure_subnet {
 sub configure_range {
 my ($class, $dhcpid, $subnet_config, $range_config) = @_;
 
-my $range_file = "$DNSMASQ_CONFIG_ROOT/$dhcpid/10-$subnet_config->{id}.ranges.conf",
+my $subnet_file = "$DNSMASQ_CONFIG_ROOT/$dhcpid/10-$subnet_config->{id}.conf";
 my $tag = $subnet_config->{id};
 
-open(my $fh, '>>', $range_file) or die "Could not open file '$range_file' $!\n";
-print $fh "dhcp-range=set:$tag,$range_config->{'start-address'},$range_config->{'end-address'}\n";
+my ($zone, $network, $mask) = split(/-/, $tag);
+
+if (Net::IP::ip_is_ipv4($network)) {
+   $mask = (2 ** $mask - 1) << (32 - $mask);
+   $mask = join( '.', unpack( "C4", pack( "N", $mask ) ) );
+}
+
+open(my $fh, '>>', $subnet_file) or die "Could not open file '$subnet_file' $!\n";
+print $fh "dhcp-range=set:$tag,$network,static,$mask,infinite\n";
 close $fh;
 }
 
-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH pve-network 2/2] dnsmasq: enable dbus && purge old ip lease on reservation

2023-11-17 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm 
b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
index 2db7f4f..f4225d0 100644
--- a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
+++ b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
@@ -9,6 +9,7 @@ use Net::IP qw(:PROC);
 use PVE::Tools qw(file_set_contents run_command lock_file);
 
 use File::Copy;
+use Net::DBus;
 
 my $DNSMASQ_CONFIG_ROOT = '/etc/dnsmasq.d';
 my $DNSMASQ_DEFAULT_ROOT = '/etc/default';
@@ -77,6 +78,16 @@ sub add_ip_mapping {
 
 my $service_name = "dnsmasq\@$dhcpid";
 PVE::Tools::run_command(['systemctl', 'reload', $service_name]) if $change;
+
+#update lease as ip could still be associated to an old removed mac
+my $bus = Net::DBus->system();
+my $dnsmasq = $bus->get_service("uk.org.thekelleys.dnsmasq.$dhcpid");
+my $manager = 
$dnsmasq->get_object("/uk/org/thekelleys/dnsmasq","uk.org.thekelleys.dnsmasq.$dhcpid");
+
+my @hostname = unpack("C*", "*");
+$manager->AddDhcpLease($ip4, $mac, \@hostname, undef, 0, 0, 0) if $ip4;
+$manager->AddDhcpLease($ip6, $mac, \@hostname, undef, 0, 0, 0) if $ip6;
+
 }
 
 sub configure_subnet {
@@ -136,7 +147,7 @@ sub before_configure {
 
 my $default_config = 

[pve-devel] [PATCH v4 qemu-server 4/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
Adds vendor and product information for SCSI devices to the JSON schema, and
checks in the VM create/update API call whether it is possible to pass these
to QEMU as a device option.
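A minimal Python sketch of that check (a hypothetical standalone function for illustration; the real code first parses the drive string and derives the device type from the drive and storage configuration):

```python
def assert_scsi_feature_compatibility(devicetype: str, drive: dict) -> None:
    # vendor/product can only be passed to QEMU for scsi-hd and scsi-cd devices
    if devicetype in ("hd", "cd"):
        return
    for key in ("product", "vendor"):
        if drive.get(key):
            raise ValueError(
                f"Passing of {key} information is only supported for "
                "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
            )

assert_scsi_feature_compatibility("hd", {"vendor": "Acme"})  # accepted
```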

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 39 +++
 PVE/QemuServer.pm   | 13 -
 PVE/QemuServer/Drive.pm | 24 
 3 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b9c8f20..fc8c876 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -696,6 +696,33 @@ my $check_vm_modify_config_perm = sub {
 return 1;
 };
 
+sub assert_scsi_feature_compatibility {
+my ($opt, $conf, $storecfg, $drive_attributes) = @_;
+
+my $drive = PVE::QemuServer::Drive::parse_drive($opt, $drive_attributes);
+
+my $machine_type = PVE::QemuServer::get_vm_machine($conf, undef, 
$conf->{arch});
+my $machine_version = PVE::QemuServer::extract_version(
+   $machine_type, PVE::QemuServer::kvm_user_version());
+my $drivetype = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
+
+if ($drivetype ne 'hd' && $drivetype ne 'cd') {
+   if ($drive->{product}) {
+   raise_param_exc({
+   product => "Passing of product information is only supported 
for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+   if ($drive->{vendor}) {
+   raise_param_exc({
+   vendor => "Passing of vendor information is only supported for".
+   "'scsi-hd' and 'scsi-cd' devices (e.g. not pass-through)."
+   });
+   }
+}
+}
+
 __PACKAGE__->register_method({
 name => 'vmlist',
 path => '',
@@ -1011,6 +1038,13 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+   for my $opt (sort keys $param->%*) {
+   if ($opt =~ m/^scsi(\d)+$/) {
+   assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+   }
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
@@ -1826,6 +1860,11 @@ my $update_vm_api  = sub {
PVE::QemuServer::vmconfig_register_unused_drive($storecfg, 
$vmid, $conf, PVE::QemuServer::parse_drive($opt, $conf->{pending}->{$opt}))
if defined($conf->{pending}->{$opt});
 
+   if ($opt =~ m/^scsi(\d)+$/) {
+   PVE::QemuServer::assert_scsi_feature_compatibility(
+   $opt, $conf, $storecfg, $param->{$opt});
+   }
+
my (undef, $created_opts) = $create_disks->(
$rpcenv,
$authuser,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6090f91..4fbb9b2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1210,7 +1210,8 @@ sub kvm_user_version {
 return $kvm_user_version->{$binary};
 
 }
-my sub extract_version {
+
+our sub extract_version {
 my ($machine_type, $version) = @_;
 $version = kvm_user_version() if !defined($version);
 return PVE::QemuServer::Machine::extract_version($machine_type, $version)
@@ -1404,6 +1405,16 @@ sub print_drivedevice_full {
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
 
+   # only scsi-hd and scsi-cd support passing vendor and product 
information
+   if ($devicetype eq 'hd' || $devicetype eq 'cd') {
+   if (my $vendor = $drive->{vendor}) {
+   $device .= ",vendor=$vendor";
+   }
+   if (my $product = $drive->{product}) {
+   $device .= ",product=$product";
+   }
+   }
+
 } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
my $maxdev = ($drive->{interface} eq 'sata') ? 
$PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index de62d43..4e1646d 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -161,6 +161,26 @@ my %iothread_fmt = ( iothread => {
optional => 1,
 });
 
+my %product_fmt = (
+product => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'product',
+   description => "The drive's product name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
+my %vendor_fmt = (
+vendor => {
+   type => 'string',
+   pattern => '[A-Za-z0-9\-_]{,40}',
+   format_description => 'vendor',
+   description => "The drive's vendor name, up to 40 bytes long.",
+   optional => 1,
+},
+);
+
 my %model_fmt = (
 model => {
type => 'string',
@@ -278,10 +298,12 @@ 

[pve-devel] [PATCH v4 qemu-server 2/4] Move NEW_DISK_RE to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Move it for better context and in preparation for the fix.
Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm| 10 --
 PVE/QemuServer/Drive.pm |  1 +
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..b9c8f20 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -86,8 +86,6 @@ my $foreach_volume_with_alloc = sub {
 }
 };
 
-my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
-
 my $check_drive_param = sub {
 my ($param, $storecfg, $extra_checks) = @_;
 
@@ -98,7 +96,7 @@ my $check_drive_param = sub {
raise_param_exc({ $opt => "unable to parse drive options" }) if !$drive;
 
if ($drive->{'import-from'}) {
-   if ($drive->{file} !~ $NEW_DISK_RE || $3 != 0) {
+   if ($drive->{file} !~ $PVE::QemuServer::Drive::NEW_DISK_RE || $3 != 
0) {
raise_param_exc({
$opt => "'import-from' requires special syntax - ".
"use :0,import-from=",
@@ -142,7 +140,7 @@ my $check_storage_access = sub {
# nothing to check
} elsif ($isCDROM && ($volid eq 'cdrom')) {
$rpcenv->check($authuser, "/", ['Sys.Console']);
-   } elsif (!$isCDROM && ($volid =~ $NEW_DISK_RE)) {
+   } elsif (!$isCDROM && ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE)) 
{
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
$rpcenv->check($authuser, "/storage/$storeid", 
['Datastore.AllocateSpace']);
@@ -365,7 +363,7 @@ my $create_disks = sub {
delete $disk->{format}; # no longer needed
$res->{$ds} = PVE::QemuServer::print_drive($disk);
print "$ds: successfully created disk '$res->{$ds}'\n";
-   } elsif ($volid =~ $NEW_DISK_RE) {
+   } elsif ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
my ($storeid, $size) = ($2 || $default_storage, $3);
die "no storage ID specified (and no default storage)\n" if 
!$storeid;
 
@@ -1626,7 +1624,7 @@ my $update_vm_api  = sub {
return if defined($volname) && $volname eq 'cloudinit';
 
my $format;
-   if ($volid =~ $NEW_DISK_RE) {
+   if ($volid =~ $PVE::QemuServer::Drive::NEW_DISK_RE) {
$storeid = $2;
$format = $drive->{format} || 
PVE::Storage::storage_default_format($storecfg, $storeid);
} else {
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index dce1398..6d94a2f 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -34,6 +34,7 @@ my $MAX_SCSI_DISKS = 31;
 my $MAX_VIRTIO_DISKS = 16;
 our $MAX_SATA_DISKS = 6;
 our $MAX_UNUSED_DISKS = 256;
+our $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 
 our $drivedesc_hash;
 # Schema when disk allocation is possible.
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 1/4] Move path_is_scsi to QemuServer/Drive.pm

2023-11-17 Thread Hannes Duerr
Prepare for the introduction of a new helper.
Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 62 +
 PVE/QemuServer/Drive.pm | 61 
 2 files changed, 62 insertions(+), 61 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c465fb6..294702d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1342,66 +1342,6 @@ sub pve_verify_hotplug_features {
 die "unable to parse hotplug option\n";
 }
 
-sub scsi_inquiry {
-my($fh, $noerr) = @_;
-
-my $SG_IO = 0x2285;
-my $SG_GET_VERSION_NUM = 0x2282;
-
-my $versionbuf = "\x00" x 8;
-my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
-if (!$ret) {
-   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
-   return;
-}
-my $version = unpack("I", $versionbuf);
-if ($version < 3) {
-   die "scsi generic interface too old\n"  if !$noerr;
-   return;
-}
-
-my $buf = "\x00" x 36;
-my $sensebuf = "\x00" x 8;
-my $cmd = pack("C x3 C x1", 0x12, 36);
-
-# see /usr/include/scsi/sg.h
-my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
-
-my $packet = pack(
-   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
-);
-
-$ret = ioctl($fh, $SG_IO, $packet);
-if (!$ret) {
-   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
-   return;
-}
-
-my @res = unpack($sg_io_hdr_t, $packet);
-if ($res[17] || $res[18]) {
-   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
-   return;
-}
-
-my $res = {};
-$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
-
-$res->{removable} = $res->{removable} & 128 ? 1 : 0;
-$res->{type} &= 0x1F;
-
-return $res;
-}
-
-sub path_is_scsi {
-my ($path) = @_;
-
-my $fh = IO::File->new("+<$path") || return;
-my $res = scsi_inquiry($fh, 1);
-close($fh);
-
-return $res;
-}
-
 sub print_tabletdevice_full {
 my ($conf, $arch) = @_;
 
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index e24ba12..dce1398 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -17,6 +17,7 @@ drive_is_cdrom
 drive_is_read_only
 parse_drive
 print_drive
+path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -760,4 +761,64 @@ sub resolve_first_disk {
 return;
 }
 
+sub scsi_inquiry {
+my($fh, $noerr) = @_;
+
+my $SG_IO = 0x2285;
+my $SG_GET_VERSION_NUM = 0x2282;
+
+my $versionbuf = "\x00" x 8;
+my $ret = ioctl($fh, $SG_GET_VERSION_NUM, $versionbuf);
+if (!$ret) {
+   die "scsi ioctl SG_GET_VERSION_NUM failoed - $!\n" if !$noerr;
+   return;
+}
+my $version = unpack("I", $versionbuf);
+if ($version < 3) {
+   die "scsi generic interface too old\n"  if !$noerr;
+   return;
+}
+
+my $buf = "\x00" x 36;
+my $sensebuf = "\x00" x 8;
+my $cmd = pack("C x3 C x1", 0x12, 36);
+
+# see /usr/include/scsi/sg.h
+my $sg_io_hdr_t = "i i C C s I P P P I I i P C C C C S S i I I";
+
+my $packet = pack(
+   $sg_io_hdr_t, ord('S'), -3, length($cmd), length($sensebuf), 0, 
length($buf), $buf, $cmd, $sensebuf, 6000
+);
+
+$ret = ioctl($fh, $SG_IO, $packet);
+if (!$ret) {
+   die "scsi ioctl SG_IO failed - $!\n" if !$noerr;
+   return;
+}
+
+my @res = unpack($sg_io_hdr_t, $packet);
+if ($res[17] || $res[18]) {
+   die "scsi ioctl SG_IO status error - $!\n" if !$noerr;
+   return;
+}
+
+my $res = {};
+$res->@{qw(type removable vendor product revision)} = unpack("C C x6 A8 
A16 A4", $buf);
+
+$res->{removable} = $res->{removable} & 128 ? 1 : 0;
+$res->{type} &= 0x1F;
+
+return $res;
+}
+
+sub path_is_scsi {
+my ($path) = @_;
+
+my $fh = IO::File->new("+<$path") || return;
+my $res = scsi_inquiry($fh, 1);
+close($fh);
+
+return $res;
+}
+
 1;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 0/4] fix #4957: add vendor and product information passthrough for SCSI-Disks

2023-11-17 Thread Hannes Duerr
changes in v2:
- when calling the API to create/update a VM, check whether the devices are
  "scsi-hd" or "scsi-cd" devices, where there is the option to add vendor and
  product information; if not, error out
- change the format in product_fmt and vendor_fmt to a pattern that only
  allows 40 characters consisting of upper- and lower-case letters, numbers,
  '-' and '_'

changes in v3:
- splitup into preparation and fix patch
- move get_scsi_devicetype into QemuServer/Drive.pm
- refactor check_scsi_feature_compatibility to assert_scsi_feature_compatibility
- assert_scsi_feature_compatibility before creating the device
- handle 'local-lvm:' syntax in get_scsi_devicetype
- fix style issues

changes in v4:
- create assert_scsi_feature_compatibility() in API2/Qemu.pm
- divide the preparation into smaller steps
- remove or harden brittle regex
- fix wrong storagename assumption
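A sketch of the validation the vendor/product pattern implies (assuming the schema anchors the pattern, and reading `{,40}` as `{0,40}`):

```python
import re

# anchored form of the schema pattern for vendor/product values
VENDOR_PRODUCT_RE = re.compile(r"^[A-Za-z0-9\-_]{0,40}$")

assert VENDOR_PRODUCT_RE.match("QEMU_HARDDISK-1")
assert not VENDOR_PRODUCT_RE.match("x" * 41)      # longer than 40 characters
assert not VENDOR_PRODUCT_RE.match("bad value!")  # space and '!' are rejected
```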

Hannes Duerr (4):
  Move path_is_scsi to QemuServer/Drive.pm
  Move NEW_DISK_RE to QemuServer/Drive.pm
  drive: Create get_scsi_devicetype
  fix #4957: add vendor and product information passthrough for
SCSI-Disks

 PVE/API2/Qemu.pm|  49 +++--
 PVE/QemuServer.pm   | 100 +
 PVE/QemuServer/Drive.pm | 119 
 3 files changed, 177 insertions(+), 91 deletions(-)

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 3/4] drive: Create get_scsi_devicetype

2023-11-17 Thread Hannes Duerr
Encapsulate the functionality for determining the SCSI device type in a new
function, for reusability in QemuServer/Drive.pm.
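A Python sketch of the extracted decision logic (hypothetical helper: `scsi_info` stands in for the result of `path_is_scsi()`, and the `storage:size` allocation syntax handled via NEW_DISK_RE is omitted — non-absolute, non-iSCSI paths fall through to 'hd' here either way):

```python
def get_scsi_devicetype(drive: dict, scsi_info=None, machine_version=(8, 1)) -> str:
    if drive.get("is_cdrom"):
        return "cd"
    devicetype = "hd"
    path = drive.get("file", "")
    if path.startswith("/") and scsi_info is not None:
        if scsi_info["type"] == 0 and drive.get("scsiblock"):
            devicetype = "block"
        elif scsi_info["type"] == 1:  # tape
            devicetype = "generic"
    # for compatibility only, scsi-hd is preferred (#2408, #2355, #2380)
    if path.startswith("iscsi://") and machine_version < (4, 1):
        devicetype = "generic"
    return devicetype

print(get_scsi_devicetype({"file": "local-lvm:vm-100-disk-0"}))  # → hd
```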

Signed-off-by: Hannes Duerr 
---
 PVE/QemuServer.pm   | 29 -
 PVE/QemuServer/Drive.pm | 35 ++-
 2 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 294702d..6090f91 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -53,7 +53,7 @@ use PVE::QemuServer::Helpers qw(config_aware_timeout 
min_version windows_version
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CGroup;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive path_is_scsi);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit 
drive_is_cdrom drive_is_read_only parse_drive print_drive);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory qw(get_current_memory);
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -1386,31 +1386,10 @@ sub print_drivedevice_full {
 
my ($maxdev, $controller, $controller_prefix) = scsihw_infos($conf, 
$drive);
my $unit = $drive->{index} % $maxdev;
-   my $devicetype = 'hd';
-   my $path = '';
-   if (drive_is_cdrom($drive)) {
-   $devicetype = 'cd';
-   } else {
-   if ($drive->{file} =~ m|^/|) {
-   $path = $drive->{file};
-   if (my $info = path_is_scsi($path)) {
-   if ($info->{type} == 0 && $drive->{scsiblock}) {
-   $devicetype = 'block';
-   } elsif ($info->{type} == 1) { # tape
-   $devicetype = 'generic';
-   }
-   }
-   } else {
-$path = PVE::Storage::path($storecfg, $drive->{file});
-   }
 
-   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
-   my $version = extract_version($machine_type, kvm_user_version());
-   if ($path =~ m/^iscsi\:\/\// &&
-  !min_version($version, 4, 1)) {
-   $devicetype = 'generic';
-   }
-   }
+   my $machine_version = extract_version($machine_type, 
kvm_user_version());
+   my $devicetype  = PVE::QemuServer::Drive::get_scsi_devicetype(
+   $drive, $storecfg, $machine_version);
 
if (!$conf->{scsihw} || $conf->{scsihw} =~ m/^lsi/ || $conf->{scsihw} 
eq 'pvscsi') {
$device = 
"scsi-$devicetype,bus=$controller_prefix$controller.0,scsi-id=$unit";
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 6d94a2f..de62d43 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -15,9 +15,9 @@ is_valid_drivename
 drive_is_cloudinit
 drive_is_cdrom
 drive_is_read_only
+get_scsi_devicetype
 parse_drive
 print_drive
-path_is_scsi
 );
 
 our $QEMU_FORMAT_RE = qr/raw|cow|qcow|qcow2|qed|vmdk|cloop/;
@@ -822,4 +822,37 @@ sub path_is_scsi {
 return $res;
 }
 
+sub get_scsi_devicetype {
+my ($drive, $storecfg, $machine_version) = @_;
+
+my $devicetype = 'hd';
+my $path = '';
+if (drive_is_cdrom($drive)) {
+   $devicetype = 'cd';
+} else {
+   if ($drive->{file} =~ m|^/|) {
+   $path = $drive->{file};
+   if (my $info = path_is_scsi($path)) {
+   if ($info->{type} == 0 && $drive->{scsiblock}) {
+   $devicetype = 'block';
+   } elsif ($info->{type} == 1) { # tape
+   $devicetype = 'generic';
+   }
+   }
+   } elsif ($drive->{file} =~ $NEW_DISK_RE){
+   # special syntax cannot be parsed to path
+   return $devicetype;
+   } else {
+   $path = PVE::Storage::path($storecfg, $drive->{file});
+   }
+
+   # for compatibility only, we prefer scsi-hd (#2408, #2355, #2380)
+   if ($path =~ m/^iscsi\:\/\// &&
+  !min_version($machine_version, 4, 1)) {
+   $devicetype = 'generic';
+   }
+}
+
+return $devicetype;
+}
 1;
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 26/33] nic online bridge/vlan change: link disconnect/reconnect

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

We want to notify the guest of the change, so it can resubmit a DHCP request,
send a gratuitous ARP, etc.
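The intended sequence can be sketched as follows (`set_link` stands in for QEMU's `set_link` monitor command, and the tap unplug/replug in between is elided):

```python
def update_net(old: dict, new: dict, set_link) -> None:
    """Toggle the guest-visible link around a bridge/VLAN change so the
    guest notices the new topology (e.g. renews its DHCP lease)."""
    topology_changed = (old.get("bridge") != new.get("bridge")
                        or old.get("tag") != new.get("tag"))
    if topology_changed:
        set_link(False)   # guest sees carrier loss
    # ... unplug the tap device and re-plug it on the new bridge/VLAN ...
    if topology_changed:
        set_link(True)    # guest re-runs DHCP / sends gratuitous ARP
```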

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 13 +
 1 file changed, 13 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5e158b3..e87df9d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5309,6 +5309,12 @@ sub vmconfig_update_net {
safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
PVE::Network::tap_unplug($iface);
 
+   #set link_down in guest if bridge or vlan change to notify 
guest (dhcp renew for example)
+   if (safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
+   safe_num_ne($oldnet->{tag}, $newnet->{tag})) {
+   qemu_set_link_status($vmid, $opt, 0);
+   }
+
if (safe_string_ne($oldnet->{bridge}, $newnet->{bridge})) {
if ($have_sdn) {

PVE::Network::SDN::Vnets::del_ips_from_mac($oldnet->{bridge}, 
$oldnet->{macaddr}, $conf->{name});
@@ -5321,6 +5327,13 @@ sub vmconfig_update_net {
} else {
PVE::Network::tap_plug($iface, $newnet->{bridge}, 
$newnet->{tag}, $newnet->{firewall}, $newnet->{trunks}, $newnet->{rate});
}
+
+   #set link_up in guest if bridge or vlan change to notify guest 
(dhcp renew for example)
+   if (safe_string_ne($oldnet->{bridge}, $newnet->{bridge}) ||
+   safe_num_ne($oldnet->{tag}, $newnet->{tag})) {
+   qemu_set_link_status($vmid, $opt, 1);
+   }
+
} elsif (safe_num_ne($oldnet->{rate}, $newnet->{rate})) {
# Rate can be applied on its own but any change above needs to
# include the rate in tap_plug since OVS resets everything.
-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 qemu-server 23/33] api2: create|restore|clone: add_free_ip

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Co-Authored-by: Stefan Lendl 
Signed-off-by: Stefan Hanreich 
Signed-off-by: Alexandre Derumier 
---
 PVE/API2/Qemu.pm  |  6 ++
 PVE/QemuServer.pm | 15 +++
 2 files changed, 21 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaab..7ae2f58 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -991,6 +991,8 @@ __PACKAGE__->register_method({
eval { PVE::QemuServer::template_create($vmid, 
$restored_conf) };
warn $@ if $@;
}
+
+   PVE::QemuServer::create_ifaces_ipams_ips($restored_conf, $vmid) 
if $unique;
};
 
# ensure no old replication state are exists
@@ -1066,6 +1068,8 @@ __PACKAGE__->register_method({
}
 
PVE::AccessControl::add_vm_to_pool($vmid, $pool) if $pool;
+
+   PVE::QemuServer::create_ifaces_ipams_ips($conf, $vmid);
};
 
PVE::QemuConfig->lock_config_full($vmid, 1, $realcmd);
@@ -3763,6 +3767,8 @@ __PACKAGE__->register_method({
 
PVE::QemuConfig->write_config($newid, $newconf);
 
+   PVE::QemuServer::create_ifaces_ipams_ips($newconf, $newid);
+
if ($target) {
# always deactivate volumes - avoid lvm LVs to be active on 
several nodes
PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5f15c66..b92743c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -8628,4 +8628,19 @@ sub del_nets_bridge_fdb {
 }
 }
 
+sub create_ifaces_ipams_ips {
+my ($conf, $vmid) = @_;
+
+return if !$have_sdn;
+
+foreach my $opt (keys %$conf) {
+if ($opt =~ m/^net(\d+)$/) {
+my $value = $conf->{$opt};
+my $net = PVE::QemuServer::parse_net($value);
+eval { 
PVE::Network::SDN::Vnets::add_next_free_cidr($net->{bridge}, $conf->{name}, 
$net->{macaddr}, $vmid, undef, 1) };
+warn $@ if $@;
+}
+}
+}
+
 1;
-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 pve-manager 18/33] sdn: add DHCP option to Zone dialogue

2023-11-17 Thread Stefan Hanreich
Co-Authored-by: Stefan Lendl 
Signed-off-by: Stefan Hanreich 
---
 www/manager6/sdn/zones/Base.js   |  6 --
 www/manager6/sdn/zones/SimpleEdit.js | 10 ++
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/www/manager6/sdn/zones/Base.js b/www/manager6/sdn/zones/Base.js
index 602e4c16b..db9b47b18 100644
--- a/www/manager6/sdn/zones/Base.js
+++ b/www/manager6/sdn/zones/Base.js
@@ -55,7 +55,9 @@ Ext.define('PVE.panel.SDNZoneBase', {
},
);
 
-   me.advancedItems = [
+   me.advancedItems = me.advancedItems ?? [];
+
+   me.advancedItems.unshift(
{
xtype: 'pveSDNDnsSelector',
fieldLabel: gettext('DNS Server'),
@@ -77,7 +79,7 @@ Ext.define('PVE.panel.SDNZoneBase', {
fieldLabel: gettext('DNS Zone'),
allowBlank: true,
},
-   ];
+   );
 
me.callParent();
 },
diff --git a/www/manager6/sdn/zones/SimpleEdit.js 
b/www/manager6/sdn/zones/SimpleEdit.js
index cb7c34035..7a6f1d0d9 100644
--- a/www/manager6/sdn/zones/SimpleEdit.js
+++ b/www/manager6/sdn/zones/SimpleEdit.js
@@ -19,6 +19,16 @@ Ext.define('PVE.sdn.zones.SimpleInputPanel', {
var me = this;
 
 me.items = [];
+   me.advancedItems = [
+   {
+   xtype: 'proxmoxcheckbox',
+   name: 'dhcp',
+   inputValue: 'dnsmasq',
+   uncheckedValue: undefined,
+   checked: false,
+   fieldLabel: gettext('automatic DHCP'),
+   },
+   ];
 
me.callParent();
 },
-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 pve-network 16/33] add add_dhcp_mapping

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Signed-off-by: Alexandre Derumier 
---
 src/PVE/Network/SDN/Dhcp.pm |  9 ---
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 40 -
 src/PVE/Network/SDN/Dhcp/Plugin.pm  |  2 +-
 src/PVE/Network/SDN/Vnets.pm| 15 +++
 4 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/src/PVE/Network/SDN/Dhcp.pm b/src/PVE/Network/SDN/Dhcp.pm
index a05b441..fc33f08 100644
--- a/src/PVE/Network/SDN/Dhcp.pm
+++ b/src/PVE/Network/SDN/Dhcp.pm
@@ -8,6 +8,7 @@ use PVE::Cluster qw(cfs_read_file);
 use PVE::Network::SDN;
 use PVE::Network::SDN::SubnetPlugin;
 use PVE::Network::SDN::Dhcp qw(config);
+use PVE::Network::SDN::Ipams;
 use PVE::Network::SDN::Subnets qw(sdn_subnets_config config get_dhcp_ranges);
 use PVE::Network::SDN::Dhcp::Plugin;
 use PVE::Network::SDN::Dhcp::Dnsmasq;
@@ -30,9 +31,11 @@ sub add_mapping {
 
 return if !$zone->{ipam} || !$zone->{dhcp};
 
-my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($zone->{dhcp});
-$dhcp_plugin->add_ip_mapping($zoneid, $mac, $ip4) if $ip4;
-$dhcp_plugin->add_ip_mapping($zoneid, $mac, $ip6) if $ip6;
+my $dhcptype = $zone->{dhcp};
+
+my $macdb = PVE::Network::SDN::Ipams::read_macdb();
+my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcptype);
+$dhcp_plugin->add_ip_mapping($zoneid, $macdb, $mac, $ip4, $ip6)
 }
 
 sub remove_mapping {
diff --git a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm 
b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
index 21a6ddd..c4b6bde 100644
--- a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
+++ b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
@@ -53,21 +53,49 @@ sub del_ip_mapping {
 }
 
 sub add_ip_mapping {
-my ($class, $dhcpid, $mac, $ip) = @_;
+my ($class, $dhcpid, $macdb, $mac, $ip4, $ip6) = @_;
 
 my $ethers_file = "$DNSMASQ_CONFIG_ROOT/$dhcpid/ethers";
 my $ethers_tmp_file = "$ethers_file.tmp";
 
+my $change = undef;
+my $match4 = undef;
+my $match6 = undef;
+
 my $appendFn = sub {
open(my $in, '<', $ethers_file) or die "Could not open file 
'$ethers_file' $!\n";
open(my $out, '>', $ethers_tmp_file) or die "Could not open file 
'$ethers_tmp_file' $!\n";
 
 while (my $line = <$in>) {
-   next if $line =~ m/^$mac/;
-   print $out $line;
+   chomp($line);
+   my ($parsed_mac, $parsed_ip) = split(/,/, $line);
+   #delete removed mac
+   if (!defined($macdb->{macs}->{$parsed_mac})) {
+   $change = 1;
+   next;
+   }
+
+   #delete changed ip
+   my $ipversion = Net::IP::ip_is_ipv4($parsed_ip) ? "ip4" : "ip6";
+   if ($macdb->{macs}->{$parsed_mac}->{$ipversion} && 
$macdb->{macs}->{$parsed_mac}->{$ipversion} ne $parsed_ip) {
+   $change = 1;
+   next;
+   }
+   print $out "$parsed_mac,$parsed_ip\n";
+   #check if mac/ip already exist
+   $match4 = 1 if $parsed_mac eq $mac && 
$macdb->{macs}->{$mac}->{'ip4'} && $macdb->{macs}->{$mac}->{'ip4'} eq $ip4;
+   $match6 = 1 if $parsed_mac eq $mac && 
$macdb->{macs}->{$mac}->{'ip6'} && $macdb->{macs}->{$mac}->{'ip6'} eq $ip6;
}
 
-   print $out "$mac,$ip\n";
+   if(!$match4 && $ip4) {
+   print $out "$mac,$ip4\n";
+   $change = 1;
+   }
+
+   if(!$match6 && $ip6) {
+   print $out "$mac,$ip6\n";
+   $change = 1;
+   }
close $in;
close $out;
move $ethers_tmp_file, $ethers_file;
@@ -77,12 +105,12 @@ sub add_ip_mapping {
 PVE::Tools::lock_file($ethers_file, 10, $appendFn);
 
 if ($@) {
-   warn "Unable to add $mac/$ip to the dnsmasq configuration: $@\n";
+   warn "Unable to add $mac to the dnsmasq configuration: $@\n";
return;
 }
 
 my $service_name = "dnsmasq\@$dhcpid";
-PVE::Tools::run_command(['systemctl', 'reload', $service_name]);
+PVE::Tools::run_command(['systemctl', 'reload', $service_name]) if $change;
 }
 
 sub configure_subnet {
diff --git a/src/PVE/Network/SDN/Dhcp/Plugin.pm 
b/src/PVE/Network/SDN/Dhcp/Plugin.pm
index 7b9e9b7..8d0f7ba 100644
--- a/src/PVE/Network/SDN/Dhcp/Plugin.pm
+++ b/src/PVE/Network/SDN/Dhcp/Plugin.pm
@@ -23,7 +23,7 @@ sub private {
 }
 
 sub add_ip_mapping {
-my ($class, $dhcp_config, $mac, $ip) = @_;
+my ($class, $dhcpid, $macdb, $mac, $ip4, $ip6) = @_;
 die 'implement in sub class';
 }
 
diff --git a/src/PVE/Network/SDN/Vnets.pm b/src/PVE/Network/SDN/Vnets.pm
index 4b3276e..09378ff 100644
--- a/src/PVE/Network/SDN/Vnets.pm
+++ b/src/PVE/Network/SDN/Vnets.pm
@@ -7,6 +7,7 @@ use Net::IP;
 
 use PVE::Cluster qw(cfs_read_file cfs_write_file cfs_lock_file);
 use PVE::Network::SDN;
+use PVE::Network::SDN::Dhcp;
 use PVE::Network::SDN::Subnets;
 use PVE::Network::SDN::Zones;
 
@@ -184,4 +185,18 @@ sub del_ips_from_mac {
 return ($ip4, $ip6);
 }
 
+sub add_dhcp_mapping {
+my ($vnetid, $mac) = @_;
+
+my $vnet = 

[pve-devel] [PATCH v4 pve-manager 20/33] sdn: ipam: add ipam panel

2023-11-17 Thread Stefan Hanreich
Signed-off-by: Stefan Hanreich 
---
 www/css/ext6-pve.css  |  22 ++-
 www/manager6/Makefile |   1 +
 www/manager6/dc/Config.js |  12 +-
 www/manager6/sdn/IpamEdit.js  |  78 ++
 www/manager6/tree/DhcpTree.js | 267 ++
 5 files changed, 372 insertions(+), 8 deletions(-)
 create mode 100644 www/manager6/sdn/IpamEdit.js
 create mode 100644 www/manager6/tree/DhcpTree.js

diff --git a/www/css/ext6-pve.css b/www/css/ext6-pve.css
index e18b173f5..091855356 100644
--- a/www/css/ext6-pve.css
+++ b/www/css/ext6-pve.css
@@ -510,28 +510,38 @@ div.right-aligned {
 content: ' ';
 }
 
-.fa-sdn:before {
+.x-fa-sdn-treelist:before {
 width: 14px;
 height: 14px;
 position: absolute;
 left: 1px;
 top: 4px;
+}
+
+.fa-sdn:before {
 background-image:url(../images/icon-sdn.svg);
 background-size: 14px 14px;
 content: ' ';
 }
 
 .fa-network-wired:before {
-width: 14px;
-height: 14px;
-position: absolute;
-left: 1px;
-top: 4px;
 background-image:url(../images/icon-fa-network-wired.svg);
 background-size: 14px 14px;
 content: ' ';
 }
 
+.x-fa-treepanel:before {
+width: 16px;
+height: 24px;
+display: block;
+background-repeat: no-repeat;
+background-position: center;
+}
+
+.x-tree-icon-none {
+display: none;
+}
+
 .x-treelist-row-over > * > .x-treelist-item-icon,
 .x-treelist-row-over > * > .x-treelist-item-text{
 color: #000;
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 093452cd7..93b4ff155 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -108,6 +108,7 @@ JSSRC=  
\
tree/ResourceTree.js\
tree/SnapshotTree.js\
tree/ResourceMapTree.js \
+   tree/DhcpTree.js\
window/Backup.js\
window/BackupConfig.js  \
window/BulkAction.js\
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 7d01da5fb..7c2b7b168 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -185,7 +185,7 @@ Ext.define('PVE.dc.Config', {
me.items.push({
xtype: 'pveSDNStatus',
title: gettext('SDN'),
-   iconCls: 'fa fa-sdn',
+   iconCls: 'fa fa-sdn x-fa-sdn-treelist',
hidden: true,
itemId: 'sdn',
expandedOnInit: true,
@@ -203,7 +203,7 @@ Ext.define('PVE.dc.Config', {
groups: ['sdn'],
title: 'VNets',
hidden: true,
-   iconCls: 'fa fa-network-wired',
+   iconCls: 'fa fa-network-wired x-fa-sdn-treelist',
itemId: 'sdnvnet',
},
{
@@ -213,6 +213,14 @@ Ext.define('PVE.dc.Config', {
hidden: true,
iconCls: 'fa fa-gear',
itemId: 'sdnoptions',
+   },
+   {
+   xtype: 'pveDhcpTree',
+   groups: ['sdn'],
+   title: gettext('IPAM'),
+   hidden: true,
+   iconCls: 'fa fa-map-signs',
+   itemId: 'sdnmappings',
});
}
 
diff --git a/www/manager6/sdn/IpamEdit.js b/www/manager6/sdn/IpamEdit.js
new file mode 100644
index 0..18e22c592
--- /dev/null
+++ b/www/manager6/sdn/IpamEdit.js
@@ -0,0 +1,78 @@
+Ext.define('PVE.sdn.IpamEditInputPanel', {
+extend: 'Proxmox.panel.InputPanel',
+mixins: ['Proxmox.Mixin.CBind'],
+
+isCreate: false,
+
+onGetValues: function(values) {
+   let me = this;
+
+   if (!values.vmid) {
+   delete values.vmid;
+   }
+
+   return values;
+},
+
+items: [
+   {
+   xtype: 'pmxDisplayEditField',
+   name: 'vmid',
+   fieldLabel: gettext('VMID'),
+   allowBlank: false,
+   editable: false,
+   cbind: {
+   hidden: '{isCreate}',
+   },
+   },
+   {
+   xtype: 'pmxDisplayEditField',
+   name: 'mac',
+   fieldLabel: gettext('MAC'),
+   allowBlank: false,
+   cbind: {
+   editable: '{isCreate}',
+   },
+   },
+   {
+   xtype: 'proxmoxtextfield',
+   name: 'ip',
+   fieldLabel: gettext('IP'),
+   allowBlank: false,
+   },
+],
+});
+
+Ext.define('PVE.sdn.IpamEdit', {
+extend: 'Proxmox.window.Edit',
+
+subject: gettext('DHCP Mapping'),
+width: 350,
+
+isCreate: false,
+mapping: {},
+
+submitUrl: function(url, values) {
+   return `${url}/${values.zone}/${values.vnet}/${values.mac}`;
+},
+
+

[pve-devel] [PATCH v4 pve-container 31/33] vm_apply_pending: add|del ips from ipam for offline changes

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Co-Authored-by: Stefan Hanreich 
Signed-off-by: Alexandre Derumier 
---
 src/PVE/LXC/Config.pm | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index c884313..823a2b9 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1471,6 +1471,12 @@ sub vmconfig_apply_pending {
} elsif ($opt =~ m/^unused(\d+)$/) {
PVE::LXC::delete_mountpoint_volume($storecfg, $vmid, $conf->{$opt})
if !$class->is_volume_in_use($conf, $conf->{$opt}, 1, 1);
+   } elsif ($opt =~ m/^net(\d+)$/) {
+   if ($have_sdn) {
+   my $net = $class->parse_lxc_network($conf->{$opt});
+   eval { PVE::Network::SDN::Vnets::del_ips_from_mac($net->{bridge}, $net->{hwaddr}, $conf->{hostname}) };
+   warn $@ if $@;
+   }
}
};
if (my $err = $@) {
@@ -1493,6 +1499,15 @@ sub vmconfig_apply_pending {
my $netid = $1;
my $net = $class->parse_lxc_network($conf->{pending}->{$opt});
$conf->{pending}->{$opt} = $class->print_lxc_network($net);
+   if ($have_sdn) {
+   if($conf->{$opt}) {
+   my $old_net = $class->parse_lxc_network($conf->{$opt});
+   if ($old_net->{bridge} ne $net->{bridge} || $old_net->{hwaddr} ne $net->{hwaddr}) {
+   PVE::Network::SDN::Vnets::del_ips_from_mac($old_net->{bridge}, $old_net->{hwaddr}, $conf->{name});
+   }
+   }
+   PVE::Network::SDN::Vnets::add_next_free_cidr($net->{bridge}, $conf->{hostname}, $net->{hwaddr}, $vmid, undef, 1);
+   }
}
};
if (my $err = $@) {
-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 pve-container 28/33] vm_destroy: remove ips from ipam for all interfaces

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Signed-off-by: Alexandre Derumier 
---
 src/PVE/LXC.pm | 16 
 1 file changed, 16 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index b6df6d6..4472e0f 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -46,6 +46,7 @@ use PVE::LXC::Tools;
 my $have_sdn;
 eval {
 require PVE::Network::SDN::Zones;
+require PVE::Network::SDN::Vnets;
 $have_sdn = 1;
 };
 
@@ -898,6 +899,8 @@ sub destroy_lxc_container {
});
 }
 
+delete_ifaces_ipams_ips($conf, $vmid);
+
 rmdir "/var/lib/lxc/$vmid/rootfs";
 unlink "/var/lib/lxc/$vmid/config";
 rmdir "/var/lib/lxc/$vmid";
@@ -2755,4 +2758,17 @@ sub thaw($) {
 }
 }
 
+sub delete_ifaces_ipams_ips {
+my ($conf, $vmid) = @_;
+
+return if !$have_sdn;
+
+for my $opt (keys %$conf) {
+   next if $opt !~ m/^net(\d+)$/;
+   my $net = PVE::QemuServer::parse_net($conf->{$opt});
+   eval { PVE::Network::SDN::Vnets::del_ips_from_mac($net->{bridge}, $net->{hwaddr}, $conf->{hostname}) };
+   warn $@ if $@;
+}
+}
+
 1;
-- 
2.39.2





[pve-devel] [PATCH manager v4 1/1] api: add guest profile api endpoint

2023-11-17 Thread Dominik Csapak
basic CRUD for the profile section config

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Cluster.pm  |   7 +
 PVE/API2/Cluster/Makefile|   1 +
 PVE/API2/Cluster/Profiles.pm | 239 +++
 3 files changed, 247 insertions(+)
 create mode 100644 PVE/API2/Cluster/Profiles.pm

diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
index 04387ab4..d628df85 100644
--- a/PVE/API2/Cluster.pm
+++ b/PVE/API2/Cluster.pm
@@ -30,6 +30,7 @@ use PVE::API2::Cluster::Mapping;
 use PVE::API2::Cluster::Jobs;
 use PVE::API2::Cluster::MetricServer;
 use PVE::API2::Cluster::Notifications;
+use PVE::API2::Cluster::Profiles;
 use PVE::API2::ClusterConfig;
 use PVE::API2::Firewall::Cluster;
 use PVE::API2::HAConfig;
@@ -103,6 +104,11 @@ __PACKAGE__->register_method ({
 path => 'mapping',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Cluster::Profiles",
+path => 'profiles',
+});
+
 if ($have_sdn) {
 __PACKAGE__->register_method ({
subclass => "PVE::API2::Network::SDN",
@@ -158,6 +164,7 @@ __PACKAGE__->register_method ({
{ name => 'notifications' },
{ name => 'nextid' },
{ name => 'options' },
+   { name => 'profiles' },
{ name => 'replication' },
{ name => 'resources' },
{ name => 'status' },
diff --git a/PVE/API2/Cluster/Makefile b/PVE/API2/Cluster/Makefile
index b109e5cb..35a3f871 100644
--- a/PVE/API2/Cluster/Makefile
+++ b/PVE/API2/Cluster/Makefile
@@ -9,6 +9,7 @@ PERLSOURCE= \
MetricServer.pm \
Mapping.pm  \
Notifications.pm\
+   Profiles.pm \
Jobs.pm \
Ceph.pm
 
diff --git a/PVE/API2/Cluster/Profiles.pm b/PVE/API2/Cluster/Profiles.pm
new file mode 100644
index ..1631f4bd
--- /dev/null
+++ b/PVE/API2/Cluster/Profiles.pm
@@ -0,0 +1,239 @@
+package PVE::API2::Cluster::Profiles;
+
+use warnings;
+use strict;
+
+use PVE::Tools qw(extract_param extract_sensitive_params);
+use PVE::Exception qw(raise_perm_exc raise_param_exc);
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::RPCEnvironment;
+
+use PVE::Profiles::Plugin;
+use PVE::Profiles::VM;
+use PVE::Profiles::CT;
+
+PVE::Profiles::VM->register();
+PVE::Profiles::CT->register();
+PVE::Profiles::Plugin->init(1);
+
+use PVE::RESTHandler;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+name => 'profile_index',
+path => '',
+method => 'GET',
+description => "List configured guest profiles.",
+permissions => {
+   user => 'all',
+   description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+   " 'Mapping.Audit' permissions on 'mapping/guest-profile/'.",
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   type => {
+   type => 'string',
+   description => "If set, return only profiles of this type.",
+   optional => 1,
+   enum => ['vm', 'ct'],
+   },
+   },
+},
+returns => {
+   type => 'array',
+   items => {
+   type => "object",
+   properties => {
+   id => {
+   description => "The ID of the entry.",
+   type => 'string'
+   },
+   type => {
+   description => "Plugin type.",
+   type => 'string',
+   },
+   },
+   },
+   links => [ { rel => 'child', href => "{id}" } ],
+},
+code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   my $res = [];
+   my $cfg = PVE::Cluster::cfs_read_file('virtual-guest/profiles.cfg');
+   my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+   for my $id (sort keys $cfg->{ids}->%*) {
+   next if !$rpcenv->check_any($authuser, "/mapping/guest-profile/$id", $can_see_mapping_privs, 1);
+   my $plugin_config = $cfg->{ids}->{$id};
+   next if defined($param->{type}) && $plugin_config->{type} ne $param->{type};
+   push @$res, {
+   id => $id,
+   type => $plugin_config->{type},
+   'profile-description' => $plugin_config->{'profile-description'},
+   };
+   }
+
+   return $res;
+}});
+
+__PACKAGE__->register_method ({
+name => 'read',
+path => '{id}',
+method => 'GET',
+description => "Read profile configuration.",
+permissions => {
+   check =>['or',
+   ['perm', '/mapping/guest-profile/{id}', ['Mapping.Use']],
+   ['perm', '/mapping/guest-profile/{id}', ['Mapping.Modify']],
+   ['perm', '/mapping/guest-profile/{id}', ['Mapping.Audit']],
+   ],
+},
+parameters => {
+   additionalProperties => 0,
+   properties 

[pve-devel] [PATCH container v4 3/3] pct: register and init the profiles plugins

2023-11-17 Thread Dominik Csapak
we have to do that here, so the properties/options are correctly configured
when using this feature on the CLI

Signed-off-by: Dominik Csapak 
---
 src/PVE/CLI/pct.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index a0b9bce..e579b7a 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -24,6 +24,12 @@ use PVE::API2::LXC::Snapshot;
 use PVE::API2::LXC::Status;
 use PVE::API2::LXC;
 
+use PVE::Profiles::Plugin;
+use PVE::Profiles::CT;
+
+PVE::Profiles::CT->register();
+PVE::Profiles::Plugin->init();
+
 use base qw(PVE::CLIHandler);
 
 my $nodename = PVE::INotify::nodename();
-- 
2.30.2






[pve-devel] [PATCH container v4 1/3] add the CT profiles plugin

2023-11-17 Thread Dominik Csapak
simply uses the json_config_properties for the ct config and maps them
to "ct_${opt}"

Signed-off-by: Dominik Csapak 
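
As a quick illustration of the options() helper in this patch: it derives an
all-optional option set from the guest config schema. A minimal sketch in
Python (the patch itself is Perl; the property names below are hypothetical
stand-ins, not real config keys):

```python
# Illustrative sketch: every property of the guest config schema
# becomes an optional profile option, mirroring the options() loop above.
def derive_options(properties):
    return {name: {"optional": 1} for name in properties}

# hypothetical schema fragment
props = {"memory": {"type": "integer"}, "hostname": {"type": "string"}}

opts = derive_options(props)
print(opts["memory"])  # {'optional': 1}
```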
---
 src/PVE/Makefile  |  1 +
 src/PVE/Profiles/CT.pm| 28 
 src/PVE/Profiles/Makefile |  4 
 3 files changed, 33 insertions(+)
 create mode 100644 src/PVE/Profiles/CT.pm
 create mode 100644 src/PVE/Profiles/Makefile

diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index 40742e4..e9ad9a3 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -8,5 +8,6 @@ install: ${SOURCES}
make -C LXC install
make -C VZDump install
make -C CLI install
+   make -C Profiles install
 
 
diff --git a/src/PVE/Profiles/CT.pm b/src/PVE/Profiles/CT.pm
new file mode 100644
index 000..513fad4
--- /dev/null
+++ b/src/PVE/Profiles/CT.pm
@@ -0,0 +1,28 @@
+package PVE::Profiles::CT;
+
+use strict;
+use warnings;
+
+use PVE::Profiles::Plugin;
+use PVE::LXC::Config;
+
+use base qw(PVE::Profiles::Plugin);
+
+sub type {
+return "ct";
+}
+
+sub properties {
+return PVE::LXC::Config::json_config_properties();
+}
+
+sub options {
+my $props = PVE::LXC::Config::json_config_properties();
+my $opts = {};
+for my $opt (keys $props->%*) {
+   $opts->{$opt} = { optional => 1 };
+}
+return $opts;
+}
+
+1;
diff --git a/src/PVE/Profiles/Makefile b/src/PVE/Profiles/Makefile
new file mode 100644
index 000..e63a9f2
--- /dev/null
+++ b/src/PVE/Profiles/Makefile
@@ -0,0 +1,4 @@
+
+.PHONY: install
+install:
+   install -D -m 0644 CT.pm ${PERLDIR}/PVE/Profiles/CT.pm
-- 
2.30.2






[pve-devel] [PATCH cluster v4 1/1] add profiles.cfg to cluster fs

2023-11-17 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index cfa2583..c01bf89 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -80,6 +80,7 @@ my $observed = {
 'sdn/dns.cfg' => 1,
 'sdn/.running-config' => 1,
 'virtual-guest/cpu-models.conf' => 1,
+'virtual-guest/profiles.cfg' => 1,
 'mapping/pci.cfg' => 1,
 'mapping/usb.cfg' => 1,
 };
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index c8094ac..bcec079 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -109,6 +109,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "sdn/dns.cfg" },
{ .path = "sdn/.running-config" },
{ .path = "virtual-guest/cpu-models.conf" },
+   { .path = "virtual-guest/profiles.cfg" },
{ .path = "firewall/cluster.fw" },
{ .path = "mapping/pci.cfg" },
{ .path = "mapping/usb.cfg" },
-- 
2.30.2






[pve-devel] [PATCH container v4 2/3] api: add profile option to create ct api call

2023-11-17 Thread Dominik Csapak
we use the profile cfg as the 'param' hash, but overwrite the values
with the ones from the api call, so one can overwrite options from
the profile easily

Signed-off-by: Dominik Csapak 
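
The override semantics described above can be sketched as follows. This is an
illustrative Python model of the `$profile_cfg->{$opt} = $param->{$opt}` loop
in the patch, not code from the series; the option names are made up:

```python
# Illustrative sketch: the profile supplies defaults, explicit API
# parameters overwrite them on conflict.
def apply_profile(profile_cfg, param):
    merged = dict(profile_cfg)  # start from the profile's values
    merged.update(param)        # explicit parameters take precedence
    return merged

profile = {"memory": 2048, "cores": 2, "ostype": "debian"}
request = {"memory": 4096, "hostname": "ct1"}

merged = apply_profile(profile, request)
print(merged["memory"])  # 4096 -- the request wins
print(merged["cores"])   # 2    -- inherited from the profile
```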
---
 src/PVE/API2/LXC.pm | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 8839105..12a1439 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -27,6 +27,10 @@ use PVE::API2::LXC::Config;
 use PVE::API2::LXC::Status;
 use PVE::API2::LXC::Snapshot;
 use PVE::JSONSchema qw(get_standard_option);
+
+use PVE::Profiles::Plugin;
+use PVE::Profiles::CT;
+
 use base qw(PVE::RESTHandler);
 
 BEGIN {
@@ -196,6 +200,11 @@ __PACKAGE__->register_method({
default => 0,
description => "Start the CT after its creation finished successfully.",
},
+   profile => {
+   optional => 1,
+   type => 'string',
+   description => "The profile to use as base config.",
+   },
}),
 },
 returns => {
@@ -209,6 +218,19 @@ __PACKAGE__->register_method({
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
 
+   my $profile = extract_param($param, 'profile');
+   if (defined($profile)) {
+   $rpcenv->check_full($authuser, "/mapping/guest-profile/${profile}", ['Mapping.Use']);
+   my $profile_cfg = eval { PVE::Profiles::Plugin::load_profile($profile, 'ct') };
+   raise_param_exc({ profile => "$@" }) if $@;
+
+   for my $opt (keys $param->%*) {
+   $profile_cfg->{$opt} = $param->{$opt};
+   }
+
+   $param = $profile_cfg;
+   }
+
my $node = extract_param($param, 'node');
my $vmid = extract_param($param, 'vmid');
my $ignore_unpack_errors = extract_param($param, 'ignore-unpack-errors');
@@ -381,6 +403,7 @@ __PACKAGE__->register_method({
my $vollist = [];
eval {
my $orig_mp_param; # only used if $restore
+   print "using profile '$profile'\n" if $profile;
if ($restore) {
die "can't overwrite running container\n" if PVE::LXC::check_running($vmid);
if ($archive ne '-') {
-- 
2.30.2






[pve-devel] [PATCH guest-common v4 1/1] add profiles section config plugin

2023-11-17 Thread Dominik Csapak
this is intended to house custom profiles which can be used
on guest creation instead of having to specify every option manually.

we do special things here:
* we always set 'allow_unknown' to 1, because when using the
  guest-specific parts in the CLI, we cannot depend on the other guest
  type's plugin (that would create a cyclic dependency), so unknown
  sections need to be ignored

* we add the 'profile-description' to the options of all registered
  plugins (so we don't have to do it manually)

* we always call 'init(property_isolation => 1)' of the SectionConfig so we
  cannot forget that

Signed-off-by: Dominik Csapak 
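
The 'allow unknown sections' behaviour from the first bullet can be modelled
roughly like this. An illustrative Python sketch under the assumption that a
config file may contain sections of both guest types (the real logic lives in
PVE::SectionConfig; the names here are invented):

```python
# Illustrative sketch: keep sections whose type is known to this parser,
# silently skip the rest instead of failing, so e.g. qemu-server can read
# a profiles file that also contains 'ct' sections.
def parse_sections(sections, known_types, allow_unknown=True):
    parsed = {}
    for sec_id, sec in sections.items():
        if sec["type"] not in known_types:
            if allow_unknown:
                continue  # ignore sections of the other guest type
            raise ValueError("unknown section type: " + sec["type"])
        parsed[sec_id] = sec
    return parsed

raw = {"web": {"type": "vm"}, "db": {"type": "ct"}}
print(parse_sections(raw, {"vm"}))  # only the 'vm' section survives
```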
---
 src/Makefile   |  2 +
 src/PVE/Profiles/Plugin.pm | 83 ++
 2 files changed, 85 insertions(+)
 create mode 100644 src/PVE/Profiles/Plugin.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..d99645c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -17,6 +17,8 @@ install: PVE
install -d ${PERL5DIR}/PVE/Mapping
install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
+   install -d ${PERL5DIR}/PVE/Profiles
+   install -m 0644 PVE/Profiles/Plugin.pm ${PERL5DIR}/PVE/Profiles/
install -d ${PERL5DIR}/PVE/VZDump
install -m 0644 PVE/VZDump/Plugin.pm ${PERL5DIR}/PVE/VZDump/
install -m 0644 PVE/VZDump/Common.pm ${PERL5DIR}/PVE/VZDump/
diff --git a/src/PVE/Profiles/Plugin.pm b/src/PVE/Profiles/Plugin.pm
new file mode 100644
index 000..717383b
--- /dev/null
+++ b/src/PVE/Profiles/Plugin.pm
@@ -0,0 +1,83 @@
+package PVE::Profiles::Plugin;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file);
+use PVE::SectionConfig;
+
+use base qw(PVE::SectionConfig);
+
+my $CFG_PATH = 'virtual-guest/profiles.cfg';
+
+cfs_register_file(
+$CFG_PATH,
+sub { __PACKAGE__->parse_config(@_); },
+sub { __PACKAGE__->write_config(@_); }
+);
+
+my $defaultData = {
+propertyList => {
+   type => { description => 'Profile type.' },
+   id => {
+   type => 'string',
+   description => "The ID of the profile.",
+   format => 'pve-configid',
+   },
+   'profile-description' => {
+   description => "Use this to add a short comment about a profile.",
+   type => 'string',
+   optional => 1,
+   maxLength => 128,
+   },
+},
+};
+
+sub private {
+return $defaultData;
+}
+
+sub parse_config {
+my ($class, $filename, $raw, $allow_unknown) = @_;
+
+# always allow unknown section types, so that qemu-server/pct-container
+# can parse the file without loading the other plugin type
+return $class->SUPER::parse_config($filename, $raw, 1);
+}
+
+sub write_config {
+my ($class, $filename, $cfg, $allow_unknown) = @_;
+
+return $class->SUPER::write_config($filename, $cfg, 1);
+}
+
+# gets, checks and prepares the guest config
+# throws an error if it does not exist or the type is wrong
+sub load_profile {
+my ($profile_id, $profile_type) = @_;
+
+my $cfg = PVE::Cluster::cfs_read_file($CFG_PATH);
+
+my $profile = $cfg->{ids}->{$profile_id};
+die "no such profile '$profile_id'\n" if !defined $profile;
+die "wrong type '$profile->{type}'\n" if $profile->{type} ne $profile_type;
+
+delete $profile->{type};
+delete $profile->{'profile-description'};
+
+return $profile;
+}
+
+# override so we can add the profile description to all sub classes
+sub init {
+my ($class) = @_;
+
+$class->SUPER::init(property_isolation => 1);
+
+my $pdata = $class->private();
+for my $type (keys $pdata->{plugins}->%*) {
+   $pdata->{options}->{$type}->{'profile-description'} = { optional => 1 };
+}
+}
+
+1;
-- 
2.30.2






[pve-devel] [PATCH qemu-server v4 2/3] api: add profile option to create vm api call

2023-11-17 Thread Dominik Csapak
we use the profile cfg as the 'param' hash, but overwrite the values
with the ones from the API call, so one can overwrite options from the
profile easily

we also add the used profile to the meta info in the config, since
it might be interesting to know which one was used

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Qemu.pm  | 23 +++
 PVE/QemuServer.pm |  6 ++
 2 files changed, 29 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 38bdaabd..cce36a8d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -49,6 +49,9 @@ use PVE::SSHInfo;
 use PVE::Replication;
 use PVE::StorageTunnel;
 
+use PVE::Profiles::Plugin;
+use PVE::Profiles::VM;
+
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
require PVE::HA::Env::PVE2;
@@ -837,6 +840,11 @@ __PACKAGE__->register_method({
default => 0,
description => "Start VM after it was created successfully.",
},
+   profile => {
+   optional => 1,
+   type => 'string',
+   description => "The profile to use as base config.",
+   },
},
1, # with_disk_alloc
),
@@ -850,6 +858,19 @@ __PACKAGE__->register_method({
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
 
+   my $profile = extract_param($param, 'profile');
+   if (defined($profile)) {
+   $rpcenv->check_full($authuser, "/mapping/guest-profile/${profile}", ['Mapping.Use']);
+   my $profile_cfg = eval { PVE::Profiles::Plugin::load_profile($profile, 'vm') };
+   raise_param_exc({ profile => "$@"}) if $@;
+
+   for my $opt (keys $param->%*) {
+   $profile_cfg->{$opt} = $param->{$opt};
+   }
+
+   $param = $profile_cfg;
+   }
+
my $node = extract_param($param, 'node');
my $vmid = extract_param($param, 'vmid');
 
@@ -1013,6 +1034,8 @@ __PACKAGE__->register_method({
my $conf = $param;
my $arch = PVE::QemuServer::get_vm_arch($conf);
 
+   print "using profile '$profile'\n" if $profile;
+
$conf->{meta} = PVE::QemuServer::new_meta_info_string();
 
my $vollist = [];
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c465fb6f..1c7e65ea 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -293,6 +293,12 @@ my $meta_info_fmt = {
pattern => '\d+(\.\d+)+',
optional => 1,
 },
+'profile' => {
+   type => 'string',
+   description => 'The Profile used during creation.',
+   format => 'pve-configid',
+   optional => 1,
+},
 };
 
 my $confdesc = {
-- 
2.30.2






[pve-devel] [PATCH qemu-server v4 3/3] qm: register and init the profiles plugins

2023-11-17 Thread Dominik Csapak
we have to do that here, so the properties/options are correctly configured
when using this feature on the CLI

Signed-off-by: Dominik Csapak 
---
 PVE/CLI/qm.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b17b4fe2..82240ba3 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -37,6 +37,12 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::OVF;
 use PVE::QemuServer;
 
+use PVE::Profiles::Plugin;
+use PVE::Profiles::VM;
+
+PVE::Profiles::VM->register();
+PVE::Profiles::Plugin->init();
+
 use PVE::CLIHandler;
 use base qw(PVE::CLIHandler);
 
-- 
2.30.2






[pve-devel] [PATCH cluster/guest-common/qemu-server/container/manager v4] add backend profile support

2023-11-17 Thread Dominik Csapak
This series aims to provide profile support when creating guests (ct/vm)
so that users can reuse options without having to specify them every
time.

UI is still not done, but sending the backend again.

changes from v3:
* adapt to the changes in section config regarding the 'init' ->
  now `init(property_isolation => 1)` instead of `init(1)`

  otherwise there where no changes since v3

changes from v2:
* rename get_guest_ready_config into load_profile
* rename some variables
* add type parameter to profile listing so we can filter
* whitespace fixes
* comment fixes

changes from rfc/v1:
* uses a section config with separated plugins, using the oneOf schema
* move the parsing with preparing into a helper into the Plugin class
* don't save the profile into the 'meta' option, but log it instead

0: https://lists.proxmox.com/pipermail/pve-devel/2023-November/060300.html

pve-cluster:

Dominik Csapak (1):
  add profiles.cfg to cluster fs

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

pve-guest-common:

Dominik Csapak (1):
  add profiles section config plugin

 src/Makefile   |  2 +
 src/PVE/Profiles/Plugin.pm | 83 ++
 2 files changed, 85 insertions(+)
 create mode 100644 src/PVE/Profiles/Plugin.pm

qemu-server:

Dominik Csapak (3):
  add the VM profiles plugin
  api: add profile option to create vm api call
  qm: register and init the profiles plugins

 PVE/API2/Qemu.pm  | 23 +++
 PVE/CLI/qm.pm |  6 ++
 PVE/Makefile  |  1 +
 PVE/Profiles/Makefile |  5 +
 PVE/Profiles/VM.pm| 28 
 PVE/QemuServer.pm |  6 ++
 6 files changed, 69 insertions(+)
 create mode 100644 PVE/Profiles/Makefile
 create mode 100644 PVE/Profiles/VM.pm

pve-container:

Dominik Csapak (3):
  add the CT profiles plugin
  api: add profile option to create ct api call
  pct: register and init the profiles plugins

 src/PVE/API2/LXC.pm   | 23 +++
 src/PVE/CLI/pct.pm|  6 ++
 src/PVE/Makefile  |  1 +
 src/PVE/Profiles/CT.pm| 28 
 src/PVE/Profiles/Makefile |  4 
 5 files changed, 62 insertions(+)
 create mode 100644 src/PVE/Profiles/CT.pm
 create mode 100644 src/PVE/Profiles/Makefile

pve-manager:

Dominik Csapak (1):
  api: add guest profile api endpoint

 PVE/API2/Cluster.pm  |   7 +
 PVE/API2/Cluster/Makefile|   1 +
 PVE/API2/Cluster/Profiles.pm | 239 +++
 3 files changed, 247 insertions(+)
 create mode 100644 PVE/API2/Cluster/Profiles.pm

-- 
2.30.2






[pve-devel] [PATCH qemu-server v4 1/3] add the VM profiles plugin

2023-11-17 Thread Dominik Csapak
simply uses the json_config_properties for the vm config and maps them
to "vm_${opt}"

Signed-off-by: Dominik Csapak 
---
 PVE/Makefile  |  1 +
 PVE/Profiles/Makefile |  5 +
 PVE/Profiles/VM.pm| 28 
 3 files changed, 34 insertions(+)
 create mode 100644 PVE/Profiles/Makefile
 create mode 100644 PVE/Profiles/VM.pm

diff --git a/PVE/Makefile b/PVE/Makefile
index dc173681..d09ca98d 100644
--- a/PVE/Makefile
+++ b/PVE/Makefile
@@ -12,3 +12,4 @@ install:
$(MAKE) -C API2 install
$(MAKE) -C CLI install
$(MAKE) -C QemuServer install
+   $(MAKE) -C Profiles install
diff --git a/PVE/Profiles/Makefile b/PVE/Profiles/Makefile
new file mode 100644
index ..e5f56833
--- /dev/null
+++ b/PVE/Profiles/Makefile
@@ -0,0 +1,5 @@
+SOURCES=VM.pm
+
+.PHONY: install
+install: ${SOURCES}
+   for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Profiles/$$i; done
diff --git a/PVE/Profiles/VM.pm b/PVE/Profiles/VM.pm
new file mode 100644
index ..dc664ec5
--- /dev/null
+++ b/PVE/Profiles/VM.pm
@@ -0,0 +1,28 @@
+package PVE::Profiles::VM;
+
+use strict;
+use warnings;
+
+use PVE::Profiles::Plugin;
+use PVE::QemuServer;
+
+use base qw(PVE::Profiles::Plugin);
+
+sub type {
+return "vm";
+}
+
+sub properties {
+return PVE::QemuServer::json_config_properties();
+}
+
+sub options {
+my $props = PVE::QemuServer::json_config_properties();
+my $opts = {};
+for my $opt (keys $props->%*) {
+   $opts->{$opt} = { optional => 1 };
+}
+return $opts;
+}
+
+1;
-- 
2.30.2






[pve-devel] [PATCH v4 pve-docs 33/33] sdn: dhcp: Add documentation for DHCP

2023-11-17 Thread Stefan Hanreich
Signed-off-by: Stefan Hanreich 
---
 pvesdn.adoc | 122 
 1 file changed, 122 insertions(+)

diff --git a/pvesdn.adoc b/pvesdn.adoc
index b796c5e..24878e2 100644
--- a/pvesdn.adoc
+++ b/pvesdn.adoc
@@ -79,6 +79,9 @@ In addition to this, the following options are offered:
 * DNS: Define a DNS server API for registering virtual guests' hostname and IP
   addresses
 
+* DHCP: Define a DHCP server for a zone that automatically allocates IPs for
+  guests in the IPAM and leases them to the guests via DHCP.
+
 [[pvesdn_config_main_sdn]]
 
 SDN
@@ -418,6 +421,17 @@ for all subnets defined in those zones.
 This is the default internal IPAM for your {pve} cluster, if you don't have
 external IPAM software.
 
+You can inspect the current status of the PVE IPAM plugin via the IPAM panel in
+the SDN section of the datacenter configuration. This UI can be used to create,
+update and delete IP mappings. This is particularly convenient in conjunction
+with the xref:pvesdn_config_dhcp[DHCP feature].
+
+If you are using DHCP, you can use the IPAM panel to create or edit leases for
+specific VMs, which enables you to change the IPs allocated via DHCP. When
+editing an IP of a VM that is using DHCP you must make sure to force the guest
+to acquire a new DHCP lease. This can usually be done by reloading the network
+stack of the guest or rebooting it.
+
 [[pvesdn_ipam_plugin_phpipam]]
 phpIPAM Plugin
 ~~
@@ -484,6 +498,114 @@ key:: An API access key
 ttl:: The default TTL for records
 
 
+[[pvesdn_config_dhcp]]
+DHCP
+--
+
+The DHCP plugin in {pve} SDN can be used to automatically deploy a DHCP server
+for a Zone. It provides DHCP for all Subnets in a Zone that have a DHCP range
+configured. Currently the only available backend plugin for DHCP is the dnsmasq
+plugin.
+
+The DHCP plugin works by allocating an IP in the IPAM plugin configured in the
+Zone when adding a new network interface to a VM/CT. You can find more
+information on how to configure an IPAM in the
+xref:pvesdn_config_ipam[respective section of our documentation].
+
+When the VM starts, a mapping for the MAC address and IP gets created in the DHCP
+plugin of the zone. When the network interface is removed or the VM/CT is
+destroyed, the entries in the IPAM and the DHCP server are deleted as well.
+
+NOTE: Some features (adding/editing/removing IP mappings) are currently only
+available when using the xref:pvesdn_ipam_plugin_pveipam[PVE IPAM plugin].
+
+
+Configuration
+~
+
+You can enable automatic DHCP for a zone in the Web UI via the Zones panel and
+enabling DHCP in the advanced options of a zone.
+
+NOTE: Currently only Simple Zones have support for automatic DHCP
+
+After automatic DHCP has been enabled for a Zone, DHCP Ranges need to be
+configured for the subnets in a Zone. In order to do that, go to the Vnets panel and
+select the Subnet for which you want to configure DHCP ranges. In the edit
+dialogue you can configure DHCP ranges in the respective Tab. Alternatively you
+can set DHCP ranges for a Subnet via the following CLI command:
+
+
+pvesh set /cluster/sdn/vnets//subnets/
+ -dhcp-range start-address=10.0.1.100,end-address=10.0.1.200
+ -dhcp-range start-address=10.0.2.100,end-address=10.0.2.200
+
+
+You also need to have a gateway configured for the subnet - otherwise
+automatic DHCP will not work.
+
+The DHCP plugin will then allocate IPs in the IPAM only in the configured
+ranges.
+
+Do not forget to follow the installation steps for the
+xref:pvesdn_dhcp_dnsmasq_installation[dnsmasq DHCP plugin] as well.
+
+Plugins
+~~~
+
+Dnsmasq Plugin
+^^
+Currently this is the only DHCP plugin and therefore the plugin that gets used
+when you enable DHCP for a zone.
+
+[[pvesdn_dhcp_dnsmasq_installation]]
+.Installation
+In order to be able to use the Dnsmasq plugin you need to install
+the dnsmasq package and disable the default DNS server that gets automatically
+started:
+
+
+apt install dnsmasq
+systemctl disable --now dnsmasq
+
+
+.Configuration
+The plugin will create a new systemd service for each zone that dnsmasq gets
+deployed to. The name for the service is `dnsmasq@`. The lifecycle of this
+service is managed by the DHCP plugin.
+
+The plugin automatically generates the following configuration files in the
+folder `/etc/dnsmasq.d/`:
+
+`00-default.conf`::
+This contains the default global configuration for a dnsmasq instance.
+
+`10--.conf`::
+This file configures specific options for a subnet, such as the DNS server that
+should get configured via DHCP.
+
+`10-<zone>-<subnet>.ranges.conf`::
+This file configures the DHCP ranges for the dnsmasq instance.
+
+`ethers`::
+This file contains the MAC address to IP mappings from the IPAM plugin. In
+order to override those mappings, please use the respective IPAM plugin rather
+than editing this file, as it will get overwritten by the dnsmasq plugin.
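
For reference, each entry in `ethers` is one `MAC,IP` pair per line, which is
the format the dnsmasq plugin writes; the addresses below are invented:

----
bc:24:11:12:34:56,10.0.1.105
bc:24:11:ab:cd:ef,10.0.2.150
----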
+
+You must not edit any of the above files, as they are managed by the dnsmasq
+plugin and will be overwritten.

[pve-devel] [PATCH v4 pve-container 27/33] nic hotplug : add|del ips in ipam

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Co-Authored-by: Stefan Hanreich 
Signed-off-by: Alexandre Derumier 
---
 src/PVE/LXC.pm| 17 +
 src/PVE/LXC/Config.pm | 12 
 2 files changed, 29 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 8f53b53..b6df6d6 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -961,6 +961,12 @@ sub update_net {
safe_string_ne($oldnet->{name}, $newnet->{name})) {
 
PVE::Network::veth_delete($veth);
+
+   if ($have_sdn) {
+	    eval { PVE::Network::SDN::Vnets::del_ips_from_mac($oldnet->{bridge}, $oldnet->{hwaddr}, $conf->{hostname}) };
+   warn $@ if $@;
+   }
+
delete $conf->{$opt};
PVE::LXC::Config->write_config($vmid, $conf);
 
@@ -974,14 +980,23 @@ sub update_net {
) {
 
if ($oldnet->{bridge}) {
+
PVE::Network::tap_unplug($veth);
foreach (qw(bridge tag firewall)) {
delete $oldnet->{$_};
}
	$conf->{$opt} = PVE::LXC::Config->print_lxc_network($oldnet);
PVE::LXC::Config->write_config($vmid, $conf);
+
+   if ($have_sdn) {
+	    eval { PVE::Network::SDN::Vnets::del_ips_from_mac($oldnet->{bridge}, $oldnet->{hwaddr}, $conf->{hostname}) };
+   warn $@ if $@;
+   }
}
 
+   if ($have_sdn) {
+	    PVE::Network::SDN::Vnets::add_next_free_cidr($newnet->{bridge}, $conf->{hostname}, $newnet->{hwaddr}, $vmid, undef, 1);
+   }
PVE::LXC::net_tap_plug($veth, $newnet);
 
# This includes the rate:
@@ -1012,6 +1027,8 @@ sub hotplug_net {
 my $eth = $newnet->{name};
 
 if ($have_sdn) {
+	PVE::Network::SDN::Vnets::add_next_free_cidr($newnet->{bridge}, $conf->{hostname}, $newnet->{hwaddr}, $vmid, undef, 1);
+	PVE::Network::SDN::Vnets::add_dhcp_mapping($newnet->{bridge}, $newnet->{hwaddr});
	PVE::Network::SDN::Zones::veth_create($veth, $vethpeer, $newnet->{bridge}, $newnet->{hwaddr});
 } else {
	PVE::Network::veth_create($veth, $vethpeer, $newnet->{bridge}, $newnet->{hwaddr});
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 56e1f10..c884313 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -22,6 +22,12 @@ use constant {
 FITHAW   => 0xc0045878,
 };
 
+my $have_sdn;
+eval {
+require PVE::Network::SDN::Vnets;
+$have_sdn = 1;
+};
+
 my $nodename = PVE::INotify::nodename();
 my $lock_handles =  {};
 my $lockdir = "/run/lock/lxc";
@@ -1383,6 +1389,12 @@ sub vmconfig_hotplug_pending {
} elsif ($opt =~ m/^net(\d)$/) {
my $netid = $1;
PVE::Network::veth_delete("veth${vmid}i$netid");
+   if ($have_sdn) {
+		my $net = PVE::LXC::Config->parse_lxc_network($conf->{$opt});
+		print "delete ips from $opt\n";
+		eval { PVE::Network::SDN::Vnets::del_ips_from_mac($net->{bridge}, $net->{hwaddr}, $conf->{hostname}) };
+   warn $@ if $@;
+   }
} else {
die "skip\n"; # skip non-hotpluggable opts
}
-- 
2.39.2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v4 pve-container 30/33] vm_clone : create ips in ipams

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

also delete ips in case of failure

Signed-off-by: Alexandre Derumier 
---
 src/PVE/API2/LXC.pm | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index e15de28..ee4fdca 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1830,7 +1830,9 @@ __PACKAGE__->register_method({
$lock_and_reload->($newid, sub {
my $conf = shift;
	    my $rootdir = PVE::LXC::mount_all($newid, $storecfg, $conf, 1);
+
eval {
+   PVE::LXC::create_ifaces_ipams_ips($conf, $vmid);
my $lxc_setup = PVE::LXC::Setup->new($conf, $rootdir);
$lxc_setup->post_clone_hook($conf);
};
@@ -1850,7 +1852,7 @@ __PACKAGE__->register_method({
warn $@ if $@;
 
if ($err) {
-   # Now cleanup the config & disks:
+   # Now cleanup the config & disks & ipam:
	    sleep 1; # some storages like rbd need to wait before release volume - really?
 
foreach my $volid (@$newvollist) {
@@ -1860,6 +1862,8 @@ __PACKAGE__->register_method({
 
eval {
$lock_and_reload->($newid, sub {
+   my $conf = shift;
+   PVE::LXC::delete_ifaces_ipams_ips($conf, $newid);
PVE::LXC::Config->destroy_config($newid);
PVE::Firewall::remove_vmfw_conf($newid);
});
-- 
2.39.2





[pve-devel] [PATCH v4 pve-container 29/33] vm_create|restore: create ips in ipam

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

also delete ips on create failure

Co-Authored-by: Stefan Hanreich 
Signed-off-by: Alexandre Derumier 
---
 src/PVE/API2/LXC.pm |  4 
 src/PVE/LXC.pm  | 13 +
 2 files changed, 17 insertions(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 8839105..e15de28 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -475,9 +475,11 @@ __PACKAGE__->register_method({
if ($restore) {
print "merging backed-up and given configuration..\n";
		PVE::LXC::Create::restore_configuration($vmid, $storage_cfg, $archive, $rootdir, $conf, !$is_root, $unique, $skip_fw_config_restore);
+		PVE::LXC::create_ifaces_ipams_ips($conf, $vmid) if $unique;
my $lxc_setup = PVE::LXC::Setup->new($conf, $rootdir);
$lxc_setup->template_fixup($conf);
} else {
+   PVE::LXC::create_ifaces_ipams_ips($conf, $vmid);
		my $lxc_setup = PVE::LXC::Setup->new($conf, $rootdir); # detect OS
		PVE::LXC::Config->write_config($vmid, $conf); # safe config (after OS detection)
$lxc_setup->post_create_hook($password, $ssh_keys);
@@ -503,6 +505,8 @@ __PACKAGE__->register_method({
PVE::LXC::Config->write_config($vmid, $conf);
};
if (my $err = $@) {
+   eval { PVE::LXC::delete_ifaces_ipams_ips($conf, $vmid) };
+   warn $@ if $@;
PVE::LXC::destroy_disks($storage_cfg, $vollist);
if ($destroy_config_on_error) {
eval { PVE::LXC::Config->destroy_config($vmid) };
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 4472e0f..2dad83d 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2758,6 +2758,19 @@ sub thaw($) {
 }
 }
 
+sub create_ifaces_ipams_ips {
+my ($conf, $vmid) = @_;
+
+return if !$have_sdn;
+
+for my $opt (keys %$conf) {
+   next if $opt !~ m/^net(\d+)$/;
	my $net = PVE::LXC::Config->parse_lxc_network($conf->{$opt});
+   next if $net->{type} ne 'veth';
+	PVE::Network::SDN::Vnets::add_next_free_cidr($net->{bridge}, $conf->{hostname}, $net->{hwaddr}, $vmid, undef, 1);
+}
+}
+
 sub delete_ifaces_ipams_ips {
 my ($conf, $vmid) = @_;
 
-- 
2.39.2





[pve-devel] [PATCH v4 qemu-server 21/33] vmnic add|remove : add|del ip in ipam

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Co-Authored-by: Stefan Lendl 
Signed-off-by: Stefan Hanreich 
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 40 
 1 file changed, 40 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c465fb6..5f15c66 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -64,6 +64,7 @@ use PVE::QemuServer::USB;
 my $have_sdn;
 eval {
 require PVE::Network::SDN::Zones;
+require PVE::Network::SDN::Vnets;
 $have_sdn = 1;
 };
 
@@ -4998,6 +4999,10 @@ sub vmconfig_hotplug_pending {
} elsif ($opt =~ m/^net(\d+)$/) {
die "skip\n" if !$hotplug_features->{network};
vm_deviceunplug($vmid, $conf, $opt);
+   if($have_sdn) {
+   my $net = PVE::QemuServer::parse_net($conf->{$opt});
+	    PVE::Network::SDN::Vnets::del_ips_from_mac($net->{bridge}, $net->{macaddr}, $conf->{name});
+   }
} elsif (is_valid_drivename($opt)) {
	    die "skip\n" if !$hotplug_features->{disk} || $opt =~ m/(ide|sata)(\d+)/;
vm_deviceunplug($vmid, $conf, $opt);
@@ -5203,6 +5208,12 @@ sub vmconfig_apply_pending {
die "internal error";
} elsif (defined($conf->{$opt}) && is_valid_drivename($opt)) {
	    vmconfig_delete_or_detach_drive($vmid, $storecfg, $conf, $opt, $force);
+   } elsif (defined($conf->{$opt}) && $opt =~ m/^net\d+$/) {
+   if($have_sdn) {
+   my $net = PVE::QemuServer::parse_net($conf->{$opt});
+	    eval { PVE::Network::SDN::Vnets::del_ips_from_mac($net->{bridge}, $net->{macaddr}, $conf->{name}) };
+	    warn $@ if $@;
+   }
}
};
if (my $err = $@) {
@@ -5222,6 +5233,20 @@ sub vmconfig_apply_pending {
eval {
if (defined($conf->{$opt}) && is_valid_drivename($opt)) {
	    vmconfig_register_unused_drive($storecfg, $vmid, $conf, parse_drive($opt, $conf->{$opt}))
+   } elsif (defined($conf->{pending}->{$opt}) && $opt =~ m/^net\d+$/) {
+   if($have_sdn) {
+		my $new_net = PVE::QemuServer::parse_net($conf->{pending}->{$opt});
+   if ($conf->{$opt}){
+   my $old_net = PVE::QemuServer::parse_net($conf->{$opt});
+
+   if ($old_net->{bridge} ne $new_net->{bridge} ||
+   $old_net->{macaddr} ne $new_net->{macaddr}) {
+			PVE::Network::SDN::Vnets::del_ips_from_mac($old_net->{bridge}, $old_net->{macaddr}, $conf->{name});
+   }
+  }
+  #fixme: reuse ip if mac change && same bridge
+		PVE::Network::SDN::Vnets::add_next_free_cidr($new_net->{bridge}, $conf->{name}, $new_net->{macaddr}, $vmid, undef, 1);
+   }
}
};
if (my $err = $@) {
@@ -5265,6 +5290,11 @@ sub vmconfig_update_net {
 # for non online change, we try to hot-unplug
die "skip\n" if !$hotplug;
vm_deviceunplug($vmid, $conf, $opt);
+
+   if($have_sdn) {
+	    PVE::Network::SDN::Vnets::del_ips_from_mac($oldnet->{bridge}, $oldnet->{macaddr}, $conf->{name});
+   }
+
} else {
 
die "internal error" if $opt !~ m/net(\d+)/;
@@ -5276,6 +5306,13 @@ sub vmconfig_update_net {
safe_num_ne($oldnet->{firewall}, $newnet->{firewall})) {
PVE::Network::tap_unplug($iface);
 
+   if (safe_string_ne($oldnet->{bridge}, $newnet->{bridge})) {
+   if ($have_sdn) {
+		PVE::Network::SDN::Vnets::del_ips_from_mac($oldnet->{bridge}, $oldnet->{macaddr}, $conf->{name});
+		PVE::Network::SDN::Vnets::add_next_free_cidr($newnet->{bridge}, $conf->{name}, $newnet->{macaddr}, $vmid, undef, 1);
+   }
+   }
+
if ($have_sdn) {
		PVE::Network::SDN::Zones::tap_plug($iface, $newnet->{bridge}, $newnet->{tag}, $newnet->{firewall}, $newnet->{trunks}, $newnet->{rate});
} else {
@@ -5296,6 +5333,9 @@ sub vmconfig_update_net {
 }
 
 if ($hotplug) {
+   if ($have_sdn) {
+	    PVE::Network::SDN::Vnets::add_next_free_cidr($newnet->{bridge}, $conf->{name}, $newnet->{macaddr}, $vmid, undef, 1);
+   }
	vm_deviceplug($storecfg, $conf, $vmid, $opt, $newnet, $arch, $machine_type);
 } else {
die "skip\n";
-- 
2.39.2





[pve-devel] [PATCH v4 qemu-server 24/33] vm_destroy: delete ip from ipam

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Co-Authored-By: Stefan Hanreich 
Signed-off-by: Stefan Hanreich 
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 17 +
 1 file changed, 17 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b92743c..b4cb741 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2341,6 +2341,9 @@ sub destroy_vm {
});
 }
 
+eval { delete_ifaces_ipams_ips($conf, $vmid)};
+warn $@ if $@;
+
 if (defined $replacement_conf) {
PVE::QemuConfig->write_config($vmid, $replacement_conf);
 } else {
@@ -8643,4 +8646,18 @@ sub create_ifaces_ipams_ips {
 }
 }
 
+sub delete_ifaces_ipams_ips {
+my ($conf, $vmid) = @_;
+
+return if !$have_sdn;
+
+foreach my $opt (keys %$conf) {
+   if ($opt =~ m/^net(\d+)$/) {
+   my $net = PVE::QemuServer::parse_net($conf->{$opt});
+	    eval { PVE::Network::SDN::Vnets::del_ips_from_mac($net->{bridge}, $net->{macaddr}, $conf->{name}) };
+   warn $@ if $@;
+   }
+}
+}
+
 1;
-- 
2.39.2





[pve-devel] [PATCH v4 qemu-server 22/33] vm_start : vm-network-scripts: add_dhcp_reservation

2023-11-17 Thread Stefan Hanreich
From: Alexandre Derumier 

Signed-off-by: Stefan Hanreich 
Signed-off-by: Alexandre Derumier 
---
 vm-network-scripts/pve-bridge | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/vm-network-scripts/pve-bridge b/vm-network-scripts/pve-bridge
index d37ce33..e8f8798 100755
--- a/vm-network-scripts/pve-bridge
+++ b/vm-network-scripts/pve-bridge
@@ -10,6 +10,7 @@ use PVE::Network;
 my $have_sdn;
 eval {
 require PVE::Network::SDN::Zones;
+require PVE::Network::SDN::Vnets;
 $have_sdn = 1;
 };
 
@@ -44,6 +45,7 @@ my $net = PVE::QemuServer::parse_net($netconf);
 die "unable to parse network config '$netid'\n" if !$net;
 
 if ($have_sdn) {
+PVE::Network::SDN::Vnets::add_dhcp_mapping($net->{bridge}, $net->{macaddr});
 PVE::Network::SDN::Zones::tap_create($iface, $net->{bridge});
 PVE::Network::SDN::Zones::tap_plug($iface, $net->{bridge}, $net->{tag}, $net->{firewall}, $net->{trunks}, $net->{rate});
 } else {
-- 
2.39.2





[pve-devel] [PATCH v4 pve-network 08/33] sdn: dhcp: add dnsmasq plugin

2023-11-17 Thread Stefan Hanreich
The plugin creates a dnsmasq@<zone> service that spawns a dnsmasq
instance that handles DHCP for that zone.

The configuration files for a dnsmasq instance lie within
/etc/dnsmasq.d/<zone>

The plugin generates the following configuration files:
* 00-default.conf
  Contains the default global configuration for dnsmasq. Disables DNS,
  enables some specific options for Windows, does some
  security-related configuration and makes dnsmasq bind only to the
  interfaces it is responsible for
* 10-<subnet_id>.conf
  Contains the default settings for a subnet. Sets dhcp options for
  DNS and gateway.
* 10-<subnet_id>.ranges.conf
  Contains the configuration of DHCP ranges for a subnet.
* ethers
  Contains the MAC address to IP mappings for a zone.

Every subnet gets assigned a tag in the dnsmasq configuration that is
equal to the subnet_id. This can be used to override / set additional
configuration options on a per-subnet basis.
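
As an illustration (not part of this patch), an admin-supplied dnsmasq snippet
could use such a tag to set an extra option for one subnet only; the tag and
option value here are invented:

----
dhcp-option=tag:myzone-10.0.1.0-24,option:ntp-server,10.0.1.1
----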

Additionally it creates the file /etc/default/dnsmasq.<zone> that
provides default options for the dnsmasq service.

Leases are stored in /var/lib/misc/dnsmasq.<zone>.leases.

Co-Authored-By: Alexandre Derumier 
Signed-off-by: Stefan Hanreich 
---
 debian/control  |   1 +
 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm | 198 
 2 files changed, 199 insertions(+)
 create mode 100644 src/PVE/Network/SDN/Dhcp/Dnsmasq.pm

diff --git a/debian/control b/debian/control
index 8b720c3..4424096 100644
--- a/debian/control
+++ b/debian/control
@@ -24,6 +24,7 @@ Depends: libpve-common-perl (>= 5.0-45),
  ${misc:Depends},
  ${perl:Depends},
 Recommends: frr-pythontools (>= 8.5.1~), ifupdown2
+Suggests: dnsmasq
 Description: Proxmox VE's SDN (Software Defined Network) stack
  This package contains the Software Defined Network (tech preview) for
  Proxmox VE.
diff --git a/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm 
b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
new file mode 100644
index 000..21a6ddd
--- /dev/null
+++ b/src/PVE/Network/SDN/Dhcp/Dnsmasq.pm
@@ -0,0 +1,198 @@
+package PVE::Network::SDN::Dhcp::Dnsmasq;
+
+use strict;
+use warnings;
+
+use base qw(PVE::Network::SDN::Dhcp::Plugin);
+
+use Net::IP qw(:PROC);
+use PVE::Tools qw(file_set_contents run_command lock_file);
+
+use File::Copy;
+
+my $DNSMASQ_CONFIG_ROOT = '/etc/dnsmasq.d';
+my $DNSMASQ_DEFAULT_ROOT = '/etc/default';
+my $DNSMASQ_LEASE_ROOT = '/var/lib/misc';
+
+sub type {
+return 'dnsmasq';
+}
+
+sub del_ip_mapping {
+my ($class, $dhcpid, $mac) = @_;
+
+my $ethers_file = "$DNSMASQ_CONFIG_ROOT/$dhcpid/ethers";
+my $ethers_tmp_file = "$ethers_file.tmp";
+
+my $removeFn = sub {
+	open(my $in, '<', $ethers_file) or die "Could not open file '$ethers_file' $!\n";
+	open(my $out, '>', $ethers_tmp_file) or die "Could not open file '$ethers_tmp_file' $!\n";
+
+while (my $line = <$in>) {
+   next if $line =~ m/^$mac/;
+   print $out $line;
+   }
+
+   close $in;
+   close $out;
+
+   move $ethers_tmp_file, $ethers_file;
+
+   chmod 0644, $ethers_file;
+};
+
+PVE::Tools::lock_file($ethers_file, 10, $removeFn);
+
+if ($@) {
+   warn "Unable to remove $mac from the dnsmasq configuration: $@\n";
+   return;
+}
+
+my $service_name = "dnsmasq\@$dhcpid";
+PVE::Tools::run_command(['systemctl', 'reload', $service_name]);
+}
+
+sub add_ip_mapping {
+my ($class, $dhcpid, $mac, $ip) = @_;
+
+my $ethers_file = "$DNSMASQ_CONFIG_ROOT/$dhcpid/ethers";
+my $ethers_tmp_file = "$ethers_file.tmp";
+
+my $appendFn = sub {
+	open(my $in, '<', $ethers_file) or die "Could not open file '$ethers_file' $!\n";
+	open(my $out, '>', $ethers_tmp_file) or die "Could not open file '$ethers_tmp_file' $!\n";
+
+while (my $line = <$in>) {
+   next if $line =~ m/^$mac/;
+   print $out $line;
+   }
+
+   print $out "$mac,$ip\n";
+   close $in;
+   close $out;
+   move $ethers_tmp_file, $ethers_file;
+   chmod 0644, $ethers_file;
+};
+
+PVE::Tools::lock_file($ethers_file, 10, $appendFn);
+
+if ($@) {
+   warn "Unable to add $mac/$ip to the dnsmasq configuration: $@\n";
+   return;
+}
+
+my $service_name = "dnsmasq\@$dhcpid";
+PVE::Tools::run_command(['systemctl', 'reload', $service_name]);
+}
+
+sub configure_subnet {
+my ($class, $dhcpid, $subnet_config) = @_;
+
+die "No gateway defined for subnet $subnet_config->{id}"
+   if !$subnet_config->{gateway};
+
+my $tag = $subnet_config->{id};
+
+my @dnsmasq_config = (
+   "listen-address=$subnet_config->{gateway}",
+);
+
+my $option_string;
+if (ip_is_ipv6($subnet_config->{network})) {
+   $option_string = 'option6';
+   push @dnsmasq_config, "enable-ra";
+} else {
+   $option_string = 'option';
+	push @dnsmasq_config, "dhcp-option=tag:$tag,$option_string:router,$subnet_config->{gateway}";
+}
+
+push @dnsmasq_config, 

[pve-devel] [PATCH v4 pve-network 09/33] sdn: dhcp: add helper for creating DHCP leases

2023-11-17 Thread Stefan Hanreich
This helper can be used to create DHCP entries for a specific zone. It
is used by the API to create DHCP leases for VMs/CTs.
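
Conceptually, a caller uses the helper like this; this is a sketch only, since
it needs a Proxmox VE node with the SDN packages and is not runnable
standalone:

----
use PVE::Network::SDN::Dhcp;

# $vnetid and $mac come from the guest's parsed network config
PVE::Network::SDN::Dhcp::add_mapping($vnetid, $mac, $ip4, $ip6);

# on NIC removal or guest destruction
PVE::Network::SDN::Dhcp::remove_mapping($vnetid, $mac);
----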

Co-Authored-By: Alexandre Derumier 
Signed-off-by: Stefan Hanreich 
---
 src/PVE/Network/SDN/Dhcp.pm  | 115 +++
 src/PVE/Network/SDN/Makefile |   2 +-
 2 files changed, 116 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/Network/SDN/Dhcp.pm

diff --git a/src/PVE/Network/SDN/Dhcp.pm b/src/PVE/Network/SDN/Dhcp.pm
new file mode 100644
index 000..a05b441
--- /dev/null
+++ b/src/PVE/Network/SDN/Dhcp.pm
@@ -0,0 +1,115 @@
+package PVE::Network::SDN::Dhcp;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_read_file);
+
+use PVE::Network::SDN;
+use PVE::Network::SDN::SubnetPlugin;
+use PVE::Network::SDN::Dhcp qw(config);
+use PVE::Network::SDN::Subnets qw(sdn_subnets_config config get_dhcp_ranges);
+use PVE::Network::SDN::Dhcp::Plugin;
+use PVE::Network::SDN::Dhcp::Dnsmasq;
+
+use PVE::INotify qw(nodename);
+
+PVE::Network::SDN::Dhcp::Plugin->init();
+
+PVE::Network::SDN::Dhcp::Dnsmasq->register();
+PVE::Network::SDN::Dhcp::Dnsmasq->init();
+
+sub add_mapping {
+my ($vnetid, $mac, $ip4, $ip6) = @_;
+
+my $vnet = PVE::Network::SDN::Vnets::get_vnet($vnetid);
+return if !$vnet;
+
+my $zoneid = $vnet->{zone};
+my $zone = PVE::Network::SDN::Zones::get_zone($zoneid);
+
+return if !$zone->{ipam} || !$zone->{dhcp};
+
+my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($zone->{dhcp});
+$dhcp_plugin->add_ip_mapping($zoneid, $mac, $ip4) if $ip4;
+$dhcp_plugin->add_ip_mapping($zoneid, $mac, $ip6) if $ip6;
+}
+
+sub remove_mapping {
+my ($vnetid, $mac) = @_;
+
+my $vnet = PVE::Network::SDN::Vnets::get_vnet($vnetid);
+return if !$vnet;
+
+my $zoneid = $vnet->{zone};
+my $zone = PVE::Network::SDN::Zones::get_zone($zoneid);
+
+return if !$zone->{ipam} || !$zone->{dhcp};
+
+my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($zone->{dhcp});
+$dhcp_plugin->del_ip_mapping($zoneid, $mac);
+}
+
+sub regenerate_config {
+my ($reload) = @_;
+
+my $cfg = PVE::Network::SDN::running_config();
+
+my $zone_cfg = $cfg->{zones};
+my $subnet_cfg = $cfg->{subnets};
+return if !$zone_cfg && !$subnet_cfg;
+
+my $nodename = PVE::INotify::nodename();
+
+my $plugins = PVE::Network::SDN::Dhcp::Plugin->lookup_types();
+
+foreach my $plugin_name (@$plugins) {
+   my $plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($plugin_name);
+   eval { $plugin->before_regenerate() };
+	die "Could not run before_regenerate for DHCP plugin $plugin_name $@\n" if $@;
+}
+
+foreach my $zoneid (sort keys %{$zone_cfg->{ids}}) {
+my $zone = $zone_cfg->{ids}->{$zoneid};
+next if !$zone->{dhcp};
+
+   my $dhcp_plugin_name = $zone->{dhcp};
+	my $dhcp_plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($dhcp_plugin_name);
+
+   die "Could not find DHCP plugin: $dhcp_plugin_name" if !$dhcp_plugin;
+
+   eval { $dhcp_plugin->before_configure($zoneid) };
+   die "Could not run before_configure for DHCP server $zoneid $@\n" if $@;
+
+
+   foreach my $subnet_id (keys %{$subnet_cfg->{ids}}) {
+	    my $subnet_config = PVE::Network::SDN::Subnets::sdn_subnets_config($subnet_cfg, $subnet_id);
+	    my $dhcp_ranges = PVE::Network::SDN::Subnets::get_dhcp_ranges($subnet_config);
+
+   my ($zone, $subnet_network, $subnet_mask) = split(/-/, $subnet_id);
+   next if $zone ne $zoneid;
+   next if !$dhcp_ranges;
+
+   eval { $dhcp_plugin->configure_subnet($zoneid, $subnet_config) };
+   warn "Could not configure subnet $subnet_id: $@\n" if $@;
+
+   foreach my $dhcp_range (@$dhcp_ranges) {
+		eval { $dhcp_plugin->configure_range($zoneid, $subnet_config, $dhcp_range) };
+		warn "Could not configure DHCP range for $subnet_id: $@\n" if $@;
+   }
+   }
+
+   eval { $dhcp_plugin->after_configure($zoneid) };
+   warn "Could not run after_configure for DHCP server $zoneid $@\n" if $@;
+
+}
+
+foreach my $plugin_name (@$plugins) {
+   my $plugin = PVE::Network::SDN::Dhcp::Plugin->lookup($plugin_name);
+
+   eval { $plugin->after_regenerate() };
+	warn "Could not run after_regenerate for DHCP plugin $plugin_name $@\n" if $@;
+}
+}
+
+1;
diff --git a/src/PVE/Network/SDN/Makefile b/src/PVE/Network/SDN/Makefile
index 848f7d4..3e6e5fb 100644
--- a/src/PVE/Network/SDN/Makefile
+++ b/src/PVE/Network/SDN/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Vnets.pm VnetPlugin.pm Zones.pm Controllers.pm Subnets.pm SubnetPlugin.pm Ipams.pm Dns.pm
+SOURCES=Vnets.pm VnetPlugin.pm Zones.pm Controllers.pm Subnets.pm SubnetPlugin.pm Ipams.pm Dns.pm Dhcp.pm
 
 
 PERL5DIR=${DESTDIR}/usr/share/perl5
-- 
2.39.2

