[pve-devel] [PATCH dab-pve-appliances] fix #4858: install libsasl2-modules for pmg

2024-03-01 Thread Stoiko Ivanov
the issue was already resolved for installations from the ISO (a short time
after PMG 8.0 was released), but I forgot to adapt the container template.

Signed-off-by: Stoiko Ivanov 
---
quickly tested by building a template and checking dpkg -l

debian-12-bookworm-pmg-8-64/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/debian-12-bookworm-pmg-8-64/Makefile b/debian-12-bookworm-pmg-8-64/Makefile
index 2fb1ddb..ab590bc 100644
--- a/debian-12-bookworm-pmg-8-64/Makefile
+++ b/debian-12-bookworm-pmg-8-64/Makefile
@@ -9,7 +9,7 @@ all: info/init_ok ${CVD_FILES}
cp systemd-presets ${BASEDIR}/etc/systemd/system-preset/00-pve-template.preset
touch ${BASEDIR}/proxmox_install_mode
dab install libdbi-perl perl-openssl-defaults libcgi-pm-perl proxmox-mailgateway-container gpg ifupdown2
-   dab install antiword docx2txt odt2txt poppler-utils tesseract-ocr unrtf
+   dab install antiword docx2txt odt2txt poppler-utils tesseract-ocr unrtf libsasl2-modules
rm ${BASEDIR}/proxmox_install_mode
sed -i '/^deb.*\.proxmox\.com\/.*$$/d;$${/^$$/d;}' ${BASEDIR}/etc/apt/sources.list
cp ${CVD_FILES} ${BASEDIR}/var/lib/clamav/
-- 
2.39.2






Re: [pve-devel] [PATCH storage 1/1] storage/plugins: pass scfg to parse_volname

2024-03-01 Thread Roland Kammerer via pve-devel
--- Begin Message ---
On Fri, Mar 01, 2024 at 10:45:37AM +0100, Dietmar Maurer wrote:
> 
> > On 29.2.2024 16:09 CET Roland Kammerer via pve-devel wrote:
> > All in all, yes, this is specific for our use case, otherwise
> > parse_volname would already have that additional parameter as all the
> > other plugin functions, but I don't see where this would hurt existing
> > code, and it certainly helps us to enable reassigning disks to VMs.
> > Passing in a param all other functions already get access to also does
> > not sound too terrible to me.
> > 
> > If there are further questions please feel free to ask.
> 
> Are you aware that parse_volname() is sometimes called for
> all volumes, i.e. inside volume_is_base_and_used()?
> 
> Would that be fast enough? IMHO it's a bad idea to make a network query
> for each volume there...

Thanks for mentioning that, I saw that in my tests as well. We already
have infrastructure for persistent caches on the local file system in
the plugin for status(), which is also called often on all nodes and
which would otherwise hammer down the central LINSTOR controller with
too many requests.

From what I saw, the calls to parse_volname() usually happen in bursts;
my current implementation uses our existing cache infrastructure to
implement a "burst cache" valid for some seconds. I'd assume that is
then good enough, being one network request per burst.
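
For illustration, a minimal sketch of such a burst cache in Perl. The
helper fetch_volnames_from_controller() is a hypothetical stand-in for
the network round-trip, not the actual LINSTOR plugin code:

```
my $BURST_TTL = 5; # seconds the burst window stays open

my $cached = { ts => 0, data => undef };

sub cached_volnames {
    my ($scfg) = @_;

    # serve from cache while the burst window is still open
    return $cached->{data} if time() - $cached->{ts} < $BURST_TTL;

    # one network round-trip per burst (hypothetical helper)
    $cached->{data} = fetch_volnames_from_controller($scfg);
    $cached->{ts} = time();
    return $cached->{data};
}
```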

Regards, rck

--- End Message ---


[pve-devel] applied: [PATCH dab-pve-appliances] pmg: update to 8.1

2024-03-01 Thread Thomas Lamprecht
On 29/02/2024 at 12:40, Stoiko Ivanov wrote:
> Signed-off-by: Stoiko Ivanov 
> ---
> tested with the packages from our internal repository yesterday evening;
> all looked ok.
> 
>  debian-12-bookworm-pmg-8-64/dab.conf | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
>

applied, thanks!





Re: [pve-devel] [PATCH storage 1/1] storage/plugins: pass scfg to parse_volname

2024-03-01 Thread Dietmar Maurer


> On 29.2.2024 16:09 CET Roland Kammerer via pve-devel wrote:
> All in all, yes, this is specific for our use case, otherwise
> parse_volname would already have that additional parameter as all the
> other plugin functions, but I don't see where this would hurt existing
> code, and it certainly helps us to enable reassigning disks to VMs.
> Passing in a param all other functions already get access to also does
> not sound too terrible to me.
> 
> If there are further questions please feel free to ask.

Are you aware that parse_volname() is sometimes called for
all volumes, i.e. inside volume_is_base_and_used()?

Would that be fast enough? IMHO it's a bad idea to make a network query
for each volume there...





[pve-devel] [PATCH docs] storage: fix zfs over iscsi links

2024-03-01 Thread Dominik Csapak
The `_ZFS_over_iSCSI` wiki page is redirected to the legacy page
(for historical reasons), but we want to link to the reference docs
instead.

For the wiki, add the legacy link in a `See Also` section, so users can
still reach that page easily should they need to.

Signed-off-by: Dominik Csapak 
---
 pve-storage-zfs.adoc | 9 +
 pvesm.adoc   | 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/pve-storage-zfs.adoc b/pve-storage-zfs.adoc
index 6801873..c07f534 100644
--- a/pve-storage-zfs.adoc
+++ b/pve-storage-zfs.adoc
@@ -137,3 +137,12 @@ point of failure in your deployment.
 |Content types  |Image formats  |Shared |Snapshots |Clones
 |images |raw|yes|yes|no
 |==
+
+ifdef::wiki[]
+
+See Also
+
+
+* link:/wiki/Legacy:_ZFS_over_iSCSI[Legacy: ZFS over iSCSI]
+
+endif::wiki[]
diff --git a/pvesm.adoc b/pvesm.adoc
index ff4d352..7ae4063 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -420,7 +420,7 @@ See Also
 
 * link:/wiki/Storage:_ZFS[Storage: ZFS]
 
-* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
+* link:/wiki/Storage:_ZFS_over_ISCSI[Storage: ZFS over ISCSI]
 
 endif::wiki[]
 
-- 
2.30.2






[pve-devel] [PATCH qemu-server v9 5/7] fix #1027: virtio-fs support

2024-03-01 Thread Markus Frank
add support for sharing directories with a guest vm

virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon, it does not
run in the background), a double-fork is used.

virtiofsd should close itself together with qemu.
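
For illustration only (this is not the patch's actual implementation), a
double-fork in Perl looks roughly like this; the grandchild is reparented
to init, so no zombie is left behind. The virtiofsd path, flags and
example values are assumptions for the sketch:

```
use POSIX qw(setsid _exit);

my ($dir, $socket_path) = ('/mnt/share', '/run/virtiofsd.sock'); # example values

my $pid = fork() // die "fork failed: $!\n";
if ($pid == 0) {
    setsid(); # first child: detach from the controlling terminal
    my $pid2 = fork() // _exit(1);
    if ($pid2 == 0) {
        # grandchild: replace the process image with virtiofsd
        exec('/usr/libexec/virtiofsd',
            '--socket-path', $socket_path, '--shared-dir', $dir)
            or _exit(1);
    }
    _exit(0); # first child exits immediately, grandchild goes to init
}
waitpid($pid, 0); # parent reaps the first child, so no zombie remains
```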

There is the parameter dirid, as well as the optional parameters direct-io,
cache and writeback. Additionally, the xattr & acl parameters override the
directory mapping settings for xattr & acl.

The dirid gets mapped to the path on the current node and is also used as a
mount tag (name used to mount the device on the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,acl=1
virtiofs1: dirid=bar,cache=never,xattr=1,writeback=1
```

For information on the optional parameters, see the corresponding docs patch
and the official GitLab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Also add a permission check for virtiofs directory access.

Signed-off-by: Markus Frank 
---
 PVE/API2/Qemu.pm   |  39 ++-
 PVE/QemuServer.pm  |  19 +++-
 PVE/QemuServer/Makefile|   3 +-
 PVE/QemuServer/Memory.pm   |  34 --
 PVE/QemuServer/Virtiofs.pm | 212 +
 5 files changed, 296 insertions(+), 11 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index cdc8f7a..20edbfb 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -648,6 +648,32 @@ my sub check_vm_create_hostpci_perm {
 return 1;
 };
 
+my sub check_dir_perm {
+my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+return 1 if $authuser eq 'root@pam';
+
+$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+$rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+return 1;
+};
+
+my sub check_vm_create_dir_perm {
+my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+return 1 if $authuser eq 'root@pam';
+
+for my $opt (keys %{$param}) {
+   next if $opt !~ m/^virtiofs\d+$/;
+   check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+}
+
+return 1;
+};
+
 my $check_vm_modify_config_perm = sub {
 my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -658,7 +684,7 @@ my $check_vm_modify_config_perm = sub {
# else, as there the permission can be value dependend
next if PVE::QemuServer::is_valid_drivename($opt);
next if $opt eq 'cdrom';
-   next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+   next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
next if $opt eq 'tags';
 
 
@@ -954,6 +980,7 @@ __PACKAGE__->register_method({
&$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
+   check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
&$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -1828,6 +1855,10 @@ my $update_vm_api  = sub {
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
PVE::QemuConfig->write_config($vmid, $conf);
+   } elsif ($opt =~ m/^virtiofs\d$/) {
+   check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+   PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+   PVE::QemuConfig->write_config($vmid, $conf);
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
delete $conf->{$opt};
@@ -1914,6 +1945,12 @@ my $update_vm_api  = sub {
}
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
$conf->{pending}->{$opt} = $param->{$opt};
+   } elsif ($opt =~ m/^virtiofs\d$/) {
+   if (my $oldvalue = $conf->{$opt}) {
+   check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+   }
+   check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+   $conf->{pending}->{$opt} = $param->{$opt};
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
$conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b45507a..06182bf 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -34,6 +34,7 

[pve-devel] [PATCH manager v9 7/7] api: add resource map api endpoints for directories

2024-03-01 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 PVE/API2/Cluster/Mapping.pm   |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 317 ++
 PVE/API2/Cluster/Mapping/Makefile |   1 +
 3 files changed, 325 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..9f0dcd2b 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -3,11 +3,17 @@ package PVE::API2::Cluster::Mapping;
 use strict;
 use warnings;
 
+use PVE::API2::Cluster::Mapping::Dir;
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
 
 use base qw(PVE::RESTHandler);
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Cluster::Mapping::Dir",
+path => 'dir',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::Cluster::Mapping::PCI",
 path => 'pci',
@@ -41,6 +47,7 @@ __PACKAGE__->register_method ({
my ($param) = @_;
 
my $result = [
+   { name => 'dir' },
{ name => 'pci' },
{ name => 'usb' },
];
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index ..ddb6977d
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,317 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::Mapping::Dir ();
+use PVE::RPCEnvironment;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+name => 'index',
+path => '',
+method => 'GET',
+# only proxy if we give the 'check-node' parameter
+proxyto_callback => sub {
+   my ($rpcenv, $proxyto, $param) = @_;
+   return $param->{'check-node'} // 'localhost';
+},
+description => "List directory mapping",
+permissions => {
+   description => "Only lists entries where you have 'Mapping.Modify', 
'Mapping.Use' or".
+   " 'Mapping.Audit' permissions on '/mapping/dir/'.",
+   user => 'all',
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   'check-node' => get_standard_option('pve-node', {
+   description => "If given, checks the configurations on the 
given node for ".
+   "correctness, and adds relevant diagnostics for the 
directory to the response.",
+   optional => 1,
+   }),
+   },
+},
+returns => {
+   type => 'array',
+   items => {
+   type => "object",
+   properties => {
+   id => {
+   type => 'string',
+   description => "The logical ID of the mapping."
+   },
+   map => {
+   type => 'array',
+   description => "The entries of the mapping.",
+   items => {
+   type => 'string',
+   description => "A mapping for a node.",
+   },
+   },
+   description => {
+   type => 'string',
+   description => "A description of the logical mapping.",
+   },
+   xattr => {
+   type => 'boolean',
+   description => "Enable support for extended attributes.",
+   optional => 1,
+   },
+   acl => {
+   type => 'boolean',
+   description => "Enable support for posix ACLs (implies 
--xattr).",
+   optional => 1,
+   },
+   checks => {
+   type => "array",
+   optional => 1,
+   description => "A list of checks, only present if 
'check-node' is set.",
+   items => {
+   type => 'object',
+   properties => {
+   severity => {
+   type => "string",
+   enum => ['warning', 'error'],
+   description => "The severity of the error",
+   },
+   message => {
+   type => "string",
+   description => "The message of the error",
+   },
+   },
+   }
+   },
+   },
+   },
+   links => [ { rel => 'child', href => "{id}" } ],
+},
+code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   my $check_node = $param->{'check-node'};
+   my $local_node = PVE::INotify::nodename();
+
+   die "wrong node to check - $check_node != $local_node\n"
+   if defined($check_node) && 

[pve-devel] [PATCH qemu-server v9 6/7] migration: check_local_resources for virtiofs

2024-03-01 Thread Markus Frank
add dir mapping checks to check_local_resources

Since the VM needs to be powered off for migration, migration should
work with a directory on shared storage with any caching setting.

Signed-off-by: Markus Frank 
---
 PVE/QemuServer.pm| 10 +-
 test/MigrationTest/Shared.pm |  7 +++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 06182bf..516410b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2575,6 +2575,7 @@ sub check_local_resources {
 my $nodelist = PVE::Cluster::get_nodelist();
 my $pci_map = PVE::Mapping::PCI::config();
 my $usb_map = PVE::Mapping::USB::config();
+my $dir_map = PVE::Mapping::Dir::config();
 
 my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2586,6 +2587,8 @@ sub check_local_resources {
$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
} elsif ($type eq 'usb') {
$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+   } elsif ($type eq 'dir') {
+   $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
}
if (!scalar($entry->@*)) {
push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2614,9 +2617,14 @@ sub check_local_resources {
push @$mapped_res, $k;
}
}
+   if ($k =~ m/^virtiofs/) {
+   my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+   $add_missing_mapping->('dir', $k, $entry->{dirid});
+   push @$mapped_res, $k;
+   }
# sockets are safe: they will recreated be on the target side post-migrate
next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-   push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+   push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
 }
 
 die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d..c5d0722 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
 },
 );
 
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+config => sub {
+   return {};
+},
+);
+
 our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
 $ha_config_module->mock(
 vm_is_ha_managed => sub {
-- 
2.39.2






[pve-devel] [PATCH guest-common v9 2/7] add dir mapping section config

2024-03-01 Thread Markus Frank
Adds a config file for directories by using a 'map' property string for
each node mapping.

Besides node & path, there is an optional submounts parameter in the
map property string that is used to announce other mounted file systems
in the specified directory.

Additionally, there are the default settings for xattr & acl.

example config:
```
some-dir-id
map node=node1,path=/mnt/share/,submounts=1
map node=node2,path=/mnt/share/,
xattr 1
acl 1
```
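
For reference, such a map entry can be parsed with
PVE::JSONSchema::parse_property_string. A minimal sketch using a
simplified format (the full schema is the $map_fmt defined in this
patch):

```
use PVE::JSONSchema;

# simplified stand-in for this patch's $map_fmt
my $fmt = {
    node      => { type => 'string' },
    path      => { type => 'string' },
    submounts => { type => 'boolean', optional => 1, default => 0 },
};

my $entry = PVE::JSONSchema::parse_property_string(
    $fmt, 'node=node1,path=/mnt/share/,submounts=1');

# $entry now holds { node => 'node1', path => '/mnt/share/', submounts => 1 }
```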

Signed-off-by: Markus Frank 
---
 src/Makefile   |   1 +
 src/PVE/Mapping/Dir.pm | 205 +
 2 files changed, 206 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..030e7f7 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -15,6 +15,7 @@ install: PVE
install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
install -d ${PERL5DIR}/PVE/Mapping
+   install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
install -d ${PERL5DIR}/PVE/VZDump
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 000..8f131c2
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,205 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+use PVE::Storage::Plugin;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+  sub { __PACKAGE__->parse_config(@_); },
+  sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+my ($class, $line) = @_;
+
+if ($line =~ m/^(\S+)\s*$/) {
+   my $id = $1;
+   my $errmsg = undef; # set if you want to skip whole section
+   eval { PVE::JSONSchema::pve_verify_configid($id) };
+   $errmsg = $@ if $@;
+   my $config = {}; # to return additional attributes
+   return ('dir', $id, $errmsg, $config);
+}
+return undef;
+}
+
+sub format_section_header {
+my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+return "$sectionId\n";
+}
+
+sub type {
+return 'dir';
+}
+
+my $map_fmt = {
+node => get_standard_option('pve-node'),
+path => {
+   description => "Absolute directory path that should be shared with the 
guest.",
+   type => 'string',
+   format => 'pve-storage-path',
+},
+submounts => {
+   type => 'boolean',
+   description => "Announce that the directory contains other mounted"
+   ." file systems. If this is not set and multiple file systems are"
+   ." mounted, the guest may encounter duplicates due to file system"
+   ." specific inode IDs.",
+   optional => 1,
+   default => 0,
+},
+description => {
+   description => "Description of the node specific directory.",
+   type => 'string',
+   optional => 1,
+   maxLength => 4096,
+},
+};
+
+my $defaultData = {
+propertyList => {
+   id => {
+   type => 'string',
+   description => "The ID of the directory",
+   format => 'pve-configid',
+   },
+   description => {
+   description => "Description of the directory",
+   type => 'string',
+   optional => 1,
+   maxLength => 4096,
+   },
+   map => {
+   type => 'array',
+   description => 'A list of maps for the cluster nodes.',
+   optional => 1,
+   items => {
+   type => 'string',
+   format => $map_fmt,
+   },
+   },
+   xattr => {
+   type => 'boolean',
+   description => "Enable support for extended attributes."
+   ." If not supported by Guest OS or file system, this option is"
+   ." simply ignored.",
+   optional => 1,
+   default => 0,
+   },
+   acl => {
+   type => 'boolean',
+   description => "Enable support for POSIX ACLs (implies --xattr)."
+   ." The guest OS has to support ACLs. When used in a directory"
+   ." with a file system without ACL support, the ACLs are 
ignored.",
+   optional => 1,
+   default => 0,
+   },
+},
+};
+
+sub private {
+return $defaultData;
+}
+
+sub map_fmt {
+return $map_fmt;
+}
+
+sub options {
+return {
+   description => { optional => 1 },
+   map => {},
+   xattr => { optional => 1 },
+   acl => { optional => 1 },
+};
+}
+
+sub assert_valid {
+my ($dir_cfg) = @_;
+
+my $path = $dir_cfg->{path};
+
+if (! -e 

[pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v9 0/7] virtiofs

2024-03-01 Thread Markus Frank
Virtio-fs is a shared file system that enables sharing a directory between host
and guest VM. It takes advantage of the locality of virtual machines and the
hypervisor to get a higher throughput than the 9p remote file system protocol.


build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager

I did not get virtiofsd to run with run_command without creating zombie
processes after shutdown.
So I replaced run_command with exec for now.
Maybe someone can find out why this happens.


cluster:

Markus Frank (1):
  add mapping/dir.cfg for resource mapping

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


guest-common:

v9:
* fixed wrong indentation
* changed parameter description
* added check_duplicate function to prevent multiple mappings for one node

v7:
* renamed DIR to Dir
* made xattr & acl settings per directory-id and not per node

Markus Frank (1):
  add dir mapping section config

 src/Makefile   |   1 +
 src/PVE/Mapping/Dir.pm | 205 +
 2 files changed, 206 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm


docs:

v9:
* corrected grammatical errors and capitalization

v8:
* added "Known Limitations"
* removed old mount tag

Markus Frank (1):
  add doc section for the shared filesystem virtio-fs

 qm.adoc | 94 +++--
 1 file changed, 92 insertions(+), 2 deletions(-)


qemu-server:

v9:
* moved virtiofs code to Virtiofs module
* combined "Permission check for virtiofs directory access" with 
 "feature #1027: virtio-fs support" patch
* separated debian/control change into its own patch

v8:
* changed permission checks to cover cloning and restoring and
 made the helper functions similar to the PCI, USB permission check functions.
* warn if acl is activated on a Windows VM, since the virtiofs device cannot
 be mounted on Windows if acl is on, and moved the dir config validation to
 its own function. This function is called in config_to_command so that no
 virtiofsd process is left running if qm start dies because a second
 virtiofs device was incorrectly configured.

v7:
* enabled use of hugepages
* renamed variables
* added acl & xattr parameters that overwrite the default directory
 mapping settings

v6:
* added virtiofsd dependency
* 2 new patches:
* Permission check for virtiofs directory access
* check_local_resources: virtiofs

v5:
* allow numa settings with virtio-fs
* added direct-io & cache settings
* changed to rust implementation of virtiofsd
* made a double fork and closed all file descriptors so that the lockfile
 gets released.

v3:
* created own socket and get file descriptor for virtiofsd
 so there is no race between starting virtiofsd & qemu
* added TODO to replace virtiofsd with rust implementation in bookworm
 (I packaged the rust implementation for bookworm & the C implementation
 in qemu will be removed in qemu 8.0)

v2:
* replaced sharedfiles_fmt path in qemu-server with dirid:
* user can use the dirid to specify the directory without requiring root access

Markus Frank (3):
  add virtiofsd as runtime dependency for qemu-server
  fix #1027: virtio-fs support
  migration: check_local_resources for virtiofs

 PVE/API2/Qemu.pm |  39 ++-
 PVE/QemuServer.pm|  29 -
 PVE/QemuServer/Makefile  |   3 +-
 PVE/QemuServer/Memory.pm |  34 --
 PVE/QemuServer/Virtiofs.pm   | 212 +++
 debian/control   |   1 +
 test/MigrationTest/Shared.pm |   7 ++
 7 files changed, 313 insertions(+), 12 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm


manager:

v9:
* changed API descriptions
* prevent multiple mappings for one node with PVE::Mapping::Dir::check_duplicate

v8:
* removed ui patches for now

Markus Frank (1):
  api: add resource map api endpoints for directories

 PVE/API2/Cluster/Mapping.pm   |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 317 ++
 PVE/API2/Cluster/Mapping/Makefile |   1 +
 3 files changed, 325 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

-- 
2.39.2






[pve-devel] [PATCH qemu-server v9 4/7] add virtiofsd as runtime dependency for qemu-server

2024-03-01 Thread Markus Frank
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 7d6f975..0fce1a8 100644
--- a/debian/control
+++ b/debian/control
@@ -55,6 +55,7 @@ Depends: dbus,
  socat,
  swtpm,
  swtpm-tools,
+ virtiofsd,
  ${misc:Depends},
  ${perl:Depends},
  ${shlibs:Depends},
-- 
2.39.2






[pve-devel] [PATCH cluster v9 1/7] add mapping/dir.cfg for resource mapping

2024-03-01 Thread Markus Frank
Add it to both the Perl side (PVE/Cluster.pm) and the pmxcfs side
(status.c).
This dir.cfg is used to map directory IDs to paths on selected hosts.

Signed-off-by: Markus Frank 
Reviewed-by: Fiona Ebner 
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index f899dbe..6b775f8 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -82,6 +82,7 @@ my $observed = {
 'sdn/.running-config' => 1,
 'virtual-guest/cpu-models.conf' => 1,
 'virtual-guest/profiles.cfg' => 1,
+'mapping/dir.cfg' => 1,
 'mapping/pci.cfg' => 1,
 'mapping/usb.cfg' => 1,
 };
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index dc44464..17cbf61 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -112,6 +112,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "virtual-guest/cpu-models.conf" },
{ .path = "virtual-guest/profiles.cfg" },
{ .path = "firewall/cluster.fw" },
+   { .path = "mapping/dir.cfg" },
{ .path = "mapping/pci.cfg" },
{ .path = "mapping/usb.cfg" },
 };
-- 
2.39.2






[pve-devel] [PATCH docs v9 3/7] add doc section for the shared filesystem virtio-fs

2024-03-01 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 qm.adoc | 94 +++--
 1 file changed, 92 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index fa6a772..fa1de72 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1056,6 +1056,95 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~
+
+Virtio-fs is a shared file system that enables sharing a directory between host
+and guest VM. It takes advantage of the locality of virtual machines and the
+hypervisor to get a higher throughput than the 9p remote file system protocol.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. In {pve}, this process starts immediately 
before
+the start of QEMU.
+
+Linux VMs with kernel >=5.4 support this feature by default.
+
+There is a guide available on how to utilize virtio-fs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Known Limitations
+^
+
+* Virtiofsd crashing means no recovery until VM is fully stopped and restarted.
+* Virtiofsd not responding may result in NFS-like hanging access in the VM.
+* Memory hotplug does not work in combination with virtio-fs (also results in
+hanging access).
+* Live migration does not work.
+* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
+otherwise the virtio-fs device will not be visible within the VMs.
+
+Add Mapping for Shared Directories
+^^
+
+To add a mapping for a shared directory, either use the API directly with
+`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
+
+
+pvesh create /cluster/mapping/dir --id dir1 \
+--map node=node1,path=/path/to/share1 \
+--map node=node2,path=/path/to/share2,submounts=1 \
+--xattr 1 \
+--acl 1
+
+
+The `acl` parameter automatically implies `xattr`, that is, it makes no
+difference whether you set `xattr` to `0` if `acl` is set to `1`.
+
+Set `submounts` to `1` when multiple file systems are mounted in a shared
+directory to prevent the guest from creating duplicates because of file system
+specific inode IDs that get passed through.
+
+
+Add virtio-fs to a VM
+^
+
+To share a directory using virtio-fs, add the parameter `virtiofs` (N can be
+anything between 0 and 9) to the VM config and use a directory ID (dirid) that
+has been configured in the resource mapping. Additionally, you can set the
+`cache` option to either `always`, `never`, or `auto` (default: `auto`),
+depending on your requirements. How the different caching modes behave can be
+read at https://lwn.net/Articles/774495/ under the title "Caching Modes". To
+enable writeback cache set `writeback` to `1`.
+
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
+parameter to `1` (default: `0`). This will degrade performance, but is useful if
+applications do their own caching.
+
+Additionally, it is possible to overwrite the default mapping settings for
+`xattr` and `acl` by setting them to either `1` or `0`. The `acl` parameter
+automatically implies `xattr`, that is, it makes no difference whether you set
+`xattr` to `0` if `acl` is set to `1`.
+
+
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,xattr=1
+qm set <vmid> -virtiofs2 <dirid>,acl=1,writeback=1
+
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
+following command inside the guest:
+
+
+mount -t virtiofs <mount tag> <mount point>
+
+
+The dirid associated with the path on the current node is also used as the mount
+tag (name used to mount the device on the guest).
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~
@@ -1662,8 +1751,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.2


