[pve-devel] [PATCH qemu-server] suspend: continue cleanup even if savevm-end QMP command fails

2024-05-14 Thread Fiona Ebner
The savevm-end command also fails when no snapshot operation was
started before. In particular, this is the case when savevm-start
failed early because of unmigratable devices.

Avoid potentially leaving an orphaned volume and snapshot-related
configuration keys around by continuing with cleanup instead.

Signed-off-by: Fiona Ebner 
---
 PVE/QemuServer.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9032d294..5df0c96d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6406,7 +6406,8 @@ sub vm_suspend {
if ($err) {
# cleanup, but leave suspending lock, to indicate something went wrong
eval {
-   mon_cmd($vmid, "savevm-end");
+   eval { mon_cmd($vmid, "savevm-end"); };
+   warn $@ if $@;
PVE::Storage::deactivate_volumes($storecfg, [$vmstate]);
PVE::Storage::vdisk_free($storecfg, $vmstate);
delete $conf->@{qw(vmstate runningmachine runningcpu)};
-- 
2.39.2






Re: [pve-devel] [PATCH qemu-server 1/1] snapshot: prohibit snapshot with ram if vm has a passthrough pci device

2024-05-14 Thread Fiona Ebner
On 14.05.24 at 15:03, Fiona Ebner wrote:
> That said, looking into this and wondering why QEMU doesn't check it,
> there's an issue in that our savevm-async code does not properly check
> for all migration blockers (only some of them)! I'll work out a patch
> for that. 

Well...you can't live-migrate with VMDK:

> VM 105 qmp command 'migrate' failed - The vmdk format used by node 
> '#block185' does not support live migration
This also means that improving the check for blockers for savevm-async
would prohibit suspend-to-disk for VMs with a VMDK image (snapshots are
already not supported on the storage layer).

From QEMU commit 5aaac46793 ("migration: savevm: consult migration
blockers"):

> There is really no difference between live migration and savevm, except
> that savevm does not require bdrv_invalidate_cache to be implemented
> by all disks.  However, it is unlikely that savevm is used with anything
> except qcow2 disks, so the penalty is small and worth the improvement
> in catching bad usage of savevm.

VMDK does not implement bdrv_co_invalidate_cache() and sets a migration
blocker, so the penalty would be prohibiting suspend-to-disk for them
:(. Note that other drivers we care about, i.e. RBD/iSCSI/file-posix all
do implement bdrv_co_invalidate_cache() and do not set a migration blocker.

Still, it seems dangerous to ignore other migration blockers, leading to
issues like the one motivating the patch. I'll see if filtering that
special blocker or introducing special handling is not too
difficult/hacky. Otherwise, I'm not sure if it'd be tolerable to break
suspend-to-disk with VMDK (maybe for an upcoming release)?
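
A minimal sketch of what consulting the blockers could look like (an
assumption, not the final patch; QEMU's `query-migrate` QMP command reports
registered blockers in its `blocked-reasons` field, and the filter regex
here is made up for illustration):

    my $info = mon_cmd($vmid, 'query-migrate');
    for my $reason (($info->{'blocked-reasons'} // [])->@*) {
	# tolerate the VMDK-style blocker discussed above, fail on the rest
	next if $reason =~ m/does not support live migration/;
	die "cannot savevm: migration blocker: $reason\n";
    }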





Re: [pve-devel] [PATCH qemu-server 1/1] snapshot: prohibit snapshot with ram if vm has a passthrough pci device

2024-05-14 Thread Fiona Ebner
On 12.04.24 at 11:32, Fabian Grünbichler wrote:
> On March 19, 2024 4:08 pm, Hannes Duerr wrote:
>> When a snapshot is created with RAM, qemu attempts to save not only the
>> RAM content, but also the internal state of the PCI devices.
>>
>> However, as not all drivers support this, this can lead to the device
>> drivers in the VM not being able to handle the saved state during the
>> restore/rollback and in conclusion the VM might crash. For this reason,
>> we now generally prohibit snapshots with RAM for VMs with passthrough
>> devices.
>>
>> In the future, this prohibition can of course be relaxed for individual
>> drivers that we know support it, such as the vfio driver
>>

We're already using vfio-pci, see [0]. So I'm not sure how that relaxation
would look. Probably it'd need to be a flag on the hostpci property,
similar to what's done in Dominik's "implement experimental vgpu live
migration" series for mapped devices.

That said, looking into this and wondering why QEMU doesn't check it,
there's an issue in that our savevm-async code does not properly check
for all migration blockers (only some of them)! I'll work out a patch
for that. If we can be sure not to break any existing users with the
below code, we can still apply it too of course.

>> Signed-off-by: Hannes Duerr 
>> ---
>>  PVE/API2/Qemu.pm | 10 ++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index 40b6c30..0acd1c7 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -5101,6 +5101,16 @@ __PACKAGE__->register_method({
>>  die "unable to use snapshot name 'pending' (reserved name)\n"
>>  if lc($snapname) eq 'pending';
>>  
>> +if ($param->{vmstate}) {
>> +my $conf = PVE::QemuConfig->load_config($vmid);
>> +
>> +for my $key (keys %$conf) {
>> +next if $key !~ /^hostpci\d+/;
>> +die "cannot snapshot VM with RAM due to passed-through PCI 
>> device(s), which lack"
>> +." the possibility to save/restore their internal state\n";
>> +}
>> +}
> 
> isn't the same also true of other local resources (e.g., passed-through
> USB?)?
> 
> maybe we could find a way to unify the checks we do for live migration
> (PVE::QemuServer::check_local_resources), since that is almost the same
> code inside Qemu as a stateful snapshot+rollback?
> 
> (not opposed to applying this before that happens though, just a
> question in general..)
> 

Similarly, there is the suspend API endpoint that could benefit from
having a single helper. I assume this code was copied from there.

[0]:
https://git.proxmox.com/?p=qemu-server.git;a=blob;f=PVE/QemuServer/PCI.pm;h=1673041bbe7a5d638a0ee9c56ea6bbb31027023b;hb=HEAD#l625
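
A rough sketch of what such a shared helper could look like (the name and
the USB condition are assumptions for illustration; the hostpci check
mirrors the patch quoted above):

    sub assert_internal_state_saveable {
	my ($conf, $operation) = @_;
	for my $key (keys %$conf) {
	    die "cannot $operation: passed-through PCI device '$key' cannot"
		." save/restore its internal state\n" if $key =~ m/^hostpci\d+$/;
	    die "cannot $operation: passed-through USB device '$key' cannot"
		." save/restore its internal state\n"
		if $key =~ m/^usb\d+$/ && $conf->{$key} =~ m/host=/;
	}
    }

Snapshot-with-RAM, suspend-to-disk and live migration could then all call
this one helper instead of duplicating the loop.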




[pve-devel] [PATCH manager v10 08/11] ui: add edit window for dir mappings

2024-05-14 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 www/manager6/Makefile |   1 +
 www/manager6/window/DirMapEdit.js | 222 ++
 2 files changed, 223 insertions(+)
 create mode 100644 www/manager6/window/DirMapEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 2c3a822b..f6140562 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -137,6 +137,7 @@ JSSRC= \
window/TreeSettingsEdit.js  \
window/PCIMapEdit.js\
window/USBMapEdit.js\
+   window/DirMapEdit.js\
window/GuestImport.js   \
ha/Fencing.js   \
ha/GroupEdit.js \
diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
new file mode 100644
index ..cda5824b
--- /dev/null
+++ b/www/manager6/window/DirMapEdit.js
@@ -0,0 +1,222 @@
+Ext.define('PVE.window.DirMapEditWindow', {
+extend: 'Proxmox.window.Edit',
+
+mixins: ['Proxmox.Mixin.CBind'],
+
+cbindData: function(initialConfig) {
+   let me = this;
+   me.isCreate = !me.name;
+   me.method = me.isCreate ? 'POST' : 'PUT';
+   me.hideMapping = !!me.entryOnly;
+   me.hideComment = me.name && !me.entryOnly;
+   me.hideNodeSelector = me.nodename || me.entryOnly;
+   me.hideNode = !me.nodename || !me.hideNodeSelector;
+   return {
+   name: me.name,
+   nodename: me.nodename,
+   };
+},
+
+submitUrl: function(_url, data) {
+   let me = this;
+   let name = me.isCreate ? '' : me.name;
+   return `/cluster/mapping/dir/${name}`;
+},
+
+title: gettext('Add Dir mapping'),
+
+onlineHelp: 'resource_mapping',
+
+method: 'POST',
+
+controller: {
+   xclass: 'Ext.app.ViewController',
+
+   onGetValues: function(values) {
+   let me = this;
+   let view = me.getView();
+   values.node ??= view.nodename;
+
+   let name = values.name;
+   let description = values.description;
+   let xattr = values.xattr;
+   let acl = values.acl;
+   let deletes = values.delete;
+
+   delete values.description;
+   delete values.name;
+   delete values.xattr;
+   delete values.acl;
+
+   let map = [];
+   if (me.originalMap) {
+	    map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+   }
+   if (values.path) {
+   map.push(PVE.Parser.printPropertyString(values));
+   }
+
+   values = { map };
+   if (description) {
+   values.description = description;
+   }
+   if (xattr) {
+   values.xattr = xattr;
+   }
+   if (acl) {
+   values.acl = acl;
+   }
+   if (deletes) {
+   values.delete = deletes;
+   }
+
+   if (view.isCreate) {
+   values.id = name;
+   }
+   return values;
+   },
+
+   onSetValues: function(values) {
+   let me = this;
+   let view = me.getView();
+   me.originalMap = [...values.map];
+   let configuredNodes = [];
+   PVE.Parser.filterPropertyStringList(values.map, (e) => {
+   configuredNodes.push(e.node);
+   if (e.node === view.nodename) {
+   values = e;
+   }
+   return false;
+   });
+
+   me.lookup('nodeselector').disallowedNodes = configuredNodes;
+
+   return values;
+   },
+
+   init: function(view) {
+   let me = this;
+
+   if (!view.nodename) {
+   //throw "no nodename given";
+   }
+   },
+},
+
+items: [
+   {
+   xtype: 'inputpanel',
+   onGetValues: function(values) {
+   return this.up('window').getController().onGetValues(values);
+   },
+
+   onSetValues: function(values) {
+   return this.up('window').getController().onSetValues(values);
+   },
+
+   columnT: [
+   {
+   xtype: 'displayfield',
+   reference: 'directory-hint',
+   columnWidth: 1,
+   value: 'Make sure the directory exists.',
+   cbind: {
+   disabled: '{hideMapping}',
+   hidden: '{hideMapping}',
+   },
+   userCls: 'pmx-hint',
+   },
+   ],
+
+   column1: [
+   {
+   xtype: 'pmxDisplayEditField',
+   fieldLabel: gettext('Name'),
+   cbind: {
+   editable: '{!name}',
+ 

[pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs

2024-05-14 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 qm.adoc | 94 +++--
 1 file changed, 92 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index 42c26db..755e20e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1081,6 +1081,95 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared file system that enables sharing a directory between host
+and guest VM. It takes advantage of the locality of virtual machines and the
+hypervisor to get a higher throughput than the 9p remote file system protocol.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. In {pve}, this process starts immediately before
+the start of QEMU.
+
+Linux VMs with kernel >=5.4 support this feature by default.
+
+There is a guide available on how to utilize virtio-fs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Known Limitations
+^^^^^^^^^^^^^^^^^
+
+* Virtiofsd crashing means no recovery until VM is fully stopped and restarted.
+* Virtiofsd not responding may result in NFS-like hanging access in the VM.
+* Memory hotplug does not work in combination with virtio-fs (also results in
+hanging access).
+* Live migration does not work.
+* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
+otherwise the virtio-fs device will not be visible within the VMs.
+
+Add Mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping for a shared directory, either use the API directly with
+`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
+
+----
+pvesh create /cluster/mapping/dir --id dir1 \
+--map node=node1,path=/path/to/share1 \
+--map node=node2,path=/path/to/share2,submounts=1 \
+--xattr 1 \
+--acl 1
+----
+
+The `acl` parameter automatically implies `xattr`, that is, it makes no
+difference whether you set `xattr` to `0` if `acl` is set to `1`.
+
+Set `submounts` to `1` when multiple file systems are mounted in a shared
+directory to prevent the guest from creating duplicates because of file system
+specific inode IDs that get passed through.
+
+
+Add virtio-fs to a VM
+^^^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
+anything between 0 and 9) to the VM config and use a directory ID (dirid) that
+has been configured in the resource mapping. Additionally, you can set the
+`cache` option to either `always`, `never`, or `auto` (default: `auto`),
+depending on your requirements. How the different caching modes behave can be
+read at https://lwn.net/Articles/774495/ under the title "Caching Modes". To
+enable writeback cache set `writeback` to `1`.
+
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
+parameter to `1` (default: `0`). This will degrade performance, but is useful if
+applications do their own caching.
+
+Additionally, it is possible to overwrite the default mapping settings for
+`xattr` and `acl` by setting them to either `1` or `0`. The `acl` parameter
+automatically implies `xattr`, that is, it makes no difference whether you set
+`xattr` to `0` if `acl` is set to `1`.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,xattr=1
+qm set <vmid> -virtiofs2 <dirid>,acl=1,writeback=1
+----
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
+following command inside the guest:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+The dirid associated with the path on the current node is also used as the mount
+tag (name used to mount the device on the guest).
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
@@ -1743,8 +1832,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `` is the hardware type (currently either `pci` or `usb`) and
-`` are the device mappings and other configuration parameters.
+Where `` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.2
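
As a usage illustration beyond the patch above (the mount point and the idea
of a persistent mount are assumptions): once a share with dirid `dir1` is
attached via `virtiofs0: dirid=dir1`, it can also be mounted from the
guest's /etc/fstab, using the dirid as the mount tag:

    dir1 /mnt/dir1 virtiofs defaults 0 0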






[pve-devel] [PATCH manager v10 09/11] ui: ResourceMapTree for DIR

2024-05-14 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 www/manager6/Makefile |  1 +
 www/manager6/dc/Config.js | 10 +++
 www/manager6/dc/DirMapView.js | 50 +++
 3 files changed, 61 insertions(+)
 create mode 100644 www/manager6/dc/DirMapView.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index f6140562..5a3541e0 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -189,6 +189,7 @@ JSSRC= \
dc/RealmSyncJob.js  \
dc/PCIMapView.js\
dc/USBMapView.js\
+   dc/DirMapView.js\
lxc/CmdMenu.js  \
lxc/Config.js   \
lxc/CreateWizard.js \
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index ddbb58b1..3355c835 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -320,6 +320,16 @@ Ext.define('PVE.dc.Config', {
title: gettext('USB Devices'),
flex: 1,
},
+   {
+   xtype: 'splitter',
+   collapsible: false,
+   performCollapse: false,
+   },
+   {
+   xtype: 'pveDcDirMapView',
+   title: gettext('Directories'),
+   flex: 1,
+   },
],
},
);
diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
new file mode 100644
index ..4468e951
--- /dev/null
+++ b/www/manager6/dc/DirMapView.js
@@ -0,0 +1,50 @@
+Ext.define('pve-resource-dir-tree', {
+extend: 'Ext.data.Model',
+idProperty: 'internalId',
+fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DirMapView', {
+extend: 'PVE.tree.ResourceMapTree',
+alias: 'widget.pveDcDirMapView',
+
+editWindowClass: 'PVE.window.DirMapEditWindow',
+baseUrl: '/cluster/mapping/dir',
+mapIconCls: 'fa fa-folder',
+entryIdProperty: 'path',
+
+store: {
+   sorters: 'text',
+   model: 'pve-resource-dir-tree',
+   data: {},
+},
+
+columns: [
+   {
+   xtype: 'treecolumn',
+   text: gettext('ID/Node'),
+   dataIndex: 'text',
+   width: 200,
+   },
+   {
+   text: gettext('xattr'),
+   dataIndex: 'xattr',
+   },
+   {
+   text: gettext('acl'),
+   dataIndex: 'acl',
+   },
+   {
+   text: gettext('submounts'),
+   dataIndex: 'submounts',
+   },
+   {
+   header: gettext('Comment'),
+   dataIndex: 'description',
+   renderer: function(value, _meta, record) {
+   return value ?? record.data.comment;
+   },
+   flex: 1,
+   },
+],
+});
-- 
2.39.2






[pve-devel] [PATCH manager v10 10/11] ui: form: add DIRMapSelector

2024-05-14 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 www/manager6/Makefile   |  1 +
 www/manager6/form/DirMapSelector.js | 63 +
 2 files changed, 64 insertions(+)
 create mode 100644 www/manager6/form/DirMapSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 5a3541e0..cac8cd02 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -35,6 +35,7 @@ JSSRC= \
form/ContentTypeSelector.js \
form/ControllerSelector.js  \
form/DayOfWeekSelector.js   \
+   form/DirMapSelector.js  \
form/DiskFormatSelector.js  \
form/DiskStorageSelector.js \
form/FileSelector.js\
diff --git a/www/manager6/form/DirMapSelector.js b/www/manager6/form/DirMapSelector.js
new file mode 100644
index ..473a2ffe
--- /dev/null
+++ b/www/manager6/form/DirMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DirMapSelector', {
+extend: 'Proxmox.form.ComboGrid',
+alias: 'widget.pveDirMapSelector',
+
+store: {
+   fields: ['name', 'path'],
+   filterOnLoad: true,
+   sorters: [
+   {
+   property: 'id',
+   direction: 'ASC',
+   },
+   ],
+},
+
+allowBlank: false,
+autoSelect: false,
+displayField: 'id',
+valueField: 'id',
+
+listConfig: {
+   columns: [
+   {
+   header: gettext('Directory ID'),
+   dataIndex: 'id',
+   flex: 1,
+   },
+   {
+   header: gettext('Comment'),
+   dataIndex: 'description',
+   flex: 1,
+   },
+   ],
+},
+
+setNodename: function(nodename) {
+   var me = this;
+
+   if (!nodename || me.nodename === nodename) {
+   return;
+   }
+
+   me.nodename = nodename;
+
+   me.store.setProxy({
+   type: 'proxmox',
+   url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+   });
+
+   me.store.load();
+},
+
+initComponent: function() {
+   var me = this;
+
+   var nodename = me.nodename;
+   me.nodename = undefined;
+
+me.callParent();
+
+   me.setNodename(nodename);
+},
+});
-- 
2.39.2
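
A hypothetical embedding sketch (field name and node value are assumptions
for illustration; patch 11/11 instantiates the selector in a similar way
inside its input panel):

    {
	xtype: 'pveDirMapSelector',
	name: 'dirid',
	nodename: 'node1', // assumed, normally taken from the selected guest's node
	fieldLabel: gettext('Directory ID'),
    },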






[pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs

2024-05-14 Thread Markus Frank
Virtio-fs is a shared file system that enables sharing a directory
between host and guest VMs. It takes advantage of the locality of
virtual machines and the hypervisor to get a higher throughput than
the 9p remote file system protocol.

build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager

I did not get virtiofsd to run with run_command without creating
zombie processes after shutdown. So I replaced run_command with exec
for now. Maybe someone can find out why this happens.


changes v10:
* rebase to master
* added gui patches again

cluster:

Markus Frank (1):
  add mapping/dir.cfg for resource mapping

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


guest-common:

Markus Frank (1):
  add dir mapping section config

 src/Makefile   |   1 +
 src/PVE/Mapping/Dir.pm | 205 +
 2 files changed, 206 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm


docs:

Markus Frank (1):
  add doc section for the shared filesystem virtio-fs

 qm.adoc | 94 +++--
 1 file changed, 92 insertions(+), 2 deletions(-)


qemu-server:

Markus Frank (3):
  add virtiofsd as runtime dependency for qemu-server
  fix #1027: virtio-fs support
  migration: check_local_resources for virtiofs

 PVE/API2/Qemu.pm |  39 ++-
 PVE/QemuServer.pm|  29 -
 PVE/QemuServer/Makefile  |   3 +-
 PVE/QemuServer/Memory.pm |  34 --
 PVE/QemuServer/Virtiofs.pm   | 212 +++
 debian/control   |   1 +
 test/MigrationTest/Shared.pm |   7 ++
 7 files changed, 313 insertions(+), 12 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm


manager:

Markus Frank (5):
  api: add resource map api endpoints for directories
  ui: add edit window for dir mappings
  ui: ResourceMapTree for DIR
  ui: form: add DIRMapSelector
  ui: add options to add virtio-fs to qemu config

 PVE/API2/Cluster/Mapping.pm |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm | 317 
 PVE/API2/Cluster/Mapping/Makefile   |   1 +
 www/manager6/Makefile   |   4 +
 www/manager6/Utils.js   |   1 +
 www/manager6/dc/Config.js   |  10 +
 www/manager6/dc/DirMapView.js   |  50 +
 www/manager6/form/DirMapSelector.js |  63 ++
 www/manager6/qemu/HardwareView.js   |  19 ++
 www/manager6/qemu/VirtiofsEdit.js   | 137 
 www/manager6/window/DirMapEdit.js   | 222 +++
 11 files changed, 831 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
 create mode 100644 www/manager6/dc/DirMapView.js
 create mode 100644 www/manager6/form/DirMapSelector.js
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js
 create mode 100644 www/manager6/window/DirMapEdit.js

-- 
2.39.2






[pve-devel] [PATCH qemu-server v10 6/11] migration: check_local_resources for virtiofs

2024-05-14 Thread Markus Frank
add dir mapping checks to check_local_resources

Since the VM needs to be powered off for migration, migration should
work with a directory on shared storage with all caching settings.

Signed-off-by: Markus Frank 
---
 PVE/QemuServer.pm| 10 +-
 test/MigrationTest/Shared.pm |  7 +++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a0600d7..ee7699f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2573,6 +2573,7 @@ sub check_local_resources {
 my $nodelist = PVE::Cluster::get_nodelist();
 my $pci_map = PVE::Mapping::PCI::config();
 my $usb_map = PVE::Mapping::USB::config();
+my $dir_map = PVE::Mapping::Dir::config();
 
 my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2584,6 +2585,8 @@ sub check_local_resources {
	    $entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
	} elsif ($type eq 'usb') {
	    $entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+	} elsif ($type eq 'dir') {
+	    $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
}
if (!scalar($entry->@*)) {
push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2612,9 +2615,14 @@ sub check_local_resources {
push @$mapped_res, $k;
}
}
+   if ($k =~ m/^virtiofs/) {
+   my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+   $add_missing_mapping->('dir', $k, $entry->{dirid});
+   push @$mapped_res, $k;
+   }
	# sockets are safe: they will be recreated on the target side post-migrate
	next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
 }
 
 die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d..c5d0722 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
 },
 );
 
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+config => sub {
+   return {};
+},
+);
+
 our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
 $ha_config_module->mock(
 vm_is_ha_managed => sub {
-- 
2.39.2






[pve-devel] [PATCH manager v10 07/11] api: add resource map api endpoints for directories

2024-05-14 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 PVE/API2/Cluster/Mapping.pm   |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 317 ++
 PVE/API2/Cluster/Mapping/Makefile |   1 +
 3 files changed, 325 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
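
As a usage illustration (hypothetical invocation, node name assumed; the
`check-node` parameter is defined in the patch below):

    pvesh get /cluster/mapping/dir --check-node node1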

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..9f0dcd2b 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -3,11 +3,17 @@ package PVE::API2::Cluster::Mapping;
 use strict;
 use warnings;
 
+use PVE::API2::Cluster::Mapping::Dir;
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
 
 use base qw(PVE::RESTHandler);
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Cluster::Mapping::Dir",
+path => 'dir',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::Cluster::Mapping::PCI",
 path => 'pci',
@@ -41,6 +47,7 @@ __PACKAGE__->register_method ({
my ($param) = @_;
 
my $result = [
+   { name => 'dir' },
{ name => 'pci' },
{ name => 'usb' },
];
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index ..ddb6977d
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,317 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::Mapping::Dir ();
+use PVE::RPCEnvironment;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+name => 'index',
+path => '',
+method => 'GET',
+# only proxy if we give the 'check-node' parameter
+proxyto_callback => sub {
+   my ($rpcenv, $proxyto, $param) = @_;
+   return $param->{'check-node'} // 'localhost';
+},
+description => "List directory mapping",
+permissions => {
+	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+   " 'Mapping.Audit' permissions on '/mapping/dir/'.",
+   user => 'all',
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   'check-node' => get_standard_option('pve-node', {
+	    description => "If given, checks the configurations on the given node for ".
+		"correctness, and adds relevant diagnostics for the directory to the response.",
+   optional => 1,
+   }),
+   },
+},
+returns => {
+   type => 'array',
+   items => {
+   type => "object",
+   properties => {
+   id => {
+   type => 'string',
+   description => "The logical ID of the mapping."
+   },
+   map => {
+   type => 'array',
+   description => "The entries of the mapping.",
+   items => {
+   type => 'string',
+   description => "A mapping for a node.",
+   },
+   },
+   description => {
+   type => 'string',
+   description => "A description of the logical mapping.",
+   },
+   xattr => {
+   type => 'boolean',
+   description => "Enable support for extended attributes.",
+   optional => 1,
+   },
+   acl => {
+   type => 'boolean',
+		description => "Enable support for posix ACLs (implies --xattr).",
+   optional => 1,
+   },
+   checks => {
+   type => "array",
+   optional => 1,
+		description => "A list of checks, only present if 'check-node' is set.",
+   items => {
+   type => 'object',
+   properties => {
+   severity => {
+   type => "string",
+   enum => ['warning', 'error'],
+   description => "The severity of the error",
+   },
+   message => {
+   type => "string",
+   description => "The message of the error",
+   },
+   },
+   }
+   },
+   },
+   },
+   links => [ { rel => 'child', href => "{id}" } ],
+},
+code => sub {
+   my ($param) = @_;
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   my $check_node = $param->{'check-node'};
+   my $local_node = PVE::INotify::nodename();
+
+   die "wrong node to check - $check_node != $local_node\n"
+   if defined($check_node) && 

[pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support

2024-05-14 Thread Markus Frank
add support for sharing directories with a guest vm

virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon, it does not
run in the background), a double-fork is used.

virtiofsd should close itself together with qemu.
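
A minimal sketch of that double-fork pattern (binary path, arguments, socket
and share paths are assumptions for illustration, not the exact patch code):

    use POSIX ();

    my ($socket_path, $share) = ('/run/virtiofsd.sock', '/mnt/share'); # assumed
    my $pid = fork() // die "fork failed: $!\n";
    if (!$pid) {
	# first child: detach, then fork again so the daemon gets reparented
	# to init and the parent has nothing left to reap
	POSIX::setsid();
	my $pid2 = fork() // POSIX::_exit(1);
	if (!$pid2) {
	    exec('/usr/libexec/virtiofsd', "--socket-path=$socket_path", "--shared-dir=$share");
	    POSIX::_exit(1); # only reached if exec fails
	}
	POSIX::_exit(0);
    }
    waitpid($pid, 0); # reap the first child right away, leaving no zombie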

There is the required parameter dirid as well as the optional parameters
direct-io, cache and writeback. Additionally, the xattr & acl parameters
override the directory mapping settings for xattr & acl.

The dirid gets mapped to the path on the current node and is also used as a
mount tag (name used to mount the device on the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,acl=1
virtiofs1: dirid=bar,cache=never,xattr=1,writeback=1
```

For information on the optional parameters see the coherent doc patch
and the official gitlab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Also add a permission check for virtiofs directory access.

Signed-off-by: Markus Frank 
---
 PVE/API2/Qemu.pm   |  39 ++-
 PVE/QemuServer.pm  |  19 +++-
 PVE/QemuServer/Makefile|   3 +-
 PVE/QemuServer/Memory.pm   |  34 --
 PVE/QemuServer/Virtiofs.pm | 212 +
 5 files changed, 296 insertions(+), 11 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 2a349c8..5d97896 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -695,6 +695,32 @@ my sub check_vm_create_hostpci_perm {
 return 1;
 };
 
+my sub check_dir_perm {
+my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+return 1 if $authuser eq 'root@pam';
+
+$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+$rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+return 1;
+};
+
+my sub check_vm_create_dir_perm {
+my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+return 1 if $authuser eq 'root@pam';
+
+for my $opt (keys %{$param}) {
+   next if $opt !~ m/^virtiofs\d+$/;
+   check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+}
+
+return 1;
+};
+
 my $check_vm_modify_config_perm = sub {
 my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -705,7 +731,7 @@ my $check_vm_modify_config_perm = sub {
# else, as there the permission can be value dependend
next if PVE::QemuServer::is_valid_drivename($opt);
next if $opt eq 'cdrom';
-   next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+   next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
next if $opt eq 'tags';
 
 
@@ -999,6 +1025,7 @@ __PACKAGE__->register_method({
	&$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
	check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
	check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
+   check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
&$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -1899,6 +1926,10 @@ my $update_vm_api  = sub {
		check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
PVE::QemuConfig->write_config($vmid, $conf);
+   } elsif ($opt =~ m/^virtiofs\d$/) {
+		check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+   PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+   PVE::QemuConfig->write_config($vmid, $conf);
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
delete $conf->{$opt};
@@ -1987,6 +2018,12 @@ my $update_vm_api  = sub {
}
		check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
$conf->{pending}->{$opt} = $param->{$opt};
+   } elsif ($opt =~ m/^virtiofs\d$/) {
+   if (my $oldvalue = $conf->{$opt}) {
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+		}
+		check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+   $conf->{pending}->{$opt} = $param->{$opt};
} elsif ($opt eq 'tags') {
		assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
		$conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9032d29..a0600d7 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -35,6 +35,7 

[pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server

2024-05-14 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 1301a36..8e4ca7f 100644
--- a/debian/control
+++ b/debian/control
@@ -55,6 +55,7 @@ Depends: dbus,
  socat,
  swtpm,
  swtpm-tools,
+ virtiofsd,
  ${misc:Depends},
  ${perl:Depends},
  ${shlibs:Depends},
-- 
2.39.2






[pve-devel] [PATCH manager v10 11/11] ui: add options to add virtio-fs to qemu config

2024-05-14 Thread Markus Frank
Signed-off-by: Markus Frank 
---
 www/manager6/Makefile |   1 +
 www/manager6/Utils.js |   1 +
 www/manager6/qemu/HardwareView.js |  19 +
 www/manager6/qemu/VirtiofsEdit.js | 137 ++
 4 files changed, 158 insertions(+)
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index cac8cd02..29d676df 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -270,6 +270,7 @@ JSSRC= \
qemu/Smbios1Edit.js \
qemu/SystemEdit.js  \
qemu/USBEdit.js \
+   qemu/VirtiofsEdit.js\
sdn/Browser.js  \
sdn/ControllerView.js   \
sdn/Status.js   \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f5608944..52ea5589 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1636,6 +1636,7 @@ Ext.define('PVE.Utils', {
serial: 4,
rng: 1,
tpmstate: 1,
+   virtiofs: 10,
 },
 
 // we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index 672a7e1a..0d4a3984 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -309,6 +309,16 @@ Ext.define('PVE.qemu.HardwareView', {
never_delete: !caps.nodes['Sys.Console'],
header: gettext("VirtIO RNG"),
};
+   for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+   let confid = "virtiofs" + i.toString();
+   rows[confid] = {
+   group: 50,
+   order: i,
+   iconCls: 'folder',
+   editor: 'PVE.qemu.VirtiofsEdit',
+   header: gettext('Virtiofs') + ' (' + confid +')',
+   };
+   }
 
var sorterFn = function(rec1, rec2) {
var v1 = rec1.data.key;
@@ -583,6 +593,7 @@ Ext.define('PVE.qemu.HardwareView', {
const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+   const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
 
me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -592,6 +603,7 @@ Ext.define('PVE.qemu.HardwareView', {
	me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
	efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
	me.down('#addTpmState').setDisabled(noSysConsolePerm || isAtLimit('tpmstate'));
+	me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
	me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
 
if (!rec) {
@@ -735,6 +747,13 @@ Ext.define('PVE.qemu.HardwareView', {
disabled: !caps.nodes['Sys.Console'],
handler: editorFactory('RNGEdit'),
},
+   {
+   text: gettext("Virtiofs"),
+   itemId: 'addVirtiofs',
+   iconCls: 'fa fa-folder',
+   disabled: !caps.nodes['Sys.Console'],
+   handler: editorFactory('VirtiofsEdit'),
+   },
],
}),
},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index ..ec5c69fd
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,137 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+extend: 'Proxmox.panel.InputPanel',
+xtype: 'pveVirtiofsInputPanel',
+onlineHelp: 'qm_virtiofs',
+
+insideWizard: false,
+
+onGetValues: function(values) {
+   var me = this;
+   var confid = me.confid;
+   var params = {};
+   delete values.delete;
+   params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+   return params;
+},
+
+setSharedfiles: function(confid, data) {
+   var me = this;
+   me.confid = confid;
+   me.virtiofs = data;
+   me.setValues(me.virtiofs);
+},
+initComponent: function() {
+   let me = this;
+
+   me.nodename = me.pveSelNode.data.node;
+   if (!me.nodename) {
+   throw "no node name specified";
+   }
+   me.items = [
+   {
+   xtype: 'pveDirMapSelector',
+   emptyText: 'dirid',
+  

[pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config

2024-05-14 Thread Markus Frank
Adds a config file for directories by using a 'map' property string for
each node mapping.

Besides node & path, there is the optional submounts parameter in the
map property string, which is used to announce other mounted file systems
in the specified directory.

Additionally, there are the default settings for xattr & acl.

example config:
```
some-dir-id
map node=node1,path=/mnt/share/,submounts=1
map node=node2,path=/mnt/share/,
xattr 1
acl 1
```
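
A hedged usage sketch for reading such a mapping back (function names taken
from how the qemu-server patch 6/11 in this series consumes the module; the
ID and node name are assumptions):

    my $cfg = PVE::Mapping::Dir::config();
    my $entries = PVE::Mapping::Dir::get_node_mapping($cfg, 'some-dir-id', 'node1');
    die "no mapping for this node\n" if !scalar($entries->@*);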

Signed-off-by: Markus Frank 
---
 src/Makefile   |   1 +
 src/PVE/Mapping/Dir.pm | 205 +
 2 files changed, 206 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..030e7f7 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -15,6 +15,7 @@ install: PVE
install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
install -d ${PERL5DIR}/PVE/Mapping
+   install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
install -d ${PERL5DIR}/PVE/VZDump
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 000..8f131c2
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,205 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+use PVE::Storage::Plugin;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+  sub { __PACKAGE__->parse_config(@_); },
+  sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+my ($class, $line) = @_;
+
+if ($line =~ m/^(\S+)\s*$/) {
+   my $id = $1;
+   my $errmsg = undef; # set if you want to skip whole section
+   eval { PVE::JSONSchema::pve_verify_configid($id) };
+   $errmsg = $@ if $@;
+   my $config = {}; # to return additional attributes
+   return ('dir', $id, $errmsg, $config);
+}
+return undef;
+}
+
+sub format_section_header {
+my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+return "$sectionId\n";
+}
+
+sub type {
+return 'dir';
+}
+
+my $map_fmt = {
+node => get_standard_option('pve-node'),
+path => {
+	description => "Absolute directory path that should be shared with the guest.",
+   type => 'string',
+   format => 'pve-storage-path',
+},
+submounts => {
+   type => 'boolean',
+   description => "Announce that the directory contains other mounted"
+   ." file systems. If this is not set and multiple file systems are"
+   ." mounted, the guest may encounter duplicates due to file system"
+   ." specific inode IDs.",
+   optional => 1,
+   default => 0,
+},
+description => {
+   description => "Description of the node specific directory.",
+   type => 'string',
+   optional => 1,
+   maxLength => 4096,
+},
+};
+
+my $defaultData = {
+propertyList => {
+   id => {
+   type => 'string',
+   description => "The ID of the directory",
+   format => 'pve-configid',
+   },
+   description => {
+   description => "Description of the directory",
+   type => 'string',
+   optional => 1,
+   maxLength => 4096,
+   },
+   map => {
+   type => 'array',
+   description => 'A list of maps for the cluster nodes.',
+   optional => 1,
+   items => {
+   type => 'string',
+   format => $map_fmt,
+   },
+   },
+   xattr => {
+   type => 'boolean',
+   description => "Enable support for extended attributes."
+   ." If not supported by Guest OS or file system, this option is"
+   ." simply ignored.",
+   optional => 1,
+   default => 0,
+   },
+   acl => {
+   type => 'boolean',
+   description => "Enable support for POSIX ACLs (implies --xattr)."
+   ." The guest OS has to support ACLs. When used in a directory"
+	    ." with a file system without ACL support, the ACLs are ignored.",
+   optional => 1,
+   default => 0,
+   },
+},
+};
+
+sub private {
+return $defaultData;
+}
+
+sub map_fmt {
+return $map_fmt;
+}
+
+sub options {
+return {
+   description => { optional => 1 },
+   map => {},
+   xattr => { optional => 1 },
+   acl => { optional => 1 },
+};
+}
+
+sub assert_valid {
+my ($dir_cfg) = @_;
+
+my $path = $dir_cfg->{path};
+
+if (! -e 

[pve-devel] [PATCH cluster v10 1/11] add mapping/dir.cfg for resource mapping

2024-05-14 Thread Markus Frank
Add it to both the perl side (PVE/Cluster.pm) and pmxcfs side
(status.c).
This dir.cfg is used to map directory IDs to paths on selected hosts.

Signed-off-by: Markus Frank 
Reviewed-by: Fiona Ebner 
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index f899dbe..6b775f8 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -82,6 +82,7 @@ my $observed = {
 'sdn/.running-config' => 1,
 'virtual-guest/cpu-models.conf' => 1,
 'virtual-guest/profiles.cfg' => 1,
+'mapping/dir.cfg' => 1,
 'mapping/pci.cfg' => 1,
 'mapping/usb.cfg' => 1,
 };
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index dc44464..17cbf61 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -112,6 +112,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "virtual-guest/cpu-models.conf" },
{ .path = "virtual-guest/profiles.cfg" },
{ .path = "firewall/cluster.fw" },
+   { .path = "mapping/dir.cfg" },
{ .path = "mapping/pci.cfg" },
{ .path = "mapping/usb.cfg" },
 };
-- 
2.39.2






Re: [pve-devel] [PATCH installer 3/6] fix #5250: proxinstall: expose new btrfs `compress` option

2024-05-14 Thread Christoph Heiss
On Mon, May 13, 2024 at 02:13:52PM +0200, Christoph Heiss wrote:
> Signed-off-by: Christoph Heiss 
> ---
>  Proxmox/Install.pm |  7 +--
>  proxinstall| 15 +++
>  2 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
> index f3bc5aa..60f38e5 100644
> --- a/Proxmox/Install.pm
> +++ b/Proxmox/Install.pm
> @@ -1014,8 +1014,11 @@ sub extract_data {
>   my $btrfs_opts = Proxmox::Install::Config::get_btrfs_opt();
>
>   my $mountopts = 'defaults';
> - $mountopts .= ",compress=$btrfs_opts->{compress}"
> - if $btrfs_opts->{compress} ne 'off';
> + if ($btrfs_opts->{compress} eq 'on') {
> + $mountopts .= ',compress';
> + } elsif ($btrfs_opts->{compress} ne 'off') {
> + $mountopts .= ",compress=$btrfs_opts->{compress}";
> + }

That was supposed to be squashed into the previous patch, whoops!
I'll send a v2 soon if nothing else comes up, sorry for the noise.
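
For illustration (option values assumed): `compress=on` now yields the mount
options `defaults,compress`, `compress=zstd` yields
`defaults,compress=zstd`, and `compress=off` keeps plain `defaults`.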

