nded' lock, but apparently [0] we cannot rely on the lock to be
set if and only if there is a vmstate.
[0]: https://forum.proxmox.com/threads/task-error-start-failed.72450
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 96de0db..cd4a005 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1179,16 +1179,9 @@ sub phase3_cleanup
Allows mocking the configuration move for testing
and reduces duplication between the migration modules by a tiny amount.
Signed-off-by: Fabian Ebner
---
Dependency bumps
container,qemu-server -> guest-common
are needed
PVE/AbstractConfig.pm | 11 +++
1 file changed, 11 inserti
Signed-off-by: Fabian Ebner
---
I felt like this makes sense as a single block now (without each
line being separated by a blank), but I can send a v2 without that style
change if you want. Same for the next patch.
src/PVE/LXC/Migrate.pm | 12 ++--
1 file changed, 2 insertions(+), 10
For the use case with '--dumpdir', it's not possible to call prune_backups
directly, so a little bit of special handling is required there.
Signed-off-by: Fabian Ebner
---
Note that $opts->{'prune-backups'} is always defined after
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Status.pm | 65 +++---
1 file changed, 32 insertions(+), 33 deletions(-)
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 14f5930..d9d9b36 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE
to avoid some code duplication.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 23 +--
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index b1107eac..e8669665 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -631,10 +631,15
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Makefile| 2 +-
PVE/API2/Storage/PruneBackups.pm | 153 +++
PVE/API2/Storage/Status.pm | 7 ++
PVE/CLI/pvesm.pm | 27 ++
4 files changed, 188 insertions(+), 1 deletion(-)
create
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index ce8796d9..9bdb5ab0 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -84,19 +84,18 @@ sub storage_info {
PVE::Storage
e next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 83 +++
1 file changed, 58 in
No functional change is intended.
The preference order is: option, then storage config, then vzdump defaults.
Signed-off-by: Fabian Ebner
---
IMHO the old method was very confusing.
PVE/VZDump.pm | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/PVE/VZDump.pm b
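The stated preference order (option, then storage config, then vzdump defaults) maps naturally onto Perl's defined-or operator. A minimal sketch, assuming hypothetical hash names (`resolve_opt` and the three hashes are illustrative, not the actual vzdump code):

```perl
use strict;
use warnings;

# Sketch of the stated precedence: CLI option, then storage
# configuration, then vzdump defaults.
sub resolve_opt {
    my ($opts, $storage_cfg, $defaults, $key) = @_;
    # '//' keeps a defined 0, unlike '||', which matters for flag options
    return $opts->{$key} // $storage_cfg->{$key} // $defaults->{$key};
}
```

Note that `//` rather than `||` is what makes an explicitly configured `0` win over a storage-level or default value.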
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
Changes in v3:
* When checking if all keep-options are 0, improve readability
by using hash values directly
* For creation times in
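The readability change mentioned for the all-zero check on keep-options can be sketched as follows; the hash contents are illustrative, not the actual option set:

```perl
use strict;
use warnings;

# Hypothetical illustration: test all keep-* values at once via
# the hash values instead of naming each option individually.
my $keep = { 'keep-last' => 0, 'keep-daily' => 0, 'keep-weekly' => 0 };
my $all_zero = !grep { $_ } values %$keep;    # true iff every value is 0/undef
```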
Signed-off-by: Fabian Ebner
---
Changes in v3:
* die if unlink of archive fails
* check whether log file exists before trying to unlink it
* warn if unlink of log file fails
PVE/Storage.pm | 17 +
1 file changed, 17 insertions(+)
diff --git a/PVE/Storage.pm b/PVE
dumpdir will be overwritten if a storage is specified
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index dceeb9ca..ce8796d9 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -449,7 +449,9
Signed-off-by: Fabian Ebner
---
PVE/Storage/PBSPlugin.pm | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
index fba4b2b..f029e55 100644
--- a/PVE/Storage/PBSPlugin.pm
+++ b/PVE/Storage/PBSPlugin.pm
@@ -88,6
Signed-off-by: Fabian Ebner
---
PVE/Storage/CIFSPlugin.pm | 1 +
PVE/Storage/CephFSPlugin.pm| 1 +
PVE/Storage/DirPlugin.pm | 5 ++--
PVE/Storage/GlusterfsPlugin.pm | 5 ++--
PVE/Storage/NFSPlugin.pm | 5 ++--
PVE/Storage/PBSPlugin.pm | 1 +
PVE/Storage/Plugin.pm
Add a test case for this.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm| 13 -
test/archive_info_test.pm | 22 ++
2 files changed, 30 insertions(+), 5 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 07a4f53..ac0dccd 100755
--- a
/content/prunebackups'. Instead, I introduced
'{storage}/prunebackups'.
A dependency bump 'manager -> storage' is needed for patches #11-#13.
storage:
Fabian Ebner (7):
Introduce prune-backups property for directory-based storages
Extend archive_info to include fil
On 6/15/20 2:21 PM, Thomas Lamprecht wrote:
On 6/10/20 at 1:23 PM, Fabian Ebner wrote:
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
Changes in v2:
* Return actual volid in PBS using the
On 6/15/20 2:01 PM, Thomas Lamprecht wrote:
On 6/10/20 at 1:23 PM, Fabian Ebner wrote:
to keep the removal of the archive and its log file together.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/Storage.pm b
e next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 83 +++
1 f
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
Changes in v2:
* Return actual volid in PBS using the new print_volid helper
* Split out prune_mark_backup_group and move it to Storage.pm
For the use case with '--dumpdir', it's not possible to call prune_backups
directly, so a little bit of special handling is required there.
Note that $opts->{'prune-backups'} is always defined after new()
Signed-off-by: Fabian Ebner
---
Ne
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Makefile| 2 +-
PVE/API2/Storage/PruneBackups.pm | 153 +++
PVE/API2/Storage/Status.pm | 7 ++
PVE/CLI/pvesm.pm | 27 ++
4 files changed, 188 insertions(+), 1 deletion(-)
create
e's a regex below '{storage}/content',
namely '{volume}', so it's not possible to create endpoints like
'{storage}/content/prunebackups'. Instead, I introduced
'{storage}/prunebackups'.
A dependency bump 'manager -> storage' is needed for
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index bc4ac751..12c02a2a 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -84,19 +84,18 @@ sub storage_info
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage/PBSPlugin.pm | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
index fba4b2b..f029e55 100644
--- a/PVE/Storage/PBSPlugin.pm
+++ b/PVE/Storage/PBSPlugin.pm
to keep the removal of the archive and its log file together.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index ac0dccd..a459572 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
Add a test case for this.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage.pm| 13 -
test/archive_info_test.pm | 22 ++
2 files changed, 30 insertions(+), 5 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 07a4f53..ac0dccd 1
dumpdir will be overwritten if a storage is specified.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index bdbf641e..bc4ac751 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -449,7
to avoid some code duplication.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 23 +--
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 6d68ac34..8ef9fbf0 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
No functional change is intended.
The preference order is: option, then storage config, then vzdump defaults.
Signed-off-by: Fabian Ebner
---
New in v2
IMHO the old method was very confusing.
PVE/VZDump.pm | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/PVE
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Status.pm | 65 +++---
1 file changed, 32 insertions(+), 33 deletions(-)
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 14f5930..d9d9b36 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE
Signed-off-by: Fabian Ebner
---
PVE/Storage/CIFSPlugin.pm | 1 +
PVE/Storage/CephFSPlugin.pm| 1 +
PVE/Storage/DirPlugin.pm | 5 ++--
PVE/Storage/GlusterfsPlugin.pm | 5 ++--
PVE/Storage/NFSPlugin.pm | 5 ++--
PVE/Storage/PBSPlugin.pm | 1 +
PVE/Storage/Plugin.pm
On 6/4/20 11:08 AM, Fabian Ebner wrote:
Implement it for generic storages supporting backups (i.e.
directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 27 -
PVE/Storage/PBSPlugin.pm | 50
PVE/Storage/Plugin.pm
Any feedback for these patches?
On 5/4/20 10:50 AM, Fabian Ebner wrote:
The size of VM state files and the size of unused disks not
referenced by any snapshot is not saved in the VM configuration,
so it's not available here either.
Signed-off-by: Fabian Ebner
---
Changes from v1:
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Status.pm | 65 +++---
1 file changed, 32 insertions(+), 33 deletions(-)
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 14f5930..d9d9b36 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE
where 'is_std_name' shows whether the backup name uses the standard naming
scheme and most likely was created by our tools.
Also adds a '^' to the existing filename matching regex, which
should be fine since basename() is used beforehand.
Signed-off-by: Fabian Ebner
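The reasoning above can be illustrated with a small sketch; the file name and the simplified pattern are assumptions, not the exact regex from the code:

```perl
use strict;
use warnings;
use File::Basename;

# After basename() strips any directory part, anchoring the match
# with '^' cannot lose legitimate matches.
my $path = '/mnt/dump/vzdump-qemu-123-2020_06_10-12_00_00.vma.zst';
my $name = basename($path);
my $is_std_name = $name =~ m/^vzdump-(qemu|lxc|openvz)-(\d+)-/ ? 1 : 0;
```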
Implement it for generic storages supporting backups (i.e.
directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 27 -
PVE/Storage/PBSPlugin.pm | 50
PVE/Storage/Plugin.pm | 128
test
Signed-off-by: Fabian Ebner
---
Not sure if this is the best place for the new API endpoints.
I decided to opt for two distinct calls rather than just using a
--dry-run option, and to use a worker for actually pruning, because
removing many backups over the network might take a while.
PVE/API2
ely '{volume}', so it's not possible to create endpoints like
'{storage}/content/prunebackups'. Instead, I introduced
'{storage}/prunebackups'.
Fabian Ebner (6):
PBSPlugin: list_volumes: filter by vmid if specified
Expand archive_info to include ctime, v
Signed-off-by: Fabian Ebner
---
PVE/Storage/PBSPlugin.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
index 3c0879c..65696f4 100644
--- a/PVE/Storage/PBSPlugin.pm
+++ b/PVE/Storage/PBSPlugin.pm
@@ -291,6 +291,7 @@ sub list_volumes
Signed-off-by: Fabian Ebner
---
PVE/Storage/CIFSPlugin.pm | 1 +
PVE/Storage/CephFSPlugin.pm| 1 +
PVE/Storage/DirPlugin.pm | 5 ++--
PVE/Storage/GlusterfsPlugin.pm | 5 ++--
PVE/Storage/NFSPlugin.pm | 5 ++--
PVE/Storage/PBSPlugin.pm | 1 +
PVE/Storage/Plugin.pm
This reverts commit 95015dbbf24b710011965805e689c03923fb830c.
parse_volname always gives 'images' and not 'rootdir'. In most
cases the volume name alone does not contain the needed information,
e.g. vm-123-disk-0 can be both a VM volume or a container volume.
Signed-
by extending filter_local_volumes.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 38 +-
1 file changed, 21 insertions(+), 17 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 49a0e03..e7d16c7 100644
--- a/PVE/QemuMigrate.pm
+++ b
merge them somehow.
But before thinking too much about those things I wanted
to get some feedback for this and ask if this is the
right direction to go in.
Fabian Ebner (11):
sync_disks: fix check
update_disksize: make interface leaner
Split sync_disks into two functions
Avoid re-s
elf->{online_local_volumes}, and hence is the place
to look for which volumes we need to remove. Of course, replicated
volumes still need to be skipped.
Signed-off-by: Fabian Ebner
---
Who needs phase3 anyways ;)?
PVE/QemuMigrate.pm | 45 -
1 file c
by using the information from volume_map. Call cleanup_remotedisks in
phase1_cleanup as well, because that's where we end if sync_disks
fails and some disks might already have been transferred successfully.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 19 ++-
1
This makes sure that they are present in volume_map as soon as
the remote node tells us that they have been allocated.
Signed-off-by: Fabian Ebner
---
Makes the cleanup_remotedisks simplification in the next patch possible.
Another idea would be to do it in its own loop, after obtaining the
by making local_volumes class-accessible. One function is for scanning all
local volumes and one is for actually syncing offline volumes via
storage_migrate.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 98 ++
1 file changed, 64 insertions
Signed-off-by: Fabian Ebner
---
This is a re-send of a previously stand-alone patch.
PVE/QemuMigrate.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b729940..f6baeda 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 55 ++
1 file changed, 31 insertions(+), 24 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 3b138c4..152cb25 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
It is enough to call get_bandwidth_limit once for each source_storage.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 22 +-
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 152cb25..777ba2e 100644
--- a/PVE
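The per-storage change described above can be sketched with a cache hash; the volume IDs and the `$get_limit` stub (standing in for the real storage helper) are assumptions for illustration:

```perl
use strict;
use warnings;

# Sketch: resolve the bandwidth limit once per source storage and
# cache it, instead of resolving it once per disk.
my %limit_for_storage;
my @volids = ('local-lvm:vm-100-disk-0', 'local-lvm:vm-100-disk-1', 'nfs:vm-100-disk-2');
my $calls = 0;
my $get_limit = sub { $calls++; return 10_000 };    # stand-in for the storage helper

for my $volid (@volids) {
    my ($sid) = split(/:/, $volid, 2);    # storage ID is the part before ':'
    $limit_for_storage{$sid} //= $get_limit->($sid);
}
```

With three disks on two storages, the helper runs only twice.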
This way we don't need to worry about auto-vivification.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 24 +++-
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a6f42df..49a0e03 100644
---
Pass the new size directly, so the function doesn't need to know
how some hash is organized. And return a message directly, instead
of both size-strings. Also dropped the wantarray, because both
existing callers use the message anyway.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigra
by using the information obtained in the first scan. This
also makes sure we only scan local storages.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e65b28f..3b138c4
The backend treats an undefined value and 0 differently. If the option
is undefined, it will still be set for Windows in config_to_command.
Replace the checkbox with a combobox covering all options.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* use a combobox with all options to allow
Signed-off-by: Fabian Ebner
---
Changes from v1:
* die/warn depending on force (thanks to Thomas and Aaron for the
suggestion)
* don't die/warn if VM is not replicated at all
PVE/API2/Qemu.pm | 13 +
1 file changed, 13 insertions(+)
diff --git a/PVE/API2/Qemu.pm
Partially fixes #2728 (GUI part is still needed).
Signed-off-by: Fabian Ebner
---
PVE/API2/Qemu.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index fd51bf3..8e993a9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3486,6 +3486,12
On 5/12/20 3:45 PM, Mira Limbeck wrote:
Replicated disks can only be live migrated to the same storage on the
target node. Add a warning that mentions that limitation. The warning is
only printed when the target node is a replication target. When the
target node is not a replication target, the o
On 5/12/20 3:45 PM, Mira Limbeck wrote:
For better warnings regarding replicated disks and the ignored target
storage, add the 'is_replicated' field to the migration check result.
This contains the result of the replication checks. The first one checks if
the VM is replicated, and the second one
Signed-off-by: Fabian Ebner
---
The real issue is that the shared volumes are scanned here and
that happens in the scan_volids call above. I'll try to address
that as part of the sync_disks cleanup I'm working on.
PVE/QemuMigrate.pm | 4 +++-
1 file changed, 3 insertions(+),
Signed-off-by: Fabian Ebner
---
www/manager6/Utils.js| 7 +++
www/manager6/qemu/Options.js | 6 +++---
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 0cce81d4..24e7f1e2 100644
--- a/www/manager6/Utils.js
+++ b/ww
r in the backend is to use the
original layout from the backup configuration file, which
makes sense to use as the default in the GUI as well.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* avoid unnecessary ?-operators
* better emptyText
www/manager6/window/Restore.js | 9 ++---
On 5/5/20 1:40 PM, Thomas Lamprecht wrote:
On 5/5/20 1:20 PM, Fabian Ebner wrote:
Previously, the blank '' would be passed along and lead to a
parameter verification failure.
For LXC the default behavior in the backend is to use 'local' as
the storage, so disallow blank
efault behavior in the backend is to use the
original layout from the backup configuration file, which
makes sense to use as the default in the GUI as well.
Signed-off-by: Fabian Ebner
---
www/manager6/window/Restore.js | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/
On 5/5/20 12:02 PM, Thomas Lamprecht wrote:
On 5/5/20 10:27 AM, Fabian Ebner wrote:
by moving the write_config calls from vmconfig_*_pending to their
call sites. The single other call site for update_pct_config in
update_vm is also adapted.
The update_pct_config call led to a write_config
$old_config was always the one written
by update_pct_config. Meaning that for a create_vm call with force=1,
already existing old volumes were not removed.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* instead of re-ordering, move the write_config
calls to the (grand-)parent call sites
From: Fabian Grünbichler
Signed-off-by: Fabian Grünbichler
Tested-by: Fabian Ebner
---
Changes from v1:
* Add patch for container create_vm issue
* Add patch for snapshot_rollback issue
* Dropped the two already applied patches for qemu-server
src/PVE/LXC.pm | 4 ++--
1 file
From: Fabian Grünbichler
and move the lock call and decision logic closer together
Signed-off-by: Fabian Grünbichler
Tested-by: Fabian Ebner
---
PVE/API2/Qemu.pm | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index
No functional change is intended.
Signed-off-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index beb10c7..f1b395c 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
From: Fabian Grünbichler
Signed-off-by: Fabian Grünbichler
Tested-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 14 ++
1 file changed, 14 insertions(+)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index eefeeb9..3a064b7 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE
figurable timeout
the latter only has a single user (qemu-server's clone API call)
currently.
Signed-off-by: Fabian Grünbichler
Tested-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 39 ++-
1 file changed, 22 insertions(+), 17 deletions(-)
diff --git a/PVE/Abs
Commit a1dfeff3a8502544123245ea61ad62cbe97803b7 changed the behavior
for Replication::prepare with last_sync=0, so use last_sync=1 instead.
Signed-off-by: Fabian Ebner
---
This is not related to the locking issues.
PVE/AbstractConfig.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion
From: Fabian Grünbichler
and check_lock before forking as well
Signed-off-by: Fabian Grünbichler
Tested-by: Fabian Ebner
---
src/PVE/API2/LXC.pm | 31 ---
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
See [0] for the details. The call tree for the variants is
lock_config -> lock_config_full -> lock_config_mode
so it is sufficient to adapt lock_config_mode.
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=2682
Suggested-by: Fabian Grünbichler
Signed-off-by: Fabian Ebner
---
Not
From: Fabian Grünbichler
to protect checks against concurrent modifications
Signed-off-by: Fabian Grünbichler
Tested-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 45 +--
1 file changed, 22 insertions(+), 23 deletions(-)
diff --git a/PVE
I'll re-send this as part of the lock series v2.
On 4/30/20 11:33 AM, Fabian Ebner wrote:
by moving the write_config calls from vmconfig_*_pending to their
call sites. The single other call site for update_pct_config in
update_vm is also adapted. The first write_config ca
On 5/4/20 6:02 PM, Thomas Lamprecht wrote:
On 4/23/20 1:51 PM, Fabian Ebner wrote:
No functional change is intended.
Signed-off-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/PVE/AbstractConfig.pm b/PVE
so the current disk locations can be preserved even if
there are multiple local disks. And users don't have to
manually select the current storage if there is only one
local disk.
Signed-off-by: Fabian Ebner
---
www/manager6/window/Migrate.js | 8 ++--
1 file changed, 6 insertions(
The size of VM state files and the size of unused disks not
referenced by any snapshot is not saved in the VM configuration,
so it's not available here either.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* use variable for size text and use format string
* drop patch exp
storage option for offline migration available via GUI should wait until
we either:
1. have a way to error out early
or even better:
2. can support all possible exports/imports with such a general fallback
method.
- Original Message -
From: "Fabian Ebner"
To: "pve-devel"
xport/import formats before starting
migration, it would improve. But currently the erroring out happens on a
per disk basis inside storage_migrate.
So I'm not sure anymore if this is an improvement. If not, and if patch
#3 is fine, I'll send a v2 without this one.
On 30.04.20 12:5
so the current disk locations can be preserved even if
there are multiple local disks. And users don't have to
manually select the current storage if there is only one
local disk.
Signed-off-by: Fabian Ebner
---
Not too happy about the "use current layout" text. Maybe
somebody ha
The size of VM state files and the size of unused disks not
referenced by any snapshot is not saved in the VM configuration,
so it's not available here either.
Signed-off-by: Fabian Ebner
---
www/manager6/window/Migrate.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
Signed-off-by: Fabian Ebner
---
www/manager6/window/Migrate.js | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 9fc66a9b..20e057ad 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6
, the file read for $old_config was always the one written
by update_pct_config. Meaning that for a create_vm call with force=1,
already existing old volumes were not removed.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* instead of re-ordering, move the write_config
calls to the
On 30.04.20 08:59, Fabian Grünbichler wrote:
On April 29, 2020 11:58 am, Fabian Ebner wrote:
The update_pct_config call leads to a write_config call and so the
configuration file was created before it was intended to be created.
When the CFS is updated in between the write_config call and the
Signed-off-by: Fabian Ebner
---
Follow-up for https://pve.proxmox.com/pipermail/pve-devel/2020-April/043041.html
PVE/QemuServer.pm | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 63b368f..efacc45 100644
--- a/PVE
written
by update_pct_config. Meaning that for a create_vm call with force=1,
already existing old volumes were not removed.
Signed-off-by: Fabian Ebner
---
Note that this should be applied before [0] to avoid temporary
(further ;)) breakage of container creation.
[0]: https://pve.proxmox.com/pipermail/pve-
Played around for a bit with your patches applied on top of mine and
found no obvious issue, except for LXC creation [0] which is exposed by
my second patch.
So for Fabian G.'s patches:
Tested-By: Fabian Ebner
[0]: https://pve.proxmox.com/pipermail/pve-devel/2020-April/043210.htm
On 28.04.20 13:00, Fabian Ebner wrote:
This patch breaks container creation for some reason. It'll fail with:
unable to create CT : config file already exists
the real error message is:
unable to create CT : CT already exists on node ''
I pasted the wrong one in a hurry
I
This patch breaks container creation for some reason. It'll fail with:
unable to create CT : config file already exists
I'll investigate why this happens.
On 23.04.20 13:51, Fabian Ebner wrote:
See [0] for the details. The call tree for the variants is
lock_config -> loc
by introducing a safe_compare helper. Fixes warnings, e.g.
pvesh get /nodes//network
would print "use of uninitialized"-warnings if there are inactive
network interfaces, because for those, 'active' is undef.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* don't
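The safe_compare helper described above can be sketched like this; the function body is a hypothetical reconstruction from the commit message, not the actual implementation:

```perl
use strict;
use warnings;

# Hypothetical sketch of a safe_compare helper: handle undef on
# either side explicitly so sorting never touches an uninitialized
# value (avoiding "use of uninitialized" warnings).
sub safe_compare {
    my ($left, $right, $cmp) = @_;
    return 0 if !defined($left) && !defined($right);
    return defined($left) ? 1 : -1 if !defined($left) || !defined($right);
    return $cmp->($left, $right);
}
```

In a sort this would be used as e.g. `sort { safe_compare($a->{active}, $b->{active}, sub { $_[0] <=> $_[1] }) } @ifaces`, so inactive interfaces with an undef 'active' field sort cleanly.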
On 27.04.20 13:27, Thomas Lamprecht wrote:
On 4/27/20 11:41 AM, Fabian Ebner wrote:
This restores the behavior for sort_key as it's described in the
comment for print_text_table.
Fixes warnings, e.g.
pvesh get /nodes//network
would print "use of uninitialized"-warnings if the
One not-patch-related observation inline.
On 27.04.20 10:24, Fabian Grünbichler wrote:
to protect checks against concurrent modifications
Signed-off-by: Fabian Grünbichler
---
Notes:
best viewed with --patience -w
PVE/AbstractConfig.pm | 45 +-
This restores the behavior for sort_key as it's described in the
comment for print_text_table.
Fixes warnings, e.g.
pvesh get /nodes//network
would print "use of uninitialized"-warnings if there are inactive
network interfaces, because for those, 'active' is undef.
Si
On 23.04.20 13:51, Fabian Ebner wrote:
See [0] for the details. The call tree for the variants is
lock_config -> lock_config_full -> lock_config_mode
so it is sufficient to adapt lock_config_mode.
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=2682
Suggested-by: Fabian Grünbichler
No functional change is intended.
Signed-off-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index beb10c7..f1b395c 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
See [0] for the details. The call tree for the variants is
lock_config -> lock_config_full -> lock_config_mode
so it is sufficient to adapt lock_config_mode.
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=2682
Suggested-by: Fabian Grünbichler
Signed-off-by: Fabian Ebner
--