Now it is possible to move the volume to another storage.
This works only when the CT is off, to keep the volume consistent.
---
src/PVE/API2/LXC.pm | 132
src/PVE/CLI/pct.pm | 1 +
2 files changed, 133 insertions(+)
diff --git
This check ensures that no volume has snapshots that we are unable to
migrate.
---
PVE/QemuMigrate.pm | 9 -
PVE/QemuServer.pm | 11 +++
2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index ef5d9e6..ddcfa44
This check ensures that no volume has snapshots that we are unable to
migrate.
---
PVE/QemuMigrate.pm | 9 -
PVE/QemuServer.pm | 11 +++
2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index ef5d9e6..ce7a5a7
Migration on LVM and LVMThin is possible offline.
---
PVE/QemuMigrate.pm | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 7b9506f..26744f0 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -293,12
---
pve-zsync | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-zsync b/pve-zsync
index ff9ee8f..b0abf3c 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -476,7 +476,7 @@ sub init {
die "Pool $dest->{all} does not exists\n" if check_pool_exists($dest);
-my $check =
---
PVE/CLI/pvesm.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index f5ae277..1a83c66 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -151,7 +151,7 @@ our $cmddef = {
alloc => [ "PVE::API2::Storage::Content", 'create',
---
pve-storage-dir.adoc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-storage-dir.adoc b/pve-storage-dir.adoc
index f664465..c8dbffb 100644
--- a/pve-storage-dir.adoc
+++ b/pve-storage-dir.adoc
@@ -113,7 +113,7 @@ Please use the following command to allocate a 4GB image
---
pve-storage-dir.adoc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-storage-dir.adoc b/pve-storage-dir.adoc
index f664465..088473c 100644
--- a/pve-storage-dir.adoc
+++ b/pve-storage-dir.adoc
@@ -113,7 +113,7 @@ Please use the following command to allocate a 4GB image
---
PVE/CLI/pvesm.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index f5ae277..e3b4570 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -151,7 +151,7 @@ our $cmddef = {
alloc => [ "PVE::API2::Storage::Content", 'create',
Make the code more readable.
---
PVE/QemuMigrate.pm | 30 +++---
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 26744f0..ef5d9e6 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -287,21 +287,21 @@ sub
---
PVE/VZDump.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 1e00c2d..0280568 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -853,6 +853,17 @@ sub exec_backup_task {
# lock VM (prevent config changes)
$plugin->lock_vm
---
PVE/VZDump.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 1e00c2d..302da0c 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -853,6 +853,10 @@ sub exec_backup_task {
# lock VM (prevent config changes)
$plugin->lock_vm ($vmid);
---
PVE/VZDump.pm | 10 ++
1 file changed, 10 insertions(+)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 1e00c2d..ea6577d 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -853,6 +853,16 @@ sub exec_backup_task {
# lock VM (prevent config changes)
$plugin->lock_vm
---
pve-zsync | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/pve-zsync b/pve-zsync
index 4491d1a..b0d5e30 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -474,11 +474,11 @@ sub init {
run_cmd(['ssh-copy-id', '-i', '/root/.ssh/id_rsa.pub', "root\@$ip"]);
}
-
---
PVE/QemuMigrate.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a288627..a25efff 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -230,7 +230,7 @@ sub sync_disks {
return if !$volid;
-
---
pve-zsync | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/pve-zsync b/pve-zsync
index 212ada9..cc566b4 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -763,9 +763,17 @@ sub parse_disks {
my $disk = undef;
my $stor = undef;
- if($line =~
---
pve-zsync | 2 ++
1 file changed, 2 insertions(+)
diff --git a/pve-zsync b/pve-zsync
index 212ada9..932c299 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -766,6 +766,8 @@ sub parse_disks {
if($line =~ m/^(?:((?:virtio|ide|scsi|sata|mp)\d+)|rootfs):
([^:]+:)([A-Za-z0-9\-]+),(.*)$/) {
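The disk-matching pattern in the hunk above can be sketched as a standalone test. This is a minimal sketch: the regex is adapted from the patch hunk, and the exact whitespace handling in pve-zsync is an assumption.

```python
import re

# Regex adapted from the pve-zsync hunk above: match config lines like
# "virtio0: local-zfs:vm-100-disk-1,size=8G" or "rootfs: ..." and capture
# the bus/slot key, the storage id, the volume name and the option tail.
# The optional space after the colon is an assumption.
DISK_RE = re.compile(
    r'^(?:((?:virtio|ide|scsi|sata|mp)\d+)|rootfs): ?'
    r'([^:]+:)([A-Za-z0-9\-]+),(.*)$'
)

m = DISK_RE.match('virtio0: local-zfs:vm-100-disk-1,size=8G')
# m.groups() -> ('virtio0', 'local-zfs:', 'vm-100-disk-1', 'size=8G')
```

For a `rootfs:` line the first capture group is `None`, which is how a caller can tell a mount point apart from a bus/slot disk.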
---
src/PVE/API2/LXC.pm | 21 +-
src/PVE/LXC.pm | 62 +
2 files changed, 72 insertions(+), 11 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 95932a9..0bcadc3 100644
--- a/src/PVE/API2/LXC.pm
+++
Now it is possible to move the volume to another storage.
This works only when the CT is off, to keep the volume consistent.
---
src/PVE/API2/LXC.pm | 116
src/PVE/CLI/pct.pm | 1 +
2 files changed, 117 insertions(+)
diff --git
---
src/PVE/API2/LXC.pm | 6 --
1 file changed, 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 71cf21d..6fb0a62 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1041,12 +1041,6 @@ __PACKAGE__->register_method({
"you clone a
If we make a linked clone, the CT must be a template, so it is not allowed to run.
If we make a full clone, it is safer to have the CT offline.
---
src/PVE/API2/LXC.pm | 11 +++
src/PVE/LXC.pm | 4 ++--
2 files changed, 5 insertions(+), 10 deletions(-)
diff --git
With this patch it is possible to make a full clone from a running container,
if the underlying storage provides snapshots.
---
src/PVE/API2/LXC.pm | 42 +-
1 file changed, 41 insertions(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC.pm
Hello,
I tested it; moving a KVM disk from LVMThin to LVMThin works well. The new disk
has the same 80% usage as the source.
On 06/16/2016 09:13 AM, Dennis Busch wrote:
> Good morning!
>
> We're right now holding a training in Verona. It is the first one with
> LVMthin. In our hands-on parts we recognized that
Add a parameter array to foreach_volid to use it in the functions.
Correct typos.
---
PVE/QemuMigrate.pm | 12 +---
PVE/QemuServer.pm | 4 ++--
2 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index f0734cb..3e90a46 100644
---
This is necessary to ensure the process finishes properly.
---
PVE/Storage.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index bb35b32..011c4f3 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -569,6 +569,7 @@ sub storage_migrate {
if (my
There was no useful information about block devices.
---
PVE/Report.pm | 31 ++-
1 file changed, 30 insertions(+), 1 deletion(-)
diff --git a/PVE/Report.pm b/PVE/Report.pm
index 4d15ef5..bfd0bd7 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -23,6 +23,14 @@ my
Sorry, you are right about online disk move.
This is not optimal.
I will check whether it is a bug or only a special case we have not handled.
On 06/16/2016 10:01 AM, Wolfgang Link wrote:
>
> Hello,
>
> Test it and move KVM disk from LVMThin to LVMThin works well new Disk
>
rootfs needs no mp because it is always mounted at /.
---
src/PVE/API2/LXC.pm | 10 +-
src/PVE/CLI/pct.pm| 9 -
src/PVE/LXC.pm| 25 ++---
src/PVE/LXC/Create.pm | 2 +-
4 files changed, 36 insertions(+), 10 deletions(-)
diff --git a/src/PVE/API2/LXC.pm
---
src/PVE/LXC.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 33fca55..364c761 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2363,6 +2363,8 @@ sub create_disks {
$conf->{$ms} = print_ct_mountpoint($mountpoint, $ms eq
If map is not set, you get a warning about an empty variable without any real
information.
And when you try to start the container, it will not start without an
explanation.
---
src/PVE/LXC.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 860901f..af3b9b7
---
pve-storage-lvm.adoc | 33 +
pve-storage-lvmthin.adoc | 40
2 files changed, 73 insertions(+)
diff --git a/pve-storage-lvm.adoc b/pve-storage-lvm.adoc
index 4046e15..521cb02 100644
--- a/pve-storage-lvm.adoc
+++
I think if somebody is interested in what this command does in detail,
she/he will read the man page.
In any case, "-largest-new=1" makes no sense to a person who has no idea
what is going on anyway.
On 06/29/2016 10:50 AM, Emmanuel Kasper wrote:
>>
>> +Storage Layout
>> +~~
>> +
>> +On a default
---
PVE/QemuServer.pm | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index d10e1e5..88e288c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5476,7 +5476,11 @@ sub restore_vma_archive {
"unable to read
This function lists all the templates on a specified storage.
It also gives the size of each template.
---
PVE/CLI/pveam.pm | 65
1 file changed, 65 insertions(+)
diff --git a/PVE/CLI/pveam.pm b/PVE/CLI/pveam.pm
index e90a7d7..b6adfc4
Now it is possible to erase templates with pveam.
---
PVE/CLI/pveam.pm | 45 +
1 file changed, 45 insertions(+)
diff --git a/PVE/CLI/pveam.pm b/PVE/CLI/pveam.pm
index b6adfc4..0f907cf 100644
--- a/PVE/CLI/pveam.pm
+++ b/PVE/CLI/pveam.pm
@@ -14,6 +14,7
---
PVE/CLI/Makefile | 2 +-
PVE/CLI/pveam.pm | 88
bin/Makefile | 7 ++---
bin/pveam| 21 ++
4 files changed, 95 insertions(+), 23 deletions(-)
create mode 100644 PVE/CLI/pveam.pm
diff --git a/PVE/CLI/Makefile
With this function you can download templates from the repositories.
---
PVE/API2/Nodes.pm | 26 --
PVE/CLI/pveam.pm | 2 ++
2 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index aa1fa0b..434b936 100644
---
---
PVE/CLI/Makefile | 2 +-
PVE/CLI/pveam.pm | 88
bin/Makefile | 7 ++---
bin/pveam| 21 ++
4 files changed, 95 insertions(+), 23 deletions(-)
create mode 100644 PVE/CLI/pveam.pm
diff --git a/PVE/CLI/Makefile
Gives the possibility to install Proxmox VE on NVMe SSDs.
---
proxinstall | 2 ++
1 file changed, 2 insertions(+)
diff --git a/proxinstall b/proxinstall
index eaa689b..3469e64 100755
--- a/proxinstall
+++ b/proxinstall
@@ -540,6 +540,8 @@ sub get_partition_dev {
return "${dev}p$partnum";
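The partition naming rule behind this hunk can be sketched as follows. This is only a sketch of the general kernel convention; the exact device checks in proxinstall are not shown in the snippet above.

```python
def get_partition_dev(dev, partnum):
    # Kernel block device names that end in a digit (nvme0n1, mmcblk0)
    # insert a 'p' separator before the partition number; plain disk
    # names (sda) do not. The digit heuristic is an assumption here.
    if dev[-1].isdigit():
        return "{}p{}".format(dev, partnum)
    return "{}{}".format(dev, partnum)
```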
It is a waste of disk space to allocate a swap partition larger than 12 GB.
---
proxinstall | 1 +
1 file changed, 1 insertion(+)
diff --git a/proxinstall b/proxinstall
index 3469e64..9eb48d1 100755
--- a/proxinstall
+++ b/proxinstall
@@ -865,6 +865,7 @@ sub compute_swapsize {
my $ss = int
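The capping idea can be sketched like this. The cap value comes from the commit message above; the actual sizing formula in proxinstall's compute_swapsize is truncated in the hunk, so the RAM-following part is an assumption.

```python
def compute_swapsize(ram_gb, maxswap_gb=12):
    # Sketch only: follow the RAM size but cap the swap partition,
    # since allocating more than ~12 GB of swap mostly wastes disk.
    return int(min(ram_gb, maxswap_gb))
```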
---
bin/pveupdate | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/bin/pveupdate b/bin/pveupdate
index ee75190..3a00335 100755
--- a/bin/pveupdate
+++ b/bin/pveupdate
@@ -15,7 +15,7 @@ use PVE::RPCEnvironment;
use PVE::API2::Subscription;
use PVE::API2::APT;
-initlog
---
PVE/APLInfo.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/APLInfo.pm b/PVE/APLInfo.pm
index a18cc1f..1a50508 100644
--- a/PVE/APLInfo.pm
+++ b/PVE/APLInfo.pm
@@ -246,7 +246,7 @@ sub get_apl_sources {
my $urls = [];
push @$urls,
HTTP and HTTPS proxies are supported for http and https sites.
Now the server certificate is also verified.
---
PVE/API2/Subscription.pm | 21 -
PVE/APLInfo.pm | 15 +++
2 files changed, 15 insertions(+), 21 deletions(-)
diff --git a/PVE/API2/Subscription.pm
---
PVE/APLInfo.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/APLInfo.pm b/PVE/APLInfo.pm
index ec49088..f4ad617 100644
--- a/PVE/APLInfo.pm
+++ b/PVE/APLInfo.pm
@@ -265,7 +265,7 @@ sub get_apl_sources {
my $urls = [];
push @$urls,
This patch sets up IO::Socket::SSL so that all proxy (transparent, https and
http) and non-proxy settings will work.
Now the server certificate is also verified.
---
PVE/API2/Subscription.pm | 22 +++---
PVE/APLInfo.pm | 35 ---
2 files changed,
To prevent an error at VM start when we pass through a hard drive from the host
to the VM.
---
PVE/QemuServer.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index d72ed6d..70a03e0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1285,6 +1285,7 @@
This is useful on large ZFS pools because they take longer to respond.
---
PVE/Storage/ZFSPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/ZFSPlugin.pm b/PVE/Storage/ZFSPlugin.pm
index 5074ba4..d6339ce 100644
--- a/PVE/Storage/ZFSPlugin.pm
+++
We can raise the timeout because it does not matter if a worker process needs
longer.
---
src/PVE/Tools.pm | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 9f08aa6..c9343e5 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@
This patch raises the timeout of tasks processed by a worker.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
---
PVE/QemuServer.pm | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1139438..e97fcd1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6120,6 +6120,19 @@ sub do_snapshots_with_qemu {
return undef;
}
To prevent Net::SSL being loaded at one time and IO::Socket::SSL at another,
ensure that we always use the same socket class.
We load Net::SSL in AccessControl.pm when pveupdate is called,
but when pveam update is called this module is not loaded and so the default
(IO::Socket::SSL) is used.
---
If we do not close it, there is a chance that the tunnel stays open and the
next migration will not work.
---
PVE/QemuMigrate.pm | 24
1 file changed, 16 insertions(+), 8 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 7ae3880..5da62eb 100644
---
---
PVE/Storage.pm | 20
1 file changed, 20 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 140f8ae..af3facd 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1270,4 +1270,24 @@ sub complete_volume {
return $res;
}
+sub is_image_on_zfs {
+my
The backported patch ca369d51b3e1649be4a72addd6d6a168cfb3f537 from the
kernel.org repo has a bug in it.
The fix was made in commit d0eb20a863ba7dc1d3f4b841639671f134560be2.
---
iSCSI-block-sd-Fix-device-imposed-transfer-length-limits.patch | 2 +-
1 file changed, 1 insertion(+), 1
---
PVE/Storage.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 140f8ae..a248773 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -512,7 +512,7 @@ sub storage_migrate {
my $snap = "zfs snapshot $zfspath\@__migration__";
---
test/Makefile |6 +
test/run_test_zfspoolplugin.pl | 2490
2 files changed, 2496 insertions(+)
create mode 100644 test/Makefile
create mode 100755 test/run_test_zfspoolplugin.pl
diff --git a/test/Makefile b/test/Makefile
new
There is no need to cancel the operation if the RAM cannot be removed.
The user will see that it is pending.
---
PVE/API2/Qemu.pm | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0d33f6c..96829c8 100644
--- a/PVE/API2/Qemu.pm
+++
I will check, but AFAIK we have a check that prevents this.
> Dietmar Maurer wrote on 17 March 2016 at 07:10:
>
>
> applied.
>
> But I wonder how that whole storage_migrate() works with 'cloned' images?
No, the plugin also returns the parent vmid and base volume when a linked clone
is used.
---
PVE/Storage.pm | 2 +-
PVE/Storage/ZFSPoolPlugin.pm | 17 +++--
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index
---
PVE/Storage.pm | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index a248773..e7ff5a0 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -510,18 +510,19 @@ sub storage_migrate {
my $zfspath =
This patch reconfigures the rsync parameters, so the filesystem keeps all its
settings and the copy works recursively.
---
PVE/Storage.pm | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index e7ff5a0..415301a 100755
--- a/PVE/Storage.pm
+++
Now we check before we migrate whether a volume is a linked clone.
If it is, we also check whether it is on a shared storage, because otherwise it
would not be possible to migrate.
---
PVE/QemuMigrate.pm | 27 +++
1 file changed, 27 insertions(+)
diff --git
If a VM fails to start, check whether the VM disk is on ZFS (the filesystem) and
"No cache" or "Default" is set as the cache mode.
---
PVE/QemuServer.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 17b43d2..f6cde2c 100644
--- a/PVE/QemuServer.pm
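The hint described above can be sketched as a small check. The rationale and the message wording are assumptions; the patch hunk itself is truncated. The background is that ZFS as a local filesystem historically did not support O_DIRECT, which the 'none' (and default) cache mode relies on.

```python
def zfs_cache_hint(disk_on_zfs, cache_mode):
    # Sketch of the start-failure hint: if the disk lives on a ZFS
    # filesystem and the cache mode is 'none' (or unset, i.e. default),
    # return a hint string; otherwise no hint is needed.
    if disk_on_zfs and cache_mode in (None, 'none'):
        return ("VM disk is on ZFS and cache is 'none'/default; "
                "try cache=writeback or cache=writethrough")
    return None
```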
---
PVE/Storage.pm | 20
1 file changed, 20 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 140f8ae..044e866 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1270,4 +1270,24 @@ sub complete_volume {
return $res;
}
+sub is_image_on_zfs {
+my
syscmd uses run_command with noout, which returns only the exit code.
> Dietmar Maurer wrote on 2 March 2016 at 17:11:
>
>
> comments inline:
>
> > diff --git a/proxinstall b/proxinstall
> > index ec15477..7a67623 100755
> > --- a/proxinstall
> > +++ b/proxinstall
> >
This is necessary if the volume group "pve" exists, say from a previous
installation.
But without printing the reason, no user will understand why this happens.
---
proxinstall | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/proxinstall b/proxinstall
index
This is necessary if the volume group "pve" exists, say from a previous
installation.
But without printing the reason, no user will understand why this happens.
---
proxinstall | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/proxinstall b/proxinstall
index
---
PVE/Storage/GlusterfsPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index 315c5a6..951db50 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
+++ b/PVE/Storage/GlusterfsPlugin.pm
@@ -98,7 +98,7 @@ sub type
---
www/manager/storage/GlusterFsEdit.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager/storage/GlusterFsEdit.js
b/www/manager/storage/GlusterFsEdit.js
index f0d3b4f..0b43b94 100644
--- a/www/manager/storage/GlusterFsEdit.js
+++ b/www/manager/storage/GlusterFsEdit.js
@@ -130,6
---
www/manager/storage/GlusterFsEdit.js | 8
1 file changed, 8 deletions(-)
diff --git a/www/manager/storage/GlusterFsEdit.js
b/www/manager/storage/GlusterFsEdit.js
index f0d3b4f..fd8f569 100644
--- a/www/manager/storage/GlusterFsEdit.js
+++ b/www/manager/storage/GlusterFsEdit.js
@@
GlusterFS is slower if you use the client and not the QEMU driver.
So to guarantee that GlusterFS performs well, allow only KVM/QEMU disk images.
Also remove the vmdk format because it makes no sense; it is much slower than
qcow2.
---
PVE/Storage/GlusterfsPlugin.pm | 4 ++--
1 file changed, 2
Because there are situations where you can slow down the KVM/QEMU disk image.
On 03/01/2016 02:22 PM, Stoyan Marinov wrote:
> May I ask why? What's wrong with keeping your ISO images and/or backups on
> gluster?
>
> On Mar 1, 2016, at 3:20 PM, Wolfgang Link <w.l...@pro
---
test/run_test_zfspoolplugin.pl | 181 -
1 file changed, 180 insertions(+), 1 deletion(-)
diff --git a/test/run_test_zfspoolplugin.pl b/test/run_test_zfspoolplugin.pl
index fc68195..2512db9 100755
--- a/test/run_test_zfspoolplugin.pl
+++
It is possible to select which test should run.
synopsis: run_test_zfspoolplugin.pl [ |
]
---
test/run_test_zfspoolplugin.pl | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/test/run_test_zfspoolplugin.pl b/test/run_test_zfspoolplugin.pl
index
There is no need to remove the whole storage if one property is not valid.
Just ignore the property.
---
PVE/Storage/Plugin.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index ccb3280..6f29838 100644
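The behaviour described above can be sketched as follows. The function and parameter names are hypothetical; the actual Plugin.pm parser is not shown in the snippet.

```python
import logging

def parse_storage_section(props, known_props):
    # Sketch: instead of discarding the whole storage definition when one
    # property is invalid, warn about the offending property and skip it,
    # keeping the rest of the storage configuration usable.
    parsed = {}
    for key, value in props.items():
        if key not in known_props:
            logging.warning("ignoring invalid property '%s'", key)
            continue
        parsed[key] = value
    return parsed
```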
---
This patch series adds Ceph Infernalis and Jewel support.
Ceph changed the user name from root to ceph.
And for startup, systemd is used instead of sysvinit.
---
PVE/API2/Ceph.pm | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 58e5b35..2206197 100644
--- a/PVE/API2/Ceph.pm
+++
---
PVE/CephTools.pm | 17 +
1 file changed, 17 insertions(+)
diff --git a/PVE/CephTools.pm b/PVE/CephTools.pm
index c7749bb..4d551da 100644
--- a/PVE/CephTools.pm
+++ b/PVE/CephTools.pm
@@ -353,4 +353,21 @@ sub list_disks {
return $disklist;
}
+sub systemd_managed {
+
+
---
PVE/CLI/pveceph.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index ce991c1..64dc4ea 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -74,7 +74,7 @@ __PACKAGE__->register_method ({
version => {
---
PVE/CLI/pveceph.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index ce991c1..2783f24 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -73,8 +73,8 @@ __PACKAGE__->register_method ({
properties => {
Ceph changed the user name from root to ceph.
And for startup, systemd is used instead of sysvinit.
---
PVE/API2/Ceph.pm | 16 ++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 58e5b35..abf5255 100644
--- a/PVE/API2/Ceph.pm
+++
---
PVE/CephTools.pm | 17 +
1 file changed, 17 insertions(+)
diff --git a/PVE/CephTools.pm b/PVE/CephTools.pm
index c7749bb..37000d6 100644
--- a/PVE/CephTools.pm
+++ b/PVE/CephTools.pm
@@ -353,4 +353,21 @@ sub list_disks {
return $disklist;
}
+sub systemd_managed {
+
+
---
Makefile | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 67b30dd..07cd162 100644
--- a/Makefile
+++ b/Makefile
@@ -24,7 +24,8 @@ download:
#git clone git://git.qemu-project.org/qemu.git -b stable-2.4 ${KVMDIR}
git clone
This patch allows running a command as an alternative user.
At the moment we run all commands as root.
---
src/PVE/Tools.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 8c7f373..5a69daa 100644
--- a/src/PVE/Tools.pm
+++ b/src/PVE/Tools.pm
@@
I discussed this with Wolfgang and we will change some things:
set the home dir,
check whether setuid and setgid worked,
check the user.
So I will send a patch v2.
We can set the ID back at the end of the function.
I think this makes sense.
On 05/18/2016 05:00 PM, Dietmar Maurer wrote:
> And when do you change
Maybe we can fork and set the ID.
> Dietmar Maurer wrote on 18 May 2016 at 18:11:
>
>
> > I discuss with Wolfgang and we will change some things.
> > Set Home Dir.
> > Check if setuid and setguid worked.
> > Check User.
> >
> > So I will send a patch V2.
> >
>
It is useful to see which Ceph version is installed on the PVE host.
---
PVE/API2/APT.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/APT.pm b/PVE/API2/APT.pm
index ca077b7..34ff623 100644
--- a/PVE/API2/APT.pm
+++ b/PVE/API2/APT.pm
@@ -532,7 +532,7 @@
---
src/PVE/API2/LXC.pm | 21 +--
src/PVE/LXC.pm | 58 +
2 files changed, 68 insertions(+), 11 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 95932a9..0bcadc3 100644
--- a/src/PVE/API2/LXC.pm
+++
Now it is possible to move the volume to an other storage.
This works only when the CT is off, to keep the volume consistent.
---
src/PVE/API2/LXC.pm | 116
src/PVE/CLI/pct.pm | 1 +
2 files changed, 117 insertions(+)
diff --git
With this new function it is possible to copy a rootfs or a mount point,
if the mp is a volume.
We also use this function to create full clones of CTs,
and now we can move volumes to another storage too.
If we make a linked clone, the CT must be a template, so it is not allowed to run.
If we make a full clone, it is safer to have the CT offline.
---
src/PVE/API2/LXC.pm | 9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index
---
src/PVE/API2/LXC.pm | 6 --
1 file changed, 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 71cf21d..6fb0a62 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1041,12 +1041,6 @@ __PACKAGE__->register_method({
"you clone a
Yes, 2012 R2, but to keep it shorter: R2.
> Dietmar Maurer <diet...@proxmox.com> wrote on 19 April 2016 at 17:01:
>
>
>
>
> > On April 19, 2016 at 2:38 PM Wolfgang Link <w.l...@proxmox.com> wrote:
> >
> >
> > So user now we
If we make a linked clone, the CT must be a template, so it is not allowed to
run.
If we make a full clone, it is safer to have the CT offline.
---
src/PVE/API2/LXC.pm | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index
---
src/PVE/API2/LXC.pm | 6 --
1 file changed, 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 976e25d..5c0ae99 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1035,12 +1035,6 @@ __PACKAGE__->register_method({
"you clone a
Now it is possible to move the volume to another storage.
This works only when the CT is off, to keep the volume consistent.
---
src/PVE/API2/LXC.pm | 116
src/PVE/CLI/pct.pm | 1 +
2 files changed, 117 insertions(+)
diff --git
With this new function it is possible to copy a rootfs or a mount point,
if the mp is a volume.
We also use this function to create full clones of CTs,
and now we can move volumes to another storage too.
On 04/20/2016 11:07 AM, Thomas Lamprecht wrote:
>
>
> On 04/20/2016 08:06 AM, Wolfgang Link wrote:
>> If we make a linked clone the CT must be a template, so it is not allowed to
>> run.
>> If we make a full clone, it is safer to have the CT offline.
>>
Clone CT is now fully implemented. There is no need for this anymore.
---
src/PVE/API2/LXC.pm | 6 --
1 file changed, 6 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index e64ee17..68f0a59 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1035,12 +1035,6 @@