It's a mixture of both. But my hw is still inconsistent and an odd number. Will
get my final hw on Friday or Monday.
Stefan
On 04.12.2012 at 18:02, Alexandre DERUMIER wrote:
>>> Right now 23.000 iops with random 4k writes.
> Is it the limit per VM or for the total cluster?
----- Original message -----
From: "Stefan Priebe - Profihost AG"
To: "Alexandre DERUMIER"
Cc: "Dietmar Maurer", pve-devel@pve.proxmox.com
Sent: Tuesday, December 4, 2012, 16:26:24
Subject: Re: [pve-de
Hi,
On 04.12.2012 at 15:59, Alexandre DERUMIER wrote:
>> So the performance problems are already solved?
I can't get more than 6000 iops per KVM guest with my current setup. (I don't
know where the bottleneck is; maybe network latency or CPU on my cluster.)
But Stefan has achieved a lot more with his SSD cluster.
Stefan, what are your last bench results?
> Subject: Re: [pve-devel] ceph news: syncfs kernel support for next stable
> release
>
> > So maybe it's time to add ceph server support to proxmox ?
>
> So the performance problems are already solved?
AFAIK Stefan reported that syncfs support does not solve the problems at all?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Hi,
just a little news:
the next Ceph stable release (coming in 1 or 2 weeks) has support for syncfs
from the kernel.
So there is no more need for a glibc with syncfs support.
So maybe it's time to add Ceph server support to Proxmox?
Regards,
Alexandre
Signed-off-by: Alexandre Derumier
---
PVE/Storage/NexentaPlugin.pm | 13 +
1 file changed, 13 insertions(+)
diff --git a/PVE/Storage/NexentaPlugin.pm b/PVE/Storage/NexentaPlugin.pm
index 5ca385d..8c09058 100644
--- a/PVE/Storage/NexentaPlugin.pm
+++ b/PVE/Storage/NexentaPlugin.pm
Signed-off-by: Alexandre Derumier
---
PVE/Storage/ISCSIPlugin.pm |5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index 9e53167..3af01e3 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/ISCSIPlugin.pm
@@ -388,6 +388,11 @@
Signed-off-by: Alexandre Derumier
---
PVE/Storage/ISCSIDirectPlugin.pm |5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index 2378163..0018ed9 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirec
Signed-off-by: Alexandre Derumier
---
PVE/Storage/LVMPlugin.pm |7 +++
1 file changed, 7 insertions(+)
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 712ca1d..8ebceb0 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -450,4 +450,11 @@ sub vo
Signed-off-by: Alexandre Derumier
---
PVE/Storage/SheepdogPlugin.pm | 10 ++
1 file changed, 10 insertions(+)
diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index ff00bc8..93f3e8d 100644
--- a/PVE/Storage/SheepdogPlugin.pm
+++ b/PVE/Storage/SheepdogPlugin.p
Signed-off-by: Alexandre Derumier
---
PVE/Storage/RBDPlugin.pm | 12
1 file changed, 12 insertions(+)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 0f4a66f..2f26ed3 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -389,6 +389,18 @@
Signed-off-by: Alexandre Derumier
---
PVE/Storage/Plugin.pm | 45 -
1 file changed, 28 insertions(+), 17 deletions(-)
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index a46581b..1427fa1 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Stor
Signed-off-by: Alexandre Derumier
---
PVE/Storage/NexentaPlugin.pm | 30 +++---
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/PVE/Storage/NexentaPlugin.pm b/PVE/Storage/NexentaPlugin.pm
index b164e0d..5ca385d 100644
--- a/PVE/Storage/NexentaPlugin.pm
+
Signed-off-by: Alexandre Derumier
---
PVE/Storage/SheepdogPlugin.pm | 34 +-
1 file changed, 21 insertions(+), 13 deletions(-)
diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index bd59ce0..ff00bc8 100644
--- a/PVE/Storage/SheepdogPlu
Signed-off-by: Alexandre Derumier
---
PVE/Storage/Plugin.pm | 17 +
1 file changed, 17 insertions(+)
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 1427fa1..7521e3f 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -568,6 +568,23 @@ sub volume
Signed-off-by: Alexandre Derumier
---
PVE/Storage.pm | 16
1 file changed, 16 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index c8a8c78..daa514c 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -183,6 +183,22 @@ sub volume_snapshot_delete {
}
}
+sub
Signed-off-by: Alexandre Derumier
---
PVE/Storage/RBDPlugin.pm | 31 +++
1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index eaff83c..0f4a66f 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storag
we can't protect an iSCSI device
Signed-off-by: Alexandre Derumier
---
PVE/Storage/ISCSIPlugin.pm |6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index 173ca1d..9e53167 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/
we can't protect an iSCSI volume
Signed-off-by: Alexandre Derumier
---
PVE/Storage/ISCSIDirectPlugin.pm |6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index e2490e8..2378163 100644
--- a/PVE/Storage/ISCSIDirectPlug
we can't protect an LVM volume.
Signed-off-by: Alexandre Derumier
---
PVE/Storage/LVMPlugin.pm | 12
1 file changed, 12 insertions(+)
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 9199db1..712ca1d 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVM
return undef, as Nexenta has an implicit protection system when creating clones
Signed-off-by: Alexandre Derumier
---
PVE/Storage/NexentaPlugin.pm |6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/Storage/NexentaPlugin.pm b/PVE/Storage/NexentaPlugin.pm
index 386656f..b164e0d 100644
We use the rbd protect command to protect a snapshot.
This is mandatory for cloning a snapshot.
The rbd volume needs to be at format v2.
Signed-off-by: Alexandre Derumier
---
PVE/Storage/RBDPlugin.pm | 42 +-
1 file changed, 37 insertions(+), 5 deletions(-)
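A dry-run sketch of the snapshot/clone flow described above (the pool name "rbd" and the image names are made up for illustration; the real calls live in RBDPlugin.pm). A format-2 image is required, since format-1 images cannot be cloned:

```shell
# Dry-run: print the rbd commands for turning an image into a
# cloneable template (all names here are illustrative only).
rbd_template_cmds() {
    img="$1"; snap="$2"; clone="$3"
    echo "rbd create --image-format 2 --size 32768 rbd/$img"
    echo "rbd snap create rbd/$img@$snap"
    echo "rbd snap protect rbd/$img@$snap"   # mandatory before cloning
    echo "rbd clone rbd/$img@$snap rbd/$clone"
}
rbd_template_cmds vm-100-disk-1 base vm-101-disk-1
```

Note that a protected snapshot cannot be deleted again until every clone is removed (or flattened) and the snapshot is unprotected with `rbd snap unprotect`.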
return undef, as sheepdog doesn't need protection
Signed-off-by: Alexandre Derumier
---
PVE/Storage/SheepdogPlugin.pm |6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index 3e5839e..bd59ce0 100644
--- a/PVE/Storage/Sheepdo
(and also fix backing file regex parsing)
For files, we protect the volume file with chattr,
so we can only read it but can't delete or move it.
Signed-off-by: Alexandre Derumier
---
PVE/Storage/Plugin.pm | 18 +++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git
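The chattr-based protection can be tried by hand; a minimal sketch (assumes root privileges and a filesystem such as ext4 that supports the immutable attribute):

```shell
# Make a file immutable: it stays readable, but cannot be deleted,
# renamed, or written to until the +i flag is cleared again.
f=$(mktemp)
echo "template data" > "$f"
if chattr +i "$f" 2>/dev/null; then
    rm "$f" 2>/dev/null && echo "unexpected: delete succeeded"
    chattr -i "$f"   # clear the flag so cleanup can succeed
fi
rm -f "$f"
```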
Signed-off-by: Alexandre Derumier
---
PVE/Storage.pm | 15 +++
1 file changed, 15 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b13df21..c8a8c78 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -183,6 +183,21 @@ sub volume_snapshot_delete {
}
}
+sub v
changelog since V2:
- use volume_protect instead of volume_unprotect
- use read_only param in volume_protect
- add lvm volume_protect
qm create --clonefrom vmid --snapname snap
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 89 --
1 file changed, 86 insertions(+), 3 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index d2dba3c..548987f 100644
--- a/
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm |2 ++
PVE/QemuServer.pm |4 ++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 548987f..d5ce5c3 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -447,6 +447,8 @@ __P
if files (raw, qcow2) are a template, we forbid vm_start.
Note: the read-only protection already does the job, but we need a clear
message for users.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm |2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.
Signed-off-by: Alexandre Derumier
---
PVE/QemuMigrate.pm |6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0711681..eae28a6 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -245,6 +245,12 @@ sub sync_disks {
die "c
template_create:
we need to check if we can create a template from a snapshot or from current.
qcow2, raw: template from current
rbd, sheepdog, nexenta: template from snapshot
Then we lock the volume if the storage needs it for cloning (files or rbd),
then we add template:1 to the config (current or snapshot).
changelog since v2:
- use protect instead of unprotect
- use read_only param in protect
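The per-storage-type decision above can be sketched as a tiny helper (the mapping is taken from this mail; the function name and the fallback value are made up):

```shell
# Which object becomes the template, per storage type (from the mail).
template_source() {
    case "$1" in
        qcow2|raw)            echo "current"  ;;  # file-based: template from current
        rbd|sheepdog|nexenta) echo "snapshot" ;;  # cloneable storages: from snapshot
        *)                    echo "unknown"  ;;  # not listed in the mail
    esac
}
template_source rbd
```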
if a qcow2 current image is a template, we can't roll back to a previous
snapshot. (Note that the file read-only protection already does the job, but
we need a clear error message for the user.)
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm |2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/Q
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 21 +
1 file changed, 21 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 04fb3be..b6b5d4e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2867,6 +2867,8 @@ sub qemu_volume_snapshot
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm |2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b6b5d4e..3ca8494 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3224,6 +3224,8 @@ sub vm_sendkey {
sub vm_destroy {
my ($
> What exactly is the problem with snapshots in a cluster if snapshot volumes
> are active on each node?
Snapshot state is kept in RAM (at least parts of it).
Hi Dietmar,
I have done some tests using LVM snapshots as clone images:
parent vm: /dev/volume1/vm-122-disk-1
clone1: /dev/volume1/vm-888-disk-1 (# lvcreate -L 32G -s -n vm-888-disk-1 /dev/volume1/vm-122-disk-1)
clone2: /dev/volume1/vm-999-disk-1 (# lvcreate -L 32G -s -n vm-999-disk-1 /dev/v