this allows us to mock the sub later,
which we need for testing
Signed-off-by: Dominik Csapak
---
PVE/Diskmanage.pm | 34 +-
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index dd2591c..ad1a896 100644
--- a/P
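The mocking the commit message has in mind can be illustrated outside Perl; a minimal Python sketch of the same idea (the sub name `get_sysdir_info` is hypothetical — in the actual Perl tests one would redefine the package sub instead):

```python
class Diskmanage:
    # stands in for PVE::Diskmanage; in production this would read
    # /sys or shell out, which is exactly what tests must avoid
    @staticmethod
    def get_sysdir_info(devname):
        raise RuntimeError("not available in a test environment")

# the test replaces the sub with a canned result:
Diskmanage.get_sysdir_info = lambda devname: {"rotational": 1, "size": 8192}

assert Diskmanage.get_sysdir_info("sda") == {"rotational": 1, "size": 8192}
```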
because it logically belongs there;
this also makes testing easier
Signed-off-by: Dominik Csapak
---
PVE/Diskmanage.pm | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 8382045..dd2591c 100644
--- a/PVE/Diskmanage.pm
+++ b/P
this patch series prepares the disklist module
for my upcoming regression tests.
the only behavioural change is the udevadm parameter (patch 1/5),
from
udevadm -n device_name
to
udevadm -p /path/to/blockdev
which should also fix a bug (see the commit message for details)
Dominik Csapak (5):
u
since we iterate over the entries in /sys/block,
it makes sense to use this path
this should fix #1099,
because udevadm does not accept
-n cciss!c0d0 (it only looks in /dev for this name),
but does accept
-p /sys/block/cciss!c0d0
Signed-off-by: Dominik Csapak
---
PVE/Diskmanage.pm | 4 ++--
1 file cha
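The name-translation problem behind #1099 can be sketched in a few lines of Python; this is an illustration, not the actual Diskmanage.pm code, and `udevadm info` is assumed to be the command the mails abbreviate as `udevadm`:

```python
def udevadm_args(sysdir_entry):
    """Entries under /sys/block may contain '!' (e.g. 'cciss!c0d0'),
    while the matching /dev node uses '/' instead ('cciss/c0d0'), so
    'udevadm info -n cciss!c0d0' finds nothing. Querying by sysfs
    path sidesteps the name translation entirely."""
    return ["udevadm", "info", "-p", "/sys/block/" + sysdir_entry]

assert udevadm_args("cciss!c0d0") == ["udevadm", "info", "-p", "/sys/block/cciss!c0d0"]
```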
because if the file does not exist,
we get a Perl error about comparing an uninitialized
value
Signed-off-by: Dominik Csapak
---
PVE/Diskmanage.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index c8706b7..0bd0556 100644
--- a/PVE/Dis
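The guard described above can be sketched as follows; a hedged Python analogue of the Perl fix's idea, not the actual Diskmanage.pm code (the helper name is made up):

```python
import os

def read_sysfs_attr(path, default=None):
    # If the sysfs attribute file is missing, return a default instead
    # of an uninitialized value, so later comparisons are safe.
    if not os.path.exists(path):
        return default
    with open(path) as fh:
        return fh.readline().strip()

# a missing 'rotational' file no longer blows up a numeric comparison:
rotational = read_sysfs_attr("/sys/block/nosuchdev/queue/rotational", default="0")
assert int(rotational) == 0
```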
we want this because the model in /sys/block/<dev>/device/model
is limited to 16 characters,
and since the model is not always present in the udevadm output (nvme),
also read the model file as a fallback
Signed-off-by: Dominik Csapak
---
PVE/Diskmanage.pm | 8 ++--
1 file changed, 6 inserti
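A sketch of that fallback order in Python (hypothetical names; `ID_MODEL` is the udev database property, and `sysfs_model` stands for the contents of the 16-character-limited model file):

```python
def disk_model(udev_props, sysfs_model):
    # Prefer the udev property, which is not limited to 16 characters;
    # fall back to the sysfs model file, which exists even where the
    # udevadm output has no model entry (e.g. nvme).
    model = udev_props.get("ID_MODEL")
    if not model and sysfs_model:
        model = sysfs_model.strip()
    return model

# udev value wins when present; the sysfs file fills the nvme gap:
assert disk_model({"ID_MODEL": "SAMSUNG_MZ7LM240HCGR"}, "SAMSUNG MZ7LM240") == "SAMSUNG_MZ7LM240HCGR"
assert disk_model({}, "INTEL SSDPE2MX45\n") == "INTEL SSDPE2MX45"
```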
Signed-off-by: Dominik Csapak
---
test/disk_tests/cciss/cciss!c0d0/device/model | 1 +
test/disk_tests/cciss/cciss!c0d0/device/vendor| 1 +
test/disk_tests/cciss/cciss!c0d0/queue/rotational | 1 +
test/disk_tests/cciss/cciss!c0d0/size | 1 +
test/disk_tests/cciss/cciss!c0d
this patch series adds regression tests
for the disklist and smart parsing
this needs my previous patch series to work
Dominik Csapak (7):
add disklist test
add hdd and smart regression tests
add ssd and smart regression tests
add nvme regression test
add sas regression tests
add ccis
Signed-off-by: Dominik Csapak
---
test/disk_tests/hdd_smart/disklist| 2 +
test/disk_tests/hdd_smart/disklist_expected.json | 32 +++
test/disk_tests/hdd_smart/sda/device/vendor | 1 +
test/disk_tests/hdd_smart/sda/queue/rotational| 1 +
test/disk_tests/hdd_smart
and add to makefile
Signed-off-by: Dominik Csapak
---
test/Makefile | 5 +-
test/disklist_test.pm | 216 +
test/run_disk_tests.pl | 10 +++
3 files changed, 230 insertions(+), 1 deletion(-)
create mode 100644 test/disklist_test.pm
cr
Signed-off-by: Dominik Csapak
---
test/disk_tests/nvme_smart/disklist| 1 +
test/disk_tests/nvme_smart/disklist_expected.json | 17 +
test/disk_tests/nvme_smart/nvme0_smart | 22 ++
test/disk_tests/nvme_smart/nvme0n1/device/model
Signed-off-by: Dominik Csapak
---
test/disk_tests/ssd_smart/disklist| 5 +
test/disk_tests/ssd_smart/disklist_expected.json | 77 +++
test/disk_tests/ssd_smart/sda/device/vendor | 1 +
test/disk_tests/ssd_smart/sda/queue/rotational| 1 +
test/disk_tests/ssd_s
Signed-off-by: Dominik Csapak
---
test/disk_tests/usages/disklist | 6 ++
test/disk_tests/usages/disklist_expected.json | 97 +++
test/disk_tests/usages/mounts | 2 +
test/disk_tests/usages/partlist | 2 +
test/disk_tests/usag
Signed-off-by: Dominik Csapak
---
test/disk_tests/sas/disklist| 1 +
test/disk_tests/sas/disklist_expected.json | 17
test/disk_tests/sas/sda/device/model| 1 +
test/disk_tests/sas/sda/device/vendor | 1 +
test/disk_tests/sas/sda/queue/rotational
On Thu, Oct 13, 2016 at 12:00:42PM +0200, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier
> ---
> PVE/QemuServer.pm | 7 +--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 05edd7a..ec8df94 100644
> --- a/PVE/Qemu
---
PVE/API2/Qemu.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ad7a0c0..f64a77c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1858,8 +1858,10 @@ __PACKAGE__->register_method({
method => 'POST',
protected
---
Not 100% sure whether this shutdown explanation belongs in the asciidoc source or in the
generated manual.
qm.adoc | 3 +++
1 file changed, 3 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index 3624e2f..5d711f7 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -464,6 +464,9 @@ start after those where the parameter is set
On Mon, Oct 17, 2016 at 10:52:26AM +0200, Emmanuel Kasper wrote:
> ---
> PVE/API2/Qemu.pm | 6 --
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index ad7a0c0..f64a77c 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -1858
On Mon, Oct 17, 2016 at 10:47:56AM +0200, Wolfgang Bumiller wrote:
> On Thu, Oct 13, 2016 at 12:00:42PM +0200, Alexandre Derumier wrote:
> > Signed-off-by: Alexandre Derumier
> > ---
> > PVE/QemuServer.pm | 7 +--
> > 1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/PVE/Q
On 10/17/2016 10:29 AM, Dominik Csapak wrote:
> because if the file does not exist,
> we have an perl error for comparing an uninitialized
> value
> Signed-off-by: Dominik Csapak
> ---
> PVE/Diskmanage.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanag
>>But we actually have 3 states here: kvm binary version (not interesting
>>when hotpluggin), machine version (only interesting when migrating),
>>running qemu version (probably the most important).
>>The machine part (query-machines) would give us the qemu version the
>>VM was originally started
On 10/17/2016 11:05 AM, Fabian Grünbichler wrote:
> On Mon, Oct 17, 2016 at 10:52:26AM +0200, Emmanuel Kasper wrote:
>> ---
>> PVE/API2/Qemu.pm | 6 --
>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index ad7a0c0..f64a77c 100644
>>
On Mon, Oct 17, 2016 at 11:33:44AM +0200, Alexandre DERUMIER wrote:
> >>But we actually have 3 states here: kvm binary version (not interesting
> >>when hotpluggin), machine version (only interesting when migrating),
> >>running qemu version (probably the most important).
>
> >>The machine part (q
>>for iothread, I think query-machines will do the job.
>>We don't care about machine version, only the running qemu binary version is
>>needed.
I mean qmp "query-version" not "query-machines"
- Original message -
From: "aderumier"
To: "Wolfgang Bumiller"
Cc: "pve-devel"
Sent: Monday 17
>>But wouldn't that be query-version? If we migrate from 2.6 to 2.7 then
>>query-machine would give us pc-i440fx-2.6, no?
yes, yes, our mails crossed ;)
- Original message -
From: "Wolfgang Bumiller"
To: "aderumier"
Cc: "pve-devel"
Sent: Monday 17 October 2016 11:56:39
Subject: Re: [pve-devel] [PA
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 3e069ea..186fae1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2975,8 +2975,18 @@ sub config_to_command
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 28 +---
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e3c2550..46d0403 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3776,7 +3776,7 @@ sub q
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 30 --
1 file changed, 28 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 186fae1..e3c2550 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3761,6 +3761,8 @@ sub
changelog: remove kvm_version() call for hotplug|unplug.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 22 ++
1 file changed, 22 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f4bb4dd..3e069ea 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1674,6 +1674,28 @@ sub print_netdev_full {
changelog:
check the current running qemu version to enable them.
(We can use older machine versions; only the current qemu version is needed)
changelog : check running qemu binary version
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e4c385f..f42b733 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@
changelog: check current running qemu process
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6376323..e4c385f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer
return current running qemu process version
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 46d0403..6376323 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5961,6 +5961,13 @
On Mon, Oct 17, 2016 at 11:38:15AM +0200, Emmanuel Kasper wrote:
> On 10/17/2016 11:05 AM, Fabian Grünbichler wrote:
> > On Mon, Oct 17, 2016 at 10:52:26AM +0200, Emmanuel Kasper wrote:
> >> ---
> >> PVE/API2/Qemu.pm | 6 --
> >> 1 file changed, 4 insertions(+), 2 deletions(-)
> >>
> >> diff -
So, when testing this on a linux guest (4.7.6, Arch) I still get errors
(and see kernel traces in the guest's dmesg) with the iothread flag...
Removing "works" (it's removed, but the qemu command still errors and
you get an error in the pve GUI) - re-adding doesn't work (always shows
me kernel stac
---
pve-bibliography.adoc | 5 +
1 file changed, 5 insertions(+)
diff --git a/pve-bibliography.adoc b/pve-bibliography.adoc
index d721c3d..e1fc280 100644
--- a/pve-bibliography.adoc
+++ b/pve-bibliography.adoc
@@ -62,6 +62,11 @@ endif::manvolnum[]
Packt Publishing, 2015.
ISBN 978-178398
>>What kind of setup did you test this with?
debian jessie, kernel 3.16.
I have tried removing and re-adding a virtio-scsi disk multiple times, no error.
With qemu 2.6, I had errors when removing and then re-adding the same drive.
I'll do more tests with arch to see if I can reproduce it
- Original message -
D
Ok,
I have done more tests, and it seems that I can trigger the bug.
Sometimes the virtio-scsi controller can't be unplugged after some plug/unplug cycles.
I can't reproduce it 100% of the time; sometimes it's after 3 hotplug/unplug cycles, sometimes after 10.
I have also tried a manual "device_del", and the contro
applied whole series
On Mon, Oct 17, 2016 at 12:20:43PM +0200, Alexandre Derumier wrote:
> return current running qemu process version
>
> Signed-off-by: Alexandre Derumier
> ---
> PVE/QemuServer.pm | 7 +++
> 1 file changed, 7 insertions(+)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 46d0403
applied
On Mon, Oct 17, 2016 at 12:20:45PM +0200, Alexandre Derumier wrote:
> changelog : check running qemu binary version
>
> Signed-off-by: Alexandre Derumier
> ---
> PVE/QemuServer.pm | 7 +--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/QemuServer.pm b/PVE/Qe
On Tue, Oct 11, 2016 at 04:45:19PM +0200, Alexandre Derumier wrote:
> This allow to migrate a local storage (only 1 for now) to a remote node
> storage.
>
> When the target node start, a new volume is created and exposed through qemu
> embedded nbd server.
>
> qemu drive-mirror is done on sourc
>>considering some of the code looks like it's prepared for multiple
>>disks, I wonder if the remote side should send a mapping containing the
>>old + new names?
yes, I think it can prepare output for multiple disks; it'll be easier later.
Maybe simply send multiple lines, one per disk?
> + PVE::
On Mon, Oct 17, 2016 at 03:33:38PM +0200, Alexandre DERUMIER wrote:
> >>considering some of the code looks like it's prepared for multiple
> >>disks, I wonder if the remote side should send a mapping containing the
> >>old + new names?
>
> yes, I think it can prepare output for multiple disks, it'
>>So we'd need a way to switch back, then again the remote side might be
>>dead at this point... we could try though?
Setting the new drive as pending until the whole migration is done, so the user can
use revert?
I think it should be done manually by the user, because the user may not want to lose
the new data
Hi all,
I am currently working on a cluster dashboard,
and wanted to get feedback from you all.
Please be aware that this is a mockup only, with no functionality yet,
so no patches for now.
I will also post it on the forum (maybe tomorrow) to get
additional feedback.
Please discuss :)
ps: for clarif
This book is already in the list. Maybe we can remove the older edition?
> On October 17, 2016 at 1:31 PM Fabian Grünbichler
> wrote:
>
>
> ---
> pve-bibliography.adoc | 5 +
> 1 file changed, 5 insertions(+)
>
> diff --git a/pve-bibliography.adoc b/pve-bibliography.adoc
> index d721c3d..
On 10/17/2016 04:52 PM, Dietmar Maurer wrote:
https://www.pictshare.net/cb2c08d9ca.png
Seems you try to display lists of Nodes/Guests in:
Offline Nodes:
Guest with errors:
IMHO such lists can be quite long, so how do you plan to display
a long lists here?
as it is now, it would simply line
> https://www.pictshare.net/cb2c08d9ca.png
Seems you try to display lists of Nodes/Guests in:
Offline Nodes:
Guest with errors:
IMHO such lists can be quite long, so how do you plan to display
a long lists here?
> > IMHO such lists can be quite long, so how do you plan to display
> > a long lists here?
> >
>
> as it is now, it would simply linewrap and make the boxes bigger,
> but yes this is a good point, i have to experiment a little with this
>
> imho offline nodes won't be too many i think, and ha e
On 10/17/2016 05:04 PM, Dietmar Maurer wrote:
IMHO such lists can be quite long, so how do you plan to display
a long lists here?
as it is now, it would simply linewrap and make the boxes bigger,
but yes this is a good point, i have to experiment a little with this
imho offline nodes won't
The /cluster/nextid call is not thread safe: when making calls in
(quasi) parallel, the callers may get overlapping VMIDs, and then only
the first one actually writing the config to the pmxcfs "wins" and
may use it.
Use the new 'next_unused_vmid' from the cluster package to improve
this. It not only
The get /cluster/nextid API call is not secured against race conditions and
parallel accesses in general.
Users of the API who create multiple CTs in parallel, or at least in very fast
succession, run into problems here: multiple calls get the same VMID
returned.
Fix this by allowing the /clust
This will be used in cases where the VMID may not be important. For
example, some users do not care which VMID a CT/VM gets; they just
want a CT/VM.
Signed-off-by: Thomas Lamprecht
---
src/PVE/JSONSchema.pm | 16
1 file changed, 16 insertions(+)
diff --git a/src/PVE/JSONSchema.p
Using a dot (.) as the VMID will automatically select an available one;
this can be helpful for mass CT creation, or may simply be
convenient.
Signed-off-by: Thomas Lamprecht
---
src/PVE/API2/LXC.pm | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC.pm b
This can be used to get an unused VMID in a thread-safe way. The
new VMID can also be reserved temporarily (60s for now), so that multiple
calls to the API at the same time, which often first request a VMID
and only later actually reserve it by writing the
VMID.conf file, do not get i
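The reserve-with-expiry idea can be sketched as follows; names are hypothetical, the real implementation keeps this state in the cluster filesystem and must serialize access under a cluster-wide lock, which this Python sketch deliberately elides:

```python
import time

RESERVATION_TIMEOUT = 60  # seconds, matching the 60s mentioned above

_reservations = {}  # vmid -> expiry timestamp (sketch only; the real
                    # state would live in pmxcfs, not process memory)

def next_unused_vmid(used_ids, start=100):
    """Return the lowest VMID that is neither configured nor holding a
    live reservation, and reserve it for RESERVATION_TIMEOUT seconds."""
    now = time.time()
    # drop expired reservations first
    for vmid in [v for v, exp in _reservations.items() if exp <= now]:
        del _reservations[vmid]
    vmid = start
    while vmid in used_ids or vmid in _reservations:
        vmid += 1
    _reservations[vmid] = now + RESERVATION_TIMEOUT
    return vmid

# two quick successive calls no longer return the same ID:
a = next_unused_vmid({100, 101})
b = next_unused_vmid({100, 101})
assert a != b
```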
Using a dot (.) as the VMID will automatically select an available one;
this can be helpful for mass VM creation, or may simply be
convenient.
Signed-off-by: Thomas Lamprecht
---
PVE/API2/Qemu.pm | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/A
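The convenience can be sketched in Python (hypothetical helper names; the actual patch wires this into the API's vmid parameter handling in Perl):

```python
def resolve_vmid(param, next_unused_vmid):
    # "." means "pick any free VMID for me"; anything else must
    # already be a concrete numeric ID chosen by the caller.
    if param == ".":
        return next_unused_vmid()
    return int(param)

assert resolve_vmid(".", lambda: 105) == 105
assert resolve_vmid("104", lambda: 105) == 104
```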
>>setting new drive in pending until the whole migration is done, so user can
>>use revert ?
>>I think it should done manually by user, because maybe user don't want to
>>loose new datas written to target storage.
Another possibility: create a new VMID on the target host, with its own config.
Lik
>>Another possibility : create a new vmid on target host, with his own config.
>>Like this user can manage old and new vm after migration . and If something
>>crash during the migration,this is more easier
We could adapt vm_clone to be able to use remote local storage,
then reuse vm_clone in v
> Another possibility : create a new vmid on target host, with his own config.
> Like this user can manage old and new vm after migration . and If something
> crash during the migration,this is more easier.
>
> The only drawback is that we can't mix local && shared storage in this case.
> (but I
On Mon, Oct 17, 2016 at 05:44:31PM +0200, Thomas Lamprecht wrote:
> The get /cluster/nextid API call is not secured against race conditions and
> parallel accesses in general.
> Users of the API which created multiple CT in parallel or at least very fast
> after each other run into problems here: m
On Mon, Oct 17, 2016 at 05:44:34PM +0200, Thomas Lamprecht wrote:
> This will be used in cases where the VMID may not be important. For
> example some users do not care which VMID a CT/VM gets, they just
> want a CT/VM.
>
> Signed-off-by: Thomas Lamprecht
> ---
> src/PVE/JSONSchema.pm | 16 +