Re: [pve-devel] [PATCH] update proxmox patches to qemu 2.4

2015-06-30 Thread Alexandre DERUMIER
Thanks Wolfgang, I'll try them (I think you are better with the C language than me ;)


qemu 2.4 has some great improvements:
- memory hot-unplug support
- drive-mirror fixes with discard (should be merged into master in the coming days)
- sata is now migratable
- incremental backup (I don't know if it'll be easy to adapt the proxmox patches
for it)

----- Original Message -----
From: "Wolfgang Bumiller" 
To: "aderumier" 
Cc: "pve-devel" 
Sent: Wednesday, July 1, 2015 08:14:17
Subject: Re: [pve-devel] [PATCH] update proxmox patches to qemu 2.4

I'll merge this into my 2.4 branch. 
I've already started the branch for 2.4 after finishing the 2.3 patches, 
but then decided to wait for qemu's hard-freeze on 2015-07-07 (see 
) before finishing it off. 
(I probably should have just posted the patch for reviewing anyway.) 
My branch doesn't include your new patches yet. 
I'll send my current diff in a minute. It seems to overlap a bit.

On Tue, Jun 30, 2015 at 06:17:35PM +0200, Alexandre Derumier wrote: 
> fixme : 
> -internal-snapshot-async.patch 
> 
> - backups: it seems there are a lot of changes with the bitmap support addition 
> (backup_start has new arguments, for example) 
> 
> [full gcc error log snipped; it appears unabridged in the original patch mail in this digest]

[pve-devel] [PATCH] Update to v2.4.0

2015-06-30 Thread Wolfgang Bumiller
---
 .../patches/backup-add-pve-monitor-commands.patch  |  2 +-
 debian/patches/backup-modify-job-api.patch | 32 --
 debian/patches/internal-snapshot-async.patch   |  4 +--
 debian/patches/modify-query-machines.patch |  2 +-
 debian/patches/modify-query-spice.patch|  2 +-
 debian/patches/virtio-balloon-fix-query.patch  | 24 ++--
 6 files changed, 31 insertions(+), 35 deletions(-)

diff --git a/debian/patches/backup-add-pve-monitor-commands.patch 
b/debian/patches/backup-add-pve-monitor-commands.patch
index e58033e..450be3b 100644
--- a/debian/patches/backup-add-pve-monitor-commands.patch
+++ b/debian/patches/backup-add-pve-monitor-commands.patch
@@ -583,8 +583,8 @@ Index: new/hmp.h
 --- new.orig/hmp.h 2014-11-20 06:45:05.0 +0100
 +++ new/hmp.h  2014-11-20 07:47:31.0 +0100
 @@ -29,6 +29,7 @@
- void hmp_info_migrate(Monitor *mon, const QDict *qdict);
  void hmp_info_migrate_capabilities(Monitor *mon, const QDict *qdict);
+ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict);
  void hmp_info_migrate_cache_size(Monitor *mon, const QDict *qdict);
 +void hmp_info_backup(Monitor *mon, const QDict *qdict);
  void hmp_info_cpus(Monitor *mon, const QDict *qdict);
diff --git a/debian/patches/backup-modify-job-api.patch 
b/debian/patches/backup-modify-job-api.patch
index f5e81a7..9b9f234 100644
--- a/debian/patches/backup-modify-job-api.patch
+++ b/debian/patches/backup-modify-job-api.patch
@@ -2,8 +2,8 @@ Index: new/block/backup.c
 ===
 --- new.orig/block/backup.c2014-11-20 07:55:31.0 +0100
 +++ new/block/backup.c 2014-11-20 08:56:23.0 +0100
-@@ -39,6 +39,7 @@
- BlockDriverState *target;
+@@ -39,6 +39,7 @@ typedef struct BackupBlockJob
+ BdrvDirtyBitmap *sync_bitmap;
  MirrorSyncMode sync_mode;
  RateLimit limit;
 +BackupDumpFunc *dump_cb;
@@ -78,7 +78,7 @@ Index: new/block/backup.c
  bdrv_add_before_write_notifier(bs, &before_write);
  
 @@ -359,8 +373,10 @@
- 
+ }
  hbitmap_free(job->bitmap);
  
 -bdrv_iostatus_disable(target);
@@ -91,7 +91,7 @@ Index: new/block/backup.c
  data = g_malloc(sizeof(*data));
  data->ret = ret;
 @@ -370,13 +386,15 @@ for backup_start
-   int64_t speed, MirrorSyncMode sync_mode,
+   BdrvDirtyBitmap *sync_bitmap,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
 +  BackupDumpFunc *dump_cb,
@@ -125,8 +125,8 @@ Index: new/block/backup.c
  return;
  }
  
-@@ -397,12 +415,15 @@ in backup_start
- return;
+@@ -397,14 +415,17 @@ in backup_start
+ goto error;
  }
  
 -bdrv_op_block_all(target, job->common.blocker);
@@ -138,6 +138,8 @@ Index: new/block/backup.c
  job->on_target_error = on_target_error;
  job->target = target;
  job->sync_mode = sync_mode;
+ job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_DIRTY_BITMAP ?
+sync_bitmap : NULL;
 +job->common.paused = paused;
  job->common.len = len;
  job->common.co = qemu_coroutine_create(backup_run);
@@ -147,20 +149,20 @@ Index: new/blockdev.c
 --- new.orig/blockdev.c2014-11-20 07:55:31.0 +0100
 +++ new/blockdev.c 2014-11-20 08:48:02.0 +0100
 @@ -2223,7 +2223,7 @@ qmp_drive_backup
- bdrv_set_aio_context(target_bs, aio_context);
+ }
  
- backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error,
-- block_job_cb, bs, &local_err);
-+ NULL, block_job_cb, bs, false, &local_err);
+ backup_start(bs, target_bs, speed, sync, bmap,
+- on_source_error, on_target_error,
++ on_source_error, on_target_error, NULL,
+  block_job_cb, bs, &local_err);
  if (local_err != NULL) {
  bdrv_unref(target_bs);
- error_propagate(errp, local_err);
 @@ -2284,7 +2284,7 @@ qmp_blockdev_backup
  bdrv_ref(target_bs);
  bdrv_set_aio_context(target_bs, aio_context);
- backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error,
-- block_job_cb, bs, &local_err);
-+ NULL, block_job_cb, bs, false, &local_err);
+ backup_start(bs, target_bs, speed, sync, NULL, on_source_error,
+- on_target_error, block_job_cb, bs, &local_err);
++ on_target_error, NULL, block_job_cb, bs, &local_err);
  if (local_err != NULL) {
  bdrv_unref(target_bs);
  error_propagate(errp, local_err);
@@ -179,7 +181,7 @@ Index: new/include/block/block_int.h
  BlockDriverState *bs;
  int64_t offset;
 @@ -583,7 +586,9 @@
-   int64_t speed, MirrorSyncMode sync_mode,
+   BdrvDirtyBitmap *sync_bitmap,
BlockdevOnError on_source_error,
BlockdevOnError on_target_err

Re: [pve-devel] [PATCH] update proxmox patches to qemu 2.4

2015-06-30 Thread Wolfgang Bumiller
I'll merge this into my 2.4 branch.
I've already started the branch for 2.4 after finishing the 2.3 patches,
but then decided to wait for qemu's hard-freeze on 2015-07-07 (see
) before finishing it off.
(I probably should have just posted the patch for reviewing anyway.)
My branch doesn't include your new patches yet.
I'll send my current diff in a minute. It seems to overlap a bit.

On Tue, Jun 30, 2015 at 06:17:35PM +0200, Alexandre Derumier wrote:
> fixme :
> -internal-snapshot-async.patch
> 
> - backups: it seems there are a lot of changes with the bitmap support addition 
> (backup_start has new arguments, for example)
> 
> [full gcc error log snipped; it appears unabridged in the original patch mail in this digest]

Re: [pve-devel] [PATCH] qemu : add drive-mirror sleep patches

2015-06-30 Thread Dietmar Maurer

applied (to master), thanks!

Note: I fixed the path for qemu-kvm/debian/patches/mirror-sleep2.patch
to debian/patches/mirror-sleep2.patch

On 07/01/2015 06:01 AM, Alexandre Derumier wrote:

Currently, when drive-mirror is starting,
the VM and QMP hang during the bitmap scanning phase (mainly with raw files,
NFS, and the raw block driver).

This patch adds a regular pause between each iteration.

The initial patch from the qemu mailing list works, but the pause time is
really too short, so we still get QMP hangs and a big QEMU slowdown.

I increased it to SLICE_TIME, which is 100ms by default.

Signed-off-by: Alexandre Derumier 
---
  debian/patches/mirror-sleep.patch   | 58 +
  debian/patches/series   |  2 +
  qemu-kvm/debian/patches/mirror-sleep2.patch | 28 ++
  3 files changed, 88 insertions(+)
  create mode 100644 debian/patches/mirror-sleep.patch
  create mode 100644 qemu-kvm/debian/patches/mirror-sleep2.patch

diff --git a/debian/patches/mirror-sleep.patch 
b/debian/patches/mirror-sleep.patch
new file mode 100644
index 000..37dc939
--- /dev/null
+++ b/debian/patches/mirror-sleep.patch
@@ -0,0 +1,58 @@
+From 2540abec85433596dd04640b14f75ceb13bbb342 Mon Sep 17 00:00:00 2001
+From: Fam Zheng 
+Date: Wed, 13 May 2015 11:11:13 +0800
+Subject: [PATCH] block/mirror: Sleep periodically during bitmap scanning
+
+Before, we only yield after initializing dirty bitmap, where the QMP
+command would return. That may take very long, and guest IO will be
+blocked.
+
+Add sleep points like the later mirror iterations.
+
+Signed-off-by: Fam Zheng 
+Reviewed-by: Wen Congyang 
+Reviewed-by: Paolo Bonzini 
+Reviewed-by: Stefan Hajnoczi 
+---
+ block/mirror.c | 13 -
+ 1 file changed, 12 insertions(+), 1 deletion(-)
+
+diff --git a/block/mirror.c b/block/mirror.c
+index 4056164..0a05971 100644
+--- a/block/mirror.c
++++ b/block/mirror.c
+@@ -432,11 +432,23 @@ static void coroutine_fn mirror_run(void *opaque)
+ sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
+ mirror_free_init(s);
+
++last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+ if (!s->is_none_mode) {
+ /* First part, loop on the sectors and initialize the dirty bitmap. */
+ BlockDriverState *base = s->base;
+ for (sector_num = 0; sector_num < end; ) {
+ int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
++int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
++
++if (now - last_pause_ns > SLICE_TIME) {
++last_pause_ns = now;
++block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
++}
++
++if (block_job_is_cancelled(&s->common)) {
++goto immediate_exit;
++}
++
+ ret = bdrv_is_allocated_above(bs, base,
+   sector_num, next - sector_num, &n);
+
+@@ -455,7 +467,6 @@ static void coroutine_fn mirror_run(void *opaque)
+ }
+
+ bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
+-last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+ for (;;) {
+ uint64_t delay_ns = 0;
+ int64_t cnt;
+--
+2.1.4
+
diff --git a/debian/patches/series b/debian/patches/series
index 1105537..a018473 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -33,3 +33,5 @@ add-qmp-get-link-status.patch
  0001-friendlier-ai_flag-hints-for-ipv6-hosts.patch
  0001-vvfat-add-a-label-option.patch
  jemalloc.patch
+mirror-sleep.patch
+mirror-sleep2.patch
diff --git a/qemu-kvm/debian/patches/mirror-sleep2.patch 
b/qemu-kvm/debian/patches/mirror-sleep2.patch
new file mode 100644
index 000..e1b59db
--- /dev/null
+++ b/qemu-kvm/debian/patches/mirror-sleep2.patch
@@ -0,0 +1,28 @@
+From d1ca17e6bfcf8292b85474cc871e015088672df4 Mon Sep 17 00:00:00 2001
+From: Alexandre Derumier 
+Date: Wed, 1 Jul 2015 05:07:06 +0200
+Subject: [PATCH] increase block_job_sleep_ns time to SLICE_TIME
+
+current value 0 is really too short to avoid qmp hangs
+
+Signed-off-by: Alexandre Derumier 
+---
+ block/mirror.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/block/mirror.c b/block/mirror.c
+index 0a05971..2711249 100644
+--- a/block/mirror.c
++++ b/block/mirror.c
+@@ -442,7 +442,7 @@ static void coroutine_fn mirror_run(void *opaque)
+
+ if (now - last_pause_ns > SLICE_TIME) {
+ last_pause_ns = now;
+-block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
++block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, SLICE_TIME);
+ }
+
+ if (block_job_is_cancelled(&s->common)) {
+--
+2.1.4
+



___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] qemu : add drive-mirror sleep patches

2015-06-30 Thread Alexandre Derumier
Currently, when drive-mirror is starting,
the VM and QMP hang during the bitmap scanning phase (mainly with raw files,
NFS, and the raw block driver).

This patch adds a regular pause between each iteration.

The initial patch from the qemu mailing list works, but the pause time is
really too short, so we still get QMP hangs and a big QEMU slowdown.

I increased it to SLICE_TIME, which is 100ms by default.

Signed-off-by: Alexandre Derumier 
---
 debian/patches/mirror-sleep.patch   | 58 +
 debian/patches/series   |  2 +
 qemu-kvm/debian/patches/mirror-sleep2.patch | 28 ++
 3 files changed, 88 insertions(+)
 create mode 100644 debian/patches/mirror-sleep.patch
 create mode 100644 qemu-kvm/debian/patches/mirror-sleep2.patch

diff --git a/debian/patches/mirror-sleep.patch 
b/debian/patches/mirror-sleep.patch
new file mode 100644
index 000..37dc939
--- /dev/null
+++ b/debian/patches/mirror-sleep.patch
@@ -0,0 +1,58 @@
+From 2540abec85433596dd04640b14f75ceb13bbb342 Mon Sep 17 00:00:00 2001
+From: Fam Zheng 
+Date: Wed, 13 May 2015 11:11:13 +0800
+Subject: [PATCH] block/mirror: Sleep periodically during bitmap scanning
+
+Before, we only yield after initializing dirty bitmap, where the QMP
+command would return. That may take very long, and guest IO will be
+blocked.
+
+Add sleep points like the later mirror iterations.
+
+Signed-off-by: Fam Zheng 
+Reviewed-by: Wen Congyang 
+Reviewed-by: Paolo Bonzini 
+Reviewed-by: Stefan Hajnoczi 
+---
+ block/mirror.c | 13 -
+ 1 file changed, 12 insertions(+), 1 deletion(-)
+
+diff --git a/block/mirror.c b/block/mirror.c
+index 4056164..0a05971 100644
+--- a/block/mirror.c
++++ b/block/mirror.c
+@@ -432,11 +432,23 @@ static void coroutine_fn mirror_run(void *opaque)
+ sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
+ mirror_free_init(s);
+ 
++last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+ if (!s->is_none_mode) {
+ /* First part, loop on the sectors and initialize the dirty bitmap. */
+ BlockDriverState *base = s->base;
+ for (sector_num = 0; sector_num < end; ) {
+ int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
++int64_t now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
++
++if (now - last_pause_ns > SLICE_TIME) {
++last_pause_ns = now;
++block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
++}
++
++if (block_job_is_cancelled(&s->common)) {
++goto immediate_exit;
++}
++
+ ret = bdrv_is_allocated_above(bs, base,
+   sector_num, next - sector_num, &n);
+ 
+@@ -455,7 +467,6 @@ static void coroutine_fn mirror_run(void *opaque)
+ }
+ 
+ bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
+-last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+ for (;;) {
+ uint64_t delay_ns = 0;
+ int64_t cnt;
+-- 
+2.1.4
+
diff --git a/debian/patches/series b/debian/patches/series
index 1105537..a018473 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -33,3 +33,5 @@ add-qmp-get-link-status.patch
 0001-friendlier-ai_flag-hints-for-ipv6-hosts.patch
 0001-vvfat-add-a-label-option.patch
 jemalloc.patch
+mirror-sleep.patch
+mirror-sleep2.patch
diff --git a/qemu-kvm/debian/patches/mirror-sleep2.patch 
b/qemu-kvm/debian/patches/mirror-sleep2.patch
new file mode 100644
index 000..e1b59db
--- /dev/null
+++ b/qemu-kvm/debian/patches/mirror-sleep2.patch
@@ -0,0 +1,28 @@
+From d1ca17e6bfcf8292b85474cc871e015088672df4 Mon Sep 17 00:00:00 2001
+From: Alexandre Derumier 
+Date: Wed, 1 Jul 2015 05:07:06 +0200
+Subject: [PATCH] increase block_job_sleep_ns time to SLICE_TIME
+
+current value 0 is really too short to avoid qmp hangs
+
+Signed-off-by: Alexandre Derumier 
+---
+ block/mirror.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/block/mirror.c b/block/mirror.c
+index 0a05971..2711249 100644
+--- a/block/mirror.c
++++ b/block/mirror.c
+@@ -442,7 +442,7 @@ static void coroutine_fn mirror_run(void *opaque)
+ 
+ if (now - last_pause_ns > SLICE_TIME) {
+ last_pause_ns = now;
+-block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
++block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, SLICE_TIME);
+ }
+ 
+ if (block_job_is_cancelled(&s->common)) {
+-- 
+2.1.4
+
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] drive-mirror : fix qemu hang at first phase (bitmap scanning)

2015-06-30 Thread Alexandre Derumier
Hi,
these patches finally fix the bug with drive-mirror, which can hang qemu
while it scans the source volume to create the bitmap.

It occurs mainly with raw files (depending on the filesystem), NFS, and some
block storage like ceph.

Currently, when this occurs, qemu and QMP hang (it can take hours with big
volumes).

So, the qemu patch adds small job pauses between each iteration of the scan.


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] qemu-server : drive-mirror : allow interrupting during the bitmap scanning phase

2015-06-30 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 10 +++---
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 15fb471..f035b67 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6176,17 +6176,13 @@ sub qemu_drive_mirror {
 
 my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
 
-#drive-mirror is doing lseek on source image before starting, and this can take a lot of time for big nfs volume
-#during this time, qmp socket is hanging
-#http://lists.nongnu.org/archive/html/qemu-devel/2015-05/msg01838.html
-#so we need to setup a big timeout
-my $opts = { timeout => 14400, device => "drive-$drive", mode => "existing", sync => "full", target => $dst_path };
+my $opts = { timeout => 10, device => "drive-$drive", mode => "existing", sync => "full", target => $dst_path };
 $opts->{format} = $format if $format;
 
-print "drive mirror is starting : this step can take some minutes/hours, depend of disk size and storage speed\n";
+print "drive mirror is starting (scanning bitmap) : this step can take some minutes/hours, depend of disk size and storage speed\n";
 
-vm_mon_cmd($vmid, "drive-mirror", %$opts);
 eval {
+vm_mon_cmd($vmid, "drive-mirror", %$opts);
while (1) {
my $stats = vm_mon_cmd($vmid, "query-block-jobs");
my $stat = @$stats[0];
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] update proxmox patches to qemu 2.4

2015-06-30 Thread Alexandre Derumier
fixme:
-internal-snapshot-async.patch

- backups: it seems there are a lot of changes with the bitmap support addition 
(backup_start has new arguments, for example)

blockdev.c:2486:7: error: conflicting types for ‘qmp_backup’
 char *qmp_backup(const char *backup_file, bool has_format,
   ^
In file included from blockdev.c:49:0:
qmp-commands.h:111:11: note: previous declaration of ‘qmp_backup’ was here
 UuidInfo *qmp_backup(const char *backup_file, bool has_format, BackupFormat 
format, bool has_config_file, const char *config_file, bool has_devlist, const 
char *devlist, bool has_speed, int64_t speed, Error **errp);
   ^
blockdev.c: In function ‘qmp_backup’:
blockdev.c:2520:37: error: ‘QERR_DEVICE_IS_READ_ONLY’ undeclared (first use in 
this function)
 error_set(errp, QERR_DEVICE_IS_READ_ONLY, *d);
 ^
blockdev.c:2520:37: note: each undeclared identifier is reported only once for 
each function it appears in
blockdev.c:2524:21: error: incompatible type for argument 2 of ‘error_set’
 error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, *d);
 ^
In file included from 
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/sysemu/block-backend.h:17:0,
 from blockdev.c:33:
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/qapi/error.h:30:6: note: 
expected ‘ErrorClass’ but argument is of type ‘const char *’
 void error_set(Error **errp, ErrorClass err_class, const char *fmt, ...)
  ^
blockdev.c:2531:33: error: ‘QERR_DEVICE_NOT_FOUND’ undeclared (first use in 
this function)
 error_set(errp, QERR_DEVICE_NOT_FOUND, *d);
 ^
blockdev.c:2707:9: error: incompatible type for argument 7 of ‘backup_start’
 backup_start(di->bs, di->target, speed, MIRROR_SYNC_MODE_FULL,
 ^
In file included from 
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/throttle-groups.h:29:0,
 from blockdev.c:37:
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/block_int.h:650:6: 
note: expected ‘BlockdevOnError’ but argument is of type ‘int (*)(void *, 
struct BlockDriverState *, int64_t,  int,  unsigned char *)’
 void backup_start(BlockDriverState *bs, BlockDriverState *target,
  ^
blockdev.c:2709:41: warning: passing argument 8 of ‘backup_start’ from 
incompatible pointer type
  pvebackup_dump_cb, pvebackup_complete_cb, di,
 ^
In file included from 
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/throttle-groups.h:29:0,
 from blockdev.c:37:
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/block_int.h:650:6: 
note: expected ‘int (*)(void *, struct BlockDriverState *, int64_t,  int,  
unsigned char *)’ but argument is of type ‘void (*)(void *, int)’
 void backup_start(BlockDriverState *bs, BlockDriverState *target,
  ^
blockdev.c:2709:64: warning: passing argument 9 of ‘backup_start’ from 
incompatible pointer type
  pvebackup_dump_cb, pvebackup_complete_cb, di,
^
In file included from 
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/throttle-groups.h:29:0,
 from blockdev.c:37:
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/block_int.h:650:6: 
note: expected ‘void (*)(void *, int)’ but argument is of type ‘struct 
PVEBackupDevInfo *’
 void backup_start(BlockDriverState *bs, BlockDriverState *target,
  ^
blockdev.c:2710:22: warning: passing argument 10 of ‘backup_start’ makes 
pointer from integer without a cast
  true, &local_err);
  ^
In file included from 
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/throttle-groups.h:29:0,
 from blockdev.c:37:
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/block_int.h:650:6: 
note: expected ‘void *’ but argument is of type ‘int’
 void backup_start(BlockDriverState *bs, BlockDriverState *target,
  ^
blockdev.c:2710:22: warning: the address of ‘local_err’ will always evaluate as 
‘true’ [-Waddress]
  true, &local_err);
  ^
blockdev.c:2707:9: error: too few arguments to function ‘backup_start’
 backup_start(di->bs, di->target, speed, MIRROR_SYNC_MODE_FULL,
 ^
In file included from 
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/throttle-groups.h:29:0,
 from blockdev.c:37:
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/include/block/block_int.h:650:6: 
note: declared here
 void backup_start(BlockDriverState *bs, BlockDriverState *target,
  ^
/var/lib/vz/proxmox2/pve-qemu-kvm2.4/qemu-kvm/rules.mak:57: recipe for target 
'blockdev.o' failed

Signed-off-by: Alexandre Derumier 
---
 debian/patches/add-qmp-get-link-status.patch   |  15 ++-
 .../patches/backup-add-pve-monitor-commands.patch  |   4 +-

Re: [pve-devel] [PATCH 2/3] LXCSetup::Redhat: ipv6 config

2015-06-30 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 1/3] LXCSetup::Debian: ipv6 + naming consistency

2015-06-30 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] PATCH V6 : RFC: cloud-init update

2015-06-30 Thread Alexandre DERUMIER
>>Thanks, I've just skimmed over it and will read it in detail tomorrow.
>>Can it now be configured to use any of ide/virtio/sata/...?

Currently I force it onto a new SATA controller
(because we can't add a second IDE controller, and virtio is not supported by
all OSes).

But this can be changed if needed.





----- Original Message -----
From: "Wolfgang Bumiller" 
To: "aderumier" 
Cc: "pve-devel" 
Sent: Tuesday, June 30, 2015 16:21:50
Subject: Re: [pve-devel] PATCH V6 : RFC: cloud-init update

Thanks, I've just skimmed over it and will read it in detail tomorrow. 
Can it now be configured to use any of ide/virtio/sata/...? 

On Tue, Jun 30, 2015 at 04:01:45PM +0200, Alexandre Derumier wrote: 
> This patch add support for generic storage to store the cloudinit image 
> + a dedicated sata controller 
> 
> details are in patch 5/5 
> 
> ___ 
> pve-devel mailing list 
> pve-devel@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
> 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH 2/2] firewall autodisable GUI patch v2

2015-06-30 Thread Alen Grizonic

Hi Dietmar.


On 06/30/2015 04:41 PM, Dietmar Maurer wrote:

Hi Alen,

first, thanks for the cleanup.


! the patch needs the keepalive feature disabled to work correctly !

OK, but we don't want to do that ;-) Any other suggestions?


Yes, I am still trying to find a better solution, that's why I pointed 
out the keepalive feature as a non-permanent one.





Signed-off-by: Alen Grizonic 
---
  www/manager/grid/FirewallOptions.js | 60
-
  1 file changed, 59 insertions(+), 1 deletion(-)

diff --git a/www/manager/grid/FirewallOptions.js
b/www/manager/grid/FirewallOptions.js
index f94be6c..9c70e6b 100644
--- a/www/manager/grid/FirewallOptions.js
+++ b/www/manager/grid/FirewallOptions.js
@@ -25,6 +25,63 @@ Ext.define('PVE.FirewallOptions', {
  
  	var rows = {};
  
+	var submit_first = function() {

+   var me = this;
+   var form = me.formPanel.getForm();
+   var form_values = me.getValues();
+   submit_twice.call(me, form_values.enable ? 2 : 0);
+   }
+
+   var submit_twice = function(enable) {
+   var me = this;
+   var form = me.formPanel.getForm();
+   var values = me.getValues();
+
+   if (enable == 2) {
+   values.enable = 2;
+   } else if (enable == 1) {
+   values.enable = 1;
+   }
+
+   if (me.digest) {
+   if (values.enable == 2) {
+   me.digest = "";

delete me.digest


Much better. Thanks.




+   } else {
+   values.digest = me.digest;
+   }
+   }
+
+   PVE.Utils.API2Request({
+   url: me.url,
+   waitMsgTarget: me,
+   method: me.method || (me.backgroundDelay ? 'POST' : 'PUT'),
+   params: values,
+   failure: function(response, options) {
+   if (response.result && response.result.errors) {
+   form.markInvalid(response.result.errors);
+   }

unsure -  I guess we need different error messages for first and second call.

I also think we do not need that markInvalid() code here.


OK. I'll check it out.




+   confirm ("Connection lost: Disabling firewall (in 60 
seconds).");
+   },
+   success: function(response, options) {
+   if ((enable == 2) || (enable == 0)) {
+   me.close();
+   }
+   if ((me.backgroundDelay || me.showProgress) &&
+   response.result.data) {
+   var upid = response.result.data;
+   var win = Ext.create('PVE.window.TaskProgress', {
+   upid: upid
+   });
+   win.show();
+   }

also remove above if statement (we do not use me.backgroundDelay here)


True. Consider it done.




Re: [pve-devel] [PATCH 2/2] firewall autodisable GUI patch v2

2015-06-30 Thread Dietmar Maurer
Hi Alen,

first, thanks for the cleanup.

> ! the patch needs the keepalive feature disabled to work correctly !

OK, but we don't want to do that ;-) Any other suggestions?

> 
> Signed-off-by: Alen Grizonic 
> ---
>  www/manager/grid/FirewallOptions.js | 60
> -
>  1 file changed, 59 insertions(+), 1 deletion(-)
> 
> diff --git a/www/manager/grid/FirewallOptions.js
> b/www/manager/grid/FirewallOptions.js
> index f94be6c..9c70e6b 100644
> --- a/www/manager/grid/FirewallOptions.js
> +++ b/www/manager/grid/FirewallOptions.js
> @@ -25,6 +25,63 @@ Ext.define('PVE.FirewallOptions', {
>  
>   var rows = {};
>  
> + var submit_first = function() {
> + var me = this;
> + var form = me.formPanel.getForm();
> + var form_values = me.getValues();
> + submit_twice.call(me, form_values.enable ? 2 : 0);
> + }
> +
> + var submit_twice = function(enable) {
> + var me = this;
> + var form = me.formPanel.getForm();
> + var values = me.getValues();
> +
> + if (enable == 2) {
> + values.enable = 2;
> + } else if (enable == 1) {
> + values.enable = 1;
> + }
> +
> + if (me.digest) {
> + if (values.enable == 2) {
> + me.digest = "";

delete me.digest

> + } else {
> + values.digest = me.digest;
> + }
> + }
> +
> + PVE.Utils.API2Request({
> + url: me.url,
> + waitMsgTarget: me,
> + method: me.method || (me.backgroundDelay ? 'POST' : 'PUT'),
> + params: values,
> + failure: function(response, options) {
> + if (response.result && response.result.errors) {
> + form.markInvalid(response.result.errors);
> + }

unsure -  I guess we need different error messages for first and second call.

I also think we do not need that markInvalid() code here.

> + confirm ("Connection lost: Disabling firewall (in 60 
> seconds).");
> + },
> + success: function(response, options) {
> + if ((enable == 2) || (enable == 0)) {
> + me.close();
> + }
> + if ((me.backgroundDelay || me.showProgress) &&
> + response.result.data) {
> + var upid = response.result.data;
> + var win = Ext.create('PVE.window.TaskProgress', {
> + upid: upid
> + });
> + win.show();
> + }

also remove above if statement (we do not use me.backgroundDelay here) 



Re: [pve-devel] PATCH V6 : RFC: cloud-init update

2015-06-30 Thread Wolfgang Bumiller
Thanks, I've just skimmed over it and will read it in detail tomorrow.
Can it now be configured to use any of ide/virtio/sata/...?

On Tue, Jun 30, 2015 at 04:01:45PM +0200, Alexandre Derumier wrote:
> This patch add support for generic storage to store the cloudinit image
> + a dedicated sata controller
> 
> details are in patch 5/5
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 



[pve-devel] [PATCH 4/5] cloud-init : force ifdown ifup at boot

2015-06-30 Thread Alexandre Derumier
When we change the IP address, the network configuration is correctly written in the guest,
but cloud-init doesn't apply it and keeps the previous IP address.

Work around this by forcing ifdown/ifup.

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b13162e..82905ad 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6451,6 +6451,9 @@ sub generate_cloudinit_userdata {
 my $hostname = $conf->{searchdomain} ? 
$conf->{name}.".".$conf->{searchdomain} : $conf->{name};
 $content .= "fqdn: $hostname\n";
 $content .= "manage_etc_hosts: true\n";
+$content .= "bootcmd: \n";
+$content .= "  - ifdown -a\n";
+$content .= "  - ifup -a\n";
 
 if ($conf->{sshkey}) {
$content .= "users:\n";
-- 
2.1.4
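
For reference, the three appended lines produce a user-data fragment like the following (the fqdn value is an example; it is derived from the VM name and searchdomain as shown above):

```yaml
#cloud-config
fqdn: vm100.example.com
manage_etc_hosts: true
bootcmd:
  - ifdown -a
  - ifup -a
```

cloud-init runs bootcmd early on every boot, which is why it can re-apply the freshly written network configuration even when the rest of the cloud-init state is cached.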



[pve-devel] [PATCH 3/5] cloudinit: use qcow2 for snapshot support

2015-06-30 Thread Alexandre Derumier
From: Wolfgang Bumiller 

The config disk is now generated into a qcow2 located on a
configured storage.
It is now also storage-managed, so live migration and
live snapshotting should work as they do for regular hard
drives.

Signed-off-by: Alexandre Derumier 
---
 PVE/API2/Qemu.pm |  16 ++---
 PVE/QemuMigrate.pm   |   8 +--
 PVE/QemuServer.pm| 158 ++-
 PVE/VZDump/QemuServer.pm |   2 +-
 4 files changed, 113 insertions(+), 71 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index fae2872..285acc1 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -57,7 +57,7 @@ my $test_deallocate_drive = sub {
 my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
 
-   PVE::QemuServer::foreach_drive($settings, sub {
+   PVE::QemuServer::foreach_drive($settings, $vmid, sub {
my ($ds, $drive) = @_;
 
my $isCDROM = PVE::QemuServer::drive_is_cdrom($drive);
@@ -79,11 +79,11 @@ my $check_storage_access = sub {
 };
 
 my $check_storage_access_clone = sub {
-   my ($rpcenv, $authuser, $storecfg, $conf, $storage) = @_;
+   my ($rpcenv, $authuser, $storecfg, $conf, $vmid, $storage) = @_;
 
my $sharedvm = 1;
 
-   PVE::QemuServer::foreach_drive($conf, sub {
+   PVE::QemuServer::foreach_drive($conf, $vmid, sub {
my ($ds, $drive) = @_;
 
my $isCDROM = PVE::QemuServer::drive_is_cdrom($drive);
@@ -123,7 +123,7 @@ my $create_disks = sub {
 my $vollist = [];
 
 my $res = {};
-PVE::QemuServer::foreach_drive($settings, sub {
+PVE::QemuServer::foreach_drive($settings, $vmid, sub {
my ($ds, $disk) = @_;
 
my $volid = $disk->{file};
@@ -2052,8 +2052,8 @@ __PACKAGE__->register_method({
}
my $storecfg = PVE::Storage::config();
 
-   my $nodelist = PVE::QemuServer::shared_nodes($conf, $storecfg);
-   my $hasFeature = PVE::QemuServer::has_feature($feature, $conf, 
$storecfg, $snapname, $running);
+   my $nodelist = PVE::QemuServer::shared_nodes($conf, $storecfg, $vmid);
+   my $hasFeature = PVE::QemuServer::has_feature($feature, $conf, $vmid, 
$storecfg, $snapname, $running);
 
return {
hasFeature => $hasFeature,
@@ -2205,7 +2205,7 @@ __PACKAGE__->register_method({
 
my $oldconf = $snapname ? $conf->{snapshots}->{$snapname} : $conf;
 
-   my $sharedvm = &$check_storage_access_clone($rpcenv, $authuser, 
$storecfg, $oldconf, $storage);
+   my $sharedvm = &$check_storage_access_clone($rpcenv, $authuser, 
$storecfg, $oldconf, $vmid, $storage);
 
die "can't clone VM to node '$target' (VM uses local storage)\n" if 
$target && !$sharedvm;
 
@@ -2563,7 +2563,7 @@ __PACKAGE__->register_method({
}
 
my $storecfg = PVE::Storage::config();
-   PVE::QemuServer::check_storage_availability($storecfg, $conf, $target);
+   PVE::QemuServer::check_storage_availability($storecfg, $conf, $vmid, 
$target);
 
if (PVE::HA::Config::vm_is_ha_managed($vmid) && $rpcenv->{type} ne 
'ha') {
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 264a2a7..0f77745 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -153,7 +153,7 @@ sub prepare {
 }
 
 # activate volumes
-my $vollist = PVE::QemuServer::get_vm_volumes($conf);
+my $vollist = PVE::QemuServer::get_vm_volumes($conf, $vmid);
 PVE::Storage::activate_volumes($self->{storecfg}, $vollist);
 
 # fixme: check if storage is available on both nodes
@@ -192,7 +192,7 @@ sub sync_disks {
 
 # get list from PVE::Storage (for unused volumes)
 my $dl = PVE::Storage::vdisk_list($self->{storecfg}, $storeid, 
$vmid);
-PVE::Storage::foreach_volid($dl, sub {
+PVE::Storage::foreach_volid($dl, $vmid, sub {
 my ($volid, $sid, $volname) = @_;
 
 # check if storage is available on target node
@@ -205,7 +205,7 @@ sub sync_disks {
 
# and add used, owned/non-shared disks (just to be sure we have all)
 
-   PVE::QemuServer::foreach_volid($conf, sub {
+   PVE::QemuServer::foreach_volid($conf, $vmid, sub {
my ($volid, $is_cdrom) = @_;
 
return if !$volid;
@@ -629,7 +629,7 @@ sub phase3_cleanup {
 
 # always deactivate volumes - avoid lvm LVs to be active on several nodes
 eval {
-   my $vollist = PVE::QemuServer::get_vm_volumes($conf);
+   my $vollist = PVE::QemuServer::get_vm_volumes($conf, $vmid);
PVE::Storage::deactivate_volumes($self->{storecfg}, $vollist);
 };
 if (my $err = $@) {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 51f1277..b13162e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -402,8 +402,11 @@ EODESCR
 },
 cloudinit => {
optional => 1,
-   type => 'boolean',
-   description => "Enable cloudinit config generation.",
+   type => 'string',
+   # FIXME: for templa

[pve-devel] PATCH V6 : RFC: cloud-init update

2015-06-30 Thread Alexandre Derumier
This patch adds support for generic storage to store the cloudinit image
+ a dedicated sata controller

details are in patch 5/5



[pve-devel] [PATCH 2/5] cloud-init changes

2015-06-30 Thread Alexandre Derumier
From: Wolfgang Bumiller 

 * Add ipconfigX for all netX configuration options and
   using ip=CIDR, gw=IP, ip6=CIDR, gw6=IP as option names
   like in LXC.
 * Adding explicit ip=dhcp and ip6=dhcp options.
 * Removing the config-update code and instead generating
   the ide3 commandline in config_to_command.
   - Adding a conflict check to write_vm_config similar to
   the one for 'cdrom'.
 * Replacing UUID generation with a SHA1 hash of the
   concatenated userdata and network configuration. For this
   generate_cloudinit_userdata/network now returns the
   content variable.
 * Finishing ipv6 support in generate_cloudinit_network.
   Note that ipv4 now only defaults to dhcp if no ipv6
   address was specified. (Explicitly requested dhcp is
   always used.)
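
The ipconfigX option format described above can be sketched as follows (Python used purely for illustration; the actual implementation is the Perl parse_ipconfig in the patch below, and this sketch covers only the IPv4 half):

```python
import re

# Simplified IPv4-only pattern; the real code uses PVE::Tools' $IPV4RE/$IPV6RE.
IPV4 = r'\d{1,3}(?:\.\d{1,3}){3}'

def parse_ipconfig(data):
    """Parse an ipconfigX value such as 'ip=192.168.0.10/24,gw=192.168.0.1'."""
    res = {}
    for kvp in filter(None, data.split(',')):
        if kvp == 'ip=dhcp':
            res['address'] = 'dhcp'
        elif (m := re.fullmatch(rf'ip=({IPV4})/(\d+)', kvp)):
            res['address'], res['netmask'] = m.group(1), m.group(2)
        elif (m := re.fullmatch(rf'gw=({IPV4})', kvp)):
            res['gateway'] = m.group(1)
        else:
            return None  # unknown option -> reject, like the Perl code
    if res.get('gateway') and not res.get('address'):
        return None  # a gateway needs an address of the same type
    if res.get('gateway') and res.get('address') == 'dhcp':
        return None  # no explicit gateway together with DHCP
    if not res.get('address'):
        return {'address': 'dhcp'}  # default to DHCP when nothing is given
    return res
```

So `parse_ipconfig('ip=192.168.0.10/24,gw=192.168.0.1')` yields the address, netmask, and gateway, while a gateway without an address is rejected and an empty value falls back to DHCP.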

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 157 +++---
 1 file changed, 127 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7956c50..51f1277 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -18,7 +18,6 @@ use Cwd 'abs_path';
 use IPC::Open3;
 use JSON;
 use Fcntl;
-use UUID;
 use PVE::SafeSyslog;
 use Storable qw(dclone);
 use PVE::Exception qw(raise raise_param_exc);
@@ -490,8 +489,25 @@ EODESCR
 };
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+my $ipconfigdesc = {
+optional => 1,
+type => 'string', format => 'pve-qm-ipconfig',
+typetext => 
"[ip=IPv4_CIDR[,gw=IPv4_GATEWAY]][,ip6=IPv6_CIDR[,gw6=IPv6_GATEWAY]]",
+description => <<'EODESCR',
+Specify IP addresses and gateways for the corresponding interface.
+
+IP addresses use CIDR notation, gateways are optional but need an IP of the 
same type specified.
+
+The special string 'dhcp' can be used for IP addresses to use DHCP, in which 
case no explicit gateway should be provided.
+
+If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, 
it defaults to using dhcp on IPv4.
+EODESCR
+};
+PVE::JSONSchema::register_standard_option("pve-qm-ipconfig", $ipconfigdesc);
+
 for (my $i = 0; $i < $MAX_NETS; $i++)  {
 $confdesc->{"net$i"} = $netdesc;
+$confdesc->{"ipconfig$i"} = $ipconfigdesc;
 }
 
 my $drivename_hash;
@@ -1382,18 +1398,63 @@ sub parse_net {
$res->{firewall} = $1;
} elsif ($kvp =~ m/^link_down=([01])$/) {
$res->{link_down} = $1;
-   } elsif ($kvp =~ m/^cidr=($IPV6RE|$IPV4RE)\/(\d+)$/) {
+   } else {
+   return undef;
+   }
+
+}
+
+return undef if !$res->{model};
+
+return $res;
+}
+
+# ipconfigX ip=cidr,gw=ip,ip6=cidr,gw6=ip
+sub parse_ipconfig {
+my ($data) = @_;
+
+my $res = {};
+
+foreach my $kvp (split(/,/, $data)) {
+   if ($kvp =~ m/^ip=dhcp$/) {
+   $res->{address} = 'dhcp';
+   } elsif ($kvp =~ m/^ip=($IPV4RE)\/(\d+)$/) {
$res->{address} = $1;
$res->{netmask} = $2;
-   } elsif ($kvp =~ m/^gateway=($IPV6RE|$IPV4RE)$/) {
+   } elsif ($kvp =~ m/^gw=($IPV4RE)$/) {
$res->{gateway} = $1;
+   } elsif ($kvp =~ m/^ip6=dhcp6?$/) {
+   $res->{address6} = 'dhcp';
+   } elsif ($kvp =~ m/^ip6=($IPV6RE)\/(\d+)$/) {
+   $res->{address6} = $1;
+   $res->{netmask6} = $2;
+   } elsif ($kvp =~ m/^gw6=($IPV6RE)$/) {
+   $res->{gateway6} = $1;
} else {
return undef;
}
+}
 
+if ($res->{gateway} && !$res->{address}) {
+   warn 'gateway specified without specifying an IP address';
+   return undef;
+}
+if ($res->{gateway6} && !$res->{address6}) {
+   warn 'IPv6 gateway specified without specifying an IPv6 address';
+   return undef;
+}
+if ($res->{gateway} && $res->{address} eq 'dhcp') {
+   warn 'gateway specified together with DHCP';
+   return undef;
+}
+if ($res->{gateway6} && $res->{address6} eq 'dhcp') {
+   warn 'IPv6 gateway specified together with DHCP6';
+   return undef;
 }
 
-return undef if !$res->{model};
+if (!$res->{address} && !$res->{address6}) {
+   return { address => 'dhcp' };
+}
 
 return $res;
 }
@@ -1614,6 +1675,17 @@ sub verify_net {
 die "unable to parse network options\n";
 }
 
+PVE::JSONSchema::register_format('pve-qm-ipconfig', \&verify_ipconfig);
+sub verify_ipconfig {
+my ($value, $noerr) = @_;
+
+return $value if parse_ipconfig($value);
+
+return undef if $noerr;
+
+die "unable to parse ipconfig options\n";
+}
+
 PVE::JSONSchema::register_format('pve-qm-drive', \&verify_drive);
 sub verify_drive {
 my ($value, $noerr) = @_;
@@ -1995,6 +2067,11 @@ sub write_vm_config {
delete $conf->{cdrom};
 }
 
+if ($conf->{cloudinit}) {
+   die "option cloudinit conflicts with ide3\n" if $conf->{ide3};
+   delete $conf->{cloudinit};
+}
+
 # we do not use 'smp' any longer
 if ($conf->{sockets}) {
delete $conf->{smp};
@@ -3115,6 +3192,8 @@ sub config_to_command {

[pve-devel] [PATCH 5/5] cloudinit : add support for generic storage + dedicated sata drive

2015-06-30 Thread Alexandre Derumier
This patch adds support for creating the cloudinit drive on any storage.

I introduce a

cloudinitdrive0: local:100/vm-100-cloudinit.qcow2

entry to store the generated image reference.

This avoids scanning the storage at every VM start
to see if the drive has already been generated
(and also, if we change the storeid in the cloudinit option, we can easily
remove the old drive).

This drive is on a dedicated SATA controller, so no conflict is possible with
existing user configs.
SATA will be migratable in qemu 2.4 (already OK in master).

This drive works like other drives, so live migration works out of the box.
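
With controller 1 and unit 0 hard-coded for the cloudinit drive (as in the print_drivedevice_full hunk below), the resulting qemu -device argument can be sketched like this (Python used purely for illustration):

```python
def cloudinit_device_string(interface='cloudinitdrive', index=0):
    """Build the qemu -device string for the cloud-init config drive."""
    controller, unit = 1, 0  # dedicated AHCI controller, first port
    drive_id = f'{interface}{index}'
    return (f'ide-cd,bus=ahci{controller}.{unit},'
            f'drive=drive-{drive_id},id={drive_id}')

print(cloudinit_device_string())
# -> ide-cd,bus=ahci1.0,drive=drive-cloudinitdrive0,id=cloudinitdrive0
```

Since ahci1 is reserved for this drive (see the print_pci_addr hunk adding bus 1, addr 27), it cannot collide with the ahci0 controller used for regular sata0..5 disks.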

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 80 ---
 1 file changed, 47 insertions(+), 33 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 82905ad..15fb471 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -636,6 +636,9 @@ for (my $i = 0; $i < $MAX_SATA_DISKS; $i++)  {
 $confdesc->{"sata$i"} = $satadesc;
 }
 
+$drivename_hash->{"cloudinitdrive0"} = 1;
+$confdesc->{"cloudinitdrive0"} = $satadesc;
+
 for (my $i = 0; $i < $MAX_SCSI_DISKS; $i++)  {
 $drivename_hash->{"scsi$i"} = 1;
 $confdesc->{"scsi$i"} = $scsidesc ;
@@ -703,7 +706,8 @@ sub disknames {
 return ((map { "ide$_" } (0 .. ($MAX_IDE_DISKS - 1))),
 (map { "scsi$_" } (0 .. ($MAX_SCSI_DISKS - 1))),
 (map { "virtio$_" } (0 .. ($MAX_VIRTIO_DISKS - 1))),
-(map { "sata$_" } (0 .. ($MAX_SATA_DISKS - 1))));
+(map { "sata$_" } (0 .. ($MAX_SATA_DISKS - 1))),
+   'cloudinitdrive');
 }
 
 sub valid_drivename {
@@ -1163,6 +1167,10 @@ sub print_drivedevice_full {
my $controller = int($drive->{index} / $MAX_SATA_DISKS);
my $unit = $drive->{index} % $MAX_SATA_DISKS;
$device = 
"ide-drive,bus=ahci$controller.$unit,drive=drive-$drive->{interface}$drive->{index},id=$drive->{interface}$drive->{index}";
+} elsif ($drive->{interface} eq 'cloudinitdrive'){
+   my $controller = 1;
+   my $unit = 0;
+   $device = 
"ide-cd,bus=ahci$controller.$unit,drive=drive-$drive->{interface}$drive->{index},id=$drive->{interface}$drive->{index}";
 } elsif ($drive->{interface} eq 'usb') {
die "implement me";
#  -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
@@ -2068,11 +2076,6 @@ sub write_vm_config {
delete $conf->{cdrom};
 }
 
-if ($conf->{cloudinit} && $conf->{ide3}) {
-   die "option cloudinit conflicts with ide3\n";
-   delete $conf->{cloudinit};
-}
-
 # we do not use 'smp' any longer
 if ($conf->{sockets}) {
delete $conf->{smp};
@@ -2615,21 +2618,6 @@ sub foreach_drive {
 
&$func($ds, $drive);
 }
-
-if (my $storeid = $conf->{cloudinit}) {
-   my $storecfg = PVE::Storage::config();
-   my $imagedir = PVE::Storage::get_image_dir($storecfg, $storeid, $vmid);
-   my $iso_name = "vm-$vmid-cloudinit.qcow2";
-   my $iso_path = "$imagedir/$iso_name";
-   # Only snapshot it if it has already been created.
-   # (Which is not the case if the VM has never been started before with
-   # cloud-init enabled.)
-   if (-e $iso_path) {
-   my $ds = 'ide3';
-   my $drive = parse_drive($ds, 
"$storeid:$vmid/vm-$vmid-cloudinit.qcow2");
-   &$func($ds, $drive) if $drive;
-   }
-}
 }
 
 sub foreach_volid {
@@ -3203,6 +3191,13 @@ sub config_to_command {
$ahcicontroller->{$controller}=1;
 }
 
+if ($drive->{interface} eq 'cloudinitdrive') {
+   my $controller = 1;
+   $pciaddr = print_pci_addr("ahci$controller", $bridges);
+   push @$devices, '-device', 
"ahci,id=ahci$controller,multifunction=on$pciaddr" if 
!$ahcicontroller->{$controller};
+   $ahcicontroller->{$controller}=1;
+}
+
my $drive_cmd = print_drive_full($storecfg, $vmid, $drive);
push @$devices, '-drive',$drive_cmd;
push @$devices, '-device', print_drivedevice_full($storecfg, $conf, 
$vmid, $drive, $bridges);
@@ -4851,6 +4846,7 @@ sub print_pci_addr {
'net29' => { bus => 1, addr => 24 },
'net30' => { bus => 1, addr => 25 },
'net31' => { bus => 1, addr => 26 },
+   'ahci1' => { bus => 1, addr => 27 },
'virtio6' => { bus => 2, addr => 1 },
'virtio7' => { bus => 2, addr => 2 },
'virtio8' => { bus => 2, addr => 3 },
@@ -6381,17 +6377,25 @@ sub scsihw_infos {
 my $cloudinit_iso_size = 5; # in MB
 
 sub prepare_cloudinit_disk {
-my ($vmid, $storeid) = @_;
+my ($vmid, $conf, $storeid) = @_;
 
 my $storecfg = PVE::Storage::config();
-my $imagedir = PVE::Storage::get_image_dir($storecfg, $storeid, $vmid);
-my $iso_name = "vm-$vmid-cloudinit.qcow2";
-my $iso_path = "$imagedir/$iso_name";
-return $iso_path if -e $iso_path;
+my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+my $name = "vm-$vmid-clou

[pve-devel] [PATCH 1/5] implement cloudinit v2

2015-06-30 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 181 +-
 control.in|   2 +-
 2 files changed, 179 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ab9ac74..7956c50 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -18,17 +18,19 @@ use Cwd 'abs_path';
 use IPC::Open3;
 use JSON;
 use Fcntl;
+use UUID;
 use PVE::SafeSyslog;
 use Storable qw(dclone);
 use PVE::Exception qw(raise raise_param_exc);
 use PVE::Storage;
-use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline 
dir_glob_foreach);
+use PVE::Tools qw(run_command lock_file lock_file_full file_read_firstline 
dir_glob_foreach $IPV6RE $IPV4RE);
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_write_file 
cfs_lock_file);
 use PVE::INotify;
 use PVE::ProcFSTools;
 use PVE::QMPClient;
 use PVE::RPCEnvironment;
+
 use Time::HiRes qw(gettimeofday);
 
 my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
@@ -384,6 +386,28 @@ EODESCR
maxLength => 256,
optional => 1,
 },
+searchdomain => {
+optional => 1,
+type => 'string',
+description => "Sets DNS search domains for a container. Create will 
automatically use the setting from the host if you neither set searchdomain or 
nameserver.",
+},
+nameserver => {
+optional => 1,
+type => 'string',
+description => "Sets DNS server IP address for a container. Create 
will automatically use the setting from the host if you neither set 
searchdomain or nameserver.",
+},
+sshkey => {
+optional => 1,
+type => 'string',
+description => "Ssh keys for root",
+},
+cloudinit => {
+   optional => 1,
+   type => 'boolean',
+   description => "Enable cloudinit config generation.",
+   default => 0,
+},
+
 };
 
 # what about other qemu settings ?
@@ -712,6 +736,8 @@ sub get_iso_path {
return get_cdrom_path();
 } elsif ($cdrom eq 'none') {
return '';
+} elsif ($cdrom eq 'cloudinit') {
+   return "/tmp/cloudinit/$vmid/configdrive.iso";
 } elsif ($cdrom =~ m|^/|) {
return $cdrom;
 } else {
@@ -723,7 +749,7 @@ sub get_iso_path {
 sub filename_to_volume_id {
 my ($vmid, $file, $media) = @_;
 
-if (!($file eq 'none' || $file eq 'cdrom' ||
+ if (!($file eq 'none' || $file eq 'cdrom' || $file eq 'cloudinit' ||
  $file =~ m|^/dev/.+| || $file =~ m/^([^:]+):(.+)$/)) {
 
return undef if $file =~ m|/|;
@@ -1356,6 +1382,11 @@ sub parse_net {
$res->{firewall} = $1;
} elsif ($kvp =~ m/^link_down=([01])$/) {
$res->{link_down} = $1;
+   } elsif ($kvp =~ m/^cidr=($IPV6RE|$IPV4RE)\/(\d+)$/) {
+   $res->{address} = $1;
+   $res->{netmask} = $2;
+   } elsif ($kvp =~ m/^gateway=($IPV6RE|$IPV4RE)$/) {
+   $res->{gateway} = $1;
} else {
return undef;
}
@@ -4143,12 +4174,14 @@ sub vm_start {
check_lock($conf) if !$skiplock;
 
die "VM $vmid already running\n" if check_running($vmid, undef, 
$migratedfrom);
-
+   
if (!$statefile && scalar(keys %{$conf->{pending}})) {
vmconfig_apply_pending($vmid, $conf, $storecfg);
$conf = load_config($vmid); # update/reload
}
 
+   generate_cloudinitconfig($conf, $vmid);
+
my $defaults = load_defaults();
 
# set environment variable useful inside network script
@@ -6251,4 +6284,146 @@ sub scsihw_infos {
 return ($maxdev, $controller, $controller_prefix);
 }
 
+sub generate_cloudinitconfig {
+my ($conf, $vmid) = @_;
+
+return if !$conf->{cloudinit};
+
+my $path = "/tmp/cloudinit/$vmid";
+
+mkdir "/tmp/cloudinit";
+mkdir $path;
+mkdir "$path/drive";
+mkdir "$path/drive/openstack";
+mkdir "$path/drive/openstack/latest";
+mkdir "$path/drive/openstack/content";
+generate_cloudinit_userdata($conf, $path);
+generate_cloudinit_metadata($conf, $path);
+generate_cloudinit_network($conf, $path);
+
+my $cmd = [];
+push @$cmd, 'genisoimage';
+push @$cmd, '-R';
+push @$cmd, '-V', 'config-2';
+push @$cmd, '-o', "$path/configdrive.iso";
+push @$cmd, "$path/drive";
+
+run_command($cmd);
+rmtree("$path/drive");
+my $drive = PVE::QemuServer::parse_drive('ide3', 'cloudinit,media=cdrom');
+$conf->{'ide3'} = PVE::QemuServer::print_drive($vmid, $drive);
+update_config_nolock($vmid, $conf, 1);
+
+}
+
+sub generate_cloudinit_userdata {
+my ($conf, $path) = @_;
+
+my $content = "#cloud-config\n";
+my $hostname = $conf->{searchdomain} ? 
$conf->{name}.".".$conf->{searchdomain} : $conf->{name};
+$content .= "fqdn: $hostname\n";
+$content .= "manage_etc_hosts: true\n";
+
+if ($conf->{sshkey}) {
+   $content .= "users:\n";
+   $content .= "  - default\n";
+   $c
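
The config-drive layout and ISO command built by generate_cloudinitconfig above can be sketched as follows (paths as in the patch; genisoimage is only assembled into a command list here, not invoked):

```python
import os

def prepare_configdrive(vmid, base='/tmp/cloudinit'):
    """Create the OpenStack-style config-drive tree and return the
    genisoimage command that would build configdrive.iso from it."""
    path = os.path.join(base, str(vmid))
    for sub in ('drive/openstack/latest', 'drive/openstack/content'):
        os.makedirs(os.path.join(path, sub), exist_ok=True)
    # user_data, meta_data.json and the network config are written
    # into drive/openstack/latest before the ISO is generated.
    return ['genisoimage', '-R', '-V', 'config-2',
            '-o', os.path.join(path, 'configdrive.iso'),
            os.path.join(path, 'drive')]
```

The '-V config-2' volume label is what makes cloud-init's ConfigDrive datasource recognize the ISO as an OpenStack config drive.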

[pve-devel] [PATCH 2/2] firewall autodisable GUI patch v2

2015-06-30 Thread Alen Grizonic
Changes since [PATCH]:

- removed all the unnecessary code

- direct call of the warning message without using the hook method

- additional function submit_first to pass the enable flag parameter in the 
correct way

- added additional condition for the close method

- optimized enable flag change conditions

! the patch needs the keepalive feature disabled to work correctly !

Signed-off-by: Alen Grizonic 
---
 www/manager/grid/FirewallOptions.js | 60 -
 1 file changed, 59 insertions(+), 1 deletion(-)

diff --git a/www/manager/grid/FirewallOptions.js 
b/www/manager/grid/FirewallOptions.js
index f94be6c..9c70e6b 100644
--- a/www/manager/grid/FirewallOptions.js
+++ b/www/manager/grid/FirewallOptions.js
@@ -25,6 +25,63 @@ Ext.define('PVE.FirewallOptions', {
 
var rows = {};
 
+   var submit_first = function() {
+   var me = this;
+   var form = me.formPanel.getForm();
+   var form_values = me.getValues();
+   submit_twice.call(me, form_values.enable ? 2 : 0);
+   }
+
+   var submit_twice = function(enable) {
+   var me = this;
+   var form = me.formPanel.getForm();
+   var values = me.getValues();
+
+   if (enable == 2) {
+   values.enable = 2;
+   } else if (enable == 1) {
+   values.enable = 1;
+   }
+
+   if (me.digest) {
+   if (values.enable == 2) {
+   me.digest = "";
+   } else {
+   values.digest = me.digest;
+   }
+   }
+
+   PVE.Utils.API2Request({
+   url: me.url,
+   waitMsgTarget: me,
+   method: me.method || (me.backgroundDelay ? 'POST' : 'PUT'),
+   params: values,
+   failure: function(response, options) {
+   if (response.result && response.result.errors) {
+   form.markInvalid(response.result.errors);
+   }
+   confirm ("Connection lost: Disabling firewall (in 60 
seconds).");
+   },
+   success: function(response, options) {
+   if ((enable == 2) || (enable == 0)) {
+   me.close();
+   }
+   if ((me.backgroundDelay || me.showProgress) &&
+   response.result.data) {
+   var upid = response.result.data;
+   var win = Ext.create('PVE.window.TaskProgress', {
+   upid: upid
+   });
+   win.show();
+   }
+   if (enable == 2) {
+   submit_twice.call(me, 1);
+   }
+   }
+   });
+   };
+
+
var add_boolean_row = function(name, text, defaultValue, labelWidth) {
rows[name] = {
header: text,
@@ -42,7 +99,8 @@ Ext.define('PVE.FirewallOptions', {
name: name,
uncheckedValue: 0,
fieldLabel: text
-   }
+   },
+   submit: submit_first
}
};
};
-- 
2.1.4




Re: [pve-devel] [PVE-User] Proxmox VE ZFS replication manager released (pve-zsync)

2015-06-30 Thread Wolfgang Link

There were some issues in 0.6.3, but they are fixed in 0.6.4:
https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4


On 06/30/2015 02:08 PM, Angel Docampo wrote:

Hi there!

Is it based on zfs send-receive? I thought it was buggy on linux... 
perhaps it was on 0.6.3?


Anyway, that's a great feature, thank you!

:)

On 30/06/15 12:19, Martin Maurer wrote:

Hi all,

We just released the brand new Proxmox VE ZFS replication manager
(pve-zsync)!

This CLI tool synchronizes your virtual machine (virtual disks and VM
configuration) or directory stored on ZFS between two servers - very
useful for backup and replication tasks.

A big Thank-you to our active community for all feedback, testing, bug
reporting and patch submissions.

Documentation
http://pve.proxmox.com/wiki/PVE-zsync

Git
https://git.proxmox.com/?p=pve-zsync.git;a=summary

Bugtracker
https://bugzilla.proxmox.com/



--


*Angel Docampo
*
*Datalab Tecnologia, s.a.*
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 114
Mob. 670.299.381



___
pve-user mailing list
pve-u...@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




Re: [pve-devel] [PATCH] fix hotplug ip configuration V4

2015-06-30 Thread Dietmar Maurer
> and I don't known if the lxc root pid is always the first in the tasks list ?

I don't think you can assume that. 



[pve-devel] [PATCH 2/3] LXCSetup::Redhat: ipv6 config

2015-06-30 Thread Wolfgang Bumiller
According to their documentation, both of these variables
take an IP[/prefix] notation, and the gateway even has an
optional '%iface' suffix. So it should be possible to simply
copy the value over from the configuration directly.
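
For example, with ip6=fc00::2/64 and gw6=fc00::1 in the container's network config (example values), the generated ifcfg file would gain:

```
IPV6ADDR=fc00::2/64
IPV6_DEFAULTGW=fc00::1
```

Note that on Red Hat systems IPV6INIT=yes is normally also required in the ifcfg file for these keys to take effect; whether the surrounding setup code already writes it is not visible in this hunk.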
---
 src/PVE/LXCSetup/Redhat.pm | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/PVE/LXCSetup/Redhat.pm b/src/PVE/LXCSetup/Redhat.pm
index ba29eb5..4d4e1d9 100644
--- a/src/PVE/LXCSetup/Redhat.pm
+++ b/src/PVE/LXCSetup/Redhat.pm
@@ -207,11 +207,11 @@ sub setup_network {
$data .= "GATEWAY=$d->{gw}\n";
}
}
-   if (defined($d->{gw6})) {
-   die "implement me";
-   }
if (defined($d->{ip6})) {
-   die "implement me";
+   $data .= "IPV6ADDR=$d->{ip6}\n";
+   }
+   if (defined($d->{gw6})) {
+   $data .= "IPV6_DEFAULTGW=$d->{gw6}\n";
}
PVE::Tools::file_set_contents($filename, $data);
}
-- 
2.1.4




[pve-devel] [PATCH 3/3] LXC: more compact network configuration

2015-06-30 Thread Wolfgang Bumiller
Deduplicated network setup code.
---
 src/PVE/LXC.pm | 117 +
 1 file changed, 35 insertions(+), 82 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 36c3995..bd2ab08 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1246,98 +1246,51 @@ sub update_ipconfig {
 
 my $lxc_setup = PVE::LXCSetup->new($conf, $rootdir);
 
-my $update_gateway;
-if (&$safe_string_ne($conf->{$opt}->{gw}, $newnet->{gw})) {
-
-   $update_gateway = 1;
-   if ($conf->{$opt}->{gw}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'route', 'del', 'default', 'via', $conf->{$opt}->{gw} ];
-   eval { PVE::Tools::run_command($cmd); };
-   warn $@ if $@; # ignore errors here
-   delete $conf->{$opt}->{gw};
-   PVE::LXC::write_config($vmid, $conf);
-   $lxc_setup->setup_network($conf);
-   }
-}
-
-if (&$safe_string_ne($conf->{$opt}->{ip}, $newnet->{ip})) {
-
-   if ($conf->{$opt}->{ip}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'addr', 'del', $conf->{$opt}->{ip}, 'dev', $eth  ];
-   eval { PVE::Tools::run_command($cmd); };
-   warn $@ if $@; # ignore errors here
-   delete $conf->{$opt}->{ip};
-   PVE::LXC::write_config($vmid, $conf);
-   $lxc_setup->setup_network($conf);
-   }
-
-   if ($newnet->{ip}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'addr', 'add', $newnet->{ip}, 'dev', $eth  ];
-   PVE::Tools::run_command($cmd);
-
-   $conf->{$opt}->{ip} = $newnet->{ip};
-   PVE::LXC::write_config($vmid, $conf);
-   $lxc_setup->setup_network($conf);
+my $optdata = $conf->{$opt};
+my @deleted;
+my @added;
+my $change = sub {
+   my ($prop, $target, @command) = @_;
+   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', @command];
+   eval { PVE::Tools::run_command($cmd); };
+   if (my $err = $@) {
+   warn $err;
+   } else {
+   push @$target, $prop;
}
-}
-
-if ($update_gateway) {
-
-   if ($newnet->{gw}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'route', 'add', 'default', 'via', $newnet->{gw} ];
-   PVE::Tools::run_command($cmd);
+};
 
-   $conf->{$opt}->{gw} = $newnet->{gw};
-   PVE::LXC::write_config($vmid, $conf);
-   $lxc_setup->setup_network($conf);
-}
-}
+my $change_ip_config = sub {
+   my ($suffix) = @_;
+   my $gw= "gw$suffix";
+   my $ip= "ip$suffix";
 
-my $update_gateway6;
-if (&$safe_string_ne($conf->{$opt}->{gw6}, $newnet->{gw6})) {
-   
-   $update_gateway6 = 1;
-   if ($conf->{$opt}->{gw6}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'route', 'del', 'default', 'via', $conf->{$opt}->{gw6} ];
-   eval { PVE::Tools::run_command($cmd); };
-   warn $@ if $@; # ignore errors here
-   delete $conf->{$opt}->{gw6};
-   PVE::LXC::write_config($vmid, $conf);
-   $lxc_setup->setup_network($conf);
+   my $update_gateway = 0;
+   if (&$safe_string_ne($optdata->{$gw}, $newnet->{$gw})) {
+   $update_gateway = 1;
+   &$change($gw => \@deleted, ('route', 'del', 'default', 'via', 
$optdata->{$gw}));
}
-}
-
-if (&$safe_string_ne($conf->{$opt}->{ip6}, $newnet->{ip6})) {
 
-   if ($conf->{$opt}->{ip6}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'addr', 'del', $conf->{$opt}->{ip6}, 'dev', $eth  ];
-   eval { PVE::Tools::run_command($cmd); };
-   warn $@ if $@; # ignore errors here
-   delete $conf->{$opt}->{ip6};
-   PVE::LXC::write_config($vmid, $conf);
-   $lxc_setup->setup_network($conf);
+   if (&$safe_string_ne($optdata->{$ip}, $newnet->{$ip})) {
+   &$change($ip => \@deleted, ('addr', 'del', $optdata->{$ip}, 'dev', 
$eth))
+   if $optdata->{$ip};
+   &$change($ip => \@added, ('addr', 'add', $newnet->{$ip}, 'dev', 
$eth))
+   if $newnet->{$ip};
}
 
-   if ($newnet->{ip6}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'addr', 'add', $newnet->{ip6}, 'dev', $eth  ];
-   PVE::Tools::run_command($cmd);
-
-   $conf->{$opt}->{ip6} = $newnet->{ip6};
-   PVE::LXC::write_config($vmid, $conf);
-   $lxc_setup->setup_network($conf);
+   if ($update_gateway && $newnet->{$gw}) {
+   &$change($gw => \@added, ('route', 'add', 'default', 'via', 
$newnet->{$gw}));
}
-}
-
-if ($update_gateway6) {
+};
 
-   if ($newnet->{gw6}) {
-   my $cmd = ['lxc-attach', '-n', $vmid, '-s', 'NETWORK', '--', 
'/sbin/ip', 'route', 'add',
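(The diff above is truncated in the archive.) The $change closure it introduces — run an ip command inside the container via lxc-attach, and record the property in @deleted or @added only when the command succeeds — can be sketched like this (Python for illustration; the command runner is injected so the logic is testable, and all names here are mine, not from the patch):

```python
import subprocess

def make_change(vmid, run=subprocess.check_call):
    """Return (change, added, deleted): change(prop, target, *ip_args) runs
    'ip' inside container vmid via lxc-attach and records prop on success."""
    added, deleted = [], []

    def change(prop, target, *ip_args):
        cmd = ['lxc-attach', '-n', str(vmid), '-s', 'NETWORK', '--',
               '/sbin/ip', *ip_args]
        try:
            run(cmd)
        except Exception as err:
            print(f"warning: {err}")   # mirror the patch's warn-and-continue
        else:
            target.append(prop)        # record the property only on success

    return change, added, deleted
```

The point of collecting @deleted/@added is that the caller can afterwards update the config and call setup_network once, instead of writing the config after every single ip command as the old code did.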

[pve-devel] [PATCH 1/3] LXCSetup::Debian: ipv6 + naming consistency

2015-06-30 Thread Wolfgang Bumiller
Implemented the 'implement me' code and changed the v4* and v6*
variable names to be consistent with the way they're named in
qemu-server: e.g. address and address6 instead of v4address and
v6address.
---
 src/PVE/LXCSetup/Debian.pm | 35 +--
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/src/PVE/LXCSetup/Debian.pm b/src/PVE/LXCSetup/Debian.pm
index 6094bce..a10d953 100644
--- a/src/PVE/LXCSetup/Debian.pm
+++ b/src/PVE/LXCSetup/Debian.pm
@@ -3,7 +3,7 @@ package PVE::LXCSetup::Debian;
 use strict;
 use warnings;
 use Data::Dumper;
-use PVE::Tools;
+use PVE::Tools qw($IPV6RE);
 use PVE::LXC;
 use File::Path;
 
@@ -105,14 +105,21 @@ sub setup_network {
my $net = {};
if (defined($d->{ip})) {
my $ipinfo = PVE::LXC::parse_ipv4_cidr($d->{ip});
-   $net->{v4address} = $ipinfo->{address};
-   $net->{v4netmask} = $ipinfo->{netmask};
+   $net->{address} = $ipinfo->{address};
+   $net->{netmask} = $ipinfo->{netmask};
}
if (defined($d->{'gw'})) {
-   $net->{v4gateway} = $d->{'gw'};
+   $net->{gateway} = $d->{'gw'};
}
if (defined($d->{ip6})) {
-   die "implement me";
+   if ($d->{ip6} !~ /^($IPV6RE)\/(\d+)$/) {
+   die "unable to parse ipv6 address/prefix\n";
+   }
+   $net->{address6} = $1;
+   $net->{netmask6} = $2;
+   }
+   if (defined($d->{'gw6'})) {
+   $net->{gateway6} = $d->{'gw6'};
}
$networks->{$d->{name}} = $net;
}
@@ -142,11 +149,11 @@ sub setup_network {
 
$interfaces .= "auto $section->{ifname}\n" if $new;
 
-   if ($net->{v4address}) {
+   if ($net->{address}) {
$interfaces .= "iface $section->{ifname} inet static\n";
-   $interfaces .= "\taddress $net->{v4address}\n" if 
defined($net->{v4address});
-   $interfaces .= "\tnetmask $net->{v4netmask}\n" if 
defined($net->{v4netmask});
-   $interfaces .= "\tgateway $net->{v4gateway}\n" if 
defined($net->{v4gateway});
+   $interfaces .= "\taddress $net->{address}\n" if 
defined($net->{address});
+   $interfaces .= "\tnetmask $net->{netmask}\n" if 
defined($net->{netmask});
+   $interfaces .= "\tgateway $net->{gateway}\n" if 
defined($net->{gateway});
foreach my $attr (@{$section->{attr}}) {
$interfaces .= "\t$attr\n";
}
@@ -159,11 +166,11 @@ sub setup_network {
} elsif ($section->{type} eq 'ipv6') {
$done_v6_hash->{$section->{ifname}} = 1;

-   if ($net->{v6address}) {
+   if ($net->{address6}) {
$interfaces .= "iface $section->{ifname} inet6 static\n";
-   $interfaces .= "\taddress $net->{v6address}\n" if 
defined($net->{v6address});
-   $interfaces .= "\tnetmask $net->{v6netmask}\n" if 
defined($net->{v6netmask});
-   $interfaces .= "\tgateway $net->{v6gateway}\n" if 
defined($net->{v6gateway});
+   $interfaces .= "\taddress $net->{address6}\n" if 
defined($net->{address6});
+   $interfaces .= "\tnetmask $net->{netmask6}\n" if 
defined($net->{netmask6});
+   $interfaces .= "\tgateway $net->{gateway6}\n" if 
defined($net->{gateway6});
foreach my $attr (@{$section->{attr}}) {
$interfaces .= "\t$attr\n";
}
@@ -236,7 +243,7 @@ sub setup_network {
$section = { type => 'ipv4', ifname => $ifname, attr => []};
&$print_section(1);
}
-   if (!$done_v6_hash->{$ifname} && defined($net->{v6address})) {
+   if (!$done_v6_hash->{$ifname} && defined($net->{address6})) {
if ($need_separator) { $interfaces .= "\n"; $need_separator = 0; }; 

$section = { type => 'ipv6', ifname => $ifname, attr => []};
&$print_section(1);
-- 
2.1.4
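The ip6 branch above validates an address/prefix pair against $IPV6RE and later renders an inet6 stanza into /etc/network/interfaces. A rough standalone sketch of both steps (Python for illustration; the regex below is a much-simplified stand-in for PVE::Tools::$IPV6RE, which does real validation):

```python
import re

# Simplified stand-in for PVE::Tools::$IPV6RE -- accepts hex groups and ':',
# without the full validation the real regex performs.
IPV6RE = r'[0-9a-fA-F:]*:[0-9a-fA-F:]*'

def parse_ipv6_cidr(cidr):
    """Split 'address/prefix' as the Debian setup patch does, dying on bad input."""
    m = re.fullmatch(r'(%s)/(\d+)' % IPV6RE, cidr)
    if not m or not 0 <= int(m.group(2)) <= 128:
        raise ValueError("unable to parse ipv6 address/prefix\n")
    return {'address6': m.group(1), 'netmask6': m.group(2)}

def iface_stanza(ifname, net):
    """Render the 'iface ... inet6 static' block the patch writes."""
    lines = [f"iface {ifname} inet6 static"]
    for key in ('address6', 'netmask6', 'gateway6'):
        if key in net:
            # strip the trailing '6' to get the interfaces(5) keyword
            lines.append(f"\t{key[:-1]} {net[key]}")
    return "\n".join(lines) + "\n"
```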


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 1/2] add dns hotplug

2015-06-30 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 src/PVE/LXC.pm | 14 ++
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 36c3995..7b7226b 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1005,9 +1005,11 @@ sub update_lxc_config {
 my @nohotplug;
 
 my $rootdir;
+my $lxc_setup;
 if ($running) {
my $pid = find_lxc_pid($vmid);
$rootdir = "/proc/$pid/root";
+   $lxc_setup = PVE::LXCSetup->new($conf, $rootdir);
 }
 
 if (defined($delete)) {
@@ -1025,12 +1027,10 @@ sub update_lxc_config {
delete $conf->{'pve.startup'};
} elsif ($opt eq 'nameserver') {
delete $conf->{'pve.nameserver'};
-   push @nohotplug, $opt;
-   next if $running;
+   $lxc_setup->set_dns($conf);
} elsif ($opt eq 'searchdomain') {
delete $conf->{'pve.searchdomain'};
-   push @nohotplug, $opt;
-   next if $running;
+   $lxc_setup->set_dns($conf);
} elsif ($opt =~ m/^net(\d)$/) {
delete $conf->{$opt};
next if !$running;
@@ -1054,13 +1054,11 @@ sub update_lxc_config {
} elsif ($opt eq 'nameserver') {
my $list = verify_nameserver_list($value);
$conf->{'pve.nameserver'} = $list;
-   push @nohotplug, $opt;
-   next if $running;
+   $lxc_setup->set_dns($conf);
} elsif ($opt eq 'searchdomain') {
my $list = verify_searchdomain_list($value);
$conf->{'pve.searchdomain'} = $list;
-   push @nohotplug, $opt;
-   next if $running;
+   $lxc_setup->set_dns($conf);
} elsif ($opt eq 'memory') {
$conf->{'lxc.cgroup.memory.limit_in_bytes'} = $value*1024*1024;
write_cgroup_value("memory", $vmid, "memory.limit_in_bytes", 
$value*1024*1024);
-- 
2.1.4
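The set_dns hotplug calls above ultimately regenerate the container's resolv.conf from the space-separated pve.nameserver / pve.searchdomain lists. A hedged sketch of that rendering (Python for illustration; the real work lives in PVE::LXCSetup and may differ in ordering and details):

```python
def render_resolv_conf(conf):
    """Build resolv.conf content from the space-separated config lists.
    Approximates what the LXCSetup set_dns hotplug writes into the CT."""
    out = []
    for domain in conf.get('pve.searchdomain', '').split():
        out.append(f"search {domain}")
    for ns in conf.get('pve.nameserver', '').split():
        out.append(f"nameserver {ns}")
    return "\n".join(out) + "\n" if out else ""
```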



[pve-devel] [PATCH 2/2] add hostname hotplug

2015-06-30 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 src/PVE/LXC.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7b7226b..9214dd2 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1047,6 +1047,9 @@ sub update_lxc_config {
my $value = $param->{$opt};
if ($opt eq 'hostname') {
$conf->{'lxc.utsname'} = $value;
+$lxc_setup->set_hostname($conf);
+my $cmd = ['lxc-attach', '-n', $vmid, '--', 'hostname', $value ];
+PVE::Tools::run_command($cmd);
} elsif ($opt eq 'onboot') {
$conf->{'pve.onboot'} = $value ? 1 : 0;
} elsif ($opt eq 'startup') {
-- 
2.1.4
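The hotplug above does two things: store the new utsname in the config and push the name into the running container via lxc-attach. A sketch (Python for illustration; the running-state guard is my assumption — lxc-attach needs a live container — and the injected runner only exists to make the command construction testable):

```python
import subprocess

def apply_hostname(vmid, hostname, running, run=subprocess.check_call):
    """Store the new utsname, and only touch the live container when it is
    running (assumption: lxc-attach fails against a stopped container)."""
    conf = {'lxc.utsname': hostname}
    if running:
        run(['lxc-attach', '-n', str(vmid), '--', 'hostname', hostname])
    return conf
```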



[pve-devel] Proxmox VE ZFS replication manager released (pve-zsync)

2015-06-30 Thread Martin Maurer
Hi all,

We just released the brand new Proxmox VE ZFS replication manager
(pve-zsync)!

This CLI tool synchronizes your virtual machine (virtual disks and VM
configuration) or directory stored on ZFS between two servers - very
useful for backup and replication tasks.

A big Thank-you to our active community for all feedback, testing, bug
reporting and patch submissions.

Documentation
http://pve.proxmox.com/wiki/PVE-zsync

Git
https://git.proxmox.com/?p=pve-zsync.git;a=summary

Bugtracker
https://bugzilla.proxmox.com/

-- 
Best Regards,

Martin Maurer

mar...@proxmox.com
http://www.proxmox.com



Re: [pve-devel] [PATCH] fix hotplug ip configuration V4

2015-06-30 Thread Alexandre DERUMIER
To find the lxc pid,

it's possible to read it from

/sys/fs/cgroup/systemd/lxc/$name/tasks

but that tasks file lists the pids of all processes running in the container,


and I don't know whether the lxc root pid is always the first entry in that list?
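For what it's worth, reading the first entry of that tasks file is a one-liner — though, as noted above, the kernel does not guarantee that the container's init comes first; cgroup.procs (thread-group leaders only) might be closer to what is wanted. A sketch, assuming the systemd cgroup layout mentioned:

```python
def first_cgroup_task(name, root='/sys/fs/cgroup/systemd/lxc'):
    """Return the first pid listed in the container's cgroup tasks file,
    or None if the file is empty. Ordering is NOT guaranteed, so this may
    not be the container's init process."""
    with open(f"{root}/{name}/tasks") as fh:
        first = fh.readline().strip()
    return int(first) if first else None
```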



- Mail original -
De: "dietmar" 
À: "aderumier" , "pve-devel" 
Envoyé: Mardi 30 Juin 2015 08:11:17
Objet: Re: [pve-devel] [PATCH] fix hotplug ip configuration V4

applied, thanks! 


[pve-devel] new Debian LXC Appliance Builder

2015-06-30 Thread Dietmar Maurer
Hi all,

I just updated dab to work with LXC:

https://git.proxmox.com/?p=dab.git;a=summary
https://git.proxmox.com/?p=dab-pve-appliances.git;a=summary

Also added support for ubuntu 12.04, 14.04 and 15.04 (precise, trusty and
vivid).

- Dietmar



Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update

2015-06-30 Thread Alexandre DERUMIER
>>However I found we could easily turn it into a CDROM drive again with 
>>this little change: 
>>
>>- my $drive = parse_drive($ds, "$storeid:$vmid/vm-$vmid-cloudinit.qcow2"); 
>>+ my $drive = parse_drive($ds, 
>>"$storeid:$vmid/vm-$vmid-cloudinit.qcow2,media=cdrom"); 
>>
>>This is the only change required to make it use 'ide-cd' and thus show 
>>up as /dev/srX. 

I'll test this. Thanks.



>>You can simply use `cloudinit: local`. AFAIK we already need shared
>>storage for migration, and more importantly, using storage-backed qcow2
>>images automagically adds snapshot support. 
Ok, so we need to implement this on other shared block storages such as rbd and zfs.
I'll try to look into this


>>(which becomes a pain when we add template support, since we
>>need to include the template and cannot simply backup the configuration
>>variables in the vmid.conf).
Ah ok, I understand now !


- Mail original -
De: "Wolfgang Bumiller" 
À: "aderumier" 
Cc: "pve-devel" 
Envoyé: Mardi 30 Juin 2015 10:18:22
Objet: Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update

> 1)-does it work with windows ? (as we expose the config drive as drive and 
> not cdrom) 
Well, it shows up, and according to the code it should, but my 
experience with administrating windows systems is limited, and my 
patience has hit a limit with all that lengthy point&click work... 
You don't happen to have a working cloudinit-base config file for 
windows I could copy? The installer doesn't let me choose the network 
device, and finding configuration variables and bringing them into the 
VM is hurting my brain :-P 

> 2)-as we put it as ide3, it could break some guest disk drive order, if we 
> don't use disk uuid in /etc/fstab 
Personally I'm a fan of always using UUID= or LABEL=, but you're right, 
this could be an issue. 
However I found we could easily turn it into a CDROM drive again with 
this little change: 

- my $drive = parse_drive($ds, "$storeid:$vmid/vm-$vmid-cloudinit.qcow2"); 
+ my $drive = parse_drive($ds, 
"$storeid:$vmid/vm-$vmid-cloudinit.qcow2,media=cdrom"); 

This is the only change required to make it use 'ide-cd' and thus show 
up as /dev/srX. 

> 3) do we really need to define a special storage for hosting qcow2 ? maybe always 
> store it in local storage and rsync it on live migration. 
> (not everybody has an nfs shared storage) 

You can simply use `cloudinit: local`. AFAIK we already need shared 
storage for migration, and more importantly, using storage-backed qcow2 
images automagically adds snapshot support. Otherwise we need to 
separately implement some way of storing the image when making a 
snapshot (which becomes a pain when we add template support, since we 
need to include the template and cannot simply backup the configuration 
variables in the vmid.conf). 


Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update

2015-06-30 Thread Dietmar Maurer
> also,
> 
> I can compress cloud-init qcow2 to around 5K with compression and small
> cluster_size
> 
> qemu-img convert -c vm-100-cloudinit.qcow2 -f qcow2 -O qcow2 test.qcow2 -o
> cluster_size=512b
> 
> -rw-r--r--  1 root root   5120 Jun 30 09:43 test.qcow2

great :-)



Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update

2015-06-30 Thread Wolfgang Bumiller
> 1)-does it work with windows ? (as we expose the config drive as drive and 
> not cdrom)
Well, it shows up, and according to the code it should, but my
experience with administrating windows systems is limited, and my
patience has hit a limit with all that lengthy point&click work...
You don't happen to have a working cloudinit-base config file for
windows I could copy? The installer doesn't let me choose the network
device, and finding configuration variables and bringing them into the
VM is hurting my brain :-P

> 2)-as we put it as ide3, it could break some guest disk drive order, if we 
> don't use disk uuid in /etc/fstab
Personally I'm a fan of always using UUID= or LABEL=, but you're right,
this could be an issue.
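As an aside, the UUID= style mentioned above looks like this in /etc/fstab (illustrative values only):

```
# /etc/fstab -- mounts independent of device enumeration order
UUID=0a3407de-014b-458b-b5c1-848e92a327a3  /     ext4  defaults  0 1
UUID=f9fe0b69-a280-415d-a03a-a32752370dee  /var  ext4  defaults  0 2
```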
However I found we could easily turn it into a CDROM drive again with
this little change:

-   my $drive = parse_drive($ds, 
"$storeid:$vmid/vm-$vmid-cloudinit.qcow2");
+   my $drive = parse_drive($ds, 
"$storeid:$vmid/vm-$vmid-cloudinit.qcow2,media=cdrom");

This is the only change required to make it use 'ide-cd' and thus show
up as /dev/srX.

> 3) do we really need to define a special storage for hosting qcow2 ? maybe always 
> store it in local storage and rsync it on live migration.
> (not everybody has an nfs shared storage)

You can simply use `cloudinit: local`. AFAIK we already need shared
storage for migration, and more importantly, using storage-backed qcow2
images automagically adds snapshot support. Otherwise we need to
separately implement some way of storing the image when making a
snapshot (which becomes a pain when we add template support, since we
need to include the template and cannot simply backup the configuration
variables in the vmid.conf).

On Tue, Jun 30, 2015 at 08:44:17AM +0200, Alexandre DERUMIER wrote:
> Hi Wolfgang,
> 
> I've begun testing your patches, 
> they seem to work fine here. 
> 
> I have some questions:
> 
> 1)-does it work with windows ? (as we expose the config drive as drive and 
> not cdrom)
> 
> 2)-as we put it as ide3, it could break some guest disk drive order, if we 
> don't use disk uuid in /etc/fstab
>  for example : 
>  user have 2 virtio-scsi disk
>   /dev/sda
>   /dev/sdb
>  
>  with /etc/fstab
>  /dev/sda  /
>  /dev/sdb  /var
> 
> now, with ide3, it's going to /dev/sda, as AFAIK they are assigned in pci 
> slot order.
> 
> Maybe creating a sata controller on the last pci slot/bridge could avoid 
> that. (need to test qemu 2.4 sata migration)
> 
> 3) do we really need to define a special storage for hosting qcow2 ? maybe always 
> store it in local storage and rsync it on live migration.
> (not everybody has an nfs shared storage)
> 
> 

Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update

2015-06-30 Thread Alexandre DERUMIER
>>3) do we really need to define a special storage for hosting qcow2 ? maybe always 
>>store it in local storage and rsync it on live migration.
>>   (not everybody has an nfs shared storage)

or, do you plan to use any storage type instead of qcow2? 
(we need the snapshot feature on this cloud-init drive if the user makes a vm 
snapshot?)


- Mail original -
De: "aderumier" 
À: "Wolfgang Bumiller" 
Cc: "pve-devel" 
Envoyé: Mardi 30 Juin 2015 09:47:43
Objet: Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update

also, 

I can compress cloud-init qcow2 to around 5K with compression and small 
cluster_size 

qemu-img convert -c vm-100-cloudinit.qcow2 -f qcow2 -O qcow2 test.qcow2 -o 
cluster_size=512b 

-rw-r--r-- 1 root root 5120 Jun 30 09:43 test.qcow2 



- Mail original - 
De: "aderumier"  
À: "Wolfgang Bumiller"  
Cc: "pve-devel"  
Envoyé: Mardi 30 Juin 2015 08:44:17 
Objet: Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update 

Hi Wolfgang, 

I've begun testing your patches, 
they seem to work fine here. 

I have some questions: 

1)-does it work with windows ? (as we expose the config drive as drive and not 
cdrom) 

2)-as we put it as ide3, it could break some guest disk drive order, if we 
don't use disk uuid in /etc/fstab 
for example : 
user have 2 virtio-scsi disk 
/dev/sda 
/dev/sdb 

with /etc/fstab 
/dev/sda / 
/dev/sdb /var 

now, with ide3, it's going to /dev/sda, as AFAIK they are assigned in pci 
slot order. 

Maybe creating a sata controller on the last pci slot/bridge could avoid that. 
(need to test qemu 2.4 sata migration) 

3) do we really need to define a special storage for hosting qcow2 ? maybe always 
store it in local storage and rsync it on live migration. 
(not everybody has an nfs shared storage) 


- Mail original - 
De: "aderumier"  
À: "Wolfgang Bumiller"  
Cc: "pve-devel"  
Envoyé: Vendredi 26 Juin 2015 14:17:38 
Objet: Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update 

Hi, 

>>For the UI enabling and disabling cloudinit would then be 
>>adding/removing a cloudinit device. It would then also not have to be 
>>hardcoded to ide3 but be configurable to any block device like when 
>>adding a hard disk. 

I'm not sure, but I think the sata/ahci controllers are now migratable in qemu 
master (so for qemu 2.4) 

http://git.qemu.org/?p=qemu.git;a=commit;h=04329029a8c539eb5f75dcb6d8b016f0c53a031a
 

maybe we could add a dedicated sata controller for cloudinit drive ? 



- Mail original - 
De: "Wolfgang Bumiller"  
À: "pve-devel"  
Envoyé: Vendredi 26 Juin 2015 12:36:52 
Objet: Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update 

We just talked this over a bit again. 

If we keep going with this approach we could actually remove the 
cloudinit config parameter and, similar to what Alexandre did in the 
first patches, have an `ideX: cloudinit,storage=STOREID` parameter 
enable cloudinit (but have it fix in the config rather than added after 
doing `cloudinit: 1`). 
For the UI enabling and disabling cloudinit would then be 
adding/removing a cloudinit device. It would then also not have to be 
hardcoded to ide3 but be configurable to any block device like when 
adding a hard disk. 

We'd then still need a parameter for templates. (Either a new one like 
`cloudinit-template: xyz` or if we plan on adding more cloud-init 
> parameters we could keep `cloudinit: various,comma=separated,args`.) 

Another TODO before I forget about it again: physical cdrom drives 
probably don't need `media=cdrom` in the code. should check that. 

On Fri, Jun 26, 2015 at 12:06:31PM +0200, Wolfgang Bumiller wrote: 
> Changes since [PATCH v4]: 
> 
> Instead of generating a separate ISO image file we now generate the 
> ISO into a qcow2 device which is storage-managed. 
> This does not only mean we don't need to rsync the file for 
> live-migrations, but we can also use the live-snapshot feature out of 
> the box. 
> 
> It also allowed me to remove the code to generate the commandline 
> parameters by simply making foreach_drive include the cloud-init drive 
> (if it exists). 
> In order to do that I had to add a $vmid parameter to it. Since it 
> already takes the VM's config as parameter this seemed like a sane 
> thing to do. I grepped the rest of the repositories for code affected 
> by this change. It seemed to be all isolated in qemu-server. 
> 
> Please test and comment. 
> 
> Alexandre Derumier (1): 
> implement cloudinit v2 
> 
> Wolfgang Bumiller (2): 
> cloud-init changes 
> cloudinit: use qcow2 for future snapshot support 
> 
> PVE/API2/Qemu.pm | 16 +-- 
> PVE/QemuMigrate.pm | 8 +- 
> PVE/QemuServer.pm | 364 +++ 
> PVE/VZDump/QemuServer.pm | 2 +- 
> control.in | 2 +- 
> 5 files changed, 353 insertions(+), 39 deletions(-) 
> 
> -- 
> 2.1.4 
> 
> 
> ___ 
> pve-devel mailing list 
> pve-devel@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
> 


Re: [pve-devel] [PATCH] firewall autodisable GUI update

2015-06-30 Thread Alen Grizonic

Yes, I agree. Some parts of the code have to be left out.

Other comments inline:

On 06/29/2015 07:21 PM, Dietmar Maurer wrote:

comments inline


var rows = {};
  
+var submit_twice = function(enable) {

+
+   var me = this;
+
+var form = me.formPanel.getForm();
+
+var values = me.getValues();
+
+if ((values.enable == 1) && (enable != 2)) {
+values.enable = 2;
+} else if (enable == 2) {
+values.enable = 1;
+}
+

-from-here

+Ext.Object.each(values, function(name, val) {
+if (values.hasOwnProperty(name)) {
+if (Ext.isArray(val) && !val.length) {
+values[name] = '';
+}
+}
+});

-

Above code is not really required, because this is a special form.


Removed.




+
+if (me.digest) {
+if (values.enable == 2) {
+me.digest = "";
+} else {
+values.digest = me.digest;
+}
+}

are you sure this works? I think the digest changes after the first call?


The digest is not changed after the first call, so it has to be removed 
/ cleared, otherwise it does not work correctly.





+
+if (me.backgroundDelay) {
+values.background_delay = me.backgroundDelay;
+}

we do not need backgroundDelay here.


Removed.




+
+var url =  me.url;
+if (me.method === 'DELETE') {
+url = url + "?" + Ext.Object.toQueryString(values);
+values = undefined;
+}

AFAIK we never send 'DELETE' here, so above code is not required.


Removed.




+
+PVE.Utils.API2Request({
+url: url,
+waitMsgTarget: me,
+method: me.method || (me.backgroundDelay ? 'POST' : 'PUT'),
+params: values,
+failure: function(response, options) {
+if (me.onFailedHook) {
+me.onFailedHook(response);

why do we need that onFailedHook? (or onFailerHook??)


The onFailed/onFailureHook was used for the notification message. Ok, 
now it is used the other way.





+} else {
+if (response.result && response.result.errors) {
+form.markInvalid(response.result.errors);
+}
+Ext.Msg.alert(gettext('Error'), response.htmlStatus);
+}
+},
+success: function(response, options) {
+me.close();

we want to close after the second call.


True. Corrected.




+if ((me.backgroundDelay || me.showProgress) &&
+response.result.data) {
+var upid = response.result.data;
+var win = Ext.create('PVE.window.TaskProgress', {
+upid: upid
+});
+win.show();
+}

I think we do not need above code here.


Removed.




+if (values.enable == 2) {
+submit_twice.call(me, 2);
+}
+}
+});
+};
+
var add_boolean_row = function(name, text, defaultValue, labelWidth) {
rows[name] = {
header: text,
@@ -41,8 +113,12 @@ Ext.define('PVE.FirewallOptions', {
checked: defaultValue ? true : false,
name: name,
uncheckedValue: 0,
-   fieldLabel: text
-   }
+fieldLabel: text,
+},
+onFailedHook: function() {
+confirm ("Connection lost: Disabling firewall (in 60 seconds).") ;
+},
+submit: submit_twice
}
};
};
diff --git a/www/manager/window/Edit.js b/www/manager/window/Edit.js
index 3e69da9..5d52a65 100644
--- a/www/manager/window/Edit.js
+++ b/www/manager/window/Edit.js
@@ -24,6 +24,8 @@ Ext.define('PVE.window.Edit', {
  
  showProgress: false,
  
+onFailerHook: undefined,

+

I guess we do not need that here.


Removed.




  isValid: function() {
var me = this;
  
--

2.1.4
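For reference, the two-phase submit being reviewed — first send enable=2 to arm the auto-disable safety net, then, if that request comes back (i.e. the connection survived the firewall change), confirm with enable=1 — can be sketched outside ExtJS like this (Python for illustration; the enable=2 semantics are as described in the thread, everything else is assumption):

```python
def submit_twice(values, request):
    """Two-phase firewall enable: send enable=2 first; if that request
    succeeds (the connection still works), confirm with enable=1.
    `request` is the injected API call and must raise on failure."""
    sent = []
    if values.get('enable') == 1:
        first = dict(values, enable=2)    # arm the auto-disable safety net
        request(first)
        sent.append(first)
        confirm = dict(values, enable=1)  # connection survived: confirm
        request(confirm)
        sent.append(confirm)
    else:
        request(values)                   # disabling needs no safety net
        sent.append(values)
    return sent
```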




Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update

2015-06-30 Thread Alexandre DERUMIER
also,

I can compress cloud-init qcow2 to around 5K with compression and small 
cluster_size

qemu-img convert -c vm-100-cloudinit.qcow2 -f qcow2 -O qcow2 test.qcow2 -o 
cluster_size=512b

-rw-r--r--  1 root root   5120 Jun 30 09:43 test.qcow2
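The convert invocation above can be wrapped so the options stay in one place; this only builds the command line, so qemu-img itself is not needed to test it (Python for illustration):

```python
def compress_cmd(src, dst, cluster_size=512):
    """Build the qemu-img convert call from the mail: -c compresses clusters,
    and a small cluster_size shrinks a nearly-empty cloud-init image further."""
    return ['qemu-img', 'convert', '-c', src, '-f', 'qcow2', '-O', 'qcow2',
            dst, '-o', f'cluster_size={cluster_size}']
```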


