Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
> # Note about locking: we use flock on the config file protect
> @@ -3777,7 +3779,7 @@ sub qemu_volume_snapshot {
>
>     my $running = check_running($vmid);
>
> -    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
> +    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);

s/storage_support_snapshop/storage_support_snapshot/

But the name is misleading. The question is whether 'qemu' does the snapshot, or whether we need to do it ourselves. Maybe something like: do_snapshots_with_qemu()

>     return if !$running;
>
> @@ -5772,6 +5774,23 @@ my $savevm_wait = sub {
>     }
> };
>
> +sub storage_support_snapshot {
> +    my ($volid, $storecfg) = @_;

I would reorder the arguments:

    my ($storecfg, $volid) = @_;

> +
> +    my $storage_name = PVE::Storage::parse_volume_id($volid);
> +
> +    my $ret = undef;

what for?

> +    if ($snap_storage->{$storecfg->{ids}->{$storage_name}->{type}}) {
> +        $ret = 1;

    return 1;

> +    }
> +
> +    if ($volid =~ m/\.(qcow2|qed)$/) {
> +        $ret = 1;

    return 1;

> +    }
> +
> +    return $ret;

    return undef;

> +}
> +
> sub snapshot_create {
>     my ($vmid, $snapname, $save_vmstate, $comment) = @_;
> --
> 2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
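For illustration, this is roughly how the helper might look with the review comments applied: reordered arguments, early returns instead of the `$ret` variable, and the suggested name `do_snapshots_with_qemu()`. This is only a sketch, not code from the repository; in particular the `split` is a simplified stand-in for `PVE::Storage::parse_volume_id()`.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# storage types whose snapshots qemu can take directly (from the patch)
my $snap_storage = { zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1 };

# hypothetical reworked helper incorporating the review comments
sub do_snapshots_with_qemu {
    my ($storecfg, $volid) = @_;

    # simplified stand-in for PVE::Storage::parse_volume_id()
    my ($storeid) = split /:/, $volid, 2;

    # storage types qemu snapshots directly
    return 1 if $snap_storage->{$storecfg->{ids}->{$storeid}->{type}};

    # file-based formats with internal snapshot support
    return 1 if $volid =~ m/\.(qcow2|qed)$/;

    return undef;
}
```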
[pve-devel] [PATCH] remove running from Storage and check it in QemuServer
It is better to check if a VM is running in QemuServer than in Storage. For the Storage there is no difference if it is running or not.

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage.pm                   | 4 ++--
 PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
 PVE/Storage/LVMPlugin.pm         | 2 +-
 PVE/Storage/Plugin.pm            | 4 +---
 PVE/Storage/RBDPlugin.pm         | 4 +---
 PVE/Storage/SheepdogPlugin.pm    | 4 +---
 PVE/Storage/ZFSPoolPlugin.pm     | 2 +-
 7 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b542ee6..92c7d14 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -162,13 +162,13 @@ sub volume_rollback_is_possible {
 }
 
 sub volume_snapshot {
-    my ($cfg, $volid, $snap, $running) = @_;
+    my ($cfg, $volid, $snap) = @_;
 
     my ($storeid, $volname) = parse_volume_id($volid, 1);
     if ($storeid) {
         my $scfg = storage_config($cfg, $storeid);
         my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
-        return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap, $running);
+        return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
     } elsif ($volid =~ m|^(/.+)$| && -e $volid) {
         die "snapshot file/device '$volid' is not possible\n";
     } else {
diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index c957ade..763c482 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -205,7 +205,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
     die "volume snapshot is not possible on iscsi device";
 }
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 1688bb5..19eb78c 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -456,7 +456,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
     die "lvm snapshot is not implemented";
 }
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 5b72b07..f119068 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -641,12 +641,10 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
 
-    return 1 if $running;
-
     my $path = $class->filesystem_path($scfg, $volname);
 
     my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-c', $snap, $path];
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 2c45a68..878fa16 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -510,9 +510,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-    return 1 if $running;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     my ($vtype, $name, $vmid) = $class->parse_volname($volname);
diff --git a/PVE/Storage/SheepdogPlugin.pm b/PVE/Storage/SheepdogPlugin.pm
index 3e2c126..e358f9e 100644
--- a/PVE/Storage/SheepdogPlugin.pm
+++ b/PVE/Storage/SheepdogPlugin.pm
@@ -389,9 +389,7 @@ sub volume_resize {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
-
-    return 1 if $running;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 39fc348..1064869 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -415,7 +415,7 @@ sub volume_size_info {
 }
 
 sub volume_snapshot {
-    my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
     $class->zfs_request($scfg, undef, 'snapshot', "$scfg->{pool}/$volname\@$snap");
 }
--
2.1.4
[pve-devel] Snapshot: move check running from Storage to QemuServer
Hi all@list,

I think the check whether we can make snapshots should not be done in the Storage. IMHO it should be done in QemuServer. Is there any reason to keep it where it is?
[pve-devel] [PATCH] remove running from Storage and check it in QemuServer
It is better to check if a VM is running in QemuServer than in Storage. For the Storage there is no difference if it is running or not.

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/QemuServer.pm | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 708b208..39aff42 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,8 @@ use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
 
+my $snap_storage = {zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1};
+
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
 # Note about locking: we use flock on the config file protect
@@ -3777,7 +3779,7 @@ sub qemu_volume_snapshot {
 
     my $running = check_running($vmid);
 
-    return if !PVE::Storage::volume_snapshot($storecfg, $volid, $snap, $running);
+    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
 
     return if !$running;
 
@@ -5772,6 +5774,23 @@ my $savevm_wait = sub {
     }
 };
 
+sub storage_support_snapshot {
+    my ($volid, $storecfg) = @_;
+
+    my $storage_name = PVE::Storage::parse_volume_id($volid);
+
+    my $ret = undef;
+    if ($snap_storage->{$storecfg->{ids}->{$storage_name}->{type}}) {
+        $ret = 1;
+    }
+
+    if ($volid =~ m/\.(qcow2|qed)$/) {
+        $ret = 1;
+    }
+
+    return $ret;
+}
+
 sub snapshot_create {
     my ($vmid, $snapname, $save_vmstate, $comment) = @_;
--
2.1.4
[pve-devel] [PATCH] Add Readme explaining how to use extjs5 for dev
---
 www/manager5/Readme.md | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 www/manager5/Readme.md

diff --git a/www/manager5/Readme.md b/www/manager5/Readme.md
new file mode 100644
index 000..87e00c3
--- /dev/null
+++ b/www/manager5/Readme.md
@@ -0,0 +1,22 @@
+pveproxy with ExtJS 5 development mini howto
+============================================
+
+unpack the ExtJS 5 sources, and copy them to /usr/share/pve-manager/ext5
+
+    cd www/ext5/
+    make install
+
+symlink to our ext5-compatible javascript code
+
+    cd /usr/share/pve-manager
+    ln -s PATH_TO_YOUR_GIT_REPO/www/manager5
+
+access the PVE proxy with ExtJS 5
+
+    https://localhost:8006/?ext5=1
+
+With the extra parameter **ext5=1**, pve-proxy will call the function **PVE::ExtJSIndex5::get_index()**,
+which returns an HTML page with all javascript files included.
+Provided you included the javascript in **PVE/ExtJSIndex5.pm**, a simple browser refresh is then enough
+to see your changes.
--
2.1.4
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
> +    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
>
> This seems to be wrong: we can't do a qcow2 snapshot with qemu-img if the VM is running; we need to use QMP in this case.

From what I see this case is already handled inside storage_support_snapshop()?
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
>> From what I see this case is already handled inside storage_support_snapshop()?

The code does:

    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);

    return if !$running;

    vm_mon_cmd($vmid, "snapshot-drive", device => $deviceid, name => $snap);

and storage_support_snapshop:

    ...
    if ($volid =~ m/\.(qcow2|qed)$/) {
        $ret = 1;
    }

So, with this code, we always call qemu-img snapshot for qcow2, which is wrong if the vm is running (for rbd, zfs, ... no problem).

I think that's why we passed the running flag to pve-storage.

----- Original Message -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, Wolfgang Link w.l...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, April 30, 2015 12:28:36
Subject: Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

> +    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);
>
> This seems to be wrong: we can't do a qcow2 snapshot with qemu-img if the VM is running; we need to use QMP in this case.

From what I see this case is already handled inside storage_support_snapshop()?
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
> Couldn't we reuse volume_has_feature from storage plugins?

We can, but this is misleading. It is not a storage feature, it is a feature of qemu.
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
> So, with this code, we always call qemu-img snapshot for qcow2, which is wrong if the vm is running (for rbd, zfs, ... no problem).
>
> I think that's why we passed the running flag to pve-storage.

Ah, yes - the logic is wrong, but fixable.
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
>> We need to use qemu snapshot-drive if the VM is running?

For qcow2 files, yes. Found some reference here:

https://www.suse.com/documentation/sles11/book_kvm/data/cha_qemu_guest_inst_qemu-img.html

"WARNING: Do not create or delete virtual machine snapshots with the qemu-img snapshot command while the virtual machine is running. Otherwise, you can damage the disk image with the state of the virtual machine saved."

----- Original Message -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, April 30, 2015 13:11:35
Subject: Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

> So, with this code, we always call qemu-img snapshot for qcow2, which is wrong if the vm is running (for rbd, zfs, ... no problem).
>
> I think that's why we passed the running flag to pve-storage.

Ah, yes - the logic is wrong, but fixable. Or is it the other way around? We need to use qemu snapshot-drive if the VM is running? confused :-/
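To summarize the thread's conclusion as code: when the VM is running and the volume is one qemu can snapshot (qcow2/qed, or one of the $snap_storage types), the snapshot must go through the qemu monitor (snapshot-drive) and never through qemu-img; when the VM is stopped, the storage layer handles it. A minimal, hedged sketch of that decision follows - the function name and the simplified volid parsing are assumptions, not the fix that was eventually committed:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# storage types whose snapshots can be taken while the VM runs
# (taken from the $snap_storage hash in the patch)
my $snap_storage = { zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1 };

# Decide which mechanism a snapshot must use. The split below is a
# simplified stand-in for PVE::Storage::parse_volume_id().
sub snapshot_mechanism {
    my ($storecfg, $volid, $running) = @_;

    my ($storeid) = split /:/, $volid, 2;
    my $qemu_can = $snap_storage->{$storecfg->{ids}->{$storeid}->{type}}
        || $volid =~ m/\.(qcow2|qed)$/;

    # A running VM with a qemu-snapshottable volume must use QMP
    # (snapshot-drive); qemu-img on a live qcow2 risks image corruption.
    return ($running && $qemu_can) ? 'qmp-snapshot-drive' : 'storage-layer';
}
```

Note that a stopped VM always falls through to the storage layer, which either performs the snapshot (qemu-img, rbd snap, zfs snapshot, ...) or dies for storages that cannot snapshot at all (LVM, iSCSI).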
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
Also,

> +    PVE::Storage::volume_snapshot($storecfg, $volid, $snap) if storage_support_snapshop($volid, $storecfg);

This seems to be wrong: we can't do a qcow2 snapshot with qemu-img if the VM is running; we need to use QMP in this case.

----- Original Message -----
From: Wolfgang Link w.l...@proxmox.com
To: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, April 30, 2015 09:47:03
Subject: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

> It is better to check if a VM is running in QemuServer than in Storage. For the Storage there is no difference if it is running or not.
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
>> So, with this code, we always call qemu-img snapshot for qcow2, which is wrong if the vm is running (for rbd, zfs, ... no problem).
>>
>> I think that's why we passed the running flag to pve-storage.

Ah, yes - the logic is wrong, but fixable. Or is it the other way around? We need to use qemu snapshot-drive if the VM is running? confused :-/
[pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
    dpkg -i lxc-pve-dev_1.1.1-1_amd64.deb
    Setting up lxc-pve (1.1.1-1) ...
    usermod: invalid option -- 'v'

It's hanging in lxc-pve.postinst:

    # create subuid/subgui map for root
    # (to run unprivileged containers as root)
    usermod -v 10-165535 -w 10-165535 root

    #usermod -v 10-165535 -w 10-165535 root
    usermod: invalid option -- 'v'

Any idea?
Re: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer
Hi,

does it make sense to define

    my $snap_storage = {zfspool => 1, rbd => 1, zfs => 1, sheepdog => 1};
    ...
    sub storage_support_snapshot()

in qemuserver? Couldn't we reuse volume_has_feature from the storage plugins?

----- Original Message -----
From: Wolfgang Link w.l...@proxmox.com
To: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, April 30, 2015 09:47:03
Subject: [pve-devel] [PATCH] remove running from Storage and check it in QemuServer

> It is better to check if a VM is running in QemuServer than in Storage. For the Storage there is no difference if it is running or not.
Re: [pve-devel] New installation with pvetest
> I am currently testing with the latest release.

You talk about pvetest for wheezy?
Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
> (about upgrade, we need also to install apt-get install systemd-sysv to get systemd working after reboot)

Ah, OK.
[pve-devel] New installation with pvetest
hi,

I am currently testing with the latest release. 2 points:

- At the end of the update with pvetest, it proposes to replace pvetest with pve-enterprise.
- A problem of installation dependencies for ceph hammer:

    The following packages have unmet dependencies:
     ceph : Depends: libboost-system1.49.0 (>= 1.49.0-1) but it is not installable
            Depends: libboost-thread1.49.0 (>= 1.49.0-1) but it is not installable
     ceph-common : Depends: librbd1 (= 0.94.1-1~bpo70+1) but 0.80.7-2 is to be installed
                   Depends: libboost-thread1.49.0 (>= 1.49.0-1) but it is not installable
                   Depends: libudev0 (>= 146) but it is not installable
                   Breaks: librbd1 (<< 0.92-1238) but 0.80.7-2 is to be installed
    E: Unable to correct problems, you have held broken packages.

One question: to make a new HA-PVE, is there a small howto?

Thanks,
Moula de la Kabylie.
Re: [pve-devel] New installation with pvetest
pvetest for jessie
Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
> do you plan to update jessie pvetest repo soon ?
> I would like to test upgrade from wheezy with new packages version and lxc

Yes, I can update if you want.
Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
>>> do you plan to update jessie pvetest repo soon ?
>>> I would like to test upgrade from wheezy with new packages version and lxc
>
> Yes, I can update if you want.

OK, the repository is up to date now - please test.
Re: [pve-devel] New installation with pvetest
> On April 30, 2015 at 8:09 PM Moula BADJI moul...@hotmail.com wrote:
>
> - A problem of installation dependencies for ceph hammer:

There are still no ceph packages for jessie, sorry.

> One question: to make a new HA-PVE, a small howto?

Sorry, there is currently no docu.
Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
>> Old system? This package is targeted for debian jessie.

Yes - it was the upgraded wheezy; it seems that passwd was not upgraded correctly.

(About the upgrade: we also need to install systemd-sysv (apt-get install systemd-sysv) to get systemd working after reboot.)

----- Original Message -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, April 30, 2015 17:14:14
Subject: Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst

> #usermod -v 10-165535 -w 10-165535 root
> usermod: invalid option -- 'v'

Old system? This package is targeted for debian jessie.
Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
Seems that passwd was not updated to jessie. Works fine now.

----- Original Message -----
From: aderumier aderum...@odiso.com
To: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, April 30, 2015 15:01:28
Subject: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst

> dpkg -i lxc-pve-dev_1.1.1-1_amd64.deb
> Setting up lxc-pve (1.1.1-1) ...
> usermod: invalid option -- 'v'
>
> It's hanging in lxc-pve.postinst:
>
>     # create subuid/subgui map for root
>     # (to run unprivileged containers as root)
>     usermod -v 10-165535 -w 10-165535 root
>
> Any idea?
Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
> #usermod -v 10-165535 -w 10-165535 root
> usermod: invalid option -- 'v'

Old system? This package is targeted for debian jessie.
Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst
BTW, do you plan to update the jessie pvetest repo soon? I would like to test upgrade from wheezy with new packages version and lxc.

----- Original Message -----
From: aderumier aderum...@odiso.com
To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, April 30, 2015 17:17:26
Subject: Re: [pve-devel] trying to install lxc-pve, usermod error in lxc-pve.postinst

>> Old system? This package is targeted for debian jessie.
>
> Yes, (it was the upgraded wheezy, seem that passwd was not upgraded correctly)
>
> (about upgrade, we need also to install apt-get install systemd-sysv to get systemd working after reboot)