[pve-devel] applied: [PATCH installer] prompt user if a vgrename is OK for existing 'pve'/'pmg' VGs

2019-07-09 Thread Thomas Lamprecht
If one has a 'pve' VG on a disk not selected as install target
(e.g., on re-installation to a different disk, or when putting a used
disk from a server where PVE was installed into this one) the vgcreate
call errors out, as VG names must be unique at creation time.

Cope with that by asking the user whether a rename to a
'<vgname>-OLD-<uid>' name is OK in this case. That ensures that
no data is lost and that we can safely continue with the
installation. The admin can later wipe the renamed VG if it was
really decommissioned, or save data from it (or actually use it).

This can cope (tested) with:
* a single 'pve' VG on another device
* a single 'pve' VG spanning multiple devices
* multiple 'pve' VGs spanning different sets of devices

This is achieved by using the VG UUID for rename, and by recording
all PVs with said UUID as index.
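
For illustration, the underlying LVM plumbing roughly boils down to the
following sketch (device name and the UUIDs are made up):

  # list the PVs of any existing VG named 'pve', together with that VG's UUID
  pvs --noheadings -o pv_name,vg_uuid -S vg_name='pve'
  #   /dev/sdb3  fZrXfW-0q1x-7hYc-2dQn-Vb3k-9LmP-aB4cDe   (illustrative output)

  # rename by UUID, so multiple equally named VGs can be told apart
  vgrename fZrXfW-0q1x-7hYc-2dQn-Vb3k-9LmP-aB4cDe pve-OLD-1A2B3C4D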

Note: while this commit message talks mostly about the 'pve' VG, the
patch itself is agnostic of the specific name and works for 'pmg' and
possibly other (future) VG names too.

Signed-off-by: Thomas Lamprecht 
---
 proxinstall | 67 +
 1 file changed, 67 insertions(+)

diff --git a/proxinstall b/proxinstall
index b256c6e..aff6c4c 100755
--- a/proxinstall
+++ b/proxinstall
@@ -972,11 +972,78 @@ sub partition_bootable_disk {
 return ($os_size, $osdev, $efibootdev);
 }
 
+sub get_pv_list_from_vgname {
+my ($vgname) = @_;
+
+my $res;
+
+my $parser = sub {
+   my $line = shift;
+   $line =~ s/^\s+//;
+   $line =~ s/\s+$//;
+   return if !$line;
+   my ($pv, $vg_uuid) = split(/\s+/, $line);
+
+   if (!defined($res->{$vg_uuid}->{pvs})) {
+   $res->{$vg_uuid}->{pvs} = "$pv";
+   } else {
+   $res->{$vg_uuid}->{pvs} .= ", $pv";
+   }
+};
+run_command("pvs --noheadings -o pv_name,vg_uuid -S vg_name='$vgname'", 
$parser, undef, 1);
+
+return $res;
+}
+
+sub ask_existing_vg_rename_or_abort {
+my ($vgname) = @_;
+
+# this normally only happens if one put a disk with a PVE installation in
+# this server and that disk is not the installation target.
+my $duplicate_vgs = get_pv_list_from_vgname($vgname);
+return if !$duplicate_vgs;
+
+my $message = "Detected existing '$vgname' Volume Group(s)! Do you want 
to:\n";
+
+for my $vg_uuid (keys %$duplicate_vgs) {
+   my $vg = $duplicate_vgs->{$vg_uuid};
+
+   # no high randomness properties, but this is only for the cases where
+   # we either have multiple "$vgname" vgs from multiple old PVE disks, or
+   # we have a disk with both a "$vgname" and "$vgname-old"...
+   my $short_uid = sprintf "%08X", rand(0xffffffff);
+   $vg->{new_vgname} = "$vgname-OLD-$short_uid";
+
+   $message .= "rename VG backed by PV '$vg->{pvs}' to 
'$vg->{new_vgname}'\n";
+}
+$message .= "or cancel the installation?";
+
+my $dialog = Gtk3::MessageDialog->new($window, 'modal', 'question', 'ok-cancel', $message);
+my $response = $dialog->run();
+$dialog->destroy();
+
+if ($response eq 'ok') {
+   for my $vg_uuid (keys %$duplicate_vgs) {
+   my $vg = $duplicate_vgs->{$vg_uuid};
+   my $new_vgname = $vg->{new_vgname};
+
+   syscmd("vgrename $vg_uuid $new_vgname") == 0 ||
+   die "could not rename VG from '$vg->{pvs}' ($vg_uuid) to 
'$new_vgname'!\n";
+   }
+} else {
+   set_next("_Reboot", sub { exit (0); } );
+   display_html("fail.htm");
+   die "Cancled installation by user, due to already existing volume group 
'$vgname'\n";
+}
+}
+
 sub create_lvm_volumes {
 my ($lvmdev, $os_size, $swap_size) = @_;
 
 my $vgname = $setup->{product};
 
+ask_existing_vg_rename_or_abort($vgname);
+
 my $rootdev = "/dev/$vgname/root";
 my $datadev = "/dev/$vgname/data";
 my $swapfile;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH cluster] Workaround for broken corosync-qdevice SysV config

2019-07-09 Thread Thomas Lamprecht
On 7/9/19 5:31 PM, Stefan Reiter wrote:
> 
> The author of corosync-qdevice is aware of the issue:
> https://bugs.launchpad.net/ubuntu/+source/corosync-qdevice/+bug/1809682

FYI, those are not the authors of corosync-qdevice, those are some Ubuntu
maintainers, AFAICT from the ubuntu-ha team. The qdevice main authors are
Jan Friesse and Christine Caulfield, but they do not provide native Debian
packaging support.

As we use the corosync-qdevice from Debian in PVE 6, which is based on Buster,
this is not the correct upstream, although sometimes the Ubuntu and Debian
maintainers are the same (or at least overlap), or share a lot with upstream.

It may be worth checking how the Debian and Ubuntu HA teams are woven together.
If it seems that they are not connected, it may still be worth reporting a
(sensible) bug to the Debian BTS, with a link to the Ubuntu bug report, to
ensure the Debian HA maintainers are informed and pointed to this specific
issue; in the worst case they can tell you that they already work on it or
know about it.
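
A sketch of how such a report could be filed from a Debian/PVE host, assuming
the standard reportbug tool is installed:

  reportbug corosync-qdevice
  # reference https://bugs.launchpad.net/ubuntu/+source/corosync-qdevice/+bug/1809682 in the report body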

But it seems that we may need to package this ourselves anyway; no big
hassle, but it would still have been nice to avoid.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied-series: [PATCH installer v2 0/2] create unique identifiers on install

2019-07-09 Thread Thomas Lamprecht
On 7/9/19 6:09 PM, Stoiko Ivanov wrote:
> while looking into #1603 I misread and focussed on /etc/hostid, instead of
> /etc/machine-id. Upon closer inspection the problem is similar for both of 
> them:
> Our installer shipped one version of the file for both ids, while it should
> create them uniquely for the system.
> 
> Additionally, the /etc/hostid was not shipped in the installed system, which
> could lead to unimportable pools if it gets generated later on.
> 
> v1->v2
> * Thomas suggested off-list to take a look at `systemd-id128 new` as a way
>   to create a truly new random id (not depending on the magic with kvm UUIDs
>   and dbus)
> 
> Tested by installing twice successfully and comparing ids
> 
> Stoiko Ivanov (2):
>   fix #1603: create a new and unique machine-id
>   copy /etc/hostid from installer root to target
> 
>  proxinstall | 5 +
>  1 file changed, 5 insertions(+)
> 

applied series, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] spice new streaming option

2019-07-09 Thread Alexandre DERUMIER
Hi,

I have opened a bugzilla entry:
https://bugzilla.proxmox.com/show_bug.cgi?id=2272

It seems that since spice 0.14.1 there is a new option to stream video from the guest directly to the spice client.

This needs new qemu devices:

-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel1,id=channel1,name=org.spice-space.stream.0 \
-chardev spiceport,name=org.spice-space.stream.0,id=charchannel1


I haven't tested it yet, but maybe it could be added before the Proxmox 6 final release?
(add it for qemu machine >= 4)
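
Until someone gets around to testing it, a quick way to check which spice
server version a host actually ships (a sketch; package name as on Debian
Buster / PVE 6):

  dpkg -s libspice-server1 | grep '^Version'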


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH installer v2 1/2] fix #1603: create a new and unique machine-id

2019-07-09 Thread Stoiko Ivanov
See machine-id(5). The machine-id is used by systemd as a partial replacement
for the hostid (gethostid(3)) and should be unique.

By generating a new one with `systemd-id128 new` (see machine-id(5),
sd-id128(3)) after the installation the newly installed system gets a unique
one.
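
For reference, a quick illustration of the command used (the id below is made
up; the real output is a freshly generated random 128-bit id, printed as 32
lowercase hex characters):

  $ systemd-id128 new
  4d1f3bce8a0c4b6e9a2d7c5f013e8a57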

Signed-off-by: Stoiko Ivanov 
---
 proxinstall | 4 
 1 file changed, 4 insertions(+)

diff --git a/proxinstall b/proxinstall
index 380abdf..97af4b9 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1510,6 +1510,10 @@ sub extract_data {
diversion_add($targetdir, "/usr/sbin/update-grub", "/bin/true");
diversion_add($targetdir, "/usr/sbin/update-initramfs", "/bin/true");
 
+   my $machine_id = run_command("systemd-id128 new");
+   die "unable to create a new machine-id\n" if ! $machine_id;
+   write_config($machine_id, "$targetdir/etc/machine-id");
+
syscmd("touch  $targetdir/proxmox_install_mode");
 
my $grub_install_devices_txt = '';
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH installer v2 2/2] copy /etc/hostid from installer root to target

2019-07-09 Thread Stoiko Ivanov
/etc/hostid is used by ZFS (spl.ko) to determine which host last imported a
pool. Creating and importing a pool with one hostid during install and booting
with a different one (or none) leads to the system refusing to import the pool;
see spl-module-parameters(5) and zpool(8).

By copying the /etc/hostid from the installer into the target system we ensure
that it stays consistent.
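
A quick way to sanity-check this after installation (illustrative only):

  hostid                   # prints the host id derived from /etc/hostid
  od -An -tx1 /etc/hostid  # the raw 4 bytes copied over from the installer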

Signed-off-by: Stoiko Ivanov 
---
 proxinstall | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/proxinstall b/proxinstall
index 97af4b9..b256c6e 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1514,6 +1514,9 @@ sub extract_data {
die "unable to create a new machine-id\n" if ! $machine_id;
write_config($machine_id, "$targetdir/etc/machine-id");
 
+   syscmd("cp /etc/hostid $targetdir/etc/") == 0 ||
+   die "unable to copy hostid\n";
+
syscmd("touch  $targetdir/proxmox_install_mode");
 
my $grub_install_devices_txt = '';
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH installer v2 0/2] create unique identifiers on install

2019-07-09 Thread Stoiko Ivanov
while looking into #1603 I misread and focussed on /etc/hostid, instead of
/etc/machine-id. Upon closer inspection the problem is similar for both of them:
Our installer shipped one version of the file for both ids, while it should
create them uniquely for the system.

Additionally, the /etc/hostid was not shipped in the installed system, which
could lead to unimportable pools if it gets generated later on.

v1->v2
* Thomas suggested off-list to take a look at `systemd-id128 new` as a way
  to create a truly new random id (not depending on the magic with kvm UUIDs and
  dbus)

Tested by installing twice successfully and comparing ids

Stoiko Ivanov (2):
  fix #1603: create a new and unique machine-id
  copy /etc/hostid from installer root to target

 proxinstall | 5 +
 1 file changed, 5 insertions(+)


-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH cluster] Workaround for broken corosync-qdevice SysV config

2019-07-09 Thread Stefan Reiter
Since we only use systemd, we can simply remove this file. Without
removing it, the `systemctl enable` command fails, complaining about unset
run-levels.

Signed-off-by: Stefan Reiter 
---

The author of corosync-qdevice is aware of the issue:
https://bugs.launchpad.net/ubuntu/+source/corosync-qdevice/+bug/1809682

Until it is merged upstream, this patch allows creation and removal of qdevices
without issues.
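
For context, a rough sketch of the failure and the workaround on an affected
node (error message paraphrased):

  systemctl enable corosync-qdevice   # fails, update-rc.d complains about missing runlevels in the SysV script
  rm -f /etc/init.d/corosync-qdevice  # what this patch does via SSH on each member
  systemctl enable corosync-qdevice   # succeeds, only the native unit file is left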

 data/PVE/CLI/pvecm.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/data/PVE/CLI/pvecm.pm b/data/PVE/CLI/pvecm.pm
index 823130a..de6b21a 100755
--- a/data/PVE/CLI/pvecm.pm
+++ b/data/PVE/CLI/pvecm.pm
@@ -243,6 +243,11 @@ __PACKAGE__->register_method ({
my $outsub = sub { print "\nnode '$node': " . shift };
print "\nINFO: start and enable corosync qdevice daemon on node 
'$node'...\n";
run_command([@$ssh_cmd, $ip, 'systemctl', 'start', 
'corosync-qdevice'], outfunc => \&$outsub);
+
+   # corosync-qdevice package ships with broken SysV file
+   # FIXME: Remove once fix is upstream
+   run_command([@$ssh_cmd, $ip, 'rm', '-f', '/etc/init.d/corosync-qdevice']);
+
run_command([@$ssh_cmd, $ip, 'systemctl', 'enable', 'corosync-qdevice'], outfunc => \&$outsub);
});
 
@@ -300,6 +305,11 @@ __PACKAGE__->register_method ({
$foreach_member->(sub {
my (undef, $ip) = @_;
run_command([@$ssh_cmd, $ip, 'systemctl', 'stop', 'corosync-qdevice']);
+
+   # corosync-qdevice package ships with broken SysV file
+   # FIXME: Remove once fix is upstream
+   run_command([@$ssh_cmd, $ip, 'rm', '-f', '/etc/init.d/corosync-qdevice']);
+
run_command([@$ssh_cmd, $ip, 'systemctl', 'disable', 'corosync-qdevice']);
});
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 cluster] fix #2270: allow custom lxc options to be restored as root

2019-07-09 Thread Stefan Reiter
Seems to be a regression introduced with
f360d7f16b094fa258cf82d2557d06f3284435e4 (related to #2028).
$conf->{'lxc'} would always be defined, hence we never replaced it with
the restored options.

Co-developed-by: Oguz Bektas 
Signed-off-by: Stefan Reiter 
---

Never mind v1, Perl arrays and hashes are confusing. This time it works with
multiple custom options as well ;)
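
For anyone who wants to reproduce it, a sketch of a manual check (VMID,
archive name and storage are made up):

  # restore as root@pam, then verify that the custom lxc.* keys survived
  pct restore 123 /var/lib/vz/dump/vzdump-lxc-123-2019_07_09-12_00_00.tar.lzo --storage local-lvm
  grep '^lxc\.' /etc/pve/lxc/123.conf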

 src/PVE/LXC/Create.pm | 18 --
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 029c940..ee83052 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -175,16 +175,22 @@ sub restore_configuration {
# we know if it was a template in the restore API call and check if the target
# storage supports creating a template there
next if $key =~ /^template$/;
-   if ($restricted && $key eq 'lxc') {
-   warn "skipping custom lxc options, restore manually as root:\n";
-   warn "\n";
+
+   if ($key eq 'lxc') {
my $lxc_list = $oldconf->{'lxc'};
-   foreach my $lxc_opt (@$lxc_list) {
-   warn "$lxc_opt->[0]: $lxc_opt->[1]\n"
+   if ($restricted) {
+   warn "skipping custom lxc options, restore manually as 
root:\n";
+   warn "\n";
+   foreach my $lxc_opt (@$lxc_list) {
+   warn "$lxc_opt->[0]: $lxc_opt->[1]\n"
+   }
+   warn "\n";
+   } else {
+   @{$conf->{$key}} = (@$lxc_list, @{$conf->{$key}});
}
-   warn "\n";
next;
}
+
if ($unique && $key =~ /^net\d+$/) {
my $net = PVE::LXC::Config->parse_lxc_network($oldconf->{$key});
my $dc = PVE::Cluster::cfs_read_file('datacenter.cfg');
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container] fix #2270: allow custom lxc options to be restored as root

2019-07-09 Thread Stefan Reiter
Seems to be a regression introduced with
f360d7f16b094fa258cf82d2557d06f3284435e4 (related to #2028).
$conf->{'lxc'} would always be defined, hence we never replaced it with
the restored options.

We now merge LXC options individually. We can't just overwrite, since
that would undo the fix mentioned above.

Signed-off-by: Stefan Reiter 
---
 src/PVE/LXC/Create.pm | 23 +--
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 029c940..3f893e5 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -175,16 +175,27 @@ sub restore_configuration {
# we know if it was a template in the restore API call and check if the target
# storage supports creating a template there
next if $key =~ /^template$/;
-   if ($restricted && $key eq 'lxc') {
-   warn "skipping custom lxc options, restore manually as root:\n";
-   warn "\n";
+
+   if ($key eq 'lxc') {
my $lxc_list = $oldconf->{'lxc'};
-   foreach my $lxc_opt (@$lxc_list) {
-   warn "$lxc_opt->[0]: $lxc_opt->[1]\n"
+   if ($restricted) {
+   warn "skipping custom lxc options, restore manually as 
root:\n";
+   warn "\n";
+   foreach my $lxc_opt (@$lxc_list) {
+   warn "$lxc_opt->[0]: $lxc_opt->[1]\n"
+   }
+   warn "\n";
+   } else {
+   # merge lxc options individually
+   $conf->{$key} = [] if !defined($conf->{$key});
+   foreach my $lxc_opt (@$lxc_list) {
+   push(@{$conf->{$key}}, $lxc_opt)
+   if !grep {$_->[0] eq $lxc_opt->[0]} @{$conf->{$key}};
+   }
}
-   warn "\n";
next;
}
+
if ($unique && $key =~ /^net\d+$/) {
my $net = PVE::LXC::Config->parse_lxc_network($oldconf->{$key});
my $dc = PVE::Cluster::cfs_read_file('datacenter.cfg');
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH installer 2/2] copy /etc/hostid from installer root to target

2019-07-09 Thread Stoiko Ivanov
/etc/hostid is used by ZFS (spl.ko) to determine which host last imported a
pool. Creating and importing a pool with one hostid during install and booting
with a different one (or none) leads to the system refusing to import the pool;
see spl-module-parameters(5) and zpool(8).

By copying the /etc/hostid from the installer into the target system we ensure
that it stays consistent.

Signed-off-by: Stoiko Ivanov 
---
 proxinstall | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/proxinstall b/proxinstall
index 19d0896..8ac00d0 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1511,6 +1511,8 @@ sub extract_data {
diversion_add($targetdir, "/usr/sbin/update-initramfs", "/bin/true");
syscmd("systemd-machine-id-setup --root=$targetdir") == 0 ||
die "unable to create a new machine-id\n";
+   syscmd("cp /etc/hostid $targetdir/etc/") == 0 ||
+   die "unable to copy hostid\n";
 
 
syscmd("touch  $targetdir/proxmox_install_mode");
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH installer 1/2] fix #1603: create a new and unique machine-id

2019-07-09 Thread Stoiko Ivanov
See machine-id(5). The machine-id is used by systemd as a partial replacement
for the hostid (gethostid(3)) and should be unique. By not shipping one in the
ISO and generating one after the installation, the installer ensures its
uniqueness.
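
A minimal illustration of what this boils down to inside the installer (target
path shortened, id made up):

  systemd-machine-id-setup --root=/target   # generates and writes /target/etc/machine-id
  cat /target/etc/machine-id                # e.g. 0b7c4d2e91a648f3b5d6a8c0e2f4196d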

Signed-off-by: Stoiko Ivanov 
---
 proxinstall | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/proxinstall b/proxinstall
index 380abdf..19d0896 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1509,6 +1509,9 @@ sub extract_data {
diversion_add($targetdir, "/sbin/start-stop-daemon", 
"/sbin/fake-start-stop-daemon");
diversion_add($targetdir, "/usr/sbin/update-grub", "/bin/true");
diversion_add($targetdir, "/usr/sbin/update-initramfs", "/bin/true");
+   syscmd("systemd-machine-id-setup --root=$targetdir") == 0 ||
+   die "unable to create a new machine-id\n";
+
 
syscmd("touch  $targetdir/proxmox_install_mode");
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH installer 0/2] create unique identifiers on install

2019-07-09 Thread Stoiko Ivanov
while looking into #1603 I misread and focussed on /etc/hostid, instead of
/etc/machine-id. Upon closer inspection the problem is similar for both of them:
Our installer shipped one version of the file for both ids, while it should
create them uniquely for the system.

Additionally, the /etc/hostid was not shipped in the installed system, which
could lead to unimportable pools if it gets generated later on.

Stoiko Ivanov (2):
  fix #1603: create a new and unique machine-id
  copy /etc/hostid from installer root to target

 proxinstall | 5 +
 1 file changed, 5 insertions(+)

-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH http-server] decode_urlencoded: cope with undefined values

2019-07-09 Thread Thomas Lamprecht
Avoids syslog/journal warning like:
>  Use of uninitialized value $v in substitution (s///) at
>  /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 648.

If one passes a "value-less" GET argument to a request, e.g.,
GET /?debug
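
The undefined value comes from the split: for such an argument there simply is
no value part. A standalone illustration (not the server code itself):

  perl -we 'my ($k, $v) = split(/=/, "debug"); print defined($v) ? "defined\n" : "undef\n"'
  # prints "undef"; any later s/// on $v is what triggers the warning above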

Besides the fact that this allows us to actually use such arguments, it is
also a general improvement against a slight "syslog DoS attack": anybody can
pass such parameters to the '/' page, and all Proxmox daemons providing an
API/UI through libpve-http-server-perl allow such requests unauthenticated
(which itself is OK, as otherwise one could not show the login window at
all). As each such request produces two log lines in the syslog/journal,
it's far from ideal.

A simple reproducer of the possible outcome can be seen with the
following shell script using curl:

> PVEURL='127.0.0.1'
> ARGS='?a'; # send multiple args at once to amplify the per-connection cost
> for c in {a..z}; do for i in {0..9}; do ARGS="$ARGS&$c$i"; done; done
> while true; do curl --insecure --silent --output /dev/null "https://$PVEURL:8006$ARGS"; done

Not really bad, but not nice either: logging is not exactly cheap, so this has
some resource usage cost, and noise in the syslog is never nice.

Signed-off-by: Thomas Lamprecht 
---

applied directly to master and stable-5

 PVE/APIServer/AnyEvent.pm | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/PVE/APIServer/AnyEvent.pm b/PVE/APIServer/AnyEvent.pm
index c6b74c0..2e8ca47 100644
--- a/PVE/APIServer/AnyEvent.pm
+++ b/PVE/APIServer/AnyEvent.pm
@@ -645,16 +645,19 @@ sub decode_urlencoded {
my ($k, $v) = split(/=/, $kv);
$k =~s/\+/ /g;
$k =~ s/%([0-9a-fA-F][0-9a-fA-F])/chr(hex($1))/eg;
-   $v =~s/\+/ /g;
-   $v =~ s/%([0-9a-fA-F][0-9a-fA-F])/chr(hex($1))/eg;
 
-   $v = Encode::decode('utf8', $v);
+   if (defined($v)) {
+   $v =~s/\+/ /g;
+   $v =~ s/%([0-9a-fA-F][0-9a-fA-F])/chr(hex($1))/eg;
 
-   if (defined(my $old = $res->{$k})) {
-   $res->{$k} = "$old\0$v";
-   } else {
-   $res->{$k} = $v;
+   $v = Encode::decode('utf8', $v);
+
+   if (defined(my $old = $res->{$k})) {
+   $v = "$old\0$v";
+   }
}
+
+   $res->{$k} = $v;
 }
 return $res;
 }
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 docs 1/2] Use correct xref: syntax and add pvecm prefix

2019-07-09 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

No changes for v2. As mentioned, I did not find any references to the changed
names.

 pvecm.adoc | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 05756ca..1c0b9e7 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -150,7 +150,7 @@ Login via `ssh` to the node you want to add.
 
 
 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
-An IP address is recommended (see <>).
+An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address Types]).
 
 CAUTION: A new node cannot hold any VMs, because you would get
 conflicts about identical VM IDs. Also, all existing configuration in
@@ -212,7 +212,7 @@ Membership information
  4  1 hp4
 
 
-[[adding-nodes-with-separated-cluster-network]]
+[[pvecm_adding_nodes_with_separated_cluster_network]]
 Adding Nodes With Separated Cluster Network
 ~~~
 
@@ -428,7 +428,7 @@ part is done by corosync, an implementation of a high performance low overhead
 high availability development toolkit. It serves our decentralized
 configuration file system (`pmxcfs`).
 
-[[cluster-network-requirements]]
+[[pvecm_cluster_network_requirements]]
 Network Requirements
 
 This needs a reliable network with latencies under 2 milliseconds (LAN
@@ -486,7 +486,7 @@ Setting Up A New Network
 
 First you have to setup a new network interface. It should be on a physical
 separate network. Ensure that your network fulfills the
-<>.
+xref:pvecm_cluster_network_requirements[cluster network requirements].
 
 Separate On Cluster Creation
 
@@ -510,9 +510,9 @@ systemctl status corosync
 
 
 Afterwards, proceed as descripted in the section to
-<>.
+xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
 
-[[separate-cluster-net-after-creation]]
+[[pvecm_separate_cluster_net_after_creation]]
 Separate After Cluster Creation
 ^^^
 
@@ -521,7 +521,7 @@ its communication to another network, without rebuilding the whole cluster.
 This change may lead to short durations of quorum loss in the cluster, as nodes
 have to restart corosync and come up one after the other on the new network.
 
-Check how to <> first.
+Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
 The open it and you should see a file similar to:
 
 
@@ -579,7 +579,7 @@ you do not see them already. Those *must* match the node name.
 Then replace the address from the 'ring0_addr' properties with the new
 addresses.  You may use plain IP addresses or also hostnames here. If you use
 hostnames ensure that they are resolvable from all nodes. (see also
-<>)
+xref:pvecm_corosync_addresses[Ring Address Types])
 
 In my example I want to switch my cluster communication to the 10.10.10.1/25
 network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
@@ -640,7 +640,7 @@ totem {
 
 
 Now after a final check whether all changed information is correct we save it
-and see again the <> section to
+and see again the xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to
 learn how to bring it in effect.
 
 As our change cannot be enforced live from corosync we have to do an restart.
@@ -661,7 +661,7 @@ systemctl status corosync
 If corosync runs again correct restart corosync also on all other nodes.
 They will then join the cluster membership one by one on the new network.
 
-[[corosync-addresses]]
+[[pvecm_corosync_addresses]]
 Corosync addresses
 ~~
 
@@ -708,7 +708,7 @@ RRP On Cluster Creation
 The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
 'ringX_addr' and 'rrp_mode', can be used for RRP configuration.
 
-NOTE: See the <> if you do not know what each parameter means.
+NOTE: See the xref:pvecm_corosync_conf_glossary[glossary] if you do not know what each parameter means.
 
 So if you have two networks, one on the 10.10.10.1/24 and the other on the
 10.10.20.1/24 subnet you would execute:
@@ -723,7 +723,7 @@ RRP On Existing Clusters
 
 
 You will take similar steps as described in
-<> to
+xref:pvecm_separate_cluster_net_after_creation[separating the cluster network] to
 enable RRP on an already running cluster. The single difference is, that you
 will add `ring1` and use it instead of `ring0`.
 
@@ -781,7 +781,7 @@ nodelist {
 
 
 Bring it in effect like described in the
-<> section.
+xref:pvecm_edit_corosync_conf[edit the corosync.conf file] section.
 
 This is a change which cannot take live in effect and needs at least a restart
 of corosync. Recommended is a restart of the whole cluster.
@@ -979,7 +979,7 @@ For node membership you should always use the `pvecm` tool provided by {pve}.
 You may have to edit the configuration file manually for other changes.
 Here are a 

[pve-devel] [PATCH v2 docs 2/2] Update pvecm documentation for corosync 3

2019-07-09 Thread Stefan Reiter
Parts about multicast and RRP have been removed entirely. Instead, a new
section 'Corosync Redundancy' has been added explaining the concept of
links and link priorities.

Signed-off-by: Stefan Reiter 
---

v1 -> v2:
 * Spelling mistakes
 * Rewording to improve clarity
 * Fixed redundancy explanation and example
 * Added note about multiple clusters in one network

Didn't want to add back the entire section for the last point, would have
basically been a heading with a single sentence below. I think the note is
enough.
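
As a practical complement to the new section, a sketch of how link state can be
inspected on a corosync 3 / kronosnet cluster (commands available on a PVE 6
node; output omitted):

  corosync-cfgtool -s   # shows the local node's links and their status
  pvecm status          # quorum and membership overview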

 pvecm.adoc | 428 ++---
 1 file changed, 209 insertions(+), 219 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 1c0b9e7..0019ec8 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -56,13 +56,8 @@ Grouping nodes into a cluster has the following advantages:
 Requirements
 
 
-* All nodes must be in the same network as `corosync` uses IP Multicast
- to communicate between nodes (also see
- http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
- ports 5404 and 5405 for cluster communication.
-+
-NOTE: Some switches do not support IP multicast by default and must be
-manually enabled first.
+* All nodes must be able to connect to each other via UDP ports 5404 and 5405
+ for corosync to work.
 
 * Date and time have to be synchronized.
 
@@ -84,6 +79,11 @@ NOTE: While it's possible for {pve} 4.4 and {pve} 5.0 this is not supported as
 production configuration and should only used temporarily during upgrading the
 whole cluster from one to another major version.
 
+NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
+cluster protocol (corosync) between {pve} 6.x and earlier versions changed
+fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
+upgrade procedure to {pve} 6.0.
+
 
 Preparing Nodes
 ---
@@ -96,10 +96,13 @@ Currently the cluster creation can either be done on the console (login via
 `ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
 Cluster__).
 
-While it's often common use to reference all other nodenames in `/etc/hosts`
-with their IP this is not strictly necessary for a cluster, which normally uses
-multicast, to work. It maybe useful as you then can connect from one node to
-the other with SSH through the easier to remember node name.
+While it's common to reference all nodenames and their IPs in `/etc/hosts` (or
+make their names resolvable through other means), this is not necessary for a
+cluster to work. It may be useful however, as you can then connect from one node
+to the other with SSH via the easier to remember node name (see also
+xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
+recommend to reference nodes by their IP addresses in the cluster configuration.
+
 
 [[pvecm_create_cluster]]
 Create the Cluster
@@ -113,10 +116,10 @@ node names.
  hp1# pvecm create CLUSTERNAME
 
 
-CAUTION: The cluster name is used to compute the default multicast address.
-Please use unique cluster names if you run more than one cluster inside your
-network. To avoid human confusion, it is also recommended to choose different
-names even if clusters do not share the cluster network.
+NOTE: It is possible to create multiple clusters in the same physical or logical
+network. Use unique cluster names if you do so. To avoid human confusion, it is
+also recommended to choose different names even if clusters do not share the
+cluster network.
 
 To check the state of your cluster use:
 
@@ -124,20 +127,6 @@ To check the state of your cluster use:
  hp1# pvecm status
 
 
-Multiple Clusters In Same Network
-~
-
-It is possible to create multiple clusters in the same physical or logical
-network. Each cluster must have a unique name, which is used to generate the
-cluster's multicast group address. As long as no duplicate cluster names are
-configured in one network segment, the different clusters won't interfere with
-each other.
-
-If multiple clusters operate in a single network it may be beneficial to setup
-an IGMP querier and enable IGMP Snooping in said network. This may reduce the
-load of the network significantly because multicast packets are only delivered
-to endpoints of the respective member nodes.
-
 
 [[pvecm_join_node_to_cluster]]
 Adding Nodes to the Cluster
@@ -150,7 +139,7 @@ Login via `ssh` to the node you want to add.
 
 
 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
-An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address Types]).
+An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
 
 CAUTION: A new node cannot hold any VMs, because you would get
 conflicts about identical VM IDs. Also, all existing configuration in
@@ -158,7 +147,7 @@ conflicts about identical VM IDs. Also, all existing configuration in
 workaround, use `vzdump` to backup and 

Re: [pve-devel] [PATCH docs 2/2] Update pvecm documentation for corosync 3

2019-07-09 Thread Aaron Lauterer

Added a few notes, mostly regarding style and readability.

On 7/8/19 6:26 PM, Stefan Reiter wrote:

Parts about multicast and RRP have been removed entirely. Instead, a new
section 'Corosync Redundancy' has been added explaining the concept of
links and link priorities.

Signed-off-by: Stefan Reiter 
---
  pvecm.adoc | 372 +
  1 file changed, 147 insertions(+), 225 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 1c0b9e7..1246111 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -56,13 +56,8 @@ Grouping nodes into a cluster has the following advantages:
  Requirements
  
  
-* All nodes must be in the same network as `corosync` uses IP Multicast

- to communicate between nodes (also see
- http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
- ports 5404 and 5405 for cluster communication.
-+
-NOTE: Some switches do not support IP multicast by default and must be
-manually enabled first.
+* All nodes must be able to contact each other via UDP ports 5404 and 5405 for
+ corosync to work.


Maybe "connect" instead of "contact"?
  
  * Date and time have to be synchronized.
  
@@ -84,6 +79,11 @@ NOTE: While it's possible for {pve} 4.4 and {pve} 5.0 this is not supported as

  production configuration and should only used temporarily during upgrading the
  whole cluster from one to another major version.
  
+NOTE: Mixing {pve} 6.x and earlier versions is not supported, because of the

+major corosync upgrade. While possible to run corosync 3 on {pve} 5.4, this
+configuration is not supported for production environments and should only be
+used while upgrading a cluster.
+


"NOTE: Running a cluster of {pve} 6.x with earlier versions is not 
possible. The cluster protocol (corosync) between {pve} 6.x and earlier 
versions changed fundamentally. The corosync 3 packages for {pve} 5.4 
are only intended for the upgrade procedure to {pve} 6.0."


  
  Preparing Nodes

  ---
@@ -96,10 +96,12 @@ Currently the cluster creation can either be done on the 
console (login via
  `ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
  Cluster__).
  
-While it's often common use to reference all other nodenames in `/etc/hosts`

-with their IP this is not strictly necessary for a cluster, which normally uses
-multicast, to work. It maybe useful as you then can connect from one node to
-the other with SSH through the easier to remember node name.
+While it's common to reference all nodenames and their IPs in `/etc/hosts` (or
+make their names resolveable through other means), this is not strictly
+necessary for a cluster to work. It may be useful however, as you can then
+connect from one node to the other with SSH via the easier to remember node
+name. (see also xref:pvecm_corosync_addresses[Link Address Types])
+


"node names" instead of one word. "resolvable" instead of "resolveable".

But maybe we should tell people to keep away from host names and rely on 
IP addresses? If someone still wants to use the hosts file they will, 
but let's not put ideas in the heads of less experienced users who do 
not know the possible pitfalls and the fragility of such an approach.


"We highly recommend to reference the nodes by their IP addresses in the 
cluster configuration. This will prevent (circular) dependencies on 
other means to resolve a host name to an IP address like DNS or manual 
entries in the `/etc/hosts` file."


Not sure if we should have the "(circular)" in there or not.


  
  [[pvecm_create_cluster]]

  Create the Cluster
@@ -113,31 +115,12 @@ node names.
   hp1# pvecm create CLUSTERNAME
  
  
-CAUTION: The cluster name is used to compute the default multicast address.

-Please use unique cluster names if you run more than one cluster inside your
-network. To avoid human confusion, it is also recommended to choose different
-names even if clusters do not share the cluster network.
-
  To check the state of your cluster use:
  
  

   hp1# pvecm status
  
  
-Multiple Clusters In Same Network

-~
-
-It is possible to create multiple clusters in the same physical or logical
-network. Each cluster must have a unique name, which is used to generate the
-cluster's multicast group address. As long as no duplicate cluster names are
-configured in one network segment, the different clusters won't interfere with
-each other.
-
-If multiple clusters operate in a single network it may be beneficial to setup
-an IGMP querier and enable IGMP Snooping in said network. This may reduce the
-load of the network significantly because multicast packets are only delivered
-to endpoints of the respective member nodes.
-
  
  [[pvecm_join_node_to_cluster]]

  Adding Nodes to the Cluster
@@ -150,7 +133,7 @@ Login via `ssh` to the node you want to add.
  
  
  For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.

-An IP address is recommended (see 

Re: [pve-devel] [PATCH docs 2/2] Update pvecm documentation for corosync 3

2019-07-09 Thread Stefan Reiter

Thanks for feedback!

Regarding patch 1/2: I grep'd through the sources and could not find any 
references to the heading names I changed. A quick look through the GUI 
also didn't reveal any obvious references.


Some of my own notes inline, I will send v2 today.

On 7/9/19 9:19 AM, Thomas Lamprecht wrote:

On 7/8/19 6:26 PM, Stefan Reiter wrote:

Parts about multicast and RRP have been removed entirely. Instead, a new
section 'Corosync Redundancy' has been added explaining the concept of
links and link priorities.



not bad at all, still some notes inline.


Signed-off-by: Stefan Reiter 
---
  pvecm.adoc | 372 +
  1 file changed, 147 insertions(+), 225 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 1c0b9e7..1246111 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -56,13 +56,8 @@ Grouping nodes into a cluster has the following advantages:
  Requirements
  
  
-* All nodes must be in the same network as `corosync` uses IP Multicast

- to communicate between nodes (also see
- http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
- ports 5404 and 5405 for cluster communication.
-+
-NOTE: Some switches do not support IP multicast by default and must be
-manually enabled first.
+* All nodes must be able to contact each other via UDP ports 5404 and 5405 for
+ corosync to work.
  
  * Date and time have to be synchronized.
  
@@ -84,6 +79,11 @@ NOTE: While it's possible for {pve} 4.4 and {pve} 5.0 this is not supported as

  production configuration and should only used temporarily during upgrading the
  whole cluster from one to another major version.
  
+NOTE: Mixing {pve} 6.x and earlier versions is not supported, because of the

+major corosync upgrade. While possible to run corosync 3 on {pve} 5.4, this
+configuration is not supported for production environments and should only be
+used while upgrading a cluster.
+
  
  Preparing Nodes

  ---
@@ -96,10 +96,12 @@ Currently the cluster creation can either be done on the 
console (login via
  `ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
  Cluster__).
  
-While it's often common use to reference all other nodenames in `/etc/hosts`

-with their IP this is not strictly necessary for a cluster, which normally uses
-multicast, to work. It maybe useful as you then can connect from one node to
-the other with SSH through the easier to remember node name.
+While it's common to reference all nodenames and their IPs in `/etc/hosts` (or
+make their names resolveable through other means), this is not strictly
+necessary for a cluster to work. It may be useful however, as you can then
+connect from one node to the other with SSH via the easier to remember node
+name. (see also xref:pvecm_corosync_addresses[Link Address Types])
+
  
  [[pvecm_create_cluster]]

  Create the Cluster
@@ -113,31 +115,12 @@ node names.
   hp1# pvecm create CLUSTERNAME
  
  
-CAUTION: The cluster name is used to compute the default multicast address.

-Please use unique cluster names if you run more than one cluster inside your
-network. To avoid human confusion, it is also recommended to choose different
-names even if clusters do not share the cluster network.


Maybe move this from a "CAUTION" to a "NOTE" and keep the hint that it still
makes sense to use unique cluster names, to avoid human confusion and as I have
a feeling that there are other assumption in corosync which depend on that.
Also, _if_ multicast gets integrated into knet we probably have a similar issue
again, so try to bring people in lane now already, even if not 100% required.



Makes sense. I just wanted to avoid mentioning multicast in the general 
instructions, to avoid people reading the docs for the first time being 
confused if they need it or not.



-
  To check the state of your cluster use:
  
  

   hp1# pvecm status
  
  
-Multiple Clusters In Same Network

-~
-
-It is possible to create multiple clusters in the same physical or logical
-network. Each cluster must have a unique name, which is used to generate the
-cluster's multicast group address. As long as no duplicate cluster names are
-configured in one network segment, the different clusters won't interfere with
-each other.
-
-If multiple clusters operate in a single network it may be beneficial to setup
-an IGMP querier and enable IGMP Snooping in said network. This may reduce the
-load of the network significantly because multicast packets are only delivered
-to endpoints of the respective member nodes.
-


It's still possible to create multiple clusters in the same network, so I'd keep
above and just adapt to non-multicast for now..

  
  [[pvecm_join_node_to_cluster]]

  Adding Nodes to the Cluster
@@ -150,7 +133,7 @@ Login via `ssh` to the node you want to add.
  
  
  For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.

-An IP address is recommended 

Re: [pve-devel] [PATCH docs 2/2] Update pvecm documentation for corosync 3

2019-07-09 Thread Thomas Lamprecht
On 7/8/19 6:26 PM, Stefan Reiter wrote:
> Parts about multicast and RRP have been removed entirely. Instead, a new
> section 'Corosync Redundancy' has been added explaining the concept of
> links and link priorities.
> 

not bad at all, still some notes inline.

> Signed-off-by: Stefan Reiter 
> ---
>  pvecm.adoc | 372 +
>  1 file changed, 147 insertions(+), 225 deletions(-)
> 
> diff --git a/pvecm.adoc b/pvecm.adoc
> index 1c0b9e7..1246111 100644
> --- a/pvecm.adoc
> +++ b/pvecm.adoc
> @@ -56,13 +56,8 @@ Grouping nodes into a cluster has the following advantages:
>  Requirements
>  
>  
> -* All nodes must be in the same network as `corosync` uses IP Multicast
> - to communicate between nodes (also see
> - http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
> - ports 5404 and 5405 for cluster communication.
> -+
> -NOTE: Some switches do not support IP multicast by default and must be
> -manually enabled first.
> +* All nodes must be able to contact each other via UDP ports 5404 and 5405 
> for
> + corosync to work.
>  
>  * Date and time have to be synchronized.
>  
> @@ -84,6 +79,11 @@ NOTE: While it's possible for {pve} 4.4 and {pve} 5.0 this 
> is not supported as
>  production configuration and should only used temporarily during upgrading 
> the
>  whole cluster from one to another major version.
>  
> +NOTE: Mixing {pve} 6.x and earlier versions is not supported, because of the
> +major corosync upgrade. While possible to run corosync 3 on {pve} 5.4, this
> +configuration is not supported for production environments and should only be
> +used while upgrading a cluster.
> +
>  
>  Preparing Nodes
>  ---
> @@ -96,10 +96,12 @@ Currently the cluster creation can either be done on the 
> console (login via
>  `ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
>  Cluster__).
>  
> -While it's often common use to reference all other nodenames in `/etc/hosts`
> -with their IP this is not strictly necessary for a cluster, which normally 
> uses
> -multicast, to work. It maybe useful as you then can connect from one node to
> -the other with SSH through the easier to remember node name.
> +While it's common to reference all nodenames and their IPs in `/etc/hosts` 
> (or
> +make their names resolveable through other means), this is not strictly
> +necessary for a cluster to work. It may be useful however, as you can then
> +connect from one node to the other with SSH via the easier to remember node
> +name. (see also xref:pvecm_corosync_addresses[Link Address Types])
> +
>  
>  [[pvecm_create_cluster]]
>  Create the Cluster
> @@ -113,31 +115,12 @@ node names.
>   hp1# pvecm create CLUSTERNAME
>  
>  
> -CAUTION: The cluster name is used to compute the default multicast address.
> -Please use unique cluster names if you run more than one cluster inside your
> -network. To avoid human confusion, it is also recommended to choose different
> -names even if clusters do not share the cluster network.

Maybe move this from a "CAUTION" to a "NOTE" and keep the hint that it still
makes sense to use unique cluster names, to avoid human confusion and as I have
a feeling that there are other assumption in corosync which depend on that.
Also, _if_ multicast gets integrated into knet we probably have a similar issue
again, so try to bring people in lane now already, even if not 100% required.

> -
>  To check the state of your cluster use:
>  
>  
>   hp1# pvecm status
>  
>  
> -Multiple Clusters In Same Network
> -~
> -
> -It is possible to create multiple clusters in the same physical or logical
> -network. Each cluster must have a unique name, which is used to generate the
> -cluster's multicast group address. As long as no duplicate cluster names are
> -configured in one network segment, the different clusters won't interfere 
> with
> -each other.
> -
> -If multiple clusters operate in a single network it may be beneficial to 
> setup
> -an IGMP querier and enable IGMP Snooping in said network. This may reduce the
> -load of the network significantly because multicast packets are only 
> delivered
> -to endpoints of the respective member nodes.
> -

It's still possible to create multiple clusters in the same network, so I'd keep
the above and just adapt it to non-multicast for now.

>  
>  [[pvecm_join_node_to_cluster]]
>  Adding Nodes to the Cluster
> @@ -150,7 +133,7 @@ Login via `ssh` to the node you want to add.
>  
>  
>  For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
> -An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address 
> Types]).
> +An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address 
> Types]).

Maybe somewhere a note that while the new things are named "Link" the config
still refers to "ringX_addr" for backward compatibility.

>  
>  CAUTION: A new node cannot hold any