[pve-devel] applied: [PATCH manager] gui: show 0 for max_relocate/restart correctly

2019-07-08 Thread Thomas Lamprecht
On 7/8/19 2:12 PM, Dominik Csapak wrote:
> 0 || '1' will always return '1'
> 
> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/ha/Resources.js | 10 --
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/www/manager6/ha/Resources.js b/www/manager6/ha/Resources.js
> index 0b142c8d..bd6c337c 100644
> --- a/www/manager6/ha/Resources.js
> +++ b/www/manager6/ha/Resources.js
> @@ -146,7 +146,10 @@ Ext.define('PVE.ha.ResourcesView', {
>   width: 100,
>   sortable: true,
>   renderer: function(v) {
> - return v || '1';
> + if (v === undefined) {
> + return '1';
> + }
> + return v;
>   },
>   dataIndex: 'max_restart'
>   },
> @@ -155,7 +158,10 @@ Ext.define('PVE.ha.ResourcesView', {
>   width: 100,
>   sortable: true,
>   renderer: function(v) {
> - return v || '1';
> + if (v === undefined) {
> + return '1';
> + }
> + return v;
>   },
>   dataIndex: 'max_relocate'
>   },
> 

applied to stable 5 and to master, but for the latter I just had
to follow up with some arrow function use ^^
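
For context, the follow-up presumably boils down to something like this
(a sketch of the arrow-function form, not the literal commit):

    // same semantics as the applied patch, just written as an arrow function
    let render = v => v === undefined ? '1' : v;

    console.log(render(undefined)); // '1' -> unset value falls back to the default
    console.log(render(0));         // 0   -> an explicit 0 is shown as 0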

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH storage] fix #2266: Diskmanage: get correct osd id

2019-07-08 Thread Thomas Lamprecht
On 7/8/19 3:48 PM, Dominik Csapak wrote:
> the osdid can have more than a single digit;
> also add more regression tests for this
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/Diskmanage.pm |  2 +-
>  test/disk_tests/usages/disklist   |  2 ++
>  test/disk_tests/usages/disklist_expected.json | 31 +++
>  test/disk_tests/usages/lvs|  6 ++--
>  test/disk_tests/usages/pvs|  2 ++
>  test/disk_tests/usages/sdk/device/vendor  |  1 +
>  test/disk_tests/usages/sdk/queue/rotational   |  1 +
>  test/disk_tests/usages/sdk/size   |  1 +
>  test/disk_tests/usages/sdk_udevadm| 12 +++
>  test/disk_tests/usages/sdl/device/vendor  |  1 +
>  test/disk_tests/usages/sdl/queue/rotational   |  1 +
>  test/disk_tests/usages/sdl/size   |  1 +
>  test/disk_tests/usages/sdl_udevadm| 12 +++
>  13 files changed, 70 insertions(+), 3 deletions(-)
>  create mode 100644 test/disk_tests/usages/sdk/device/vendor
>  create mode 100644 test/disk_tests/usages/sdk/queue/rotational
>  create mode 100644 test/disk_tests/usages/sdk/size
>  create mode 100644 test/disk_tests/usages/sdk_udevadm
>  create mode 100644 test/disk_tests/usages/sdl/device/vendor
>  create mode 100644 test/disk_tests/usages/sdl/queue/rotational
>  create mode 100644 test/disk_tests/usages/sdl/size
>  create mode 100644 test/disk_tests/usages/sdl_udevadm
> 

applied, thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH stable5 manager 0/6] 5to6 corosync improvements

2019-07-08 Thread Thomas Lamprecht
On 7/8/19 1:54 PM, Fabian Grünbichler wrote:
> patch #1 is new (inline of pve-cluster commits)
> patch #2 is adapted for #1
> 
> rest are clean cherry-picks
> 
> Fabian Grünbichler (6):
>   5to6: add Corosync resolve helper
>   5to6: attempt to resolve corosync rings
>   5to6: reword/-structure corosync message
>   5to6: fail if a corosync node has neither ring0 nor ring1 defined
>   5to6: add more corosync subheaders
>   5to6: make corosync totem checks more verbose
> 
>  PVE/CLI/pve5to6.pm | 107 -
>  1 file changed, 96 insertions(+), 11 deletions(-)
> 

applied series, thanks!


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH stable5 access-control] ticket: properly verify exactly 5min old tickets

2019-07-08 Thread Thomas Lamprecht
On 7/8/19 2:36 PM, Fabian Grünbichler wrote:
> to fix an issue where valid tickets could be rejected 5 minutes after a
> key rotation, where the minimum age is exactly 0 seconds.
> 
> thanks Dominik for triaging!
> 
> Signed-off-by: Fabian Grünbichler 
> (cherry picked from commit 5bb966fe5d6f3f6a30e86724c024f80ebebacfba)
> ---
> this cherry-pick was missed, already applied in master
> 
>  PVE/AccessControl.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/PVE/AccessControl.pm b/PVE/AccessControl.pm
> index fc519f1..908cccb 100644
> --- a/PVE/AccessControl.pm
> +++ b/PVE/AccessControl.pm
> @@ -294,7 +294,7 @@ sub verify_ticket {
>   return undef if !$rsa_pub;
>  
>   my ($min, $max) = $get_ticket_age_range->($now, $rsa_mtime, $old);
> - return undef if !$min;
> + return undef if !defined($min);
>  
>   return PVE::Ticket::verify_rsa_ticket(
>   $rsa_pub, 'PVE', $ticket, undef, $min, $max, 1);
> 
applied, thanks!


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs 1/2] Use correct xref: syntax and add pvecm prefix

2019-07-08 Thread Thomas Lamprecht
On 7/8/19 6:26 PM, Stefan Reiter wrote:
> Signed-off-by: Stefan Reiter 
> ---
> 
> Hope it's the correct style now.
> 
> I decided to put this into its own commit, since it could technically be applied 
> to
> the docs for pve 5 as well (although with little visual effect).

in general OK and good (that there are no visual changes for such a patch
is wanted and a feature ;) but did you check whether all references pointing
to this file (from docs-internal links and the WebUI online help) still
resolve, i.e. that nothing got renamed and now has dangling links pointing to it?

> 
>  pvecm.adoc | 30 +++---
>  1 file changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/pvecm.adoc b/pvecm.adoc
> index 05756ca..1c0b9e7 100644
> --- a/pvecm.adoc
> +++ b/pvecm.adoc
> @@ -150,7 +150,7 @@ Login via `ssh` to the node you want to add.
>  
>  
>  For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
> -An IP address is recommended (see <>).
> +An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address 
> Types]).
>  
>  CAUTION: A new node cannot hold any VMs, because you would get
>  conflicts about identical VM IDs. Also, all existing configuration in
> @@ -212,7 +212,7 @@ Membership information
>   4  1 hp4
>  
>  
> -[[adding-nodes-with-separated-cluster-network]]
> +[[pvecm_adding_nodes_with_separated_cluster_network]]
>  Adding Nodes With Separated Cluster Network
>  ~~~
>  
> @@ -428,7 +428,7 @@ part is done by corosync, an implementation of a high 
> performance low overhead
>  high availability development toolkit. It serves our decentralized
>  configuration file system (`pmxcfs`).
>  
> -[[cluster-network-requirements]]
> +[[pvecm_cluster_network_requirements]]
>  Network Requirements
>  
>  This needs a reliable network with latencies under 2 milliseconds (LAN
> @@ -486,7 +486,7 @@ Setting Up A New Network
>  
>  First you have to setup a new network interface. It should be on a physical
>  separate network. Ensure that your network fulfills the
> -<>.
> +xref:pvecm_cluster_network_requirements[cluster network requirements].
>  
>  Separate On Cluster Creation
>  
> @@ -510,9 +510,9 @@ systemctl status corosync
>  
>  
>  Afterwards, proceed as descripted in the section to
> -< cluster network>>.
> +xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a 
> separated cluster network].
>  
> -[[separate-cluster-net-after-creation]]
> +[[pvecm_separate_cluster_net_after_creation]]
>  Separate After Cluster Creation
>  ^^^
>  
> @@ -521,7 +521,7 @@ its communication to another network, without rebuilding 
> the whole cluster.
>  This change may lead to short durations of quorum loss in the cluster, as 
> nodes
>  have to restart corosync and come up one after the other on the new network.
>  
> -Check how to <> first.
> +Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] 
> first.
>  The open it and you should see a file similar to:
>  
>  
> @@ -579,7 +579,7 @@ you do not see them already. Those *must* match the node 
> name.
>  Then replace the address from the 'ring0_addr' properties with the new
>  addresses.  You may use plain IP addresses or also hostnames here. If you use
>  hostnames ensure that they are resolvable from all nodes. (see also
> -<>)
> +xref:pvecm_corosync_addresses[Ring Address Types])
>  
>  In my example I want to switch my cluster communication to the 10.10.10.1/25
>  network. So I replace all 'ring0_addr' respectively. I also set the 
> bindnetaddr
> @@ -640,7 +640,7 @@ totem {
>  
>  
>  Now after a final check whether all changed information is correct we save it
> -and see again the <> section to
> +and see again the xref:pvecm_edit_corosync_conf[edit corosync.conf file] 
> section to
>  learn how to bring it in effect.
>  
>  As our change cannot be enforced live from corosync we have to do an restart.
> @@ -661,7 +661,7 @@ systemctl status corosync
>  If corosync runs again correct restart corosync also on all other nodes.
>  They will then join the cluster membership one by one on the new network.
>  
> -[[corosync-addresses]]
> +[[pvecm_corosync_addresses]]
>  Corosync addresses
>  ~~
>  
> @@ -708,7 +708,7 @@ RRP On Cluster Creation
>  The 'pvecm create' command provides the additional parameters 
> 'bindnetX_addr',
>  'ringX_addr' and 'rrp_mode', can be used for RRP configuration.
>  
> -NOTE: See the <> if you do not know what 
> each parameter means.
> +NOTE: See the xref:pvecm_corosync_conf_glossary[glossary] if you do not know 
> what each parameter means.
>  
>  So if you have two networks, one on the 10.10.10.1/24 and the other on the
>  10.10.20.1/24 subnet you would execute:
> @@ -723,7 +723,7 @@ RRP On Existing Clusters
>  
>  
>  You will take similar steps as described in
> -<> to

[pve-devel] [PATCH docs 2/2] Update pvecm documentation for corosync 3

2019-07-08 Thread Stefan Reiter
Parts about multicast and RRP have been removed entirely. Instead, a new
section 'Corosync Redundancy' has been added explaining the concept of
links and link priorities.

Signed-off-by: Stefan Reiter 
---
 pvecm.adoc | 372 +
 1 file changed, 147 insertions(+), 225 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 1c0b9e7..1246111 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -56,13 +56,8 @@ Grouping nodes into a cluster has the following advantages:
 Requirements
 
 
-* All nodes must be in the same network as `corosync` uses IP Multicast
- to communicate between nodes (also see
- http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
- ports 5404 and 5405 for cluster communication.
-+
-NOTE: Some switches do not support IP multicast by default and must be
-manually enabled first.
+* All nodes must be able to contact each other via UDP ports 5404 and 5405 for
+ corosync to work.
 
 * Date and time have to be synchronized.
 
@@ -84,6 +79,11 @@ NOTE: While it's possible for {pve} 4.4 and {pve} 5.0 this 
is not supported as
 production configuration and should only used temporarily during upgrading the
 whole cluster from one to another major version.
 
+NOTE: Mixing {pve} 6.x and earlier versions is not supported, because of the
+major corosync upgrade. While possible to run corosync 3 on {pve} 5.4, this
+configuration is not supported for production environments and should only be
+used while upgrading a cluster.
+
 
 Preparing Nodes
 ---
@@ -96,10 +96,12 @@ Currently the cluster creation can either be done on the 
console (login via
 `ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
 Cluster__).
 
-While it's often common use to reference all other nodenames in `/etc/hosts`
-with their IP this is not strictly necessary for a cluster, which normally uses
-multicast, to work. It maybe useful as you then can connect from one node to
-the other with SSH through the easier to remember node name.
+While it's common to reference all nodenames and their IPs in `/etc/hosts` (or
+make their names resolveable through other means), this is not strictly
+necessary for a cluster to work. It may be useful however, as you can then
+connect from one node to the other with SSH via the easier to remember node
+name. (see also xref:pvecm_corosync_addresses[Link Address Types])
+
 
 [[pvecm_create_cluster]]
 Create the Cluster
@@ -113,31 +115,12 @@ node names.
  hp1# pvecm create CLUSTERNAME
 
 
-CAUTION: The cluster name is used to compute the default multicast address.
-Please use unique cluster names if you run more than one cluster inside your
-network. To avoid human confusion, it is also recommended to choose different
-names even if clusters do not share the cluster network.
-
 To check the state of your cluster use:
 
 
  hp1# pvecm status
 
 
-Multiple Clusters In Same Network
-~
-
-It is possible to create multiple clusters in the same physical or logical
-network. Each cluster must have a unique name, which is used to generate the
-cluster's multicast group address. As long as no duplicate cluster names are
-configured in one network segment, the different clusters won't interfere with
-each other.
-
-If multiple clusters operate in a single network it may be beneficial to setup
-an IGMP querier and enable IGMP Snooping in said network. This may reduce the
-load of the network significantly because multicast packets are only delivered
-to endpoints of the respective member nodes.
-
 
 [[pvecm_join_node_to_cluster]]
 Adding Nodes to the Cluster
@@ -150,7 +133,7 @@ Login via `ssh` to the node you want to add.
 
 
 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
-An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address 
Types]).
+An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address 
Types]).
 
 CAUTION: A new node cannot hold any VMs, because you would get
 conflicts about identical VM IDs. Also, all existing configuration in
@@ -173,7 +156,7 @@ Date: Mon Apr 20 12:30:13 2015
 Quorum provider:  corosync_votequorum
 Nodes:4
 Node ID:  0x0001
-Ring ID:  1928
+Ring ID:  1/8
 Quorate:  Yes
 
 Votequorum information
@@ -217,15 +200,15 @@ Adding Nodes With Separated Cluster Network
 ~~~
 
 When adding a node to a cluster with a separated cluster network you need to
-use the 'ringX_addr' parameters to set the nodes address on those networks:
+use the 'link0' parameter to set the nodes address on that network:
 
 [source,bash]
 
-pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
+pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
 
 
-If you want to use the Redundant Ring Protocol you will also want to pass the
-'ring1_addr' parameter.
+If you want to use the 

[pve-devel] [PATCH docs 1/2] Use correct xref: syntax and add pvecm prefix

2019-07-08 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Hope it's the correct style now.

I decided to put this into its own commit, since it could technically be applied to
the docs for pve 5 as well (although with little visual effect).

 pvecm.adoc | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/pvecm.adoc b/pvecm.adoc
index 05756ca..1c0b9e7 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -150,7 +150,7 @@ Login via `ssh` to the node you want to add.
 
 
 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
-An IP address is recommended (see <>).
+An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address 
Types]).
 
 CAUTION: A new node cannot hold any VMs, because you would get
 conflicts about identical VM IDs. Also, all existing configuration in
@@ -212,7 +212,7 @@ Membership information
  4  1 hp4
 
 
-[[adding-nodes-with-separated-cluster-network]]
+[[pvecm_adding_nodes_with_separated_cluster_network]]
 Adding Nodes With Separated Cluster Network
 ~~~
 
@@ -428,7 +428,7 @@ part is done by corosync, an implementation of a high 
performance low overhead
 high availability development toolkit. It serves our decentralized
 configuration file system (`pmxcfs`).
 
-[[cluster-network-requirements]]
+[[pvecm_cluster_network_requirements]]
 Network Requirements
 
 This needs a reliable network with latencies under 2 milliseconds (LAN
@@ -486,7 +486,7 @@ Setting Up A New Network
 
 First you have to setup a new network interface. It should be on a physical
 separate network. Ensure that your network fulfills the
-<>.
+xref:pvecm_cluster_network_requirements[cluster network requirements].
 
 Separate On Cluster Creation
 
@@ -510,9 +510,9 @@ systemctl status corosync
 
 
 Afterwards, proceed as descripted in the section to
-<>.
+xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a 
separated cluster network].
 
-[[separate-cluster-net-after-creation]]
+[[pvecm_separate_cluster_net_after_creation]]
 Separate After Cluster Creation
 ^^^
 
@@ -521,7 +521,7 @@ its communication to another network, without rebuilding 
the whole cluster.
 This change may lead to short durations of quorum loss in the cluster, as nodes
 have to restart corosync and come up one after the other on the new network.
 
-Check how to <> first.
+Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
 The open it and you should see a file similar to:
 
 
@@ -579,7 +579,7 @@ you do not see them already. Those *must* match the node 
name.
 Then replace the address from the 'ring0_addr' properties with the new
 addresses.  You may use plain IP addresses or also hostnames here. If you use
 hostnames ensure that they are resolvable from all nodes. (see also
-<>)
+xref:pvecm_corosync_addresses[Ring Address Types])
 
 In my example I want to switch my cluster communication to the 10.10.10.1/25
 network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
@@ -640,7 +640,7 @@ totem {
 
 
 Now after a final check whether all changed information is correct we save it
-and see again the <> section to
+and see again the xref:pvecm_edit_corosync_conf[edit corosync.conf file] 
section to
 learn how to bring it in effect.
 
 As our change cannot be enforced live from corosync we have to do an restart.
@@ -661,7 +661,7 @@ systemctl status corosync
 If corosync runs again correct restart corosync also on all other nodes.
 They will then join the cluster membership one by one on the new network.
 
-[[corosync-addresses]]
+[[pvecm_corosync_addresses]]
 Corosync addresses
 ~~
 
@@ -708,7 +708,7 @@ RRP On Cluster Creation
 The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
 'ringX_addr' and 'rrp_mode', can be used for RRP configuration.
 
-NOTE: See the <> if you do not know what each 
parameter means.
+NOTE: See the xref:pvecm_corosync_conf_glossary[glossary] if you do not know 
what each parameter means.
 
 So if you have two networks, one on the 10.10.10.1/24 and the other on the
 10.10.20.1/24 subnet you would execute:
@@ -723,7 +723,7 @@ RRP On Existing Clusters
 
 
 You will take similar steps as described in
-<> to
+xref:pvecm_separate_cluster_net_after_creation[separating the cluster network] 
to
 enable RRP on an already running cluster. The single difference is, that you
 will add `ring1` and use it instead of `ring0`.
 
@@ -781,7 +781,7 @@ nodelist {
 
 
 Bring it in effect like described in the
-<> section.
+xref:pvecm_edit_corosync_conf[edit the corosync.conf file] section.
 
 This is a change which cannot take live in effect and needs at least a restart
 of corosync. Recommended is a restart of the whole cluster.
@@ -979,7 +979,7 @@ For node membership you should always use the `pvecm` tool 
provided by 

[pve-devel] applied: [PATCH installer] mount efivarfs to ensure we can read bigger variables

2019-07-08 Thread Thomas Lamprecht
In short, EFI variables can get quite big, and the old sysfs
interface was made for a time when they couldn't. A few firmwares out
there have such big variables, and if those are accessed through the
sysfs-backed interface one gets an "Input/Output Error".
'grub-install' chokes on that error when it iterates over all
variables to do its work, and thus fails our installation. When we
mount the efivarfs, which does not have this limitation, one can read
all variables just fine - at least as long as the NVRAM backing them
is not broken.

from Linux Kernel Documentation/filesystems/efivarfs.txt:
> The efivarfs filesystem was created to address the shortcomings of
> using entries in sysfs to maintain EFI variables. The old sysfs EFI
> variables code only supported variables of up to 1024 bytes. This
> limitation existed in version 0.99 of the EFI specification, but was
> removed before any full releases. Since variables can now be larger
> than a single page, sysfs isn't the best interface for this.
> Variables can be created, deleted and modified with the efivarfs
> filesystem.

Also mount it in the installer environment for debugging purposes.
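
The effect is easy to check by hand, e.g. (standard mountpoint assumed,
variable names differ per firmware):

    # mount the size-unrestricted interface next to the old sysfs one
    mount -n -t efivarfs none /sys/firmware/efi/efivars
    # list the largest variables; anything above 1024 bytes would have
    # returned -EIO through the legacy /sys/firmware/efi/vars interface
    ls -lS /sys/firmware/efi/efivars | head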

Signed-off-by: Thomas Lamprecht 
---

this also allows removing the "--no-variables" switch from our call to
bootctl, as the variable store should now always be fully functional.

 proxinstall | 7 ++-
 unconfigured.sh | 4 
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/proxinstall b/proxinstall
index 1f70720..019ae0b 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1097,7 +1097,7 @@ sub prepare_systemd_boot_esp {
 File::Path::make_path("$targetdir/$espmp/EFI/proxmox") ||
die "unable to create directory $targetdir/$espmp/EFI/proxmox\n";
 
-syscmd("chroot $targetdir bootctl --no-variables --path /$espmp install") 
== 0 ||
+syscmd("chroot $targetdir bootctl --path /$espmp install") == 0 ||
die "unable to install systemd-boot loader\n";
 write_config("timeout 3\ndefault proxmox-*\n",
"$targetdir/$espmp/loader/loader.conf");
@@ -1378,6 +1378,10 @@ sub extract_data {
die "unable to mount proc on $targetdir/proc\n";
syscmd("mount -n -t sysfs sysfs $targetdir/sys") == 0 ||
die "unable to mount sysfs on $targetdir/sys\n";
+   if ($boot_type eq 'efi') {
+   syscmd("mount -n -t efivarfs none 
$targetdir/sys/firmware/efi/efivars") == 0 ||
+   die "unable to mount efivarfs on 
$targetdir/sys/firmware/efi/efivars: $!\n";
+   }
syscmd("chroot $targetdir mount --bind /mnt/hostrun /run") == 0 ||
die "unable to re-bindmount hostrun on /run in chroot\n";
 
@@ -1735,6 +1739,7 @@ _EOD
 syscmd("umount $targetdir/mnt/hostrun");
 syscmd("umount $targetdir/tmp");
 syscmd("umount $targetdir/proc");
+syscmd("umount $targetdir/sys/firmware/efi/efivars");
 syscmd("umount $targetdir/sys");
 
 if ($use_zfs) {
diff --git a/unconfigured.sh b/unconfigured.sh
index 674605f..d16ea61 100755
--- a/unconfigured.sh
+++ b/unconfigured.sh
@@ -63,6 +63,10 @@ export SYSTEMD_IGNORE_CHROOT=1
 
 mount -n -t proc proc /proc
 mount -n -t sysfs sysfs /sys
+if [ -d /sys/firmware/efi ]; then
+echo "EFI boot mode detected, mounting efivars filesystem"
+mount -nt efivarfs none /sys/firmware/efi/efivars
+fi
 mount -n -t tmpfs tmpfs /run
 
 parse_cmdline
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager] fix #2267: delete address(6) and netmas(6) with cidr(6)

2019-07-08 Thread Dominik Csapak
otherwise a user cannot delete an IP address from an interface
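
A hypothetical reproducer through the API (node and interface names made up)
would be along the lines of:

    # request removal of the CIDR; before this patch only the 'cidr' key was
    # dropped, while 'address'/'netmask' stayed behind in the config hash
    pvesh set /nodes/mynode/network/vmbr0 --delete cidr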

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Network.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
index 00337fe2..5e2abda1 100644
--- a/PVE/API2/Network.pm
+++ b/PVE/API2/Network.pm
@@ -435,6 +435,13 @@ __PACKAGE__->register_method({
delete $ifaces->{$iface}->{$k};
@$families = grep(!/^inet$/, @$families) if $k eq 'address';
@$families = grep(!/^inet6$/, @$families) if $k eq 'address6';
+   if ($k eq 'cidr') {
+   delete $ifaces->{$iface}->{netmask};
+   delete $ifaces->{$iface}->{address};
+   } elsif ($k eq 'cidr6') {
+   delete $ifaces->{$iface}->{netmask6};
+   delete $ifaces->{$iface}->{address6};
+   }
}
 
$map_cidr_to_address_netmask->($param);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage] fix #2266: Diskmanage: get correct osd id

2019-07-08 Thread Dominik Csapak
the osdid can have more than a single digit;
also add more regression tests for this
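
To illustrate with a made-up lv_tags string (matching the new test data, not
code from the repository):

    my $tags = 'ceph.osd_id=230,ceph.fsid=test';

    $tags =~ m/ceph.osd_id=([^,])/  and print "old capture: $1\n";  # old capture: 2
    $tags =~ m/ceph.osd_id=([^,]+)/ and print "new capture: $1\n";  # new capture: 230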

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm |  2 +-
 test/disk_tests/usages/disklist   |  2 ++
 test/disk_tests/usages/disklist_expected.json | 31 +++
 test/disk_tests/usages/lvs|  6 ++--
 test/disk_tests/usages/pvs|  2 ++
 test/disk_tests/usages/sdk/device/vendor  |  1 +
 test/disk_tests/usages/sdk/queue/rotational   |  1 +
 test/disk_tests/usages/sdk/size   |  1 +
 test/disk_tests/usages/sdk_udevadm| 12 +++
 test/disk_tests/usages/sdl/device/vendor  |  1 +
 test/disk_tests/usages/sdl/queue/rotational   |  1 +
 test/disk_tests/usages/sdl/size   |  1 +
 test/disk_tests/usages/sdl_udevadm| 12 +++
 13 files changed, 70 insertions(+), 3 deletions(-)
 create mode 100644 test/disk_tests/usages/sdk/device/vendor
 create mode 100644 test/disk_tests/usages/sdk/queue/rotational
 create mode 100644 test/disk_tests/usages/sdk/size
 create mode 100644 test/disk_tests/usages/sdk_udevadm
 create mode 100644 test/disk_tests/usages/sdl/device/vendor
 create mode 100644 test/disk_tests/usages/sdl/queue/rotational
 create mode 100644 test/disk_tests/usages/sdl/size
 create mode 100644 test/disk_tests/usages/sdl_udevadm

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index f446269..0deb1a6 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -258,7 +258,7 @@ sub get_ceph_volume_infos {
if ($fields->[1] =~ m|^osd-([^-]+)-|) {
my $type = $1;
# $result autovivification is wanted, to not creating empty hashes
-   if (($type eq 'block' || $type eq 'data') && $fields->[2] =~ 
m/ceph.osd_id=([^,])/) {
+   if (($type eq 'block' || $type eq 'data') && $fields->[2] =~ 
m/ceph.osd_id=([^,]+)/) {
$result->{$dev}->{osdid} = $1;
$result->{$dev}->{bluestore} = ($type eq 'block');
} else {
diff --git a/test/disk_tests/usages/disklist b/test/disk_tests/usages/disklist
index 9092ce0..ef443ed 100644
--- a/test/disk_tests/usages/disklist
+++ b/test/disk_tests/usages/disklist
@@ -8,3 +8,5 @@ sdg
 sdh
 sdi
 sdj
+sdk
+sdl
diff --git a/test/disk_tests/usages/disklist_expected.json 
b/test/disk_tests/usages/disklist_expected.json
index 9829339..610e80f 100644
--- a/test/disk_tests/usages/disklist_expected.json
+++ b/test/disk_tests/usages/disklist_expected.json
@@ -152,5 +152,36 @@
"bluestore": 0,
"type" : "hdd",
"osdid" : 0
+},
+"sdk" : {
+   "serial" : "SERIAL1",
+   "vendor" : "ATA",
+   "wwn" : "0x",
+   "devpath" : "/dev/sdk",
+   "model" : "MODEL1",
+   "used" : "LVM",
+   "wearout" : "N/A",
+   "health" : "UNKNOWN",
+   "gpt" : 1,
+   "size" : 1536000,
+   "rpm" : 0,
+   "bluestore": 0,
+   "type" : "hdd",
+   "osdid" : 230
+},
+"sdl" : {
+   "serial" : "SERIAL1",
+   "vendor" : "ATA",
+   "wwn" : "0x",
+   "devpath" : "/dev/sdl",
+   "model" : "MODEL1",
+   "used" : "LVM",
+   "wearout" : "N/A",
+   "health" : "UNKNOWN",
+   "gpt" : 1,
+   "size" : 1536000,
+   "rpm" : 0,
+   "type" : "hdd",
+   "osdid" : -1
 }
 }
diff --git a/test/disk_tests/usages/lvs b/test/disk_tests/usages/lvs
index 393dcd3..8d640e1 100644
--- a/test/disk_tests/usages/lvs
+++ b/test/disk_tests/usages/lvs
@@ -1,4 +1,6 @@
 /dev/sdg(0);osd-block-01234;ceph.osd_id=1
 /dev/sdh(0);osd-journal-01234;ceph.osd_id=1
-/dev/sdi(0);osd-db-01234;ceph.osd_id=1
-/dev/sdj(0);osd-data-01234;ceph.osd_id=0
+/dev/sdi(0);osd-db-01234;ceph.osd_id=1,dasdf
+/dev/sdj(0);osd-data-01234;ceph.osd_id=0,asfd
+/dev/sdk(0);osd-data-231231;ceph.osd_id=230,ceph.fsid=test
+/dev/sdl(0);osd-data-234132;ceph.osd_id=,bar
diff --git a/test/disk_tests/usages/pvs b/test/disk_tests/usages/pvs
index 0df5080..86ec3d4 100644
--- a/test/disk_tests/usages/pvs
+++ b/test/disk_tests/usages/pvs
@@ -3,3 +3,5 @@
   /dev/sdh
   /dev/sdi
   /dev/sdj
+  /dev/sdk
+  /dev/sdl
diff --git a/test/disk_tests/usages/sdk/device/vendor 
b/test/disk_tests/usages/sdk/device/vendor
new file mode 100644
index 000..531030d
--- /dev/null
+++ b/test/disk_tests/usages/sdk/device/vendor
@@ -0,0 +1 @@
+ATA
diff --git a/test/disk_tests/usages/sdk/queue/rotational 
b/test/disk_tests/usages/sdk/queue/rotational
new file mode 100644
index 000..d00491f
--- /dev/null
+++ b/test/disk_tests/usages/sdk/queue/rotational
@@ -0,0 +1 @@
+1
diff --git a/test/disk_tests/usages/sdk/size b/test/disk_tests/usages/sdk/size
new file mode 100644
index 000..13de30f
--- /dev/null
+++ b/test/disk_tests/usages/sdk/size
@@ -0,0 +1 @@
+3000
diff --git a/test/disk_tests/usages/sdk_udevadm 
b/test/disk_tests/usages/sdk_udevadm
new file mode 100644
index 000..3baef2f
--- /dev/null
+++ b/test/disk_tests/usages/sdk_udevadm
@@ -0,0 

[pve-devel] [PATCH stable5 access-control] ticket: properly verify exactly 5min old tickets

2019-07-08 Thread Fabian Grünbichler
to fix an issue where valid tickets could be rejected 5 minutes after a
key rotation, where the minimum age is exactly 0 seconds.

thanks Dominik for triaging!
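
The boundary case is easy to see in isolation (toy values, not code from the
repository):

    my $min = 0;   # the problematic case: minimum allowed ticket age of exactly 0 seconds

    print "rejected\n" if !$min;          # fires: 0 is false, valid ticket dropped
    print "rejected\n" if !defined($min); # does not fire: 0 is defined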

Signed-off-by: Fabian Grünbichler 
(cherry picked from commit 5bb966fe5d6f3f6a30e86724c024f80ebebacfba)
---
this cherry-pick was missed, already applied in master

 PVE/AccessControl.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/AccessControl.pm b/PVE/AccessControl.pm
index fc519f1..908cccb 100644
--- a/PVE/AccessControl.pm
+++ b/PVE/AccessControl.pm
@@ -294,7 +294,7 @@ sub verify_ticket {
return undef if !$rsa_pub;
 
my ($min, $max) = $get_ticket_age_range->($now, $rsa_mtime, $old);
-   return undef if !$min;
+   return undef if !defined($min);
 
return PVE::Ticket::verify_rsa_ticket(
$rsa_pub, 'PVE', $ticket, undef, $min, $max, 1);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v3 manager 1/2] fix #1451: allow some extra mount options for lxc

2019-07-08 Thread Oguz Bektas
hi

On Fri, Jul 05, 2019 at 07:00:08PM +0200, Thomas Lamprecht wrote:
> On 7/5/19 1:27 PM, Oguz Bektas wrote:
> > this allows the following mount options for lxc container rootfs or
> > mountpoints:
> > * noatime
> > * nosuid
> > * noexec
> > * nodev
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> > v3:
> > no change, added for convenience when applying
> > 
> 
> applied, but moved the field to the left column, see followup commit
> message.
> 
> Two improvement ideas:
> * combogrid with short description of flag
ok!
> * more important: implement pending changes for CT, that would be
>   great!
i'll take the bug on bugzilla and start working on it soon, once i
finish up some other stuff.
> 
> > 
> >  www/manager6/lxc/MPEdit.js | 21 +
> >  1 file changed, 21 insertions(+)
> > 
> > diff --git a/www/manager6/lxc/MPEdit.js b/www/manager6/lxc/MPEdit.js
> > index c7c3870a..e33cf54d 100644
> > --- a/www/manager6/lxc/MPEdit.js
> > +++ b/www/manager6/lxc/MPEdit.js
> > @@ -29,6 +29,9 @@ Ext.define('PVE.lxc.MountPointInputPanel', {
> >  
> > var confid = me.confid || "mp"+values.mpid;
> > values.file = me.down('field[name=file]').getValue();
> > +   if (values.mountoptions) {
> > +   values.mountoptions = values.mountoptions.join(';');
> > +   }
> >  
> > if (me.unused) {
> > confid = "mp"+values.mpid;
> > @@ -52,6 +55,9 @@ Ext.define('PVE.lxc.MountPointInputPanel', {
> > var me = this;
> > var vm = this.getViewModel();
> > vm.set('mptype', mp.type);
> > +   if (mp.mountoptions) {
> > +   mp.mountoptions = mp.mountoptions.split(';');
> > +   }
> > me.setValues(mp);
> >  },
> >  
> > @@ -275,6 +281,21 @@ Ext.define('PVE.lxc.MountPointInputPanel', {
> > allowBlank: true
> > },
> > {
> > +   xtype: 'proxmoxKVComboBox',
> > +   name: 'mountoptions',
> > +   fieldLabel: gettext('Mount options'),
> > +   deleteEmpty: false,
> > +   comboItems: [
> > +   ['noatime', 'noatime'],
> > +   ['nodev', 'nodev'],
> > +   ['noexec', 'noexec'],
> > +   ['nosuid', 'nosuid']
> > +   ],
> > +   multiSelect: true,
> > +   value: [],
> > +   allowBlank: true
> > +   },
> > +   {
> > xtype: 'proxmoxcheckbox',
> > inputValue: '0', // reverses the logic
> > name: 'replicate',
> > 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager] gui: show 0 for max_relocate/restart correctly

2019-07-08 Thread Dominik Csapak
0 || '1' will always return '1'
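
In other words (quick console illustration, not part of the patch):

    console.log(0 || '1');         // '1' -> a configured value of 0 was rendered as 1
    console.log(undefined || '1'); // '1' -> only this case should fall back to the default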

Signed-off-by: Dominik Csapak 
---
 www/manager6/ha/Resources.js | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/www/manager6/ha/Resources.js b/www/manager6/ha/Resources.js
index 0b142c8d..bd6c337c 100644
--- a/www/manager6/ha/Resources.js
+++ b/www/manager6/ha/Resources.js
@@ -146,7 +146,10 @@ Ext.define('PVE.ha.ResourcesView', {
width: 100,
sortable: true,
renderer: function(v) {
-   return v || '1';
+   if (v === undefined) {
+   return '1';
+   }
+   return v;
},
dataIndex: 'max_restart'
},
@@ -155,7 +158,10 @@ Ext.define('PVE.ha.ResourcesView', {
width: 100,
sortable: true,
renderer: function(v) {
-   return v || '1';
+   if (v === undefined) {
+   return '1';
+   }
+   return v;
},
dataIndex: 'max_relocate'
},
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs 1/2] Add documentation on bootloaders (systemd-boot)

2019-07-08 Thread Stoiko Ivanov
On Mon, 8 Jul 2019 11:09:38 +0200
Aaron Lauterer  wrote:

> Some things that I have seen, mostly regarding style and readability.

Thanks big time - incorporated and will be included in my v2
> 
> On 7/5/19 6:31 PM, Stoiko Ivanov wrote:
> > With the recently added support for booting ZFS on root on EFI
> > systems via `systemd-boot` the documentation needs adapting (mostly
> > related to editing the kernel commandline).
> > 
> > This patch adds a short section on Bootloaders to the sysadmin
> > chapter describing both `grub` and PVE's use of `systemd-boot`
> > 
> > Signed-off-by: Stoiko Ivanov 
> > ---
> >   sysadmin.adoc   |   2 +
> >   system-booting.adoc | 144
> >  2 files changed, 146
> > insertions(+) create mode 100644 system-booting.adoc
> > 
> > diff --git a/sysadmin.adoc b/sysadmin.adoc
> > index 21537f1..e045610 100644
> > --- a/sysadmin.adoc
> > +++ b/sysadmin.adoc
> > @@ -74,6 +74,8 @@ include::local-zfs.adoc[]
> >   
> >   include::certificate-management.adoc[]
> >   
> > +include::system-booting.adoc[]
> > +
> >   endif::wiki[]
> >   
> >   
> > diff --git a/system-booting.adoc b/system-booting.adoc
> > new file mode 100644
> > index 000..389a0e9
> > --- /dev/null
> > +++ b/system-booting.adoc
> > @@ -0,0 +1,144 @@
> > +[[system_booting]]
> > +Bootloaders
> > +---
> > +ifdef::wiki[]
> > +:pve-toplevel:
> > +endif::wiki[]
> > +
> > +Depending on the disk setup chosen in the installer {pve} uses two
> > bootloaders +for bootstrapping the system.  
> 
> {pve} is using one of two bootloaders, depending on the disk setup 
> selected in the installer.
> 
> (Putting the most important info at the beginning of the sentence)
> 
> > +
> > +For EFI Systems installed with ZFS as the root filesystem
> > `systemd-boot` is +used. All other deployments use the standard
> > `grub` bootloader (this usually +also applies to systems which are
> > installed on top of Debian). +
> > +[[installer_partitioning_scheme]]
> > +Partitioning scheme used by the installer
> > +~
> > +
> > +The {pve} installer creates 3 partitions on disks:
> > +
> > +* a 1M BIOS Boot Partition (gdisk type EF02)
> > +
> > +* a 512M EFI System Partition (ESP, gdisk type EF00)  
> 
> Besides what Thomas already mentioned; what about using MB (with a 
> space) instead of M? "512 MB" instead of "512M"?
> > +
> > +* a third partition spanning the remaining space used for the
> > chosen storage
> > +  type
> > +
> > +`grub` in BIOS mode (`--target i386-pc`) is installed onto the
> > BIOS Boot +Partition of all bootable disks for supporting older
> > systems. +
> > +
> > +Grub
> > +
> > +
> > +`grub` has been the de-facto standard for booting Linux systems
> > for many years +and is quite well documented
> > +footnote:[Grub Manual
> > https://www.gnu.org/software/grub/manual/grub/grub.html]. +
> > +The kernel and initrd images are taken from `/boot` and its
> > configuration file +`/boot/grub/grub.cfg` gets updated by the
> > kernel installation process. +
> > +Configuration
> > +^
> > +Changes to the `grub` configuration are done via the defaults file
> > + `/etc/default/grub` or config snippets in `/etc/default/grub.d`.
> > +To regenerate the `/boot/grub/grub.cfg` after a change to the
> > configuration +run `update-grub`.
> > +
> > +Systemd-boot
> > +
> > +
> > +`systemd-boot` is a lightweight EFI bootloader, which reads the
> > kernel and  
> 
> "...EFI bootloader. It reads the kernel and "
> Splitting the sentence will produce two shorter sentences that are 
> easier to grasp.
> 
> > +initrd images directly from the EFI Service Partition (ESP) where
> > it is +installed.  The main advantage of directly loading the
> > +kernel from the ESP is that it does not need to reimplement the
> > drivers for +accessing the storage.  In the context of ZFS as root
> > filesystem this means +that you can use all optional features on
> > your root pool instead of the subset +which is also present in the
> > ZFS implementation in `grub` or having to create +a separate small
> > boot-pool +footnote:[Booting ZFS on root with grub
> > https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS].
> > + +In setups with redundancy (RAID1, RAID10, RAIDZ*) all bootable
> > disks (those +being part of the first `vdev`) are partitioned with
> > an ESP, ensuring the  
> 
> "with an ESP. This ensure that the system can boot even..."
> 
> > +system boots even if the first boot device fails.  The ESPs are
> > kept in sync by +a kernel postinstall hook script
> > `/etc/kernel/postinst.d/zz-pve-efiboot`. The +script copies certain
> > kernel versions and the initrd images to `EFI/proxmox/` +on the
> > root of each ESP and creates the appropriate config files in
> > +`loader/entries/proxmox-*.conf`. +
> > +The following kernel versions are configured by default:
> > +
> > +* the currently booted kernel
> > +* the version being installed
> 

[pve-devel] [PATCH stable5 manager 1/6] 5to6: add Corosync resolve helper

2019-07-08 Thread Fabian Grünbichler
copied from PVE 6.x's pve-cluster.

since Corosync 2.x has a different default value for ip_version, we
don't want to backport this for general usage in PVE::Corosync. the
check here needs the default of Corosync 3.x, since that is what we
upgrade to.
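
As a toy illustration of the strategy handling (made-up addresses, the real
helper resolves the hostname itself):

    my ($ip4, $ip6) = ('192.0.2.10', '2001:db8::10');

    my %pick = (
        'ipv4'   => $ip4,
        'ipv6'   => $ip6,
        'ipv6-4' => $ip6 // $ip4,   # corosync 3.x default: prefer IPv6, fall back to IPv4
        'ipv4-6' => $ip4 // $ip6,   # prefer IPv4, fall back to IPv6
    );
    print "$_ -> $pick{$_}\n" for sort keys %pick;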

Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve5to6.pm | 66 +-
 1 file changed, 65 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index b0bd531b..7d23ac3c 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -15,9 +15,10 @@ use PVE::INotify;
 use PVE::JSONSchema;
 use PVE::RPCEnvironment;
 use PVE::Storage;
-use PVE::Tools qw(run_command);
+use PVE::Tools qw(run_command $IPV4RE $IPV6RE);
 use PVE::QemuServer;
 
+use Socket qw(AF_INET AF_INET6 inet_ntop);
 use Term::ANSIColor;
 
 use PVE::CLIHandler;
@@ -136,6 +137,69 @@ my $get_pkg = sub {
 }
 };
 
+# taken from pve-cluster 6.0-4
+my $resolve_hostname_like_corosync = sub {
+my ($hostname, $corosync_conf) = @_;
+
+my $corosync_strategy = $corosync_conf->{main}->{totem}->{ip_version};
+$corosync_strategy = lc ($corosync_strategy // "ipv6-4");
+
+my $match_ip_and_version = sub {
+   my ($addr) = @_;
+
+   return undef if !defined($addr);
+
+   if ($addr =~ m/^$IPV4RE$/) {
+   return ($addr, 4);
+   } elsif ($addr =~ m/^$IPV6RE$/) {
+   return ($addr, 6);
+   }
+
+   return undef;
+};
+
+my ($resolved_ip, $ip_version) = $match_ip_and_version->($hostname);
+
+return ($resolved_ip, $ip_version) if defined($resolved_ip);
+
+my $resolved_ip4;
+my $resolved_ip6;
+
+my @resolved_raw;
+eval { @resolved_raw = PVE::Tools::getaddrinfo_all($hostname); };
+
+return undef if ($@ || !@resolved_raw);
+
+foreach my $socket_info (@resolved_raw) {
+   next if !$socket_info->{addr};
+
+   my ($family, undef, $host) = 
PVE::Tools::unpack_sockaddr_in46($socket_info->{addr});
+
+   if ($family == AF_INET && !defined($resolved_ip4)) {
+   $resolved_ip4 = inet_ntop(AF_INET, $host);
+   } elsif ($family == AF_INET6 && !defined($resolved_ip6)) {
+   $resolved_ip6 = inet_ntop(AF_INET6, $host);
+   }
+
+   last if defined($resolved_ip4) && defined($resolved_ip6);
+}
+
+# corosync_strategy specifies the order in which IP addresses are resolved
+# by corosync. We need to match that order, to ensure we create firewall
+# rules for the correct address family.
+if ($corosync_strategy eq "ipv4") {
+   $resolved_ip = $resolved_ip4;
+} elsif ($corosync_strategy eq "ipv6") {
+   $resolved_ip = $resolved_ip6;
+} elsif ($corosync_strategy eq "ipv6-4") {
+   $resolved_ip = $resolved_ip6 // $resolved_ip4;
+} elsif ($corosync_strategy eq "ipv4-6") {
+   $resolved_ip = $resolved_ip4 // $resolved_ip6;
+}
+
+return $match_ip_and_version->($resolved_ip);
+};
+
 sub check_pve_packages {
 print_header("CHECKING VERSION INFORMATION FOR PVE PACKAGES");
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH stable5 manager 3/6] 5to6: reword/-structure corosync message

2019-07-08 Thread Fabian Grünbichler
and fix a typo as well

Signed-off-by: Fabian Grünbichler 
(cherry picked from commit 388a505104eae0d8c6389b247aba7eca713b03ba)
---
 PVE/CLI/pve5to6.pm | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index f84a8c1b..33079553 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -426,9 +426,9 @@ sub check_cluster_corosync {
 
 foreach my $cs_node (keys %$conf_nodelist) {
my $entry = $conf_nodelist->{$cs_node};
-   log_fail("No name entry for node '$cs_node' in corosync.conf.")
+   log_fail("$cs_node: no name entry in corosync.conf.")
if !defined($entry->{name});
-   log_fail("No nodeid configured for node '$cs_node' in corosync.conf.")
+   log_fail("$cs_node: no nodeid configured in corosync.conf.")
if !defined($entry->{nodeid});
 
my $verify_ring_ip = sub {
@@ -438,12 +438,12 @@ sub check_cluster_corosync {
my ($resolved_ip, undef) = 
$resolve_hostname_like_corosync->($ring, $conf);
if (defined($resolved_ip)) {
if ($resolved_ip ne $ring) {
-   log_warn("$key '$ring' of node '$cs_node' resolves to 
'$resolved_ip'.\n Consider replacing it with the currently resolved IP 
address.");
+   log_warn("$cs_node: $key '$ring' resolves to 
'$resolved_ip'.\n Consider replacing it with the currently resolved IP 
address.");
} else {
-   log_pass("$key is configured to use IP address 
'$ring'");
+   log_pass("$cs_node: $key is configured to use IP 
address '$ring'");
}
} else {
-   log_fail("unable to resolve $key '$ring' of node '$cs_node' 
to an IP address according to Corosync's resolve strategy - cluster will fail 
with Corosync 3.x/kronosnet!");
+   log_fail("$cs_node: unable to resolve $key '$ring' to an IP 
address according to Corosync's resolve strategy - cluster will potentially 
fail with Corosync 3.x/kronosnet!");
}
}
};
@@ -455,7 +455,7 @@ sub check_cluster_corosync {
 
 my $transport = $totem->{transport};
 if (defined($transport)) {
-   log_fail("Corosync transport expliclitly set to '$transport' instead of 
implicit default!");
+   log_fail("Corosync transport explicitly set to '$transport' instead of 
implicit default!");
 }
 
 if ((!defined($totem->{secauth}) || $totem->{secauth} ne 'on') && 
(!defined($totem->{crypto_cipher}) || $totem->{crypto_cipher} eq 'none')) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH stable5 manager 0/6] 5to6 corosync improvements

2019-07-08 Thread Fabian Grünbichler
patch #1 is new (inline of pve-cluster commits)
patch #2 is adapted for #1

rest are clean cherry-picks

Fabian Grünbichler (6):
  5to6: add Corosync resolve helper
  5to6: attempt to resolve corosync rings
  5to6: reword/-structure corosync message
  5to6: fail if a corosync node has neither ring0 nor ring1 defined
  5to6: add more corosync subheaders
  5to6: make corosync totem checks more verbose

 PVE/CLI/pve5to6.pm | 107 -
 1 file changed, 96 insertions(+), 11 deletions(-)

-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH stable5 manager 4/6] 5to6: fail if a corosync node has neither ring0 nor ring1 defined

2019-07-08 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
(cherry picked from commit e6b956df7bb1522e0cf47e2afb3dbb609a88b750)
---
 PVE/CLI/pve5to6.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index 33079553..e7c9a24f 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -430,6 +430,8 @@ sub check_cluster_corosync {
if !defined($entry->{name});
log_fail("$cs_node: no nodeid configured in corosync.conf.")
if !defined($entry->{nodeid});
+   log_fail("$cs_node: neither ring0_addr nor ring1_addr defined in 
corosync.conf.")
+   if !defined($entry->{ring0_addr}) && !defined($entry->{ring1_addr});
 
my $verify_ring_ip = sub {
my $key = shift;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH stable5 manager 5/6] 5to6: add more corosync subheaders

2019-07-08 Thread Fabian Grünbichler
to improve readability

Signed-off-by: Fabian Grünbichler 
(cherry picked from commit 5684da54dc15fb2a5bb26fdef95db67cea836a21)
---
 PVE/CLI/pve5to6.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index e7c9a24f..0c7efbb1 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -365,6 +365,7 @@ sub check_cluster_corosync {
 my $conf_nodelist = PVE::Corosync::nodelist($conf);
 my $node_votes = 0;
 
+print "\nAnalzying quorum settings and state..\n";
 if (!defined($conf_nodelist)) {
log_fail("unable to retrieve nodelist from corosync.conf");
 } else {
@@ -424,6 +425,7 @@ sub check_cluster_corosync {
 log_fail("corosync.conf ($conf_nodelist_count) and pmxcfs 
($cfs_nodelist_count) don't agree about size of nodelist.")
if $conf_nodelist_count != $cfs_nodelist_count;
 
+print "\nChecking nodelist entries..\n";
 foreach my $cs_node (keys %$conf_nodelist) {
my $entry = $conf_nodelist->{$cs_node};
log_fail("$cs_node: no name entry in corosync.conf.")
@@ -453,8 +455,8 @@ sub check_cluster_corosync {
$verify_ring_ip->('ring1_addr');
 }
 
+print "\nChecking totem settings..\n";
 my $totem = $conf->{main}->{totem};
-
 my $transport = $totem->{transport};
 if (defined($transport)) {
log_fail("Corosync transport explicitly set to '$transport' instead of 
implicit default!");
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH stable5 manager 6/6] 5to6: make corosync totem checks more verbose

2019-07-08 Thread Fabian Grünbichler
to avoid just printing the subheader with no results

Signed-off-by: Fabian Grünbichler 
(cherry picked from commit c4bc94bb7b019e3c5a4518eda55883bb989146c5)
---
 PVE/CLI/pve5to6.pm | 18 +-
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index 0c7efbb1..618f20aa 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -459,15 +459,23 @@ sub check_cluster_corosync {
 my $totem = $conf->{main}->{totem};
 my $transport = $totem->{transport};
 if (defined($transport)) {
-   log_fail("Corosync transport explicitly set to '$transport' instead of 
implicit default!");
+   if ($transport ne 'knet') {
+   log_fail("Corosync transport explicitly set to '$transport' instead 
of implicit default!");
+   } else {
+   log_pass("Corosync transport set to '$transport'.");
+   }
+} else {
+   log_pass("Corosync transport set to implicit default.");
 }
 
 if ((!defined($totem->{secauth}) || $totem->{secauth} ne 'on') && 
(!defined($totem->{crypto_cipher}) || $totem->{crypto_cipher} eq 'none')) {
log_fail("Corosync authentication/encryption is not explicitly enabled 
(secauth / crypto_cipher / crypto_hash)!");
-}
-
-if (defined($totem->{crypto_cipher}) && $totem->{crypto_cipher} eq '3des') 
{
-   log_fail("Corosync encryption cipher set to '3des', no longer supported 
in Corosync 3.x!");
+} else {
+   if (defined($totem->{crypto_cipher}) && $totem->{crypto_cipher} eq 
'3des') {
+   log_fail("Corosync encryption cipher set to '3des', no longer 
supported in Corosync 3.x!");
+   } else {
+   log_pass("Corosync encryption and authentication enabled.");
+   }
 }
 
 print "\n";
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH stable5 manager 2/6] 5to6: attempt to resolve corosync rings

2019-07-08 Thread Fabian Grünbichler
and only fail if unable to

Signed-off-by: Fabian Grünbichler 

(backported from commit 669211d8bbb0857275669068fcbf62560782b888)

use local copy of resolve_hostname_like_corosync instead of
pve-cluster's.

Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve5to6.pm | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index 7d23ac3c..f84a8c1b 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -434,8 +434,17 @@ sub check_cluster_corosync {
my $verify_ring_ip = sub {
my $key = shift;
my $ring = $entry->{$key};
-   if (defined($ring) && !PVE::JSONSchema::pve_verify_ip($ring, 1)) {
-   log_fail("$key '$ring' of node '$cs_node' is not an IP address, 
consider replacing it with the currently resolved IP address.");
+   if (defined($ring)) {
+   my ($resolved_ip, undef) = 
$resolve_hostname_like_corosync->($ring, $conf);
+   if (defined($resolved_ip)) {
+   if ($resolved_ip ne $ring) {
+   log_warn("$key '$ring' of node '$cs_node' resolves to 
'$resolved_ip'.\n Consider replacing it with the currently resolved IP 
address.");
+   } else {
+   log_pass("$key is configured to use IP address 
'$ring'");
+   }
+   } else {
+   log_fail("unable to resolve $key '$ring' of node '$cs_node' 
to an IP address according to Corosync's resolve strategy - cluster will fail 
with Corosync 3.x/kronosnet!");
+   }
}
};
$verify_ring_ip->('ring0_addr');
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC firewall] ebtables: remove PVE chains properly

2019-07-08 Thread Fabian Grünbichler
On Mon, Jul 08, 2019 at 10:38:26AM +0200, Thomas Lamprecht wrote:
> Am 7/8/19 um 9:33 AM schrieb Fabian Grünbichler:
> > when globally disabling the FW, or on shutdown of firewall service.
> > otherwise, ebtables rules are leftover (and perpetually displayed as
> > pending changes as well).
> > 
> > the actual removal is done by taking the same code path as when
> > disabling just ebtables on the cluster level, i.e. applying an empty
> > ruleset.
> > 
> > Signed-off-by: Fabian Grünbichler 
> > ---
> > 
> > Notes:
> > another approach would be to make ebtables_get_chains more like
> > iptables_get_chains, and then re-use remove_pvefw_chains_iptables..
> > 
> > should backport cleanly to stable-5
> > 
> >  src/PVE/Firewall.pm | 7 +++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
> > index 96c45e9..4147f87 100644
> > --- a/src/PVE/Firewall.pm
> > +++ b/src/PVE/Firewall.pm
> > @@ -4269,6 +4269,7 @@ sub remove_pvefw_chains {
> >  PVE::Firewall::remove_pvefw_chains_iptables("iptables");
> >  PVE::Firewall::remove_pvefw_chains_iptables("ip6tables");
> >  PVE::Firewall::remove_pvefw_chains_ipset();
> > +PVE::Firewall::remove_pvefw_chains_ebtables();
> >  
> >  }
> >  
> > @@ -4314,6 +4315,12 @@ sub remove_pvefw_chains_ipset {
> >  ipset_restore_cmdlist($cmdlist) if $cmdlist;
> >  }
> >  
> > +sub remove_pvefw_chains_ebtables {
> > +# empty ruleset == ebtables disabled
> > +my ($cmdlist, $changes) = get_ebtables_cmdlist({});
> > +ebtables_restore_cmdlist($cmdlist) if $changes && $cmdlist;
> 
> $cmdlist is always true here..

true, and $changes is only 1 for anything besides exists/ignore/delete
(the latter seems incorrect IMHO, since both ipset and iptables treat
deletions as changes).

will send a v2..

> Also while it is not too useful to flush the rules if no changes
> (i.e., already emptied ebtables ruleset) is detected we could do
> it anyway, e.g. a simple (untested):
> 
> ebtables_restore_cmdlist("*filter\n");

that would also remove rules set by the admin, AFAICT?

> 
> > +}
> > +
> >  sub init {
> >  my $cluster_conf = load_clusterfw_conf();
> >  my $cluster_options = $cluster_conf->{options};
> > 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server] use new pcie port hardware

2019-07-08 Thread Dominik Csapak
with qemu 4.0 we can make use of the new pcie-root-ports with settings
for the width/speed, which can resolve issues with some hardware combinations
when negotiating link speed

so we add a new q35 cfg that we include with machine types >= 4.0,
to preserve live migration of q35 machines without passthrough

for details about the link speeds see:

pcie: Enhanced link speed and width support
https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg02827.html

Signed-off-by: Dominik Csapak 
---
i would like to get this into 6.0 before release, else we either cannot do this
until qemu 4.0.1/4.1 or have some situations where live migration is not 
possible

an alternative would be to only do this change when we do pci(e) passthrough
which would minimize the impact on live migration, but makes the code a bit
more complicated

 Makefile  |   1 +
 PVE/QemuServer.pm |   9 +++
 PVE/QemuServer/USB.pm |   6 +-
 pve-q35-4.0.cfg   | 161 ++
 4 files changed, 172 insertions(+), 5 deletions(-)
 create mode 100644 pve-q35-4.0.cfg

diff --git a/Makefile b/Makefile
index 8274060..6e8fc78 100644
--- a/Makefile
+++ b/Makefile
@@ -77,6 +77,7 @@ install: ${PKGSOURCES}
install -d ${DESTDIR}/usr/share/${PACKAGE}
install -m 0644 pve-usb.cfg ${DESTDIR}/usr/share/${PACKAGE}
install -m 0644 pve-q35.cfg ${DESTDIR}/usr/share/${PACKAGE}
+   install -m 0644 pve-q35-4.0.cfg ${DESTDIR}/usr/share/${PACKAGE}
install -m 0644 -D qm.bash-completion ${DESTDIR}/${BASHCOMPLDIR}/qm
install -m 0644 -D qmrestore.bash-completion 
${DESTDIR}/${BASHCOMPLDIR}/qmrestore
install -m 0644 -D qm.zsh-completion ${DESTDIR}/${ZSHCOMPLDIR}/_qm
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5ef92a3..9f29927 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3628,6 +3628,15 @@ sub config_to_command {
push @$cmd, '-drive', 
"if=pflash,unit=1,format=$format,id=drive-efidisk0,file=$path";
 }
 
+# load q35 config
+if ($q35) {
+   # we use different pcie-port hardware for qemu >= 4.0 for passthrough
+   if (qemu_machine_feature_enabled($machine_type, $kvmver, 4, 0)) {
+   push @$devices, '-readconfig', 
'/usr/share/qemu-server/pve-q35-4.0.cfg';
+   } else {
+   push @$devices, '-readconfig', '/usr/share/qemu-server/pve-q35.cfg';
+   }
+}
 
 # add usb controllers
 my @usbcontrollers = PVE::QemuServer::USB::get_usb_controllers($conf, 
$bridges, $arch, $machine_type, $usbdesc->{format}, $MAX_USB_DEVICES);
diff --git a/PVE/QemuServer/USB.pm b/PVE/QemuServer/USB.pm
index 9eaaccc..a2097b9 100644
--- a/PVE/QemuServer/USB.pm
+++ b/PVE/QemuServer/USB.pm
@@ -42,11 +42,7 @@ sub get_usb_controllers {
 if ($arch eq 'aarch64') {
 $pciaddr = print_pci_addr('ehci', $bridges, $arch, $machine);
 push @$devices, '-device', "usb-ehci,id=ehci$pciaddr";
-} elsif ($machine =~ /q35/) { # FIXME: combine this and machine_type_is_q35
-   # the q35 chipset support native usb2, so we enable usb controller
-   # by default for this machine type
-push @$devices, '-readconfig', '/usr/share/qemu-server/pve-q35.cfg';
-} else {
+} elsif ($machine !~ /q35/) { # FIXME: combine this and machine_type_is_q35
 $pciaddr = print_pci_addr("piix3", $bridges, $arch, $machine);
 push @$devices, '-device', "piix3-usb-uhci,id=uhci$pciaddr.0x2";
 
diff --git a/pve-q35-4.0.cfg b/pve-q35-4.0.cfg
new file mode 100644
index 000..9a294bd
--- /dev/null
+++ b/pve-q35-4.0.cfg
@@ -0,0 +1,161 @@
+[device "ehci"]
+  driver = "ich9-usb-ehci1"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1d.7"
+
+[device "uhci-1"]
+  driver = "ich9-usb-uhci1"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1d.0"
+  masterbus = "ehci.0"
+  firstport = "0"
+
+[device "uhci-2"]
+  driver = "ich9-usb-uhci2"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1d.1"
+  masterbus = "ehci.0"
+  firstport = "2"
+
+[device "uhci-3"]
+  driver = "ich9-usb-uhci3"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1d.2"
+  masterbus = "ehci.0"
+  firstport = "4"
+
+[device "ehci-2"]
+  driver = "ich9-usb-ehci2"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1a.7"
+
+[device "uhci-4"]
+  driver = "ich9-usb-uhci4"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1a.0"
+  masterbus = "ehci-2.0"
+  firstport = "0"
+
+[device "uhci-5"]
+  driver = "ich9-usb-uhci5"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1a.1"
+  masterbus = "ehci-2.0"
+  firstport = "2"
+
+[device "uhci-6"]
+  driver = "ich9-usb-uhci6"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1a.2"
+  masterbus = "ehci-2.0"
+  firstport = "4"
+
+
+[device "audio0"]
+  driver = "ich9-intel-hda"
+  bus = "pcie.0"
+  addr = "1b.0"
+
+
+[device "ich9-pcie-port-1"]
+  driver = "pcie-root-port"
+  x-speed = "16"
+  x-width = "32"
+  multifunction = "on"
+  bus = "pcie.0"
+  addr = "1c.0"
+  port = "1"
+  chassis = 

Re: [pve-devel] [PATCH docs 2/2] Refer to the bootloader chapter in remaining docs

2019-07-08 Thread Aaron Lauterer

Not much here, but one instance where it helps to simplify a sentence.

On 7/5/19 6:31 PM, Stoiko Ivanov wrote:

Editing the kernel commandline is described centrally in the bootloaders
chapter. Refer to it where appropriate (qm-pci-passthrough.adoc).

Additionally, update the documentation on ZFS as rpool to reflect the
inclusion of `systemd-boot`

Signed-off-by: Stoiko Ivanov 
---
  local-zfs.adoc  | 20 ++--
  qm-pci-passthrough.adoc | 26 +++---
  2 files changed, 21 insertions(+), 25 deletions(-)

diff --git a/local-zfs.adoc b/local-zfs.adoc
index 13f6050..aae89e0 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -154,15 +154,9 @@ rpool/swap4.25G  7.69T64K  -
  Bootloader
  ~~
  
-The default ZFS disk partitioning scheme does not use the first 2048

-sectors. This gives enough room to install a GRUB boot partition. The
-{pve} installer automatically allocates that space, and installs the
-GRUB boot loader there. If you use a redundant RAID setup, it installs
-the boot loader on all disk required for booting. So you can boot
-even if some disks fail.
-
-NOTE: It is not possible to use ZFS as root file system with UEFI
-boot.
+Depending on whether the system is booted in EFI or legacy BIOS mode the
+{pve} installer sets up either `grub` or `systemd-boot` as main bootloader.
+See the chapter on  xref:system_booting[bootladers] for details.
  
  
  ZFS Administration

@@ -255,7 +249,13 @@ can be used as cache.
  
  .Changing a failed device
  
- zpool replace -f   

+ zpool replace -f   
+
+.Changing a failed bootable device when using systemd-boot
+
+ sgdisk  -R 
+ sgdisk -G 
+ zpool replace -f   
  
  
  Activate E-Mail Notification

diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc
index 3895df4..a661848 100644
--- a/qm-pci-passthrough.adoc
+++ b/qm-pci-passthrough.adoc
@@ -45,9 +45,10 @@ some configuration to enable PCI(e) passthrough.
  
  .IOMMU
  
-The IOMMU has to be activated on the kernel commandline. The easiest way is to

-enable trough grub. Edit `'/etc/default/grub'' and add the following to the
-'GRUB_CMDLINE_LINUX_DEFAULT' variable:
+The IOMMU has to be activated on the
+xref:edit_kernel_cmdline[kernel commandline].
+
+The command line parameters are:
  
  * for Intel CPUs:

  +
@@ -60,12 +61,6 @@ enable trough grub. Edit `'/etc/default/grub'' and add the 
following to the
   amd_iommu=on
  
  
-[[qm_pci_passthrough_update_grub]]

-To bring this change in effect, make sure you run:
-
-
-# update-grub
-
  
  .Kernel Modules
  
@@ -87,6 +82,9 @@ After changing anything modules related, you need to refresh your

  # update-initramfs -u -k all
  
  
+If you are using `systemd-boot` for booting you additionally need to

+xref:systemd-boot-refresh[sync the new initramfs to the bootable partitions].


If you are using `systemd-boot` as bootloader, make sure to 
xref:systemd-boot-refresh[sync the new initramfs to the bootable 
partitions].
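
It might also be worth mentioning the concrete command right here - if I
remember correctly the sync can also be triggered manually with something
like (command name written from memory, please double check):

# pve-efiboot-tool refresh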





+
  .Finish Configuration
  
  Finally reboot to bring the changes into effect and check that it is indeed

@@ -316,10 +314,9 @@ Intels drivers for GVT-g are integrated in the Kernel and 
should work
  with 5th, 6th and 7th generation Intel Core Processors, as well as E3 v4, E3
  v5 and E3 v6 Xeon Processors.
  
-To enable it for Intel Graphcs, you have to make sure to load the module

-'kvmgt' (for example via `/etc/modules`) and to enable it on the Kernel
-commandline. For this you can edit `'/etc/default/grub'' and add the following
-to the 'GRUB_CMDLINE_LINUX_DEFAULT' variable:
+To enable it for Intel Graphics, you have to make sure to load the module
+'kvmgt' (for example via `/etc/modules`) and to enable it on the
+xref:edit_kernel_cmdline[Kernel commandline] and add the following parameter:
  
  

   i915.enable_gvt=1
@@ -327,8 +324,7 @@ to the 'GRUB_CMDLINE_LINUX_DEFAULT' variable:
  
  After that remember to

  xref:qm_pci_passthrough_update_initramfs[update the `initramfs`],
-xref:qm_pci_passthrough_update_grub[update grub] and
-reboot your host.
+and reboot your host.
  
  VM Configuration

  



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH docs 1/2] Add documentation on bootloaders (systemd-boot)

2019-07-08 Thread Aaron Lauterer

Some things that I have seen, mostly regarding style and readability.

On 7/5/19 6:31 PM, Stoiko Ivanov wrote:

With the recently added support for booting ZFS on root on EFI systems via
`systemd-boot` the documentation needs adapting (mostly related to editing
the kernel commandline).

This patch adds a short section on Bootloaders to the sysadmin chapter
describing both `grub` and PVE's use of `systemd-boot`

Signed-off-by: Stoiko Ivanov 
---
  sysadmin.adoc   |   2 +
  system-booting.adoc | 144 
  2 files changed, 146 insertions(+)
  create mode 100644 system-booting.adoc

diff --git a/sysadmin.adoc b/sysadmin.adoc
index 21537f1..e045610 100644
--- a/sysadmin.adoc
+++ b/sysadmin.adoc
@@ -74,6 +74,8 @@ include::local-zfs.adoc[]
  
  include::certificate-management.adoc[]
  
+include::system-booting.adoc[]

+
  endif::wiki[]
  
  
diff --git a/system-booting.adoc b/system-booting.adoc

new file mode 100644
index 000..389a0e9
--- /dev/null
+++ b/system-booting.adoc
@@ -0,0 +1,144 @@
+[[system_booting]]
+Bootloaders
+---
+ifdef::wiki[]
+:pve-toplevel:
+endif::wiki[]
+
+Depending on the disk setup chosen in the installer {pve} uses two bootloaders
+for bootstrapping the system.


{pve} uses one of two bootloaders, depending on the disk setup selected in 
the installer.


(Putting the most important info at the beginning of the sentence)


+
+For EFI Systems installed with ZFS as the root filesystem `systemd-boot` is
+used. All other deployments use the standard `grub` bootloader (this usually
+also applies to systems which are installed on top of Debian).
+
+[[installer_partitioning_scheme]]
+Partitioning scheme used by the installer
+~
+
+The {pve} installer creates 3 partitions on disks:
+
+* a 1M BIOS Boot Partition (gdisk type EF02)
+
+* a 512M EFI System Partition (ESP, gdisk type EF00)


Besides what Thomas already mentioned: what about using MB (with a 
space) instead of M? "512 MB" instead of "512M"?

+
+* a third partition spanning the remaining space used for the chosen storage
+  type
+
+`grub` in BIOS mode (`--target i386-pc`) is installed onto the BIOS Boot
+Partition of all bootable disks for supporting older systems.
+
+
+Grub
+
+
+`grub` has been the de-facto standard for booting Linux systems for many years
+and is quite well documented
+footnote:[Grub Manual https://www.gnu.org/software/grub/manual/grub/grub.html].
+
+The kernel and initrd images are taken from `/boot` and its configuration file
+`/boot/grub/grub.cfg` gets updated by the kernel installation process.
+
+Configuration
+^
+Changes to the `grub` configuration are done via the defaults file
+ `/etc/default/grub` or config snippets in `/etc/default/grub.d`.
+To regenerate the `/boot/grub/grub.cfg` after a change to the configuration
+run `update-grub`.
+
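
Maybe a minimal example would help readers here, e.g. (illustrative only,
borrowing the IOMMU flag from the passthrough chapter):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

and then run `update-grub` as described above.
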
+Systemd-boot
+
+
+`systemd-boot` is a lightweight EFI bootloader, which reads the kernel and


"...EFI bootloader. It reads the kernel and "
Splitting the sentence will produce two shorter sentences that are 
easier to grasp.



+initrd images directly from the EFI Service Partition (ESP) where it is
+installed.  The main advantage of directly loading the
+kernel from the ESP is that it does not need to reimplement the drivers for
+accessing the storage.  In the context of ZFS as root filesystem this means
+that you can use all optional features on your root pool instead of the subset
+which is also present in the ZFS implementation in `grub` or having to create
+a separate small boot-pool
+footnote:[Booting ZFS on root with grub 
https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS].
+
+In setups with redundancy (RAID1, RAID10, RAIDZ*) all bootable disks (those
+being part of the first `vdev`) are partitioned with an ESP, ensuring the


"with an ESP. This ensure that the system can boot even..."


+system boots even if the first boot device fails.  The ESPs are kept in sync by
+a kernel postinstall hook script `/etc/kernel/postinst.d/zz-pve-efiboot`. The
+script copies certain kernel versions and the initrd images to `EFI/proxmox/`
+on the root of each ESP and creates the appropriate config files in
+`loader/entries/proxmox-*.conf`.
+
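
Maybe a short example of such a generated entry would make this more
tangible, roughly (written from memory, the exact values will differ):

# loader/entries/proxmox-5.0.15-1-pve.conf
title    Proxmox Virtual Environment
version  5.0.15-1-pve
linux    /EFI/proxmox/5.0.15-1-pve/vmlinuz-5.0.15-1-pve
initrd   /EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve
options  root=ZFS=rpool/ROOT/pve-1 boot=zfs
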
+The following kernel versions are configured by default:
+
+* the currently booted kernel
+* the version being installed
+* the two latest kernels
+* the latest version of each kernel series (e.g. 4.15, 5.0).
+
+The ESPs are not kept mounted during regular operation, in contrast to `grub`,
+which keeps an ESP mounted on `/boot/efi`. This helps preventing filesystem


"This helps to prevent filesystem..."


+corruption to the `vfat` formatted ESPs in case of a system crash, and removes
+the need to manually adapt `/etc/fstab` in case the primary boot device fails.
+
+[[systemd_boot_config]]
+Configuration
+^
+
+`systemd-boot` itself is configured via the file `loader/loader.conf` in the


[pve-devel] applied-series: [PATCH manager 0/6] 5to6 corosync improvements

2019-07-08 Thread Thomas Lamprecht
On 7/5/19 2:44 PM, Fabian Grünbichler wrote:
> series should be cherry-pickable to stable-5, except patches #1 (see note) 
> and #6 (obviously ;))
> 
> Fabian Grünbichler (6):
>   5to6: attempt to resolve corosync rings
>   5to6: reword/-structure corosync messages
>   5to6: fail if a corosync node has neither ring0 nor ring1 defined
>   5to6: add more corosync subheaders
>   5to6: make corosync totem checks more verbose
>   build: bump versioned dependency on pve-cluster
> 
>  PVE/CLI/pve5to6.pm | 41 +++--
>  debian/control |  2 +-
>  2 files changed, 32 insertions(+), 11 deletions(-)
> 

applied to master for now, need to take another look for stable-5 (will
probably do that in the afternoon).

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager 0/6] 5to6 corosync improvements

2019-07-08 Thread Thomas Lamprecht
On 7/5/19 2:44 PM, Fabian Grünbichler wrote:
> series should be cherry-pickable to stable-5, except patches #1 (see note) 
> and #6 (obviously ;))
> 

for the next time please just send the cherry-picks along, maybe even just
send two series, one for each branch with the respective patches, so that
this is a bit more self-contained... I mean, you have the branches already
prepared like this, as you hopefully tested it on both ;)

> Fabian Grünbichler (6):
>   5to6: attempt to resolve corosync rings
>   5to6: reword/-structure corosync messages
>   5to6: fail if a corosync node has neither ring0 nor ring1 defined
>   5to6: add more corosync subheaders
>   5to6: make corosync totem checks more verbose
>   build: bump versioned dependency on pve-cluster
> 
>  PVE/CLI/pve5to6.pm | 41 +++--
>  debian/control |  2 +-
>  2 files changed, 32 insertions(+), 11 deletions(-)
> 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC firewall] ebtables: remove PVE chains properly

2019-07-08 Thread Thomas Lamprecht
On 7/8/19 9:33 AM, Fabian Grünbichler wrote:
> when globally disabling the FW, or on shutdown of firewall service.
> otherwise, ebtables rules are leftover (and perpetually displayed as
> pending changes as well).
> 
> the actual removal is done by taking the same code path as when
> disabling just ebtables on the cluster level, i.e. applying an empty
> ruleset.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> 
> Notes:
> another approach would be to make ebtables_get_chains more like
> iptables_get_chains, and then re-use remove_pvefw_chains_iptables..
> 
> should backport cleanly to stable-5
> 
>  src/PVE/Firewall.pm | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
> index 96c45e9..4147f87 100644
> --- a/src/PVE/Firewall.pm
> +++ b/src/PVE/Firewall.pm
> @@ -4269,6 +4269,7 @@ sub remove_pvefw_chains {
>  PVE::Firewall::remove_pvefw_chains_iptables("iptables");
>  PVE::Firewall::remove_pvefw_chains_iptables("ip6tables");
>  PVE::Firewall::remove_pvefw_chains_ipset();
> +PVE::Firewall::remove_pvefw_chains_ebtables();
>  
>  }
>  
> @@ -4314,6 +4315,12 @@ sub remove_pvefw_chains_ipset {
>  ipset_restore_cmdlist($cmdlist) if $cmdlist;
>  }
>  
> +sub remove_pvefw_chains_ebtables {
> +# empty ruleset == ebtables disabled
> +my ($cmdlist, $changes) = get_ebtables_cmdlist({});
> +ebtables_restore_cmdlist($cmdlist) if $changes && $cmdlist;

$cmdlist is always true here.
Also, while it is not too useful to flush the rules if no change is
detected (i.e., the ebtables ruleset is already empty), we could do it
anyway, e.g. with a simple (untested):

ebtables_restore_cmdlist("*filter\n");
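
so the whole sub could then be reduced to something like (equally untested):

sub remove_pvefw_chains_ebtables {
    # always apply an empty filter table, flushing any leftover PVE chains
    ebtables_restore_cmdlist("*filter\n");
}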


> +}
> +
>  sub init {
>  my $cluster_conf = load_clusterfw_conf();
>  my $cluster_options = $cluster_conf->{options};
> 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [RFC firewall] ebtables: remove PVE chains properly

2019-07-08 Thread Fabian Grünbichler
when globally disabling the FW, or on shutdown of firewall service.
otherwise, ebtables rules are leftover (and perpetually displayed as
pending changes as well).

the actual removal is done by taking the same code path as when
disabling just ebtables on the cluster level, i.e. applying an empty
ruleset.

Signed-off-by: Fabian Grünbichler 
---

Notes:
another approach would be to make ebtables_get_chains more like
iptables_get_chains, and then re-use remove_pvefw_chains_iptables..

should backport cleanly to stable-5

 src/PVE/Firewall.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 96c45e9..4147f87 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -4269,6 +4269,7 @@ sub remove_pvefw_chains {
 PVE::Firewall::remove_pvefw_chains_iptables("iptables");
 PVE::Firewall::remove_pvefw_chains_iptables("ip6tables");
 PVE::Firewall::remove_pvefw_chains_ipset();
+PVE::Firewall::remove_pvefw_chains_ebtables();
 
 }
 
@@ -4314,6 +4315,12 @@ sub remove_pvefw_chains_ipset {
 ipset_restore_cmdlist($cmdlist) if $cmdlist;
 }
 
+sub remove_pvefw_chains_ebtables {
+# empty ruleset == ebtables disabled
+my ($cmdlist, $changes) = get_ebtables_cmdlist({});
+ebtables_restore_cmdlist($cmdlist) if $changes && $cmdlist;
+}
+
 sub init {
 my $cluster_conf = load_clusterfw_conf();
 my $cluster_options = $cluster_conf->{options};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel