[ClusterLabs] pacemaker certificate is not generated with SubjectAlternativeName

2020-02-28 Thread S Sathish S
Hi Team,

We have found that the Pacemaker certificate is not generated with 
SubjectAlternativeName.

Please find the general guidelines:

If client certificates are required, verification of the client identity SHOULD 
compare the first matching subjectAltName field of the client certificate with 
an authorization identity held in a local or central AA database.
To mitigate the man-in-the-middle risk, server identity verification is 
RECOMMENDED as well. During certificate validation, a client may accept several 
server certificates issued by the same trusted CA.

After certificate chain validation, the TLS client MUST check the identity of 
the server against a configured reference identity (e.g., a hostname). Clients 
MUST support checks using the subjectAltName field with type dNSName. If the 
certificate contains multiple subjectAltName values, then a match with any one 
of them is considered acceptable.
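
As a quick check, the subjectAltName section of the current certificate (if any)
can be dumped with OpenSSL, for example (a sketch; nothing is printed when no
SAN extension is present):

# openssl x509 -in /var/lib/pcsd/pcsd.crt -noout -text | grep -A1 'Subject Alternative Name'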

Current Certificate details:
# keytool -printcert -file /var/lib/pcsd/pcsd.crt
Owner: CN=XXX, OU=pcsd, O=pcsd, L=Minneapolis, ST=MN, C=US
Issuer: CN=XXX, OU=pcsd, O=pcsd, L=Minneapolis, ST=MN, C=US
Serial number: 1703482bc5b
Valid from: Tue Feb 11 14:49:08 CET 2020 until: Fri Feb 08 14:49:08 CET 2030
Certificate fingerprints:
 MD5:  6E:C9:F8:E2:B9:F7:F6:65:53:B4:BD:B9:18:71:B9:78
 SHA1: 9E:7C:22:DA:61:AA:86:DB:D1:74:D4:AC:47:CA:DC:06:6A:21:C2:F0
 SHA256: 1D:8D:88:55:70:FE:01:BB:DB:5C:BD:E7:FF:79:62:02:CB:64:97:A7:16:A4:29:49:F1:94:8E:2F:7B:FC:D4:B5
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

Sample certificate with SubjectAlternativeName details:
#3: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: XXX
  DNSName: XXX]
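
As a workaround, a replacement self-signed certificate containing SAN entries
could be generated and installed in place of the default one along these lines
(a sketch only: the hostnames are placeholders, OpenSSL 1.1.1+ is assumed for
-addext, the default pcsd key/cert paths are assumed, and pcsd needs a restart
to pick up the new files):

# openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout /var/lib/pcsd/pcsd.key -out /var/lib/pcsd/pcsd.crt \
    -subj "/C=US/ST=MN/L=Minneapolis/O=pcsd/OU=pcsd/CN=node1.example.com" \
    -addext "subjectAltName=DNS:node1.example.com,DNS:node1"
# systemctl restart pcsd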


Thanks and Regards,
S Sathish S

Re: [ClusterLabs] Antw: Re: Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-28 Thread Ken Gaillot
On Fri, 2020-02-28 at 09:37 +0100, Ulrich Windl wrote:
> > > > Ken Gaillot  wrote on 27.02.2020 at 23:43 in message
> 
> <43512a11c2ddffbabeee11cf4cb509e4e5dc98ca.ca...@redhat.com>:
> 
> [...]
> > 
> > > 2. Resources/groups  are stopped  (target-role=stopped)
> > > 3. Node exits the cluster cleanly when no resources are  running
> > > any
> > > more
> > > 4. The node rejoins the cluster  after  the reboot
> > > 5. A  positive (on the rebooted node) & negative (ban on the rest
> > > of
> > > the nodes) constraints  are  created for the marked  in step 1
> > > resources
> > > 6.  target-role is  set back to started and the resources are
> > > back
> > > and running
> > > 7. When each resource group (or standalone resource)  is  back
> > > online
> > > -  the mark in step 1  is removed  and any location
> > > constraints  (cli-ban &  cli-prefer)  are  removed  for the
> > > resource/group.
> > 
> > Exactly, that's effectively what happens.
> 
> May I ask how robust the mechanism will be?
> For example, if you do a "resource restart", there are two target
> roles (each made persistent): stopped and started. If the node
> performing the operation is fenced (we had that a few times), the
> resources may remain "stopped" until started manually again.
> I see a similar issue with this mechanism.

Corner cases were carefully considered with this one. If a node is
fenced, its entire CIB status section is cleared, which will include
shutdown locks. I considered alternative implementations under the
hood, and the main advantage of the one chosen is that setting and
clearing the lock are atomic with recording the action results that
cause them. That eliminates a whole lot of possibilities for the type
of problem you mention. Also, there are multiple backstops to clear
locks if anything is fishy, such as if the node is unclean, the
resource somehow started elsewhere while the lock was in effect, a
locked resource is removed from the configuration while it is down,
etc.

The one area I don't consider mature yet is Pacemaker Remote nodes. I'd
recommend using the feature only in a cluster without them. This is due
mainly to a (documented) limitation that manual lock clearing and
shutdown-lock-limit only work if the remote connection is disabled
after stopping the node, which sort of defeats the "hands off" goal.
But also I think using locks with remote nodes requires more testing.
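
For anyone who wants to experiment once 2.0.4 is out, a rough sketch of the
knobs involved, assuming the enabling cluster property is named shutdown-lock
alongside the shutdown-lock-limit option mentioned above (exact pcs syntax may
vary by version):

# pcs property set shutdown-lock=true
# pcs property set shutdown-lock-limit=30min

and a lock can presumably be cleared manually for a specific resource and node
with something like (resource and node names are placeholders):

# crm_resource --refresh --resource my_rsc --node node1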

> 
> [...]
-- 
Ken Gaillot 



[ClusterLabs] resource-agents v4.5.0 rc1

2020-02-28 Thread Oyvind Albrigtsen

ClusterLabs is happy to announce resource-agents v4.5.0 rc1.

Source code is available at:
https://github.com/ClusterLabs/resource-agents/releases/tag/v4.5.0rc1

The most significant enhancements in this release are:
- bugfixes and enhancements:
 - Filesystem: add trigger_udev_rules_if_need() for -U, -L, or /dev/xxx device
 - Filesystem: refresh UUID in the start phase
 - IPaddr2: add noprefixroute parameter (see the usage sketch after this list)
 - IPaddr2: add info to metadata that the ipt_CLUSTERIP "iptables" extension is not 
"nft" backend compatible, and add iptables-legacy support for distros that still provide it
 - IPsrcaddr: replace local rule if using local table, and set src back to 
primary for device on stop
 - IPsrcaddr: fix failure during probe when using destination/table parameters
 - LVM-activate: add OCF_CHECK_LEVEL 10 check that can be enabled to verify vg or lv 
validity with an additional "read 1 byte" test in special cases like iSCSI SAN
 - MailTo: fix variable expansion
 - SAPInstance: clear the $DIR_EXECUTABLE variable so we catch the situation 
when we lose the directory with binaries after the first sapinstance_init invocation
 - aliyun-vpc-move-ip: add support for both 'go' and 'python' versions of 
Aliyun CLI, and auto-detect which to use by default
 - apache: use get_release_id() to detect OS/distro, and fix LOAD_STATUS_MODULE 
issue
 - azure-lb: set socat to default on SUSE distributions
 - exportfs: allow multiple exports of same directory
 - iSCSILogicalUnit: add liot_bstype to handle block/fileio for targetcli, and 
change behavior of lio-t with portals which do not use 0.0.0.0
 - ldirectord: support sched-flags
 - lvmlockd: fix for LVM2 v2.03+ removing lvmetad
 - mysql-common: return correct rc during start-action
 - oralsnr: allow using the same tns_admin directory for different listeners
 - pgsql: Support for PostgreSQL 12
 - podman: improve the code for checking if an image exists
 - rabbitmq-cluster: ensure we delete nodename if stop action fails
 - redis: validate_all: fix file status tests
 - spec: add missing requirement (lsb-release)
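
As an illustration of the new IPaddr2 noprefixroute parameter listed above, a
cluster IP could be created with it enabled roughly as follows (a sketch
assuming the pcs CLI; the IP address, netmask and resource name are
placeholders):

# pcs resource create vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 noprefixroute=true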

The full list of changes for resource-agents is available at:
https://github.com/ClusterLabs/resource-agents/blob/v4.5.0rc1/ChangeLog

Everyone is encouraged to download and test the new release candidate.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

Many thanks to all the contributors to this release.


Best,
The resource-agents maintainers



[ClusterLabs] Antw: Re: Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-28 Thread Ulrich Windl
>>> Andrei Borzenkov  wrote on 28.02.2020 at 06:00 in
message <60dcda9f-3f75-9fc4-8732-adc14ad3b...@gmail.com>:
[...]
> 
> Well, this requires pacemaker supporting notion of "cluster wide
> shutdown" in the first place.

Yes, a "cluster shutdown" is very much desired, just as the "cluster startup" 
(start all nodes at the same time conceptually). The latter is more than just a 
ssh running the software on each node: The DC should be elected once a quorum 
is up, then it should be waited for more nodes to join ("cluster formation 
timeout"), and when either the timeout expired or all nodes are up, resources 
should be placed (started).

Cluster shutdown should be orchestarted in a similar way: All resources 
stopped, then nodes stop without unnecessary re-elections of the DC.


THAT would be a useful extension.
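
For completeness, the closest existing building blocks are stopping or starting
all nodes through the CLI, e.g. (assuming pcs is in use):

# pcs cluster stop --all
# pcs cluster start --all

though these provide none of the formation-timeout or DC-election orchestration
described above.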

[...]

Regards,
Ulrich





[ClusterLabs] Antw: Re: Antw: Re: Antw: [EXT] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-28 Thread Ulrich Windl
>>> Ken Gaillot  wrote on 27.02.2020 at 23:43 in 
>>> message
<43512a11c2ddffbabeee11cf4cb509e4e5dc98ca.ca...@redhat.com>:

[...]
> 
>> 2. Resources/groups  are stopped  (target-role=stopped)
>> 3. Node exits the cluster cleanly when no resources are  running any
>> more
>> 4. The node rejoins the cluster  after  the reboot
>> 5. A  positive (on the rebooted node) & negative (ban on the rest of
>> the nodes) constraints  are  created for the marked  in step 1
>> resources
>> 6.  target-role is  set back to started and the resources are back
>> and running
>> 7. When each resource group (or standalone resource)  is  back online
>> -  the mark in step 1  is removed  and any location
>> constraints  (cli-ban &  cli-prefer)  are  removed  for the
>> resource/group.
> 
> Exactly, that's effectively what happens.

May I ask how robust the mechanism will be?
For example, if you do a "resource restart", there are two target roles (each 
made persistent): stopped and started. If the node performing the operation is 
fenced (we had that a few times), the resources may remain "stopped" until 
started manually again.
I see a similar issue with this mechanism.

[...]

