global maintenance before that.
You have 2 steps:
- pcs cluster auth -> allows the pcsd on the new node to communicate with the pcsd
daemons on the other members of the cluster
- pcs cluster node add -> adds the node to the cluster
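A minimal sketch of those two steps, run from an existing member (node name and password
are placeholders; on EL8 the first command is 'pcs host auth' instead of 'pcs cluster auth'):
pcs cluster auth newnode -u hacluster -p <password>
pcs cluster node add newnode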
Best Regards,
Strahil Nikolov
On Wednesday, October 28, 2020
on EL 8 ?
Best Regards,
Strahil Nikolov
from node2 to node1 .
Note: default stickiness is per resource, while the total stickiness score of
a group is calculated based on the scores of all resources in it.
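As a rough illustration (assuming a default stickiness of 100 and a hypothetical group of
three resources; newer pcs versions use 'pcs resource defaults update'):
pcs resource defaults resource-stickiness=100
# a group containing 3 resources then carries an effective stickiness of 3 x 100 = 300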
Best Regards,
Strahil Nikolov
On Wednesday, December 2, 2020, 16:54:43 GMT+2, Dan Swartzendruber
wrote:
On 2020-
The problem with infinity is that the moment when the node is back - there will
be a second failover. This is bad for bulky DBs that power down/up more than 30
min (15 min down, 15 min up).
Best Regards,
Strahil Nikolov
On Thursday, December 3, 2020, 10:32:18 GMT+2, Andrei Borzenkov
It's more interesting why you got connection close...
Are you sure you didn't get network issues? What is corosync saying in
the logs?
Offtopic: Are you using DLM with OCFS2 ?
Best Regards,
Strahil Nikolov
At 10:33 -0800 on 04.12.2020 (Fri), Reid Wahl wrote:
> On Fri, Dec 4, 202
Nope,
but if you don't use clustered FS, you could also use plain LVM + tags.
As far as I know you need dlm and clvmd for clustered FS.
Best Regards,
Strahil Nikolov
On Tuesday, December 8, 2020, 10:15:39 GMT+2, Ulrich Windl
wrote:
>>> Strahil Nikolov wrote
systemd services do not use ulimit, so you need to check "systemctl show
pacemaker.service" for any clues.
I have seen a similar error in SLES 12 SP2 when the maximum number of tasks was reduced and
we were hitting the limit.
Best Regards,
Strahil Nikolov
On Thursday, December 10, 2020
I think that dlm + clvmd was enough to take care of OCFS2 .
Have you tried that ?
Best Regards,
Strahil Nikolov
On Thursday, December 10, 2020, 16:55:52 GMT+2, Ulrich Windl
wrote:
Hi!
I configured a clustered LV (I think) for activation on three nodes, but it
won't
Have you thought about Hawk ?
Best Regards,
Strahil Nikolov
On Friday, December 11, 2020, 23:20:49 GMT+2, Alex Zarifoglu
wrote:
Hello,
I have a question regarding running crm commands with the effective uid.
I am trying to create a tool to manage pacemaker resources
Use the syntax as if your resource was never in a group and use
'--before/--after' to specify the new location.
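Something along these lines (group, resource and reference names are made up):
pcs resource group add mygroup myresource --before existingresource
pcs resource group add mygroup myresource --after existingresource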
Best Regards,
Strahil Nikolov
On Thursday, December 17, 2020, 13:21:55 GMT+2, Tony Stocker
wrote:
I have a resource group that has a number of entries.
.
What is the output of 'drbdadm status' on both nodes ? What happens
when you stop the cluster resource and start the drbd manually ?
I guess
it's unnecessary to mention how risky it is to run a 2-node cluster and
that it's far safer if you have a quorum somewhere there ;)
B
.
Quote from official documentation (
https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-pacemaker-crm-drbd-backed-service
):
If you are employing the DRBD OCF resource agent, it is recommended
that you defer DRBD startup, shutdown, promotion, and
demotion ex
Have you tried the on-fail=ignore option?
Best Regards,
Strahil Nikolov
On Sunday, January 17, 2021, 20:45:27 GMT+2, Digimer
wrote:
Hi all,
I'm trying to figure out how to define a resource such that if it
fails in any way, it will not cause pacemaker to self-fence
causing the trouble.
Best Regards,
Strahil Nikolov
On Saturday, January 16, 2021, 17:51:05 GMT+2, Brent Jensen
wrote:
Maybe. I haven't focused on any stickiness w/ which node is generally
master or not. Going standby on the master node should move the slave to
master. I
firewall open ? This node should be connected.
Try to verify that each drbd is up and running and that promotion on any of the 2 nodes
is possible before proceeding with the cluster setup.
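A rough way to check that, assuming DRBD 9 and a resource named r0 (adjust to your setup):
drbdadm status r0        # on both nodes; the peer should show 'Connected'
drbdadm primary r0       # promote on one node
drbdadm secondary r0     # demote again, then repeat the promotion on the other node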
Best Regards,
Strahil Nikolov
So why is it saying 'connecting'?
Best Regards,
Strahil Nikolov
On Monday, January 18, 2021, 23:54:02 GMT+2, Brent Jensen
wrote:
Yes all works fine outside of the cluster. No firewall running nor any
selinux.
On 1/18/2021 11:53 AM, Strahil Nikolov wrote:
file' and I hope it helps
you fix your issue.
Best Regards,Strahil Nikolov
At 09:32 -0500 on 19.01.2021 (Tue), Stuart Massey wrote:
> Ulrich,Thank you for that observation. We share that concern.
> We have 4 ea 1G nics active, bonded in pairs. One bonded pair serves
> the "publi
s timeout=30 (DRBD-reload-interval-0s) start
interval=0s timeout=240 (DRBD-start-interval-0s) stop
interval=0s timeout=100 (DRBD-stop-interval-0s)
Best Regards,Strahil Nikolov
At 23:30 -0500 on 21.01.2021 (Thu), Stuart Massey wrote:
> Hi Ulrich,
> Thank you for yo
> How to handle it?
You need to:
- Set up and TEST stonith (a rough sketch follows below)
- Add a 3rd node (even if it doesn't host any resources) or set up a
node for kronosnet
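A sketch of the stonith part (agent, address and credentials are purely illustrative):
pcs stonith create fence_node1 fence_ipmilan ip=10.0.0.1 username=admin password=secret pcmk_host_list=node1
pcs stonith fence node1    # actually test it - don't just configure it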
Best Regards,
Strahil Nikolov
I think that it makes sense, as '--all' should mean 'reach all servers and
shut down there'. Yet, when you run 'pcs cluster stop' - the migration of the
resources is the only option.
Still, it sounds like a bug.
Best Regards,
Strahil Nikolov
fence_drac5, fence_drac (not sure about that), SBD
Best Regards,Strahil Nikolov
On Mon, Jan 25, 2021 at 11:23, Sharma, Jaikumar
wrote:
automatically after the defined timeout ?
Best Regards,Strahil Nikolov
configuration
that any migration lifetime is, by default, '8 hours' (for example) and
afterwards it expires (just like with a timeout).
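Today something similar can be requested per move, e.g. (ISO 8601 duration; resource name
hypothetical and syntax may differ between pcs versions):
pcs resource move myresource lifetime=PT8H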
Best Regards,Strahil Nikolov
On Mon, Jan 25, 2021 at 18:16, Ken Gaillot wrote: On
Mon, 2021-01-25 at 13:22 +0100, Ulrich W
WARNING: cib-bootstrap-options: unknown attribute 'no-quirum-policy'
That looks like a typo.
Best Regards,Strahil Nikolov
On Fri, Feb 12, 2021 at 12:30, Lentes,
Bernd wrote:
- On Feb 12, 2021, at 11:18 AM, Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:
>
nd staying down. For more details use journalctl -xe'.
Isn't it easier to just provide more details in the logs than integrating that
feature ?
Best Regards,Strahil Nikolov
On Tue, Feb 16, 2021 at 21:48, Ken Gaillot wrote: Hi
all,
The systemd journal has a feature calle
Hi Ulrich,
actually you can suppress them.
Best Regards,Strahil Nikolov
On Wed, Feb 17, 2021 at 13:04, Ulrich
Windl wrote: Hi Ken,
personally I think systemd is already logging too much, and I don't think that
adding instructions to many log messages is actually helpful (It cou
Hello All,
I'm currently in the process of building a SAP HANA Scale-out cluster and the HANA
team has asked that all nodes on the active instance should have one IP for
backup purposes.
Yet, I'm not sure how to set up the constraints (if it is possible at all) so
all IPs will follow the master resou
on nodeA, VIP2 on nodeB, VIP3 on nodeC
Master on nodeD: VIP1 on nodeD, VIP2 on nodeE, VIP3 on nodeF
Master down: All VIPs down
I think that Ken has mentioned a possible solution, but I have to check it out.
Best Regards,Strahil Nikolov
On Thu, Feb 18, 2021 at 9:40, Ulrich Windl
wrote: >>>
>Do you have a fixed relation between node >pairs and VIPs? I.e. must
>A/D always get VIP1, B/E - VIP2 etc?
I have to verify it again, but generally speaking - yes, VIP1 is always on
nodeA/D (master), VIP2 on nodeB/E (worker1), etc.
I guess I can set negative constraints (-inf) -> VIP1 on node
As this is in Azure and they support shared disks, I think that a simple SBD
could solve the stonith case.
Best Regards,
Strahil Nikolov
When you change the token, you might consider adjusting the consensus timeout
(see man corosync.conf).
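A sketch of the relevant totem section (values are only an example; by default corosync
derives consensus as 1.2 x token):
totem {
    token: 10000
    consensus: 12000
}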
Best Regards,
Strahil Nikolov
Hello all,
I'm building a test cluster on RHEL8.2 and I have noticed that the cluster
fails to assemble (nodes stay inquorate as if the network is not working) if I
set the token at 3 or more (30s+).
What is the maximum token value with knet? On SLES12 (I think it was corosync
1), I used
issue.
I was hoping that I had missed something in the documentation about the maximum token size...
Best Regards,
Strahil Nikolov
On Thursday, March 11, 2021, 19:12:58 GMT+2, Jan Friesse
wrote:
Strahil,
> Hello all,
> I'm building a test cluster on RHEL8.2 and I have noticed
fencing mechanism kicks in.
Best Regards,
Strahil Nikolov
On Thursday, March 11, 2021, 19:16:04 GMT+2, Klaus Wenninger
wrote:
On 3/11/21 12:30 PM, Ulrich Windl wrote:
> Hi!
>
> I wonder: Is it possible to register some callback to sbd that is called
> whenev
trace logs for corosync only ?
Best Regards,Strahil Nikolov
On Fri, Mar 12, 2021 at 17:01, Jan Friesse wrote:
Strahil,
> Interesting...
> Yet, this doesn't explain why token of 3 causes the nodes to never
> assemble a cluster (waiting for half an hour, using wait_for_all=
Is there any reason to use lms mode for the qdevice ?
Best Regards,
Strahil Nikolov
If firewalld is available, just try with 'firewall-cmd --panic-on' (or
something like that).
Best Regards,Strahil Nikolov
On Fri, Mar 19, 2021 at 12:50, Marcelo Terres wrote:
,
Strahil Nikolov
start on the nodes in site B.
I think that it's a valid use case.
Best Regards,Strahil Nikolov
On Thu, Mar 25, 2021 at 8:59, Ulrich Windl
wrote: >>> Ken Gaillot wrote on 24.03.2021 at 18:56 in
message
<5bffded9c6e614919981dcc7d0b2903220bae19d.ca...@redhat.com>:
>
OCF_CHECK_LEVEL 20
NFS sometimes fails to start (systemd race condition with dnsmasq)
Best Regards,Strahil Nikolov
On Thu, Mar 25, 2021 at 12:18, Andrei Borzenkov wrote:
On Thu, Mar 25, 2021 at 10:31 AM Strahil Nikolov wrote:
>
> Use Case:
>
> nfsA is shared filesystem for
Just a clarification.
I'm using separate NFS shares for each HANA, so even if someone wipes the NFS
for DC1, the cluster will fail over to DC2 (separate NFS) and survive.
Best Regards,
Strahil Nikolov
Thanks everyone! I really appreciate your help.
Actually, I found a RH solution (#5423971) that gave me enough ideas /it is
missing some steps/ to set up the cluster properly.
So far, I have never used node attributes, order sets and location constraints
based on 'ocf:pacemaker: attribute's ac
>I also remember something about racing with dnsmasq, at which point I'd say
>that making the cluster depend on availability of DNS is e-h-h-h unwise
Not my choice... Or at least I would deploy bind/unbound caching servers in the
same VLAN instead of dnsmasq. Also, Filesystem resource agent's read + w
watchdog device and it never failed us. Yet, it's just a kernel
module (no hardware required) and thus RH does not support such a setup.
If you decide to use 'sbd', disable the vendor's system recovery solution (like
HPE's ASR) as it will also tinker with the watchdog.
Best Rega
old boot', yet I never checked
the code of fence_ipmi.
With triple sbd , I mean sbd with 3 block devices.
Best Regards,Strahil Nikolov
On Sat, Mar 27, 2021 at 23:15, Reid Wahl wrote:
On Saturday, March 27, 2021, Strahil Nikolov wrote:
> My notes:
> - ilo ssh fence mechanism is
I didn't mean DC as a designated coordinator, but as a physical datacenter
location.
Last time I checked, the node attributes for all nodes seemed the same. I will
verify that tomorrow (Monday).
Best Regards,Strahil Nikolov
On Fri, Feb 19, 2021 at 16:51, Andrei Borzenkov wrote:
O
Hi Ken, can you provide a prototype code example?
Currently, I'm making a script that will be used in a systemd service managed by
the cluster. Yet, I would like to avoid non-pacemaker solutions.
Best Regards,Strahil Nikolov
On Mon, Mar 29, 2021 at 20:12, Ken Gaillot wrote: On
Sun, 20
y, but it looks promising. Yet, I'm not happy to have my own scripts in the
cluster's logic.
Best Regards,Strahil Nikolov
On Tue, Mar 30, 2021 at 10:06, Reid Wahl wrote: You can
try the following and see if it works, replacing the items in angle brackets
(<>).
# p
that KSM would be a problem... most probably performance would not be
optimal.
Best Regards,Strahil Nikolov
On Tue, Mar 30, 2021 at 19:47, Andrei Borzenkov wrote:
On 30.03.2021 18:16, Lentes, Bernd wrote:
> Hi,
>
> currently i'm reading "Mastering KVM Virtualization&q
.
Maybe someone can share the 'pcs cluster edit' xml section, so I can try to
push it directly into the cib ?
Best Regards,Strahil Nikolov
On Tue, Mar 30, 2021 at 19:45, Andrei Borzenkov wrote:
On 30.03.2021 17:42, Ken Gaillot wrote:
>>
>> Colocation does not work, t
Disregard the previous one... it needs 'pcs constraint colocation add' to work.
Best Regards,Strahil Nikolov
On Wed, Mar 31, 2021 at 8:08, Strahil Nikolov wrote:
I guess that feature was added in a later version (still on RHEL 8.2).
pcs constraint colocation bkp2 w
Damn... I am too hasty.
It seems that the 2 resources I have already configured are also running on the
master.
The colocation constraint is like:
rsc_bkpip3_SAPHana_SID_HDBinst_num with rsc_SAPHana_SID_HDBinst_num-clone
(score: INFINITY) (node-attribute:hana_sid_site) (rsc-role:Started)
(with-r
lot of custom stuff - I want to make it fool-proof
as much as possible. I've already organised a discussion about those backup IPs.
Best Regards,Strahil Nikolov
On Wed, Mar 31, 2021 at 10:54, Andrei Borzenkov wrote:
On Wed, Mar 31, 2021 at 8:34 AM Strahil Nikolov wrote:
>
> D
stop timeout - leads to
fencing (on-fail=fence).
I thought that the Controller resource agent is stopping the HANA and the slave
role should not be 'stopped' before that.
Maybe my expectations are wrong?
Best Regards,Strahil Nikolov
To be more specific, the processes left are 'hdbrsutil' and the 'sapstartsrv'.
Best Regards,Strahil Nikolov
On Fri, Apr 2, 2021 at 12:20, Strahil Nikolov wrote:
Hello All,
I am testing the newly built HANA (Scale-out) cluster and it seems that:Neither
SA
Thanks Andrei,
so can we assume that killing those processes during NFS umount is acceptable
and no risk to the HANA data can be observed ?
I have noticed that the cluster is killing those when the cluster is being
stopped (including NFS) .
Best Regards,Strahil Nikolov
On Fri, Apr 2, 2021
'
mechanisms in HANA 2.0 and it looks safe to be killed (will check with SAP
about that).
P.S: Is there a way to remove a whole set in pcs, because it's really irritating
when the stupid command wipes the resource from multiple order constraints?
Best Regards,Strahil Nikolov
On F
: Mandatory)
And also resource sets that take care that all FS start and then the relevant
nfs_active resources.
Also, it seems that regular order rules cannot be removed via ID, maybe a
feature request is needed.
Best Regards,Strahil Nikolov
If you mean a whole constraint set, then yes -- run
ne it and it never ended into centos's wiki.
Best Regards,Strahil Nikolov
On Sat, Apr 3, 2021 at 17:52, Andrei Borzenkov wrote:
On 03.04.2021 17:35, Jason Long wrote:
> Hello,
> I configure my clustering labs with three nodes.
You have a two-node cluster. What is running on nodes ou
I always thought that the setup is the same, just the node count is only one.
I guess you need pcs, corosync + pacemaker. If RH is going to support it, they
will require fencing. Most probably sbd or ipmi are the best candidates.
Best Regards,Strahil Nikolov
On Thu, Apr 8, 2021 at 6:52, d
Maybe booth can take care when it dies and powers up the resource in the DR.
Best Regards,Strahil Nikolov
On Thu, Apr 8, 2021 at 10:28, Ulrich Windl
wrote: >>> Reid Wahl wrote on 08.04.2021 at 08:32 in
message
:
> On Wed, Apr 7, 2021 at 11:27 PM d tbsky wrote:
>
Better check for a location constraint created via 'pcs resource move'!
pcs constraint location --full | grep cli
Best Regards,Strahil Nikolov
On Sat, Apr 10, 2021 at 18:19, Jehan-Guillaume de Rorthais
wrote:
On April 10, 2021 14:22:34 GMT+02:00, lejeczek wrote:
>Hi
By the way, how do you monitor your pacemaker clusters? We are using Nagios
and I found only 'check_crm' but it looks like it was made for crmsh and most
probably won't work with pcs without modifications.
Best Regards,Strahil Nikolov
On Tue, Apr 13, 2021 at 10:57, d tbsky
What about a small form factor device to serve as a quorum maker ?
Best Regards,
Strahil Nikolov
If it's a VM or container - it should be on a third location. Using a VM hosted
on one of the nodes is like giving that node more votes in a two-node cluster.
Cheap 3rd node for quorum makes more sense to me.
Best Regards,Strahil Nikolov
On Wed, Apr 14, 2021 at 21:19, Antony Stone
IPMI fencing on some vendors will first try a graceful shutdown and only then it
will use an ungraceful one.
Disabling the power button is also described in
https://access.redhat.com/solutions/1578823
Best Regards,
Strahil Nikolov
In order for iSCSI to be transparent to the relevant clients, you need to use a
special resource that blocks the iSCSI port during the failover. TCP will
retransmit during the failover and will never receive an error due to the fact
that the VIP is missing.
The name is ocf:heartbeat:portblock that s
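A rough example of such a pair of resources (IP, port and names are made up):
pcs resource create iscsi-block ocf:heartbeat:portblock ip=192.168.1.100 portno=3260 protocol=tcp action=block
pcs resource create iscsi-unblock ocf:heartbeat:portblock ip=192.168.1.100 portno=3260 protocol=tcp action=unblock
Typically the 'block' resource is ordered before the VIP and the 'unblock' one after the
service, within the same group.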
to hardware issues causes performance degradation on the current master.
Both cases have their benefits and drawbacks and you have to weigh them all
before making that decision.
Best Regards,Strahil Nikolov
On Mon, Apr 26, 2021 at 20:04, Moneta, Howard wrote:
Hello community. I have
Hey Ken,
does this feature work for other Nagios stuff ?
Best Regards,Strahil Nikolov
On Fri, Apr 30, 2021 at 17:57, Ken Gaillot wrote: On
Fri, 2021-04-30 at 11:00 +0100, lejeczek wrote:
> Hi guys
>
> I'd like to ask around for thoughts & suggestions on any
>
Ken meant to use the 'Filesystem' resource for mounting that NFS server and then
clone that resource.
Best Regards,Strahil Nikolov
On Fri, Apr 30, 2021 at 18:44, Matthew Schumacher
wrote: On 4/30/21 8:11 AM, Ken Gaillot wrote
>> 2. Make the nfs mount itself a resource and ma
If you have a SAN & hardware watchdog device, you can also use SBD. If the SAN is lost
and nodes cannot communicate - they will suicide.
Best Regards,
Strahil Nikolov
You can use node attributes to define in which city each host is and then use
a location constraint to control in which city to run/not run the resources.
I will try to provide an example tomorrow.
Best Regards,Strahil Nikolov
On Mon, May 10, 2021 at 15:52, Antony Stone
wrote: On
unless you specify a colocation constraint between the resources.
Best Regards,Strahil Nikolov
On Tue, May 11, 2021 at 9:15, Klaus Wenninger wrote:
On 5/10/21 7:16 PM, lejeczek wrote:
>
>
> On 10/05/2021 17:04, Andrei Borzenkov wrote:
>> On 10.05.2021 16:48, lejeczek wrote:
>&
Oh wrong thread, just ignore .
Best Regards
On Tue, May 11, 2021 at 13:54, Strahil Nikolov wrote:
Here is the example I had promised:
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY
# Don't run on any node that is not in LA
pcs constraint location DummyRes1 rule
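The full rule might look roughly like this (a sketch using the attribute assumed above):
pcs constraint location DummyRes1 rule score=-INFINITY city ne LA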
unless you specify a colocation constraint between the resources.
Best Regards,Strahil Nikolov
On Mon, May 10, 2021 at 17:53, Antony Stone
wrote: On Monday 10 May 2021 at 16:49:07, Strahil Nikolov wrote:
> You can use node attributes to define in which city is each host and then
> use a l
On EL8 I think it was named policycoreutils-python-tools or something similar.
Best Regards,Strahil Nikolov
On Thu, May 13, 2021 at 2:45, Eric Robinson wrote:
If something moves in/out in an unexpected way, always check:
pcs constraint location --full | grep cli
Best Regards,Strahil Nikolov
On Thu, May 13, 2021 at 10:45, Andrei Borzenkov wrote:
On Wed, May 12, 2021 at 8:15 PM Alastair Basden wrote:
>
>
> > On 12.05.2021 20:02, Ala
For DRBD there is enough info, so let's focus on VDO. There is a systemd service
that starts all VDOs on the system. You can create the VDO once drbd is open
for writes and then you can create your own systemd '.service' file which can
be used as a cluster resource.
Best
There is no VDO RA according to my knowledge, but you can use a systemd service
as a resource.
Yet, the VDO service that comes with the OS is a generic one and controls all
VDOs - so you need to create your own vdo service.
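A minimal sketch of such a per-volume unit (the volume name 'myvol' is hypothetical):
# /etc/systemd/system/vdo-myvol.service
[Unit]
Description=VDO volume myvol
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vdo start --name=myvol
ExecStop=/usr/bin/vdo stop --name=myvol
The cluster can then manage it as a 'systemd:vdo-myvol' resource.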
Best Regards,Strahil Nikolov
On Fri, May 14, 2021 at 6:55, Eric
stonith method and use stonith topology.
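For example, levels can be stacked per node (device names are made up):
pcs stonith level add 1 node1 fence_ipmi_node1
pcs stonith level add 2 node1 fence_sbd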
Best Regards,Strahil Nikolov
Are you sure that the DRBD is working properly ?
Best Regards,Strahil Nikolov
On Mon, May 17, 2021 at 0:32, Eric Robinson wrote:
Have you tried to set VDO in async mode ?
Best Regards,Strahil Nikolov
On Mon, May 17, 2021 at 8:57, Klaus Wenninger wrote:
Did you try VDO in sync-mode for the case the flush-fua stuff isn't working
through the layers? Did you check that VDO-service is disabled and solely
And why don't you use your own systemd service ?
Best Regards,
Strahil Nikolov
>That was the first thing I tried. The systemd service does not work because it
>wants to stop and start all vdo devices, but mine are on different nodes.
That's why I mentioned creating your own version of the systemd service.
Best Regards,Strahil Nikolov
Also, pacemaker has very fine-grained control mechanisms for when and where to run
your resources (and even with which resources to colocate them).
Best Regards,Strahil Nikolov
On Tue, May 18, 2021 at 12:43, Strahil Nikolov wrote:
>That was the first thing I tried. The systemd service does
hen make a snapshot via your Virtualization tech
stack.
Best Regards,Strahil Nikolov
On Tue, May 18, 2021 at 13:52, Ulrich
Windl wrote: Hi!
I thought using the reflink feature of OCFS2 would be just a nice way to make
crash-consistent VM snapshots while they are running.
As it is a bit tri
What is your fencing agent?
Best Regards,Strahil Nikolov
On Thu, May 27, 2021 at 20:52, Eric Robinson wrote:
We found one of our cluster nodes down this morning. The server was up but
cluster services were not running. Upon examination of the logs, we found that
the cluster just
ra (based on memory -> so use find/locate).
Best Regards,Strahil Nikolov
On Fri, May 28, 2021 at 22:10, Abithan Kumarasamy
wrote: Hello Team, We have been recently running some tests on our Pacemaker
clusters that involve two Pacemaker resources on two nodes respectively. The
test case
I agree -> fencing is mandatory.
You can enable the debug logs by editing corosync.conf or
/etc/sysconfig/pacemaker.
In case a simple reload doesn't work, you can set the cluster in global
maintenance, stop and then start the stack.
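A rough outline of both steps, assuming pcs and an EL-style /etc/sysconfig/pacemaker:
# /etc/sysconfig/pacemaker
PCMK_debug=yes
# restart the stack with resources left running:
pcs property set maintenance-mode=true
pcs cluster stop --all
pcs cluster start --all
pcs property set maintenance-mode=false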
Best Regards,Strahil Nikolov
On Fri, May 28, 2021
Did you configure the pacemaker blackbox?
If not, it could be valuable in such cases.
Also consider updating as soon as possible. Most probably nobody can count the
bug fixes that were introduced between 7.5 and 7.9, nor will anyone be able to
help as you are running a pretty outdated version (even
It shouldn't relocate or affect any other resource, as long as the stop
succeeds. If the stop operation times out or fails -> fencing kicks in.
Best Regards,
Strahil Nikolov
Based on the constraint rules you have mentioned , failure of mysql should not
cause a failover to another node. For better insight, you have to be able to
reproduce the issue and share the logs with the community.
Best Regards,Strahil Nikolov
On Sat, Jun 5, 2021 at 23:33, Eric Robinson
Did you notice any delay in 'systemctl status openstack-cinder-scheduler'? As
far as I know the cluster will use systemd (or maybe even dbus) to get the info
of the service.
Also, a 10s monitor interval seems quite aggressive - have you considered
increasing that?
Best Regards,Strah
Thanks for the update. Could it be something local to your environment ?
Have you checked mounting the OCFS2 on a vanilla system ?
Best Regards,Strahil Nikolov
On Tue, Jun 15, 2021 at 12:01, Ulrich
Windl wrote: Hi Guys!
Just to keep you informed on the issue:
I was informed that I'
How did you stop pacemaker? Usually I use 'pcs cluster stop' or its crm
alternative.
Best Regards,Strahil Nikolov
On Tue, Jun 15, 2021 at 18:21, Andrei Borzenkov wrote:
We had the following situation
2-node cluster with single device (just single external storage
avai
l
be triggered.
Best Regards,
Strahil Nikolov
On Tuesday, June 15, 2021, 18:47:06 GMT+3, Andrei Borzenkov
wrote:
On Tue, Jun 15, 2021 at 6:43 PM Strahil Nikolov wrote:
>
> How did you stop pacemaker ?
systemctl stop pacemaker
surprise :)
> Usually I use 'pcs cl
Maybe you can try:
while true ; do echo '0' > /proc/sys/kernel/nmi_watchdog ; sleep 1 ; done
and in another shell stop pacemaker and sbd.
I guess the only way to easily reproduce is with sbd over iscsi.
Best Regards,Strahil Nikolov
On Tue, Jun 15, 2021 at 21:30, Andrei Borzenkov
You can reload corosync via 'pcs' and I think that both are supported. The main
question is whether you reloaded corosync on all nodes in the cluster?
Best Regards,Strahil Nikolov
On Sat, Jun 19, 2021 at 1:22, Gerry R Sommerville wrote:
Dear community,
I would like to ask few
Also, it's worth mentioning that you can still make changes without downtime.
For example you can edit corosync.conf and push it to all nodes, then set
global maintenance, stop the cluster and then start it again.
Best Regards,Strahil Nikolov
On Mon, Jun 21, 2021 at 9:37, Jan Friesse
I would try to add 'trace_ra=1' or 'trace_ra=1 trace_file=' to debug
it further. In the first option (without trace_file) , the file will be at
/var/lib/heartbeat/trace_ra//*timestamp
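For example, on a hypothetical resource it could be attached to an operation (syntax may
vary between pcs versions):
pcs resource update myresource op monitor interval=10s trace_ra=1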
Are you sure that the system is not overloaded and can't respond in time ?
Best