On 2020-11-30 23:21, Petr Bena wrote:
Hello,
Is there a way to set up a preferred node for a service? I know how to
create a constraint that makes it possible to run a service ONLY on a
certain node, or a constraint that makes it impossible to run 2
services on the same node, but I don't want any of
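A finite-score location constraint does exactly this: it states a preference without forbidding other nodes. A minimal pcs sketch, assuming a resource named my-service and a node named node01 (both hypothetical):

```shell
# A finite score (here 50) makes node01 preferred; the service can still
# fail over to another node if node01 goes down. INFINITY would pin it.
pcs constraint location my-service prefers node01=50
```

With the crm shell the equivalent is `crm configure location prefer-node01 my-service 50: node01`.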
On 2020-02-24 12:17, Strahil Nikolov wrote:
On February 24, 2020 4:56:07 PM GMT+02:00, Luke Camilleri
wrote:
Hello users, I would like to ask for assistance on the below setup
please, mainly on the monitor fence timeout:
I notice that the issue happens at 00:00 on both days.
Have you
On 2020-02-21 08:51, Ricardo Esteves wrote:
Hi,
I'm trying to understand the objective of the constraints that keep the
fencing devices running on the opposite node, on their own node, or
all on the same node. Can you explain the difference?
IPMI fencing involves the instance
I believe you in fact want each fence agent to run on the other node, yes.
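That layout is usually written as an "avoids" constraint, so each fence device never runs on the node it is meant to kill. A sketch with hypothetical device and node names:

```shell
# Each IPMI fence device avoids its own target node, so the surviving
# node is the one that executes the fencing action.
pcs constraint location fence_ipmi_node01 avoids node01
pcs constraint location fence_ipmi_node02 avoids node02
```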
On February 20, 2020, at 6:23 PM, Ricardo Esteves wrote:
Hi,
I have a question regarding fencing. I have 2 physical servers: node01,
node02; each one has an IPMI card,
so I created 2 fence devices:
fence_ipmi_node01
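For reference, the pair of devices might be created roughly like this. IP addresses and credentials are placeholders, and older fence-agents releases spell the options login/passwd/ipaddr instead of username/password/ip:

```shell
# One fence_ipmilan device per node, each targeting the other node's BMC.
pcs stonith create fence_ipmi_node01 fence_ipmilan \
    ip=10.0.0.11 username=admin password=secret lanplus=1 \
    pcmk_host_list=node01
pcs stonith create fence_ipmi_node02 fence_ipmilan \
    ip=10.0.0.12 username=admin password=secret lanplus=1 \
    pcmk_host_list=node02
```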
Many people don't have Red Hat access, so linking those URLs is not useful.
On February 17, 2020, at 1:40 AM, Strahil Nikolov wrote:
Hello Ondrej,
thanks for your reply. I really appreciate that.
I have picked fence_multipath as I'm preparing for my EX436 and I can't know
what agent will be
On 2020-02-14 13:06, Strahil Nikolov wrote:
On February 14, 2020 4:44:53 PM GMT+02:00, "BASDEN, ALASTAIR G."
wrote:
Hi Strahil,
Note 2: Consider adding a third node (for example a VM) or a qdevice
on a separate node (it can be on a separate network, so simple
routing is the only
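A qdevice third vote can be added roughly like this (host name hypothetical; the qnetd daemon must already be running on the arbiter host):

```shell
# On the arbiter host, install and start the daemon first, e.g.:
#   dnf install corosync-qnetd && systemctl enable --now corosync-qnetd
# Then, on one cluster node, register it as a quorum device:
pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit
```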
On 2020-02-10 00:06, Strahil Nikolov wrote:
On February 10, 2020 2:07:01 AM GMT+02:00, Dan Swartzendruber
wrote:
I have a 2-node CentOS7 cluster running ZFS. The two nodes (vsphere
appliances on different hosts) access 2 SAS SSD in a Supermicro JBOD
with 2 mini-SAS connectors. It all works
I have a 2-node CentOS7 cluster running ZFS. The two nodes (vsphere
appliances on different hosts) access 2 SAS SSD in a Supermicro JBOD
with 2 mini-SAS connectors. It all works fine - failover and all. My
quandary was how to implement fencing. I was able to get both of the
vmware SOAP
On 9/11/2018 9:20 AM, Dan Ragle wrote:
On 9/11/2018 1:59 AM, Andrei Borzenkov wrote:
07.09.2018 23:07, Dan Ragle wrote:
On an active-active two node cluster with DRBD, dlm, filesystem mounts,
a Web Server, and some crons I can't figure out how to have the crons
jump from node to node
On 9/11/2018 1:59 AM, Andrei Borzenkov wrote:
07.09.2018 23:07, Dan Ragle wrote:
On an active-active two node cluster with DRBD, dlm, filesystem mounts,
a Web Server, and some crons I can't figure out how to have the crons
jump from node to node in the correct order. Specifically, I have two
On an active-active two node cluster with DRBD, dlm, filesystem mounts, a Web Server, and some crons I can't figure out how to have
the crons jump from node to node in the correct order. Specifically, I have two crontabs (managed via symlink creation/deletion)
which normally will run one on
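One way to model symlink-managed crontabs is the ocf:heartbeat:symlink agent, ordered after the shared filesystem and colocated with whatever defines the "primary" role. A sketch with hypothetical resource names and paths:

```shell
# The symlink resource creates /etc/cron.d/primary on whichever node it
# runs on and removes it on stop, so only one node executes those crons.
pcs resource create cron-primary ocf:heartbeat:symlink \
    link=/etc/cron.d/primary target=/sharedfs/cron/primary.cron
pcs constraint order start sharedfs-clone then cron-primary
pcs constraint colocation add cron-primary with web-ip INFINITY
```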
On 6/19/2017 5:32 AM, Klaus Wenninger wrote:
On 06/16/2017 09:08 PM, Ken Gaillot wrote:
On 06/16/2017 01:18 PM, Dan Ragle wrote:
On 6/12/2017 10:30 AM, Ken Gaillot wrote:
On 06/12/2017 09:23 AM, Klaus Wenninger wrote:
On 06/12/2017 04:02 PM, Ken Gaillot wrote:
On 06/10/2017 10:53 AM, Dan
On 6/16/2017 3:08 PM, Ken Gaillot wrote:
On 06/16/2017 01:18 PM, Dan Ragle wrote:
On 6/12/2017 10:30 AM, Ken Gaillot wrote:
On 06/12/2017 09:23 AM, Klaus Wenninger wrote:
On 06/12/2017 04:02 PM, Ken Gaillot wrote:
On 06/10/2017 10:53 AM, Dan Ragle wrote:
So I guess my bottom line
On 6/12/2017 10:30 AM, Ken Gaillot wrote:
On 06/12/2017 09:23 AM, Klaus Wenninger wrote:
On 06/12/2017 04:02 PM, Ken Gaillot wrote:
On 06/10/2017 10:53 AM, Dan Ragle wrote:
So I guess my bottom line question is: How does one tell Pacemaker that
the individual legs of globally unique clones
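For context, the classic globally unique clone is an IPaddr2 CLUSTERIP setup, where each clone instance handles a share of the traffic for one address. A sketch (address and name hypothetical):

```shell
# clone-node-max=2 lets both instances land on one node, so the
# surviving node serves all traffic after a failure.
pcs resource create shared-ip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 clusterip_hash=sourceip \
    clone clone-max=2 clone-node-max=2 globally-unique=true
```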
On 6/12/2017 2:03 AM, Klaus Wenninger wrote:
On 06/10/2017 05:53 PM, Dan Ragle wrote:
On 5/25/2017 5:33 PM, Ken Gaillot wrote:
On 05/24/2017 12:27 PM, Dan Ragle wrote:
I suspect this has been asked before and apologize if so, a google
search didn't seem to find anything that was helpful
On 5/25/2017 5:33 PM, Ken Gaillot wrote:
On 05/24/2017 12:27 PM, Dan Ragle wrote:
I suspect this has been asked before and apologize if so, a google
search didn't seem to find anything that was helpful to me ...
I'm setting up an active/active two-node cluster and am having an issue
where
connected via the permanent IP on the NIC and not the clusterIP, to be
dropped). Is there any way to avoid this? I was thinking that the
cluster operations would only affect the ClusterIP and not the other IPs
being served on that NIC.
Thanks!
Dan
On 2016-09-13 00:20, Klaus Wenninger wrote:
Location-constraints for fencing-resources are definitely supported and
don't just work by accident - if this was the question.
On 09/13/2016 02:43 AM, Dan Swartzendruber wrote:
On 2016-09-12 10:48, Dan Swartzendruber wrote:
Posting
On 2016-09-12 10:48, Dan Swartzendruber wrote:
Posting this as a separate thread from my fence_apc one. As I said in
that thread, I created two fence_apc agents, one to fence node A and
one to fence node B. Each was configured using a static pcmk node
mapping, and constrained to only run
Posting this as a separate thread from my fence_apc one. As I said in
that thread, I created two fence_apc agents, one to fence node A and one
to fence node B. Each was configured using a static pcmk node mapping,
and constrained to only run on the other node. In the process of
testing
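The described layout can be sketched like this (outlet numbers, names, and credentials are hypothetical):

```shell
# Static node-to-outlet mapping plus a constraint keeping the device
# away from the node it fences:
pcs stonith create fence_apc_nodeA fence_apc \
    ip=apc-pdu.example.com username=apc password=apc \
    pcmk_host_map="nodeA:1" pcmk_host_list=nodeA
pcs constraint location fence_apc_nodeA avoids nodeA
```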
On 2016-09-06 10:59, Ken Gaillot wrote:
[snip]
I thought power-wait was intended for this situation, where the node's
power supply can survive a brief outage, so a delay is needed to ensure
it drains. In any case, I know people are using it for that.
Are there any drawbacks to using
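power_wait is a standard fence-agent option that simply sleeps after issuing the power command. A sketch of setting it on an existing device (device name and value hypothetical; the right delay is hardware-dependent):

```shell
# Wait 10 seconds after power-off so the power supplies fully drain
# before the agent runs its status check and powers the node back on.
pcs stonith update fence_apc_nodeA power_wait=10
```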
On 2016-09-05 03:04, Ulrich Windl wrote:
Marek Grac wrote on 2016-09-03 at 14:41 in
message
:
Hi,
There are two problems mentioned in the email.
1) power-wait
Power-wait is quite an advanced option and
On 2016-09-03 08:41, Marek Grac wrote:
Hi,
There are two problems mentioned in the email.
1) power-wait
Power-wait is quite an advanced option and there are only a few fence
devices/agents where it makes sense, and only because the HW/firmware
on the device is somewhat broken. Basically, when we
On 2016-09-02 10:09, Ken Gaillot wrote:
On 09/02/2016 08:14 AM, Dan Swartzendruber wrote:
So, I was testing my ZFS dual-head JBOD 2-node cluster. Manual
failovers worked just fine. I then went to try an acid-test by
logging
in to node A and doing 'systemctl stop network'. Sure enough
It occurred to me folks reading this might not have any knowledge about
ZFS. Think of my setup as an mdraid pool with a filesystem mounted on
it, shared out via NFS. Same basic idea...
___
Users mailing list: Users@clusterlabs.org
So, I was testing my ZFS dual-head JBOD 2-node cluster. Manual
failovers worked just fine. I then went to try an acid-test by logging
in to node A and doing 'systemctl stop network'. Sure enough, pacemaker
told the APC fencing agent to power-cycle node A. The ZFS pool moved to
node B as
On 2016-08-25 10:24, Gabriele Bulfon wrote:
YESSS!!! That was it! :)))
Upgraded to 1.1.15, rebuilt and the rng files contain a lot more
stuff.
Packaged, published, installed on the test machine: got all my
instructions as is!!! :)))
...now the last steps: making our custom agents/shells work on
Thanks for the info. I only use ESXi, which likely explains why I never had
issues...
Patrick Zwahlen wrote:
>Hi,
>
>> -----Original Message-----
>> From: Andreas Kurz [mailto:andreas.k...@gmail.com]
>> Sent: Wednesday, 17 August 2016 23:16
>> To: Cluster Labs - All topics
On 2016-08-06 21:59, Digimer wrote:
On 06/08/16 08:22 PM, Dan Swartzendruber wrote:
On 2016-08-06 19:46, Digimer wrote:
On 06/08/16 07:33 PM, Dan Swartzendruber wrote:
(snip)
What about using ipmitool directly? I can't imagine that such a long
time is normal. Maybe there is a firmware
Okay, I almost have this all working. fence_ipmilan for the supermicro
host. Had to specify lanplus for it to work. fence_drac5 for the R905.
That was failing to complete due to timeout. Found a couple of helpful
posts that recommended increasing the retry count to 3 and the timeout to
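The two devices might look roughly like this (addresses and credentials are placeholders; retry_on and power_timeout are standard fence-agent options):

```shell
# Supermicro BMC needs the IPMI 2.0 lanplus interface:
pcs stonith create fence_sm fence_ipmilan \
    ip=10.0.0.21 username=admin password=secret lanplus=1 \
    pcmk_host_list=smhost
# DRAC5 on the R905, with more retries and a longer power timeout:
pcs stonith create fence_r905 fence_drac5 \
    ip=10.0.0.22 username=root password=calvin \
    retry_on=3 power_timeout=60 pcmk_host_list=r905
```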
A lot of good suggestions here. Unfortunately, my budget is tapped out
for the near future at least (this is a home lab/soho setup). I'm
inclined to go with Digimer's two-node approach, with IPMI fencing. I
understand mobos can die and such. In such a long-shot, manual
intervention is
On 2016-08-04 19:33, Digimer wrote:
On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
On 2016-08-04 19:03, Digimer wrote:
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends
On 2016-08-04 19:03, Digimer wrote:
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
agent which imports or exports a pool. So far, with stonith disabled
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
agent which imports or exports a pool. So far, with stonith disabled,
it all works perfectly. I was dubious about a 2-node solution, so I
created a 3rd
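The pool-then-IP dependency is usually expressed as an order plus a colocation constraint (or a group). A sketch, assuming the ZFS agent from resource-agents and hypothetical names:

```shell
pcs resource create tank-pool ocf:heartbeat:ZFS pool=tank
pcs resource create nfs-ip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.50 cidr_netmask=24
# The IP starts only after, and on the same node as, the imported pool:
pcs constraint order start tank-pool then nfs-ip
pcs constraint colocation add nfs-ip with tank-pool INFINITY
```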
Wed 2015-09-23 at 14:08 +0200, Ulrich Windl wrote:
> >>> dan <dan.oscars...@intraphone.com> wrote on 23.09.2015 at 13:39 in
> >>> message
> <1443008370.2386.8.ca...@intraphone.com>:
> > Hi
> >
> > As I had problem with corosy
Wed 2015-09-23 at 15:20 +0200, Ulrich Windl wrote:
> >>> dan <dan.oscars...@intraphone.com> wrote on 23.09.2015 at 14:42 in
> >>> message
> <1443012134.2386.11.ca...@intraphone.com>:
> > Wed 2015-09-23 at 14:08 +0200, Ulrich Wind
1 of the script).
The current version in git looks like it has the same problem.
Maybe you should switch to /bin/bash for scripts that need it, as not
everybody has /bin/sh linked to /bin/bash.
Dan
Mon 2015-09-14 at 10:02 +0200, dan wrote:
> Hi
>
> To see if my cluster problem go away with a newer version of pacemaker I
> have now installed pacemaker 1.1.12+git+a9c8177-3ubuntu1 and I had to get
> 4.0.19-1 (ubuntu) of fence-agents to get a working fence-ipmilan.
Some other idea what I can test/try to pin this down?
Dan