On 2020-11-30 23:21, Petr Bena wrote:
Hello,
Is there a way to set up a preferred node for a service? I know how to
create a constraint that makes it possible to run a service ONLY on a
certain node, or a constraint that makes it impossible to run two
services on the same node, but I don't want any of
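(For reference, a preferred node is usually expressed as a location
constraint with a finite score rather than INFINITY; a minimal pcs sketch,
where the resource and node names are placeholders:

    pcs constraint location my_service prefers node01=100

With a finite score the service prefers node01 when it is available but can
still fail over elsewhere.)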
On 2020-02-24 12:17, Strahil Nikolov wrote:
On February 24, 2020 4:56:07 PM GMT+02:00, Luke Camilleri
wrote:
Hello users, I would like to ask for assistance on the below setup
please, mainly on the monitor fence timeout:
I notice that the issue happens at 00:00 on both days.
Have you
On 2020-02-21 08:51, Ricardo Esteves wrote:
Hi,
I'm trying to understand the objective of the constraints that have the
fencing devices running on the opposite node, on their own node, or all
on the same node. Can you explain the difference?
IPMI fencing involves the instance
I believe you in fact want each fence agent to run on the other node, yes.
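(A hedged sketch of that layout, reusing the device names from the post
quoted below; each fence device is banned from the node it fences, so the
surviving node is the one that runs the agent against its peer:

    pcs constraint location fence_ipmi_node01 avoids node01
    pcs constraint location fence_ipmi_node02 avoids node02
)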
On February 20, 2020, at 6:23 PM, Ricardo Esteves wrote:
Hi,
I have a question regarding fencing. I have 2 physical servers, node01 and
node02, and each one has an IPMI card,
so I created 2 fence devices:
fence_ipmi_node01
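(For context, creating two such IPMI fence devices typically looks roughly
like this with pcs; the addresses and credentials are placeholders, and the
parameter names can vary between fence-agents versions:

    pcs stonith create fence_ipmi_node01 fence_ipmilan ip=10.0.0.11 \
        username=admin password=secret lanplus=1 pcmk_host_list=node01
    pcs stonith create fence_ipmi_node02 fence_ipmilan ip=10.0.0.12 \
        username=admin password=secret lanplus=1 pcmk_host_list=node02
)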
Many people don't have Red Hat access, so linking those URLs is not useful.
On February 17, 2020, at 1:40 AM, Strahil Nikolov wrote:
Hello Ondrej,
thanks for your reply. I really appreciate that.
I have picked fence_multipath as I'm preparing for my EX436 and I can't know
what agent will be
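(A rough fence_mpath sketch, assuming the usual setup where each node gets
its own reservation key via pcmk_host_map and unfencing is enabled; the
device path and keys are placeholders:

    pcs stonith create mpath_fence fence_mpath devices=/dev/mapper/mpatha \
        pcmk_host_map="node01:1;node02:2" meta provides=unfencing
)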
On 2020-02-14 13:06, Strahil Nikolov wrote:
On February 14, 2020 4:44:53 PM GMT+02:00, "BASDEN, ALASTAIR G."
wrote:
Hi Strahil,
Note 2: Consider adding a third node (for example, a VM) or a qdevice
on a separate node (it can be on a separate network, so simple
routing is the only
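(A minimal qdevice sketch, assuming corosync-qnetd is already running on a
third host; the hostname and algorithm are placeholders:

    pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit
)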
On 2020-02-10 00:06, Strahil Nikolov wrote:
On February 10, 2020 2:07:01 AM GMT+02:00, Dan Swartzendruber
wrote:
I have a 2-node CentOS7 cluster running ZFS. The two nodes (vsphere
appliances on different hosts) access 2 SAS SSD in a Supermicro JBOD
with 2 mini-SAS connectors. It all works
I have a 2-node CentOS7 cluster running ZFS. The two nodes (vsphere
appliances on different hosts) access 2 SAS SSD in a Supermicro JBOD
with 2 mini-SAS connectors. It all works fine - failover and all. My
quandary was how to implement fencing. I was able to get both of the
vmware SOAP
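(For readers unfamiliar with it, a fence_vmware_soap device is typically
defined roughly like this; the vCenter/ESXi address, credentials and VM
names are placeholders:

    pcs stonith create fence_vmware_node01 fence_vmware_soap \
        ip=vcenter.example.com username=fencing password=secret \
        ssl=1 ssl_insecure=1 pcmk_host_map="node01:node01-vm"
)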
On 2016-09-13 00:20, Klaus Wenninger wrote:
Location constraints for fencing resources are definitely supported and
don't just work by accident - if that was the question.
On 09/13/2016 02:43 AM, Dan Swartzendruber wrote:
On 2016-09-12 10:48, Dan Swartzendruber wrote:
Posting
On 2016-09-12 10:48, Dan Swartzendruber wrote:
Posting this as a separate thread from my fence_apc one. As I said in
that thread, I created two fence_apc agents, one to fence node A and
one to fence node B. Each was configured using a static pcmk node
mapping, and constrained to only run
Posting this as a separate thread from my fence_apc one. As I said in
that thread, I created two fence_apc agents, one to fence node A and one
to fence node B. Each was configured using a static pcmk node mapping,
and constrained to only run on the other node. In the process of
testing
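(A hedged reconstruction of that layout in pcs form, with the PDU address,
outlet numbers and credentials as placeholders:

    pcs stonith create fence_apc_node01 fence_apc ip=apc-pdu.example.com \
        username=apc password=secret pcmk_host_map="nodeA:1"
    pcs constraint location fence_apc_node01 avoids nodeA

and the mirror image for node B.)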
On 2016-09-06 10:59, Ken Gaillot wrote:
[snip]
I thought power-wait was intended for this situation, where the node's
power supply can survive a brief outage, so a delay is needed to ensure
it drains. In any case, I know people are using it for that.
Are there any drawbacks to using
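(power_wait is a standard fence-agent option; as a sketch, adding a
10-second wait after the power action to an existing device would look like
this, with the device name as a placeholder:

    pcs stonith update fence_apc_node01 power_wait=10
)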
On 2016-09-05 03:04, Ulrich Windl wrote:
Marek Grac wrote on 03.09.2016 at 14:41 in message:
Hi,
There are two problems mentioned in the email.
1) power-wait
Power-wait is quite an advanced option and
On 2016-09-03 08:41, Marek Grac wrote:
Hi,
There are two problems mentioned in the email.
1) power-wait
Power-wait is quite an advanced option and there are only a few fence
devices/agents where it makes sense, and only because the HW/firmware
on the device is somewhat broken. Basically, when we
On 2016-09-02 10:09, Ken Gaillot wrote:
On 09/02/2016 08:14 AM, Dan Swartzendruber wrote:
So, I was testing my ZFS dual-head JBOD 2-node cluster. Manual
failovers worked just fine. I then went to try an acid-test by
logging
in to node A and doing 'systemctl stop network'. Sure enough
It occurred to me folks reading this might not have any knowledge about
ZFS. Think of my setup as an mdraid pool with a filesystem mounted on
it, shared out via NFS. Same basic idea...
So, I was testing my ZFS dual-head JBOD 2-node cluster. Manual
failovers worked just fine. I then went to try an acid-test by logging
in to node A and doing 'systemctl stop network'. Sure enough, pacemaker
told the APC fencing agent to power-cycle node A. The ZFS pool moved to
node B as
On 2016-08-25 10:24, Gabriele Bulfon wrote:
YESSS!!! That was it! :)))
Upgraded to 1.1.15, rebuilt and the rng files contain a lot more
stuff.
Packaged, published, installed on the test machine: got all my
instructions as is!!! :)))
...now last steps: making our custom agents/shells work on
Thanks for the info. I only use ESXi, which likely explains why I never had
issues...
Patrick Zwahlen wrote:
>Hi,
>
>> -----Original Message-----
>> From: Andreas Kurz [mailto:andreas.k...@gmail.com]
>> Sent: Wednesday, 17 August 2016 23:16
>> To: Cluster Labs - All topics
On 2016-08-06 21:59, Digimer wrote:
On 06/08/16 08:22 PM, Dan Swartzendruber wrote:
On 2016-08-06 19:46, Digimer wrote:
On 06/08/16 07:33 PM, Dan Swartzendruber wrote:
(snip)
What about using ipmitool directly? I can't imagine that such a long
time is normal. Maybe there is a firmware
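(Checking the BMC directly is a quick sanity test; an illustrative command,
with the address and credentials as placeholders:

    ipmitool -I lanplus -H 10.0.0.11 -U admin -P secret chassis power status
)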
Okay, I almost have this all working: fence_ipmilan for the Supermicro
host. Had to specify lanplus for it to work. fence_drac5 for the R905.
That was failing to complete due to a timeout. Found a couple of helpful
posts that recommended increasing the retry count to 3 and the timeout to
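(Those knobs map to standard fence-agent options; a sketch of bumping them
on the DRAC device, with the device name and values as placeholders:

    pcs stonith update fence_drac5_r905 retry_on=3 power_timeout=60 shell_timeout=10
)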
A lot of good suggestions here. Unfortunately, my budget is tapped out
for the near future at least (this is a home lab/SOHO setup). I'm
inclined to go with Digimer's two-node approach, with IPMI fencing. I
understand mobos can die and such. In such a long shot, manual
intervention is
On 2016-08-04 19:33, Digimer wrote:
On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
On 2016-08-04 19:03, Digimer wrote:
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends
On 2016-08-04 19:03, Digimer wrote:
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
agent which imports or exports a pool. So far, with stonith disabled
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
agent which imports or exports a pool. So far, with stonith disabled,
it all works perfectly. I was dubious about a 2-node solution, so I
created a 3rd
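(A rough sketch of that resource stack, assuming a ZFS resource agent is
installed as ocf:heartbeat:ZFS with a pool parameter - the agent name, pool
and IP are placeholders; putting both resources in one group keeps the pool
import ahead of the VIP:

    pcs resource create zfs_pool ocf:heartbeat:ZFS pool=tank --group nfs_ha
    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.168.1.50 \
        cidr_netmask=24 --group nfs_ha
)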