On Wed, Oct 9, 2024 at 3:08 PM Angelo Ruggiero via Users <
users@clusterlabs.org> wrote:
> Hello,
>
> My setup
>
>
>- We are setting up a pacemaker cluster to run SAP running on RHEL on
>VMware virtual machines.
>- We will have two nodes for the application server of SAP and 2 nodes
Hello,
My setup
* We are setting up a pacemaker cluster to run SAP running on RHEL on VMware
virtual machines.
* We will have two nodes for the application server of SAP and 2 nodes for the
HANA database. SAP/RHEL provide good support on how to set up the cluster. 🙂
* SAP will need a n
Hi All,
I'm sorry for the previous post. Most probably it's not google-cloud-cli as
even after downgrading, fencing still doesn't work all the time.
Best Regards,
Strahil Nikolov
On Wednesday, 27 March 2024 at 15:39:06 GMT+2, Strahil Nikolov via Users
wrote:
Hi All,
I'm start
Hi All,
I'm starting this thread in order to warn you that if you updated recently and
the 'google-cloud-cli' rpm was deployed (it obsoletes 'google-cloud-sdk'), fencing
won't work for you even though fence_gce and 'pcs stonith fence' report
success.
The VM stays in an odd status (right now I don't ha
On Sat, 2023-07-22 at 08:33 +, Sai Siddhartha Peesapati wrote:
> This is a 3-node cluster with gfs2 filesystem resources configured
> using dlm and clvmd. Stonith is enabled. dlm and gfs2 resources are
> set to fence on failures.
> Pacemaker version on the cluster - 2.1.4-5.el8 (CentOS 8 Stream
This is a 3-node cluster with gfs2 filesystem resources configured using dlm
and clvmd. Stonith is enabled. dlm and gfs2 resources are set to fence on
failures.
Pacemaker version on the cluster - 2.1.4-5.el8 (CentOS 8 Stream)
Pacemaker default fence actions are set to power off the node instead
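If that refers to the cluster-wide default fence action, a minimal sketch of the
relevant property (assuming pcs is in use; this is only an illustration, not the
poster's exact configuration):

# make fencing power nodes off rather than reboot them
pcs property set stonith-action=off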
The quorum device cannot run resources, therefore it does not need
fencing. The point of fencing a node is to be sure that all
resources are stopped when they can't be stopped normally.
Also, the quorum device is not a single point of failure, since one of
the nodes would have to fail as well t
Well, you can always make a single-node cluster with the quorum device's host
and set up a systemd resource to keep the service up and running. With SBD, that
single-node cluster will suicide in case the machine ends up in an unresponsive
state.
Best Regards, Strahil Nikolov
On Fri, Jul 15, 2022
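A minimal sketch of that idea, assuming the quorum-device host runs the
corosync-qnetd service and has already been bootstrapped as its own one-node
cluster (the resource name is only a placeholder):

# keep the qnetd service running under the one-node cluster
pcs resource create qnetd systemd:corosync-qnetd op monitor interval=30s
# enable SBD so a hung host gets reset by its watchdog
pcs stonith sbd enable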
On 15.07.2022 09:24, Viet Nguyen wrote:
> Hi,
>
> I just wonder whether we need to have fencing for a quorum device? I have a
> 2-node cluster with one quorum device. Both nodes have fencing agents.
>
> But I wonder whether I should define a fencing agent for the quorum device or
> not?
You cannot.
Based on my experience, I would say absolutely yes. Fencing is needed to
keep the integrity of the cluster if something goes suddenly and unexpectedly
wrong, even in a 2-node+qdevice setup. I have never seen a cluster without
fencing working properly.
On Fri, 15 Jul 2022, 15:29 Viet Nguyen, wrote:
> Hi,
>
Hi,
I just wonder whether we need to have fencing for a quorum device? I have a
2-node cluster with one quorum device. Both nodes have fencing agents.
But I wonder whether I should define a fencing agent for the quorum device or
not? Just in case it is laggy...
Thank you so much!
Regards,
Viet
On 07.06.2022 11:50, Klaus Wenninger wrote:
>>
>> From the documentation it is not clear to me whether this would be:
>> a) multiple fencing where ipmi would be first level and sbd would be a
>> second level fencing (where sbd always succeeds)
>> b) or this is considered a single level fencing with a
On 07.06.2022 11:26, Zoran Bošnjak wrote:
>
> In the test scenario, the dummy resource is currently running on node1. I
> have simulated node failure by unplugging the ipmi AND host network
> interfaces from node1. The result was that node1 gets rebooted (by watchdog),
> but the rest of the pac
On Tue, Jun 7, 2022 at 10:27 AM Zoran Bošnjak wrote:
>
> Hi, I need some help with correct fencing configuration in 5-node cluster.
>
> The specific issue is that there are 3 rooms, where in addition to node
> failure scenario, each room can fail too (for example in case of room power
> failure
Hi, I need some help with a correct fencing configuration in a 5-node cluster.
The specific issue is that there are 3 rooms, where in addition to the node
failure scenario, each room can fail too (for example in case of room power
failure or room network failure).
room0: [ node0 ]
roomA: [ node1, node
If you have a SAN & hardware watchdog device, you can also use SBD. If the SAN
is lost and the nodes cannot communicate, they will suicide.
Best Regards, Strahil Nikolov
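For reference, a rough outline of the SBD pieces involved (the device path is a
placeholder, and the exact setup steps depend on the distribution):

# confirm a hardware watchdog is visible to sbd
sbd query-watchdog
# initialize an SBD slot on the shared SAN LUN
sbd -d /dev/disk/by-id/example-san-lun create
# inspect the on-disk SBD header afterwards
sbd -d /dev/disk/by-id/example-san-lun dump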
On 07.05.2021 13:36, Kyle O'Donnell wrote:
> Hi Everyone.
>
> We've set up fencing with our ilo/idrac interfaces and things generally work
> well, but during some of our failover scenario testing we ran into issues when
> we "failed" the switches to which those ilo/idrac interfaces were connected.
On 2021-05-07 6:36 a.m., Kyle O'Donnell wrote:
> Hi Everyone.
>
> We've set up fencing with our ilo/idrac interfaces and things generally
> work well, but during some of our failover scenario testing we ran into
> issues when we "failed" the switches to which those ilo/idrac interfaces
> were conne
Hi Everyone.
We've set up fencing with our ilo/idrac interfaces and things generally work
well, but during some of our failover scenario testing we ran into issues when
we "failed" the switches to which those ilo/idrac interfaces were connected.
The issue was that resources were migrated away fro
I'm not sure Ignazio is talking about an intentional delay. I think his
question may be about node loss detection.
That is done at the Corosync level. See the corosync.conf(5) man page
for all the possible ways it can be configured, but the most important
parameter is "token".
Fencing can also be
On 12/29/20 12:38 AM, Reid Wahl wrote:
> Hi, Ignazio. You can set the delay in one of two ways:
> - Using the `delay` attribute, whose value is a bare integer
> (representing the number of seconds). This is implemented within the
> fencing library (/usr/share/fence/fencing.py).
> - Using the
Hi, Ignazio. You can set the delay in one of two ways:
- Using the `delay` attribute, whose value is a bare integer
(representing the number of seconds). This is implemented within the
fencing library (/usr/share/fence/fencing.py).
- Using the `pcmk_delay_base` attribute, whose value is more
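For illustration, assuming an already-configured fence device named
fence_ipmi_node1 (a placeholder name), either attribute can be set with pcs:

# delay implemented by the fence agent / fencing library itself
pcs stonith update fence_ipmi_node1 delay=10
# or a static delay applied by Pacemaker's fencer
pcs stonith update fence_ipmi_node1 pcmk_delay_base=10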
Hello all, I am setting up a pacemaker cluster with CentOS 7 and ipmi idrac
fencing devices.
What I did not understand is how to set the number of seconds before a node is
rebooted by stonith.
If the cluster is made up of 3 nodes (A, B, C), if node C is unreachable
(for example have network cards corrup
Coincidentally, the documentation for the pcmk_host_check default was
recently updated for the upcoming 2.0.3 release. Once the release is
out, the online documentation will be regenerated, but here is the
text:
Default: static-list if either pcmk_host_list or pcmk_host_map is set, otherwis
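As a hedged example of pinning that behaviour down explicitly (the device path
and node names below are placeholders, not taken from the thread):

pcs stonith create scsi-fence fence_scsi \
    devices=/dev/disk/by-id/example-lun \
    pcmk_host_list="node1 node2" pcmk_host_check=static-list \
    meta provides=unfencing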
Roger Zhou writes:
> On 11/3/19 12:56 AM, wf...@niif.hu wrote:
>
>> Andrei Borzenkov writes:
>>
>>> According to documentation, pcmk_host_list is used only if
>>> pcmk_host_check=static-list which is not default, by default pacemaker
>>> queries agent for nodes it can fence and fence_scsi does
On 11/3/19 12:56 AM, wf...@niif.hu wrote:
> Andrei Borzenkov writes:
>
>> According to documentation, pcmk_host_list is used only if
>> pcmk_host_check=static-list which is not default, by default pacemaker
>> queries agent for nodes it can fence and fence_scsi does not return
>> anything.
>
>
Andrei Borzenkov writes:
> According to documentation, pcmk_host_list is used only if
> pcmk_host_check=static-list which is not default, by default pacemaker
> queries agent for nodes it can fence and fence_scsi does not return
> anything.
The documentation is somewhat vague here. The note abo
Have you checked this article: Using SCSI Persistent Reservation Fencing
(fence_scsi) with pacemaker in a Red Hat High Availability cluster - Red Hat
Customer Portal
30.10.2019 15:46, RAM PRASAD TWISTED ILLUSIONS wrote:
> Hi everyone,
>
> I am trying to set up a storage cluster with two nodes, both running debian
> buster. The two nodes called, duke and miles, have a LUN residing on a SAN
> box as their shared storage device between them. As you can see in the
On Wed, 2019-10-30 at 13:46 +0100, RAM PRASAD TWISTED ILLUSIONS wrote:
> Hi everyone,
>
> I am trying to set up a storage cluster with two nodes, both running
> debian buster. The two nodes called, duke and miles, have a LUN
> residing on a SAN box as their shared storage device between them. As
Hi everyone,
I am trying to set up a storage cluster with two nodes, both running
debian buster. The two nodes called, duke and miles, have a LUN residing
on a SAN box as their shared storage device between them. As you can see
in the output of pcs status, all the daemons are active and I can g
Hi everyone,
I am trying to set up a storage cluster with two nodes, both running debian
buster. The two nodes called, duke and miles, have a LUN residing on a SAN
box as their shared storage device between them. As you can see in the
output of pcs status, all the daemons are active and I can get t
On Tue, 2019-05-21 at 11:10 +, Lopez, Francisco Javier [Global IT]
wrote:
> Hello guys !
>
> Need your help to try to understand and debug what I'm facing in one
> of my clusters.
>
> I set up fencing with this detail:
>
> # pcs -f stonith_cfg stonith create fence_ao_pg01 fence_vmware_soap
>
Hello guys !
Need your help to try to understand and debug what I'm facing in one of my
clusters.
I set up fencing with this detail:
# pcs -f stonith_cfg stonith create fence_ao_pg01 fence_vmware_soap ipaddr=
ssl_insecure=1 login="" passwd="" pcmk_reboot_action=reboot
pcmk_host_list="ao-pg01-
On 5/9/19 1:03 PM, Lopez, Francisco Javier [Global IT] wrote:
> Good day guys !
>
> I'm implementing fencing in my two node cluster with this detail:
>
> - fence_vmware_soap
> - PostgreSql release 10.X
> - CentOS 7.X
>
> As far as I know, to create the resources, I can use two different ways:
>
> -
Good day guys !
I'm implementing fencing in my two node cluster with this detail:
- fence_vmware_soap
- PostgreSql release 10.X
- CentOS 7.X
As far as I know, I can create the resources in two different ways:
- Create only one resource for both nodes, following this way:
# pcs -f stonit
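A rough sketch of the per-node variant, reusing the option names that appear in
this thread (all addresses, credentials, node and VM names below are
placeholders):

pcs -f stonith_cfg stonith create fence_vm_node1 fence_vmware_soap \
    ipaddr=vcenter.example.com ssl_insecure=1 login=user passwd=secret \
    port="node1-vm" pcmk_host_list="node1" pcmk_reboot_action=reboot
pcs -f stonith_cfg stonith create fence_vm_node2 fence_vmware_soap \
    ipaddr=vcenter.example.com ssl_insecure=1 login=user passwd=secret \
    port="node2-vm" pcmk_host_list="node2" pcmk_reboot_action=reboot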
On 03/13/2019 10:56 AM, Lopez, Francisco Javier [Global IT] wrote:
> Hello guys !
>
> I've been dealing for some time now with this configuration:
>
> - Two node cluster.
> - Vmware boxes.
> - PostgreSql release 10.X: Master/Slave.
>
> On top of this I've set up Pacemaker/Corosync and RA/PAF.
>
> Now i
Hello guys !
I've been dealing for some time now with this configuration:
- Two node cluster.
- Vmware boxes.
- PostgreSql release 10.X: Master/Slave.
On top of this I've set up Pacemaker/Corosync and RA/PAF.
Now it's time to play with different fencing scenarios.
I'd like to know from more experie
Hi,
I have 2 clustered nodes and need to configure fencing. I want
to configure fencing through VMware ESXi. What fencing agent do I need to use?
Is it fence_virt, fence_xvm or fence_vmware_soap? I'm so confused about this
fencing. Maybe you can also provide a simple configuration of the best
On 2018-06-20 11:52 PM, Andrei Borzenkov wrote:
> 21.06.2018 00:50, Digimer wrote:
>> On 2018-06-20 05:46 PM, Jehan-Guillaume de Rorthais wrote:
>>> On Wed, 20 Jun 2018 17:24:41 -0400
>>> Digimer wrote:
>>>
Make sure quorum is disabled. Quorum doesn't work on 2-node clusters.
>>>
>>> It does
On Thu, 21 Jun 2018 07:09:43 +0200
Klaus Wenninger wrote:
> On 06/21/2018 05:52 AM, Andrei Borzenkov wrote:
> > 21.06.2018 00:50, Digimer wrote:
> >> On 2018-06-20 05:46 PM, Jehan-Guillaume de Rorthais wrote:
> >>> On Wed, 20 Jun 2018 17:24:41 -0400
> >>> Digimer wrote:
> >>>
> Make s
On 06/21/2018 06:02 AM, Andrei Borzenkov wrote:
> 21.06.2018 01:12, Casey & Gina wrote:
>> Please forgive me, I had inadvertently had stonith-enabled=false when
>> I thought I had it true. The fencing/rebooting is now working.
>> However, in light of what you brought up earlier, how do I set a
>>
On 06/21/2018 05:52 AM, Andrei Borzenkov wrote:
> 21.06.2018 00:50, Digimer wrote:
>> On 2018-06-20 05:46 PM, Jehan-Guillaume de Rorthais wrote:
>>> On Wed, 20 Jun 2018 17:24:41 -0400
>>> Digimer wrote:
>>>
Make sure quorum is disabled. Quorum doesn't work on 2-node clusters.
>>> It does with
21.06.2018 01:12, Casey & Gina wrote:
> Please forgive me, I had inadvertently had stonith-enabled=false when
> I thought I had it true. The fencing/rebooting is now working.
> However, in light of what you brought up earlier, how do I set a
> delay preference different for one of the two hosts in
21.06.2018 00:50, Digimer wrote:
> On 2018-06-20 05:46 PM, Jehan-Guillaume de Rorthais wrote:
>> On Wed, 20 Jun 2018 17:24:41 -0400
>> Digimer wrote:
>>
>>> Make sure quorum is disabled. Quorum doesn't work on 2-node clusters.
>>
>> It does with the "two_node" parameter enabled in corosync.conf...
Silly question; Did you actually enable stonith? Can you share your config?
digimer
On 2018-06-20 06:04 PM, Casey & Gina wrote:
>> On 2018-06-20, at 3:59 PM, Casey & Gina wrote:
>>
>>> Get the cluster healthy, tail the system logs from both nodes, trigger a
>>> fault and wait for things to settl
Please forgive me, I had inadvertently left stonith-enabled=false when I thought
I had set it to true. The fencing/rebooting is now working. However, in light of
what you brought up earlier, how do I set a different delay preference for one
of the two hosts in case of a communications failure?
> On 2
> On 2018-06-20, at 3:59 PM, Casey & Gina wrote:
>
>> Get the cluster healthy, tail the system logs from both nodes, trigger a
>> fault and wait for things to settle. Then share the logs please.
>
> What do you mean by "system logs"? Do you mean the corosync.log? Triggering
> a fault is power
> Note: Please reply to the list, not me directly.
I intended to. I don't know why sometimes when I click "Reply" it defaults to
the list but sometimes it does not. Anyways...
> The stonith delay helps predict who will win in a comms break event
> where both try to fence the other at the same t
Fencing is required on all clusters, and especially on 2-node ones.
https://www.alteeve.com/w/The_2-Node_Myth
digimer
On 2018-06-20 05:53 PM, Casey & Gina wrote:
> Does this mean that fencing can't actually work in a 2-node cluster?? Or is
> it just that the delay needs to be set differently on one o
Does this mean that fencing can't actually work in a 2-node cluster?? Or is it
just that the delay needs to be set differently on one of the hosts and it will
start working?
> On 2018-06-20, at 3:50 PM, Digimer wrote:
>
> On 2018-06-20 05:46 PM, Jehan-Guillaume de Rorthais wrote:
>> On Wed, 20 Jun
Note: Please reply to the list, not me directly.
The stonith delay helps predict who will win in a comms break event
where both try to fence the other at the same time. If you disable
quorum and it still doesn't fence, something else is wrong (and it's not
related to the delay).
Get the cluster he
On 2018-06-20 05:46 PM, Jehan-Guillaume de Rorthais wrote:
> On Wed, 20 Jun 2018 17:24:41 -0400
> Digimer wrote:
>
>> Make sure quorum is disabled. Quorum doesn't work on 2-node clusters.
>
> It does with the "two_node" parameter enabled in corosync.conf...as far as I
> understand it anyway...
My corosync.conf (which I don't manually create, I guess pcs does this?)
already has:
quorum {
provider: corosync_votequorum
two_node: 1
}
No go.
> On 2018-06-20, at 3:46 PM, Jehan-Guillaume de Rorthais
> wrote:
>
> On Wed, 20 Jun 2018 17:24:41 -0400
> Digimer wrote:
>
>> Make sure q
On Wed, 20 Jun 2018 17:24:41 -0400
Digimer wrote:
> Make sure quorum is disabled. Quorum doesn't work on 2-node clusters.
It does with the "two_node" parameter enabled in corosync.conf...as far as I
understand it anyway...
Make sure quorum is disabled. Quorum doesn't work on 2-node clusters.
Also be sure to set a fence delay on the "primary" node (however you
define that) so that you have some predictability about which node will
live in a comms break event.
digimer
On 2018-06-20 05:22 PM, Casey & Gina wrote:
> I t
I tried testing out a fencing configuration that I had working with a 3-node
cluster, using a 2-node cluster. What I found is that when I power off one of
the nodes forcibly, it does not get fenced and rebooted as it does on a 3-node
cluster. I have verified that I can fence and reboot one nod
On Mon, 2018-06-18 at 21:01 -0400, Jason Gauthier wrote:
> On Mon, Jun 18, 2018 at 11:12 AM Jason Gauthier wrote:
> >
> > On Mon, Jun 18, 2018 at 10:58 AM Ken Gaillot
> > wrote:
> > >
> > > On Mon, 2018-06-18 at 10:10 -0400, Jason Gauthier wrote:
> > > > On Mon, Jun 18, 2018 at 9:55 AM Ken Ga
On Mon, Jun 18, 2018 at 11:12 AM Jason Gauthier wrote:
>
> On Mon, Jun 18, 2018 at 10:58 AM Ken Gaillot wrote:
> >
> > On Mon, 2018-06-18 at 10:10 -0400, Jason Gauthier wrote:
> > > On Mon, Jun 18, 2018 at 9:55 AM Ken Gaillot
> > > wrote:
> > > >
> > > > On Fri, 2018-06-15 at 21:39 -0400, Jason
On Mon, Jun 18, 2018 at 10:58 AM Ken Gaillot wrote:
>
> On Mon, 2018-06-18 at 10:10 -0400, Jason Gauthier wrote:
> > On Mon, Jun 18, 2018 at 9:55 AM Ken Gaillot
> > wrote:
> > >
> > > On Fri, 2018-06-15 at 21:39 -0400, Jason Gauthier wrote:
> > > > Greetings,
> > > >
> > > >Previously, I was
On Mon, 2018-06-18 at 10:10 -0400, Jason Gauthier wrote:
> On Mon, Jun 18, 2018 at 9:55 AM Ken Gaillot
> wrote:
> >
> > On Fri, 2018-06-15 at 21:39 -0400, Jason Gauthier wrote:
> > > Greetings,
> > >
> > > Previously, I was using fiber channel with block devices. I
> > > used
> > > sbd to fe
On Mon, Jun 18, 2018 at 9:55 AM Ken Gaillot wrote:
>
> On Fri, 2018-06-15 at 21:39 -0400, Jason Gauthier wrote:
> > Greetings,
> >
> >Previously, I was using fiber channel with block devices. I used
> > sbd to fence the disks, by creating a small block device, and then
> > using stonith to fe
On Fri, 2018-06-15 at 21:39 -0400, Jason Gauthier wrote:
> Greetings,
>
> Previously, I was using fiber channel with block devices. I used
> sbd to fence the disks, by creating a small block device, and then
> using stonith to fence the physical disk block.
>
> However, I had some reliability
Greetings,
Previously, I was using fiber channel with block devices. I used
sbd to fence the disks, by creating a small block device, and then
using stonith to fence the physical disk block.
However, I had some reliability issues with that (I believe it was the
fibre channel interfacing, not
Hi,
I have a four-node cluster that uses iLO as the fencing agent. When I simulate
a node crash (either killing corosync or echo c > /proc/sysrq-trigger) the
node is marked as UNCLEAN and requested to be restarted by the stonith
agent, but every time that happens another node in the cluster is also
mar
On 15/08/16 14:48 +0200, Jan Pokorný wrote:
>> On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
>>> On 2016-08-04 19:03, Digimer wrote:
As for DRAC vs IPMI, no, they are not two things. In fact, I am pretty
certain that fence_drac is a symlink to fence_ipmilan. All DRAC is (same
with
> On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
>> On 2016-08-04 19:03, Digimer wrote:
>>> As for DRAC vs IPMI, no, they are not two things. In fact, I am pretty
>>> certain that fence_drac is a symlink to fence_ipmilan. All DRAC is (same
>>> with iRMC, iLO, RSA, etc) is "IPMI + features". Fundam
On 2016-08-06 21:59, Digimer wrote:
On 06/08/16 08:22 PM, Dan Swartzendruber wrote:
On 2016-08-06 19:46, Digimer wrote:
On 06/08/16 07:33 PM, Dan Swartzendruber wrote:
(snip)
What about using ipmitool directly? I can't imagine that such a long
time is normal. Maybe there is a firmware upd
On 2016-08-06 21:59, Digimer wrote:
On 06/08/16 08:22 PM, Dan Swartzendruber wrote:
On 2016-08-06 19:46, Digimer wrote:
On 06/08/16 07:33 PM, Dan Swartzendruber wrote:
(snip)
What about using ipmitool directly? I can't imagine that such a long
time is normal. Maybe there is a firmware updat
On 06/08/16 08:22 PM, Dan Swartzendruber wrote:
> On 2016-08-06 19:46, Digimer wrote:
>> On 06/08/16 07:33 PM, Dan Swartzendruber wrote:
>>>
>>> Okay, I almost have this all working. fence_ipmilan for the supermicro
>>> host. Had to specify lanplus for it to work. fence_drac5 for the R905.
>>>
On 2016-08-06 19:46, Digimer wrote:
On 06/08/16 07:33 PM, Dan Swartzendruber wrote:
Okay, I almost have this all working. fence_ipmilan for the
supermicro
host. Had to specify lanplus for it to work. fence_drac5 for the
R905.
That was failing to complete due to timeout. Found a couple of
On 06/08/16 07:33 PM, Dan Swartzendruber wrote:
>
> Okay, I almost have this all working. fence_ipmilan for the supermicro
> host. Had to specify lanplus for it to work. fence_drac5 for the R905.
> That was failing to complete due to timeout. Found a couple of helpful
> posts that recommended
Okay, I almost have this all working. fence_ipmilan for the Supermicro
host. Had to specify lanplus for it to work. fence_drac5 for the R905.
That was failing to complete due to timeout. Found a couple of helpful
posts that recommended increasing the retry count to 3 and the timeout to
60.
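For what it's worth, a guess at how those two settings map onto pcs (the device
name is a placeholder, and the post does not say which timeout option was
raised; shell_timeout is only one candidate):

pcs stonith update fence_drac_r905 retry_on=3 shell_timeout=60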
A lot of good suggestions here. Unfortunately, my budget is tapped out
for the near future at least (this is a home lab/soho setup). I'm
inclined to go with Digimer's two-node approach, with IPMI fencing. I
understand mobos can die and such. In such a long-shot, manual
intervention is fin
On Fri, Aug 5, 2016 at 7:08 AM, Digimer wrote:
> On 04/08/16 11:44 PM, Andrei Borzenkov wrote:
>> 05.08.2016 02:33, Digimer пишет:
>>> On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
On 2016-08-04 19:03, Digimer wrote:
> On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
>> I'm setting u
On 04/08/16 11:44 PM, Andrei Borzenkov wrote:
> 05.08.2016 02:33, Digimer wrote:
>> On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
>>> On 2016-08-04 19:03, Digimer wrote:
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
> I'm setting up an HA NFS server to serve up storage to a couple of
>
05.08.2016 02:33, Digimer wrote:
> On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
>> On 2016-08-04 19:03, Digimer wrote:
>>> On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depe
On 2016-08-04 19:33, Digimer wrote:
On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
On 2016-08-04 19:03, Digimer wrote:
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends on a ZF
On 04/08/16 07:21 PM, Dan Swartzendruber wrote:
> On 2016-08-04 19:03, Digimer wrote:
>> On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
>>> I'm setting up an HA NFS server to serve up storage to a couple of
>>> vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
>>> agent which i
On 2016-08-04 19:03, Digimer wrote:
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
agent which imports or exports a pool. So far, with stonith disabled,
it a
On 04/08/16 06:56 PM, Dan Swartzendruber wrote:
> I'm setting up an HA NFS server to serve up storage to a couple of
> vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
> agent which imports or exports a pool. So far, with stonith disabled,
> it all works perfectly. I was dubi
I'm setting up an HA NFS server to serve up storage to a couple of
vsphere hosts. I have a virtual IP, and it depends on a ZFS resource
agent which imports or exports a pool. So far, with stonith disabled,
it all works perfectly. I was dubious about a 2-node solution, so I
created a 3rd node
On 02/22/2016 06:56 PM, Ferenc Wágner wrote:
> Ken Gaillot writes:
>
>> On 02/21/2016 06:19 PM, Ferenc Wágner wrote:
>>
>>> Last night a node in our cluster (Corosync 2.3.5, Pacemaker 1.1.14)
>>> experienced some failure and fell out of the cluster: [...]
>>>
>>> However, no fencing agent reporte
Ken Gaillot writes:
> On 02/21/2016 06:19 PM, Ferenc Wágner wrote:
>
>> Last night a node in our cluster (Corosync 2.3.5, Pacemaker 1.1.14)
>> experienced some failure and fell out of the cluster: [...]
>>
>> However, no fencing agent reported ability to fence the failing node
>> (vhbl07), beca
On 02/21/2016 06:19 PM, Ferenc Wágner wrote:
> Hi,
>
> Last night a node in our cluster (Corosync 2.3.5, Pacemaker 1.1.14)
> experienced some failure and fell out of the cluster:
>
> Feb 21 22:11:12 vhbl06 corosync[3603]: [TOTEM ] A new membership
> (10.0.6.9:612) was formed. Members left: 167
Hi,
Last night a node in our cluster (Corosync 2.3.5, Pacemaker 1.1.14)
experienced some failure and fell out of the cluster:
Feb 21 22:11:12 vhbl06 corosync[3603]: [TOTEM ] A new membership
(10.0.6.9:612) was formed. Members left: 167773709
Feb 21 22:11:12 vhbl06 corosync[3603]: [TOTEM ] Fa
On 05/11/15 11:04 AM, Ken Gaillot wrote:
> On 11/05/2015 02:43 AM, Gonçalo Lourenço wrote:
>> Greetings, everyone!
>>
>>
>> I'm having some trouble understanding how to properly set up fencing in my
>> two-node cluster (Pacemaker + Corosync). I apologize beforehand if this
>> exact question has be
On 11/05/2015 02:43 AM, Gonçalo Lourenço wrote:
> Greetings, everyone!
>
>
> I'm having some trouble understanding how to properly set up fencing in my
> two-node cluster (Pacemaker + Corosync). I apologize beforehand if this exact
> question has been answered in the past, but I think the intric
Greetings, everyone!
I'm having some trouble understanding how to properly set up fencing in my
two-node cluster (Pacemaker + Corosync). I apologize beforehand if this exact
question has been answered in the past, but I think the intricacies of my
situation might be interesting enough to warran
On 19/10/15 10:51 -0400, Digimer wrote:
> On 19/10/15 06:53 AM, Arjun Pandey wrote:
>> 2. Fencing test cases.
>> Based on the internet queries i could find , apart from plugging out
>> the dedicated cable. The only other case suggested is killing corosync
>> process on one of the nodes.
>> Are the
Hi Digimer
Please find my response inline.
On Mon, Oct 19, 2015 at 8:21 PM, Digimer wrote:
> On 19/10/15 06:53 AM, Arjun Pandey wrote:
> > Hi
> >
> > I am running a 2 node cluster with this config on centos 6.5/6.6 where
>
> It's important to keep both nodes on the same minor version,
> part
On 19/10/15 06:53 AM, Arjun Pandey wrote:
> Hi
>
> I am running a 2 node cluster with this config on centos 6.5/6.6 where
It's important to keep both nodes on the same minor version,
particularly in this case. Please either upgrade centos 6.5 to 6.6 or
both to 6.7.
> i have a multi-state resour
Hi
I am running a 2-node cluster with this config on CentOS 6.5/6.6 where I
have a multi-state resource foo being run in master/slave mode and a bunch
of floating IP addresses configured. Additionally I have a colocation
constraint for the IP addr to be colocated with the master.
Please find
10 July 2015 08:46 "Digimer" wrote:
> On 09/07/15 11:37 PM, Nicolas S. wrote:
>
>> Hello,
>>
>> I'm working on a 3 node cluster project.
>> I didn't want to go to a 2-node cluster; I'd rather have 3 nodes for the
>> resources, and have a quorum. My 3 machines are identical.
>>
>> Each machine
On 10/07/15 12:39 AM, Nicolas S. wrote:
> 10 July 2015 08:46 "Digimer" wrote:
>
>> On 09/07/15 11:37 PM, Nicolas S. wrote:
>>
>>> Hello,
>>>
>>> I'm working on a 3 node cluster project.
>>> I didn't want to go to a 2-node cluster; I'd rather have 3 nodes for the
>>> resources, and have a quorum.
10 July 2015 08:46 "Digimer" wrote:
> On 09/07/15 11:37 PM, Nicolas S. wrote:
>
>> Hello,
>>
>> I'm working on a 3 node cluster project.
>> I didn't want to go to a 2-node cluster; I'd rather have 3 nodes for the
>> resources, and have a quorum. My 3 machines are identical.
>>
>> Each machine
On 09/07/15 11:37 PM, Nicolas S. wrote:
> Hello,
>
> I'm working on a 3 node cluster project.
> I didn't want to go to a 2-node cluster; I'd rather have 3 nodes for the
> resources, and have a quorum. My 3 machines are identical.
>
> Each machine exports a disk to the cluster via iscsi (it's to simul
Hello,
I'm working on a 3-node cluster project.
I didn't want to go to a 2-node cluster; I'd rather have 3 nodes for the
resources, and have a quorum. My 3 machines are identical.
Each machine exports a disk to the cluster via iSCSI (it's to simulate a SAN on
my test platform).
For the moment all I