Yep, that works fine. Thanks for the explanation.
On Thu, Apr 18, 2019 at 5:00 PM Ken Gaillot wrote:
> On Thu, 2019-04-18 at 15:51 -0600, JCA wrote:
> > I have my CentOS two-node cluster, which some of you may already be
> > sick and tired of reading about:
> >
> > #
I have my CentOS two-node cluster, which some of you may already be sick
and tired of reading about:
# pcs status
Cluster name: FirstCluster
Stack: corosync
Current DC: two (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Thu Apr 18 13:52:38 2019
Last change: Thu Apr 18
> more specific details.
>
> digimer
> On 2019-04-17 5:46 p.m., JCA wrote:
>
> Thanks. This implies that I officially do not understand what it is that
> fencing can do for me, in my simple cluster. Back to the drawing board.
>
> On Wed, Apr 17, 2019 at 3:33 PM digimer
> Only after the peer has been confirmed terminated will IO resume. This
> way, split-nodes become effectively impossible.
>
> digimer
> On 2019-04-17 5:17 p.m., JCA wrote:
>
> Here is what I did:
>
> # pcs stonith create disk_fencing fence_scsi pcmk_host_list="one two"
at which point all the resources above get started immediately.
Obviously, I am missing something big here. But, what is it?
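For anyone following the thread, here is a hedged sketch of a fuller fence_scsi setup; the shared-device path and the final verification step are illustrative assumptions, not taken from JCA's messages:

```shell
# Sketch: inspect the agent, create the fence device, and re-enable STONITH.
# The devices= path below is an assumption for illustration only.
pcs stonith describe fence_scsi          # list the agent's supported parameters
pcs stonith create disk_fencing fence_scsi \
    pcmk_host_list="one two" \
    devices="/dev/disk/by-id/example-shared-disk" \
    meta provides=unfencing
pcs property set stonith-enabled=true
pcs stonith show                         # confirm the device is configured
```

fence_scsi uses SCSI persistent reservations, which is why the `provides=unfencing` meta attribute is typically set for it.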
On Wed, Apr 17, 2019 at 2:59 PM Adam Budziński
wrote:
> You did not configure any fencing device.
>
> Wed, 17.04.2019, 22:51, JCA <1.41...@gmail.com> wrote:
I am trying to get fencing working, as described in the "Clusters from
Scratch" guide, and I am stymied at the get-go :-(
The document mentions a property named stonith-enabled. When I was trying
to get my first cluster going, I noticed that my resources would start only
when this property is set to false.
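The property itself can be inspected and toggled with pcs; a minimal sketch (note that disabling STONITH is only reasonable on throwaway test clusters):

```shell
pcs property show stonith-enabled        # current value, if explicitly set
pcs property set stonith-enabled=false   # lets resources start with no fencing
pcs property set stonith-enabled=true    # the supported production setting
```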
> [letter-casing wise:
> it's either "Pacemaker" or down-to-the-terminal "pacemaker"]
>
> On 16/04/19 10:21 -0600, JCA wrote:
> > 2. It would seem that what Pacemaker is doing is the following:
> >    a. Check out whether the app is running.
> >
Thanks to everybody who has contributed to this. Let me summarize things,
if only for my own benefit - I learn more quickly when I try to explain
what I am trying to learn to others.
I instrumented my script in order to find out exactly how many times it is
invoked when creating my
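A sketch of such instrumentation, assuming a shell-based agent; the log path and the function name are hypothetical:

```shell
# Append one line per invocation: timestamp, pid, requested action, exit code.
LOG="${LOG:-/tmp/myapp-script.log}"   # hypothetical log location

log_invocation() {
    # $1 = OCF action (start/stop/monitor/...), $2 = the code about to be returned
    echo "$(date '+%F %T') pid=$$ action=$1 rc=$2" >> "$LOG"
}

# Example use at the end of the agent's monitor handler:
# log_invocation monitor "$rc"; exit "$rc"
```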
Thanks. See my comments interspersed below.
On Mon, Apr 15, 2019 at 4:30 PM Ken Gaillot wrote:
> On Mon, 2019-04-15 at 14:15 -0600, JCA wrote:
> > I have a simple two-node cluster, node one and node two, with a
> > single resource, ClusterMyApp. The nodes are CentOS 7 VMs. The
This is weird. Further experiments, consisting of creating and deleting the
resource, reveal that, on creating the resource, myapp-script may be
invoked multiple times - sometimes four, sometimes twenty or so - sometimes
returning OCF_SUCCESS, other times returning OCF_NOT_RUNNING. And
whether
Well, I remain puzzled. I added a statement to the end of my script in
order to capture its return value. Much to my surprise, when I create the
associated resource (as described in my previous post) myapp-script gets
invoked four times in node one (where the resource is created) and twice in
node two.
I have a simple two-node cluster, node one and node two, with a single
resource, ClusterMyApp. The nodes are CentOS 7 VMs. The resource is created
by executing the following line on node one:
# pcs resource create ClusterMyApp ocf:myapp:myapp-script op monitor
interval=30s
This invokes myapp-script
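Those repeated invocations are Pacemaker probing and then monitoring the resource; each call passes one action argument to the agent. A minimal OCF-style skeleton, as a hedged sketch - the pidfile path and function names are hypothetical stand-ins, not JCA's actual script:

```shell
#!/bin/sh
# Sketch of an OCF-style resource agent. Pacemaker invokes it with an
# action argument: start, stop, monitor, meta-data, ...
: "${OCF_SUCCESS:=0}" "${OCF_ERR_UNIMPLEMENTED:=3}" "${OCF_NOT_RUNNING:=7}"
PIDFILE="${PIDFILE:-/tmp/myapp.pid}"   # hypothetical pidfile location

myapp_start() { echo $$ > "$PIDFILE"; }   # stand-in for launching the app
myapp_stop()  { rm -f "$PIDFILE"; }

myapp_monitor() {
    # OCF contract: SUCCESS if running, NOT_RUNNING if cleanly stopped.
    [ -f "$PIDFILE" ] && return "$OCF_SUCCESS"
    return "$OCF_NOT_RUNNING"
}

myapp_dispatch() {
    case "$1" in
        start)   myapp_start ;;
        stop)    myapp_stop ;;
        monitor) myapp_monitor ;;
        *)       return "$OCF_ERR_UNIMPLEMENTED" ;;
    esac
}
```

The initial probe is a monitor call on every node, which is why the agent runs (and correctly returns OCF_NOT_RUNNING) even where the resource was never started.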
Making some progress with Pacemaker/DRBD, but still trying to grasp some of
the basics of this framework. Here is my current situation:
I have a two-node cluster, pmk1 and pmk2, with resources ClusterIP and
DrbdFS. In what follows, commands preceded by '[pmk1] #' are to be
understood as commands i
odes, you need to use primary/primary
> > mode in drbd
> >
> > On Wed, Mar 20, 2019 at 16:51 JCA <1.41...@gmail.com
> > <mailto:1.41...@gmail.com>> wrote:
> >
OK, thanks. Yet another thing I was not aware of in the clustering world :-(
On Wed, Mar 20, 2019 at 9:41 AM Valentin Vidic
wrote:
> On Wed, Mar 20, 2019 at 09:36:58AM -0600, JCA wrote:
> > # pcs -f fs_cfg resource create TestFS Filesystem
> device="/dev/drbd1"
>
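The Filesystem agent also needs a mountpoint and a filesystem type; a hedged sketch of the complete sequence, where the directory and fstype values are assumptions for illustration:

```shell
# Build the change in a working CIB file, then push it to the cluster atomically.
pcs cluster cib fs_cfg
pcs -f fs_cfg resource create TestFS Filesystem \
    device="/dev/drbd1" directory="/mnt/test" fstype="ext4"
pcs cluster cib-push fs_cfg
```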
I sent what follows to the DRBD list, but immediately after doing so I
concluded that this list probably is more apposite. Therefore here it goes,
with my apologies to those who read both lists.
I am a complete newbie to this, so please bear with me.
I am trying to use DRBD in conjunction with