On 19/02/2024 09:06, Strahil Nikolov via Users wrote:
Hi All,
Is there a specific setup I missed in order to set up
the web interface?
Usually, you just log in with the hacluster user on
https://fqdn:2224, but when I do a curl, I get an empty
response.
Best Regards,
Strahil Nikolov
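Not from the thread, just a hedged checklist: an empty curl response on 2224 usually means pcsd is not actually serving there, so the daemon and the listener are the first things to check (standard systemd/iproute2 tools; note that on some newer pcs versions the web UI ships as a separate package, which is worth verifying for your distro).

```shell
# Is pcsd running and listening on 2224?
systemctl status pcsd
ss -tlnp | grep 2224
# Verbose handshake and response; -k skips the self-signed cert check
curl -vk https://fqdn:2224/
```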
Hi guys.
Everything seems to be working OK, yet pacemaker logs:
...
error: clone_op_key: Triggered fatal assertion at
pcmk_graph_producer.c:207 : (n_type != NULL) && (n_task != NULL)
error: pcmk__notify_key: Triggered fatal assertion at
actions.c:187 : op_type != NULL
error: clone_op_key:
On 31/01/2024 16:37, lejeczek via Users wrote:
On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users wrote:
On 29/01/2024 17:22, Ken Gaillot wrote:
On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users
wrote:
Hi guys
On 01/01/2024 18:28, Ken Gaillot wrote:
On Fri, 2023-12-22 at 17:02 +0100, lejeczek via Users wrote:
hi guys.
I have a colocation constraint:
-> $ pcs constraint ref DHCPD
Resource: DHCPD
colocation-DHCPD-GATEWAY-NM-link-INFINITY
and the trouble is... I thought DHCPD is to fol
hi guys.
So, I've managed to make my volume go haywire, here:
-> $ gluster volume heal VMAIL info
Brick 10.1.1.100:/devs/00.GLUSTERs/VMAIL
Status: Connected
Number of entries: 0
Brick 10.1.1.101:/devs/00.GLUSTERs/VMAIL
/dovecot-uidlist
Status: Connected
Number of entries: 1
Brick
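A hedged sketch of the usual follow-up for a single stuck entry like /dovecot-uidlist (standard gluster CLI; the split-brain policy shown is only one of several):

```shell
# Ask gluster to retry healing, then check whether the entry is split-brain
gluster volume heal VMAIL
gluster volume heal VMAIL info split-brain
# If it really is split-brain, resolve by policy, e.g. keep the newest copy
gluster volume heal VMAIL split-brain latest-mtime /dovecot-uidlist
```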
On 01/02/2024 15:02, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 18:23:40 +0100
lejeczek via Users wrote:
On 31/01/2024 17:13, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 16:37:21 +0100
lejeczek via Users wrote:
On 31/01/2024 16:06, Jehan-Guillaume de Rorthais
On 31/01/2024 18:11, Ken Gaillot wrote:
On Wed, 2024-01-31 at 16:37 +0100, lejeczek via Users wrote:
On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users wrote:
On 29/01/2024 17:22, Ken Gaillot wrote:
On Fri, 2024-01-26 at 13:55
On 31/01/2024 17:13, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 16:37:21 +0100
lejeczek via Users wrote:
On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users wrote:
On 29/01/2024 17:22, Ken Gaillot wrote:
On Fri
On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users wrote:
On 29/01/2024 17:22, Ken Gaillot wrote:
On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users wrote:
Hi guys.
Is it possible to trigger some... action - I'm thinking
On 29/01/2024 17:22, Ken Gaillot wrote:
On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users wrote:
Hi guys.
Is it possible to trigger some... action - I'm thinking specifically
at shutdown/start.
If not within the cluster then - if you do that - perhaps outside.
I would like to create
Hi guys.
Is it possible to trigger some... action - I'm thinking
specifically at shutdown/start.
If not within the cluster then - if you do that - perhaps
outside.
I would like to create/remove constraints, when cluster
starts & stops, respectively.
many thanks,
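One way to sketch this outside the cluster (assumption, not from the thread: a custom systemd unit ordered against pacemaker.service; the unit name, node name and constraint id below are illustrative):

```ini
# /etc/systemd/system/cluster-constraints.service (hypothetical unit)
[Unit]
Description=Create/remove constraints around cluster start/stop
After=pacemaker.service
BindsTo=pacemaker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/pcs constraint location DHCPD prefers node1
# pcs auto-generates ids in this location-<resource>-<node>-<score> form
ExecStop=/usr/sbin/pcs constraint remove location-DHCPD-node1-INFINITY

[Install]
WantedBy=multi-user.target
```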
Hi guys.
I wonder if you might have any tips/tweaks for
volume/cluster to make it more resilient? accommodating? to
qcow2 files when a peer is lost or missing?
I have a 3-peer cluster/volume: 2 + 1 arbiter & my experience
is such that when all is good then.. well, all is good, but...
when
hi guys.
I have a colocation constraint:
-> $ pcs constraint ref DHCPD
Resource: DHCPD
colocation-DHCPD-GATEWAY-NM-link-INFINITY
and the trouble is... I thought DHCPD is to follow
GATEWAY-NM-link, always!
If that is true then I see very strange behavior, namely:
When there is an issue with
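For reference, the constraint named above would have been created roughly like this (a sketch; with score INFINITY the dependent resource can only run where the other runs, and stops if that resource can run nowhere):

```shell
pcs constraint colocation add DHCPD with GATEWAY-NM-link INFINITY
# Colocation only places DHCPD where GATEWAY-NM-link is; it does not
# by itself order their starts - add an order constraint for that:
pcs constraint order start GATEWAY-NM-link then start DHCPD
```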
On 19/12/2023 19:13, lejeczek via Users wrote:
hi guys,
Is this below not the weirdest thing?
-> $ pcs constraint ref PGSQL-PAF-5435
Resource: PGSQL-PAF-5435
colocation-HA-10-1-1-84-PGSQL-PAF-5435-clone-INFINITY
colocation-REDIS-6385-clone-PGSQL-PAF-5435-clone-INFINITY
order-PGSQL-
hi guys,
Is this below not the weirdest thing?
-> $ pcs constraint ref PGSQL-PAF-5435
Resource: PGSQL-PAF-5435
colocation-HA-10-1-1-84-PGSQL-PAF-5435-clone-INFINITY
colocation-REDIS-6385-clone-PGSQL-PAF-5435-clone-INFINITY
order-PGSQL-PAF-5435-clone-HA-10-1-1-84-Mandatory
Hi guys.
my resource-agents dependencies look like so:
resource-agents-deps.target
○ ├─00\x2dVMsy.mount
● └─virt-guest-shutdown.target
when I reboot a node, VMs seem to migrate off it live OK,
but..
when the node comes back on after a reboot, VMs fail to
migrate back to it live.
I see on such node
On 08/12/2023 13:25, Jehan-Guillaume de Rorthais wrote:
Hi,
On Wed, 6 Dec 2023 10:36:39 +0100
lejeczek via Users wrote:
How do you colocate your promoted resources while balancing
the underlying resources as a priority?
What do you mean?
With a simple scenario, say
3 nodes and 3 pgSQL
On 04/12/2023 20:58, Reid Wahl wrote:
On Thu, Nov 30, 2023 at 10:30 AM lejeczek via Users
wrote:
On 07/02/2022 20:09, lejeczek via Users wrote:
Hi guys
How do you guys go about doing link up/down as a resource?
many thanks, L.
With simple tests I confirmed that indeed Linux - on my
Hi guys.
How do you colocate your promoted resources while balancing
the underlying resources as a priority?
With a simple scenario, say
3 nodes and 3 pgSQL clusters
what would be best possible way - I'm thinking most gentle
at the same time, if that makes sense.
many thanks,
On 26/11/2023 12:20, Reid Wahl wrote:
On Sun, Nov 26, 2023 at 1:32 AM lejeczek via Users
wrote:
Hi guys.
With these:
-> $ pcs resource status REDIS-6381-clone
* Clone Set: REDIS-6381-clone [REDIS-6381] (promotable):
* Promoted: [ ubusrv2 ]
* Unpromoted: [ ubusrv1 ubus
hi guys.
A cluster thinks the resource is up:
...
* HA-10-1-1-80 (ocf:heartbeat:IPaddr2): Started
ubusrv3 (disabled)
..
while it is not the case. What might it mean?
Config is simple:
-> $ pcs resource config HA-10-1-1-80
Resource: HA-10-1-1-80 (class=ocf provider=heartbeat
On 07/02/2022 20:09, lejeczek via Users wrote:
Hi guys
How do you guys go about doing link up/down as a resource?
many thanks, L.
With simple tests I confirmed that indeed Linux - on my
hardware at least - can easily power down an eth link - if
a @devel reads this:
Is there an agent
On 16/02/2022 10:37, Klaus Wenninger wrote:
On Tue, Feb 15, 2022 at 5:25 PM lejeczek via Users
wrote:
On 07/02/2022 19:21, Antony Stone wrote:
> On Monday 07 February 2022 at 20:09:02, lejeczek via
Users wrote:
>
>> Hi guys
>>
>>
On 26/11/2023 17:44, Andrei Borzenkov wrote:
On 26.11.2023 12:32, lejeczek via Users wrote:
Hi guys.
With these:
-> $ pcs resource status REDIS-6381-clone
* Clone Set: REDIS-6381-clone [REDIS-6381] (promotable):
* Promoted: [ ubusrv2 ]
* Unpromoted: [ ubusrv1 ubus
On 26/11/2023 10:32, lejeczek via Users wrote:
Hi guys.
With these:
-> $ pcs resource status REDIS-6381-clone
* Clone Set: REDIS-6381-clone [REDIS-6381] (promotable):
* Promoted: [ ubusrv2 ]
* Unpromoted: [ ubusrv1 ubusrv3 ]
-> $ pcs resource status PGSQL-PAF-5433-clone
*
Hi guys.
With these:
-> $ pcs resource status REDIS-6381-clone
* Clone Set: REDIS-6381-clone [REDIS-6381] (promotable):
* Promoted: [ ubusrv2 ]
* Unpromoted: [ ubusrv1 ubusrv3 ]
-> $ pcs resource status PGSQL-PAF-5433-clone
* Clone Set: PGSQL-PAF-5433-clone [PGSQL-PAF-5433]
e is short.
Regards,
Ulrich
-Original Message-
From: Users On Behalf Of lejeczek via Users
Sent: Friday, November 17, 2023 12:55 PM
To: users@clusterlabs.org
Cc: lejeczek
Subject: [EXT] [ClusterLabs] moving VM live fails?
Hi guys.
I have a resource which when asked to 'move' the
Hi guys.
Having a node with a couple of _promoted_ resources - when
such a node is OS-shutdown in an orderly manner, it seems
that the cluster takes a while.
By a "while" I mean longer than I'd expect a relatively
simple 3-node cluster to move/promote a few _promoted_
resources:
redis, postgresql,
Hi guys.
My 3-node cluster had one node absent for a long time and
now when it's back I cannot get _mariadb_ to start on that node.
...
* MARIADB (ocf:heartbeat:galera): ORPHANED Stopped
...
* MARIADB-last-committed : 147
* MARIADB-safe-to-bootstrap :
On 13/11/2023 13:08, Jehan-Guillaume de Rorthais via Users
wrote:
On Mon, 13 Nov 2023 11:39:45 +
"Windl, Ulrich" wrote:
But shouldn't the RA check for that (and act appropriately)?
Interesting. I'm open to discuss this. Below my thoughts so far.
Why the RA should check that? There's
Hi guys.
I have a resource which, when asked to 'move', fails with:
virtqemud[3405456]: operation failed: guest CPU doesn't
match specification: missing features: xsave
but the VM domain does not require (nor disable) the feature;
what's even more interesting, _virsh_ migrate does
On 10/11/2023 18:16, Jehan-Guillaume de Rorthais wrote:
On Fri, 10 Nov 2023 17:17:41 +0100
lejeczek via Users wrote:
...
Of course you can use "pg_stat_tmp", just make sure the temp folder exists:
cat <<EOF > /etc/tmpfiles.d/postgresql-part.conf
# Directory for Postgre
On 10/11/2023 13:13, Jehan-Guillaume de Rorthais wrote:
On Fri, 10 Nov 2023 12:27:24 +0100
lejeczek via Users wrote:
...
to share my "fix" for it - perhaps it was introduced by
OS/packages (Ubuntu 22) updates - ? - as opposed to the resource
agent itself.
As the logs point out - p
On 07/11/2023 17:57, lejeczek via Users wrote:
hi guys
Having 3-node pgSQL cluster with PAF - when all three
systems are shutdown at virtually the same time then PAF
fails to start when HA cluster is operational again.
from status:
...
Migration Summary:
* Node: ubusrv2 (2):
* PGSQL
hi guys
Having 3-node pgSQL cluster with PAF - when all three
systems are shutdown at virtually the same time then PAF
fails to start when HA cluster is operational again.
from status:
...
Migration Summary:
* Node: ubusrv2 (2):
* PGSQL-PAF-5433: migration-threshold=100
On 08/09/2023 17:29, Jehan-Guillaume de Rorthais wrote:
On Fri, 8 Sep 2023 16:52:53 +0200
lejeczek via Users wrote:
Hi guys.
Before I start fiddling and break things I wonder if
somebody knows if:
pgSQL can work with wal_level = archive for PAF?
Or a more general question which pertains
Hi guys.
Before I start fiddling and break things I wonder if
somebody knows if:
pgSQL can work with wal_level = archive for PAF?
Or a more general question which pertains to wal_level - can
_barman_ be used with pgSQL "under" PAF?
many thanks, L.
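For what it's worth, a hedged postgresql.conf sketch: PAF drives streaming replication, which needs at least wal_level = replica (PostgreSQL 9.6+ merged the old 'archive' and 'hot_standby' levels into 'replica'), and barman can consume WALs alongside it; the archive_command host/server names below are illustrative.

```shell
# postgresql.conf (sketch)
wal_level = replica        # 'archive' no longer exists as a level on 9.6+
archive_mode = on
archive_command = 'barman-wal-archive backup-host pg %p'   # illustrative names
```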
On 07/09/2023 16:20, lejeczek via Users wrote:
On 07/09/2023 16:09, Andrei Borzenkov wrote:
On Thu, Sep 7, 2023 at 5:01 PM lejeczek via Users
wrote:
Hi guys.
I'm trying to set ocf_heartbeat_pgsqlms agent but I get:
...
Failed Resource Actions:
* PGSQL-PAF-5433 stop on ubusrv3 returned
On 07/09/2023 16:09, Andrei Borzenkov wrote:
On Thu, Sep 7, 2023 at 5:01 PM lejeczek via Users wrote:
Hi guys.
I'm trying to set ocf_heartbeat_pgsqlms agent but I get:
...
Failed Resource Actions:
* PGSQL-PAF-5433 stop on ubusrv3 returned 'invalid parameter' because 'Parameter
Hi guys.
I'm trying to set up the ocf_heartbeat_pgsqlms agent but I get:
...
Failed Resource Actions:
* PGSQL-PAF-5433 stop on ubusrv3 returned 'invalid
parameter' because 'Parameter "recovery_target_timeline"
MUST be set to 'latest'. It is currently set to ''' at Thu
Sep 7 13:58:06 2023 after
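The fix the error points at is a one-liner; on PostgreSQL 12+ the setting lives in postgresql.conf (recovery.conf is gone), so a sketch:

```shell
# postgresql.conf on every node (PostgreSQL 12+)
recovery_target_timeline = 'latest'
# then restart postgres and clean up the failed action:
pcs resource cleanup PGSQL-PAF-5433
```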
Hi guys.
That below should work, right?
-> $ pcs quorum update last_man_standing=1 --skip-offline
Checking corosync is not running on nodes...
Warning: Unable to connect to dzien (Failed to connect to
dzien port 2224: No route to host)
Warning: dzien: Unable to check if corosync is not
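Hedged aside: quorum options are corosync-level settings, so after an update it is worth confirming what corosync actually loaded (standard tools):

```shell
pcs quorum config
corosync-quorumtool -s
# see votequorum(5) for last_man_standing caveats and interactions
```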
On 13/07/2023 17:33, Ken Gaillot wrote:
On Wed, 2023-07-12 at 21:08 +0200, lejeczek via Users wrote:
Hi guys.
I have a fresh new 'galera' clone and that one would not start &
cluster says:
...
INFO: Waiting on node to report database status before Master
instances can s
Hi guys.
I have a fresh new 'galera' clone and that one would not
start & cluster says:
...
INFO: Waiting on node to report database status
before Master instances can start.
...
Is that only for newly created resources - which I guess it
must be - and if so then why?
Naturally, next
On 03/07/2023 18:55, Andrei Borzenkov wrote:
On 03.07.2023 19:39, Ken Gaillot wrote:
On Mon, 2023-07-03 at 19:22 +0300, Andrei Borzenkov wrote:
On 03.07.2023 18:07, Ken Gaillot wrote:
On Mon, 2023-07-03 at 12:20 +0200, lejeczek via Users
wrote:
On 03/07/2023 11:16, Andrei Borzenkov wrote
On 03/07/2023 11:16, Andrei Borzenkov wrote:
On 03.07.2023 12:05, lejeczek via Users wrote:
Hi guys.
I have pgsql which I constrain like so:
-> $ pcs constraint location PGSQL-clone rule role=Promoted
score=-1000 gateway-link ne 1
and I have a few more location constrai
Hi guys.
I have pgsql which I constrain like so:
-> $ pcs constraint location PGSQL-clone rule role=Promoted
score=-1000 gateway-link ne 1
and I have a few more location constraints with that
ethmonitor & those work, but this one does not seem to.
When the constraint is created the cluster is silent,
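A hedged sketch of the moving parts: ethmonitor publishes a transient node attribute (by default named ethmonitor-<interface>, overridable via its name parameter, if memory serves), and the location rule must reference exactly that attribute name, which can be verified directly:

```shell
# clone an ethmonitor that publishes the 'gateway-link' attribute (names illustrative)
pcs resource create gateway-link ocf:heartbeat:ethmonitor interface=eth0 name=gateway-link clone
pcs constraint location PGSQL-clone rule role=Promoted score=-1000 gateway-link ne 1
# check the attribute is really being set on this node
attrd_updater --query --name gateway-link
```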
Hi guys.
Having 'pgsql' set up in what I'd say is a vanilla-default
config, pacemaker's journal log is flooded with:
...
pam_unix(runuser:session): session closed for user postgres
pam_unix(runuser:session): session opened for user
postgres(uid=26) by (uid=0)
pam_unix(runuser:session):
On 09/06/2023 09:04, Reid Wahl wrote:
On Thu, Jun 8, 2023 at 10:55 PM lejeczek via Users
wrote:
On 09/06/2023 01:38, Reid Wahl wrote:
On Thu, Jun 8, 2023 at 2:24 PM lejeczek via Users wrote:
Ouch.
Let's see the full output of the move command, with the whole CIB that
failed
On 09/06/2023 01:38, Reid Wahl wrote:
On Thu, Jun 8, 2023 at 2:24 PM lejeczek via Users wrote:
Ouch.
Let's see the full output of the move command, with the whole CIB that
failed to validate.
For a while there I thought perhaps it was just that one
pgsql resource, but it seems that any
Ouch.
Let's see the full output of the move command, with the whole CIB that
failed to validate.
For a while there I thought perhaps it was just that one
pgsql resource, but it seems that any - though only a few
are set up - (only promoted clones?) resource fails to move.
Perhaps primarily
On 05/06/2023 16:23, Ken Gaillot wrote:
On Sat, 2023-06-03 at 15:09 +0200, lejeczek via Users wrote:
Hi guys.
I've something which I'm new to entirely - a cluster which is seemingly
okay yet errors and fails to move a resource.
What pcs version are you using? I believe there was a move regression
Hi guys.
I've something which I'm new to entirely - a cluster which is
seemingly okay yet errors and fails to move a resource.
I won't contaminate here just yet with the long json the
cluster spits out when it fails, but a snippet:
-> $ pcs resource move PGSQL-clone --promoted podnode1
Error: cannot move resource
Hi guys.
I have a resource seemingly running OK, but when a node
gets rebooted the cluster then finds itself unable to
start the resource.
Failed Resource Actions:
* PGSQL start on podnode3 returned 'error' (My data may
be inconsistent. You have to remove
On 05/05/2023 10:41, Jehan-Guillaume de Rorthais wrote:
On Fri, 5 May 2023 10:08:17 +0200
lejeczek via Users wrote:
On 25/04/2023 14:16, Jehan-Guillaume de Rorthais wrote:
Hi,
On Mon, 24 Apr 2023 12:32:45 +0200
lejeczek via Users wrote:
I've been looking up and fiddling with this RA
On 05/05/2023 10:08, Andrei Borzenkov wrote:
On Fri, May 5, 2023 at 11:03 AM lejeczek via Users
wrote:
On 29/04/2023 21:02, Reid Wahl wrote:
On Sat, Apr 29, 2023 at 3:34 AM lejeczek via Users
wrote:
Hi guys.
I presume these are a consequence of having a resource of VirtualDomain type set up (&
On 25/04/2023 14:16, Jehan-Guillaume de Rorthais wrote:
Hi,
On Mon, 24 Apr 2023 12:32:45 +0200
lejeczek via Users wrote:
I've been looking up and fiddling with this RA, but so
unsuccessfully so far that I wonder - is it good for
current versions of pgSQL?
As far as I know, the pgsql agent
On 29/04/2023 21:02, Reid Wahl wrote:
On Sat, Apr 29, 2023 at 3:34 AM lejeczek via Users
wrote:
Hi guys.
I presume these are a consequence of having a resource of VirtualDomain type set up (&
enabled) - but where, how can users control presence & content of those?
Yep:
https://gi
Hi guys.
I presume these are a consequence of having a resource of
VirtualDomain type set up (& enabled) - but where, how can
users control presence & content of those?
many thanks, L.
Hi guys.
anybody here use PAF with up-to-date Centos?
I see this RA fails from the cluster's perspective; I've filed
a report over at GitHub, no comment there, so I thought I'd
ask around.
many thanks, L.
Hi guys.
I've been looking up and fiddling with this RA, but so
unsuccessfully so far that I wonder - is it good for
current versions of pgSQL?
many thanks, L.
On 19/04/2023 21:08, Ken Gaillot wrote:
Hi all,
I am considering deprecating Pacemaker's support for nagios-class
resources.
This has nothing to do with nagios monitoring of a Pacemaker cluster,
which would be unaffected. This is about Pacemaker's ability to use
nagios plugin scripts as a
On 19/04/2023 16:16, Ken Gaillot wrote:
On Wed, 2023-04-19 at 08:00 +0200, lejeczek via Users wrote:
On 18/04/2023 21:02, Ken Gaillot wrote:
On Tue, 2023-04-18 at 19:36 +0200, lejeczek via Users wrote:
On 18/04/2023 18:22, Ken Gaillot wrote:
On Tue, 2023-04-18 at 14:58 +0200, lejeczek via
On 18/04/2023 21:02, Ken Gaillot wrote:
On Tue, 2023-04-18 at 19:36 +0200, lejeczek via Users wrote:
On 18/04/2023 18:22, Ken Gaillot wrote:
On Tue, 2023-04-18 at 14:58 +0200, lejeczek via Users wrote:
Hi guys.
When it's done by the cluster itself, eg. a node goes 'standby' -
how
do
On 18/04/2023 18:22, Ken Gaillot wrote:
On Tue, 2023-04-18 at 14:58 +0200, lejeczek via Users wrote:
Hi guys.
When it's done by the cluster itself, eg. a node goes 'standby' - how
do clusters migrate VirtualDomain resources?
1. Call resource agent migrate_to action on original node
2. Call
Hi guys.
When it's done by the cluster itself, eg. a node goes
'standby' - how do clusters migrate VirtualDomain resources?
Do users have any control over it and if so then how?
I'd imagine there must be some docs - I failed to find
Especially in large deployments one obvious question would
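One knob worth knowing (a hedged sketch; VM-example is a placeholder name): the cluster only live-migrates a VirtualDomain when its allow-migrate meta attribute is true; otherwise standby triggers a plain stop on one node and start on another.

```shell
pcs resource update VM-example meta allow-migrate=true
pcs resource config VM-example   # confirm the meta attribute took effect
```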
On 17/04/2023 06:17, Andrei Borzenkov wrote:
On 16.04.2023 16:29, lejeczek via Users wrote:
On 16/04/2023 12:54, Andrei Borzenkov wrote:
On 16.04.2023 13:40, lejeczek via Users wrote:
Hi guys
Some agents do employ that concept of node/host map
which I
do not see in any manual/docs
On 16/04/2023 12:54, Andrei Borzenkov wrote:
On 16.04.2023 13:40, lejeczek via Users wrote:
Hi guys
Some agents do employ that concept of node/host map which I
do not see in any manual/docs that this agent does - would
you suggest some technique or tips on how to achieve
similar?
I'm
Hi guys
Some agents do employ that concept of node/host map which I
do not see in any manual/docs that this agent does - would
you suggest some technique or tips on how to achieve similar?
I'm thinking specifically of 'migrate' here, as I understand
'migration' just uses OS' own resolver to
Hi guys.
I'd prefer to avoid putting 'mysqld_t' into permissive side
of SELinux - which apparently would be the easiest way out -
so I wonder: does anybody here have a SELinux module which
would make Galera agent run successfully?
Devel/authors, I think, ignored that part - that critical
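Short of a ready-made module, the standard way to build a local one from the actual denials (policycoreutils tooling; the module name is illustrative, and adjust the -c comm filter to your galera binary):

```shell
# collect recent AVC denials and turn them into a local policy module
ausearch -m avc -ts recent -c mysqld | audit2allow -M galera-local
semodule -i galera-local.pp
```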
Hi guys.
I have a simple 2-node cluster with redundant links and I
wonder why status reports like this:
...
Node List:
* Node swir (1): online, feature set 3.16.2
* Node whale (2): online, feature set 3.16.2
...
PCSD Status:
swir: Online
whale: Offline
...
Cluster's config:
...
Nodes:
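Hedged note: the "PCSD Status" block reflects the pcs daemon reachable on TCP 2224, not corosync membership, so a node can be online in the cluster while pcsd shows Offline; quick checks on the affected node:

```shell
systemctl status pcsd
ss -tlnp | grep 2224   # is pcsd listening?
pcs host auth whale    # re-auth if tokens went stale (newer pcs syntax)
```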
On 03/01/2023 21:44, Ken Gaillot wrote:
On Tue, 2023-01-03 at 18:18 +0100, lejeczek via Users wrote:
On 03/01/2023 17:03, Jehan-Guillaume de Rorthais wrote:
Hi,
On Tue, 3 Jan 2023 16:44:01 +0100
lejeczek via Users wrote:
To get/have Postgresql cluster with 'pgsqlms' resource
On 03/01/2023 17:03, Jehan-Guillaume de Rorthais wrote:
Hi,
On Tue, 3 Jan 2023 16:44:01 +0100
lejeczek via Users wrote:
To get/have Postgresql cluster with 'pgsqlms' resource, such
cluster needs a 'master' IP - what do you guys do when/if
you have multiple resources off this agent?
I
Hi guys.
To get/have a Postgresql cluster with the 'pgsqlms' resource,
such a cluster needs a 'master' IP - what do you guys do when/if
you have multiple resources off this agent?
I wonder if it is possible to keep just one IP and have all
those resources go to it - probably 'scoring' would be very
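The usual pattern (sketched from the resource names that appear elsewhere in these threads; the netmask is illustrative) is one IP per pgsqlms clone, glued to the promoted role:

```shell
pcs resource create HA-10-1-1-84 ocf:heartbeat:IPaddr2 ip=10.1.1.84 cidr_netmask=24
pcs constraint colocation add HA-10-1-1-84 with Promoted PGSQL-PAF-5435-clone INFINITY
pcs constraint order promote PGSQL-PAF-5435-clone then start HA-10-1-1-84
```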
On 28/12/2022 21:53, Reid Wahl wrote:
On Wed, Dec 28, 2022 at 6:08 AM lejeczek via Users
wrote:
Hi guys.
I have a situation which begins to look like quite the pickle and I'm in it,
with no possible or no elegant at least, way out.
I'm hoping you guys can share your thoughts.
My cluster
Hi guys.
I have a situation which begins to look like quite the
pickle and I'm in it, with no possible, or at least no
elegant, way out.
I'm hoping you guys can share your thoughts.
My cluster mounts a path, in two steps
1) runs systemd luks service
2) mount that unlocked luks device under a
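A hedged sketch of wiring those two steps into the cluster itself (unit, device and mount point names are placeholders):

```shell
pcs resource create luks-unlock systemd:systemd-cryptsetup@mydisk
pcs resource create data-mount ocf:heartbeat:Filesystem \
    device=/dev/mapper/mydisk directory=/srv/data fstype=xfs
pcs constraint order start luks-unlock then start data-mount
pcs constraint colocation add data-mount with luks-unlock INFINITY
```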
Hi guys.
This might be tricky but also can be trivial, I'm hoping for
the latter of course.
Ordering Constraints:
...
start non-clone then start a-clone-resource (kind:Mandatory)
...
Now, when 'non-clone' gets relocated, what then happens to
'a-clone-resource'?
or.. for that matter, to any
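Hedged answer sketch: with kind:Mandatory and the default symmetrical ordering, restarting or relocating 'non-clone' forces the dependent's stop/start too, while kind:Optional only orders actions that happen to land in the same transition:

```shell
# relax the coupling if the clone should survive relocations (sketch)
pcs constraint order start non-clone then start a-clone-resource kind=Optional
```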
On 28/07/2022 00:33, Reid Wahl wrote:
On Wed, Jul 27, 2022 at 2:08 AM lejeczek via Users
wrote:
On 26/07/2022 20:56, Reid Wahl wrote:
On Tue, Jul 26, 2022 at 4:21 AM lejeczek via Users
wrote:
Hi guys
I set up a clone of a new instance of mariadb galera - which otherwise,
outside of pcs
On 26/07/2022 20:56, Reid Wahl wrote:
On Tue, Jul 26, 2022 at 4:21 AM lejeczek via Users
wrote:
Hi guys
I set up a clone of a new instance of mariadb galera - which otherwise,
outside of pcs works - but I see something weird.
Firstly cluster claims it's all good:
-> $ pcs status --f
Hi guys
I set up a clone of a new instance of mariadb galera - which otherwise,
outside of pcs works - but I see something weird.
Firstly cluster claims it's all good:
-> $ pcs status --full
...
* Clone Set: mariadb-apps-clone [mariadb-apps] (promotable):
* mariadb-apps
Hi guys.
I have a peculiar case - to me at least - here.
Unless I tell the cluster to move the resource to a node (two-node
cluster) with '--master', the cluster keeps both nodes as "slaves".
As soon as I remove such a constraint, both nodes start to log:
...
3442363:M 07 Jul 2022 20:05:31.561 # Setting
On 08/03/2022 16:20, Jehan-Guillaume de Rorthais wrote:
Removing the node attributes with the resource might be legit from the
Pacemaker point of view, but I'm not sure how they can track the dependency
(ping Ken?).
PAF has no way to know the resource is being deleted and can not remove its
On 08/03/2022 10:21, Jehan-Guillaume de Rorthais wrote:
op start timeout=60s \
op stop timeout=60s \
op promote timeout=30s \
op demote timeout=120s \
op monitor interval=15s timeout=10s role="Master" \
meta master-max=1 \
op monitor interval=16s timeout=10s role="Slave" \
op notify
On 21/02/2022 16:01, Jehan-Guillaume de Rorthais wrote:
On Mon, 21 Feb 2022 09:04:27 +
CHAMPAGNE Julie wrote:
...
The last release is 2 years old, is it still in development?
There's no activity because there's not much to do on it. PAF is mainly in
maintenance (bug fix) mode.
I have few
Hi guys.
With CentOS 9's packages & binaries, libgfapi is removed from
libvirt/qemu, which means that if you want to use GlusterFS
for VM image storage you have to expose its volumes via an FS
mount point - that is how I understand these changes - which
seems to cause quite a problem for HA.
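Without libgfapi, the common fallback is the FUSE client, e.g. (host, volume and mount point names are placeholders; backup-volfile-servers keeps the mount usable when one peer is down):

```shell
mount -t glusterfs 10.1.1.100:/VMs /var/lib/libvirt/images \
    -o backup-volfile-servers=10.1.1.101:10.1.1.102
```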
On 07/02/2022 19:21, Antony Stone wrote:
On Monday 07 February 2022 at 20:09:02, lejeczek via Users wrote:
Hi guys
How do you guys go about doing link up/down as a resource?
I apply or remove addresses on the interface, using "IPaddr2" and "IPv6addr",
which I know i
Hi guys
How do you guys go about doing link up/down as a resource?
many thanks, L.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home: https://www.clusterlabs.org/
Hi guys
I'm having problems with cluster on new CentOS Stream 9 and
I'd be glad if you can share your thoughts.
-> $ pcs resource move c8kubermaster2 swir
Location constraint to move resource 'c8kubermaster2' has
been created
Waiting for the cluster to apply configuration changes...
hi guys
I've always been a RHEL/Fedora user and memories of times
before 'systemd' have almost completely vacated my brain -
nowadays, would you know if it is possible to have HA without
systemd, and if so, how would that work?
many thanks, L.
___
Manage
On 15/12/2021 08:16, Michele Baldessari wrote:
Hi,
On Tue, Dec 14, 2021 at 03:09:56PM +, lejeczek via Users wrote:
I failed to find any good info (or any, for that matter) on present-day
mariadb/mysql galera cluster setups - would you know of any docs which
discuss such a scenario
Hi guys
I failed to find any good info (or any, for that matter) on
present-day mariadb/mysql galera cluster setups - would you
know of any docs which discuss such a scenario comprehensively?
I see there among resources are 'mariadb' and 'galera' but
which one to use I'm still confused.
many
Hi!
My guess is that you checked the corresponding logs already; why not show them
here?
I can imagine that the VMs die rather early after start.
Regards,
Ulrich
lejeczek via Users wrote on 10.12.2021 at 17:33 in
message:
Hi guys.
I quite often.. well, too frequently in my mind
On 10/12/2021 21:17, Ken Gaillot wrote:
On Fri, 2021-12-10 at 16:33 +, lejeczek via Users wrote:
Hi guys.
I quite often.. well, too frequently in my mind, see a VM
which cluster says:
-> $ pcs resource status | grep -v disabled
...
* c8kubermaster2(ocf::heartbeat:VirtualDom
Hi guys.
I quite often.. well, too frequently in my mind, see a VM
which cluster says:
-> $ pcs resource status | grep -v disabled
...
* c8kubermaster2 (ocf::heartbeat:VirtualDomain):
Started dzien
..
but that is false, also cluster itself confirms it:
-> $ pcs resource debug-monitor
...for fixing DMARC (my Yahoo).
many thanks, L.
On 26/08/2021 10:35, Klaus Wenninger wrote:
On Thu, Aug 26, 2021 at 11:13 AM lejeczek via Users
wrote:
Hi guys.
I sometimes - I think I know when in terms of any
pattern -
get resources stuck on one node (two-node c