Bonjour Thierry,
On Mon, 25 Mar 2024 10:55:06 +
FLORAC Thierry wrote:
> I'm trying to create a PostgreSQL master/slave cluster using streaming
> replication and pgsqlms agent. Cluster is OK but my problem is this : the
> master node is sometimes restarted for system operations, and the
On Wed, 31 Jan 2024 18:23:40 +0100
lejeczek via Users wrote:
> On 31/01/2024 17:13, Jehan-Guillaume de Rorthais wrote:
> > On Wed, 31 Jan 2024 16:37:21 +0100
> > lejeczek via Users wrote:
> >
> >>
> >> On 31/01/2024 16:06, Jehan-Guillaume de Rorthais w
On Wed, 31 Jan 2024 16:37:21 +0100
lejeczek via Users wrote:
>
>
> On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
> > On Wed, 31 Jan 2024 16:02:12 +0100
> > lejeczek via Users wrote:
> >
> >>
> >> On 29/01/2024 17:22, Ken Gaillot w
On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users wrote:
>
>
> On 29/01/2024 17:22, Ken Gaillot wrote:
> > On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users wrote:
> >> Hi guys.
> >>
> >> Is it possible to trigger some... action - I'm thinking specifically
> >> at shutdown/start.
> >>
On Wed, 31 Jan 2024 15:41:28 +0100
Adam Cecile wrote:
[...]
> Thanks a lot for your suggestion, it seems I have something that works
> correctly now, the final configuration is:
I would recommend configuring in an offline CIB then pushing it to production
as a whole. E.g.:
# get current CIB
pcs
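For illustration, the full offline-CIB round trip usually looks like the sketch
below; the resource name dummy1, the node name node1 and the file name
cluster.xml are placeholders of mine, not taken from the thread:

```shell
# Dump the current CIB into a local file
pcs cluster cib cluster.xml

# Edit the offline copy with -f; nothing touches the live cluster yet
pcs -f cluster.xml resource create dummy1 ocf:pacemaker:Dummy
pcs -f cluster.xml constraint location dummy1 prefers node1

# Push the whole batch of changes to production in one step
pcs cluster cib-push cluster.xml --config
```

These commands obviously need a live cluster, so take them as a sketch of the
workflow rather than something to copy blindly.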
On Wed, 24 Jan 2024 16:47:54 -0600
Ken Gaillot wrote:
...
> > Erm. Well, as this is a major upgrade where we can affect people's
> > conf and
> > break old things & so on, I'll jump in this discussion with a
> > wishlist to
> > discuss :)
> >
>
> I made sure we're tracking all these (links
Hi there !
On Wed, 03 Jan 2024 11:06:27 -0600
Ken Gaillot wrote:
> Hi all,
>
> I'd like to release Pacemaker 3.0.0 around the middle of this year.
> I'm gathering proposed changes here:
>
> https://projects.clusterlabs.org/w/projects/pacemaker/pacemaker_3.0_changes/
>
> Please review for
On Fri, 8 Dec 2023 17:11:58 +0100
lejeczek via Users wrote:
...
> Apologies, perhaps I was quite vague.
> I was thinking - having a 3-node HA cluster and 3-node
> single-master->slaves pgSQL, now..
> say, I want pgSQL masters to spread across the HA cluster so in
> theory - having each HA node
Hi,
On Wed, 6 Dec 2023 10:36:39 +0100
lejeczek via Users wrote:
> How do you colocate your promoted resources with balancing
> underlying resources as priority?
What do you mean?
> With a simple scenario, say
> 3 nodes and 3 pgSQL clusters
> what would be best possible way - I'm thinking
On Fri, 10 Nov 2023 20:34:40 +0100
lejeczek via Users wrote:
> On 10/11/2023 18:16, Jehan-Guillaume de Rorthais wrote:
> > On Fri, 10 Nov 2023 17:17:41 +0100
> > lejeczek via Users wrote:
> >
> > ...
> >>> Of course you can use "pg_stat_tmp"
> -----Original Message-----
> From: Users On Behalf Of Jehan-Guillaume de
> Rorthais via Users Sent: Friday, November 10, 2023 1:13 PM
> To: lejeczek via Users
> Cc: Jehan-Guillaume de Rorthais
> Subject: [EXT] Re: [ClusterLabs] PAF / pgSQL fails after OS/system shutdown -
> FIX
On Fri, 10 Nov 2023 17:17:41 +0100
lejeczek via Users wrote:
...
> > Of course you can use "pg_stat_tmp", just make sure the temp folder exists:
> >
> >cat <<EOF > /etc/tmpfiles.d/postgresql-part.conf
> ># Directory for PostgreSQL temp stat files
> >d /var/run/postgresql/14-paf.pg_stat_tmp
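For reference, a full tmpfiles.d entry also carries mode, user, group and age
fields; the values below are my assumption for a typical Debian/Ubuntu
PostgreSQL setup, not something stated in the thread:

```
# /etc/tmpfiles.d/postgresql-part.conf
# type  path                                    mode  user      group     age
d       /var/run/postgresql/14-paf.pg_stat_tmp  0700  postgres  postgres  -
```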
On Fri, 10 Nov 2023 12:27:24 +0100
lejeczek via Users wrote:
...
> >
> to share my "fix" for it - perhaps it was introduced by
> OS/packages (Ubuntu 22) updates - ? - as opposed to the
> resource agent itself.
>
> As the logs point out - pg_stat_tmp - is missing and from
> what I see it's only
On Wed, 13 Sep 2023 17:32:01 +0200
lejeczek via Users wrote:
> On 08/09/2023 17:29, Jehan-Guillaume de Rorthais wrote:
> > On Fri, 8 Sep 2023 16:52:53 +0200
> > lejeczek via Users wrote:
> >
> >> Hi guys.
> >>
> >> Before I start fiddling
On Fri, 8 Sep 2023 16:52:53 +0200
lejeczek via Users wrote:
> Hi guys.
>
> Before I start fiddling and break things I wonder if
> somebody knows if:
> pgSQL can work with wal_level = archive for PAF ?
> Or a more general question which pertains to wal_level - can
> _barman_ be used with
On Fri, 8 Sep 2023 10:26:42 +0200
lejeczek via Users wrote:
> On 07/09/2023 16:20, lejeczek via Users wrote:
> >
> >
> > On 07/09/2023 16:09, Andrei Borzenkov wrote:
> >> On Thu, Sep 7, 2023 at 5:01 PM lejeczek via Users
> >> wrote:
> >>> Hi guys.
> >>>
> >>> I'm trying to set
On Fri, 5 May 2023 10:08:17 +0200
lejeczek via Users wrote:
> On 25/04/2023 14:16, Jehan-Guillaume de Rorthais wrote:
> > Hi,
> >
> > On Mon, 24 Apr 2023 12:32:45 +0200
> > lejeczek via Users wrote:
> >
> >> I've been looking up and fiddling with this
Hi,
On Mon, 24 Apr 2023 12:32:45 +0200
lejeczek via Users wrote:
> I've been looking up and fiddling with this RA but
> unsuccessfully so far, that I wonder - is it good for
> current versions of pgSQLs?
As far as I know, the pgsql agent is still supported; the last commit on it
happened in Jan
On Tue, 21 Mar 2023 11:47:23 +0100
Jérôme BECOT wrote:
> On 21/03/2023 at 11:00, Jehan-Guillaume de Rorthais wrote:
> > Hi,
> >
> > On Tue, 21 Mar 2023 09:33:04 +0100
> > Jérôme BECOT wrote:
> >
> >> We have several clusters run
Hi,
On Tue, 21 Mar 2023 09:33:04 +0100
Jérôme BECOT wrote:
> We have several clusters running for different zabbix components. Some
> of these clusters consist of 2 zabbix proxies,where nodes run Mysql,
> Zabbix-proxy server and a VIP, and a corosync-qdevice.
I'm not sure I understand your
Hi,
What about using the Dummy resource agent (ocf_heartbeat_dummy(7)) and collocate
it with your IP address? This RA creates a local file on start and removes it
on stop. The game now is to watch for this path from a systemd path unit and
trigger the reload when file appears. See
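The watch-from-systemd part could look like the sketch below; the unit names
and the state-file path are assumptions of mine (the Dummy agent has a "state"
parameter, so check where your instance actually writes its file):

```
# /etc/systemd/system/watch-vip-flag.path  (hypothetical name)
[Unit]
Description=Watch for the Dummy resource state file

[Path]
# Path assumed from the agent's default; adjust to your "state" parameter
PathExists=/var/run/Dummy-vip-flag.state
Unit=reload-my-service.service

[Install]
WantedBy=multi-user.target
```

When the file appears, systemd starts reload-my-service.service, which would
carry the actual reload command.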
Hi,
I definitely have some work/improvements to do on the pgsqlms agent, but
there are still some details I'm interested in discussing below.
On Fri, 6 Jan 2023 16:36:19 -0800
Reid Wahl wrote:
> On Fri, Jan 6, 2023 at 3:26 PM Jehan-Guillaume de Rorthais via Users
> wrote:
>>
>>
> On Tue, 2023-01-03 at 18:18 +0100, lejeczek via Users wrote:
> >>>> On 03/01/2023 17:03, Jehan-Guillaume de Rorthais wrote:
> >>>>> Hi,
> >>>>>
> >>>>> On Tue, 3 Jan 2023 16:44:01 +0100
> >>>>> lejecze
Hi,
On Tue, 3 Jan 2023 16:44:01 +0100
lejeczek via Users wrote:
> To get/have Postgresql cluster with 'pgsqlms' resource, such
> cluster needs a 'master' IP - what do you guys do when/if
> you have multiple resources off this agent?
> I wonder if it is possible to keep just one IP and have
On Mon, 7 Nov 2022 14:06:51 +
Robert Hayden wrote:
> > -Original Message-
> > From: Users On Behalf Of Valentin Vidic
> > via Users
> > Sent: Sunday, November 6, 2022 5:20 PM
> > To: users@clusterlabs.org
> > Cc: Valentin Vidić
> > Subject: Re: [ClusterLabs] [External] : Re: Fence
On Sat, 5 Nov 2022 20:54:55 +
Robert Hayden wrote:
> > -Original Message-
> > From: Jehan-Guillaume de Rorthais
> > Sent: Saturday, November 5, 2022 3:45 PM
> > To: users@clusterlabs.org
> > Cc: Robert Hayden
> > Subject: Re: [ClusterLabs
On Sat, 5 Nov 2022 20:53:09 +0100
Valentin Vidić via Users wrote:
> On Sat, Nov 05, 2022 at 06:47:59PM +, Robert Hayden wrote:
> > That was my impression as well...so I may have something wrong. My
> > expectation was that SBD daemon should be writing to the /dev/watchdog
> > within 20
On Mon, 3 Oct 2022 14:45:49 +0200
Tomas Jelinek wrote:
> On 28. 09. 22 at 18:22, Jehan-Guillaume de Rorthais via Users wrote:
> > Hi,
> >
> > A small addendum below.
> >
> > On Wed, 28 Sep 2022 11:42:53 -0400
> > "Kevin P. Fleming" wrote:
Hi,
A small addendum below.
On Wed, 28 Sep 2022 11:42:53 -0400
"Kevin P. Fleming" wrote:
> On Wed, Sep 28, 2022 at 11:37 AM Dave Withheld
> wrote:
> >
> > Is it possible to get corosync to use the private network and stop trying
> > to use the LAN for cluster communications? Or am I totally
On Wed, 28 Sep 2022 02:33:59 -0400
Madison Kelly wrote:
> ...
> I'm happy to go into more detail, but I'll stop here until/unless you have
> more questions. Otherwise I'd write a book. :)
I would buy it ;)
___
Manage your subscription:
Hey,
On Wed, 7 Sep 2022 19:12:53 +0900
권오성 wrote:
> Hello.
> I am a student who wants to implement a redundancy system with raspberry pi.
> Last time, I posted about how to proceed with installation on raspberry pi
> and received a lot of comments.
> Among them, I searched a lot after looking
Hi,
On Wed, 22 Jun 2022 16:36:03 +
CHAMPAGNE Julie wrote:
> ...
> # pcs resource create pgsqld ocf:heartbeat:pgsqlms \
> pgdata="/etc/postgresql/11/main" \
> bindir="/usr/lib/postgresql/11/bin" \
> datadir="/var/lib/postgresql/11/main" \
>
Hi,
On Tue, 15 Mar 2022 12:35:11 -0400
"john tillman" wrote:
> I'm trying to guarantee that all my cloned drbd resources start on the
> same node and I can't figure out the syntax of the constraint to do it.
>
> I could nominate one of the drbd resources as a "leader" and have all the
> others
On Tue, 8 Mar 2022 17:44:36 +
lejeczek via Users wrote:
> On 08/03/2022 16:20, Jehan-Guillaume de Rorthais wrote:
> > Removing the node attributes with the resource might be legit from the
> > Pacemaker point of view, but I'm not sure how they can track the dependenc
Hi,
Sorry, your mail was really hard to read on my side, but I think I understood
and will try to answer below.
On Tue, 8 Mar 2022 11:45:30 +
lejeczek via Users wrote:
> On 08/03/2022 10:21, Jehan-Guillaume de Rorthais wrote:
> >> op start timeout=60s \ op stop timeout=60s \ op pro
read this page as well:
https://clusterlabs.github.io/PAF/administration.html
Regards,
> -----Original Message-----
> From: Jehan-Guillaume de Rorthais
> Sent: Tuesday, March 8, 2022 11:21
> To: CHAMPAGNE Julie
> Cc: Cluster Labs - All topics related to open-source clustering welc
Hi,
On Tue, 8 Mar 2022 08:00:22 +
CHAMPAGNE Julie wrote:
> I've created the resource pgsqld as follows (don't think the cluster creation
> command is necessary):
>
> pcs resource create pgsqld ocf:heartbeat:pgsqlms promotable \
The problem is here. The argument order given to pcs is
On Mon, 7 Mar 2022 14:49:35 +
CHAMPAGNE Julie wrote:
> The return gives nothing for the first command.
> Then:
>
> name="test-debug" host="node1" value="testvalue" for node1.
>
> After executing both commands on node2, it gives me the following return on
> both server:
>
>
On Mon, 7 Mar 2022 14:32:46 +
CHAMPAGNE Julie wrote:
> root@node1 ~ > attrd_updater --private --lifetime reboot --name
> "lsn_location-pgsqld" --query
> Could not query value of lsn_location-pgsqld: attribute does not exist
Mh, sorry, could you please exec these two commands:
Hi,
Caution, this is an English-speaking mailing list :)
Below is my answer.
On Mon, 7 Mar 2022 12:31:07 +
CHAMPAGNE Julie wrote:
> Lorsque je crée un problème sur le noeud1, [When I create a problem on node1,]
What's the issue you are testing precisely?
> * pgsqld_promote_0 on node2 'error' (1): call=24,
Hi,
On Wed, 2 Mar 2022 14:39:40 +0100
damiano giuliani wrote:
> ...
> my question is: what happens in case of failover of the master on another
> node to the wal logs that i am archiving to build the incrementals?
The new primary is supposed to archive WALs to your backup server.
> would I
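To make that true whatever node is primary, the archiving setup is usually
identical on every node; the host and server names below are placeholders,
assuming the barman-cli tools are in use:

```
# postgresql.conf -- same on all nodes, so whichever node gets promoted
# keeps shipping WALs to the backup host
archive_mode = on
archive_command = 'barman-wal-archive backup-host my-server %p'
```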
On Tue, 22 Feb 2022 12:25:15 +0100
Oyvind Albrigtsen wrote:
> ...
> >Ping Oyvind, maybe you have some input about this as the resource-agents
> >package maintainer?
> I don't know how it got excluded on CentOS Stream only, but I've
> created a bz to fix it:
>
Hello,
On Tue, 22 Feb 2022 09:27:16 +
lejeczek via Users wrote:
> ...
> Perhaps as the author(s) you can chip in and/or help via comments to
> rectify this:
>
> ...
>
> Problem: package resource-agents-paf-4.9.0-7.el8.x86_64 requires
PAF doesn't share the same release plans as the
On Mon, 21 Feb 2022 09:04:27 +
CHAMPAGNE Julie wrote:
...
> The last release is 2 years old, is it still in development?
There's no activity because there's not much to do on it. PAF is mainly in
maintenance (bug fix) mode.
I have a few ideas here and there. They might land sooner or later, but
Hello,
On Fri, 18 Feb 2022 21:44:58 +
"Larry G. Mills" wrote:
> ... This happened again recently, and the running primary DB was demoted and
> then re-promoted to be the running primary. What I'm having trouble
> understanding is why the running Master/primary DB was demoted. After the
>
On Fri, 11 Feb 2022 08:07:33 +0100
"Ulrich Windl" wrote:
> >> Jehan-Guillaume de Rorthais wrote on 10.02.2022 at
> 16:40 in message <20220210164000.2e395a37@karst>:
> > ...
> > I wonder if after the cluster shutdown complete, the target-role=Stop
On Thu, 10 Feb 2022 22:15:07 +0800
Roger Zhou via Users wrote:
>
> On 2/9/22 17:46, Lentes, Bernd wrote:
> >
> >
> > - On Feb 7, 2022, at 4:13 PM, Jehan-Guillaume de Rorthais
> > j...@dalibo.com wrote:
> >
> >> On Mon, 7 Feb 2022 14:
On Thu, 10 Feb 2022 15:10:20 +0100
"Ulrich Windl" wrote:
...
> > If you want to gracefully shutdown your cluster, then you can add one
> manual
> > step to first gracefully stop your resources instead of betting that the
> > cluster will do the right thing.
>
> It's the old discussion: Old HP
On Wed, 9 Feb 2022 17:42:35 + (UTC)
Strahil Nikolov via Users wrote:
> If you gracefully shutdown a node - pacemaker will migrate all resources away
> so you need to shut them down simultaneously and all resources should be
> stopped by the cluster.
>
> Shutting down the nodes would be my
On Wed, 9 Feb 2022 10:46:30 +0100 (CET)
"Lentes, Bernd" wrote:
> - On Feb 7, 2022, at 4:13 PM, Jehan-Guillaume de Rorthais j...@dalibo.com
> wrote:
>
> > On Mon, 7 Feb 2022 14:24:44 +0100 (CET)
> > "Lentes, Bernd" wrote:
> >
> >> H
On Mon, 7 Feb 2022 14:24:44 +0100 (CET)
"Lentes, Bernd" wrote:
> Hi,
>
> i'm currently changing a bit in my cluster because i realized that my
> configuration for a power outage didn't work as i expected. My idea is
> currently:
> - first stop about 20 VirtualDomains, which are my services.
On Mon, 31 Jan 2022 08:49:44 +0100
Klaus Wenninger wrote:
...
> Depending on the environment it might make sense to think about
> having the manual migration-step controlled by the cluster(s) using
> booth. Just thinking - not a specialist on that topic ...
Could you elaborate a bit on this?
Hi,
On Sat, 29 Jan 2022 16:51:47 -0500
Digimer wrote:
> ...
> Though going back to the original question, deleting the server from
> pacemaker while the VM is left running, is still something I am quite curious
> about.
As the real resource moved away, meaning it couldn't be stopped locally
On Fri, 21 Jan 2022 18:17:04 +0100,
damiano giuliani wrote:
> Ehy,
>
> Take into account that when a master node crashes, you should re-align the old
> master into the slave using pg_basebackup/pg_rewind and then rejoin the
> node into the cluster as a slave. This is the only way to avoid data
>
Hi,
Under EL and Debian, there's a PCMK_debug variable (iirc) in
"/etc/sysconfig/pacemaker" or "/etc/default/pacemaker".
Comments in there explain how to set debug mode for part or all of the
pacemaker processes.
This might be the environment variable you are looking for?
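As a sketch, it looks like this; the daemon list on the last line is only an
illustration, the comments in the file give the exact names for your version:

```
# /etc/sysconfig/pacemaker (EL) or /etc/default/pacemaker (Debian)
# Debug logging for all Pacemaker daemons...
PCMK_debug=yes
# ...or only for a subset of them
#PCMK_debug=pacemaker-execd,pacemaker-controld
```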
Regards,
Le 31
On Tue, 12 Oct 2021 09:46:04 +0200
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 12.10.2021 at
> >>> 09:35 in
> message <20211012093554.4bb761a2@firost>:
> > On Tue, 12 Oct 2021 08:42:49 +0200
> > "Ulrich Wind
On Tue, 12 Oct 2021 08:42:49 +0200
"Ulrich Windl" wrote:
> ...
> >> sysctl ‑a | grep dirty
> >> vm.dirty_background_bytes = 0
> >> vm.dirty_background_ratio = 10
> >
> > Considering your 256GB of physical memory, this means you can dirty up to
> > 25GB
> > pages in cache before the kernel
Hi,
I kept your full answer in the history below to keep the list informed.
My answer down below.
On Mon, 11 Oct 2021 11:33:12 +0200
damiano giuliani wrote:
> ehy guys sorry for being late, was busy during the WE
>
> here i im:
>
>
> > Did you see the swap activity (in/out, not
On Sat, 9 Oct 2021 09:55:28 +0300
Andrei Borzenkov wrote:
> On 08.10.2021 16:00, damiano giuliani wrote:
> > ...
> > the servers are all resource overkill with 80 cpus and 256 gb ram even if
> > the db ingests millions of records per day, the network is bonded 10gbs, ssd disks.
I don't remember if we
On 9 October 2021 at 00:11:27 GMT+02:00, Strahil Nikolov
wrote:
>What do you mean by 1s default timeout ?
I suppose Damiano is talking about the corosync totem token timeout.
On Fri, 8 Oct 2021 15:00:30 +0200
damiano giuliani wrote:
> Hi Guys,
Hi,
Good to hear from you, thanks for the follow-up!
My answer below.
> ...
> So it turn out that a lil bit of swap was used and i suspect corosync
> process were swapped to disks creating lag where 1s default corosync
>
On Fri, 23 Jul 2021 12:52:00 +0200
damiano giuliani wrote:
> the query time isn't the problem, it's known that it takes its time. the network
> is 10gbs bonding, quite impossible to saturate with queries :=).
Everything is possible, it's just harder :)
[...]
> checking again the logs what for me is not
On Thu, 22 Jul 2021 15:36:03 +0200
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 22.07.2021 at
> 12:05 in
> message <20210722120537.0d65c2a1@firost>:
> > On Wed, 21 Jul 2021 22:02:21 -0400
> > "Frank D. Engel, Jr.
On Sat, 19 Jun 2021 08:32:02 +0100
lejeczek wrote:
> I've just yesterday updated OS packages among which some
> were for various PCS components, to versions:
> corosynclib-3.1.0-5.el8.x86_64
> pacemaker-schemas-2.1.0-2.el8.noarch
> pacemaker-cluster-libs-2.1.0-2.el8.x86_64
>
Hi,
On Wed, 14 Jul 2021 07:58:14 +0200
"Ulrich Windl" wrote:
[...]
> Could it be that a command saturated the network?
> Jul 13 00:39:28 ltaoperdbs02 postgres[172262]: [20-1] 2021-07-13 00:39:28.936
> UTC [172262] LOG: duration: 660.329 ms execute <unnamed>: SELECT
> xmf.file_id, f.size, fp.full_path
On Thu, 22 Jul 2021 13:10:45 +0300
Andrei Borzenkov wrote:
> On Thu, Jul 22, 2021 at 1:05 PM Jehan-Guillaume de Rorthais
> wrote:
> > To reword with regard to the current topic: if Pacemaker is able
> > to stop its resources after a quorum loss, it will not
On Thu, 22 Jul 2021 12:56:40 +0300
Andrei Borzenkov wrote:
> On Thu, Jul 22, 2021 at 12:43 PM Jehan-Guillaume de Rorthais
> wrote:
> >
> > On Wed, 21 Jul 2021 12:45:40 -0400
> > Digimer wrote:
> >
> > > On 2021-07-21 3:26 a.m., Jehan-Gu
On Wed, 21 Jul 2021 22:02:21 -0400
"Frank D. Engel, Jr." wrote:
> In OpenVMS, the kernel is aware of the cluster. As is mentioned in that
> presentation, it actually stops processes from running and blocks access
> to clustered storage when quorum is lost, and resumes them appropriately
>
On Wed, 21 Jul 2021 12:45:40 -0400
Digimer wrote:
> On 2021-07-21 3:26 a.m., Jehan-Guillaume de Rorthais wrote:
> > Hi,
> >
> > On Wed, 21 Jul 2021 04:28:30 + (UTC)
> > Strahil Nikolov via Users wrote:
> >
> >> Hi,
> >> consider usin
On Wed, 21 Jul 2021 04:50:09 -0400
"Frank D. Engel, Jr." wrote:
> OpenVMS can do this sort of thing without a requirement for fencing (you
> still need a third disk as a quorum device in a 2-node cluster), but
> Linux (at least in its current form) cannot.
Yes it can, as far as what you are
Hi,
On Wed, 21 Jul 2021 04:28:30 + (UTC)
Strahil Nikolov via Users wrote:
> Hi,
> consider using a 3rd system as a Q disk. Also, you can use iscsi from that
> node as a SBD device, so you will have proper fencing. If you don't have a
> hardware watchdog device, you can use the softdog kernel
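A minimal sketch of such a setup; the device path is a placeholder for
whatever the iSCSI LUN shows up as under /dev/disk/by-id/:

```
# /etc/sysconfig/sbd
SBD_DEVICE="/dev/disk/by-id/scsi-EXAMPLE_ISCSI_LUN"
SBD_WATCHDOG_DEV="/dev/watchdog"
```

With no hardware watchdog, loading the softdog module (modprobe softdog, plus
a modules-load.d entry to make it persistent) provides /dev/watchdog.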
On Thu, 15 Jul 2021 12:46:10 +0200
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 15.07.2021 at
> 10:09 in
> message <20210715100930.06b45f5b@firost>:
> > Hi all,
> >
> > On Tue, 13 Jul 2021 19:55:30 + (UTC)
>
Hi all,
On Tue, 13 Jul 2021 19:55:30 + (UTC)
Strahil Nikolov wrote:
> In some cases the third location has a single IP and it makes sense to use it
> as QDevice. If it has multiple network connections to that location - use a
> full blown node .
By the way, what's the point of multiple
On Wed, 30 Jun 2021 14:36:29 +0200
damiano giuliani wrote:
> the replication is async, having a look into the postgres logs seems some
> updates failed cuz no master available.
'Not sure I understand what you mean. As Pacemaker recovered the primary on
the same node, standbys and clients lost
Hi,
On Wed, 30 Jun 2021 13:44:28 +0200
damiano giuliani wrote:
> looks like some applications lost connection to the master, losing some
> update/insert.
>
> i found the cause into the logs, the psqld-monitor went timeout after
> 1ms and the master resource was demoted, the instance stopped and
On Wed, 26 May 2021 14:30:44 -0500
kgail...@redhat.com wrote:
> Without further comments, we've gone ahead with Libera.Chat as the new
> home of #clusterlabs. There is a new wiki page with the channel
> details:
>
> https://wiki.clusterlabs.org/wiki/ClusterLabs_IRC_channel
>
> so we can just
On Wed, 28 Apr 2021 12:00:40 -0500
Ken Gaillot wrote:
> On Wed, 2021-04-28 at 18:14 +0200, Jehan-Guillaume de Rorthais wrote:
> > Hi all,
> >
> > It seems to me the concern raised by Ulrich hasn't been discussed:
> >
> > On Wed, 12 Apr 2021 Ulrich Windl wrote
Hi all,
It seems to me the concern raised by Ulrich hasn't been discussed:
On Wed, 12 Apr 2021 Ulrich Windl wrote:
> Personally I think an RA calling crm_mon is inherently broken: Will it ever
> pass ocf-tester?
Would it be possible to rely on the following command ?
cibadmin --query
On Mon, 26 Apr 2021 18:04:41 + (UTC)
Strahil Nikolov wrote:
> I prefer that the stack is auto enabled. Imagine that you got a DB that is
> replicated and primary DB node is fenced. You would like that node to join
> the cluster and if possible to sync with the new primary instead of staying
On Tue, 13 Apr 2021 12:17:38 +0200
"Ulrich Windl" wrote:
[...]
> >good for SUSE! unfortunately RHEL didn't include the utility...
>
> Technically it should work, but there could be "political" reasons.
A few years ago, it was more for incompatibility reasons than political ones.
I'm not sure
On Sun, 11 Apr 2021 16:03:34 +0100
lejeczek wrote:
> On 10/04/2021 16:19, Jehan-Guillaume de Rorthais wrote:
> >
> > On 10 April 2021 at 14:22:34 GMT+02:00, lejeczek
> > wrote:
> >> Hi guys.
> >>
> >> Any users perhaps experts on PA
On Sun, 11 Apr 2021 04:21:02 + (UTC)
Strahil Nikolov wrote:
> Better check for a location constraint created via 'pcs resource move'!
> pcs constraint location --full | grep cli
> Best Regards,
> Strahil Nikolov
Oh, yes this is a good one, this should probably enter our FAQ.
Thanks,
On 10 April 2021 at 14:22:34 GMT+02:00, lejeczek wrote:
>Hi guys.
>
>Any users perhaps experts on PAF agent if happen to read
>this - a question - with pretty regular 3-node cluster when
>node on which "master" runs goes down then cluster/agent
>successfully moves 'master' to a next node.
Hi,
I'm one of the PAF authors, so I'm biased.
On Fri, 26 Mar 2021 14:51:28 +
Isaac Pittman wrote:
> My team has the opportunity to update our PostgreSQL resource agent to either
> PAF (https://github.com/ClusterLabs/PAF) or pgsql
>
On Thu, 18 Mar 2021 17:29:59 +0900
井上和徳 wrote:
> On Tue, Mar 16, 2021 at 10:23 PM Jehan-Guillaume de Rorthais
> wrote:
> >
> > > On Tue, 16 Mar 2021, 09:58 井上和徳, wrote:
> > >
> > > > Hi!
> > > >
> > > > Cluster (corosync an
> On Tue, 16 Mar 2021, 09:58 井上和徳, wrote:
>
> > Hi!
> >
> > Cluster (corosync and pacemaker) can be started with pcs,
> > but corosync-notifyd needs to be started separately with systemctl,
> > which is not easy to use.
Maybe you can add to the [Install] section of corosync-notifyd a dependency
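I guess something along these lines; this drop-in is a sketch of mine, not a
tested recipe:

```
# /etc/systemd/system/corosync-notifyd.service.d/override.conf
[Unit]
After=corosync.service
BindsTo=corosync.service

[Install]
WantedBy=corosync.service
```

After a systemctl daemon-reload and systemctl reenable corosync-notifyd,
starting corosync would pull corosync-notifyd in with it.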
On Thu, 11 Mar 2021 17:51:15 + (UTC)
Strahil Nikolov wrote:
> Interesting...
> Yet, this doesn't explain why token of 3 causes the nodes to never
> assemble a cluster (waiting for half an hour, using wait_for_all=1) , while
> setting it to 29000 works like a charm.
>
> Thankfully we got
On Tue, 26 Jan 2021 16:15:55 +0100
Tomas Jelinek wrote:
> Dne 25. 01. 21 v 17:01 Ken Gaillot napsal(a):
> > On Mon, 2021-01-25 at 09:51 +0100, Jehan-Guillaume de Rorthais wrote:
> >> Hi Digimer,
> >>
> >> On Sun, 24 Jan 2021 15:31:22 -0500
> >> Di
On Mon, 25 Jan 2021 10:22:20 +0100
"Ulrich Windl" wrote:
> Maybe it's time for target-role=stopped">... in CIB ;-)
Could you elaborate on what would be the differences with "stop-all-resources"?
Kind regards,
the case?
AFAIK, yes, because each cluster shutdown request is handled independently at
the node level. The door is wide open for all kinds of race conditions if
requests are handled with some random lag on each node.
Regards,
--
Jehan-Guillaume de Rorthais
Dalibo
On Tue, 13 Oct 2020 04:48:04 -0400
Digimer wrote:
> On 2020-10-13 4:32 a.m., Jehan-Guillaume de Rorthais wrote:
> > On Mon, 12 Oct 2020 19:08:39 -0400
> > Digimer wrote:
> >
> >> Hi all,
> >
> > Hi you,
> >
> >>
> >&
On Mon, 12 Oct 2020 19:08:39 -0400
Digimer wrote:
> Hi all,
Hi you,
>
> I noticed that there appear to be a global "maintenance mode"
> attribute under cluster_property_set. This seems to be independent of
> node maintenance mode. It seemed to not change even when using
> 'pcs node
On Fri, 2 Oct 2020 15:18:18 +0300
Олег Самойлов wrote:
> > On 29 Sep 2020, at 11:34, Jehan-Guillaume de Rorthais
> > wrote:
> >
> >
> > Vagrant uses virtualbox by default, which supports softdog, but it supports
> > many other virtualization platforms,
On Fri, 25 Sep 2020 17:20:28 +0300
Олег Самойлов wrote:
> Sorry for the late reply. I was on leave and after this some problems at my
> work.
>
> > On 3 Sep 2020, at 17:23, Jehan-Guillaume de Rorthais
> > wrote:
> >
> > Hi,
> >
> > Thanks fo
On Wed, 16 Sep 2020 19:57:12 + (UTC)
Strahil Nikolov wrote:
> Theoretically the CIB is a file on each node,so a script that is looking for
> that file's timestamps or in the cluster's logs should work.
Good one.
This could be simple with a daemon relying on inotify. Or even simpler,
don't
On Wed, 16 Sep 2020 02:20:35 -0400
Digimer wrote:
> Is there a way to invoke a script when something happens with the
> cluster? Be it a simple transition, stonith action, resource dis/enable
> or recovery, etc?
Not exactly a trigger on all CIB changes, but Alerts are triggered on much of
the
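For instance, alerts can be wired up with the sample alert agent shipped with
Pacemaker; the install path below is the usual EL location and the id and log
file are names of mine, so adjust for your distribution:

```shell
# Register the sample file-logging alert agent
pcs alert create path=/usr/share/pacemaker/alerts/alert_file.sh.sample id=log_alert

# The recipient value is handed to the agent; here, the log file to write
pcs alert recipient add log_alert value=/var/log/pcmk_alerts.log
```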
On Fri, 4 Sep 2020 10:55:31 +0200
Oyvind Albrigtsen wrote:
> Add the "recovery.conf" parameters to postgresql.conf (except the
> standby one) and touch standby.signal (which does the same thing).
+1
> After you've verified that it's working and stop PostgreSQL you simply
> rm standby.signal
On Thu, 03 Sep 2020 10:58:54 -0500
Ken Gaillot wrote:
> [...] there are other cluster test platforms already, but none of them really
> cover everybody's desired scenarios (or is easily extensible).
I thought "ra-tester" was, among other things, about extending CTS with custom
tests? Did you
Hi,
Thanks for sharing.
I had a very quick glance at your project. I wonder if you were aware of some
existing projects/scripts that would have saved you a lot of time. Or maybe you
know them but they did not fit your needs? Here are some pointers:
# PAF vagrant files
The PAF repository has 3
On Tue, 18 Aug 2020 08:21:50 +0200
Klaus Wenninger wrote:
> On 8/18/20 7:49 AM, Andrei Borzenkov wrote:
> > On 17.08.2020 23:39, Jehan-Guillaume de Rorthais wrote:
> >> On Mon, 17 Aug 2020 10:19:45 -0500
> >> Ken Gaillot wrote:
> >>
> >>> On