Albrigtsen <oalbr...@redhat.com> wrote:
> On 25/06/24 12:21 GMT, Damiano Giuliani wrote:
> >Hi Oyvind, thanks for the explanation and for joining. My only doubt about
> >using systemd is that Pacemaker will not check the status of the
> >RabbitMQ cluster in any way, but only
set in e.g.
> the config files in /etc/ or similar).
>
>
> Oyvind Albrigtsen
>
> On 25/06/24 11:48 GMT, Damiano Giuliani wrote:
> >Hi Ken, thanks for answering.
> >Yes, unfortunately the rabbitmq-cluster agent wipes everything, and losing
> >our quorum queue is n
hich is fine with recreating the cluster from scratch after
> problems. I'm not sure about the other two, and I'm not really familiar
> with any of the agents. Hopefully someone with more experience with
> RabbitMQ can jump in.
>
> On Thu, 2024-06-20 at 10:33 +0200, Damiano Giu
Hi,
hope you guys can help me.
We have built up a RabbitMQ cluster using the Pacemaker resource called
rabbitmq-cluster.
Everything worked as expected until, for maintenance reasons, we shut down
the entire cluster gracefully.
At startup we noticed all the users and permissions were dropped and
Could it be the watchdog? Are you using a diskless watchdog? Two nodes are not
supported in diskless mode.
On Tue, Dec 5, 2023, 5:40 PM Raphael DUBOIS-LISKI <
raphael.dubois-li...@soget.fr> wrote:
> Hello,
>
>
>
> I am seeking help for the setup of an Active/Active pacemaker cluster that
> relies on a
, and we haven't
> currently been able to find a way to use the suggested replacement to
> perform the same kind of logic:
>
> https://wiki.nftables.org/wiki-nftables/index.php/Supported_features_compared_to_xtables#cluster
>
>
> Oyvind
>
> On 20/10/23 11:49 +0200, Dam
Hi guys,
I'm trying to create an IPaddr2 cloned resource for one of my projects.
I need some kind of simple but effective load balancer for my RabbitMQ
cluster managed by Pacemaker.
My current OS is AlmaLinux 8.6.
It seems the IPaddr2 cloned resource is not working / supported anymore, probably
because CLUSTE
cluster
using pacemaker on the web :/
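For context, a cloned IPaddr2 load-balancing VIP was traditionally created along these lines (the resource name, address, and clone counts below are assumptions, not from the thread); note this relies on the iptables CLUSTERIP target, which is deprecated on EL8 in favour of nftables, which is likely why it no longer works:

```shell
# Hypothetical sketch: a cloned IPaddr2 VIP shared across 3 nodes.
# clusterip_hash makes IPaddr2 use iptables' CLUSTERIP target, which is
# deprecated on RHEL/AlmaLinux 8 and removed from newer kernels.
pcs resource create rabbit-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 \
    clusterip_hash=sourceip-sourceport \
    clone globally-unique=true clone-max=3 clone-node-max=3
```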
Il giorno mar 12 set 2023 alle ore 10:28 Damiano Giuliani <
damianogiulian...@gmail.com> ha scritto:
> thanks Ken,
>
> could you point me in the right direction for a guide or some already
> working configuration?
>
> Thanks
>
> Da
ber 2023 at 17:01:24, Damiano Giuliani wrote:
>
> > Everything is clear now.
> > So the point is to use pacemaker and create the floating vip and bind it
> to
> > sqlproxy to health check and route the traffic to the available and
> healthy
> > galera nodes.
>
>
this everything together?
Thanks, your info is so precious to me!
On Wed, Sep 6, 2023, 4:24 PM Antony Stone
wrote:
> On Wednesday 06 September 2023 at 13:58:51, Damiano Giuliani wrote:
>
> > What I'm missing is how my application can support the connection on a multi
> >
another cluster (Pacemaker) for the VIP alongside
Galera?
Thanks for the time you spent.
Thanks
On Wed, Sep 6, 2023, 2:12 PM Antony Stone
wrote:
> On Wednesday 06 September 2023 at 12:50:40, Damiano Giuliani wrote:
>
> > Looking at some Galera cluster designs on web seems a coup
of server
proxy are placed in front.
If I had only 3 nodes where I clustered MySQL with Galera, how then do I
point my application to the right nodes?
On Wed, Sep 6, 2023, 1:32 PM Antony Stone
wrote:
> On Wednesday 06 September 2023 at 12:10:23, Damiano Giuliani wrote:
>
>
s
not dbms replication.
Probably I'm going to try DRBD on a very small and low-usage DB.
Thanks for sharing the doc.
The more I learn about MySQL, the more PostgreSQL seems to have better
replication, at least for me.
On Wed, Sep 6, 2023, 12:40 PM Antony Stone
wrote:
> On Wednesday 06 September 2023 at
37 AM Antony Stone
wrote:
> On Tuesday 05 September 2023 at 22:20:36, Damiano Giuliani wrote:
>
> > Hi guys, I'm about to figure out how setup a pacemaker cluster for MySQL
> > replication.
>
> Why do you need pacemaker?
>
> Why not just set up several ma
Hi guys, I'm about to figure out how to set up a Pacemaker cluster for MySQL
replication.
I'm super new to MySQL and also to its replication methods, but I'm very
experienced with Postgres clusters using PAF.
Digging around the web I found many different ways to achieve it.
I would like to know which is the m
It seems you are not using any fencing/STONITH mechanism. A cluster is not
fully functional without it.
On Thu, Aug 10, 2023, 4:03 PM Tiaan Wessels wrote:
> Hi,
>
> I need some help!
>
> I have a DRBD cluster and one node was switched off for a couple of days.
> The single node ran fine without
From my experience, I would say absolutely yes. Fencing is needed to
keep the integrity of the cluster if something suddenly and unexpectedly goes
wrong, even in a 2-nodes+qdevice setup. I have never seen a cluster without
fencing working properly.
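As a minimal sketch of what adding fencing can look like with sbd (the watchdog path is an assumption; adapt to your hardware and whether you use a shared disk):

```shell
# Enable watchdog-based sbd fencing on an existing cluster (assumed path).
pcs stonith sbd enable --watchdog=/dev/watchdog
pcs property set stonith-enabled=true
# sbd only takes effect after a full restart of the cluster stack.
pcs cluster stop --all
pcs cluster start --all
```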
On Fri, 15 Jul 2022, 15:29 Viet Nguyen, wrote:
> Hi,
>
Hello everybody,
I need some clarification regarding the backup system I was asked to
implement.
The current configuration is as follows:
3 servers running a Postgres cluster with PAF
1 server where the backup and its incrementals must be stored
My idea is to use pg_basebackup and then back u
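That idea could be sketched roughly as below (hostname, user, and paths are assumptions); note that pg_basebackup by itself only takes full base backups, so "incrementals" in practice usually means WAL archiving on top of the periodic base backup:

```shell
# Full base backup taken on the backup server, pulled from the current
# primary over the replication protocol (assumed host/user/paths).
pg_basebackup -h primary.example.com -U replicator \
    -D /backups/base/$(date +%F) \
    -Ft -z -Xs -P   # tar format, gzipped, with streamed WAL and progress
```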
I've got a few fully working clusters with PAF 2.3.0 and Postgres 13; probably
your configuration is not working correctly.
BR
Damiano
On Mon, Feb 21, 2022 at 15:44 CHAMPAGNE Julie <
julie.champa...@pm.gouv.fr> wrote:
> Hi,
>
>
>
> Does PAF 2.3.0 https://github.com/ClusterLabs/PAF/releas
Hey, I solved the issue you're talking about a few months ago: you have to
modify the .xml configuration on the Keycloak side. If you're not in a hurry,
on Monday I'll send you how I fixed it.
Damiano
On Fri, 28 Jan 2022, 20:25 Ken Gaillot, wrote:
> On Fri, 2022-01-28 at 12:15 -0500, Philip Alesio wrote:
> > Hi Everyo
Hey,
take into account that when a master node crashes, you should re-align the old
master into a slave using pg_basebackup/pg_rewind and then rejoin the
node into the cluster as a slave. This is the only way to avoid data
corruption and be sure the new slave is correctly synchronised with the new
master.
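A rough sketch of that re-alignment, assuming PostgreSQL 12+, a PAF resource named pgsqld, and default data paths (all assumptions; adjust to your setup):

```shell
# On the fenced old master, after it is back up:
systemctl stop postgresql                     # make sure Postgres is down
# pg_rewind needs wal_log_hints=on or data checksums enabled on the cluster.
pg_rewind --target-pgdata=/var/lib/pgsql/data \
          --source-server="host=new-master user=postgres dbname=postgres"
touch /var/lib/pgsql/data/standby.signal      # PG12+: start as a standby
pcs resource cleanup pgsqld                   # let Pacemaker/PAF start it as a slave
```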
Rorthais <j...@dalibo.com> wrote:
> On Fri, 23 Jul 2021 12:52:00 +0200
> damiano giuliani wrote:
>
> > the time query isnt the problem, is known that took its time. the network
> > is 10gbs bonding, quite impossible to sature with queries :=).
>
> Everyth
Hi guys, thanks for the support.
The query time isn't the problem; it's known that it takes its time. The
network is 10 Gb/s bonding, quite impossible to saturate with queries :=).
The servers are totally overkill; at full database working load, 20% of
the resources were used.
Checking again the logs wh
Jul 4, 2021 at 10:08 Klaus Wenninger <
kwenn...@redhat.com> wrote:
>
>
> On Wed, Jul 14, 2021 at 6:40 AM Andrei Borzenkov
> wrote:
>
>> On 13.07.2021 23:09, damiano giuliani wrote:
>> > Hi Klaus, thanks for helping, im quite lost because cant find ou
Hi guys,
I'm back with some PAF Postgres cluster problems.
Tonight the cluster fenced the master node and promoted the PAF resource to
a new node.
Everything went fine, except I really don't know why.
So this morning I noticed the old master had been fenced by sbd and a new master
was promoted; this happen
free hdd space.
How do you guys suggest I find out why the monitor timed out?
Thanks a lot for your support.
Pepe
On Wed, Jun 30, 2021 at 14:17 Ulrich Windl <
ulrich.wi...@rz.uni-regensburg.de> wrote:
> >>> damiano giuliani wrote on 30.06.2021
> at
Hi guys,
sorry for bothering; unfortunately I was called about an issue related to a
cluster I set up months ago which was fully functional until last Saturday.
It looks like some applications lost their connection to the master, losing some
updates/inserts.
I found the cause in the logs: the psqld-monitor went timeo
Thanks for the clarifications guys!
On Tue, Apr 27, 2021 at 18:24 Jehan-Guillaume de Rorthais <
j...@dalibo.com> wrote:
> On Mon, 26 Apr 2021 18:04:41 + (UTC)
> Strahil Nikolov wrote:
>
> > I prefer that the stack is auto enabled. Imagine that you got a DB that
> is
> > repli
Personally I discourage the use of automatic restart/rejoin; if something
went wrong, better to investigate the causes and then enable the failed
node again.
Failovers shouldn't occur frequently, only if something went really bad: as
far as I know, Pacemaker and PAF don't support any kind of autohea
It could be an idea to let your cluster start it on all your nodes.
Br
On Tue, 16 Mar 2021, 09:58 井上和徳, wrote:
> Hi!
>
> Cluster (corosync and pacemaker) can be started with pcs,
> but corosync-notifyd needs to be started separately with systemctl,
> which is not easy to use.
>
> # pcs cluster sta
properly without it.
On Sun, 21 Feb 2021, 07:29 İsmet BALAT, wrote:
> Sorry, I am in UTC+3 and was sleeping. I will try to fix the node first, then
> start the cluster. Thank you
>
> On 21 Feb 2021 Sun at 00:00 damiano giuliani
> wrote:
>
>> resources configured in a master/slave mode
ter (for
> first example in video - master/slave changing)? So I need a check script
> for fault states :(
>
> And thank you for reply
>
> On 20 Feb 2021 Sat at 23:40 damiano giuliani
> wrote:
>
>> Hi,
>>
>> Have you correcly configure a working fencing me
Hi,
have you correctly configured a working fencing mechanism? Without it you can't
rely on a safe and consistent environment.
My suggestion is to disable the autostart of the services (and so the auto-join
into the cluster) on both nodes.
If there is a fault, you have to investigate before you rejoin the old
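Disabling the autostart can be sketched like this (either form works; run per node or cluster-wide):

```shell
# Keep corosync/pacemaker from starting at boot, so a failed node has to
# be investigated and rejoined manually.
pcs cluster disable --all
# or, equivalently, on each node:
systemctl disable corosync pacemaker
```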
Hi guys, sorry for the late answer; today I had time to test Igor's
solution and it works flawlessly.
Creating a colocation constraint, binding the first and the last group
resources with an INFINITY score, makes it possible that "if at least one resource
in the group fails the group will fail all
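The constraint described above can be sketched as follows (first-rsc and last-rsc are placeholders for the first and last members of the group, not names from the thread):

```shell
# Colocate the last member of the group with the first one at INFINITY,
# so a failure anywhere in the group moves the whole group together.
pcs constraint colocation add last-rsc with first-rsc INFINITY
```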
Hi Jaikumar,
iLO/sbd fencing should reboot the node in case of a network split. Is
your node properly fenced?
Anyway, if your cluster is managing a DB service like Postgres with
replication through WAL logs, when a failover occurs you MUST resync or
pg_rewind in order to guarantee consistency. Pa
wrote:
> >>> damiano giuliani wrote on 27.01.2021 at 19:25 in message:
> > Hi Andrei, Thanks for ur help.
> > if one of my resource in the group fails or the primary node went down (
> > in my case acspcmk-02 ), the probe notices it and pacemaker trie
100, damiano giuliani wrote:
> > Hi Andrei, Thanks for ur help.
> > if one of my resource in the group fails or the primary node went
> > down ( in my case acspcmk-02 ), the probe notices it and pacemaker
> > tries to restart the whole resource group on the second node.
> >
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
sbd: active/enabled
I hope I explained my problem as best I could.
Thanks for your time and help.
Good Evening
Damiano
On Wed, Jan 27, 2021 at 19:03 Andrei Borzenkov <
arvidj...@gmail.com> wrote
Hi all, I'm pretty new to clusters. I'm struggling trying to configure a
bunch of resources and test how they fail over. My need is to start and
manage a group of resources as one (in order to achieve this a resource
group has been created), and if one of them can't run and keeps failing, the
cluster