Hi, any updates about this?
Thank you
On Wed, 4 Sep 2019 at 10:46, Marco Marino wrote:
> First of all, thank you for your support.
> Andrei: sure, I can reach the machines through IPMI.
> Here is a short "log":
>
> #From ld1 trying to contact ld1
> [root@ld
e1-monitor-interval-60s)
Any idea?
How can I reset the state of the cluster without downtime? Is "pcs resource
cleanup" enough?
Thank you,
Marco
On Wed, 4 Sep 2019 at 10:29, Jan Pokorný
wrote:
> On 03/09/19 20:15 +0300, Andrei Borzenkov wrote:
> > 03.09.2019 11:09
Hi, I have a problem with fencing on a two-node cluster. It seems that,
at random, the cluster cannot complete the monitor operation for the fence
devices. In the log I see:
crmd[8206]: error: Result of monitor operation for fence-node2 on
ld2.mydomain.it: Timed Out
Attached you will find:
- /var/log/messages
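One thing I'm tempted to try is raising the monitor timeout on the fence
devices; a sketch, assuming pcs and my resource name fence-node2 (I haven't
verified this is the fix):

  # give the fence agent more time before the monitor is declared Timed Out
  pcs resource update fence-node2 op monitor interval=60s timeout=120s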
in dual-primary mode with LVM
is not a good idea, because I don't need an active/active
cluster.
Anyway, thank you for your time again
Marco
2018-04-13 15:54 GMT+02:00 emmanuel segura <emi2f...@gmail.com>:
> the first thing that you need to configure is stonith, because y
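Following that advice, this is the kind of stonith setup I put together; a
rough sketch with fence_ipmilan and made-up node names, BMC addresses and
credentials, not my literal config:

  pcs stonith create fence-node1 fence_ipmilan \
      pcmk_host_list=node1 ipaddr=10.0.0.101 login=admin passwd=secret \
      op monitor interval=60s
  pcs stonith create fence-node2 fence_ipmilan \
      pcmk_host_list=node2 ipaddr=10.0.0.102 login=admin passwd=secret \
      op monitor interval=60s
  # keep each fence device away from the node it is supposed to kill
  pcs constraint location fence-node1 avoids node1
  pcs constraint location fence-node2 avoids node2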
I invite someone to reproduce the configuration and, possibly, the issue.
It seems like a bug, but obviously I'm not sure. What worries me is that
it should be Pacemaker that decides where and when a resource should
start, so probably there is something wrong in my configuration
s bad."
in the function LVM_validate_all()
Anyway, it's only a warning, but there is a good reason for it. I'm not an
expert; I'm studying for a certification and I still have a lot of doubts.
Thank you for your help
Marco
2017-01-18 11:03 GMT+01:00 Ferenc Wágner <wf...@niif.hu>:
> Marco Marino <
b...@suse.com>:
> Hi, Marco
>
> On 01/18/2017 04:45 PM, Marco Marino wrote:
Hi, I'm trying to set up a cluster with 2 nodes that manages a volume
group.
Basically I have a SAN connected to both nodes that exposes one LUN, so both
nodes see the same disk as /dev/sdb. From one node I did:
fdisk /dev/sdb <- create a partition with type = 8e (LVM)
pvcreate /dev/sdb1
vgcreate myvg /dev/sdb1
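From there, my plan is to let the cluster activate the VG exclusively on one
node; a sketch, assuming pcs and the ocf:heartbeat:LVM agent (the resource
name my_vg is made up):

  # activate myvg on only one node at a time, under cluster control
  pcs resource create my_vg ocf:heartbeat:LVM \
      volgrpname=myvg exclusive=true \
      op monitor interval=30s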
regards
Marco
On Mon, 20 Jun 2016 15:42:11 -0500
Ken Gaillot <kgail...@redhat.com> wrote:
> On 06/20/2016 07:45 AM, ma...@nucleus.it wrote:
> > Hi,
> > I have a two-node cluster with some VMs (pacemaker resources)
> > running on the two hypervisors:
> > pacemaker-1.0
- put the cluster in maintenance mode so it does not manage the VMs
- stop the cluster stack (corosync/pacemaker) so it does not
start/stop/monitor the VMs
- reboot the hypervisors
- start the cluster stack
- remove maintenance mode from the cluster so it starts all the VMs again
What is the correct way to do that on the (corosync/pacemaker) side?
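To make the question concrete, here is the sequence I would guess at; a
sketch, assuming systemd hosts and pcs (on pacemaker-1.0 the crm shell
equivalents would apply):

  pcs property set maintenance-mode=true    # cluster stops touching the VMs
  # on each hypervisor:
  systemctl stop pacemaker corosync
  reboot
  # once both hypervisors are back up, on each one:
  systemctl start corosync pacemaker
  pcs property set maintenance-mode=false   # cluster resumes managing the VMs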
Best regards
Marco
Hi Ken,
by the way, I've just also tried with pacemaker 1.1.14 (I built it from
source into a new RPM) but it doesn't work
> On 18 May 2016, at 11:29, Marco A. Carcano <marco.carc...@itc4u.ch> wrote:
>
> Hi Ken,
>
> thank you for the reply
>
> I tried as you suggested
pacemaker-1.1.13-10, resource-agents-3.9.5-54 and
fence-agents-scsi-4.0.11-27
The error messages are "Couldn't find anyone to fence (on) apache-up003.ring0
with any device" and "error: Operation on of apache-up003.ring0 by
for crmd.15918@apache-up001.ring0.0599387e: No such device"
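While debugging, I've been checking what each node actually has registered;
a sketch, assuming the stock command-line tools, run on both nodes:

  stonith_admin --list-registered   # fence devices registered on this node
  pcs stonith show                  # stonith resources as configured in the CIB
  pcs status                        # where the fence device resources are running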
Thanks
Marco
reducing array rebuild time is my goal, so I think that
creating a virtual drive for each drive group is the right way. Please give
me some advice
Thanks
2015-09-18 13:02 GMT+02:00 Kai Dupke <kdu...@suse.com>:
> On 09/18/2015 09:28 AM, Marco Marino wrote:
> > Can you explain me
k has to be recovered, affecting all 16
volumes."
Can you explain this to me? 16 volumes?
Thank you
2015-09-17 15:54 GMT+02:00 Kai Dupke <kdu...@suse.com>:
> On 09/17/2015 09:44 AM, Marco Marino wrote:
> > Hi, I have 2 Supermicro servers with LSI 2108 controllers and many disks (80TB) and I'm
> >