Hi, any updates on this?
Thank you
On Wed, 4 Sep 2019 at 10:46, Marco Marino wrote:
> First of all, thank you for your support.
> Andrey: sure, I can reach machines through IPMI.
> Here is a short "log":
>
> #From ld1 trying to contact ld1
> [root@ld
e-node1-monitor-interval-60s)
Any ideas?
How can I reset the state of the cluster without downtime? Is "pcs resource
cleanup" enough?
Thank you,
Marco
On Wed, 4 Sep 2019 at 10:29, Jan Pokorný wrote:
> On 03/09/19 20:15 +0300, Andrei Borzenkov wrote:
> > 03.09.2019
Hi, I have a problem with fencing on a two-node cluster. It seems that
the cluster randomly cannot complete the monitor operation for the fence devices.
In the log I see:
crmd[8206]: error: Result of monitor operation for fence-node2 on
ld2.mydomain.it: Timed Out
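A hedged sketch of two things one might try here, assuming the fence devices
are IPMI-based as mentioned earlier in the thread (address, credentials and
timeout values below are illustrative, not from the thread):

  # Check the device by hand from the node that reports the timeout:
  fence_ipmilan -a 192.168.1.10 -l admin -p secret -o status
  # If the device is just slow to answer, raise the monitor timeout for it:
  pcs stonith update fence-node2 op monitor interval=60s timeout=120s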
As an attachment there is
- /var/log/messages fo
in dual primary mode with lvm
is not a good idea because I don't need an active/active
cluster.
Anyway, thank you for your time again
Marco
2018-04-13 15:54 GMT+02:00 emmanuel segura :
> the first thing that you need to configure is the stonith, because you
> have this
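For context, a minimal hedged sketch of what a first stonith device could look
like with fence_ipmilan; all addresses, credentials and node names below are
placeholders, not taken from the thread:

  pcs stonith create fence-node1 fence_ipmilan \
      ipaddr=192.168.1.11 login=admin passwd=secret lanplus=1 \
      pcmk_host_list=node1 op monitor interval=60s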
the
problem is, but I need to understand how to solve the issue. Please,
if possible, I invite someone to reproduce the configuration and possibly
the issue. It seems like a bug, but obviously I'm not sure. What worries me is
that it should be pac
"
in the function LVM_validate_all()
Anyway, it's only a warning, but there is a good reason for it. I'm not an expert;
I'm studying for a certification and I have a lot of doubts.
Thank you for your help
Marco
2017-01-18 11:03 GMT+01:00 Ferenc Wágner :
> Marco Marino writes
ve an application (managed as a resource in the cluster)
that continuously creates and removes logical volumes in the cluster. Is this
a problem? The application uses a custom lvm.conf configuration file where
I have volume_list = [ "@pacemaker" ]
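For reference, a hedged sketch of how that setting typically sits in a custom
lvm.conf; the surrounding activation section is standard LVM syntax, and the
tag value is the one quoted above:

  activation {
      # only VGs/LVs tagged "pacemaker" (i.e. those the cluster has taken
      # over) may be activated on this host
      volume_list = [ "@pacemaker" ]
  }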
Thank you
2017-01-18 10:12 GMT+01:00 bliu
Hi, I'm trying to realize a cluster with 2 nodes that manages a volume
group.
Basically I have a SAN connected to both nodes that exposes one LUN. So both
nodes have a disk /dev/sdb. From one node I did:
fdisk /dev/sdb <- Create a partition with type = 8e (LVM)
pvcreate /dev/sdb1
vgcreate myvg /dev/sdb1
then
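The excerpt is cut off here; a hedged guess at the kind of follow-up step
usually taken in this setup, assuming the VG is meant to be managed by
Pacemaker with exclusive activation (resource name is illustrative):

  # exclusive activation so only one node has the VG active at a time
  pcs resource create my_vg ocf:heartbeat:LVM volgrpname=myvg exclusive=true \
      op monitor interval=30s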
regards
Marco
On Mon, 20 Jun 2016 15:42:11 -0500
Ken Gaillot wrote:
> On 06/20/2016 07:45 AM, ma...@nucleus.it wrote:
> > Hi,
> > I have a two-node cluster with some vms (pacemaker resources)
> > running on the two hypervisors:
> > pacemaker-1.0.10
> > corosync-1.3.0
p the vms
- stop cluster stuff (corosync/pacemaker) so it does not
start/stop/monitor vms
- reboot the hypervisors.
- start cluster stuff
- remove maintenance from the cluster stuff so it starts all the vms
What is the correct way to do that on the corosync/pacemaker side?
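A hedged sketch of that sequence with pcs; the maintenance-mode property is
standard Pacemaker, but on a corosync 1.x / pacemaker 1.0 stack the same
property would be set with the crm shell rather than pcs:

  # tell Pacemaker to stop managing (starting/stopping/monitoring) resources
  pcs property set maintenance-mode=true
  # shut the VMs down by hand, then stop the cluster stack on every node
  pcs cluster stop --all
  # reboot the hypervisors, then bring the stack back up
  pcs cluster start --all
  # let Pacemaker manage resources again; it will then start the VMs
  pcs property set maintenance-mode=false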
Best regards
Hi Ken,
by the way, I've also just tried with pacemaker 1.1.14 (I built it from
source into a new RPM), but it doesn't work
> On 18 May 2016, at 11:29, Marco A. Carcano wrote:
>
> Hi Ken,
>
> thank you for the reply
>
> I tried as you suggested, and now the stonith d
pacemaker-1.1.13-10 resource-agents-3.9.5-54 and
fence-agents-scsi-4.0.11-27
the error messages are: Couldn't find anyone to fence (on) apache-up003.ring0
with any device, and: error: Operation on of apache-up003.ring0 by
for crmd.15918@apache-up001.ring0.0599387e: No such device
Thanks
I hope to find someone here who can help me:
I have a 3-node cluster and I'm struggling to create a GFSv2 shared storage.
The weird thing is that, although the cluster seems OK, I'm not able to get the
fence_scsi stonith device managed, and this prevents CLVMD and GFSv2 from starting.
I’m using CentOS 7.
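For comparison, a hedged sketch of a typical fence_scsi definition on CentOS 7;
the device path below is a placeholder, and the node names are the ones from
the error messages quoted above:

  pcs stonith create scsi-fence fence_scsi \
      pcmk_host_list="apache-up001.ring0 apache-up002.ring0 apache-up003.ring0" \
      devices=/dev/mapper/shared_lun meta provides=unfencing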
However, reducing the array rebuild time is my goal, so I think that
creating a virtual drive for each drive group is the right way. Please give
me some advice.
Thanks
2015-09-18 13:02 GMT+02:00 Kai Dupke :
> On 09/18/2015 09:28 AM, Marco Marino wrote:
> > Can you explain me this? 16 volu
disk has to be recovered, affecting all 16
volumes."
Can you explain this to me? 16 volumes?
Thank you
2015-09-17 15:54 GMT+02:00 Kai Dupke :
> On 09/17/2015 09:44 AM, Marco Marino wrote:
> > Hi, I have 2 servers supermicro lsi 2108 with many disks (80TB) and I'm
> > trying to
Hi, I have 2 Supermicro servers with LSI 2108 controllers and many disks (80 TB),
and I'm trying to build a SAN with drbd and pacemaker. I'm studying, but I have no
experience with large arrays of disks with drbd and pacemaker, so I have some
questions:
I'm using MegaRAID Storage Manager to create virtual drives. Each