:
> I would do same tests on the backing storage first, without drbd, cluster
> management or any other complexity involved. Then after confirming that
> there's no any kind of bottleneck there, I would slowly move to the "upper
> layers" ...
>
>
>
> On Mon, 6
Hello, I'm using drbd 8.4.11 on a two-node cluster on top of CentOS 7. Both
servers have the same hardware configuration: same CPU, RAM, disks, ... More
precisely, there is a MegaRAID LSI SAS 9361-8i with a RAID5 volume.
CacheCade is enabled on both controllers and I have a RAID0 volume with
4x256GB
Hello, I've configured a drbd/pacemaker cluster with 2 nodes and I'm doing
some tests for failover. Basically my cluster is quite simple: I have 2
drbd resources configured in pacemaker:
[root@pcmk2 ~]# pcs resource show DrbdRes
Resource: DrbdRes (class=ocf provider=linbit type=drbd)
Attributes:
when clients are
outside the replication network.
I opened another thread here ->
http://lists.linbit.com/pipermail/drbd-user/2017-November/023850.html
Thank you
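For context, a pacemaker-managed drbd resource like the DrbdRes shown in the `pcs resource show` snippet is usually created with commands along these lines. This is only a sketch: DrbdRes is the name from the thread, while the drbd_resource name "r0" and the clone options are assumptions:

```shell
# Sketch: defining a DRBD resource in pacemaker with pcs (CentOS 7-era syntax).
# "DrbdRes" is from the thread; "r0" and the option values are assumptions.
pcs resource create DrbdRes ocf:linbit:drbd drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
# Master/slave wrapper so pacemaker promotes exactly one node:
pcs resource master DrbdResClone DrbdRes \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
```

The two monitor operations with different intervals are the usual way to monitor both roles of the same resource.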
2017-11-28 11:19 GMT+01:00 Roland Kammerer :
> On Tue, Nov 14, 2017 at 05:50:08PM +0100, Marco Marino wrote:
> > Hi,
>
ces
on various nodes.
Thank you for your support.
Marco
2017-11-26 23:05 GMT+01:00 Igor Cicimov :
> Hi Marco,
>
> On 23 Nov 2017 7:05 am, "Marco Marino" wrote:
>
> Hi, I'm trying to configure drbd9 with openstack-cinder.
> Actually my (simplified) infrastructure
Hi, I'm trying to configure drbd9 with openstack-cinder.
My (simplified) infrastructure currently consists of:
- 2 drbd9 nodes with 2 NICs on each node, one for the "replication" network
(without using a switch) and one for the "storage" network.
- 1 compute node with a dedicated NIC connected to
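With a dedicated back-to-back replication network like this, the drbd9 resource file typically pins replication to those NICs' addresses. A minimal sketch, assuming hostnames node-a/node-b, backing disk /dev/sdb and a 10.0.0.0/24 replication network:

```
resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb;
  meta-disk internal;
  on node-a {
    address 10.0.0.1:7789;   # replication NIC, not the storage network
    node-id 0;
  }
  on node-b {
    address 10.0.0.2:7789;
    node-id 1;
  }
  connection-mesh { hosts node-a node-b; }
}
```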
Hi,
I'm trying to understand if it is possible to deploy a 2-node solution with
drbd9/drbdmanage compatible with openstack-cinder-volume. Should I use 2 or
3 nodes with drbdmanage? It seems that, in a 2-node configuration, if one
node goes down, drbdmanage becomes unstable (please see
https://list
to
resize it. On the initiator server I need to create a PV and then a VG and
many LVs. Currently I'm using a raw device as the backing device for drbd and
/dev/drbdX as a backstore. Let me know what you think about this.
Thank you,
Marco
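As a point of comparison, exporting /dev/drbdX as a LIO block backstore with targetcli looks roughly like this; the IQN and names are made up for illustration:

```shell
# Sketch: /dev/drbd0 as a LIO block backstore (run on the current Primary).
targetcli /backstores/block create name=drbd0_bs dev=/dev/drbd0
targetcli /iscsi create iqn.2017-01.com.example:san1
targetcli /iscsi/iqn.2017-01.com.example:san1/tpg1/luns \
    create /backstores/block/drbd0_bs
```

The initiator then sees one LUN and does its own pvcreate/vgcreate/lvcreate on top of it.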
2017-03-27 23:40 GMT+02:00 Igor Cicimov :
>
>
Hi Robert,
I think the problem is related to the fact that there is an (LVM) partition
on top of the drbd device. I tried different configurations, and if I use a
raw device for the drbd resource and then use the drbd device without
partitions as a PV, the problem disappears. Anyway, I don't know if t
Hi Yannis, thank you for your answer. I don't think that the first
"partition" is where drbd stores metadata for two reasons:
1) I'm using internal metadata and, as suggested by the drbd documentation:
"Configuring a resource to use internal meta data means that DRBD stores
its meta data on the sam
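The 8.4 guide's sizing formula for internal metadata can be checked quickly. A small sketch, assuming a hypothetical 1 TiB backing device (Cs and Ms in 512-byte sectors):

```shell
# Internal metadata size per the drbd 8.4 formula, in 512-byte sectors:
#   Ms = ceil(Cs / 2^18) * 8 + 72     (Cs = backing device size in sectors)
Cs=$((1024 * 1024 * 1024 * 2))        # assumed 1 TiB device = 2^31 sectors
Ms=$(( (Cs + 262143) / 262144 * 8 + 72 ))
echo "internal metadata: $Ms sectors (~$((Ms / 2048)) MiB)"
```

So for 1 TiB of data the metadata is only about 32 MiB, reserved at the end of the same device rather than in a separate partition.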
Hi, I'm trying to understand how to configure raw devices when used with
drbd. I think I have a problem with data alignment. Let me describe my case:
I have a raw device /dev/sde on both nodes and on top of it there is the
drbd device. So, in the .res configuration file I have
disk
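For reference, a complete 8.4-style resource stanza with a raw backing device and internal metadata would look roughly like this; hostnames, addresses and the drbd minor are assumptions, and with internal metadata the alignment question comes from the metadata area at the end of the device:

```
resource r0 {
  on node-a {
    device    /dev/drbd0;
    disk      /dev/sde;
    meta-disk internal;
    address   192.168.1.1:7789;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sde;
    meta-disk internal;
    address   192.168.1.2:7789;
  }
}
```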
Hi, I'm using drbd 8.4.9-1 with pacemaker and corosync on top of 2 CentOS
7.3 nodes to build a SAN. Basically it works well, but due to my
inexperience when I configured the SAN, I used tgt instead of LIO, which is
the default on RHEL 7 and derivatives. Furthermore (and more importantly), I
configure
Hi, I'm trying to test drbd performance using
https://www.drbd.org/en/doc/users-guide-84/ch-benchmark#s-measure-throughput
for throughput and
https://www.drbd.org/en/doc/users-guide-84/s-measure-latency for latency
I have some questions:
I'm using 2 VMs for testing purposes. Measure latency and th
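For reference, the two linked guide sections both come down to dd invocations of roughly this shape; /dev/drbd0 is assumed here, and the throughput run destroys any data on it:

```shell
# Throughput: few large sequential writes, bypassing the page cache.
# WARNING: overwrites the test resource!
dd if=/dev/zero of=/dev/drbd0 bs=1M count=512 oflag=direct

# Latency: many small single-sector writes; total time / count ~= per-write latency.
dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct
```

On VMs the numbers mostly reflect the hypervisor's I/O path, so they are only useful for comparing drbd against the same backing device, not as absolute figures.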
ould be create a mirror of ssd drives
for CacheCade. But I'm using drbd/pacemaker because in a similar situation
I need resources to fail over automatically to the other node.
2016-09-20 13:12 GMT+02:00 Igor Cicimov :
> On Tue, Sep 20, 2016 at 7:13 PM, Marco Marino
> wrote:
>
>
r?
Thank you
2016-09-20 10:33 GMT+02:00 Igor Cicimov :
> On 20 Sep 2016 5:00 pm, "Marco Marino" wrote:
> >
> > Furthermore there are logs from the secondary node:
> >
> > http://pastebin.com/A2ySXDCB
> >
> >
> > Please compare time. It see
Furthermore there are logs from the secondary node:
http://pastebin.com/A2ySXDCB
Please compare the timestamps. It seems that drbd goes into diskless mode
on the secondary node as well. Why?
2016-09-20 8:44 GMT+02:00 Marco Marino :
> Hi, logs can be found here: http://pastebin.com/BGR33jN6
>
>
Hi, logs can be found here: http://pastebin.com/BGR33jN6
@digimer:
Using local-io-error to power off the node and fail the cluster over to the
remaining node: is this a good idea?
Regards,
Marco
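For reference, the handler in question is wired into the disk and handlers sections roughly like this; whether a hard power-off is the right action is exactly the open question here, so treat this as a sketch rather than a recommendation:

```
disk {
  # invoke the local-io-error handler on a lower-level I/O error:
  on-io-error call-local-io-error;
}
handlers {
  # notify, then force an immediate power-off so pacemaker fails over:
  local-io-error "/usr/lib/drbd/notify-io-error.sh; echo o > /proc/sysrq-trigger ; halt -f";
}
```

The alternative is `on-io-error detach;`, which keeps the node up in diskless mode and serves I/O over the network from the peer.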
2016-09-19 12:58 GMT+02:00 Adam Goryachev :
>
>
> On 19/09/2016 19:06, Marco Mar
2016-09-19 10:50 GMT+02:00 Igor Cicimov :
> On 19 Sep 2016 5:45 pm, "Marco Marino" wrote:
> >
> > Hi, I'm trying to build an active/passive cluster with drbd and
> pacemaker for a san. I'm using 2 nodes with one raid controller (megaraid)
> on each one
Hi Digimer, thank you for your support!
2016-09-19 10:09 GMT+02:00 Digimer :
> On 19/09/16 03:37 AM, Marco Marino wrote:
> > Hi, I'm trying to build an active/passive cluster with drbd and
> > pacemaker for a san. I'm using 2 nodes with one raid controller
> > (m
Hi, I'm trying to build an active/passive cluster with drbd and pacemaker
for a SAN. I'm using 2 nodes with one RAID controller (MegaRAID) on each
one. Each node has an SSD disk that works as a cache for reads (and
writes?), implementing the proprietary CacheCade technology.
Basically, the structure of the
Hi, I'm trying to use drbdmanage with my iSCSI SAN built with pacemaker and
drbd.
Reading here ->
http://drbd.linbit.com/users-guide-9.0/ch-admin-drbdmanage.html , it seems
that if I want to use drbdmanage, I have to create a volume group and on
top of this build new drbd resources. Questions:
1) de
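For what it's worth, the workflow that chapter describes boils down to something like the following sketch; the VG name drbdpool is drbdmanage's default, and the addresses and sizes are assumptions:

```shell
# On both nodes: a volume group named "drbdpool" (drbdmanage's default).
vgcreate drbdpool /dev/sdb

# On the first node: initialize the control volume, then add the peer.
drbdmanage init 10.0.0.1
drbdmanage add-node node-b 10.0.0.2

# Create a volume; drbdmanage carves an LV out of drbdpool and deploys
# the matching drbd resource on 2 nodes.
drbdmanage add-volume vol0 10GiB --deploy 2
```

So the VG is the raw pool and drbdmanage creates one LV plus one drbd resource per volume, rather than you writing .res files by hand.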
Hi, I have 2 Supermicro servers with LSI 2108 controllers and I'm trying to
build a SAN with drbd and pacemaker. I'm studying, but I have no experience
with large disk arrays under drbd, so I have some questions:
I'm using MegaRAID Storage Manager to create virtual drives. Each virtual
drive is a device on Linux,
Hi,
I'm trying to build a highly available NFS server with drbd and pacemaker. I
have some doubts about the devices and the drbd resource. Following the
guide "NFS on RHEL6" on the linbit site, I found that under the drbd
resource there is a Logical Volume:
PV -> VG -> LV -> DRBD resource
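That stacking order can be sketched as follows; the backing disk, the names and the size are assumptions:

```shell
# PV -> VG -> LV, then drbd on top of the LV:
pvcreate /dev/sdb
vgcreate vg_nfs /dev/sdb
lvcreate -L 100G -n lv_nfs vg_nfs    # this LV becomes the drbd backing device

# In the .res file: disk /dev/vg_nfs/lv_nfs; meta-disk internal;
drbdadm create-md r0 && drbdadm up r0
```

Putting LVM *under* drbd like this lets you grow the LV and then `drbdadm resize` the resource, which is the main reason the guide structures it this way.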
Hi,
I'm using pacemaker/corosync for a highly available NFS server, following a
guide on this page -> http://www.linbit.com/en/downloads/tech-guides
I would like to mount the NFS export from the passive node too; is this
possible?
I cannot mount the NFS export from the passive node because /var/lib/n