Does anyone have a good resource for setting up a fault tolerant NFS cluster
using DRBD? I am currently using DRBD, Pacemaker, Corosync and OCFS2 on Ubuntu
12.04.
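For readers finding this thread later: a minimal active/passive Pacemaker sketch (crm shell syntax) for an NFS export on DRBD looks roughly like the following. The resource names, device path, filesystem type, and export directory are illustrative assumptions, not taken from the poster's setup; with a single-primary design like this, OCFS2 is not needed and an ordinary filesystem suffices.

```
# Hypothetical names: DRBD resource "r0" on /dev/drbd0, export dir /srv/nfs.
primitive p_drbd_nfs ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="15s" role="Master" \
    op monitor interval="30s" role="Slave"
ms ms_drbd_nfs p_drbd_nfs \
    meta master-max="1" master-node-max="1" \
         clone-max="2" clone-node-max="1" notify="true"
primitive p_fs_nfs ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/srv/nfs" fstype="ext4"
primitive p_nfsserver ocf:heartbeat:nfsserver \
    params nfs_shared_infodir="/srv/nfs/state"
group g_nfs p_fs_nfs p_nfsserver
colocation c_nfs_on_drbd inf: g_nfs ms_drbd_nfs:Master
order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start
```

The colocation and order constraints ensure the filesystem and NFS server only ever start on the node where DRBD is currently Primary.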
High availability doesn't meet my needs. I have spent quite a while reading and
trying out every combination of settings, but nothing
Hi,
I have a two-node Corosync/Pacemaker cluster (active/passive) with XFS on
two DRBD-volumes (SLES11 SP1 + HAE).
The active node has both volumes mounted and if this node fails, the passive
node should take over and mount these volumes.
I'm aware that I can't mount XFS simultaneously on both nodes.
> Is it possible to have DRBD verify B only when A is done verifying?
I use crontab.
On Monday at 2:11 AM, verify r0
On Tuesday at 2:11 AM, verify r1
On Wednesday at 2:11 AM, verify r2
On Thursday at 2:11 AM, verify r3
--or--
11 2 * * 1 /sbin/drbdadm verify r0
11 2 * * 2 /sbin/drbdadm verify r1
11 2 * * 3 /sbin/drbdadm verify r2
11 2 * * 4 /sbin/drbdadm verify r3
Hey guys,
During our testing with DRBD 8.4.1, the final round included seeing if
the read balancing would be any good. Unfortunately it appears to cause
intermittent "Resource temporarily unavailable" errors, no matter what
protocol I use. Least-pending? Round-robin? 32K-striping? Instant
fil
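For reference, the options listed above are DRBD 8.4 read-balancing policies rather than replication protocols; the policy is set in the configuration, roughly as below. The resource name is an assumption, and the exact section placement should be checked against `man drbd.conf` for your version.

```
resource r0 {
  disk {
    # One of: prefer-local (default), prefer-remote, round-robin,
    # least-pending, when-congested-remote, 32K-striping ... 1M-striping
    read-balancing least-pending;
  }
}
```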
- Original Message -
> From: "Florian Haas"
> To: "Wiebe Cazemier"
> Cc: drbd-user@lists.linbit.com
> Sent: Monday, 4 June, 2012 2:43:31 PM
> Subject: Re: [DRBD-user] Performance hit DRBD vs raw block device, even when
> disconnected
>
> I don't think you've told us your DRBD version; t
On Mon, Jun 4, 2012 at 2:17 PM, Wiebe Cazemier wrote:
> - Original Message -
>> From: "Lars Ellenberg"
>> To: "Wiebe Cazemier"
>> Cc: drbd-user@lists.linbit.com
>> Sent: Monday, 4 June, 2012 11:29:20 AM
>> Subject: Re: [DRBD-user] Performance hit DRBD vs raw block device, even when
>> disconnected
- Original Message -
> From: "Lars Ellenberg"
> To: "Wiebe Cazemier"
> Cc: drbd-user@lists.linbit.com
> Sent: Monday, 4 June, 2012 11:29:20 AM
> Subject: Re: [DRBD-user] Performance hit DRBD vs raw block device, even when
> disconnected
>
> Put DRBD meta data on RAID 1.
> Use decent bat
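Putting the DRBD metadata on RAID 1, as suggested above, would look roughly like this in drbd.conf. The hostnames, device paths, and addresses are made up for illustration; the `[0]` index selects a slot within the external metadata device.

```
resource r0 {
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk /dev/md0[0];   # external metadata, index 0, on an md RAID-1
    address   10.0.0.1:7789;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk /dev/md0[0];
    address   10.0.0.2:7789;
  }
}
```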
On Sat, Jun 02, 2012 at 01:11:10PM -0400, Chris Dickson wrote:
> Hey guys,
>
> I ran into an odd issue with LVM and wiping the DRBD metadata from a
> logical volume. Here is what my wipe-md command looks like; this is all
> done via a script:
>
> drbdmeta --force num v08 /dev/vg0/vol100 inte
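The quoted command is cut off by the archive; a complete invocation of this form would look like the following. The minor number 0 is an illustrative stand-in for whatever the script computes, not taken from the original.

```
# Wipe v08 (DRBD 8.x) internal metadata from the LV; --force skips the prompt.
drbdmeta --force 0 v08 /dev/vg0/vol100 internal wipe-md
```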
On Sat, Jun 02, 2012 at 02:02:03PM +0200, Wiebe Cazemier wrote:
> - Original Message -
> > From: "Florian Haas"
> > To: "Wiebe Cazemier"
> > Cc: drbd-user@lists.linbit.com
> > Sent: Friday, 1 June, 2012 9:03:42 PM
> > Subject: Re: [DRBD-user] Performance hit DRBD vs raw block device, even
On Sat, Jun 02, 2012 at 12:20:14AM +0200, Florian Haas wrote:
> On 06/01/12 18:22, Lars Ellenberg wrote:
> > There is one improvement we could make in DRBD:
> > call the fence-peer handler not only for connection loss,
> > but also for peer disk failure.
>
> That sounds like a good and simple idea
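The fence-peer handler being discussed is wired up in drbd.conf; a typical Pacemaker integration looks roughly like this for DRBD 8.x (the `crm-fence-peer.sh` scripts ship with the DRBD utilities; section placement of `fencing` differs in DRBD 9):

```
resource r0 {
  disk {
    fencing resource-only;
  }
  handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```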
Thank you for your explanation. I found this in the change log; can you
explain it to me?
In case our backing devices support write barriers and cache flushes, we use
these means to ensure data integrity in the presence of volatile disk write
caches and power outages.
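The changelog entry refers to DRBD's flush/barrier handling, controlled by disk-section options. A DRBD 8.4-style sketch follows; disabling flushes is safe only when the write cache is non-volatile (battery- or flash-backed), which is exactly the point being made in this thread.

```
resource r0 {
  disk {
    # Defaults keep flushes enabled.  Only with a battery-backed
    # controller cache is it safe to turn these off:
    # disk-flushes no;
    # md-flushes no;
  }
}
```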
On 2012-06-04 15:48:36
Hi,
please don't forget to CC the list.
On 06/04/2012 02:58 AM, 陈楠 wrote:
> Thank you! There are volatile write caches in our I/O system. Does the
As long as you're making use of these caches, you *will* lose data on
power outage or critical hardware failure.
Performance can become unacceptable
Hello,
Thanks for your response.
I have already set the rate to 50M:
" finish: 0:49:23 speed: 596 (2,236) want: 51,200 K/sec "
The current speed is 596 K/sec, while the wanted rate is 51,200 K/sec.
So I think the configured sync rate is high enough.
On 4 jun. 2012, at 09:45, zf5984599 wrote:
> Did you set one "syncer rate"
Did you set one "syncer rate" in your drbd.conf?
such as:
syncer {
    rate 200M;
}
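Note that the `syncer` section only exists up to DRBD 8.3; in DRBD 8.4 the equivalent setting moved into the `disk` section:

```
resource main {
  disk {
    resync-rate 200M;
  }
}
```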
B.R.
Zheng Feng
From: Marcel Kraan
Date: 2012-06-04 15:38
To: drbd-user
Subject: [DRBD-user] drbd sync very slow
Hello,
I have been using DRBD for a while now.
But today after "image
Hello,
I have been using DRBD for a while now.
But today, after an "image creation crash" of 850 GB, the disks were out of sync.
So I synced the secondary from the primary again:
secondary
drbdadm secondary main
drbdadm disconnect main
drbdadm -- --discard-my-data connect main
primary
drbdadm connect main
a
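After reconnecting with `--discard-my-data`, the resync progress can be watched on either node; a quick way on DRBD 8.x (assuming the resource is named `main` as above):

```
# Shows per-resource sync progress, speed and ETA:
watch -n2 cat /proc/drbd
# or the summarized view from drbd8-utils:
drbd-overview
```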