Re: [DRBD-user] DRBD 8.4 agent sets fallback master score 5

2011-08-05 Thread Dominik Klein
I thought you were on squeeze? Were ... as in past tense ... as in "previous employer" :) I'll leave that decision to Lars and Phil, but another option is to simply include that patch in the Debian drbd-utils package. And then have different agents on different distros? My vote would go against …

Re: [DRBD-user] DRBD 8.4 agent sets fallback master score 5

2011-08-04 Thread Dominik Klein
Can you file a bug against the bash package in Debian please, citing this thread as a reference? Somehow I figured you'd say that ... :) I'll try to see if this particular thing is already in one of the bug reports against bash in lenny. Unfortunately, there already are like 240 bug reports a…

Re: [DRBD-user] DRBD 8.4 agent sets fallback master score 5

2011-08-04 Thread Dominik Klein
diff --git a/scripts/drbd.ocf b/scripts/drbd.ocf
index 3277caa..b16148a 100644
--- a/scripts/drbd.ocf
+++ b/scripts/drbd.ocf
@@ -339,7 +339,7 @@ drbd_update_master_score()
	# For multi volume, we need to compare who is "better" a bit more sophisticated.
	# The ${XXX[*]//UpToDate}, …

Re: [DRBD-user] DRBD 8.4 agent sets fallback master score 5

2011-07-27 Thread Dominik Klein
Hi, Is this the same issue? http://developerbugs.linux-foundation.org/show_bug.cgi?id=2608 Thanks, Junko IKEDA, NTT DATA INTELLILINK CORPORATION. 2011/7/26 Dominik Klein: Hi, as discussed on IRC with fghaas, I see an error with a setup of drbd 8.4, pacemaker 1.1.5 and corosync 1.4. It is a group …

[DRBD-user] DRBD 8.4 agent sets fallback master score 5

2011-07-26 Thread Dominik Klein
Hi, as discussed on IRC with fghaas, I see an error with a setup of drbd 8.4, pacemaker 1.1.5 and corosync 1.4. It is a group of filesystem, IP and mysql on top of the drbd master, and upon crm migrate $group, the cluster just stops the resources: no demote, no promote on the other node, and no …
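The kind of setup described here (a drbd master/slave resource with a filesystem/IP/mysql group stacked on top) is conventionally expressed in the crm shell roughly as sketched below. This is only an illustration of the topology the poster mentions; every resource name, and the primitives p_fs, p_ip and p_mysql, are hypothetical and assumed to be defined elsewhere:

```
primitive p_drbd ocf:linbit:drbd params drbd_resource="r0"
ms ms_drbd p_drbd meta master-max="1" clone-max="2" notify="true"
group g_mysql p_fs p_ip p_mysql
colocation col_mysql_on_drbd inf: g_mysql ms_drbd:Master
order ord_drbd_before_mysql inf: ms_drbd:promote g_mysql:start
```

With constraints like these, `crm resource migrate g_mysql` should trigger a demote on the old node and a promote on the new one before the group starts; the bug report is that the demote/promote never happens.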

Re: [DRBD-user] Question on drbdadm verify

2011-07-03 Thread Dominik Klein
On 07/02/2011 03:37 PM, Lars Ellenberg wrote: On Fri, Jul 01, 2011 at 11:00:32PM +0200, Lionel Sausin wrote: Hi, My understanding is that "verify" has a source and a target - and then when you resync, blocks will flow from the source to the target, so if the source is corrupt you'll replicate the …

Re: [DRBD-user] Question on drbdadm verify

2011-07-01 Thread Dominik Klein
Anyone? Thanks, Dominik. On 06/28/2011 08:29 AM, Dominik Klein wrote: > Hi > > while recovering from a failure that impacted a node's power and its > raid controller's battery (ouch), I came across a question regarding > drbdadm verify I could not find the answer to …

[DRBD-user] Question on drbdadm verify

2011-06-27 Thread Dominik Klein
Hi, while recovering from a failure that impacted a node's power and its raid controller's battery (ouch), I came across a question regarding drbdadm verify that I could not find the answer to. Maybe you guys can shed some light. In my situation, drbd thinks it did write data that might not have made it …

Re: [DRBD-user] resyncing, bandwidth limits and new writes.

2010-04-27 Thread Dominik Klein
If by limiting bandwidth you mean the "rate" configuration in the syncer section, then that only limits the rate of background synchronisation. The "foreground" replication of current writes is not affected by that setting; it always happens as fast as possible. Regards, Dominik. On 04/26/2010 04:0…
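The setting being discussed lives in the syncer section of drbd.conf (8.3-style syntax; the resource name r0 and the 40M value are placeholders, not from the thread):

```
resource r0 {
  syncer {
    rate 40M;   # caps background (re)synchronisation only;
                # replication of ongoing application writes is
                # NOT throttled by this setting
  }
}
```

As the reply explains, this value governs only the background resync after a disconnect or verify, not the mirroring of live writes.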

Re: [DRBD-user] Recommended procedure to fsck a drbd device

2010-04-14 Thread Dominik Klein
On 04/13/2010 01:51 PM, Jose Manuel Torralba wrote: > Hi, this is my first post in this list and the question is regarding an > inherited system that I'm not completely familiar with. If I'm missing > any necessary details, please ask. > > We have a cluster, running drbd 8.2.6, both nodes …

Re: [DRBD-user] Critical bug in documentation with nested lvm<-drbd->lvm

2009-10-27 Thread Dominik Klein
Martin Gombac wrote: > Hi, > > I've read http://www.drbd.org/users-guide/s-nested-lvm.html. > > It seems that when you delegate/create logical volumes in step 11, you > don't take into account that the backing storage for it (/dev/drbd0) > also contains metadata, because you use internal. So …
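The point being raised is that with internal meta data, /dev/drbd0 is slightly smaller than the backing LV, so sizing the nested volumes off the raw LV size overcommits. A rough feel for the cost can be sketched as below, using the estimation formula from the DRBD user's guide (md_sectors = ceil(data_sectors / 2^18) * 8 + 72; treat the formula and the 100 GiB example size as assumptions, and use `drbdadm -d dump-md` or the guide for exact figures):

```shell
#!/bin/sh
# Estimate DRBD internal meta data overhead for an example backing device.
# Formula (estimate, per the DRBD user's guide):
#   md_sectors = ceil(data_sectors / 2^18) * 8 + 72
data_bytes=$((100 * 1024 * 1024 * 1024))        # example: 100 GiB backing LV
data_sectors=$((data_bytes / 512))
md_sectors=$(( (data_sectors + 262143) / 262144 * 8 + 72 ))
echo "meta data: ${md_sectors} sectors ($((md_sectors / 2)) KiB)"
```

So on a device of that size, roughly a few MiB at the end of the backing store belong to the meta data and are not available to the PV created on /dev/drbd0.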

Re: [DRBD-user] Concurrent state changes detected and aborted

2009-06-19 Thread Dominik Klein
>> What exactly happened there and how can I avoid it? > > I have no idea. > possibly both have been told to "down" at the exact same time. > > there are a few "cluster wide state changes", and while one of those is > pending, no other cluster wide state change is allowed. > > apparently virt-1 …