On Sat, May 28, 2011 at 02:58:28PM -0700, Matt Graham wrote:
> If you can ssh to the bad machine, fix the /etc/init.d/drbd script so that it
> starts *after* all the NICs are running.
Yeah, I could do that if I could ssh. If I could ssh all would be pretty.
But the drbd init.d script is blocking the boot before sshd ever comes up.
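
For anyone hitting this later: the ordering change Matt describes usually
comes down to the LSB dependency header at the top of /etc/init.d/drbd,
roughly as in the sketch below (the exact header shipped by a given drbd
package may differ), after which insserv or update-rc.d re-orders the rc?.d
links:

    ### BEGIN INIT INFO
    # Provides:          drbd
    # Required-Start:    $local_fs $network $syslog
    # Required-Stop:     $local_fs $network $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Control DRBD resources
    ### END INIT INFO

$network only guarantees that basic network configuration runs before drbd
does; this is a sketch, not the header from any particular drbd release.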
On Sat, May 28, 2011 at 11:12:50PM +0200, Valentin Vidic wrote:
> If there is a second node, perhaps you can make it appear? And after the
> primary finishes with the boot adjust the startup timeouts from the default
> values.
Sadly, the second node is up. But the node that's stuck was dumb enough to go
on waiting for it anyway.
On Sat, May 28, 2011 at 04:25:45PM -0400, Dan Barker wrote:
> The setting is wfc-timeout and possibly degr-wfc-timeout. Unless you can
> force power off the machine and mount its boot disk (like you would be able
> to in a VM setting), I think you have to drive over there or talk the
> janitor into pushing the power button for you.
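
If the box is a VM, Dan's mount-the-boot-disk route might look roughly like
this on the host. The image path and partition number are made up, and it
assumes a raw image plus a losetup new enough to know -P (otherwise kpartx
does the same job):

    # on the hypervisor, with the guest powered off
    losetup -fP --show /path/to/guest-boot.img    # prints e.g. /dev/loop0
    mount /dev/loop0p1 /mnt                       # the guest's root/boot fs
    # raise the timeouts in /mnt/etc/drbd.conf (or drbd.d/*.res), or
    # temporarily disable the drbd init script, then clean up:
    umount /mnt && losetup -d /dev/loop0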
On Sat, May 28, 2011 at 02:22:49PM -0400, Whit Blauvelt wrote:
> Meanwhile, any suggestions on getting past this will be most appreciated.
If there is a second node, perhaps you can make it appear? And after the
primary finishes with the boot adjust the startup timeouts from the default
values.
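
The timeouts Valentin is talking about live in the startup section of
drbd.conf; a minimal sketch, with values picked only for illustration:

    # /etc/drbd.conf (or /etc/drbd.d/global_common.conf)
    common {
      startup {
        wfc-timeout      120;   # wait at most 120s for the peer at boot
        degr-wfc-timeout  60;   # shorter wait if the cluster was degraded
        # wfc-timeout 0 (the default) means wait forever
      }
    }

With a finite wfc-timeout the init script eventually gives up on the peer and
the boot carries on, so sshd still comes up even when the peer is unreachable.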
Hi,
I've got myself in a sticky situation. Restarting a system remotely that was
a drbd primary, it's getting to "Starting DRBD resources," and then, after
finding the meta data, there's the "DRBD's startup script waits for the peer
node(s) to appear," plus mention of the degr-wfc-timeout and wfc-timeout
settings. Meanwhile, any suggestions on getting past this will be most
appreciated.
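
Once there is a shell on the node again, it is easy to confirm what each
resource is actually prepared to wait for. A quick check, assuming a stock
drbdadm is in the path:

    # dump the config as drbdadm parses it and look for a startup section;
    # no startup section, or wfc-timeout 0, means "wait for the peer forever"
    drbdadm dump all | grep -B1 -A4 startup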