On 03/23/2010 06:16 PM, Dimitri Maziuk wrote:
> On Tuesday 23 March 2010 17:15:24 Terry Inzauro wrote:
>>
>> Please elaborate.
>>
>> Are you telling me that if 1 or 100 clients have an active NFS mount to the
>> clustered NFS server, then resources can't be migrated?
>
> No, I'm asking you if a clustered nfs *server* has an nfs-mounted filesystem.
On Tuesday 23 March 2010 17:15:24 Terry Inzauro wrote:
>
> Please elaborate.
>
> Are you telling me that if 1 or 100 clients have an active NFS mount to the
> clustered NFS server, then resources can't be migrated?
No, I'm asking you if a clustered nfs *server* has an nfs-mounted filesystem.
Espe
On 03/23/2010 01:18 PM, Greg Woods wrote:
>
> >>> On one node, I can get all services to start (and they work fine), but
> >>> whenever failover occurs, there are NFS-related handles left open, thus
> >>> inhibiting/hanging the failover. More specifically, the file systems fail
> >>> to unmount.
>
> If yo
On 03/23/2010 12:30 PM, Dimitri Maziuk wrote:
> On Tuesday 23 March 2010 08:37:28 Terry Inzauro wrote:
>
>> On one node, I can get all services to start (and they work fine), but
>> whenever failover occurs, there are NFS-related handles left open, thus
>> inhibiting/hanging the failover. More speci
On Tue, Mar 23, 2010 at 15:00, Andrew Beekhof wrote:
> On Tue, Mar 23, 2010 at 7:11 PM, Eric Blau wrote:
> > On Tue, Mar 23, 2010 at 13:17, Andrew Beekhof
> wrote:
> >
> >> On Tue, Mar 23, 2010 at 6:01 PM, Eric Blau wrote:
> >> > Hi everyone,
> >> >
> >> > I'm working with a test configuration
On Mon, Mar 15, 2010 at 4:17 PM, Robinson, Eric wrote:
> Well, I now have multiple MySQL instances failing over, but they do it
sequentially. Every time I try to make them fail over in parallel, I
> break the config so badly that crm starts throwing errors and I end up
> having to rebuild the who
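A minimal sketch, assuming each MySQL instance gets its own self-contained group with no constraints tying the groups together, so the cluster can move them independently (all names, IPs, and paths are hypothetical):
  # group 1: its own IP plus its own mysqld
  crm configure primitive ip-db1 ocf:heartbeat:IPaddr2 params ip=192.168.1.101
  crm configure primitive mysql-db1 ocf:heartbeat:mysql params config=/etc/mysql1/my.cnf
  crm configure group grp-db1 ip-db1 mysql-db1
  # group 2: identical shape, no ordering or colocation against group 1
  crm configure primitive ip-db2 ocf:heartbeat:IPaddr2 params ip=192.168.1.102
  crm configure primitive mysql-db2 ocf:heartbeat:mysql params config=/etc/mysql2/my.cnf
  crm configure group grp-db2 ip-db2 mysql-db2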
On Tue, Mar 23, 2010 at 7:11 PM, Eric Blau wrote:
> On Tue, Mar 23, 2010 at 13:17, Andrew Beekhof wrote:
>
>> On Tue, Mar 23, 2010 at 6:01 PM, Eric Blau wrote:
>> > Hi everyone,
>> >
>> > I'm working with a test configuration containing 128 resources using the
>> > Stateful example resource agen
> > On one node, I can get all services to start (and they work fine), but
> > whenever failover occurs, there are NFS-related handles left open, thus
> > inhibiting/hanging the failover. More specifically, the file systems fail
> > to unmount.
If you are referring to file systems on the server th
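For the "handles left open" symptom above, a minimal diagnostic sketch, assuming a hypothetical export path of /srv/nfs/export:
  # show what still has files open on the filesystem
  lsof /srv/nfs/export
  fuser -vm /srv/nfs/export
  # last resort before retrying the unmount: kill the holders (destructive)
  fuser -km /srv/nfs/export
  umount /srv/nfs/export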
On Tue, Mar 23, 2010 at 13:17, Andrew Beekhof wrote:
> On Tue, Mar 23, 2010 at 6:01 PM, Eric Blau wrote:
> > Hi everyone,
> >
> > I'm working with a test configuration containing 128 resources using the
> > Stateful example resource agent supplied with Linux HA. I'm trying to
> > figure out how
On Tuesday 23 March 2010 08:37:28 Terry Inzauro wrote:
> On one node, I can get all services to start (and they work fine), but
> whenever failover occurs, there are NFS-related handles left open, thus
> inhibiting/hanging the failover. More specifically, the file systems fail
> to unmount.
>
> Any
On Tue, Mar 23, 2010 at 6:01 PM, Eric Blau wrote:
> Hi everyone,
>
> I'm working with a test configuration containing 128 resources using the
> Stateful example resource agent supplied with Linux HA. I'm trying to
> figure out how to get resource colocation constraints working efficiently.
>
> I
Hi everyone,
I'm working with a test configuration containing 128 resources using the
Stateful example resource agent supplied with Linux HA. I'm trying to
figure out how to get resource colocation constraints working efficiently.
I have 128 master/slave Stateful resources with a configuration f
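As a rough sketch, two of those master/slave Stateful resources plus a colocation between their masters might look like this in crm shell syntax (the names and the particular constraint are assumptions, not the poster's actual configuration):
  # two Stateful primitives with the usual per-role monitor intervals
  crm configure primitive stateful-1 ocf:pacemaker:Stateful \
      op monitor interval=10s role=Master op monitor interval=11s role=Slave
  crm configure primitive stateful-2 ocf:pacemaker:Stateful \
      op monitor interval=10s role=Master op monitor interval=11s role=Slave
  # wrap each in a master/slave resource
  crm configure ms ms-stateful-1 stateful-1 meta master-max=1 clone-max=2
  crm configure ms ms-stateful-2 stateful-2 meta master-max=1 clone-max=2
  # keep the two masters on the same node
  crm configure colocation col-2-with-1 inf: ms-stateful-2:Master ms-stateful-1:Master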
Hello,
I am reading the Linux-HA documentation at http://www.linux-ha.org/doc,
however it is divided into a million tiny little pages. I would like one
file that I can easily print and read away from the monitor. Can
anyone provide such a file please?
Thank you,
~ Boaz
You want "configuration explained"
http://www.clusterlabs.org/wiki/Documentation#Reference_Material
On Tue, Mar 23, 2010 at 3:34 PM, mike wrote:
> Hello all,
>
> I'm new to the LinuxHA world so be patient with me :--)
>
> I'm trying to find a document that will help me understand how to use
>
Hello all,
I'm new to the LinuxHA world so be patient with me :--)
I'm trying to find a document that will help me understand how to use
cibadmin. I think what I'm looking for is the ClusterInformationBase
UserGuide. Every link to it that I seem to find refers me back to
linux-ha.org which con
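A rough sketch of a few basic cibadmin invocations (file names are made up):
  # dump the whole live CIB to a file
  cibadmin -Q > cib-backup.xml
  # query just the resources section
  cibadmin -Q -o resources
  # push back an edited copy of the configuration
  cibadmin --replace --xml-file cib-new.xml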
On 12/28/2009 08:07 AM, Michael Schwartzkopff wrote:
> Am Montag, 28. Dezember 2009 14:57:53 schrieb Christopher Deneen:
>> On Mon, Dec 28, 2009 at 8:50 AM, Michael Schwartzkopff
>>
>> wrote:
>>> Am Montag, 28. Dezember 2009 14:43:25 schrieb Christopher Deneen:
acpid atop
Thanks, I will try this.
One more question. I have a setup of two nodes running in Active/Standby
with DRBD.
Is it good practice to start heartbeat while the primary node is still
syncing with the secondary node?
In other words, when the DRBD connection between the primary and secondary
node is in the "SyncSource" state
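A quick sketch of how the sync state can be checked before starting heartbeat (the resource name r0 is only an example):
  drbdadm cstate r0    # connection state, e.g. SyncSource or Connected
  drbdadm dstate r0    # disk states, e.g. UpToDate/Inconsistent
  cat /proc/drbd       # overall status, including sync progress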
Hi,
here is some code from my cib.xml ... hope this will help you:
and ...
Umakant Goyal wrote:
> Hi, thanks for the quick response. Could you please suggest which
> parameter values I need to change?
>
>
> On Tue, Mar 23, 2010 at 1:37 PM, Jochen Lienhard
Hi, thanks for the quick response. Could you please suggest which
parameter values I need to change?
On Tue, Mar 23, 2010 at 1:37 PM, Jochen Lienhard <
lienh...@ub.uni-freiburg.de> wrote:
> Hi,
>
> I had a similar problem too.
> The problem was that the system tried to demote the drbd b
Hi,
I had a similar problem too.
The problem was that the system tried to demote the drbd before unmounting
... even though I had a rule for this ... I solved this problem by
changing the timeouts. It seems to me that the default timeout of the
Filesystem OCF agent is too low.
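A hedged sketch of what raising that timeout might look like in crm shell syntax, together with the usual DRBD ordering (all names and devices are hypothetical, not taken from the original configuration):
  crm configure primitive p_drbd ocf:linbit:drbd params drbd_resource=r0 \
      op monitor interval=30s
  crm configure ms ms_drbd p_drbd meta master-max=1 clone-max=2 notify=true
  crm configure primitive p_fs ocf:heartbeat:Filesystem \
      params device=/dev/drbd0 directory=/data fstype=ext3 \
      op stop timeout=120s
  # mount only on the DRBD master, and only after promotion; the default
  # symmetrical ordering then also forces unmount before demote on failover
  crm configure colocation col_fs_on_drbd inf: p_fs ms_drbd:Master
  crm configure order o_drbd_before_fs inf: ms_drbd:promote p_fs:start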
Greetings
Jochen
Umakant Goya