Hi Holger,
On Tue, Feb 22, 2011 at 06:25:37PM +0100, Holger Teutsch wrote:
Hi,
I resubmit the db2 agent for inclusion into the project. Besides fixing
some loose ends, the major change is a reimplementation of attribute
management. The attributes are now of type -t nodes -l reboot. IMHO
all
On Mon, Feb 21, 2011 at 06:18:58PM +0100, Dejan Muhamedagic wrote:
Hi Florian,
On Wed, Feb 16, 2011 at 09:11:21AM +0100, Florian Haas wrote:
Hello,
I have taken John Shi's original ocft README and turned it into a
section in the RA dev guide. It's not on the web site yet as I've not
On 2011-02-20 14:06, Holger Teutsch wrote:
# HG changeset patch
# User Holger Teutsch holger.teut...@web.de
# Date 1298206988 -3600
# Node ID 91a9bede86dd9642fda4dc9fd06514142c0a7d9a
# Parent 1b3222748961d4acd501707c67d891b050a44c7f
Doc: ra2refentry.xml: Add preservation of empty lines to
On 2011-02-23 13:37, Dejan Muhamedagic wrote:
On Mon, Feb 21, 2011 at 06:18:58PM +0100, Dejan Muhamedagic wrote:
Hi Florian,
On Wed, Feb 16, 2011 at 09:11:21AM +0100, Florian Haas wrote:
Hello,
I have taken John Shi's original ocft README and turned it into a
section in the RA dev guide.
Hi Dejan,
On Wed, 2011-02-23 at 11:54 +0100, Dejan Muhamedagic wrote:
Hi Holger,
On Tue, Feb 22, 2011 at 06:25:37PM +0100, Holger Teutsch wrote:
Hi,
I resubmit the db2 agent for inclusion into the project. Besides fixing
some loose ends the major change is a reimplementation of
Hi Dejan,
On Wed, 2011-02-23 at 18:03 +0100, Holger Teutsch wrote:
Hi Dejan,
On Wed, 2011-02-23 at 11:54 +0100, Dejan Muhamedagic wrote:
Hi Holger,
On Tue, Feb 22, 2011 at 06:25:37PM +0100, Holger Teutsch wrote:
Hi,
I resubmit the db2 agent for inclusion into the project. Besides
Hello!
I'm currently looking for a suitable stonith solution for our environment:
1. We have three cluster nodes running OpenSuSE 10.3 with corosync and
pacemaker.
2. The nodes reside on two VMware ESXi servers (v. 4.1.0) in two locations,
where one VMware Server hosts two, the other hosts
Just a small point: why call grep twice below? If you are looking for
something like "Filesystem state: ...error..." then use
egrep 'Filesystem state:.*error'
with_error=`dumpe2fs -h "$DEVICE" 2>/dev/null | grep "Filesystem state:" | grep error`
if [ -n "$with_error" ];
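The suggested single-pattern version can be sketched as follows. This is a self-contained illustration only: `sample_output` stands in for the real `dumpe2fs -h "$DEVICE" 2>/dev/null` output, which cannot be produced here without an actual ext2/3/4 device.

```shell
#!/bin/sh
# Stand-in for `dumpe2fs -h "$DEVICE" 2>/dev/null` (illustrative value).
sample_output='Filesystem state:         clean with errors'

# One grep -E (egrep) call replaces the grep | grep pipeline.
with_error=$(printf '%s\n' "$sample_output" | grep -E 'Filesystem state:.*error')

if [ -n "$with_error" ]; then
    echo "filesystem has errors"
fi
```

Note that quoting `"$with_error"` in the test matters: with an unquoted variable, `[ -n $with_error ]` degenerates to `[ -n ]`, which is true even when the variable is empty.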
Hi,
On Wed, Feb 23, 2011 at 09:55:00AM +, Stallmann, Andreas wrote:
Hello!
I'm currently looking for a suitable stonith solution for our environment:
1. We have three cluster nodes running OpenSuSE 10.3 with corosync and
pacemaker.
2. The nodes reside on two VMware ESXi-Servers (
Hi!
- (3) rules out sbd, as this method requires access to a physical device
that provides the shared storage. Am I right? The manual explicitly says that
sbd may not even be used on a DRBD partition. Question: Is there a way to
insert the sbd header on a mounted drive instead of a
On Wed, Feb 23, 2011 at 12:19:20PM +, Stallmann, Andreas wrote:
Hi!
- (3) rules out sbd, as this method requires access to a physical device,
that offers the shared storage. Am I right? The manual explicitly says,
that sbd may even not be used on a DRBD-Partition. Question: Is there
Hi there!
...
Please no-one try a loop-mounted image file on NFS ;-) Even though in theory
it may work, if you mount -o sync ...
*Outch*
...
Does this help?
http://www.linux-ha.org/w/index.php?title=SBD_Fencing&diff=481&oldid=97
Yes, this helps... somehow. Well, I should use iSCSI to share
Hi,
I noticed these two recent commits:
http://hg.linux-ha.org/glue/rev/be41a3ef5717
http://hg.linux-ha.org/glue/rev/d38599e0d926
I think .ocf-shellfuncs is defunct as of resource-agents 1.0.4.
Should cluster-glue be made aware of this fact?
Cheers,
Vadym
Hi there,
I'm running a setup with a Heartbeat/DRBD cluster with 2 nodes and an
OpenLDAP database stored inside the DRBD device.
No problem with the setup itself; it runs perfectly.
But I'm having the following problem: How to update LDAP in a cluster?
The plan was to first run the update on the
Why not use the LDAP syncrepl feature instead of DRBD?
On the other hand, what exactly are you trying to get? I don't think that
you can build an active/active LDAP cluster using DRBD, since LDAP
caches its database and doesn't provide mechanisms to synchronize
caches, so your only choice would be
Christopher Metter wrote:
Hi there,
I'm running a setup with a Heartbeat/DRBD cluster with 2 nodes and an
OpenLDAP database stored inside the DRBD device.
No problem with the setup itself, it runs perfectly.
But I'm having following problem: How to update LDAP in a cluster?
Are you
Sorry, I thought about database updates, not software updates.
On Wed, Feb 23, 2011 at 2:33 PM, Serge Dubrouski serge...@gmail.com wrote:
Why not to use ldap syncrepl feature instead of DRBD?
On other hand what exactly are you trying to get? I don't think that
you can build and active/active
On Wed, Feb 23, 2011 at 2:56 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
Serge Dubrouski wrote:
Why not to use ldap syncrepl feature instead of DRBD?
The problem with syncrepl is not the replication, it's the timeouts in
the failover. As in you type ls -l, your computer freezes for 5
Serge Dubrouski wrote:
On Wed, Feb 23, 2011 at 2:56 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
Serge Dubrouski wrote:
Why not to use ldap syncrepl feature instead of DRBD?
The problem with syncrepl is not the replication, it's the timeouts in
the failover. As in you type ls -l, your
On Wed, Feb 23, 2011 at 3:36 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
Serge Dubrouski wrote:
On Wed, Feb 23, 2011 at 2:56 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu
wrote:
Serge Dubrouski wrote:
Why not to use ldap syncrepl feature instead of DRBD?
The problem with syncrepl is not the
Serge Dubrouski wrote:
But you still can have just 1 IP associated with a node that has LDAP
up. Or you can have an IP with load balancer and health monitor. It's
all design issues.
Yes. You can have a lot of things. However, the ldap failover + syncrepl
README is just put URI server-1
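The client-side failover being discussed can be illustrated with a hedged sketch of an OpenLDAP client configuration; the hostnames and timeout value below are illustrative placeholders, not taken from the thread.

```
# /etc/openldap/ldap.conf (client side) -- hostnames are placeholders.
# OpenLDAP clients try the listed URIs in order, which is the
# "put URI server-1 server-2" failover mentioned above. The freeze
# Dimitri describes is governed by how long the client waits on the
# dead first server before trying the next one.
URI             ldap://ldap1.example.com ldap://ldap2.example.com
BASE            dc=example,dc=com
NETWORK_TIMEOUT 5
```

With a short NETWORK_TIMEOUT the client falls over to the second URI quickly, but this only helps clients that read ldap.conf; it does not replace a floating service IP managed by the cluster.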
Hello everyone,
I have been trying to get STONITH to work with the MasterSwitch Plus (AP9225 +
AP9617) with very little success.
I have to mention that I am able to access the MasterSwitch Plus via either
serial port or Ethernet port with no issue. But when I try the stonith
command, it appears
Hi,
On 23 February 2011 22:57, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
Serge Dubrouski wrote:
But you still can have just 1 IP associated with a node that has LDAP
up. Or you can have an IP with load balancer and health monitor. It's
all design issues.
Yes. You can have a lot of