Hi,
On 23 February 2011 22:57, Dimitri Maziuk wrote:
> Serge Dubrouski wrote:
>
>> But you still can have just 1 IP associated with a node that has LDAP
>> up. Or you can have an IP with load balancer and health monitor. It's
>> all design issues.
>
> Yes. You can have a lot of things. However, t
Hello everyone,
I have been trying to get STONITH to work with the MasterSwitch Plus (AP9225 +
AP9617) with very little success.
I should mention that I am able to access the MasterSwitch Plus via either the
serial port or the Ethernet port with no issues. But when I try the stonith
command, it appears th
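A first step that often helps when debugging this is to ask the stonith CLI itself which plugins exist and which parameters they expect. The plugin name (apcmaster) and the parameter values below are assumptions, not taken from the original post; the -L and -n calls show what your installation actually provides:

    # list the stonith plugins shipped with this installation and look for APC ones
    stonith -L | grep -i apc

    # print the parameter names the chosen plugin expects (plugin name is an assumption)
    stonith -t apcmaster -n

    # query device status with explicit parameters -- all values are placeholders
    stonith -t apcmaster -p "10.0.0.5 apc apc" -S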
Serge Dubrouski wrote:
> But you still can have just 1 IP associated with a node that has LDAP
> up. Or you can have an IP with load balancer and health monitor. It's
> all design issues.
Yes. You can have a lot of things. However, the ldap failover + syncrepl
README just says to put "URI server-1 se
On Wed, Feb 23, 2011 at 3:36 PM, Dimitri Maziuk wrote:
> Serge Dubrouski wrote:
>> On Wed, Feb 23, 2011 at 2:56 PM, Dimitri Maziuk
>> wrote:
>>> Serge Dubrouski wrote:
>>>> Why not use the ldap syncrepl feature instead of DRBD?
>>> The problem with syncrepl is not the replication, it's the timeou
Serge Dubrouski wrote:
> On Wed, Feb 23, 2011 at 2:56 PM, Dimitri Maziuk wrote:
>> Serge Dubrouski wrote:
>>> Why not use the ldap syncrepl feature instead of DRBD?
>> The problem with syncrepl is not the replication, it's the timeouts in
>> the failover. As in you type "ls -l", your computer freez
On Wed, Feb 23, 2011 at 2:56 PM, Dimitri Maziuk wrote:
> Serge Dubrouski wrote:
>> Why not use the ldap syncrepl feature instead of DRBD?
>
> The problem with syncrepl is not the replication, it's the timeouts in
> the failover. As in you type "ls -l", your computer freezes for 5 minutes.
With syn
Serge Dubrouski wrote:
> Why not use the ldap syncrepl feature instead of DRBD?
The problem with syncrepl is not the replication, it's the timeouts in
the failover. As in you type "ls -l", your computer freezes for 5 minutes.
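For what it's worth, that freeze can usually be bounded on the client side. A minimal sketch of an nss_ldap-style /etc/ldap.conf, assuming both replicas are listed; the exact option names differ between nss_ldap, nslcd and sssd, so treat these as illustrative:

    # both replicas; the client walks the list when the first one is unreachable
    uri ldap://ldap1.example.com/ ldap://ldap2.example.com/

    # cap how long a single connect/search attempt may take (seconds)
    bind_timelimit 5
    timelimit 5

    # fail soft instead of retrying forever when both servers are down
    bind_policy soft
    nss_reconnect_tries 2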
Dimitri
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madiso
Sorry, I thought about database updates, not software updates.
On Wed, Feb 23, 2011 at 2:33 PM, Serge Dubrouski wrote:
> Why not use the ldap syncrepl feature instead of DRBD?
>
> On the other hand, what exactly are you trying to achieve? I don't think that
> you can build an active/active LDAP cluster us
Christopher Metter wrote:
> Hi there,
>
> I'm running a setup with a Heartbeat/DRBD cluster with 2 nodes and the
> OpenLDAP database stored inside the DRBD device.
> No problem with the setup itself, it runs perfectly.
>
>
> But I'm having the following problem: How to update LDAP in a cluster?
Are y
Why not use the ldap syncrepl feature instead of DRBD?
On the other hand, what exactly are you trying to achieve? I don't think that
you can build an active/active LDAP cluster using DRBD, since LDAP
caches its database and doesn't provide mechanisms to synchronize
caches, so your only choice would be acti
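For the active/passive variant being described, a rough crm configuration sketch might look like the following. The resource names, paths and the lsb:slapd init script name are assumptions (not from the original poster's setup), and it presumes a pacemaker-style CIB rather than a Heartbeat v1 haresources file:

    primitive drbd_ldap ocf:linbit:drbd \
            params drbd_resource="ldap" \
            op monitor interval="30s"
    ms ms_drbd_ldap drbd_ldap \
            meta master-max="1" clone-max="2" notify="true"
    primitive fs_ldap ocf:heartbeat:Filesystem \
            params device="/dev/drbd0" directory="/var/lib/ldap" fstype="ext3"
    primitive slapd lsb:slapd
    group grp_ldap fs_ldap slapd
    colocation ldap_on_drbd_master inf: grp_ldap ms_drbd_ldap:Master
    order ldap_after_drbd inf: ms_drbd_ldap:promote grp_ldap:start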
Hi there,
I'm running a setup with a Heartbeat/DRBD cluster with 2 nodes and the
OpenLDAP database stored inside the DRBD device.
No problem with the setup itself, it runs perfectly.
But I'm having the following problem: How to update LDAP in a cluster?
The plan was to first run the update on the cur
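One common way to do the update without surprises is to push the resources off the node first and only then touch the software. A sketch with the crm shell; node names are placeholders:

    # move everything off the node that is about to be updated
    crm node standby node1

    # ... update slapd / the OS on node1 while node2 keeps serving LDAP ...

    # let the node take resources again (fail-back depends on your stickiness settings)
    crm node online node1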
Hi,
I noticed these two recent commits:
http://hg.linux-ha.org/glue/rev/be41a3ef5717
http://hg.linux-ha.org/glue/rev/d38599e0d926
I think .ocf-shellfuncs is "defunct" as of resource-agents 1.0.4.
Shouldn't cluster-glue be made aware of that fact?
Cheers,
Vadym
Hi there,
I'm afraid I'm asking a question that several other people have asked before.
Believe me, I think I've tried everything from the posts I've found so far.
I'm currently trying to get my apache webserver to be started by pacemaker.
Here's the config:
primitive sharedIP ocf:heartbeat:IPaddr2 \
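Since the posted primitive is cut off, here is a minimal sketch of what a complete IP + apache pair typically looks like; the address, netmask and config file path are placeholders, not values from the original post:

    primitive sharedIP ocf:heartbeat:IPaddr2 \
            params ip="192.168.1.100" cidr_netmask="24" \
            op monitor interval="10s"
    primitive webserver ocf:heartbeat:apache \
            params configfile="/etc/apache2/apache2.conf" \
            op monitor interval="30s"
    colocation web_with_ip inf: webserver sharedIP
    order ip_before_web inf: sharedIP webserver

One thing worth checking if the start keeps failing: the apache RA monitors the resource via its status URL, so mod_status (or an explicit statusurl parameter) usually needs to be enabled.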
Hi there!
...
> Please no-one try a loop-mounted image file on NFS ;-) Even though in theory
> it may work, if you mount -o sync ...
> *Outch*
...
> Does this help?
> http://www.linux-ha.org/w/index.php?title=SBD_Fencing&diff=481&oldid=97
Yes, this helps... somehow. Well, I should use iSCSI to s
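If the shared block device does end up being a small iSCSI LUN, the setup is roughly the following. The device path is a placeholder and the external/sbd parameter name is written from memory, so check it against the sbd man page:

    # write the sbd metadata onto the shared device (destroys whatever is on it)
    sbd -d /dev/disk/by-id/scsi-SHARED_LUN create

    # verify the header and the message slots
    sbd -d /dev/disk/by-id/scsi-SHARED_LUN dump

    # then point a stonith resource at it, e.g. via the crm shell
    crm configure primitive stonith-sbd stonith:external/sbd \
            params sbd_device="/dev/disk/by-id/scsi-SHARED_LUN"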
Hi,
On Wed, Feb 23, 2011 at 03:04:15PM +0100, Cristian Mammoli - Apra Sistemi wrote:
> On 02/23/2011 12:12 PM, Dejan Muhamedagic wrote:
>
> > There's also external/libvirt which hasn't been in any release
> > yet, but seems to be of very good quality. You can get it here:
> >
> > http://hg.linux-
On Wed, Feb 23, 2011 at 12:19:20PM +, Stallmann, Andreas wrote:
> Hi!
>
> >> - (3) rules out sbd, as this method requires access to a physical device
> >> that offers the shared storage. Am I right? The manual explicitly says
> >> that sbd may not even be used on a DRBD partition. Question
On 02/23/2011 12:12 PM, Dejan Muhamedagic wrote:
> There's also external/libvirt which hasn't been in any release
> yet, but seems to be of very good quality. You can get it here:
>
> http://hg.linux-ha.org/glue/file/tip/lib/plugins/stonith/external/libvirt
>
> and just put it into /usr/lib64/ston
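Putting the plugin in place and configuring it is roughly this. The raw-file URL is a guessed variant of the link above, and the hostlist format and hypervisor_uri parameter are assumptions based on other external/* plugins, so check the plugin's own -n output first:

    # fetch the plugin and drop it into the external stonith plugin directory
    wget -O libvirt "http://hg.linux-ha.org/glue/raw-file/tip/lib/plugins/stonith/external/libvirt"
    install -m 755 libvirt /usr/lib64/stonith/plugins/external/libvirt

    # ask the plugin which parameters it expects
    stonith -t external/libvirt -n

    # example resource (host:domain mapping and URI are placeholders)
    crm configure primitive fence-node1 stonith:external/libvirt \
            params hostlist="node1:node1-vm" \
                   hypervisor_uri="qemu+ssh://virthost.example.com/system"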
Hi!
>> - (3) rules out sbd, as this method requires access to a physical device
>> that offers the shared storage. Am I right? The manual explicitly says that
>> sbd may not even be used on a DRBD partition. Question: Is there a way to
>> insert the sbd header on a mounted drive instead of a
Hi,
On Wed, Feb 23, 2011 at 09:55:00AM +, Stallmann, Andreas wrote:
> Hello!
>
> I'm currently looking for a suitable stonith solution for our environment:
>
> 1. We have three cluster nodes running OpenSuSE 10.3 with corosync and
> pacemaker.
> 2. The nodes reside on two VMware ESXi-Serve
Just a small point: why call grep twice below? If you are looking for
something like "Filesystem state:error", then use
egrep 'Filesystem state:.*error'
>> with_error=`dumpe2fs -h $DEVICE 2>/dev/null | grep "Filesystem
>> state:" | grep "error"`
>> if [ -n "$with_error" ]; then
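Following that suggestion, the quoted snippet collapses to a single pipeline. This is only a sketch, with an echo standing in for whatever the original agent does in that branch:

    # one egrep instead of grep | grep, quoting $DEVICE for safety
    with_error=`dumpe2fs -h "$DEVICE" 2>/dev/null | egrep 'Filesystem state:.*error'`
    if [ -n "$with_error" ]; then
            echo "filesystem on $DEVICE has an error state recorded: $with_error"
    fi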
Hello!
I'm currently looking for a suitable stonith solution for our environment:
1. We have three cluster nodes running OpenSuSE 10.3 with corosync and
pacemaker.
2. The nodes reside on two VMware ESXi-Servers (v. 4.1.0) in two locations,
where one VMware Server hosts two, the other hosts on