On Wed, Apr 14, 2010 at 09:11:43AM +0200, Ivan Coronado wrote:
> I was wondering which is the better stonith-action. If I set reboot but
> the node doesn't restart (damaged motherboard or no power, for example)
> resources never migrate. So it would be better to set stonith-action
> to poweroff,
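For reference, stonith-action is a cluster-wide property, so flipping it is a one-line change; a minimal crm shell sketch (assuming the crm shell is installed):

  crm configure property stonith-action=poweroff

With poweroff, a node that is successfully fenced stays down for inspection instead of rejoining; whether that is preferable to reboot is a policy choice, but note that if the fencing device itself cannot confirm the operation, resources will not migrate in either mode.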
On Wed, Mar 24, 2010 at 07:59:26PM +, Mario Giammarco wrote:
> Andrew Beekhof writes:
> > Have you seen:
> > http://www.clusterlabs.org/doc/crm_fencing.html
> > > I have been led to believe that STONITH
> > > will help prevent split brain situations, but the LINBIT instructions do
> > > not
On Fri, Mar 19, 2010 at 10:47:59PM +0100, Emmanuel Lesouef wrote:
> I'm trying to make an active/passive dhcp server.
[...]
> The problem is that when node1 comes online again, there's a difference
> in the dhcp lease file.
>
> I think that using rsync to synchronize the lease file is not the best
On Wed, Mar 17, 2010 at 07:16:16AM -0500, Schaefer, Diane E wrote:
> We were wondering what the node state of UNCLEAN, with its three
> variations of online, offline and pending as returned by crm_mon, means. We
> had the heartbeat service off on one of our nodes and the other node
> reported U
On Fri, Mar 12, 2010 at 09:48:57AM -, darren.mans...@opengi.co.uk wrote:
> /proc/drbd on the slave said Secondary/Primary UpToDate/Inconsistent
> while it was syncing data back - so it was able to mount the
> inconsistent data on the primary node and access the files that hadn't
> yet sync'd ov
On Thu, Mar 11, 2010 at 05:26:19PM +0800, Martin Aspeli wrote:
> Matthew Palmer wrote:
>> On Thu, Mar 11, 2010 at 03:34:50PM +0800, Martin Aspeli wrote:
>>> I was wondering, though, if fencing at the DRBD level would get around
>>> the possible problem with a full powe
On Thu, Mar 11, 2010 at 03:34:50PM +0800, Martin Aspeli wrote:
> I was wondering, though, if fencing at the DRBD level would get around
> the possible problem with a full power outage taking the fencing device
> down.
>
> In my poor understanding of things, it'd work like this:
>
> - Pacemaker
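For the DRBD-level fencing being discussed, the usual wiring is DRBD's fence-peer handler hooked into Pacemaker; a sketch for drbd.conf (assuming DRBD 8.3+ with the LINBIT helper scripts installed under /usr/lib/drbd/):

  resource r0 {
    disk {
      fencing resource-only;
    }
    handlers {
      fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    # device/disk/net/on sections as before
  }

When replication breaks, the handler writes a location constraint into the CIB that blocks promotion of the peer until its data is current again, so the protection lives in the cluster configuration rather than only in a powered fencing device.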
On Wed, Mar 10, 2010 at 11:10:31PM +0800, Martin Aspeli wrote:
> Dejan Muhamedagic wrote:
>> ocfs2 introduces an extra level of complexity. You don't want
>> that unless really necessary.
>
> How would that complexity manifest?
Have you noticed the number of extra daemons and kernel bits that have
[Up-front disclaimer: I'm not a fan of cluster filesystems, having had large
chunks of my little remaining sanity shredded by GFS. So what I say is
likely tinged with lingering loathing, although I do *try* to stay factual]
On Wed, Mar 10, 2010 at 09:01:01PM +0800, Martin Aspeli wrote:
>
On Thu, Mar 11, 2010 at 08:30:29AM +0800, Martin Aspeli wrote:
> Martin Aspeli wrote:
>> Hi folks,
>>
>> Let's say we have a two-node cluster with DRBD and OCFS2, with a database
>> server that's supposed to be active on one node at a time, using the
>> OCFS2 partition for its data store.
>>
>> If we
On Wed, Mar 10, 2010 at 11:26:41AM -, darren.mans...@opengi.co.uk wrote:
>
> On Wed, Mar 10, 2010 at 02:32:05PM +0800, Martin Aspeli wrote:
> > Florian Haas wrote:
> >> On 03/09/2010 06:07 AM, Martin Aspeli wrote:
> >>> Hi folks,
> >>>
> >>> Let's say we have a two-node cluster with DRBD and OCFS
On Wed, Mar 10, 2010 at 02:32:05PM +0800, Martin Aspeli wrote:
> Florian Haas wrote:
>> On 03/09/2010 06:07 AM, Martin Aspeli wrote:
>>> Hi folks,
>>>
>>> Let's say we have a two-node cluster with DRBD and OCFS2, with a database
>>> server that's supposed to be active on one node at a time, using the
On Mon, Mar 08, 2010 at 03:21:32PM +0800, Martin Aspeli wrote:
> Matthew Palmer wrote:
>>> What is the normal way to handle this? Do people have one floating IP
>>> address per service?
>>
>> This is how I prefer to do it. RFC1918 IP addresses are cheap, IPv6
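To make "one floating IP per service" concrete, a crm shell sketch (addresses are RFC1918 placeholders, and apache_www / mysql_db stand in for whatever service primitives already exist):

  primitive ip_www   ocf:heartbeat:IPaddr2 params ip=10.0.0.10 cidr_netmask=24 op monitor interval=10s
  primitive ip_mysql ocf:heartbeat:IPaddr2 params ip=10.0.0.11 cidr_netmask=24 op monitor interval=10s
  group g_www   ip_www   apache_www
  group g_mysql ip_mysql mysql_db

Each service then fails over independently, and clients only ever need to know the service address, never which node it currently lives on.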
On Mon, Mar 08, 2010 at 01:34:01PM +0800, Martin Aspeli wrote:
> This question was sort of implied in my thread last week, but I'm going
> to re-ask it properly, to reduce my own confusion if nothing else.
>
> We have two servers, master and slave. In the cluster, we have:
[bunchteen services, s
On Tue, Mar 02, 2010 at 03:47:56PM +0100, Testuser SST wrote:
> I'm running a 2-node Apache cluster and all works fine, but is there a
> way to start Apache while supplying the password needed to start up
> the ssl-engine (there is one ssl-cert with and one without a password on
> this server)
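One common way to handle a passphrase-protected key without typing it at every start is mod_ssl's SSLPassPhraseDialog pointed at an external program; a sketch (path and passphrase are placeholders, and storing the passphrase in a root-only script is only marginally better than removing it):

  # in the Apache SSL vhost/config
  SSLPassPhraseDialog exec:/usr/local/sbin/ssl-passphrase

  # /usr/local/sbin/ssl-passphrase (mode 0700, owner root)
  #!/bin/sh
  # mod_ssl invokes this with "servername:port" and the key type,
  # and expects the matching passphrase on stdout.
  echo 'the-cert-passphrase'

The simpler alternative, and what many clusters end up doing so a failover never blocks waiting for input, is to strip the passphrase from the key entirely (openssl rsa -in server.key -out server.key.nopass).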
On Thu, Feb 11, 2010 at 10:52:34AM +0100, Sander van Vugt wrote:
> I'm working on different Xen HA projects, but sometimes get the idea
> that I'm the only one on the planet doing such projects. Is there anyone
> on the list involved in Xen HA projects? I would appreciate having the
> opportunity t
On Wed, Dec 30, 2009 at 10:54:59AM +0100, f...@fredleroy.com wrote:
> Many thanks for your help!
>
> just one question about your mysql ip.
> Do you use a dedicated ip for mysql? Why not just refer to localhost?
We have a strong policy of "one service, one IP", on the basis that sooner
or late
On Wed, Dec 30, 2009 at 10:04:43AM +0100, f...@fredleroy.com wrote:
> Hi all,
>
> I'm a real newbie to pacemaker and after quite a bit of reading, I believe my
> setup would be the following:
> - 2 node cluster active/passive
> - using debian lenny, 1 nic per node, hard raid1 on each node
> - plan t
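For a two-node active/passive cluster like that, the usual Pacemaker starting point looks something like this (a sketch; values are illustrative, and stonith-enabled=false is acceptable only while testing):

  crm configure property no-quorum-policy=ignore
  crm configure property stonith-enabled=false
  crm configure rsc_defaults resource-stickiness=100

no-quorum-policy=ignore is needed because a two-node cluster can never retain quorum once a node dies; it is also exactly why working STONITH matters so much in this layout.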
On Fri, Nov 20, 2009 at 03:14:16PM -0200, Alexandre Biancalana wrote:
> On Fri, Nov 20, 2009 at 2:53 PM, Matthew Palmer wrote:
> > On Fri, Nov 20, 2009 at 02:42:29PM -0200, Alexandre Biancalana wrote:
> >> I'm building a 4 node cluster where 2 nodes will export drbd devi
On Fri, Nov 20, 2009 at 02:42:29PM -0200, Alexandre Biancalana wrote:
> I'm building a 4 node cluster where 2 nodes will export drbd devices
> via the ietd iscsi target (storage nodes) and the other 2 nodes will run xen
> vms (app nodes) stored on lvm partitions accessed via the open-iscsi
> initiator, using mu
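For the storage-node side of that layout, the general crm shape is a DRBD master/slave set with the iSCSI target tied to the Master role; a sketch (resource names and the IQN are invented, and it assumes the ocf:linbit:drbd and ocf:heartbeat:iSCSITarget agents are available):

  primitive p_drbd_lun0 ocf:linbit:drbd params drbd_resource=lun0 \
      op monitor interval=29s role=Master op monitor interval=31s role=Slave
  ms ms_drbd_lun0 p_drbd_lun0 meta master-max=1 clone-max=2 notify=true
  primitive p_target_lun0 ocf:heartbeat:iSCSITarget params iqn="iqn.2009-11.com.example:lun0"
  colocation c_target_with_master inf: p_target_lun0 ms_drbd_lun0:Master
  order o_drbd_before_target inf: ms_drbd_lun0:promote p_target_lun0:start

The colocation and order lines are the important part: the target must only ever run where DRBD is Primary, and only after the promotion has happened.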
On Mon, Oct 26, 2009 at 11:46:28AM +0100, Iñaki Sánchez wrote:
> I want to set up a two node mysql cluster, active-passive with shared
> storage in a SAN.
> I want only one node at a time to have mysqld running and mysql data
> filesystem mounted. In case of takeover, the second node would moun
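The usual shape for that kind of active/passive MySQL setup is one group so the filesystem, the service IP and the daemon always travel together; a crm sketch (device path and address are placeholders), with the caveat that shared SAN storage makes working STONITH mandatory, since a double mount will destroy the data:

  primitive p_fs_mysql ocf:heartbeat:Filesystem \
      params device=/dev/mapper/san-mysql directory=/var/lib/mysql fstype=ext3 \
      op monitor interval=20s
  primitive p_ip_mysql ocf:heartbeat:IPaddr2 params ip=10.0.0.50 cidr_netmask=24
  primitive p_mysql    ocf:heartbeat:mysql op monitor interval=30s
  group g_mysql p_fs_mysql p_ip_mysql p_mysql

Group members start in order and stop in reverse, so the filesystem is always mounted before mysqld starts and unmounted only after it has stopped.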
On Fri, Oct 16, 2009 at 10:54:18AM +0200, Raoul Bhatia [IPAX] wrote:
> On 10/16/2009 09:59 AM, Matthew Palmer wrote:
> > If this were a single-machine service, I'd completely agree with you.
> > Unfortunately, a cluster service like pacemaker needs to have absolutely
> >
On Fri, Oct 16, 2009 at 10:50:44AM +0200, Raoul Bhatia [IPAX] wrote:
> On 10/16/2009 09:59 AM, Matthew Palmer wrote:
> >> (1 min later: http://wiki.github.com/camptocamp/puppet-pacemaker has
> >> no downloads, and no documentation; is it even remotely stable/ready
> >
On Fri, Oct 16, 2009 at 09:23:41AM +0200, Colin wrote:
> On Thu, Oct 15, 2009 at 10:51 AM, Matthew Palmer wrote:
> > On Thu, Oct 15, 2009 at 10:07:56AM +0200, Colin wrote:
> >> Another question regarding how to activate a pacemaker config: Is
> >> there any way to act
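If the question is how to push a prepared configuration into the running cluster in one step, the crm shell can load a file as a single transaction, and cibadmin can replace the CIB wholesale; a sketch (file names are placeholders):

  # merge the file's contents into the live configuration
  crm configure load update /root/cluster.crm

  # or replace the configuration from an XML dump
  cibadmin --replace --xml-file /root/cib-new.xml

Either way it is worth previewing the result (ptest / crm_simulate) before pointing it at production.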
On Thu, Oct 15, 2009 at 10:07:56AM +0200, Colin wrote:
> On Sun, Oct 11, 2009 at 9:13 PM, Andrew Beekhof wrote:
> > On Fri, Oct 9, 2009 at 3:12 PM, Colin wrote:
> >> The config explained document is excellent -- once everything is up
> >> and running to arrive at "its level".
> >
> > Agreed. I'v
On Mon, Oct 05, 2009 at 02:39:19PM +0200, Florian Haas wrote:
> And whether these node names are fully-qualified is
> actually not up to the user, but depends on the distro used. That was my
> point. :)
On the contrary, all my (Debian) pacemaker nodes have their FQDN as the node
name
On Mon, Sep 28, 2009 at 09:36:48AM +0200, Johan Verrept wrote:
> On Sun, 2009-09-27 at 16:32 +1000, Matthew Palmer wrote:
> > On a related topic, is there any way to find out what the cluster's scores
> > for all resources are, and how it came to calculate those scores? The
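On the scores question: the allocation scores the policy engine computed can be dumped from the live cluster; a sketch (ptest in older Pacemaker releases, crm_simulate in later ones):

  # show allocation scores for the current, live CIB
  ptest -L -s

  # or, on newer versions
  crm_simulate -L -s

Adding -V (repeatable) raises verbosity if you also want to see how the individual constraint contributions add up.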
Hi all,
I've got a cluster of three Xen dom0s (running VMs managed by pacemaker with
DRBD in the dom0s for the VM disks) that I'm trying to get working in a
stable fashion, but I'm having a hard time avoiding what I've dubbed the
"startled herd" problem.
Basically, once the allocation of VMs is i
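The usual first defence against that kind of mass reshuffle is resource stickiness, so that a node rejoining the cluster does not immediately pull everything back; a crm sketch (the value is arbitrary and has to be weighed against any location preference scores in use):

  crm configure rsc_defaults resource-stickiness=200

Stickiness biases each VM towards wherever it is currently running, so a rejoin only triggers the moves that actually beat the configured score difference.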
On Mon, Sep 21, 2009 at 09:47:56PM +0900, renayama19661...@ybb.ne.jp wrote:
> I understand that the method you showed would certainly achieve this.
>
> However, we wanted the cluster software itself to handle the respawn if possible.
Why, though? It's not the right solution to the problem.
- Matt