Hi all,
I have a problem after I removed a node with the force command from my crm
config.
Originally I had 2 nodes running HA cluster (corosync 1.4.1-7.el6, pacemaker
1.1.7-6.el6)
Then I wanted to add a third node acting as quorum node, but was not able to
get it to work (probably because I
On Thu, 2013-03-14 at 16:26 +0100, Lars Marowsky-Bree wrote:
On 2013-03-14T09:44:11, GGS (linux ha) support-linu...@ggsys.net wrote:
That's fine. But the cluster software really assumes that only one
instance of it is running per server - said instance can then manage
multiple software
hi,
On Thu, Mar 14, 2013 at 11:15:29AM -0500, Alberto Alonso wrote:
On Mon, 2013-03-11 at 16:28 +0100, Dejan Muhamedagic wrote:
Hi,
On Mon, Mar 11, 2013 at 10:53:55AM +0100, Roman Haefeli wrote:
On Fri, 2013-03-08 at 14:15 +0100, Dejan Muhamedagic wrote:
Hi,
On Fri, Mar 08, 2013 at 01:39:27PM +0100, Roman Haefeli wrote:
On Fri, 2013-03-08 at
Hello Fedrik
Why do you have a clone of cl_exportfs_root when you have an ext4 filesystem?
And I think this ordering is not correct:
order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start
order o_root_before_nfs inf: cl_exportfs_root g_nfs:start
I think that way you try to start g_nfs twice.
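(For illustration only: if the intent is a single start sequence, the two
constraints could be collapsed into one ordered chain, so that g_nfs appears
in only one order constraint. The constraint name o_nfs_chain below is
hypothetical, and whether this matches the intended semantics depends on the
full configuration:
order o_nfs_chain inf: ms_drbd_nfs:promote cl_exportfs_root g_nfs:start
)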
On Fri, Mar 15, 2013 at 10:44:37AM +0100, Roman Haefeli wrote:
On 3/14/2013 11:15 AM, Alberto Alonso wrote:
That's what I thought. The emails from 2009 seemed to indicate
that it was possible to run multiple instances.
I've always had difficulties with the concept: the way I see it if your
hardware fails you want *all* your 200+ services moved. If you
On 2013-03-15T09:54:22, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
I've always had difficulties with the concept: the way I see it if your
hardware fails you want *all* your 200+ services moved. If you want them
independently moved to different places, you're likely better off with a
full
On 03/15/2013 10:08 AM, Lars Marowsky-Bree wrote:
You're contradicting yourself ;-) Pacemaker in fact gives you the
management you suggest for the cloud use case - whether the services
are handled natively or encapsulated into a VM.
Yeah, I suppose. I meant going Open/CloudStack.
(We get to
On 2013-03-15T11:43:56, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
Yeah, I suppose. I meant going Open/CloudStack.
(We get to write buzzword-compliant funding proposals, or I don't get to
eat. So my perspective is skewed towards the hottest shiny du jour...)
Yeah, I'd agree that today there
On 03/15/2013 11:55 AM, Lars Marowsky-Bree wrote:
...
Right. Thankfully, we already have that, it's called pacemaker ;-)
Which brings me back to my original problem with the concept: I can
think of only one reason to fail over services (as opposed to
hardware), and that is your daemons are
On Fri, 2013-03-15 at 11:43 -0500, Dimitri Maziuk wrote:
On Fri, 2013-03-15 at 17:55 +0100, Lars Marowsky-Bree wrote:
Yeah, I'd agree that today there are scenarios where a cloud makes
more sense than a traditional HA environment. OpenStack et al still have
to up their HA game a bit, though.
You are being way too kind, a lot of improvement is
On Fri, 2013-03-15 at 12:32 -0500, Dimitri Maziuk wrote:
On 03/15/2013 12:59 PM, GGS (linux ha) wrote:
Unfortunately I'm not at liberty to discuss the full architecture
or what they are doing without written permission, which would
make it clear why we are going the path we are.
Yeah, I suspected something like that. Hopefully I won't ever need to
On Fri, 2013-03-15 at 13:08 -0500, Dimitri Maziuk wrote:
On 03/15/2013 01:20 PM, GGS (linux ha) wrote:
Virtualization has a huge penalty on performance, especially
at the IO level. At another place we do Xen and KVM with up to
40 VMs/server, and when there is any kind of IO (disk especially) going
on, things slow down to a crawl.
I've yet to find