Hi,
We've got the SuSE Linux Enterprise 11 HA add-on, which comes with
OpenAIS, Pacemaker and DRBD, as well as YaST modules for configuring these.
We want to run two DRBD pairs:
- One with ext3 in a standard master/slave configuration
- One with ocfs2 in an active/active configuration
I ha
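The two-pair layout described above could be sketched in drbd.conf roughly like this (DRBD 8.3 syntax as shipped with SLES 11 HA; resource names, devices, node names and addresses are all hypothetical):

```
resource r0 {                    # ext3, single-primary (master/slave)
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on node1 { address 192.168.245.1:7788; }
  on node2 { address 192.168.245.2:7788; }
}
resource r1 {                    # ocfs2, dual-primary (active/active)
  device    /dev/drbd1;
  disk      /dev/sdb2;
  meta-disk internal;
  net {
    allow-two-primaries;         # required so OCFS2 can mount on both nodes
  }
  on node1 { address 192.168.245.1:7789; }
  on node2 { address 192.168.245.2:7789; }
}
```

Under Pacemaker, the dual-primary resource would normally be managed as a master/slave (ms) resource with master-max=2 rather than promoted by DRBD itself at startup.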
Matthew Palmer wrote:
On Thu, Mar 11, 2010 at 03:34:50PM +0800, Martin Aspeli wrote:
I was wondering, though, if fencing at the DRBD level would get around
the possible problem with a full power outage taking the fencing device
down.
In my poor understanding of things, it'd work like
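For reference, fencing at the DRBD level is typically done with a fence-peer handler; crm-fence-peer.sh records a location constraint in the CIB rather than relying on an external fencing device, which is the reason it can still act when a full power outage takes a power-based fencing device down. A sketch (DRBD 8.3 handler paths):

```
resource r0 {
  disk {
    fencing resource-only;       # resource-and-stonith for dual-primary setups
  }
  handlers {
    fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```

When the replication link drops, the handler forbids promotion on the disconnected node; the unfence handler lifts the constraint once resync completes.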
Serge Dubrouski wrote:
On Wed, Mar 10, 2010 at 6:59 PM, Martin Aspeli wrote:
Serge Dubrouski wrote:
On Wed, Mar 10, 2010 at 5:30 PM, Martin Aspeli
wrote:
Martin Aspeli wrote:
Hi folks,
Let's say we have a two-node cluster with DRBD and OCFS2, with a database
server that's supp
Serge Dubrouski wrote:
On Wed, Mar 10, 2010 at 5:30 PM, Martin Aspeli wrote:
Martin Aspeli wrote:
Hi folks,
Let's say we have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data sto
Martin Aspeli wrote:
Hi folks,
Let's say we have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data store.
If we detect a failure on the active node and fail the database over to
the
Dejan Muhamedagic wrote:
Hi,
On Wed, Mar 10, 2010 at 11:10:31PM +0800, Martin Aspeli wrote:
Dejan Muhamedagic wrote:
Hi,
On Wed, Mar 10, 2010 at 09:02:48PM +0800, Martin Aspeli wrote:
Lars Ellenberg wrote:
Or, if this is as infrequent as you say it is, have those blobs in a
regular file
Dejan Muhamedagic wrote:
Hi,
On Wed, Mar 10, 2010 at 09:02:48PM +0800, Martin Aspeli wrote:
Lars Ellenberg wrote:
Or, if this is as infrequent as you say it is, have those blobs in a
regular file system on a regular partition or LV, and replace every
"echo> blob" with
darren.mans...@opengi.co.uk wrote:
Please forgive my ignorance, I seem to have missed the specifics about
using OCFS2 on DRBD dual-primary but what are the main issues? How can
you use PgSQL on dual-primary without OCFS2?
For the record, we are *not* using dual primary in our setup. We'll have
Lars Ellenberg wrote:
Or, if this is as infrequent as you say it is, have those blobs in a
regular file system on a regular partition or LV, and replace every
"echo> blob" with "echo> blob&& csync2 -x blob" (you get the idea).
Unfortunately, that'd mean modifying software I don't really hav
Matthew Palmer wrote:
On Wed, Mar 10, 2010 at 02:32:05PM +0800, Martin Aspeli wrote:
Florian Haas wrote:
On 03/09/2010 06:07 AM, Martin Aspeli wrote:
Hi folks,
Let's say we have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node
Florian Haas wrote:
On 03/09/2010 06:07 AM, Martin Aspeli wrote:
Hi folks,
Let's say we have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data store.
*cringe* Which databa
Hi Dejan,
Thanks for all the help!
- The postgres data would need fencing when failing over, from what
I understand. I read the notes that using an on-board device like
Dell's DRAC to implement STONITH is not a good idea. We don't have
the option at this stage to buy a UPS-based solution (we
Dejan Muhamedagic wrote:
Hi,
On Mon, Mar 08, 2010 at 12:00:44PM +0800, Martin Aspeli wrote:
Hi,
We have a two-node cluster of Dell servers. They have an iDRAC 6
Enterprise each. The cluster is also backed up by a UPS with a
diesel generator.
I realise on-board devices like the DRAC are not
Hi folks,
Let's say we have a two-node cluster with DRBD and OCFS2, with a database
server that's supposed to be active on one node at a time, using the
OCFS2 partition for its data store.
If we detect a failure on the active node and fail the database over to
the other node, we need to fence o
Matthew Palmer wrote:
On Mon, Mar 08, 2010 at 03:21:32PM +0800, Martin Aspeli wrote:
Matthew Palmer wrote:
What is the normal way to handle this? Do people have one floating IP
address per service?
This is how I prefer to do it. RFC1918 IP addresses are cheap, and IPv6
addresses quintuply so
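One floating IP per service could be sketched in the crm shell like this (resource names db_server and web_server are hypothetical placeholders for the actual service resources):

```
crm configure primitive ip_db ocf:heartbeat:IPaddr2 \
    params ip=192.168.245.11 cidr_netmask=24 \
    op monitor interval=10s
crm configure primitive ip_web ocf:heartbeat:IPaddr2 \
    params ip=192.168.245.12 cidr_netmask=24 \
    op monitor interval=10s
# Pin each address to wherever its service is running
crm configure colocation db-with-ip inf: ip_db db_server
crm configure colocation web-with-ip inf: ip_web web_server
```

Each service then fails over independently, taking only its own address with it.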
Matthew Palmer wrote:
On Mon, Mar 08, 2010 at 01:34:01PM +0800, Martin Aspeli wrote:
This question was sort of implied in my thread last week, but I'm going
to re-ask it properly, to reduce my own confusion if nothing else.
We have two servers, master and slave. In the cluster, we
Hi,
This question was sort of implied in my thread last week, but I'm going
to re-ask it properly, to reduce my own confusion if nothing else.
We have two servers, master and slave. In the cluster, we have:
- A shared IP address (192.168.245.10)
- HAProxy (active on master, may fail over to
Hi,
We have a two-node cluster of Dell servers. They have an iDRAC 6
Enterprise each. The cluster is also backed up by a UPS with a diesel
generator.
I realise on-board devices like the DRAC are not ideal for fencing, but
it's probably the best we're going to be able to do. However, I've rea
Hi Serge,
>> I don't know if the pgsql RA can support "cold standby"
>> instances.
>>
>
> In my opinion a "cold standby" is a server that has access to the data
> files, where PostgreSQL is down but can be brought up at any time. The
> pgsql RA does exactly that if other resources provide access to the data.
Hi Dejan,
Dejan Muhamedagic wrote:
Hi,
On Fri, Mar 05, 2010 at 10:00:06AM +0800, Martin Aspeli wrote:
Hi,
I'm pretty new to all this stuff, but I've read pretty much all the
documentation on the clusterlabs website. I'm seeking a bit of
clarification/confirmation on how to
Hi,
I'm pretty new to all this stuff, but I've read pretty much all the
documentation on the clusterlabs website. I'm seeking a bit of
clarification/confirmation on how to achieve certain things, in
particular around fencing/STONITH, before we dive into trying to set
this up.
We're using Su