Re: [Pacemaker] HA FTP Server in aws vpc

2012-12-17 Thread Art Zemon

Have you thought about using a load balancer instead of a VIP? The ELB can span 
subnets.
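
For illustration, creating such a load balancer might look something like the sketch below. It uses the modern aws CLI, which post-dates this thread, so take it as a hedged illustration only; the load balancer name, subnet IDs, and instance IDs are hypothetical, and FTP's separate data channel would still need passive-port handling:

    # TCP listener for the FTP control channel, spanning both subnets
    aws elb create-load-balancer \
        --load-balancer-name ftp-lb \
        --listeners "Protocol=TCP,LoadBalancerPort=21,InstanceProtocol=TCP,InstancePort=21" \
        --subnets subnet-aaaa1111 subnet-bbbb2222 \
        --scheme internal

    # Register both FTP nodes; health checks decide which one receives traffic
    aws elb register-instances-with-load-balancer \
        --load-balancer-name ftp-lb \
        --instances i-11111111 i-22222222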
 
-- Art Z.
 
-----Original Message-----
From: "Yossi Nachum" 
Sent: Monday, December 17, 2012 2:22am
To: pacemaker@oss.clusterlabs.org
Subject: [Pacemaker] HA FTP Server in aws vpc



Hi,
I want to run an FTP server in active/passive mode in an Amazon AWS environment.
I use a VPC with two subnets: ftp-1 is on 192.168.10.x and ftp-2 is on 192.168.20.x.
The two subnets are in different availability zones.
In this configuration I don't see how I can use a VIP, so I thought of creating 
an init script that changes the DNS record when one server becomes the active 
server.
What do you think? Does anyone have a more elegant solution for this?
Thanks
Yossi
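
A minimal sketch of the DNS-update idea, assuming the zone is hosted in Route 53 and the aws CLI is available (the zone ID, record name, and TTL below are hypothetical):

    #!/bin/sh
    # Hypothetical failover hook: point ftp.example.com at this node.
    ZONE_ID="Z0000000000000"   # hypothetical hosted-zone ID
    RECORD="ftp.example.com."
    MY_IP=$(hostname -I | awk '{print $1}')   # first local address

    aws route53 change-resource-record-sets \
        --hosted-zone-id "$ZONE_ID" \
        --change-batch "{
            \"Changes\": [{
                \"Action\": \"UPSERT\",
                \"ResourceRecordSet\": {
                    \"Name\": \"$RECORD\", \"Type\": \"A\", \"TTL\": 60,
                    \"ResourceRecords\": [{\"Value\": \"$MY_IP\"}]
                }
            }]
        }"

A short TTL keeps the failover window small, though clients that cache DNS aggressively will still lag behind the record change.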

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] Trouble Starting Filesystem

2012-12-10 Thread Art Zemon
"600s" timeout="300s" \
op start start-delay="10s" interval="0"
ms ms_drbd_share_plesk p_drbd_share_plesk \
meta master-max="2" notify="true" interleave="true" clone-max="2" is-managed="true" target-role="Started"
clone cl_fencing p_stonith \
meta target-role="Started"
clone cl_fs_share_plesk p_fs_share_plesk \
meta clone-max="2" interleave="true" notify="true" globally-unique="false" target-role="Started"
clone cl_o2cb p_o2cb \
meta clone-max="2" interleave="true" globally-unique="false" target-role="Started"
location lo_drbd_plesk3 ms_drbd_share_plesk -inf: aztestc3
location lo_drbd_plesk4 ms_drbd_share_plesk -inf: aztestc4
location lo_fs_plesk3 cl_fs_share_plesk -inf: aztestc3
location lo_fs_plesk4 cl_fs_share_plesk -inf: aztestc4
location lo_o2cb3 cl_o2cb -inf: aztestc3
location lo_o2cb4 cl_o2cb -inf: aztestc4
order o_20plesk inf: ms_drbd_share_plesk:promote cl_o2cb:start
order o_40fs_plesk inf: cl_o2cb cl_fs_share_plesk
property $id="cib-bootstrap-options" \
stonith-enabled="true" \
stonith-timeout="180s" \
no-quorum-policy="freeze" \
dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
cluster-infrastructure="cman" \
last-lrm-refresh="1355179514"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"



And here is my previous 2-node configuration, which worked "mostly." Sometimes 
I had to manually "crm resource cleanup cl_fs_share" to get the filesystem to 
mount, but otherwise everything was fine.

node aztestc1 \
attributes standby="off"
node aztestc2 \
attributes standby="off"
primitive p_drbd_share ocf:linbit:drbd \
params drbd_resource="share" \
op monitor interval="15s" role="Master" timeout="20s" \
op monitor interval="20s" role="Slave" timeout="20s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
primitive p_fs_share ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/share" directory="/share" fstype="ocfs2" options="rw,noatime" \
op start interval="0" timeout="60" \
op stop interval="0" timeout="60" \
op monitor interval="20" timeout="40"
primitive p_o2cb ocf:pacemaker:o2cb \
params stack="cman" \
op start interval="0" timeout="90" \
op stop interval="0" timeout="100" \
op monitor interval="10" timeout="20"
primitive p_stonith stonith:fence_ec2 \
params pcmk_host_check="static-list" pcmk_host_list="aztestc1 aztestc2" \
op monitor interval="600s" timeout="300s" \
op start start-delay="10s" interval="0"
ms ms_drbd_share p_drbd_share \
meta master-max="2" notify="true" interleave="true" clone-max="2" is-managed="true" target-role="Started"
clone cl_fencing p_stonith \
meta target-role="Started"
clone cl_fs_share p_fs_share \
meta interleave="true" notify="true" globally-unique="false" target-role="Started"
clone cl_o2cb p_o2cb \
meta interleave="true" globally-unique="false"
order o_ocfs2 inf: ms_drbd_share:promote cl_o2cb
order o_share inf: cl_o2cb cl_fs_share
property $id="cib-bootstrap-options" \
stonith-enabled="true" \
stonith-timeout="180s" \
dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
cluster-infrastructure="cman" \
last-lrm-refresh="1354808774"


Thoughts? Ideas? Suggestions?

Thank you,
-- Art Z.

--
Art Zemon, President
 [http://www.hens-teeth.net/] Hen's Teeth Network for reliable web hosting and 
programming
 (866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net
 


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] Locating a clone on 2 nodes of a 4 node cluster

2012-12-08 Thread Art Zemon
Hello,

I need some help with the syntax for making a clone run on two nodes of a 
four-node cluster. I have two OCFS2 filesystems:
   * cl_fs_share_db should run on nodes aztestc3 and aztestc4
   * cl_fs_share_plesk should run on nodes aztestc1 and aztestc2
The files /etc/drbd.d/sharedb.res and /etc/drbd.d/shareplesk.res each specify 
the correct pair of nodes.

How do I update the following Pacemaker configuration? Each clone should run in 
dual-primary mode on both of its nodes. (See the sketch after the configuration 
below.)

Side question: I have configured just one o2cb. Do I need two of them?

Thank you,
-- Art Z.


node aztestc1 \
attributes standby="off"
node aztestc2 \
attributes standby="off"
node aztestc3 \
attributes standby="off"
node aztestc4 \
attributes standby="off"
primitive p_drbd_share_db ocf:linbit:drbd \
params drbd_resource="sharedb" \
op monitor interval="15s" role="Master" timeout="20s" \
op monitor interval="20s" role="Slave" timeout="20s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
primitive p_drbd_share_plesk ocf:linbit:drbd \
params drbd_resource="shareplesk" \
op monitor interval="15s" role="Master" timeout="20s" \
op monitor interval="20s" role="Slave" timeout="20s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
primitive p_fs_share_db ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/share" directory="/share" fstype="ocfs2" options="rw,noatime" \
op start interval="0" timeout="60" \
op stop interval="0" timeout="60" \
op monitor interval="20" timeout="40"
primitive p_fs_share_plesk ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/share" directory="/share" fstype="ocfs2" options="rw,noatime" \
op start interval="0" timeout="60" \
op stop interval="0" timeout="60" \
op monitor interval="20" timeout="40"
primitive p_mysqld lsb:mysql \
meta target-role="Started" \
op monitor interval="10" timeout="20"
primitive p_o2cb ocf:pacemaker:o2cb \
params stack="cman" \
op start interval="0" timeout="90" \
op stop interval="0" timeout="100" \
op monitor interval="10" timeout="20"
primitive p_stonith stonith:fence_ec2 \
params pcmk_host_check="static-list" pcmk_host_list="aztestc1 aztestc2 aztestc3 aztestc4" \
op monitor interval="600s" timeout="300s" \
op start start-delay="10s" interval="0"
ms ms_drbd_share_db p_drbd_share_db \
meta master-max="2" notify="true" interleave="true" clone-max="2" is-managed="true" target-role="Stopped"
ms ms_drbd_share_plesk p_drbd_share_plesk \
meta master-max="2" notify="true" interleave="true" clone-max="2" is-managed="true" target-role="Stopped"
clone cl_fencing p_stonith \
meta target-role="Started"
clone cl_fs_share_db p_fs_share_db \
meta interleave="true" notify="true" globally-unique="false" target-role="Started"
clone cl_fs_share_plesk p_fs_share_plesk \
meta interleave="true" notify="true" globally-unique="false" target-role="Started"
clone cl_o2cb p_o2cb \
meta interleave="true" globally-unique="false" target-role="Stopped"
order o_mysqld inf: cl_fs_share_db p_mysqld
order o_ocfs2db inf: ms_drbd_share_db:promote cl_o2cb:start
order o_ocfs2plesk inf: ms_drbd_share_plesk:promote cl_o2cb:start
order o_sharedb inf: cl_o2cb cl_fs_share_db
order o_shareplesk inf: cl_o2cb cl_fs_share_plesk
property $id="cib-bootstrap-options" \
stonith-enabled="true" \
stonith-timeout="180s" \
no-quorum-policy="freeze" \
dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
cluster-infrastructure="cman" \
last-lrm-refresh="1354969282"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
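
One way to express the placement described above is to pin each clone (and its DRBD master) away from the other node pair with -inf location constraints; a sketch in crm syntax (the constraint names are made up; the resource and node names come from the configuration above):

    location lo_drbd_db1 ms_drbd_share_db -inf: aztestc1
    location lo_drbd_db2 ms_drbd_share_db -inf: aztestc2
    location lo_fs_db1 cl_fs_share_db -inf: aztestc1
    location lo_fs_db2 cl_fs_share_db -inf: aztestc2
    location lo_drbd_plesk3 ms_drbd_share_plesk -inf: aztestc3
    location lo_drbd_plesk4 ms_drbd_share_plesk -inf: aztestc4
    location lo_fs_plesk3 cl_fs_share_plesk -inf: aztestc3
    location lo_fs_plesk4 cl_fs_share_plesk -inf: aztestc4

With two nodes barred from each resource, each clone then runs dual-primary only on its two permitted nodes.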

 

--
Art Zemon, President
 [http://www.hens-teeth.net/] Hen's Teeth Network for reliable web hosting and 
programming
 (866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] One Cluster or Two

2012-12-07 Thread Art Zemon
On 12/06/2012 08:22 PM, Andrew Beekhof wrote:
> I like clusters with >2 nodes because quorum makes sense.

Andrew,

That sounds like a solid reason to prefer one, larger cluster. Thanks.

-- Art Z.

-- 

Art Zemon, President
Hen's Teeth Network <http://www.hens-teeth.net/> for reliable web
hosting and programming
(866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] One Cluster or Two

2012-12-06 Thread Art Zemon
Folks,

I am building a high availability web hosting platform which will
include a pair of web servers with an OCFS2 shared filesystem and a
MySQL database server with a backup (using a DRBD-based filesystem
instead of MySQL replication). Does this sound like one cluster or two
(one for the web servers and a second for the database servers)?

If two clusters, each configuration is very, very simple.

If one cluster, the config is more complex because it has everything in
it for all resources, plus location constraints. But it can have a real
quorum, since the full cluster will be at least four nodes.

Thoughts?

Thanks,
-- Art Z.

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] Trouble Starting Filesystem

2012-12-04 Thread Art Zemon
Folks,

I am having trouble starting my DRBD+OCFS2 filesystem. It seems to be
a timing thing, with the filesystem trying to come up before DRBD has
gotten the second node of the cluster into Primary mode. I find this in
the logs:

Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) FATAL: Module scsi_hostadapter not found.
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) blockdev:
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) cannot open /dev/drbd/by-res/share
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) :
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) Wrong medium type
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) mount.ocfs2
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) :
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) I/O error on channel
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr)
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr) while opening device /dev/drbd1
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: RA output: (p_fs_share:1:start:stderr)
Dec  4 15:50:05 aztestc4 Filesystem[1631]: ERROR: Couldn't mount filesystem /dev/drbd/by-res/share on /share
Dec  4 15:50:05 aztestc4 lrmd: [1177]: WARN: Managed p_fs_share:1:start process 1631 exited with return code 1.
Dec  4 15:50:05 aztestc4 lrmd: [1177]: info: operation start[15] on p_fs_share:1 for client 1180: pid 1631 exited with return code 1
Dec  4 15:50:05 aztestc4 crmd: [1180]: debug: create_operation_update: do_update_resource: Updating resouce p_fs_share:1 after complete start op (interval=0)
Dec  4 15:50:05 aztestc4 crmd: [1180]: info: process_lrm_event: LRM operation p_fs_share:1_start_0 (call=15, rc=1, cib-update=18, confirmed=true) unknown error

If I simply wait a little while (maybe a minute, maybe less) and then
"crm resource cleanup cl_fs_share", the filesystem starts properly on
both nodes. Here are the pertinent parts of my configuration:

primitive p_drbd_share ocf:linbit:drbd \
params drbd_resource="share" \
op monitor interval="15s" role="Master" timeout="20s" \
op monitor interval="20s" role="Slave" timeout="20s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
primitive p_fs_share ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/share" directory="/share" fstype="ocfs2" options="rw,noatime" \
op start interval="0" timeout="60" \
op stop interval="0" timeout="60" \
op monitor interval="20" timeout="40"
primitive p_o2cb ocf:pacemaker:o2cb \
params stack="cman" \
op start interval="0" timeout="90" \
op stop interval="0" timeout="100" \
op monitor interval="10" timeout="20"
ms ms_drbd_share p_drbd_share \
meta master-max="2" notify="true" interleave="true" clone-max="2" is-managed="true" target-role="Started"
clone cl_fs_share p_fs_share \
meta interleave="true" notify="true" globally-unique="false" target-role="Started"
clone cl_o2cb p_o2cb \
meta interleave="true" globally-unique="false"
order o_ocfs2 inf: ms_drbd_share:promote cl_o2cb
order o_share inf: cl_o2cb cl_fs_share

Should I increase the timeout value in

primitive p_fs_share ocf:heartbeat:Filesystem \
... \
op start interval="0" timeout="60"

to take care of this? I am dubious, because I think cl_o2cb is starting
(which allows cl_fs_share to start) before ms_drbd_share is done promoting.
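
If that is the case, one hedged guess: in a crm order constraint the then-action defaults to the first-action, so "ms_drbd_share:promote cl_o2cb" orders against a promote of cl_o2cb, which an ordinary clone never performs. Spelling the actions out explicitly would look like this sketch (names taken from the configuration above):

    order o_ocfs2 inf: ms_drbd_share:promote cl_o2cb:start
    order o_share inf: cl_o2cb:start cl_fs_share:start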

Thanks,
-- Art Z.

-- 

Art Zemon, President
Hen's Teeth Network <http://www.hens-teeth.net/> for reliable web
hosting and programming
(866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] Where is the Pacemaker Documentation

2012-11-22 Thread Art Zemon
I finally found it:
http://clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/

-- Art Z.


-----Original Message-----
From: "Art Zemon" 
Sent: Thursday, November 22, 2012 7:36pm
To: "Pacemaker List" 
Subject: [Pacemaker] Where is the Pacemaker Documentation

Folks,
 
I see lots of Pacemaker "how to" guides and lots of examples, but is there a 
wiki or something that lists all of the options for various things? For 
instance, I have
 
ms ms_drbd_share p_drbd_share \
meta master-max="2" notify="true" interleave="true" clone-max="2"

Where is there some documentation on master-max, notify, interleave, clone-max, 
and any other meta values for ms?

Thanks,
-- Art Z.


--
Art Zemon, President
 [http://www.hens-teeth.net/] Hen's Teeth Network for reliable web hosting and 
programming
 (866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org




--
Art Zemon, President
 [http://www.hens-teeth.net/] Hen's Teeth Network for reliable web hosting and 
programming
 (866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net



___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] Where is the Pacemaker Documentation

2012-11-22 Thread Art Zemon
Folks,
 
I see lots of Pacemaker "how to" guides and lots of examples, but is there a 
wiki or something that lists all of the options for various things? For 
instance, I have
 
ms ms_drbd_share p_drbd_share \
meta master-max="2" notify="true" interleave="true" clone-max="2"

Where is there some documentation on master-max, notify, interleave, clone-max, 
and any other meta values for ms?

Thanks,
-- Art Z.


--
Art Zemon, President
 [http://www.hens-teeth.net/] Hen's Teeth Network for reliable web hosting and 
programming
 (866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] Changed IP Address; Filesystem Won't Start

2012-11-22 Thread Art Zemon

After much gnashing of teeth and bashing of head, I finally hit upon
crm resource cleanup cl_fs_share
That did it and everything immediately started running.
 
Here is my configuration:
 

primitive p_drbd_share ocf:linbit:drbd \
 params drbd_resource="share" \
 op monitor interval="15s" role="Master" timeout="20s" \
 op monitor interval="20s" role="Slave" timeout="20s" \
 op start interval="0" timeout="240s" \
 op stop interval="0" timeout="100s"
primitive p_fs_share ocf:heartbeat:Filesystem \
 params device="/dev/drbd/by-res/share" directory="/share" fstype="ocfs2" \
 op start interval="0" timeout="60" \
 op stop interval="0" timeout="60" \
 op monitor interval="20" timeout="40"
primitive p_o2cb ocf:pacemaker:o2cb \
 params stack="cman" \
 op start interval="0" timeout="90" \
 op stop interval="0" timeout="100" \
 op monitor interval="10" timeout="20"
ms ms_drbd_share p_drbd_share \
 meta master-max="2" notify="true" interleave="true" clone-max="2"
clone cl_fs_share p_fs_share \
 meta interleave="true" notify="true" globally-unique="false" target-role="Started"
clone cl_o2cb p_o2cb \
 meta interleave="true" globally-unique="false"
colocation colo_share inf: cl_fs_share ms_drbd_share:Master cl_o2cb
order o_o2cb inf: cl_o2cb cl_fs_share
order o_share inf: ms_drbd_share:promote cl_fs_share


What does "cleanup" do that is so necessary?
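
My best guess (corrections welcome): cleanup deletes the resource's operation history and fail counts from the CIB status section, so the cluster re-probes the resource and recomputes placement from scratch. The crm shell command appears to wrap the lower-level tool, roughly:

    # roughly equivalent low-level invocation
    crm_resource --cleanup --resource cl_fs_share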

Thanks,
-- Art Z.
 

--
Art Zemon, President
 [http://www.hens-teeth.net/] Hen's Teeth Network for reliable web hosting and 
programming
 (866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net
 ___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] Changed IP Address; Filesystem Won't Start

2012-11-20 Thread Art Zemon
Folks,

I had a working cluster... for a few minutes. Then I restarted one of
the nodes in EC2, so its IP address changed. Now the nodes come up, talk
to each other, DRBD syncs, but the filesystem won't start. I'm baffled.

Following is some config info. All I did was update the IP address of
aztestc4 in /etc/hosts and in /etc/drbd.d/share.res and reboot to
restart everything. /var/log/syslog is so full of stuff that I can't see
the trees for the forest.
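
For reference, the edit in share.res is to the per-node address stanzas; a sketch of the file's shape (the devices, disks, and addresses here are hypothetical, not the real values):

    resource share {
        on aztestc3 {
            device    /dev/drbd1;        # hypothetical DRBD device
            disk      /dev/xvdf;         # hypothetical backing disk
            address   10.0.1.11:7789;    # hypothetical node address
            meta-disk internal;
        }
        on aztestc4 {
            device    /dev/drbd1;
            disk      /dev/xvdf;
            address   10.0.2.11:7789;    # the address updated after the reboot
            meta-disk internal;
        }
    }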

Any help will be greatly appreciated.
-- Art Z.


root@aztestc3:~# drbdadm status
(output not preserved in the archive)
root@aztestc3:~# crm status

Last updated: Tue Nov 20 13:56:43 2012
Last change: Tue Nov 20 13:37:44 2012 via cibadmin on aztestc3
Stack: cman
Current DC: aztestc3 - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
2 Nodes configured, unknown expected votes
6 Resources configured.


Online: [ aztestc3 aztestc4 ]

 Master/Slave Set: ms_drbd_share [p_drbd_share]
 Masters: [ aztestc3 aztestc4 ]
 Clone Set: cl_o2cb [p_o2cb]
 Started: [ aztestc3 aztestc4 ]

Failed actions:
    p_fs_share:0_start_0 (node=aztestc3, call=10, rc=1, status=complete): unknown error
    p_drbd_share:0_promote_0 (node=aztestc3, call=34, rc=1, status=complete): unknown error
    p_fs_share:0_start_0 (node=aztestc4, call=10, rc=1, status=complete): unknown error




root@aztestc3:~# crm configure show
node aztestc3 \
attributes standby="off"
node aztestc4 \
attributes standby="off"
primitive p_drbd_share ocf:linbit:drbd \
params drbd_resource="share" \
op monitor interval="15s" role="Master" timeout="20s" \
op monitor interval="20s" role="Slave" timeout="20s" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
primitive p_fs_share ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/share" directory="/share" fstype="ocfs2" \
op start interval="0" timeout="60" \
op stop interval="0" timeout="60" \
op monitor interval="20" timeout="40"
primitive p_o2cb ocf:pacemaker:o2cb \
params stack="cman" \
op start interval="0" timeout="90" \
op stop interval="0" timeout="100" \
op monitor interval="10" timeout="20"
ms ms_drbd_share p_drbd_share \
meta master-max="2" notify="true" interleave="true" clone-max="2" target-role="Started"
clone cl_fs_share p_fs_share \
meta interleave="true" notify="true" globally-unique="false" target-role="Started"
clone cl_o2cb p_o2cb \
meta interleave="true" globally-unique="false"
colocation colo_share inf: cl_fs_share ms_drbd_share:Master cl_o2cb
order o_o2cb inf: cl_o2cb cl_fs_share
order o_share inf: ms_drbd_share:promote cl_fs_share
property $id="cib-bootstrap-options" \
dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
cluster-infrastructure="cman" \
stonith-enabled="false" \
no-quorum-policy="ignore"


-- 

Art Zemon, President
Hen's Teeth Network <http://www.hens-teeth.net/> for reliable web
hosting and programming
(866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net


___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] Getting Started on Ubuntu 12.04

2012-11-16 Thread Art Zemon
Thank you, Andrew. That is great info, particularly the Ubuntu
12.04-specifics about dependencies and libraries.

I also found a very very dumbed down getting-started guide on the
Minecraft wiki, of all places.
http://www.minecraftwiki.net/wiki/Tutorials/High-Availability_Cluster My
biggest problem is not getting the code and compiling it. My problem is
that I have never seen or managed a cluster with this software on it. I
think this will get me started on the path so that I can have a working
corosync+pacemaker+drbd to play with.

-- Art Z.

-- 

Art Zemon, President
Hen's Teeth Network <http://www.hens-teeth.net/> for reliable web
hosting and programming
(866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[Pacemaker] Getting Started on Ubuntu 12.04

2012-11-15 Thread Art Zemon

Hello,
 
I am trying to build my first cluster on Ubuntu 12.04 and struggling because, 
though I have a lot of Linux and UNIX sysadmin experience, I am new to 
clustering.
 
I would appreciate any pointers to docs/examples/tutorials/whatever on getting 
started with Corosync 1.4.2 + Pacemaker 1.1.6 (the versions in the Ubuntu 12.04 
repository).
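 
Installing the packages themselves is the easy part; for the record, a sketch assuming the stock 12.04 repositories:

    sudo apt-get update
    sudo apt-get install pacemaker corosync
    # Debian/Ubuntu ships corosync disabled by default; enable it:
    sudo sed -i 's/^START=no/START=yes/' /etc/default/corosync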
 
Thanks,
-- Art Z.
 

--
Art Zemon, President
 [http://www.hens-teeth.net/] Hen's Teeth Network for reliable web hosting and 
programming
 (866)HENS-NET / (636)447-3030 ext. 200 / www.hens-teeth.net

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org