On Fri, Dec 2, 2011 at 5:35 PM, Charles DeVoe wrote:
>
> We are building a 4 node active/active cluster, which I believe is the same
> as High Performance.
Not quite. That's still an HA cluster with some scale-out capability.
HPC is a slightly different ballgame.
> The Cluster has a SAN format
We are building a 4 node active/active cluster, which I believe is the same as
High Performance. The Cluster has a SAN formatted with GFS2. The discussion
is whether to install the applications on the shared drive and point each
machine to that install point, or to install the applications locally.
On 02.12.2011 09:06, Vadim Bulst wrote:
> Yes, locking_type=3. But clvmd needs a place for its socket
> (/var/run/lvm); this directory is not created automatically.
Correct. It works in Lucid but not in Oneiric, and probably some
other versions too.
> Shall I file a bug report with Ubuntu?
Hello Andreas, first of all, thanks for the quick reply. I've noticed
that a stop and start is executed on a resource whenever I run cleanup
on it, which also affects all other resources when it is part of an
"order" constraint. Please find the configuration attached. Regards, Georgios
Hello Kashif,
On 12/02/2011 06:04 AM, Kashif Jawed Siddiqui wrote:
> Hi All,
>
>
>
> I am using pacemaker 1.0.11 + corosync 1.4.2 for a 2 node cluster.
>
>
>
> The old cib.xml for Heartbeat-based clusters had an option
> "ordered=true | false" for the "group" tag which supported startin
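For context, that behaviour maps to a meta attribute on the group in current crm shell syntax. The sketch below is an illustration from memory, not the poster's configuration; the resource names grp_example, res_a and res_b are invented:

```
group grp_example res_a res_b \
        meta ordered="false" collocated="true"
```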
Hello Georgios,
On 12/02/2011 10:54 AM, Georgios Kasapoglou wrote:
> Hi all,
> I have a 2-node cluster using pacemaker 1.2.
> I've set a DRBD resource, according to
> http://www.clusterlabs.org/wiki/PostgresHowto#4._Configuring_DRBD
>
> Everything works fine, except when I'm trying to clean up the
Hi all,
I have a 2-node cluster using pacemaker 1.2.
I've set a DRBD resource, according to
http://www.clusterlabs.org/wiki/PostgresHowto#4._Configuring_DRBD
Everything works fine, except when I'm trying to clean up the "master".
E.g., when I run cleanup on drbd_r0:0, which runs on node1, while it
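As a point of reference, cleanup on a single clone instance is typically issued like this in the crm shell (the resource and node names are taken from the message above; exact syntax may vary between pacemaker 1.0 and 1.1):

```
# clear the state and failcount of one clone instance on a specific node
crm resource cleanup drbd_r0:0 node1
```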
On 02.12.2011 11:06, Vadim Bulst wrote:
[snip]
> Now I run into new problems:
>
> I created a cloneset for managing the volume groups:
>
> node bbzclnode04
> node bbzclnode06
> node bbzclnode07
> primitive clvm ocf:lvm2:clvmd \
> params daemon_timeout="30" \
> meta target-role="Started"
> pr
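The configuration above is cut off at "pr"; a clone definition for such a clvmd primitive would typically continue along these lines. This is only a sketch: every name beyond those shown above (vg1, cl-clvm) is an assumption, not Vadim's actual configuration:

```
primitive vg1 ocf:heartbeat:LVM \
        params volgrpname="vg1"
clone cl-clvm clvm \
        meta interleave="true"
```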
Hi Ante,
Yes, locking_type=3. But clvmd needs a place for its socket (/var/run/lvm); this directory is not
created automatically. I made a small addition to my RA, clvm:
file: /usr/lib/ocf/resource.d/lvm2/clvmd
added: variable RUNDIR="/var/run/lvm"
in function bringup_daemon() I added:
if [
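The archive truncates the snippet at "if ["; a plausible completion of that guard, creating the socket directory when it is missing, is sketched below. The function name ensure_rundir and the mkdir logic are assumptions, not Vadim's exact code:

```shell
#!/bin/sh
# Sketch of the addition described above: make sure clvmd's socket
# directory exists before the daemon is brought up.
RUNDIR="${RUNDIR:-/var/run/lvm}"

ensure_rundir() {
    # Create the directory if it is missing; report failure otherwise
    if [ ! -d "$RUNDIR" ]; then
        mkdir -p "$RUNDIR" || return 1
    fi
    return 0
}
```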
On 02.12.2011 00:32, Andreas Kurz wrote:
> Hello Lutz,
>
> On 12/01/2011 01:26 PM, Lutz Reinhardt wrote:
>> hi
>>
>> use a simple config:
>>
>> node node1
>> node node2 \
>> attributes standby="off"
>> primitive res_drbd_cluster_ocfs ocf:linbit:drbd \
>> params drbd_resource="cluster-o
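The primitive above is cut off mid-parameter; a typical continuation for a DRBD-plus-OCFS2 dual-primary setup looks roughly like the following. The drbd_resource value "cluster-ocfs" is a guess for illustration (the original is truncated at "cluster-o"), and the ms resource name is invented:

```
primitive res_drbd_cluster_ocfs ocf:linbit:drbd \
        params drbd_resource="cluster-ocfs"
ms ms_drbd_cluster_ocfs res_drbd_cluster_ocfs \
        meta master-max="2" clone-max="2" notify="true" interleave="true"
```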