Re: [Linux-ha-dev] [Openais] An OCF agent for LXC (Linux Containers) - Linux-HA-Dev Digest, Vol 89, Issue 32

2011-05-03 Thread Florian Haas
Hello Darren,

Please get the current version from
https://github.com/fghaas/resource-agents/blob/lxc/heartbeat/lxc, and
also review the commit history at
https://github.com/fghaas/resource-agents/commits/lxc/heartbeat/lxc.

When you send more updates, please do make sure they track the latest
version in my repo. I am doing my best to split this up into patches and
check them in individually, but the re-introduction of errors that have
already been fixed is not something that gives me thrills. Thanks.
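
A minimal sketch of one way to track that branch with git (the remote
and local branch names here are illustrative, not prescribed in the
thread):

  # Hypothetical workflow: follow the lxc branch and rebase local work on it.
  git remote add fghaas https://github.com/fghaas/resource-agents.git
  git fetch fghaas
  git rebase fghaas/lxc my-lxc-work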

Cheers,
Florian







Re: [Linux-ha-dev] [Openais] An OCF agent for LXC (Linux Containers) - Linux-HA-Dev Digest, Vol 89, Issue 33

2011-05-03 Thread Darren Thompson
Florian/Team

Another update to the lxc (Linux container) OCF agent (attached).

Changes (summary):

Added very, very experimental support for alternate init systems
inside containers (it should now support sysvinit, upstart, and systemd).

Adding this support did not break the default sysvinit, but since I do
not know how to create an LXC container that uses 'upstart' or 'systemd',
my testing of those two systems is very rudimentary.
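
A hypothetical sketch of how an agent might guess the init flavour from
a container's root filesystem (illustrative only, not code from the
attached agent; $rootfs stands for the container's rootfs path):

  # Guess which init system the container will run.
  if readlink "$rootfs/sbin/init" 2>/dev/null | grep -q systemd; then
      init_type="systemd"
  elif grep -q upstart "$rootfs/sbin/init" 2>/dev/null; then
      init_type="upstart"    # upstart's init binary embeds the string
  else
      init_type="sysvinit"   # sensible fallback
  fi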

I have made no progress whatsoever with removing the requirement for
screen, as I still have not found a working alternative for providing the
root console created by lxc-start (lxc-start takes over the console it is
run on, and without screen that console is lost when the agent runs as a
cluster service).

At this point I may have to confess that getting this working without
screen may be beyond my abilities (for now; I'm stubborn, so I will keep
plugging away at this, but don't hold your breath).

I'm still not sure why the use of screen is so repellent to some, as it
works well and is generally quite innocuous.
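
For context, a minimal sketch of the screen-based start being discussed
(flags and parameter names are illustrative, not lifted from the
attached agent):

  # Run lxc-start detached inside screen so the container's root console
  # survives being started from a non-interactive cluster environment.
  screen -dmS "lxc-$OCF_RESKEY_name" \
      lxc-start -n "$OCF_RESKEY_name" -f "$OCF_RESKEY_config"

The session can later be attached with screen -r for debugging.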

Regards
Darren


On Sat, 2011-04-30 at 12:00 -0600,
linux-ha-dev-requ...@lists.linux-ha.org wrote:

 Date: Sat, 30 Apr 2011 16:10:52 +0930
 From: Darren Thompson darr...@akurit.com.au
 Subject: Re: [Linux-ha-dev] [Openais] An OCF agent for LXC (Linux
 Containers) - Linux-HA-Dev Digest, Vol 89, Issue 32
 To: linux-ha-dev@lists.linux-ha.org
 Message-ID: 1304145652.5625.50.ca...@darrenspc.akurit.com.au
 Content-Type: text/plain; charset=utf-8
 
 Florian/Team
 
 Please find the latest instalment of the LXC container OCF agent.
 
 Changes (summary):
 Moved cgroup_mounted out of the default initialisation and made it a
 function (used by start/stop).
 Also cleaned up some other code sections, including expanding the
 verify_all section to test the configuration more fully, and merged the
 validate and status sections.
 My next work will be determining the best way to make the agent
 independent of the container's init type (due to the rise of init
 replacements like systemd and upstart), and also investigating the
 removal of the screen tool from the startup, as it has received
 negative feedback from a few sources.
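
 A hypothetical sketch of such a cgroup_mounted helper (illustrative
 only, not the code from the attached agent):

   # Succeed only if a cgroup hierarchy shows up in /proc/mounts.
   cgroup_mounted() {
       grep -q " cgroup " /proc/mounts
   }

 start/stop could then bail out early with, for example,
 cgroup_mounted || return $OCF_ERR_INSTALLED.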
 
 Darren


[Attachment: lxc (application/shellscript)]


Re: [Linux-ha-dev] [Openais] An OCF agent for LXC (Linux Containers) - Linux-HA-Dev Digest, Vol 90, Issue 1

2011-05-03 Thread Dejan Muhamedagic
Hi,

On Tue, May 03, 2011 at 11:07:45PM +0930, Darren Thompson wrote:
 Lars/Team
 
 I agree.
 
 I can understand that there are some parameters that are MANDATORY and
 UNPREDICTABLE, but there are just as likely to be parameters that are
 MANDATORY and have a REASONABLE DEFAULT VALUE.
 
 As an example, the IP address is Mandatory and Unpredictable, but the
 path to and name of a configuration file (although still Mandatory)
 could have a Reasonable Default Value (though there may still be merit
 in allowing it to be overridden to provide localisations etc.).
 
 I think it's too extreme to force all mandatory values to have no
 default, as (in this particular case, for example) there are a few
 mandatory values that do have reasonable defaults that could be used in
 most cases.

If the parameter is required (mandatory), under which circumstances can
the default be used? If you want to let the user _not_ specify a
parameter and fall back on the default, then the parameter is optional.
I really don't understand all the confusion.
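
In OCF meta-data terms, the two cases look roughly like this (parameter
names and the default path are made up for illustration):

  <!-- Mandatory and unpredictable: no sensible default exists. -->
  <parameter name="ip" unique="1" required="1">
    <content type="string" />
  </parameter>

  <!-- Has a reasonable default, hence optional in meta-data terms. -->
  <parameter name="config" unique="0" required="0">
    <content type="string" default="/etc/lxc/lxc.conf" />
  </parameter>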

Thanks,

Dejan

 Darren
 
 
 On Mon, 2011-05-02 at 12:00 -0600,
 linux-ha-dev-requ...@lists.linux-ha.org wrote:
 
  Date: Sun, 1 May 2011 20:49:41 +0200
  From: Lars Marowsky-Bree l...@novell.com
  Subject: Re: [Linux-ha-dev] [Openais] An OCF agent for LXC (Linux
  Containers)
  To: High-Availability Linux Development List
  linux-ha-dev@lists.linux-ha.org
  Message-ID: 20110501184941.gr17...@suse.de
  Content-Type: text/plain; charset=iso-8859-1
  
  On 2011-04-26T16:03:48, Dejan Muhamedagic de...@suse.de wrote:
  
    - the required attributes in meta-data need to be reviewed;
  a parameter is either required or has a default, and cannot be both
  
  Why would this be the case?
  
  
  Regards,
  Lars
  

 ___
 Linux-HA-Dev: Linux-HA-Dev@lists.linux-ha.org
 http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
 Home Page: http://linux-ha.org/

___
Linux-HA-Dev: Linux-HA-Dev@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
Home Page: http://linux-ha.org/


Re: [Linux-ha-dev] [Openais] An OCF agent for LXC (Linux Containers) - Linux-HA-Dev Digest, Vol 90, Issue 2

2011-05-03 Thread Darren Thompson
Florian/Team

Sorry I did not read this sooner; my last update will still have been
messy for you (sorry).

I'll grab a copy of the current version and re-base my work on that.

I see that you have streamlined it quite a bit. I'll test it in my
environment to ensure it's working as expected (I note that the
parameters have changed names and some functionality, so I will
re-create my cluster/LXC containers using this and re-test).

Darren


On Tue, 2011-05-03 at 07:59 -0600,
linux-ha-dev-requ...@lists.linux-ha.org wrote:

 Hello Darren,
 
 Please get the current version from
 https://github.com/fghaas/resource-agents/blob/lxc/heartbeat/lxc, and
 also review the commit history at
 https://github.com/fghaas/resource-agents/commits/lxc/heartbeat/lxc.
 
 When you send more updates, please do make sure they track the latest
 version in my repo. I am doing my best to split this up into patches and
 check them in individually, but the re-introduction of errors that have
 already been fixed is not something that gives me thrills. Thanks.
 
 Cheers,
 Florian


Re: [Linux-HA] get haresources2cib.py

2011-05-03 Thread Andrew Beekhof
On Mon, May 2, 2011 at 9:33 PM, Vinay Nagrik vnag...@gmail.com wrote:
 Thank you Andrew.

 Could you please tell me where to get the DTD for cib.xml, and from
 where I can download the crm shell.

Both get installed with the rest of Pacemaker.
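
A quick, illustrative way to confirm both are present once Pacemaker is
installed:

  # Locate the crm shell and the schema files shipped with pacemaker.
  which crm
  rpm -ql pacemaker | grep -i -e crm -e dtd -e rng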


 thanks in anticipation.

 With best regards.

 nagrik

 On Mon, May 2, 2011 at 12:56 AM, Andrew Beekhof and...@beekhof.net wrote:

 On Sun, May 1, 2011 at 9:26 PM, Vinay Nagrik vnag...@gmail.com wrote:
  Dear Andrew,
 
   I read your document clusters from scratch and found it very
   detailed. It gave lots of information, but I was looking to create a
   cib.xml and could not decipher the language as to the syntax and the
   different fields to be put in cib.xml.

 Don't look at the xml.  Use the crm shell.

 
  I am still looking for the haresources2cib.py script.

 Don't. It only creates configurations conforming to the older and now
 unsupported syntax.

   I searched the web but could not find it anywhere.

   I have 2 more questions.

   Do I have to create the cib.xml file on the nodes where I am running
   heartbeat v.2 software?
   Does cib.xml have to reside in the /var/lib/crm directory, or can it
   reside anywhere else?
 
  Kindly provide these answers.  I will greatly appreciate your help.
 
  Have a nice day.
 
  Thanks.
 
  nagrik
 
  On Sat, Apr 30, 2011 at 1:32 AM, Andrew Beekhof and...@beekhof.net
 wrote:
 
  Forget the conversion.
  Use the crm shell to create one from scratch.
 
   And look for the clusters from scratch doc relevant to your version
   - it's worth the read.
 
  On Sat, Apr 30, 2011 at 1:19 AM, Vinay Nagrik vnag...@gmail.com
 wrote:
   Hello Group,
  
    Kindly tell me where I can download the haresources2cib.py file
    from.
  
    Please also tell me: can I convert the haresources file on a node
    where I am not running the high-availability service, and then copy
    the converted .xml file into the /var/lib/heartbeat directory on the
    node where I am running high availability?

    Also, must the cib file reside under the /var/lib/heartbeat
    directory, or can it reside under any directory, for example under
    /etc?
  
   please let me know.  I am just a beginner.
  
   Thanks in advance.
  
   --
   Thanks
  
   Nagrik


Re: [Linux-HA] Antw: Re: ocf:pacemaker:ping: dampen

2011-05-03 Thread Andrew Beekhof
On Mon, May 2, 2011 at 5:29 PM, Lars Ellenberg
lars.ellenb...@linbit.com wrote:
 On Mon, May 02, 2011 at 04:04:56PM +0200, Andrew Beekhof wrote:
  Still, we may get a spurious failover in this case:
 
  reachability:
    +__
  Node A monitoring intervals:
         +    -    +    +    +    -    -    -    -    -
  Node B monitoring intervals:
      +    +    -    +    +    -    -    -    -    -
  dampening interval:         |-|
 
  Note how the dampening helps to ignore the first network glitch.
 
  But for the permanent network problem, we may get spurious failover:

 Then your dampen setting is too short or interval too long :-)

 No.
 Regardless of dampen and interval setting.

 Unless both nodes notice the change at the exact same time,
 expire their dampen at the exact same time,

This is where you've diverged.
Once dampen expires on one node, _all_ nodes write their current value.

 and place their updated
 values into the CIB at exactly the same time.

  If a ping node just dies, then one node will always notice it first.
  And regardless of dampen and interval settings, one will reach the CIB
  first, and therefore the PE will see the connectivity change first for
  only one of the nodes, and only later for the other (once it has
  noticed, *and* its dampen interval has expired, too).

 Show me how you can work around that using dampen or interval settings.

 --
 : Lars Ellenberg
 : LINBIT | Your Way to High Availability
 : DRBD/HA support and consulting http://www.linbit.com


Re: [Linux-HA] Antw: Re: ocf:pacemaker:ping: dampen

2011-05-03 Thread Ulrich Windl
 Andrew Beekhof and...@beekhof.net wrote on 02.05.2011 at 13:20 in
 message banlktimmruow2ldzsrzlmb1wwy9hpp4...@mail.gmail.com:
 On Mon, May 2, 2011 at 8:27 AM, Ulrich Windl
 ulrich.wi...@rz.uni-regensburg.de wrote:
 Andrew Beekhof and...@beekhof.net wrote on 29.04.2011 at 09:31 in
 message BANLkTi=-ftyk9uxcgu0m2wqhquu_rt8...@mail.gmail.com:
  On Fri, Apr 29, 2011 at 9:27 AM, Dominik Klein d...@in-telegence.net 
  wrote:
   It waits $dampen before changes are pushed to the CIB, so that
   transient ICMP hiccups do not produce an unintended failover.
  
   At least that's my understanding.
 
  correcto
 
  Hi!
 
  Strange: so the update is basically just delayed by that amount of
  time? I see no advantage: if you put a bad value into the CIB
  immediately or after some delay, the value won't get better for it.
  Dampening suggests some filtering to me, but you are saying you are
  not filtering the values, just delaying them. Right?
 
 Only the current value is written.
 So the cluster will tolerate minor outages provided they last for
 less than the dampen interval and the monitor frequency is high
 enough.
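
For reference, a dampened ping clone in crm syntax might look like this
(the host and values are illustrative):

  primitive p_ping ocf:pacemaker:ping \
      params host_list="192.168.1.254" multiplier="1000" dampen="30s" \
      op monitor interval="10s"
  clone c_ping p_ping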

Hi!

It seems I'll have to use the source to understand. Maybe then I can
suggest how to describe it properly ;-)

Thanks anyway; maybe I'm too stupid to understand.

Regards,
Ulrich




[Linux-HA] problem with ldirectord- web server up/site down :(

2011-05-03 Thread Dave Augustus
We had a failure yesterday (we have had this happen in the past, about
once a month; I am now taking the time to post the problem) and one of
our web sites was unavailable. After a few minutes of investigation, I
found that the load balancer did not have any hosts in the rotation for
that site. All 3 web servers were up and working, so the check in
ldirectord should have had all 3 in the current running configuration of
ipvs. A simple restart of ldirectord caused all 3 web servers to be added
back into the rotation immediately, and the site was restored to service.

There is no clustering software used in this current configuration.

It seems that ldirectord forgets what it is supposed to do over time (a
few weeks), and a simple restart makes it happy again, as it did in this
case and in previous cases.

Here are the software versions for the load balancer:
CentOS release 5.5 x86_64
ldirectord-1.0.4-1.1.el5
kernel 2.6.18-194.32.1.el5

Here are the important parts of the ldirectord.cf file (anonymized)
=
# Global Directives
checktimeout=20
checkinterval=30
autoreload=yes
logfile=local0
quiescent=no
fork=yes

# http virtual service for redirecting port 80 to my.securesite.com
virtual=192.168.35.117:80
 real=192.168.35.43:80 gate 100
 real=192.168.35.44:80 gate 100
 real=192.168.35.45:80 gate 100
 service=http
 scheduler=rr
 netmask=255.255.255.255
 protocol=tcp

# https virtual service for my.securesite.com
virtual=192.168.35.117:443
 real=192.168.35.43:40117 gate 100
 real=192.168.35.44:40117 gate 100
 real=192.168.35.45:40117 gate 100
 service=https
 scheduler=wlc
 persistent=600
 netmask=255.255.255.255
 protocol=tcp
 virtualhost=my.securesite.com
=

/etc/ipvsadm.rules
=
(no entry for this host- let ldirectord figure it out)
(note: I have since ADDED the rules here for the 117 https host
but I don't see how not having it matters as ldirectord manages that.)
=

The logs show no point where the actual site was removed from ipvs. They
did have some entries like the following containing "failed"; notice the
timestamps:

May  1 21:10:56 lb71 ldirectord[7336]: system(/sbin/ipvsadm -a -t 
63.251.35.117:80 -r 192.168.35.45:80 -g -w 100) failed:
May  1 21:10:56 lb71 ldirectord[7336]: Added real server: 
192.168.35.45:80 (192.168.35.117:80) (Weight set to 100)
May  1 21:10:56 lb71 ldirectord[7343]: Resetting soft failure count: 
192.168.35.45:40117 (tcp:192.168.35.117:443)
May  1 21:10:56 lb71 ldirectord[7343]: system(/sbin/ipvsadm -a -t 
192.168.35.117:443 -r 192.168.35.45:40117 -g -w 100) failed:
May  1 21:10:56 lb71 ldirectord[7343]: Added real server: 
192.168.35.45:40117 (192.168.35.117:443) (Weight set to 100)

Is this a bug in ldirectord? Something wrong in my config? Should I
look to keepalived? mon?
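
If it happens again, it may help to capture the live state before
restarting; for example (illustrative commands):

  # Snapshot the LVS table and counters, and check the daemon is alive.
  ipvsadm -L -n
  ipvsadm -L -n --stats
  ldirectord status   # uses the default config path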

Thanks,
Dave


Re: [Linux-HA] [PATCH] Low: adding cluster-glue-extras subpackage

2011-05-03 Thread Vadym Chepkov
On Mon, May 2, 2011 at 4:20 AM, Dejan Muhamedagic deja...@fastmail.fm wrote:
 Hi Vadim,

 On Sun, May 01, 2011 at 10:53:44AM -0400, Vadym Chepkov wrote:
 Hi,

 the recent addition of the vcenter external plugin generates a
 dependency on the exotic perl(VMware::VIRuntime) package, which the
 majority won't use.

 The dependency is automatically generated by find-requires on RH
 distributions. Whether the package is exotic or not, depends on
 what you run. One could brand exotic just about any stonith
 plugin.

But this is what I was changing; cluster-glue-fedora.spec is intended
to be used on RH distros, no?
Maybe exotic is the wrong choice of word, but the package is not part
of the Fedora or EPEL repositories.



 I propose to create a separate subpackage, cluster-glue-extras, for all 
 optional components

 That is a wrong place to fix this issue. We should not create an
 extra package just because find-requires (actually perl.req)
 cannot be told not to create a dependency on a module.

Not sure what you mean by a wrong place? You mean let the packagers
deal with it?

The dependency can certainly be filtered out, but this would just mask
the problem, not solve it.
The module definitely needs this dependency; it is not a false finding,
and I don't see how having an extra package can hurt anything. This is a
standard way to reduce the number of dependencies. For example, zabbix
can work with either mysql, postgres, or sqlite as a backend. Instead of
forcing users to install libraries to support everything, they have
subpackages that pull in only the necessary components when the desired
backend is selected for installation.

Just a thought,

Vadym


Re: [Linux-HA] [PATCH] Low: adding cluster-glue-extras subpackage

2011-05-03 Thread Dejan Muhamedagic
On Tue, May 03, 2011 at 07:41:47AM -0400, Vadym Chepkov wrote:
 On Mon, May 2, 2011 at 4:20 AM, Dejan Muhamedagic deja...@fastmail.fm wrote:
  Hi Vadim,
 
  On Sun, May 01, 2011 at 10:53:44AM -0400, Vadym Chepkov wrote:
  Hi,
 
  the recent addition of the vcenter external plugin generates a
  dependency on the exotic perl(VMware::VIRuntime) package, which the
  majority won't use.
 
  The dependency is automatically generated by find-requires on RH
  distributions. Whether the package is exotic or not, depends on
  what you run. One could brand exotic just about any stonith
  plugin.
 
 But this is what I was changing; cluster-glue-fedora.spec is intended
 to be used on RH distros, no?
 Maybe exotic is the wrong choice of word, but the package is not part
 of the Fedora or EPEL repositories.
 
 
 
  I propose to create a separate subpackage, cluster-glue-extras, for all 
  optional components
 
  That is a wrong place to fix this issue. We should not create an
  extra package just because find-requires (actually perl.req)
  cannot be told not to create a dependency on a module.
 
 Not sure what you mean by a wrong place? You mean let the packagers
 deal with it?

No, what I meant was to filter the dependency out. However, it is
ultimately up to the packagers to split packages in any way they see
fit. I think that packaging is not exactly the same across the various
distributions.
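
For the record, a sketch of what such filtering could look like in a
spec file; the exact mechanism depends on the rpm version, so treat
this as illustrative:

  # rpm >= 4.9 (e.g. recent Fedora) can exclude auto-generated requires:
  %global __requires_exclude perl\\(VMware::VIRuntime\\)

  # Older distributions typically override perl.req with a wrapper
  # script that greps the unwanted module out of the generated list.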

 The dependency can certainly be filtered out, but this would just mask
 the problem, not solve it.

I don't think that this is a problem at all. Whoever wants to
use vcenter will have to install whatever the plugin needs.
That's how it has always been with stonith plugins. Or do you
propose to create a package per stonith plugin?

 The module definitely needs this dependency; it is not a false finding,
 and I don't see how having an extra package can hurt anything. This is
 a standard way to reduce the number of dependencies. For example,
 zabbix can work with either mysql, postgres, or sqlite as a backend.
 Instead of forcing users to install libraries to support everything,
 they have subpackages that pull in only the necessary components when
 the desired backend is selected for installation.

Right, but there's a limit to the number of backends. If we start
creating packages because of dependencies of specific stonith modules,
there's no telling how many we will have in the end. Besides, the plugin
can deliver at least its description to the interested user even without
its dependencies.

Thanks,

Dejan


 Just a thought,
 
 Vadym


[Linux-HA] Filesystem do not start on Pacemaker-Cluster

2011-05-03 Thread KoJack

Hi,
I was trying to set up a Pacemaker cluster. After I added all the
resources, the filesystem will not start on one node.

crm_verify -L -V


crm_verify[30068]: 2011/05/03_10:35:39 WARN: unpack_rsc_op: Processing
failed op WebFS:0_start_0 on apache01: unknown error (1)
crm_verify[30068]: 2011/05/03_10:35:39 WARN: unpack_rsc_op: Processing
failed op WebFS:0_stop_0 on apache01: unknown exec error (-2)
crm_verify[30068]: 2011/05/03_10:35:39 WARN: common_apply_stickiness:
Forcing WebFSClone away from apache01 after 100 failures (max=100)
crm_verify[30068]: 2011/05/03_10:35:39 WARN: common_apply_stickiness:
Forcing WebFSClone away from apache01 after 100 failures (max=100)
crm_verify[30068]: 2011/05/03_10:35:39 WARN: common_apply_stickiness:
Forcing WebFSClone away from apache01 after 100 failures (max=100)
crm_verify[30068]: 2011/05/03_10:35:39 ERROR: clone_rsc_colocation_rh:
Cannot interleave clone WebSiteClone and WebIP because they do not support
the same number of resources per node
crm_verify[30068]: 2011/05/03_10:35:39 ERROR: clone_rsc_colocation_rh:
Cannot interleave clone WebSiteClone and WebIP because they do not support
the same number of resources per node
crm_verify[30068]: 2011/05/03_10:35:39 WARN: should_dump_input: Ignoring
requirement that WebFS:0_stop_0 comeplete before WebFSClone_stopped_0:
unmanaged failed resources cannot prevent clone shutdown
Errors found during check: config not valid


crm configure show

node apache01
node apache02
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip=10.1.1.5 cidr_netmask=8 nic=eth0
clusterip_hash=sourceip \
op monitor interval=30s
primitive WebData ocf:linbit:drbd \
params drbd_resource=wwwdata \
op monitor interval=60s \
op start interval=0 timeout=240s \
op stop interval=0 timeout=100s
primitive WebFS ocf:heartbeat:Filesystem \
params device=/dev/drbd/by-res/wwwdata directory=/var/www/html
fstype=gfs2 \
op start interval=0 timeout=60s \
op stop interval=0 timeout=60s
primitive WebSite ocf:heartbeat:apache \
params configfile=/etc/httpd/conf/httpd.conf \
op monitor interval=1min \
op start interval=0 timeout=40s \
op stop interval=0 timeout=60s
primitive dlm ocf:pacemaker:controld \
op monitor interval=120s \
op start interval=0 timeout=90s \
op stop interval=0 timeout=100s
primitive gfs-control ocf:pacemaker:controld \
params daemon=gfs_controld.pcmk args=-g 0 \
op monitor interval=120s \
op start interval=0 timeout=90s \
op stop interval=0 timeout=100s
ms WebDataClone WebData \
meta master-max=2 master-node-max=1 clone-max=2
clone-node-max=1 notify=true
clone WebFSClone WebFS
clone WebIP ClusterIP \
meta globally-unique=true clone-max=2 clone-node-max=2
clone WebSiteClone WebSite
clone dlm-clone dlm \
meta interleave=true
clone gfs-clone gfs-control \
meta interleave=true
colocation WebFS-with-gfs-control inf: WebFSClone gfs-clone
colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone
colocation fs_on_drbd inf: WebFSClone WebDataClone:Master
colocation gfs-with-dlm inf: gfs-clone dlm-clone
colocation website-with-ip inf: WebSiteClone WebIP
order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start
order WebSite-after-WebFS inf: WebFSClone WebSiteClone
order apache-after-ip inf: WebIP WebSiteClone
order start-WebFS-after-gfs-control inf: gfs-clone WebFSClone
order start-gfs-after-dlm inf: dlm-clone gfs-clone
property $id=cib-bootstrap-options \
dc-version=1.1.4-ac608e3491c7dfc3b3e3c36d966ae9b016f77065 \
cluster-infrastructure=openais \
expected-quorum-votes=2 \
stonith-enabled=false \
no-quorum-policy=ignore
rsc_defaults $id=rsc-options \
resource-stickiness=100


Do you see any mistake in my configuration?

Thanks a lot


Re: [Linux-HA] get haresources2cib.py

2011-05-03 Thread Vinay Nagrik
Hello Andrew,

We have been going into small details and I still did not get an answer
that will put me on the right path.

I apologise for asking you these questions, but they are important for
my work.

I have downloaded

Heartbeat-3-0-STABLE-3.0.4.tar.bz2

and unpacked it.

I looked for the crm shell and any .dtd/.DTD file and did not find any.

Please, please tell me where to get the crm shell, what the steps are,
or whether I downloaded the wrong .tar.bz2 file.

There were these files as well:

glue-1.0.7.tar.bz2 (http://hg.linux-ha.org/glue/archive/glue-1.0.7.tar.bz2)
and
agents-1.0.4.tar.gz

Do I have to download these files as well?

My very first step is to create a cib.xml file, and I am running in
small circles.

Kindly help. I will greatly appreciate this.

Thanks.

arun
On Mon, May 2, 2011 at 11:16 PM, Andrew Beekhof and...@beekhof.net wrote:

 On Mon, May 2, 2011 at 9:33 PM, Vinay Nagrik vnag...@gmail.com wrote:
  Thank you Andrew.
 
  Could you please tell me where to get the DTD for cib.xml, and from
  where I can download the crm shell.

 Both get installed with the rest of Pacemaker.

 
  thanks in anticipation.
 
  With best regards.
 
  nagrik
 
  On Mon, May 2, 2011 at 12:56 AM, Andrew Beekhof and...@beekhof.net
 wrote:
 
  On Sun, May 1, 2011 at 9:26 PM, Vinay Nagrik vnag...@gmail.com wrote:
   Dear Andrew,
  
  I read your document clusters from scratch and found it very
  detailed. It gave lots of information, but I was looking to create a
  cib.xml and could not decipher the language as to the syntax and the
  different fields to be put in cib.xml.
 
  Don't look at the xml.  Use the crm shell.
 
  
   I am still looking for the haresources2cib.py script.
 
  Don't. It only creates configurations conforming to the older and now
  unsupported syntax.
 
    I searched the web but could not find it anywhere.

    I have 2 more questions.

    Do I have to create the cib.xml file on the nodes where I am running
    heartbeat v.2 software?
    Does cib.xml have to reside in the /var/lib/crm directory, or can it
    reside anywhere else?
  
   Kindly provide these answers.  I will greatly appreciate your help.
  
   Have a nice day.
  
   Thanks.
  
   nagrik
  
   On Sat, Apr 30, 2011 at 1:32 AM, Andrew Beekhof and...@beekhof.net
  wrote:
  
   Forget the conversion.
   Use the crm shell to create one from scratch.
  
    And look for the clusters from scratch doc relevant to your version
    - it's worth the read.
  
   On Sat, Apr 30, 2011 at 1:19 AM, Vinay Nagrik vnag...@gmail.com
  wrote:
Hello Group,
   
     Kindly tell me where I can download the haresources2cib.py file
     from.
   
     Please also tell me: can I convert the haresources file on a node
     where I am not running the high-availability service, and then copy
     the converted .xml file into the /var/lib/heartbeat directory on
     the node where I am running high availability?

     Also, must the cib file reside under the /var/lib/heartbeat
     directory, or can it reside under any directory, for example under
     /etc?
   
please let me know.  I am just a beginner.
   
Thanks in advance.
   
--
Thanks
   
Nagrik
--
Thanks

Nagrik


[Linux-HA] How to create cib.xml file

2011-05-03 Thread Vinay Nagrik
Hello Group,

I am absolutely new to this group and need a lot of help.

1. I am working with heartbeat version 2, but see that version 3 is already
out.

Heartbeat-3-0-STABLE-3.0.4.tar.bz2
(http://hg.linux-ha.org/heartbeat-STABLE_3_0/archive/STABLE-3.0.4.tar.bz2)

I have downloaded it and unpacked it, but did not find the crm shell or
the DTD of the cib.xml file.

Please tell me where I can get the crm shell and the corresponding DTD
for the cib.xml file.

2.  I would also appreciate it if someone could direct me to a place
where I can see the contents of cib.xml.

Kindly reply

Thanks in anticipation.

nagrik


Re: [Linux-HA] [PATCH] Low: adding cluster-glue-extras subpackage

2011-05-03 Thread Vadym Chepkov

On May 3, 2011, at 10:02 AM, Dejan Muhamedagic wrote:
 
 The dependency can certainly be filtered out, but this would just mask
 the problem, not solve it.
 
 I don't think that this is a problem at all. Whoever wants to
 use vcenter will have to install whatever the plugin needs.
 That's how it has always been with stonith plugins. Or do you
 propose to create a package per stonith plugin?

If it is justified, why not?

It's not something unheard of:

# rpm -qlp nagios-plugins-oracle-1.4.15-2.el5.x86_64.rpm 
/usr/lib64/nagios/plugins/check_oracle


There are plenty of packages out there with no files at all, by the way.
They serve as meta packages, just to pull in the proper dependencies.


Vadym



Re: [Linux-HA] How to create cib.xml file

2011-05-03 Thread Tim Serong
On 5/4/2011 at 05:54 AM, Vinay Nagrik vnag...@gmail.com wrote: 
 Hello Group, 
  
 I am absolutely new to this group and need a lot of help. 
  
 1. I am working with heartbeat version 2, but see that version 3 is already 
 out. 
  
 Heartbeat-3-0-STABLE-3.0.4.tar.bz2
 (http://hg.linux-ha.org/heartbeat-STABLE_3_0/archive/STABLE-3.0.4.tar.bz2)

 I have downloaded it and unpacked it, but did not find the crm shell or
 the DTD of the cib.xml file.

 Please tell me where I can get the crm shell and the corresponding DTD
 for the cib.xml file.

The crm shell (and everything else to do with the CIB) is included with
Pacemaker, which is a cluster resource manager that uses heartbeat
or corosync/openais for messaging.  See:

  http://linux-ha.org/wiki/Heartbeat
  
http://theclusterguy.clusterlabs.org/post/1262495133/pacemaker-heartbeat-corosync-wtf
  http://www.clusterlabs.org/wiki/FAQ

For some basic configuration information see:

  http://www.clusterlabs.org/wiki/Initial_Configuration

Once the cluster nodes are talking to each other, you can use the crm shell
(just run crm) to configure the cluster.
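
For example, an illustrative session (not tailored to any particular
cluster):

  # crm configure
  crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 \
          params ip="192.168.0.10" cidr_netmask="24" \
          op monitor interval="30s"
  crm(live)configure# verify
  crm(live)configure# commit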

I would strongly encourage you to read Clusters from Scratch, see:

  http://www.clusterlabs.org/wiki/Documentation

If you're using Heartbeat, the corosync/openais references won't help, but
all the Pacemaker information is equally relevant regardless of which
messaging layer you use.

I'm not sure if there's a similar guide around which talks about using
Pacemaker with Heartbeat in this detail, but you should probably also
look at:

  http://www.linux-ha.org/doc/users-guide/users-guide.html

 2.  I would also appreciate it if someone could direct me to a place
 where I can see the contents of cib.xml.

The CIB lives in /var/lib/heartbeat/crm/cib.xml.  You do not ever edit
this file directly, and usually don't need to look at it in its raw
XML form, thanks to the existence of the crm shell.  You might be
interested in reading:

  
http://theclusterguy.clusterlabs.org/post/178680309/configuring-heartbeat-v1-was-so-simple
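
If you do want to inspect the raw XML, a read-only dump is safe
(illustrative):

  # Query the live CIB without touching the file on disk.
  cibadmin -Q | less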

HTH,

Tim


-- 
Tim Serong tser...@novell.com
Senior Clustering Engineer, OPS Engineering, Novell Inc.


