Re: [Linux-HA] The active trap of the SNMP is delayed.
Hi Yan, Hi Andrew,

I verified operation in combination with Pacemaker 1.0.12 using your test repository:
* https://github.com/gao-yan/pacemaker-mgmt/commits/2.0-test

In my tests, both SNMP and the GUI worked without a problem. Please release the contents of this repository as the GUI for the Pacemaker 1.0 series.

Best Regards,
Hideo Yamauchi.

--- On Thu, 2011/12/1, renayama19661...@ybb.ne.jp wrote:
> Hi Yan,
>
> > I pushed a new branch "2.0-test" which is supposed to be compatible with
> > pacemaker-1.0.x:
> >
> > https://github.com/gao-yan/pacemaker-mgmt/commits/2.0-test
> >
> > Could you please build and test it against the pacemaker-1.0 branch?
> >
> > If everything works fine, I'll make a "2.0" branch and tag a 2.0.1 version.
>
> All right!
>
> I will report my test results in the first half of next week.
>
> Cheers,
> Hideo Yamauchi.
>
> --- On Wed, 2011/11/30, Gao,Yan wrote:
> > Hi Hideo,
> >
> > On 11/25/11 08:26, renayama19661...@ybb.ne.jp wrote:
> > > Hi Yan,
> > >
> > > I confirmed the contents.
> > > I don't think there is any problem.
> > Nice, thanks for doing that!
> >
> > > I would like you to prepare a tag for a 2.0.1 version that includes the
> > > following patch:
> > > * http://hg.clusterlabs.org/pacemaker/pygui/rev/c08b84a8203f
> > >
> > > We want the latest GUI for Pacemaker 1.0.
> > I pushed a new branch "2.0-test" which is supposed to be compatible with
> > pacemaker-1.0.x:
> >
> > https://github.com/gao-yan/pacemaker-mgmt/commits/2.0-test
> >
> > Could you please build and test it against the pacemaker-1.0 branch?
> >
> > If everything works fine, I'll make a "2.0" branch and tag a 2.0.1 version.
> >
> > Regards,
> > Gaoyan
> > --
> > Gao,Yan
> > Software Engineer
> > China Server Team, SUSE.
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
Re: [Linux-HA] Custom resource agent script assistance
Hello Chris,

On 12/01/2011 06:25 PM, Chris Bowlby wrote:
> Hi Everyone,
>
> I'm in the process of configuring a 2 node + DRBD enabled DHCP cluster
> using the following packages:
>
> SLES 11 SP1, with Pacemaker 1.1.6, corosync 1.4.2, and drbd 8.3.12.
>
> I know about DHCP's internal fail-over abilities, but after testing, it
> simply failed to remain viable as a more robust HA type cluster. As such
> I began working on this solution. For reference my current configuration
> looks like this:
>
> node dhcp-vm01 \
>     attributes standby="off"
> node dhcp-vm02 \
>     attributes standby="on"
> primitive DHCPFS ocf:heartbeat:Filesystem \
>     params device="/dev/drbd1" directory="/var/lib/dhcp" fstype="ext4" \
>     meta target-role="Started"
> primitive dhcp-cluster ocf:heartbeat:IPaddr2 \
>     params ip="xxx.xxx.xxx.xxx" cidr_netmask="32" \
>     op monitor interval="10s"
> primitive dhcpd_service ocf:heartbeat:dhcpd \
>     params dhcpd_config="/etc/dhcpd.conf" \
>     dhcpd_interface="eth0" \
>     op monitor interval="1min" \
>     meta target-role="Started"
> primitive dhcpdrbd ocf:linbit:drbd \
>     params drbd_resource="dhcpdata" \
>     op monitor interval="60s"
> ms DHCPData dhcpdrbd \
>     meta master-max="1" master-node-max="1" clone-max="2" \
>     clone-node-max="1" notify="true"
> colocation dhcpd_service-with_cluster_ip inf: dhcpd_service dhcp-cluster
> colocation fs_on_drbd inf: DHCPFS DHCPData:Master
> order DHCP-after-dhcpfs inf: DHCPFS:promote dhcpd_service:start
> order dhcpfs_after_dhcpdata inf: DHCPData:promote DHCPFS:start

DHCPFS:promote?? That action will never occur, so dhcpd_service will start whenever it likes ... typically not when it should ;-) ... remove that :promote. And you are missing a colocation between dhcpd_service and its filesystem. I'd suggest using a group and colocating/ordering that with DRBD:

group g_dhcp DHCPFS dhcpd_service dhcp-cluster

... or IP before dhcpd if it needs to bind to it.

Regards,
Andreas

--
Need help with Pacemaker?
http://www.hastexo.com/now

> property $id="cib-bootstrap-options" \
>     dc-version="1.1.5-ecb6baaf7fc091b023d6d4ba7e0fce26d32cf5c8" \
>     cluster-infrastructure="openais" \
>     expected-quorum-votes="2" \
>     stonith-enabled="false" \
>     no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>     resource-stickiness="100"
>
> The floating IP works without issue, as does the DRBD integration, such
> that if I put a node into standby, the IP, DRBD master/slave and FS
> mounts all transfer correctly. Only the DHCP component itself is
> failing, in that it won't start properly from within Pacemaker.
>
> I suspect it is due to having to write a new script, as I could not find
> an existing dhcpd RA anywhere. I built my own based on the
> resource agent development guide on the wiki. I've managed to get
> it to pass all the tests in the ocf-tester script:
>
> ocf-tester -n dhcpd -o monitor_client_interface=eth0 /usr/lib/ocf/resource.d/heartbeat/dhcpd
> Beginning tests for /usr/lib/ocf/resource.d/heartbeat/dhcpd...
> * Your agent does not support the notify action (optional)
> * Your agent does not support the demote action (optional)
> * Your agent does not support the promote action (optional)
> * Your agent does not support master/slave (optional)
> /usr/lib/ocf/resource.d/heartbeat/dhcpd passed all tests
>
> Additionally, if I run each of the various actions
> (start/stop/monitor/validate-all/status/meta-data) at the command line,
> they all work without issue and stop/start the dhcpd process as
> expected.
>
> dhcp-vm01:/usr/lib/ocf/resource.d/heartbeat # ps aux | grep dhcp
> root 12516 0.0 0.1 4344 756 pts/3 S+ 17:16 0:00 grep dhcp
> dhcp-vm01:/usr/lib/ocf/resource.d/heartbeat # /usr/lib/ocf/resource.d/heartbeat/dhcpd start
> DEBUG: Validating the dhcpd binary exists.
> DEBUG: Validating that we are running in chrooted mode
> DEBUG: Chrooted mode is active, testing the chrooted path exists
> DEBUG: Checking to see if the /var/lib/dhcp//etc/dhcpd.conf exists and is readable
> DEBUG: Validating the dhcpd user exists
> DEBUG: Validation complete, everything looks good.
> DEBUG: Testing the state of the daemon itself
> DEBUG: OCF_NOT_RUNNING: 7
> INFO: The dhcpd process is not running
> Internet Systems Consortium DHCP Server V3.1-ESV
> Copyright 2004-2010 Internet Systems Consortium.
> All rights reserved.
> For info, please visit https://www.isc.org/software/dhcp/
> WARNING: Host declarations are global. They are not limited to the
> scope you declared them in.
> Not searching LDAP since ldap-server, ldap-port and ldap-base-dn were
> not specified in the config file
> Wrote 0 deleted host decls to leases file.
> Wrote 0 new dynamic host decls to leases file.
> Wrote 0 leases to leases file.
> Listening on LPF/eth0/00:0c:29:d7:64:99/SERVERS
> Sending on LPF/eth0/00:0c:29:d7:
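For what it's worth, Andreas's group suggestion could look like the following crm configure fragment. This is only a sketch: the resource names match the configuration quoted above, but the constraint names and the exact ordering are my assumptions about the intended start sequence, not tested configuration.

```
# Sketch only: group replaces the individual colocation/order constraints
group g_dhcp DHCPFS dhcpd_service dhcp-cluster
colocation g_dhcp_on_drbd_master inf: g_dhcp DHCPData:Master
order g_dhcp_after_drbd_promote inf: DHCPData:promote g_dhcp:start
```

Members of a group start in the listed order and stop in reverse, so this also supplies the missing filesystem-before-dhcpd dependency.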
Re: [Linux-HA] Antw: Re: ocf_heartbeat:Xinetd: bad status report
On Mon, Nov 28, 2011 at 03:57:19PM +0100, Ulrich Windl wrote:
> >>> Florian Haas schrieb am 28.11.2011 um 15:05 in Nachricht:
> > On Mon, Nov 28, 2011 at 2:58 PM, Dejan Muhamedagic wrote:
> > >> Why? It seems "typeset" is the POSIX thing, while "local" is a BASH-ism.
> > >> So what's wrong with local variables?
> > >
> > > local is almost certainly not a bashism. At least I can recall
> > > once changing typeset to local in some RA.
> >
> > IIRC, then "local foo=bar" is a bashism, whereas "local foo; foo=bar"
> > is POSIX compliant. At least that's what checkbashisms seems to
> > indicate.
>
> Hmmm: HP-UX POSIX Shell uses "typeset -i e=0", and I always thought that's
> just POSIX.

AFAIK, neither typeset nor declare nor local is POSIX (yet); that was true as of about half a year ago, at least. There is indeed talk of making this POSIX, possibly standardized on the name "typeset".

Yes, kornshell knows about typeset, and knows neither declare nor local (or has that changed?). But ksh is rarely used as /bin/sh. Bash knows about all three, but "deprecates" typeset in favor of declare, where both are synonyms in the internal implementation.

Besides, what we care about here is not what is written in some standard (yet to come), but the real world, and that real world looks like this:

dash -c 'typeset X=1 ; echo $X'
dash: typeset: not found
dash -c 'declare X=1 ; echo $X'
dash: declare: not found
dash -c 'local X=1 ; echo $X'
1

Where dash is the only "relevant" thing frequently used as /bin/sh (again: AFAIK). So as long as you keep the agent #!/bin/sh, make sure it works with dash.

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
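To illustrate the checkbashisms distinction quoted above, here is a minimal sketch (the function name is made up): declaring the variable and assigning it in a separate step keeps the script in the form checkbashisms accepts, while still working in dash, bash, and other /bin/sh shells that provide local as an extension.

```shell
#!/bin/sh
# "local foo; foo=bar" (two steps) rather than "local foo=bar":
# the assignment-in-declaration form is what checkbashisms flags.
rc_of() {
    local rc
    rc=$1
    echo "rc=$rc"
}
rc_of 7
```

Running it under dash prints "rc=7"; under `dash -c 'typeset rc=7'` you would get "typeset: not found", exactly as in Lars's transcript.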
Re: [Linux-HA] Antw: Re: Q: unmanaged MD-RAID & auto-recovery
On 11/30/2011 02:06 AM, Lars Marowsky-Bree wrote:
> On 2011-11-29T12:36:39, Dimitri Maziuk wrote:
>
>> If you repeatedly try to re-sync with a dying disk, with each resync
>> interrupted by i/o error, you will get data corruption sooner or later.
>
> No, you shouldn't. (Unless the drive returns faulty data on read, which
> is actually a pretty rare failure mode.)

Unfortunately, it is not that rare. That is the reason for T10 DIF and for the proprietary data-correction schemes of hardware RAID vendors (mostly enterprise storage).

Cheers,
Bernd
Re: [Linux-HA] OCF RA mysql
On Wed, Nov 30, 2011 at 3:14 PM, Nick Khamis wrote:
> Does the latest version of the RAs have all the old
> heartbeat related material removed?

I don't follow. Care to clarify the question?

Florian

--
Need help with High Availability?
http://www.hastexo.com/now
Re: [Linux-HA] Custom resource agent script assistance
On Thu, 2011-12-01 at 13:25 -0400, Chris Bowlby wrote:
> Hi Everyone,
>
> I'm in the process of configuring a 2 node + DRBD enabled DHCP cluster

This doesn't really address your specific question, but I got dhcpd to work by using the ocf:heartbeat:anything RA:

primitive dhcp ocf:heartbeat:anything \
    params binfile="/usr/sbin/dhcpd" cmdline_options="-f -cf /vmgroup2/rep/dhcpd.conf -lf /vmgroup2/rep/dhcpd/dhcpd.leases" \
    op monitor interval="10" timeout="50" depth="0" \
    op start interval="0" timeout="90s" \
    op stop interval="0" timeout="100s" \
    meta target-role="Started"

The "-cf" and "-lf" arguments are just to ensure that the config file and the leases file are located within a DRBD-replicated partition. No doubt 10 people will surface to explain why this is a horrible way to do it, but it does work.

--Greg
[Linux-HA] Custom resource agent script assistance
Hi Everyone,

I'm in the process of configuring a 2 node + DRBD enabled DHCP cluster using the following packages:

SLES 11 SP1, with Pacemaker 1.1.6, corosync 1.4.2, and drbd 8.3.12.

I know about DHCP's internal fail-over abilities, but after testing, it simply failed to remain viable as a more robust HA type cluster. As such I began working on this solution. For reference, my current configuration looks like this:

node dhcp-vm01 \
    attributes standby="off"
node dhcp-vm02 \
    attributes standby="on"
primitive DHCPFS ocf:heartbeat:Filesystem \
    params device="/dev/drbd1" directory="/var/lib/dhcp" fstype="ext4" \
    meta target-role="Started"
primitive dhcp-cluster ocf:heartbeat:IPaddr2 \
    params ip="xxx.xxx.xxx.xxx" cidr_netmask="32" \
    op monitor interval="10s"
primitive dhcpd_service ocf:heartbeat:dhcpd \
    params dhcpd_config="/etc/dhcpd.conf" \
    dhcpd_interface="eth0" \
    op monitor interval="1min" \
    meta target-role="Started"
primitive dhcpdrbd ocf:linbit:drbd \
    params drbd_resource="dhcpdata" \
    op monitor interval="60s"
ms DHCPData dhcpdrbd \
    meta master-max="1" master-node-max="1" clone-max="2" \
    clone-node-max="1" notify="true"
colocation dhcpd_service-with_cluster_ip inf: dhcpd_service dhcp-cluster
colocation fs_on_drbd inf: DHCPFS DHCPData:Master
order DHCP-after-dhcpfs inf: DHCPFS:promote dhcpd_service:start
order dhcpfs_after_dhcpdata inf: DHCPData:promote DHCPFS:start
property $id="cib-bootstrap-options" \
    dc-version="1.1.5-ecb6baaf7fc091b023d6d4ba7e0fce26d32cf5c8" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"

The floating IP works without issue, as does the DRBD integration, such that if I put a node into standby, the IP, DRBD master/slave and FS mounts all transfer correctly. Only the DHCP component itself is failing, in that it won't start properly from within Pacemaker.
I suspect it is due to having to write a new script, as I could not find an existing dhcpd RA anywhere. I built my own based on the resource agent development guide on the wiki. I've managed to get it to pass all the tests in the ocf-tester script:

ocf-tester -n dhcpd -o monitor_client_interface=eth0 /usr/lib/ocf/resource.d/heartbeat/dhcpd
Beginning tests for /usr/lib/ocf/resource.d/heartbeat/dhcpd...
* Your agent does not support the notify action (optional)
* Your agent does not support the demote action (optional)
* Your agent does not support the promote action (optional)
* Your agent does not support master/slave (optional)
/usr/lib/ocf/resource.d/heartbeat/dhcpd passed all tests

Additionally, if I run each of the various actions (start/stop/monitor/validate-all/status/meta-data) at the command line, they all work without issue and stop/start the dhcpd process as expected.

dhcp-vm01:/usr/lib/ocf/resource.d/heartbeat # ps aux | grep dhcp
root 12516 0.0 0.1 4344 756 pts/3 S+ 17:16 0:00 grep dhcp
dhcp-vm01:/usr/lib/ocf/resource.d/heartbeat # /usr/lib/ocf/resource.d/heartbeat/dhcpd start
DEBUG: Validating the dhcpd binary exists.
DEBUG: Validating that we are running in chrooted mode
DEBUG: Chrooted mode is active, testing the chrooted path exists
DEBUG: Checking to see if the /var/lib/dhcp//etc/dhcpd.conf exists and is readable
DEBUG: Validating the dhcpd user exists
DEBUG: Validation complete, everything looks good.
DEBUG: Testing the state of the daemon itself
DEBUG: OCF_NOT_RUNNING: 7
INFO: The dhcpd process is not running
Internet Systems Consortium DHCP Server V3.1-ESV
Copyright 2004-2010 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
WARNING: Host declarations are global. They are not limited to the scope you declared them in.
Not searching LDAP since ldap-server, ldap-port and ldap-base-dn were not specified in the config file
Wrote 0 deleted host decls to leases file.
Wrote 0 new dynamic host decls to leases file.
Wrote 0 leases to leases file.
Listening on LPF/eth0/00:0c:29:d7:64:99/SERVERS
Sending on LPF/eth0/00:0c:29:d7:64:99/SERVERS
Sending on Socket/fallback/fallback-net
INFO: dhcpd [chrooted] has started.
DEBUG: Resource Agent Exit Status 0
DEBUG: default start returned 0

dhcp-vm01:/usr/lib/ocf/resource.d/heartbeat # ps aux | grep dhcp
dhcpd 12653 0.0 0.2 26636 1164 ? Ss 17:16 0:00 dhcpd -cf /etc/dhcpd.conf -chroot /var/lib/dhcp -lf /db/dhcpd.leases -user dhcpd -group nogroup -pf /var/run/dhcpd.pid
root 12658 0.0 0.1 4344 752 pts/3 S+ 17:16 0:00 grep dhcp

However, when I try to do the same from within Pacemaker, it fails to start up properly and I get the following error (crm_mon):

Failed actions:
    dhcpd_service_monitor_0 (node=dhcp-vm01, call=3, rc=5, status=complete): not installed
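One note on that rc=5 from the monitor_0 probe: Pacemaker probes a resource on every node, including nodes where the DRBD-backed /var/lib/dhcp is not mounted, so an agent that validates its (chrooted) config file before answering a probe will report "not installed" there. A hypothetical sketch of a probe-safe shape (this is not the poster's actual agent; names and paths are illustrative):

```shell
#!/bin/sh
# Hypothetical sketch: answer probes from process state alone, and
# keep config-file validation out of the monitor path, so a node
# whose DRBD filesystem is not mounted reports OCF_NOT_RUNNING (7)
# instead of OCF_ERR_INSTALLED (5).
OCF_SUCCESS=0; OCF_NOT_RUNNING=7; OCF_ERR_INSTALLED=5
CONF=${OCF_RESKEY_dhcpd_config:-/etc/dhcpd.conf}

dhcpd_monitor() {
    # Probe-safe: no config-file checks here.
    pgrep -x dhcpd >/dev/null 2>&1 && return $OCF_SUCCESS
    return $OCF_NOT_RUNNING
}

dhcpd_validate() {
    # Full validation is fine for start/stop, but not for probes.
    [ -r "$CONF" ] || return $OCF_ERR_INSTALLED
    return $OCF_SUCCESS
}

case "$1" in
    monitor)      dhcpd_monitor ;;
    validate-all) dhcpd_validate ;;
esac
```

This would also explain why the agent passes ocf-tester and works from the command line on the active node, yet fails its first probe on a node where the filesystem is absent.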
Re: [Linux-HA] Antw: Re: Q: "exec-time" values
>>> Dejan Muhamedagic schrieb am 01.12.2011 um 11:16 in Nachricht <20111201101646.GA11310@walrus.homenet>:
> Hi,
>
> On Thu, Dec 01, 2011 at 08:37:57AM +0100, Ulrich Windl wrote:
[...]
> > So what are you doing here?
>
> I'm not sure :) You seem to be very proficient at programming,
> why don't you just take a look at the code in glue/clplumbing?
> Search for exec_time.

Hi!

In which package? pacemaker? CRM? LRM?

Regards,
Ulrich
Re: [Linux-HA] Antw: Re: Q: "cib-last-written"
>>> Tim Serong schrieb am 01.12.2011 um 10:19 in Nachricht <4ed74691.9000...@suse.com>:
> On 12/01/2011 09:10 AM, Ulrich Windl wrote:
> >>>> "Gao,Yan" schrieb am 01.12.2011 um 06:55 in Nachricht
> >>>> <4ed716be.9090...@suse.com>:
> >> Hi,
> >>
> >> On 11/30/11 21:35, Ulrich Windl wrote:
> >>> Hi!
> >>>
> >>> Simple question: when is the attribute "cib-last-written" in XML's "cib"
> >>> element updated?
> >> When "//cib/configuration" is changed.
> >
> > So why isn't that an attribute of <configuration> then?
>
> From:
> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch-cluster-options.html
>
> "The reason for these fields to be placed at the top level instead of
> with the rest of cluster options is simply a matter of parsing. These
> options are used by the configuration database which is, by design,
> mostly ignorant of the content it holds. So the decision was made to
> place them in an easy to find location."

Hi!

I wonder about the usefulness of that value, especially as any configuration change seems to increase the epoch anyway. I never saw the CRM care about the cib-last-written string.

When talking about "easy to find locations", it was a mistake to use XML anyway ;-)

Regards,
Ulrich
Re: [Linux-HA] Antw: Re: Q: "exec-time" values
Hi,

On Thu, Dec 01, 2011 at 08:37:57AM +0100, Ulrich Windl wrote:
> >>> Dejan Muhamedagic schrieb am 30.11.2011 um 15:35 in
> >>> Nachricht <2030143512.GB6964@walrus.homenet>:
> > Hi,
> >
> > On Wed, Nov 30, 2011 at 02:56:34PM +0100, Ulrich Windl wrote:
> > > Hi!
> > >
> > > It seems the execution time is shown in milliseconds. However, it seems
> > > all execution times are multiples of 10ms. Is that intended?
> > >
> > > Examples (human readable times):
> > > exec-time="0"
> > > exec-time="100ms"
> > > exec-time="10ms"
> > > exec-time="70ms"
> > > exec-time="70ms"
> > > exec-time="70ms"
> > > exec-time="710ms"
> > > exec-time="710ms"
> > > exec-time="7s500ms"
> > > exec-time="80ms"
> > > exec-time="820ms"
> > > exec-time="850ms"
> > > exec-time="870ms"
> > > exec-time="880ms"
> > > exec-time="90ms"
> > > exec-time="910ms"
> > > exec-time="910ms"
> >
> > That's the clock resolution (10ms) for this purpose. I think it's
> > platform dependent, but I cannot recall seeing anything with
> > finer resolution (see _SC_CLK_TCK).
>
> Hi!
>
> I don't know how you measure your runtime, but even gettimeofday() has a
> better resolution. Is that exec-time the wall-time, or is it CPU-time?
>
> I don't think it makes much sense to use CPU-time there.
>
> Even then, I cannot reproduce the result:
> "CPU time used = 0.004" says the following program:
>
> #include <stdio.h>
> #include <string.h>
> #include <sys/time.h>
> #include <sys/resource.h>
>
> static int get_cpu_usage(struct timeval *tvp)
> {
>     struct rusage res;
>
>     if ( getrusage(RUSAGE_SELF, &res) != 0 )
>         return(-1);
>     tvp->tv_sec = res.ru_utime.tv_sec;
>     tvp->tv_usec = res.ru_utime.tv_usec;
>     return(0);
> }
>
> int main(int argc, char *argv[])
> {
>     struct timeval tv, now;
>
>     if (get_cpu_usage(&tv) == 0) {
>         while (get_cpu_usage(&now) == 0 &&
>                memcmp(&tv, &now, sizeof(tv)) == 0) {
>         }
>         now.tv_usec -= tv.tv_usec;
>         now.tv_sec -= tv.tv_sec;
>         if (now.tv_usec < 0)
>             now.tv_usec += 1000000, now.tv_sec -= 1;
>         printf("CPU time used = %g\n",
>                now.tv_sec + (double) now.tv_usec / 1000000);
>     }
>     return 0;
> }
>
> So what are you doing here?

I'm not sure :) You seem to be very proficient at programming, why don't you just take a look at the code in glue/clplumbing? Search for exec_time.

Thanks,
Dejan

> Regards,
> Ulrich
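On the 10ms granularity Dejan attributes to _SC_CLK_TCK: measurements taken in units of clock ticks (as with times()) cannot resolve finer than one tick, and the userspace tick rate is typically 100 Hz, i.e. 10ms per tick. A quick way to check on a given box, assuming getconf is available:

```shell
# Show the clock-tick rate that quantizes times()-based measurements
# (on typical Linux systems: 100 ticks/s, i.e. 10ms resolution).
hz=$(getconf CLK_TCK)
echo "clock ticks per second: $hz"
echo "resolution: $((1000 / hz)) ms"
```

This matches the exec-time samples above, which are all multiples of 10ms, regardless of what gettimeofday() could resolve.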
[Linux-HA] Antw: Re: Q: RA "reload"
>>> Andreas Kurz schrieb am 30.11.2011 um 14:54 in
>>> Nachricht <4ed6357a.5040...@hastexo.com>:
> On 11/30/2011 12:58 PM, Ulrich Windl wrote:
> > Hi,
> >
> > when changing a performance-related-only mount option for a filesystem, I
> > noticed that the LRM decided to restart the resource and all the depending
> > resources.
> >
> > As I know that Linux supports "-o remount", such a restart would not be
> > necessary.
> >
> > So I wonder: when will the LRM ever decide to try a "reload" method
> > (assuming the RA has one)?
> >
> > A pointer to the documentation would be OK.
>
> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#s-reload

Hi!

I just noticed that "5 Resource Agent Actions" in the current dev-guide does not even mention "reload" anywhere. So it is no surprise that only very few agents support it.

Regards,
Ulrich

> Regards,
> Andreas
Re: [Linux-HA] Antw: Re: Q: "cib-last-written"
On 12/01/2011 09:10 AM, Ulrich Windl wrote:
> >>> "Gao,Yan" schrieb am 01.12.2011 um 06:55 in Nachricht
> >>> <4ed716be.9090...@suse.com>:
>> Hi,
>>
>> On 11/30/11 21:35, Ulrich Windl wrote:
>>> Hi!
>>>
>>> Simple question: when is the attribute "cib-last-written" in XML's "cib"
>>> element updated?
>> When "//cib/configuration" is changed.
>
> So why isn't that an attribute of <configuration> then?

From:
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch-cluster-options.html

"The reason for these fields to be placed at the top level instead of with the rest of cluster options is simply a matter of parsing. These options are used by the configuration database which is, by design, mostly ignorant of the content it holds. So the decision was made to place them in an easy to find location."

Regards,
Tim

--
Tim Serong
Senior Clustering Engineer
SUSE
tser...@suse.com
Re: [Linux-HA] Antw: Re: Q: "cib-last-written"
On 12/01/11 16:10, Ulrich Windl wrote:
> >>> "Gao,Yan" schrieb am 01.12.2011 um 06:55 in Nachricht
> >>> <4ed716be.9090...@suse.com>:
>> Hi,
>>
>> On 11/30/11 21:35, Ulrich Windl wrote:
>>> Hi!
>>>
>>> Simple question: when is the attribute "cib-last-written" in XML's "cib"
>>> element updated?
>> When "//cib/configuration" is changed.
>
> So why isn't that an attribute of <configuration> then?

Actually, besides that, changes to some attributes of <cib> will trigger an update of "cib-last-written" too, such as "validate-with".

Regards,
Gaoyan

--
Gao,Yan
Software Engineer
China Server Team, SUSE.
[Linux-HA] Antw: Re: Q: RA "reload"
OK, I read "9.5. Reloading Services After a Definition Change". So the Filesystem RA lacks a "reload" operation.

However, even when an RA has a "reload" operation, not all parameter changes can be applied via a "reload"; some need a real restart. Now the confusing thing comes into play: why can't a "unique" parameter be changed and the service then be reloaded? You are unnecessarily overloading the semantics of "unique" with something completely unrelated: there are unique parameters that can be changed while still allowing a reload, and there are non-unique parameters that can be changed but do not allow a reload. Why not have a "reloadable" attribute for parameters that can be reloaded? That's another example of some strange design.

I also don't understand the note: "The metadata is re-read when the resource is started. This may mean that the resource will be restarted the first time, even though you changed a parameter with unique=0". I read this as ``the first "reload" will always be a "restart" for no obvious reason''.

Regards,
Ulrich

>>> Andreas Kurz 30.11.11 14.54 Uhr >>>
On 11/30/2011 12:58 PM, Ulrich Windl wrote:
> Hi,
>
> when changing a performance-related-only mount option for a filesystem, I
> noticed that the LRM decided to restart the resource and all the depending
> resources.
>
> As I know that Linux supports "-o remount", such a restart would not be
> necessary.
>
> So I wonder: when will the LRM ever decide to try a "reload" method
> (assuming the RA has one)?
>
> A pointer to the documentation would be OK.

http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#s-reload

Regards,
Andreas

--
Need help with Pacemaker?
http://www.hastexo.com/now

> Regards,
> Ulrich
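For reference, the "unique" flag being discussed lives in the RA metadata. A minimal, hypothetical fragment (the parameter name "config" is illustrative, not from any specific agent); note that the OCF metadata at this point has no "reloadable" attribute, which is exactly the complaint above:

```xml
<!-- Hypothetical RA metadata fragment. unique="1" both identifies the
     resource instance and, as discussed above, forces a restart rather
     than a reload when the parameter's value changes. -->
<parameters>
  <parameter name="config" unique="1" required="1">
    <longdesc lang="en">Path to the configuration file.</longdesc>
    <shortdesc lang="en">Config file</shortdesc>
    <content type="string"/>
  </parameter>
</parameters>
```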
[Linux-HA] Antw: Re: Q: "cib-last-written"
>>> "Gao,Yan" schrieb am 01.12.2011 um 06:55 in Nachricht <4ed716be.9090...@suse.com>:
> Hi,
>
> On 11/30/11 21:35, Ulrich Windl wrote:
> > Hi!
> >
> > Simple question: when is the attribute "cib-last-written" in XML's "cib"
> > element updated?
> When "//cib/configuration" is changed.

So why isn't that an attribute of <configuration> then?

> >
> > I have a CIB that was changed (new epoch) today, but the "cib-last-written"
> > is "Thu Sep 29 08:24:01 2011"
> >
> > Regards,
> > Ulrich
>
> Regards,
> Gaoyan