Re: [Pacemaker] crm_mon --as-html default permissions
I found the umask code after further inspection. The solution was to chmod the file from PHP right before the HTML output from crm_mon is read.

On Tue, Feb 18, 2014 at 1:38 AM, Andrew Beekhof wrote:
> On 12 Feb 2014, at 9:53 pm, Marko Potocnik wrote:
> > Hi,
> >
> > I've upgraded from pacemaker-1.1.7-6.el6.x86_64 to pacemaker-1.1.10-14.el6_5.2.x86_64.
> > I use crm_mon with the --as-html option to get the cluster status in an HTML file. I've noticed that the permissions of the file have changed from 644 to 640. Looking at the source code, I see that the umask is set to reflect the 640 permissions, but not for crm_mon. The default system umask is set to 0022 (644 permissions).
> >
> > Any idea why I get the 640 permissions?
>
> There doesn't seem to be anything explicit in the crm_mon code. Just a call to fopen():
>
>   Any created files will have mode S_IRUSR | S_IWUSR | S_IRGRP |
>   S_IWGRP | S_IROTH | S_IWOTH (0666), as modified by the process's
>   umask value (see umask(2)).
>
> However, it seems all code runs the following in crm_log_init():
>
>   umask(S_IWGRP | S_IWOTH | S_IROTH);
>
> which could well be the cause

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
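Andrew's hypothesis is easy to check from a shell: `S_IWGRP | S_IWOTH | S_IROTH` is octal 026, which turns fopen(3)'s default creation mode 0666 into 0640. A minimal sketch of both the effect and the chmod workaround mentioned above; the file path is made up, and `stat -c` is the GNU coreutils form available on RHEL:

```shell
#!/bin/sh
# 0666 masked by umask 026 gives 0640: 0666 & ~0026 = 0640.
rm -f /tmp/cluster-status.html
(
  umask 026                                  # what crm_log_init() effectively sets
  : > /tmp/cluster-status.html               # stands in for crm_mon's fopen()
  stat -c '%a' /tmp/cluster-status.html      # prints 640 on Linux
)

# The workaround from the thread: force world-readable permissions
# right before the web server reads the file.
chmod 644 /tmp/cluster-status.html
stat -c '%a' /tmp/cluster-status.html        # prints 644
```

The same chmod can of course be issued from PHP (or a cron job) instead of a shell, as the original poster ended up doing.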
[Pacemaker] crm_mon --as-html default permissions
Hi,

I've upgraded from pacemaker-1.1.7-6.el6.x86_64 to pacemaker-1.1.10-14.el6_5.2.x86_64. I use crm_mon with the --as-html option to get the cluster status in an HTML file. I've noticed that the permissions of the file have changed from 644 to 640. Looking at the source code, I see that the umask is set to reflect the 640 permissions, but not for crm_mon. The default system umask is set to 0022 (644 permissions).

Any idea why I get the 640 permissions?

Regards,
Marko
Re: [Pacemaker] Filesystem resource agent patch
Actually, the symbolic link is the beautifier. We use different versions of the database server, and with the symbolic link the mount point is always the same. Do I need to do anything else for the patch to make it into the main branch?

Regards,
Marko

On Fri, Mar 18, 2011 at 2:29 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Fri, Mar 18, 2011 at 11:35:01AM +0100, Marko Potocnik wrote:
> > If you use symbolic links in the Filesystem resource agent directory parameter, the monitor operation fails, because the actual mount point in /proc/mounts (or the output of the mount command) is different from the configured one.
>
> Why don't you just specify the actual mount point? Are they so ugly? Must say that I've never even tried to do a mount on a symbolic link :)
>
> Cheers,
> Dejan
>
> > Here is the patch that fixes this:
> >
> > --- Filesystem_new_org 2011-03-18 11:32:37.0 +0100
> > +++ Filesystem_new 2011-03-18 12:27:35.0 +0100
> > @@ -1002,0 +1003,6 @@
> > +
> > + #Resolve symlinks in MOUNTPOINT
> > + resolved_mntpnt=`readlink -f $MOUNTPOINT`
> > + if [ $? -eq 0 ]; then
> > + MOUNTPOINT=$resolved_mntpnt
> > + fi
> >
> > Regards,
> > Marko
[Pacemaker] Filesystem resource agent patch
If you use symbolic links in the Filesystem resource agent directory parameter, the monitor operation fails, because the actual mount point in /proc/mounts (or the output of the mount command) is different from the configured one. Here is the patch that fixes this:

--- Filesystem_new_org 2011-03-18 11:32:37.0 +0100
+++ Filesystem_new 2011-03-18 12:27:35.0 +0100
@@ -1002,0 +1003,6 @@
+
+ #Resolve symlinks in MOUNTPOINT
+ resolved_mntpnt=`readlink -f $MOUNTPOINT`
+ if [ $? -eq 0 ]; then
+ MOUNTPOINT=$resolved_mntpnt
+ fi

Regards,
Marko
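The mismatch the patch addresses can be reproduced standalone: the agent is configured with a symlink path, but /proc/mounts records the resolved target, so a literal string comparison fails. `readlink -f` canonicalises the configured mount point before comparing. A sketch with made-up paths:

```shell
#!/bin/sh
# /tmp/db_mnt is the "pretty" configured mount point (a symlink);
# /tmp/real_mnt is what /proc/mounts would actually record.
mkdir -p /tmp/real_mnt
rm -f /tmp/db_mnt
ln -sfn /tmp/real_mnt /tmp/db_mnt

MOUNTPOINT=/tmp/db_mnt
# Same logic as the patch: resolve symlinks, fall back to the
# original value if resolution fails.
resolved_mntpnt=`readlink -f $MOUNTPOINT`
if [ $? -eq 0 ]; then
    MOUNTPOINT=$resolved_mntpnt
fi
echo "$MOUNTPOINT"    # the canonical path, matching /proc/mounts
```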
Re: [Pacemaker] Reboot host when service fails
Setting the on-fail parameter does nothing. I still have to define a stonith agent and enable stonith. I'm a little lost here: I don't know which stonith agent to use, and I don't have any real stonith device. I think I don't really need one, since I just want to reboot a node when a service fails.

Also, is it possible to fence a node only when the fail-count of a resource reaches a certain number?

Regards,
Marko

On Wed, Dec 8, 2010 at 1:29 PM, Pavlos Parissis wrote:
> On 8 December 2010 10:50, Marko Potocnik wrote:
> > Hi,
> > is it possible to configure pacemaker to reboot the host machine when a service pacemaker monitors fails (or migration-threshold for the service is reached)?
> > The service could be a virtual machine or an ordinary service (apache, database, ...).
> > Regards,
> > Marko
>
> Set the on-fail parameter of the monitor operation to fence; have a look here:
> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/s-resource-operations.html
>
> Cheers,
> Pavlos
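For what it's worth, on-fail="fence" only takes effect once fencing is actually configured: stonith-enabled must be true and at least one stonith resource must exist. A minimal crm sketch, assuming IPMI-capable hardware; the node name, IP address and credentials below are made up, and external/ipmi is just one example agent:

```
# Hypothetical fencing device for node1 (parameters are placeholders)
primitive st_node1 stonith:external/ipmi \
        params hostname="node1" ipaddr="10.0.0.11" userid="admin" passwd="secret"
# Monitored service; a failed monitor now triggers fencing of the node
primitive res_db lsb:mydb \
        op monitor interval="10s" on-fail="fence"
property stonith-enabled="true"
```

Without a real fencing device, test setups sometimes use an SSH- or meatware-based agent, but those are not safe for production.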
[Pacemaker] Reboot host when service fails
Hi,

is it possible to configure pacemaker to reboot the host machine when a service pacemaker monitors fails (or migration-threshold for the service is reached)? The service could be a virtual machine or an ordinary service (apache, database, ...).

Regards,
Marko
Re: [Pacemaker] Pacemaker on RHEL 4.8
Scratch that, resource stopping works too. And I can live without yum :) Thanks again.

On Thu, Nov 25, 2010 at 2:05 PM, Marko Potocnik wrote:
> Thanks Andrew, I downloaded and compiled libxml2 2.8.7-1. Resource editing with crm now works, but I am still having problems with resource stopping. Any idea why the whole node exits and rejoins the cluster?
>
> On Thu, Nov 25, 2010 at 10:22 AM, Andrew Beekhof wrote:
>> On Wed, Nov 24, 2010 at 5:55 PM, Marko Potocnik wrote:
>> > Hi,
>> >
>> > I'm also having problems with pacemaker / heartbeat on RHEL 4.8.
>> >
>> > [yum traceback and repo configuration snipped; see the original post below]
>> >
>> > - If I edit the configuration I often get an error that the xml in the CIB can not be replaced. I got this when I tried to change the res_ftp monitor timeout to 40s:
>>
>> I think this is due to an old bug in libxml2.
>> NTT posted about the same problem recently.
Re: [Pacemaker] Active / Active pacemaker configuration advice
On Thu, Nov 25, 2010 at 2:05 PM, Michael Schwartzkopff wrote:
> On Thursday 25 November 2010 08:56:21 Marko Potocnik wrote:
> (...)
> > The order constraint does not affect the IP migration strategy in case of service failure. I agree that I missed it, but it does not affect the current behavior of the IP resource in case of service failure.
>
> Perhaps it is better if you describe what you want to achieve so we could help with the solution:
>
> - You want to have a service running on two nodes so that failover happens faster.
> - In case of a failure of the service on one node you want the IP address to move to the other node.
>
> Is that all?

Pretty much. The service which failed should also be restarted, so that fail-back is possible. And that is all :)

> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0163) 172 50 98
Re: [Pacemaker] Pacemaker on RHEL 4.8
Thanks Andrew, I downloaded and compiled libxml2 2.8.7-1. Resource editing with crm now works, but I am still having problems with resource stopping. Any idea why the whole node exits and rejoins the cluster?

On Thu, Nov 25, 2010 at 10:22 AM, Andrew Beekhof wrote:
> On Wed, Nov 24, 2010 at 5:55 PM, Marko Potocnik wrote:
> > Hi,
> >
> > I'm also having problems with pacemaker / heartbeat on RHEL 4.8.
> >
> > [yum traceback and repo configuration snipped; see the original post below]
> >
> > - If I edit the configuration I often get an error that the xml in the CIB can not be replaced. I got this when I tried to change the res_ftp monitor timeout to 40s:
>
> I think this is due to an old bug in libxml2.
> NTT posted about the same problem recently.
>
> > [r...@ankaran ~]# crm configure edit
> > ERROR: could not replace rg_ftp
> > INFO: offending xml: [IPaddr2 primitive; the XML is mangled in the archive]
Re: [Pacemaker] Active / Active pacemaker configuration advice
I mistyped. Actually I want the service to be controlled by pacemaker: the service should run on both nodes (hot-standby, which I achieved using clones), the service should be restarted if it fails on any node, and the IP should move if the service on the same node fails (and the service should be restarted).

On Wed, Nov 24, 2010 at 11:02 PM, Devin Reade wrote:
> --On Tuesday, November 23, 2010 10:21:04 AM +0100 Marko Potocnik wrote:
>
> > I'm using ftp just for testing. I want a service to run on both nodes and only the IP to move in case a service fails.
> > I don't want to stop / start the service if a node fails.
>
> You might be able to use the behavior of sshd as a hint on how to proceed. In that case, if you have sshd running on both nodes and bound to the wildcard address, then you can have the floating IP move around under the control of the cluster, but leave sshd starting and stopping in the normal fashion (via rc scripts).
>
> To a client, it *looks* like the sshd service moves back and forth between the nodes, even though sshd itself is unmanaged.
>
> On the other hand, if sshd binds to a specific IP, this mechanism breaks.
>
> (In the particular case of sshd, this only works of course if both hosts have the same host key, otherwise the client will get warnings about inconsistent server keys.)
>
> Devin
Re: [Pacemaker] Active / Active pacemaker configuration advice
On Wed, Nov 24, 2010 at 4:04 PM, Michael Schwartzkopff wrote:
> On Tuesday 23 November 2010 10:21:04 Marko Potocnik wrote:
> > On Tue, Nov 23, 2010 at 9:55 AM, Michael Schwartzkopff wrote:
> > > On Tuesday 23 November 2010 09:10:58 Marko Potocnik wrote:
> > > > Hi,
> > > >
> > > > I am trying to configure a service (ftp for proof of concept) to run in an active / active configuration. A floating IP is used to access this service. What I am trying to achieve is that the floating IP would move to the second node if the service fails on the first node. After that the service should be restarted.
> > > >
> > > > I am able to achieve all but the restart of the service with the following configuration:
> > > >
> > > > node $id="34d5eba9-130f-4c64-9460-4a5310ac510c" jesenice.iskratel.si \
> > > >         attributes standby="off"
> > > > node $id="5fdf23a5-61c4-4a57-80fb-c764954a5f14" olimpija.iskratel.si \
> > > >         attributes standby="off"
> > > > primitive res_ftp lsb:vsftpd \
> > > >         meta migration-threshold="1" \
> > > >         op monitor on-fail="restart" interval="10s"
> > > > primitive res_ip_ftp ocf:heartbeat:IPaddr2 \
> > > >         params ip="172.18.251.6" cidr_netmask="24" \
> > > >         op monitor interval="15s" timeout="30s"
> > > > clone c_ftp res_ftp \
> > > >         meta clone-max="2" clone-node-max="1" globally-unique="false" target-role="Started"
> > > > colocation c_ip_on_ftp inf: res_ip_ftp c_ftp
>
> Here I'm missing something. I'd like to see something like:
>
> order ord_FTP_IP inf: c_ftp res_ip_ftp

The order constraint does not affect the IP migration strategy in case of service failure. I agree that I missed it, but it does not affect the current behavior of the IP resource in case of service failure.

> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0163) 172 50 98
Re: [Pacemaker] Active / Active pacemaker configuration advice
On Wed, Nov 24, 2010 at 2:37 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Tue, Nov 23, 2010 at 09:10:58AM +0100, Marko Potocnik wrote:
> > Hi,
> >
> > I am trying to configure a service (ftp for proof of concept) to run in an active / active configuration. A floating IP is used to access this service. What I am trying to achieve is that the floating IP would move to the second node if the service fails on the first node. After that the service should be restarted.
> >
> > I am able to achieve all but the restart of the service with the following configuration:
>
> You want some kind of hot-standby service? Why would you want to restart it then?

Yes, I want a hot-standby service, so that the failover time is as low as possible (just move the IP). I would like to restart it so that automatic failback is possible.

> > [configuration snipped; see the original post below]
> >
> > If I raise the migration-threshold then the service restarts n times, but the floating IP doesn't move to another node. Do you have any idea how to achieve the desired configuration?
>
> Sorry, didn't quite understand what you want to achieve. Can you please rephrase?

I would like the IP to move and the ftp service to restart in case of a service failure. But I can only get the IP to move, and the service stays stopped. If I raise the migration-threshold then the service restarts and the IP does not move until the migration-threshold is reached. I am guessing this could maybe be achieved with some location rule?

> Thanks,
> Dejan
[Pacemaker] Pacemaker on RHEL 4.8
Hi,

I'm also having problems with pacemaker / heartbeat on RHEL 4.8.

First of all, the clusterlabs repo for epel doesn't work with yum on RHEL 4.8 (yum is installed from EPEL):

[r...@lucija ~]# yum search pacemaker
Searching Packages:
Setting up repositories
epel        100% |=| 3.8 kB 00:00
clusterlabs 100% |=| 1.2 kB 00:00
Reading repository metadata in from local files
534b70e747a5d8683eaf75a00 100% |=| 653 kB 00:00
epel : ## 1946/1946
Added 1946 new packages, deleted 0 old in 4.81 seconds
primary.xml.gz 100% |=| 62 kB 00:00
clusterlab: # 122/278
Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in ?
    yummain.main(sys.argv[1:])
  File "/usr/share/yum-cli/yummain.py", line 97, in main
    result, resultmsgs = do()
  File "/usr/share/yum-cli/cli.py", line 596, in doCommands
    return self.search()
  File "/usr/share/yum-cli/cli.py", line 1216, in search
    matching = self.searchPackages(searchlist, args, callback=self.matchcallback)
  File "__init__.py", line 1061, in searchPackages
  File "/usr/share/yum-cli/cli.py", line 75, in doRepoSetup
    self.doSackSetup(thisrepo=thisrepo)
  File "__init__.py", line 260, in doSackSetup
  File "repos.py", line 287, in populateSack
  File "sqlitecache.py", line 96, in getPrimary
  File "sqlitecache.py", line 89, in _getbase
  File "sqlitecache.py", line 359, in updateSqliteCache
  File "sqlitecache.py", line 251, in addPrimary
  File "sqlitecache.py", line 197, in insertHash
  File "sqlitecache.py", line 449, in values
  File "sqlitecache.py", line 441, in __getitem__
  File "mdparser.py", line 73, in __getitem__
KeyError: 'sourcerpm'

Here is the pacemaker.repo:

[r...@lucija ~]# cat /etc/yum.repos.d/pacemaker.repo
[clusterlabs]
name=High Availability/Clustering server technologies (epel-4)
baseurl=http://www.clusterlabs.org/rpm/epel-4
type=rpm-md
gpgcheck=0
enabled=1

If I install it by hand, it says it needs python2.4, so I installed that from fedora rpms (http://www.python.org/download/releases/2.4.2/rpms/). I then copied the crm python files to python2.4 and modified the crm script to use python2.4.

Pacemaker, heartbeat and crm now run, but are buggy:

- If I edit the configuration I often get an error that the xml in the CIB can not be replaced. I got this when I tried to change the res_ftp monitor timeout to 40s:

[r...@ankaran ~]# crm configure edit
ERROR: could not replace rg_ftp
INFO: offending xml: [IPaddr2 primitive; the XML is mangled in the archive]

- If I try to stop the group rg_ftp, the node on which the group runs exits and rejoins the cluster:

[r...@lucija ~]# date
Tue Nov 23 08:33:26 CET 2010
[r...@lucija ~]# crm resource stop rg_ftp

crm_mon on ankaran:

Last updated: Tue Nov 23 08:33:03 2010
Stack: Heartbeat
Current DC: ankaran.iskratel.si (1e7ca0d8-0bbc-4a1b-a1ce-3117975c6862) - partition with quorum
Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
2 Nodes configured, unknown expected votes
1 Resources configured.

Node lucija.iskratel.si (620b4679-8f8f-4d43-9b32-b67af24df67f): standby
Online: [ ankaran.iskratel.si ]

Full list of resources:
 Resource Group: rg_ftp
     res_ip_ftp (ocf::heartbeat:IPaddr2): Started ankaran.iskratel.si
     res_ftp (lsb:vsftpd): Started ankaran.iskratel.si

Migration summary:
* Node ankaran.iskratel.si:
* Node lucija.iskratel.si:

Connection to the CIB terminated
Reconnecting...

Then after a few seconds:

Last updated: Tue Nov 23 08:33:33 2010
Stack: Heartbeat
Current DC: NONE
2 Nodes configured, unknown expected votes
1 Resources configured.

OFFLINE: [ ankaran.iskratel.si lucija.iskratel.si ]

Full list of resources:
 Resource Group: rg_ftp
     res_ip_ftp (ocf::heartbeat:IPaddr2): Stopped
     res_ftp (lsb:vsftpd): Stopped

Migration summary:

Here is the configuration on RHEL 4.8:

node $id="1e7ca0d8-0bbc-4a1b-a1ce-3117975c6862" ankaran.iskratel.si
node $id="620b4679-8f8f-4d43-9b32-b67af24df67f" lucija.iskratel.si \
        attributes standby="on"
primitive res_ftp lsb:vsftpd \
        op monitor interval="15s" timeout="30s"
primitive res_ip_ftp ocf:heartbeat:IPaddr2 \
        params ip="172.18.251.6" cidr_netmask="24" \
        op monitor interval="15s" timeout="30s"
group rg_ftp res_ip_ftp res_ftp
property $id="cib-bootstrap-options" \
        dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
Re: [Pacemaker] Active / Active pacemaker configuration advice
I'm using ftp just for testing. I want a service to run on both nodes and only the IP to move in case the service fails. I don't want to stop / start the service if a node fails.

On Tue, Nov 23, 2010 at 9:55 AM, Michael Schwartzkopff wrote:
> On Tuesday 23 November 2010 09:10:58 Marko Potocnik wrote:
> > Hi,
> >
> > I am trying to configure a service (ftp for proof of concept) to run in an active / active configuration. A floating IP is used to access this service. What I am trying to achieve is that the floating IP would move to the second node if the service fails on the first node. After that the service should be restarted.
> >
> > I am able to achieve all but the restart of the service with the following configuration:
> >
> > node $id="34d5eba9-130f-4c64-9460-4a5310ac510c" jesenice.iskratel.si \
> >         attributes standby="off"
> > node $id="5fdf23a5-61c4-4a57-80fb-c764954a5f14" olimpija.iskratel.si \
> >         attributes standby="off"
> > primitive res_ftp lsb:vsftpd \
> >         meta migration-threshold="1" \
> >         op monitor on-fail="restart" interval="10s"
> > primitive res_ip_ftp ocf:heartbeat:IPaddr2 \
> >         params ip="172.18.251.6" cidr_netmask="24" \
> >         op monitor interval="15s" timeout="30s"
> > clone c_ftp res_ftp \
> >         meta clone-max="2" clone-node-max="1" globally-unique="false" target-role="Started"
> > colocation c_ip_on_ftp inf: res_ip_ftp c_ftp
> > property $id="cib-bootstrap-options" \
> >         dc-version="1.0.8-9881a7350d6182bae9e8e557cf20a3cc5dac3ee7" \
> >         cluster-infrastructure="Heartbeat" \
> >         stonith-enabled="false" \
> >         default-resource-stickiness="200" \
> >         no-quorum-policy="ignore" \
> >         last-lrm-refresh="1290440895"
> >
> > If I raise the migration-threshold then the service restarts n times, but the floating IP doesn't move to another node. Do you have any idea how to achieve the desired configuration?
> >
> > Regards,
> > Marko
>
> Why do you clone the FTP server? Use a simple FTP server resource and make a group from the IP and FTP.
>
> --
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0163) 172 50 98
[Pacemaker] Active / Active pacemaker configuration advice
Hi,

I am trying to configure a service (ftp for proof of concept) to run in an active / active configuration. A floating IP is used to access this service. What I am trying to achieve is that the floating IP would move to the second node if the service fails on the first node. After that the service should be restarted.

I am able to achieve all but the restart of the service with the following configuration:

node $id="34d5eba9-130f-4c64-9460-4a5310ac510c" jesenice.iskratel.si \
        attributes standby="off"
node $id="5fdf23a5-61c4-4a57-80fb-c764954a5f14" olimpija.iskratel.si \
        attributes standby="off"
primitive res_ftp lsb:vsftpd \
        meta migration-threshold="1" \
        op monitor on-fail="restart" interval="10s"
primitive res_ip_ftp ocf:heartbeat:IPaddr2 \
        params ip="172.18.251.6" cidr_netmask="24" \
        op monitor interval="15s" timeout="30s"
clone c_ftp res_ftp \
        meta clone-max="2" clone-node-max="1" globally-unique="false" target-role="Started"
colocation c_ip_on_ftp inf: res_ip_ftp c_ftp
property $id="cib-bootstrap-options" \
        dc-version="1.0.8-9881a7350d6182bae9e8e557cf20a3cc5dac3ee7" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
        default-resource-stickiness="200" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1290440895"

If I raise the migration-threshold then the service restarts n times, but the floating IP doesn't move to another node. Do you have any idea how to achieve the desired configuration?

Regards,
Marko