[ClusterLabs] Antw: Re: Q: Resource balancing operation

2016-04-20 Thread Ulrich Windl
>>> Ken Gaillot  wrote on 20.04.2016 at 16:44 in message
<571795e5.4090...@redhat.com>:
> On 04/20/2016 01:17 AM, Ulrich Windl wrote:
>> Hi!
>> 
>> I'm wondering: If you boot a node in a cluster, most resources will go to
>> another node (if possible). Due to the configured stickiness, those
>> resources will stay there.
>> So I'm wondering whether or how I could cause a rebalance of resources on
>> the cluster. I must admit that I don't understand the details of stickiness
>> in relation to other parameters. In my understanding, stickiness should be
>> tied dynamically to a percentage of utilization, so that a resource running
>> on a node that is "almost full" would dynamically lower its stickiness to
>> allow resource migration.
>>
>> So if you were going to implement a manual resource rebalance operation,
>> could you dynamically lower the stickiness for each resource (by some
>> amount or some factor), wait to see if something happens, and then repeat
>> the process until resources look balanced? "Looking balanced" should be no
>> worse than if all resources were started when all cluster nodes are up.
>>
>> Spontaneous pros and cons for "resource rebalancing"?
>>
>> Regards,
>> Ulrich
> 
> Pacemaker gives you a few levers to pull. Stickiness and utilization
> attributes (with a placement strategy) are the main ones.
> 
> Normally, pacemaker *will* continually rebalance according to what nodes
> are available. Stickiness tells the cluster not to do that.
> 
> Whether you should use stickiness (and how much) depends mainly on how
> significant the interruption is when a service is moved. For

We agree on this: What I was asking for was a "manually triggered automatic
rebalance" that would temporarily override the stickiness parameters set for
the resources. "Manually" means people will blame me, not the cluster (at
least for starting the operation ;-)), if something bad happens.
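
A coarse version of that idea (all-at-once rather than gradual) can already be
scripted with existing tools. A sketch, assuming the pcs syntax of that era and
an example stickiness value of 1000; untested:

```shell
# Temporarily drop the default stickiness so the policy engine may move
# resources back toward balanced placement.
pcs resource defaults resource-stickiness=0

# Watch the cluster rebalance.
crm_mon -1

# Restore the normal default once placement looks balanced.
pcs resource defaults resource-stickiness=1000
```

Note that per-resource stickiness values still override the default, so a full
implementation of the gradual lowering described above would need to touch
those as well.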

> a large database supporting a high-traffic website, stopping and
> starting can take a long time and cost a lot of business -- so maybe you
> want an infinite stickiness in that case, and only rebalance manually
> during a scheduled window. For a small VM that can live-migrate quickly
> and doesn't affect any of your customer-facing services, maybe you don't
> mind setting a small or zero stickiness.
> 
> You can also use rules to make the process intelligent. For example, for
> a server that provides office services, you could set a rule that sets
> infinite stickiness during business hours, and small or zero stickiness
> otherwise. That way, you'd get no disruptions when people are actually
> using the service during the day, and at night, it would automatically
> rebalance.

Could you give a concrete example for this?

> 
> Normally, pacemaker's idea of "balancing" is to simply distribute the
> number of resources on each node as equally as possible. Utilization
> attributes and placement strategies let you add more intelligence. For
> example, you can define the number of cores per node or the amount of
> RAM per node, along with how much each resource is expected to use, and
> let pacemaker balance by that instead of just counting the number of
> resources.

I knew that; I was specifically talking about the imbalance that occurs after
a node was down for service: if capacity allows, the remaining nodes will run
all the services of the downed node, and those services will stay there even
when the node is up again.

Usually I want to avoid moving the resources, e.g. when one resource goes
down (or up), causing an imbalance, which in turn causes other resources to
be moved; specifically if you know the resource will be back up (down) soon.

I guess it's not possible to delay the rebalancing effect.

Regards,
Ulrich





___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Fwd: FW: heartbeat can monitor virtual IP alive or not .

2016-04-20 Thread Digimer
On 20/04/16 01:18 PM, fu ml wrote:
> Dear Sir,
> 
> I am sorry to disturb you.
> 
> I do not understand a point about the Heartbeat ha.cf
> configuration. Could you explain it to me in more detail, please? Thank you
> very much.

The heartbeat project has been deprecated for a long time and there are
no plans to develop it further. You should use corosync and pacemaker.

https://alteeve.ca/w/History_of_HA_Clustering

> We have two nodes in our heartbeat cluster (2 virtual IPs):
> 
> Ha.cf:

Please post the actual text (and in general, text-emails are better for
mailing lists).

> Could you explain whether *heartbeat can monitor a virtual IP being alive
> or not*, please? Thanks a lot.

Pacemaker can do this just fine. It's one of the initial examples in
"Clusters from Scratch":

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/index.html
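
For illustration, the kind of monitored floating-IP resource that guide sets
up looks roughly like this (a sketch with example values reused from this
thread, not a tested configuration):

```shell
# A floating IP managed by pacemaker; the monitor operation is what
# checks whether the virtual IP is still alive, and on failure the
# cluster moves the IP to the other node.
pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
    ip=10.88.222.53 cidr_netmask=24 \
    op monitor interval=30s
```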

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



[ClusterLabs] Fwd: FW: heartbeat can monitor virtual IP alive or not .

2016-04-20 Thread fu ml
Dear Sir,

I am sorry to disturb you.

I do not understand a point about the Heartbeat ha.cf configuration. Could
you explain it to me in more detail, please? Thank you very much.

We have two nodes in our heartbeat cluster (2 virtual IPs):

Ha.cf:

The question is that we want heartbeat to monitor the virtual IP:
if this virtual IP on Linux01 cannot be pinged or does not respond,
we want Linux02 to take over this service IP automatically, regardless of
whether Linux01's admin IP is alive or not.

We tried modifying ha.cf as follows (e.g. on Linux01):

1) ucast eth0 10.88.222.53

2) ucast eth0:0 10.88.222.53

3) ucast eth0 10.88.222.51 & ucast eth0 10.88.222.53

4) ucast eth0 10.88.222.51 & ucast eth0:0 10.88.222.53

We tested all four options but all failed.
Could you explain whether *heartbeat can monitor a virtual IP being alive
or not*, please? Thanks a lot.





Cheers,


Re: [ClusterLabs] HA meetup at OpenStack Summit

2016-04-20 Thread Ken Gaillot
Lunch on Wednesday it is!

Anyone planning to attend next week's OpenStack Summit in Austin is
cordially invited to an informal ClusterLabs meetup over lunch
(12:30pm-1:50pm by the summit schedule) Wednesday, April 27.

We'll meet at Expo Hall 5, the lunch room adjacent to the Marketplace
(vendor booths). I'll put a ClusterLabs sign on the table to help people
find it.

On 04/14/2016 09:53 AM, Adam Spiers wrote:
> Ken Gaillot  wrote:
>> Hi everybody,
>>
>> The upcoming OpenStack Summit is April 25-29 in Austin, Texas (US). Some
>> regular ClusterLabs contributors are going, so I was wondering if anyone
>> would like to do an informal meetup sometime during the summit.
>>
>> It looks like the best time would be that Wednesday, either lunch (at
>> the venue) or dinner (offsite). It might also be possible to reserve a
>> small (10-person) meeting room, or just meet informally in the expo hall.
>>
>> Anyone interested? Preferences/conflicts?
> 
> Yes, I'd be very interested!  I think lunch on Wednesday should work
> for me; dinner might too.




Re: [ClusterLabs] pacemaker apache and umask on CentOS 7

2016-04-20 Thread Ken Gaillot
On 04/20/2016 12:20 PM, Klaus Wenninger wrote:
> On 04/20/2016 05:35 PM, fatcha...@gmx.de wrote:
>>
>>> Sent: Wednesday, 20 April 2016 at 16:31
>>> From: "Klaus Wenninger" 
>>> To: users@clusterlabs.org
>>> Subject: Re: [ClusterLabs] pacemaker apache and umask on CentOS 7
>>>
>>> On 04/20/2016 04:11 PM, fatcha...@gmx.de wrote:
>>>> Hi,
>>>>
>>>> I'm running a 2-node apache webcluster on a fully patched CentOS 7
>>>> (pacemaker-1.1.13-10.el7_2.2.x86_64 pcs-0.9.143-15.el7.x86_64).
>>>> Some files generated by apache are created with a umask of 137, but I
>>>> need these files created with a umask of 117.
>>>> To change this I first tried to add "umask 117" to /etc/sysconfig/httpd
>>>> and rebooted the system. This had no effect.
>>>> So I found out (after some research) that this does not work under
>>>> CentOS 7 and that it has to be changed via systemd.
>>>> So I created a directory "/etc/systemd/system/httpd.service.d" and put
>>>> there a "umask.conf" file with this content:
>>>> [Service]
>>>> UMask=0117
>>>>
>>>> Again I rebooted the system, but no effect.
>>>> Is pacemaker really starting apache via systemd? And how can
>>>> I solve the problem?
>>> Didn't check with CentOS 7, but on RHEL 7 there is a
>>> /usr/lib/ocf/resource.d/heartbeat/apache.
>>> So whether systemd is used or whether it is done by the ocf RA depends
>>> on how you defined the resource starting apache.
>> My configuration is:
>> Resource: apache (class=ocf provider=heartbeat type=apache)
>>   Attributes: configfile=/etc/httpd/conf/httpd.conf 
>> statusurl=http://127.0.0.1:8089/server-status
>>   Operations: start interval=0s timeout=40s (apache-start-timeout-40s)
>>   stop interval=0s timeout=60s (apache-stop-timeout-60s)
>>   monitor interval=1min (apache-monitor-interval-1min)
>>
>> So I guess it is ocf. But what would be the right way to do it? I lack a
>> bit of understanding of this /usr/lib/ocf/resource.d/heartbeat/apache file.
>>
> There are the ocf resource agents (if there is none, you can always create
> one for your service), which usually give you a little more control over
> the service from the CIB (you can set a couple of variables, like the
> pointer to the config file in this example).
> And of course you can always create resources referring to the native
> services of your distro (systemd units in this case).
>>
>>
>>
>>>> Any suggestions are welcome

If you add envfiles="/etc/sysconfig/httpd" to your apache resource, it
should work.

>>>> Kind regards
>>>>
>>>> fatcharly



Re: [ClusterLabs] pacemaker apache and umask on CentOS 7

2016-04-20 Thread Klaus Wenninger
On 04/20/2016 05:35 PM, fatcha...@gmx.de wrote:
>
>> Sent: Wednesday, 20 April 2016 at 16:31
>> From: "Klaus Wenninger" 
>> To: users@clusterlabs.org
>> Subject: Re: [ClusterLabs] pacemaker apache and umask on CentOS 7
>>
>> On 04/20/2016 04:11 PM, fatcha...@gmx.de wrote:
>>> Hi,
>>>
>>> I'm running a 2-node apache webcluster on a fully patched CentOS 7
>>> (pacemaker-1.1.13-10.el7_2.2.x86_64 pcs-0.9.143-15.el7.x86_64).
>>> Some files generated by apache are created with a umask of 137, but I
>>> need these files created with a umask of 117.
>>> To change this I first tried to add "umask 117" to /etc/sysconfig/httpd
>>> and rebooted the system. This had no effect.
>>> So I found out (after some research) that this does not work under
>>> CentOS 7 and that it has to be changed via systemd.
>>> So I created a directory "/etc/systemd/system/httpd.service.d" and put
>>> there a "umask.conf" file with this content:
>>> [Service]
>>> UMask=0117
>>>
>>> Again I rebooted the system, but no effect.
>>> Is pacemaker really starting apache via systemd? And how can
>>> I solve the problem?
>> Didn't check with CentOS 7, but on RHEL 7 there is a
>> /usr/lib/ocf/resource.d/heartbeat/apache.
>> So whether systemd is used or whether it is done by the ocf RA depends
>> on how you defined the resource starting apache.
> My configuration is:
> Resource: apache (class=ocf provider=heartbeat type=apache)
>   Attributes: configfile=/etc/httpd/conf/httpd.conf 
> statusurl=http://127.0.0.1:8089/server-status
>   Operations: start interval=0s timeout=40s (apache-start-timeout-40s)
>   stop interval=0s timeout=60s (apache-stop-timeout-60s)
>   monitor interval=1min (apache-monitor-interval-1min)
>
> So I guess it is ocf. But what would be the right way to do it? I lack a
> bit of understanding of this /usr/lib/ocf/resource.d/heartbeat/apache file.
>
There are the ocf resource agents (if there is none, you can always create
one for your service), which usually give you a little more control over the
service from the CIB (you can set a couple of variables, like the pointer to
the config file in this example).
And of course you can always create resources referring to the native
services of your distro (systemd units in this case).
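
As a sketch of the difference (hypothetical resource names, untested):

```shell
# Via the ocf resource agent: the RA script starts httpd itself and
# exposes parameters such as configfile and envfiles.
pcs resource create webserver ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf

# Via the native systemd unit: pacemaker delegates start/stop to
# systemd, so systemd drop-ins (e.g. UMask=) would take effect.
pcs resource create webserver-sd systemd:httpd
```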
>
>
>
>>> Any suggestions are welcome
>>>
>>> Kind regards
>>>
>>> fatcharly
>>>  
>>>
>>>


Re: [ClusterLabs] pacemaker apache and umask on CentOS 7

2016-04-20 Thread fatcharly


> Sent: Wednesday, 20 April 2016 at 16:31
> From: "Klaus Wenninger" 
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] pacemaker apache and umask on CentOS 7
>
> On 04/20/2016 04:11 PM, fatcha...@gmx.de wrote:
> > Hi,
> >
> > I'm running a 2-node apache webcluster on a fully patched CentOS 7
> > (pacemaker-1.1.13-10.el7_2.2.x86_64 pcs-0.9.143-15.el7.x86_64).
> > Some files generated by apache are created with a umask of 137, but I
> > need these files created with a umask of 117.
> > To change this I first tried to add "umask 117" to /etc/sysconfig/httpd
> > and rebooted the system. This had no effect.
> > So I found out (after some research) that this does not work under
> > CentOS 7 and that it has to be changed via systemd.
> > So I created a directory "/etc/systemd/system/httpd.service.d" and put
> > there a "umask.conf" file with this content:
> > [Service]
> > UMask=0117
> >
> > Again I rebooted the system, but no effect.
> > Is pacemaker really starting apache via systemd? And how can
> > I solve the problem?
> Didn't check with CentOS 7, but on RHEL 7 there is a
> /usr/lib/ocf/resource.d/heartbeat/apache.
> So whether systemd is used or whether it is done by the ocf RA depends
> on how you defined the resource starting apache.
My configuration is:
Resource: apache (class=ocf provider=heartbeat type=apache)
  Attributes: configfile=/etc/httpd/conf/httpd.conf 
statusurl=http://127.0.0.1:8089/server-status
  Operations: start interval=0s timeout=40s (apache-start-timeout-40s)
  stop interval=0s timeout=60s (apache-stop-timeout-60s)
  monitor interval=1min (apache-monitor-interval-1min)

So I guess it is ocf. But what would be the right way to do it? I lack a bit
of understanding of this /usr/lib/ocf/resource.d/heartbeat/apache file.





> >
> > Any suggestions are welcome
> >
> > Kind regards
> >
> > fatcharly
> >  
> >
> >


Re: [ClusterLabs] Q: Resource balancing operation

2016-04-20 Thread Ken Gaillot
On 04/20/2016 01:17 AM, Ulrich Windl wrote:
> Hi!
> 
> I'm wondering: If you boot a node in a cluster, most resources will go to
> another node (if possible). Due to the configured stickiness, those
> resources will stay there.
> So I'm wondering whether or how I could cause a rebalance of resources on
> the cluster. I must admit that I don't understand the details of stickiness
> in relation to other parameters. In my understanding, stickiness should be
> tied dynamically to a percentage of utilization, so that a resource running
> on a node that is "almost full" would dynamically lower its stickiness to
> allow resource migration.
> 
> So if you were going to implement a manual resource rebalance operation,
> could you dynamically lower the stickiness for each resource (by some
> amount or some factor), wait to see if something happens, and then repeat
> the process until resources look balanced? "Looking balanced" should be no
> worse than if all resources were started when all cluster nodes are up.
> 
> Spontaneous pros and cons for "resource rebalancing"?
> 
> Regards,
> Ulrich

Pacemaker gives you a few levers to pull. Stickiness and utilization
attributes (with a placement strategy) are the main ones.

Normally, pacemaker *will* continually rebalance according to what nodes
are available. Stickiness tells the cluster not to do that.

Whether you should use stickiness (and how much) depends mainly on how
significant the interruption is when a service is moved. For
a large database supporting a high-traffic website, stopping and
starting can take a long time and cost a lot of business -- so maybe you
want an infinite stickiness in that case, and only rebalance manually
during a scheduled window. For a small VM that can live-migrate quickly
and doesn't affect any of your customer-facing services, maybe you don't
mind setting a small or zero stickiness.

You can also use rules to make the process intelligent. For example, for
a server that provides office services, you could set a rule that sets
infinite stickiness during business hours, and small or zero stickiness
otherwise. That way, you'd get no disruptions when people are actually
using the service during the day, and at night, it would automatically
rebalance.
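
Such a time-based rule can be expressed in the CIB roughly like this (ids and
hours are examples, following the pattern documented in "Pacemaker Explained";
untested): infinite stickiness as the resource default during business hours,
zero otherwise.

```xml
<rsc_defaults>
  <!-- Mon-Fri, 09:00-16:59: higher-score set wins, stickiness INFINITY -->
  <meta_attributes id="core-hours" score="2">
    <rule id="core-hours-rule" score="0">
      <date_expression id="core-hours-date" operation="date_spec">
        <date_spec id="core-hours-spec" hours="9-16" weekdays="1-5"/>
      </date_expression>
    </rule>
    <nvpair id="core-hours-stickiness" name="resource-stickiness" value="INFINITY"/>
  </meta_attributes>
  <!-- Otherwise fall back to zero stickiness, allowing rebalancing -->
  <meta_attributes id="after-hours" score="1">
    <nvpair id="after-hours-stickiness" name="resource-stickiness" value="0"/>
  </meta_attributes>
</rsc_defaults>
```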

Normally, pacemaker's idea of "balancing" is to simply distribute the
number of resources on each node as equally as possible. Utilization
attributes and placement strategies let you add more intelligence. For
example, you can define the number of cores per node or the amount of
RAM per node, along with how much each resource is expected to use, and
let pacemaker balance by that instead of just counting the number of
resources.
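
A sketch of that setup (example numbers and names; the utilization
subcommands depend on the pcs version, so treat this as illustrative):

```shell
# Place by capacity instead of by resource count.
pcs property set placement-strategy=balanced

# Declare what each node provides...
pcs node utilization node1 cpu=8 memory=16384
pcs node utilization node2 cpu=8 memory=16384

# ...and what each resource is expected to consume; pacemaker will not
# overcommit a node and will spread load according to these numbers.
pcs resource utilization bigdb cpu=4 memory=8192
```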



Re: [ClusterLabs] pacemaker apache and umask on CentOS 7

2016-04-20 Thread Klaus Wenninger
On 04/20/2016 04:11 PM, fatcha...@gmx.de wrote:
> Hi,
>
> I'm running a 2-node apache webcluster on a fully patched CentOS 7
> (pacemaker-1.1.13-10.el7_2.2.x86_64 pcs-0.9.143-15.el7.x86_64).
> Some files generated by apache are created with a umask of 137, but I need
> these files created with a umask of 117.
> To change this I first tried to add "umask 117" to /etc/sysconfig/httpd
> and rebooted the system. This had no effect.
> So I found out (after some research) that this does not work under CentOS
> 7 and that it has to be changed via systemd.
> So I created a directory "/etc/systemd/system/httpd.service.d" and put
> there a "umask.conf" file with this content:
> [Service]
> UMask=0117
>
> Again I rebooted the system, but no effect.
> Is pacemaker really starting apache via systemd? And how can I
> solve the problem?
Didn't check with CentOS 7, but on RHEL 7 there is a
/usr/lib/ocf/resource.d/heartbeat/apache.
So whether systemd is used or whether it is done by the ocf RA depends on
how you defined the resource starting apache.
>
> Any suggestions are welcome
>
> Kind regards
>
> fatcharly
>  
>
>


Re: [ClusterLabs] pacemaker apache and umask on CentOS 7

2016-04-20 Thread Ken Gaillot
On 04/20/2016 09:11 AM, fatcha...@gmx.de wrote:
> Hi,
> 
> I'm running a 2-node apache webcluster on a fully patched CentOS 7
> (pacemaker-1.1.13-10.el7_2.2.x86_64 pcs-0.9.143-15.el7.x86_64).
> Some files generated by apache are created with a umask of 137, but I need
> these files created with a umask of 117.
> To change this I first tried to add "umask 117" to /etc/sysconfig/httpd
> and rebooted the system. This had no effect.
> So I found out (after some research) that this does not work under CentOS
> 7 and that it has to be changed via systemd.
> So I created a directory "/etc/systemd/system/httpd.service.d" and put
> there a "umask.conf" file with this content:
> [Service]
> UMask=0117
> 
> Again I rebooted the system, but no effect.
> Is pacemaker really starting apache via systemd? And how can I
> solve the problem?
> 
> Any suggestions are welcome
> 
> Kind regards
> 
> fatcharly

It depends on the resource agent you're using for apache.

If you were using systemd:httpd, I'd expect /etc/sysconfig/httpd or the
httpd.service.d override to work.

Since they don't, I'll guess you're using ocf:heartbeat:apache. In that
case, the file specified by the resource's envfiles parameter (which
defaults to /etc/apache2/envvars) is the right spot. So, you could
configure envfiles=/etc/sysconfig/httpd, or you could keep it default
and add your umask to /etc/apache2/envvars.
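
A sketch of that change, using the resource name from this thread (untested):

```shell
# Point the ocf:heartbeat:apache RA at the RHEL/CentOS environment file;
# the RA sources it, so a umask set there applies to the started httpd.
pcs resource update apache envfiles=/etc/sysconfig/httpd

# Then in /etc/sysconfig/httpd add the line:
#   umask 117
```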



[ClusterLabs] pacemaker apache and umask on CentOS 7

2016-04-20 Thread fatcharly
Hi,

I'm running a 2-node apache webcluster on a fully patched CentOS 7
(pacemaker-1.1.13-10.el7_2.2.x86_64 pcs-0.9.143-15.el7.x86_64).
Some files generated by apache are created with a umask of 137, but I need
these files created with a umask of 117.
To change this I first tried to add "umask 117" to /etc/sysconfig/httpd and
rebooted the system. This had no effect.
So I found out (after some research) that this does not work under CentOS 7
and that it has to be changed via systemd.
So I created a directory "/etc/systemd/system/httpd.service.d" and put there
a "umask.conf" file with this content:
[Service]
UMask=0117

Again I rebooted the system, but no effect.
Is pacemaker really starting apache via systemd? And how can I
solve the problem?

Any suggestions are welcome

Kind regards

fatcharly


Re: [ClusterLabs] Moving Related Servers

2016-04-20 Thread Klaus Wenninger
On 04/20/2016 04:01 PM, Ken Gaillot wrote:
> On 04/20/2016 12:44 AM, ‪H Yavari‬ ‪ wrote:
>> You got my situation right. But I couldn't find any method to do this.
>>
>> Should I create one cluster with 4 nodes, or 2 clusters with 2 nodes
>> each? How do I restrict the cluster nodes to each other?
> Your last questions made me think of multi-site clustering using booth.
> I think this might be the best solution for you.
>
> You can configure two independent pacemaker clusters of 2 nodes each,
> then use booth to ensure that one cluster has the resources at any time.
> See:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617279413776
>
> This is usually done with clusters at physically separate locations, but
> there's no problem with using it with two clusters in one location.
>
> Alternatively, going along more traditional lines such as what Klaus and
> I have mentioned, you could use rules and node attributes to keep the
> resources where desired. You could write a custom resource agent that
> would set a custom node attribute for the matching node (the start
> action should set the attribute to 1, and the stop action should set the
> attribute to 0;
Thought of that as well, but wasn't sure if pengine would get this kind
of dependency and then switch the resources running on App2
to master if the resources on App3 failed...
With a real dependency I would rather guess that pengine would
react as preferred in this case.
>  if the resource was on App 1, you'd set the attribute
> for App 3, and if the resource was on App 2, you'd set the attribute for
> App 4). Colocate that resource with your floating IP, and use a rule to
> locate service X where the custom node attribute is 1. See:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#ap-ocf
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617279376656
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617356537136
>
>> 
>> *From:* Klaus Wenninger 
>> *To:* users@clusterlabs.org
>> *Sent:* Wednesday, 20 April 2016, 9:56:05
>> *Subject:* Re: [ClusterLabs] Moving Related Servers
>>
>> On 04/19/2016 04:32 PM, Ken Gaillot wrote:
>>> On 04/18/2016 10:05 PM, ‪H Yavari‬ ‪ wrote:
>>>> Hi,
>>>>
>>>> This is the server map:
>>>>
>>>> App 3 -> App 1 (Active)
>>>>
>>>> App 4 -> App 2 (Standby)
>>>>
>>>> Now App1 and App2 are in a cluster with IP failover.
>>>>
>>>> I need that when the IP failover runs and App2 becomes the active node,
>>>> service "X" on server App3 is stopped and App4 becomes the active node.
>>>> In other words, App1 works only with App3 and App2 works only with App4.
>>>>
>>>> I have a web application on App1 and some services on App3 (the same
>>>> holds for App2 and App4)
>>> This is a difficult situation to model. In particular, you could only
>>> have a dependency one way -- so if we could get App 3 to fail over if
>>> App 1 fails, we couldn't model the other direction (App 1 failing over
>>> if App 3 fails). If each is dependent on the other, there's no way to
>>> start one first.
>>>
>>> Is there a technical reason App 3 can work only with App 1?
>>>
>>> Is it possible for service "X" to stay running on both App 3 and App 4
>>> all the time? If so, this becomes easier.
>> Just another try to understand what you are aiming for:
>>
>> You have a 2-node-cluster at the moment consisting of the nodes
>> App1 & App2.
>> You configured something like a master/slave-group to realize
>> an active/standby scenario.
>>
>> To get the servers App3 & App4 into the game we would make
>> them additional pacemaker-nodes (App3 & App4).
>> You now have a service X that could be running either on App3 or
>> App4 (which is easy by e.g. making it dependent on a node attribute)
>> and it should be running on App3 when the service-group is active
>> (master in pacemaker terms) on App1 and on App4 when the
>> service-group is active on App2.
>>
>> The standard thing would be to collocate a service with the master-role
>> (see all the DRBD examples for instance).
>> We would now need a locate-x when master is located-y rule instead
>> of collocation.
>> I don't know any way to directly specify this.
>> One - ugly though - way around I could imagine would be:
>>
>> - locate service X1 on App3
>> - locate service X2 on App4
>> - dummy service Y1 is located App1 and collocated with master-role
>> - dummy service Y2 is located App2 and collocated with master-role
>> - service X1 depends on Y1
>> - service X2 depends on Y2
>>
>> If that somehow reflects your situation the key question now would
>> probably be if pengine would make the group on App2 master
>> if service X1 fails on App3. I would guess yes but I'm not sure.
>>
>> Regards,
>> Klaus
>>
>>>> Sorry for the heavy description.

>>>

Re: [ClusterLabs] Moving Related Servers

2016-04-20 Thread Ken Gaillot
On 04/20/2016 12:44 AM, ‪H Yavari‬ ‪ wrote:
> You got my situation right. But I couldn't find any method to do this.
> 
> Should I create one cluster with 4 nodes, or 2 clusters with 2 nodes each?
> How do I restrict the cluster nodes to each other?

Your last questions made me think of multi-site clustering using booth.
I think this might be the best solution for you.

You can configure two independent pacemaker clusters of 2 nodes each,
then use booth to ensure that one cluster has the resources at any time.
See:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617279413776

This is usually done with clusters at physically separate locations, but
there's no problem with using it with two clusters in one location.

Alternatively, going along more traditional lines such as what Klaus and
I have mentioned, you could use rules and node attributes to keep the
resources where desired. You could write a custom resource agent that
would set a custom node attribute for the matching node (the start
action should set the attribute to 1, and the stop action should set the
attribute to 0; if the resource was on App 1, you'd set the attribute
for App 3, and if the resource was on App 2, you'd set the attribute for
App 4). Colocate that resource with your floating IP, and use a rule to
locate service X where the custom node attribute is 1. See:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#ap-ocf

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617279376656

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617356537136
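
The constraint side of that approach might look like this (hypothetical
resource and attribute names; a sketch, untested):

```shell
# Run the attribute-setting dummy wherever the floating IP is active;
# its start action sets the node attribute run-x=1, stop sets run-x=0.
pcs constraint colocation add attr-setter with floating-ip INFINITY

# Place service X on a node whose custom attribute run-x is 1
# (a positive preference; a -INFINITY rule on the opposite condition
# would make it a hard requirement).
pcs constraint location service-x rule score=INFINITY run-x eq 1
```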

> 
> 
> *From:* Klaus Wenninger 
> *To:* users@clusterlabs.org
> *Sent:* Wednesday, 20 April 2016, 9:56:05
> *Subject:* Re: [ClusterLabs] Moving Related Servers
> 
> On 04/19/2016 04:32 PM, Ken Gaillot wrote:
>> On 04/18/2016 10:05 PM, ‪H Yavari‬ ‪ wrote:
>>> Hi,
>>>
>>> This is the server map:
>>>
>>> App 3 -> App 1 (Active)
>>>
>>> App 4 -> App 2 (Standby)
>>>
>>> Now App1 and App2 are in a cluster with IP failover.
>>>
>>> I need that when the IP failover runs and App2 becomes the active node,
>>> service "X" on server App3 is stopped and App4 becomes the active node.
>>> In other words, App1 works only with App3 and App2 works only with App4.
>>>
>>> I have a web application on App1 and some services on App3 (the same
>>> holds for App2 and App4)
>> This is a difficult situation to model. In particular, you could only
>> have a dependency one way -- so if we could get App 3 to fail over if
>> App 1 fails, we couldn't model the other direction (App 1 failing over
>> if App 3 fails). If each is dependent on the other, there's no way to
>> start one first.
>>
>> Is there a technical reason App 3 can work only with App 1?
>>
>> Is it possible for service "X" to stay running on both App 3 and App 4
>> all the time? If so, this becomes easier.
> Just another try to understand what you are aiming for:
> 
> You have a 2-node-cluster at the moment consisting of the nodes
> App1 & App2.
> You configured something like a master/slave-group to realize
> an active/standby scenario.
> 
> To get the servers App3 & App4 into the game we would make
> them additional pacemaker-nodes (App3 & App4).
> You now have a service X that could be running either on App3 or
> App4 (which is easy by e.g. making it dependent on a node attribute)
> and it should be running on App3 when the service-group is active
> (master in pacemaker terms) on App1 and on App4 when the
> service-group is active on App2.
> 
> The standard thing would be to collocate a service with the master-role
> (see all the DRBD examples for instance).
> We would now need a locate-x when master is located-y rule instead
> of collocation.
> I don't know any way to directly specify this.
> One - ugly though - way around I could imagine would be:
> 
> - locate service X1 on App3
> - locate service X2 on App4
> - dummy service Y1 is located App1 and collocated with master-role
> - dummy service Y2 is located App2 and collocated with master-role
> - service X1 depends on Y1
> - service X2 depends on Y2
> 
> If that somehow reflects your situation the key question now would
> probably be if pengine would make the group on App2 master
> if service X1 fails on App3. I would guess yes but I'm not sure.
> 
> Regards,
> Klaus
> 
>>> Sorry for heavy description.
>>>
>>>
>>> 
>>> *From:* Ken Gaillot 
>>> *To:* users@clusterlabs.org
>>> On 04/18/2016 02:34 AM, ‪H Yavari‬ ‪ wrote:
>>>
>>>> Hi,
>>>>
>>>> I have 4 CentOS servers (App1, App2, App3 and App4). I created a cluster
>>>> for App1 and App2 with a floating IP and it works well.
>>>> In our infrastructure App1 wor