Re: [ClusterLabs] Reply: Reply: How to configure to make each slave resource have one VIP

2018-03-05 Thread Ken Gaillot
On Sun, 2018-02-25 at 02:24 +0000, 范国腾 wrote:
> Hello,
> 
> If all of the slave nodes crash, none of the slave VIPs can work.
> 
> Do we have any way to make all of the slave VIPs bind to the master
> node if there are no slave nodes in the system?
> 
> That way, the client will not notice that the system has a problem.
> 
> Thanks

Hi,

If you colocate all the slave IPs "with pgsql-ha" instead of "with
slave pgsql-ha", then they can run on either master or slave nodes.

Including the master IP in the anti-colocation set will keep them apart
normally.
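
For example, a rough sketch using the resource names from this thread
(untested; the existing "with slave pgsql-ha" constraints would have to
be removed first, and -1000 is an arbitrary finite score so the IPs only
prefer to stay apart):

  pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha INFINITY
  pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha INFINITY
  pcs constraint colocation set pgsql-master-ip pgsql-slave-ip1 \
    pgsql-slave-ip2 setoptions score=-1000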


Re: [ClusterLabs] Reply: Reply: How to configure to make each slave resource have one VIP

2018-02-24 Thread Andrei Borzenkov
On 25.02.2018 05:24, 范国腾 wrote:
> Hello,
> 
> If all of the slave nodes crash, none of the slave VIPs can work.
> 
> Do we have any way to make all of the slave VIPs bind to the master node if
> there are no slave nodes in the system?
> 
> That way, the client will not notice that the system has a problem.
> 

If users do not care whether they connect to master or slave, I'd say
setting up a single cluster IP would be much easier.

Otherwise, using advisory placement (a score not equal to (-)INFINITY)
should allow Pacemaker to place the resources together if there is no
other way.
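
For example, the single-IP approach could look like this (a sketch; the
resource name and address are made up for illustration):

  pcs resource create cluster-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.122.200 cidr_netmask=24 op monitor interval=10s
  pcs constraint colocation add cluster-vip with pgsql-ha INFINITY

And for advisory placement, the same colocation set with a finite score:

  pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 \
    setoptions score=-1000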



[ClusterLabs] Reply: Reply: How to configure to make each slave resource have one VIP

2018-02-24 Thread 范国腾
Hello,

If all of the slave nodes crash, none of the slave VIPs can work.

Do we have any way to make all of the slave VIPs bind to the master node if
there are no slave nodes in the system?

That way, the client will not notice that the system has a problem.

Thanks

-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
Sent: February 23, 2018 17:37
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Reply: How to configure to make each slave resource
have one VIP

On 23.2.2018 at 10:16, 范国腾 wrote:
> Tomas,
> 
> Thank you very much. I made the change according to your suggestion and it
> works.
> 
> One question: if there are many nodes (e.g. 10 slave nodes in total), I need
> to run "pcs constraint colocation add pgsql-slave-ipx with
> pgsql-slave-ipy -INFINITY" many times. Is there a simpler command to do this?

I think a colocation set does the trick:

pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY

You may specify as many resources as you need in this command.

Tomas

> 
> Master/Slave Set: pgsql-ha [pgsqld]
>   Masters: [ node1 ]
>   Slaves: [ node2 node3 ]
>   pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
>   pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node3
>   pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2
> 
> Thanks
> Steven
> 
-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
Sent: February 23, 2018 17:02
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] How to configure to make each slave resource have
one VIP
> 
On 23.2.2018 at 08:17, 范国腾 wrote:
>> Hi,
>>
>> Our system manages the database (one master and multiple slaves). We
>> initially used one VIP for all of the slave resources.
>>
>> Now I want to change the configuration so that each slave resource has a
>> separate VIP. For example, I have 3 slave nodes and my VIP group has
>> 2 VIPs; the 2 VIPs are bound to node1 and node2 now; when node2 fails,
>> its VIP should move to node3.
>>
>>
>> I used the following commands to add the VIPs:
>>
>> pcs resource group add pgsql-slave-group pgsql-slave-ip1
>> pgsql-slave-ip2
>>
>> pcs constraint colocation add pgsql-slave-group with slave
>> pgsql-ha INFINITY
>>
>> But now the two VIPs are on the same node:
>>
>> Master/Slave Set: pgsql-ha [pgsqld]
>>  Masters: [ node1 ]
>>  Slaves: [ node2 node3 ]
>> pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
>> Resource Group: pgsql-slave-group
>>  pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node2
>>  pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2
>>
>> Could anyone tell me how to configure this so that each slave node has its own VIP?
> 
> Resources in a group always run on the same node. You want the IP resources
> to run on different nodes, so you cannot put them into a group.
> 
> This will take the resources out of the group:
> pcs resource ungroup pgsql-slave-group
> 
> Then you can set colocation constraints for them:
> pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
> pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> 
> You may also need to tell Pacemaker not to put both IPs on the same node:
> pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY
> 
> 
> Regards,
> Tomas
> 
>>
>> Thanks

Re: [ClusterLabs] Reply: Reply: How to configure to make each slave resource have one VIP

2018-02-23 Thread Ken Gaillot
On Fri, 2018-02-23 at 12:45 +0000, 范国腾 wrote:
> Thank you very much, Tomas. 
> This resolves my problem.
> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> Sent: February 23, 2018 17:37
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] Reply: How to configure to make each slave
> resource have one VIP
> 
> On 23.2.2018 at 10:16, 范国腾 wrote:
> > Tomas,
> > 
> > Thank you very much. I made the change according to your suggestion
> > and it works.

One thing to keep in mind: a score of -INFINITY means the IPs will
*never* run on the same node, even if one or more nodes go down. If
that's what you want, of course, that's good. If you want the IPs to
stay on different nodes normally, but be able to run on the same node
in case of node outage, use a finite negative score.
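
For example (the same set command as above, with -1000 as an arbitrary
finite score):

  pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 \
    pgsql-slave-ip3 setoptions score=-1000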


[ClusterLabs] Reply: Reply: How to configure to make each slave resource have one VIP

2018-02-23 Thread 范国腾
Thank you very much, Tomas. 
This resolves my problem.
