Re: [ClusterLabs] Antw: Re: Antw: Re: design of a two-node cluster

2015-12-08 Thread Andrei Borzenkov
On Tue, Dec 8, 2015 at 12:01 PM, Ulrich Windl
 wrote:
> Andrei Borzenkov wrote on 08.12.2015 at 09:01 in message:
>> On Tue, Dec 8, 2015 at 10:44 AM, Ulrich Windl
>>  wrote:
>>> Digimer wrote on 07.12.2015 at 22:40 in message
>>> <5665fcdc.1030...@alteeve.ca>:
>>> [...]
 Node 1 looks up how to fence node 2, sees no delay and fences
 immediately. Node 2 looks up how to fence node 1, sees a delay and
 pauses. Node 2 will be dead long before the delay expires, ensuring that
 node 2 always loses in such a case. If you have VMs on both nodes, then
 no matter which node the delay is on, some servers will be interrupted.
>>>
>>> AFAIK, the cluster will try to migrate resources if a fencing is pending,
>>> but not yet complete. Is that true?
>>>
>>
>> If under "migrate" you really mean "restart resources that were
>> located on node that became inaccessible" I seriously hope the answer
>> is "not", otherwise what is the point in attempting fencing in the
>> first place?
>
> Hi!
>
> A node must be fenced if at least one resource fails to stop.

No, it "must" not. It is up to someone who configures cluster to
decide. If this resource is so important that cluster has to recover
it under any cost, then yes, fencing may be the only option. Leaving
resource as failed and letting administrator to handle it manually is
another option (it is quite possible that if it failed to stop it will
also fail to start in which case you just caused downtime without any
benefit).
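
For illustration, a minimal crm configure sketch of that second option
(the resource name p_dummy is hypothetical): with on-fail=block on the
stop operation, a stop failure leaves the resource blocked for manual
cleanup instead of escalating to fencing.

# Hypothetical example: a failed stop blocks the resource (clean it up
# later with "crm resource cleanup p_dummy") rather than fencing the node.
primitive p_dummy ocf:heartbeat:Dummy \
        op monitor interval="10s" timeout="20s" \
        op stop interval="0" timeout="60s" on-fail="block"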

> That means other resources still may be able to be stopped or migrated before 
> the fencing takes place. Possibly this is a decision between "kill everything 
> as fast as possible" vs. "try to stop as many services as possible cleanly". 
> I prefer the latter, but preferences may vary.

OK, in this context your question makes sense indeed. Personally I
also feel like "it has failed already, so it is not really that
urgent", especially if other resources can indeed be migrated
gracefully.

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-08 Thread Ulrich Windl
>>> "Lentes, Bernd"  schrieb am 08.12.2015 
>>> um
09:13 in Nachricht <00a901d13190$5c6db3c0$15491b40$@helmholtz-muenchen.de>:
> Digimer wrote:
> 
>> >>> Should I install all vm's in one partition or every vm in a separate
>> >>> partition ? The advantage of one vm per partition is that I don't
>> >>> need a cluster fs, right ?
>> >>
>> >> I would put each VM on a dedicated LV and not have an FS between the
>> >> VM and the host. The question then becomes; What is the PV? I use
>> >> clustered LVM to make sure all nodes are in sync, LVM-wise.
>> >
>> > Is this the setup you are running (without fs) ?
>> 
>> Yes, we use DRBD to replicate the storage and use the /dev/drbdX
>> device as the clustered LVM PV. We have one VG for the space (could
>> add a new DRBD resource later if needed...) and then create a dedicated
>> LV per VM.
>> We have, as I mentioned, one small LV formatted with gfs2 where we
>> store the VM's XML files (so that any change made to a VM is
>> immediately available to all nodes).
>> 
> 
> How can I migrate my current vm's ? They are stored in raw files (one or
> two).
> How do I transfer them to a naked lv ?

For migration the image must be available on both nodes (thus gfs2).
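
(As an aside, a sketch of the shared-definition arrangement described in
the quoted text above: dump each domain's XML onto the shared gfs2 mount
and point the cluster resource at that copy. The paths and names here are
examples only, not taken from this thread.)

# dump the definition onto the shared gfs2 mount
virsh dumpxml vm1 > /shared/definitions/vm1.xml
# crm configure snippet pointing the resource at the shared copy
primitive p_vm1 ocf:heartbeat:VirtualDomain \
        params config="/shared/definitions/vm1.xml" hypervisor="qemu:///system" \
        meta allow-migrate="true" \
        op monitor interval="30s" timeout="60s"
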

> 
> Bernd
>







Re: [ClusterLabs] Antw: Re: Antw: Re: design of a two-node cluster

2015-12-08 Thread Lentes, Bernd
Ulrich wrote:
> >
> > Hi Ulrich,
> >
> > the migration I meant is the transition from my current setup (virtual
> > machines in raw files in a partition with filesystem) to the one
> > anticipated in the cluster (virtual machines in blank logical volumes
> > without fs). How can I do that ? And can I expand my disks in the vm
> > afterwards if necessary ?
> 
> If you use LVM, you might just add another disk to the VM, then
> make that disk a PV and add it to the VG. Then you can expand your LVs
> inside the VM.

I like that.
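
(Sketched out, that in-guest expansion might look like this; the device
name /dev/vdb and the VG/LV names are placeholders, not from this thread.)

# inside the guest, after the extra virtual disk has been attached
pvcreate /dev/vdb                            # turn the new disk into a PV
vgextend vg_system /dev/vdb                  # add it to the existing VG
lvextend -r -L +20G /dev/vg_system/lv_root   # grow the LV and resize the fs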

> 
> > But the "other" migration (live-migration of vm's) is of course also
> > interesting. Digimer wrote if I have my vm in a blank logical volume
> > without fs, which is placed on a SAN, I can live-migrate because the
> > process of live-migration takes care about the access to the lv and I
> > don't need a cluster fs, just cLVM.
> 
> If logical volume means LUN (disk), I'd agree. If you mean LVM LV, I'd be
> very careful, especially when changing the LVM configuration. If you
> never plan to change LVM configuration, you could consider partitioning
> your disk with GPT with one partition for each VM.

I'm talking about LVM LV. Changing the LVM configuration (resizing an LV,
creating a new LV ...) would happen rarely, but it could happen.
But cLVM takes care that, while the LVM configuration is being changed, the
other nodes cannot change it, and afterwards it propagates the new
configuration to all nodes.
Why be careful?
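
(For context, a rough sketch of the clustered-LVM pieces being discussed,
assuming DLM and clvmd are already running under the cluster's control;
the device and volume names are examples only.)

# /etc/lvm/lvm.conf on every node: use cluster-wide locking
#   locking_type = 3
vgcreate -cy vg_vm /dev/mapper/san_lun1   # -cy marks the VG as clustered
lvcreate -L 20G -n vm1_disk vg_vm         # one LV per VM, visible on all nodes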

Bernd

   

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Dr. Nikolaus Blum, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671




Re: [ClusterLabs] design of a two-node cluster

2015-12-08 Thread Steffen Winther Sørensen
Check out something like [the community version of] ProxMox, a great
way to administer your KVMs.

/Steffen

> On 7. dec. 2015, at 18.35, Lentes, Bernd  
> wrote:
> 
> Hi,
> 
> I asked around here a while ago. Unfortunately I couldn't
> continue to work on my cluster, so I'm still thinking about the design.
> I hope you will help me again with some recommendations, because once the
> cluster is running, changing the design is not possible anymore.
> 
> These are my requirements:
> 
> - all services are running inside virtual machines (KVM), mostly databases
> and static/dynamic webpages
> - I have two nodes and would like to have some vm's running on node A and
> some on node B during normal operation as a kind of loadbalancing
> - I'd like to keep the setup simple (if possible)
> - availability is important, performance not so much (webpages some
> hundred requests per day, databases some hundred inserts/selects per day)
> - I'd like to have snapshots of the vm's
> - live migration of the vm's should be possible
> - nodes are SLES 11 SP4, vm's are Windows 7 and several Linux
> distributions (Ubuntu, SLES, OpenSuSE)
> - setup should be extensible (add further vm's)
> - I have a shared storage (FC SAN)
> 
> My ideas/questions:
> 
> Should I install all vm's in one partition or every vm in a separate
> partition ? The advantage of one vm per partition is that I don't need a
> cluster fs, right ?
> I read to avoid a cluster fs if possible because it adds further
> complexity. Below the fs I'd like to have logical volumes because they are
> easy to expand.
> Do I need cLVM (I think so) ? Is it an advantage to install the vm's in
> plain partitions, without a fs ?
> It would reduce the complexity further because I don't need a fs. Would
> live migration still be possible ?
> 
> snapshots:
> I was playing around with virsh (libvirt) to create snapshots of the vm's.
> In the end I gave up. virsh explains commands in its help, but when you
> want to use them you get messages
> like "not supported yet", although I use libvirt 1.2.11. This is
> ridiculous. I think I will create my snapshots inside the vm's using lvm.
> We have a network based backup solution (Legato/EMC) which saves the disks
> every night.
> By supplying a snapshot for that, I get a consistent backup. The databases
> are dumped with their respective tools.
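
(An in-VM LVM snapshot along those lines might look like the following
sketch; the VG/LV names, size and mount point are placeholders.)

# inside the VM, before the nightly backup run
lvcreate -s -L 5G -n data_snap /dev/vg_data/lv_data   # create a snapshot
mount -o ro /dev/vg_data/data_snap /mnt/backup        # expose it to the backup
# ... network backup reads from /mnt/backup ...
umount /mnt/backup
lvremove -f /dev/vg_data/data_snap                    # drop it afterwards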
> 
> Thanks in advance.
> 
> 
> Bernd
> 
> --
> Bernd Lentes
> 
> Systemadministration
> institute of developmental genetics
> Gebäude 35.34 - Raum 208
> HelmholtzZentrum München
> bernd.len...@helmholtz-muenchen.de
> phone: +49 (0)89 3187 1241
> fax: +49 (0)89 3187 2294
> 
> Wer Visionen hat soll zum Hausarzt gehen
> Helmut Schmidt
> 



Re: [ClusterLabs] design of a two-node cluster

2015-12-08 Thread Digimer
On 08/12/15 03:13 AM, Lentes, Bernd wrote:
> Digimer wrote:
> 
> Should I install all vm's in one partition or every vm in a separate
> partition ? The advantage of one vm per partition is that I don't
> need a cluster fs, right ?

 I would put each VM on a dedicated LV and not have an FS between the
 VM and the host. The question then becomes; What is the PV? I use
 clustered LVM to make sure all nodes are in sync, LVM-wise.
>>>
>>> Is this the setup you are running (without fs) ?
>>
>> Yes, we use DRBD to replicate the storage and use the /dev/drbdX
>> device as the clustered LVM PV. We have one VG for the space (could
>> add a new DRBD resource later if needed...) and then create a dedicated
>> LV per VM.
>> We have, as I mentioned, one small LV formatted with gfs2 where we
>> store the VM's XML files (so that any change made to a VM is
>> immediately available to all nodes).
>>
> 
> How can I migrate my current vm's ? They are stored in raw files (one or
> two).
> How do I transfer them to a naked lv ?
> 
> Bernd

If your VM images are 'raw', then you should be able to just dd the file
to the LV directly. If they're qcow2 (or something else), convert it to
raw first.
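
(Roughly like this; the image path and LV/VG names are placeholders, and
the LV must be at least as large as the virtual disk.)

# raw image: copy it straight onto the LV
dd if=/var/lib/libvirt/images/vm1.img of=/dev/vg_vm/vm1 bs=4M conv=fsync
# qcow2 (or another format): convert directly onto the LV instead
qemu-img convert -O raw /var/lib/libvirt/images/vm1.qcow2 /dev/vg_vm/vm1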

digimer

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Antw: Re: Antw: Re: design of a two-node cluster

2015-12-08 Thread Digimer
On 08/12/15 08:35 AM, Ulrich Windl wrote:
 "Lentes, Bernd"  schrieb am 08.12.2015 
 um
> 13:10 in Nachricht <012101d131b1$5ec1b2e0$1c4518a0$@helmholtz-muenchen.de>:
>> Ulrich wrote:
>>
>>>
>> "Lentes, Bernd"  schrieb
>>> am
>> 08.12.2015 um
>>> 09:13 in Nachricht <00a901d13190$5c6db3c0$15491b40$@helmholtz-
>>> muenchen.de>:
 Digimer wrote:

 Should I install all vm's in one partition or every vm in a
 separate partition ? The advantage of one vm per partition is
 that I don't need a cluster fs, right ?
>>>
>>> I would put each VM on a dedicated LV and not have an FS between the
>>> VM and the host. The question then becomes; What is the PV? I use
>>> clustered LVM to make sure all nodes are in sync, LVM-wise.
>>
>> Is this the setup you are running (without fs) ?
>
> Yes, we use DRBD to replicate the storage and use the /dev/drbdX
> device as the clustered LVM PV. We have one VG for the space (could
> add a new DRBD resource later if needed...) and then create a
> dedicated LV per VM.
> We have, as I mentioned, one small LV formatted with gfs2 where we
> store the VM's XML files (so that any change made to a VM is
> immediately available to all nodes).
>

 How can I migrate my current vm's ? They are stored in raw files (one
 or two).
 How do I transfer them to a naked lv ?
>>>
>>> For migration the image must be available on both nodes (thus gfs2).
>>>

>>
>> Hi Ulrich,
>>
>> the migration i meant is the transition from my current setup (virtual
>> machines in raw files in a partition with filesystem) to the one
>> anticipated in the cluster (virtual machines in blank logical volumes
>> without fs). How can I do that ? And can I expand my disks in the vm
>> afterwards if necessary ?
> 
> You can copy the images with rsync or similar while the VMs are down. Then 
> you'll have the same filesystem layout. If you want to change the partition 
> sizes, I'd suggest to create new disks and partitions, then mount the
> partitions on old and new system, and then rsync (or similar) the _files_ 
> from OLD to NEW. Some boot loaders may need some extra magic. If you use LVM, 
> you might just add another disk to the VM, then make that disk a PV and add 
> it to the VG. Then you can expand your LVs inside the VM.
> 
>> But the "other" migration (live-migration of vm's) is of course also
>> interesting. Digimer wrote if I have my vm in a blank logical volume
>> without fs, which is placed on a SAN, I can live-migrate because the
>> process of live-migration takes care about the access to the lv and I
>> don't need a cluster fs, just cLVM.
> 
> If logical volume means LUN (disk), I'd agree. If you mean LVM LV, I'd be 
> very careful, especially when changing the LVM configuration. If you never 
> plan to change LVM configuration, you could consider partitioning your disk 
> with GPT with one partition for each VM.
> 
> Regards,
> Ulrich

That approach does it from inside the VM, and it is not needed. The
VM does need to be stopped, yes, but you can simply go from the raw format
to the LV using dd, in my experience.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Resources suddenly get target-role="stopped"

2015-12-08 Thread Boyan Ikonomov
Mystery solved:

Never put

[ "${2}" = release ] && crm resource stop VMA_${1}

inside /etc/libvirt/hooks/qemu.

Very wrong decision.

On Monday 07 December 2015 16:49:01 emmanuel segura wrote:
> Next time, please show your full config, unless it contains something
> special that you can't share.
> 
> 2015-12-07 9:08 GMT+01:00 Klechomir :
> > Hi,
> > Sorry didn't get your point.
> > 
> > The xml of the VM is on a active-active drbd drive with ocfs2 fs on it and
> > is visible from both nodes.
> > The live migration is always successful.
> > 
> > On 4.12.2015 19:30, emmanuel segura wrote:
> >> I think the xml of your vm needs to be available on both nodes, but you are
> >> using a failover resource Filesystem_CDrive1, because pacemaker
> >> monitors resources on both nodes to check whether they are running on
> >> multiple nodes.
> >> 
> >> 2015-12-04 18:06 GMT+01:00 Ken Gaillot :
> >>> On 12/04/2015 10:22 AM, Klechomir wrote:
>  Hi list,
>  My issue is the following:
>  
>  I have very stable cluster, using Corosync 2.1.0.26 and Pacemaker 1.1.8
>  (observed the same problem with Corosync 2.3.5  & Pacemaker 1.1.13-rc3)
>  
>  Bumped on this issue when started playing with VirtualDomain resources,
>  but this seems to be unrelated to the RA.
>  
>  The problem is that, for no apparent reason, a resource gets
>  target-role="Stopped". This happens after (successful) migration, after
>  failover, or after a VM restart.
>  
>  My tests showed that changing the resource name fixes this problem, but
>  this seems to be a temporary workaround.
>  
>  The resource configuration is:
>  primitive VMA_VM1 ocf:heartbeat:VirtualDomain \
>  
>   params config="/NFSvolumes/CDrive1/VM1/VM1.xml"
>  
>  hypervisor="qemu:///system" migration_transport="tcp" \
>  
>   meta allow-migrate="true" target-role="Started" \
>   op start interval="0" timeout="120s" \
>   op stop interval="0" timeout="120s" \
>   op monitor interval="10" timeout="30" depth="0" \
>   utilization cpu="1" hv_memory="925"
>  
>  order VM_VM1_after_Filesystem_CDrive1 inf: Filesystem_CDrive1 VMA_VM1
>  
>  Here is the log from one such stop, after successful migration with
>  "crm
>  migrate resource VMA_VM1":
>  
>  Dec 04 15:18:22 [3818929] CLUSTER-1   crmd:debug: cancel_op:
>  Cancelling op 5564 for VMA_VM1 (VMA_VM1:5564)
>  Dec 04 15:18:22 [4434] CLUSTER-1   lrmd: info:
>  cancel_recurring_action: Cancelling operation
>  VMA_VM1_monitor_1
>  Dec 04 15:18:23 [3818929] CLUSTER-1   crmd:debug: cancel_op:
>  Op 5564 for VMA_VM1 (VMA_VM1:5564): cancelled
>  Dec 04 15:18:23 [3818929] CLUSTER-1   crmd:debug:
>  do_lrm_rsc_op:Performing
>  key=351:199:0:fb6e486a-023a-4b44-83cf-4c0c208a0f56
>  op=VMA_VM1_migrate_to_0
>  VirtualDomain(VMA_VM1)[1797698]:2015/12/04_15:18:23 DEBUG:
>  Virtual domain VM1 is currently running.
>  VirtualDomain(VMA_VM1)[1797698]:2015/12/04_15:18:23 INFO: VM1:
>  Starting live migration to CLUSTER-2 (using virsh
>  --connect=qemu:///system --quiet migrate --live  VM1
>  qemu+tcp://CLUSTER-2/system ).
>  Dec 04 15:18:24 [3818929] CLUSTER-1   crmd: info:
>  process_lrm_event:LRM operation VMA_VM1_monitor_1 (call=5564,
>  status=1, cib-update=0, confirmed=false) Cancelled
>  Dec 04 15:18:24 [3818929] CLUSTER-1   crmd:debug:
>  update_history_cache: Updating history for 'VMA_VM1' with
>  monitor op
>  VirtualDomain(VMA_VM1)[1797698]:2015/12/04_15:18:26 INFO: VM1:
>  live migration to CLUSTER-2 succeeded.
>  Dec 04 15:18:26 [4434] CLUSTER-1   lrmd:debug:
>  operation_finished:  VMA_VM1_migrate_to_0:1797698 - exited with
>  rc=0
>  Dec 04 15:18:26 [4434] CLUSTER-1   lrmd:   notice:
>  operation_finished:  VMA_VM1_migrate_to_0:1797698 [
>  2015/12/04_15:18:23 INFO: VM1: Starting live migration to CLUSTER-2
>  (using virsh --connect=qemu:///system --quiet migrate --live  VM1
>  qemu+tcp://CLUSTER-2/system ). ]
>  Dec 04 15:18:26 [4434] CLUSTER-1   lrmd:   notice:
>  operation_finished:  VMA_VM1_migrate_to_0:1797698 [
>  2015/12/04_15:18:26 INFO: VM1: live migration to CLUSTER-2 succeeded. ]
>  Dec 04 15:18:27 [3818929] CLUSTER-1   crmd:debug:
>  create_operation_update:  do_update_resource: Updating resouce
>  VMA_VM1 after complete migrate_to op (interval=0)
>  Dec 04 15:18:27 [3818929] CLUSTER-1   crmd:   notice:
>  process_lrm_event:LRM operation VMA_VM1_migrate_to_0 (call=5697,
>  rc=0, cib-update=89, confirmed=true) ok
>  Dec 04 15:18:27 [3818929] CLUSTER-1   crmd:debug:
>  update_history_cache:  

Re: [ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-08 Thread Digimer
On 08/12/15 02:44 AM, Ulrich Windl wrote:
> Digimer wrote on 07.12.2015 at 22:40 in message <5665fcdc.1030...@alteeve.ca>:
> [...]
>> Node 1 looks up how to fence node 2, sees no delay and fences
>> immediately. Node 2 looks up how to fence node 1, sees a delay and
>> pauses. Node 2 will be dead long before the delay expires, ensuring that
>> node 2 always loses in such a case. If you have VMs on both nodes, then
>> no matter which node the delay is on, some servers will be interrupted.
> 
> AFAIK, the cluster will try to migrate resources if a fencing is pending, but 
> not yet complete. Is that true?
> 
> [...]
> 
> Regards,
> Ulrich

A cluster can't (and shouldn't!) do anything about resources until
fencing has completed.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] How I can contribute the code and TR fix for default resouce agent?

2015-12-08 Thread Ken Gaillot
On 12/07/2015 01:13 PM, Xiaohua Wang wrote:
> Hi Friends,
> Since our product uses Pacemaker and the related resource agents based on
> RHEL 6.5, we found some bugs and have already fixed them. We would like to
> contribute the fixes back. How can we do that?
> 
> Best Regards
> Xiaohua Wang

Hi,

Thank you for offering to contribute back! It is very much appreciated.

The code repositories are on github under Cluster Labs:

   https://github.com/ClusterLabs

If you have a github account, you can use the above link to select the
repository you want (for example, pacemaker or resource-agents), then
click the "Fork" button at the top right. That will create your own copy
of the repository on github.

You can then clone your new fork to your development machine, make your
changes, and push them back to your fork on github. The github page for
your fork will then show a button to submit a pull request.
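
Roughly, that workflow looks like this (the repository, branch name and
commit message are only examples):

# after clicking "Fork" on github
git clone https://github.com/<your-user>/resource-agents.git
cd resource-agents
git checkout -b my-fix               # work on a topic branch
# ... edit, test, commit ...
git commit -a -m "short description of the fix"
git push origin my-fix               # then open the pull request on github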

If you do not want to use github, you can also email your changes in
standard patch format to the develop...@clusterlabs.org mailing list,
which you can subscribe to here: http://clusterlabs.org/mailman/listinfo/
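
(For that route, something like "git format-patch origin/master" will
produce patches in the standard format, assuming your work sits on a
local branch off master.)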



[ClusterLabs] Pacemaker 1.1.14 - Release Candidate (try it out!)

2015-12-08 Thread Ken Gaillot
The release cycle for Pacemaker 1.1.14 has begun! The source code for a
release candidate is available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.14-rc2

This release candidate introduces some valuable new features:

* Resources will now start as soon as their state has been confirmed on
all nodes and all dependencies have been satisfied, rather than waiting
for the state of all resources to be confirmed. This allows for faster
startup of some services, and more even startup load.

* Fencing topology levels can now be applied to all nodes whose name
matches a configurable pattern, or that have a configurable node attribute.

* When a fencing topology level has multiple devices, reboots are now
automatically mapped to all-off-then-all-on, allowing much simplified
configuration of redundant power supplies.

* Guest nodes can now be included in groups, which simplifies the common
Pacemaker Remote use case of grouping a storage device, filesystem and VM.

* Clone resources have a new clone-min metadata option, specifying that
a certain number of instances must be running before any dependent
resources can run. This is particularly useful for services behind a
virtual IP and haproxy, as is often done with OpenStack.
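
As a rough illustration of clone-min (crm shell syntax, with hypothetical
resource names): keep a virtual IP down until at least two haproxy
instances are running.

clone cl_haproxy p_haproxy \
        meta clone-min="2" interleave="true"
order o_haproxy_before_vip inf: cl_haproxy p_vip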

As usual, the release includes many bugfixes and minor enhancements. For
a more detailed list of changes, see the change log:

https://github.com/ClusterLabs/pacemaker/blob/1.1/ChangeLog

Everyone is encouraged to download, compile and test the new release. We
do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

(You may notice we're starting with rc2; rc1 was released, but had a
compilation issue in some cases.)
-- 
Ken Gaillot 



[ClusterLabs] FW: [Pacemaker] remove duplicate node

2015-12-08 Thread gerry kernan
 
 
 
Hi,

How would I remove a duplicate node? I have a 2-node setup, but one node is
showing twice. crm configure show output is below; node gat-voip-01 is listed
twice.
 
 
node $id="0dc85a64-01ad-4fc5-81fd-698208a8322c" gat-voip-02\
attributes standby="on"
node $id="3b5d1061-8f68-4ab3-b169-e0ebe890c446" gat-voip-01
node $id="ae4d76e7-af64-4d93-acdd-4d7b5c274eff" gat-voip-01\
attributes standby="off"
 
primitive res_Filesystem_rep ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/rep" fstype="ext3" \
operations $id="res_Filesystem_rep-operations" \
op start interval="0" timeout="60" \
op stop interval="0" timeout="60" \
op monitor interval="20" timeout="40" start-delay="0" \
op notify interval="0" timeout="60" \
meta target-role="started" is-managed="true"
primitive res_IPaddr2_northIP ocf:heartbeat:IPaddr2 \
params ip="10.75.29.10" cidr_netmask="26" \
operations $id="res_IPaddr2_northIP-operations" \
op start interval="0" timeout="20" \
op stop interval="0" timeout="20" \
op monitor interval="10" timeout="20" start-delay="0" \
meta target-role="started" is-managed="true"
primitive res_IPaddr2_sipIP ocf:heartbeat:IPaddr2 \
params ip="158.255.224.226" nic="bond2" \
operations $id="res_IPaddr2_sipIP-operations" \
op start interval="0" timeout="20" \
op stop interval="0" timeout="20" \
op monitor interval="10" timeout="20" start-delay="0" \
meta target-role="started" is-managed="true"
primitive res_asterisk_res_asterisk lsb:asterisk \
operations $id="res_asterisk_res_asterisk-operations" \
op start interval="0" timeout="15" \
op stop interval="0" timeout="15" \
op monitor interval="15" timeout="15" start-delay="15" \
meta target-role="started" is-managed="true"
primitive res_drbd_1 ocf:linbit:drbd \
params drbd_resource="r0" \
operations $id="res_drbd_1-operations" \
op start interval="0" timeout="240" \
op promote interval="0" timeout="90" \
op demote interval="0" timeout="90" \
op stop interval="0" timeout="100" \
op monitor interval="10" timeout="20" start-delay="0" \
op notify interval="0" timeout="90"
primitive res_httpd_res_httpd lsb:httpd \
operations $id="res_httpd_res_httpd-operations" \
op start interval="0" timeout="15" \
op stop interval="0" timeout="15" \
op monitor interval="15" timeout="15" start-delay="15" \
meta target-role="started" is-managed="true"
primitive res_mysqld_res_mysql lsb:mysqld \
operations $id="res_mysqld_res_mysql-operations" \
op start interval="0" timeout="15" \
op stop interval="0" timeout="15" \
op monitor interval="15" timeout="15" start-delay="15" \
meta target-role="started"
group asterisk res_Filesystem_rep res_IPaddr2_northIP res_IPaddr2_sipIP 
res_mysqld_res_mysql res_httpd_res_httpd res_asterisk_res_asterisk
ms ms_drbd_1 res_drbd_1 \
meta clone-max="2" notify="true" interleave="true" 
resource-stickiness="100"
location loc_res_httpd_res_httpd_gat-voip-01.gdft.org asterisk inf: 
gat-voip-01.gdft.org
location loc_res_mysqld_res_mysql_gat-voip-01.gdft.org asterisk inf: 
gat-voip-01.gdft.org
colocation col_res_Filesystem_rep_ms_drbd_1 inf: asterisk ms_drbd_1:Master
order ord_ms_drbd_1_res_Filesystem_rep inf: ms_drbd_1:promote asterisk:start
property $id="cib-bootstrap-options" \
stonith-enabled="false" \
dc-version="1.0.12-unknown" \
no-quorum-policy="ignore" \
cluster-infrastructure="Heartbeat" \
last-lrm-refresh="1345727614"
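
(One possible approach, as an untested sketch: once you have confirmed
which of the two gat-voip-01 entries is stale, remove that node entry and
its status section from the CIB by id. STALE-ID below is a placeholder for
whichever $id that turns out to be.)

cibadmin --delete -o nodes  -X '<node id="STALE-ID"/>'
cibadmin --delete -o status -X '<node_state id="STALE-ID"/>'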
 
 
Gerry Kernan
 
 
Infinity IT   |   17 The Mall   |   Beacon Court   |   Sandyford   |   Dublin 
D18 E3C8   |   Ireland
Tel:  +353 - (0)1 - 293 0090   |   E-Mail:  gerry.ker...@infinityit.ie
 
Managed IT Services   Infinity IT - www.infinityit.ie
IP Telephony   Asterisk Consulting - www.asteriskconsulting.com
Contact Centre   Total Interact - www.totalinteract.com