Re: [ClusterLabs] Antw: Re: Antw: Re: design of a two-node cluster

2015-12-08 Thread Andrei Borzenkov
On Tue, Dec 8, 2015 at 12:01 PM, Ulrich Windl wrote:
> Andrei Borzenkov wrote on 08.12.2015 at 09:01:
>> On Tue, Dec 8, 2015 at 10:44 AM, Ulrich Windl wrote:
>>> Digimer wrote on 07.12.2015 at 22:40 in message
>>> <5665fcdc.1030...@alteeve.ca>:
>>> [...]
>>>> Node 1 looks up how to fence node 2, sees no delay and fences
>>>> immediately. Node 2 looks up how to fence node 1, sees a delay and
>>>> pauses. Node 2 will be dead long before the delay expires, ensuring that
>>>> node 2 always loses in such a case. If you have VMs on both nodes, then
>>>> no matter which node the delay is on, some servers will be interrupted.
>>>
>>> AFAIK, the cluster will try to migrate resources while fencing is
>>> pending but not yet complete. Is that true?
>>>
>>
>> If by "migrate" you really mean "restart resources that were
>> located on the node that became inaccessible", I seriously hope the
>> answer is "no"; otherwise, what is the point of attempting fencing
>> in the first place?
>
> Hi!
>
> A node must be fenced if at least one resource fails to stop.

No, it "must" not. It is up to whoever configures the cluster to
decide. If the resource is so important that the cluster has to recover
it at any cost, then yes, fencing may be the only option. Leaving the
resource in a failed state and letting the administrator handle it
manually is another option (it is quite possible that a resource which
failed to stop will also fail to start, in which case you have caused
downtime without any benefit).
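For what it's worth, Pacemaker exposes exactly this choice per
operation via the `on-fail` option. A minimal sketch in crm shell
syntax (the primitive name `my_db` and the agent are hypothetical;
only the `on-fail` setting matters here):

```shell
# Default behaviour: a failed stop escalates to fencing the node.
crm configure primitive my_db ocf:heartbeat:mysql \
    op stop interval=0 timeout=120s on-fail=fence

# Alternative: leave the resource in a failed state for the admin to
# handle manually; the node is not fenced, but the resource stays down.
crm configure primitive my_db ocf:heartbeat:mysql \
    op stop interval=0 timeout=120s on-fail=block
```

This is a configuration fragment, not a runnable script; it assumes a
working Pacemaker cluster with the crm shell installed.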

> That means other resources still may be able to be stopped or migrated before 
> the fencing takes place. Possibly this is a decision between "kill everything 
> as fast as possible" vs. "try to stop as many services as possible cleanly". 
> I prefer the latter, but preferences may vary.

OK, in this context your question makes sense indeed. Personally I
also feel like "it has failed already, so it is not really that
urgent", especially if other resources can indeed be migrated
gracefully.
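For the record, the delay-based tie-breaking Digimer describes above is
typically set on the fence device itself. A hedged sketch in pcs syntax
(addresses, credentials, and node names are placeholders; `delay` is a
parameter of common fence agents such as fence_ipmilan, and newer
Pacemaker releases also offer `pcmk_delay_base` on any stonith
resource):

```shell
# The device that fences node1 carries a 15s delay, so in a fence race
# node1 survives: node2's attempt to fence node1 pauses, while node1
# fences node2 immediately.
pcs stonith create fence_node1 fence_ipmilan \
    pcmk_host_list=node1 ipaddr=10.0.0.1 login=admin passwd=secret delay=15
pcs stonith create fence_node2 fence_ipmilan \
    pcmk_host_list=node2 ipaddr=10.0.0.2 login=admin passwd=secret
```

Again a configuration fragment; it requires a live cluster with working
IPMI fencing to try.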

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Antw: Re: Antw: Re: design of a two-node cluster

2015-12-08 Thread Lentes, Bernd
Ulrich wrote:
> >
> > Hi Ulrich,
> >
> > the migration i meant is the transition from my current setup (virtual
> > machines in raw files in a partition with filesystem) to the one
> > anticipated in the cluster (virtual machines in blank logical volumes
> > without fs). How can I do that ? And can I expand my disks in the vm
> > afterwards if necessary ?
> 
> If you use LVM, you might just add another disk to the VM, then
> make that disk a PV and add it to the VG. Then you can expand your LVs
> inside the VM.

I like that.
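Ulrich's suggestion can be sketched as follows, run inside the VM after
the host has attached a new virtual disk (the device node /dev/vdb, the
VG name vg0, and the LV name are assumptions for illustration):

```shell
# Assumes the new virtual disk appears as /dev/vdb and the guest
# already has a volume group "vg0" with a root LV on ext4.
pvcreate /dev/vdb                 # turn the new disk into an LVM PV
vgextend vg0 /dev/vdb             # add the PV to the existing VG
lvextend -L +20G /dev/vg0/root    # grow the LV by 20 GiB
resize2fs /dev/vg0/root           # grow the ext4 filesystem to match
```

This is an administrative fragment that needs root and a real disk, so
treat it as a sketch rather than a copy-paste recipe.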

> 
> > But the "other" migration (live-migration of vm's) is of course also
> > interesting. Digimer wrote if I have my vm in a blank logical volume
> > without fs, which is placed on a SAN, I can live-migrate because the
> > process of live-migration takes care about the access to the lv and I
> > don't need a cluster fs, just cLVM.
> 
> If logical volume means LUN (disk), I'd agree. If you mean LVM LV, I'd
> be very careful, especially when changing the LVM configuration. If you
> never plan to change LVM configuration, you could consider partitioning
> your disk with GPT with one partition for each VM.

I'm talking about LVM LV. Changing the LVM configuration (resizing an LV,
creating a new LV ...) would happen rarely, but it could happen.
But cLVM ensures that during LVM configuration changes the cLVM on the
other nodes can't change the configuration, and afterwards propagates the
new configuration to all nodes.
Why be careful?
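For illustration, the cluster-wide serialization described here
corresponds to marking the VG as clustered (the VG and LV names are
hypothetical; clvmd and the DLM must be running on all nodes):

```shell
vgchange -c y vg_vms                  # set the clustered flag on the VG
lvcreate -L 50G -n vm7_disk vg_vms    # metadata change is serialized
                                      # cluster-wide via clvmd/DLM
lvs vg_vms                            # other nodes see the new LV at once
```

A configuration fragment, assuming a running cluster; without clvmd the
`-c y` flag would only lock you out of the VG.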

Bernd

   

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Dr. Nikolaus Blum, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671




Re: [ClusterLabs] Antw: Re: Antw: Re: design of a two-node cluster

2015-12-08 Thread Digimer
On 08/12/15 08:35 AM, Ulrich Windl wrote:
> "Lentes, Bernd" wrote on 08.12.2015 at 13:10 in message
> <012101d131b1$5ec1b2e0$1c4518a0$@helmholtz-muenchen.de>:
>> Ulrich wrote:
>>> "Lentes, Bernd" wrote on 08.12.2015 at 09:13 in message
>>> <00a901d13190$5c6db3c0$15491b40$@helmholtz-muenchen.de>:
>>>> Digimer wrote:
>>>>
>>>> Should I install all vm's in one partition or every vm in a
>>>> separate partition ? The advantage of one vm per partition is
>>>> that I don't need a cluster fs, right ?
>>>
>>> I would put each VM on a dedicated LV and not have an FS
>>> between
> the
>>> VM and the host. The question then becomes; What is the PV? I
>>> use
>>> clustered LVM to make sure all nodes are in sync, LVM-wise.
>>
>> Is this the setup you are running (without fs) ?
>
> Yes, we use DRBD to replicate the storage and use the /dev/drbdX
> device as the clustered LVM PV. We have one VG for the space (could
> add a new DRBD resource later if needed...) and then create a
> dedicated LV per VM.
> We have, as I mentioned, one small LV formatted with gfs2 where we
> store the VM's XML files (so that any change made to a VM is
> immediately available to all nodes).
>

>>>> How can I migrate my current vm's ? They are stored in raw files
>>>> (one or two). How do I transfer them to a naked lv ?
>>>
>>> For migration the image must be available on both nodes (thus gfs2).
>>>

>>
>> Hi Ulrich,
>>
>> the migration i meant is the transition from my current setup (virtual
>> machines in raw files in a partition with filesystem) to the one
>> anticipated in the cluster (virtual machines in blank logical volumes
>> without fs). How can I do that ? And can I expand my disks in the vm
>> afterwards if necessary ?
> 
> You can copy the images with rsync or similar while the VMs are down. Then 
> you'll have the same filesystem layout. If you want to change the partition 
> sizes, I'd suggest to create new disks and partitions, the mount the 
> partitions on old and new system, and then rsync (or similar) the _files_ 
> from OLD to NEW. Some boot loaders may need some extra magic. If you use LVM, 
> you might just add another disk to the VM, then make that disk a PV and add 
> it to the VG. Then you can expand your LVs inside the VM.
> 
>> But the "other" migration (live-migration of vm's) is of course also
>> interesting. Digimer wrote if I have my vm in a blank logical volume
>> without fs, which is placed on a SAN, I can live-migrate because the
>> process of live-migration takes care about the access to the lv and I
>> don't need a cluster fs, just cLVM.
> 
> If logical volume means LUN (disk), I'd agree. If you mean LVM LV, I'd be 
> very careful, especially when changing the LVM configuration. If you never 
> plan to change LVM configuration, you could consider partitioning your disk 
> with GPT with one partition for each VM.
> 
> Regards,
> Ulrich

This approach does it from inside the VM, and it is not needed. The
VM does need to be stopped, yes, but you can simply copy from the raw
image to the LV using dd, in my experience.
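A minimal sketch of that dd approach (all names are examples; the VM
must be shut down and the LV at least as large as the raw image):

```shell
lvcreate -L 20G -n vm1_disk vg_vms          # LV sized >= the raw image
dd if=/var/lib/libvirt/images/vm1.raw \
   of=/dev/vg_vms/vm1_disk bs=4M conv=fsync
# Afterwards, point the VM's disk definition at /dev/vg_vms/vm1_disk
# instead of the raw file.
```

An administrative fragment requiring root and real block devices;
verify the sizes with `qemu-img info` and `lvs` before copying.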

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
