Re: [ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-16 Thread Lentes, Bernd
Ulrich wrote:

- On Dec 16, 2015, at 8:36 AM, Ulrich Windl 
ulrich.wi...@rz.uni-regensburg.de wrote:

> >>> "Lentes, Bernd" wrote on 16.12.2015 at
> 00:35 in message
> <1621336773.386234.1450222516246.javamail.zim...@helmholtz-muenchen.de>:
> 
> [...]
>> What about a quorum disk? I also read about "tiebreakers", or that STONITH
>> is magically able to choose the right
>> node to fence (I can't believe that).
> 
> If you read "the right" as "not both" it's OK ;-)
> 
> In HP Service Guard they used a "lock disk" (SCSI disk) as tie breaker: on
> split brain every node tried to "SCSI reserve" the unit, which only one node
> succeeded at. Then (in an SBD-like manner) the name of the survivor was
> written to the disk (I guess), and every other node committed suicide
> (that's different from STONITH, where external nodes kill one another,
> AFAIK). Something like that is still missing.
> 
> Personal note: Bernd, I think it's time to try things now. You discussed a 
> lot,
> but I still think you never tried the proposals...
> 
Yes, you are right. But I like to have an idea of what I will do beforehand,
rather than testing to find out which design I want. A lot will still remain
to test.
I just have to wait for the second HP server we will order in January.
We are a public institution, and orders sometimes take a lot of time. You
work at a university; maybe you experience the same ...

Bernd
   

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Dr. Nikolaus Blum, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-15 Thread Ulrich Windl
>>> "Lentes, Bernd" wrote on 16.12.2015 at
00:35 in message
<1621336773.386234.1450222516246.javamail.zim...@helmholtz-muenchen.de>:

[...]
> What about a quorum disk? I also read about "tiebreakers", or that STONITH
> is magically able to choose the right
> node to fence (I can't believe that).

If you read "the right" as "not both" it's OK ;-)

In HP Service Guard they used a "lock disk" (SCSI disk) as tie breaker: on
split brain every node tried to "SCSI reserve" the unit, which only one node
succeeded at. Then (in an SBD-like manner) the name of the survivor was
written to the disk (I guess), and every other node committed suicide (that's
different from STONITH, where external nodes kill one another, AFAIK).
Something like that is still missing.
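The modern Linux-HA counterpart of such a lock disk is SBD (storage-based
death), which combines shared-disk message slots with watchdog-driven
suicide. A minimal sketch, assuming a small shared LUN visible to both nodes
(the device path is a placeholder):

```shell
# Placeholder device path -- substitute your shared LUN.
DEV=/dev/disk/by-id/scsi-SHARED_LUN

# Initialize the SBD metadata and per-node message slots on the disk:
sbd -d "$DEV" create

# Inspect the on-disk header and the current slot contents:
sbd -d "$DEV" dump
sbd -d "$DEV" list
```

On SUSE-based setups the device is then listed as SBD_DEVICE in
/etc/sysconfig/sbd and an sbd-based stonith resource is added to the
cluster; details vary by distribution.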

Personal note: Bernd, I think it's time to try things now. You discussed a lot, 
but I still think you never tried the proposals...

Regards,
Ulrich





Re: [ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-08 Thread Digimer
On 08/12/15 02:44 AM, Ulrich Windl wrote:
>>> Digimer wrote on 07.12.2015 at 22:40 in message
> <5665fcdc.1030...@alteeve.ca>:
> [...]
>> Node 1 looks up how to fence node 2, sees no delay and fences
>> immediately. Node 2 looks up how to fence node 1, sees a delay and
>> pauses. Node 2 will be dead long before the delay expires, ensuring that
>> node 2 always loses in such a case. If you have VMs on both nodes, then
>> no matter which node the delay is on, some servers will be interrupted.
> 
> AFAIK, the cluster will try to migrate resources while fencing is pending
> but not yet complete. Is that true?
> 
> [...]
> 
> Regards,
> Ulrich

A cluster can't (and shouldn't!) do anything about resources until
fencing has completed.
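The per-device delay described in the quoted text is configured on the fence
device itself. A sketch using pcs with fence_ipmilan; all names, addresses
and credentials are placeholders, and parameter names vary between
fence-agents versions:

```shell
# The device that fences node1 gets a delay, so node1 survives a clean
# split; the device that fences node2 fires immediately.
pcs stonith create fence_node1 fence_ipmilan \
    pcmk_host_list="node1" ip="10.0.0.1" \
    username="admin" password="secret" delay=15
pcs stonith create fence_node2 fence_ipmilan \
    pcmk_host_list="node2" ip="10.0.0.2" \
    username="admin" password="secret"
```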

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-08 Thread Lentes, Bernd
Ulrich wrote:

> 
> >>> "Lentes, Bernd" wrote on
> >>> 08.12.2015 at
> 09:13 in message <00a901d13190$5c6db3c0$15491b40$@helmholtz-
> muenchen.de>:
> > Digimer wrote:
> >
> >> >>> Should I install all vm's in one partition or every vm in a
> >> >>> separate partition? The advantage of one vm per partition is
> >> >>> that I don't need a cluster fs, right?
> >> >>
> >> >> I would put each VM on a dedicated LV and not have an FS
> between
> >> the
> >> >> VM and the host. The question then becomes; What is the PV? I
> use
> >> >> clustered LVM to make sure all nodes are in sync, LVM-wise.
> >> >
> >> > Is this the setup you are running (without fs) ?
> >>
> >> Yes, we use DRBD to replicate the storage and use the /dev/drbdX
> >> device as the clustered LVM PV. We have one VG for the space (could
> >> add a new DRBD resource later if needed...) and then create a
> >> dedicated LV per VM.
> >> We have, as I mentioned, one small LV formatted with gfs2 where we
> >> store the VM's XML files (so that any change made to a VM is
> >> immediately available to all nodes).
> >>
> >
> > How can I migrate my current vm's? They are stored in raw files (one
> > or two).
> > How do I transfer them to a naked lv?
> 
> For migration the image must be available on both nodes (thus gfs2).
> 
> >

Hi Ulrich,

the migration I meant is the transition from my current setup (virtual
machines in raw files in a partition with a filesystem) to the one
anticipated for the cluster (virtual machines in blank logical volumes
without an fs). How can I do that? And can I expand the disks in the VMs
afterwards if necessary?
But the "other" migration (live migration of VMs) is of course also
interesting. Digimer wrote that if I have my VM in a blank logical volume
without an fs, placed on a SAN, I can live-migrate, because the process
of live migration takes care of access to the LV and I don't need a
cluster fs, just cLVM.

Bernd
   





[ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-08 Thread Ulrich Windl
>>> "Lentes, Bernd" wrote on 08.12.2015 at
09:13 in message <00a901d13190$5c6db3c0$15491b40$@helmholtz-muenchen.de>:
> Digimer wrote:
> 
>> >>> Should I install all vm's in one partition or every vm in a separate
>> >>> partition? The advantage of one vm per partition is that I don't
>> >>> need a cluster fs, right?
>> >>
>> >> I would put each VM on a dedicated LV and not have an FS between
>> the
>> >> VM and the host. The question then becomes; What is the PV? I use
>> >> clustered LVM to make sure all nodes are in sync, LVM-wise.
>> >
>> > Is this the setup you are running (without fs) ?
>> 
>> Yes, we use DRBD to replicate the storage and use the /dev/drbdX
>> device as the clustered LVM PV. We have one VG for the space (could
>> add a new DRBD resource later if needed...) and then create a dedicated
>> LV per VM.
>> We have, as I mentioned, one small LV formatted with gfs2 where we
>> store the VM's XML files (so that any change made to a VM is
>> immediately available to all nodes).
>> 
> 
> How can I migrate my current vm's? They are stored in raw files (one or
> two).
> How do I transfer them to a naked lv?

For migration the image must be available on both nodes (thus gfs2).
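Digimer's layout above (the DRBD device as clustered PV, one dedicated LV
per VM, one small gfs2 LV for the XML files) can be sketched as follows;
device, VG and cluster names are placeholders, and cLVM/DLM must already be
running on both nodes:

```shell
# Use the replicated DRBD device as the clustered physical volume:
pvcreate /dev/drbd0
vgcreate -c y vg_vms /dev/drbd0   # -c y marks the VG as clustered

# One dedicated LV per VM (no filesystem on it):
lvcreate -L 20G -n vm1 vg_vms

# A small shared gfs2 LV for the VM XML definitions:
lvcreate -L 1G -n xml vg_vms
mkfs.gfs2 -p lock_dlm -t mycluster:xml -j 2 /dev/vg_vms/xml
```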

> 
> Bernd
>
> 
> 


Re: [ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-08 Thread Andrei Borzenkov
On Tue, Dec 8, 2015 at 10:44 AM, Ulrich Windl
 wrote:
>>> Digimer wrote on 07.12.2015 at 22:40 in message
> <5665fcdc.1030...@alteeve.ca>:
> [...]
>> Node 1 looks up how to fence node 2, sees no delay and fences
>> immediately. Node 2 looks up how to fence node 1, sees a delay and
>> pauses. Node 2 will be dead long before the delay expires, ensuring that
>> node 2 always loses in such a case. If you have VMs on both nodes, then
>> no matter which node the delay is on, some servers will be interrupted.
>
> AFAIK, the cluster will try to migrate resources while fencing is pending
> but not yet complete. Is that true?
>

If by "migrate" you really mean "restart resources that were
located on the node that became inaccessible", I seriously hope the answer
is "no"; otherwise what would be the point of attempting fencing in the
first place?



[ClusterLabs] Antw: Re: design of a two-node cluster

2015-12-07 Thread Ulrich Windl
>>> Digimer wrote on 07.12.2015 at 22:40 in message
<5665fcdc.1030...@alteeve.ca>:
[...]
> Node 1 looks up how to fence node 2, sees no delay and fences
> immediately. Node 2 looks up how to fence node 1, sees a delay and
> pauses. Node 2 will be dead long before the delay expires, ensuring that
> node 2 always loses in such a case. If you have VMs on both nodes, then
> no matter which node the delay is on, some servers will be interrupted.

AFAIK, the cluster will try to migrate resources while fencing is pending
but not yet complete. Is that true?

[...]

Regards,
Ulrich


