Let me give the full picture of our solution; that should make the
discussion easier.
We are looking at providing N+1 redundancy for our application servers,
i.e. 1 standby for up to N active servers (currently N <= 5). Each server
will have some unique configuration. The standby will store
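(To make the discussion concrete, here is a minimal sketch of how such an
N+1 layout is often expressed with the crm shell. The resource and node
names, and the use of the ocf:heartbeat:anything agent, are my assumptions
for illustration only, not the actual configuration being described.)

    # One primitive per active application instance. Each instance prefers
    # its own node; the shared standby node is the common fallback.
    crm configure primitive app1 ocf:heartbeat:anything \
        params binfile=/usr/local/bin/app1 \
        op monitor interval=30s
    crm configure location app1-prefers-node1 app1 100: node1
    crm configure location app1-fallback-standby app1 50: standby
    # Repeat for app2 .. appN, all pointing at the same standby node.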
On 20/06/16 05:58 PM, Dimitri Maziuk wrote:
> On 06/20/2016 03:58 PM, Digimer wrote:
>
>> Then wouldn't it be a lot better to just run your services on both nodes
>> all the time and take HA out of the picture? Availability is predicated
>> on building the simplest system possible. If you have no
On 06/20/2016 03:58 PM, Digimer wrote:
> Then wouldn't it be a lot better to just run your services on both nodes
> all the time and take HA out of the picture? Availability is predicated
> on building the simplest system possible. If you have no concerns about
> uncoordinated access, then make
On 20/06/16 09:30 AM, Nikhil Utane wrote:
> Hi,
>
> For our solution we are making a conscious choice to not use
> quorum/fencing as for us service availability is more important than
> having 2 nodes take up the same active role. Split-brain is not an issue
> for us (at least I think that way)
On 06/20/2016 07:45 AM, ma...@nucleus.it wrote:
> Hi,
> I have a two-node cluster with some VMs (Pacemaker resources) running on
> the two hypervisors:
> pacemaker-1.0.10
> corosync-1.3.0
>
> I need to do maintenance stuff, so I need to:
> - put the cluster into maintenance so the cluster doesn't
On 06/18/2016 05:15 AM, Ferenc Wágner wrote:
> Hi,
>
> Could somebody please elaborate a little why the pacemaker systemd
> service file contains "Restart=on-failure"? I mean that a failed node
> gets fenced anyway, so most of the time this would be a futile effort.
> On the other hand, one
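(For what it's worth: if someone wants to experiment with a different
restart policy locally, the generic systemd mechanism is a drop-in override
rather than editing the packaged unit. This is only a sketch of that
mechanism, not an explanation of why upstream chose on-failure.)

    # create a drop-in override for the packaged unit
    systemctl edit pacemaker.service
    # in the editor that opens, add:
    #   [Service]
    #   Restart=no
    # then reload unit definitions if your systemd does not do it automatically
    systemctl daemon-reload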
On 2016-06-20 09:13, Jehan-Guillaume de Rorthais wrote:
I've heard this kind of argument multiple times in the field, but sooner or
later these clusters actually had a split-brain scenario with clients
connected on both sides, some very bad corruption, data loss, etc.
I'm sure it's a very
On Mon, 20 Jun 2016 19:00:12 +0530,
Nikhil Utane wrote:
> Hi,
>
> For our solution we are making a conscious choice to not use quorum/fencing
> as for us service availability is more important than having 2 nodes take
> up the same active role. Split-brain is not
Hi,
For our solution we are making a conscious choice to not use quorum/fencing
as for us service availability is more important than having 2 nodes take
up the same active role. Split-brain is not an issue for us (at least I
think that way) since we have a second line of defense. We have clients
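(For reference, the settings that usually correspond to "no quorum/fencing"
in Pacemaker are the cluster properties below, shown with the crm shell;
pcs has equivalent "pcs property set" commands. Whether running this way is
wise is exactly what the other replies question.)

    # disable fencing/STONITH entirely
    crm configure property stonith-enabled=false
    # keep running resources even in a partition that has lost quorum
    crm configure property no-quorum-policy=ignore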
Hi,
I have a two-node cluster with some VMs (Pacemaker resources) running on
the two hypervisors:
pacemaker-1.0.10
corosync-1.3.0
I need to do maintenance stuff, so I need to:
- put the cluster into maintenance so the cluster doesn't
touch/start/stop/monitor the VMs
- update the VMs
- stop the
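(A commonly used sequence for this kind of maintenance is sketched below
with the crm shell. With versions as old as pacemaker-1.0.10/corosync-1.3.0,
please double-check that maintenance-mode behaves as expected before
relying on it.)

    # tell Pacemaker to stop starting/stopping/monitoring resources
    crm configure property maintenance-mode=true
    # ... update the VMs, do the hypervisor work ...
    # hand control back to the cluster
    crm configure property maintenance-mode=false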