Hmm. I will then work towards bringing this in. Thanks for your input.
On Wed, Jun 22, 2016 at 10:44 AM, Digimer wrote:
> On 22/06/16 01:07 AM, Nikhil Utane wrote:
> > I don't get it. Pacemaker + Corosync is providing me so much
> > functionality.
> > For example, if we leave
On 22/06/16 01:09 AM, Nikhil Utane wrote:
> We are not using virtual IP. There is a separate discovery mechanism
> between the server and client. The client will reach out to new server
> only if it is incommunicado with the old one.
That's fine, but it really doesn't change anything. Whether
I don't get it. Pacemaker + Corosync is providing me so much
functionality.
For example, if we leave out the condition of split-brain for a while, then
it provides:
1) Discovery and cluster formation
2) Synchronization of data
3) Heartbeat mechanism
4) Swift failover of the resource
5) Guarantee
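For anyone following along, the features listed above are what a stock Pacemaker/Corosync stack gives you out of the box. As a rough sketch only (the node names, cluster name, and Dummy resource below are placeholders, not anything from this thread, and the pcs syntax shown is the pcs 0.10+ form):

```shell
# Minimal sketch of bringing up a Pacemaker/Corosync cluster that covers
# the points above. "node1..node3", "mycluster", and the Dummy resource
# are placeholders.

# 1) Discovery and cluster formation (Corosync membership):
pcs host auth node1 node2 node3 -u hacluster
pcs cluster setup mycluster node1 node2 node3
pcs cluster start --all

# 2) Synchronization of data: the CIB (cluster configuration) is
#    replicated to all nodes by Pacemaker automatically.

# 3) Heartbeat mechanism: Corosync's token protocol, tunable via the
#    totem token timeout if the defaults don't fit.

# 4) Swift failover: define a resource with a monitor operation;
#    Pacemaker recovers it on another node when its host fails.
pcs resource create my_app ocf:pacemaker:Dummy op monitor interval=10s
```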
On 06/20/2016 11:33 PM, Nikhil Utane wrote:
> Let me give the full picture about our solution. It will then make it
> easy to have the discussion.
>
> We are looking at providing N + 1 Redundancy to our application servers,
> > i.e. 1 standby for up to N active (currently N<=5). Each server will
On 21/06/16 01:27 PM, Dimitri Maziuk wrote:
> On 06/21/2016 12:13 PM, Andrei Borzenkov wrote:
>
>> You should not run pacemaker without some sort of fencing. This need not
>> be network-controlled power socket (and tiebreaker is not directly
>> related to fencing).
>
> Yes it can be
On 21.06.2016 20:05, Dimitri Maziuk wrote:
> On 06/21/2016 11:47 AM, Digimer wrote:
>
>> If you don't need to coordinate services/access, you don't need HA.
>>
>> If you do need to coordinate services/access, you need fencing.
>
> So what you're saying is we *cannot* run a pacemaker cluster without
On 06/21/2016 11:47 AM, Digimer wrote:
> If you don't need to coordinate services/access, you don't need HA.
>
> If you do need to coordinate services/access, you need fencing.
So what you're saying is we *cannot* run a pacemaker cluster without a
tiebreaker node *and* a network-controlled
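Worth noting for the question above: a tiebreaker does not have to be a full third cluster node, and fencing does not have to be a network-controlled power socket. One common arrangement is IPMI/BMC fencing plus a corosync-qdevice arbiter. A sketch only, all hostnames and credentials are placeholders, and fence-agent parameter names vary between agent versions (check `pcs stonith describe fence_ipmilan` on your system):

```shell
# Fencing via each node's IPMI/BMC interface (no smart PDU required).
# Hostnames/credentials below are placeholders.
pcs stonith create fence_node1 fence_ipmilan \
    ip=bmc-node1.example.com username=admin password=secret \
    pcmk_host_list=node1
pcs stonith create fence_node2 fence_ipmilan \
    ip=bmc-node2.example.com username=admin password=secret \
    pcmk_host_list=node2

# Tiebreaker via corosync-qdevice running on a small arbiter host that
# is not a full cluster node:
pcs quorum device add model net host=qdevice.example.com algorithm=ffsplit
```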
On 21/06/16 10:57 AM, Dmitri Maziuk wrote:
> On 2016-06-20 17:19, Digimer wrote:
>
>> Nikhil indicated that they could switch where traffic went up-stream
>> without issue, if I understood properly.
>
> They have an interesting setup, but that notwithstanding: if split
> brain happens, some
On 2016-06-20 17:19, Digimer wrote:
Nikhil indicated that they could switch where traffic went up-stream
without issue, if I understood properly.
They have an interesting setup, but that notwithstanding: if split
brain happens, some clients will connect to the "old master" and some to
the "new
Let me give the full picture about our solution. It will then make it easy
to have the discussion.
We are looking at providing N + 1 Redundancy to our application servers,
i.e. 1 standby for up to N active (currently N<=5). Each server will have
some unique configuration. The standby will store
On 20/06/16 05:58 PM, Dimitri Maziuk wrote:
> On 06/20/2016 03:58 PM, Digimer wrote:
>
>> Then wouldn't it be a lot better to just run your services on both nodes
>> all the time and take HA out of the picture? Availability is predicated
>> on building the simplest system possible. If you have no
On 06/20/2016 03:58 PM, Digimer wrote:
> Then wouldn't it be a lot better to just run your services on both nodes
> all the time and take HA out of the picture? Availability is predicated
> on building the simplest system possible. If you have no concerns about
> uncoordinated access, then make
On 20/06/16 09:30 AM, Nikhil Utane wrote:
> Hi,
>
> For our solution we are making a conscious choice to not use
> quorum/fencing as for us service availability is more important than
> having 2 nodes take up the same active role. Split-brain is not an issue
> for us (at least I think that way)
On 2016-06-20 09:13, Jehan-Guillaume de Rorthais wrote:
I've heard this kind of argument many times in the field, but sooner or
later these clusters actually had a split-brain scenario, with clients
connected on both sides, some very bad corruption, data loss, etc.
I'm sure it's a very
On Mon, 20 Jun 2016 19:00:12 +0530, Nikhil Utane wrote:
> Hi,
>
> For our solution we are making a conscious choice to not use quorum/fencing
> as for us service availability is more important than having 2 nodes take
> up the same active role. Split-brain is not
Hi,
For our solution we are making a conscious choice to not use quorum/fencing
as for us service availability is more important than having 2 nodes take
up the same active role. Split-brain is not an issue for us (at least I
think that way) since we have a second line of defense. We have clients
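For reference, the "no quorum, no fencing" choice described above corresponds to standard Pacemaker knobs. The property names below are real Pacemaker cluster options; everything else is a sketch, not the poster's actual configuration, and the rest of this thread argues strongly against running this way:

```shell
# What "no quorum/fencing" looks like in Pacemaker terms (the thread
# advises against this). The property names are standard Pacemaker
# cluster options.
pcs property set stonith-enabled=false      # disable fencing entirely
pcs property set no-quorum-policy=ignore    # keep resources running without quorum
```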