>>> Antony Stone wrote on 04.08.2021 at 23:01 in
message <202108042301.19895.antony.st...@ha.open.source.it>:
> On Wednesday 04 August 2021 at 22:06:39, Frank D. Engel, Jr. wrote:
>
>> There is no safe way to do what you are trying to do.
>>
>> If the resource is on cluster A and contact is
>>> Antony Stone wrote on 04.08.2021 at 21:27 in
message <202108042127.43916.antony.st...@ha.open.source.it>:
> On Wednesday 04 August 2021 at 20:57:49, Strahil Nikolov wrote:
>
>> That's why you need a qdisk at a third location, so you will have 7 votes
>> in total. When 3 nodes in cityA die,
On 05.08.2021 00:01, Antony Stone wrote:
> On Wednesday 04 August 2021 at 22:06:39, Frank D. Engel, Jr. wrote:
>
>> There is no safe way to do what you are trying to do.
>>
>> If the resource is on cluster A and contact is lost between clusters A
>> and B due to a network failure, how does
I still can't understand why the whole cluster will fail when only 3 nodes are
down and a qdisk is used.
CityA -> 3 nodes to run packageA -> 3 votes
CityB -> 3 nodes to run packageB -> 3 votes
CityC -> 1 node which cannot run any package (qdisk) -> 1 vote
Max votes: 7
Quorum: 4
As long as one city
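The vote arithmetic quoted above can be checked with a few lines of shell (an editorial sketch; corosync's votequorum does this calculation internally):

```shell
# 3 votes (CityA) + 3 votes (CityB) + 1 vote (qdisk in CityC) = 7 total.
total_votes=7
quorum=$(( total_votes / 2 + 1 ))    # strict majority of 7 is 4

# If all 3 CityA nodes die, CityB plus the qdisk still hold 3 + 1 votes.
surviving=$(( 3 + 1 ))
echo "quorum=$quorum surviving=$surviving"   # prints: quorum=4 surviving=4
```

The surviving site plus the quorum device still meet quorum, which is the point being argued.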
In theory, if you had an independent voting infrastructure among the three
clusters, effectively a second cluster layer interconnecting them to support
resource D, then you could have D running on one of the clusters so long as
at least two of them can
On Wednesday 04 August 2021 at 22:06:39, Frank D. Engel, Jr. wrote:
> There is no safe way to do what you are trying to do.
>
> If the resource is on cluster A and contact is lost between clusters A
> and B due to a network failure, how does cluster B know if the resource
> is still running on
There is no safe way to do what you are trying to do.
If the resource is on cluster A and contact is lost between clusters A
and B due to a network failure, how does cluster B know if the resource
is still running on cluster A or not?
It has no way of knowing if cluster A is even up and
On Wednesday 04 August 2021 at 20:57:49, Strahil Nikolov wrote:
> That's why you need a qdisk at a third location, so you will have 7 votes in
> total. When 3 nodes in cityA die, all resources will be started on the
> remaining 3 nodes.
I think I have not explained this properly.
I have three
That's why you need a qdisk at a third location, so you will have 7 votes in
total. When 3 nodes in cityA die, all resources will be started on the
remaining 3 nodes.

Best Regards,
Strahil Nikolov
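On current Pacemaker/Corosync stacks the old qdisk role is filled by a quorum device (corosync-qdevice) at the third site. A hedged sketch of that setup, assuming a qnetd host named qnetd.example.com:

```shell
# On the arbiter node at the third site (CityC): run the qnetd daemon.
# (Package and service names may vary by distribution.)
systemctl enable --now corosync-qnetd

# On one cluster node: point the 6-node cluster at the quorum device.
# ffsplit gives exactly one partition the extra vote on a 50/50 split.
pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit

# Verify the vote count (expected here: 7 total votes, quorum 4).
pcs quorum status
```

This is a configuration fragment, not a complete walkthrough; firewalling and certificate setup between the nodes and qnetd are omitted.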
On Wed, Aug 4, 2021 at 17:23, Antony Stone wrote:
> On Wednesday 04 August 2021 at 16:07:39,
Hello.
Please forgive the length of this email, but I wanted to provide as much
detail as possible.
I'm trying to set up a cluster of two nodes for my service.
I have a problem with a scenario where the network between two nodes gets
broken and they can no longer see each other.
This causes
On Wednesday 04 August 2021 at 16:07:39, Andrei Borzenkov wrote:
> On Wed, Aug 4, 2021 at 5:03 PM Antony Stone wrote:
> > On Wednesday 04 August 2021 at 13:31:12, Andrei Borzenkov wrote:
> > > On Wed, Aug 4, 2021 at 1:48 PM Antony Stone wrote:
> > > > On Tuesday 03 August 2021 at 12:12:03,
On Wed, Aug 4, 2021 at 5:03 PM Antony Stone wrote:
>
> On Wednesday 04 August 2021 at 13:31:12, Andrei Borzenkov wrote:
>
> > On Wed, Aug 4, 2021 at 1:48 PM Antony Stone wrote:
> > > On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
> > > > Won't something like this work ?
On Wednesday 04 August 2021 at 13:31:12, Andrei Borzenkov wrote:
> On Wed, Aug 4, 2021 at 1:48 PM Antony Stone wrote:
> > On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
> > > Won't something like this work ? Each node in LA will have same score
> > > of 5000, while other
Hi Strahil,
On Wed, Aug 04, 2021 at 10:17:26AM +, Strahil Nikolov wrote:
> When you move/migrate resources without the --lifetime option, cluster stack
> will set +|-INFINITY on the host. (+ -> when migrating to, - -> when
> migrating away without specifying destination host)
> Take a look
On Wed, Aug 4, 2021 at 1:48 PM Antony Stone wrote:
>
> On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
>
> > Won't something like this work ? Each node in LA will have same score of
> > 5000, while other cities will be -5000.
> >
> > pcs constraint location DummyRes1 rule
On Tuesday 03 August 2021 at 12:12:03, Strahil Nikolov via Users wrote:
> Won't something like this work ? Each node in LA will have same score of
> 5000, while other cities will be -5000.
>
> pcs constraint location DummyRes1 rule score=5000 city eq LA
> pcs constraint location DummyRes1 rule
When you move/migrate resources without the --lifetime option, the cluster
stack will set a +/-INFINITY location constraint on the host (+ when migrating
to a host, - when migrating away without specifying a destination host).
Take a look at:
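A short illustration of that behaviour (resource and node names are hypothetical):

```shell
# Moving without a lifetime leaves a permanent +INFINITY location
# constraint pinning the resource to the target node:
pcs resource move DummyRes1 node2

# List location constraints; the leftover entry for DummyRes1 scores INFINITY:
pcs constraint location

# Drop the leftover constraint so the cluster can place the resource freely:
pcs resource clear DummyRes1
```

Without the final `clear`, the resource can never fail back or be rebalanced away from node2.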
I am pleased to announce the latest maintenance release of Corosync
3.1.5, available immediately from the GitHub releases section at
https://github.com/corosync/corosync/releases or from our website at
http://build.clusterlabs.org/corosync/releases/.
This release contains important bugfixes of cfgtool
On 03/08/2021 10:40, Antony Stone wrote:
On Tuesday 11 May 2021 at 12:56:01, Strahil Nikolov wrote:
Here is the example I had promised:
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY
# Don't run on any node that is not in LA
pcs constraint location DummyRes1 rule
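The promised example is truncated here; piecing it together with the commands quoted earlier in the thread, it plausibly continues along these lines (the -5000 rule is an assumption based on the scores mentioned above):

```shell
# Tag each node with the city it lives in (names from the thread):
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY

# Prefer LA nodes strongly; penalise every node that is not in LA:
pcs constraint location DummyRes1 rule score=5000 city eq LA
pcs constraint location DummyRes1 rule score=-5000 city ne LA
```

Note that finite scores express preference only; to forbid non-LA placement outright the second rule would use score=-INFINITY instead.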