Hello,
sorry for the late reply; moving data centers tends to keep one busy.
I looked at the PR, and while it works and is certainly an
improvement, it wouldn't help much in my case.
The biggest issue is fuser and its exponential slowdown, and the RA
still uses it.
What I did was to
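The fuser bottleneck comes from it walking every process's file descriptor table; where only "which PIDs hold this one file open" is needed, a resource agent can do that scan itself. A minimal sketch, Linux-only and purely illustrative (the function name is not taken from any actual agent):

```shell
#!/bin/sh
# Minimal sketch, Linux-only: list PIDs that hold a given file open by
# reading /proc/<pid>/fd symlinks directly instead of calling fuser(1).
# "pids_using" is an illustrative name, not from any real RA.
pids_using() {
    target=$(readlink -f -- "$1") || return 1
    for fd in /proc/[0-9]*/fd/*; do
        # fd dirs of other users' processes are unreadable; readlink
        # then fails silently and the entry is skipped.
        if [ "$(readlink -f -- "$fd" 2>/dev/null)" = "$target" ]; then
            pid=${fd#/proc/}
            echo "${pid%%/*}"
        fi
    done | sort -u
}
```

This only sees open file descriptors (not mmaps or working directories), so it is a narrower check than `fuser -m`; whether it is faster in a given RA depends on how often fuser was being invoked.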
> Kristoffer Gronlund wrote:
>> Adam Spiers writes:
>>
>>> - The whole cluster is shut down cleanly.
>>>
>>> - The whole cluster is then started up again. (Side question: what
>>> happens if the last node to shut down is not the first to start up?
>>>
On 29/11/17 22:00 +0100, Jan Pokorný wrote:
> On 28/11/17 22:35 +0300, Andrei Borzenkov wrote:
>> On 28.11.2017 13:01, Jan Pokorný wrote:
>>> On 27/11/17 17:43 +0300, Andrei Borzenkov wrote:
>>>> Sent from iPhone
>>>>
>>>> On 27 Nov 2017, at 14:36, Ferenc Wágner wrote:
On 11/29/2017 09:09 PM, Kristoffer Grönlund wrote:
> Adam Spiers writes:
>
>> OK, so reading between the lines, if we don't want our cluster's
>> latest config changes accidentally discarded during a complete cluster
>> reboot, we should ensure that the last man standing is also the first
>> one booted up - right?
>
> That would make
On 11/28/2017 07:41 PM, Andrei Borzenkov wrote:
> On 28.11.2017 10:45, Ramann, Björn wrote:
>> hi@all,
>>
>> in my configuration, the 1st node runs on ESX1 and the second runs on
>> ESX2. Now I'm looking for a way to configure cluster fencing/stonith
>> with two ESX servers - is this possible?
> if you
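For the two-ESX-host question above, one common approach is a fence_vmware_soap stonith device per hypervisor, each mapped to the node whose VM runs there. A hypothetical sketch only; hostnames, credentials, and VM names are all placeholders:

```shell
# One fence device per ESX host (placeholder names and credentials).
# pcmk_host_map maps the cluster node name to the VM name on that host.
pcs stonith create fence-esx1 fence_vmware_soap \
    ip=esx1.example.com username=fenceuser password=secret \
    ssl=1 ssl_insecure=1 pcmk_host_map="node1:node1-vm"
pcs stonith create fence-esx2 fence_vmware_soap \
    ip=esx2.example.com username=fenceuser password=secret \
    ssl=1 ssl_insecure=1 pcmk_host_map="node2:node2-vm"
# Prefer running each fence device away from the node it is meant to fence:
pcs constraint location fence-esx1 avoids node1
pcs constraint location fence-esx2 avoids node2
```

Note the caveat: if the VMs can vMotion between hosts, the static host-to-VM mapping breaks, and pointing a single device at vCenter may be simpler.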
On 11/29/2017 04:54 PM, Ken Gaillot wrote:
On Wed, 2017-11-29 at 14:22, Adam Spiers wrote:
> The same questions apply if this troublesome node was actually a
> remote node running pacemaker_remoted, rather than the 5th node in the
> cluster.

Remote nodes don't join at the crmd level as
On Wed, 2017-11-29 at 14:22, Adam Spiers wrote:
> Hi all,
>
> A colleague has been valiantly trying to help me belatedly learn
> about
> the intricacies of startup fencing, but I'm still not fully
> understanding some of the finer points of the behaviour.
>
> The documentation on the
Adam Spiers writes:
> - The whole cluster is shut down cleanly.
>
> - The whole cluster is then started up again. (Side question: what
> happens if the last node to shut down is not the first to start up?
> How will the cluster ensure it has the most recent version of the
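On the side question: every CIB carries three version fields on its root element (admin_epoch, epoch, num_updates), and when nodes with different copies rejoin, the copy with the highest tuple, compared in that order of precedence, wins. They are visible in the output of `cibadmin --query`. An illustrative sketch of that comparison (not Pacemaker source code):

```shell
#!/bin/sh
# Illustrative sketch: how CIB versions are ordered when nodes rejoin.
# Precedence is admin_epoch, then epoch, then num_updates.
cib_newer() {
    # usage: cib_newer A_admin A_epoch A_num B_admin B_epoch B_num
    # returns 0 (true) if version A is newer than version B
    if [ "$1" -ne "$4" ]; then [ "$1" -gt "$4" ]; return; fi
    if [ "$2" -ne "$5" ]; then [ "$2" -gt "$5" ]; return; fi
    [ "$3" -gt "$6" ]
}
```

epoch increments on configuration changes and num_updates on status updates, while admin_epoch normally stays 0 unless an administrator bumps it by hand, which is the escape hatch for forcing an older configuration to win.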
On Tue, 2017-11-28 at 11:23 -0800, Aaron Cody wrote:
> I'm trying to build all of the pacemaker/corosync components from
> source instead of using the redhat rpms - I have a few questions.
>
> I'm building on redhat 7.2 and so far I have been able to build:
>
> libqb 1.0.2
> pacemaker 1.1.18
> corosync 2.4.3
> resource-agents 4.0.1
>
> however I have not been able to
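For reference, a hedged sketch of a from-source build for the versions listed above, in dependency order (libqb before corosync before pacemaker). It assumes the sources are already unpacked into directories named after the upstream repos and that the build dependencies are installed; exact configure flags vary by distribution:

```shell
#!/bin/sh
# Hypothetical build-order sketch; directory names and configure flags
# are assumptions, not taken from any official build recipe.
set -e
for pkg in libqb corosync pacemaker resource-agents; do
    (
        cd "$pkg"
        ./autogen.sh          # generates ./configure from the repo sources
        ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
        make
        sudo make install
    )
done
sudo ldconfig   # pick up the freshly installed libqb/corosync libraries
```

Building pacemaker before corosync, as the list above suggests was done, can fail to enable corosync support at configure time, so rebuilding pacemaker after corosync is installed is the safer order.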