24.10.2016 14:22, Nikhil Utane wrote:
> I had set resource utilization to 1. Even then it scheduled 2 resources.
> Doesn't it honor utilization resources if it doesn't find a free node?
To make utilization work you need to set both:
* node overall capacity (per-node utilization attribute)
* resource utilization (per-resource utilization attribute)
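For reference, those pieces could be wired up with pcs roughly like this (a sketch; the node names, resource name, and the attribute name "capacity" are placeholders, and exact pcs syntax varies by version):

```shell
# Tell the scheduler to take utilization into account when placing resources
pcs property set placement-strategy=utilization

# Per-node overall capacity (the attribute name is arbitrary, but must
# match between node and resource utilization entries)
pcs node utilization node1 capacity=1
pcs node utilization node2 capacity=1

# Per-resource requirement: this resource consumes one unit of capacity
pcs resource utilization my_rsc capacity=1
```

With capacity=1 on each node and capacity=1 per resource, a node that already hosts one such resource has no capacity left for a second.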
On Mon, Oct 24, 2016 at 4:43 PM, Vladislav Bogdanov wrote:
> 24.10.2016 14:04, Nikhil Utane wrote:
That is what happened here :(.
When 2 nodes went down, two resources got scheduled on single node.
Isn't there any way to stop this from happening? The colocation constraint is
not helping.
-Regards
Nikhil
On Sat, Oct 22, 2016 at 12:57 AM, Vladislav Bogdanov wrote:
21.10.2016 19:34, Andrei Borzenkov wrote:
> 14.10.2016 10:39, Vladislav Bogdanov wrote:
>
> use of utilization (balanced strategy) has one caveat: resources are
> not moved just because the utilization of one node is lower when nodes
> have the same allocation score for the resource. So, after the
> simultaneous outage of two nodes in a
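The allocation scores mentioned above can be inspected without changing anything, for example with crm_simulate run on any cluster node (a read-only check):

```shell
# -L reads the live cluster state, -s prints the scheduler's allocation
# score for every resource/node pair; nothing is modified.
crm_simulate -sL
```

Equal scores across nodes are the situation described above where utilization alone will not trigger a move.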
On 10/17/2016 11:29 PM, Nikhil Utane wrote:
Thanks Ken.
I will give it a shot.
http://oss.clusterlabs.org/pipermail/pacemaker/2011-August/011271.html
On this thread, if I interpret it correctly, his problem was solved when he
swapped the anti-location constraint
From (mapping to my example)
cu_2 with cu_4 (score:-INFINITY)
cu_3 with cu_4
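For reference, constraints like the two listed above could be created with pcs (a sketch; resource names are taken from the example, and exact pcs syntax varies by version):

```shell
# Forbid cu_2 and cu_3 from running on the node that hosts cu_4
pcs constraint colocation add cu_2 with cu_4 -INFINITY
pcs constraint colocation add cu_3 with cu_4 -INFINITY

# Verify what was configured
pcs constraint colocation show
```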
On 10/17/2016 09:55 AM, Nikhil Utane wrote:
I see these prints.
pengine: info: rsc_merge_weights: cu_4: Rolling back scores from cu_3
pengine: debug: native_assign_node: Assigning Redun_CU4_Wb30 to cu_4
pengine: info: rsc_merge_weights: cu_3: Rolling back scores from cu_2
pengine: debug: native_assign_node: Assigning
On 10/14/2016 06:56 AM, Nikhil Utane wrote:
I feel the behavior has become worse after adding the reverse co-location
constraint.
I started with this, and it was all I wanted it to be:
cu_5 <-> Redund_CU1_WB30
cu_4 <-> Redund_CU2_WB30
cu_3 <-> Redund_CU3_WB30
cu_2 <-> Redund_CU5_WB30
However for some reason pacemaker decided to move cu_2 from
Hi,
Thank you for the responses so far.
I added reverse colocation as well. However, I am seeing some other issue in
resource movement that I am analyzing.
Thinking further on this, why doesn't "a not with b" imply "b not with a"?
Wouldn't putting "b with a" violate "a not with b"?
Can
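The directionality question above is why the thread experiments with a reverse constraint: in a colocation rule the first-named resource is the dependent one and is placed after the other, so the two directions are evaluated differently even though both forbid co-location. Stating both explicitly might look like this (a sketch in pcs syntax, using the resource names from the example):

```shell
# cu_2 is the dependent resource here: it is placed after cu_4 and must
# avoid cu_4's node...
pcs constraint colocation add cu_2 with cu_4 -INFINITY
# ...and the reverse rule makes cu_4 likewise avoid cu_2's node
pcs constraint colocation add cu_4 with cu_2 -INFINITY
```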
On October 14, 2016 10:13:17 AM GMT+03:00, Ulrich Windl wrote:
> Nikhil Utane wrote on 13.10.2016 at 16:43 in message:
>> Ulrich,
>>
>> I have 4