On Tue, 2023-04-11 at 17:31 +0300, Miro Igov wrote:
> I fixed the issue by changing the location definition from:
>  
> location intranet-ip_on_any_nginx intranet-ip \
>         rule -inf: opa-nginx_1_active eq 0 \
>         rule -inf: opa-nginx_2_active eq 0
>  
> To:
>  
> location intranet-ip_on_any_nginx intranet-ip \
>         rule opa-nginx_1_active eq 1 \
>         rule opa-nginx_2_active eq 1
>  
> Now it works fine, and the constraint shows up in the output of:
> crm res constraint intranet-ip

Ah, I suspect the issue was that the original constraint compared only
against 0, when initially (before the resources ever start) the
attribute is undefined.
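
You can check whether the attribute is actually defined on a particular
node with something like this (using the attribute and node names from
your config):

# attrd_updater --query --name opa-nginx_1_active --node intranet-test2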

Note that your new constraint says that the IP *prefers* to run where
the attribute is 1, but if there are no nodes with the attribute set to
1, it can still start somewhere. On the other hand, bans are mandatory,
so you may want to go back to the ban form and just specify the
comparison as "ne 1".
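
For example, something along these lines (untested, adapted from your
original constraint) would make the bans mandatory again while also
matching nodes where the attribute has never been set:

location intranet-ip_on_any_nginx intranet-ip \
        rule -inf: opa-nginx_1_active ne 1 \
        rule -inf: opa-nginx_2_active ne 1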

>
> From: Users <users-boun...@clusterlabs.org> On Behalf Of Miro Igov
> Sent: 10 April 2023 14:19
> To: users@clusterlabs.org
> Subject: [ClusterLabs] Location not working
>  
> Hello,
> I have a resource with location constraint set to:
>  
> location intranet-ip_on_any_nginx intranet-ip \
>         rule -inf: opa-nginx_1_active eq 0 \
>         rule -inf: opa-nginx_2_active eq 0
>  
> In syslog I see the attribute transition:
> Apr 10 12:11:02 intranet-test2 pacemaker-attrd[1511]:  notice: Setting opa-nginx_1_active[intranet-test1]: 1 -> 0
>  
> Current cluster status is:
>  
> Node List:
>   * Online: [ intranet-test1 intranet-test2 nas-sync-test1 nas-sync-test2 ]
>  
>   * stonith-sbd         (stonith:external/sbd):          Started intranet-test2
>   * admin-ip            (ocf::heartbeat:IPaddr2):        Started nas-sync-test2
>   * cron_symlink        (ocf::heartbeat:symlink):        Started intranet-test1
>   * intranet-ip         (ocf::heartbeat:IPaddr2):        Started intranet-test1
>   * mysql_1             (systemd:mariadb@intranet-test1):        Started intranet-test1
>   * mysql_2             (systemd:mariadb@intranet-test2):        Started intranet-test2
>   * nginx_1             (systemd:nginx@intranet-test1):  Stopped
>   * nginx_1_active      (ocf::pacemaker:attribute):      Stopped
>   * nginx_2             (systemd:nginx@intranet-test2):  Started intranet-test2
>   * nginx_2_active      (ocf::pacemaker:attribute):      Started intranet-test2
>   * php_1               (systemd:php5.6-fpm@intranet-test1):     Started intranet-test1
>   * php_2               (systemd:php5.6-fpm@intranet-test2):     Started intranet-test2
>   * data_1              (ocf::heartbeat:Filesystem):     Stopped
>   * data_2              (ocf::heartbeat:Filesystem):     Started intranet-test2
>   * nfs_export_1        (ocf::heartbeat:exportfs):       Stopped
>   * nfs_export_2        (ocf::heartbeat:exportfs):       Started nas-sync-test2
>   * nfs_server_1        (systemd:nfs-server@nas-sync-test1):     Stopped
>   * nfs_server_2        (systemd:nfs-server@nas-sync-test2):     Started nas-sync-test2
>  
> Failed Resource Actions:
>   * nfs_server_1_start_0 on nas-sync-test1 'error' (1): call=95, status='complete', exitreason='', last-rc-change='2023-04-10 12:35:12 +02:00', queued=0ms, exec=209ms
>  
>  
> Why is intranet-ip located on intranet-test1 while opa-nginx_1_active
> is 0?
>  
> # crm res constraint intranet-ip
> cron_symlink                    (score=INFINITY, id=c_cron_symlink_on_intranet-ip)
> * intranet-ip
>   : Node nas-sync-test2         (score=-INFINITY, id=intranet-ip_loc-rule)
>   : Node nas-sync-test1         (score=-INFINITY, id=intranet-ip_loc-rule)
>  
> Why is there no constraint entry for the intranet-ip_on_any_nginx
> location?
>  
-- 
Ken Gaillot <kgail...@redhat.com>

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
