Re: [ClusterLabs] VirtualDomain - unable to migrate

2022-01-05 Thread Ondrej Mular
On Tue, 4 Jan 2022 at 16:20, Ken Gaillot  wrote:
>
> On Wed, 2021-12-29 at 17:28 +, lejeczek via Users wrote:
> > Hi guys
> >
> > I'm having problems with a cluster on new CentOS Stream 9 and
> > I'd be glad if you could share your thoughts.
> >
> > -> $ pcs resource move c8kubermaster2 swir
> > Location constraint to move resource 'c8kubermaster2' has been created
> > Waiting for the cluster to apply configuration changes...
> > Location constraint created to move resource 'c8kubermaster2' has been removed
> > Waiting for the cluster to apply configuration changes...
> > Error: resource 'c8kubermaster2' is running on node 'whale'
> > Error: Errors have occurred, therefore pcs is unable to continue
>
> pcs on CS9 moves the resource as normal, then runs a simulation to see
> what would happen if it removed the move-related constraint. If nothing
> would change (e.g. stickiness will keep the resource where it is), then
> it goes ahead and removes the constraint.
>
> I'm not sure what the error messages mean for that new approach. The
> pcs devs may be able to chime in.
I believe this is caused by a bug in the new implementation of the
`pcs resource move` command. We have a fix for this almost ready,
though some more testing is still needed. As you have already filed a
Bugzilla report [0], let's track it there.

And thanks for bringing this up; it helped us uncover a weird edge
case in the new implementation.

[0]: https://bugzilla.redhat.com/show_bug.cgi?id=2037218
>
> As a workaround, you can use crm_resource --move, which always leaves
> the constraint there.
The old move command is also still available as `pcs resource move-with-constraint`.
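
For anyone who wants that older behaviour until the fix lands, a minimal
sketch (using the resource and node names from this thread; the constraint
then has to be removed by hand when it is no longer wanted):

-> $ pcs resource move-with-constraint c8kubermaster2 swir
-> $ pcs constraint location --full      # shows the location constraint it created
-> $ pcs resource clear c8kubermaster2   # remove it when no longer needed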
>
> > Not much in pacemaker logs, perhaps nothing at all.
> > VM does migrate with 'virsh' just fine.
> >
> > -> $ pcs resource config c8kubermaster1
> >   Resource: c8kubermaster1 (class=ocf provider=heartbeat type=VirtualDomain)
> >    Attributes: config=/var/lib/pacemaker/conf.d/c8kubermaster1.xml hypervisor=qemu:///system migration_transport=ssh
> >    Meta Attrs: allow-migrate=true failure-timeout=30s
> >    Operations: migrate_from interval=0s timeout=90s (c8kubermaster1-migrate_from-interval-0s)
> >                migrate_to interval=0s timeout=90s (c8kubermaster1-migrate_to-interval-0s)
> >                monitor interval=30s (c8kubermaster1-monitor-interval-30s)
> >                start interval=0s timeout=60s (c8kubermaster1-start-interval-0s)
> >                stop interval=0s timeout=60s (c8kubermaster1-stop-interval-0s)
> >
> > Any and all suggestions & thoughts are much appreciated.
> > many thanks, L
>
> --
> Ken Gaillot 
>



Re: [ClusterLabs] VirtualDomain - unable to migrate

2022-01-04 Thread Ken Gaillot
On Wed, 2021-12-29 at 17:28 +, lejeczek via Users wrote:
> Hi guys
> 
> I'm having problems with a cluster on new CentOS Stream 9 and 
> I'd be glad if you could share your thoughts.
> 
> -> $ pcs resource move c8kubermaster2 swir
> Location constraint to move resource 'c8kubermaster2' has been created
> Waiting for the cluster to apply configuration changes...
> Location constraint created to move resource 'c8kubermaster2' has been removed
> Waiting for the cluster to apply configuration changes...
> Error: resource 'c8kubermaster2' is running on node 'whale'
> Error: Errors have occurred, therefore pcs is unable to continue

pcs on CS9 moves the resource as normal, then runs a simulation to see
what would happen if it removed the move-related constraint. If nothing
would change (e.g. stickiness will keep the resource where it is), then
it goes ahead and removes the constraint.
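
To see what that simulation is looking at on a live cluster, something
along these lines should work (a rough sketch, reusing the resource name
from the original post):

-> $ pcs constraint location --full    # any leftover move/ban constraints?
-> $ pcs resource defaults             # is resource-stickiness set?
-> $ crm_simulate --live-check --show-scores | grep c8kubermaster2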

I'm not sure what the error messages mean for that new approach. The
pcs devs may be able to chime in.

As a workaround, you can use crm_resource --move, which always leaves
the constraint there.
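
With the names from the original post, that would look something like this
(the constraint stays until it is explicitly cleared):

-> $ crm_resource --move --resource c8kubermaster2 --node swir
-> $ crm_resource --clear --resource c8kubermaster2   # later, removes the constraint again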

> Not much in pacemaker logs, perhaps nothing at all.
> VM does migrate with 'virsh' just fine.
> 
> -> $ pcs resource config c8kubermaster1
>   Resource: c8kubermaster1 (class=ocf provider=heartbeat type=VirtualDomain)
>    Attributes: config=/var/lib/pacemaker/conf.d/c8kubermaster1.xml hypervisor=qemu:///system migration_transport=ssh
>    Meta Attrs: allow-migrate=true failure-timeout=30s
>    Operations: migrate_from interval=0s timeout=90s (c8kubermaster1-migrate_from-interval-0s)
>                migrate_to interval=0s timeout=90s (c8kubermaster1-migrate_to-interval-0s)
>                monitor interval=30s (c8kubermaster1-monitor-interval-30s)
>                start interval=0s timeout=60s (c8kubermaster1-start-interval-0s)
>                stop interval=0s timeout=60s (c8kubermaster1-stop-interval-0s)
> 
> Any and all suggestions & thoughts are much appreciated.
> many thanks, L

-- 
Ken Gaillot 



[ClusterLabs] VirtualDomain - unable to migrate

2021-12-29 Thread lejeczek via Users

Hi guys

I'm having problems with a cluster on new CentOS Stream 9 and 
I'd be glad if you could share your thoughts.


-> $ pcs resource move c8kubermaster2 swir
Location constraint to move resource 'c8kubermaster2' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'c8kubermaster2' has been removed
Waiting for the cluster to apply configuration changes...
Error: resource 'c8kubermaster2' is running on node 'whale'
Error: Errors have occurred, therefore pcs is unable to continue

Not much in the Pacemaker logs, perhaps nothing at all.
VM does migrate with 'virsh' just fine.
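
For the record, the manual migration that works would be something like
the following (the domain name and destination URI are assumptions based
on the resource configuration below):

-> $ virsh migrate --live c8kubermaster2 qemu+ssh://swir/system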

-> $ pcs resource config c8kubermaster1
 Resource: c8kubermaster1 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: config=/var/lib/pacemaker/conf.d/c8kubermaster1.xml hypervisor=qemu:///system migration_transport=ssh
  Meta Attrs: allow-migrate=true failure-timeout=30s
  Operations: migrate_from interval=0s timeout=90s (c8kubermaster1-migrate_from-interval-0s)
              migrate_to interval=0s timeout=90s (c8kubermaster1-migrate_to-interval-0s)
              monitor interval=30s (c8kubermaster1-monitor-interval-30s)
              start interval=0s timeout=60s (c8kubermaster1-start-interval-0s)
              stop interval=0s timeout=60s (c8kubermaster1-stop-interval-0s)
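
For completeness, a resource with this configuration would typically have
been created with something along these lines (a reconstruction from the
config output above, not necessarily the exact command that was used):

-> $ pcs resource create c8kubermaster1 ocf:heartbeat:VirtualDomain \
       config=/var/lib/pacemaker/conf.d/c8kubermaster1.xml \
       hypervisor=qemu:///system migration_transport=ssh \
       op migrate_from interval=0s timeout=90s \
       op migrate_to interval=0s timeout=90s \
       op monitor interval=30s \
       op start interval=0s timeout=60s \
       op stop interval=0s timeout=60s \
       meta allow-migrate=true failure-timeout=30s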


Any and all suggestions & thoughts are much appreciated.
many thanks, L
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/