Re: [ClusterLabs] VirtualDomain live migration error

2017-10-18 Thread Ken Gaillot
On Sat, 2017-09-02 at 01:21 +0200, Oscar Segarra wrote:
> Hi, 
> 
> I have updated the known_hosts:
> 
> Now, I get the following error:
> 
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> cib_perform_op: +
>  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resou
> rce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
>  @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to,
> @crm-debug-origin=cib_action_update, @transition-key=6:27:0:a7fef266-
> 46c3-429e-ab00-c1a0aab24da5, @transition-magic=-
> 1:193;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=-1, @rc-
> code=193, @op-status=-1, @last-run=1504307021, @last-rc-c
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> cib_process_request:    Completed cib_modify operation for section
> status: OK (rc=0, origin=vdicnode01/crmd/77, version=0.169.1)
> VirtualDomain(vm-vdicdb01)[13085]:      2017/09/02_01:03:41 INFO:
> vdicdb01: Starting live migration to vdicnode02 (using: virsh --
> connect=qemu:///system --quiet migrate --live  vdicdb01
> qemu+ssh://vdicnode02/system ).
> VirtualDomain(vm-vdicdb01)[13085]:      2017/09/02_01:03:41 ERROR:
> vdicdb01: live migration to vdicnode02 failed: 1
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [
> error: Cannot recv data: Permission denied, please try again. ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [
> Permission denied, please try again. ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [
> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).:
> Connection reset by peer ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:   notice:
> operation_finished:     vm-vdicdb01_migrate_to_0:13085:stderr [ ocf-
> exit-reason:vdicdb01: live migration to vdicnode02 failed: 1 ]
> Sep 02 01:03:41 [1537] vdicnode01       lrmd:     info: log_finished:
>   finished - rsc:vm-vdicdb01 action:migrate_to call_id:16 pid:13085
> exit-code:1 exec-time:119ms queue-time:0ms
> Sep 02 01:03:41 [1540] vdicnode01       crmd:   notice:
> process_lrm_event:      Result of migrate_to operation for vm-
> vdicdb01 on vdicnode01: 1 (unknown error) | call=16 key=vm-
> vdicdb01_migrate_to_0 confirmed=true cib-update=78
> Sep 02 01:03:41 [1540] vdicnode01       crmd:   notice:
> process_lrm_event:      vdicnode01-vm-vdicdb01_migrate_to_0:16 [
> error: Cannot recv data: Permission denied, please try
> again.\r\nPermission denied, please try again.\r\nPermission denied
> (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset
> by peer\nocf-exit-reason:vdicdb01: live migration to vdicnode02
> failed: 1\n ]
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> cib_process_request:    Forwarding cib_modify operation for section
> status to all (origin=local/crmd/78)
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> cib_perform_op: Diff: --- 0.169.1 2
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> cib_perform_op: Diff: +++ 0.169.2 (null)
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> cib_perform_op: +  /cib:  @num_updates=2
> Sep 02 01:03:41 [1535] vdicnode01        cib:     info:
> cib_perform_op: +
>  /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resou
> rce[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:  @crm-
> debug-origin=do_update_resource, @transition-
> magic=0:1;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5, @call-id=16,
> @rc-code=1, @op-status=0, @exec-time=119, @exit-reason=vdicdb01: live
> migration to vdicnode02 failed: 1
> Sep 02 01:03:4
> 
> as root <-- the system prompts for a password
> [root@vdicnode01 .ssh]# virsh --connect=qemu:///system --quiet
> migrate --live  vdicdb01 qemu+ssh://vdicnode02/system
> root@vdicnode02's password:
> 
> as oneadmin (the user that runs the qemu-kvm process) <-- does not
> prompt for a password
> virsh --connect=qemu:///system --quiet migrate --live  vdicdb01
> qemu+ssh://vdicnode02/system
> 
> Must I configure a passwordless connection for root in order to make
> live migration work?
> 
> Or is there any way to instruct Pacemaker to use my oneadmin
> user for migrations instead of root?

Pacemaker calls the VirtualDomain resource agent as root, but it's up
to the agent what to do from there. I don't see any user options in
VirtualDomain or virsh, so I don't think there is a way to do that
currently.

I see two options: configure passwordless ssh for root, or copy the
VirtualDomain resource agent and modify it to use "sudo -u oneadmin"
when it calls virsh.
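
Something along these lines might work (a rough, untested sketch -- the
hostnames are the ones from this thread, and the "custom" provider name
is made up for the example):

  # Option 1: passwordless ssh for root between the nodes
  [root@vdicnode01 ~]# ssh-keygen -t rsa            # empty passphrase
  [root@vdicnode01 ~]# ssh-copy-id root@vdicnode02
  [root@vdicnode01 ~]# ssh root@vdicnode02 true     # must not ask for a password
  # ...and the same in the other direction from vdicnode02

  # Option 2: copy the agent and wrap its virsh calls with sudo
  mkdir -p /usr/lib/ocf/resource.d/custom
  cp /usr/lib/ocf/resource.d/heartbeat/VirtualDomain \
     /usr/lib/ocf/resource.d/custom/VirtualDomain
  # edit the copy so its virsh invocations become "sudo -u oneadmin virsh ...",
  # then reconfigure the resource as ocf:custom:VirtualDomain instead of
  # ocf:heartbeat:VirtualDomain so the modified copy is the one that runs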

We've discussed adding the capability to tell pacemaker to execute a
resource agent as a particular user. We've already put the plumbing in
for it, so that lrmd can execute alert agents as the hacluster user.
All that would be needed would be a new resource meta-attribute and the
IPC API to use it. It's low priority.

Re: [ClusterLabs] VirtualDomain live migration error

2017-09-01 Thread Oscar Segarra
Hi,

I have updated the known_hosts:

Now, I get the following error:

Sep 02 01:03:41 [1535] vdicnode01cib: info: cib_perform_op: +
 
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
 @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to,
@crm-debug-origin=cib_action_update,
@transition-key=6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5,
@transition-magic=-1:193;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5,
@call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504307021, @last-rc-c
Sep 02 01:03:41 [1535] vdicnode01cib: info:
cib_process_request:Completed cib_modify operation for section status:
OK (rc=0, origin=vdicnode01/crmd/77, version=0.169.1)
VirtualDomain(vm-vdicdb01)[13085]:  2017/09/02_01:03:41 INFO: vdicdb01:
Starting live migration to vdicnode02 (using: virsh
--connect=qemu:///system --quiet migrate --live  vdicdb01
qemu+ssh://vdicnode02/system ).
VirtualDomain(vm-vdicdb01)[13085]:  2017/09/02_01:03:41 ERROR:
vdicdb01: live migration to vdicnode02 failed: 1
Sep 02 01:03:41 [1537] vdicnode01   lrmd:   notice: operation_finished:
vm-vdicdb01_migrate_to_0:13085:stderr [ error: Cannot recv data:
Permission denied, please try again. ]
Sep 02 01:03:41 [1537] vdicnode01   lrmd:   notice: operation_finished:
vm-vdicdb01_migrate_to_0:13085:stderr [ Permission denied, please try
again. ]
Sep 02 01:03:41 [1537] vdicnode01   lrmd:   notice: operation_finished:
vm-vdicdb01_migrate_to_0:13085:stderr [ Permission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by
peer ]
Sep 02 01:03:41 [1537] vdicnode01   lrmd:   notice: operation_finished:
vm-vdicdb01_migrate_to_0:13085:stderr [ ocf-exit-reason:vdicdb01: live
migration to vdicnode02 failed: 1 ]
Sep 02 01:03:41 [1537] vdicnode01   lrmd: info: log_finished:
finished - rsc:vm-vdicdb01 action:migrate_to call_id:16 pid:13085
exit-code:1 exec-time:119ms queue-time:0ms
Sep 02 01:03:41 [1540] vdicnode01   crmd:   notice: process_lrm_event:
 Result of migrate_to operation for vm-vdicdb01 on vdicnode01: 1
(unknown error) | call=16 key=vm-vdicdb01_migrate_to_0 confirmed=true
cib-update=78
Sep 02 01:03:41 [1540] vdicnode01   crmd:   notice: process_lrm_event:
 vdicnode01-vm-vdicdb01_migrate_to_0:16 [ error: Cannot recv data:
Permission denied, please try again.\r\nPermission denied, please try
again.\r\nPermission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by
peer\nocf-exit-reason:vdicdb01: live migration to vdicnode02 failed: 1\n ]
Sep 02 01:03:41 [1535] vdicnode01cib: info:
cib_process_request:Forwarding cib_modify operation for section status
to all (origin=local/crmd/78)
Sep 02 01:03:41 [1535] vdicnode01cib: info: cib_perform_op:
Diff: --- 0.169.1 2
Sep 02 01:03:41 [1535] vdicnode01cib: info: cib_perform_op:
Diff: +++ 0.169.2 (null)
Sep 02 01:03:41 [1535] vdicnode01cib: info: cib_perform_op: +
 /cib:  @num_updates=2
Sep 02 01:03:41 [1535] vdicnode01cib: info: cib_perform_op: +
 
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
 @crm-debug-origin=do_update_resource,
@transition-magic=0:1;6:27:0:a7fef266-46c3-429e-ab00-c1a0aab24da5,
@call-id=16, @rc-code=1, @op-status=0, @exec-time=119,
@exit-reason=vdicdb01: live migration to vdicnode02 failed: 1
Sep 02 01:03:4

as root <-- the system prompts for a password
[root@vdicnode01 .ssh]# virsh --connect=qemu:///system --quiet migrate
--live  vdicdb01 qemu+ssh://vdicnode02/system
root@vdicnode02's password:

as oneadmin (the user that runs the qemu-kvm process) <-- does not prompt
for a password
virsh --connect=qemu:///system --quiet migrate --live  vdicdb01
qemu+ssh://vdicnode02/system

Must I configure a passwordless connection for root in order to make live
migration work?

Or is there any way to instruct Pacemaker to use my oneadmin user for
migrations instead of root?

Thanks a lot:


2017-09-01 23:14 GMT+02:00 Ken Gaillot :

> On Fri, 2017-09-01 at 00:26 +0200, Oscar Segarra wrote:
> > Hi,
> >
> >
> > Yes, it is
> >
> >
> > The qemu-kvm process is executed by the oneadmin user.
> >
> >
> > When the cluster tries the live migration, which users are involved?
> >
> >
> > Oneadmin
> > Root
> > Hacluster
> >
> >
> > I have just configured a passwordless ssh connection for oneadmin.
> >
> >
> > Do I need to configure any other passwordless ssh connection with any
> > other user?
> >
> >
> > Which user executes the virsh migrate --live?
>
> The cluster executes resource actions as root.
>
> > Is there any way to check the ssh keys?
>
> I'd just log in once to the host as root from the cluster nodes, to make
> sure it works, and accept the host key when asked.
>
> >
> > Sorry for all these questions.
> >
> >
> > Thanks a lot
> >
> >
> >
> >
> >
> >
> > On 1 Sept 2017 at 0:12, 

Re: [ClusterLabs] VirtualDomain live migration error

2017-09-01 Thread Ken Gaillot
On Fri, 2017-09-01 at 00:26 +0200, Oscar Segarra wrote:
> Hi,
> 
> 
> Yes, it is
> 
> 
> The qemu-kvm process is executed by the oneadmin user.
> 
> 
> When the cluster tries the live migration, which users are involved?
> 
> 
> Oneadmin
> Root
> Hacluster
> 
> 
> I have just configured a passwordless ssh connection for oneadmin.
> 
> 
> Do I need to configure any other passwordless ssh connection with any
> other user?
> 
> 
> Which user executes the virsh migrate --live?

The cluster executes resource actions as root.

> Is there any way to check the ssh keys?

I'd just log in once to the host as root from the cluster nodes, to make
sure it works, and accept the host key when asked.
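
For example, with the hostnames from this thread (just a sketch):

  [root@vdicnode01 ~]# ssh root@vdicnode02 hostname   # answer "yes" to the host key prompt
  [root@vdicnode02 ~]# ssh root@vdicnode01 hostname   # repeat in the other direction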

> 
> Sorry for all these questions.
> 
> 
> Thanks a lot 
> 
> 
> 
> 
> 
> 
On 1 Sept 2017 at 0:12, "Ken Gaillot"  wrote:
> On Thu, 2017-08-31 at 23:45 +0200, Oscar Segarra wrote:
> > Hi Ken,
> >
> >
> > Thanks a lot for your quick answer:
> >
> >
> > Regarding SELinux, it is disabled. The firewall is disabled as
> > well.
> >
> >
> > [root@vdicnode01 ~]# sestatus
> > SELinux status: disabled
> >
> >
> > [root@vdicnode01 ~]# service firewalld status
> > Redirecting to /bin/systemctl status  firewalld.service
> > ● firewalld.service - firewalld - dynamic firewall daemon
> >Loaded: loaded
> (/usr/lib/systemd/system/firewalld.service;
> > disabled; vendor preset: enabled)
> >Active: inactive (dead)
> >  Docs: man:firewalld(1)
> >
> >
> > On migration, it performs a graceful shutdown and a start on the new
> > node.
> >
> >
> > I attach the logs when trying to migrate from vdicnode02 to
> > vdicnode01:
> >
> >
> > vdicnode02 corosync.log:
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_perform_op: Diff: --- 0.161.2 2
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_perform_op: Diff: +++ 0.162.0 (null)
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_perform_op:
> >
> -- 
> /cib/configuration/constraints/rsc_location[@id='location-vm-vdicdb01-vdicnode01--INFINITY']
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_perform_op: +  /cib:  @epoch=162, @num_updates=0
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_process_request:Completed cib_replace operation for
> section
> > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> > version=0.162.0)
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_file_backup:Archived previous version
> > as /var/lib/pacemaker/cib/cib-65.raw
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_file_write_with_digest: Wrote version 0.162.0 of the
> CIB to
> > disk (digest: 1f87611b60cd7c48b95b6b788b47f65f)
> > Aug 31 23:38:17 [1521] vdicnode02cib: info:
> > cib_file_write_with_digest: Reading cluster
> configuration
> > file /var/lib/pacemaker/cib/cib.jt2KPw
> > (digest: /var/lib/pacemaker/cib/cib.Kwqfpl)
> > Aug 31 23:38:22 [1521] vdicnode02cib: info:
> > cib_process_ping:   Reporting our current digest to
> vdicnode01:
> > dace3a23264934279d439420d5a716cc for 0.162.0 (0x7f96bb26c5c0
> 0)
> > Aug 31 23:38:27 [1521] vdicnode02cib: info:
> > cib_perform_op: Diff: --- 0.162.0 2
> > Aug 31 23:38:27 [1521] vdicnode02cib: info:
> > cib_perform_op: Diff: +++ 0.163.0 (null)
> > Aug 31 23:38:27 [1521] vdicnode02cib: info:
> > cib_perform_op: +  /cib:  @epoch=163
> > Aug 31 23:38:27 [1521] vdicnode02cib: info:
> > cib_perform_op: ++ /cib/configuration/constraints:  <rsc_location
> > id="location-vm-vdicdb01-vdicnode02--INFINITY" node="vdicnode02"
> > rsc="vm-vdicdb01" score="-INFINITY"/>
> > Aug 31 23:38:27 [1521] vdicnode02cib: info:
> > cib_process_request:Completed cib_replace operation for
> section
> > configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> > version=0.163.0)
> > Aug 31 23:38:27 [1521] vdicnode02cib: info:
> > cib_file_backup:Archived previous version
> > as /var/lib/pacemaker/cib/cib-66.raw
> > Aug 31 23:38:27 [1521] vdicnode02cib: info:
> > cib_file_write_with_digest: Wrote version 0.163.0 of the
> CIB to
> > disk (digest: 47a548b36746de9275d66cc6aeb0fdc4)
> > Aug 31 23:38:27 [152

Re: [ClusterLabs] VirtualDomain live migration error

2017-08-31 Thread Oscar Segarra
Hi,

Yes, it is

The qemu-kvm process is executed by the oneadmin user.

When the cluster tries the live migration, which users are involved?

Oneadmin
Root
Hacluster

I have just configured a passwordless ssh connection for oneadmin.

Do I need to configure any other passwordless ssh connection with any other
user?

Which user executes the virsh migrate --live?

Is there any way to check the ssh keys?

Sorry for all these questions.

Thanks a lot




On 1 Sept 2017 at 0:12, "Ken Gaillot"  wrote:

On Thu, 2017-08-31 at 23:45 +0200, Oscar Segarra wrote:
> Hi Ken,
>
>
> Thanks a lot for your quick answer:
>
>
> Regarding SELinux, it is disabled. The firewall is disabled as well.
>
>
> [root@vdicnode01 ~]# sestatus
> SELinux status: disabled
>
>
> [root@vdicnode01 ~]# service firewalld status
> Redirecting to /bin/systemctl status  firewalld.service
> ● firewalld.service - firewalld - dynamic firewall daemon
>Loaded: loaded (/usr/lib/systemd/system/firewalld.service;
> disabled; vendor preset: enabled)
>Active: inactive (dead)
>  Docs: man:firewalld(1)
>
>
> On migration, it performs a graceful shutdown and a start on the new
> node.
>
>
> I attach the logs when trying to migrate from vdicnode02 to
> vdicnode01:
>
>
> vdicnode02 corosync.log:
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: --- 0.161.2 2
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: +++ 0.162.0 (null)
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op:
> -- /cib/configuration/constraints/rsc_location[@id='location-vm-vdicdb01-vdicnode01--INFINITY']
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op: +  /cib:  @epoch=162, @num_updates=0
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_process_request:Completed cib_replace operation for section
> configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> version=0.162.0)
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_file_backup:Archived previous version
> as /var/lib/pacemaker/cib/cib-65.raw
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Wrote version 0.162.0 of the CIB to
> disk (digest: 1f87611b60cd7c48b95b6b788b47f65f)
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Reading cluster configuration
> file /var/lib/pacemaker/cib/cib.jt2KPw
> (digest: /var/lib/pacemaker/cib/cib.Kwqfpl)
> Aug 31 23:38:22 [1521] vdicnode02cib: info:
> cib_process_ping:   Reporting our current digest to vdicnode01:
> dace3a23264934279d439420d5a716cc for 0.162.0 (0x7f96bb26c5c0 0)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: --- 0.162.0 2
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: +++ 0.163.0 (null)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: +  /cib:  @epoch=163
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: ++ /cib/configuration/constraints:  <rsc_location id="location-vm-vdicdb01-vdicnode02--INFINITY" node="vdicnode02"
> rsc="vm-vdicdb01" score="-INFINITY"/>
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_process_request:Completed cib_replace operation for section
> configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> version=0.163.0)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_file_backup:Archived previous version
> as /var/lib/pacemaker/cib/cib-66.raw
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Wrote version 0.163.0 of the CIB to
> disk (digest: 47a548b36746de9275d66cc6aeb0fdc4)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Reading cluster configuration
> file /var/lib/pacemaker/cib/cib.rcgXiT
> (digest: /var/lib/pacemaker/cib/cib.7geMfi)
> Aug 31 23:38:27 [1523] vdicnode02   lrmd: info:
> cancel_recurring_action:Cancelling ocf operation
> vm-vdicdb01_monitor_1
> Aug 31 23:38:27 [1526] vdicnode02   crmd: info:
> do_lrm_rsc_op:  Performing
> key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> op=vm-vdicdb01_migrate_to_0
> Aug 31 23:38:27 [1523] vdicnode02   lrmd: info: log_execute:
>executing - rsc:vm-vdicdb01 action:migrate_to call_id:9
> Aug 31 23:38:27 [1526] vdicnode02   crmd: info:
> process_lrm_event:  Result of monitor operation for vm-vdicdb01 on
> vdicnode02: Cancelled | call=7 key=vm-vdicdb01_monitor_1
> confirmed=true
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: --- 0.163.0 2
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: +++ 0.163.1 (null)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: +  /cib:  @num_updates=1
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: +
>  /cib/status/node_

Re: [ClusterLabs] VirtualDomain live migration error

2017-08-31 Thread Ken Gaillot
On Thu, 2017-08-31 at 23:45 +0200, Oscar Segarra wrote:
> Hi Ken, 
> 
> 
> Thanks a lot for your quick answer:
> 
> 
> Regarding SELinux, it is disabled. The firewall is disabled as well.
> 
> 
> [root@vdicnode01 ~]# sestatus
> SELinux status: disabled
> 
> 
> [root@vdicnode01 ~]# service firewalld status
> Redirecting to /bin/systemctl status  firewalld.service
> ● firewalld.service - firewalld - dynamic firewall daemon
>Loaded: loaded (/usr/lib/systemd/system/firewalld.service;
> disabled; vendor preset: enabled)
>Active: inactive (dead)
>  Docs: man:firewalld(1)
> 
> 
> On migration, it performs a graceful shutdown and a start on the new
> node.
> 
> 
> I attach the logs when trying to migrate from vdicnode02 to
> vdicnode01:
> 
> 
> vdicnode02 corosync.log:
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: --- 0.161.2 2
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: +++ 0.162.0 (null)
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op:
> -- 
> /cib/configuration/constraints/rsc_location[@id='location-vm-vdicdb01-vdicnode01--INFINITY']
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_perform_op: +  /cib:  @epoch=162, @num_updates=0
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_process_request:Completed cib_replace operation for section
> configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> version=0.162.0)
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_file_backup:Archived previous version
> as /var/lib/pacemaker/cib/cib-65.raw
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Wrote version 0.162.0 of the CIB to
> disk (digest: 1f87611b60cd7c48b95b6b788b47f65f)
> Aug 31 23:38:17 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Reading cluster configuration
> file /var/lib/pacemaker/cib/cib.jt2KPw
> (digest: /var/lib/pacemaker/cib/cib.Kwqfpl)
> Aug 31 23:38:22 [1521] vdicnode02cib: info:
> cib_process_ping:   Reporting our current digest to vdicnode01:
> dace3a23264934279d439420d5a716cc for 0.162.0 (0x7f96bb26c5c0 0)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: --- 0.162.0 2
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: +++ 0.163.0 (null)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: +  /cib:  @epoch=163
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: ++ /cib/configuration/constraints:  <rsc_location id="location-vm-vdicdb01-vdicnode02--INFINITY" node="vdicnode02"
> rsc="vm-vdicdb01" score="-INFINITY"/>
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_process_request:Completed cib_replace operation for section
> configuration: OK (rc=0, origin=vdicnode01/cibadmin/2,
> version=0.163.0)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_file_backup:Archived previous version
> as /var/lib/pacemaker/cib/cib-66.raw
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Wrote version 0.163.0 of the CIB to
> disk (digest: 47a548b36746de9275d66cc6aeb0fdc4)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_file_write_with_digest: Reading cluster configuration
> file /var/lib/pacemaker/cib/cib.rcgXiT
> (digest: /var/lib/pacemaker/cib/cib.7geMfi)
> Aug 31 23:38:27 [1523] vdicnode02   lrmd: info:
> cancel_recurring_action:Cancelling ocf operation
> vm-vdicdb01_monitor_1
> Aug 31 23:38:27 [1526] vdicnode02   crmd: info:
> do_lrm_rsc_op:  Performing
> key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
> op=vm-vdicdb01_migrate_to_0
> Aug 31 23:38:27 [1523] vdicnode02   lrmd: info: log_execute:
>executing - rsc:vm-vdicdb01 action:migrate_to call_id:9
> Aug 31 23:38:27 [1526] vdicnode02   crmd: info:
> process_lrm_event:  Result of monitor operation for vm-vdicdb01 on
> vdicnode02: Cancelled | call=7 key=vm-vdicdb01_monitor_1
> confirmed=true
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: --- 0.163.0 2
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: Diff: +++ 0.163.1 (null)
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: +  /cib:  @num_updates=1
> Aug 31 23:38:27 [1521] vdicnode02cib: info:
> cib_perform_op: +
>  
> /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
>   @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to, 
> @crm-debug-origin=cib_action_update, 
> @transition-key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, 
> @transition-magic=-1:193;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a, 
> @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha
> Aug 31 23:38:27 [1521] vdicnode02 

Re: [ClusterLabs] VirtualDomain live migration error

2017-08-31 Thread Oscar Segarra
Hi Ken,

Thanks a lot for your quick answer:

Regarding SELinux, it is disabled. The firewall is disabled as well.

[root@vdicnode01 ~]# sestatus
SELinux status: disabled

[root@vdicnode01 ~]# service firewalld status
Redirecting to /bin/systemctl status  firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled;
vendor preset: enabled)
   Active: inactive (dead)
 Docs: man:firewalld(1)

On migration, it performs a graceful shutdown and a start on the new node.

I attach the logs when trying to migrate from vdicnode02 to vdicnode01:

vdicnode02 corosync.log:
Aug 31 23:38:17 [1521] vdicnode02cib: info: cib_perform_op:
Diff: --- 0.161.2 2
Aug 31 23:38:17 [1521] vdicnode02cib: info: cib_perform_op:
Diff: +++ 0.162.0 (null)
Aug 31 23:38:17 [1521] vdicnode02cib: info: cib_perform_op: --
/cib/configuration/constraints/rsc_location[@id='location-vm-vdicdb01-vdicnode01--INFINITY']
Aug 31 23:38:17 [1521] vdicnode02cib: info: cib_perform_op: +
 /cib:  @epoch=162, @num_updates=0
Aug 31 23:38:17 [1521] vdicnode02cib: info:
cib_process_request:Completed cib_replace operation for section
configuration: OK (rc=0, origin=vdicnode01/cibadmin/2, version=0.162.0)
Aug 31 23:38:17 [1521] vdicnode02cib: info: cib_file_backup:
 Archived previous version as /var/lib/pacemaker/cib/cib-65.raw
Aug 31 23:38:17 [1521] vdicnode02cib: info:
cib_file_write_with_digest: Wrote version 0.162.0 of the CIB to disk
(digest: 1f87611b60cd7c48b95b6b788b47f65f)
Aug 31 23:38:17 [1521] vdicnode02cib: info:
cib_file_write_with_digest: Reading cluster configuration file
/var/lib/pacemaker/cib/cib.jt2KPw (digest:
/var/lib/pacemaker/cib/cib.Kwqfpl)
Aug 31 23:38:22 [1521] vdicnode02cib: info: cib_process_ping:
Reporting our current digest to vdicnode01:
dace3a23264934279d439420d5a716cc for 0.162.0 (0x7f96bb26c5c0 0)
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op:
Diff: --- 0.162.0 2
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op:
Diff: +++ 0.163.0 (null)
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op: +
 /cib:  @epoch=163
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op: ++
/cib/configuration/constraints:  <rsc_location id="location-vm-vdicdb01-vdicnode02--INFINITY" node="vdicnode02" rsc="vm-vdicdb01" score="-INFINITY"/>
Aug 31 23:38:27 [1521] vdicnode02cib: info:
cib_process_request:Completed cib_replace operation for section
configuration: OK (rc=0, origin=vdicnode01/cibadmin/2, version=0.163.0)
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_file_backup:
 Archived previous version as /var/lib/pacemaker/cib/cib-66.raw
Aug 31 23:38:27 [1521] vdicnode02cib: info:
cib_file_write_with_digest: Wrote version 0.163.0 of the CIB to disk
(digest: 47a548b36746de9275d66cc6aeb0fdc4)
Aug 31 23:38:27 [1521] vdicnode02cib: info:
cib_file_write_with_digest: Reading cluster configuration file
/var/lib/pacemaker/cib/cib.rcgXiT (digest:
/var/lib/pacemaker/cib/cib.7geMfi)
Aug 31 23:38:27 [1523] vdicnode02   lrmd: info:
cancel_recurring_action:Cancelling ocf operation
vm-vdicdb01_monitor_1
Aug 31 23:38:27 [1526] vdicnode02   crmd: info: do_lrm_rsc_op:
 Performing key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a
op=vm-vdicdb01_migrate_to_0
Aug 31 23:38:27 [1523] vdicnode02   lrmd: info: log_execute:
 executing - rsc:vm-vdicdb01 action:migrate_to call_id:9
Aug 31 23:38:27 [1526] vdicnode02   crmd: info: process_lrm_event:
 Result of monitor operation for vm-vdicdb01 on vdicnode02: Cancelled |
call=7 key=vm-vdicdb01_monitor_1 confirmed=true
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op:
Diff: --- 0.163.0 2
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op:
Diff: +++ 0.163.1 (null)
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op: +
 /cib:  @num_updates=1
Aug 31 23:38:27 [1521] vdicnode02cib: info: cib_perform_op: +
 
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='vm-vdicdb01']/lrm_rsc_op[@id='vm-vdicdb01_last_0']:
 @operation_key=vm-vdicdb01_migrate_to_0, @operation=migrate_to,
@crm-debug-origin=cib_action_update,
@transition-key=6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a,
@transition-magic=-1:193;6:6:0:fe1a9b0a-816c-4b97-96cb-b90dbf71417a,
@call-id=-1, @rc-code=193, @op-status=-1, @last-run=1504215507, @last-rc-cha
Aug 31 23:38:27 [1521] vdicnode02cib: info:
cib_process_request:Completed cib_modify operation for section status:
OK (rc=0, origin=vdicnode01/crmd/41, version=0.163.1)
VirtualDomain(vm-vdicdb01)[5241]:   2017/08/31_23:38:27 INFO: vdicdb01:
Starting live migration to vdicnode01 (using: virsh
--connect=qemu:///system --quiet migrate --live  vdicdb01
qemu+ssh://vdicnode01/system ).
VirtualDomain(vm-vdicdb01)[5241]:   

Re: [ClusterLabs] VirtualDomain live migration error

2017-08-31 Thread Ken Gaillot
On Thu, 2017-08-31 at 01:13 +0200, Oscar Segarra wrote:
> Hi,
> 
> 
> In my environment, I have just two hosts, where the qemu-kvm process is
> launched by a regular user (oneadmin) - OpenNebula -
> 
> 
> I have created a VirtualDomain resource that starts and stops the VM
> perfectly. Nevertheless, when I change the location weight in order to
> force the migration, it raises a migration failure "error: 1"
> 
> 
> If I execute the virsh migrate command (that appears in corosync.log)
> from the command line, it works perfectly.
> 
> 
> Has anybody experienced the same issue?
> 
> 
> Thanks in advance for your help 

If something works from the command line but not when run by a daemon,
my first suspicion is SELinux. Check the audit log for denials around
that time.
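
For example, something like this (assuming auditd is running and the
audit tools are installed; otherwise there is nothing to search):

  ausearch -m avc -ts recent              # recent AVC denials, if any
  grep -i denied /var/log/audit/audit.log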

I'd also check the system log and Pacemaker detail log around that time
to see if there is any more information.
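
For instance (a sketch only -- the timestamp is the one from this thread,
and the detail log path varies by distribution; on this setup it appears
to be corosync.log):

  journalctl --since "2017-08-31 23:38"   # system log around the failed migration
  less /var/log/cluster/corosync.log      # Pacemaker detail log on these nodes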
-- 
Ken Gaillot 







[ClusterLabs] VirtualDomain live migration error

2017-08-30 Thread Oscar Segarra
Hi,

In my environment, I have just two hosts, where the qemu-kvm process is
launched by a regular user (oneadmin) - OpenNebula -

I have created a VirtualDomain resource that starts and stops the VM
perfectly. Nevertheless, when I change the location weight in order to
force the migration, it raises a migration failure "error: 1"

If I execute the virsh migrate command (that appears in corosync.log) from
the command line, it works perfectly.

Has anybody experienced the same issue?

Thanks in advance for your help
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org