[ovirt-users] safe to reboot ovirt-engine?
Hi,

A simple answer to this, I'm sure, but is it safe to reboot the ovirt-engine while VMs on the vm-hosts connected to it are running? Is there anything in particular to take into account while doing so?

Thanks.

Kind regards,
Wout

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] oVirt power management issue
Hi Eli,

About the APC unit:

System Model Number : AP7920
Serial Number : [redacted]
NMC Serial Number : [redacted]
Manufacture Date : 03/01/2005
Hardware Revision : B2
MAC Address : [redacted]
Flash Type : AMD A29DL322DB

Module Information --
Description : Rack PDU APP
Name : rpdu
Type : StatApp
Version : 370
Sector : 16
Date : 01/13/2009
Time : 15:30:40
CRC16 : 5843

Description : Network Management Card AOS
Name : aos
Type : APC OS
Version : 370
Sector : 47
Date : 01/13/2009
Time : 14:17:08
CRC16 : 2B5D

Kind regards,
Wout

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Wednesday, December 10, 2014 21:28:22
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: Eli Mesika emes...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, December 10, 2014 2:53:33 PM
Subject: Re: [ovirt-users] oVirt power management issue

Hi Eli,

When we enter the following data into the webGUI and press Test:

Address = [ip-address]
User Name = apc
Password = [password]
Type = apc
SSH Port = 22
Slot = 1
Options = [empty]
Secure = [box checked]

it shows up in vdsm.log as follows:

addr=[ip-address],port=,agent=apc,user=[user],passwd=,action=status,secure=False,options=,policy=None

The 'port=', 'secure=' and 'options=' fields seem to remain unchanged, regardless of our input.

Hi
Which APC version is used?
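For anyone comparing their own GUI input against vdsm.log, the mismatch is easy to check mechanically. Below is a minimal sketch (not oVirt code; the field names come from the fenceNode log line quoted above, and the values are the redacted placeholders from this thread):

```python
# Sketch: parse the key=value fragment of a fenceNode(...) line from vdsm.log
# into a dict, so fields the engine dropped or defaulted are easy to spot.

def parse_fence_params(log_fragment):
    """Split 'key=value,key=value,...' from a fenceNode log line into a dict."""
    params = {}
    for pair in log_fragment.split(","):
        key, _, value = pair.partition("=")
        params[key] = value
    return params

logged = parse_fence_params(
    "addr=[ip-address],port=,agent=apc,user=[user],passwd=,"
    "action=status,secure=False,options=,policy=None"
)

# Fields entered in the web GUI that show up empty/defaulted in this report:
dropped = [k for k in ("port", "secure", "options")
           if logged[k] in ("", "False")]
print(dropped)  # -> ['port', 'secure', 'options']
```

This makes the symptom in the report concrete: SSH Port, Secure, and Options were all set in the GUI, yet the logged call carries `port=`, `secure=False`, and `options=`.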
I am coming to the conclusion that I must debug this in order to get an idea of what is going on.

Kind regards,
Wout

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Wednesday, December 10, 2014 12:54:56
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: Eli Mesika emes...@redhat.com
Cc: users@ovirt.org
Sent: Tuesday, December 9, 2014 2:59:41 PM
Subject: Re: [ovirt-users] oVirt power management issue

Hi Eli,

The compatibility version of the cluster in which the proxy host resides is 3.5.

Hi
Looking again at your VDSM log, I see this:

fenceNode(addr=***.***.***.***,port=,agent=apc,user=apc,passwd=,action=status,secure=False,options=,policy=None)

But when you invoked it directly you wrote:

fence_apc -a ***.***.***.*** -l apc -p ** -o status -n 1 -x

So this required a secure connection, while you sent secure=False from the UI. Can you please check that the secure checkbox is checked in the UI for this PM agent and retry?

Kind regards,
Wout

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 22:06:23
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: emes...@redhat.com
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 12:01:10 PM
Subject: Fwd: [ovirt-users] oVirt power management issue

Hi Eli,

Thanks for the response. Attached are engine.log and vdsm.log from the host serving as a proxy. Our CPUs are Intel Nehalem-based. We encounter the same problem using apc_snmp.
This is the output of the rpm commands:

rpm -qa | grep vdsm
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-4.16.7-1.gitdb83943.el7.x86_64
vdsm-xmlrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el7.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-cli-4.16.7-1.gitdb83943.el7.noarch
vdsm-python-4.16.7-1.gitdb83943.el7.noarch

rpm -qa | grep fence-agents
fence-agents-hpblade-4.0.2-21.el7.x86_64
fence-agents-cisco-ucs-4.0.2-21.el7.x86_64
fence-agents-eaton-snmp-4.0.2-21.el7.x86_64
fence-agents-apc-4.0.2-21.el7.x86_64
fence-agents-rsb-4.0.2-21.el7.x86_64
fence-agents-ilo-mp-4.0.2-21.el7.x86_64
fence-agents-ifmib-4.0.2-21.el7.x86_64
fence-agents-cisco-mds-4.0.2-21.el7.x86_64
fence-agents-all-4.0.2-21.el7.x86_64
fence-agents-common-4.0.2-21.el7.x86_64
fence-agents-rhevm-4.0.2-21.el7.x86_64
fence-agents-eps-4.0.2-21.el7.x86_64
fence-agents-bladecenter-4.0.2-21.el7.x86_64
fence-agents-intelmodular-4.0.2-21.el7.x86_64
fence-agents-apc-snmp-4.0.2-21.el7.x86_64
fence-agents-ilo2-4.0.2-21.el7.x86_64
fence-agents-ipmilan-4.0.2-21.el7.x86_64
fence-agents-scsi-4.0.2-21.el7.x86_64
fence-agents-brocade-4.0.2-21.el7.x86_64
fence-agents-wti-4.0.2-21.el7.x86_64
fence-agents-kdump-4.0.2-21.el7.x86_64
fence-agents-ibmblade-4.0.2-21.el7.x86_64
fence-agents-ipdu
Re: [ovirt-users] oVirt power management issue
Hi Eli,

When we enter the following data into the webGUI and press Test:

Address = [ip-address]
User Name = apc
Password = [password]
Type = apc
SSH Port = 22
Slot = 1
Options = [empty]
Secure = [box checked]

it shows up in vdsm.log as follows:

addr=[ip-address],port=,agent=apc,user=[user],passwd=,action=status,secure=False,options=,policy=None

The 'port=', 'secure=' and 'options=' fields seem to remain unchanged, regardless of our input.

Kind regards,
Wout

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Wednesday, December 10, 2014 12:54:56
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: Eli Mesika emes...@redhat.com
Cc: users@ovirt.org
Sent: Tuesday, December 9, 2014 2:59:41 PM
Subject: Re: [ovirt-users] oVirt power management issue

Hi Eli,

The compatibility version of the cluster in which the proxy host resides is 3.5.

Hi
Looking again at your VDSM log, I see this:

fenceNode(addr=***.***.***.***,port=,agent=apc,user=apc,passwd=,action=status,secure=False,options=,policy=None)

But when you invoked it directly you wrote:

fence_apc -a ***.***.***.*** -l apc -p ** -o status -n 1 -x

So this required a secure connection, while you sent secure=False from the UI. Can you please check that the secure checkbox is checked in the UI for this PM agent and retry?

Kind regards,
Wout

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 22:06:23
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: emes...@redhat.com
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 12:01:10 PM
Subject: Fwd: [ovirt-users] oVirt power management issue

Hi Eli,

Thanks for the response. Attached are engine.log and vdsm.log from the host serving as a proxy.
Our CPUs are Intel Nehalem-based. We encounter the same problem using apc_snmp.

This is the output of the rpm commands:

rpm -qa | grep vdsm
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-4.16.7-1.gitdb83943.el7.x86_64
vdsm-xmlrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el7.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-cli-4.16.7-1.gitdb83943.el7.noarch
vdsm-python-4.16.7-1.gitdb83943.el7.noarch

rpm -qa | grep fence-agents
fence-agents-hpblade-4.0.2-21.el7.x86_64
fence-agents-cisco-ucs-4.0.2-21.el7.x86_64
fence-agents-eaton-snmp-4.0.2-21.el7.x86_64
fence-agents-apc-4.0.2-21.el7.x86_64
fence-agents-rsb-4.0.2-21.el7.x86_64
fence-agents-ilo-mp-4.0.2-21.el7.x86_64
fence-agents-ifmib-4.0.2-21.el7.x86_64
fence-agents-cisco-mds-4.0.2-21.el7.x86_64
fence-agents-all-4.0.2-21.el7.x86_64
fence-agents-common-4.0.2-21.el7.x86_64
fence-agents-rhevm-4.0.2-21.el7.x86_64
fence-agents-eps-4.0.2-21.el7.x86_64
fence-agents-bladecenter-4.0.2-21.el7.x86_64
fence-agents-intelmodular-4.0.2-21.el7.x86_64
fence-agents-apc-snmp-4.0.2-21.el7.x86_64
fence-agents-ilo2-4.0.2-21.el7.x86_64
fence-agents-ipmilan-4.0.2-21.el7.x86_64
fence-agents-scsi-4.0.2-21.el7.x86_64
fence-agents-brocade-4.0.2-21.el7.x86_64
fence-agents-wti-4.0.2-21.el7.x86_64
fence-agents-kdump-4.0.2-21.el7.x86_64
fence-agents-ibmblade-4.0.2-21.el7.x86_64
fence-agents-ipdu-4.0.2-21.el7.x86_64
fence-agents-vmware-soap-4.0.2-21.el7.x86_64
fence-agents-drac5-4.0.2-21.el7.x86_64

Kind regards,
Wout

I have seen a few resolved bugs on that; can you please let us know the cluster level version in which the proxy host resides?
- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 00:17:13
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 1:14:54 AM
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: users@ovirt.org
Sent: Friday, December 5, 2014 12:50:24 PM
Subject: [ovirt-users] oVirt power management issue

Hi,

We're trying to set up an oVirt configuration with an oVirt-controller (CentOS 6), iSCSI storage (Dell MD3200i) and 3 vm-hosts (CentOS 7) powered by 2 APC PDUs. Testing the Power Management settings in the web GUI, we get the following message: Test Succeeded, unknown.

The oVirt engine log outputs the following:

2014-12-05 11:23:00,872 INFO
Re: [ovirt-users] oVirt power management issue
Hi Eli,

The compatibility version of the cluster in which the proxy host resides is 3.5.

Kind regards,
Wout

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 22:06:23
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: emes...@redhat.com
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 12:01:10 PM
Subject: Fwd: [ovirt-users] oVirt power management issue

Hi Eli,

Thanks for the response. Attached are engine.log and vdsm.log from the host serving as a proxy. Our CPUs are Intel Nehalem-based. We encounter the same problem using apc_snmp.

This is the output of the rpm commands:

rpm -qa | grep vdsm
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-4.16.7-1.gitdb83943.el7.x86_64
vdsm-xmlrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el7.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el7.noarch
vdsm-cli-4.16.7-1.gitdb83943.el7.noarch
vdsm-python-4.16.7-1.gitdb83943.el7.noarch

rpm -qa | grep fence-agents
fence-agents-hpblade-4.0.2-21.el7.x86_64
fence-agents-cisco-ucs-4.0.2-21.el7.x86_64
fence-agents-eaton-snmp-4.0.2-21.el7.x86_64
fence-agents-apc-4.0.2-21.el7.x86_64
fence-agents-rsb-4.0.2-21.el7.x86_64
fence-agents-ilo-mp-4.0.2-21.el7.x86_64
fence-agents-ifmib-4.0.2-21.el7.x86_64
fence-agents-cisco-mds-4.0.2-21.el7.x86_64
fence-agents-all-4.0.2-21.el7.x86_64
fence-agents-common-4.0.2-21.el7.x86_64
fence-agents-rhevm-4.0.2-21.el7.x86_64
fence-agents-eps-4.0.2-21.el7.x86_64
fence-agents-bladecenter-4.0.2-21.el7.x86_64
fence-agents-intelmodular-4.0.2-21.el7.x86_64
fence-agents-apc-snmp-4.0.2-21.el7.x86_64
fence-agents-ilo2-4.0.2-21.el7.x86_64
fence-agents-ipmilan-4.0.2-21.el7.x86_64
fence-agents-scsi-4.0.2-21.el7.x86_64
fence-agents-brocade-4.0.2-21.el7.x86_64
fence-agents-wti-4.0.2-21.el7.x86_64
fence-agents-kdump-4.0.2-21.el7.x86_64
fence-agents-ibmblade-4.0.2-21.el7.x86_64
fence-agents-ipdu-4.0.2-21.el7.x86_64
fence-agents-vmware-soap-4.0.2-21.el7.x86_64
fence-agents-drac5-4.0.2-21.el7.x86_64

Kind regards,
Wout

I have seen a few resolved bugs on that; can you please let us know the cluster level version in which the proxy host resides?

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 00:17:13
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Eli Mesika emes...@redhat.com
To: Wout Peeters w...@unix-solutions.be
Cc: users@ovirt.org
Sent: Monday, December 8, 2014 1:14:54 AM
Subject: Re: [ovirt-users] oVirt power management issue

- Original Message -
From: Wout Peeters w...@unix-solutions.be
To: users@ovirt.org
Sent: Friday, December 5, 2014 12:50:24 PM
Subject: [ovirt-users] oVirt power management issue

Hi,

We're trying to set up an oVirt configuration with an oVirt-controller (CentOS 6), iSCSI storage (Dell MD3200i) and 3 vm-hosts (CentOS 7) powered by 2 APC PDUs. Testing the Power Management settings in the web GUI, we get the following message: Test Succeeded, unknown.

The oVirt engine log outputs the following:

2014-12-05 11:23:00,872 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host vm-02 from data center was chosen as a proxy to execute Status command on Host vm-03.
2014-12-05 11:23:00,879 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host vm-02 from data center as proxy to execute Status command on Host
2014-12-05 11:23:00,904 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing Status Power Management command, Proxy Host:vm-02, Agent:apc, Target Host:, Management IP:***.***.***.***, User:apc, Options:, Fencing policy:null
2014-12-05 11:23:00,930 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(HostName = vm-02, HostId = 071554fc-eed2-4e8f-b6bc-041248d0eaa5, targetVdsId = 67c642ed-0a7a-4e3b-8dd6-32a36df4aea9, action = Status, ip = ***.***.***.***, port = , type = apc, user = apc, password = **, options = '', policy = 'null'), log id: 2803522
2014-12-05 11:23:01,137 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host vm-03.Done
2014-12-05 11:23:01,138 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1
[ovirt-users] oVirt power management issue
Hi,

We're trying to set up an oVirt configuration with an oVirt-controller (CentOS 6), iSCSI storage (Dell MD3200i) and 3 vm-hosts (CentOS 7) powered by 2 APC PDUs. Testing the Power Management settings in the web GUI, we get the following message: Test Succeeded, unknown.

The oVirt engine log outputs the following:

2014-12-05 11:23:00,872 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host vm-02 from data center was chosen as a proxy to execute Status command on Host vm-03.
2014-12-05 11:23:00,879 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host vm-02 from data center as proxy to execute Status command on Host
2014-12-05 11:23:00,904 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing Status Power Management command, Proxy Host:vm-02, Agent:apc, Target Host:, Management IP:***.***.***.***, User:apc, Options:, Fencing policy:null
2014-12-05 11:23:00,930 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(HostName = vm-02, HostId = 071554fc-eed2-4e8f-b6bc-041248d0eaa5, targetVdsId = 67c642ed-0a7a-4e3b-8dd6-32a36df4aea9, action = Status, ip = ***.***.***.***, port = , type = apc, user = apc, password = **, options = '', policy = 'null'), log id: 2803522
2014-12-05 11:23:01,137 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host vm-03.Done
2014-12-05 11:23:01,138 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, unknown, log id: 2803522
2014-12-05 11:23:01,139 WARN [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Fencing operation failed with proxy host 071554fc-eed2-4e8f-b6bc-041248d0eaa5, trying another proxy...
2014-12-05 11:23:01,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host vm-01 from data center was chosen as a proxy to execute Status command on Host vm-03.
2014-12-05 11:23:01,244 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host vm-01 from data center as proxy to execute Status command on Host
2014-12-05 11:23:01,246 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing Status Power Management command, Proxy Host:vm-01, Agent:apc, Target Host:, Management IP:***.***.***.***, User:apc, Options:, Fencing policy:null
2014-12-05 11:23:01,273 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(HostName = vm-01, HostId = c50eb9bf-5294-4d46-813d-7adfcb41d71d, targetVdsId = 67c642ed-0a7a-4e3b-8dd6-32a36df4aea9, action = Status, ip = ***.***.***.***, port = , type = apc, user = apc, password = **, options = '', policy = 'null'), log id: 2b00de15
2014-12-05 11:23:01,449 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host vm-03.Done
2014-12-05 11:23:01,451 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, unknown, log id: 2b00de15

This is the vdsm.log output:

JsonRpc (StompReactor)::DEBUG::2014-12-05 11:34:05,065::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message StompFrame command='SEND'
JsonRpcServer::DEBUG::2014-12-05 11:34:05,067::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-24996::DEBUG::2014-12-05 11:34:05,069::API::1188::vds::(fenceNode) fenceNode(addr=***.***.***.***,port=,agent=apc,user=apc,passwd=,action=status,secure=False,options=,policy=None)
Thread-24996::DEBUG::2014-12-05 11:34:05,069::utils::738::root::(execCmd) /usr/sbin/fence_apc (cwd None)
Thread-24996::DEBUG::2014-12-05 11:34:05,131::utils::758::root::(execCmd) FAILED: err = Failed: You have to enter plug number or machine identification\nPlease use '-h' for usage\n; rc = 1
Thread-24996::DEBUG::2014-12-05 11:34:05,131::API::1143::vds::(fence) rc 1 inp agent=fence_apc ipaddr=***.***.***.*** login=apc action=status passwd= out [] err ['Failed: You have to enter plug number or machine identification', Please use '-h' for usage]

The 'port' and 'options' fields show up as empty, even if we enter '22' or 'port=22'. We did enter the slot number as well.

Entering the fence_apc command manually, we get:

fence_apc -a ***.***.***.*** -l apc -p ** -o status
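The failing automated run and the working manual invocation differ in the plug number (and the -x/secure flag). As an illustration only (this is not VDSM's actual implementation; the key=value names mirror the 'inp' line in the vdsm.log above, and the address and password are placeholders), the stdin stream a fence agent reads could be sketched as:

```python
# Sketch (not VDSM source): fence agents accept their options as
# newline-separated key=value pairs on stdin; the vdsm.log 'inp' line above
# shows the pairs VDSM passed. Address and password here are placeholders.

def build_fence_input(**opts):
    # Render options in the key=value-per-line form fence agents read on stdin.
    return "\n".join(f"{k}={v}" for k, v in opts.items())

# What the log shows was sent: an empty plug number, so fence_apc exits with
# "You have to enter plug number or machine identification".
broken = build_fence_input(agent="fence_apc", ipaddr="192.0.2.10",
                           login="apc", passwd="[password]",
                           action="status", port="")

# The working manual call (fence_apc ... -o status -n 1 -x) corresponds to a
# filled-in plug number ('port=1', matching the fenceNode parameter name).
fixed = build_fence_input(agent="fence_apc", ipaddr="192.0.2.10",
                          login="apc", passwd="[password]",
                          action="status", port="1")

print("port=1" in fixed and broken.endswith("port="))  # -> True
```

This is consistent with the report: the GUI's SSH Port and Slot values never make it into the fenceNode call, so the agent is invoked without a plug number and fails, while the same agent succeeds when the plug is supplied by hand.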