Re: [ovirt-users] Unable to export VM

2017-02-20 Thread Pat Riehecky
The disks dropped off when I was doing A LOT of disk IO on the VM - 
untarring a 2 TB archive.  I'd taken a snapshot of the VM about 30 minutes earlier.


The storage shows no faults at this time.  The switch fabric was online 
for all 4 paths.


Pat

On 02/20/2017 02:08 PM, Adam Litke wrote:
There definitely seems to be a problem with your storage domain 
81f19871...  The host is unable to join that domain's sanlock 
lockspace.  Also, it seems that some metadata for the disk with id 
e17ebd7c... was corrupted or lost in translation somehow.  Can you 
provide more details about what happened when "the disk images got 
'unregistered' from oVirt"? Were you performing any particular 
operations (such as moving disks, snapshot create/delete, etc)?  Was 
there a problem with the storage at that time?


On Mon, Feb 20, 2017 at 9:51 AM, Pat Riehecky <riehe...@fnal.gov 
<mailto:riehe...@fnal.gov>> wrote:


Hi Adam,

Thanks for looking!  The storage is fibre attached and I've
verified with the SAN folks nothing went wonky during this window
on their side.

Here is what I've got from vdsm.log during the window (and a bit
surrounding it for context):

libvirtEventLoop::WARNING::2017-02-16
08:35:17,435::utils::140::root::(rmFile) File:

/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.com.redhat.rhevm.vdsm
already removed
libvirtEventLoop::WARNING::2017-02-16
08:35:17,435::utils::140::root::(rmFile) File:

/var/lib/libvirt/qemu/channels/ba806b93-b6fe-4873-99ec-55bb34c12e5f.org.qemu.guest_agent.0
already removed
periodic/2::WARNING::2017-02-16
08:35:18,144::periodic::295::virt.vm::(__call__)
vmId=`ba806b93-b6fe-4873-99ec-55bb34c12e5f`::could not run on
ba806b93-b6fe-4873-99ec-55bb34c12e5f: domain not connected
periodic/3::WARNING::2017-02-16
08:35:18,305::periodic::261::virt.periodic.VmDispatcher::(__call__)
could not run  on
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
Thread-23021::ERROR::2017-02-16
09:28:33,096::task::866::Storage.TaskManager.Task::(_setError)
Task=`ecab8086-261f-44b9-8123-eefb9bbf5b05`::Unexpected error
Thread-23021::ERROR::2017-02-16
09:28:33,097::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-23783::ERROR::2017-02-16
10:13:32,876::task::866::Storage.TaskManager.Task::(_setError)
Task=`ff628204-6e41-4e5e-b83a-dad6ec94d0d3`::Unexpected error
Thread-23783::ERROR::2017-02-16
10:13:32,877::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
Thread-24542::ERROR::2017-02-16
10:58:32,578::task::866::Storage.TaskManager.Task::(_setError)
Task=`f5111200-e980-46bb-bbc3-898ae312d556`::Unexpected error
Thread-24542::ERROR::2017-02-16
10:58:32,579::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': "Storage domain is member of pool:
'domain=81f19871-4d91-4698-a97d-36452bfae281'", 'code': 900}}
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,049::sdc::139::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,049::sdc::156::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 13127103-3f59-418a-90f1-5b1ade8526b1
jsonrpc.Executor/4::ERROR::2017-02-16
11:28:24,305::sdc::145::Storage.StorageDomainCache::(_findDomain)
domain 13127103-3f59-418a-90f1-5b1ade8526b1 not found
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
11:29:19,402::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
6e31bf97-458c-4a30-9df5-14f475db3339::ERROR::2017-02-16
11:29:19,403::task::866::Storage.TaskManager.Task::(_setError)
Task=`6e31bf97-458c-4a30-9df5-14f475db3339`::Unexpected error
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
11:29:20,649::image::205::Storage.Image::(getChain) There is no
leaf in the image b4c4b53e-3813-4959-a145-16f1dfcf1838
79ed31a2-5ac7-4304-ab4d-d05f72694860::ERROR::2017-02-16
11:29:20,650::task::866::Storage.TaskManager.Task::(_setError)
Task=`79ed31a2-5ac7-4304-ab4d-d05f72694860`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,063::image::205::Storage.Image::(getChain) There is no
leaf in the image e17ebd7c-0763-42b2-b344-5ad7f9cf448e
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,064::task::866::Storage.TaskManager.Task::(_setError)
Task=`62f20e22-e850-44c8-8943-faa4ce71e973`::Unexpected error
jsonrpc.Executor/5::ERROR::2017-02-16
11:30:17,065::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'messag

Re: [ovirt-users] Unable to export VM

2017-02-20 Thread Pat Riehecky
:50:00,868::periodic::261::virt.periodic.VmDispatcher::(__call__) 
could not run  on 
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/0::WARNING::2017-02-16 
17:51:30,899::periodic::261::virt.periodic.VmDispatcher::(__call__) 
could not run  on 
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']
periodic/0::WARNING::2017-02-16 
17:52:30,907::periodic::261::virt.periodic.VmDispatcher::(__call__) 
could not run  on 
['ba806b93-b6fe-4873-99ec-55bb34c12e5f']



On 02/20/2017 08:45 AM, Adam Litke wrote:
Hi Pat.  I'd like to help you investigate this issue further.  Could 
you send a snippet of the vdsm.log on slam-vmnode-03 that covers the 
time period during this failure? Engine is reporting that vdsm has 
likely thrown an exception while acquiring locks associated with the 
VM disk you are exporting.


On Thu, Feb 16, 2017 at 12:40 PM, Pat Riehecky <riehe...@fnal.gov 
<mailto:riehe...@fnal.gov>> wrote:


Any attempts to export my VM error out.  Last night the disk
images got 'unregistered' from oVirt and I had to rescan the
storage domain to find them again.  Now I'm just trying to get a
backup of the VM.

The snapshots of the old disks are still listed, but I don't
know if the LVM slices still exist or if that is even what is
wrong.

Steps I followed:
Halt VM
Click Export
Leave everything unchecked and click OK
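For reference, here is a minimal sketch of the same export driven from a script
instead of the web UI, using the v3 Python SDK (ovirt-engine-sdk-python, which
is in the package list below).  The engine URL, credentials, VM name and export
domain name are placeholders, not values from this setup:

#!/usr/bin/env python
# Minimal sketch: shut a VM down and export it to an export storage domain.
# Assumes the v3 Python SDK (ovirt-engine-sdk-python); all names and
# credentials below are placeholders.
import time

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret', insecure=True)

vm = api.vms.get(name='my-vm')

# Equivalent of "Halt VM" in the UI
if vm.get_status().get_state() != 'down':
    vm.stop()
    # Wait until the VM is actually down before exporting
    while api.vms.get(name='my-vm').get_status().get_state() != 'down':
        time.sleep(5)

# Equivalent of clicking Export with the default (unchecked) options
vm.export(params.Action(
    storage_domain=params.StorageDomain(name='export-domain')))

api.disconnect()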

oVirt version:
ovirt-engine-4.0.3-1.el7.centos.noarch
ovirt-engine-backend-4.0.3-1.el7.centos.noarch
ovirt-engine-cli-3.6.9.2-1.el7.noarch
ovirt-engine-dashboard-1.0.3-1.el7.centos.noarch
ovirt-engine-dbscripts-4.0.3-1.el7.centos.noarch
ovirt-engine-dwh-4.0.2-1.el7.centos.noarch
ovirt-engine-dwh-setup-4.0.2-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.0-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.2.1-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.2.1-1.el7.noarch
ovirt-engine-extensions-api-impl-4.0.3-1.el7.centos.noarch
ovirt-engine-lib-4.0.3-1.el7.centos.noarch
ovirt-engine-restapi-4.0.3-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-engine-setup-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-base-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.3-1.el7.centos.noarch
ovirt-engine-tools-4.0.3-1.el7.centos.noarch
ovirt-engine-tools-backup-4.0.3-1.el7.centos.noarch
ovirt-engine-userportal-4.0.3-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.0.3-1.el7.centos.noarch
ovirt-engine-webadmin-portal-4.0.3-1.el7.centos.noarch
ovirt-engine-websocket-proxy-4.0.3-1.el7.centos.noarch
ovirt-engine-wildfly-10.0.0-1.el7.x86_64
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-guest-agent-common-1.0.12-4.el7.noarch
ovirt-host-deploy-1.5.1-1.el7.centos.noarch
ovirt-host-deploy-java-1.5.1-1.el7.centos.noarch
ovirt-imageio-common-0.3.0-1.el7.noarch
ovirt-imageio-proxy-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
ovirt-imageio-proxy-setup-0.3.0-0.201606191345.git9f3d6d4.el7.centos.noarch
ovirt-image-uploader-4.0.0-1.el7.centos.noarch
ovirt-iso-uploader-4.0.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.2-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch




log snippet:
2017-02-16 11:34:44,959 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(default task-28) [] START, GetVmsInfoVDSCommand(
GetVmsInfoVDSCommandParameters:{runAsync='true',
storagePoolId='0001-0001-0001-0001-01a5',
ignoreFailoverLimit='false',
storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
vmIdList='null'}), log id: 3c406c84
2017-02-16 11:34:45,967 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(default task-28) [] FINISH, GetVmsInfoVDSCommand, log id: 3c406c84
2017-02-16 11:34:46,178 INFO
[org.ovirt.engine.core.bll.exportimport.ExportVmCommand] (default
task-24) [50b27eef] Lock Acquired to object
'EngineLock:{exclusiveLocks='[ba806b93-b6fe-4873-99ec-55bb34c12e5f=<VM,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-02-16 11:34:46,221 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(default task-24) [50b27eef] START, GetVmsInfoVDSCommand(
GetVmsInfoVDSCommandParameters:{runAsync='true',
storagePoolId='0001-0001-0001-0001-01a5',
ignoreFailoverLimit='false',
storageDomainId='13127103-3f59-418a-90f1-5b1ade8526b1',
vmIdList='null'}), log id: 61bfd908
2017-02-16 11:34:47,227 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(default ta

[ovirt-users] Unable to export VM

2017-02-16 Thread Pat Riehecky
d.pool-8-thread-41) [] 
CommandAsyncTask::HandleEndActionResult [within thread]: Removing 
CommandMultiAsyncTasks object for entity 
'0b807437-17fe-4773-a539-09ddee3df215'



--
Pat Riehecky

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Scheduling snapshots

2017-02-15 Thread Pat Riehecky
That looks like it wants to export to an NFS domain afterwards.  Alas, 
I've not got much space there.
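If the export step is the blocker, the snapshot-and-prune part can be done on
its own.  Below is a minimal sketch using the v3 Python SDK
(ovirt-engine-sdk-python 3.6), suitable for running from cron; the engine URL,
credentials and VM name are placeholders:

#!/usr/bin/env python
# Minimal sketch: take a dated snapshot of one VM and delete 'scheduled-*'
# snapshots older than KEEP_DAYS.  v3 Python SDK; names and credentials are
# placeholders.  Retention is decided from the snapshot description rather
# than engine-side dates to keep the sketch simple.
from datetime import datetime, timedelta

from ovirtsdk.api import API
from ovirtsdk.xml import params

KEEP_DAYS = 7

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret', insecure=True)
vm = api.vms.get(name='my-vm')

# Take today's snapshot (creation is asynchronous; a production script
# would poll the snapshot status before moving on)
vm.snapshots.add(params.Snapshot(
    description='scheduled-%s' % datetime.now().strftime('%Y-%m-%d')))

# Prune scheduled snapshots older than KEEP_DAYS
cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
for snap in vm.snapshots.list():
    desc = snap.get_description() or ''
    if not desc.startswith('scheduled-'):
        continue
    try:
        taken = datetime.strptime(desc, 'scheduled-%Y-%m-%d')
    except ValueError:
        continue
    if taken < cutoff:
        snap.delete()

api.disconnect()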


Pat

On 02/15/2017 01:11 PM, Doug Ingham wrote:

https://github.com/wefixit-AT/oVirtBackup

...although I understand the API calls it uses have been deprecated in 
4.1.


On 15 February 2017 at 14:38, Pat Riehecky <riehe...@fnal.gov 
<mailto:riehe...@fnal.gov>> wrote:


Has someone got a script to automate scheduling snapshots of a
specific system (and retaining them for X days)?

Pat

    -- 
Pat Riehecky


Fermi National Accelerator Laboratory
www.fnal.gov <http://www.fnal.gov>
www.scientificlinux.org <http://www.scientificlinux.org>

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--
Doug


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Scheduling snapshots

2017-02-15 Thread Pat Riehecky
Has someone got a script to automate scheduling snapshots of a specific 
system (and retaining them for X days)?


Pat

--
Pat Riehecky

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding Disk stuck?

2016-12-20 Thread Pat Riehecky
They were fairly large and getting filtered out.  So I set up a GitHub 
repo containing the log files.


https://github.com/jcpunk/logs

Pat


On 12/19/2016 10:10 AM, Elad Ben Aharon wrote:

Hi, can you please provide engine.log?

On Mon, Dec 19, 2016 at 5:06 PM, Pat Riehecky <riehe...@fnal.gov 
<mailto:riehe...@fnal.gov>> wrote:


Last Friday I started a job to add 1 new disk to each of 4 VMs - a
total of 4 disks, 100 GB each.

It seems to still be running, but no host shows an obvious IO load.
State is

Adding Disk (hour glass)
-> Validating (green check mark)
-> Executing (hour glass)
->-> Creating Volume (green check mark)

I checked in with:
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh

and it didn't show anything interesting.

The VMs themselves show the disks are there, but the VMs are still
locked by the disk processes.

Ideas?

    Pat

-- 
Pat Riehecky


Fermi National Accelerator Laboratory
www.fnal.gov <http://www.fnal.gov>
www.scientificlinux.org <http://www.scientificlinux.org>

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Adding Disk stuck?

2016-12-19 Thread Pat Riehecky
Last Friday I started a job to add 1 new disk to each of 4 VMs - a total 
of 4 disks, 100 GB each.


It seems to still be running, but no host shows an obvious IO load.
State is

Adding Disk (hour glass)
-> Validating (green check mark)
-> Executing (hour glass)
->-> Creating Volume (green check mark)

I checked in with:
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh

and it didn't show anything interesting.

The VMs themselves show the disks are there, but the VMs are still 
locked by the disk processes.


Ideas?
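In case it is useful, here is a minimal sketch of checking which disks the
engine still reports as locked, using the v3 Python SDK; the engine URL and
credentials are placeholders:

#!/usr/bin/env python
# Minimal sketch: list any disks whose status is not 'ok' (e.g. 'locked'),
# to see whether the add-disk operations are still pending on the engine.
# v3 Python SDK; URL and credentials are placeholders.
from ovirtsdk.api import API

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret', insecure=True)

for disk in api.disks.list():
    state = disk.get_status().get_state()
    if state != 'ok':
        print('%s (%s): %s' % (disk.get_alias(), disk.get_id(), state))

api.disconnect()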

Pat

--
Pat Riehecky

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm auto start

2016-09-30 Thread Pat Riehecky



On 09/30/2016 08:22 AM, Michal Skrivanek wrote:


On 30 Sep 2016, at 09:15, qinglong.d...@horebdata.cn 
 wrote:


Hi all,
If all the hosts have been shut down, I want one of the VMs to start 
when one of the hosts has come back up (and the engine has also come 
back up).  What can I do?  Or should 'High Availability' handle this?


If they died, then HA VMs are going to be restarted as soon as the engine 
comes up.
But if you powered down those hosts/VMs, they were cleanly shut down and 
won’t be restarted.




See also:

https://bugzilla.redhat.com/show_bug.cgi?id=1325468
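For reference, that restart-on-engine-startup behaviour only applies to VMs
flagged as highly available.  A minimal sketch of setting that flag through
the v3 Python SDK (names and credentials are placeholders):

#!/usr/bin/env python
# Minimal sketch: mark a VM as highly available so the engine restarts it
# after a host failure.  v3 Python SDK; names and credentials are placeholders.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret', insecure=True)

vm = api.vms.get(name='my-vm')
vm.set_high_availability(params.HighAvailability(enabled=True, priority=50))
vm.update()

api.disconnect()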
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] /var/run/ovirt-hosted-engine-ha/vm.conf not found

2016-09-06 Thread Pat Riehecky

It is, thanks for asking!

Pat

On 09/05/2016 02:58 AM, Roy Golan wrote:

btw is the hosted engine VM is listed in the web admin?

On 2 September 2016 at 16:40, Pat Riehecky <riehe...@fnal.gov 
<mailto:riehe...@fnal.gov>> wrote:


Hi Simone,

Thanks for the follow up!

I'll see about pulling out some log entries and getting a bugzilla
filed with them attached.

I was able to get it working again late last night by reinstalling
the ha engine rpms and rebooting.  Not sure why that fixed it, but
the engine fired right up shortly after and immediately saw all my
running VMs.

Pat


On 09/02/2016 02:52 AM, Simone Tiraboschi wrote:



On Thu, Sep 1, 2016 at 9:25 PM, Pat Riehecky <riehe...@fnal.gov
<mailto:riehe...@fnal.gov>> wrote:

Any suggestions for how I generate the alternate config?


vm.conf will get generated when needed by converting the OVF
description in the OVF_STORE volume on the shared storage. The
OVF_STORE volume and its content are managed by the engine; in
this way you can edit the engine VM configuration from the engine
in a distributed environment.

Now the question is why your system is failing on this step.
Can you please attach your
/var/log/ovirt-hosted-engine-ha/agent.log ?

Pat


On 09/01/2016 12:30 PM, Raymond wrote:

Same issue here, got it running with an alternate config
hosted-engine  --vm-start
--vm-conf=/etc/ovirt-hosted-engine/vm.conf

Cheers
Raymond



----- Original Message -----
    From: "Pat Riehecky" <riehe...@fnal.gov
<mailto:riehe...@fnal.gov>>
To: "users" <users@ovirt.org <mailto:users@ovirt.org>>
Sent: Thursday, September 1, 2016 6:41:08 PM
Subject: [ovirt-users]
/var/run/ovirt-hosted-engine-ha/vm.conf not found

I seem to be unable to restart my ovirt hosted engine.

I'd swear I had this issue once before, but couldn't find
any notes on
my end.


# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=vmadmin.fnal.gov <http://vmadmin.fnal.gov>
vm_disk_id=36a09f1d-0923-4b3b-87aa-4788ca64064e
vm_disk_vol_id=c4244bdd-80c5-4f68-83c2-9494d9d05723
vmid=823d3e5b-60c2-4e53-a9e8-313aedcaf808
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=4
console=qxl
domainType=fc
spUUID=----
sdUUID=81f19871-4d91-4698-a97d-36452bfae281
connectionUUID=73b61b0a-85f8-4fb7-8faf-c687ef7cc5d8
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=131.225.193.200
bridge=ovirtmgmt
metadata_volume_UUID=93cf12c5-d5e6-4ea6-bee7-de64ee52d7a5
metadata_image_UUID=22a9849b-c551-4783-ad0c-530464df47f3
lockspace_volume_UUID=0593a3b8-1d75-4be3-b65b-ce9a164d0309
lockspace_image_UUID=2c8c56f2-1711-4867-8ed0-3c502bb635ff
conf_volume_UUID=07c72aa5-7fd0-4159-b83e-4d078ae9c351
conf_image_UUID=f9e59ec5-6903-4b2d-8164-4fce3d901bdd

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=




___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>





___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] /var/run/ovirt-hosted-engine-ha/vm.conf not found

2016-09-02 Thread Pat Riehecky

Hi Simone,

Thanks for the follow up!

I'll see about pulling out some log entries and getting a bugzilla filed 
with them attached.


I was able to get it working again late last night by reinstalling the 
ha engine rpms and rebooting.  Not sure why that fixed it, but the 
engine fired right up shortly after and immediately saw all my running VMs.


Pat

On 09/02/2016 02:52 AM, Simone Tiraboschi wrote:



On Thu, Sep 1, 2016 at 9:25 PM, Pat Riehecky <riehe...@fnal.gov 
<mailto:riehe...@fnal.gov>> wrote:


Any suggestions for how I generate the alternate config?


vm.conf will get generated when needed by converting the OVF description 
in the OVF_STORE volume on the shared storage. The OVF_STORE volume 
and its content are managed by the engine; in this way you can edit 
the engine VM configuration from the engine in a distributed environment.


Now the question is why your system is failing on this step.
Can you please attach your /var/log/ovirt-hosted-engine-ha/agent.log ?

Pat


On 09/01/2016 12:30 PM, Raymond wrote:

Same issue here, got it running with an alternate config
hosted-engine  --vm-start
--vm-conf=/etc/ovirt-hosted-engine/vm.conf

Cheers
Raymond



----- Original Message -----
        From: "Pat Riehecky" <riehe...@fnal.gov
<mailto:riehe...@fnal.gov>>
To: "users" <users@ovirt.org <mailto:users@ovirt.org>>
Sent: Thursday, September 1, 2016 6:41:08 PM
Subject: [ovirt-users] /var/run/ovirt-hosted-engine-ha/vm.conf
not found

I seem to be unable to restart my ovirt hosted engine.

I'd swear I had this issue once before, but couldn't find any
notes on
my end.


# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=vmadmin.fnal.gov <http://vmadmin.fnal.gov>
vm_disk_id=36a09f1d-0923-4b3b-87aa-4788ca64064e
vm_disk_vol_id=c4244bdd-80c5-4f68-83c2-9494d9d05723
vmid=823d3e5b-60c2-4e53-a9e8-313aedcaf808
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=4
console=qxl
domainType=fc
spUUID=----
sdUUID=81f19871-4d91-4698-a97d-36452bfae281
connectionUUID=73b61b0a-85f8-4fb7-8faf-c687ef7cc5d8
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=131.225.193.200
bridge=ovirtmgmt
metadata_volume_UUID=93cf12c5-d5e6-4ea6-bee7-de64ee52d7a5
metadata_image_UUID=22a9849b-c551-4783-ad0c-530464df47f3
lockspace_volume_UUID=0593a3b8-1d75-4be3-b65b-ce9a164d0309
lockspace_image_UUID=2c8c56f2-1711-4867-8ed0-3c502bb635ff
conf_volume_UUID=07c72aa5-7fd0-4159-b83e-4d078ae9c351
conf_image_UUID=f9e59ec5-6903-4b2d-8164-4fce3d901bdd

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=




___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] /var/run/ovirt-hosted-engine-ha/vm.conf not found

2016-09-01 Thread Pat Riehecky

Any suggestions for how I generate the alternate config?

Pat

On 09/01/2016 12:30 PM, Raymond wrote:

Same issue here, got it running with an alternate config
hosted-engine  --vm-start --vm-conf=/etc/ovirt-hosted-engine/vm.conf

Cheers
Raymond



----- Original Message -----
From: "Pat Riehecky" <riehe...@fnal.gov>
To: "users" <users@ovirt.org>
Sent: Thursday, September 1, 2016 6:41:08 PM
Subject: [ovirt-users] /var/run/ovirt-hosted-engine-ha/vm.conf not found

I seem to be unable to restart my ovirt hosted engine.

I'd swear I had this issue once before, but couldn't find any notes on
my end.


# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=vmadmin.fnal.gov
vm_disk_id=36a09f1d-0923-4b3b-87aa-4788ca64064e
vm_disk_vol_id=c4244bdd-80c5-4f68-83c2-9494d9d05723
vmid=823d3e5b-60c2-4e53-a9e8-313aedcaf808
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=4
console=qxl
domainType=fc
spUUID=----
sdUUID=81f19871-4d91-4698-a97d-36452bfae281
connectionUUID=73b61b0a-85f8-4fb7-8faf-c687ef7cc5d8
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=131.225.193.200
bridge=ovirtmgmt
metadata_volume_UUID=93cf12c5-d5e6-4ea6-bee7-de64ee52d7a5
metadata_image_UUID=22a9849b-c551-4783-ad0c-530464df47f3
lockspace_volume_UUID=0593a3b8-1d75-4be3-b65b-ce9a164d0309
lockspace_image_UUID=2c8c56f2-1711-4867-8ed0-3c502bb635ff
conf_volume_UUID=07c72aa5-7fd0-4159-b83e-4d078ae9c351
conf_image_UUID=f9e59ec5-6903-4b2d-8164-4fce3d901bdd

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] /var/run/ovirt-hosted-engine-ha/vm.conf not found

2016-09-01 Thread Pat Riehecky

I seem to be unable to restart my ovirt hosted engine.

I'd swear I had this issue once before, but couldn't find any notes on 
my end.



# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=vmadmin.fnal.gov
vm_disk_id=36a09f1d-0923-4b3b-87aa-4788ca64064e
vm_disk_vol_id=c4244bdd-80c5-4f68-83c2-9494d9d05723
vmid=823d3e5b-60c2-4e53-a9e8-313aedcaf808
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=4
console=qxl
domainType=fc
spUUID=----
sdUUID=81f19871-4d91-4698-a97d-36452bfae281
connectionUUID=73b61b0a-85f8-4fb7-8faf-c687ef7cc5d8
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=131.225.193.200
bridge=ovirtmgmt
metadata_volume_UUID=93cf12c5-d5e6-4ea6-bee7-de64ee52d7a5
metadata_image_UUID=22a9849b-c551-4783-ad0c-530464df47f3
lockspace_volume_UUID=0593a3b8-1d75-4be3-b65b-ce9a164d0309
lockspace_image_UUID=2c8c56f2-1711-4867-8ed0-3c502bb635ff
conf_volume_UUID=07c72aa5-7fd0-4159-b83e-4d078ae9c351
conf_image_UUID=f9e59ec5-6903-4b2d-8164-4fce3d901bdd

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=



--
Pat Riehecky

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

MainThread::INFO::2016-09-01 
11:36:05,537::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Initializing ha-broker connection
MainThread::INFO::2016-09-01 
11:36:05,538::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor ping, options {'addr': '131.225.193.200'}
MainThread::INFO::2016-09-01 
11:36:05,541::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 23350672
MainThread::INFO::2016-09-01 
11:36:05,542::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name': 
'ovirtmgmt', 'address': '0'}
MainThread::INFO::2016-09-01 
11:36:05,549::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 23350736
MainThread::INFO::2016-09-01 
11:36:05,550::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
MainThread::INFO::2016-09-01 
11:36:05,554::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 23351120
MainThread::INFO::2016-09-01 
11:36:05,555::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor cpu-load-no-engine, options {'use_ssl': 'true', 'vm_uuid': 
'823d3e5b-60c2-4e53-a9e8-313aedcaf808', 'address': '0'}
MainThread::INFO::2016-09-01 
11:36:05,558::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 140278866605776
MainThread::INFO::2016-09-01 
11:36:05,558::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 
'823d3e5b-60c2-4e53-a9e8-313aedcaf808', 'address': '0'}
MainThread::INFO::2016-09-01 
11:36:05,563::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 140278866742032
MainThread::INFO::2016-09-01 
11:36:05,756::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain)
 Success, id 140278732398288
MainThread::INFO::2016-09-01 
11:36:05,756::hosted_engine::609::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Broker initialized, all submonitors started
MainThread::INFO::2016-09-01 
11:36:05,816::hosted_engine::708::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
 Ensuring lease for lockspace hosted-engine, host id 4 is acquired (file: 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff/0593a3b8-1d75-4be3-b65b-ce9a164d0309)
MainThread::INFO::2016-09-01 
11:36:10,961::upgrade::1001::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade_35_36)
 Host configuration is already up-to-date
MainThread::INFO::2016-09-01 
11:36:10,961::hosted_engine::421::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-09-01 
11:36:10,961::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2016-09-01 
11:36:16,122::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:c231f06b-0d4e-4d91-be55-2de903351fd3, 
volUUID:57e7e406-32f3-47ff-b879-9b608ba9cd42
MainThread::INFO::2016-09-01 
11:36:16,193::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:d2696090

Re: [ovirt-users] extra chatty /var/log/vdsm.log

2016-08-29 Thread Pat Riehecky



On 08/29/2016 10:46 AM, Nir Soffer wrote:

are you starting
and stopping vms every few seconds?


Not to my knowledge.  I'll upgrade to 4.18.

Thanks!

Pat
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] extra chatty /var/log/vdsm.log

2016-08-29 Thread Pat Riehecky
-4d91-4698-a97d-36452bfae281/36a09f1d-0923-4b3b-87aa-4788ca64064e 
already exists
Thread-4907::WARNING::2016-08-29 
10:01:20,620::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/d2696090-e25c-4195-8c06-dafd80cf0720 
already exists
Thread-4909::WARNING::2016-08-29 
10:01:20,768::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists
Thread-4911::WARNING::2016-08-29 
10:01:20,914::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4913::WARNING::2016-08-29 
10:01:21,062::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/f9e59ec5-6903-4b2d-8164-4fce3d901bdd 
already exists
Thread-4931::WARNING::2016-08-29 
10:01:22,033::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4932::WARNING::2016-08-29 
10:01:22,110::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists
Thread-4958::WARNING::2016-08-29 
10:01:28,188::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/images 
already exists
Thread-4958::WARNING::2016-08-29 
10:01:28,188::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/dom_md 
already exists
Thread-4961::WARNING::2016-08-29 
10:01:28,494::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/c231f06b-0d4e-4d91-be55-2de903351fd3 
already exists
Thread-4963::WARNING::2016-08-29 
10:01:28,772::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/36a09f1d-0923-4b3b-87aa-4788ca64064e 
already exists
Thread-4965::WARNING::2016-08-29 
10:01:28,919::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/d2696090-e25c-4195-8c06-dafd80cf0720 
already exists
Thread-4967::WARNING::2016-08-29 
10:01:29,067::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists
Thread-4969::WARNING::2016-08-29 
10:01:29,215::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4971::WARNING::2016-08-29 
10:01:29,362::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/f9e59ec5-6903-4b2d-8164-4fce3d901bdd 
already exists
Thread-4988::WARNING::2016-08-29 
10:01:30,357::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4990::WARNING::2016-08-29 
10:01:30,434::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists



--
Pat Riehecky

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine won't come up

2016-04-26 Thread Pat Riehecky



On 04/26/2016 09:43 AM, Nir Soffer wrote:

On Tue, Apr 26, 2016 at 1:33 AM, Pat Riehecky <riehe...@fnal.gov> wrote:

I've just done a clean install of the 3.6 hosted engine (decided to wipe out
my previous system)

The install went in just fine, no errors I saw, but I'm getting interesting
errors in the ovirt-hosted-engine-ha agent.log

I have no idea what to do about these errors

-


According to Zdenek, this looks like a major configuration issue.

Can you share the output of:

lsblk
pvscan --cache
pvs

We see similar errors in this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1303940

Nir


Weirdly, I bounced the box and ran the above commands, and the Hosted 
Engine fired itself up?


I've bounced the box a few more times and it seems to come back every time.

I seem to be fixed?

Pat
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine won't come up

2016-04-25 Thread Pat Riehecky
, ignoring /dev/sdc
  LV                                   VG                                   Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0593a3b8-1d75-4be3-b65b-ce9a164d0309 81f19871-4d91-4698-a97d-36452bfae281 -wi-ao 128.00m
  07c72aa5-7fd0-4159-b83e-4d078ae9c351 81f19871-4d91-4698-a97d-36452bfae281 -wi-a-   1.00g
  93cf12c5-d5e6-4ea6-bee7-de64ee52d7a5 81f19871-4d91-4698-a97d-36452bfae281 -wi-a- 128.00m
  c4244bdd-80c5-4f68-83c2-9494d9d05723 81f19871-4d91-4698-a97d-36452bfae281 -wi-ao  50.00g
  ids                                  81f19871-4d91-4698-a97d-36452bfae281 -wi-ao 128.00m
  inbox                                81f19871-4d91-4698-a97d-36452bfae281 -wi-a- 128.00m
  leases                               81f19871-4d91-4698-a97d-36452bfae281 -wi-a-   2.00g
  master                               81f19871-4d91-4698-a97d-36452bfae281 -wi-a-   1.00g
  metadata                             81f19871-4d91-4698-a97d-36452bfae281 -wi-a- 512.00m
  outbox                               81f19871-4d91-4698-a97d-36452bfae281 -wi-a- 128.00m
# rpm -qa |grep ovirt
ovirt-engine-sdk-python-3.6.3.0-1.el7
ovirt-vmconsole-1.0.0-1.el7
ovirt-vmconsole-host-1.0.0-1.el7
libgovirt-0.3.3-1.el7_2.1
ovirt-setup-lib-1.0.1-1.el7
ovirt-hosted-engine-setup-1.3.4.0-1.el7
ovirt-host-deploy-1.4.1-1.el7
ovirt-hosted-engine-ha-1.3.5.1-1.el7
# rpm -qa |grep vdsm
vdsm-jsonrpc-4.17.23.2-0.el7
vdsm-4.17.23.2-0.el7
vdsm-python-4.17.23.2-0.el7
vdsm-xmlrpc-4.17.23.2-0.el7
vdsm-yajsonrpc-4.17.23.2-0.el7
vdsm-hook-vmfex-dev-4.17.23.2-0.el7
vdsm-cli-4.17.23.2-0.el7
vdsm-infra-4.17.23.2-0.el7


--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine Almost setup

2016-04-23 Thread Pat Riehecky
I realize now I shouldn't have changed the default cluster name.  Is there a 
way I can resume the install of the hosted engine?


I've got the engine up and running, so I just need to jump in from after 
the engine install.


Ideas?

  Checking for oVirt-Engine status at ...
[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
[ INFO  ] Acquiring internal CA cert from the engine
[ INFO  ] The following CA certificate is going to be used, please 
immediately interrupt if not correct:

[ INFO  ] Issuer: C=US,xx
[ INFO  ] Connecting to the Engine
[ ERROR ] Failed to execute stage 'Closing up': Specified cluster does 
not exist: Production

[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160424000549.conf'

[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, 
please check the issue, fix and redeploy
  Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160423233732-5tl68l.log


--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to use /etc/ovirt-hosted-engine-setup.env.d/ ?

2016-03-21 Thread Pat Riehecky
I'd like to pre-seed answers to several of the hosted-engine setup 
questions, but can't seem to find any instructions on how to pass answers 
into hosted-engine.  I tried the same tactics as with engine-setup, but 
those netted no results.


Pat

--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine use SPICE not VNC

2016-03-07 Thread Pat Riehecky

Is there a way to configure the hosted engine to only use SPICE and not VNC?

/usr/libexec/qemu-kvm -name HostedEngine -S -machine 
rhel6.5.0,accel=kvm,usb=off -cpu qemu64,-svm -m 4096 -realtime  
-device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 
-vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg 
timestamp=on


]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=qxl
domainType=nfs3
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
bridge=ovirtmgmt

# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:5900            0.0.0.0:*               LISTEN

Pat

--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine disk migration

2016-03-04 Thread Pat Riehecky

I'm on oVirt 3.6

I'd like to migrate my hosted engine storage to another location and 
have a few questions:


(a) what is the right syntax for glusterfs in 
/etc/ovirt-hosted-engine/hosted-engine.conf? (I'm currently on nfs3)


(b) what is the right syntax for fibre channel?

(c) where are instructions for how to migrate the actual disk files? 
(google was little help)


(d) Can the hosted engine use the same (gluster/fibre) volume as my VM 
Images?


(e) I get various "Cannot edit Virtual Machine. This VM is not managed 
by the engine." messages in the console when manipulating the HostedEngine.  
Is that expected?


Pat

--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Local and remote storage?

2015-02-19 Thread Pat Riehecky

Hello,

I've run into an odd problem with oVirt 3.5.1.

I've two sets of VMs:
A) some fairly critical VMs which need HA and have access to shared storage
B) some less important systems which can be reconstructed in the event 
of a disaster


My plan was to use the slack disk space on my oVirt nodes to host the 'B' 
systems while still allowing the 'A' systems to run there.  This does not 
seem to be allowed [1].


Is there a way I can utilize both the spare disk on my compute nodes and 
have HA configured for some VMs with shared storage?


Pat

[1] http://www.ovirt.org/OVirt_Administration_Guide#Preparing_Local_Storage


--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users