Re: [ovirt-users] Consult actual size the disk vm

2016-04-20 Thread Idan Shaby
Hi Marcelo,

Please note that under the Virtual Machines tab -> Disks sub tab, assuming
we are talking about an image, you can click on the Images radio button and
see the disk's virtual and actual size.
Hope it helps.
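For reference, the same values are also exposed through the REST API. A hypothetical v3 call (the engine URL, credentials and VM id below are placeholders) looks roughly like this; sizes come back in bytes:

curl -k -u 'admin@internal:password' \
  'https://engine.example.com/ovirt-engine/api/vms/<vm-id>/disks'
# each <disk> element carries <actual_size> and <provisioned_size>, both in bytes

so the raw number has to be divided by 1024^3 before comparing it with the GiB/GB value shown in the dashboard.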


Regards,
Idan

On Thu, Apr 21, 2016 at 6:06 AM, Marcelo Leandro 
wrote:

> Hello,
>
> I am trying to automate a task: querying the actual size of a VM's disk,
> but without success. I tried the images table in the database and the
> python-api with:
>
> vm = api.vms.get(vmname)
> disk =  vm.disks.get(diskname)
> disk.get_actual_size()
>
> but the information does not match what is shown in the dashboard.
>
> Can someone help me?
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to add hosts in ovirt-engine 3.6

2016-04-20 Thread Yedidyah Bar David
On Thu, Apr 21, 2016 at 4:57 AM, Sandvik Agustin
 wrote:
> Hi,
>
> Thanks for the quick reply, my ovirt-engine is running on CentOS 6.7 and my
> hypervisor is also CentOS 6.7.

As I wrote before, el6 hosts are not supported in 3.6.

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Issue while importing the existing storage domain

2016-04-20 Thread Idan Shaby
Hi Satheesaran,

Please file a BZ and attach all the relevant logs so we can investigate it
properly.
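When gathering data for such sanlock "cannot acquire host id" failures, the usual suspects on the affected host are, assuming default log locations, something like:

sanlock client status              # lockspaces and resources as sanlock currently sees them
tail -n 200 /var/log/sanlock.log   # sanlock's own log
tail -n 500 /var/log/vdsm/vdsm.log # the vdsm side of the failed import/attach

together with the engine.log snippet already quoted below.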


Regards,
Idan

On Wed, Apr 20, 2016 at 12:42 PM, SATHEESARAN  wrote:

> Hi All,
>
> I was testing gluster geo-replication on a RHEV storage domain backed by
> a gluster volume.
> In this case, the storage domain (data domain) was created on a gluster
> replica 3 volume.
>
> The VMs' additional disks are carved out of this storage domain.
>
> Now I have geo-replicated[1] the gluster volume to a remote volume.
> When I try importing this storage domain in another RHEVM instance, it
> fails with the error "internal engine error".
> I see the following error in engine.log:
>
> 
> 2016-04-20 05:13:47,685 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
> (ajp-/127.0.0.1:8702-3) [20f6ea4c] Failed in 'DetachStorageDomainVDS'
> method
> 2016-04-20 05:13:47,708 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ajp-/127.0.0.1:8702-3) [20f6ea4c] Correlation ID: null, Call Stack:
> null, Custom Event ID: -1, Message: VDSM command failed: Cannot acquire
> host id: (u'89061d19-fb76-47c9-a4aa-22b0062b769e', SanlockException(-262,
> 'Sanlock lockspace add failure', 'Sanlock exception'))
> 2016-04-20 05:13:47,708 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
> (ajp-/127.0.0.1:8702-3) [20f6ea4c] Command
> 'org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand'
> return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=661,
> message=Cannot acquire host id: (u'89061d19-fb76-47c9-a4aa-22b0062b769e',
> SanlockException(-262, 'Sanlock lockspace add failure', 'Sanlock
> exception'))]]'
> 
>
> The complete logs are available in the fpaste[2]
> Attaching the part of vdsm log to this mail
>
> [1] - geo-replication is a glusterfs feature where the contents of a
> volume are asynchronously replicated to a remote volume.
> It is used for disaster-recovery workflows.
>
> [2] - https://paste.fedoraproject.org/357701/11448771/
>
> Thanks,
> Satheesaran S
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Consult actual size the disk vm

2016-04-20 Thread Marcelo Leandro
Hello,

I am trying to automate a task: querying the actual size of a VM's disk,
but without success. I tried the images table in the database and the
python-api with:

vm = api.vms.get(vmname)
disk =  vm.disks.get(diskname)
disk.get_actual_size()

but the information does not match what is shown in the dashboard.

Can someone help me?
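For what it's worth, a minimal sketch along those lines with the v3 Python SDK (ovirtsdk); the engine URL, credentials and VM name are placeholders, the method names are those generated by the v3 SDK and may differ slightly between versions, and the sizes are returned in bytes while the dashboard shows GiB/GB:

from ovirtsdk.api import API

# placeholder connection details; insecure=True only for test setups
api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='password', insecure=True)

vm = api.vms.get('myvm')                    # hypothetical VM name
for disk in vm.disks.list():
    # both values are reported in bytes
    print('%s: virtual=%.2f GiB, actual=%.2f GiB' % (
        disk.get_name(),
        disk.get_provisioned_size() / 1024.0 ** 3,
        disk.get_actual_size() / 1024.0 ** 3))

api.disconnect()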
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to add hosts in ovirt-engine 3.6

2016-04-20 Thread Sandvik Agustin
Hi,

Thanks for the quick reply, my ovirt-engine is running on CentOS 6.7 and my
hypervisor is also CentOS 6.7.
engine.log
]# tail /var/log/ovirt-engine/engine.log
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
[rt.jar:1.7.0_95]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[rt.jar:1.7.0_95]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[rt.jar:1.7.0_95]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_95]

2016-04-21 09:28:09,600 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-4) [59328744] Host installation failed for
host '92380584-6896-42f5-b2ca-727840db7645', 'vcih3.pagasa-vci.lan':
Command returned failure code 1 during SSH session 'root@10.11.41.7'
2016-04-21 09:28:09,602 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-4) [59328744] START,
SetVdsStatusVDSCommand(HostName = vcih3.pagasa-vci.lan,
SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='92380584-6896-42f5-b2ca-727840db7645', status='InstallFailed',
nonOperationalReason='NONE', stopSpmFailureLogged='false',
maintenanceReason='null'}), log id: 17f30fc4
2016-04-21 09:28:09,604 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-4) [59328744] FINISH,
SetVdsStatusVDSCommand, log id: 17f30fc4
2016-04-21 09:28:09,609 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-4) [59328744] Correlation ID: 59328744,
Call Stack: null, Custom Event ID: -1, Message: Host vcih3.pagasa-vci.lan
installation failed. Command returned failure code 1 during SSH session '
root@10.11.41.7'.
2016-04-21 09:28:09,609 INFO
 [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-4) [59328744] Lock freed to object
'EngineLock:{exclusiveLocks='[92380584-6896-42f5-b2ca-727840db7645=]', sharedLocks='null'}'

server.log
]# tail /var/log/ovirt-engine/server.log
at
io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
[undertow-core-1.1.8.Final.jar:1.1.8.Final]
at
io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:761)
[undertow-core-1.1.8.Final.jar:1.1.8.Final]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[rt.jar:1.7.0_95]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[rt.jar:1.7.0_95]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_95]

2016-04-21 09:27:33,289 INFO
 [org.apache.sshd.client.session.ClientSessionImpl] (pool-18-thread-1)
Client session created
2016-04-21 09:27:33,298 INFO
 [org.apache.sshd.client.session.ClientSessionImpl] (pool-18-thread-1)
Server version string: SSH-2.0-OpenSSH_5.3
2016-04-21 09:27:33,301 INFO
 [org.apache.sshd.client.session.ClientSessionImpl] (pool-18-thread-2) Kex:
server->client aes128-ctr hmac-sha2-256 none
2016-04-21 09:27:33,302 INFO
 [org.apache.sshd.client.session.ClientSessionImpl] (pool-18-thread-2) Kex:
client->server aes128-ctr hmac-sha2-256 none

host-deploy
# tail ovirt-host-deploy-20160421092809-10.11.41.7-59328744.log
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ***Q:STRING TERMINATION_COMMAND
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ###
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ### Processing ended, use 'quit'
to quit
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ### COMMAND>
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:RECEIVEnoop
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ***Q:STRING TERMINATION_COMMAND
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ###
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ### Processing ended, use 'quit'
to quit
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:SEND   ### COMMAND>
2016-04-21 09:28:14 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:219 DIALOG:RECEIVElog

Hope these logs can help us.

TIA

On Wed, Apr 20, 2016 at 2:06 PM, Yedidyah Bar David  wrote:

> On Wed, Apr 20, 2016 at 4:10 AM, Sandvik Agustin
>  wrote:
> > Hi Users,
> >
> > Good day, I'm having a problem adding hosts/hypervisors in ovirt-engine
> 3.6.
> > A month ago I successfully added 2 hypervisors when the engine was
> still
> > version 3.5; then a week ago I managed to update the engine from 3.5 to
> 3.6.
> > Yesterday I was adding another hypervisor and tried using the ovirt 3.6
> repo
> > for th

Re: [ovirt-users] Users Digest, Vol 55, Issue 155

2016-04-20 Thread Nir Soffer
o
>>>>  users-requ...@ovirt.org
>>>>
>>>> You can reach the person managing the list at
>>>>  users-ow...@ovirt.org
>>>>
>>>> When replying, please edit your Subject line so it is more specific
>>>> than "Re: Contents of Users digest..."
>>>>
>>>>
>>>> Today's Topics:
>>>>
>>>>   1. Re:  vhostmd vdsm-hook (Arsène Gschwind)
>>>>   2. Re:  Disks Illegal State (Nir Soffer)
>>>>
>>>>
>>>> ---
>>>> ---
>>>>
>>>> Message: 1
>>>> Date: Wed, 20 Apr 2016 19:09:39 +0200
>>>> From: Arsène Gschwind 
>>>> To: Simon Barrett ,  "users@ov
>>>> irt.org"
>>>>  
>>>> Subject: Re: [ovirt-users] vhostmd vdsm-hook
>>>> Message-ID: <5717b7d3.2070...@unibas.ch>
>>>> Content-Type: text/plain; charset="windows-1252"; Format="flowed"
>>>>
>>>> I've never tried with 2 disks but I will assume that the next free
>>>> available disk will be used by the vdsm hook and the vm-dump-metrics
>>>> cmd
>>>> will check the kind of disk.
>>>> Let me know if you give it a try
>>>>
>>>> thanks,
>>>> Arsène
>>>>
>>>>> On 04/19/2016 02:43 PM, Simon Barrett wrote:
>>>>>
>>>>>
>>>>> Thanks again but how does that work when a VM is configured to
>>>>> have
>>>>> more than one disk?
>>>>>
>>>>> If I have a VM with a /dev/vda disk and a /dev/vdb disk, when I
>>>>> turn
>>>>> the vhostmd hook on the vm metric device gets created  as /dev/vdb
>>>>> and
>>>>> the original /dev/vdb disk gets bumped to /dev/vdc.
>>>>>
>>>>> Is that expected behavior? Will that not cause problems?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Simon
>>>>>
>>>>> *From:*Arsène Gschwind [mailto:arsene.gschw...@unibas.ch]
>>>>> *Sent:* Tuesday, 19 April, 2016 13:06
>>>>> *To:* Simon Barrett ; users@ovirt.
>>>>> org
>>>>> *Subject:* Re: [ovirt-users] vhostmd vdsm-hook
>>>>>
>>>>> The metric information are available on this additional disk
>>>>> /dev/vdb.
>>>>> You may install the package vm-dump-metrics and use the command
>>>>> vm-dump-metrics which will display all metrics in an xml format.
>>>>>
>>>>> Arsène
>>>>>
>>>>> On 04/19/2016 10:48 AM, Simon Barrett wrote:
>>>>>
>>>>>    Thanks Arsène,
>>>>>
>>>>>I have vhostmd running on the ovirt node and have set the
>>>>>sap_agent to true on the VM configuration. I also stopped and
>>>>>started the VM to ensure that the config change took effect.
>>>>>
>>>>>On the oVirt node I see the vhostmd running and see the
>>>>> following
>>>>>entry in the qemu-kvm output:
>>>>>
>>>>>drive
>>>>>file=/dev/shm/vhostmd0,if=none,id=drive-virtio-
>>>>> disk701,readonly=on,format=raw
>>>>>-device
>>>>>virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-
>>>>> disk701,id=virtio-disk701
>>>>>
>>>>>The part I wasn't quite understanding was how this presented
>>>>>itself on the VM but I now see a new disk device "/dev/vdb". If
>>>>> I
>>>>>cat the contents of /dev/vdb I now see the information that is
>>>>>provided from the ovirt node, which is great news and very
>>>>> useful.
>>>>>
>>>>>Thanks for your help.
>>>>>
>>>>>Simon
>>>>>
>>>>>*From:*users-boun...@ovirt.org <mailto:users-boun...@ovirt.org>
>>>>>[mailto:users-boun...@ovirt.org] *On Behalf Of *Arsène Gschwind
>>>>>*Sent:* Monday, 18 April, 2016 16:03
>>>>>*To:* users@ovirt.org <mailto:users@ovirt.org>
>>>>>*Subject:* Re: [ovirt-users] vhostmd vdsm-hook
>>>>>
>>>>>Hi Simon,
>>>>>
>>>>>You 

Re: [ovirt-users] Users Digest, Vol 55, Issue 155

2016-04-20 Thread Clint Boggio
>>>> *From:*Arsène Gschwind [mailto:arsene.gschw...@unibas.ch]
>>>> *Sent:* Tuesday, 19 April, 2016 13:06
>>>> *To:* Simon Barrett ; users@ovirt.
>>>> org
>>>> *Subject:* Re: [ovirt-users] vhostmd vdsm-hook
>>>> 
>>>> The metric information are available on this additional disk
>>>> /dev/vdb.
>>>> You may install the package vm-dump-metrics and use the command
>>>> vm-dump-metrics which will display all metrics in an xml format.
>>>> 
>>>> Arsène
>>>> 
>>>> On 04/19/2016 10:48 AM, Simon Barrett wrote:
>>>> 
>>>>Thanks Arsène,
>>>> 
>>>>I have vhostmd running on the ovirt node and have set the
>>>>sap_agent to true on the VM configuration. I also stopped and
>>>>started the VM to ensure that the config change took effect.
>>>> 
>>>>On the oVirt node I see the vhostmd running and see the
>>>> following
>>>>entry in the qemu-kvm output:
>>>> 
>>>>drive
>>>>file=/dev/shm/vhostmd0,if=none,id=drive-virtio-
>>>> disk701,readonly=on,format=raw
>>>>-device
>>>>virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-
>>>> disk701,id=virtio-disk701
>>>> 
>>>>The part I wasn't quite understanding was how this presented
>>>>itself on the VM but I now see a new disk device "/dev/vdb". If
>>>> I
>>>>cat the contents of /dev/vdb I now see the information that is
>>>>provided from the ovirt node, which is great news and very
>>>> useful.
>>>> 
>>>>Thanks for your help.
>>>> 
>>>>Simon
>>>> 
>>>>*From:*users-boun...@ovirt.org <mailto:users-boun...@ovirt.org>
>>>>[mailto:users-boun...@ovirt.org] *On Behalf Of *Arsène Gschwind
>>>>*Sent:* Monday, 18 April, 2016 16:03
>>>>*To:* users@ovirt.org <mailto:users@ovirt.org>
>>>>*Subject:* Re: [ovirt-users] vhostmd vdsm-hook
>>>> 
>>>>Hi Simon,
>>>> 
>>>>You will need to have vhostmd running on the oVirt node and set
>>>>the "sap_agent" custom property for the vm as you may see on
>>>> the
>>>>screenshot.
>>>> 
>>>>sap_agent
>>>> 
>>>>Arsène
>>>> 
>>>>On 04/15/2016 12:15 PM, Simon Barrett wrote:
>>>> 
>>>>I'm trying to use the vhostmd vdsm hook to access ovirt
>>>> node
>>>>metrics from within a VM. Vhostmd is running and updating
>>>> the
>>>>/dev/shm/vhostmd0 on the ovirt node.
>>>> 
>>>>The part I'm stuck on is: "This disk image is exported
>>>>read-only to guests. Guests can read the disk image to see
>>>>metrics" from
>>>>http://www.ovirt.org/develop/developer-guide/vdsm/hook/vhos
>>>> tmd/
>>>> 
>>>>Does the hook do this by default? I don't see any new
>>>>read-only device mounted in the guest. Is there additional
>>>>work I need to do to mount this and access the data from
>>>>within the guest?
>>>> 
>>>>Many thanks,
>>>> 
>>>>Simon
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>>___
>>>> 
>>>>Users mailing list
>>>> 
>>>>Users@ovirt.org <mailto:Users@ovirt.org>
>>>> 
>>>>http://lists.ovirt.org/mailman/listinfo/users
>>> -- next part --
>>> An HTML attachment was scrubbed...
>>> URL: <http://lists.ovirt.org/pipermail/users/attachments/20160420/d5d
>>> 7d06b/attachment-0001.html>
>>> -- next part --
>>> A non-text attachment was scrubbed...
>>> Name: not available
>>> Type: image/png
>>> Size: 6941 bytes
>>> Desc: not available
>>> URL: <http://lists.ovirt.org/pipermail/users/attachments/20160420/d5d
>>> 7d06b/attachment-0001.png>
>>> 
>>> --
>>> 
>>> Message: 2
>>> Date: Wed, 20 Apr 2016 20:33:06 +0300
>>> From: Nir 

Re: [ovirt-users] Users Digest, Vol 55, Issue 155

2016-04-20 Thread Nir Soffer
>> > drive
>> > file=/dev/shm/vhostmd0,if=none,id=drive-virtio-
>> > disk701,readonly=on,format=raw
>> > -device
>> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-
>> > disk701,id=virtio-disk701
>> >
>> > The part I wasn't quite understanding was how this presented
>> > itself on the VM but I now see a new disk device "/dev/vdb". If
>> > I
>> > cat the contents of /dev/vdb I now see the information that is
>> > provided from the ovirt node, which is great news and very
>> > useful.
>> >
>> > Thanks for your help.
>> >
>> > Simon
>> >
>> > *From:*users-boun...@ovirt.org <mailto:users-boun...@ovirt.org>
>> > [mailto:users-boun...@ovirt.org] *On Behalf Of *Arsène Gschwind
>> > *Sent:* Monday, 18 April, 2016 16:03
>> > *To:* users@ovirt.org <mailto:users@ovirt.org>
>> > *Subject:* Re: [ovirt-users] vhostmd vdsm-hook
>> >
>> > Hi Simon,
>> >
>> > You will need to have vhostmd running on the oVirt node and set
>> > the "sap_agent" custom property for the vm as you may see on
>> > the
>> > screenshot.
>> >
>> > sap_agent
>> >
>> > Arsène
>> >
>> > On 04/15/2016 12:15 PM, Simon Barrett wrote:
>> >
>> > I'm trying to use the vhostmd vdsm hook to access ovirt
>> > node
>> > metrics from within a VM. Vhostmd is running and updating
>> > the
>> > /dev/shm/vhostmd0 on the ovirt node.
>> >
>> > The part I'm stuck on is: "This disk image is exported
>> > read-only to guests. Guests can read the disk image to see
>> > metrics" from
>> > http://www.ovirt.org/develop/developer-guide/vdsm/hook/vhos
>> > tmd/
>> >
>> > Does the hook do this by default? I don't see any new
>> > read-only device mounted in the guest. Is there additional
>> > work I need to do to mount this and access the data from
>> > within the guest?
>> >
>> > Many thanks,
>> >
>> > Simon
>> >
>> >
>> >
>> >
>> >
>> > ___
>> >
>> > Users mailing list
>> >
>> > Users@ovirt.org <mailto:Users@ovirt.org>
>> >
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>> -- next part --
>> An HTML attachment was scrubbed...
>> URL: <http://lists.ovirt.org/pipermail/users/attachments/20160420/d5d
>> 7d06b/attachment-0001.html>
>> -- next part --
>> A non-text attachment was scrubbed...
>> Name: not available
>> Type: image/png
>> Size: 6941 bytes
>> Desc: not available
>> URL: <http://lists.ovirt.org/pipermail/users/attachments/20160420/d5d
>> 7d06b/attachment-0001.png>
>>
>> --
>>
>> Message: 2
>> Date: Wed, 20 Apr 2016 20:33:06 +0300
>> From: Nir Soffer 
>> To: Clint Boggio , Ala Hino ,
>>   Adam Litke 
>> Cc: users 
>> Subject: Re: [ovirt-users] Disks Illegal State
>> Message-ID:
>>   > .com>
>> Content-Type: text/plain; charset=UTF-8
>>
>> On Wed, Apr 20, 2016 at 5:34 PM, Clint Boggio 
>> wrote:
>> >
>> > The "vdsm-tool dump-volume-chains" command on the iSCSI storage
>> > domain
>> > shows one disk in "ILLEGAL" state while the gui shows 8 disk images
>> > in
>> > the same state.
>> Interesting - it would be useful to find the missing volume ids in
>> the engine log and
>> understand why they are marked as illegal.
>>
>> >
>> >
>> > ###
>> > # BEGIN COMMAND OUTPUT
>> > ###
>> >
>> >
>> >
>> > [root@KVM01 ~]# vdsm-tool dump-volume-chains 045c7fda-ab98-4905-
>> > 876c-
>> > 00b5413a619f
>> >
>> > Images volume chains (base volume first)
>> >
>> >image:477e73af-e7db-4914-81ed-89b3fbc876f7
>> >
>> >  - c8320522-f839-472e-9707-a75f6fbe5cb6
>> >status: OK, voltype: LEAF, format: COW, legality:
>> > LEGAL,

Re: [ovirt-users] Users Digest, Vol 55, Issue 155

2016-04-20 Thread Clint Boggio
 the snapshot to complete the operation.

##
# END
##

If that's the case, then why (how) are the afflicted machines that have
not been rebooted still running without their backing disks?

I can upload the logs and a copy of the backup script. Do you all have
a repository you'd like me to upload to? Let me know and I'll upload
them right now.



On Wed, 2016-04-20 at 13:33 -0400, users-requ...@ovirt.org wrote:
> Send Users mailing list submissions to
>   users@ovirt.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>   http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
>   users-requ...@ovirt.org
> 
> You can reach the person managing the list at
>   users-ow...@ovirt.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
> 
> 
> Today's Topics:
> 
>    1. Re:  vhostmd vdsm-hook (Arsène Gschwind)
>    2. Re:  Disks Illegal State (Nir Soffer)
> 
> 
> ---
> ---
> 
> Message: 1
> Date: Wed, 20 Apr 2016 19:09:39 +0200
> From: Arsène Gschwind 
> To: Simon Barrett ,  "users@ov
> irt.org"
>   
> Subject: Re: [ovirt-users] vhostmd vdsm-hook
> Message-ID: <5717b7d3.2070...@unibas.ch>
> Content-Type: text/plain; charset="windows-1252"; Format="flowed"
> 
> I've never tried with 2 disks but I will assume that the next free 
> available disk will be used by the vdsm hook and the vm-dump-metrics
> cmd 
> will check the kind of disk.
> Let me know if you give it a try
> 
> thanks,
> Arsène
> 
> On 04/19/2016 02:43 PM, Simon Barrett wrote:
> > 
> > 
> > Thanks again but how does that work when a VM is configured to
> > have 
> > more than one disk?
> > 
> > If I have a VM with a /dev/vda disk and a /dev/vdb disk, when I
> > turn 
> > the vhostmd hook on the vm metric device gets created  as /dev/vdb
> > and 
> > the original /dev/vdb disk gets bumped to /dev/vdc.
> > 
> > Is that expected behavior? Will that not cause problems?
> > 
> > Thanks,
> > 
> > Simon
> > 
> > *From:*Arsène Gschwind [mailto:arsene.gschw...@unibas.ch]
> > *Sent:* Tuesday, 19 April, 2016 13:06
> > *To:* Simon Barrett ; users@ovirt.
> > org
> > *Subject:* Re: [ovirt-users] vhostmd vdsm-hook
> > 
> > The metric information are available on this additional disk
> > /dev/vdb. 
> > You may install the package vm-dump-metrics and use the command 
> > vm-dump-metrics which will display all metrics in an xml format.
> > 
> > Arsène
> > 
> > On 04/19/2016 10:48 AM, Simon Barrett wrote:
> > 
> > Thanks Arsène,
> > 
> > I have vhostmd running on the ovirt node and have set the
> > sap_agent to true on the VM configuration. I also stopped and
> > started the VM to ensure that the config change took effect.
> > 
> > On the oVirt node I see the vhostmd running and see the
> > following
> > entry in the qemu-kvm output:
> > 
> > drive
> > file=/dev/shm/vhostmd0,if=none,id=drive-virtio-
> > disk701,readonly=on,format=raw
> > -device
> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-
> > disk701,id=virtio-disk701
> > 
> > The part I wasn't quite understanding was how this presented
> > itself on the VM but I now see a new disk device "/dev/vdb". If
> > I
> > cat the contents of /dev/vdb I now see the information that is
> > provided from the ovirt node, which is great news and very
> > useful.
> > 
> > Thanks for your help.
> > 
> > Simon
> > 
> > *From:*users-boun...@ovirt.org <mailto:users-boun...@ovirt.org>
> > [mailto:users-boun...@ovirt.org] *On Behalf Of *Arsène Gschwind
> > *Sent:* Monday, 18 April, 2016 16:03
> > *To:* users@ovirt.org <mailto:users@ovirt.org>
> > *Subject:* Re: [ovirt-users] vhostmd vdsm-hook
> > 
> > Hi Simon,
> > 
> > You will need to have vhostmd running on the oVirt node and set
> > the "sap_agent" custom property for the vm as you may see on
> > the
> > screenshot.
> > 
> > sap_agent
> > 
> > Arsène
> > 
> > On 04/15/2016 12:15 PM, Simon Barrett wrote:
> > 
> > I'm trying to use the vhostmd vdsm hook to access ovirt
> > node
> >  

Re: [ovirt-users] Disks Illegal State

2016-04-20 Thread Nir Soffer
On Wed, Apr 20, 2016 at 5:34 PM, Clint Boggio  wrote:
> The "vdsm-tool dump-volume-chains" command on the iSCSI storage domain
> shows one disk in "ILLEGAL" state while the gui shows 8 disk images in
> the same state.

Interesting - it would be useful to find the missing volume ids in
the engine log and
understand why they are marked as illegal.

>
> ###
> # BEGIN COMMAND OUTPUT
> ###
>
>
>
> [root@KVM01 ~]# vdsm-tool dump-volume-chains 045c7fda-ab98-4905-876c-
> 00b5413a619f
>
> Images volume chains (base volume first)
>
>image:477e73af-e7db-4914-81ed-89b3fbc876f7
>
>  - c8320522-f839-472e-9707-a75f6fbe5cb6
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:882c73fc-a833-4e2e-8e6a-f714d80c0f0d
>
>  - 689220c0-70f8-475f-98b2-6059e735cd1f
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:0ca8c49f-452e-4f61-a3fc-c4bf2711e200
>
>  - dac06a5c-c5a8-4f82-aa8d-5c7a382da0b3
>status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
> type: PREALLOCATED
>
>
>image:0ca0b8f8-8802-46ae-a9f8-45d5647feeb7
>
>  - 51a6de7b-b505-4c46-ae2a-25fb9faad810
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:ae6d2c62-cfbb-4765-930f-c0a0e3bc07d0
>
>  - b2d39c7d-5b9b-498d-a955-0e99c9bd5f3c
>status: OK, voltype: INTERNAL, format: COW, legality:
> LEGAL, type: SPARSE
>
>  - bf962809-3de7-4264-8c68-6ac12d65c151
>status: ILLEGAL, voltype: LEAF, format: COW, legality:
> ILLEGAL, type: SPARSE

Let's check the vdsm and engine logs, and find when and why this disk became
illegal.

If this was a result of a live merge that failed while finalizing the merge
on the engine side, we can safely delete the illegal volume.

If this is the case, we should find a live merge for volume
bf962809-3de7-4264-8c68-6ac12d65c151, and the live merge
should be successful on the vdsm side. At that point, vdsm sets the old
volume's state to illegal. The engine should ask to delete this volume
later in this flow.

Adding Ala and Adam to look at this case.
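For example, something along these lines (default log locations assumed) should show the history of that volume on both sides:

# on the engine machine
grep -i 'bf962809-3de7-4264-8c68-6ac12d65c151' /var/log/ovirt-engine/engine.log*
# on the SPM host
grep -i 'bf962809-3de7-4264-8c68-6ac12d65c151' /var/log/vdsm/vdsm.log*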

>
>
>image:ff8c64c4-d52b-4812-b541-7f291f98d961
>
>  - 85f77cd5-2f86-49a9-a411-8539114d3035
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:70fc19a2-75da-41bd-a1f6-eb857ed2f18f
>
>  - a8f27397-395f-4b62-93c4-52699f59ea4b
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:2b315278-65f5-45e8-a51e-02b9bc84dcee
>
>  - a6e2150b-57fa-46eb-b205-017fe01b0e4b
>status: OK, voltype: INTERNAL, format: COW, legality:
> LEGAL, type: SPARSE
>
>  - 2d8e5c14-c923-49ac-8660-8e57b801e329
>status: OK, voltype: INTERNAL, format: COW, legality:
> LEGAL, type: SPARSE
>
>  - 43100548-b849-4762-bfc5-18a0f281df2e
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:bf4594b0-242e-4823-abfd-9398ce5e31b7
>
>  - 4608ce2e-f288-40da-b4e5-2a5e7f3bf837
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:00efca9d-932a-45b3-92c3-80065c1a40ce
>
>  - a0bb00bc-cefa-4031-9b59-3cddc3a53a0a
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:5ce704eb-3508-4c36-b0ce-444ebdd27e66
>
>  - e41f2c2d-0a79-49f1-8911-1535a82bd735
>status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
> type: PREALLOCATED
>
>
>image:11288fa5-0019-4ac0-8a7d-1d455e5e1549
>
>  - 5df31efc-14dd-427c-b575-c0d81f47c6d8
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:a091f7df-5c64-4b6b-a806-f4bf3aad53bc
>
>  - 38138111-2724-44a4-bde1-1fd9d60a1f63
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:c0b302c4-4b9d-4759-bb80-de1e865ecd58
>
>  - d4db9ba7-1b39-4b48-b319-013ebc1d71ce
>status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
> type: PREALLOCATED
>
>
>image:21123edb-f74f-440b-9c42-4c16ba06a2b7
>
>  - f3cc17aa-4336-4542-9ab0-9df27032be0b
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:ad486d26-4594-4d16-a402-68b45d82078a
>
>  - e87e0c7c-4f6f-45e9-90ca-cf34617da3f6
>status: OK, voltype: LEAF, format: COW, legality: LEGAL,
> type: SPARSE
>
>
>image:c30c7f11-7818-4592-97ca-9d5be46e2d8e
>
>  - cb53ad06-65e8-474d-94c3-9acf044d5a09
>status: OK, voltype: LEAF, format: COW, legality

Re: [ovirt-users] vhostmd vdsm-hook

2016-04-20 Thread Arsène Gschwind
I've never tried with 2 disks but I will assume that the next free 
available disk will be used by the vdsm hook and the vm-dump-metrics cmd 
will check the kind of disk.

Let me know if you give it a try

thanks,
Arsène

On 04/19/2016 02:43 PM, Simon Barrett wrote:


Thanks again but how does that work when a VM is configured to have 
more than one disk?


If I have a VM with a /dev/vda disk and a /dev/vdb disk, when I turn
the vhostmd hook on, the VM metric device gets created as /dev/vdb and
the original /dev/vdb disk gets bumped to /dev/vdc.


Is that expected behavior? Will that not cause problems?

Thanks,

Simon

*From:*Arsène Gschwind [mailto:arsene.gschw...@unibas.ch]
*Sent:* Tuesday, 19 April, 2016 13:06
*To:* Simon Barrett ; users@ovirt.org
*Subject:* Re: [ovirt-users] vhostmd vdsm-hook

The metric information is available on this additional disk, /dev/vdb.
You may install the package vm-dump-metrics and use the command
vm-dump-metrics, which will display all metrics in XML format.
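As a rough illustration of what this looks like from inside the guest (assuming the metrics disk shows up as /dev/vdb and that the vm-dump-metrics package is available in the guest's repositories), something like:

# inside the guest; the device name may differ on your setup
lsblk                      # the vhostmd metrics disk appears as a small extra virtio disk, e.g. /dev/vdb
yum install -y vm-dump-metrics
vm-dump-metrics            # reads the metrics disk and prints all host/VM metrics as XML

should show the host-side values that vhostmd publishes.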


Arsène

On 04/19/2016 10:48 AM, Simon Barrett wrote:

Thanks Arsène,

I have vhostmd running on the ovirt node and have set the
sap_agent to true on the VM configuration. I also stopped and
started the VM to ensure that the config change took effect.

On the oVirt node I see the vhostmd running and see the following
entry in the qemu-kvm output:

drive

file=/dev/shm/vhostmd0,if=none,id=drive-virtio-disk701,readonly=on,format=raw
-device

virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk701,id=virtio-disk701

The part I wasn’t quite understanding was how this presented
itself on the VM but I now see a new disk device “/dev/vdb”. If I
cat the contents of /dev/vdb I now see the information that is
provided from the ovirt node, which is great news and very useful.

Thanks for your help.

Simon

*From:*users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] *On Behalf Of *Arsène Gschwind
*Sent:* Monday, 18 April, 2016 16:03
*To:* users@ovirt.org 
*Subject:* Re: [ovirt-users] vhostmd vdsm-hook

Hi Simon,

You will need to have vhostmd running on the oVirt node and set
the "sap_agent" custom property for the vm as you may see on the
screenshot.

sap_agent

Arsène

On 04/15/2016 12:15 PM, Simon Barrett wrote:

I’m trying to use the vhostmd vdsm hook to access ovirt node
metrics from within a VM. Vhostmd is running and updating the
/dev/shm/vhostmd0 on the ovirt node.

The part I’m stuck on is: “This disk image is exported
read-only to guests. Guests can read the disk image to see
metrics” from
http://www.ovirt.org/develop/developer-guide/vdsm/hook/vhostmd/

Does the hook do this by default? I don’t see any new
read-only device mounted in the guest. Is there additional
work I need to do to mount this and access the data from
within the guest?

Many thanks,

Simon





___

Users mailing list

Users@ovirt.org 

http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disks Illegal State

2016-04-20 Thread Clint Boggio
In my case, Markus, the backing disks are MIA and show only as bright
red broken symbolic links. Using the postgres commands to set them as
OK would be folly, and likely cause more trouble. If the snapshot disks
are truly gone (and they are), what procedure would I use to inform
the database and set the VMs to a usable status again?

On Mon, 2016-04-18 at 12:39 +, Markus Stockhausen wrote:
> > 
> > From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf
> > of Clint Boggio [cl...@theboggios.com]
> > Sent: Monday, 18 April 2016 14:16
> > To: users@ovirt.org
> > Subject: [ovirt-users] Disks Illegal State
> > 
> > OVirt 3.6, 4 node cluster with dedicated engine. Main storage
> > domain is iscsi, ISO and Export domains are NFS.
> > 
> > Several of my VM snapshot disks show to be in an "illegal state".
> > The system will not allow me to manipulate the snapshots in any
> > way, nor clone the active system, or create a new snapshot.
> > 
> > In the logs I see that the system complains about not being able to
> > "get volume size for xxx", and also that the system appears to
> > believe that the image is "locked" and is currently in the snapshot
> > process.
> > 
> > Of the VM's with this status, one rebooted and was lost due to
> > "cannot get volume size for domain xxx".
> > 
> > I fear that in this current condition, should any of the other
> > machine reboot, they too will be lost.
> > 
> > How can I troubleshoot this problem further, and hopefully
> > alleviate the condition ?
> > 
> > Thank you for your help.
> Hi Clint,
> 
> For us the problem always boils down to the following steps. It might be
> simpler for us since we use
> NFS for all of our domains and have direct access to the image files.
> 
> 1) Check if snapshot disks are currently used. Capture the qemu
> command line with a "ps -ef"
> on the nodes. There you can see what images qemu is started with. For
> each of the files check
> the backing chain:
> 
> # qemu-img info /rhev/.../bbd05dd8-c3bf-4d15-9317-73040e04abae
> image: bbd05dd8-c3bf-4d15-9317-73040e04abae
> file format: qcow2
> virtual size: 50G (53687091200 bytes)
> disk size: 133M
> cluster_size: 65536
> backing file: ../f8ebfb39-2ac6-4b87-b193-4204d1854edc/595b95f4-ce1a-
> 4298-bd27-3f6745ae4e4c
> backing file format: raw
> Format specific information:
> compat: 0.10
> 
> # qemu-img info .../595b95f4-ce1a-4298-bd27-3f6745ae4e4c (see above)
> ...
> 
> I don't know how you can accomplish this on iSCSI (with the LVM-based
> images inside, iirc). We
> usually follow the backing chain and check that all the files exist and
> are linked correctly, and especially
> that everything matches the oVirt GUI. I guess this is the most
> important part for you.
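As a small aside, a reasonably recent qemu-img can walk the whole chain in one go, e.g.

# sketch; requires a qemu-img that supports --backing-chain
qemu-img info --backing-chain /rhev/.../bbd05dd8-c3bf-4d15-9317-73040e04abae

which prints the information above for every layer down to the base image.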
> 
> 2) In most of our cases everything is fine and only the OVirt
> database is wrong. So we fix it
> at our own risk. Because of your explanation I do not recommend that
> for you. It is just for 
> documentation purpose.
> 
> engine# su - postgres
> > 
> > psql engine postgres
> > 
> > select image_group_id,imagestatus from images where imagestatus =4;
> > ... list of illegal images
> > update images set imagestatus = 1 where imagestatus = 4 and <your criteria>;
> > commit
> > 
> > select description,status from snapshots where status <> 'OK';
> > ... list of locked snapshots
> > update snapshots set status = 'OK' where status <> 'OK' and <your criteria>;
> > commit
> > 
> > \q
> Restart the engine and everything should be in sync again.
> 
> Best regards.
> 
> Markus
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-04-20 Thread Paul Groeneweg | Pazion
The logs are not from the machine where the hosted engine is running on,
but from the SPM.

On Wed, 20 Apr 2016 at 17:19, Paul Groeneweg | Pazion wrote:

> Hereby the logs.
>
>
> On Wed, 20 Apr 2016 at 17:11, Maor Lipchuk wrote:
>
>> Hi Paul,
>>
>> Can you please attach the engine and VDSM logs with those failures so we
>> can check the origin of those failures.
>>
>> Thanks,
>> Maor
>>
>> On Wed, Apr 20, 2016 at 6:06 PM, Paul Groeneweg | Pazion 
>> wrote:
>>
>>> Looks like the system does try to recreate the OVF :-)
>>> Too bad this failed again...
>>>
>>> http://screencast.com/t/RlYCR1rk8T
>>> http://screencast.com/t/CpcQuoKg
>>>
>>> Failed to create OVF store disk for Storage Domain hostedengine_nfs.
>>> The Disk with the id b6f34661-8701-4f82-a07c-ed7faab4a1b8 might be
>>> removed manually for automatic attempt to create new one.
>>> OVF updates won't be attempted on the created disk.
>>>
>>> And on the hosted storage disk tab :
>>> http://screencast.com/t/ZmwjsGoQ1Xbp
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wed, 20 Apr 2016 at 09:17, Paul Groeneweg | Pazion <
>>> p...@pazion.nl> wrote:
>>>
 I have added a ticket:
 https://bugzilla.redhat.com/show_bug.cgi?id=1328718

 Looking forward to solving it!  ( trying to provide as much info as
 required ).

 For the short term, what do I need to restore/roll back to get the
 OVF_STORE back in the Web GUI? Is this all in the db?



 On Wed, 20 Apr 2016 at 09:04, Paul Groeneweg | Pazion <
 p...@pazion.nl> wrote:

> Yes, I also removed them from the web interface.
> Can I recreate these, or how can I restore them?
>
> On Wed, 20 Apr 2016 at 09:01, Roy Golan wrote:
>
>> On Wed, Apr 20, 2016 at 9:05 AM, Paul Groeneweg | Pazion <
>> p...@pazion.nl> wrote:
>>
>>> Hi Roy,
>>>
>>> What do you mean by an RFE, submit a bug ticket?
>>>
>>> Yes please. https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
>>
>>
>>
>>> Here is what I did:
>>>
>>> I removed the OVF disks as explained from the hosted engine/storage.
>>> I started another server, tried several things like putting to
>>> maintenance and reinstalling, but I keep getting:
>>>
>>> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
>>> WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to find
>>> OVF_STORE
>>> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
>>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR 
>>> Unable
>>> to get vm.conf from OVF_STORE, falling back to initial vm.conf
>>> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
>>> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Unable
>>> to get vm.conf from OVF_STORE, falling back to initial vm.conf
>>> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
>>> ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: ''Configuration 
>>> value
>>> not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' -
>>> trying to restart agent
>>>
>>> Fact it can't find the OVF store seems logical, but now the
>>> /var/run/ovirt-hosted-engine-ha/vm.conf is replace with a file 
>>> conatining
>>> only "None".
>>> I tried to set file readonly ( chown root ), but this only threw an
>>> error about file not writable, tried different path, but nothing helped.
>>> So I am afraid to touch the other running hosts, as same might
>>> happen there and I am unable to start hosted engine again.
>>>
>>> I thought OVF would be created automatically again if it is missing,
>>> but it isn't...
>>> Can I trigger this OVF, or add it somehow manually? Would deleting
>>> the whole hosted_storage trigger an auto import again including OVF?
>>>
>>> If this provides no solution, I guess, I have to restore the removed
>>> OVF store. Would a complete database restore + restoring folder
>>> images/ be sufficient?
>>> Or where is the information about the OVF stores the Web GUI shows
>>> stored?
>>>
>>>
>> Did you remove it also from the engine via the webadmin or REST?
>> storage tab -> click the hosted_storage domain -> disks subtab -> right
>> click remove the failing ovf
>>
>>
>>> Looking forward to resolve this OVF store issue.
>>>
>>> Thanks in advance!!!
>>>
>>>
>>>
>>> On Tue, 19 Apr 2016 at 10:31, Paul Groeneweg | Pazion <
>>> p...@pazion.nl> wrote:
>>>
 Hi Roy,

 Thanks for this explanation. I will dive into this evening. ( and
 make a backup first :-) )

 Normally the hosted engine only creates 1 ovf disk for the hosted
 storage?

 Thanks for the help.

 On Tue, 19 Apr 2016 at 10:22, Roy Golan wrote:

> On Mon, Apr 18, 2016 at 10:05 PM, Paul Groeneweg | Pazion <
> p...@pazion.nl> wrote:
>

Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-04-20 Thread Maor Lipchuk
Hi Paul,

Can you please attach the engine and VDSM logs with those failures so we
can check the origin of those failures.

Thanks,
Maor

On Wed, Apr 20, 2016 at 6:06 PM, Paul Groeneweg | Pazion 
wrote:

> Looks like the system does try to recreate the OVF :-)
> Too bad this failed again...
>
> http://screencast.com/t/RlYCR1rk8T
> http://screencast.com/t/CpcQuoKg
>
> Failed to create OVF store disk for Storage Domain hostedengine_nfs.
> The Disk with the id b6f34661-8701-4f82-a07c-ed7faab4a1b8 might be removed
> manually for automatic attempt to create new one.
> OVF updates won't be attempted on the created disk.
>
> And on the hosted storage disk tab : http://screencast.com/t/ZmwjsGoQ1Xbp
>
>
>
>
>
>
> On Wed, 20 Apr 2016 at 09:17, Paul Groeneweg | Pazion <
> p...@pazion.nl> wrote:
>
>> I have added a ticket:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1328718
>>
>> Looking forward to solving it!  ( trying to provide as much info as required
>> ).
>>
>> For the short term, what do I need to restore/roll back to get the
>> OVF_STORE back in the Web GUI? Is this all in the db?
>>
>>
>>
>> On Wed, 20 Apr 2016 at 09:04, Paul Groeneweg | Pazion <
>> p...@pazion.nl> wrote:
>>
>>> Yes, I also removed them from the web interface.
>>> Can I recreate these, or how can I restore them?
>>>
>>> On Wed, 20 Apr 2016 at 09:01, Roy Golan wrote:
>>>
 On Wed, Apr 20, 2016 at 9:05 AM, Paul Groeneweg | Pazion <
 p...@pazion.nl> wrote:

> Hi Roy,
>
> What do you mean by an RFE, submit a bug ticket?
>
> Yes please. https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt



> Here is what I did:
>
> I removed the OVF disks as explained from the hosted engine/storage.
> I started another server, tried several things like putting to
> maintenance and reinstalling, but I keep getting:
>
> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
> WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to find
> OVF_STORE
> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR 
> Unable
> to get vm.conf from OVF_STORE, falling back to initial vm.conf
> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Unable
> to get vm.conf from OVF_STORE, falling back to initial vm.conf
> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: ''Configuration 
> value
> not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' -
> trying to restart agent
>
> Fact it can't find the OVF store seems logical, but now the
> /var/run/ovirt-hosted-engine-ha/vm.conf is replaced with a file containing
> only "None".
> I tried to set file readonly ( chown root ), but this only threw an
> error about file not writable, tried different path, but nothing helped.
> So I am afraid to touch the other running hosts, as same might happen
> there and I am unable to start hosted engine again.
>
> I thought OVF would be created automatically again if it is missing,
> but it isn't...
> Can I trigger this OVF, or add it somehow manually? Would deleting the
> whole hosted_storage trigger an auto import again including OVF?
>
> If this provides no solution, I guess, I have to restore the removed
> OVF store. Would a complete database restore + restoring folder
> images/ be sufficient?
> Or where is the information about the OVF stores the Web GUI shows
> stored?
>
>
 Did you remove it also from the engine via the webadmin or REST?
 storage tab -> click the hosted_storage domain -> disks subtab -> right
 click remove the failing ovf


> Looking forward to resolve this OVF store issue.
>
> Thanks in advance!!!
>
>
>
> On Tue, 19 Apr 2016 at 10:31, Paul Groeneweg | Pazion <
> p...@pazion.nl> wrote:
>
>> Hi Roy,
>>
>> Thanks for this explanation. I will dive into this evening. ( and
>> make a backup first :-) )
>>
>> Normally the hosted engine only creates 1 ovf disk for the hosted
>> storage?
>>
>> Thanks for the help.
>>
>> On Tue, 19 Apr 2016 at 10:22, Roy Golan wrote:
>>
>>> On Mon, Apr 18, 2016 at 10:05 PM, Paul Groeneweg | Pazion <
>>> p...@pazion.nl> wrote:
>>>
 I am still wondering about the OVF disk ( and event error ) on my
 hosted storage domain.

 My hostedstorage ovf disks ( http://screencast.com/t/AcdqmJWee )
  are not being updated ( what I understood is they should be regularly
 updated ).

 So I wonder, maybe I can remove these OVF disks and they are
 recreated automatically? ( Similar when removing the hosted storage 
 domain
 it was added automatically again )

 And 

Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-04-20 Thread Paul Groeneweg | Pazion
Looks like the system does try to recreate the OVF :-)
Too bad this failed again...

http://screencast.com/t/RlYCR1rk8T
http://screencast.com/t/CpcQuoKg

Failed to create OVF store disk for Storage Domain hostedengine_nfs.
The Disk with the id b6f34661-8701-4f82-a07c-ed7faab4a1b8 might be removed
manually for automatic attempt to create new one.
OVF updates won't be attempted on the created disk.

And on the hosted storage disk tab : http://screencast.com/t/ZmwjsGoQ1Xbp






On Wed, 20 Apr 2016 at 09:17, Paul Groeneweg | Pazion wrote:

> I have added a ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1328718
>
> Looking forward to solving it!  ( trying to provide as much info as required
> ).
>
> For the short term, what do I need to restore/roll back to get the
> OVF_STORE back in the Web GUI? Is this all in the db?
>
>
>
> On Wed, 20 Apr 2016 at 09:04, Paul Groeneweg | Pazion <
> p...@pazion.nl> wrote:
>
>> Yes, I also removed them from the web interface.
>> Can I recreate these, or how can I restore them?
>>
>> On Wed, 20 Apr 2016 at 09:01, Roy Golan wrote:
>>
>>> On Wed, Apr 20, 2016 at 9:05 AM, Paul Groeneweg | Pazion >> > wrote:
>>>
 Hi Roy,

 What do you mean by an RFE, submit a bug ticket?

 Yes please. https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
>>>
>>>
>>>
 Here is what I did:

 I removed the OVF disks as explained from the hosted engine/storage.
 I started another server, tried several things like putting to
 maintenance and reinstalling, but I keep getting:

 Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
 WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to find
 OVF_STORE
 Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
 ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable
 to get vm.conf from OVF_STORE, falling back to initial vm.conf
 Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
 ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Unable
 to get vm.conf from OVF_STORE, falling back to initial vm.conf
 Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
 ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: ''Configuration value
 not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' -
 trying to restart agent

 Fact it can't find the OVF store seems logical, but now the
 /var/run/ovirt-hosted-engine-ha/vm.conf is replaced with a file containing
 only "None".
 I tried to set file readonly ( chown root ), but this only threw an
 error about file not writable, tried different path, but nothing helped.
 So I am afraid to touch the other running hosts, as same might happen
 there and I am unable to start hosted engine again.

 I thought OVF would be created automatically again if it is missing,
 but it isn't...
 Can I trigger this OVF, or add it somehow manually? Would deleting the
 whole hosted_storage trigger an auto import again including OVF?

 If this provides no solution, I guess, I have to restore the removed
 OVF store. Would a complete database restore + restoring folder
 images/ be sufficient?
 Or where is the information about the OVF stores the Web GUI shows
 stored?


>>> Did you remove it also from the engine via the webadmin or REST? storage
>>> tab -> click the hosted_storage domain -> disks subtab -> right click
>>> remove the failing ovf
>>>
>>>
 Looking forward to resolve this OVF store issue.

 Thanks in advance!!!



 On Tue, 19 Apr 2016 at 10:31, Paul Groeneweg | Pazion <
 p...@pazion.nl> wrote:

> Hi Roy,
>
> Thanks for this explanation. I will dive into this evening. ( and
> make a backup first :-) )
>
> Normally the hosted engine only creates 1 ovf disk for the hosted
> storage?
>
> Thanks for the help.
>
> On Tue, 19 Apr 2016 at 10:22, Roy Golan wrote:
>
>> On Mon, Apr 18, 2016 at 10:05 PM, Paul Groeneweg | Pazion <
>> p...@pazion.nl> wrote:
>>
>>> I am still wondering about the OVF disk ( and event error ) on my
>>> hosted storage domain.
>>>
>>> My hostedstorage ovf disks ( http://screencast.com/t/AcdqmJWee )
>>>  are not being updated ( what I understood is they should be regularly
>>> updated ).
>>>
>>> So I wonder, maybe I can remove these OVF disks and they are
>>> recreated automatically? ( Similar when removing the hosted storage 
>>> domain
>>> it was added automatically again )
>>>
>>> And for this NFS storage domain, is it normal to have 2 OVF disks?
>>>
>>> Really looking for a way get these OVF disks right.
>>>
>>>
>>>
>> Hi Paul,
>>
>> What you can do to remove them is to run this sql statement at your
>> setup
>>
>> ```sql
>> -- first make sure this is the disk, dates are taken from your
>> screenshot
>>
>> 

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
Hi everybody,

I added the procedure to the wiki; if you would be so kind as to review it:

https://github.com/oVirt/ovirt-site/pull/188

Thanks

Martin


On Wed, Apr 20, 2016 at 1:29 PM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho  wrote:
>> Hi Didi & Martin,
>>
>> I followed your instructions and was able to add the 2nd host. Thank you :)
>>
>> This is what I've done:
>>
>> [root@host01 ~]# hosted-engine --set-maintenance --mode=global
>>
>> [root@host01 ~]# systemctl stop ovirt-ha-agent
>>
>> [root@host01 ~]# systemctl stop ovirt-ha-broker
>>
>> [root@host01 ~]# find /rhev -name hosted-engine.metadata
>> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>>
>> [root@host01 ~]# ls -al
>> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> lrwxrwxrwx. 1 vdsm kvm 132 Apr 20 02:56
>> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> ->
>> /var/run/vdsm/storage/47e3e4ac-534a-4e11-b14e-27ecb4585431/d92632bf-8c15-44ba-9aa8-4a39dcb81e8d/4761bb8d-779e-4378-8b13-7b12f96f5c56
>>
>> [root@host01 ~]# ls -al
>> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>> lrwxrwxrwx. 1 vdsm kvm 132 Apr 21 03:40
>> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>> ->
>> /var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>>
>> [root@host01 ~]# dd if=/dev/zero
>> of=/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>> bs=1M
>> dd: error writing
>> ‘/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3’:
>> No space left on device
>> 129+0 records in
>> 128+0 records out
>> 134217728 bytes (134 MB) copied, 0.246691 s, 544 MB/s
>>
>> [root@host01 ~]# systemctl start ovirt-ha-broker
>>
>> [root@host01 ~]# systemctl start ovirt-ha-agent
>>
>> [root@host01 ~]# hosted-engine --set-maintenance --mode=none
>>
>> (I found 2 metadata files, but the first one shows up red when I use 'ls -al', so I
>> assume it is a leftover from the previous failed installation and didn't
>> touch it.)
>>
>> BTW, how do I properly clean the FC storage before using it with oVirt? I used
>> "parted /dev/mapper/wwid mklabel msdos" to destroy the partition table.
>> Isn't that enough?
>
> Even this should not be needed in 3.6. Did you start with 3.6? Or upgraded
> from a previous version?
>
> Also please verify that output of 'hosted-engine --vm-status' makes sense.
>
> Thanks,
>
>>
>>
>> On 20/4/2559 15:11, Martin Sivak wrote:

 Assuming you never deployed a host with ID 52, this is likely a result of
 a
 corruption or dirt or something like that.
 I see that you use FC storage. In previous versions, we did not clean
 such
 storage, so you might have dirt left.
>>>
>>> This is the exact reason for an error like yours: using dirty block
>>> storage. Please stop all hosted-engine tooling (both agent and broker)
>>> and fill the metadata drive with zeros.
>>>
>>> You will have to find the proper hosted-engine.metadata file (which
>>> will be a symlink) under /rhev:
>>>
>>> Example:
>>>
>>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>>
>>>
>>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> [root@dev-03 rhev]# ls -al
>>>
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>>>
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>> ->
>>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>>
>>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>>> clean it - but be CAREFUL not to touch any other file or disk you
>>> might find.
>>>
>>> Then restart the hosted engine tools and all should be fine.
>>>
>>>
>>>
>>> Martin
>>>
>>>
>>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David 
>>> wrote:

 On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho 
 wrote:
>
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the
> engine.
>
> The 1st host and the hosted-engine were installed successfully, but the
> 2nd
> host failed with this error message:
>
> "Failed to execute stage 'Setup validation': Metadata version 2 from

Re: [ovirt-users] Disks Illegal State

2016-04-20 Thread Clint Boggio
The "vdsm-tool dump-volume-chains" command on the iSCSI storage domain
shows one disk in "ILLEGAL" state while the gui shows 8 disk images in
the same state.

###
# BEGIN COMMAND OUTPUT
###



[root@KVM01 ~]# vdsm-tool dump-volume-chains 045c7fda-ab98-4905-876c-
00b5413a619f

Images volume chains (base volume first)

   image:477e73af-e7db-4914-81ed-89b3fbc876f7

 - c8320522-f839-472e-9707-a75f6fbe5cb6
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:882c73fc-a833-4e2e-8e6a-f714d80c0f0d

 - 689220c0-70f8-475f-98b2-6059e735cd1f
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:0ca8c49f-452e-4f61-a3fc-c4bf2711e200

 - dac06a5c-c5a8-4f82-aa8d-5c7a382da0b3
   status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
type: PREALLOCATED


   image:0ca0b8f8-8802-46ae-a9f8-45d5647feeb7

 - 51a6de7b-b505-4c46-ae2a-25fb9faad810
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:ae6d2c62-cfbb-4765-930f-c0a0e3bc07d0

 - b2d39c7d-5b9b-498d-a955-0e99c9bd5f3c
   status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE

 - bf962809-3de7-4264-8c68-6ac12d65c151
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:ff8c64c4-d52b-4812-b541-7f291f98d961

 - 85f77cd5-2f86-49a9-a411-8539114d3035
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:70fc19a2-75da-41bd-a1f6-eb857ed2f18f

 - a8f27397-395f-4b62-93c4-52699f59ea4b
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:2b315278-65f5-45e8-a51e-02b9bc84dcee

 - a6e2150b-57fa-46eb-b205-017fe01b0e4b
   status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE

 - 2d8e5c14-c923-49ac-8660-8e57b801e329
   status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE

 - 43100548-b849-4762-bfc5-18a0f281df2e
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:bf4594b0-242e-4823-abfd-9398ce5e31b7

 - 4608ce2e-f288-40da-b4e5-2a5e7f3bf837
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:00efca9d-932a-45b3-92c3-80065c1a40ce

 - a0bb00bc-cefa-4031-9b59-3cddc3a53a0a
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:5ce704eb-3508-4c36-b0ce-444ebdd27e66

 - e41f2c2d-0a79-49f1-8911-1535a82bd735
   status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
type: PREALLOCATED


   image:11288fa5-0019-4ac0-8a7d-1d455e5e1549

 - 5df31efc-14dd-427c-b575-c0d81f47c6d8
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:a091f7df-5c64-4b6b-a806-f4bf3aad53bc

 - 38138111-2724-44a4-bde1-1fd9d60a1f63
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:c0b302c4-4b9d-4759-bb80-de1e865ecd58

 - d4db9ba7-1b39-4b48-b319-013ebc1d71ce
   status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
type: PREALLOCATED


   image:21123edb-f74f-440b-9c42-4c16ba06a2b7

 - f3cc17aa-4336-4542-9ab0-9df27032be0b
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:ad486d26-4594-4d16-a402-68b45d82078a

 - e87e0c7c-4f6f-45e9-90ca-cf34617da3f6
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:c30c7f11-7818-4592-97ca-9d5be46e2d8e

 - cb53ad06-65e8-474d-94c3-9acf044d5a09
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:998ac54a-0d91-431f-8929-fe62f5d7290a

 - d11aa0ee-d793-4830-9120-3b118ca44b6c
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:a1e69838-0bdf-42f3-95a4-56e4084510a9

 - f687c727-ec06-49f1-9762-b0195e0b549a
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:a29598fe-f94e-4215-8508-19ac24b082c8

 - 29b9ff26-2386-4fb5-832e-b7129307ceb4
   status: OK, voltype: LEAF, format: RAW, legality: LEGAL,
type: PREALLOCATED


   image:b151d4d7-d7fc-43ff-8bb2-75cf947ed626

 - 34676d55-695a-4d2a-a7fa-546971067829
   status: OK, voltype: LEAF, format: COW, legality: LEGAL,
type: SPARSE


   image:352a3a9a-4e1a-41bf-af86-717e374a7562

 - adcc7655-9586-48c1-90d2-1dc9a851bbe1
   status: OK, voltype: LEAF, format: 
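
A quick way to narrow output like the above down to just the problematic
volumes is to filter it -- a sketch, reusing the storage domain UUID from the
command above; adjust the amount of grep context to taste:

# vdsm-tool dump-volume-chains 045c7fda-ab98-4905-876c-00b5413a619f | grep -B 4 ILLEGAL

Each match should show the owning image a few lines above the ILLEGAL volume,
which makes it easier to map the vdsm view onto the 8 images the GUI complains
about.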

Re: [ovirt-users] Fwd: hosted-engine install stalls

2016-04-20 Thread Simone Tiraboschi
On Wed, Apr 20, 2016 at 3:29 PM, Johan Vermeulen  wrote:
> Simone,
>
> the install has continued... I can now install the OS on the vm...
> Many thanks.

It shouldn't take that much: normally it's a matter of seconds.

> greetz, J.
>
> 2016-04-20 14:54 GMT+02:00 Simone Tiraboschi :
>>
>> On Wed, Apr 20, 2016 at 2:18 PM, Johan Vermeulen 
>> wrote:
>> > I should sent this to the list, my apologies
>> >
>> > 2016-04-20 14:14 GMT+02:00 Johan Vermeulen :
>> >>
>> >> Hello,
>> >>
>> >> many thanks for helping me out.
>> >> This is the whole log:
>>
>> We call createVolume to create a new volume via VDSM; createVolume is
>> an async task and so we poll till it ends, but here it seems that for
>> some reason it is not ending.
>> Can you please also attach vdsm.log for the same time frame?
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: hosted-engine install stalls

2016-04-20 Thread Johan Vermeulen
Simone,

the install has continued... I can now install the OS on the vm...
Many thanks.

greetz, J.

2016-04-20 14:54 GMT+02:00 Simone Tiraboschi :

> On Wed, Apr 20, 2016 at 2:18 PM, Johan Vermeulen 
> wrote:
> > I should sent this to the list, my apologies
> >
> > 2016-04-20 14:14 GMT+02:00 Johan Vermeulen :
> >>
> >> Hello,
> >>
> >> many thanks for helping me out.
> >> This is the whole log:
>
> We call createVolume to create a new volume via VDSM; createVolume is
> an async task and so we poll till it ends, but here it seems that for
> some reason it is not ending.
> Can you please also attach vdsm.log for the same time frame?
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.5 and SSLv3

2016-04-20 Thread Robert Story
On Wed, 20 Apr 2016 08:52:49 -0400 Alexander wrote:
AW> On Wednesday, April 20, 2016 08:39:14 AM Robert Story wrote:
AW> > Yesterday I had to re-install a host node in my 3.5.6 cluster. After a 
fresh
AW> > install of CentOS 7.2, attempts to re-install failed, as did removing and
AW> > re-adding the node. Here is a log excerpt from the engine:
AW> > 
AW> > [...]
AW> > [org.ovirt.engine.core.vdsbroker.VdsManager]
AW> > (DefaultQuartzScheduler_Worker-38) Host eclipse is not responding. It will
AW> > stay in Connecting state for a grace period of 120 seconds and after that
AW> > an attempt to fence the host will be issued. 2016-04-19 18:22:01,938 ERROR
AW> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
AW> > (DefaultQuartzScheduler_Worker-38) Failure to refresh Vds runtime info:
AW> > org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
AW> > java.net.NoRouteToHostException: No route to host at
AW> > 
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkExc
AW> > eption(VdsBrokerCommand.java:126) [vdsbroker.jar:]
AW> > 
AW> > Luckily seeing SSL+java in the log tickled my memory about java disabling
AW> > SSLv3, and google helped me find this workaround:
AW> > 
AW> >  - edit /usr/lib/jvm/java/jre/lib/security/java.security
AW> >  - look for jdk.tls.disabledAlgorithms
AW> >  - remove SSLv3 from the list
AW> >  - service ovirt-engine restart
AW> > 
AW> > Google also tells me that this should be an issue for 3.5, and there is a
AW> > vdsm setting, VdsmSSLProtocol, that can be set to use TLS, but I can't 
find
AW> > how to change/set it. Anyone know the secret?
AW> 
AW> Pretty much everything engine related can be configured with
AW> engine-config. engine-config -l will give you a list of all the
AW> options. engine-config -g <key> will get the current value,
AW> engine-config -s <key>=<value> will set it. A quick grep indicates that
AW> you are looking for the VdsmSSLProtocol key.

Hmmm..

  # engine-config -g VdsmSSLProtocol
  VdsmSSLProtocol: TLSv1 version: general

Looks like it's already set to TLS, making me wonder why I needed to remove 
SSLv3.  I just put it back and restarted the engine, and it seems to be 
communicating with all hosts ok. So maybe it's just some process/code using 
during install that isn't using this setting...


Robert

-- 
Senior Software Engineer @ Parsons


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Using oVirt Python SDK in Avocado Testing Framework

2016-04-20 Thread Amador Pahim

Hi guys,

I would like to share this post on using the oVirt Python SDK to write tests
in the Avocado Testing Framework:


https://virtstuff.wordpress.com/2016/04/10/ovirt-functional-tests-using-avocado/

This post was born from an issue report, but after we got rid of the
issue, I figured someone else might benefit from the remaining information.


Best,
--
apahim
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: hosted-engine install stalls

2016-04-20 Thread Simone Tiraboschi
On Wed, Apr 20, 2016 at 2:18 PM, Johan Vermeulen  wrote:
> I should sent this to the list, my apologies
>
> 2016-04-20 14:14 GMT+02:00 Johan Vermeulen :
>>
>> Hello,
>>
>> many thanks for helping me out.
>> This is the whole log:

We call createVolume to create a new volume via VDSM; createVolume is
an async task and so we poll till it ends, but here it seems that for
some reason it is not ending.
Can you please also attach vdsm.log for the same time frame?
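
If it ever gets stuck there again, the task can also be watched from the host
side while setup is waiting -- a sketch (vdsClient ships with vdsm-cli):

# vdsClient -s 0 getAllTasksStatuses
# tail -f /var/log/vdsm/vdsm.log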
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.5 and SSLv3

2016-04-20 Thread Alexander Wels
On Wednesday, April 20, 2016 08:39:14 AM Robert Story wrote:
> Yesterday I had to re-install a host node in my 3.5.6 cluster. After a fresh
> install of CentOS 7.2, attempts to re-install failed, as did removing and
> re-adding the node. Here is a log excerpt from the engine:
> 
> 
> 2016-04-19 18:22:01,100 INFO 
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
> Connecting to eclipse.localdomain/10.71.10.249 2016-04-19 18:22:01,116 WARN
>  [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor)
> Retry failed 2016-04-19 18:22:01,129 ERROR
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient]
> (DefaultQuartzScheduler_Worker-38) Exception during connection 2016-04-19
> 18:22:01,208 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> (DefaultQuartzScheduler_Worker-38) Command
> GetCapabilitiesVDSCommand(HostName = eclipse, HostId =
> 37a4a1c2-4906-489e-947c-1ef9fb828bc5,
> vds=Host[eclipse,37a4a1c2-4906-489e-947c-1ef9fb828bc5]) execution failed.
> Exception: VDSNetworkException: java.net.NoRouteToHostException: No route
> to host 2016-04-19 18:22:01,209 WARN 
> [org.ovirt.engine.core.vdsbroker.VdsManager]
> (DefaultQuartzScheduler_Worker-38) Host eclipse is not responding. It will
> stay in Connecting state for a grace period of 120 seconds and after that
> an attempt to fence the host will be issued. 2016-04-19 18:22:01,938 ERROR
> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> (DefaultQuartzScheduler_Worker-38) Failure to refresh Vds runtime info:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> java.net.NoRouteToHostException: No route to host at
> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkExc
> eption(VdsBrokerCommand.java:126) [vdsbroker.jar:]
> 
> 
> Luckily seeing SSL+java in the log tickled my memory about java disabling
> SSLv3, and google helped me find this workaround:
> 
>  - edit /usr/lib/jvm/java/jre/lib/security/java.security
>  - look for jdk.tls.disabledAlgorithms
>  - remove SSLv3 from the list
>  - service ovirt-engine restart
> 
> Google also tells me that this should be an issue for 3.5, and there is a
> vdsm setting, VdsmSSLProtocol, that can be set to use TLS, but I can't find
> how to change/set it. Anyone know the secret?
> 

Pretty much everything engine related can be configured with engine-config.
engine-config -l will give you a list of all the options. engine-config -g <key>
will get the current value, engine-config -s <key>=<value> will set it. A quick
grep indicates that you are looking for the VdsmSSLProtocol key.
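
For the archives, the usual round trip looks roughly like this -- a sketch;
some keys also need an explicit --cver=<version> on the set:

# engine-config -l | grep -i ssl
# engine-config -g VdsmSSLProtocol
# engine-config -s VdsmSSLProtocol=TLSv1
# service ovirt-engine restart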

> 
> Robert

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.5 and SSLv3

2016-04-20 Thread Robert Story
Yesterday I had to re-install a host node in my 3.5.6 cluster. After a fresh 
install of CentOS 7.2, attempts to re-install failed, as did removing and 
re-adding the node. Here is a log excerpt from the engine:


2016-04-19 18:22:01,100 INFO  
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) 
Connecting to eclipse.localdomain/10.71.10.249
2016-04-19 18:22:01,116 WARN  
[org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor) Retry 
failed
2016-04-19 18:22:01,129 ERROR 
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] 
(DefaultQuartzScheduler_Worker-38) Exception during connection
2016-04-19 18:22:01,208 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
(DefaultQuartzScheduler_Worker-38) Command GetCapabilitiesVDSCommand(HostName = 
eclipse, HostId = 37a4a1c2-4906-489e-947c-1ef9fb828bc5, 
vds=Host[eclipse,37a4a1c2-4906-489e-947c-1ef9fb828bc5]) execution failed. 
Exception: VDSNetworkException: java.net.NoRouteToHostException: No route to 
host
2016-04-19 18:22:01,209 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(DefaultQuartzScheduler_Worker-38) Host eclipse is not responding. It will stay 
in Connecting state for a grace period of 120 seconds and after that an attempt 
to fence the host will be issued.
2016-04-19 18:22:01,938 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-38) Failure to refresh Vds runtime info: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
java.net.NoRouteToHostException: No route to host
at 
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:126)
 [vdsbroker.jar:]


Luckily seeing SSL+java in the log tickled my memory about java disabling 
SSLv3, and google helped me find this workaround:

 - edit /usr/lib/jvm/java/jre/lib/security/java.security
 - look for jdk.tls.disabledAlgorithms
 - remove SSLv3 from the list
 - service ovirt-engine restart
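
If anyone wants to script that edit, something along these lines should do it
-- only a sketch, so keep the backup; it assumes SSLv3 is followed by a comma
on the jdk.tls.disabledAlgorithms line, which is the usual layout:

# cp /usr/lib/jvm/java/jre/lib/security/java.security /usr/lib/jvm/java/jre/lib/security/java.security.bak
# sed -i '/^jdk\.tls\.disabledAlgorithms/s/SSLv3, *//' /usr/lib/jvm/java/jre/lib/security/java.security
# service ovirt-engine restart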

Google also tells me that this should be an issue for 3.5, and there is a
vdsm setting, VdsmSSLProtocol, that can be set to use TLS, but I can't find
how to change/set it. Anyone know the secret?


Robert

-- 
Senior Software Engineer @ Parsons


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM status

2016-04-20 Thread Alexander Wels
On Wednesday, April 20, 2016 02:30:01 PM Kevin C wrote:
> Hi,
> 
> I have an exclamation mark on some VMs. But I can't get more information when
> I hover my mouse over a VM (I only see the "Up/Down" status or
> "Server/Desktop").
> 
> How can I find out where the problem is?
> 
> Thanks a lot

There is/was a bug that prevented the tooltip from showing the actual error. 
Not sure which version this is fixed in. Anyway, the exclamation mark is usually
one of two things:

1. The defined OS for the VM doesn't match the actual detected OS in the VM 
(For instance you selected Windows 7 and the actual installed one is Windows 7 
64bit).
2. The timezone doesn't match. This might cause some timezone related issues 
and that is why the exclamation mark is there.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM status

2016-04-20 Thread Kevin C
Hi,

I have an exclamation mark on some VMs. But I can't get more information when I
hover my mouse over a VM (I only see the "Up/Down" status or "Server/Desktop").

How can I find out where the problem is?

Thanks a lot

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Windows BSOD with virtio serial driver

2016-04-20 Thread Kevin COUSIN

> 
> You need this driver for the guest agent to report information and
> generally work well in Windows. This is the communication channel for the
> agent.
> While this is a qemu and/or driver issues, we'll need a lot more
> information to able to help you, beginning with the version of Windows, the
> driver (and where it came from), etc.
> Y.
> 
OK, thanks. That's why my oVirt guest agent is not working.

I have a Windows 2008 R2 SP1 x64 guest. All drivers came from the oVirt guest tools ISO.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] several questions about serial console

2016-04-20 Thread Nathanaël Blanchet



Le 19/04/2016 09:50, Michal Skrivanek a écrit :

On 17 Apr 2016, at 11:53, Yedidyah Bar David  wrote:

On Fri, Apr 15, 2016 at 7:18 PM, Nathanaël Blanchet  wrote:


Le 15/04/2016 17:27, Nathanaël Blanchet a écrit :

Hi all,

About serial console:

how do we get out of a selected vm when we are at the login prompt (why not
go back to the vm menu) rather than killing the ssh process or closing the
terminal? The usual "^]" doesn't work there.
according to
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html-single/Virtual_Machine_Management_Guide/index.html

# ssh -t -p  ovirt-vmconsole@MANAGER_IP --vm-name vm1

should allow connecting directly to a vm on its serial port, and it
is very useful when there is a large number of vms. In reality, we get an SSH
error: "unknown option -- -"

Seems like a bug in the documentation. See also the README file:

/usr/share/doc/ovirt-vmconsole/README on your machine, or

https://gerrit.ovirt.org/gitweb?p=ovirt-vmconsole.git;a=blob;f=README

For those who are interested, the working way is

# ssh -t -p  ovirt-vmconsole@MANAGER_IP connect --vm-name vm1

None of the official RHEV docs nor /usr/share/doc/ovirt-vmconsole/README
mentions it. "connect" is the implicit default argument, but we have to
provide it explicitly in order to specify a vm name.
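
As for getting back out of a vm session (the first question above): since the
proxy session is plain ssh underneath, the client-side ssh escapes should work
-- a guess on my part rather than something from the docs. At the start of a
line (press Enter first) type:

~.      terminates the ssh session
~?      lists the other available escape sequences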





Another question is: why is the vm order not alphabetic? It would simplify
the search when many vms are displayed. And a simple "sort -k2" command
should do the trick...

Please open an RFE for this. Thanks.

I think you can do that yourself, by editing:

/etc/ovirt-vmconsole/ovirt-vmconsole-proxy/conf.d/20-ovirt-vmconsole-proxy-helper.conf


If we want to add 5 users with the UserVmManager role on 150 vms and I can't use
a group for this, it means I need to do it with an ovirt-shell
script like:
# for i in $(cat /tmp/ids.ovirt); do for j in $(cat /tmp/list_all); do
ovirt-shell -E "add permission --parent-vm-name $j --user-id $i --role-name
UserVmManager"; done; done
That is 5*150 API connections, only because I can't add several user ids on the
same "add permission" line? It's doable, but not very convenient and
very slow if I have many more users to add.
Why can't we add a permission by user-name instead of user-id?

That's a limitation/design decision of the REST API.
In the UI you can highlight multiple VMs and assign roles all in one go.
Note you don't have to use the UserVmManager role, it's just one of the
predefined ones. If you want your own, different role, you can always define a
new one and add the serial console permission.

Thanks,
michal


Adding Francesco.

Thanks and best regards,
--
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--
Nathanaël Blanchet

Network supervision
Pôle Infrastructures Informatiques (IT Infrastructure Department)
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-guest-agent not starting on Windows 2008 R2

2016-04-20 Thread Kevin COUSIN
>>
> 
> Is the virtio-serial driver installed and working correctly? Please check
> in the device manager.
> Y.
> 
> 
I get a BSOD when I try to install virtio-serial on Windows (I sent that to the list;
I didn't know the two errors were related).

So I need to fix the BSOD on the virtio-serial driver first to fix this.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-engine install stalls

2016-04-20 Thread Johan Vermeulen
Hello All,

my setup of a hosted engine on CentOS 7.2 hangs on: [ INFO  ] Connecting
Storage Pool.
In the log file I see a lot of items with "OK":

2016-04-20 13:04:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._activateStorageDomain:1067 activateStorageDomain
2016-04-20 13:04:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage heconflib.task_wait:283 Waiting for existing tasks to complete
2016-04-20 13:04:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._activateStorageDomain:1076 {'status': {'message': 'OK', 'code': 0}, 'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 2}}
2016-04-20 13:04:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._activateStorageDomain:1078 {'status': {'message': 'OK', 'code': 0}, 'info': {'name': 'No Description', 'isoprefix': '', 'pool_status': 'connected', 'lver': 2, 'spm_id': 1, 'master_uuid': '4bec7cce-b3d4-4a46-a1db-c7829951ec89', 'version': '3', 'domains': '4bec7cce-b3d4-4a46-a1db-c7829951ec89:Active,c817a126-f45b-4f03-b44b-e6249ef214a7:Active', 'type': 'POSIXFS', 'master_ver': 1}, 'dominfo': {'4bec7cce-b3d4-4a46-a1db-c7829951ec89': {'status': 'Active', 'diskfree': '1933926400', 'isoprefix': '', 'alerts': [], 'disktotal': '2046640128', 'version': 3}, 'c817a126-f45b-4f03-b44b-e6249ef214a7': {'status': 'Active', 'diskfree': '52560920576', 'isoprefix': '', 'alerts': [], 'disktotal': '53660876800', 'version': 3}}}
2016-04-20 13:04:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._activateStorageDomain:1079 {'status': {'message': 'OK', 'code': 0}, '4bec7cce-b3d4-4a46-a1db-c7829951ec89': {'code': 0, 'actual': True, 'acquired': True, 'delay': '9.9272e-05', 'lastCheck': '0.3', 'version': 3, 'valid': True}, 'c817a126-f45b-4f03-b44b-e6249ef214a7': {'code': 0, 'actual': True, 'acquired': False, 'delay': '0.0889462', 'lastCheck': '9.0', 'version': 3, 'valid': True}}
2016-04-20 13:04:01 DEBUG otopi.context context._executeMethod:142 Stage misc METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.heconf.Plugin._misc_create_volume
2016-04-20 13:04:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.heconf heconflib.create_and_prepare_image:358 {'status': {'message': 'OK', 'code': 0}, 'uuid': '4835326e-5528-4c2f-831b-19dd323f1084'}
2016-04-20 13:04:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.heconf heconflib.create_and_prepare_image:372 Created configuration volume OK, request was:
- image: f48e629e-17b3-4849-871e-e5547dd8b031
- volume: 869c3519-6875-4f7a-a15f-6ddf93f23d93

but then it keeps displaying:

2016-04-20 13:05:02 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.heconf heconflib.task_wait:283 Waiting for existing tasks to complete

and that's it.

This is the summary of the install:

  --== CONFIGURATION PREVIEW ==--

  Bridge interface   : enp13s0
  Engine FQDN: *.ddns.net
  Bridge name: ovirtmgmt
  Host address   : **.ddns.net
  SSH daemon port: 8023
  Firewall manager   : iptables
  Gateway address: 192.168.66.1
  Host name for web application  : 
  Host ID: 1
  Image size GB  : 25
  GlusterFS Share Name   : hosted_engine_glusterfs
  GlusterFS Brick Provisioning   : False
  Storage connection : ***.ddns.net:
/export/data
  Console type   : qxl
  Memory size MB : 4096
  MAC address: 00:16:3e:4f:0e:31
  Boot type  : cdrom
  Number of CPUs : 2
  ISO image (cdrom boot/cloud-init)  :
/tmp/CentOS-7-x86_64-Minimal-1511.iso
  CPU Type   : model_Penryn

Thanks for any help on this issue.

greetings, J.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine install stalls

2016-04-20 Thread Simone Tiraboschi
Hi,
can you please attach the whole logs?

On Wed, Apr 20, 2016 at 1:15 PM, Johan Vermeulen  wrote:
> Hello All,
>
> my setup of a hosted engine on CentOS 7.2 hangs on: [ INFO  ] Connecting
> Storage Pool.
> In the log file I see a lot of items with "OK":
>
> 2016-04-20 13:04:01 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._activateStorageDomain:1067 activateStorageDomain
> 2016-04-20 13:04:01 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> heconflib.task_wait:283 Waiting for existing tasks to complete
> 2016-04-20 13:04:01 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._activateStorageDomain:1076 {'status': {'message': 'OK', 'code': 0},
> 'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 2}}
> 2016-04-20 13:04:01 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._activateStorageDomain:1078 {'status': {'message': 'OK', 'code': 0},
> 'info': {'name': 'No Description', 'isoprefix': '', 'pool_status':
> 'connected', 'lver': 2, 'spm_id': 1, 'master_uuid':
> '4bec7cce-b3d4-4a46-a1db-c7829951ec89', 'version': '3', 'domains':
> '4bec7cce-b3d4-4a46-a1db-c7829951ec89:Active,c817a126-f45b-4f03-b44b-e6249ef214a7:Active',
> 'type': 'POSIXFS', 'master_ver': 1}, 'dominfo':
> {'4bec7cce-b3d4-4a46-a1db-c7829951ec89': {'status': 'Active', 'diskfree':
> '1933926400', 'isoprefix': '', 'alerts': [], 'disktotal': '2046640128',
> 'version': 3}, 'c817a126-f45b-4f03-b44b-e6249ef214a7': {'status': 'Active',
> 'diskfree': '52560920576', 'isoprefix': '', 'alerts': [], 'disktotal':
> '53660876800', 'version': 3}}}
> 2016-04-20 13:04:01 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.storage
> storage._activateStorageDomain:1079 {'status': {'message': 'OK', 'code': 0},
> '4bec7cce-b3d4-4a46-a1db-c7829951ec89': {'code': 0, 'actual': True,
> 'acquired': True, 'delay': '9.9272e-05', 'lastCheck': '0.3', 'version': 3,
> 'valid': True}, 'c817a126-f45b-4f03-b44b-e6249ef214a7': {'code': 0,
> 'actual': True, 'acquired': False, 'delay': '0.0889462', 'lastCheck': '9.0',
> 'version': 3, 'valid': True}}
> 2016-04-20 13:04:01 DEBUG otopi.context context._executeMethod:142 Stage
> misc METHOD
> otopi.plugins.ovirt_hosted_engine_setup.storage.heconf.Plugin._misc_create_volume
> 2016-04-20 13:04:01 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.heconf
> heconflib.create_and_prepare_image:358 {'status': {'message': 'OK', 'code':
> 0}, 'uuid': '4835326e-5528-4c2f-831b-19dd323f1084'}
> 2016-04-20 13:04:01 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.heconf
> heconflib.create_and_prepare_image:372 Created configuration volume OK,
> request was:
> - image: f48e629e-17b3-4849-871e-e5547dd8b031
> - volume: 869c3519-6875-4f7a-a15f-6ddf93f23d93
>
> but then it keeps displaying:
>
> 2016-04-20 13:05:02 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.heconf
> heconflib.task_wait:283 Waiting for existing tasks to complete
>
> and that's it.
>
> This is the summary of the install:
>
>   --== CONFIGURATION PREVIEW ==--
>
>   Bridge interface   : enp13s0
>   Engine FQDN: *.ddns.net
>   Bridge name: ovirtmgmt
>   Host address   : **.ddns.net
>   SSH daemon port: 8023
>   Firewall manager   : iptables
>   Gateway address: 192.168.66.1
>   Host name for web application  : 
>   Host ID: 1
>   Image size GB  : 25
>   GlusterFS Share Name   : hosted_engine_glusterfs
>   GlusterFS Brick Provisioning   : False
>   Storage connection :
> ***.ddns.net:/export/data
>   Console type   : qxl
>   Memory size MB : 4096
>   MAC address: 00:16:3e:4f:0e:31
>   Boot type  : cdrom
>   Number of CPUs : 2
>   ISO image (cdrom boot/cloud-init)  :
> /tmp/CentOS-7-x86_64-Minimal-1511.iso
>   CPU Type   : model_Penryn
>
> Thanks for any help on this issue.
>
> greetings, J.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho  wrote:
> Hi Didi & Martin,
>
> I followed your instructions and was able to add the 2nd host. Thank you :)
>
> This is what I've done:
>
> [root@host01 ~]# hosted-engine --set-maintenance --mode=global
>
> [root@host01 ~]# systemctl stop ovirt-ha-agent
>
> [root@host01 ~]# systemctl stop ovirt-ha-broker
>
> [root@host01 ~]# find /rhev -name hosted-engine.metadata
> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>
> [root@host01 ~]# ls -al
> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
> lrwxrwxrwx. 1 vdsm kvm 132 Apr 20 02:56
> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
> ->
> /var/run/vdsm/storage/47e3e4ac-534a-4e11-b14e-27ecb4585431/d92632bf-8c15-44ba-9aa8-4a39dcb81e8d/4761bb8d-779e-4378-8b13-7b12f96f5c56
>
> [root@host01 ~]# ls -al
> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
> lrwxrwxrwx. 1 vdsm kvm 132 Apr 21 03:40
> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
> ->
> /var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>
> [root@host01 ~]# dd if=/dev/zero
> of=/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
> bs=1M
> dd: error writing
> ‘/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3’:
> No space left on device
> 129+0 records in
> 128+0 records out
> 134217728 bytes (134 MB) copied, 0.246691 s, 544 MB/s
>
> [root@host01 ~]# systemctl start ovirt-ha-broker
>
> [root@host01 ~]# systemctl start ovirt-ha-agent
>
> [root@host01 ~]# hosted-engine --set-maintenance --mode=none
>
> (Found 2 metadata files, but the first one shows up in red when I use 'ls -al', so I
> assume it is a leftover from the previous failed installation and didn't
> touch it)
>
> BTW, how do I properly clean the FC storage before using it with oVirt? I used
> "parted /dev/mapper/wwid mklabel msdos" to destroy the partition table.
> Isn't that enough?

Even this should not be needed in 3.6. Did you start with 3.6? Or did you upgrade
from a previous version?

Also please verify that output of 'hosted-engine --vm-status' makes sense.

Thanks,

>
>
> On 20/4/2559 15:11, Martin Sivak wrote:
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result of
>>> a
>>> corruption or dirt or something like that.
>>> I see that you use FC storage. In previous versions, we did not clean
>>> such
>>> storage, so you might have dirt left.
>>
>> This is the exact reason for an error like yours. Using dirty block
>> storage. Please stop all hosted engine tooling (both agent and broker)
>> and fill the metadata drive with zeros.
>>
>> You will have to find the proper hosted-engine.metadata file (which
>> will be a symlink) under /rhev:
>>
>> Example:
>>
>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>
>>
>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> [root@dev-03 rhev]# ls -al
>>
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>>
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>> ->
>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>
>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>> clean it - But be CAREFUL to not touch any other file or disk you
>> might find.
>>
>> Then restart the hosted engine tools and all should be fine.
>>
>>
>>
>> Martin
>>
>>
>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David 
>> wrote:
>>>
>>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho 
>>> wrote:

 Hi,

 I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the
 engine.

 The 1st host and the hosted-engine were installed successfully, but the
 2nd
 host failed with this error message:

 "Failed to execute stage 'Setup validation': Metadata version 2 from
 host 52
 too new for this agent (highest compatible version: 1)"
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result of
>>> a
>>> corruption or dirt or something like that.
>>>
>>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>>
>>> I see that you use FC storage. In previous versions,

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Wee Sritippho

Hi Didi & Martin,

I followed your instructions and was able to add the 2nd host. Thank you :)

This is what I've done:

[root@host01 ~]# hosted-engine --set-maintenance --mode=global

[root@host01 ~]# systemctl stop ovirt-ha-agent

[root@host01 ~]# systemctl stop ovirt-ha-broker

[root@host01 ~]# find /rhev -name hosted-engine.metadata
/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata

[root@host01 ~]# ls -al 
/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
lrwxrwxrwx. 1 vdsm kvm 132 Apr 20 02:56 
/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata 
-> 
/var/run/vdsm/storage/47e3e4ac-534a-4e11-b14e-27ecb4585431/d92632bf-8c15-44ba-9aa8-4a39dcb81e8d/4761bb8d-779e-4378-8b13-7b12f96f5c56


[root@host01 ~]# ls -al 
/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
lrwxrwxrwx. 1 vdsm kvm 132 Apr 21 03:40 
/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata 
-> 
/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3


[root@host01 ~]# dd if=/dev/zero 
of=/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3 
bs=1M
dd: error writing 
‘/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3’: 
No space left on device

129+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.246691 s, 544 MB/s

[root@host01 ~]# systemctl start ovirt-ha-broker

[root@host01 ~]# systemctl start ovirt-ha-agent

[root@host01 ~]# hosted-engine --set-maintenance --mode=none

(Found 2 metadata files, but the first one shows up in red when I use 'ls -al', so
I assume it is a leftover from the previous failed installation and
didn't touch it)
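
A quick sanity check after restarting the agents -- a sketch: run

# hosted-engine --vm-status

on each host and make sure only the expected host IDs show up, with fresh
timestamps.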


BTW, how do I properly clean the FC storage before using it with oVirt? I
used "parted /dev/mapper/wwid mklabel msdos" to destroy the partition
table. Isn't that enough?
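
For what it's worth, a more thorough wipe of a reused LUN is usually something
like the following -- only a sketch, and obviously destructive, so triple-check
the device path first:

# wipefs -a /dev/mapper/<wwid>
# dd if=/dev/zero of=/dev/mapper/<wwid> bs=1M count=200 oflag=direct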


On 20/4/2559 15:11, Martin Sivak wrote:

Assuming you never deployed a host with ID 52, this is likely a result of a
corruption or dirt or something like that.
I see that you use FC storage. In previous versions, we did not clean such
storage, so you might have dirt left.

This is the exact reason for an error like yours. Using dirty block
storage. Please stop all hosted engine tooling (both agent and broker)
and fill the metadata drive with zeros.

You will have to find the proper hosted-engine.metadata file (which
will be a symlink) under /rhev:

Example:

[root@dev-03 rhev]# find . -name hosted-engine.metadata

./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

[root@dev-03 rhev]# ls -al
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
-> 
/rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855

And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
clean it - But be CAREFUL to not touch any other file or disk you
might find.

Then restart the hosted engine tools and all should be fine.



Martin


On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:

On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:

Hi,

I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.

The 1st host and the hosted-engine were installed successfully, but the 2nd
host failed with this error message:

"Failed to execute stage 'Setup validation': Metadata version 2 from host 52
too new for this agent (highest compatible version: 1)"

Assuming you never deployed a host with ID 52, this is likely a result of a
corruption or dirt or something like that.

What do you get on host 1 running 'hosted-engine --vm-status'?

I see that you use FC storage. In previous versions, we did not clean such
storage, so you might have dirt left. See also [1]. You can try cleaning
using [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
[2] 
https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure


Here is the package versions:

[root@host02 ~]# rpm -qa | grep ovirt
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.1-1.el7.cent

Re: [ovirt-users] ovirt-guest-agent not starting on Windows 2008 R2

2016-04-20 Thread Yaniv Kaul
On Wed, Apr 20, 2016 at 11:37 AM, Kevin C  wrote:

> Hi list,
>
> On a Windows 2008 R2, oVirt guest agent is not starting. The following
> information was included with the event:
>
> Traceback (most recent call last):
>   File "win32serviceutil.pyc", line 835, in SvcRun
>   File "OVirtGuestService.pyc", line 89, in SvcDoRun
>   File "GuestAgentWin32.pyc", line 655, in __init__
>   File "OVirtAgentLogic.pyc", line 182, in __init__
>   File "VirtIoChannel.pyc", line 151, in __init__
>   File "VirtIoChannel.pyc", line 128, in __init__
>   File "WinFile.pyc", line 40, in __init__
> error: (2, 'CreateFile', 'The system cannot find the file specified.')
>

Is the virtio-serial driver installed and working correctly? Please check
in the device manager.
Y.
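
A read-only check from the host side can also confirm the channel devices are
actually wired into the guest -- a sketch, with <vm-name> as a placeholder:

# virsh -r dumpxml <vm-name> | grep -Ei -A2 'virtio-serial|channel'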


>
> the message resource is present but the message is not found in the
> string/message table
>
> How can I investigate and start the service ?
>
> Thanks a lot
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Windows BSOD with virtio serial driver

2016-04-20 Thread Yaniv Kaul
On Wed, Apr 20, 2016 at 11:53 AM, Kevin C  wrote:

> Hi list,
>
>
> I imported two windows boxes from Proxmox to oVirt. I want to install
> ovirt-guest-agent and drivers, but I had BSOD when installing virtio-serial.
>
> How can I have some debug info, and need I this driver ?
>

You need this driver for the guest agent to report information and
generally work well in Windows. This is the communication channel for the
agent.
While this is a qemu and/or driver issues, we'll need a lot more
information to able to help you, beginning with the version of Windows, the
driver (and where it came from), etc.
Y.


>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> And we also do not clean on upgrades... Perhaps we can? Should? Optionally?
>

We can't. We do not execute any setup tool during upgrade and the
clean procedure
requires that all hosted engine tooling is shut down.
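
For the archives, the per-host variant discussed here is roughly the following,
run on the host whose slot should be cleared -- a sketch from memory, so
double-check the docs before relying on it:

# systemctl stop ovirt-ha-agent ovirt-ha-broker
# hosted-engine --clean-metadata
# systemctl start ovirt-ha-broker ovirt-ha-agent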

Martin

On Wed, Apr 20, 2016 at 11:40 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 11:40 AM, Martin Sivak  wrote:
>>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>>> I guess it's supposed to be able to handle this, but perhaps users want
>>> to clean the lockspace because dirt there causes also problems with
>>> sanlock, no?
>>
>> Sanlock can be up, but the lockspace has to be unused.
>>
>>> So the only tool we have to clean metadata is '--clean-metadata', which
>>> works one-by-one?
>>
>> Correct, it needs to acquire the lock first to make sure nobody is writing.
>>
>> The dirty disk issue should not be happening anymore, we added an
>> equivalent of the DD to hosted engine setup. But we might have a bug there
>> of course.
>
> And we also do not clean on upgrades... Perhaps we can? Should? Optionally?
>
>>
>> Martin
>>
>> On Wed, Apr 20, 2016 at 10:34 AM, Yedidyah Bar David  wrote:
>>> On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
> after moving to global maintenance.

 Good point.

> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?

 Reinitialize lockspace clears the sanlock lockspace, not the metadata
 file. Those are two different places.
>>>
>>> So the only tool we have to clean metadata is '--clean-metadata', which
>>> works one-by-one?
>>>
>>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>>> I guess it's supposed to be able to handle this, but perhaps users want
>>> to clean the lockspace because dirt there causes also problems with
>>> sanlock, no?
>>>

> Care to add this to the howto page?

 Yeah, I can do that.
>>>
>>> Thanks!
>>>

 Martin

 On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  
 wrote:
> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>>> Assuming you never deployed a host with ID 52, this is likely a result 
>>> of a
>>> corruption or dirt or something like that.
>>
>>> I see that you use FC storage. In previous versions, we did not clean 
>>> such
>>> storage, so you might have dirt left.
>>
>> This is the exact reason for an error like yours. Using dirty block
>> storage. Please stop all hosted engine tooling (both agent and broker)
>> and fill the metadata drive with zeros.
>
> after moving to global maintenance.
>
> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?
> Thanks!
>
>>
>> You will have to find the proper hosted-engine.metadata file (which
>> will be a symlink) under /rhev:
>>
>> Example:
>>
>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>
>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> [root@dev-03 rhev]# ls -al
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>> -> 
>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>
>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>> clean it - But be CAREFUL to not touch any other file or disk you
>> might find.
>>
>> Then restart the hosted engine tools and all should be fine.
>>
>>
>>
>> Martin
>>
>>
>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  
>> wrote:
>>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  
>>> wrote:
 Hi,

 I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
 engine.

 The 1st host and the hosted-engine were installed successfully, but 
 the 2nd
 host failed with this error message:

 "Failed to execute stage 'Setup validation': Metadata version 2 from 
 host 52
 too new for this agent (highest compatible version: 1)"
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result 
>>> of a
>>> corruption or dirt or something like that.
>>>
>>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>>
>>> I see that you use FC storage. In previous versions, we d

[ovirt-users] Issue while importing the existing storage domain

2016-04-20 Thread SATHEESARAN

Hi All,

I was testing gluster geo-replication on a RHEV storage domain backed
by a gluster volume.
In this case, the storage domain (a data domain) was created on a gluster
replica 3 volume.

The VMs' additional disks are carved out of this storage domain.

Now I have geo-replicated[1] the gluster volume to the remote volume.
When I try importing this storage domain in another RHEVM instance, it
fails with the error "internal engine error".

 I see the following error in engine.log


2016-04-20 05:13:47,685 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (ajp-/127.0.0.1:8702-3) 
[20f6ea4c] Failed in 'DetachStorageDomainVDS' method
2016-04-20 05:13:47,708 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp-/127.0.0.1:8702-3) [20f6ea4c] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: VDSM command failed: Cannot acquire 
host id: (u'89061d19-fb76-47c9-a4aa-22b0062b769e', 
SanlockException(-262, 'Sanlock lockspace add failure', 'Sanlock 
exception'))
2016-04-20 05:13:47,708 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (ajp-/127.0.0.1:8702-3) 
[20f6ea4c] Command 
'org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand' return 
value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=661, 
message=Cannot acquire host id: 
(u'89061d19-fb76-47c9-a4aa-22b0062b769e', SanlockException(-262, 
'Sanlock lockspace add failure', 'Sanlock exception'))]]'



The complete logs are available in the fpaste[2]
Attaching the part of vdsm log to this mail

[1] - geo-replication is the feature in glusterfs where the contents of a
volume are asynchronously replicated to a remote volume.

This is used for the disaster-recovery workflow.

[2] - https://paste.fedoraproject.org/357701/11448771/

Thanks,
Satheesaran S
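
A side note for anyone hitting the same sanlock -262 failure: the host-side
view is usually visible with the sanlock client tool and its log -- a sketch,
reusing the storage domain UUID from the error above:

# sanlock client status
# grep 89061d19-fb76-47c9-a4aa-22b0062b769e /var/log/sanlock.log

That generally shows whether the add_lockspace call failed on I/O to the ids
volume or on something else.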
BindingXMLRPC::INFO::2016-04-20 
10:42:10,604::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request 
handler for 127.0.0.1:37704
Thread-4816::INFO::2016-04-20 
10:42:10,605::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler 
for 127.0.0.1:37704 started
Thread-4816::INFO::2016-04-20 
10:42:10,611::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler 
for 127.0.0.1:37704 stopped
jsonrpc.Executor/0::ERROR::2016-04-20 
10:42:11,407::task::866::Storage.TaskManager.Task::(_setError) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 805, in forcedDetachStorageDomain
self._deatchStorageDomainFromOldPools(sdUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 781, in 
_deatchStorageDomainFromOldPools
dom.acquireHostId(pool.id)
  File "/usr/share/vdsm/storage/sd.py", line 533, in acquireHostId
self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 234, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: 
(u'89061d19-fb76-47c9-a4aa-22b0062b769e', SanlockException(-262, 'Sanlock 
lockspace add failure', 'Sanlock exception'))
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,408::task::885::Storage.TaskManager.Task::(_run) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::Task._run: 
14b4ecd2-41f1-4cf3-bb21-8ba5e433f
1c7 (u'89061d19-fb76-47c9-a4aa-22b0062b769e', 
u'----') {} failed - stopping task
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,408::task::1246::Storage.TaskManager.Task::(stop) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::stopping in state preparing (force 
False)
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,408::task::993::Storage.TaskManager.Task::(_decref) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::ref 1 aborting True
jsonrpc.Executor/0::INFO::2016-04-20 
10:42:11,408::task::1171::Storage.TaskManager.Task::(prepare) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::aborting: Task is aborted: 'Cannot 
acquir
e host id' - code 661
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,408::task::1176::Storage.TaskManager.Task::(prepare) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::Prepare: aborted: Cannot acquire 
host id
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,408::task::993::Storage.TaskManager.Task::(_decref) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::ref 0 aborting True
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,408::task::928::Storage.TaskManager.Task::(_doAbort) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433f1c7`::Task._doAbort: force False
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,409::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
jsonrpc.Executor/0::DEBUG::2016-04-20 
10:42:11,409::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`14b4ecd2-41f1-4cf3-bb21-8ba5e433

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 11:40 AM, Martin Sivak  wrote:
>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>> I guess it's supposed to be able to handle this, but perhaps users want
>> to clean the lockspace because dirt there causes also problems with
>> sanlock, no?
>
> Sanlock can be up, but the lockspace has to be unused.
>
>> So the only tool we have to clean metadata is '--clean-metadata', which
>> works one-by-one?
>
> Correct, it needs to acquire the lock first to make sure nobody is writing.
>
> The dirty disk issue should not be happening anymore, we added an
> equivalent of the DD to hosted engine setup. But we might have a bug there
> of course.

And we also do not clean on upgrades... Perhaps we can? Should? Optionally?

>
> Martin
>
> On Wed, Apr 20, 2016 at 10:34 AM, Yedidyah Bar David  wrote:
>> On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
 after moving to global maintenance.
>>>
>>> Good point.
>>>
 Martin - any advantage of this over '--reinitialize-lockspace'? Besides
 that it works also in older versions? Care to add this to the howto page?
>>>
>>> Reinitialize lockspace clears the sanlock lockspace, not the metadata
>>> file. Those are two different places.
>>
>> So the only tool we have to clean metadata is '--clean-metadata', which
>> works one-by-one?
>>
>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>> I guess it's supposed to be able to handle this, but perhaps users want
>> to clean the lockspace because dirt there causes also problems with
>> sanlock, no?
>>
>>>
 Care to add this to the howto page?
>>>
>>> Yeah, I can do that.
>>
>> Thanks!
>>
>>>
>>> Martin
>>>
>>> On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  
>>> wrote:
 On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>> Assuming you never deployed a host with ID 52, this is likely a result 
>> of a
>> corruption or dirt or something like that.
>
>> I see that you use FC storage. In previous versions, we did not clean 
>> such
>> storage, so you might have dirt left.
>
> This is the exact reason for an error like yours. Using dirty block
> storage. Please stop all hosted engine tooling (both agent and broker)
> and fill the metadata drive with zeros.

 after moving to global maintenance.

 Martin - any advantage of this over '--reinitialize-lockspace'? Besides
 that it works also in older versions? Care to add this to the howto page?
 Thanks!

>
> You will have to find the proper hosted-engine.metadata file (which
> will be a symlink) under /rhev:
>
> Example:
>
> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>
> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> [root@dev-03 rhev]# ls -al
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
> -> 
> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>
> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
> clean it - But be CAREFUL to not touch any other file or disk you
> might find.
>
> Then restart the hosted engine tools and all should be fine.
>
>
>
> Martin
>
>
> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  
> wrote:
>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  
>> wrote:
>>> Hi,
>>>
>>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
>>> engine.
>>>
>>> The 1st host and the hosted-engine were installed successfully, but the 
>>> 2nd
>>> host failed with this error message:
>>>
>>> "Failed to execute stage 'Setup validation': Metadata version 2 from 
>>> host 52
>>> too new for this agent (highest compatible version: 1)"
>>
>> Assuming you never deployed a host with ID 52, this is likely a result 
>> of a
>> corruption or dirt or something like that.
>>
>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>
>> I see that you use FC storage. In previous versions, we did not clean 
>> such
>> storage, so you might have dirt left. See also [1]. You can try cleaning
>> using [2].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
>> [2] 
>> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>>
>>>
>>> Here is the package versions:
>>>
>>> [root@host02 ~]# rpm -qa | grep ovir

Re: [ovirt-users] ldap servers configuration can be misleading with AD

2016-04-20 Thread Ondra Machacek

On 04/20/2016 10:33 AM, Fabrice Bacchella wrote:



Le 20 avr. 2016 à 10:16, Ondra Machacek  a écrit :

On 04/19/2016 07:46 PM, Fabrice Bacchella wrote:



Le 19 avr. 2016 à 17:35, Ondra Machacek  a écrit :

On 04/19/2016 04:37 PM, Fabrice Bacchella wrote:

I tried to plug ovirt using my company AD.

But I have a problem, the DNS srv records are not well managed and I can't use 
them so I changed pool.default.serverset.type from srvrecord to failover.


With AD you should use srvrecord, unless you have somehow misconfigured AD.
Can you please elaborate on what 'DNS srv records are not well managed' means?


The command
dig +short  _ldap._tcp.dsone.3ds.com any | wc -l
returns 122 lines. Of those, I can only use fewer than 10; all the others time
out. I don't know if it's a firewall or forgotten DCs that cause that.
There is no way I can use srvrecord.
This domain is totally out of my reach; I have to take it as is.


ok, that's not good, but if some of the DCs which are working are in the same
site, you can use 'domain-conversion' (works only with srvrecord):
pool.default.serverset.srvrecord.domain-conversion.type = regex
pool.default.serverset.srvrecord.domain-conversion.regex.pattern = 
^(?.*)$
pool.default.serverset.srvrecord.domain-conversion.regex.replacement = 
WORKING-SITE._sites.${domain}


What is that supposed to do? All my DCs are in the form xx-xxx-dcs99.${domain}
and I have to pick from this list. dig _sites.${domain} returns nothing for me.

What would a regex do?


Well AD has something called sites[1].
With this regex, you can restrict the lookup to a single site, so that only the
domain controllers registered in that site are used.

[1] https://technet.microsoft.com/en-us/library/cc782048%28v=ws.10%29.aspx
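
As a rough illustration (the site name "MySite" below is made up; substitute a
site that actually exists in your forest), the conversion simply turns the plain
SRV lookup into a site-scoped one, which you can verify by hand with dig:

```
# plain lookup - every DC in the domain answers this record
dig +short _ldap._tcp.dsone.3ds.com SRV | wc -l

# site-scoped lookup - only the DCs registered for that site
dig +short _ldap._tcp.MySite._sites.dsone.3ds.com SRV
```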





Is that your case? Can you please share log of extensions-tool, so we can 
better understand
your problem and provide better help.


I have no knowledge about AD, I'm a 100% linux sysadmin and just use AD as an 
LDAP server, so all those forest/GC are unknown things for me.

I will send that in a private mail.



OK, will take a look.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] several questions about serial console

2016-04-20 Thread Nathanaël Blanchet

Thank you, it works; exactly what I need.

Le 20/04/2016 09:12, Milan Zamazal a écrit :

Nathanaël Blanchet  writes:


* how to get out of a selected vm when we are at the login prompt
(ideally backing up to the vm menu) rather than killing the ssh
process or closing the terminal? The usual "^]" doesn't work there.

You must use ssh's escape sequences here; ~. should work.
See the ESCAPE CHARACTERS section in `man ssh' for more information.


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Windows BSOD with virtio serial driver

2016-04-20 Thread Kevin C
Hi list,


I imported two Windows boxes from Proxmox to oVirt. I want to install
ovirt-guest-agent and drivers, but I got a BSOD when installing virtio-serial.

How can I get some debug info, and do I need this driver?



smime.p7s
Description: S/MIME cryptographic signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-guest-agent not starting on Windows 2008 R2

2016-04-20 Thread Kevin C
Hi list,

On a Windows 2008 R2, oVirt guest agent is not starting. The following 
information was included with the event: 

Traceback (most recent call last):
  File "win32serviceutil.pyc", line 835, in SvcRun
  File "OVirtGuestService.pyc", line 89, in SvcDoRun
  File "GuestAgentWin32.pyc", line 655, in __init__
  File "OVirtAgentLogic.pyc", line 182, in __init__
  File "VirtIoChannel.pyc", line 151, in __init__
  File "VirtIoChannel.pyc", line 128, in __init__
  File "WinFile.pyc", line 40, in __init__
error: (2, 'CreateFile', 'The system cannot find the file specified.')

the message resource is present but the message is not found in the 
string/message table

How can I investigate and start the service?

Thanks a lot

smime.p7s
Description: S/MIME cryptographic signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
> I guess it's supposed to be able to handle this, but perhaps users want
> to clean the lockspace because dirt there causes also problems with
> sanlock, no?

Sanlock can be up, but the lockspace has to be unused.

> So the only tool we have to clean metadata is '--clean-metadata', which
> works one-by-one?

Correct, it needs to acquire the lock first to make sure nobody is writing.

The dirty disk issue should not be happening anymore, we added an
equivalent of the DD to hosted engine setup. But we might have a bug there
of course.
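
For reference, a minimal sketch of that one-by-one cleanup (host id 52 is the
one from the error in this thread; the exact flags can differ between versions,
so check `hosted-engine --help` first):

```
# on a healthy HA host, with the cluster in global maintenance
hosted-engine --set-maintenance --mode=global
hosted-engine --clean-metadata --host-id=52
hosted-engine --set-maintenance --mode=none
```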

Martin

On Wed, Apr 20, 2016 at 10:34 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
>>> after moving to global maintenance.
>>
>> Good point.
>>
>>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>>> that it works also in older versions? Care to add this to the howto page?
>>
>> Reinitialize lockspace clears the sanlock lockspace, not the metadata
>> file. Those are two different places.
>
> So the only tool we have to clean metadata is '--clean-metadata', which
> works one-by-one?
>
> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
> I guess it's supposed to be able to handle this, but perhaps users want
> to clean the lockspace because dirt there causes also problems with
> sanlock, no?
>
>>
>>> Care to add this to the howto page?
>>
>> Yeah, I can do that.
>
> Thanks!
>
>>
>> Martin
>>
>> On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  wrote:
>>> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
> Assuming you never deployed a host with ID 52, this is likely a result of 
> a
> corruption or dirt or something like that.

> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left.

 This is the exact reason for an error like yours. Using dirty block
 storage. Please stop all hosted engine tooling (both agent and broker)
 and fill the metadata drive with zeros.
>>>
>>> after moving to global maintenance.
>>>
>>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>>> that it works also in older versions? Care to add this to the howto page?
>>> Thanks!
>>>

 You will have to find the proper hosted-engine.metadata file (which
 will be a symlink) under /rhev:

 Example:

 [root@dev-03 rhev]# find . -name hosted-engine.metadata

 ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

 [root@dev-03 rhev]# ls -al
 ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

 lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
 ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
 -> 
 /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855

 And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
 clean it - But be CAREFUL to not touch any other file or disk you
 might find.

 Then restart the hosted engine tools and all should be fine.



 Martin


 On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  
 wrote:
> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
>> Hi,
>>
>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
>> engine.
>>
>> The 1st host and the hosted-engine were installed successfully, but the 
>> 2nd
>> host failed with this error message:
>>
>> "Failed to execute stage 'Setup validation': Metadata version 2 from 
>> host 52
>> too new for this agent (highest compatible version: 1)"
>
> Assuming you never deployed a host with ID 52, this is likely a result of 
> a
> corruption or dirt or something like that.
>
> What do you get on host 1 running 'hosted-engine --vm-status'?
>
> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left. See also [1]. You can try cleaning
> using [2].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
> [2] 
> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>
>>
>> Here is the package versions:
>>
>> [root@host02 ~]# rpm -qa | grep ovirt
>> libgovirt-0.3.3-1.el7_2.1.x86_64
>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
>> ovirt-hosted-engine-setup-1.3.4.0-1.el

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
>> after moving to global maintenance.
>
> Good point.
>
> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>> that it works also in older versions? Care to add this to the howto page?
>
> Reinitialize lockspace clears the sanlock lockspace, not the metadata
> file. Those are two different places.

So the only tool we have to clean metadata is '--clean-metadata', which
works one-by-one?

Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
I guess it's supposed to be able to handle this, but perhaps users want
to clean the lockspace because dirt there causes also problems with
sanlock, no?

>
>> Care to add this to the howto page?
>
> Yeah, I can do that.

Thanks!

>
> Martin
>
> On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  wrote:
>> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
 Assuming you never deployed a host with ID 52, this is likely a result of a
 corruption or dirt or something like that.
>>>
 I see that you use FC storage. In previous versions, we did not clean such
 storage, so you might have dirt left.
>>>
>>> This is the exact reason for an error like yours. Using dirty block
>>> storage. Please stop all hosted engine tooling (both agent and broker)
>>> and fill the metadata drive with zeros.
>>
>> after moving to global maintenance.
>>
>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>> that it works also in older versions? Care to add this to the howto page?
>> Thanks!
>>
>>>
>>> You will have to find the proper hosted-engine.metadata file (which
>>> will be a symlink) under /rhev:
>>>
>>> Example:
>>>
>>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>>
>>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> [root@dev-03 rhev]# ls -al
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>> -> 
>>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>>
>>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>>> clean it - But be CAREFUL to not touch any other file or disk you
>>> might find.
>>>
>>> Then restart the hosted engine tools and all should be fine.
>>>
>>>
>>>
>>> Martin
>>>
>>>
>>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
 On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
> engine.
>
> The 1st host and the hosted-engine were installed successfully, but the 
> 2nd
> host failed with this error message:
>
> "Failed to execute stage 'Setup validation': Metadata version 2 from host 
> 52
> too new for this agent (highest compatible version: 1)"

 Assuming you never deployed a host with ID 52, this is likely a result of a
 corruption or dirt or something like that.

 What do you get on host 1 running 'hosted-engine --vm-status'?

 I see that you use FC storage. In previous versions, we did not clean such
 storage, so you might have dirt left. See also [1]. You can try cleaning
 using [2].

 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
 [2] 
 https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure

>
> Here is the package versions:
>
> [root@host02 ~]# rpm -qa | grep ovirt
> libgovirt-0.3.3-1.el7_2.1.x86_64
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>
> [root@engine ~]# rpm -qa | grep ovirt
> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.n

Re: [ovirt-users] ldap servers configuration can be misleading with AD

2016-04-20 Thread Fabrice Bacchella

> Le 20 avr. 2016 à 10:16, Ondra Machacek  a écrit :
> 
> On 04/19/2016 07:46 PM, Fabrice Bacchella wrote:
>> 
>>> Le 19 avr. 2016 à 17:35, Ondra Machacek  a écrit :
>>> 
>>> On 04/19/2016 04:37 PM, Fabrice Bacchella wrote:
 I tried to plug ovirt using my company AD.
 
 But I have a problem, the DNS srv records are not well managed and I can't 
 use them so I changed pool.default.serverset.type from srvrecord to 
 failover.
>>> 
>>> With AD you should use srvrecord, unless you have somehow misconfigured AD.
>>> Can you please elaborate on what 'DNS srv records are not well managed'
>>> means?
>> 
>> The command
>> dig +short  _ldap._tcp.dsone.3ds.com any | wc -l
>> returns 122 lines. Of those, I can only use fewer than 10; all the others
>> time out. I don't know if it's a firewall or forgotten DCs that cause that.
>> There is no way I can use srvrecord.
>> This domain is totally out of my reach; I have to take it as is.
> 
> ok, that's not good, but if some of the DCs which are working are in the same
> site, you can use 'domain-conversion' (works only with srvrecord):
> pool.default.serverset.srvrecord.domain-conversion.type = regex
> pool.default.serverset.srvrecord.domain-conversion.regex.pattern = 
> ^(?.*)$
> pool.default.serverset.srvrecord.domain-conversion.regex.replacement = 
> WORKING-SITE._sites.${domain}

What is that supposed to do? All my DCs are in the form xx-xxx-dcs99.${domain}
and I have to pick from this list. dig _sites.${domain} returns nothing for me.

What would a regex do?


> Is that your case? Can you please share log of extensions-tool, so we can 
> better understand
> your problem and provide better help.

I have no knowledge about AD, I'm a 100% linux sysadmin and just use AD as an 
LDAP server, so all those forest/GC are unknown things for me.

I will send that in a private mail.
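
For anyone wanting to reproduce the check, a quick-and-dirty sketch (it assumes
plain LDAP on port 389 and GNU timeout; adjust to your environment):

```
for dc in $(dig +short _ldap._tcp.dsone.3ds.com SRV | awk '{print $4}'); do
    # try a short TCP connect to each DC; anything that does not answer is unusable
    if timeout 2 bash -c "</dev/tcp/${dc%.}/389" 2>/dev/null; then
        echo "OK      $dc"
    else
        echo "timeout $dc"
    fi
done
```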

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> after moving to global maintenance.

Good point.

> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?

Reinitialize lockspace clears the sanlock lockspace, not the metadata
file. Those are two different places.

> Care to add this to the howto page?

Yeah, I can do that.

Martin

On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>>> Assuming you never deployed a host with ID 52, this is likely a result of a
>>> corruption or dirt or something like that.
>>
>>> I see that you use FC storage. In previous versions, we did not clean such
>>> storage, so you might have dirt left.
>>
>> This is the exact reason for an error like yours. Using dirty block
>> storage. Please stop all hosted engine tooling (both agent and broker)
>> and fill the metadata drive with zeros.
>
> after moving to global maintenance.
>
> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?
> Thanks!
>
>>
>> You will have to find the proper hosted-engine.metadata file (which
>> will be a symlink) under /rhev:
>>
>> Example:
>>
>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>
>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> [root@dev-03 rhev]# ls -al
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>> -> 
>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>
>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>> clean it - But be CAREFUL to not touch any other file or disk you
>> might find.
>>
>> Then restart the hosted engine tools and all should be fine.
>>
>>
>>
>> Martin
>>
>>
>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
>>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
 Hi,

 I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
 engine.

 The 1st host and the hosted-engine were installed successfully, but the 2nd
 host failed with this error message:

 "Failed to execute stage 'Setup validation': Metadata version 2 from host 
 52
 too new for this agent (highest compatible version: 1)"
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result of a
>>> corruption or dirt or something like that.
>>>
>>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>>
>>> I see that you use FC storage. In previous versions, we did not clean such
>>> storage, so you might have dirt left. See also [1]. You can try cleaning
>>> using [2].
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
>>> [2] 
>>> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>>>

 Here is the package versions:

 [root@host02 ~]# rpm -qa | grep ovirt
 libgovirt-0.3.3-1.el7_2.1.x86_64
 ovirt-vmconsole-1.0.0-1.el7.centos.noarch
 ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
 ovirt-host-deploy-1.4.1-1.el7.centos.noarch
 ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
 ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
 ovirt-release36-007-1.noarch
 ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
 ovirt-setup-lib-1.0.1-1.el7.centos.noarch

 [root@engine ~]# rpm -qa | grep ovirt
 ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
 ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
 ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
 ovirt-host-deploy-1.4.1-1.el7.centos.noarch
 ovirt-release36-007-1.noarch
 ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
 ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
 ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
 ovirt-setup-lib-1.0.1-1.el7.centos.noarch
 ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
 ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
 ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
 ovirt-vmconsole-1.0.0-1.el7.centos.noarch
 ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-3.6.4.1-1.el7.centos.noarch
 ovir

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>> Assuming you never deployed a host with ID 52, this is likely a result of a
>> corruption or dirt or something like that.
>
>> I see that you use FC storage. In previous versions, we did not clean such
>> storage, so you might have dirt left.
>
> This is the exact reason for an error like yours. Using dirty block
> storage. Please stop all hosted engine tooling (both agent and broker)
> and fill the metadata drive with zeros.

after moving to global maintenance.

Martin - any advantage of this over '--reinitialize-lockspace'? Besides
that it works also in older versions? Care to add this to the howto page?
Thanks!

>
> You will have to find the proper hosted-engine.metadata file (which
> will be a symlink) under /rhev:
>
> Example:
>
> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>
> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> [root@dev-03 rhev]# ls -al
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
> -> 
> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>
> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
> clean it - But be CAREFUL to not touch any other file or disk you
> might find.
>
> Then restart the hosted engine tools and all should be fine.
>
>
>
> Martin
>
>
> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
>>> Hi,
>>>
>>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
>>>
>>> The 1st host and the hosted-engine were installed successfully, but the 2nd
>>> host failed with this error message:
>>>
>>> "Failed to execute stage 'Setup validation': Metadata version 2 from host 52
>>> too new for this agent (highest compatible version: 1)"
>>
>> Assuming you never deployed a host with ID 52, this is likely a result of a
>> corruption or dirt or something like that.
>>
>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>
>> I see that you use FC storage. In previous versions, we did not clean such
>> storage, so you might have dirt left. See also [1]. You can try cleaning
>> using [2].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
>> [2] 
>> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>>
>>>
>>> Here is the package versions:
>>>
>>> [root@host02 ~]# rpm -qa | grep ovirt
>>> libgovirt-0.3.3-1.el7_2.1.x86_64
>>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>>> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>>> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
>>> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
>>> ovirt-release36-007-1.noarch
>>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>>>
>>> [root@engine ~]# rpm -qa | grep ovirt
>>> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
>>> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
>>> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>>> ovirt-release36-007-1.noarch
>>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>>> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
>>> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
>>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>>> ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
>>> ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>>> ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>>> ovirt-guest-agent-common-1.0.11-1.el7.noarch
>>> ovirt-engine-wildfly-8.2.1-1.el7.x86_64
>>> ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
>>> ovirt-engine-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-restapi-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-userportal-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-ovirt-engine-3.6.4.1-1.el7.centos.noarch
>>> ovirt-image-uploader-3.6.0-1.el7.centos.noarch
>>> ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch

Re: [ovirt-users] ldap servers configuration can be misleading with AD

2016-04-20 Thread Ondra Machacek

On 04/19/2016 07:46 PM, Fabrice Bacchella wrote:



Le 19 avr. 2016 à 17:35, Ondra Machacek  a écrit :

On 04/19/2016 04:37 PM, Fabrice Bacchella wrote:

I tried to plug ovirt using my company AD.

But I have a problem, the DNS srv records are not well managed and I can't use 
them so I changed pool.default.serverset.type from srvrecord to failover.


With AD you should use srvrecord, unless you have somehow misconfigured AD.
Can you please elaborate on what 'DNS srv records are not well managed' means?


The command
dig +short  _ldap._tcp.dsone.3ds.com any | wc -l
returns 122 lines. Of those, I can only use fewer than 10; all the others time
out. I don't know if it's a firewall or forgotten DCs that cause that.
There is no way I can use srvrecord.
This domain is totally out of my reach; I have to take it as is.


ok, that's not good, but if some of the DCs which are working are in the same
site, you can use 'domain-conversion' (works only with srvrecord):

pool.default.serverset.srvrecord.domain-conversion.type = regex
pool.default.serverset.srvrecord.domain-conversion.regex.pattern = 
^(?.*)$
pool.default.serverset.srvrecord.domain-conversion.regex.replacement = 
WORKING-SITE._sites.${domain}






Can you please send engine log or if you are on 3.6, then use this command to 
test and provide log:
$ ovirt-engine-extensions-tool --log-level=FINEST --log-file=ad-search.log aaa 
search --entity-name=userX --extension-name=ad-authz


I kill it after 1h of execution, and a 1.6MB log file, when I have
pool.default.serverset.type = srvrecord
pool.default.serverset.srvrecord.domain = ${global:vars.domain}

With pool.default.serverset.type = failover and 
pool.default.connection-options.connectTimeoutMillis = 500, I got:
time ovirt-engine-extensions-tool  bla
real1m29.264s
user0m6.837s
sys 0m0.291s
and a 278KB log file.


And with my setup (pool.default.serverset.type and 
pool.default.dc-resolve.default.serverset.type set to failover, 
pool.default.connection-options.connectTimeoutMillis = 500), I got
real0m5.084s
user0m6.343s
sys 0m0.164s
and a 199KB log file.


With pool.default.dc-resolve.enable = false, the result is the same as with
failover for everyone.


Ok. So make sure your failover servers are GCs (for correct group resolution).
Now it could use other servers (which you didn't specify in failover) in case
you are resolving a user/group from a different domain, so it's chasing a
referral; in that case we run 'dig domainX.forest.com A', so you can actually
have more A records (inaccessible) for it.


Is that your case? Can you please share log of extensions-tool, so we 
can better understand

your problem and provide better help.





Btw: Do you use a multi-domain AD setup? Or only a single domain?


I think it's a single domain, but I'm not a Microsoft expert at all.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> Assuming you never deployed a host with ID 52, this is likely a result of a
> corruption or dirt or something like that.

> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left.

This is the exact reason for an error like yours. Using dirty block
storage. Please stop all hosted engine tooling (both agent and broker)
and fill the metadata drive with zeros.

You will have to find the proper hosted-engine.metadata file (which
will be a symlink) under /rhev:

Example:

[root@dev-03 rhev]# find . -name hosted-engine.metadata

./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

[root@dev-03 rhev]# ls -al
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
-> 
/rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855

And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
clean it - But be CAREFUL to not touch any other file or disk you
might find.

Then restart the hosted engine tools and all should be fine.
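
Putting the steps together, the sequence would look roughly like this (a sketch
only; the path is an example, so triple-check that the resolved target really
is the hosted-engine metadata volume before writing anything to it):

```
hosted-engine --set-maintenance --mode=global
systemctl stop ovirt-ha-agent ovirt-ha-broker

# resolve the symlink and zero the metadata volume it points to
META=$(readlink -f /rhev/data-center/mnt/.../ha_agent/hosted-engine.metadata)
echo "About to zero: $META"   # verify this before continuing!
dd if=/dev/zero of="$META" bs=1M

systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --set-maintenance --mode=none
```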



Martin


On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
>> Hi,
>>
>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
>>
>> The 1st host and the hosted-engine were installed successfully, but the 2nd
>> host failed with this error message:
>>
>> "Failed to execute stage 'Setup validation': Metadata version 2 from host 52
>> too new for this agent (highest compatible version: 1)"
>
> Assuming you never deployed a host with ID 52, this is likely a result of a
> corruption or dirt or something like that.
>
> What do you get on host 1 running 'hosted-engine --vm-status'?
>
> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left. See also [1]. You can try cleaning
> using [2].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
> [2] 
> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>
>>
>> Here is the package versions:
>>
>> [root@host02 ~]# rpm -qa | grep ovirt
>> libgovirt-0.3.3-1.el7_2.1.x86_64
>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
>> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
>> ovirt-release36-007-1.noarch
>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>>
>> [root@engine ~]# rpm -qa | grep ovirt
>> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
>> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
>> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> ovirt-release36-007-1.noarch
>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
>> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>> ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
>> ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>> ovirt-guest-agent-common-1.0.11-1.el7.noarch
>> ovirt-engine-wildfly-8.2.1-1.el7.x86_64
>> ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
>> ovirt-engine-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-restapi-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-userportal-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-3.6.4.1-1.el7.centos.noarch
>> ovirt-image-uploader-3.6.0-1.el7.centos.noarch
>> ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch
>> ovirt-engine-lib-3.6.4.1-1.el7.centos.noarch
>>
>>
>> Here are the log files:
>> https://gist.github.com/weeix/1743f88d3afe1f405889a67ed4011141
>>
>> --
>> Wee
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Didi
> ___

[ovirt-users] Fw: job queue

2016-04-20 Thread Dominique Taffin
Hello!

I was hoping someone could give some insight...

I have a question regarding the internal operations of ovirt-engine.
It seems the internal queue only processes up to 10 requests/jobs
simultaneously, e.g. 7 Power_On and 3 LiveMigrations. All
remaining jobs (e.g. 50 more Power_On requests) are delayed.

Is this queue size configurable somehow?

We sometimes have a huge number of power_on requests (last time
up to 700 in 30 minutes) that get extremely delayed by this.

Any hints are welcome.

thank you and best,
 Dominique
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-04-20 Thread Paul Groeneweg | Pazion
I have added a ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1328718

Looking forward to solving this! (Trying to provide as much info as required.)

For the short term, what do I need to restore/roll back to get the
OVF_STORE back in the Web GUI? Is this all in the db?



Op wo 20 apr. 2016 om 09:04 schreef Paul Groeneweg | Pazion :

> Yes, I removed them also from the web interface.
> Can I recreate these, or how can I restore them?
>
> Op wo 20 apr. 2016 om 09:01 schreef Roy Golan :
>
>> On Wed, Apr 20, 2016 at 9:05 AM, Paul Groeneweg | Pazion 
>> wrote:
>>
>>> Hi Roy,
>>>
>>> What do you mean with a RFE , submit a bug ticket?
>>>
>>> Yes please: https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
>>
>>
>>> Here is what I did:
>>>
>>> I removed the OVF disks as explained from the hosted engine/storage.
>>> I started another server, tried several things like putting to
>>> maintenance and reinstalling, but I keep getting:
>>>
>>> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
>>> WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to find
>>> OVF_STORE
>>> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
>>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable
>>> to get vm.conf from OVF_STORE, falling back to initial vm.conf
>>> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
>>> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Unable
>>> to get vm.conf from OVF_STORE, falling back to initial vm.conf
>>> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
>>> ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: ''Configuration value
>>> not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' -
>>> trying to restart agent
>>>
>>> The fact that it can't find the OVF store seems logical, but now
>>> /var/run/ovirt-hosted-engine-ha/vm.conf is replaced with a file containing
>>> only "None".
>>> I tried to set file readonly ( chown root ), but this only threw an
>>> error about file not writable, tried different path, but nothing helped.
>>> So I am afraid to touch the other running hosts, as same might happen
>>> there and I am unable to start hosted engine again.
>>>
>>> I thought OVF would be created automatically again if it is missing, but
>>> it isn't...
>>> Can I trigger this OVF, or add it somehow manually? Would deleting the
>>> whole hosted_storage trigger an auto import again including OVF?
>>>
>>> If this provides no solution, I guess, I have to restore the removed OVF
>>> store. Would a complete database restore + restoring folder
>>> images/ be sufficient?
>>> Or where is the information about the OVF stores the Web GUI shows
>>> stored?
>>>
>>>
>> Did you remove it also from the engine via the webadmin or REST? storage
>> tab -> click the hosted_storage domain -> disks subtab -> right click
>> remove the failing ovf
>>
>>
>>> Looking forward to resolve this OVF store issue.
>>>
>>> Thanks in advance!!!
>>>
>>>
>>>
>>> Op di 19 apr. 2016 om 10:31 schreef Paul Groeneweg | Pazion <
>>> p...@pazion.nl>:
>>>
 Hi Roy,

 Thanks for this explanation. I will dive into this evening. ( and make
 a backup first :-) )

 Normally the hosted engine only creates 1 ovf disk for the hosted
 storage?

 Thanks for the help.

 Op di 19 apr. 2016 om 10:22 schreef Roy Golan :

> On Mon, Apr 18, 2016 at 10:05 PM, Paul Groeneweg | Pazion <
> p...@pazion.nl> wrote:
>
>> I am still wondering about the OVF disk ( and event error ) on my
>> hosted storage domain.
>>
>> My hostedstorage ovf disks ( http://screencast.com/t/AcdqmJWee )
>>  are not being updated ( what I understood is they should be regularly
>> updated ).
>>
>> So I wonder, maybe I can remove these OVF disks and they are
>> recreated automatically? ( Similar when removing the hosted storage 
>> domain
>> it was added automatically again )
>>
>> And for this NFS storage domain, is it normal to have 2 OVF disks?
>>
>> Really looking for a way get these OVF disks right.
>>
>>
>>
> Hi Paul,
>
> What you can do to remove them is to run this sql statement at your
> setup
>
> ```sql
> -- first make sure this is the disk, dates are taken from your
> screenshot
>
> SELECT ovf_disk_id, image_guid, imagestatus, _create_date FROM images,
> storage_domains_ovf_info where ovf_disk_id = images.image_group_id and
> _create_date > '2016-05-01 11:11:29' and _create_date < '2016-05-01
> 11:11:31';
>
> -- now delete this disk
>
> DELETE FROM  storage_domains_ovf_info where ovf_disk_id = %what was
> found in the last query%'
> ```
>
> Now you can right-click and remove this disk.
>
> Since the disk of the ovirt-engine resides on the hosted_storage
> domain we can't put this domain into maintenance and fix those kind of
> issues. There for I would like you to kindly open an RFE and mention 
> y

Re: [ovirt-users] several questions about serial console

2016-04-20 Thread Milan Zamazal
Nathanaël Blanchet  writes:

> * how to get out of a selected vm when we are at the login prompt
> (ideally backing up to the vm menu) rather than killing the ssh
> process or closing the terminal? The usual "^]" doesn't work there.

You must use ssh's escape sequences here; ~. should work.
See the ESCAPE CHARACTERS section in `man ssh' for more information.
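
For example (a sketch; port 2222 and the ovirt-vmconsole user are the serial
console proxy defaults, and engine.example.com is a placeholder, so adjust to
your setup):

```
# connect to the serial console proxy and pick a VM from the menu
ssh -t -p 2222 ovirt-vmconsole@engine.example.com

# at the guest login prompt, press Enter first, then type:
#   ~.    terminates the ssh session
#   ~~.   use this instead if you are connecting through another ssh hop
```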
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-04-20 Thread Paul Groeneweg | Pazion
Yes, I removed them also from the web interface.
Can I recreate these, or how can I restore them?

Op wo 20 apr. 2016 om 09:01 schreef Roy Golan :

> On Wed, Apr 20, 2016 at 9:05 AM, Paul Groeneweg | Pazion 
> wrote:
>
>> Hi Roy,
>>
>> What do you mean with a RFE , submit a bug ticket?
>>
>> Yes please: https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
>
>
>> Here is what I did:
>>
>> I removed the OVF disks as explained from the hosted engine/storage.
>> I started another server, tried several things like putting to
>> maintenance and reinstalling, but I keep getting:
>>
>> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
>> WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to find
>> OVF_STORE
>> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable
>> to get vm.conf from OVF_STORE, falling back to initial vm.conf
>> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
>> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Unable
>> to get vm.conf from OVF_STORE, falling back to initial vm.conf
>> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
>> ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: ''Configuration value
>> not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' -
>> trying to restart agent
>>
>> The fact that it can't find the OVF store seems logical, but now
>> /var/run/ovirt-hosted-engine-ha/vm.conf is replaced with a file containing
>> only "None".
>> I tried to set file readonly ( chown root ), but this only threw an error
>> about file not writable, tried different path, but nothing helped.
>> So I am afraid to touch the other running hosts, as same might happen
>> there and I am unable to start hosted engine again.
>>
>> I thought OVF would be created automatically again if it is missing, but
>> it isn't...
>> Can I trigger this OVF, or add it somehow manually? Would deleting the
>> whole hosted_storage trigger an auto import again including OVF?
>>
>> If this provides no solution, I guess, I have to restore the removed OVF
>> store. Would a complete database restore + restoring folder
>> images/ be sufficient?
>> Or where is the information about the OVF stores the Web GUI shows stored?
>>
>>
> Did you remove it also from the engine via the webadmin or REST? storage
> tab -> click the hosted_storage domain -> disks subtab -> right click
> remove the failing ovf
>
>
>> Looking forward to resolve this OVF store issue.
>>
>> Thanks in advance!!!
>>
>>
>>
>> Op di 19 apr. 2016 om 10:31 schreef Paul Groeneweg | Pazion <
>> p...@pazion.nl>:
>>
>>> Hi Roy,
>>>
>>> Thanks for this explanation. I will dive into this evening. ( and make
>>> a backup first :-) )
>>>
>>> Normally the hosted engine only creates 1 ovf disk for the hosted
>>> storage?
>>>
>>> Thanks for the help.
>>>
>>> Op di 19 apr. 2016 om 10:22 schreef Roy Golan :
>>>
 On Mon, Apr 18, 2016 at 10:05 PM, Paul Groeneweg | Pazion <
 p...@pazion.nl> wrote:

> I am still wondering about the OVF disk ( and event error ) on my
> hosted storage domain.
>
> My hostedstorage ovf disks ( http://screencast.com/t/AcdqmJWee )  are
> not being updated ( what I understood is they should be regularly updated
> ).
>
> So I wonder, maybe I can remove these OVF disks and they are recreated
> automatically? ( Similar when removing the hosted storage domain it was
> added automatically again )
>
> And for this NFS storage domain, is it normal to have 2 OVF disks?
>
> Really looking for a way get these OVF disks right.
>
>
>
 Hi Paul,

 What you can do to remove them is to run this sql statement at your
 setup

 ```sql
 -- first make sure this is the disk, dates are taken from your
 screenshot

 SELECT ovf_disk_id, image_guid, imagestatus, _create_date FROM images,
 storage_domains_ovf_info where ovf_disk_id = images.image_group_id and
 _create_date > '2016-05-01 11:11:29' and _create_date < '2016-05-01
 11:11:31';

 -- now delete this disk

 DELETE FROM  storage_domains_ovf_info where ovf_disk_id = %what was
 found in the last query%'
 ```

 Now you can right-click and remove this disk.

 Since the disk of the ovirt-engine resides on the hosted_storage domain
 we can't put this domain into maintenance and fix this kind of issue.
 Therefore I would like to kindly ask you to open an RFE and mention your
 scenario, so we can provide a way to do this kind of operation in a safe
 way.

 Maor thanks for the help and reference.


> Op ma 4 apr. 2016 om 09:54 schreef Paul Groeneweg | Pazion <
> p...@pazion.nl>:
>
>> I'd like to add:
>>
>> - There are 2 OVF stores in my hosted_storage ( hostedengine_nfs ).
>> - I checked creation time, they are both created around the same time
>> http://screencast.com/t/hbXQFlou
>>
>>

Re: [ovirt-users] Delete Failed to update OVF disks, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain hostedengine_nfs).

2016-04-20 Thread Roy Golan
On Wed, Apr 20, 2016 at 9:05 AM, Paul Groeneweg | Pazion 
wrote:

> Hi Roy,
>
> What do you mean with a RFE , submit a bug ticket?
>
> Yes please: https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt


> Here is what I did:
>
> I removed the OVF disks as explained from the hosted engine/storage.
> I started another server, tried several things like putting to maintenance
> and reinstalling, but I keep getting:
>
> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
> WARNING:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Unable to find
> OVF_STORE
> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable
> to get vm.conf from OVF_STORE, falling back to initial vm.conf
> Apr 20 00:18:00 geisha-3 ovirt-ha-agent:
> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Unable
> to get vm.conf from OVF_STORE, falling back to initial vm.conf
> Apr 20 00:18:00 geisha-3 journal: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: ''Configuration value
> not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' -
> trying to restart agent
>
> The fact that it can't find the OVF store seems logical, but now
> /var/run/ovirt-hosted-engine-ha/vm.conf is replaced with a file containing
> only "None".
> I tried to set file readonly ( chown root ), but this only threw an error
> about file not writable, tried different path, but nothing helped.
> So I am afraid to touch the other running hosts, as same might happen
> there and I am unable to start hosted engine again.
>
> I thought OVF would be created automatically again if it is missing, but
> it isn't...
> Can I trigger this OVF, or add it somehow manually? Would deleting the
> whole hosted_storage trigger an auto import again including OVF?
>
> If this provides no solution, I guess, I have to restore the removed OVF
> store. Would a complete database restore + restoring folder
> images/ be sufficient?
> Or where is the information about the OVF stores the Web GUI shows stored?
>
>
Did you remove it also from the engine via the webadmin or REST? storage
tab -> click the hosted_storage domain -> disks subtab -> right click
remove the failing ovf
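
If you do need to re-run the SQL from earlier in this thread, a minimal sketch
(assuming the default database name created by engine-setup; always run the
SELECT and check the ovf_disk_id before running the DELETE):

```
# on the engine machine, open a psql shell on the engine database
su - postgres -c "psql engine"
# inside psql: paste the SELECT from the earlier mail, note the ovf_disk_id it
# returns, and only then run the DELETE for that single ovf_disk_id
```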


> Looking forward to resolve this OVF store issue.
>
> Thanks in advance!!!
>
>
>
> Op di 19 apr. 2016 om 10:31 schreef Paul Groeneweg | Pazion <
> p...@pazion.nl>:
>
>> Hi Roy,
>>
>> Thanks for this explanation. I will dive into this evening. ( and make a
>> backup first :-) )
>>
>> Normally the hosted engine only creates 1 ovf disk for the hosted storage?
>>
>> Thanks for the help.
>>
>> Op di 19 apr. 2016 om 10:22 schreef Roy Golan :
>>
>>> On Mon, Apr 18, 2016 at 10:05 PM, Paul Groeneweg | Pazion <
>>> p...@pazion.nl> wrote:
>>>
 I am still wondering about the OVF disk ( and event error ) on my
 hosted storage domain.

 My hostedstorage ovf disks ( http://screencast.com/t/AcdqmJWee )  are
 not being updated ( what I understood is they should be regularly updated
 ).

 So I wonder, maybe I can remove these OVF disks and they are recreated
 automatically? ( Similar when removing the hosted storage domain it was
 added automatically again )

 And for this NFS storage domain, is it normal to have 2 OVF disks?

 Really looking for a way get these OVF disks right.



>>> Hi Paul,
>>>
>>> What you can do to remove them is to run this sql statement at your setup
>>>
>>> ```sql
>>> -- first make sure this is the disk, dates are taken from your screenshot
>>>
>>> SELECT ovf_disk_id, image_guid, imagestatus, _create_date FROM images,
>>> storage_domains_ovf_info where ovf_disk_id = images.image_group_id and
>>> _create_date > '2016-05-01 11:11:29' and _create_date < '2016-05-01
>>> 11:11:31';
>>>
>>> -- now delete this disk
>>>
>>> DELETE FROM  storage_domains_ovf_info where ovf_disk_id = %what was
>>> found in the last query%'
>>> ```
>>>
>>> Now you can right-click and remove this disk.
>>>
>>> Since the disk of the ovirt-engine resides on the hosted_storage domain
>>> we can't put this domain into maintenance and fix this kind of issue.
>>> Therefore I would like to kindly ask you to open an RFE and mention your
>>> scenario, so we can provide a way to do this kind of operation in a safe
>>> way.
>>>
>>> Maor thanks for the help and reference.
>>>
>>>
 Op ma 4 apr. 2016 om 09:54 schreef Paul Groeneweg | Pazion <
 p...@pazion.nl>:

> I'd like to add:
>
> - There are 2 OVF stores in my hosted_storage ( hostedengine_nfs ).
> - I checked creation time, they are both created around the same time
> http://screencast.com/t/hbXQFlou
>
> So hopefully there is some way to update hosted storage sp it can be
> updated.
>
> Op do 31 mrt. 2016 om 15:41 schreef Maor Lipchuk  >:
>
>> [Adding Roy to the thread]
>> Roy,
>>
>> Can you please share your insight regarding the hosted engine
>> beh