[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-31 Thread Neil
Hi Sharon,

This issue still persists. When I saw that 4.3.5 was released I tried to
upgrade, but yum says there are no packages available, while also showing
that 11 updates are excluded by versionlock. Could these version locks be
the reason why updating to 4.3.5, back when it was still in "pre", didn't
resolve the dashboard problem?

[root@ovirt]# yum update "ovirt-*-setup*"
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the
configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the
configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the
configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the
configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the
configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the
configuration
Repository centos-sclo-rh-release is listed more than once in the
configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
ovirt-4.3-epel/x86_64/metalink
 |  46 kB  00:00:00
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
ovirt-4.3-centos-gluster6
| 2.9 kB  00:00:00
ovirt-4.3-centos-opstools
| 2.9 kB  00:00:00
ovirt-4.3-centos-ovirt43
 | 2.9 kB  00:00:00
ovirt-4.3-centos-qemu-ev
 | 2.9 kB  00:00:00
ovirt-4.3-virtio-win-latest
| 3.0 kB  00:00:00
sac-gluster-ansible
| 3.3 kB  00:00:00
Excluding 11 updates due to versionlock (use "yum versionlock status" to
show them)
No packages marked for update

[root@ovirt yum.repos.d]# yum versionlock status
Loaded plugins: fastestmirror, versionlock
Repository centos-sclo-rh-release is listed more than once in the
configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster6 is listed more than once in the
configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the
configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the
configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the
configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the
configuration
Repository centos-sclo-rh-release is listed more than once in the
configuration
Repository sac-gluster-ansible is listed more than once in the configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirror.pcsp.co.za
 * extras: mirror.pcsp.co.za
 * ovirt-4.1: mirror.slu.cz
 * ovirt-4.1-epel: ftp.uni-bayreuth.de
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: ftp.uni-bayreuth.de
 * ovirt-4.3-epel: ftp.uni-bayreuth.de
 * updates: mirror.bitco.co.za
0:ovirt-engine-webadmin-portal-4.2.8.2-1.el7.*
0:ovirt-engine-dwh-4.2.4.3-1.el7.*
0:ovirt-engine-tools-backup-4.2.8.2-1.el7.*
0:ovirt-engine-restapi-4.2.8.2-1.el7.*
0:ovirt-engine-dbscripts-4.2.8.2-1.el7.*
0:ovirt-engine-4.2.8.2-1.el7.*
0:ovirt-engine-backend-4.2.8.2-1.el7.*
0:ovirt-engine-wildfly-14.0.1-3.el7.*
0:ovirt-engine-wildfly-overlay-14.0.1-3.el7.*
0:ovirt-engine-tools-4.2.8.2-1.el7.*
0:ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.*
versionlock status done
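Those locked entries are mostly 4.2-era engine packages, which would explain
why yum offers nothing for "ovirt-*-setup*": the 4.3.5 builds are being
excluded by versionlock. A rough sketch of how the locks could be checked
and, if they really are stale, cleared before retrying (these are standard
yum-plugin-versionlock subcommands; normally engine-setup maintains this
lock file itself, so manual clearing is a last resort and engine-setup
should be run afterwards to upgrade and re-create the locks):

yum versionlock list          # show exactly which entries are locked
yum versionlock clear         # drop all locks (engine-setup re-creates them)
yum update "ovirt-*-setup*"   # the 4.3.5 setup packages should now be offered
engine-setup                  # performs the engine upgrade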

Any ideas?

Thank you.
Regards.
Neil Wilson.



On Wed, Jul 24, 2019 at 3:46 PM Neil  wrote:

> Hi Sharon,
>
> Thank you for the info and apologies for the very late reply.
>
> I've done the service ovirt-engine-dwhd restart, and unfortunately
> there's no difference, below is the log
>
> 2019-07-24
> 03:00:00|3lI186|A138nf|XhBMpJ|OVIRT_ENGINE_DWH|DeleteTimeKeepingJob|Default|6|Java
> Exception|tJDBCInput_10|org.postgresql.util.PSQLException:This connection
> has been closed.|1
> Exception in component tJDBCInput_10
> org.postgresql.util.PSQLException: This connection has been closed.
> at
> org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:822)
> at
> org.postgresql.jdbc3.AbstractJdbc3Connection.createStatement(AbstractJdbc3Connection.java:229)
> at
> org.postgresql.jdbc2.AbstractJdbc2Connection.createStatement(AbstractJdbc2Connection.java:294)
> at
> ovirt_e

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-16 Thread Neil
Hi Sharon,

Thank you for coming back to me.

I upgraded to 4.3.5 today and unfortunately both issues still persist. I
have also tried clearing all data out of my browser and logging back in.

I do see a new error in my engine.log, shown below; however, there is still
nothing logged when I click the Migrate button...

2019-07-16 15:01:19,600+02 WARN
 [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15)
[685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'balloonEnabled' can not be
updated when status is 'Up'
2019-07-16 15:01:19,601+02 WARN
 [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15)
[685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'watchdog' can not be updated
when status is 'Up'
2019-07-16 15:01:19,602+02 WARN
 [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15)
[685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'rngDevice' can not be updated
when status is 'Up'
2019-07-16 15:01:19,602+02 WARN
 [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15)
[685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'soundDeviceEnabled' can not
be updated when status is 'Up'
2019-07-16 15:01:19,603+02 WARN
 [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-15)
[685e07c0-b76f-4093-afc9-7c3999ee4ae2] Field 'consoleEnabled' can not be
updated when status is 'Up'

Then in my vdsm.log I'm seeing the following error

2019-07-16 15:05:59,038+0200 WARN  (qgapoller/3)
[virt.periodic.VmDispatcher] could not run  at
0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde',
'8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb',
'8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b',
'2489c75f-2758-4d82-8338-12f02ff78afa',
'9a6561b8-5702-43dc-9e92-1dc5dfed4eef',
'523ad9ee-5738-42f2-9ee1-50727207e93b',
'84f4685b-39e1-4bc8-b8ab-755a2c325cb0',
'43c06f86-2e37-410b-84be-47e83052344a',
'6f44a02c-5de6-4002-992f-2c2c5feb2ee5',
'19844323-b3cc-441a-8d70-e45326848b10',
'77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)

2019-07-16 15:06:09,036+0200 WARN  (qgapoller/2)
[virt.periodic.VmDispatcher] could not run  at
0x7f00a00476e0> on ['ded20d05-f558-4e17-bf2d-e4907e1bbcde',
'8c93b301-b50d-4d3d-b6cb-54abb3d7f0bb',
'8d8571bf-a7ce-4e73-8d3e-fe1a2aab9b4b',
'2489c75f-2758-4d82-8338-12f02ff78afa',
'9a6561b8-5702-43dc-9e92-1dc5dfed4eef',
'523ad9ee-5738-42f2-9ee1-50727207e93b',
'84f4685b-39e1-4bc8-b8ab-755a2c325cb0',
'43c06f86-2e37-410b-84be-47e83052344a',
'6f44a02c-5de6-4002-992f-2c2c5feb2ee5',
'19844323-b3cc-441a-8d70-e45326848b10',
'77872f3d-c69f-48ab-992b-1d2765a38481'] (periodic:289)

I'm not sure whether this is related to either of the above issues, but I
can attach the full log if needed.

Please shout if there is anything else you think I can try doing.

Thank you.

Regards.

Neil Wilson




On Mon, Jul 15, 2019 at 11:29 AM Sharon Gratch  wrote:

> Hi Neil,
>
> Regarding issue 1 (Dashboard):
> I recommend upgrading to the latest oVirt version, 4.3.5, for this fix as
> well as other enhancements and bug fixes.
> For oVirt 4.3.5 installation / upgrade instructions:
> http://www.ovirt.org/release/4.3.5/
>
> Regarding issue 2 (Manual Migrate dialog):
> If it still reproduces after upgrading, please try clearing your browser
> cache before opening the admin portal. It might help.
>
> Regards,
> Sharon
>
> On Thu, Jul 11, 2019 at 1:24 PM Neil  wrote:
>
>>
>> Hi Sharon,
>>
>> Thanks for the assistance.
>> On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch 
>> wrote:
>>
>>> Hi,
>>>
>>> Regarding issue 1 (Dashboard):
>>> Did you upgrade the engine to 4.3.5? There was a bug fixed in version
>>> 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may
>>> be the same issue.
>>>
>>
>>
>> No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is
>> there another repo available?
>>
>> Regarding issue 2 (Manual Migrate dialog):
>>> Can you please attach your browser console log and engine.log snippet
>>> when you have the problem?
>>> If you could take from the console log the actual REST API response,
>>> that would be great.
>>> The request will be something like
>>> /api/hosts?migration_target_of=...
>>>
>>
>> Please see the attached text log for the browser console; I don't see any
>> REST API call being logged, just a stack trace error.
>> The engine.log literally doesn't get updated when I click the Migrate
>> button so there isn't anyth

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Neil
Hi Sharon,

Thanks for the assistance.
On Thu, Jul 11, 2019 at 11:58 AM Sharon Gratch  wrote:

> Hi,
>
> Regarding issue 1 (Dashboard):
> Did you upgrade the engine to 4.3.5? There was a bug fixed in version
> 4.3.4-5 https://bugzilla.redhat.com/show_bug.cgi?id=1713967 and it may be
> the same issue.
>


No, I wasn't aware that there were updates. How do I obtain 4.3.4-5? Is
there another repo available?

Regarding issue 2 (Manual Migrate dialog):
> Can you please attach your browser console log and engine.log snippet when
> you have the problem?
> If you could take from the console log the actual REST API response, that
> would be great.
> The request will be something like
> /api/hosts?migration_target_of=...
>

Please see the attached text log for the browser console; I don't see any
REST API call being logged, just a stack trace error.
The engine.log literally doesn't get updated when I click the Migrate
button, so unfortunately there isn't anything to share.
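Since the browser console isn't showing it, one way to capture the REST
response Sharon asked about is to call the same endpoint directly and to
watch the engine-side UI log while clicking Migrate. A rough sketch (the
VM id, credentials and engine FQDN are placeholders; the log paths are the
oVirt defaults):

curl -k -u 'admin@internal:password' \
  'https://engine.example.com/ovirt-engine/api/hosts?migration_target_of=<vm-id>'

tail -f /var/log/ovirt-engine/ui.log /var/log/ovirt-engine/engine.log
# click Migrate in the admin portal and watch for a stack trace in ui.log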

Please shout if you need further info.

Thank you!




>
>
> On Thu, Jul 11, 2019 at 10:04 AM Neil  wrote:
>
>> Hi everyone,
>> Just an update.
>>
>> I have both hosts upgraded to 4.3, I have upgraded my DC and cluster to
>> 4.3 and I'm still faced with the same problems.
>>
>> 1.) My Dashboard says the following "Error! Could not fetch dashboard
>> data. Please ensure that data warehouse is properly installed and
>> configured."
>>
>> 2.) When I click the Migrate button I get the error "Could not fetch
>> data needed for VM migrate operation"
>>
>> Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
>> it's one issue down.
>>
>> I've done an engine-upgrade-check and a yum update on all my hosts and
>> engine and there are no further updates or patches waiting.
>> Nothing is logged in my engine.log when I click the Migrate button either.
>>
>> Any ideas what to do or try for  1 and 2 above?
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>>
>>
>> On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:
>>
>>>
>>>
>>> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
>>> michal.skriva...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 11 Jul 2019, at 06:34, Alex K  wrote:
>>>>
>>>>
>>>>
>>>> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek <
>>>> michal.skriva...@redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 9 Jul 2019, at 17:16, Strahil  wrote:
>>>>>
>>>>> I'm not sure, but I always thought that you need  an agent for live
>>>>> migrations.
>>>>>
>>>>>
>>>>> You don’t. For snapshots, and other less important stuff like
>>>>> reporting IPs you do. In 4.3 you should be fine with qemu-ga only
>>>>>
>>>> I've seen live migration issues resolved by installing newer versions
>>>> of the oVirt GA.
>>>>
>>>>
>>>> Hm, it shouldn’t make any difference whatsoever. Do you have any
>>>> concrete data? that would help.
>>>>
>>> That was some time ago, when running 4.1. No data, unfortunately. I also
>>> did not expect the oVirt GA to affect migration, but experience showed me
>>> that it did. The only observation is that it affected only Windows VMs;
>>> Linux VMs never had an issue, regardless of the oVirt GA.
>>>
>>>> You can always try installing either qemu-guest-agent  or
>>>>> ovirt-guest-agent and check if live  migration between hosts is possible.
>>>>>
>>>>> Have you set the new cluster/dc version ?
>>>>>
>>>>> Best Regards
>>>>> Strahil Nikolov
>>>>> On Jul 9, 2019 17:42, Neil  wrote:
>>>>>
>>>>> I remember seeing the bug earlier but because it was closed thought it
>>>>> was unrelated, this appears to be it
>>>>>
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>>>>>
>>>>> Perhaps I'm not understanding your question about the VM guest agent,
>>>>> but I don't have any guest agent currently installed on the VM, not sure 
>>>>> if
>>>>> the output of my qemu-kvm process maybe answers this question?
>>>>>
>>>>> /usr/libexec/qemu-kvm -name
>>>>> guest=H

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-11 Thread Neil
Hi everyone,
Just an update.

I have upgraded both hosts to 4.3, and I have upgraded my DC and cluster to
4.3, but I'm still faced with the same problems.

1.) My Dashboard says the following "Error! Could not fetch dashboard data.
Please ensure that data warehouse is properly installed and configured."

2.) When I click the Migrate button I get the error "Could not fetch data
needed for VM migrate operation"

Upgrading my hosts resolved the "node status: DEGRADED" issue so at least
it's one issue down.

I've done an engine-upgrade-check and a yum update on all my hosts and
engine and there are no further updates or patches waiting.
Nothing is logged in my engine.log when I click the Migrate button either.

Any ideas what to do or try for  1 and 2 above?

Thank you.

Regards.

Neil Wilson.





On Thu, Jul 11, 2019 at 8:27 AM Alex K  wrote:

>
>
> On Thu, Jul 11, 2019 at 7:57 AM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> On 11 Jul 2019, at 06:34, Alex K  wrote:
>>
>>
>>
>> On Tue, Jul 9, 2019, 19:10 Michal Skrivanek 
>> wrote:
>>
>>>
>>>
>>> On 9 Jul 2019, at 17:16, Strahil  wrote:
>>>
>>> I'm not sure, but I always thought that you need  an agent for live
>>> migrations.
>>>
>>>
>>> You don’t. For snapshots, and other less important stuff like reporting
>>> IPs you do. In 4.3 you should be fine with qemu-ga only
>>>
>> I've seen live migration issues resolved by installing newer versions of
>> the oVirt GA.
>>
>>
>> Hm, it shouldn’t make any difference whatsoever. Do you have any concrete
>> data? that would help.
>>
> That was some time ago, when running 4.1. No data, unfortunately. I also
> did not expect the oVirt GA to affect migration, but experience showed me
> that it did. The only observation is that it affected only Windows VMs;
> Linux VMs never had an issue, regardless of the oVirt GA.
>
>> You can always try installing either qemu-guest-agent  or
>>> ovirt-guest-agent and check if live  migration between hosts is possible.
>>>
>>> Have you set the new cluster/dc version ?
>>>
>>> Best Regards
>>> Strahil Nikolov
>>> On Jul 9, 2019 17:42, Neil  wrote:
>>>
>>> I remember seeing the bug earlier but because it was closed thought it
>>> was unrelated, this appears to be it
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>>>
>>> Perhaps I'm not understanding your question about the VM guest agent,
>>> but I don't have any guest agent currently installed on the VM, not sure if
>>> the output of my qemu-kvm process maybe answers this question?
>>>
>>> /usr/libexec/qemu-kvm -name
>>> guest=Headoffice.cbl-ho.local,debug-threads=on -S -object
>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>>> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>> Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>>> -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
>>> -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
>>> 9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
>>> type=1,manufacturer=oVirt,product=oVirt
>>> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-
>>>
>>>
>> It’s 7.3, likely oVirt 4.1. Please upgrade...
>>
>> C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef -no-user-config
>>> -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon
>>> chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2019-07-09T10:26:53,driftfix=slew -global
>>> kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
>>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
>>> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
>>> if=none,id=drive-ide0-1-0,readonly=on -device
>>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
>>> file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootinde

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-10 Thread Neil
To provide a slight update on this.

I put one of my hosts into maintenance and it then migrated the two VMs
off of it; I then upgraded that host to 4.3.

I have 12 VMs running on the remaining host. If I put it into maintenance,
will it try to migrate all 12 VMs at once, or will it stagger them until
they are all migrated?

Thank you.

Regards.

Neil Wilson.






On Wed, Jul 10, 2019 at 9:44 AM Neil  wrote:

> Hi Michal,
>
> Thanks for assisting.
>
> I've just done as requested; however, nothing is logged in the engine.log
> at the time I click Migrate. Below is the log; I hit the Migrate button
> about 4 times between 09:35 and 09:36 and nothing was logged about it...
>
> 2019-07-10 09:35:57,967+02 INFO
>  [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-14) []
> User trouble@internal successfully logged in with scopes: ovirt-app-admin
> ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~
> ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search
> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
> ovirt-ext=token:password-access
> 2019-07-10 09:35:58,012+02 INFO
>  [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-14)
> [2997034] Running command: CreateUserSessionCommand internal: false.
> 2019-07-10 09:35:58,021+02 INFO
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-14) [2997034] EVENT_ID: USER_VDC_LOGIN(30), User
> trouble@internal-authz connecting from '160.128.20.85' using session
> 'bv55G0wZznETUiQwjgjfUNje7wOsG4UDCuFunSslVeAFQkhdY2zzTY7du36ynTF5nW5U7JiPyr7gl9QDHfWuig=='
> logged in.
> 2019-07-10 09:36:58,304+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0
> tasks in queue.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for
> tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for
> tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads waiting for
> tasks.
>
> The same is observed in the vdsm.log too, below is the log during the
> attempted migration
>
> 2019-07-10 09:39:57,034+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call Host.getStats succeeded in 0.01 seconds (__init__:573)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [api.host] START getStats()
> from=:::10.0.1.1,57934 (api:46)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] START
> repoStats(domains=()) from=:::10.0.1.1,57934,
> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:46)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats
> return={u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual':
> True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck':
> '2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0,
> 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154',
> 'lastCheck': '6.0', 'valid': True},
> u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True,
> 'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4',
> 'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443',
> 'lastCheck': '2.4', 'valid': True}} from=:::10.0.1.1,57934,
> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-10 Thread Neil
': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser':
'0.07', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '15':
{'cpuUser': '0.27', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.60'},
'14': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle':
'99.66'}, '17': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27',
'cpuIdle': '99.66'}, '16': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys':
'0.07', 'cpuIdle': '99.40'}, '19': {'cpuUser': '0.00', 'nodeIndex': 1,
'cpuSys': '0.00', 'cpuIdle': '100.00'}, '18': {'cpuUser': '1.00',
'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '98.73'}, '31': {'cpuUser':
'0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '30':
{'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
'37': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle':
'99.86'}, '36': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '35': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys':
'0.33', 'cpuIdle': '99.47'}, '34': {'cpuUser': '0.00', 'nodeIndex': 0,
'cpuSys': '0.00', 'cpuIdle': '100.00'}, '33': {'cpuUser': '0.07',
'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '32': {'cpuUser':
'0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}},
'numaNodeMemFree': {'1': {'memPercent': 5, 'memFree': '94165'}, '0':
{'memPercent': 22, 'memFree': '77122'}}, 'memShared': 0, 'haScore': 3400,
'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 2, 'memUsed':
'11', 'storageDomains': {u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code':
0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000194846',
'lastCheck': '2.4', 'valid': True},
u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000277154', 'lastCheck': '6.0',
'valid': True}, u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000421988',
'lastCheck': '2.4', 'valid': True},
u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0, 'actual': True,
'version': 4, 'acquired': True, 'delay': '0.000228443', 'lastCheck': '2.4',
'valid': True}}, 'incomingVmMigrations': 0, 'network': {'em4': {'txErrors':
'0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'em4', 'tx':
'2160', 'txDropped': '0', 'rx': '261751836', 'rxErrors': '0', 'speed':
'1000', 'rxDropped': '1'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up',
'sampleTime': 1562744396.40508, 'name': 'ovirtmgmt', 'tx': '193005142',
'txDropped': '0', 'rx': '4300879104', 'rxErrors': '0', 'speed': '1000',
'rxDropped': '478'}, 'restores': {'txErrors': '0', 'state': 'up',
'sampleTime': 1562744396.40508, 'name': &

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Neil
I remember seeing the bug earlier, but because it was closed I thought it
was unrelated; this appears to be it:

https://bugzilla.redhat.com/show_bug.cgi?id=1670701

Perhaps I'm not understanding your question about the VM guest agent, but I
don't currently have any guest agent installed on the VM. Not sure if the
output of my qemu-kvm process answers this question?

/usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
-S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
-m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
-numa node,nodeid=0,cpus=0-7,mem=8192 -uuid
9a6561b8-5702-43dc-9e92-1dc5dfed4eef -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
-no-user-config -nodefaults -chardev
socket,id=charmonitor,fd=31,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc
base=2019-07-09T10:26:53,driftfix=slew -global
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/59831b91-00a5-01e4-0294-0018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,fd=35,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,fd=36,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
-incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
-object rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on
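For what it's worth, the command line above already exposes an
org.qemu.guest_agent.0 virtserialport, so the host side of the agent channel
is in place; what's missing is the agent inside the guest. A minimal sketch
for a RHEL/CentOS guest (package names differ on other distributions):

# run inside the guest, not on the hypervisor
yum install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent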

Please shout if you need further info.

Thanks.






On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov 
wrote:

> Shouldn't cause that problem.
>
> You have to find the bug in bugzilla and report a regression (if it's not
> closed) , or open a new one and report the regression.
> As far as I remember , only the dashboard was affected due to new features
> about vdo disk savings.
>
> About the VM - this should be another issue. What agent are you using in
> the VMs (ovirt or qemu) ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 9 July 2019, 10:09:05 GMT-4, Neil <
> nwilson...@gmail.com> wrote:
>
>
> Hi Strahil,
>
> Thanks for the quick reply.
> I put the cluster into global maintenance, then installed the 4.3 repo,
> then ran "yum update ovirt\*setup\*", then "engine-upgrade-check", then
> "engine-setup", and then "yum update". Once that completed, I rebooted the
> hosted-engine VM and took the cluster out of global maintenance.
>
> Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum
> update" after running engine-setup; not sure whether that could be the
> cause?
>
> Thank you.
> Regards.
> Neil Wilson.
>
> On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov 
> wrote:
>
> Hi Neil,
>
> for "Could not fetch data needed for VM migrate operation" - there was a
> bug and it was fixed.
> Are you sure you have fully updated ?
> What procedure did you use ?
>
> Best Regards,
> Strahil Nikolov
>
> В вторни

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Neil
Apologies, this should read...
"I put the cluster into global maintenance, then installed the 4.3 repo,
then "engine-upgrade-check" then "yum update ovirt\*setup\*" and then
"engine-setup"..."

On Tue, Jul 9, 2019 at 4:08 PM Neil  wrote:

> Hi Strahil,
>
> Thanks for the quick reply.
> I put the cluster into global maintenance, then installed the 4.3 repo,
> then ran "yum update ovirt\*setup\*", then "engine-upgrade-check", then
> "engine-setup", and then "yum update". Once that completed, I rebooted the
> hosted-engine VM and took the cluster out of global maintenance.
>
> Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum
> update" after running engine-setup; not sure whether that could be the
> cause?
>
> Thank you.
> Regards.
> Neil Wilson.
>
> On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov 
> wrote:
>
>> Hi Neil,
>>
>> for "Could not fetch data needed for VM migrate operation" - there was a
>> bug and it was fixed.
>> Are you sure you have fully updated ?
>> What procedure did you use ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tuesday, 9 July 2019, 7:26:21 GMT-4, Neil <
>> nwilson...@gmail.com> wrote:
>>
>>
>> Hi guys.
>>
>> I have two problems since upgrading from 4.2.x to 4.3.4
>>
>> The first issue is that I can no longer manually migrate VMs between
>> hosts; I get an error in the oVirt GUI that says "Could not fetch data
>> needed for VM migrate operation" and nothing gets logged in either my
>> engine.log or my vdsm.log.
>>
>> Then the other issue is my Dashboard says the following "Error! Could not
>> fetch dashboard data. Please ensure that data warehouse is properly
>> installed and configured."
>>
>> If I look at my ovirt-engine-dwhd.log I see the following if I try
>> restart the dwh service...
>>
>> 2019-07-09 11:48:04|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> hoursToKeepDaily|0
>> hoursToKeepHourly|720
>> ovirtEngineDbPassword|**
>> runDeleteTime|3
>>
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> runInterleave|60
>> limitRows|limit 1000
>> ovirtEngineHistoryDbUser|ovirt_engine_history
>> ovirtEngineDbUser|engine
>> deleteIncrement|10
>> timeBetweenErrorEvents|30
>> hoursToKeepSamples|24
>> deleteMultiplier|1000
>> lastErrorSent|2011-07-03 12:46:47.00
>> etlVersion|4.3.0
>> dwhAggregationDebug|false
>> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> ovirtEngineHistoryDbPassword|**
>> 2019-07-09 11:48:10|ETL Service Stopped
>> 2019-07-09 11:49:59|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> hoursToKeepDaily|0
>> hoursToKeepHourly|720
>> ovirtEngineDbPassword|**
>> runDeleteTime|3
>>
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> runInterleave|60
>> limitRows|limit 1000
>> ovirtEngineHistoryDbUser|ovirt_engine_history
>> ovirtEngineDbUser|engine
>> deleteIncrement|10
>> timeBetweenErrorEvents|30
>> hoursToKeepSamples|24
>> deleteMultiplier|1000
>> lastErrorSent|2011-07-03 12:46:47.00
>> etlVersion|4.3.0
>> dwhAggregationDebug|false
>> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> ovirtEngineHistoryDbPassword|**
>> 2019-07-09 11:52:56|ETL Service Stopped
>> 2019-07-09 11:52:57|ETL Service Started
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> hoursToKeepDaily|0
>> hoursToKeepHourly|720
>> ovirtEngineDbPassword|**
>> runDeleteTime|3
>>
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> runInterleave|60
&

[ovirt-users] Re: Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Neil
Hi Strahil,

Thanks for the quick reply.
I put the cluster into global maintenance, then installed the 4.3 repo,
then ran "yum update ovirt\*setup\*", then "engine-upgrade-check", then
"engine-setup", and then "yum update". Once that completed, I rebooted the
hosted-engine VM and took the cluster out of global maintenance.
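For reference, roughly the sequence the 4.3 upgrade documentation describes,
pulled together in one place (a sketch only: it assumes a hosted-engine
deployment, the release rpm name follows the same ovirt-release4x pattern
mentioned elsewhere in this thread, and engine-upgrade-check is normally run
before updating the setup packages):

hosted-engine --set-maintenance --mode=global    # on a hosted-engine host
# on the engine VM:
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
engine-upgrade-check
yum update "ovirt-*-setup*"
engine-setup
yum update                                       # remaining OS/package updates
# reboot the engine VM if needed, then:
hosted-engine --set-maintenance --mode=none      # back on a hosted-engine host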

Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a "yum
update" after running engine-setup; not sure whether that could be the
cause?

Thank you.
Regards.
Neil Wilson.

On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov 
wrote:

> Hi Neil,
>
> for "Could not fetch data needed for VM migrate operation" - there was a
> bug and it was fixed.
> Are you sure you have fully updated ?
> What procedure did you use ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 9 July 2019, 7:26:21 GMT-4, Neil
> wrote:
>
>
> Hi guys.
>
> I have two problems since upgrading from 4.2.x to 4.3.4
>
> The first issue is that I can no longer manually migrate VMs between hosts;
> I get an error in the oVirt GUI that says "Could not fetch data needed for
> VM migrate operation" and nothing gets logged in either my engine.log or my
> vdsm.log.
>
> Then the other issue is my Dashboard says the following "Error! Could not
> fetch dashboard data. Please ensure that data warehouse is properly
> installed and configured."
>
> If I look at my ovirt-engine-dwhd.log I see the following when I try to
> restart the dwh service...
>
> 2019-07-09 11:48:04|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|30
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.00
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**
> 2019-07-09 11:48:10|ETL Service Stopped
> 2019-07-09 11:49:59|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|30
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.00
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**
> 2019-07-09 11:52:56|ETL Service Stopped
> 2019-07-09 11:52:57|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
> hoursToKeepDaily|0
> hoursToKeepHourly|720
> ovirtEngineDbPassword|**
> runDeleteTime|3
>
> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
> runInterleave|60
> limitRows|limit 1000
> ovirtEngineHistoryDbUser|ovirt_engine_history
> ovirtEngineDbUser|engine
> deleteIncrement|10
> timeBetweenErrorEvents|30
> hoursToKeepSamples|24
> deleteMultiplier|1000
> lastErrorSent|2011-07-03 12:46:47.00
> etlVersion|4.3.0
> dwhAggregationDebug|false
> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
> ovirtEngineHistoryDbPassword|**
> 2019-07-09 12:16:01|ETL Service Stopped
> 2019-07-09 12:16:45|ETL Service Started
> ovirtEngineDbDriverClass|org.postgresql.Driver
>
> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonVal

[ovirt-users] Manual Migration not working and Dashboard broken after 4.3.4 update

2019-07-09 Thread Neil
2.33-1.el7.noarch

[root@host-a ~]# rpm -qa | grep -i vdsm
vdsm-http-4.20.46-1.el7.noarch
vdsm-common-4.20.46-1.el7.noarch
vdsm-network-4.20.46-1.el7.x86_64
vdsm-jsonrpc-4.20.46-1.el7.noarch
vdsm-4.20.46-1.el7.x86_64
vdsm-hook-ethtool-options-4.20.46-1.el7.noarch
vdsm-hook-vhostmd-4.20.46-1.el7.noarch
vdsm-python-4.20.46-1.el7.noarch
vdsm-api-4.20.46-1.el7.noarch
vdsm-yajsonrpc-4.20.46-1.el7.noarch
vdsm-hook-fcoe-4.20.46-1.el7.noarch
vdsm-hook-openstacknet-4.20.46-1.el7.noarch
vdsm-client-4.20.46-1.el7.noarch
vdsm-gluster-4.20.46-1.el7.x86_64
vdsm-hook-vmfex-dev-4.20.46-1.el7.noarch

I am seeing the following warning every minute or so in my vdsm.log:

2019-07-09 12:50:31,543+0200 WARN  (qgapoller/2)
[virt.periodic.VmDispatcher] could not run  at
0x7f52b01b85f0> on ['9a6561b8-5702-43dc-9e92-1dc5dfed4eef'] (periodic:323)

Then also under /var/log/messages..

Jul  9 12:57:48 host-a ovs-vsctl:
ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
connection failed (No such file or directory)

I'm not using OVN, so I'm guessing this can be ignored.

If I search for ERROR or WARN in my logs, nothing relevant is logged.

Any suggestions on what to start looking for please?

Please let me know if you need further info.

Thank you.

Regards.

Neil Wilson


[ovirt-users] Re: Network/Storage design

2018-11-07 Thread Alastair Neil
FYI, my config:
For storage, each system has 2x10GbE LACP-bonded to two switches in an MLAG
group.
1x1GbE ovirtmgmt
1x1GbE public VM network
1x1GbE private VM network
1x1GbE trunked to provide tagged VLANs to VMs



On Wed, 7 Nov 2018 at 13:10, Josep Manel Andrés Moscardó <
josep.mosca...@embl.de> wrote:

> Hi,
> thanks for the info.
> So the management network is used to deploy new images, but if I deploy
> the VMs from a PXE server, I guess this is not needed, right?
>
> The storage network is clear.
>
> And then the networks for the VMs, I guess, are the VLAN networks that
> will be used by the VMs in my case. But which network is used for
> migrating VMs from one host to the other? Does it need to be unusable by
> the VMs?
>
> Thanks.
>
> On 7/11/18 16:53, Nir Soffer wrote:
> > On Wed, Nov 7, 2018 at 4:17 PM Josep Manel Andrés Moscardó
> > <josep.mosca...@embl.de> wrote:
> >
> > Hi,
> > I am new to oVirt and trying to deploy a cluster to see whether we
> can
> > move from VMWare to oVirt, but my first stopper is how to do a proper
> > design  for the infrastructure,
> >
> > how many networks do I need?, I have 2x10Gb SFP+ and 4x1Gb ethernet.
> >
> > For storage we have NFS coming from Netapp and ceph.
> >
> > Could someone point me in the right direction?
> >
> >
> > You can use a 1G NIC for the management network, but note that image
> > upload and download go over this network, so it is best to have a fast
> > enough network for management.
> >
> > You want a separate network for storage (NFS/Ceph) - this is your biggest
> > bottleneck.
> >
> > Then you want a fast network for VMs. Migrating VMs from host to host
> > also needs a fast network, but is not very common.
> >
> > It depends also what the VMs are used for - database? desktops?
> > web servers? rendering nodes?
> >
> > I guess you can learn from other users.
> >
> > Nir
>
> --
> Josep Manel Andrés Moscardó
> Systems Engineer, IT Operations
> EMBL Heidelberg
> T +49 6221 387-8394
>


[ovirt-users] Re: [ANN] oVirt 4.2.6 is now generally available

2018-09-06 Thread Alastair Neil
On Mon, 3 Sep 2018 at 07:59, Sandro Bonazzola  wrote:

> The oVirt Project is pleased to announce the general availability of oVirt
> 4.2.6, as of September 3rd, 2018.
>
> This update is the sixth in a series of stabilization updates to the 4.2
> series.
> This is pre-release software. This pre-release should not to be used in
> production.
>
>
I am curious about this statement that this is pre-release software. When
you announce General Availability it is usually considered "released." Is
this a simple error, or is there another implication here?




> This release is available now for:
> * Red Hat Enterprise Linux 7.5 or later
> * CentOS Linux (or similar) 7.5 or later
>  -- *snip*
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> 


[ovirt-users] Re: Engine Error

2018-07-10 Thread Alastair Neil
What did you select as the CPU architecture when you created the cluster?
It looks like the VM is trying to use a CPU type of "Custom". How many
nodes are in your cluster? I suggest you specify the lowest common
denominator of the nodes' CPU architectures (e.g. SandyBridge) as the CPU
architecture of the cluster.
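The "CPU mode 'custom' ... is not supported by hypervisor" message can also
appear when KVM itself isn't usable on the host (no VT-x/AMD-V exposed, or
/dev/kvm missing), so that may be worth ruling out as well. A quick
host-side sanity check, as a sketch:

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU advertises VT-x/AMD-V
lsmod | grep kvm                     # kvm plus kvm_intel or kvm_amd should be loaded
ls -l /dev/kvm                       # must exist for qemu-kvm to run accelerated guests
virt-host-validate                   # fuller report, if libvirt is installed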

On Tue, 10 Jul 2018 at 12:01, Sakhi Hadebe  wrote:

> Hi,
>
> I have just re-installed centOS 7 in 3 servers and have configured gluster
> volumes following this documentation:
> https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/, But I
> have installed
>
> http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
>
> ​package.
> Hosted-engine --deploy is failing with this error:
>
>  "rhel7", "--virt-type", "kvm", "--memory", "16384", "--vcpus", "4",
> "--network", "network=default,mac=00:16:3e:09:5e:5d,model=virtio",
> "--disk",
> "/var/tmp/localvm0nnJH9/images/eacac30d-0304-4c77-8753-6965e4b8c2e7/d494577e-027a-4209-895b-6132e6fc6b9a",
> "--import", "--disk", "path=/var/tmp/localvm0nnJH9/seed.iso,device=cdrom",
> "--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc", "--video",
> "vga", "--sound", "none", "--controller", "usb,model=none", "--memballoon",
> "none", "--boot", "hd,menu=off", "--clock", "kvmclock_present=yes"],
> "delta": "0:00:00.979003", "end": "2018-07-10 17:55:11.308555", "msg":
> "non-zero return code", "rc": 1, "start": "2018-07-10 17:55:10.329552",
> "stderr": "ERRORunsupported configuration: CPU mode 'custom' for x86_64
> kvm domain on x86_64 host is not supported by hypervisor\nDomain
> installation does not appear to have been successful.\nIf it was, you can
> restart your domain by running:\n  virsh --connect qemu:///system start
> HostedEngineLocal\notherwise, please restart your installation.",
> "stderr_lines": ["ERRORunsupported configuration: CPU mode 'custom' for
> x86_64 kvm domain on x86_64 host is not supported by hypervisor", "Domain
> installation does not appear to have been successful.", "If it was, you can
> restart your domain by running:", "  virsh --connect qemu:///system start
> HostedEngineLocal", "otherwise, please restart your installation."],
> "stdout": "\nStarting install...", "stdout_lines": ["", "Starting
> install..."]}
>
> I added the root user to the kvm group, but it didn't work.
>
> Can you please help me out? I have been struggling to deploy the hosted
> engine.
>
> --
> Regards,
> Sakhi


[ovirt-users] Re: Question re: MaxFreeMemoryforOverUtlized and MinFreeMemoryforUnderUtilized

2018-06-13 Thread Alastair Neil
"when the free memory is below defined maximum value"

This is the problem: the statement is veiled in double negatives.

When a quantity is below a "maximum" value, that should be considered
normal; when a quantity exceeds a maximum value, that should be considered
an error condition.

But that is not the case here: when free memory falls below the
MaxFreeMemoryforOverUtlized threshold, we are over-utilized.
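Put as a plain illustration (a sketch only, not oVirt code; the variable
names just mirror the two policy properties, and the comparison direction
follows the definitions quoted below):

# free_mem_mb is the host's current free memory
if [ "$free_mem_mb" -lt "$MaxFreeMemoryForOverUtilized" ]; then
    echo "over-utilized: too little free memory, VMs get migrated away"
elif [ "$free_mem_mb" -ge "$MinFreeMemoryForUnderUtilized" ]; then
    echo "under-utilized: plenty of free memory"
else
    echo "normal band"
fi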

On Wed, 13 Jun 2018 at 17:14, Martin Sivak  wrote:

> Hi, it is just a matter of perspective:
>
> MaxFreeMemoryforOverUtlized - the host is considered over-utilized
> when the free memory is below the defined maximum value
> MinFreeMemoryforUnderUtilized - the host is considered under-utilized
> when the free memory is at least the defined minimal value
>
> Best regards
>
> --
> Martin Sivak
> oVirt
>
>
> On Wed, Jun 13, 2018 at 7:14 PM, Alastair Neil 
> wrote:
> > Can someone clarify these settings for me? I am having difficulty parsing
> > what exactly they mean. They seem to me to be backwards.
> >
> > If I wish to set a threshold at which I want my host to be considered
> > over-utilized, to not schedule any new VMs, and to migrate VMs away, then
> > surely I should specify a minimum threshold of free memory, i.e. if free
> > memory drops below my threshold (or memory use exceeds a maximum
> > threshold), migrate VMs off of this system.
> >
> > Conversely, if a system is under-utilized I should set a maximum threshold
> > of free memory (or a minimum of used memory).
> >
> > Thanks,
> >
> > --Alastair
> >
> >
>


[ovirt-users] Question re: MaxFreeMemoryforOverUtlized and MinFreeMemoryforUnderUtilized

2018-06-13 Thread Alastair Neil
Can someone clarify these settings for me? I am having difficulty parsing
what exactly they mean. They seem to me to be backwards.

If I wish to set a threshold at which I want my host to be considered
over-utilized, to not schedule any new VMs, and to migrate VMs away, then
surely I should specify a minimum threshold of free memory, i.e. if free
memory drops below my threshold (or memory use exceeds a maximum threshold),
migrate VMs off of this system.

Conversely, if a system is under-utilized I should set a maximum threshold
of free memory (or a minimum of used memory).

Thanks,

--Alastair


[ovirt-users] Can't add disks at all

2017-11-09 Thread Neil
Hi guys,

I've got a strange one.

I'm running ovirt 3.5.x with NFS storage from two NAS's

All my VMs are running fine; however, if I try to add a new disk, I click
OK and nothing happens. Nothing is logged in the oVirt logs or the vdsm
logs either.

I've tried different browsers and tried changing the disk type etc., but no
matter what, I click the OK button to add the disk, the button shows it's
been clicked, and the page just sits there; eventually, after waiting 5-10
minutes, I have to click Cancel.

Nothing is logged under tasks, etc in the ovirt GUI too.

I have tried restarting my ovirt engine completely. My engine logging is
working as I do see logs when I click around in my ovirt GUI.

One of my storage domains is 100% full (400MB free), and I get a warning
about this in the event logs; however, the domain I'm trying to add the
disk to isn't the full one.

Any ideas where I can start looking?
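One place to start would be tailing the engine-side logs while reproducing
the click, since a silently failing dialog usually still leaves a frontend
stack trace somewhere. A sketch, assuming the default oVirt log locations on
the engine host:

tail -f /var/log/ovirt-engine/engine.log \
        /var/log/ovirt-engine/ui.log \
        /var/log/ovirt-engine/server.log
# then click OK in the add-disk dialog; the browser's developer console (F12)
# is worth watching at the same time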

Thank you.

Regards.

Neil Wilson.


Re: [ovirt-users] IOPS stats/reports from all hosts to storage

2017-10-05 Thread Neil
Haha it is rather crappy.

It was very cheap in comparison to Dell's and HP's etc at the time, and
it's over 6 years old now so it's done quite well considering the price.

The brand is Cipherwave, which I think is a rebrand of some other brand;
very basic GUI and features.

So is there no way to get data domain IOPS from the oVirt side?

Thanks!



On Thu, Oct 5, 2017 at 9:53 AM, Karli Sjöberg  wrote:

> On tor, 2017-10-05 at 09:45 +0200, Neil wrote:
> > Hi Karli,
> >
> > I was hoping that too, but it seems the SAN doesn't have these
> > features.
>
> Wow, that´s kind of a crappy storage, no offense. What´s the brand, so
> we can stay clear of it? :)
>
> /K
>
> >
> > There is only the 4 oVirt hosts connected to it via 8GB FC.
> >
> > I see oVirt has storage QoS, but how do we set storage QoS without
> > knowing the maximum storage limits? Perhaps I've misunderstood this...
> >
> > Thanks.
> >
> > Regards.
> >
> > Neil Wilson.
> >
> >
> >
> >
> >
> >
> > On Thu, Oct 5, 2017 at 9:32 AM, Karli Sjöberg 
> > wrote:
> > > On tor, 2017-10-05 at 08:27 +0200, Neil wrote:
> > > > Hi guys,
> > > >
> > > > I'm running FC storage with 4 hosts on oVirt 3.6 and we've been
> > > > having some IOPS issues recently and the SAN provider has asked
> > > me to
> > > > provide them with the following info...
> > > >
> > > > Datastore Stripe Size
> > > > Default VM Disk Stripe Size
> > > > Average IO Size
> > > > Average THROUGHPUT (MB/s)
> > > > Average IOPS
> > > > Maximum IOPS
> > > > Read/Write Percentage of IO
> > > > Datastore Average Latency
> > > > VM Disk Average Latency
> > > >
> > > > All of this is from across all hosts and VM's to the storage
> > > domain.
> > > > Is there any way to get this kind of info from oVirt? I've been
> > > > looking at oVirt-reports but I don't see much as far as
> > > IO/throughput
> > > > reporting goes.
> > > >
> > > > Apologies if I've missed something obvious.
> > >
> > > Just a thought but, isn´t there any way of getting these numbers
> > > from
> > > the storage instead of looking at it from the virtualization? Are
> > > there
> > > _a lot_ of other systems connected to it?
> > >
> > > /K
> > >
> > > >
> > > > Thanks.
> > > >
> > > > Regards.
> > > >
> > > > Neil Wilson.
> > >
>


Re: [ovirt-users] IOPS stats/reports from all hosts to storage

2017-10-05 Thread Neil
Hi Karli,

I was hoping that too, but it seems the SAN doesn't have these features.

There is only the 4 oVirt hosts connected to it via 8GB FC.

I see oVirt has storage QoS, but how do we set storage QoS without knowing
the maximum storage limits? Perhaps I've misunderstood this...
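In the meantime, most of the datastore-level numbers the SAN provider asked
for (IOPS, throughput, average request size, read/write split, latency) can
be sampled on each host straight from the multipath devices backing the FC
storage domain. A host-side sketch only (the device path is a placeholder
for the real WWID, and the sysstat package must be installed):

multipath -ll                      # find the WWIDs of the storage domain LUNs
iostat -xmt 5 /dev/mapper/<wwid>   # r/s, w/s, rMB/s, wMB/s, avgrq-sz, await, ...
# per-VM disk latency would still have to come from inside the guests or
# from libvirt on each host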

Thanks.

Regards.

Neil Wilson.






On Thu, Oct 5, 2017 at 9:32 AM, Karli Sjöberg  wrote:

> On tor, 2017-10-05 at 08:27 +0200, Neil wrote:
> > Hi guys,
> >
> > I'm running FC storage with 4 hosts on oVirt 3.6 and we've been
> > having some IOPS issues recently and the SAN provider has asked me to
> > provide them with the following info...
> >
> > Datastore Stripe Size
> > Default VM Disk Stripe Size
> > Average IO Size
> > Average THROUGHPUT (MB/s)
> > Average IOPS
> > Maximum IOPS
> > Read/Write Percentage of IO
> > Datastore Average Latency
> > VM Disk Average Latency
> >
> > All of this is from across all hosts and VM's to the storage domain.
> > Is there any way to get this kind of info from oVirt? I've been
> > looking at oVirt-reports but I don't see much as far as IO/throughput
> > reporting goes.
> >
> > Apologies if I've missed something obvious.
>
> Just a thought but, isn´t there any way of getting these numbers from
> the storage instead of looking at it from the virtualization? Are there
> _a lot_ of other systems connected to it?
>
> /K
>
> >
> > Thanks.
> >
> > Regards.
> >
> > Neil Wilson.
>


[ovirt-users] IOPS stats/reports from all hosts to storage

2017-10-04 Thread Neil
Hi guys,

I'm running FC storage with 4 hosts on oVirt 3.6. We've been having some
IOPS issues recently, and the SAN provider has asked me to provide them with
the following info...

Datastore Stripe Size
Default VM Disk Stripe Size
Average IO Size
Average THROUGHPUT (MB/s)
Average IOPS
Maximum IOPS
Read/Write Percentage of IO
Datastore Average Latency
VM Disk Average Latency

All of this is from across all hosts and VM's to the storage domain. Is
there any way to get this kind of info from oVirt? I've been looking at
oVirt-reports but I don't see much as far as IO/throughput reporting goes.

Apologies if I've missed something obvious.

Thanks.

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
Thank you everyone.

I've updated to ovirt-engine-3.5.6.2-1 and this has resolved the problem as
it renewed my certs on engine-setup.

Much appreciated!

Regards.

Neil Wilson.

On Fri, Sep 22, 2017 at 3:18 PM, Neil  wrote:

> Thanks Sandro.
>
> I'll get cracking and report back if it fixed it.
>
> Thanks for all the help everyone.
>
>
> On Fri, Sep 22, 2017 at 3:14 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> 2017-09-22 15:07 GMT+02:00 Neil :
>>
>>>
>>> Thanks for the guidance everyone.
>>>
>>> I've upgraded my engine now to ovirt-engine-3.4.4-1 but I've still got
>>> the same error unfortunately. Below is the output of the upgrade. Should
>>> this have fixed the issue or do I need to upgrade to 3.5 etc?
>>>
>>
>> I think you'll need 3.5.4 at least: https://bugzilla.redhat
>> .com/show_bug.cgi?id=1214860
>>
>>
>>
>>
>>>
>>>
>>> [ INFO  ] Stage: Initializing
>>> [ INFO  ] Stage: Environment setup
>>>   Configuration files: 
>>> ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
>>> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>>>   Log file: /var/log/ovirt-engine/setup/ov
>>> irt-engine-setup-20170922125526-vw5khx.log
>>>   Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
>>> [ INFO  ] Stage: Environment packages setup
>>> [ INFO  ] Yum Downloading: repomdPLa0LXtmp.xml (0%)
>>> [ INFO  ] Stage: Programs detection
>>> [ INFO  ] Stage: Environment setup
>>> [ INFO  ] Stage: Environment customization
>>>
>>>   --== PRODUCT OPTIONS ==--
>>>
>>>
>>>   --== PACKAGES ==--
>>>
>>> [ INFO  ] Checking for product updates...
>>>   Setup has found updates for some packages, do you wish to
>>> update them now? (Yes, No) [Yes]:
>>> [ INFO  ] Checking for an update for Setup...
>>>
>>>   --== NETWORK CONFIGURATION ==--
>>>
>>> [WARNING] Failed to resolve engine01.mydomain.za using DNS, it can be
>>> resolved only locally
>>>   Setup can automatically configure the firewall on this system.
>>>   Note: automatic configuration of the firewall may overwrite
>>> current settings.
>>>   Do you want Setup to configure the firewall? (Yes, No) [Yes]:
>>> no
>>>
>>>   --== DATABASE CONFIGURATION ==--
>>>
>>>
>>>   --== OVIRT ENGINE CONFIGURATION ==--
>>>
>>>   Skipping storing options as database already prepared
>>>
>>>   --== PKI CONFIGURATION ==--
>>>
>>>   PKI is already configured
>>>
>>>   --== APACHE CONFIGURATION ==--
>>>
>>>
>>>   --== SYSTEM CONFIGURATION ==--
>>>
>>>
>>>   --== MISC CONFIGURATION ==--
>>>
>>>
>>>   --== END OF CONFIGURATION ==--
>>>
>>> [ INFO  ] Stage: Setup validation
>>>   During execution engine service will be stopped (OK, Cancel)
>>> [OK]:
>>> [WARNING] Less than 16384MB of memory is available
>>> [ INFO  ] Cleaning stale zombie tasks
>>>
>>>   --== CONFIGURATION PREVIEW ==--
>>>
>>>   Engine database name: engine
>>>   Engine database secured connection  : False
>>>   Engine database host: localhost
>>>   Engine database user name   : engine
>>>   Engine database host name validation: False
>>>   Engine database port: 5432
>>>   Datacenter storage type : False
>>>   Update Firewall : False
>>>   Configure WebSocket Proxy   : True
>>>   Host FQDN   : engine01.mydomain.za
>>>   Upgrade packages: True
>>>
>>>   Please confirm installation settings (OK, Cancel) [OK]:
>>> [ INFO  ] Cleaning async tasks and compensations
>>> [ INFO  ] Checking the Engine database consistency
>>> [ INFO  ] Stage: Transaction setup
>>> [ INFO  ] Stopping engine service
>>> [ INFO  ] Stopping websocket-proxy service
>>> [ INFO  ] Stage: Misc configuration
>>> [ INFO  ] Stage: Package installation
>>> 

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
Thanks Sandro.

I'll get cracking and report back if it fixed it.

Thanks for all the help everyone.


On Fri, Sep 22, 2017 at 3:14 PM, Sandro Bonazzola 
wrote:

>
>
> 2017-09-22 15:07 GMT+02:00 Neil :
>
>>
>> Thanks for the guidance everyone.
>>
>> I've upgraded my engine now to ovirt-engine-3.4.4-1 but I've still got
>> the same error unfortunately. Below is the output of the upgrade. Should
>> this have fixed the issue or do I need to upgrade to 3.5 etc?
>>
>
> I think you'll need 3.5.4 at least: https://bugzilla.
> redhat.com/show_bug.cgi?id=1214860
>
>
>
>
>>
>>
>> [ INFO  ] Stage: Initializing
>> [ INFO  ] Stage: Environment setup
>>   Configuration files: 
>> ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
>> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>>   Log file: /var/log/ovirt-engine/setup/ov
>> irt-engine-setup-20170922125526-vw5khx.log
>>   Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Yum Downloading: repomdPLa0LXtmp.xml (0%)
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>> [ INFO  ] Stage: Environment customization
>>
>>   --== PRODUCT OPTIONS ==--
>>
>>
>>   --== PACKAGES ==--
>>
>> [ INFO  ] Checking for product updates...
>>   Setup has found updates for some packages, do you wish to
>> update them now? (Yes, No) [Yes]:
>> [ INFO  ] Checking for an update for Setup...
>>
>>   --== NETWORK CONFIGURATION ==--
>>
>> [WARNING] Failed to resolve engine01.mydomain.za using DNS, it can be
>> resolved only locally
>>   Setup can automatically configure the firewall on this system.
>>   Note: automatic configuration of the firewall may overwrite
>> current settings.
>>   Do you want Setup to configure the firewall? (Yes, No) [Yes]: no
>>
>>   --== DATABASE CONFIGURATION ==--
>>
>>
>>   --== OVIRT ENGINE CONFIGURATION ==--
>>
>>   Skipping storing options as database already prepared
>>
>>   --== PKI CONFIGURATION ==--
>>
>>   PKI is already configured
>>
>>   --== APACHE CONFIGURATION ==--
>>
>>
>>   --== SYSTEM CONFIGURATION ==--
>>
>>
>>   --== MISC CONFIGURATION ==--
>>
>>
>>   --== END OF CONFIGURATION ==--
>>
>> [ INFO  ] Stage: Setup validation
>>   During execution engine service will be stopped (OK, Cancel)
>> [OK]:
>> [WARNING] Less than 16384MB of memory is available
>> [ INFO  ] Cleaning stale zombie tasks
>>
>>   --== CONFIGURATION PREVIEW ==--
>>
>>   Engine database name: engine
>>   Engine database secured connection  : False
>>   Engine database host: localhost
>>   Engine database user name   : engine
>>   Engine database host name validation: False
>>   Engine database port: 5432
>>   Datacenter storage type : False
>>   Update Firewall : False
>>   Configure WebSocket Proxy   : True
>>   Host FQDN   : engine01.mydomain.za
>>   Upgrade packages: True
>>
>>   Please confirm installation settings (OK, Cancel) [OK]:
>> [ INFO  ] Cleaning async tasks and compensations
>> [ INFO  ] Checking the Engine database consistency
>> [ INFO  ] Stage: Transaction setup
>> [ INFO  ] Stopping engine service
>> [ INFO  ] Stopping websocket-proxy service
>> [ INFO  ] Stage: Misc configuration
>> [ INFO  ] Stage: Package installation
>> [ INFO  ] Yum Status: Downloading Packages
>> [ INFO  ] Yum Download/Verify: ovirt-engine-3.4.4-1.el6.noarch
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 2.0 M(19%)
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 4.3 M(41%)
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 6.3 M(60%)
>> [ INFO  ] Yum Downloading: (2/13): 
>> ovirt-engine-backend-3.4.4-1.el6.noarch.rpm
>> 8.9 M(85%)
>> [ INFO  ] Yum Download/Verify: ovirt-engine-backend-3.4.4-1.el6.noarch
>> [ INFO  ] Yum Download/Verify:

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
irewall-cmd -service ovirt-http
  The following network ports should be opened:
  tcp:443
  tcp:5432
  tcp:6100
  tcp:80
  An example of the required configuration for iptables can be
found at:
  /etc/ovirt-engine/iptables.example

  --== END OF SUMMARY ==--

[ INFO  ] Starting engine service
[ INFO  ] Restarting httpd
[ INFO  ] Stage: Clean up
  Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20170922125526-vw5khx.log
[ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20170922143806-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully

I'm still seeing the following below in my engine.log, and when I log in,
all my VM's show as unknown.

2017-09-22 15:06:06,060 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-57) Command
GetCapabilitiesVDSCommand(HostName = node02.mydomain.za, HostId =
d2debdfe-76e7-40cf-a7fd-78a0f50f14d4,
vds=Host[node02.mydomain.za,d2debdfe-76e7-40cf-a7fd-78a0f50f14d4])
execution failed. Exception: VDSNetworkException:
javax.net.ssl.SSLHandshakeException: Received fatal alert:
certificate_expired

Any ideas?

Thanks!

On Fri, Sep 22, 2017 at 11:10 AM, Martin Perina  wrote:

>
>
> On Fri, Sep 22, 2017 at 10:58 AM, Neil  wrote:
>
>> Thanks Martin and Piotr,
>>
>> Correct, this was a very old installation from the old drey repo that was
>> upgraded gradually over the years.
>>
>> I have tried engine-setup yesterday, prior to this looking under
>> /var/log/ovirt-engine/setup it looks like 2014
>>
>> I've attached a log of the output of running it now, looks like a repo
>> issue with trying to upgrade to the latest 3.4.x release, but not sure what
>> else to look for?
>>
>
> ​Hmm, it's so ancient version that oVirt 3.4 mirrors are probably not
> working anymore. You can either:
>
> 1. Execute engine-setup --offline to skip updates check or
> 2. Edit /etc/yum.repos.d/ovirt*.conf files and switch from mirrors to main
> site resources.ovirt.org
>
>
>> Thanks for the assistance.
>>
>> Regards.
>>
>> Neil Wilson
>>
>>
>> On Fri, Sep 22, 2017 at 10:38 AM, Piotr Kliczewski <
>> piotr.kliczew...@gmail.com> wrote:
>>
>>> On Fri, Sep 22, 2017 at 10:35 AM, Martin Perina 
>>> wrote:
>>> >
>>> >
>>> > On Fri, Sep 22, 2017 at 10:18 AM, Neil  wrote:
>>> >>
>>> >> Hi Piotr,
>>> >>
>>> >> Thank you for the information.
>>> >>
>>> >> It looks like something has expired looking in the server.log now that
>>> >> debug is enabled.
>>> >>
>>> >> 2017-09-22 09:35:26,462 INFO  [stdout] (MSC service thread 1-4)
>>>  Version:
>>> >> V3
>>> >> 2017-09-22 09:35:26,464 INFO  [stdout] (MSC service thread 1-4)
>>>  Subject:
>>> >> CN=engine01.mydomain.za, O=mydomain, C=US
>>> >> 2017-09-22 09:35:26,467 INFO  [stdout] (MSC service thread 1-4)
>>> >> Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
>>> >> 2017-09-22 09:35:26,471 INFO  [stdout] (MSC service thread 1-4)
>>> >> 2017-09-22 09:35:26,472 INFO  [stdout] (MSC service thread 1-4)   Key:
>>> >> Sun RSA public key, 1024 bits
>>> >> 2017-09-22 09:35:26,474 INFO  [stdout] (MSC service thread 1-4)
>>>  modulus:
>>> >> 966706131850237857720016566132274169225143716493132034132811
>>> 213711757321195965137528821713060454503460188878350322233731
>>> 259812207539722762942035931744044702655933680916835641105243
>>> 164032601213316092139626126181817086803318505413903188689260
>>> 54438078223371655800890725486783860059873397983318033852172060923531
>>> >> 2017-09-22 09:35:26,476 INFO  [stdout] (MSC service thread 1-4)
>>>  public
>>> >> exponent: 65537
>>> >> 2017-09-22 09:35:26,477 INFO  [stdout] (MSC service thread 1-4)
>>> >> Validity: [From: Sun Oct 14 22:26:46 SAST 2012,
>>> >> 2017-09-22 09:35:26,478 INFO  [stdout] (MSC service thread 1-4)
>>> >> To: Tue Sep 19 18:26:49 SAST 2017]
>>> >> 2017-09-22 09:35:26,479 INFO  [stdout] (MSC service thread 1-4)
>>>  Issuer:
>>> >> CN=CA-engine01.mydomain.za.47472, O=mydomain, C=US
>>> >>
>>> >> Any idea how I can generate a new one and what cert it is that's
>>> expired?
>>> >

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-22 Thread Neil
Hi Piotr,

Thank you for the information.

Looking in the server.log now that debug is enabled, it looks like
something has expired.

2017-09-22 09:35:26,462 INFO  [stdout] (MSC service thread 1-4)   Version:
V3
2017-09-22 09:35:26,464 INFO  [stdout] (MSC service thread 1-4)   Subject:
CN=engine01.mydomain.za, O=mydomain, C=US
2017-09-22 09:35:26,467 INFO  [stdout] (MSC service thread 1-4)   Signature
Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
2017-09-22 09:35:26,471 INFO  [stdout] (MSC service thread 1-4)
2017-09-22 09:35:26,472 INFO  [stdout] (MSC service thread 1-4)   Key:  Sun
RSA public key, 1024 bits
2017-09-22 09:35:26,474 INFO  [stdout] (MSC service thread 1-4)   modulus:
96670613185023785772001656613227416922514371649313203413281121371175732119596513752882171306045450346018887835032223373125981220753972276294203593174404470265593368091683564110524316403260121331609213962612618181708680331850541390318868926054438078223371655800890725486783860059873397983318033852172060923531
2017-09-22 09:35:26,476 INFO  [stdout] (MSC service thread 1-4)   public
exponent: 65537
2017-09-22 09:35:26,477 INFO  [stdout] (MSC service thread 1-4)   Validity:
[From: Sun Oct 14 22:26:46 SAST 2012,
2017-09-22 09:35:26,478 INFO  [stdout] (MSC service thread 1-4)
   To: Tue Sep 19 18:26:49 SAST 2017]
2017-09-22 09:35:26,479 INFO  [stdout] (MSC service thread 1-4)   Issuer:
CN=CA-engine01.mydomain.za.47472, O=mydomain, C=US

Any idea how I can generate a new one and what cert it is that's expired?

Please see the attached log for more info.
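(In case it helps anyone else: I enabled the debug output roughly as Piotr
suggests further down, i.e. adding the JVM flag and restarting the engine.
The script path below is what I assume the installed copy is on an el6
engine, so double-check it and back the file up first.)

  cp /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py{,.bak}
  # add '-Djavax.net.debug=all' to the engine JVM arguments in that file
  service ovirt-engine restart
  less /var/log/ovirt-engine/server.log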

Thank you so much for your assistance.

Regards.

Neil Wilson.






On Thu, Sep 21, 2017 at 8:41 PM, Piotr Kliczewski <
piotr.kliczew...@gmail.com> wrote:

> Neil,
>
> It seems that your engine certificate(s) is/are not ok. I would
> suggest to enable ssl debug in the engine by:
> - add '-Djavax.net.debug=all' to ovirt-engine.py file here [1].
> - restart your engine
> - check your server.log and check what is the issue.
>
> Hopefully we will be able to understand what happened in your setup.
>
> Thanks,
> Piotr
>
> [1] https://github.com/oVirt/ovirt-engine/blob/master/
> packaging/services/ovirt-engine/ovirt-engine.py#L341
>
> On Thu, Sep 21, 2017 at 4:42 PM, Neil  wrote:
> > Further to the logs sent, on the nodes I'm also seeing the following
> error
> > under /var/log/messages...
> >
> > Sep 20 03:43:12 node01 vdsm root ERROR invalid client certificate with
> > subject "/C=US/O=UKDM/CN=engine01.mydomain.za"^C
> > Sep 20 03:43:12 node01 vdsm vds ERROR xml-rpc handler
> exception#012Traceback
> > (most recent call last):#012  File "/usr/share/vdsm/BindingXMLRPC.py",
> line
> > 80, in threaded_start#012self.server.handle_request()#012  File
> > "/usr/lib64/python2.6/SocketServer.py", line 278, in handle_request#012
> > self._handle_request_noblock()#012  File
> > "/usr/lib64/python2.6/SocketServer.py", line 288, in
> > _handle_request_noblock#012request, client_address =
> > self.get_request()#012  File "/usr/lib64/python2.6/SocketServer.py",
> line
> > 456, in get_request#012return self.socket.accept()#012  File
> > "/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line
> 136,
> > in accept#012raise SSL.SSLError("%s, client %s" % (e,
> > address[0]))#012SSLError: no certificate returned, client 10.251.193.5
> >
> > Not sure if this is any further help in diagnosing the issue?
> >
> > Thanks, any assistance is appreciated.
> >
> > Regards.
> >
> > Neil Wilson.
> >
> >
> > On Thu, Sep 21, 2017 at 4:31 PM, Neil  wrote:
> >>
> >> Hi Piotr,
> >>
> >> Thank you for the reply. After sending the email I did go and check the
> >> engine one too
> >>
> >> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem
> -enddate
> >> -noout
> >> notAfter=Oct 13 16:26:46 2022 GMT
> >>
> >> I'm not sure if this one below is meant to verify or if this output is
> >> expected?
> >>
> >> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/private/
> ca.pem
> >> -enddate -noout
> >> unable to load certificate
> >> 140642165552968:error:0906D06C:PEM routines:PEM_read_bio:no start
> >> line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
> >>
> >> My date is correct too Thu Sep 21 16:30:15 SAST 2017
> >>
> >> Any ideas?
> >>
> >> Googling surprisingly doesn't come up with much.
> >>
> >> Thank you.
> >>
> >> Regards.
> >>
> >> Neil Wilson.
> >

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-21 Thread Neil
Further to the logs sent, on the nodes I'm also seeing the following error
under /var/log/messages...

Sep 20 03:43:12 node01 vdsm root ERROR invalid client certificate with
subject "/C=US/O=UKDM/CN=engine01.mydomain.za"^C
Sep 20 03:43:12 node01 vdsm vds ERROR xml-rpc handler
exception#012Traceback (most recent call last):#012  File
"/usr/share/vdsm/BindingXMLRPC.py", line 80, in threaded_start#012
 self.server.handle_request()#012  File
"/usr/lib64/python2.6/SocketServer.py", line 278, in handle_request#012
 self._handle_request_noblock()#012  File
"/usr/lib64/python2.6/SocketServer.py", line 288, in
_handle_request_noblock#012request, client_address =
self.get_request()#012  File "/usr/lib64/python2.6/SocketServer.py", line
456, in get_request#012return self.socket.accept()#012  File
"/usr/lib64/python2.6/site-packages/vdsm/SecureXMLRPCServer.py", line 136,
in accept#012raise SSL.SSLError("%s, client %s" % (e,
address[0]))#012SSLError: no certificate returned, client 10.251.193.5

Not sure if this is any further help in diagnosing the issue?

Thanks, any assistance is appreciated.

Regards.

Neil Wilson.


On Thu, Sep 21, 2017 at 4:31 PM, Neil  wrote:

> Hi Piotr,
>
> Thank you for the reply. After sending the email I did go and check the
> engine one too
>
> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -enddate
> -noout
> notAfter=Oct 13 16:26:46 2022 GMT
>
> I'm not sure if this one below is meant to verify or if this output is
> expected?
>
> [root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/private/ca.pem
> -enddate -noout
> unable to load certificate
> 140642165552968:error:0906D06C:PEM routines:PEM_read_bio:no start
> line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
>
> My date is correct too Thu Sep 21 16:30:15 SAST 2017
>
> Any ideas?
>
> Googling surprisingly doesn't come up with much.
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
> On Thu, Sep 21, 2017 at 4:16 PM, Piotr Kliczewski <
> piotr.kliczew...@gmail.com> wrote:
>
>> Neil,
>>
>> You checked both nodes what about the engine? Can you check engine certs?
>> You can find more info where they are located here [1].
>>
>> Thanks,
>> Piotr
>>
>> [1] https://www.ovirt.org/develop/release-management/features/in
>> fra/pki/#ovirt-engine
>>
>> On Thu, Sep 21, 2017 at 3:26 PM, Neil  wrote:
>> > Hi guys,
>> >
>> > Please could someone assist, my cluster is down and I can't access my
>> vm's
>> > to switch some of them back on.
>> >
>> > I'm seeing the following error in the engine.log however I've checked my
>> > certs on my hosts (as some of the google results said to check), but the
>> > certs haven't expired...
>> >
>> >
>> > 2017-09-21 15:09:45,077 ERROR
>> > [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
>> > (DefaultQuartzScheduler_Worker-4) Command
>> GetCapabilitiesVDSCommand(HostName
>> > = node02.mydomain.za, HostId = d2debdfe-76e7-40cf-a7fd-78a0f50f14d4,
>> > vds=Host[node02.mydomain.za]) execution failed. Exception:
>> > VDSNetworkException: javax.net.ssl.SSLHandshakeException: Received
>> fatal
>> > alert: certificate_expired
>> > 2017-09-21 15:09:45,086 ERROR
>> > [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
>> > (DefaultQuartzScheduler_Worker-10) Command
>> > GetCapabilitiesVDSCommand(HostName = node01.mydomain.za, HostId =
>> > b108549c-1700-11e2-b936-9f5243b8ce13, vds=Host[node01.mydomain.za])
>> > execution failed. Exception: VDSNetworkException:
>> > javax.net.ssl.SSLHandshakeException: Received fatal alert:
>> > certificate_expired
>> > 2017-09-21 15:09:48,173 ERROR
>> >
>> > My engine and host info is below...
>> >
>> > [root@engine01 ovirt-engine]# rpm -qa | grep -i ovirt
>> > ovirt-engine-lib-3.4.0-1.el6.noarch
>> > ovirt-engine-restapi-3.4.0-1.el6.noarch
>> > ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
>> > ovirt-engine-3.4.0-1.el6.noarch
>> > ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
>> > ovirt-host-deploy-java-1.2.0-1.el6.noarch
>> > ovirt-engine-setup-3.4.0-1.el6.noarch
>> > ovirt-host-deploy-1.2.0-1.el6.noarch
>> > ovirt-engine-backend-3.4.0-1.el6.noarch
>> > ovirt-image-uploader-3.4.0-1.el6.noarch
>> > ovirt-engine-tools-3.4.0-1.el6.noarch
>> > ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
>> > ovirt-engine-webadmin-po

Re: [ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-21 Thread Neil
Hi Piotr,

Thank you for the reply. After sending the email I did go and check the
engine one too

[root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -enddate
-noout
notAfter=Oct 13 16:26:46 2022 GMT

I'm not sure if this one below is meant to verify or if this output is
expected?

[root@engine01 /]# openssl x509 -in /etc/pki/ovirt-engine/private/ca.pem
-enddate -noout
unable to load certificate
140642165552968:error:0906D06C:PEM routines:PEM_read_bio:no start
line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE
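(Side note: I suspect private/ca.pem is actually the CA's private key rather
than a certificate, which would explain the error above. What I really wanted
was a sweep of the engine certs themselves; something like the loop below,
where the certs/*.cer layout is an assumption based on my engine:)

  for c in /etc/pki/ovirt-engine/ca.pem /etc/pki/ovirt-engine/certs/*.cer; do
    echo "== $c"; openssl x509 -in "$c" -noout -subject -enddate
  done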

My date is correct too Thu Sep 21 16:30:15 SAST 2017

Any ideas?

Googling surprisingly doesn't come up with much.

Thank you.

Regards.

Neil Wilson.

On Thu, Sep 21, 2017 at 4:16 PM, Piotr Kliczewski <
piotr.kliczew...@gmail.com> wrote:

> Neil,
>
> You checked both nodes what about the engine? Can you check engine certs?
> You can find more info where they are located here [1].
>
> Thanks,
> Piotr
>
> [1] https://www.ovirt.org/develop/release-management/features/
> infra/pki/#ovirt-engine
>
> On Thu, Sep 21, 2017 at 3:26 PM, Neil  wrote:
> > Hi guys,
> >
> > Please could someone assist, my cluster is down and I can't access my
> vm's
> > to switch some of them back on.
> >
> > I'm seeing the following error in the engine.log however I've checked my
> > certs on my hosts (as some of the google results said to check), but the
> > certs haven't expired...
> >
> >
> > 2017-09-21 15:09:45,077 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> > (DefaultQuartzScheduler_Worker-4) Command GetCapabilitiesVDSCommand(
> HostName
> > = node02.mydomain.za, HostId = d2debdfe-76e7-40cf-a7fd-78a0f50f14d4,
> > vds=Host[node02.mydomain.za]) execution failed. Exception:
> > VDSNetworkException: javax.net.ssl.SSLHandshakeException: Received fatal
> > alert: certificate_expired
> > 2017-09-21 15:09:45,086 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
> > (DefaultQuartzScheduler_Worker-10) Command
> > GetCapabilitiesVDSCommand(HostName = node01.mydomain.za, HostId =
> > b108549c-1700-11e2-b936-9f5243b8ce13, vds=Host[node01.mydomain.za])
> > execution failed. Exception: VDSNetworkException:
> > javax.net.ssl.SSLHandshakeException: Received fatal alert:
> > certificate_expired
> > 2017-09-21 15:09:48,173 ERROR
> >
> > My engine and host info is below...
> >
> > [root@engine01 ovirt-engine]# rpm -qa | grep -i ovirt
> > ovirt-engine-lib-3.4.0-1.el6.noarch
> > ovirt-engine-restapi-3.4.0-1.el6.noarch
> > ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
> > ovirt-engine-3.4.0-1.el6.noarch
> > ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
> > ovirt-host-deploy-java-1.2.0-1.el6.noarch
> > ovirt-engine-setup-3.4.0-1.el6.noarch
> > ovirt-host-deploy-1.2.0-1.el6.noarch
> > ovirt-engine-backend-3.4.0-1.el6.noarch
> > ovirt-image-uploader-3.4.0-1.el6.noarch
> > ovirt-engine-tools-3.4.0-1.el6.noarch
> > ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
> > ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
> > ovirt-engine-cli-3.4.0.5-1.el6.noarch
> > ovirt-engine-setup-base-3.4.0-1.el6.noarch
> > ovirt-iso-uploader-3.4.0-1.el6.noarch
> > ovirt-engine-userportal-3.4.0-1.el6.noarch
> > ovirt-log-collector-3.4.1-1.el6.noarch
> > ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
> > ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
> > ovirt-engine-dbscripts-3.4.0-1.el6.noarch
> > [root@engine01 ovirt-engine]# cat /etc/redhat-release
> > CentOS release 6.5 (Final)
> >
> >
> > [root@node02 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem
> -enddate
> > -noout ; date
> > notAfter=May 27 08:36:17 2019 GMT
> > Thu Sep 21 15:18:22 SAST 2017
> > CentOS release 6.5 (Final)
> > [root@node02 ~]# rpm -qa | grep vdsm
> > vdsm-4.14.6-0.el6.x86_64
> > vdsm-python-4.14.6-0.el6.x86_64
> > vdsm-cli-4.14.6-0.el6.noarch
> > vdsm-xmlrpc-4.14.6-0.el6.noarch
> > vdsm-python-zombiereaper-4.14.6-0.el6.noarch
> >
> >
> > [root@node01 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem
> -enddate
> > -noout ; date
> > notAfter=Jun 13 16:09:41 2018 GMT
> > Thu Sep 21 15:18:52 SAST 2017
> > CentOS release 6.5 (Final)
> > [root@node01 ~]# rpm -qa | grep -i vdsm
> > vdsm-4.14.6-0.el6.x86_64
> > vdsm-xmlrpc-4.14.6-0.el6.noarch
> > vdsm-cli-4.14.6-0.el6.noarch
> > vdsm-python-zombiereaper-4.14.6-0.el6.noarch
> > vdsm-python-4.14.6-0.el6.x86_64
> >
> > Please could I have some assistance, I'm rather desperate.
> >
> > Thank you.
> >
> > Regards.
> >
> > Neil Wilson
> >
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] SSLHandshakeException: Received fatal alert: certificate_expired

2017-09-21 Thread Neil
Hi guys,

Please could someone assist, my cluster is down and I can't access my vm's
to switch some of them back on.

I'm seeing the following error in the engine.log however I've checked my
certs on my hosts (as some of the google results said to check), but the
certs haven't expired...


2017-09-21 15:09:45,077 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-4) Command
GetCapabilitiesVDSCommand(HostName = node02.mydomain.za, HostId =
d2debdfe-76e7-40cf-a7fd-78a0f50f14d4, vds=Host[node02.mydomain.za])
execution failed. Exception: VDSNetworkException:
javax.net.ssl.SSLHandshakeException: Received fatal alert:
certificate_expired
2017-09-21 15:09:45,086 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-10) Command
GetCapabilitiesVDSCommand(HostName = node01.mydomain.za, HostId =
b108549c-1700-11e2-b936-9f5243b8ce13, vds=Host[node01.mydomain.za])
execution failed. Exception: VDSNetworkException:
javax.net.ssl.SSLHandshakeException: Received fatal alert:
certificate_expired
2017-09-21 15:09:48,173 ERROR

My engine and host info is below...

[root@engine01 ovirt-engine]# rpm -qa | grep -i ovirt
ovirt-engine-lib-3.4.0-1.el6.noarch
ovirt-engine-restapi-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
ovirt-host-deploy-java-1.2.0-1.el6.noarch
ovirt-engine-setup-3.4.0-1.el6.noarch
ovirt-host-deploy-1.2.0-1.el6.noarch
ovirt-engine-backend-3.4.0-1.el6.noarch
ovirt-image-uploader-3.4.0-1.el6.noarch
ovirt-engine-tools-3.4.0-1.el6.noarch
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
ovirt-engine-cli-3.4.0.5-1.el6.noarch
ovirt-engine-setup-base-3.4.0-1.el6.noarch
ovirt-iso-uploader-3.4.0-1.el6.noarch
ovirt-engine-userportal-3.4.0-1.el6.noarch
ovirt-log-collector-3.4.1-1.el6.noarch
ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
ovirt-engine-dbscripts-3.4.0-1.el6.noarch
[root@engine01 ovirt-engine]# cat /etc/redhat-release
CentOS release 6.5 (Final)


[root@node02 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -enddate
-noout ; date
notAfter=May 27 08:36:17 2019 GMT
Thu Sep 21 15:18:22 SAST 2017
CentOS release 6.5 (Final)
[root@node02 ~]# rpm -qa | grep vdsm
vdsm-4.14.6-0.el6.x86_64
vdsm-python-4.14.6-0.el6.x86_64
vdsm-cli-4.14.6-0.el6.noarch
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-python-zombiereaper-4.14.6-0.el6.noarch


[root@node01 ~]# openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -enddate
-noout ; date
notAfter=Jun 13 16:09:41 2018 GMT
Thu Sep 21 15:18:52 SAST 2017
CentOS release 6.5 (Final)
[root@node01 ~]# rpm -qa | grep -i vdsm
vdsm-4.14.6-0.el6.x86_64
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-cli-4.14.6-0.el6.noarch
vdsm-python-zombiereaper-4.14.6-0.el6.noarch
vdsm-python-4.14.6-0.el6.x86_64

Please could I have some assistance, I'm rather desperate.

Thank you.

Regards.

Neil Wilson
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: AcquireHostIdFailure and code 661

2017-09-19 Thread Neil
Hi Moritz,

Thanks for your assistance.

I've checked my /etc/sysconfig/nfs on all 3 hosts and my engine and none of
them have any options specified, so I don't think it's this one.

In terms of adding a sanlock and vdsm user, was this done on your hosts or
engine?

My hosts uid for sanlock and vdsm are all the same.

I don't have a sanlock user on my oVirt engine, but I do have a vdsm user,
and the uid matches across all my hosts too.
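For reference, this is roughly what I ran on each host (and on the engine)
to compare; nothing oVirt-specific, just the stock tools:

  # compare the account and group IDs across hosts and the NFS server
  getent passwd vdsm sanlock
  getent group kvm sanlock
  # and confirm mountd isn't being started with extra options
  grep RPCMOUNTDOPTS /etc/sysconfig/nfs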

Thank you!

Regards.

Neil Wilson.





On Tue, Sep 19, 2017 at 3:47 PM, Moritz Baumann 
wrote:

> Hi Neil,
>
> I had similar errors ('Sanlock lockspace add failure' and SPM problems,
> ...) in the log files and my problem was that I added the "-g"  option to
> mountd (months ago without restarting the service) in /etc/sysconfig/nfs
> under RPCMOUNTDOPTS.
>
> I had to either remove the "-g" option or add a group sanlock and vdsm with
> the same users as on the ovirt-nodes.
>
> Maybe your issue is similar.
>
> Cheers,
> Moritz
>
> On 19.09.2017 14:16, Neil wrote:
>
>> Hi guys,
>>
>> I'm desperate to get to the bottom of this issue. Does anyone have any
>> ideas please?
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>> -- Forwarded message --
>> From: *Neil* mailto:nwilson...@gmail.com>>
>> Date: Mon, Sep 11, 2017 at 4:46 PM
>> Subject: AcquireHostIdFailure and code 661
>> To: "users@ovirt.org <mailto:users@ovirt.org>" > users@ovirt.org>>
>>
>>
>> Hi guys,
>>
>> Please could someone shed some light on this issue I'm facing.
>>
>> I'm trying to add a new NFS storage domain but when I try to add it, I get a
>> message saying "Acquire hostID failed" and it fails to add.
>>
>> I can mount the NFS share manually and I can see that once the attaching
>> has failed the NFS share is still mounted on the hosts, as per the
>> following...
>>
>> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 on
>> /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
>> type nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=172
>> .16.0.11)
>>
>> Also looking at the folders on the NFS share I can see that some data has
>> been written, so it's not a permissions issue...
>>
>> drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 16ab135b-0362-4d7e-bb11-edf5b9
>> 3535d5
>> -rwx---rwx. 1 vdsm kvm0 Sep 11 16:08 __DIRECT_IO_TEST__
>>
>> I have just upgraded from 3.3 to 3.5 as well as upgraded my 3 hosts in
>> the hope it's a known bug, but I'm still encountering the same problem.
>>
>> It's not a hosted engine, and you might see in the logs that I have a
>> storage domain that is out of space, which I'm aware of; I'm hoping the
>> system using this space will be decommissioned in 2 days.
>>
>> FilesystemSize  Used Avail Use% Mounted on
>> /dev/sda2 420G  2.2G  413G   1% /
>> tmpfs  48G 0   48G   0% /dev/shm
>> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB
>>915G  915G  424M 100%
>> /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Ex
>> ports___RAID1__1TB
>> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1
>>5.5T  3.7T  1.8T  67%
>> /rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Ex
>> ports___STORAGE1
>> 172.16.0.20:/data/ov-export
>>3.6T  2.3T  1.3T  65%
>> /rhev/data-center/mnt/172.16.0.20:_data_ov-export
>> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB
>>3.6T  2.0T  1.6T  56%
>> /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___4TB
>> 172.16.0.253:/var/lib/exports/iso
>>193G   42G  141G  23%
>> /rhev/data-center/mnt/172.16.0.253:_var_lib_exports_iso
>> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2
>>5.5T  3.7G  5.5T   1%
>> /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
>>
>> The "STOR2" above is left mounted after attempting to add the new NFS
>> storage domain.
>>
>> Engine details:
>> Fedora release 19 (Schrödinger’s Cat)
>> ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch
>> ovirt-release34-1.0.3-1.noarch
>> ovirt-image-uploader-3.5.0-1.fc19.noarch
>> ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch
>> ovirt-log-collector-3.5.0-1.fc19.noarch
>> ovirt-release35-006-1.noarch
>> ovirt-engine-setup-3.5.0.1-1.fc19.noa

Re: [ovirt-users] AcquireHostIdFailure and code 661

2017-09-14 Thread Neil
Sorry to re-post, but does anyone have any ideas?

Thank you.

Regards.

Neil Wilson.

On Mon, Sep 11, 2017 at 4:46 PM, Neil  wrote:

> Hi guys,
>
> Please could someone shed some light on this issue I'm facing.
>
> I'm trying to add a new NFS storage domain but when I try to add it, I get a
> message saying "Acquire hostID failed" and it fails to add.
>
> I can mount the NFS share manually and I can see that once the attaching
> has failed the NFS share is still mounted on the hosts, as per the
> following...
>
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 on
> /rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2
> type nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=
> 172.16.0.11)
>
> Also looking at the folders on the NFS share I can see that some data has
> been written, so it's not a permissions issue...
>
> drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 16ab135b-0362-4d7e-bb11-
> edf5b93535d5
> -rwx---rwx. 1 vdsm kvm0 Sep 11 16:08 __DIRECT_IO_TEST__
>
> I have just upgraded from 3.3 to 3.5 as well as upgraded my 3 hosts in the
> hope it's a known bug, but I'm still encountering the same problem.
>
> It's not a hosted engine, and you might see in the logs that I have a
> storage domain that is out of space, which I'm aware of; I'm hoping the
> system using this space will be decommissioned in 2 days.
>
> FilesystemSize  Used Avail Use% Mounted on
> /dev/sda2 420G  2.2G  413G   1% /
> tmpfs  48G 0   48G   0% /dev/shm
> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB
>   915G  915G  424M 100% /rhev/data-center/mnt/172.16.
> 0.10:_raid0_data___NAS__NFS__Exports___RAID1__1TB
> 172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1
>   5.5T  3.7T  1.8T  67% /rhev/data-center/mnt/172.16.
> 0.10:_raid0_data___NAS__NFS__Exports___STORAGE1
> 172.16.0.20:/data/ov-export
>   3.6T  2.3T  1.3T  65% /rhev/data-center/mnt/172.16.
> 0.20:_data_ov-export
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB
>   3.6T  2.0T  1.6T  56% /rhev/data-center/mnt/172.16.
> 0.11:_raid1_data___NAS__NFS__Exports___4TB
> 172.16.0.253:/var/lib/exports/iso
>   193G   42G  141G  23% /rhev/data-center/mnt/172.16.
> 0.253:_var_lib_exports_iso
> 172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2
>   5.5T  3.7G  5.5T   1% /rhev/data-center/mnt/172.16.
> 0.11:_raid1_data___NAS__NFS__Exports___STOR2
>
> The "STOR2" above is left mounted after attempting to add the new NFS
> storage domain.
>
> Engine details:
> Fedora release 19 (Schrödinger’s Cat)
> ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch
> ovirt-release34-1.0.3-1.noarch
> ovirt-image-uploader-3.5.0-1.fc19.noarch
> ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch
> ovirt-log-collector-3.5.0-1.fc19.noarch
> ovirt-release35-006-1.noarch
> ovirt-engine-setup-3.5.0.1-1.fc19.noarch
> ovirt-release33-1.0.0-0.1.master.noarch
> ovirt-engine-tools-3.5.0.1-1.fc19.noarch
> ovirt-engine-lib-3.5.0.1-1.fc19.noarch
> ovirt-engine-sdk-python-3.5.0.8-1.fc19.noarch
> ovirt-host-deploy-java-1.3.0-1.fc19.noarch
> ovirt-engine-backend-3.5.0.1-1.fc19.noarch
> sos-3.1-1.1.fc19.ovirt.noarch
> ovirt-engine-setup-base-3.5.0.1-1.fc19.noarch
> ovirt-engine-extensions-api-impl-3.5.0.1-1.fc19.noarch
> ovirt-engine-webadmin-portal-3.5.0.1-1.fc19.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.5.0.1-1.fc19.noarch
> ovirt-iso-uploader-3.5.0-1.fc19.noarch
> ovirt-host-deploy-1.3.0-1.fc19.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.5.0.1-1.fc19.noarch
> ovirt-engine-3.5.0.1-1.fc19.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.5.0.1-1.fc19.noarch
> ovirt-engine-userportal-3.5.0.1-1.fc19.noarch
> ovirt-engine-cli-3.5.0.5-1.fc19.noarch
> ovirt-engine-restapi-3.5.0.1-1.fc19.noarch
> libvirt-daemon-driver-nwfilter-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-libxl-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-secret-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-config-network-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-storage-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-network-1.1.3.2-1.fc19.x86_64
> libvirt-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64
> libvirt-client-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-nodedev-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-uml-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-xen-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-driver-interface-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-config-nwfilter-1.1.3.2-1.fc19.x86_64
> libvirt-daemon-1.1.3.2-1.fc19.x86_6

Re: [ovirt-users] Move VM from FC storage cluster to local-storage in another cluster

2017-08-08 Thread Neil
I'm replying to my own email as I managed to resolve the issue.

Sometimes it helps to RTFM.

I had to remove the export domain that I created on cluster2, as well as
detach the original export domain on cluster1, before I could attach the
original export domain to cluster2, as you can only have one export domain
attached at a time.



On Tue, Aug 8, 2017 at 11:10 AM, Neil  wrote:

> Hi guys,
>
> I need to move a VM from one cluster (cluster1) using FC storage with 4
> hosts, to a separate cluster (cluster 2) with only 1 NEW host that has
> local storage only.
>
> What would be the best way to do this?
>
> All I aim to achieve is to have a single NEW host that has local storage
> that I can run a single VM on, which is manageable via oVirt, so even if it
> means adding the NEW host as a separate DC, how can I copy or move (not
> live) the VM to this new host?
>
> I've tried exporting the VM to an export domain on cluster1, but I can't
> seem to figure out how to "attach" the export domain to cluster2 with the
> NEW host.
>
> If I go to "Import VM" on cluster2, I get a message saying "Not available
> when no export domain is active" if I try and attach the same export domain
> that was used to export the VM in cluster1, it says I can't because it's
> already assigned to the cluster, so I'm really confused as to how to go
> about doing this.
>
> Any help and guidance is appreciated.
>
> Thanks.
>
> Regards.
>
> Neil Wilson.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Move VM from FC storage cluster to local-storage in another cluster

2017-08-08 Thread Neil
Hi guys,

I need to move a VM from one cluster (cluster1) using FC storage with 4
hosts, to a separate cluster (cluster 2) with only 1 NEW host that has
local storage only.

What would be the best way to do this?

All I aim to achieve is to have a single NEW host with local storage that I
can run a single VM on, manageable via oVirt. So even if it means adding the
NEW host as a separate DC, how can I copy or move (not live) the VM to this
new host?

I've tried exporting the VM to an export domain on cluster1, but I can't
seem to figure out how to "attach" the export domain to cluster2 with the
NEW host.

If I go to "Import VM" on cluster2, I get a message saying "Not available
when no export domain is active" if I try and attach the same export domain
that was used to export the VM in cluster1, it says I can't because it's
already assigned to the cluster, so I'm really confused as to how to go
about doing this.

Any help and guidance is appreciated.

Thanks.

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.6.x and Centos 6.9

2017-08-02 Thread Neil
Ah! Okay, thanks for the assistance Yaniv.

I'll go ahead and start the upgrade to Centos 7 process asap.

Regards.

Neil Wilson.


On Wed, Aug 2, 2017 at 2:06 PM, Yaniv Kaul  wrote:

>
>
> On Wed, Aug 2, 2017 at 2:39 PM, Neil  wrote:
>
>> Thanks Yaniv,
>>
>>
>> On Wed, Aug 2, 2017 at 12:12 PM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Wed, Aug 2, 2017 at 1:06 PM, Neil  wrote:
>>>
>>>> Hi Yaniv,
>>>>
>>>> Thanks for the assistance.
>>>>
>>>> On Wed, Aug 2, 2017 at 12:01 PM, Yaniv Kaul  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Aug 2, 2017 at 12:09 PM, Neil  wrote:
>>>>>
>>>>>> Hi guys,
>>>>>>
>>>>>> I upgraded to the latest ovirt-engine-3.6.7.5-1.el6.noarch available
>>>>>> in the ovirt 3.6 repo, however I seem to be encountering a known bug (
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1387949)
>>>>>>
>>>>>
>>>>> This specific bug seems to be fixed in 4.1 (and was backported to 4.0)
>>>>> - are you sure it's fixed in 3.6.x?
>>>>>
>>>>>
>>>>
>>>> I only "suspect" it fixed because the Redhat bug report mentions it was
>>>> fixed.
>>>> Is this causing the failure to negotiate SPM, as this is what I'm
>>>> trying to resolve?
>>>>
>>>>
>>>>
>>>>> which looks to be fixed in ovirt 3.6.9.2 but I can't seem to find out
>>>>>> how to install this.
>>>>>>
>>>>>> I was hoping it was via http://resources.ovirt.org
>>>>>> /pub/ovirt-3.6-snapshot but this link is dead.
>>>>>>
>>>>>> Is anyone using ovirt 3.6.9 and how does one obtain it?
>>>>>>
>>>>>
>>>>> If you are on 3.6.7, go ahead and upgrade.
>>>>>
>>>>
>>>> Any ideas how? I don't see a repo available for it and can't find
>>>> packages even when looking through http://resources.ovirt.org manually?
>>>>
>>>
>>> See the upgrade guide @ http://www.ovirt.org/documen
>>> tation/upgrade-guide/upgrade-guide/
>>>
>>
>> The issue is that there isn't a repo that contains 3.6.9 by the looks of
>> things, I'm running 3.6.7 and running engine-upgrade-check says there are
>> no new packages. 3.6.9 doesn't seem to exist in the 3.6 repo?
>>
>
> Indeed, which is why I've suggested you go ahead and upgrade to 4.
>
>>
>>
>> (don't forget to enable the channels - you can do it first by installing
>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm )
>>> Y.
>>>
>>
>> I'd like to upgrade to 4 asap, but this involves installing a new Centos
>> 7 machine to migrate my engine currently running on Centos 6.9, which I
>> can't do just yet.
>>
>
> Correct, you'll need CentOS 7. I've heard 7.4 is in the oven and will be
> ready in the coming weeks (?).
> Y.
>
>
>>
>>
>>
>>
>>
>>
>>>
>>>
>>>>
>>>> Y.
>>>>>
>>>>>
>>>>>> The issues I'm facing is, after trying to update my cluster version
>>>>>> to 3.6, my hosts weren't compatible, as it says they only compatible with
>>>>>> version 3.3, 3.4 and 3.5 etc. I then upgraded 1 hosts to Centos 6.9 and
>>>>>> installed and updated the latest vdsm from the 3.6 repo, but this still
>>>>>> didn't allow me to change my cluster version. I then rolled back the
>>>>>> cluster version to 3.5.
>>>>>>
>>>>>> At the moment because I've upgraded 1 host, I can't select this host
>>>>>> as SPM and I'm wondering if I can upgrade my remaining hosts, or will 
>>>>>> this
>>>>>> prevent any hosts from being my SPM? I'm seeing the following error
>>>>>> "WARNING Unrecognized protocol: 'SUBSCRI'" on my upgraded host.
>>>>>>
>>>>>> I'm wanting to upgrade to the latest 3.6 as well as upgrade all my
>>>>>> hosts, so that I can start the ovirt 4 upgrade next.
>>>>>>
>>>>>> Please could I have some guidance on this?
>>>>>>
>>>>>> Thank you.
>>>>>>
>>>>>> Regards.
>>>>>>
>>>>>> Neil Wilson.
>>>>>>
>>>>>> ___
>>>>>> Users mailing list
>>>>>> Users@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.6.x and Centos 6.9

2017-08-02 Thread Neil
Thanks Yaniv,


On Wed, Aug 2, 2017 at 12:12 PM, Yaniv Kaul  wrote:

>
>
> On Wed, Aug 2, 2017 at 1:06 PM, Neil  wrote:
>
>> Hi Yaniv,
>>
>> Thanks for the assistance.
>>
>> On Wed, Aug 2, 2017 at 12:01 PM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Wed, Aug 2, 2017 at 12:09 PM, Neil  wrote:
>>>
>>>> Hi guys,
>>>>
>>>> I upgraded to the latest ovirt-engine-3.6.7.5-1.el6.noarch available
>>>> in the ovirt 3.6 repo, however I seem to be encountering a known bug (
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1387949)
>>>>
>>>
>>> This specific bug seems to be fixed in 4.1 (and was backported to 4.0) -
>>> are you sure it's fixed in 3.6.x?
>>>
>>>
>>
>> I only "suspect" it fixed because the Redhat bug report mentions it was
>> fixed.
>> Is this causing the failure to negotiate SPM, as this is what I'm trying
>> to resolve?
>>
>>
>>
>>> which looks to be fixed in ovirt 3.6.9.2 but I can't seem to find out
>>>> how to install this.
>>>>
>>>> I was hoping it was via http://resources.ovirt.org
>>>> /pub/ovirt-3.6-snapshot but this link is dead.
>>>>
>>>> Is anyone using ovirt 3.6.9 and how does one obtain it?
>>>>
>>>
>>> If you are on 3.6.7, go ahead and upgrade.
>>>
>>
>> Any ideas how? I don't see a repo available for it and can't find
>> packages even when looking through http://resources.ovirt.org manually?
>>
>
> See the upgrade guide @ http://www.ovirt.org/documentation/upgrade-guide/
> upgrade-guide/
>

The issue is that there isn't a repo that contains 3.6.9, by the looks of
things. I'm running 3.6.7, and running engine-upgrade-check says there are
no new packages; 3.6.9 doesn't seem to exist in the 3.6 repo?
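For what it's worth, this is how I've been checking what the repo actually
offers (the repo file names are whatever the ovirt-release36 package dropped
into /etc/yum.repos.d on my engine):

  grep -H baseurl /etc/yum.repos.d/ovirt*.repo
  yum --showduplicates list ovirt-engine | tail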


(don't forget to enable the channels - you can do it first by installing
> http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm )
> Y.
>

I'd like to upgrade to 4 asap, but this involves installing a new Centos 7
machine to migrate my engine currently running on Centos 6.9, which I can't
do just yet.






>
>
>>
>> Y.
>>>
>>>
>>>> The issues I'm facing is, after trying to update my cluster version to
>>>> 3.6, my hosts weren't compatible, as it says they only compatible with
>>>> version 3.3, 3.4 and 3.5 etc. I then upgraded 1 hosts to Centos 6.9 and
>>>> installed and updated the latest vdsm from the 3.6 repo, but this still
>>>> didn't allow me to change my cluster version. I then rolled back the
>>>> cluster version to 3.5.
>>>>
>>>> At the moment because I've upgraded 1 host, I can't select this host as
>>>> SPM and I'm wondering if I can upgrade my remaining hosts, or will this
>>>> prevent any hosts from being my SPM? I'm seeing the following error
>>>> "WARNING Unrecognized protocol: 'SUBSCRI'" on my upgraded host.
>>>>
>>>> I'm wanting to upgrade to the latest 3.6 as well as upgrade all my
>>>> hosts, so that I can start the ovirt 4 upgrade next.
>>>>
>>>> Please could I have some guidance on this?
>>>>
>>>> Thank you.
>>>>
>>>> Regards.
>>>>
>>>> Neil Wilson.
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.6.x and Centos 6.9

2017-08-02 Thread Neil
Hi Yaniv,

Thanks for the assistance.

On Wed, Aug 2, 2017 at 12:01 PM, Yaniv Kaul  wrote:

>
>
> On Wed, Aug 2, 2017 at 12:09 PM, Neil  wrote:
>
>> Hi guys,
>>
>> I upgraded to the latest ovirt-engine-3.6.7.5-1.el6.noarch available in
>> the ovirt 3.6 repo, however I seem to be encountering a known bug (
>> https://bugzilla.redhat.com/show_bug.cgi?id=1387949)
>>
>
> This specific bug seems to be fixed in 4.1 (and was backported to 4.0) -
> are you sure it's fixed in 3.6.x?
>
>

I only "suspect" it fixed because the Redhat bug report mentions it was
fixed.
Is this causing the failure to negotiate SPM, as this is what I'm trying to
resolve?



> which looks to be fixed in ovirt 3.6.9.2 but I can't seem to find out how
>> to install this.
>>
>> I was hoping it was via http://resources.ovirt.org/pub/ovirt-3.6-snapshot
>> but this link is dead.
>>
>> Is anyone using ovirt 3.6.9 and how does one obtain it?
>>
>
> If you are on 3.6.7, go ahead and upgrade.
>

Any ideas how? I don't see a repo available for it and can't find packages
even when looking through http://resources.ovirt.org manually?


Y.
>
>
>> The issues I'm facing is, after trying to update my cluster version to
>> 3.6, my hosts weren't compatible, as it says they only compatible with
>> version 3.3, 3.4 and 3.5 etc. I then upgraded 1 hosts to Centos 6.9 and
>> installed and updated the latest vdsm from the 3.6 repo, but this still
>> didn't allow me to change my cluster version. I then rolled back the
>> cluster version to 3.5.
>>
>> At the moment because I've upgraded 1 host, I can't select this host as
>> SPM and I'm wondering if I can upgrade my remaining hosts, or will this
>> prevent any hosts from being my SPM? I'm seeing the following error
>> "WARNING Unrecognized protocol: 'SUBSCRI'" on my upgraded host.
>>
>> I'm wanting to upgrade to the latest 3.6 as well as upgrade all my hosts,
>> so that I can start the ovirt 4 upgrade next.
>>
>> Please could I have some guidance on this?
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.6.x and Centos 6.9

2017-08-02 Thread Neil
Hi guys,

I upgraded to the latest ovirt-engine-3.6.7.5-1.el6.noarch available in the
ovirt 3.6 repo, however I seem to be encountering a known bug (
https://bugzilla.redhat.com/show_bug.cgi?id=1387949)  which looks to be
fixed in ovirt 3.6.9.2 but I can't seem to find out how to install this.

I was hoping it was via http://resources.ovirt.org/pub/ovirt-3.6-snapshot
but this link is dead.

Is anyone using ovirt 3.6.9 and how does one obtain it?

The issue I'm facing is that, after trying to update my cluster version to
3.6, my hosts weren't compatible, as it says they are only compatible with
versions 3.3, 3.4 and 3.5 etc. I then upgraded 1 host to CentOS 6.9 and
installed and updated the latest vdsm from the 3.6 repo, but this still
didn't allow me to change my cluster version. I then rolled back the cluster
version to 3.5.

At the moment because I've upgraded 1 host, I can't select this host as SPM
and I'm wondering if I can upgrade my remaining hosts, or will this prevent
any hosts from being my SPM? I'm seeing the following error "WARNING
Unrecognized protocol: 'SUBSCRI'" on my upgraded host.

I'm wanting to upgrade to the latest 3.6 as well as upgrade all my hosts,
so that I can start the ovirt 4 upgrade next.

Please could I have some guidance on this?

Thank you.

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM Command failed: Heartbeat Exceeded

2017-07-31 Thread Neil
Hi guys,

Sorry to repost but I'm rather desperate here.

Thanks

Regards.

Neil Wilson.

On 31 Jul 2017 16:51, "Neil"  wrote:

> Hi guys,
>
> Please could someone assist me, my DC seems to be trying to re-negotiate
> SPM and apparently it's failing. I tried to delete an old autogenerated
> snapshot and shortly after that the issue seemed to start, however after
> about an hour, the snapshot said successfully deleted, and then SPM
> negotiated again albeit for a short period before it started trying to
> re-negotiate again.
>
> Last week I upgraded from ovirt 3.5 to 3.6, I also upgraded one of my 4
> hosts using the 3.6 repo to the latest available from that repo and did a
> yum update too.
>
> I have 4 nodes and my ovirt engine is a KVM guest on another physical
> machine on the network. I'm using an FC SAN with ATTO HBA's and recently
> we've started seeing some degraded IO. The SAN appears to be alright and
> the disks all seem to check out, but we are having rather slow IOPS at the
> moment, which we are trying to track down.
>
> ovirt engine CentOS release 6.9 (Final)
> ebay-cors-filter-1.0.1-0.1.ovirt.el6.noarch
> ovirt-engine-3.6.7.5-1.el6.noarch
> ovirt-engine-backend-3.6.7.5-1.el6.noarch
> ovirt-engine-cli-3.6.2.0-1.el6.noarch
> ovirt-engine-dbscripts-3.6.7.5-1.el6.noarch
> ovirt-engine-extension-aaa-jdbc-1.0.7-1.el6.noarch
> ovirt-engine-extensions-api-impl-3.6.7.5-1.el6.noarch
> ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
> ovirt-engine-lib-3.6.7.5-1.el6.noarch
> ovirt-engine-restapi-3.6.7.5-1.el6.noarch
> ovirt-engine-sdk-python-3.6.7.0-1.el6.noarch
> ovirt-engine-setup-3.6.7.5-1.el6.noarch
> ovirt-engine-setup-base-3.6.7.5-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.6.7.5-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.7.5-1.el6.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.7.5-1.el6.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.6.7.5-1.el6.noarch
> ovirt-engine-tools-3.6.7.5-1.el6.noarch
> ovirt-engine-tools-backup-3.6.7.5-1.el6.noarch
> ovirt-engine-userportal-3.6.7.5-1.el6.noarch
> ovirt-engine-vmconsole-proxy-helper-3.6.7.5-1.el6.noarch
> ovirt-engine-webadmin-portal-3.6.7.5-1.el6.noarch
> ovirt-engine-websocket-proxy-3.6.7.5-1.el6.noarch
> ovirt-engine-wildfly-8.2.1-1.el6.x86_64
> ovirt-engine-wildfly-overlay-8.0.5-1.el6.noarch
> ovirt-host-deploy-1.4.1-1.el6.noarch
> ovirt-host-deploy-java-1.4.1-1.el6.noarch
> ovirt-image-uploader-3.6.0-1.el6.noarch
> ovirt-iso-uploader-3.6.0-1.el6.noarch
> ovirt-release34-1.0.3-1.noarch
> ovirt-release35-006-1.noarch
> ovirt-release36-3.6.7-1.noarch
> ovirt-setup-lib-1.0.1-1.el6.noarch
> ovirt-vmconsole-1.0.2-1.el6.noarch
> ovirt-vmconsole-proxy-1.0.2-1.el6.noarch
>
> node01 (CentOS 6.9)
> vdsm-4.16.30-0.el6.x86_64
> vdsm-cli-4.16.30-0.el6.noarch
> vdsm-jsonrpc-4.16.30-0.el6.noarch
> vdsm-python-4.16.30-0.el6.noarch
> vdsm-python-zombiereaper-4.16.30-0.el6.noarch
> vdsm-xmlrpc-4.16.30-0.el6.noarch
> vdsm-yajsonrpc-4.16.30-0.el6.noarch
> gpxe-roms-qemu-0.9.7-6.16.el6.noarch
> qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
> libvirt-0.10.2-62.el6.x86_64
> libvirt-client-0.10.2-62.el6.x86_64
> libvirt-lock-sanlock-0.10.2-62.el6.x86_64
> libvirt-python-0.10.2-62.el6.x86_64
> node01 was upgraded out of desperation after I tried changing my DC and
> cluster version to 3.6, but then found that none of my hosts could be
> activated out of maintenance due to an incompatibility with 3.6 (I'm still
> not sure why, as searching seemed to indicate CentOS 6.x was compatible). I
> then had to remove all 4 hosts, and change the cluster version back to 3.5
> and then re-add them. When I tried changing the cluster version to 3.6 I
> did get a complaint about using the "legacy protocol", so on each host under
> Advanced, I changed them to use the JSON protocol, and this seemed to
> resolve it; however, once I changed the DC/Cluster back to 3.5, the option
> to change the protocol back to Legacy is no longer shown.
>
> node02 (Centos 6.7)
> vdsm-4.16.30-0.el6.x86_64
> vdsm-cli-4.16.30-0.el6.noarch
> vdsm-jsonrpc-4.16.30-0.el6.noarch
> vdsm-python-4.16.30-0.el6.noarch
> vdsm-python-zombiereaper-4.16.30-0.el6.noarch
> vdsm-xmlrpc-4.16.30-0.el6.noarch
> vdsm-yajsonrpc-4.16.30-0.el6.noarch
> gpxe-roms-qemu-0.9.7-6.14.el6.noarch
> qemu-img-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
> qemu-kvm-rhev-tools-0.12.1.2-2.479.el6_7.2.x86_64
> libvirt-0.10.2-54.el6_7.6.x86_64
> libvirt-client-0.10.2-54.el6_7.6.x86_64
> libvirt-lock-sanlock-0.10.2-54.el6_7.6.x86_64
> libvi

Re: [ovirt-users] recover dom_md

2016-10-18 Thread Alastair Neil
I was able to reconstruct the dom_md/metadata file except for the
_sha_cksum line, so I guess I'll try to follow the sanlock direct init
approach and then reattach the domain.
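For the archives, the re-init I have in mind looks roughly like this; the
storage domain UUID and the path to the ids file are placeholders for my
setup, and I haven't verified the offset or ownership details yet, so treat
it as a sketch rather than a recipe:

  # SD_UUID and IDS_PATH are placeholders for the affected storage domain
  touch "$IDS_PATH"
  sanlock direct init -s "$SD_UUID":0:"$IDS_PATH":0
  chown vdsm:kvm "$IDS_PATH"; chmod 0660 "$IDS_PATH"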

On 18 October 2016 at 14:00, Alastair Neil  wrote:

> I have an oVirt 4 cluster with two gluster storage domains.  The old
> domain is on a 1G network and the new one is on 10G.  While migrating disks
> I accidentally removed the dom_md directory in the new storage domain.
>
> Is there a process to recreate this directory?  I have moved about 28 disk
> images; the domain is marked as down but the VMs with disks in there are
> still up.
>
> I have seen examples of rebuilding the dom_md/ids file using sanlock
> direct init, but I do not know if it is possible to rebuild the entire
> directory this way, and before I commit myself to shutting down all the
> hosts I'd like to make sure there is a good chance of success.
>
> If this is not possible, is there a way of importing the disk images?  I
> tried copying them to the export domain but they do not show up in the
> imports.
>
>
> -Alastair
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] recover dom_md

2016-10-18 Thread Alastair Neil
I have an oVirt 4 cluster with two gluster storage domains.  The old
domain is on a 1G network and the new one is on 10G.  While migrating disks
I accidentally removed the dom_md directory in the new storage domain.

Is there a process to recreate this directory?  I have moved about 28 disk
images; the domain is marked as down but the VMs with disks in there are
still up.

I have seen examples of rebuilding the dom_md/ids file using sanlock direct
init, but I do not know if it is possible to rebuild the entire directory
this way, and before I commit myself to shutting down all the hosts I'd
like to make sure there is a good chance of success.

If this is not possible, is there a way of importing the disk images?  I
tried copying them to the export domain but they do not show up in the
imports.


-Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Multiple FC SAN's and Hosts LSM etc

2016-07-15 Thread Neil
Hi guys,

I'm soon going to have the following equipment, and I'd like to find the
best way to utilise it:

1.) One NEW FC SAN, and 3 new Dell Hosts with FC cards.

2.) One OLD FC SAN with 2 older HP hosts with FC cards. (old VMWare
environment)

3.) Another OLDER FC SAN with 2 older HP Hosts with FC cards. (old VMWare
environment)

4.) I have an existing oVirt 3.5 DR cluster with two hosts and NFS storage
that is currently in use and works well.

Each of the above SANs will only have FC ports to connect to their
existing hosts, so not all hosts will be connected to all SANs. All hosts
would be on the same CentOS 7.x release etc.

All existing VMs are going to be moved to option 1 via a baremetal
restore from backup onto a NEW oVirt platform. Once installed I'd then like
to re-commission 2 and 3 above to make use of the old hardware and SANs as
secondary or possibly a "new" DR platform to replace or improve on option 4.

Bearing in mind the older hardware will be different CPU generations, would
it be best to add the older hosts and SANs as new clusters within the same
NEW oVirt installation? Or should I rather just keep 2, 3 and 4 as separate
oVirt installations?

I know in the past live migration wouldn't work with different CPU
generations, and of course my SANs won't be physically connected to each
of the hosts.

In order to move VMs between 1, 2 and 3 would I need to shut the VM down
and export and import, or is there another way to do this?

Could LSM work across all three SANs and hosts?

I know I can do a baremetal restore from backup directly onto either 1, 2
or 3 if needed, but I'd like to try to tie all of this into one platform if
there is good reason to do so. Any thoughts, suggestions or ideas here?

Any guidance is greatly appreciated.

Thank you

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.0 /patternfly contrast?

2016-07-08 Thread Alastair Neil
Absolutely +1

On 7 July 2016 at 22:25, SGhosh  wrote:

> Hi
>
> Running ovirt 4.0 on CentOS 7 - and the gui color contrast seems to be off.
>
> I am seeing very low readability with the white text on light blue
> selection bar (attached).
>
> Any tweaks?
>
> -subhendu
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Decommission Master Storage Domain

2016-06-20 Thread Neil
Good day Nicolas,

Thank you very much for your reply.

I have followed your instructions and it's worked perfectly.

Much appreciated.

Regards.

Neil Wilson.

On Thu, Jun 16, 2016 at 8:43 AM, Nicolas Ecarnot 
wrote:

> Le 15/06/2016 14:07, Neil a écrit :
>
>> Hi guys,
>>
>> I've searched around a little but don't see much on how to do this.
>>
>> I'm running ovirt-engine-3.5.6.2-1.el6.noarch on Centos 6.x
>>
>> I have 4 x Centos 6.x hosts connected to an FC SAN with two different
>> RAID arrays configured on it, one new RAID and one old RAID.
>> The new RAID is shared as a new FC storage domain, the old RAID as my
>> old Master storage domain.
>>
>> I have moved all VM's using LSM to the new storage domain and I would
>> like to remove my old storage domain now, so that the old physical hard
>> disks can be removed out of my SAN.
>>
>> If I go to "Disks" on the old storage domain I only see two disks, named
>> OVF_store etc., so it looks like it's ready to be decommissioned.
>>
>> How can I promote my new domain to the master and remove/destroy my old
>> master domain and can it all be done without any VM downtime?
>>
>
> Hi Neil,
>
> - When you're sure no VM disk remains on the old SD :
> - Select the old storage domain
> - in the 'Data center' tab at the bottom of the GUI, use the 'Maintenance'
> button
> - the Master role will automagically switch to the remaining storage
> domain (it can take some seconds)
> - once there, you're free to detach then destroy the old storage domain.
>
> HTH
>
> --
> Nicolas Ecarnot
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
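
The same maintenance step can also be scripted against the engine's REST API
if the GUI is inconvenient. A rough sketch only: the engine URL, credentials
and UUIDs below are placeholders, and the exact paths should be checked
against the API documentation for your version:

  curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
       -d '<action/>' \
       'https://engine.example.com/ovirt-engine/api/datacenters/<dc_id>/storagedomains/<sd_id>/deactivate'

Once the master role has moved, the old domain can be detached and removed
through the same API or the GUI.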


[ovirt-users] Decommission Master Storage Domain

2016-06-15 Thread Neil
Hi guys,

I've searched around a little but don't see much on how to do this.

I'm running ovirt-engine-3.5.6.2-1.el6.noarch on Centos 6.x

I have 4 x Centos 6.x hosts connected to an FC SAN with two different RAID
arrays configured on it, one new RAID and one old RAID.
The new RAID is shared as a new FC storage domain, the old RAID as my old
Master storage domain.

I have moved all VM's using LSM to the new storage domain and I would like
to remove my old storage domain now, so that the old physical hard disks
can be removed out of my SAN.

If I go to "Disks" on the old storage domain I only see two disks, named
OVF_store etc., so it looks like it's ready to be decommissioned.

How can I promote my new domain to the master and remove/destroy my old
master domain and can it all be done without any VM downtime?

Any help is greatly appreciated.

Thanks!

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] auto-remove snapshot created in live storage migration?

2016-04-01 Thread Alastair Neil
good to know - thanks.


On 31 March 2016 at 11:05, Nir Soffer  wrote:

> On Thu, Mar 31, 2016 at 12:03 AM, Alastair Neil 
> wrote:
> > Is it planned to allow the snapshots that are created during a live
> storage
> > migration to be automatically deleted once the migration has completed?
> It
> > is easy to forget about them and end up with large snapshots.
>
> Yes, it is planned for 4.
>
> Kevin, can you share the bug number for this?
>
> Nir
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] auto-remove snapshot created in live storage migration?

2016-03-30 Thread Alastair Neil
Is it planned to allow the snapshots that are created during a live storage
migration to be automatically deleted once the migration has completed?  It
is easy to forget about them and end up with large snapshots.

-Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt and CAS SSO

2016-03-14 Thread Alastair Neil
On 11 March 2016 at 11:55, Martin Perina  wrote:

> Hi,
>
> I'm glad to hear that you were able to successfully configure aaa-misc
> and mod_auth_cas to allow CAS based login for oVirt.
>
> Unfortunately regarding CAS authorization for oVirt I have somewhat bad
> news for you. But let me explain the issue a bit:
>
> 1. Using aaa-misc we are able to pass only user name of the authenticated
>user from apache to ovirt.
>
> 2. After that we have authenticated user on oVirt and then we pass
>its username to authz extension to fetch full principal record including
>group memberships. At the moment we don't pass anything else to authz
>extension, just principal name (username).
>
> So here are options how to enable CAS authorization for oVirt:
>
> 1. Implement new authz extension which will fetch principal record for CAS
>server (if this is possible, I don't know much about CAS)
>
> 2. Or implement new authn/authz extensions specific to CAS which will use
>CAS API do both authn and authz.
>
> 3. Use LDAP as a backend for you CAS server (if possible) and configure
>authz part using ovirt-engine-extension-aaa-ldap
>
> 4. You could also create an RFE bug on oVirt to add CAS support, but
>no promises from me :-) you are the first user asking about CAS support
>


Err, no: I asked about it roughly 18 months ago on this very list and got no
response.  So in a way they are the first to ask and actually get a
response.





>
> And of course feel free to ask!
>
> Regards
>
> Martin Perina
>
> [1] http://machacekondra.blogspot.cz/
> [2] https://www.youtube.com/watch?v=bSbdqmRNLi0
> [3]
> http://www.slideshare.net/MartinPeina/the-new-ovirt-extension-api-taking-aaa-authentication-authorization-accounting-to-the-next-level
> [4] https://www.youtube.com/watch?v=9b9WVFsy_yg
> [5]
> http://www.slideshare.net/MartinPeina/ovirt-extension-api-the-first-step-for-fully-modular-ovirt
> [6] https://github.com/oVirt/ovirt-engine-extension-aaa-ldap
> [7] https://github.com/oVirt/ovirt-engine-extension-aaa-misc
> [8] https://github.com/oVirt/ovirt-engine-extension-aaa-jdbc
>
> - Original Message -
> > From: "Fabrice Bacchella" 
> > To: Users@ovirt.org
> > Sent: Tuesday, March 8, 2016 11:54:13 AM
> > Subject: [ovirt-users] ovirt and CAS SSO
> >
> > I'm trying to add CAS SSO to ovirt.
> >
> > For authn (authentication),
> > org.ovirt.engineextensions.aaa.misc.http.AuthnExtension is OK, I put
> jboss
> > behind an Apache with mod_auth_cas.
> >
> > Now I'm fighting with authz (authorization). CAS provides everything
> needed
> > as header. So I don't need ldap or jdbc extensions. Is there anything
> done
> > about that or do I need to write my own extension ? Is there some
> > documentation about that ?
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Mixing CPU types

2016-01-27 Thread Neil
Hi Rene,

Thank you very much for coming back to me.

That's perfect then and answers my questions exactly.

Much appreciated.

Regards.

Neil Wilson.







On Wed, Jan 27, 2016 at 1:57 PM, René Koch  wrote:

> Hi Neil,
>
> You can mix cpu types (but not AMD and Intel) if you leave the cluster
> level at the lowest cpu level.
> I personally don't mix cpu levels if possible, but instead create separate
> clusters for each cpu type in order to be able to use the newest cpu
> features...
>
>
> Regards,
> René
>
>
> On 01/27/2016 12:53 PM, Neil wrote:
>
> Hi guys,
>
> I currently have an oVirt 3.5 cluster with Sandy Bridge Xeon CPUs, and I
> need to add a new host for more RAM and vCPUs; however, the new Xeon E5
> CPUs are Haswell based.
>
> Can I mix CPU types (Haswell and Sandy Bridge) in my cluster and will I be
> able to migrate between my hosts?
>
> I'm guessing that for this to work I'll need to leave my "CPU type" set to
> Sandy Bridge; will the Haswell-based CPU be compatible with my cluster, but
> just run without any Haswell-specific features?
>
> Apologies if this is a dumb question or if it's been answered before.
>
> Thank you.
>
> Regards.
>
> Neil Wilson
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
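
A quick way to sanity-check what each host actually exposes before picking
the cluster CPU type is to compare the CPU model on both boxes; both commands
below are read-only:

  grep -m1 'model name' /proc/cpuinfo
  virsh -r capabilities | grep -o '<model>[^<]*</model>' | head -n1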


[ovirt-users] Mixing CPU types

2016-01-27 Thread Neil
Hi guys,

I currently have an oVirt 3.5 cluster with Sandy Bridge Xeon CPUs, and I
need to add a new host for more RAM and vCPUs; however, the new Xeon E5
CPUs are Haswell based.

Can I mix CPU types (Haswell and Sandy Bridge) in my cluster and will I be
able to migrate between my hosts?

I'm guessing that for this to work I'll need to leave my "CPU type" set to
Sandy Bridge; will the Haswell-based CPU be compatible with my cluster, but
just run without any Haswell-specific features?

Apologies if this is a dumb question or if it's been answered before.

Thank you.

Regards.

Neil Wilson
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] migrate to Hosted engine set up using ovirt node 3.6

2015-11-11 Thread Alastair Neil
On 11 November 2015 at 12:52, Simone Tiraboschi  wrote:

>
>
> On Wed, Nov 11, 2015 at 6:01 PM, Alastair Neil 
> wrote:
>
>> Hi
>>
>> I am in the process of upgrading my ovirt DC to 3.6.  I would like to
>> migrate to hosted-engine with gluster replica 3 storage in the process.
>> The engine has been upgraded and I have installed 3 of the VM hosts in one
>> cluster  using the ovirt-node iso.  I have three other VM hosts in a
>> separate cluster to upgrade.
>>
>> I see an option to configure the hosted engine through the node admin
>> login tui, however I have not found any up to date instructions on how to
>> perform a migration to hosted using the ovirt-node.  I thought I'd ask a
>> few questions before I started:
>>
>> Should I connect and approve the node in the current engine prior to
>> configuring the hosted engine?
>> Does the node hosted-engine-setup provide a pause to restore the engine
>> db from the external engine? Or even better a facility to upload the
>> database backup file?
>>
>
> http://www.ovirt.org/Migrate_to_Hosted_Engine
>

Yes, I am familiar with these instructions; however, they are quite old and
have not been updated to include any information about how to perform this
in the case of using the 3.6 ovirt-node image as the host and the
ovirt-live installation image as the VM installation.


>
> hosted-engine-setup has to be run on the host.
> engine-setup has to be run on the engine VM.
> The DB restore from you previous setup has to be performed on the engine
> VM. If you are using the ready to use oVirt engine appliance and you choose
> to automatically execute engine-setup on your VM it will not wait for you
> to replace the engine DB so you have just to avoid that and manually
> execute engine-setup on the engine VM.
>

This is the crux: whether hosted-engine-setup using the live image allows
you to pause the setup and do a manual engine-setup. It seems like quite a
common thing for people to want to do, so I thought I'd check.  If it has
not been attempted and/or documented then I will try it.


> In this way you have all the time to perform your import.
>
>
>
>>
>> Any feedback gratefully received.
>>
>> Thanks, Alastair
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] migrate to Hosted engine set up using ovirt node 3.6

2015-11-11 Thread Alastair Neil
Hi

I am in the process of upgrading my ovirt DC to 3.6.  I would like to
migrate to hosted-engine with gluster replica 3 storage in the process.
The engine has been upgraded and I have installed 3 of the VM hosts in one
cluster  using the ovirt-node iso.  I have three other VM hosts in a
separate cluster to upgrade.

I see an option to configure the hosted engine through the node admin login
tui, however I have not found any up to date instructions on how to perform
a migration to hosted using the ovirt-node.  I thought I'd ask a few
questions before I started:

Should I connect and approve the node in the current engine prior to
configuring the hosted engine?
Does the node hosted-engine-setup provide a pause to restore the engine db
from the external engine? Or even better a facility to upload the database
backup file?

Any feedback gratefully received.

Thanks, Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Corrupted VM's

2015-10-06 Thread Neil
Hi Nir,

Thank you for coming back to me. I see in the ovirt-engine log that one VM
said it ran out of space; do you think perhaps the SAN itself was
over-allocated somehow? Could this cause the issue shown in the logs?

In order to restore I had to delete some old VMs, so this would have freed
up lots of space and in doing so perhaps resolved the problem?

Just a thought really.

Thank you.

Regards.

Neil Wilson.
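
On the over-allocation question, the quickest checks on a host attached to
that FC storage domain are along these lines (for block domains the VG is
normally named after the storage domain UUID; treat this as a rough sketch):

  vgs                         # the VFree column shows how much of the domain VG is still unallocated
  lvs <storage_domain_vg>     # per-image LV sizes
  sanlock client status       # current lockspaces and whether lease renewals look healthy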



On Tue, Oct 6, 2015 at 2:24 PM, Nir Soffer  wrote:

> On Tue, Oct 6, 2015 at 10:18 AM, Neil  wrote:
>
>> Hi guys,
>>
>> I had a strange issue on the 3rd of September and I've only got round to
>> checking what caused it now. Basically about 4 or 5 Windows Server VMs got
>> completely corrupted. When I pressed start I'd just get a blank screen and
>> nothing would display; I tried various things, but no matter what, I
>> wouldn't even get the SeaBIOS display showing the VM was even POSTing.
>> The remaining 10 VM's were fine, it was just these 4 or 5 that got
>> corrupted and to recover I had to do a full DR restore of the VM's.
>>
>> I'm concerned that the issue might appear again, which is why I'm mailing
>> the list now, does anyone have any clues as to what might have caused this?
>> All logs on the FC SAN were fine and all hosts appeared normal...
>>
>> The following are my versions...
>>
>> CentOS release 6.5 (Final)
>> ovirt-release34-1.0.3-1.noarch
>> ovirt-host-deploy-1.2.3-1.el6.noarch
>> ovirt-engine-lib-3.4.4-1.el6.noarch
>> ovirt-iso-uploader-3.4.4-1.el6.noarch
>> ovirt-engine-cli-3.4.0.5-1.el6.noarch
>> ovirt-engine-setup-base-3.4.4-1.el6.noarch
>> ovirt-engine-websocket-proxy-3.4.4-1.el6.noarch
>> ovirt-engine-backend-3.4.4-1.el6.noarch
>> ovirt-engine-tools-3.4.4-1.el6.noarch
>> ovirt-engine-dbscripts-3.4.4-1.el6.noarch
>> ovirt-engine-3.4.4-1.el6.noarch
>> ovirt-engine-setup-3.4.4-1.el6.noarch
>> ovirt-engine-sdk-python-3.4.4.0-1.el6.noarch
>> ovirt-image-uploader-3.4.3-1.el6.noarch
>> ovirt-host-deploy-java-1.2.3-1.el6.noarch
>> ovirt-engine-setup-plugin-websocket-proxy-3.4.4-1.el6.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.4-1.el6.noarch
>> ovirt-engine-restapi-3.4.4-1.el6.noarch
>> ovirt-engine-userportal-3.4.4-1.el6.noarch
>> ovirt-engine-webadmin-portal-3.4.4-1.el6.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-3.4.4-1.el6.noarch
>>
>> CentOS release 6.5 (Final)
>> vdsm-python-zombiereaper-4.14.11.2-0.el6.noarch
>> vdsm-cli-4.14.11.2-0.el6.noarch
>> vdsm-python-4.14.11.2-0.el6.x86_64
>> vdsm-4.14.11.2-0.el6.x86_64
>> vdsm-xmlrpc-4.14.11.2-0.el6.noarch
>>
>> Below are the sanlock.logs from two of my hosts and attached is my
>> ovirt-engine.log from the date of the issue...
>>
>> Node02
>> 2015-09-03 10:34:53+0200 33184492 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 ioto 10 to_count 7
>> 2015-09-03 10:34:53+0200 33184492 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
>> 2015-09-03 10:34:53+0200 33184492 [7369]: s1 renewal error -202
>> delta_length 10 last_success 33184461
>> 2015-09-03 10:35:04+0200 33184503 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd7910:0x7fbd7920:0x7fbd7feff000 ioto 10 to_count 8
>> 2015-09-03 10:35:04+0200 33184503 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
>> 2015-09-03 10:35:04+0200 33184503 [7369]: s1 renewal error -202
>> delta_length 11 last_success 33184461
>> 2015-09-03 10:35:05+0200 33184504 [7369]: 0e6991ae aio collect 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 result 1048576:0 other free r
>> 2015-09-03 10:35:05+0200 33184504 [7369]: 0e6991ae aio collect 0
>> 0x7fbd7910:0x7fbd7920:0x7fbd7feff000 result 1048576:0 match reap
>> 2015-09-03 11:03:00+0200 33186178 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 ioto 10 to_count 9
>> 2015-09-03 11:03:00+0200 33186178 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae-6238-4c61-96d2-ca8fed35161e/ids
>> 2015-09-03 11:03:00+0200 33186178 [7369]: s1 renewal error -202
>> delta_length 10 last_success 33186147
>> 2015-09-03 11:03:07+0200 33186185 [7369]: 0e6991ae aio collect 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd7feff000 result 1048576:0 other free
>> 2015-09-03 11:10:18+0200 33186616 [7369]: 0e6991ae aio timeout 0
>> 0x7fbd78c0:0x7fbd78d0:0x7fbd9094b000 ioto 10 to_count 10
>> 2015-09-03 11:10:18+0200 33186616 [7369]: s1 delta_renew read rv -202
>> offset 0 /dev/0e6991ae

Re: [ovirt-users] Not able to resume a VM which was paused because of gluster quorum issue

2015-09-22 Thread Alastair Neil
You need to set the gluster.server-quorum-ratio to 51%
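
Concretely, something like the following, where "vmstore" stands in for the
actual volume name; the ratio itself is a cluster-wide option, hence the
"all" in the set command:

  gluster volume set all cluster.server-quorum-ratio 51%
  gluster volume info vmstore | grep quorum   # confirm the reconfigured quorum options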

On 22 September 2015 at 08:25, Ramesh Nachimuthu 
wrote:

>
>
> On 09/22/2015 05:43 PM, Alastair Neil wrote:
>
> what are the gluster-quorum-type and gluster.server-quorum-ratio  settings
> on the volume?
>
>
> *cluster.server-quorum-type*:server
> *cluster.quorum-type*:auto
> *gluster.server-quorum-ratio is not set.*
>
> One brick process is purposefully killed  but remaining two bricks are up
> and running.
>
> Regards,
> Ramesh
>
> On 22 September 2015 at 06:24, Ramesh Nachimuthu 
> wrote:
>
>> Hi,
>>
>>I am not able to resume a VM which was paused because of gluster
>> client quorum issue. Here is what happened in my setup.
>>
>> 1. Created a gluster storage domain which is backed by gluster volume
>> with replica 3.
>> 2. Killed one brick process. So only two bricks are running in replica 3
>> setup.
>> 3. Created two VMs
>> 4. Started some IO using fio on both of the VMs
>> 5. After some time got the following error in gluster mount and VMs moved
>> to paused state.
>>  " server 10.70.45.17:49217 has not responded in the last 42
>> seconds, disconnecting."
>>   "vmstore-replicate-0: e16d1e40-2b6e-4f19-977d-e099f465dfc6:
>> Failing WRITE as quorum is not met"
>>   more gluster mount logs at http://pastebin.com/UmiUQq0F
>> 6. After some time gluster quorum is active and I am able to write the
>> the gluster file system.
>> 7. When I try to resume the VM it doesn't work and I got following error
>> in vdsm log.
>>   http://pastebin.com/aXiamY15
>>
>>
>> Regards,
>> Ramesh
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not able to resume a VM which was paused because of gluster quorum issue

2015-09-22 Thread Alastair Neil
what are the gluster-quorum-type and gluster.server-quorum-ratio  settings
on the volume?

On 22 September 2015 at 06:24, Ramesh Nachimuthu 
wrote:

> Hi,
>
>I am not able to resume a VM which was paused because of gluster client
> quorum issue. Here is what happened in my setup.
>
> 1. Created a gluster storage domain which is backed by gluster volume with
> replica 3.
> 2. Killed one brick process. So only two bricks are running in replica 3
> setup.
> 3. Created two VMs
> 4. Started some IO using fio on both of the VMs
> 5. After some time got the following error in gluster mount and VMs moved
> to paused state.
>  " server 10.70.45.17:49217 has not responded in the last 42
> seconds, disconnecting."
>   "vmstore-replicate-0: e16d1e40-2b6e-4f19-977d-e099f465dfc6: Failing
> WRITE as quorum is not met"
>   more gluster mount logs at http://pastebin.com/UmiUQq0F
> 6. After some time gluster quorum is active and I am able to write the the
> gluster file system.
> 7. When I try to resume the VM it doesn't work and I got following error
> in vdsm log.
>   http://pastebin.com/aXiamY15
>
>
> Regards,
> Ramesh
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving storage away from a single point of failure

2015-09-22 Thread Alastair Neil
My own experience with gluster for VMs is that it is just fine until you
need to bring down a node and need the VM's to be live.  I have a replica 3
gluster server and, while the VMs are fine while the node is down, when it
is brought back up, gluster attempts to heal the files on the downed node
and the ensuing i/o freezes the VM's until the heal is complete, and with
many VM's on a storage volume that can take hours.  I have migrated all my
critical VMs back onto NFS.  There are changes coming soon in gluster that
will hopefully mitigate this (better granularity in the data heals, I/O
throttling during heals etc.), but for now I am keeping most of my VMs on
NFS.

The alternative is to set the quorum so that the VM volume goes read only
when a node goes down.  This may seem mad, but at least your VMs are frozen
only while a node is down and not for hours afterwards.



On 22 September 2015 at 05:32, Daniel Helgenberger <
daniel.helgenber...@m-box.de> wrote:

>
>
> On 18.09.2015 23:04, Robert Story wrote:
> > Hi,
>
> Hello Robert,
>
> >
> > I'm running oVirt 3.5 in our lab, and currently I'm using NFS to a single
> > server. I'd like to move away from having a single point of failure.
>
> In this case have a look at iSCSI or FC storage. If you have redundant
> contollers and switches
> the setup should be reliable enough?
>
> > Watching the mailing list, all the issues with gluster getting out of
> sync
> > and replica issues has me nervous about gluster, plus I just have 2
> > machines with lots of drive bays for storage.
>
> Still, I would stick to gluster if you want a replicated storage:
>  - It is supported out of the box and you get active support from lots of
> users here
>  - Replica3 will solve most out of sync cases
>  - I dare say other replicated storage backends do suffer from the same
> issues, this is by design.
>
> Two things you should keep in mind when running gluster in production:
>  - Do not run compute and storage on the same hosts
>  - Do not (yet) use Gluster as storage for Hosted Engine
>
> > I've been reading about GFS2
> > and DRBD, and wanted opinions on if either is a good/bad idea, or to see
> if
> > there are other alternatives.
> >
> > My oVirt setup is currently 5 nodes and about 25 VMs, might double in
> size
> > eventually, but probably won't get much bigger than that.
>
> In the end, it is quite easy to migrate storage domains. If you are
> satisfied with your lab
> setup, put it in production and add storage later and move the disks.
> Afterwards, remove old
> storage domains.
>
> My to cent with gluster: It runs quite stable since some time now if you
> do not touch it.
> I never had issues when adding bricks, though removing and replacing them
> can be very tricky.
>
> HTH,
>
> >
> >
> > Thanks,
> >
> > Robert
> >
>
> --
> Daniel Helgenberger
> m box bewegtbild GmbH
>
> P: +49/30/2408781-22
> F: +49/30/2408781-10
>
> ACKERSTR. 19
> D-10115 BERLIN
>
>
> www.m-box.de  www.monkeymen.tv
>
> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
> Handeslregister: Amtsgericht Charlottenburg / HRB 112767
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
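
For reference, the replica 3 layout being recommended above is usually
created roughly as follows, assuming three hosts that each have an XFS brick
mounted under /export/brick1 (all names here are placeholders):

  gluster volume create vmstore replica 3 \
      host1:/export/brick1/vmstore host2:/export/brick1/vmstore host3:/export/brick1/vmstore
  gluster volume set vmstore group virt            # applies the virt tuning profile shipped with gluster
  gluster volume set vmstore storage.owner-uid 36  # vdsm
  gluster volume set vmstore storage.owner-gid 36  # kvm
  gluster volume start vmstore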


Re: [ovirt-users] self-hosted-engine Failed to establish session with host

2015-08-04 Thread Neil
Sorry to be replying to my own post, but I'm not sure whether, by adding the
host manually, I've perhaps messed something up now.

How do I go about adding my default NFS storage domain? No storage
domain exists currently, despite my hosted_engine being a VM already running
on my default domain. In fact when I look at my hosted_engine and go to
Disks it shows there is no disk at all.

Thanks!

Regards.

Neil Wilson.



On Tue, Aug 4, 2015 at 3:27 PM, Neil  wrote:

> Hi guys,
>
> I've gone through the oVirt admin front end as a test and I see I've been
> able to add the host to the Default datacenter manually. Not sure if
> perhaps this worked manually because it used root and I specified the
> password perhaps instead?
>
> Would be interesting to find out why this failed the automatic addition
> though.
>
> Thank you!
>
> Regards.
>
> Neil Wilson.
>
>
>
>
> On Tue, Aug 4, 2015 at 3:24 PM, Neil  wrote:
>
>> Hi guys,
>>
>> I initially installed an AllinOne oVirt installation, but realised half
>> way through that I actually need a self-hosted-engine instead, so I did an
>> engine-cleanup and removed the database etc then ran...
>>
>> yum install ovirt-hosted-engine-setup
>> hosted-engine --deploy
>>
>> I installed Centos 6.4 minimal on the VM then did a full yum update to
>> 6.6 and restarted as well as disabled selinux.
>>
>> Everything went through perfectly and my engine is up and running,
>> however my installation suddenly terminated when trying to add the host to
>> the Default datacenter as follows
>>
>> [ INFO  ] Engine replied: DB Up!Welcome to Health Status!
>>   Enter the name of the cluster to which you want to add the host
>> (Default) [Default]:
>> [ ERROR ] Cannot automatically add the host to cluster Default: Cannot
>> add Host. Connecting to host via SSH has failed, verify that the host is
>> reachable (IP address, routable address etc.) You may refer to the
>> engine.log file for further details.
>> [ ERROR ] Failed to execute stage 'Closing up': Cannot add the host to
>> cluster Default
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150804143614.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>>
>> I've attached my engine.log and my answer file. I have confirmed that
>> both systems are able to reach each other, so I'm not sure if it's perhaps
>> some kind of DNS related issue perhaps? Note that I'm only using host file
>> resolution and not internal DNS names.
>>
>> Please could someone take a look and see if you are able to pick up
>> anything obvious please.
>>
>> Thank you!
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
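
For what it's worth, the usual things to check from the engine VM when
host-deploy fails at that SSH step are along these lines (the hostname is a
placeholder; the engine connects as root over port 22 by default):

  getent hosts node01.example.com          # does the engine resolve the name it was given?
  ping -c3 node01.example.com
  ssh -p22 root@node01.example.com true    # should succeed with the root password or key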


Re: [ovirt-users] self-hosted-engine Failed to establish session with host

2015-08-04 Thread Neil
Hi guys,

I've gone through the oVirt admin front end as a test and I see I've been
able to add the host to the Default datacenter manually. I'm not sure if
this worked manually perhaps because it used root and I specified the
password instead?

Would be interesting to find out why this failed the automatic addition
though.

Thank you!

Regards.

Neil Wilson.




On Tue, Aug 4, 2015 at 3:24 PM, Neil  wrote:

> Hi guys,
>
> I initially installed an AllinOne oVirt installation, but realised half
> way through that I actually need a self-hosted-engine instead, so I did an
> engine-cleanup and removed the database etc then ran...
>
> yum install ovirt-hosted-engine-setup
> hosted-engine --deploy
>
> I installed Centos 6.4 minimal on the VM then did a full yum update to 6.6
> and restarted as well as disabled selinux.
>
> Everything went through perfectly and my engine is up and running, however
> my installation suddenly terminated when trying to add the host to the
> Default datacenter as follows
>
> [ INFO  ] Engine replied: DB Up!Welcome to Health Status!
>   Enter the name of the cluster to which you want to add the host
> (Default) [Default]:
> [ ERROR ] Cannot automatically add the host to cluster Default: Cannot add
> Host. Connecting to host via SSH has failed, verify that the host is
> reachable (IP address, routable address etc.) You may refer to the
> engine.log file for further details.
> [ ERROR ] Failed to execute stage 'Closing up': Cannot add the host to
> cluster Default
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150804143614.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
> I've attached my engine.log and my answer file. I have confirmed that both
> systems are able to reach each other, so I'm not sure if it's perhaps some
> kind of DNS related issue perhaps? Note that I'm only using host file
> resolution and not internal DNS names.
>
> Please could someone take a look and see if you are able to pick up
> anything obvious please.
>
> Thank you!
>
> Regards.
>
> Neil Wilson.
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] self-hosted-engine Failed to establish session with host

2015-08-04 Thread Neil
Hi guys,

I initially did an all-in-one oVirt installation, but realised halfway
through that I actually need a self-hosted engine instead, so I did an
engine-cleanup and removed the database etc., then ran...

yum install ovirt-hosted-engine-setup
hosted-engine --deploy

I installed Centos 6.4 minimal on the VM then did a full yum update to 6.6
and restarted as well as disabled selinux.

Everything went through perfectly and my engine is up and running, however
my installation suddenly terminated when trying to add the host to the
Default datacenter as follows

[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
  Enter the name of the cluster to which you want to add the host
(Default) [Default]:
[ ERROR ] Cannot automatically add the host to cluster Default: Cannot add
Host. Connecting to host via SSH has failed, verify that the host is
reachable (IP address, routable address etc.) You may refer to the
engine.log file for further details.
[ ERROR ] Failed to execute stage 'Closing up': Cannot add the host to
cluster Default
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150804143614.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

I've attached my engine.log and my answer file. I have confirmed that both
systems are able to reach each other, so I'm not sure if it's perhaps some
kind of DNS related issue perhaps? Note that I'm only using host file
resolution and not internal DNS names.

Please could someone take a look and see if you are able to pick up
anything obvious please.

Thank you!

Regards.

Neil Wilson.
2015-08-04 12:32:00,508 INFO  
[org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean] (MSC service 
thread 1-3) Init device custom properties utilities
2015-08-04 12:32:00,513 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service thread 
1-3) Initializing Scheduling manager
2015-08-04 12:32:00,516 INFO  
[org.ovirt.engine.core.bll.network.MacPoolManagerRanges] 
(org.ovirt.thread.pool-8-thread-1) Start initializing MacPoolManagerRanges
2015-08-04 12:32:00,550 INFO  
[org.ovirt.engine.core.bll.network.MacPoolManagerRanges] 
(org.ovirt.thread.pool-8-thread-1) Finished initializing. Available MACs in 
pool: 256
2015-08-04 12:32:00,563 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service thread 
1-3) External scheduler disabled, discovery skipped
2015-08-04 12:32:00,564 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service thread 
1-3) Initialized Scheduling manager
2015-08-04 12:32:00,564 INFO  [org.ovirt.engine.core.bll.dwh.DwhHeartBeat] (MSC 
service thread 1-3) Initializing DWH Heart Beat
2015-08-04 12:32:00,569 INFO  [org.ovirt.engine.core.bll.dwh.DwhHeartBeat] (MSC 
service thread 1-3) DWH Heart Beat initialized
2015-08-04 12:35:58,625 INFO  [org.ovirt.engine.core.bll.aaa.LoginUserCommand] 
(ajp--127.0.0.1-8702-4) Running command: LoginUserCommand internal: false.
2015-08-04 12:35:58,630 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: UNASSIGNED not exist in string table
2015-08-04 12:35:58,630 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: VDS_HIGH_NETWORK_USE not exist in string 
table
2015-08-04 12:35:58,631 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: USER_FAILED_REMOVE_VM not exist in string 
table
2015-08-04 12:35:58,631 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: USER_RUN_UNLOCK_ENTITY_SCRIPT not exist 
in string table
2015-08-04 12:35:58,632 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: 
VDS_NETWORK_MTU_DIFFER_FROM_LOGICAL_NETWORK not exist in string table
2015-08-04 12:35:58,632 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: STORAGE_ACTIVATE_ASYNC not exist in 
string table
2015-08-04 12:35:58,633 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: USER_ADDED_DISK_PROFILE not exist in 
string table
2015-08-04 12:35:58,633 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: USER_FAILED_TO_ADD_DISK_PROFILE not exist 
in string table
2015-08-04 12:35:58,635 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: USER_REMOVED_DISK_PROFILE not exist in 
string table
2015-08-04 12:35:58,635 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-4) AuditLogType: USER_FAILED_TO_REMOVE_DISK_PROFILE not 
exist in s

Re: [ovirt-users] All in one question

2015-08-03 Thread Neil
Hi Mathew,

Wow, thank you very much for the quick response!

I've gone ahead and tried to do what you've suggested, but I'm a bit
confused as to how the storage is going to work...

If I add the storage domain on the Local datacenter, then I can only choose
local_storage, and then when I try to add this storage as NFS, oVirt says the
storage domain isn't empty (which it isn't), so I'm confused as to how both
hosts will work together on the same storage domain.

All I'm wanting to achieve is a two-host cluster with NFS storage from one
host; is it not over-complicating things to use the all-in-one installation?

Thank you, and apologies if I've perhaps misunderstood.

Regards.

Neil Wilson.


On Mon, Aug 3, 2015 at 10:03 AM, Matthew Lagoe 
wrote:

> Since the storage is NFS it really doesn’t matter where it is located at
> so long as all hosts can talk to it via ip
>
>
>
> Basically if you have the nfs storage locally or otherwise you can simply
> add another host to the datacenter and then you should be able to use the
> storage across the new host
>
>
>
> Keep in mind you will need to make it so the external host is able to
> access the nfs share
>
>
>
> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
> Behalf Of *Neil
> *Sent:* Monday, August 03, 2015 12:58 AM
> *To:* users@ovirt.org
> *Subject:* [ovirt-users] All in one question
>
>
>
> Hi guys,
>
>
>
> Please excuse this if it sounds like a dumb question; it's my first time
> doing an "All-in-one" oVirt installation.
>
>
>
> I've installed the All-in-one on one physical machine, and configured this
> as a host in the cluster, and my intention was to use local NFS storage as
> the primary storage domain for the VM's, but then add a second host to the
> cluster which would access this NFS primary storage domain on the original
> "All-in-one" installation...
>
> After doing the install when I log in I see that when you do an
> "All-in-one" install you end up with a "local_cluster" as well as a
> "Default" cluster and you can't add another host to the "local_cluster", so
> it appears I'll need to add the second host to the "Default" which I'm
> assuming means I won't be able to share the primary NFS storage between the
> two clusters and I won't get live migration between my two physical hosts
> across the clusters?
>
>
>
> Could anyone confirm if my assumptions are correct please?
>
>
>
> Thank you!
>
>
>
> Regards.
>
>
>
> Neil Wilson.
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
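
For reference, an NFS export that oVirt hosts can share as a data domain
generally looks something like the sketch below; the path is a placeholder
and 36:36 is the vdsm:kvm user and group the hosts expect:

  mkdir -p /srv/ovirt/data
  chown 36:36 /srv/ovirt/data
  echo '/srv/ovirt/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)' >> /etc/exports
  exportfs -ra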


[ovirt-users] All in one question

2015-08-03 Thread Neil
Hi guys,

Please excuse this if it sounds like a dumb question; it's my first time
doing an "All-in-one" oVirt installation.

I've installed the All-in-one on one physical machine, and configured this
as a host in the cluster, and my intention was to use local NFS storage as
the primary storage domain for the VM's, but then add a second host to the
cluster which would access this NFS primary storage domain on the original
"All-in-one" installation...
After doing the install when I log in I see that when you do an
"All-in-one" install you end up with a "local_cluster" as well as a
"Default" cluster and you can't add another host to the "local_cluster", so
it appears I'll need to add the second host to the "Default" which I'm
assuming means I won't be able to share the primary NFS storage between the
two clusters and I won't get live migration between my two physical hosts
across the clusters?

Could anyone confirm if my assumptions are correct please?

Thank you!

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] reinstall hosted-engine with ovirt 3.5?

2015-07-17 Thread Alastair Neil
is there a mechanism to import the appliance image into the hosted engine?
I am not sure how I would do this since I have no live access to my engine
DB.


On 17 July 2015 at 06:56, Jiri Belka  wrote:

> > From: "Alastair Neil" 
> > To: "Ovirt Users" 
> > Sent: Thursday, July 16, 2015 5:38:41 PM
> > Subject: [ovirt-users] reinstall hosted-engine with ovirt 3.5?
> >
> > Due to a moment of idiocy I accidentally upgraded my hosted-engine vm to
> > Fedora 22, and now ovirt-engine will not start. I was able to get
> > postgresql up and running, so I was able to make a backup of the engine.
> > As far as I know oVirt 3.5 is not supported on F22, so my options seem
> > limited.
> >
> > 1, update to the 3.6 prerelease
> > 2, reinstall the VM, if I were doing this I would use CentOS 7
> >
> >
> > my preference would be to do a fresh install of the hosted engine. I am
> > guessing the way to go about this would be to shut down the HE broker and
> > agent daemons on all the nodes, possibly clean the metadata, and then do a
> > hosted-engine deploy as though migrating from an external engine.
> >
> > Can anyone comment if this is reasonable?
>
> You can give a try to ovirt engine appliance and then restore
> from backup ;)
>
> j.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
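
If the appliance route is taken, the backup and restore themselves are
normally just engine-backup on both ends. A hedged sketch; check
engine-backup --help on your exact version, in particular whether
--provision-db is available (otherwise create the empty database manually
first, as described in the restore documentation):

  # on the old (broken) engine, once postgresql and the files are reachable
  engine-backup --mode=backup --scope=all --file=engine.backup --log=backup.log
  # on the freshly installed engine VM, before running a plain engine-setup
  engine-backup --mode=restore --file=engine.backup --log=restore.log --provision-db
  engine-setup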


[ovirt-users] reinstall hosted-engine with ovirt 3.5?

2015-07-16 Thread Alastair Neil
Due to a moment of idiocy I accidentally upgraded my hosted-engine vm to
Fedora 22, and now ovirt-engine will not start. I was able to get postgresql
up and running, so I was able to make a backup of the engine.  As far as I
know oVirt 3.5 is not supported on F22, so my options seem limited.

1, update to the 3.6 prerelease
2, reinstall the VM, if I were doing this I would use CentOS 7


my preference would be to do a fresh install of the hosted engine.  I am
guessing the way to go about this would be to shut down the HE broker and
agent daemons on all the nodes, possibly clean the metadata, and then do a
hosted-engine deploy as though migrating from an external engine.

Can anyone comment if this is reasonable?

Thanks,

-Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] size of VM image files varies on different gluster bricks

2015-05-27 Thread Alastair Neil
Hi

I have a hosted engine cluster running 3.5.2 on f20.  I have 6 nodes
running centos 6.6 and three storage nodes also running centos 6.6 with
gluster 3.6.3,

My primary data store is a replica 3 gluster volume.  I noticed that the
size of some image files differs wildly on one server's brick.  Disks are
all thin provisioned.  The bricks are thin-provisioned LVM volumes with XFS
file systems.  The only difference between the systems is that the problem
node is newer, a Dell R530 with an MD1400, whereas the other two are Dell
R510s each with MD1200s.  The storage arrays all have the same 4TB disks.

e.g. for a disk that the oVirt console reports as having virtual size 500G
and actual size 103G I see:


[root@gluster0 479d2197-de09-4012-8183-43c6baa7e65b]# cd
> ../d0d58fb9-ecaa-446f-bc42-dd681a16aee2/
> [root@gluster0 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# du -sh *
> 106G c1b70bf0-c750-4177-8485-7b981e1f21a3
> 1.0M c1b70bf0-c750-4177-8485-7b981e1f21a3.lease
> 4.0K c1b70bf0-c750-4177-8485-7b981e1f21a3.meta
> [root@gluster1 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# pwd
>
> /export/brick5/ovirt-data/54d9ee82-0974-4a72-98a5-328d2e4007f1/images/d0d58fb9-ecaa-446f-bc42-dd681a16aee2
> [root@gluster1 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# du -sh *
> 103G c1b70bf0-c750-4177-8485-7b981e1f21a3
> 1.0M c1b70bf0-c750-4177-8485-7b981e1f21a3.lease
> 4.0K c1b70bf0-c750-4177-8485-7b981e1f21a3.meta
> [root@gluster-2 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# pwd
>
> /export/brick5/ovirt-data/54d9ee82-0974-4a72-98a5-328d2e4007f1/images/d0d58fb9-ecaa-446f-bc42-dd681a16aee2
> [root@gluster-2 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# du -sh *
> 501G c1b70bf0-c750-4177-8485-7b981e1f21a3
> 1.0M c1b70bf0-c750-4177-8485-7b981e1f21a3.lease
> 4.0K c1b70bf0-c750-4177-8485-7b981e1f21a3.meta


I'd appreciate any suggestions about troubleshooting and resolving this. Here
is the volume info:

Volume Name: data
> Type: Replicate
> Volume ID: 5c6ff46d-1159-4c7e-8b16-5ffeb15cbaf9
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster-2:/export/brick5/ovirt-data
> Brick2: gluster1:/export/brick5/ovirt-data
> Brick3: gluster0:/export/brick5/ovirt-data
> Options Reconfigured:
> performance.least-prio-threads: 4
> performance.low-prio-threads: 16
> performance.normal-prio-threads: 24
> performance.high-prio-threads: 24
> performance.io-thread-count: 32
> diagnostics.count-fop-hits: off
> diagnostics.latency-measurement: off
> auth.allow: *
> nfs.rpc-auth-allow: *
> network.remote-dio: on
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.eager-lock: enable
> cluster.min-free-disk: 5%
> cluster.rebalance-stats: on
> cluster.background-self-heal-count: 16
> cluster.readdir-optimize: on
> cluster.metadata-self-heal: on
> cluster.data-self-heal: on
> cluster.entry-self-heal: on
> cluster.self-heal-daemon: on
> cluster.heal-timeout: 500
> cluster.self-heal-window-size: 8
> cluster.data-self-heal-algorithm: diff
> cluster.quorum-type: auto
> cluster.self-heal-readdir-size: 64KB
> network.ping-timeout: 20
> performance.open-behind: disable
> cluster.server-quorum-ratio: 51%
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
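
One thing worth checking before suspecting gluster itself: du reports
allocated blocks, so a sparse image can legitimately show ~103G on two bricks
while a copy whose holes have been filled (for example by a full self-heal)
shows the whole 500G. Comparing allocated versus apparent size on each brick,
and confirming nothing is pending heal, narrows it down; the path below is a
placeholder:

  IMG=/export/brick5/ovirt-data/<sd_uuid>/images/<img_uuid>/<vol_uuid>
  du -sh "$IMG"                   # allocated size
  du -sh --apparent-size "$IMG"   # logical size
  qemu-img info "$IMG"            # inspect the brick copy directly (read-only)
  gluster volume heal data info   # make sure nothing is pending heal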


[ovirt-users] simple-sso w. kerberos & iplanet ldap - login slow and unreliable (ovirt 3.5.1.1)

2015-04-09 Thread Alastair Neil
I have configured simple-sso with Kerberos.  I can log in successfully
most of the time, but often the login fails and I am dropped at the portal
login window and prompted for the internal account username and password.
The host is FC 20.  Also, adding users in the GMU-authz o=gmu.edu namespace is
agonisingly slow returning from the directory lookup.

I can see from the apache logs that the kerberos authentication is
successful, but in the engine logs I see many errors:

2015-04-09 13:39:28,493 ERROR
> [org.ovirt.engine.core.aaa.filters.BasicAuthenticationFilter]
> (ajp--127.0.0.1-8702-11) Cannot obtain profile for user aneil2


and eventually:

2015-04-09 13:39:28,342 ERROR
> [org.ovirt.engine.core.aaa.filters.BasicAuthenticationFilter]
> (ajp--127.0.0.1-8702-5) Cannot obtain profile for user aneil2
> {Extkey[name=EXTENSION_INVOKE_CONTEXT;type=class
> org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_INVOKE_CONTEXT[886d2ebb-312a-49ae-9cc3-e1f849834b7d];]={Extkey[name=EXTENSION_INTERFACE_VERSION_MAX;type=class
> java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MAX[f4cff49f-2717-4901-8ee9-df362446e3e7];]=0,
> Extkey[name=EXTENSION_LICENSE;type=class
> java.lang.String;uuid=EXTENSION_LICENSE[8a61ad65-054c-4e31-9c6d-1ca4d60a4c18];]=ASL
> 2.0, Extkey[name=EXTENSION_NOTES;type=class
> java.lang.String;uuid=EXTENSION_NOTES[2da5ad7e-185a-4584-aaff-97f66978e4ea];]=Display
> name: ovirt-engine-extension-aaa-ldap-1.0.2-1.fc20,
> Extkey[name=EXTENSION_HOME_URL;type=class
> java.lang.String;uuid=EXTENSION_HOME_URL[4ad7a2f4-f969-42d4-b399-72d192e18304];]=
> http://www.ovirt.org, Extkey[name=EXTENSION_LOCALE;type=class
> java.lang.String;uuid=EXTENSION_LOCALE[0780b112-0ce0-404a-b85e-8765d778bb29];]=en_US,
> Extkey[name=EXTENSION_NAME;type=class
> java.lang.String;uuid=EXTENSION_NAME[651381d3-f54f-4547-bf28-b0b01a103184];]=ovirt-engine-extension-aaa-ldap.authz,
> Extkey[name=EXTENSION_INTERFACE_VERSION_MIN;type=class
> java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MIN[2b84fc91-305b-497b-a1d7-d961b9d2ce0b];]=0,
> Extkey[name=EXTENSION_CONFIGURATION;type=class
> java.util.Properties;uuid=EXTENSION_CONFIGURATION[2d48ab72-f0a1-4312-b4ae-5068a226b0fc];]=***,
> Extkey[name=EXTENSION_AUTHOR;type=class
> java.lang.String;uuid=EXTENSION_AUTHOR[ef242f7a-2dad-4bc5-9aad-e07018b7fbcc];]=The
> oVirt Project, Extkey[name=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE;type=class
> java.lang.Integer;uuid=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE[2eb1f541-0f65-44a1-a6e3-014e247595f5];]=50,
> Extkey[name=EXTENSION_INSTANCE_NAME;type=class
> java.lang.String;uuid=EXTENSION_INSTANCE_NAME[65c67ff6-aeca-4bd5-a245-8674327f011b];]=GMU-authz,
> Extkey[name=EXTENSION_BUILD_INTERFACE_VERSION;type=class
> java.lang.Integer;uuid=EXTENSION_BUILD_INTERFACE_VERSION[cb479e5a-4b23-46f8-aed3-56a4747a8ab7];]=0,
> Extkey[name=EXTENSION_CONFIGURATION_SENSITIVE_KEYS;type=interface
> java.util.Collection;uuid=EXTENSION_CONFIGURATION_SENSITIVE_KEYS[a456efa1-73ff-4204-9f9b-ebff01e35263];]=[],
> Extkey[name=EXTENSION_GLOBAL_CONTEXT;type=class
> org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_GLOBAL_CONTEXT[9799e72f-7af6-4cf1-bf08-297bc8903676];]=*skip*,
> Extkey[name=EXTENSION_VERSION;type=class
> java.lang.String;uuid=EXTENSION_VERSION[fe35f6a8-8239-4bdb-ab1a-af9f779ce68c];]=1.0.2,
> Extkey[name=AAA_AUTHZ_AVAILABLE_NAMESPACES;type=interface
> java.util.Collection;uuid=AAA_AUTHZ_AVAILABLE_NAMESPACES[6dffa34c-955f-486a-bd35-0a272b45a711];]=[o=
> gmu.edu], Extkey[name=EXTENSION_MANAGER_TRACE_LOG;type=interface
> org.slf4j.Logger;uuid=EXTENSION_MANAGER_TRACE_LOG[863db666-3ea7-4751-9695-918a3197ad83];]=org.slf4j.impl.Slf4jLogger(org.ovirt.engine.core.extensions.mgr.ExtensionsManager.trace.ovirt-engine-extension-aaa-ldap.authz.GMU-authz),
> Extkey[name=EXTENSION_PROVIDES;type=interface
> java.util.Collection;uuid=EXTENSION_PROVIDES[8cf373a6-65b5-4594-b828-0e275087de91];]=[org.ovirt.engine.api.extensions.aaa.Authz],
> Extkey[name=EXTENSION_CONFIGURATION_FILE;type=class
> java.lang.String;uuid=EXTENSION_CONFIGURATION_FILE[4fb0ffd3-983c-4f3f-98ff-9660bd67af6a];]=/etc/ovirt-engine/extensions.d/GMU-authz.properties},
> Extkey[name=AAA_AUTHZ_QUERY_FLAGS;type=class
> java.lang.Integer;uuid=AAA_AUTHZ_QUERY_FLAGS[97d226e9-8d87-49a0-9a7f-af689320907b];]=3,
> Extkey[name=EXTENSION_INVOKE_COMMAND;type=class
> org.ovirt.engine.api.extensions.ExtUUID;uuid=EXTENSION_INVOKE_COMMAND[485778ab-bede-4f1a-b823-77b262a2f28d];]=AAA_AUTHZ_FETCH_PRINCIPAL_RECORD[5a5bf9bb-9336-4376-a823-26efe1ba26df],
> Extkey[name=AAA_AUTHN_AUTH_RECORD;type=class
> org.ovirt.engine.api.extensions.ExtMap;uuid=AAA_AUTHN_AUTH_RECORD[e9462168-b53b-44ac-9af5-f25e1697173e];]={Extkey[name=AAA_AUTHN_AUTH_RECORD_PRINCIPAL;type=class
> java.lang.String;uuid=AAA_AUTHN_AUTH_RECORD_PRINCIPAL[c3498f07-11fe-464c-958c-8bd7490b119a];]=aneil2}}
> {Extkey[name=EXTENSION_INVOKE_RESULT;type=class
> java.lang.Integer;uuid=EXTENSION_INVOKE_RESULT[0909d91d-8bde-40fb-b6c0-099c772ddd4e];]=2,
> Extkey[name=EXTENSION_INVOKE_MESSAGE;type=class
> 

Re: [ovirt-users] sign-out with kerberos sso

2015-04-07 Thread Alastair Neil
Just a quick follow-up: I tried 3.5.2 RC3 and saw the same issue.


On 7 April 2015 at 22:54, Alastair Neil  wrote:

> I have been setting up aaa, following the recipe in the RedHat portal:
>
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html#sect-Single_Sign-On_to_the_Administration_and_User_Portal
>
> and I can successfully authenticate, however the Sign Out button does not
> clear the session properly and does nothing.  I found this long standing bug
>
> https://bugzilla.redhat.com/show_bug.cgi?id=884653
>
> this bug was updated last month as supposedly fixed by an errata release
> of RHEV 3.5.0.
>
> I'm using FC20 with ovirt 3.5.1.1, Is there an equivalent fix in ovirt?
> If so how can I access it?
>
> Thanks, Alastair
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] sign-out with kerberos sso

2015-04-07 Thread Alastair Neil
I have been setting up aaa, following the recipe in the RedHat portal:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Directory_Users.html#sect-Single_Sign-On_to_the_Administration_and_User_Portal

and I can successfully authenticate; however, the Sign Out button does not
clear the session properly and does nothing.  I found this long-standing bug:

https://bugzilla.redhat.com/show_bug.cgi?id=884653

this bug was updated last month as supposedly fixed by an errata release of
RHEV 3.5.0.

I'm using FC20 with ovirt 3.5.1.1, Is there an equivalent fix in ovirt?  If
so how can I access it?

Thanks, Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VMs freezing during heals

2015-04-03 Thread Alastair Neil
Any follow-up on this?

Are there known issues using a replica 3 gluster datastore with LVM
thin-provisioned bricks?
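
For context, the thin-provisioned bricks in question are typically built
roughly like this; the volume group, sizes and mount point are placeholders,
not values from this thread:

  lvcreate -L 2T -T vg_bricks/brickpool              # thin pool
  lvcreate -V 2T -T vg_bricks/brickpool -n brick1    # thin LV carved from the pool
  mkfs.xfs -i size=512 /dev/vg_bricks/brick1         # 512-byte inodes, as usually recommended for gluster bricks
  mkdir -p /export/brick1
  mount /dev/vg_bricks/brick1 /export/brick1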

On 20 March 2015 at 15:22, Alastair Neil  wrote:

> CentOS 6.6
>
>
>>  vdsm-4.16.10-8.gitc937927.el6
>> glusterfs-3.6.2-1.el6
>> 2.6.32 - 504.8.1.el6.x86_64
>
>
> moved to 3.6 specifically to get the snapshotting feature, hence my desire
> to migrate to thinly provisioned lvm bricks.
>
>
>
>
> On 20 March 2015 at 14:57, Darrell Budic  wrote:
>
>> What version of gluster are you running on these?
>>
>> I’ve seen high load during heals bounce my hosted engine around due to
>> overall system load, but never pause anything else. Cent 7 combo
>> storage/host systems, gluster 3.5.2.
>>
>>
>> On Mar 20, 2015, at 9:57 AM, Alastair Neil  wrote:
>>
>> Pranith
>>
>> I have run a pretty straightforward test.  I created a two brick 50 G
>> replica volume with normal lvm bricks, and installed two servers, one
>> centos 6.6 and one centos 7.0.  I kicked off bonnie++ on both to generate
>> some file system activity and then made the volume replica 3.  I saw no
>> issues on the servers.
>>
>> Not clear if this is a sufficiently rigorous test and the Volume I have
>> had issues on is a 3TB volume  with about 2TB used.
>>
>> -Alastair
>>
>>
>> On 19 March 2015 at 12:30, Alastair Neil  wrote:
>>
>>> I don't think I have the resources to test it meaningfully.  I have
>>> about 50 vms on my primary storage domain.  I might be able to set up a
>>> small 50 GB volume and provision 2 or 3 vms running test loads but I'm not
>>> sure it would be comparable.  I'll give it a try and let you know if I see
>>> similar behaviour.
>>>
>>> On 19 March 2015 at 11:34, Pranith Kumar Karampuri 
>>> wrote:
>>>
>>>>  Without thinly provisioned lvm.
>>>>
>>>> Pranith
>>>>
>>>> On 03/19/2015 08:01 PM, Alastair Neil wrote:
>>>>
>>>> do you mean raw partitions as bricks or simply with out thin
>>>> provisioned lvm?
>>>>
>>>>
>>>>
>>>> On 19 March 2015 at 00:32, Pranith Kumar Karampuri >>> > wrote:
>>>>
>>>>>  Could you let me know if you see this problem without lvm as well?
>>>>>
>>>>> Pranith
>>>>>
>>>>> On 03/18/2015 08:25 PM, Alastair Neil wrote:
>>>>>
>>>>> I am in the process of replacing the bricks with thinly provisioned
>>>>> lvs yes.
>>>>>
>>>>>
>>>>>
>>>>> On 18 March 2015 at 09:35, Pranith Kumar Karampuri <
>>>>> pkara...@redhat.com> wrote:
>>>>>
>>>>>>  hi,
>>>>>>   Are you using thin-lvm based backend on which the bricks are
>>>>>> created?
>>>>>>
>>>>>> Pranith
>>>>>>
>>>>>> On 03/18/2015 02:05 AM, Alastair Neil wrote:
>>>>>>
>>>>>>  I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There
>>>>>> are two virtualisation clusters, one with two Nehalem nodes and one with
>>>>>> four Sandy Bridge nodes. My master storage domain is a GlusterFS backed by
>>>>>> a replica 3 gluster volume from 3 of the gluster nodes.  The engine is a
>>>>>> hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by
>>>>>> nfs from a different gluster volume.  All the hosts are CentOS 6.6.
>>>>>>
>>>>>>   vdsm-4.16.10-8.gitc937927.el6
>>>>>>> glusterfs-3.6.2-1.el6
>>>>>>> 2.6.32 - 504.8.1.el6.x86_64
>>>>>>
>>>>>>
>>>>>>  Problems happen when I try to add a new brick or replace a brick
>>>>>> eventually the self heal will kill the VMs. In the VM's logs I see kernel
>>>>>> hung task messages.
>>>>>>
>>>>>>  Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for
>>>>>>> more than 120 seconds.
>>>>>>> Mar 12 23:05:16 static1 kernel:  Not tainted
>>>>>>> 2.6.32-504.3.3.el6.x86_64 #1
>>>>>>> Mar 12 23:05:16 static1 kernel: "echo 0 >
>>>>>>> /proc/sys/kernel/hung_task_timeout_secs" disables this mes

Re: [ovirt-users] VMs freezing during heals

2015-03-20 Thread Alastair Neil
CentOS 6.6


>  vdsm-4.16.10-8.gitc937927.el6
> glusterfs-3.6.2-1.el6
> 2.6.32 - 504.8.1.el6.x86_64


moved to 3.6 specifically to get the snapshotting feature, hence my desire
to migrate to thinly provisioned lvm bricks.




On 20 March 2015 at 14:57, Darrell Budic  wrote:

> What version of gluster are you running on these?
>
> I’ve seen high load during heals bounce my hosted engine around due to
> overall system load, but never pause anything else. Cent 7 combo
> storage/host systems, gluster 3.5.2.
>
>
> On Mar 20, 2015, at 9:57 AM, Alastair Neil  wrote:
>
> Pranith
>
> I have run a pretty straightforward test.  I created a two brick 50 G
> replica volume with normal lvm bricks, and installed two servers, one
> centos 6.6 and one centos 7.0.  I kicked off bonnie++ on both to generate
> some file system activity and then made the volume replica 3.  I saw no
> issues on the servers.
>
> Not clear if this is a sufficiently rigorous test and the Volume I have
> had issues on is a 3TB volume  with about 2TB used.
>
> -Alastair
>
>
> On 19 March 2015 at 12:30, Alastair Neil  wrote:
>
>> I don't think I have the resources to test it meaningfully.  I have about
>> 50 vms on my primary storage domain.  I might be able to set up a small 50
>> GB volume and provision 2 or 3 vms running test loads but I'm not sure it
>> would be comparable.  I'll give it a try and let you know if I see similar
>> behaviour.
>>
>> On 19 March 2015 at 11:34, Pranith Kumar Karampuri 
>> wrote:
>>
>>>  Without thinly provisioned lvm.
>>>
>>> Pranith
>>>
>>> On 03/19/2015 08:01 PM, Alastair Neil wrote:
>>>
>>> do you mean raw partitions as bricks or simply without thin provisioned
>>> lvm?
>>>
>>>
>>>
>>> On 19 March 2015 at 00:32, Pranith Kumar Karampuri 
>>> wrote:
>>>
>>>>  Could you let me know if you see this problem without lvm as well?
>>>>
>>>> Pranith
>>>>
>>>> On 03/18/2015 08:25 PM, Alastair Neil wrote:
>>>>
>>>> I am in the process of replacing the bricks with thinly provisioned lvs
>>>> yes.
>>>>
>>>>
>>>>
>>>> On 18 March 2015 at 09:35, Pranith Kumar Karampuri >>> > wrote:
>>>>
>>>>>  hi,
>>>>>   Are you using thin-lvm based backend on which the bricks are
>>>>> created?
>>>>>
>>>>> Pranith
>>>>>
>>>>> On 03/18/2015 02:05 AM, Alastair Neil wrote:
>>>>>
>>>>>  I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There
>>>>> are two virtualisation clusters, one with two Nehalem nodes and one with
>>>>> four Sandy Bridge nodes. My master storage domain is a GlusterFS backed by
>>>>> a replica 3 gluster volume from 3 of the gluster nodes.  The engine is a
>>>>> hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by
>>>>> nfs from a different gluster volume.  All the hosts are CentOS 6.6.
>>>>>
>>>>>   vdsm-4.16.10-8.gitc937927.el6
>>>>>> glusterfs-3.6.2-1.el6
>>>>>> 2.6.32 - 504.8.1.el6.x86_64
>>>>>
>>>>>
>>>>>  Problems happen when I try to add a new brick or replace a brick
>>>>> eventually the self heal will kill the VMs. In the VM's logs I see kernel
>>>>> hung task messages.
>>>>>
>>>>>  Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for
>>>>>> more than 120 seconds.
>>>>>> Mar 12 23:05:16 static1 kernel:  Not tainted
>>>>>> 2.6.32-504.3.3.el6.x86_64 #1
>>>>>> Mar 12 23:05:16 static1 kernel: "echo 0 >
>>>>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>>> Mar 12 23:05:16 static1 kernel: nginx D 0001
>>>>>> 0  1736   1735 0x0080
>>>>>> Mar 12 23:05:16 static1 kernel: 8800778b17a8 0082
>>>>>>  000126c0
>>>>>> Mar 12 23:05:16 static1 kernel: 88007e5c6500 880037170080
>>>>>> 0006ce5c85bd9185 88007e5c64d0
>>>>>> Mar 12 23:05:16 static1 kernel: 88007a614ae0 0001722b64ba
>>>>>> 88007a615098 8800778b1fd8
>>>>>> Mar 12 2

Re: [ovirt-users] VMs freezing during heals

2015-03-20 Thread Alastair Neil
Pranith

I have run a pretty straightforward test.  I created a two brick 50 G
replica volume with normal lvm bricks, and installed two servers, one
centos 6.6 and one centos 7.0.  I kicked off bonnie++ on both to generate
some file system activity and then made the volume replica 3.  I saw no
issues on the servers.

I am not sure this is a sufficiently rigorous test, and the volume I have had
issues on is a 3TB volume with about 2TB used.
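
For reference, the test above can be reproduced with roughly the following
gluster commands (a sketch only; volume name, host names and brick paths are
placeholders):

# start with a 2-brick replica volume and put I/O on it (e.g. bonnie++)
gluster volume create testvol replica 2 host1:/bricks/testvol host2:/bricks/testvol
gluster volume start testvol

# then grow it to replica 3 while the I/O is still running
gluster volume add-brick testvol replica 3 host3:/bricks/testvol

# and watch the self-heal backlog the new brick triggers
gluster volume heal testvol info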

-Alastair


On 19 March 2015 at 12:30, Alastair Neil  wrote:

> I don't think I have the resources to test it meaningfully.  I have about
> 50 vms on my primary storage domain.  I might be able to set up a small 50
> GB volume and provision 2 or 3 vms running test loads but I'm not sure it
> would be comparable.  I'll give it a try and let you know if I see similar
> behaviour.
>
> On 19 March 2015 at 11:34, Pranith Kumar Karampuri 
> wrote:
>
>>  Without thinly provisioned lvm.
>>
>> Pranith
>>
>> On 03/19/2015 08:01 PM, Alastair Neil wrote:
>>
>> do you mean raw partitions as bricks or simply without thin provisioned
>> lvm?
>>
>>
>>
>> On 19 March 2015 at 00:32, Pranith Kumar Karampuri 
>> wrote:
>>
>>>  Could you let me know if you see this problem without lvm as well?
>>>
>>> Pranith
>>>
>>> On 03/18/2015 08:25 PM, Alastair Neil wrote:
>>>
>>> I am in the process of replacing the bricks with thinly provisioned lvs
>>> yes.
>>>
>>>
>>>
>>> On 18 March 2015 at 09:35, Pranith Kumar Karampuri 
>>> wrote:
>>>
>>>>  hi,
>>>>   Are you using thin-lvm based backend on which the bricks are
>>>> created?
>>>>
>>>> Pranith
>>>>
>>>> On 03/18/2015 02:05 AM, Alastair Neil wrote:
>>>>
>>>>  I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There are
>>>> two virtualisation clusters, one with two Nehalem nodes and one with four
>>>> Sandy Bridge nodes. My master storage domain is a GlusterFS backed by a
>>>> replica 3 gluster volume from 3 of the gluster nodes.  The engine is a
>>>> hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by
>>>> nfs from a different gluster volume.  All the hosts are CentOS 6.6.
>>>>
>>>>   vdsm-4.16.10-8.gitc937927.el6
>>>>> glusterfs-3.6.2-1.el6
>>>>> 2.6.32 - 504.8.1.el6.x86_64
>>>>
>>>>
>>>>  Problems happen when I try to add a new brick or replace a brick
>>>> eventually the self heal will kill the VMs. In the VM's logs I see kernel
>>>> hung task messages.
>>>>
>>>>  Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for
>>>>> more than 120 seconds.
>>>>> Mar 12 23:05:16 static1 kernel:  Not tainted
>>>>> 2.6.32-504.3.3.el6.x86_64 #1
>>>>> Mar 12 23:05:16 static1 kernel: "echo 0 >
>>>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> Mar 12 23:05:16 static1 kernel: nginx D 0001 0
>>>>>  1736   1735 0x0080
>>>>> Mar 12 23:05:16 static1 kernel: 8800778b17a8 0082
>>>>>  000126c0
>>>>> Mar 12 23:05:16 static1 kernel: 88007e5c6500 880037170080
>>>>> 0006ce5c85bd9185 88007e5c64d0
>>>>> Mar 12 23:05:16 static1 kernel: 88007a614ae0 0001722b64ba
>>>>> 88007a615098 8800778b1fd8
>>>>> Mar 12 23:05:16 static1 kernel: Call Trace:
>>>>> Mar 12 23:05:16 static1 kernel: []
>>>>> schedule_timeout+0x215/0x2e0
>>>>> Mar 12 23:05:16 static1 kernel: []
>>>>> wait_for_common+0x123/0x180
>>>>> Mar 12 23:05:16 static1 kernel: [] ?
>>>>> default_wake_function+0x0/0x20
>>>>> Mar 12 23:05:16 static1 kernel: [] ?
>>>>> _xfs_buf_read+0x46/0x60 [xfs]
>>>>> Mar 12 23:05:16 static1 kernel: [] ?
>>>>> xfs_trans_read_buf+0x197/0x410 [xfs]
>>>>> Mar 12 23:05:16 static1 kernel: []
>>>>> wait_for_completion+0x1d/0x20
>>>>> Mar 12 23:05:16 static1 kernel: []
>>>>> xfs_buf_iowait+0x9b/0x100 [xfs]
>>>>> Mar 12 23:05:16 static1 kernel: [] ?
>>>>> xfs_trans_read_buf+0x197/0x410 [xfs]
>>>>> Mar 12 23:05:16 static1 kernel: []
>>>>> _xfs_buf_read+0x46/0x60 [xfs]
>>>>>

[ovirt-users] VMs freezing during heals

2015-03-17 Thread Alastair Neil
I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There are two
virtualisation clusters, one with two Nehalem nodes and one with four
Sandy Bridge nodes. My master storage domain is a GlusterFS backed by a
replica 3 gluster volume from 3 of the gluster nodes.  The engine is a
hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by
nfs from a different gluster volume.  All the hosts are CentOS 6.6.

 vdsm-4.16.10-8.gitc937927.el6
> glusterfs-3.6.2-1.el6
> 2.6.32 - 504.8.1.el6.x86_64


Problems happen when I try to add a new brick or replace a brick eventually
the self heal will kill the VMs. In the VM's logs I see kernel hung task
messages.

Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for more than
> 120 seconds.
> Mar 12 23:05:16 static1 kernel:  Not tainted 2.6.32-504.3.3.el6.x86_64
> #1
> Mar 12 23:05:16 static1 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 12 23:05:16 static1 kernel: nginx D 0001 0
>  1736   1735 0x0080
> Mar 12 23:05:16 static1 kernel: 8800778b17a8 0082
>  000126c0
> Mar 12 23:05:16 static1 kernel: 88007e5c6500 880037170080
> 0006ce5c85bd9185 88007e5c64d0
> Mar 12 23:05:16 static1 kernel: 88007a614ae0 0001722b64ba
> 88007a615098 8800778b1fd8
> Mar 12 23:05:16 static1 kernel: Call Trace:
> Mar 12 23:05:16 static1 kernel: []
> schedule_timeout+0x215/0x2e0
> Mar 12 23:05:16 static1 kernel: []
> wait_for_common+0x123/0x180
> Mar 12 23:05:16 static1 kernel: [] ?
> default_wake_function+0x0/0x20
> Mar 12 23:05:16 static1 kernel: [] ?
> _xfs_buf_read+0x46/0x60 [xfs]
> Mar 12 23:05:16 static1 kernel: [] ?
> xfs_trans_read_buf+0x197/0x410 [xfs]
> Mar 12 23:05:16 static1 kernel: []
> wait_for_completion+0x1d/0x20
> Mar 12 23:05:16 static1 kernel: []
> xfs_buf_iowait+0x9b/0x100 [xfs]
> Mar 12 23:05:16 static1 kernel: [] ?
> xfs_trans_read_buf+0x197/0x410 [xfs]
> Mar 12 23:05:16 static1 kernel: []
> _xfs_buf_read+0x46/0x60 [xfs]
> Mar 12 23:05:16 static1 kernel: []
> xfs_buf_read+0xab/0x100 [xfs]
> Mar 12 23:05:16 static1 kernel: []
> xfs_trans_read_buf+0x197/0x410 [xfs]
> Mar 12 23:05:16 static1 kernel: []
> xfs_imap_to_bp+0x54/0x130 [xfs]
> Mar 12 23:05:16 static1 kernel: [] xfs_iread+0x7b/0x1b0
> [xfs]
> Mar 12 23:05:16 static1 kernel: [] ?
> inode_init_always+0x11e/0x1c0
> Mar 12 23:05:16 static1 kernel: [] xfs_iget+0x27e/0x6e0
> [xfs]
> Mar 12 23:05:16 static1 kernel: [] ?
> xfs_iunlock+0x5d/0xd0 [xfs]
> Mar 12 23:05:16 static1 kernel: [] xfs_lookup+0xc6/0x110
> [xfs]
> Mar 12 23:05:16 static1 kernel: []
> xfs_vn_lookup+0x54/0xa0 [xfs]
> Mar 12 23:05:16 static1 kernel: [] do_lookup+0x1a5/0x230
> Mar 12 23:05:16 static1 kernel: []
> __link_path_walk+0x7a4/0x1000
> Mar 12 23:05:16 static1 kernel: [] ?
> cache_grow+0x217/0x320
> Mar 12 23:05:16 static1 kernel: [] path_walk+0x6a/0xe0
> Mar 12 23:05:16 static1 kernel: []
> filename_lookup+0x6b/0xc0
> Mar 12 23:05:16 static1 kernel: [] user_path_at+0x57/0xa0
> Mar 12 23:05:16 static1 kernel: [] ?
> _xfs_trans_commit+0x214/0x2a0 [xfs]
> Mar 12 23:05:16 static1 kernel: [] ?
> xfs_iunlock+0x7e/0xd0 [xfs]
> Mar 12 23:05:16 static1 kernel: [] vfs_fstatat+0x50/0xa0
> Mar 12 23:05:16 static1 kernel: [] ?
> touch_atime+0x14d/0x1a0
> Mar 12 23:05:16 static1 kernel: [] vfs_stat+0x1b/0x20
> Mar 12 23:05:16 static1 kernel: [] sys_newstat+0x24/0x50
> Mar 12 23:05:16 static1 kernel: [] ?
> audit_syscall_entry+0x1d7/0x200
> Mar 12 23:05:16 static1 kernel: [] ?
> __audit_syscall_exit+0x25e/0x290
> Mar 12 23:05:16 static1 kernel: []
> system_call_fastpath+0x16/0x1b



I am wondering if my volume settings are causing this.  Can anyone with
more knowledge take a look and let me know:

network.remote-dio: on
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> nfs.export-volumes: on
> network.ping-timeout: 20
> cluster.self-heal-readdir-size: 64KB
> cluster.quorum-type: auto
> cluster.data-self-heal-algorithm: diff
> cluster.self-heal-window-size: 8
> cluster.heal-timeout: 500
> cluster.self-heal-daemon: on
> cluster.entry-self-heal: on
> cluster.data-self-heal: on
> cluster.metadata-self-heal: on
> cluster.readdir-optimize: on
> cluster.background-self-heal-count: 20
> cluster.rebalance-stats: on
> cluster.min-free-disk: 5%
> cluster.eager-lock: enable
> storage.owner-uid: 36
> storage.owner-gid: 36
> auth.allow:*
> user.cifs: disable
> cluster.server-quorum-ratio: 51%
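
For anyone comparing against their own setup, the options above and the heal
state can be checked with commands along these lines (a sketch; VOLNAME is a
placeholder for the actual volume name):

# list the options explicitly set on the volume
gluster volume info VOLNAME

# see how much self-heal work is pending while the VMs are affected
gluster volume heal VOLNAME info

# adjust a single option, e.g. one of those listed above
gluster volume set VOLNAME cluster.self-heal-window-size 8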


Many Thanks,  Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] failures in vdsm logs Ovirt 3.5 & Gluster 3.6.1

2014-11-27 Thread Alastair Neil
I see very frequent errors in the vdsm log on one of the gluster servers
in my gluster cluster. The host is fine in the oVirt console, as are the
volumes, and gluster functions OK, so this is a nuisance primarily.  The
gluster cluster is a replica 2, with 2 hosts, both CentOS 6.6 with gluster
version 3.6.1, vdsm version 4.16.7.  The oVirt engine version is 3.5.0.1-1.fc20.

The host producing the errors is identified by FQDN in oVirt; the one that
does not produce errors is identified by IP address.

Any ideas about how to troubleshoot this?


-Thanks, Alastair


Gluster Server CentOS 6.6:
vdsmd: vdsm-4.16.7-1.gitdb83943.el6
gluster: glusterfs-3.6.1-1.el6

ovirt host Fedora 20:
hosted ovirt-engine 3.5.0.1-1


Thread-53::DEBUG::2014-11-24
> 14:11:45,362::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call hostsList with () {}
> Thread-53::ERROR::2014-11-24
> 14:11:45,363::BindingXMLRPC::1151::vds::(wrapper) unexpected error

Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
>   File "", line 2, in glusterPeerStatus
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
> Thread-53::DEBUG::2014-11-24
> 14:03:08,721::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call volumesList with () {}
> Thread-53::ERROR::2014-11-24
> 14:03:08,721::BindingXMLRPC::1151::vds::(wrapper) unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 78, in volumesList
> return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
>   File "", line 2, in glusterVolumeInfo
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
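
One low-risk way to narrow this down, assuming the standard EL6 vdsm service
names and log locations (a sketch, not a confirmed fix): the Broken pipe from
the supervdsm proxy usually means vdsm has lost its channel to supervdsmd, so
check and restart that pair on the affected host.

# on the gluster host that logs the errors
service supervdsmd status
service vdsmd status
tail -n 50 /var/log/vdsm/supervdsm.log

# restarting both usually re-establishes the vdsm/supervdsm channel
service supervdsmd restart && service vdsmd restart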
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] (no subject)

2014-11-24 Thread Alastair Neil
I see frequent errors in the vdsm log on one of the gluster servers in my
gluster cluster.  Can anyone suggest a way to troubleshoot this?  The host
is fine in the oVirt console and gluster functions OK.  The gluster cluster
is a replica 2, with 2 hosts.  I am trying to add another host to move to
replica 3, but I am having trouble, and this error seems to cause any changes
to the cluster to fail with unexpected errors in oVirt.

-Thanks, Alastair


Gluster Server CentOS 6.6:
vdsmd: vdsm-4.16.7-1.gitdb83943.el6
gluster: glusterfs-3.6.1-1.el6

ovirt host Fedora 20:
hosted ovirt-engine 3.5.0.1-1


Thread-53::DEBUG::2014-11-24
> 14:11:45,362::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call hostsList with () {}
> Thread-53::ERROR::2014-11-24
> 14:11:45,363::BindingXMLRPC::1151::vds::(wrapper) unexpected error

Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
>   File "", line 2, in glusterPeerStatus
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
> Thread-53::DEBUG::2014-11-24
> 14:03:08,721::BindingXMLRPC::1132::vds::(wrapper) client
> [xxx.xxx.xxx.39]::call volumesList with () {}
> Thread-53::ERROR::2014-11-24
> 14:03:08,721::BindingXMLRPC::1151::vds::(wrapper) unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 78, in volumesList
> return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
>   File "", line 2, in glusterVolumeInfo
>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 725, in
> _callmethod
> conn.send((self._id, methodname, args, kwds))
> IOError: [Errno 32] Broken pipe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] CAS authentication

2014-11-14 Thread Alastair Neil
Can any one tell me if there is a CAS authentication plugin available or
planned?  If it is planned is there a target for the feature?

-Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine , how to make changes to the VM post deploy

2014-11-06 Thread Alastair Neil
Does the broker automatically sync the change you made
in /etc/ovirt-hosted-engine/hosted-engine.conf to the other HA hosts, or did
you omit a step?


On 6 November 2014 11:10, Groten, Ryan  wrote:

> I went through this a couple months ago.  Migrated my hosted-engine from
> one NFS host to another.  Here are the steps that I documented from the
> experience.  There is probably a better way, but this worked for me on two
> separate hosted-engine environments.
>
> 1. Make a backup of RHEV-M
> 2. Migrate VMs off all hosts that run hosted-engine (except
> hosted-engine itself)
> 3. Put hosted-engine hosts in maintenance mode (except host that's
> running hosted-engine)
> 4. Put hosted-engine in global maintenance mode
> 5. Shutdown hosted-engine
> 6. Stop ovirt-ha-agent and ovirt-ha-broker services on all
> hosted-engine hosts
> 7. On each hosted-engine host:
> a. service ovirt-ha-agent stop
> b. service ovirt-ha-broker stop
> c. sanlock client shutdown -f 1
> d. service sanlock stop
> e. umount /rhev/data-center/mnt/
> f. service sanlock start
> 8. mount new NFS share on /hosted_tgt
> 9. mount old NFS share on /hosted_src
> 10. Copy data (make sure sparse files are kept sparse):
> a. rsync --sparse -crvlp /hosted_src/* /hosted_tgt/
> 11. Edit /etc/ovirt-hosted-engine/hosted-engine.conf and change
> path:
> storage=10.1.208.122:/HostedEngine_Test
> 12. Make sure permissions are vdsm:kvm in /hosted_tgt/
> 13. umount /hosted_tgt
> 14. umount /hosted_src
> 15. Pick one hosted-engine host and reboot, then run:
> a. hosted-engine --connect-storage (make sure the new NFS
> is mounted properly)
> b. hosted-engine --start-pool (wait a few seconds then try
> again if you get an error)
> c. service ovirt-ha-broker start
> d. service ovirt-ha-agent start
> e. hosted-engine --vm-start
>
>
> -Original Message-
> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf
> Of Frank Wall
> Sent: November-06-14 8:12 AM
> To: Jiri Moskovcak
> Cc: users
> Subject: Re: [ovirt-users] hosted engine , how to make changes to the VM
> post deploy
>
> On Wed, Nov 05, 2014 at 08:11:39AM +0100, Jiri Moskovcak wrote:
> > On 11/04/2014 03:52 PM, Alastair Neil wrote:
> > > So is this the workflow?
> > >
> > > set the hosted-engine maintenance to global
> > > shutdown the engine VM
> > > make changes via virsh or editing vm.conf
> > > sync changes to the other ha nodes
> > > restart the VM
> > > set hosted-engine maintenance to none
> >
> > - well, not official, because it can cause a lot of troubles, so I
> > would not recommend it unless you have a really good reason to do it.
>
> I'd like to move my ovirt-engine VM to a new NFS storage.
> I was thinking to adopt this workflow for this use-case (in combination
> with rsync to mirror the old storage).
>
> Do you think this would succeed or is there another (and maybe
> "supported") way to move ovirt-engine to a different storage?
>
>
> Regards
> - Frank
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
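
Condensing the steps quoted above from Ryan into command form (a sketch; the
old and new NFS exports and mount points are placeholders, and the service
names assume the same EL6 setup):

# on every hosted-engine host, after global maintenance and engine shutdown
service ovirt-ha-agent stop
service ovirt-ha-broker stop

# copy the hosted-engine storage domain, preserving sparse files
mount newnfs:/HostedEngine_New /hosted_tgt
mount oldnfs:/HostedEngine_Old /hosted_src
rsync --sparse -crvlp /hosted_src/ /hosted_tgt/
umount /hosted_src /hosted_tgt

# point the config at the new export, then bring one host back up
sed -i 's|^storage=.*|storage=newnfs:/HostedEngine_New|' /etc/ovirt-hosted-engine/hosted-engine.conf
hosted-engine --connect-storage
hosted-engine --start-pool
service ovirt-ha-broker start
service ovirt-ha-agent start
hosted-engine --vm-start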
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Protecting the storage of the self hosted engine

2014-11-05 Thread Alastair Neil
It was my understanding that the replica 3 requirement is for GlusterFS
fuse storage; it's not clear that this would extend to NFS provided by a
gluster volume.  I'd appreciate clarification.

-Alastair


On 5 November 2014 12:57, Daniel Helgenberger 
wrote:

>
>
> On 05.11.2014 15:30, wodel youchi wrote:
> > Hi, I am new on oirt.
> Hello and welcome!
>
> >
> > I want to know the best way to protect the storage of the hosted-engine?
> IMHO reliable hardware and contingency plans.
>
> >
> > In Ovirt3.5, only NFS and iSCSI are supported for the engine VM, so this
> means
> > that the NFS server or the iSCSI volume become the weak link.
> First we need to define 'weak link'. IMHO this can be network and/or
> storage hardware like controllers and spindles (SSDs). As the latter
> tends to be reliable, I think you mean the data link layer as the
> 'weak link'?
>
> Gluster can be the same weak link for instance, as it needs a network
> layer. If you use iSCSI together with some storage appliance maybe with
> redundant controllers, this setup is quite reliable and engine storage
> is protected if you use iSCSI Multipath (and the paths are indeed
> separate hardware switches).
>
> You could call NFS a weak link, but even there, quite reliable setups are
> available which support failover, replicated storage and IPMP.
>
> >
> > I've read two articles, one using GlusterFS+NFS and CTDB for high
> > availability of the engine storage: oVirt 3.4, Glusterized
> I have to warn you at this point. This setup seems quite tempting; even
> using localhost addresses with gluster's build in NFS. This was tried
> before (myself included) but it is far from stable.
>
> You would at least need replica 3 gluster volumes to avoid split brains.
> These seem to happen quite often. I include Martin here, we talked about
> this at the ovirt workshop in Düsseldorf; maybe he can provide a better
> explanation.
>
> AFAIK gluster will be supported as engine storage in the future; but
> this is not the case right now. Of course, you are welcome to try!
>
> That said, and because you are new to ovirt, the main thing you need to
> protect is not the engine storage, but your production data domains.
>
> The VMs will run fine and continue to run with the engine down or not
> available. In case of a real disaster, you will be able to import these
> storage domains along with their VMs to a new engine.
>
> For me, I tend to have my engine backed up using engine-backup [1] and
> put the result to a different storage. From that data you can recreate
> the whole engine.
>
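
For reference, a typical engine-backup round trip looks roughly like this (a
sketch; the file names are placeholders, and the backup should be copied to
storage that does not depend on the engine itself):

engine-backup --mode=backup --file=engine-backup.tar.bz2 --log=engine-backup.log
# ...and later, on a freshly installed engine machine with an empty engine
# database prepared:
engine-backup --mode=restore --file=engine-backup.tar.bz2 --log=engine-restore.log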
> > [link preview] oVirt 3.4, Glusterized: oVirt's Hosted Engine feature,
> > introduced in the project's 3.4 release, enables the open source
> > virtualization system to host its own management server, which means...
> > (community.redha...)
> >
> > And another using GlusterFS+NFS and keepalived: "How to workaround through
> > the maze and reach the goal of the new amazing oVirt Hosted Engine with
> > 3.4.0 Beta" (andrewklau, www.andrewklau.com)
> >
> > Is there a better way to achieve this goal?
> >
> > Thanks
> >
> HTH
>
> [1] http://www.ovirt.org/Ovirt-engine-backup
>
> --
> Daniel Helgenberger
> m box bewegtbild GmbH
>
> P: +49/30/2408781-22
> F: +49/30/2408781-10
>
> ACKERSTR. 19
> D-10115 BERLIN
>
>
> www.m-box.de  www.monkeymen.tv
>
> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
> Handeslregister: Amtsgericht Charlottenburg / HRB 112767
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine , how to make changes to the VM post deploy

2014-11-04 Thread Alastair Neil
Thanks Jirka

So is this the workflow?

> set the hosted-engine maintenance to global
> shutdown the engine VM
> make changes via virsh or editing vm.conf
> sync changes to the other ha nodes
> restart the VM
> set hosted-engine maintenance to none
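
In command form, that workflow would look roughly like this (a sketch; it
assumes the standard hosted-engine CLI and that vm.conf is copied to each HA
host by hand):

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
# edit /etc/ovirt-hosted-engine/vm.conf (or use virsh), then sync it, e.g.:
scp /etc/ovirt-hosted-engine/vm.conf otherhost:/etc/ovirt-hosted-engine/vm.conf
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none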



Also, the patch for the broker will be in 3.5.1?


-Alastair


On 4 November 2014 02:50, Jiri Moskovcak  wrote:

> On 11/04/2014 04:38 AM, Alastair Neil wrote:
>
>> I have successfully migrated my standalone ovirt into a hosted
>> instance.  I have some questions and comments.
>>
>>
>> Is there a mechanism to make changes to the hosted engine VM post
>> deploy.  I would like to change some of the VM configuration choices.
>>
>> I tried setting the maintenance mode to "global" and then editing:
>>
>>   /etc/ovirt-hosted-engine/vm.conf
>>
>> I changed something simple: the VM name.  Then I copied the file to the
>> two other HA hosts and set the maintenance mode to "none".
>>
>>
> - you need to kill the vm and restart it after you edit the configuration
>
> hosted-engine --vm-shutdown
> hosted-engine --vm-start
>
> - also you can use virsh [1] to edit the vm
>
> [1] http://wiki.libvirt.org/page/FAQ#Where_are_VM_config_files_
> stored.3F_How_do_I_edit_a_VM.27s_XML_config.3F
>
>
>  No joy the VM hostname remains unchanged in the portal.
>>
>>
>> Also, the notification system is horrendously spammy, I get email
>> notification of each state change from each of the HA hosts, surely this
>> is not intentional.  Is there some way to control this?
>>
>>
> - that's a bug which should be fixed by this patch
> http://gerrit.ovirt.org/#/c/33518/
>
> --Jirka
>
>  -Thanks,  Alastair
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine , how to make changes to the VM post deploy

2014-11-03 Thread Alastair Neil
I have successfully migrated my standalone ovirt into a hosted instance.  I
have some questions and comments.


Is there a mechanism to make changes to the hosted-engine VM post-deploy?
I would like to change some of the VM configuration choices.

I tried setting the maintenance mode to "global" and then editing:

 /etc/ovirt-hosted-engine/vm.conf

I changed something simple: the VM name.  Then I copied the file to the two
other HA hosts and set the maintenance mode to "none".

No joy; the VM hostname remains unchanged in the portal.


Also, the notification system is horrendously spammy: I get an email
notification of each state change from each of the HA hosts, and surely this
is not intentional.  Is there some way to control this?

-Thanks,  Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine deploy failed 3.5 centos 6.5 host FC20 vm

2014-11-01 Thread Alastair Neil
No need, it turned out to be PEBKAC.  I had been using my domain account for
so long that I used the wrong admin account, so the hosted-engine deploy
failed to authenticate.  I have resolved the issue.


On Fri Oct 31 2014 at 4:35:27 AM Jiri Moskovcak  wrote:

> Hi Alastair,
> I need the engine.log to debug it, because the actual problem is logged
> there.
>
> Thanks,
> Jirka
>
> On 10/29/2014 08:58 PM, Alastair Neil wrote:
> > OK I seem to be having some fundamental confusion about this migration.
> >
> >
> > I have an existing ovirt 3.5 (upgraded from 3.4) setup with  a Data
> > Center containing four clusters, 3 VM clusters for 3 different classes
> > of CPU hosts (Penryn, Nehalem, and SandyBridge).  I also have a  gluster
> > storage cluster.
> >
> > There are 4 storage domains, an Export domain (Export-Dom1) nfs v1, and
> > ISO domain (Gluster-ISOs) posix FS v1, a Data domain (Gluster Data)
> > GlusterFS V3, and a Data (Master) (Gluster-VM-Store) GlusterFS v3.
> >
> > As Gluster replica 2 is not considered adequate for the hosted-engine
> > storage I created a volume in the gluster store and exported it as NFS.
> > This is what I planned to use as the storage pool for the hosted
> > engine.  So far so good.
> >
> > I have tried the deployment several times now,  and it fails with the
> > following:
> >
> > [ ERROR ] Cannot automatically add the host to cluster None: HTTP
> > Status 401
> > [ ERROR ] Failed to execute stage 'Closing up': Cannot add the host
> > to cluster None
> >
> >
> > 2014-10-29 15:26:11 DEBUG
> > otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> > add_host._closeup:502 Cannot add the host to cluster None
> > Traceback (most recent call last):
> >File
> > "/usr/share/ovirt-hosted-engine-setup/scripts/../
> plugins/ovirt-hosted-engine-setup/engine/add_host.py",
> > line 426, in _closeup
> >  ca_file=self.cert,
> >File "/usr/lib/python2.6/site-packages/ovirtsdk/api.py", line
> > 154, in __init__
> >  url=''
> >File
> > "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py",
> > line 118, in request
> >  persistent_auth=self._persistent_auth)
> >File
> > "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py",
> > line 146, in __doRequest
> >  persistent_auth=persistent_auth
> >File
> > "/usr/lib/python2.6/site-packages/ovirtsdk/web/connection.py", line
> > 134, in doRequest
> >  raise RequestError, response
> > RequestError:
> > status: 401
> > reason: Unauthorized
> > detail: HTTP Status 401
> > 2014-10-29 15:26:11 ERROR
> > otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> > add_host._closeup:510 Cannot automatically add the host to
> > cluster None:
> > HTTP Status 401
> > 2014-10-29 15:26:11 DEBUG otopi.context context._executeMethod:152
> > method exception
> > Traceback (most recent call last):
> >File "/usr/lib/python2.6/site-packages/otopi/context.py", line
> > 142, in _executeMethod
> >  method['method']()
> >File
> > "/usr/share/ovirt-hosted-engine-setup/scripts/../
> plugins/ovirt-hosted-engine-setup/engine/add_host.py",
> > line 517, in _closeup
> >  cluster=cluster_name,
> > RuntimeError: Cannot add the host to cluster None
> >
> >
> >
> > The hosted-engine host cluster name it seems is set to "None", and then
> > fails to add the host as there is no cluster "None" in the restored
> > engine.  Presumably the storage domain would need to be added too,
> > however I don't ever seem to see any message about this
> >
> > I recall being prompted for a data-center name and even a storage-domain
> > name, but not a cluster name, so am I missing a step.  I could use some
> > guidance as I am stumped.  Is there some pre-migration tasks I am
> > failing to do in the original engine?
> >
> >
> >
> > .
> >
> > On 29 October 2014 03:10, Jiri Moskovcak  > <mailto:jmosk...@redhat.com>> wrote:
> >
> > On 10/27/2014 06:22 PM, Alastair Neil wrote:
> >
> > After belatedly realising that no engine for EL7 is planned for
> > 3.5 I
> > tried using FC20:
> >
> > I used

Re: [ovirt-users] hosted engine deploy failed 3.5 centos 6.5 host FC20 vm

2014-10-29 Thread Alastair Neil
OK I seem to be having some fundamental confusion about this migration.


I have an existing ovirt 3.5 (upgraded from 3.4) setup with  a Data Center
containing four clusters, 3 VM clusters for 3 different classes of CPU
hosts (Penryn, Nehalem, and SandyBridge).  I also have a  gluster storage
cluster.

There are 4 storage domains, an Export domain (Export-Dom1) nfs v1, and ISO
domain (Gluster-ISOs) posix FS v1, a Data domain (Gluster Data) GlusterFS
V3, and a Data (Master) (Gluster-VM-Store) GlusterFS v3.

As Gluster replica 2 is not considered adequate for the hosted-engine
storage I created a volume in the gluster store and exported it as NFS.
This is what I planned to use as the storage pool for the hosted engine.
So far so good.
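
For what it's worth, creating and NFS-exporting such a gluster volume can be
sketched roughly as follows (volume, host and brick names are placeholders;
gluster's built-in NFS server stays available as long as nfs.disable is off):

gluster volume create engine-vol replica 2 g1:/bricks/engine g2:/bricks/engine
gluster volume set engine-vol storage.owner-uid 36
gluster volume set engine-vol storage.owner-gid 36
gluster volume set engine-vol nfs.disable off
gluster volume start engine-vol
# then point hosted-engine --deploy at the NFS path g1:/engine-vol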

I have tried the deployment several times now,  and it fails with the
following:

[ ERROR ] Cannot automatically add the host to cluster None: HTTP Status
> 401
> [ ERROR ] Failed to execute stage 'Closing up': Cannot add the host to
> cluster None


2014-10-29 15:26:11 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> add_host._closeup:502 Cannot add the host to cluster None
> Traceback (most recent call last):
>   File
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/engine/add_host.py",
> line 426, in _closeup
> ca_file=self.cert,
>   File "/usr/lib/python2.6/site-packages/ovirtsdk/api.py", line 154, in
> __init__
> url=''
>   File
> "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line
> 118, in request
> persistent_auth=self._persistent_auth)
>   File
> "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line
> 146, in __doRequest
> persistent_auth=persistent_auth
>   File "/usr/lib/python2.6/site-packages/ovirtsdk/web/connection.py", line
> 134, in doRequest
> raise RequestError, response
> RequestError:
> status: 401
> reason: Unauthorized
> detail: HTTP Status 401
> 2014-10-29 15:26:11 ERROR
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> add_host._closeup:510 Cannot automatically add the host to
> cluster None:
> HTTP Status 401
> 2014-10-29 15:26:11 DEBUG otopi.context context._executeMethod:152 method
> exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in
> _executeMethod
> method['method']()
>   File
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/engine/add_host.py",
> line 517, in _closeup
> cluster=cluster_name,
> RuntimeError: Cannot add the host to cluster None



The hosted-engine host cluster name it seems is set to "None", and then
fails to add the host as there is no cluster "None" in the restored
engine.  Presumably the storage domain would need to be added too, however
I don't ever seem to see any message about this.

I recall being prompted for a data-center name and even a storage-domain
name, but not a cluster name, so am I missing a step?  I could use some
guidance as I am stumped.  Are there some pre-migration tasks I am failing
to do in the original engine?




.

On 29 October 2014 03:10, Jiri Moskovcak  wrote:

> On 10/27/2014 06:22 PM, Alastair Neil wrote:
>
>> After belatedly realising that no engine for EL7 is planned for 3.5 I
>> tried using FC20:
>>
>> I used a database called engine with user engine on the VM to restore to.
>> The engine-backup restore appeared to complete with no errors save the
>> canonical complaint about less than 16GB of memory being available.
>> However, on completion on the host, the hosted-engine deploy threw this error:
>>
>> Failed to execute stage 'Closing up': The host name
>> "ovirt-admin-hosted.x.xxx.edu
>> <http://ovirt-admin-hosted.vsnet.gmu.edu>" contained in the URL
>>
>> doesn't match any of the names in the server certificate.
>>
>>
>> from the setup log
>>
>> 2014-10-27 12:55:49 DEBUG
>> otopi.ovirt_hosted_engine_setup.check_liveliness
>> check_liveliness.isEngineUp:46 Checking for Engine health status
>> 2014-10-27 12:55:50 INFO
>> otopi.ovirt_hosted_engine_setup.check_liveliness
>> check_liveliness.isEngineUp:64 Engine replied: DB Up!Welcome to
>> Health Status!
>> 2014-10-27 12:55:50 DEBUG otopi.context context._executeMethod:138
>> Stage closeup METHOD
>> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.
>> Plugin._closeup
>> 2014-10-27 12:55:50 DEBUG
>> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
>> 

Re: [ovirt-users] migrate from stand alone on FC19 to hosted OK to use CentOS 7?

2014-10-27 Thread Alastair Neil
Thanks Jirka

I assume EL7 will at some point be supported by the engine?

-Alastair


On 27 October 2014 02:54, Jiri Moskovcak  wrote:

> On 10/25/2014 12:11 AM, Alastair Neil wrote:
>
>> I am trying to migrate my old ovirt install which started out at 3.3
>> standalone engine on FC19 to a hosted engine.  I want to use CentOS ,
>> however, the postgresql version on 6.5 is old (8.4.20) and I am unable
>> to get a clean restore.  The version on FC 19 is 9.2.8, it looks like EL
>> 7 has 9.2.7 (I am hoping the difference in the minor rev will not bite
>> me).
>>
>> I was wondering if there are any issues using EL 7 to host the engine?
>> I know I had seen some reports of issues with 3.5 on EL7 as hosts but
>> was not sure if the engine had any gotchas.
>>
>
> using el7 on a host for hosted engine is ok (just tried that a few times
> last week), but the engine is not supported on el7, so use a different os
> when installing the engine vm.
>
> --Jirka
>
>
>> Thanks, Alastair
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine deploy failed 3.5 centos 6.5 host FC20 vm

2014-10-27 Thread Alastair Neil
After belatedly realising that no engine for EL7 is planned for 3.5 I tried
using FC20:

I used a database called engine with user engine on the VM to restore to.
The engine-backup restore appeared to complete with no errors save the
canonical complaint about less than 16GB of memory being available.
However, on completion on the host, the hosted-engine deploy threw this error:

Failed to execute stage 'Closing up': The host name "
> ovirt-admin-hosted.x.xxx.edu "
> contained in the URL doesn't match any of the names in the server
> certificate.
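
One way to see which names the engine certificate actually presents, checked
from the host (a sketch; the FQDN below is a placeholder for the real engine
address):

echo | openssl s_client -connect ovirt-admin-hosted.example.edu:443 2>/dev/null \
  | openssl x509 -noout -subject -text | grep -E 'Subject:|DNS:'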


from the setup log

> 2014-10-27 12:55:49 DEBUG otopi.ovirt_hosted_engine_setup.check_liveliness
> check_liveliness.isEngineUp:46 Checking for Engine health status
> 2014-10-27 12:55:50 INFO otopi.ovirt_hosted_engine_setup.check_liveliness
> check_liveliness.isEngineUp:64 Engine replied: DB Up!Welcome to Health
> Status!
> 2014-10-27 12:55:50 DEBUG otopi.context context._executeMethod:138 Stage
> closeup METHOD
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._closeup
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> add_host._getPKICert:89 Acquiring ca.crt from the engine
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> add_host._getPKICert:101 -BEGIN CERTIFICATE-
>
> MIID3DCCAsSgAwIBAgICEAAwDQYJKoZIhvcNAQEFBQAwTzELMAkGA1UEBhMCVVMxFjAUBgNVBAoT
>
> DXZzbmV0LmdtdS5lZHUxKDAmBgNVBAMTH292aXJ0LWFkbWluLnZzbmV0LmdtdS5lZHUuNzIyNDcw
>
> IhcRMTMxMTExMTk1NTQ1KzAwMDAXDTIzMTExMDE5NTU0NVowTzELMAkGA1UEBhMCVVMxFjAUBgNV
>
> BAoTDXZzbmV0LmdtdS5lZHUxKDAmBgNVBAMTH292aXJ0LWFkbWluLnZzbmV0LmdtdS5lZHUuNzIy
>
> NDcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDAzjsdTOPIhruA/TvupQ+syMdVu8GT
>
> VJ9IlFdqc/RhiV9YB6snYAF6MIeWKnW0eOL9jY/5TmfIqY/+rvYvLhPui1/612KoW9kEcZXUw0k-2
>
> ntz1i+wHv5PEq1Cvn/G8mI9b56EFiiYPfAzcdKGbJ8iqafFPW71/612KoW9kEcZXUwyUXLHF01Yo
>
> nQGAtjL+VGgY6jWaaFD4j/5XTkzfcybI8jAW8o97vfTrnmqe+2cvIUyip9l5KQJjblO6FDjpJJUC
>
> MhyDEjJPCKAT1kW1f3E/t8lHD4UUsMpX4rB142oGwBo5st3sGlUks5fFLHtYjFTUYSSmTwOlnq+t
>
> D8HFr01lAgMBAAGjgb0wgbowHQYDVR0OBBYEFFpdSy5ACG6PC8YtE8vGRYvSYyI6MHgGA1UdIwRx
>
> MG+AFFpdSy5ACG6PC8YtE8vGRYvSYyI6oVOkUTBPMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNdnNu
>
> ZXQuZ211LmVkdTEoMCYGA1UEAxMfb3ZpcnQtYWRtaW4udnNuZXQuZ211LmVkdS43MjI0N4ICEAAw
>
> DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBAKqhXoL/
>
> jlVhw9qasoqMnJw6ypHjJQCVAukCHvwioHVz+XwvIcIGuod+rHOcvexPZyCkacU2sOaIPjnyv8mJ
>
> sNQ4nKW/oGwUfiKBgsvjv+cHAaqcQNn7MI0VDL71ulYq8UpW0bX3n5fafbstbdN1K2uad3UZH0ae
>
> pv+gLiCXIKTmTtRtHCiKAxVw7Nx48rN8jJyzbP0FoK0+uddrI4TSJDfa5F3USdiYCk/bPCLThDPe
>
> UgpyVDXH11c+j+Bp8IKUvNLLw6gjBkDkPa6oS7qKIP9DaVuroJyUO7OQOes3Uz54+QGc1A+Zewv+
> 2mgdbFVYcsm1qpxBYL6R5fK2ThMz4r8=
> -END CERTIFICATE-
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> add_host._getSSHkey:111 Acquiring SSH key from the engine
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> add_host._getSSHkey:123 ssh-rsa
> B3NzaC1yc2EDAQABAAABAQCpmyaDlP8Kt/yDb/kB4OaIdPx2sgH8T5Ra6hBRGHMxnTtykajnDj9WMannNc0F3d0htvVQXPKZYxxsXxNeHq00Ga/agnCjsYM9EjzujdsBqvyOTjlVX3BVWhWGZu5yNxYwpvdQBRCzhHibgqaafWNRvaixUeO1VAlU+q5W4bZDxJwKui+Bf1dLuZw94zHKs3jiGFcQOegJUVYmWuLVh5GH6SNLMLdbJdr4B5MwlK8ItiOC9XgUdH0RxN56Y1PEUkLserNOW/FxsXuf+cbWRsMtVa5xj82AlDWQUjyQleC91Nl7FT3OHGU1nJf289EjzujdsBqvyOTjlVX3BV5
> ovirt-engine
> 2014-10-27 12:55:50 DEBUG otopi.transaction transaction._prepare:77
> preparing 'File transaction for '/root/.ssh/authorized_keys''
> 2014-10-27 12:55:50 DEBUG otopi.filetransaction
> filetransaction.prepare:194 file '/root/.ssh/authorized_keys' missing
> 2014-10-27 12:55:50 DEBUG otopi.transaction transaction.commit:159
> committing 'File transaction for '/root/.ssh/authorized_keys''
> 2014-10-27 12:55:50 DEBUG otopi.filetransaction filetransaction.commit:327
> Executing restorecon for /root/.ssh
> 2014-10-27 12:55:50 DEBUG otopi.filetransaction filetransaction.commit:341
> restorecon result rc=0, stdout=, stderr=
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> plugin.executeRaw:785 execute: ('/sbin/restorecon', '-r', '/root/.ssh'),
> executable='None', cwd='None', env=None
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> plugin.executeRaw:803 execute-result: ('/sbin/restorecon', '-r',
> '/root/.ssh'), rc=0
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host plugin.execute:861
> execute-output: ('/sbin/restorecon', '-r', '/root/.ssh') stdout:
>
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host plugin.execute:866
> execute-output: ('/sbin/restorecon', '-r', '/root/.ssh') stderr:
>
> 2014-10-27 12:55:50 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.engine.add_host
> add_host._closeup:415 Connecting to the Engine
> 2014-10-27 12:55:50 DEBUG otopi.context context._e

[ovirt-users] migrate from stand alone on FC19 to hosted OK to use CentOS 7?

2014-10-24 Thread Alastair Neil
I am trying to migrate my old ovirt install which started out at 3.3
standalone engine on FC19 to a hosted engine.  I want to use CentOS;
however, the PostgreSQL version on 6.5 is old (8.4.20) and I am unable to
get a clean restore.  The version on FC19 is 9.2.8, and it looks like EL7 has
9.2.7 (I am hoping the difference in the minor rev will not bite me).

I was wondering if there are any issues using EL 7 to host the engine?  I
know I had seen some reports of issues with 3.5 on EL7 as hosts but was not
sure if the engine had any gotchas.

Thanks, Alastair
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Move ovirt-engine to a new server

2014-09-15 Thread Neil
Hi guys,

Please could someone try assist me. I'm starting to lose hair over this :)

I'm trying to migrate my current engine to a new host with the same OS
and hostname. Centos 6.5, ovirt-engine-3.4.3-1.el6.noarch

I've installed all the packages, run engine-setup and gone through the
options keeping the details the same as before.  I then restore my
config back to /etc/ovirt-engine as well as to /etc/pki/ovirt-engine,
but when it gets to trying to restore my database I get the following
error...

[root@backup dbscripts]# ./restore.sh -u postgres -f
/mnt/fw-ovirt-backup/engine-db-2014-09-15-16-48.sql
psql: FATAL:  Ident authentication failed for user "postgres"
Database engine does not exist, please create an empty database named engine.

[root@backup dbscripts]# su - postgres -c "psql -d template1 -c 'drop
database engine;'"
DROP DATABASE
[root@backup dbscripts]# su - postgres -c "psql -d template1 -c
'create database engine owner engine;'"
CREATE DATABASE

[root@backup dbscripts]# ./restore.sh -u postgres -f
/mnt/fw-ovirt-backup/engine-db-2014-09-15-16-48.sql
psql: FATAL:  Ident authentication failed for user "postgres"
Database engine does not exist, please create an empty database named engine.

or even

[root@backup dbscripts]# ./restore.sh -u engine -f
/mnt/fw-ovirt-backup/engine-db-2014-09-15-16-48.sql
psql: FATAL:  Ident authentication failed for user "engine"
Database engine does not exist, please create an empty database named engine.

This is my pg_hba.conf as well..


# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# "local" is for Unix domain socket connections only
local   all all   ident
hostengine  engine  0.0.0.0/0   md5
hostengine  engine  ::0/0   md5
# IPv4 local connections:
hostall all 127.0.0.1/32  ident
# IPv6 local connections:
hostall all ::1/128   ident

I've done this a couple of times before and haven't encountered this
issue, so it seems rather odd.
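
For what it's worth, the "Ident authentication failed" errors usually come from
the ident entries in the pg_hba.conf above.  Two possible workarounds, assuming
the stock EL6 PostgreSQL layout (a sketch, adjust paths to your setup):

# 1) run the restore as the postgres OS user so ident matching succeeds
su - postgres -c "cd /usr/share/ovirt-engine/dbscripts && \
  ./restore.sh -u postgres -f /mnt/fw-ovirt-backup/engine-db-2014-09-15-16-48.sql"

# 2) or temporarily change the local/127.0.0.1 method from ident to trust in
#    /var/lib/pgsql/data/pg_hba.conf and reload PostgreSQL
service postgresql reload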

Thanks.

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Re: Installing 3.4.3 Centos 6.5

2014-09-15 Thread Neil
Thanks very much for the reply.

After three download attempts using yum I eventually just used wget -c
and did a yum localinstall and that has now finally worked.
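
In command form, the workaround was roughly this (a sketch; the mirror URL is a
placeholder for whichever repo yum had been using):

# drop the corrupt cached copy, fetch the rpm with resume support, install locally
yum clean packages
wget -c http://MIRROR/ovirt-3.4/rpm/el6/noarch/ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch.rpm
yum localinstall ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch.rpm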

Also, now that I have the package installed, engine-setup has
completed perfectly.

This was a really strange one, as I haven't encountered this
dependency problem before.

Thanks for all of your help.

Regards.

Neil Wilson.

On Mon, Sep 15, 2014 at 12:00 PM, Xie, Chao  wrote:
> Hi Neil,
> Download the package again. Sometimes the source mirror aborts and the
> package is not downloaded completely, but yum considers it a complete
> package and tries to install it, which then fails.
>
> -Original Message-
> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Neil
> Sent: 15 September 2014 17:50
> To: Yedidyah Bar David
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Installing 3.4.3 Centos 6.5
>
> Hi David,
>
> Strangely enough when I try to install (yum install
> ovirt-engine-webadmin-portal)  I get the following, not sure if this is the 
> file on the repo that's broken or if something else has gone wrong...
>
>
> Running Transaction
>   Installing : ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch
>
>1/1
> Error unpacking rpm package ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch
> error: unpacking of archive failed on file
> /usr/share/ovirt-engine/engine.ear/webadmin.war/2FB574457575838C39AD546F911CF30A.cache.html;5416b33a:
> cpio: Digest mismatch
>   Verifying  : ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch
>
>1/1
>
> Failed:
>   ovirt-engine-webadmin-portal.noarch 0:3.4.3-1.el6
>
> Has anyone else encountered this issue before?
>
> This is the list of packages currently installed...
>
> ovirt-engine-lib-3.4.3-1.el6.noarch
> ovirt-engine-setup-base-3.4.3-1.el6.noarch
> ovirt-engine-websocket-proxy-3.4.3-1.el6.noarch
> ovirt-engine-cli-3.4.0.5-1.el6.noarch
> ovirt-image-uploader-3.4.2-1.el6.noarch
> ovirt-host-deploy-java-1.2.2-1.el6.noarch
> ovirt-engine-backend-3.4.3-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.4.3-1.el6.noarch
> ovirt-engine-dbscripts-3.4.3-1.el6.noarch
> ovirt-engine-3.4.3-1.el6.noarch
> ovirt-release34-1.0.3-1.noarch
> ovirt-engine-sdk-python-3.4.3.0-1.el6.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.4.3-1.el6.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.3-1.el6.noarch
> ovirt-iso-uploader-3.4.3-1.el6.noarch
> ovirt-host-deploy-1.2.2-1.el6.noarch
> ovirt-engine-userportal-3.4.3-1.el6.noarch
> ovirt-engine-setup-3.4.3-1.el6.noarch
> ovirt-engine-restapi-3.4.3-1.el6.noarch
> ovirt-engine-tools-3.4.3-1.el6.noarch
>
> Thanks!
>
> Regards.
>
> Neil Wilson.
>
> On Mon, Sep 15, 2014 at 10:30 AM, Neil  wrote:
>> Hi David,
>>
>> Wow you're correct, I see that somehow it isn't installed.
>>
>> To install it I just did the usual...
>>
>> yum localinstall
>> http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm
>> yum install ovirt-engine
>> engine-setup
>>
>> Not sure if I've perhaps missed a step?
>>
>> Thanks.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>> On Mon, Sep 15, 2014 at 9:35 AM, Yedidyah Bar David  wrote:
>>> - Original Message -
>>>> From: "Neil" 
>>>> To: users@ovirt.org
>>>> Sent: Monday, September 15, 2014 10:17:45 AM
>>>> Subject: [ovirt-users] Installing 3.4.3 Centos 6.5
>>>>
>>>> Hi guys,
>>>>
>>>> Please could someone assist me, I'm trying to install ovirt 3.4.3 so
>>>> I can move my current ovirt engine from one server to another. I've
>>>> done a minimal install of Centos 6.5, did a yum update, then I've
>>>> added in the repo and when I run engine-setup and go through all the
>>>> options, it eventually errors out with the following error...
>>>>
>>>> [ ERROR ] Failed to execute stage 'Transaction commit': Command
>>>> '/bin/rpm' failed to execute
>>>
>>> It failed with (among other output):
>>> package ovirt-engine-webadmin-portal is not installed
>>>
>>> What exactly did you install with 'yum install'? If 'ovirt-engine',
>>> you should have also installed 'ovirt-engine-webadmin-portal' which
>>> is a dependency of it.
>>> --
>>> Didi
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installing 3.4.3 Centos 6.5

2014-09-15 Thread Neil
Hi David,

Strangely enough, when I try to install (yum install
ovirt-engine-webadmin-portal) I get the following; not sure if it's
the file on the repo that's broken or if something else has gone
wrong...


Running Transaction
  Installing : ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch

   1/1
Error unpacking rpm package ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch
error: unpacking of archive failed on file
/usr/share/ovirt-engine/engine.ear/webadmin.war/2FB574457575838C39AD546F911CF30A.cache.html;5416b33a:
cpio: Digest mismatch
  Verifying  : ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch

   1/1

Failed:
  ovirt-engine-webadmin-portal.noarch 0:3.4.3-1.el6

Has anyone else encountered this issue before?

This is the list of packages currently installed...

ovirt-engine-lib-3.4.3-1.el6.noarch
ovirt-engine-setup-base-3.4.3-1.el6.noarch
ovirt-engine-websocket-proxy-3.4.3-1.el6.noarch
ovirt-engine-cli-3.4.0.5-1.el6.noarch
ovirt-image-uploader-3.4.2-1.el6.noarch
ovirt-host-deploy-java-1.2.2-1.el6.noarch
ovirt-engine-backend-3.4.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.4.3-1.el6.noarch
ovirt-engine-dbscripts-3.4.3-1.el6.noarch
ovirt-engine-3.4.3-1.el6.noarch
ovirt-release34-1.0.3-1.noarch
ovirt-engine-sdk-python-3.4.3.0-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.4.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.4.3-1.el6.noarch
ovirt-iso-uploader-3.4.3-1.el6.noarch
ovirt-host-deploy-1.2.2-1.el6.noarch
ovirt-engine-userportal-3.4.3-1.el6.noarch
ovirt-engine-setup-3.4.3-1.el6.noarch
ovirt-engine-restapi-3.4.3-1.el6.noarch
ovirt-engine-tools-3.4.3-1.el6.noarch

Thanks!

Regards.

Neil Wilson.

On Mon, Sep 15, 2014 at 10:30 AM, Neil  wrote:
> Hi David,
>
> Wow you're correct, I see that somehow it isn't installed.
>
> To install it I just did the usual...
>
> yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm
> yum install ovirt-engine
> engine-setup
>
> Not sure if I've perhaps missed a step?
>
> Thanks.
>
> Regards.
>
> Neil Wilson.
>
>
> On Mon, Sep 15, 2014 at 9:35 AM, Yedidyah Bar David  wrote:
>> - Original Message -
>>> From: "Neil" 
>>> To: users@ovirt.org
>>> Sent: Monday, September 15, 2014 10:17:45 AM
>>> Subject: [ovirt-users] Installing 3.4.3 Centos 6.5
>>>
>>> Hi guys,
>>>
>>> Please could someone assist me, I'm trying to install ovirt 3.4.3 so I
>>> can move my current ovirt engine from one server to another. I've done
>>> a minimal install of Centos 6.5, did a yum update, then I've added in
>>> the repo and when I run engine-setup and go through all the options,
>>> it eventually errors out with the following error...
>>>
>>> [ ERROR ] Failed to execute stage 'Transaction commit': Command
>>> '/bin/rpm' failed to execute
>>
>> It failed with (among other output):
>> package ovirt-engine-webadmin-portal is not installed
>>
>> What exactly did you install with 'yum install'? If 'ovirt-engine', you
>> should have also installed 'ovirt-engine-webadmin-portal' which is a
>> dependency of it.
>> --
>> Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installing 3.4.3 Centos 6.5

2014-09-15 Thread Neil
Hi David,

Wow you're correct, I see that somehow it isn't installed.

To install it I just did the usual...

yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm
yum install ovirt-engine
engine-setup

Not sure if I've perhaps missed a step?

Thanks.

Regards.

Neil Wilson.


On Mon, Sep 15, 2014 at 9:35 AM, Yedidyah Bar David  wrote:
> - Original Message -----
>> From: "Neil" 
>> To: users@ovirt.org
>> Sent: Monday, September 15, 2014 10:17:45 AM
>> Subject: [ovirt-users] Installing 3.4.3 Centos 6.5
>>
>> Hi guys,
>>
>> Please could someone assist me, I'm trying to install ovirt 3.4.3 so I
>> can move my current ovirt engine from one server to another. I've done
>> a minimal install of Centos 6.5, did a yum update, then I've added in
>> the repo and when I run engine-setup and go through all the options,
>> it eventually errors out with the following error...
>>
>> [ ERROR ] Failed to execute stage 'Transaction commit': Command
>> '/bin/rpm' failed to execute
>
> It failed with (among other output):
> package ovirt-engine-webadmin-portal is not installed
>
> What exactly did you install with 'yum install'? If 'ovirt-engine', you
> should have also installed 'ovirt-engine-webadmin-portal' which is a
> dependency of it.
> --
> Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Urgent: not comply with the cluster Default emulated machines

2014-08-06 Thread Neil
Hi Roy,

Thank you very much for replying so quickly. I think I've managed to
work out what is causing it.

During the updates it looks like one of my hosts ended up with
qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64, which is the one that was
working, and the other two hosts ended up with
qemu-kvm-rhev-0.12.1.2-2.355.el6.3.x86_64

I've since removed the 2.355 package and re-installed node03 with 2.415, and
it's now operational again.

Thank you very much for your assistance.

Greatly appreciated.

Kind regards.

Neil Wilson.
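
For reference, the checks that narrow this down, run on each host (a minimal
sketch built from the commands Roy suggests below):

# which qemu-kvm build is installed on this host
rpm -qa | grep qemu-kvm

# which machine types qemu itself advertises
/usr/libexec/qemu-kvm -M ?

# what vdsm is currently reporting to the engine
vdsClient 0 -s getVdsCaps | grep rhel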



On Thu, Aug 7, 2014 at 8:27 AM, Roy Golan  wrote:
> On 08/07/2014 09:10 AM, Neil wrote:
>
> let's see what qemu outputs as its supported emulated machines
>
> on your non-operational host:
>
> /usr/libexec/qemu-kvm -M ?
>
> if rhel6.4 is there, then vdsm probably still caches old values
>
> verify with
>
> vdsClient 0 -s getVdsCaps | grep rhel
>
>
>
>
>> Hi guys,
>>
>> Please could someone assist urgently, 2 of my 3 hosts are non
>> operational and some VM's won't start because I don't have resources
>> to run them all on one host.
>>
>> I upgraded to 3.4 from 3.3 yesterday and everything seemed fine, then
>> woke up this morning to this problem...
>>
>> host node03 does not comply with the cluster Default emulated
>> machines. The Hosts emulated machines are rhel6.4.0,pc
>>
>>
>> Hosts CentOS release 6.5 (Final)
>> vdsm-python-4.14.11.2-0.el6.x86_64
>> vdsm-cli-4.14.11.2-0.el6.noarch
>> vdsm-python-zombiereaper-4.14.11.2-0.el6.noarch
>> vdsm-xmlrpc-4.14.11.2-0.el6.noarch
>> vdsm-4.14.11.2-0.el6.x86_64
>> qemu-kvm-rhev-0.12.1.2-2.355.el6.3.x86_64
>> qemu-kvm-tools-0.12.1.2-2.415.el6_5.10.x86_64
>> qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.8.x86_64
>>
>> Engine:
>> ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
>> ovirt-release34-1.0.2-1.noarch
>> ovirt-engine-dbscripts-3.4.3-1.el6.noarch
>> ovirt-release-el6-9-1.noarch
>> ovirt-iso-uploader-3.4.0-1.el6.noarch
>> ovirt-engine-lib-3.4.3-1.el6.noarch
>> ovirt-engine-backend-3.4.3-1.el6.noarch
>> ovirt-engine-websocket-proxy-3.4.3-1.el6.noarch
>> ovirt-engine-userportal-3.4.3-1.el6.noarch
>> ovirt-engine-setup-base-3.4.3-1.el6.noarch
>> ovirt-host-deploy-java-1.2.2-1.el6.noarch
>> ovirt-engine-cli-3.3.0.6-1.el6.noarch
>> ovirt-engine-setup-3.4.3-1.el6.noarch
>> ovirt-engine-restapi-3.4.3-1.el6.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-3.4.3-1.el6.noarch
>> ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch
>> ovirt-image-uploader-3.4.0-1.el6.noarch
>> ovirt-engine-tools-3.4.3-1.el6.noarch
>> ovirt-engine-setup-plugin-websocket-proxy-3.4.3-1.el6.noarch
>> ovirt-host-deploy-1.2.2-1.el6.noarch
>> ovirt-log-collector-3.4.1-1.el6.noarch
>> ovirt-engine-3.4.3-1.el6.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.3-1.el6.noarch
>>
>> I set my cluster compatibility to 3.4 after the upgrade as well.
>>
>> Thank you!
>>
>> Regards.
>>
>> Neil Wilson.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Urgent: not comply with the cluster Default emulated machines

2014-08-06 Thread Neil
Hi guys,

Please could someone assist urgently, 2 of my 3 hosts are non
operational and some VM's won't start because I don't have resources
to run them all on one host.

I upgraded to 3.4 from 3.3 yesterday and everything seemed fine, then
woke up this morning to this problem...

host node03 does not comply with the cluster Default emulated
machines. The Hosts emulated machines are rhel6.4.0,pc


Hosts CentOS release 6.5 (Final)
vdsm-python-4.14.11.2-0.el6.x86_64
vdsm-cli-4.14.11.2-0.el6.noarch
vdsm-python-zombiereaper-4.14.11.2-0.el6.noarch
vdsm-xmlrpc-4.14.11.2-0.el6.noarch
vdsm-4.14.11.2-0.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.3.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.8.x86_64

Engine:
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
ovirt-release34-1.0.2-1.noarch
ovirt-engine-dbscripts-3.4.3-1.el6.noarch
ovirt-release-el6-9-1.noarch
ovirt-iso-uploader-3.4.0-1.el6.noarch
ovirt-engine-lib-3.4.3-1.el6.noarch
ovirt-engine-backend-3.4.3-1.el6.noarch
ovirt-engine-websocket-proxy-3.4.3-1.el6.noarch
ovirt-engine-userportal-3.4.3-1.el6.noarch
ovirt-engine-setup-base-3.4.3-1.el6.noarch
ovirt-host-deploy-java-1.2.2-1.el6.noarch
ovirt-engine-cli-3.3.0.6-1.el6.noarch
ovirt-engine-setup-3.4.3-1.el6.noarch
ovirt-engine-restapi-3.4.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.4.3-1.el6.noarch
ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch
ovirt-image-uploader-3.4.0-1.el6.noarch
ovirt-engine-tools-3.4.3-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.4.3-1.el6.noarch
ovirt-host-deploy-1.2.2-1.el6.noarch
ovirt-log-collector-3.4.1-1.el6.noarch
ovirt-engine-3.4.3-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.4.3-1.el6.noarch

I set my cluster compatibility to 3.4 after the upgrade as well.

Thank you!

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RAID Expanded

2014-08-04 Thread Neil
Hi Itamar,

Thanks for coming back to me. Strangely enough I can see the new LUNs
if I move SPM onto host01, but my other two hosts don't see the new
LUNs. I'm assuming it's probably something to do with the FC card
drivers/software, so I'm trying to work through it currently.

Thanks very much though.

Much appreciated.

Kind regards.

Neil Wilson.
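
In case someone else hits the same thing, newly presented FC LUNs can usually
be made visible without a reboot by rescanning the HBAs and refreshing
multipath (a minimal sketch; run on each host that cannot see the LUNs):

# rescan every SCSI/FC host adapter for new LUNs
for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done

# rebuild and list the multipath maps; the new LUNs should show up here
multipath -r
multipath -ll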


On Mon, Aug 4, 2014 at 2:19 PM, Itamar Heim  wrote:
> On 08/04/2014 11:18 AM, Neil wrote:
>>
>> Hi guys,
>>
>> My apologies if this is a re-post, but I don't recall seeing any replies.
>>
>> I've just added some new LUNS to my SAN, and when I go to edit the
>> main storage domain I don't see them, however if I try say "New
>> storage domain" the LUNS show up. Can I add in the additional LUNS to
>> my existing storage domain?
>
>
> yes, you can extend an existing SD with more LUNs
>
>
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>> On Mon, Jul 28, 2014 at 9:16 AM, Neil  wrote:
>>>
>>> Hi guys,
>>>
>>> I'm running oVirt 3.3 with an FC SAN. I had 12x500GB LUNS and these
>>> were all assigned to my data domain. Once the new volumes have finally
>>> finished syncing on the SAN, will oVirt automatically see the new
>>> LUNS, or is there anything that needs to be done in order to see them?
>>>
>>> I'm just trying to prepare for when things finally finish.
>>>
>>> Thank you.
>>>
>>> Regards.
>>>
>>> Neil Wilson.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RAID Expanded

2014-08-04 Thread Neil
Hi guys,

My apologies if this is a re-post, but I don't recall seeing any replies.

I've just added some new LUNS to my SAN, and when I go to edit the
main storage domain I don't see them, however if I try say "New
storage domain" the LUNS show up. Can I add in the additional LUNS to
my existing storage domain?

Thank you.

Regards.

Neil Wilson.


On Mon, Jul 28, 2014 at 9:16 AM, Neil  wrote:
> Hi guys,
>
> I'm running oVirt 3.3 with an FC SAN. I had 12x500GB LUNS and these
> were all assigned to my data domain. Once the new volumes have finally
> finished syncing on the SAN, will oVirt automatically see the new
> LUNS, or is there anything that needs to be done in order to see them?
>
> I'm just trying to prepare for when things finally finish.
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] RAID Expanded

2014-07-28 Thread Neil
Hi guys,

I'm running oVirt 3.3 with an FC SAN. I had 12x500GB LUNS and these
were all assigned to my data domain. Once the new volumes have finally
finished syncing on the SAN, will oVirt automatically see the new
LUNS, or is there anything that needs to be done in order to see them?

I'm just trying to prepare for when things finally finish.

Thank you.

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Misaligned Disk after resize

2014-07-22 Thread Neil
Hi Daniel,

Correct, we are using FC as primary storage. The VM itself was a
Server 2003 VM, so the only resource figures we could get were from
Task Manager, which showed the CPU load spiking to 100% usage.

The backup VM (prior to extending) is currently being restored from
the export domain and the extended VM is shutdown currently, so I
won't be able to give you the info until I boot it up again.

My apologies, another technician is actually working on this problem
and he's just gone ahead trying to restore.

Thanks for coming back to me though.

Much appreciated.

Regards.

Neil Wilson.
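
For anyone wanting to answer Daniel's questions below, host-side I/O wait can
be sampled like this (a minimal sketch; assumes the sysstat package is
installed):

# per-device utilisation and await, three 5-second samples
iostat -x 5 3

# overall CPU breakdown including %wa (I/O wait)
top -b -n 1 | head -n 5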


On Tue, Jul 22, 2014 at 3:27 PM, Daniel Helgenberger
 wrote:
> Hi Neil,
>
> To severely impact IOPS with a single non-aligned VM image is rather
> unusual.
>
> Can you please be more specific:
> IIRC, to run into the misalignment scenario you need block-backed
> storage for your VM. So FC / POSIX / iSCSI but not NFS.
> - The CPU load of the virtualization host or the VM?
> - Can you confirm (e.g. with top) that you have a constantly high I/O wait
> (>20%) there?
>
> Cheers,
>
>
>
> On Di, 2014-07-22 at 14:47 +0200, Neil wrote:
>> Hi guys,
>>
>> We re-sized (expanded) a Server 2003 VM a few weeks back and since
>> then the CPU load seems to be rather excessive. Also in the oVirt GUI
>> it reports the disk is misaligned, I've done a bit of reading and it
>> sounds like the only way to correct the alignment again is to
>> re-install, is this the only option?
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> --
>
> Daniel Helgenberger
> m box bewegtbild GmbH
>
> P: +49/30/2408781-22
> F: +49/30/2408781-10
>
> ACKERSTR. 19
> D-10115 BERLIN
>
>
> www.m-box.de  www.monkeymen.tv
>
> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
> Handeslregister: Amtsgericht Charlottenburg / HRB 112767
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Misaligned Disk after resize

2014-07-22 Thread Neil
Hi guys,

We re-sized (expanded) a Server 2003 VM a few weeks back and since
then the CPU load seems to be rather excessive. Also in the oVirt GUI
it reports the disk is misaligned, I've done a bit of reading and it
sounds like the only way to correct the alignment again is to
re-install, is this the only option?

Thank you.

Regards.

Neil Wilson.
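
A way to double-check whether a partition really is misaligned, independent of
what the GUI reports (a minimal sketch; /dev/sda is an example device, checked
from a rescue environment or against the mapped image, since the guest here is
Windows):

# aligned partitions start on sectors divisible by 2048 (1 MiB)
fdisk -lu /dev/sda

# or let parted do the check for partition 1
parted /dev/sda align-check optimal 1

On Server 2003 the first partition typically starts at sector 63, which is
usually what trips the misalignment warning.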
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Server 2012 R2 no drive found (Solved)

2014-05-29 Thread Neil
Good morning guys,

I've got good news: I re-downloaded the Windows 2012 R2 ISO last night
because a technician informed me that the original ISO was smaller
than the size mentioned on the Windows download link. After
re-downloading it and uploading it again, everything is now working
perfectly and I can install using virtio or IDE.

My apologies for this being such a silly issue.

Thank you very much for your all of your assistance.

It is greatly appreciated as usual.

Kind regards.

Neil Wilson.
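
For what it's worth, a truncated download like this can be caught before the
ISO is uploaded to the ISO domain (a minimal sketch; the filename is an example
and the expected size/checksum are whatever the download page publishes):

# compare the size against the one shown on the download page
ls -lh windows_server_2012_r2.iso

# and verify the published checksum
sha1sum windows_server_2012_r2.iso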


On Wed, May 28, 2014 at 7:12 PM, Neil  wrote:
> Hi Gianluca,
>
> Quite correct on all counts,  it was migrated from dreyou 3.1 and it was a
> new vm created in 3.4, so there could be inconsistencies from the upgrade
> but I've got no idea where to even start in trying to find them. I'll verify
> all the packages have been upgraded tomorrow and check from there.
>
> Thanks for the assistance.
>
> Regards.
>
> Neil Wilson
>
> On 28 May 2014 6:06 PM, "Gianluca Cecchi"  wrote:
>>
>> You wrote that you migrated from 3.1 to 3.4.
>> I imagine this is a new VM created from scratch when already in 3.4,
>> correct?
>> If I remember correctly 3.1 from ovirt repos was not available for CentOS
>> 6.x, correct?
>> So what was your upgrade path? Did you come from Dreyou repo and then
>> updated from Dreyou repo for 3.1 to oVirt repo for 3.4?
>> Did you verify if any package from that repo still remained and for any
>> reason create inconsistencies with current environment?
>>
>> Just guessing ...
>>
>> Gianluca
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Server 2012 R2 no drive found

2014-05-28 Thread Neil
Hi Gianluca,

Quite correct on all counts,  it was migrated from dreyou 3.1 and it was a
new vm created in 3.4, so there could be inconsistencies from the upgrade
but I've got no idea where to even start in trying to find them. I'll
verify all the packages have been upgraded tomorrow and check from there.

Thanks for the assistance.

Regards.

Neil Wilson
On 28 May 2014 6:06 PM, "Gianluca Cecchi"  wrote:

> You wrote that you migrated from 3.1 to 3.4.
> I imagine this is a new VM created from scratch when already in 3.4,
> correct?
> If I remember correctly 3.1 from ovirt repos was not available for CentOS
> 6.x, correct?
> So what was your upgrade path? Did you come from Dreyou repo and then
> updated from Dreyou repo for 3.1 to oVirt repo for 3.4?
> Did you verify if any package from that repo still remained and for any
> reason create inconsistencies with current environment?
>
> Just guessing ...
>
> Gianluca
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] post glusterfs 3.4 -> 3.5 upgrade issue in ovirt (3.4.0-1.fc19): bricks unavailable

2014-05-28 Thread Alastair Neil
I just noticed this in the console and I don't know if it is relevant.

When I look at the "General" tab on the hosts under "GlusterFS Version" it
shows "N/A".


On 28 May 2014 11:03, Alastair Neil  wrote:

> ovirt version is 3.4.  I did have a slightly older version of vdsm on
> gluster0 but I have updated it and the issue persists.  The compatibility
> version on the storage cluster is 3.3.
>
> I checked the logs for GlusterSyncJob notifications and there are none.
>
>
>
>
>
>
>
> On 28 May 2014 10:19, Sahina Bose  wrote:
>
>>  Hi Alastair,
>>
>> This could be a mismatch in the hostname identified in ovirt and gluster.
>>
>> You could check for any exceptions from GlusterSyncJob in engine.log.
>>
>> Also, what version of ovirt are you using. And the compatibility version
>> of your cluster?
>>
>>
>> On 05/28/2014 12:40 AM, Alastair Neil wrote:
>>
>>  Hi thanks for the reply. Here is an extract from a grep I ran on the
>> vdsm log grepping for the volume name vm-store.  It seems to indicate the
>> bricks are ONLINE.
>>
>>  I am uncertain how to extract meaningful information from the
>> engine.log; can you provide some guidance?
>>
>>  Thanks,
>>
>>  Alastair
>>
>>
>>
>>> Thread-100::DEBUG::2014-05-27
>>> 15:01:06,335::BindingXMLRPC::1067::vds::(wrapper) client
>>> [129.174.94.239]::call volumeStatus with ('vm-store', '', '') {}
>>> Thread-100::DEBUG::2014-05-27
>>> 15:01:06,356::BindingXMLRPC::1074::vds::(wrapper) return volumeStatus with
>>> {'volumeStatus': {'bricks': [{'status': 'ONLINE', 'brick':
>>> 'gluster0:/export/brick0', 'pid': '2675', 'port': '49158', 'hostuuid':
>>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}, {'status': 'ONLINE', 'brick':
>>> 'gluster1:/export/brick4/vm-store', 'pid': '2309', 'port': '49158',
>>> 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}], 'nfs': [{'status':
>>> 'ONLINE', 'hostname': '129.174.126.56', 'pid': '27012', 'port': '2049',
>>> 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE',
>>> 'hostname': 'gluster0', 'pid': '12875', 'port': '2049', 'hostuuid':
>>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'shd': [{'status': 'ONLINE',
>>> 'hostname': '129.174.126.56', 'pid': '27019', 'hostuuid':
>>> '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE', 'hostname':
>>> 'gluster0', 'pid': '12882', 'hostuuid':
>>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'name': 'vm-store'}, 'status':
>>> {'message': 'Done', 'code': 0}}
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:55,507::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-c

Re: [ovirt-users] post glusterfs 3.4 -> 3.5 upgrade issue in ovirt (3.4.0-1.fc19): bricks unavailable

2014-05-28 Thread Alastair Neil
ovirt version is 3.4.  I did have a slightly older version of vdsm on
gluster0 but I have updated it and the issue persists.  The compatibility
version on the storage cluster is 3.3.

I checked the logs for GlusterSyncJob notifications and there are none.
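
For reference, the checks Sahina suggests below can be run like this (a minimal
sketch; the volume name is the one from this thread):

# engine side: any sync exceptions?
grep -i GlusterSyncJob /var/log/ovirt-engine/engine.log | tail

# gluster side: peer identities and brick status as gluster itself sees them
gluster peer status
gluster volume status vm-store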







On 28 May 2014 10:19, Sahina Bose  wrote:

>  Hi Alastair,
>
> This could be a mismatch in the hostname identified in ovirt and gluster.
>
> You could check for any exceptions from GlusterSyncJob in engine.log.
>
> Also, what version of ovirt are you using. And the compatibility version
> of your cluster?
>
>
> On 05/28/2014 12:40 AM, Alastair Neil wrote:
>
>  Hi thanks for the reply. Here is an extract from a grep I ran on the
> vdsm log grepping for the volume name vm-store.  It seems to indicate the
> bricks are ONLINE.
>
>  I am uncertain how to extract meaningful information from the engine.log;
> can you provide some guidance?
>
>  Thanks,
>
>  Alastair
>
>
>
>> Thread-100::DEBUG::2014-05-27
>> 15:01:06,335::BindingXMLRPC::1067::vds::(wrapper) client
>> [129.174.94.239]::call volumeStatus with ('vm-store', '', '') {}
>> Thread-100::DEBUG::2014-05-27
>> 15:01:06,356::BindingXMLRPC::1074::vds::(wrapper) return volumeStatus with
>> {'volumeStatus': {'bricks': [{'status': 'ONLINE', 'brick':
>> 'gluster0:/export/brick0', 'pid': '2675', 'port': '49158', 'hostuuid':
>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}, {'status': 'ONLINE', 'brick':
>> 'gluster1:/export/brick4/vm-store', 'pid': '2309', 'port': '49158',
>> 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}], 'nfs': [{'status':
>> 'ONLINE', 'hostname': '129.174.126.56', 'pid': '27012', 'port': '2049',
>> 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE',
>> 'hostname': 'gluster0', 'pid': '12875', 'port': '2049', 'hostuuid':
>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'shd': [{'status': 'ONLINE',
>> 'hostname': '129.174.126.56', 'pid': '27019', 'hostuuid':
>> '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE', 'hostname':
>> 'gluster0', 'pid': '12882', 'hostuuid':
>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'name': 'vm-store'}, 'status':
>> {'message': 'Done', 'code': 0}}
>> Thread-16::DEBUG::2014-05-27
>> 15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>> iflag=direct
>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>> bs=4096 count=1' (cwd None)
>> Thread-16::DEBUG::2014-05-27
>> 15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>> iflag=direct
>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>> bs=4096 count=1' (cwd None)
>> Thread-16::DEBUG::2014-05-27
>> 15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>> iflag=direct
>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>> bs=4096 count=1' (cwd None)
>> Thread-16::DEBUG::2014-05-27
>> 15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>> iflag=direct
>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>> bs=4096 count=1' (cwd None)
>> Thread-16::DEBUG::2014-05-27
>> 15:01:55,507::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>> iflag=direct
>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>> bs=4096 count=1' (cwd None)
>> Thread-16::DEBUG::2014-05-27
>> 15:02:05,549::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>> iflag=direct
>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>> bs=4096 count=1' (cwd None)
>> Thread-16::DEBUG::2014-05-27
>> 15:02:15,590::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>> if

Re: [ovirt-users] Removing VM disk issue (Solved)

2014-05-28 Thread Neil
Thanks Maor and Koen,

Koen: I tried the engine restart and vdsm restart, but that didn't
work unfortunately.

Maor: I ran...

psql -U postgres engine

then...

UPDATE images set imagestatus = 1 where image_guid = (SELECT
image_guid from all_disks where disk_alias = 'proxy02_Disk0' and
vm_names ='proxy02');
UPDATE 1

This then unlocked the disk and it allowed me to remove the entire VM
and disk thereafter.

Thank you very much for your assistance.

Kind regards.

Neil Wilson.
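
For the archives, the safer sequence is to run Maor's SELECT first and only
fire the UPDATE once exactly one row comes back (same queries as quoted below,
just wrapped for psql):

# confirm that exactly one image matches before touching anything
psql -U postgres engine -c "SELECT image_guid, imagestatus FROM images WHERE image_guid = (SELECT image_guid FROM all_disks WHERE disk_alias = 'proxy02_Disk0' AND vm_names = 'proxy02');"

# then unlock it (imagestatus 1 = OK, 2 = LOCKED)
psql -U postgres engine -c "UPDATE images SET imagestatus = 1 WHERE image_guid = (SELECT image_guid from all_disks WHERE disk_alias = 'proxy02_Disk0' AND vm_names = 'proxy02');"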

On Wed, May 28, 2014 at 3:18 PM, Koen Vanoppen  wrote:
> We have this same issue once in a while. Most of the time it is solved
> by just restarting the oVirt engine. If this doesn't work, try restarting
> your vdsm service.
>
> Kind regards
>
>
> 2014-05-26 10:07 GMT+02:00 Maor Lipchuk :
>
>> On 05/19/2014 05:57 PM, Neil wrote:
>> > Hi Maor,
>> >
>> > Sorry for the late reply.
>> >
>> > Unfortunately I don't have much in the way of older logs, but can
>> > confirm that the VM has been shutdown for about 3 months so there
>> > really shouldn't be any tasks running.
>> >
>> > My version is ovirt-engine-3.3.3-2.el6.noarch
>> >
>> > I presume running the command won't have any affect on any other VM's
>> > so it should be safe to do this?
>> hi Neil,
>>
>> The command should not affect other VMs (Make sure to run the command
>> with "vm_names ='proxy02'").
>>
>> Just to be sure use SELECT before running the command to check that only
>> one image is returned:
>> SELECT * FROM images where image_guid = (SELECT image_guid from
>> all_disks where disk_alias = 'proxy02_Disk0' and vm_names ='proxy02');
>> >
>> > Thank you!
>> >
>> > Regards.
>> >
>> > Neil Wilson.
>> >
>> >
>> > On Thu, May 15, 2014 at 11:52 AM, Maor Lipchuk 
>> > wrote:
>> >> Hi Neil,
>> >>
>> >> I have been looking at your logs but it seems that there is no
>> >> indication of any operation being done on this specific disk.
>> >> I also see that there are no running tasks in VDSM, so the disk should
>> >> not be locked.
>> >>
>> >> If you have any older engine logs please send them so we can
>> >> investigate
>> >> the problem better.
>> >>
>> >> What is the engine version you are running?
>> >>
>> >> If you want to unlock the disk, you can run this query in your
>> >> postgres:
>> >> UPDATE images set imagestatus = 1 where image_guid = (SELECT image_guid
>> >> from all_disks where disk_alias = 'proxy02_Disk0' and vm_names
>> >> ='proxy02');
>> >>
>> >> regards,
>> >> Maor
>> >>
>> >> On 05/14/2014 05:30 PM, Neil wrote:
>> >>> Hi Maor,
>> >>>
>> >>> Attached are the logs, and below is a screenshot of the event log, I
>> >>> removed the two disks that were attached to the proxy previously which
>> >>> completed.
>> >>>
>> >>> Thank you!
>> >>>
>> >>> Regards.
>> >>>
>> >>> Neil Wilson.
>> >>>
>> >>>
>> >>>
>> >>> On Wed, May 14, 2014 at 2:43 PM, Maor Lipchuk 
>> >>> wrote:
>> >>>> Hi Neil,
>> >>>>
>> >>>> Can u please attach the logs of engine and VDSM.
>> >>>> What there is in the event log, was there any operation being done on
>> >>>> the disk before?
>> >>>>
>> >>>> regards,
>> >>>> Maor
>> >>>>
>> >>>> On 05/14/2014 03:35 PM, Neil wrote:
>> >>>>> Hi guys,
>> >>>>>
>> >>>>> I'm trying to remove a VM and reclaim the space that the VM was
>> >>>>> using.
>> >>>>> This particular VM had a thin provisioned disk attached as well as
>> >>>>> "fat" provisioned, the "fat" disks I managed to detach, however when
>> >>>>> I
>> >>>>> try to remove the VM, it says "The following disks are locked:
>> >>>>> Please
>> >>>>> try again in a few minutes. Attached is a screenshot from the GUI
>> >>>>> side, showing the proxy disk being locked, any ideas?
>> >>>>>
>> >>>>> I'm wanting to completely remove the VM and reclaim the space for
>> >>>>> the
>> >>>>> datacenter.
>> >>>>>
>> >>>>> Thank you.
>> >>>>>
>> >>>>> Regards.
>> >>>>>
>> >>>>> Neil Wilson
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> ___
>> >>>>> Users mailing list
>> >>>>> Users@ovirt.org
>> >>>>> http://lists.ovirt.org/mailman/listinfo/users
>> >>>>>
>> >>>>
>> >>>
>> >>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Server 2012 R2 no drive found

2014-05-28 Thread Neil
Thank you Rene, greatly appreciated.

I'm using the Server 2012 R2 Standard edition and at this point I
haven't even got a NIC added to the VM.

Another piece of info which might be useful (or not) is that I'm using an FC
SAN for storage; not sure if this would have any effect.

Thanks.

Regards.

Neil Wilson.


On Wed, May 28, 2014 at 2:42 PM, René Koch  wrote:
> Hi Neil,
>
> I'll test Windows Server 2012 R2 again with IDE and VirtIO disks (and e1000
> and VirtIO network cards) with the VirtIO drivers from the RHEL channel and
> the drivers bundled with the spice guest agent, and will let you know the
> results. Please keep in mind that it will take some time...
> Btw, I will use Windows 2012 R2 english, 180-days trial...
>
>
> Regards,
> René
>
>
>
> On 05/28/2014 01:11 PM, Neil wrote:
>>
>> Hi guys,
>>
>> Is anyone able to assist here? I still can't install 2012 R2, whether
>> I use IDE, virtio, or virtio-scsi, I don't see a drive during the
>> first install.
>>
>> I've tried the "virtio-win-1.6.8/vioserial/2k12R2/amd64" as a test and
>> this also doesn't work, so it doesn't seem like a driver issue, more
>> of a cluster issue.
>>
>> I've also tried changing my Datacenter compatibility and Cluster to
>> 3.4, but this seemed to make no difference. Both my hosts are upgraded
>> to Centos 6.5 all updates, as well as my ovirt and VDSM is updated to
>> the latest stable 3.4 packages.
>>
>> I'm at a loss here, but desperately need to get this R2 installed.
>>
>> Further to what Paul mentioned below, this was upgraded from 3.1 so it
>> doesn't just appear to happen to new 3.4 installs.
>>
>> Any help is greatly appreciated.
>>
>> On Mon, May 26, 2014 at 12:45 PM, Neil  wrote:
>>>
>>> Hi guys,
>>>
>>> Thanks for the replies.
>>>
>>> What's strange is that even when choosing an IDE disk I don't see any
>>> hard drive showing up when I try to install 2012R2, is this normal? I
>>> can understand why R2 won't see the virtio scsi disk, but to me it
>>> should be showing up when using IDE, or am I wrong here?
>>>
>>> I see that in the virtio drivers from RHEL supplementary there is a
>>> folder "virtio-win-1.6.8/vioserial/2k12R2/amd64" but the license seems
>>> to indicate that you need a valid subscription in order to use these
>>> drivers... if this is true then is no one using server 2012 R2 on
>>> oVirt without a valid subscription?
>>>
>>> I see with the drivers from
>>> "http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/src/";
>>> there is only a Win8 driver, and trying to use this on my 2012 R2
>>> doesn't find a matching driver.
>>>
>>> Can anyone clarify this from Redhat? If you using RHEV, presumably you
>>> can use R2, but it seems using oVirt you aren't allowed to?
>>>
>>> Thanks.
>>>
>>> Regards.
>>>
>>> Neil Wilson.
>>>
>>>
>>>
>>> On Fri, May 23, 2014 at 10:40 PM, Paul.LKW  wrote:
>>>>
>>>> Hi guys:
>>>> Please do not just say that yours is working; that is not helpful. Some of
>>>> us (including me) have already reported issues on the Windows platform and
>>>> there is no clear way to report them. Do you think the paid Red Hat version
>>>> would behave the same, or would the client already be stuck?
>>>> I noted this seems to occur only in newly installed oVirt; old
>>>> installations are fine.
>>>>
>>>> Paul.LKW
>>>>
>>>> On 2014/5/23 at 11:02 PM, "Neil"  wrote:
>>>>>
>>>>>
>>>>> Hi guys,
>>>>>
>>>>> I've been trying to install 2012 R2 onto my ovirt 3.4 but no matter
>>>>> what I do, it either doesn't find an IDE drive or a Virtio drive (when
>>>>> using the virtio ISO).
>>>>>
>>>>> ovirt-engine-lib-3.4.0-1.el6.noarch
>>>>> ovirt-engine-restapi-3.4.0-1.el6.noarch
>>>>> ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
>>>>> ovirt-engine-3.4.0-1.el6.noarch
>>>>> ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
>>>>> ovirt-host-deploy-java-1.2.0-1.el6.noarch
>>>>> ovirt-engine-cli-3.2.0.10-1.el6.noarch
>>>>> ovirt-engine-setup-3.4.0-1.el6.noarch
>>>>> ovirt-host-deploy-1.2.0-1.el6.noarch
&

Re: [ovirt-users] Server 2012 R2 no drive found

2014-05-28 Thread Neil
Hi guys,

Is anyone able to assist here? I still can't install 2012 R2, whether
I use IDE, virtio, or virtio-scsi, I don't see a drive during the
first install.

I've tried the "virtio-win-1.6.8/vioserial/2k12R2/amd64" as a test and
this also doesn't work, so it doesn't seem like a driver issue, more
of a cluster issue.

I've also tried changing my Datacenter compatibility and Cluster to
3.4, but this seemed to make no difference. Both my hosts are upgraded
to Centos 6.5 all updates, as well as my ovirt and VDSM is updated to
the latest stable 3.4 packages.

I'm at a loss here, but desperately need to get this R2 installed.

Further to what Paul mentioned below, this was upgraded from 3.1 so it
doesn't just appear to happen to new 3.4 installs.

Any help is greatly appreciated.
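
One more thing worth checking: vioserial is the serial-channel driver, not the
disk driver. During Windows setup the storage driver would normally be loaded
from the viostor (VirtIO block) or vioscsi (VirtIO-SCSI) folder of the driver
ISO instead. A sketch of where to look, with the paths inferred from the layout
mentioned above, so treat them as assumptions:

# on the mounted virtio-win ISO (mount point is an example)
ls /mnt/virtio-win/viostor/2k12R2/amd64    # VirtIO block storage driver
ls /mnt/virtio-win/vioscsi/2k12R2/amd64    # VirtIO-SCSI storage driver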

On Mon, May 26, 2014 at 12:45 PM, Neil  wrote:
> Hi guys,
>
> Thanks for the replies.
>
> What's strange is that even when choosing an IDE disk I don't see any
> hard drive showing up when I try to install 2012 R2. Is this normal? I
> can understand why R2 won't see the virtio scsi disk, but to me it
> should be showing up when using IDE, or am I wrong here?
>
> I see that in the virtio drivers from RHEL supplementary there is a
> folder "virtio-win-1.6.8/vioserial/2k12R2/amd64" but the license seems
> to indicate that you need a valid subscription in order to use these
> drivers... if this is true then is no one using server 2012 R2 on
> oVirt without a valid subscription?
>
> I see with the drivers from
> "http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/src/";
> there is only a Win8 driver, and trying to use this on my 2012 R2
> doesn't find a matching driver.
>
> Can anyone clarify this from Redhat? If you using RHEV, presumably you
> can use R2, but it seems using oVirt you aren't allowed to?
>
> Thanks.
>
> Regards.
>
> Neil Wilson.
>
>
>
> On Fri, May 23, 2014 at 10:40 PM, Paul.LKW  wrote:
>> Hi guys:
>> Please do not just say that yours is working; that is not helpful. Some of
>> us (including me) have already reported issues on the Windows platform and
>> there is no clear way to report them. Do you think the paid Red Hat version
>> would behave the same, or would the client already be stuck?
>> I noted this seems to occur only in newly installed oVirt; old
>> installations are fine.
>>
>> Paul.LKW
>>
>> On 2014/5/23 at 11:02 PM, "Neil"  wrote:
>>>
>>> Hi guys,
>>>
>>> I've been trying to install 2012 R2 onto my ovirt 3.4 but no matter
>>> what I do, it either doesn't find an IDE drive or a Virtio drive (when
>>> using the virtio ISO).
>>>
>>> ovirt-engine-lib-3.4.0-1.el6.noarch
>>> ovirt-engine-restapi-3.4.0-1.el6.noarch
>>> ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
>>> ovirt-engine-3.4.0-1.el6.noarch
>>> ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
>>> ovirt-host-deploy-java-1.2.0-1.el6.noarch
>>> ovirt-engine-cli-3.2.0.10-1.el6.noarch
>>> ovirt-engine-setup-3.4.0-1.el6.noarch
>>> ovirt-host-deploy-1.2.0-1.el6.noarch
>>> ovirt-engine-backend-3.4.0-1.el6.noarch
>>> ovirt-image-uploader-3.4.0-1.el6.noarch
>>> ovirt-engine-tools-3.4.0-1.el6.noarch
>>> ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
>>> ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
>>> ovirt-engine-setup-base-3.4.0-1.el6.noarch
>>> ovirt-iso-uploader-3.4.0-1.el6.noarch
>>> ovirt-engine-userportal-3.4.0-1.el6.noarch
>>> ovirt-log-collector-3.4.1-1.el6.noarch
>>> ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
>>> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
>>> ovirt-engine-dbscripts-3.4.0-1.el6.noarch
>>>
>>> vdsm-4.14.6-0.el6.x86_64
>>> vdsm-xmlrpc-4.14.6-0.el6.noarch
>>> vdsm-cli-4.14.6-0.el6.noarch
>>> vdsm-python-zombiereaper-4.14.6-0.el6.noarch
>>> vdsm-python-4.14.6-0.el6.x86_64
>>>
>>> qemu-img-0.12.1.2-2.415.el6_5.8.x86_64
>>> qemu-kvm-tools-0.12.1.2-2.415.el6_5.8.x86_64
>>> qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64
>>> gpxe-roms-qemu-0.9.7-6.9.el6.noarch
>>>
>>> Is there a special trick to get this working, or could something be
>>> wrong? When it comes to creating a guest I don't see a Server 2012 R2
>>> 64bit in the drop down list?
>>>
>>> Thanks.
>>>
>>> Regards.
>>>
>>> Neil Wilson.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't Install/Upgrade host

2014-05-28 Thread Neil
Hi Alon,

That has sorted it out. The permissions got messed up in between
restoring from previous backups etc.

Thank you very much, greatly appreciated.

Regards.

Neil Wilson
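
For the archives, the fix boiled down to restoring the ownership and modes Alon
points out below (a sketch; exact modes may differ slightly between releases,
the key points being that the public CA is world-readable and that neither file
is executable):

# public CA certificate: world readable, not executable
chmod 0644 /etc/pki/ovirt-engine/ca.pem

# CA private key: owned by the engine user, not executable, not world readable
chown ovirt:ovirt /etc/pki/ovirt-engine/private/ca.pem
chmod 0600 /etc/pki/ovirt-engine/private/ca.pem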

On Wed, May 28, 2014 at 9:11 AM, Alon Bar-Lev  wrote:
>
>
> - Original Message -
>> From: "Neil" 
>> To: "Alon Bar-Lev" 
>> Cc: users@ovirt.org
>> Sent: Wednesday, May 28, 2014 10:04:00 AM
>> Subject: Re: [ovirt-users] Can't Install/Upgrade host
>>
>> Hi Alon,
>>
>> Thanks for the reply, below is the output.
>
> Something changed the file attributes of ca.pem (two places) to be incorrect.
>
>> [root@engine01 ovirt-engine]#  ls -lR /etc/pki/ovirt-engine/
>> /etc/pki/ovirt-engine/:
>> total 80
>> lrwxrwxrwx. 1 root  root 6 May 16 13:56 apache-ca.pem -> ca.pem
>> -rw-r--r--. 1 root  root   570 May 16 13:56 cacert.conf
>> -rw-r--r--. 1 root  root   519 May 16 13:56 cacert.template
>> -rw-r--r--. 1 root  root   384 Mar 24 12:47 cacert.template.in
>> -rw-r--r--. 1 root  root   482 May 16 13:56 cacert.template.rpmnew
>> -rwxr-x---. 1 root  root  3362 May 16 13:56 ca.pem
>
> this ^ should be world readable, not executable.
>
>> -rw-r--r--. 1 root  root   585 May 16 13:56 cert.conf
>> drwxr-xr-x. 2 ovirt ovirt 4096 Mar 24 12:47 certs
>> -rw-r--r--. 1 root  root   572 May 16 13:56 cert.template
>> -rw-r--r--. 1 root  root   483 Mar 24 12:47 cert.template.in
>> -rw-r--r--. 1 root  root   534 May 16 13:56 cert.template.rpmnew
>> -rw-r--r--. 1 ovirt ovirt  950 May 22 20:07 database.txt
>> -rw-r--r--. 1 ovirt ovirt   20 May 22 20:07 database.txt.attr
>> -rw-r--r--. 1 ovirt ovirt   20 May 16 13:56 database.txt.attr.old
>> -rw-r--r--. 1 ovirt ovirt  885 May 16 13:56 database.txt.old
>> drwxr-xr-x. 2 root  root  4096 Mar 24 12:47 keys
>> -rw-r--r--. 1 root  root   548 Mar 24 12:47 openssl.conf
>> drwxr-x---. 2 ovirt ovirt 4096 Mar 24 12:47 private
>> drwxr-xr-x. 2 ovirt ovirt 4096 May 27 13:16 requests
>> -rw-r--r--. 1 ovirt ovirt3 May 22 20:07 serial.txt
>> -rw-r--r--. 1 ovirt ovirt3 May 16 13:56 serial.txt.old
>>
>> /etc/pki/ovirt-engine/certs:
>> total 100
>> -rw-r--r--. 1 root root 3362 May 16 13:56 01.pem
>> -rw-r--r--. 1 root root 3509 May 16 13:56 02.pem
>> -rw-r--r--. 1 root root 3466 May 16 13:56 03.pem
>> -rw-r--r--. 1 root root 3466 May 16 13:56 04.pem
>> -rw-r--r--. 1 root root 3362 May 16 13:56 05.pem
>> -rw-r--r--. 1 root root 3509 May 16 13:56 06.pem
>> -rw-r--r--. 1 root root 3362 May 16 13:56 07.pem
>> -rw-r--r--. 1 root root 3509 May 16 13:56 08.pem
>> -rw-r--r--. 1 root root 3466 May 16 13:56 09.pem
>> -rw-r--r--. 1 root root 3467 May 16 13:56 0A.pem
>> -rw-r--r--. 1 root root 3467 May 16 13:56 0B.pem
>> -rw-r--r--. 1 root root 3467 May 16 13:56 0C.pem
>> -rw-r--r--. 1 root root 3467 May 16 13:56 0D.pem
>> -rw-r--r--. 1 root root 3070 May 16 13:56 0E.pem
>> -rw-r--r--. 1 root root 3070 May 16 13:56 0F.pem
>> -rw-r--r--. 1 root root 3070 May 16 13:56 10.251.193.8cert.pem
>> -rw-r--r--. 1 root root 3070 May 16 13:56 10.251.193.9cert.pem
>
> these two are strange as I expect to be owned by ovirt user as engine created.
>
>> -rw-r--r--. 1 root root 4267 May 22 20:07 10.pem
>> -rw-r-. 1 root root 3509 May 16 13:56 apache.cer
>> -rw-r--r--. 1 root root  763 May 16 13:56 ca.der
>> -rw-r--r--. 1 root root 3509 May 16 13:56 engine.cer
>> -rw-r--r--. 1 root root  784 May 16 13:56 engine.der
>> -rw-r--r--. 1 root root 4267 May 22 20:07 websocket-proxy.cer
>>
>> /etc/pki/ovirt-engine/keys:
>> total 36
>> -rw-r-. 1 root  root   916 May 16 13:56 apache.key.nopass
>> -rw-r-. 1 root  root  2786 May 16 13:56 apache.p12
>> -rw---. 1 root  root  1054 May 22 20:07 engine_id_rsa
>> -rw---. 1 root  root   916 May 16 13:56 engine_id_rsa.20140522200739
>> -rw---. 1 root  root   912 May 16 13:56 engine_id_rsa.old
>> -rw-r-. 1 ovirt ovirt 2786 May 16 13:56 engine.p12
>> -rw-r--r--. 1 root  root   220 May 16 13:56 engine.ssh.key.txt
>> -rw---. 1 ovirt ovirt 1832 May 22 20:07 websocket-proxy.key.nopass
>> -rw---. 1 root  root  2517 May 22 20:07 websocket-proxy.p12
>>
>> /etc/pki/ovirt-engine/private:
>> total 4
>> -rwxr-x---. 1 root root 887 May 16 13:56 ca.pem
>
> this should be owned by ovirt user and not be executable.
>
>>
>> /etc/pki/ovirt-engine/requests:
>> total 24
>> -rw-r--r--. 1 root  root  862 May 16 13:56 10.251.193.8req.pem
>> -rw-r--r--. 1 ovirt ovirt 862 May 27 17:35 10.251.193.9.req
>> -rw-r--r--. 1 root  root  862 May 16 13:56 1

Re: [ovirt-users] Can't Install/Upgrade host

2014-05-28 Thread Neil
Hi Alon,

Thanks for the reply, below is the output.

[root@engine01 ovirt-engine]#  ls -lR /etc/pki/ovirt-engine/
/etc/pki/ovirt-engine/:
total 80
lrwxrwxrwx. 1 root  root 6 May 16 13:56 apache-ca.pem -> ca.pem
-rw-r--r--. 1 root  root   570 May 16 13:56 cacert.conf
-rw-r--r--. 1 root  root   519 May 16 13:56 cacert.template
-rw-r--r--. 1 root  root   384 Mar 24 12:47 cacert.template.in
-rw-r--r--. 1 root  root   482 May 16 13:56 cacert.template.rpmnew
-rwxr-x---. 1 root  root  3362 May 16 13:56 ca.pem
-rw-r--r--. 1 root  root   585 May 16 13:56 cert.conf
drwxr-xr-x. 2 ovirt ovirt 4096 Mar 24 12:47 certs
-rw-r--r--. 1 root  root   572 May 16 13:56 cert.template
-rw-r--r--. 1 root  root   483 Mar 24 12:47 cert.template.in
-rw-r--r--. 1 root  root   534 May 16 13:56 cert.template.rpmnew
-rw-r--r--. 1 ovirt ovirt  950 May 22 20:07 database.txt
-rw-r--r--. 1 ovirt ovirt   20 May 22 20:07 database.txt.attr
-rw-r--r--. 1 ovirt ovirt   20 May 16 13:56 database.txt.attr.old
-rw-r--r--. 1 ovirt ovirt  885 May 16 13:56 database.txt.old
drwxr-xr-x. 2 root  root  4096 Mar 24 12:47 keys
-rw-r--r--. 1 root  root   548 Mar 24 12:47 openssl.conf
drwxr-x---. 2 ovirt ovirt 4096 Mar 24 12:47 private
drwxr-xr-x. 2 ovirt ovirt 4096 May 27 13:16 requests
-rw-r--r--. 1 ovirt ovirt3 May 22 20:07 serial.txt
-rw-r--r--. 1 ovirt ovirt3 May 16 13:56 serial.txt.old

/etc/pki/ovirt-engine/certs:
total 100
-rw-r--r--. 1 root root 3362 May 16 13:56 01.pem
-rw-r--r--. 1 root root 3509 May 16 13:56 02.pem
-rw-r--r--. 1 root root 3466 May 16 13:56 03.pem
-rw-r--r--. 1 root root 3466 May 16 13:56 04.pem
-rw-r--r--. 1 root root 3362 May 16 13:56 05.pem
-rw-r--r--. 1 root root 3509 May 16 13:56 06.pem
-rw-r--r--. 1 root root 3362 May 16 13:56 07.pem
-rw-r--r--. 1 root root 3509 May 16 13:56 08.pem
-rw-r--r--. 1 root root 3466 May 16 13:56 09.pem
-rw-r--r--. 1 root root 3467 May 16 13:56 0A.pem
-rw-r--r--. 1 root root 3467 May 16 13:56 0B.pem
-rw-r--r--. 1 root root 3467 May 16 13:56 0C.pem
-rw-r--r--. 1 root root 3467 May 16 13:56 0D.pem
-rw-r--r--. 1 root root 3070 May 16 13:56 0E.pem
-rw-r--r--. 1 root root 3070 May 16 13:56 0F.pem
-rw-r--r--. 1 root root 3070 May 16 13:56 10.251.193.8cert.pem
-rw-r--r--. 1 root root 3070 May 16 13:56 10.251.193.9cert.pem
-rw-r--r--. 1 root root 4267 May 22 20:07 10.pem
-rw-r-. 1 root root 3509 May 16 13:56 apache.cer
-rw-r--r--. 1 root root  763 May 16 13:56 ca.der
-rw-r--r--. 1 root root 3509 May 16 13:56 engine.cer
-rw-r--r--. 1 root root  784 May 16 13:56 engine.der
-rw-r--r--. 1 root root 4267 May 22 20:07 websocket-proxy.cer

/etc/pki/ovirt-engine/keys:
total 36
-rw-r-. 1 root  root   916 May 16 13:56 apache.key.nopass
-rw-r-. 1 root  root  2786 May 16 13:56 apache.p12
-rw---. 1 root  root  1054 May 22 20:07 engine_id_rsa
-rw---. 1 root  root   916 May 16 13:56 engine_id_rsa.20140522200739
-rw---. 1 root  root   912 May 16 13:56 engine_id_rsa.old
-rw-r-. 1 ovirt ovirt 2786 May 16 13:56 engine.p12
-rw-r--r--. 1 root  root   220 May 16 13:56 engine.ssh.key.txt
-rw---. 1 ovirt ovirt 1832 May 22 20:07 websocket-proxy.key.nopass
-rw---. 1 root  root  2517 May 22 20:07 websocket-proxy.p12

/etc/pki/ovirt-engine/private:
total 4
-rwxr-x---. 1 root root 887 May 16 13:56 ca.pem

/etc/pki/ovirt-engine/requests:
total 24
-rw-r--r--. 1 root  root  862 May 16 13:56 10.251.193.8req.pem
-rw-r--r--. 1 ovirt ovirt 862 May 27 17:35 10.251.193.9.req
-rw-r--r--. 1 root  root  862 May 16 13:56 10.251.193.9req.pem
-rw-r--r--. 1 root  root  603 May 16 13:56 ca.csr
-rw-r--r--. 1 root  root  597 May 16 13:56 engine.req
-rw-r--r--. 1 root  root  863 May 22 20:07 websocket-proxy.req



On Wed, May 28, 2014 at 8:19 AM, Alon Bar-Lev  wrote:
> Please send the output of:
>
> # ls -lR /etc/pki/ovirt-engine/
>
> - Original Message -----
>> From: "Neil" 
>> To: users@ovirt.org
>> Sent: Wednesday, May 28, 2014 9:04:57 AM
>> Subject: [ovirt-users] Can't Install/Upgrade host
>>
>> Hi guys,
>>
>> I'm trying to upgrade/re-install a host running Centos 6.5, but even
>> after removing the host completely and trying to re-add it, I keep
>> getting a "Certificate enrollment failed" error. The full error below
>> is taken from my engine.log...
>>
>> 2014-05-27 10:38:33,729 ERROR
>> [org.ovirt.engine.core.utils.servlet.ServletUtils]
>> (ajp--127.0.0.1-8702-4) Can't read file
>> "/var/lib/ovirt-engine/reports.xml" for request
>> "/ovirt-engine/services/reports-ui", will send a 404 error response.
>> 2014-05-27 11:10:49,343 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
>> (VdsDeploy) Error during deploy dialog: java.io.IOException:
>> Unexpected connection termination
>> 2014-05-27 11:10:49,344 ERROR
>> [org.ovirt.engine.core.utils.ssh.SSHDialog]
>> (org.ovirt.thread.pool-6-thread-31) SS

[ovirt-users] Can't Install/Upgrade host

2014-05-27 Thread Neil
97b7d7a] Error during host
10.251.193.9 install, prefering first exception:
java.lang.RuntimeException: Certificate enrollment failed
2014-05-27 15:04:24,352 ERROR
[org.ovirt.engine.core.bll.InstallVdsCommand]
(org.ovirt.thread.pool-6-thread-34) [797b7d7a] Host installation
failed for host 322cbee8-16e6-11e2-9d38-6388c61dd004,
node02.blabla.gov.za.: java.lang.RuntimeException: Certificate
enrollment failed
2014-05-27 16:48:49,075 ERROR
[org.ovirt.engine.core.utils.servlet.ServletUtils]
(ajp--127.0.0.1-8702-4) Can't read file
"/var/lib/ovirt-engine/reports.xml" for request
"/ovirt-engine/services/reports-ui", will send a 404 error response.
2014-05-27 17:03:10,817 ERROR
[org.ovirt.engine.core.utils.hostinstall.OpenSslCAWrapper] (VdsDeploy)
Sign Certificate request failed with exit code 1
2014-05-27 17:03:10,817 ERROR
[org.ovirt.engine.core.utils.hostinstall.OpenSslCAWrapper] (VdsDeploy)
Sign Certificate request script errors:
Error opening Certificate ca.pem
140117678909256:error:0200100D:system library:fopen:Permission
denied:bss_file.c:398:fopen('ca.pem','r')
140117678909256:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
Error opening CA private key private/ca.pem
140049924028232:error:0200100D:system library:fopen:Permission
denied:bss_file.c:398:fopen('private/ca.pem','r')
140049924028232:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
2014-05-27 17:03:10,821 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(VdsDeploy) Error during deploy dialog: java.lang.RuntimeException:
Certificate enrollment failed
2014-05-27 17:03:10,828 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-6-thread-18) [2bb26823] Error during host
10.251.193.9 install: java.lang.RuntimeException: Certificate
enrollment failed
2014-05-27 17:03:10,839 ERROR
[org.ovirt.engine.core.bll.InstallerMessages]
(org.ovirt.thread.pool-6-thread-18) [2bb26823] Installation
10.251.193.9: Certificate enrollment failed
2014-05-27 17:03:10,891 ERROR [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-6-thread-18) [2bb26823] Error during host
10.251.193.9 install, prefering first exception:
java.lang.RuntimeException: Certificate enrollment failed
2014-05-27 17:03:10,895 ERROR
[org.ovirt.engine.core.bll.InstallVdsCommand]
(org.ovirt.thread.pool-6-thread-18) [2bb26823] Host installation
failed for host d2debdfe-76e7-40cf-a7fd-78a0f50f14d4,
node02.blabla.gov.za.: java.lang.RuntimeException: Certificate
enrollment failed

I've looked around quite a bit and can't seem to find much.

Please could someone assist.

Thank you.

Regards,

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

