Re: [ovirt-users] problem after rebooting the node

2017-02-08 Thread Shalabh Goel
Sorry for the late reply. Actually the problem is with only one of my three
nodes, so I think it is an issue with the upgrade. I am using the oVirt Node
(ovirt-ng) OS. I will just re-install the node OS on this host and upgrade it
the same way I did the others.

Actually, I upgraded my storage node (NFS) and lost all my data, since it
was kept in separate folders under root (/iso and /vm) which got deleted when I
upgraded that node. So I will have to start all over again. :(

Anyway thanks for the help. Please just update the documentation on how to
upgrade the Ovirt-ng nodes properly (I did ask about that but never got a
reply :P).

On Mon, Feb 6, 2017 at 5:00 PM, Edward Haas  wrote:

> The ones you mentioned before, we just need the whole files and not
> snippets of them.
> vdsm.log, supervdsm.log, messages.log and the ovs ones you previously
> mentioned.
>
> On Mon, Feb 6, 2017 at 1:14 PM, Shalabh Goel 
> wrote:
>
>> Which log files exactly? Actually I am new to oVirt, so it would be really
>> helpful if you could tell me which ones.
>>
>> Thanks
>>
>> On Mon, Feb 6, 2017 at 4:39 PM, Edward Haas  wrote:
>>
>>> Please package the logs (tar or zip) and send them.
>>>
>>> On Mon, Feb 6, 2017 at 12:05 PM, Shalabh Goel 
>>> wrote:
>>>
 Yes, I am using OVS as the switch type and I did not know that it was
 not supported officially.

 The output of ovs-vsctl show is as follows:

 f634d53e-4849-488b-8454-6b1fafa7c6ac
 ovs_version: "2.6.90"

 I am attaching OVS switch logs below:

 /var/log/openvswitch/ovsdb-server.log


 2017-02-06T09:46:07.788Z|1|vlog|INFO|opened log file
 /var/log/openvswitch/ovsdb-server.log
 2017-02-06T09:46:07.791Z|2|ovsdb_server|INFO|ovsdb-server (Open
 vSwitch) 2.6.90
 2017-02-06T09:46:17.802Z|3|memory|INFO|2296 kB peak resident set
 size after 10.0 seconds
 2017-02-06T09:46:17.802Z|4|memory|INFO|cells:16 json-caches:1
 monitors:1 sessions:1

 ovs-vswitchd.log


 2017-02-06T09:46:07.999Z|1|vlog|INFO|opened log file
 /var/log/openvswitch/ovs-vswitchd.log
 2017-02-06T09:46:08.036Z|2|ovs_numa|INFO|Discovered 24 CPU cores
 on NUMA node 0
 2017-02-06T09:46:08.036Z|3|ovs_numa|INFO|Discovered 24 CPU cores
 on NUMA node 1
 2017-02-06T09:46:08.036Z|4|ovs_numa|INFO|Discovered 2 NUMA nodes
 and 48 CPU cores
 2017-02-06T09:46:08.037Z|5|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connecting...
 2017-02-06T09:46:08.037Z|6|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connected
 2017-02-06T09:46:08.039Z|7|bridge|INFO|ovs-vswitchd (Open vSwitch)
 2.6.90

 What should I do now?

 The engine says that "Host host2 does not comply with the cluster
 Default networks, the following networks are missing on host: 'ovirtmgmt'
 "

 What other logs should I attach?

 Thanks

 Shalabh Goel

 On Sun, Feb 5, 2017 at 1:10 PM, Edward Haas  wrote:

> Based on what I can see, you used OVS as the switch type and it seems
> ovs (openvswitch) is not properly installed on your host.
> Make sure that you have ovs operational by issuing "ovs-vsctl show".
>
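> If the ovsdb socket error from the logs persists, a quick check worth trying
> (assuming a systemd-based node) is something like:
>
>   systemctl status openvswitch
>   systemctl start openvswitch
>   systemctl enable openvswitch
>   ovs-vsctl show
>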
> You should note that OVS network support is not an official release
> feature, and you should only use it on versions 4.1 and up.
> Fixes will probably be submitted to master (appearing in nightly
> builds).
>
> Next time please include the mailing list in your replies and attach
> the log files; it is less spammy.
>
> Thanks,
> Edy.
>
> On Fri, Feb 3, 2017 at 5:07 AM, Shalabh Goel 
> wrote:
>
>> log from messages
>>
>> Feb  3 08:27:53 ovirtnode3 ovs-vsctl: 
>> ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock:
>> database connection failed (No such file or directory)
>> Feb  3 08:27:53 ovirtnode3 journal: vdsm vds ERROR Executing commands
>> failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database
>> connection failed (No su
>> ch file or directory)#012Traceback (most recent call last):#012  File
>> "/usr/share/vdsm/API.py", line 1531, in setupNetworks#012
>> supervdsm.getProxy().setup
>> Networks(networks, bondings, options)#012  File
>> "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
>> __call__#012return callMethod()#012  Fi
>> le "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in
>> #012**kwargs)#012  File "", line 2, in
>> setupNetworks#012  File "/usr
>> /lib64/python2.7/multiprocessing/managers.py", line 773, in
>> _callmethod#012raise convert_to_error(kind,
>> result)#012ConfigNetworkError: (21, 'Executing co
>> mmands failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock:
>> 

[ovirt-users] About H.264 encode in ovirt

2017-02-08 Thread 张 余歌
Hello, my friends. I want to know whether oVirt supports H.264 encoding. Does
anyone know? Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host DNS

2017-02-08 Thread Todd Punderson
I changed my DNS servers since installing my hosted engine and hosts. I've
manually set my /etc/resolv.conf to have the correct nameserver entries.
But I found that it was being overwritten when I rebooted. I did some
googling and found that in /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
(my management network name) there is a nameservers setting that gets
applied when vdsm starts and which updates my ifcfg files. I edited that file
to include my correct name servers.
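
For reference, that file appears to be plain JSON; the part I changed looks
roughly like this (other keys omitted, addresses are just examples):

  {
    "nameservers": ["192.168.1.10", "192.168.1.11"],
    ...
  }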

Now, if I choose a host, go to the network interfaces tab, click
"Setup Host Network", and then exit with the "Save" checkbox checked, the
file in the vdsm directory is changed back to my old nameserver entries.

I searched all over the hosted engine UI and I can't find where that old
entry is coming from. Where do I go to change this?
Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] USB 2.0 compatibility -- RESOLVED!

2017-02-08 Thread Jonathan Woytek
I was finally able to resolve this. So that other users can find the fix if
they run into the same issue:

tl;dr:
Edit the virtual machine in question, go under Console, make sure SPICE is
chosen for graphics, and under "USB Redirection" choose "Native." When you
reboot the VM, it will have both UHCI and EHCI (2.0) controllers. Now, any
USB 2.0 devices that you attach will work correctly.


Long version:
I struggled with this for quite a while. While doing some additional
research, I found a note in the 4.1.0 release notes referencing BZ 1373223.
This particular note was for ppc64 architecture systems, but it curiously
said that enabling USB Redirection under SPICE would change the USB
controllers available on the host. I dug a little deeper into the bug
thread and found a couple of references that seemed to indicate that
x86/x86_64 exhibited the same behavior. Apparently, UHCI (1.1) is the
default because it supports SmartCards, but the EHCI (2.0) controller does
not. So, turning on USB Redirection under SPICE settings for the console
enables an EHCI controller, thereby enabling devices that require USB 2.0.

Personal opinion:
This is really obfuscated. It would be great if this switch lived somewhere
more obvious. Even if that doesn't happen, this should exist in a document
somewhere. Maybe it does and I couldn't find it, but I tried!

jonathan

-- 
Jonathan Woytek
http://www.dryrose.com
KB3HOZ
PGP:  462C 5F50 144D 6B09 3B65  FCE8 C1DC DEC4 E8B6 AABC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt console ticket time threshold and password

2017-02-08 Thread Jiri Belka
I doubt you can have it "static" and open consoles from Admin/User Portals.
You can submit a feature request but IMO this feature goes against all AAA
implemented in oVirt.

Anyway, what about a libvirt/vdsm hook for the following?

~~~
virsh # qemu-monitor-command 10 --hmp 'set_password spice foobar keep'
virsh # qemu-monitor-command 10 --hmp 'expire_password spice never'
virsh # qemu-monitor-command 10 --hmp 'info spice'
Server:
 address: 0:5908
migrated: false
auth: spice
compiled: 0.12.4
  mouse-mode: server
Channels: none
~~~

$ remote-viewer spice://${host}?port=5908
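
Wrapped into something a hook could call, it might look roughly like this (the
hard-coded password and unauthenticated virsh access on the host are
assumptions to adapt):

~~~
#!/bin/bash
# Sketch: pin a static SPICE password on a running VM and stop it expiring.
# Usage: spice-static-pw.sh <vm-name-or-domain-id>
vm="$1"
virsh qemu-monitor-command "$vm" --hmp 'set_password spice foobar keep'
virsh qemu-monitor-command "$vm" --hmp 'expire_password spice never'
virsh qemu-monitor-command "$vm" --hmp 'info spice'
~~~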

j.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-storage import fails on hyperconverged glusterFS

2017-02-08 Thread Liebe , André-Sebastian
Hello Sahina,

First of all, sorry for the late reply, but I got distracted by other things 
and then left for vacation.
The problem vanished after a complete shutdown of the whole datacenter (due to 
hardware maintenance).


Sincerely
André-Sebastian Liebe

From: Sahina Bose [mailto:sab...@redhat.com]
Sent: Monday, 23 January 2017 08:15
To: Liebe, André-Sebastian
Cc: users@ovirt.org
Subject: Re: [ovirt-users] hosted-storage import fails on hyperconverged 
glusterFS



On Fri, Jan 20, 2017 at 3:01 PM, Liebe, André-Sebastian 
> wrote:
Hello List,

I ran into trouble after moving our hosted engine from NFS to hyperconverged 
glusterFS via the backup/restore[1] procedure. The engine logs that it can't 
import and activate the hosted-storage domain, although I can see the storage.
Any hints on how to fix this?

- I created the ha-replica-3 gluster volume prior to hosted-engine-setup using 
the host's short name.
- Then I ran hosted-engine-setup to install a new hosted engine (by installing 
CentOS 7 and ovirt-engine manually).
- Inside the new hosted engine I restored the last successful backup (which was 
taken in a running state).
- Then I connected to the engine database and removed the old hosted engine by 
hand (as part of what this patch would do: https://gerrit.ovirt.org/#/c/64966/) and 
all known hosts (after marking all VMs as down, for which I later got ETL error 
messages).

Did you also clean up the old HE storage domain? The error further down 
indicates that the engine has a reference to the HE storage domain.
- Then I finished the engine installation by running engine-setup inside 
the hosted engine.
- And finally I completed hosted-engine-setup.


The new hosted engine came up successfully with all previously known storage, 
and after enabling glusterFS on the cluster this HA host is part of, I could see 
it in the Volumes and Storage tabs. After adding the remaining two hosts, the 
volume was marked as active.

But here is the error message I have been getting repeatedly since then:
> 2017-01-19 08:49:36,652 WARN  
> [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand]
>  (org.ovirt.thread.pool-6-thread-10) [3b955ecd] Validation of action 
> 'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons: 
> VAR__ACTION__ADD,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_ALREADY_EXIST


There are also some repeating messages about this ha-replica-3 volume, because 
I used the host's short name on volume creation, which AFAIK I can't change 
without a complete cluster shutdown.
> 2017-01-19 08:48:03,134 INFO  
> [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand] (DefaultQuartzScheduler3) 
> [7471d7de] Running command: AddUnmanagedVmsCommand internal: true.
> 2017-01-19 08:48:03,134 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, FullListVDSCommand(HostName = , 
> FullListVDSCommandParameters:{runAsync='true', 
> hostId='f62c7d04-9c95-453f-92d5-6dabf9da874a', 
> vds='Host[,f62c7d04-9c95-453f-92d5-6dabf9da874a]', 
> vmIds='[dfea96e8-e94a-407e-af46-3019fd3f2991]'}), log id: 2d0941f9
> 2017-01-19 08:48:03,163 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] FINISH, FullListVDSCommand, return: 
> [{guestFQDN=, emulatedMachine=pc, pid=0, guestDiskMapping={}, 
> devices=[Ljava.lang.Object;@4181d938, cpuType=Haswell-noTSX, smp=2, 
> vmType=kvm, memSize=8192, vmName=HostedEngine, username=, exitMessage=XML 
> error: maximum vcpus count must be an integer, 
> vmId=dfea96e8-e94a-407e-af46-3019fd3f2991, displayIp=0, displayPort=-1, 
> guestIPs=, 
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
>  exitCode=1, nicModel=rtl8139,pv, exitReason=1, status=Down, maxVCpus=None, 
> clientIp=, statusTime=6675071780, display=vnc, displaySecurePort=-1}], log 
> id: 2d0941f9
> 2017-01-19 08:48:03,163 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] 
> (DefaultQuartzScheduler3) [7471d7de] null architecture type, replacing with 
> x86_64, %s
> 2017-01-19 08:48:17,779 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, 
> GlusterServersListVDSCommand(HostName = lvh2, 
> VdsIdVDSCommandParametersBase:{runAsync='true', 
> hostId='23297fc2-db12-4778-a5ff-b74d6fc9554b'}), log id: 57d029dc
> 2017-01-19 08:48:18,177 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] FINISH, GlusterServersListVDSCommand, 
> return: [172.31.1.22/24:CONNECTED, 
> lvh3.lab.gematik.de:CONNECTED, lvh4.lab.gematik.de:CONNECTED], log id: 
> 57d029dc
> 2017-01-19 08:48:18,180 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, 
> 

[ovirt-users] Upgrading oVirt-Node-NG from 4.0.3 to 4.0.6

2017-02-08 Thread Thomas Kendall
We recently migrated from 3.6 to 4.0, but I'm a little confused about how
to keep the nodes up to date. I see the auto-updates come through for my
4.0.3 nodes, but they don't seem to upgrade them to the newer 4.0.x
releases.

Is there a way to do this upgrade?  I have two nodes that were installed
with 4.0.3, and I would like to bring them up to the same version as
everything else.
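
Is it something like the following, run on the node itself (I'm guessing at the
package name here), or is it meant to be driven from the engine's host upgrade
flow?

  yum update ovirt-node-ng-image-update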

For reference, the 4.0.3 nodes were built off the 4.0-2016083011 iso, and
the 4.0.6 nodes were built off the 4.0-2017011712 iso.

Thanks,
Thomas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually starting VMs via vdsClient (HE offline)

2017-02-08 Thread Dan Yasny
On Wed, Feb 8, 2017 at 4:18 PM, Doug Ingham  wrote:

> Hi Dan,
>
> On 8 February 2017 at 18:10, Dan Yasny  wrote:
>
>>
>>
>> On Wed, Feb 8, 2017 at 4:07 PM, Doug Ingham  wrote:
>>
>>> Hi Guys,
>>>  My Hosted-Engine has failed & it looks like the easiest solution will
>>> be to install a new one. Now before I try to re-add the old hosts (still
>>> running the guest VMs) & import the storage domain into the new engine, in
>>> case things don't go to plan, I want to make sure I'm able to bring up the
>>> guests on the hosts manually.
>>>
>>> The problem is vdsClient is giving me an "Unexpected exception", without
>>> much more info as to why it's failing.
>>>
>>> Any idea?
>>>
>>> [root@v0 ~]# vdsClient -s 0 list table | grep georep
>>> 9d1c3fef-498e-4c20-b124-01364d4d45a8  30455  georep-proxy Down
>>>
>>> [root@v0 ~]# vdsClient -s 0 continue 9d1c3fef-498e-4c20-b124-01364d
>>> 4d45a8
>>> Unexpected exception
>>>
>>> /var/log/vdsm/vdsm.log
>>> periodic/1063::WARNING::2017-02-08 17:57:52,532::periodic::276::
>>> virt.periodic.VmDispatcher::(__call__) could not run
>>> <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on
>>> ['65c9807c-7216-40b3-927c-5fd93bbd42ba', u'9d1c3fef-498e-4c20-b124-0136
>>> 4d4d45a8']
>>>
>>>
>> continue means un-pause, not "start from a stopped state"
>>
>
> I searched the manual for start/init/resume syntax, and "continue" was the
> closest thing I found.
>

I don't have a 4.x vdsm handy, but on 3.6 the verb is "create". With a LOT
of params of course.


>
>
>> now having said that, if you expect the VMs not to be able to start after
>> you rebuild the engine and the VMs exist on the hosts, I'd collect a virsh
>> -r dumpxml VMNAME for each - that way you have the disks in use, and all
>> the VM configuration in a file, and with some minor LVM manipulation you'll
>> be able to start the VM via virsh
>>
>
> My main concern is that I might have to halt the VMs or VDSM services for
> some reason when trying to migrate to the new engine. I just want to make
> sure that no matter what happens, I can still get the VMs back online.
>

The way oVirt works is very much tied into the engine DB. When you click
"start" on a VM, the engine will query the DB, pull out the VM details (CPU
config, disks, RAM, etc.), pick a suitable host, enable the host's access to
the disks on the storage domain, generate the libvirt domxml for the VM
(the file you'd get from virsh dumpxml) and start the VM according to the
generated XML. With vdsClient and without the engine DB you'll be missing
all those details the database provides, while my way, with the XML, they
are all already in place, populated by the engine when it was still alive.
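
As a very rough sketch of that path (the VG/LV names are placeholders, and on a
vdsm host virsh usually needs credentials, so treat this as an outline rather
than a recipe):

  # while the VM definitions still exist on the host, save the XML per VM
  virsh -r dumpxml georep-proxy > /root/georep-proxy.xml

  # later, with the engine gone: activate the LVs referenced by the XML's
  # <disk> sources, then start the VM straight from the saved XML
  lvchange -ay <storage-domain-vg>/<image-lv>
  virsh create /root/georep-proxy.xml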


>
> I'm still getting myself acquainted with virsh/vdsClient. Could you
> provide any insight into what I'd have to do to restart the guests manually?
>
>
see above. But seriously, above all, I'd recommend you back up the engine
(it comes with a utility) often and well. I do it via cron every hour in
production, keeping a rotation of hourly and daily backups, just in case.
It doesn't take much space or resources, but it's more than just best
practice - that database is the summary of the entire setup.
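
For reference, the utility is engine-backup; a cron-able invocation could look
something like this (paths are just examples):

  engine-backup --mode=backup --scope=all \
    --file=/var/backups/engine-backup-$(date +%F-%H%M).tar.bz2 \
    --log=/var/log/engine-backup.log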



> Thanks,
> --
> Doug
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt console ticket time threshold and password

2017-02-08 Thread rightkicktech.gmail.com
Hi Jiri,

I understand that.
What I was doing with plain KVM was to have a static password for vnc. Is this 
possible with ovirt?

Alex

On February 8, 2017 3:55:19 PM EET, Jiri Belka  wrote:
>Without a console password, anybody could VNC/SPICE to a console port on
>your host.
>I suppose you don't want this in a multi-user environment.
>
>j.
>
>- Original Message -
>From: "rightkicktech.gmail.com" 
>To: "Ovirt Users Mailing List" 
>Sent: Saturday, January 28, 2017 11:23:53 AM
>Subject: [ovirt-users] Ovirt console ticket time threshold and password
>
>Hi all, 
>
>Is there any standard recommended way to alter the default value of 120
>seconds set on the SPICE console?
>Also, can the password be disabled if needed? There are several hacks
>floating around, but none seems clean.
>
>Thanx, 
>Alex 
>
>
>-- 
>Sent from my Android device with K-9 Mail. Please excuse my brevity. 
>
>___
>Users mailing list
>Users@ovirt.org
>http://lists.ovirt.org/mailman/listinfo/users

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually starting VMs via vdsClient (HE offline)

2017-02-08 Thread Doug Ingham
Hi Dan,

On 8 February 2017 at 18:10, Dan Yasny  wrote:

>
>
> On Wed, Feb 8, 2017 at 4:07 PM, Doug Ingham  wrote:
>
>> Hi Guys,
>>  My Hosted-Engine has failed & it looks like the easiest solution will be
>> to install a new one. Now before I try to re-add the old hosts (still
>> running the guest VMs) & import the storage domain into the new engine, in
>> case things don't go to plan, I want to make sure I'm able to bring up the
>> guests on the hosts manually.
>>
>> The problem is vdsClient is giving me an "Unexpected exception", without
>> much more info as to why it's failing.
>>
>> Any idea?
>>
>> [root@v0 ~]# vdsClient -s 0 list table | grep georep
>> 9d1c3fef-498e-4c20-b124-01364d4d45a8  30455  georep-proxy Down
>>
>> [root@v0 ~]# vdsClient -s 0 continue 9d1c3fef-498e-4c20-b124-01364d4d45a8
>> Unexpected exception
>>
>> /var/log/vdsm/vdsm.log
>> periodic/1063::WARNING::2017-02-08 17:57:52,532::periodic::276::
>> virt.periodic.VmDispatcher::(__call__) could not run
>> <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on
>> ['65c9807c-7216-40b3-927c-5fd93bbd42ba', u'9d1c3fef-498e-4c20-b124-0136
>> 4d4d45a8']
>>
>>
> continue means un-pause, not "start from a stopped state"
>

I searched the manual for start/init/resume syntax, and "continue" was the
closest thing I found.


> now having said that, if you expect the VMs not to be able to start after
> you rebuild the engine and the VMs exist on the hosts, I'd collect a virsh
> -r dumpxml VMNAME for each - that way you have the disks in use, and all
> the VM configuration in a file, and with some minor LVM manipulation you'll
> be able to start the VM via virsh
>

My main concern is that I might have to halt the VMs or VDSM services for
some reason when trying to migrate to the new engine. I just want to make
sure that no matter what happens, I can still get the VMs back online.

I'm still getting myself acquainted with virsh/vdsClient. Could you provide
any insight into what I'd have to do to restart the guests manually?

Thanks,
-- 
Doug
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually starting VMs via vdsClient (HE offline)

2017-02-08 Thread Dan Yasny
On Wed, Feb 8, 2017 at 4:07 PM, Doug Ingham  wrote:

> Hi Guys,
>  My Hosted-Engine has failed & it looks like the easiest solution will be
> to install a new one. Now before I try to re-add the old hosts (still
> running the guest VMs) & import the storage domain into the new engine, in
> case things don't go to plan, I want to make sure I'm able to bring up the
> guests on the hosts manually.
>
> The problem is vdsClient is giving me an "Unexpected exception", without
> much more info as to why it's failing.
>
> Any idea?
>
> [root@v0 ~]# vdsClient -s 0 list table | grep georep
> 9d1c3fef-498e-4c20-b124-01364d4d45a8  30455  georep-proxy Down
>
> [root@v0 ~]# vdsClient -s 0 continue 9d1c3fef-498e-4c20-b124-01364d4d45a8
> Unexpected exception
>
> /var/log/vdsm/vdsm.log
> periodic/1063::WARNING::2017-02-08 17:57:52,532::periodic::276::
> virt.periodic.VmDispatcher::(__call__) could not run
> <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on
> ['65c9807c-7216-40b3-927c-5fd93bbd42ba',
> u'9d1c3fef-498e-4c20-b124-01364d4d45a8']
>
>
continue means un-pause, not "start from a stopped state"


now having said that, if you expect the VMs not to be able to start after
you rebuild the engine and the VMs exist on the hosts, I'd collect a virsh
-r dumpxml VMNAME for each - that way you have the disks in use, and all
the VM configuration in a file, and with some minor LVM manipulation you'll
be able to start the VM via virsh


> Cheers,
> --
> Doug
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Manually starting VMs via vdsClient (HE offline)

2017-02-08 Thread Doug Ingham
Hi Guys,
 My Hosted-Engine has failed & it looks like the easiest solution will be
to install a new one. Now before I try to re-add the old hosts (still
running the guest VMs) & import the storage domain into the new engine, in
case things don't go to plan, I want to make sure I'm able to bring up the
guests on the hosts manually.

The problem is vdsClient is giving me an "Unexpected exception", without
much more info as to why it's failing.

Any idea?

[root@v0 ~]# vdsClient -s 0 list table | grep georep
9d1c3fef-498e-4c20-b124-01364d4d45a8  30455  georep-proxy Down

[root@v0 ~]# vdsClient -s 0 continue 9d1c3fef-498e-4c20-b124-01364d4d45a8
Unexpected exception

/var/log/vdsm/vdsm.log
periodic/1063::WARNING::2017-02-08
17:57:52,532::periodic::276::virt.periodic.VmDispatcher::(__call__) could
not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on
['65c9807c-7216-40b3-927c-5fd93bbd42ba',
u'9d1c3fef-498e-4c20-b124-01364d4d45a8']

Cheers,
-- 
Doug
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] libvirtError: Cannot get interface MTU

2017-02-08 Thread Tyson Landon
I am new to oVirt and have a test cluster running with 2 hosts. I have errors 
when live migrating that started after I installed a new NIC in one host. Live 
migration was working fine before the change. The vdsm log shows the errors 
below, depending on whether I am migrating from host1 or host2. Running ovs-vsctl 
list-br shows that the bridge exists. "ip a" shows the interface is down. I do 
not know what a vdsm bridge is for, or whether the server just needs a service 
restarted to get things in order again. Have any of you seen this error before 
and been able to fix it?

HOST 1
libvirtError: Cannot get interface MTU on 'vdsmbr_TK4cSEjh': No such device

HOST 2
libvirtError: Cannot get interface MTU on 'vdsmbr_vqqOTlrR': No such device


HOST 1
ovs-vsctl list-br
vdsmbr_3fDBsCCF
vdsmbr_TK4cSEjh
vdsmbr_VWhs9soF

HOST2
ovs-vsctl list-br
vdsmbr_YBtJDFGg
vdsmbr_qf3gXZdq
vdsmbr_vqqOTlrR

Thanks
Tyson
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage domain experienced a high latency

2017-02-08 Thread Nir Soffer
On Wed, Feb 8, 2017 at 6:11 PM, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> I got a new FC storage (EMC Unity 300F) which is seen by my hosts in addition
> to my old storage, for migration.
>
> The new storage has only one path until the migration is done.
>
> I already have a few VMs running on the new storage without problems.
>
> But after starting some VMs (I don't really know what is different about them
> compared to the working ones), the path for the new storage fails.
>
>
>
> Engine tells me: Storage Domain  experienced a high latency
> of 22.4875 seconds from host 
>
>
>
> Where can I start looking?
>

>
> In /var/log/messages I found:
>
>
>
> Feb  8 09:03:53 ovirtnode01 multipathd: 360060160422143002a38935800ae2760:
> sdd - emc_clariion_checker: Active path is healthy.
>
> Feb  8 09:03:53 ovirtnode01 multipathd: 8:48: reinstated
>
> Feb  8 09:03:53 ovirtnode01 multipathd: 360060160422143002a38935800ae2760:
> remaining active paths: 1
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 8
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 5833475
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 5833475
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967168
>
> Feb  8 09:03:53 ovirtnode01 kernel: Buffer I/O error on dev dm-207,
> logical block 97, async page read
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967168
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967280
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967280
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 0
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 0
>
> Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967168
>
> Feb  8 09:03:53 ovirtnode01 kernel: device-mapper: multipath: Reinstating
> path 8:48.
>
> Feb  8 09:03:53 ovirtnode01 kernel: sd 3:0:0:22: alua: port group 01 state
> A preferred supports tolUsNA
>
> Feb  8 09:03:53 ovirtnode01 sanlock[5192]: 2017-02-08 09:03:53+0100 151809
> [11772]: s59 add_lockspace fail result -202
>
> Feb  8 09:04:05 ovirtnode01 multipathd: dm-33: remove map (uevent)
>
> Feb  8 09:04:05 ovirtnode01 multipathd: dm-33: devmap not registered,
> can't remove
>
> Feb  8 09:04:05 ovirtnode01 multipathd: dm-33: remove map (uevent)
>
> Feb  8 09:04:06 ovirtnode01 multipathd: dm-34: remove map (uevent)
>
> Feb  8 09:04:06 ovirtnode01 multipathd: dm-34: devmap not registered,
> can't remove
>
> Feb  8 09:04:06 ovirtnode01 multipathd: dm-34: remove map (uevent)
>
> Feb  8 09:04:08 ovirtnode01 multipathd: dm-33: remove map (uevent)
>
> Feb  8 09:04:08 ovirtnode01 multipathd: dm-33: devmap not registered,
> can't remove
>
> Feb  8 09:04:08 ovirtnode01 multipathd: dm-33: remove map (uevent)
>
> Feb  8 09:04:08 ovirtnode01 kernel: dd: sending ioctl 80306d02 to a
> partition!
>
> Feb  8 09:04:24 ovirtnode01 sanlock[5192]: 2017-02-08 09:04:24+0100 151840
> [15589]: read_sectors delta_leader offset 2560 rv -202
> /dev/f9b70017-0a34-47bc-bf2f-dfc70200a347/ids
>
> Feb  8 09:04:34 ovirtnode01 sanlock[5192]: 2017-02-08 09:04:34+0100 151850
> [15589]: f9b70017 close_task_aio 0 0x7fd78c0008c0 busy
>
> Feb  8 09:04:39 ovirtnode01 multipathd: 360060160422143002a38935800ae2760:
> sdd - emc_clariion_checker: Read error for WWN
> 60060160422143002a38935800ae2760.  Sense data are 0x0/0x0/0x0.
>
> Feb  8 09:04:39 ovirtnode01 multipathd: checker failed path 8:48 in map
> 360060160422143002a38935800ae2760
>
> Feb  8 09:04:39 ovirtnode01 multipathd: 360060160422143002a38935800ae2760:
> remaining active paths: 0
>
> Feb  8 09:04:39 ovirtnode01 kernel: qla2xxx [:11:00.0]-801c:3: Abort
> command issued nexus=3:0:22 --  1 2002.
>
> Feb  8 09:04:39 ovirtnode01 kernel: device-mapper: multipath: Failing path
> 8:48.
>
> Feb  8 09:04:40 ovirtnode01 kernel: qla2xxx [:11:00.0]-801c:3: Abort
> command issued nexus=3:0:22 --  1 2002.
>
> Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: 8 callbacks
> suppressed
>
> Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967168
>
> Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967280
>
> Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 0
>
> Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967168
>
> Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 4294967280
>
> Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev
> dm-10, sector 0
>

Maybe you should consult the storage vendor about this?

It could also be an incorrect multipath configuration, maybe 

[ovirt-users] Storage domain experienced a high latency

2017-02-08 Thread Grundmann, Christian
Hi,

I got a new FC storage (EMC Unity 300F) which is seen by my hosts in addition to 
my old storage, for migration.

The new storage has only one path until the migration is done.

I already have a few VMs running on the new storage without problems.

But after starting some VMs (I don't really know what is different about them 
compared to the working ones), the path for the new storage fails.



Engine tells me: Storage Domain  experienced a high latency of 
22.4875 seconds from host 



Where can I start looking?



In /var/log/messages I found:



Feb  8 09:03:53 ovirtnode01 multipathd: 360060160422143002a38935800ae2760: sdd 
- emc_clariion_checker: Active path is healthy.

Feb  8 09:03:53 ovirtnode01 multipathd: 8:48: reinstated

Feb  8 09:03:53 ovirtnode01 multipathd: 360060160422143002a38935800ae2760: 
remaining active paths: 1

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 8

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 5833475

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 5833475

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967168

Feb  8 09:03:53 ovirtnode01 kernel: Buffer I/O error on dev dm-207, logical 
block 97, async page read

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967168

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967280

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967280

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 0

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 0

Feb  8 09:03:53 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967168

Feb  8 09:03:53 ovirtnode01 kernel: device-mapper: multipath: Reinstating path 
8:48.

Feb  8 09:03:53 ovirtnode01 kernel: sd 3:0:0:22: alua: port group 01 state A 
preferred supports tolUsNA

Feb  8 09:03:53 ovirtnode01 sanlock[5192]: 2017-02-08 09:03:53+0100 151809 
[11772]: s59 add_lockspace fail result -202

Feb  8 09:04:05 ovirtnode01 multipathd: dm-33: remove map (uevent)

Feb  8 09:04:05 ovirtnode01 multipathd: dm-33: devmap not registered, can't 
remove

Feb  8 09:04:05 ovirtnode01 multipathd: dm-33: remove map (uevent)

Feb  8 09:04:06 ovirtnode01 multipathd: dm-34: remove map (uevent)

Feb  8 09:04:06 ovirtnode01 multipathd: dm-34: devmap not registered, can't 
remove

Feb  8 09:04:06 ovirtnode01 multipathd: dm-34: remove map (uevent)

Feb  8 09:04:08 ovirtnode01 multipathd: dm-33: remove map (uevent)

Feb  8 09:04:08 ovirtnode01 multipathd: dm-33: devmap not registered, can't 
remove

Feb  8 09:04:08 ovirtnode01 multipathd: dm-33: remove map (uevent)

Feb  8 09:04:08 ovirtnode01 kernel: dd: sending ioctl 80306d02 to a partition!

Feb  8 09:04:24 ovirtnode01 sanlock[5192]: 2017-02-08 09:04:24+0100 151840 
[15589]: read_sectors delta_leader offset 2560 rv -202 
/dev/f9b70017-0a34-47bc-bf2f-dfc70200a347/ids

Feb  8 09:04:34 ovirtnode01 sanlock[5192]: 2017-02-08 09:04:34+0100 151850 
[15589]: f9b70017 close_task_aio 0 0x7fd78c0008c0 busy

Feb  8 09:04:39 ovirtnode01 multipathd: 360060160422143002a38935800ae2760: sdd 
- emc_clariion_checker: Read error for WWN 60060160422143002a38935800ae2760.  
Sense data are 0x0/0x0/0x0.

Feb  8 09:04:39 ovirtnode01 multipathd: checker failed path 8:48 in map 
360060160422143002a38935800ae2760

Feb  8 09:04:39 ovirtnode01 multipathd: 360060160422143002a38935800ae2760: 
remaining active paths: 0

Feb  8 09:04:39 ovirtnode01 kernel: qla2xxx [:11:00.0]-801c:3: Abort 
command issued nexus=3:0:22 --  1 2002.

Feb  8 09:04:39 ovirtnode01 kernel: device-mapper: multipath: Failing path 8:48.

Feb  8 09:04:40 ovirtnode01 kernel: qla2xxx [:11:00.0]-801c:3: Abort 
command issued nexus=3:0:22 --  1 2002.

Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: 8 callbacks suppressed

Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967168

Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967280

Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 0

Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967168

Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 4294967280

Feb  8 09:04:42 ovirtnode01 kernel: blk_update_request: I/O error, dev dm-10, 
sector 0





multipath -ll output for this Domain



360060160422143002a38935800ae2760 dm-10 DGC ,VRAID

size=2.0T features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw

`-+- policy='service-time 0' prio=50 status=active

  `- 3:0:0:22 sdd 8:48  active ready  running





Thx Christian





___
Users mailing list
Users@ovirt.org

Re: [ovirt-users] Ovirt console ticket time threshold and password

2017-02-08 Thread Jiri Belka
Without a console password, anybody could VNC/SPICE to a console port on your 
host.
I suppose you don't want this in a multi-user environment.

j.

- Original Message -
From: "rightkicktech.gmail.com" 
To: "Ovirt Users Mailing List" 
Sent: Saturday, January 28, 2017 11:23:53 AM
Subject: [ovirt-users] Ovirt console ticket time threshold and password

Hi all, 

Is there any standard recommended way to alter the default value of 120 seconds 
set on the SPICE console? 
Also, can the password be disabled if needed? There are several hacks floating 
around, but none seems clean. 

Thanx, 
Alex 


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity. 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt ng nodes

2017-02-08 Thread Juan Pablo
you can check whether this is valid:
http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/
If it's not, or is inaccurate, please provide feedback.

regards,

2017-02-08 9:35 GMT-03:00 Yedidyah Bar David :

> On Wed, Feb 8, 2017 at 2:26 PM, Massimo Mad  wrote:
> > Hi David,
> > I try to use yum update but i have this error :
> > yum update
> > Loaded plugins: fastestmirror, imgbased-warning
> > Warning: yum operations are not persisted across upgrades!
> > The problem is that on the server i have this repository:
> > ovirt-4.0.repo
> > ovirt-4.0-dependencies.repo
> > With these repositories it is only possible to upgrade oVirt between minor
> > releases, for example from 4.0.1 to 4.0.6; I want to upgrade the host from
> > 4.0 to 4.1.
>
> So please try adding 4.1 repos (install ovirt-release41) and then update.
> --
> Didi
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-engine is participating in openjdk's adoption group for jdk9

2017-02-08 Thread Roy Golan
ovirt-engine is now listed in openjdk's adoption group in order to help and
supply feedback on jdk9.

https://wiki.openjdk.java.net/display/quality/Quality+Outreach

I'm working on creating a CI job to just run compilation; that would be
Travis or our Jenkins.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-engine-notifier tuning?

2017-02-08 Thread nicolas

Any ideas about this?

Thanks.

El 2017-02-04 14:59, Nicolás escribió:

Hi,

Is there a way to configure the frequency with which the
ovirt-engine-notifier notifies about the same issue? We recently
had a storage domain with low space and an email was sent roughly every
15 minutes. Is there a way to increase this notification interval?

This is oVirt 4.0.4-4.

Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt ng nodes

2017-02-08 Thread Yedidyah Bar David
On Wed, Feb 8, 2017 at 2:26 PM, Massimo Mad  wrote:
> Hi David,
> I try to use yum update but i have this error :
> yum update
> Loaded plugins: fastestmirror, imgbased-warning
> Warning: yum operations are not persisted across upgrades!
> The problem is that on the server i have this repository:
> ovirt-4.0.repo
> ovirt-4.0-dependencies.repo
> With these repositories it is only possible to upgrade oVirt between minor
> releases, for example from 4.0.1 to 4.0.6; I want to upgrade the host from
> 4.0 to 4.1.

So please try adding 4.1 repos (install ovirt-release41) and then update.
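
For example, something like this (assuming the host can reach
resources.ovirt.org):

  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
  yum update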
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] UPDATED Invitation: Deep Dive: virt-sparsify @ Wed 2017-02-08 17:00 - 17:30 (IST)

2017-02-08 Thread Shmuel Melamud
Hi!

I invite you to a Deep Dive session dedicated to the new virt-sparsify
feature of oVirt 4.1.

When: Wed 2017-02-08 17:00 - 17:30 (IST)

Join here: https://youtu.be/ayseKlGLwHI  (NOTE location changed)

Virt-sparsify is a new feature in oVirt 4.1. It allows removing unused
space from a disk image and returning it to the storage.

We will learn why this feature is needed and how to use it from a user's
perspective. After that we will look deeper into the implementation and explain
the prerequisites of the virt-sparsify operation.

Shmuel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt ng nodes

2017-02-08 Thread Massimo Mad
Hi David,
I tried to use yum update but I get this:
yum update
Loaded plugins: fastestmirror, imgbased-warning
Warning: yum operations are not persisted across upgrades!
The problem is that on the server I have these repositories:
ovirt-4.0.repo
ovirt-4.0-dependencies.repo
With these repositories it is only possible to upgrade oVirt between minor
releases, for example from 4.0.1 to 4.0.6; I want to upgrade the host from
4.0 to 4.1.

Regards Massimo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] web admin gui timeout changes in 4.1

2017-02-08 Thread Gianluca Cecchi
Hello,
while in 4.0.6 it seemed perhaps too short and you got logged out quickly, in 4.1
it seems very relaxed.
Did the default values change? By how much?
Is there any way to tweak them?
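For instance, is it still something like the following (assuming the
engine-config option still exists in 4.1)?

  engine-config -s UserSessionTimeOutInterval=60
  systemctl restart ovirt-engine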
Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Best way to shutdown and restart hypervisor?

2017-02-08 Thread Gianluca Cecchi
Hello,
what is considered the best way to shut down and restart a hypervisor,
assuming a plain CentOS 7 host?

For example to cover these scenarios:
1) update host from 4.0 to 4.1
2) planned maintenance of the cabinet where the server is located, taking
the opportunity to also update OS packages

My supposed workflow:

- put host into maintenance
- yum update on host
- shut down the OS from inside the host (because doing it via power mgmt is a
brutal power off / power on)
--> should I get any warning from the web admin GUI in this case, even if the
host is in maintenance mode?
- power mgmt -> start from webadmin gui
(or power on button/virtual button at host side?)

Would it be advisable to put some OS-management logic inside the power mgmt
functionality, so that, for example, if the action is restart, it first tries to
shut down the OS and only powers off/on in case of failure?

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine errors after 4.1 upgrade.

2017-02-08 Thread Simone Tiraboschi
On Wed, Feb 8, 2017 at 8:59 AM, Yedidyah Bar David  wrote:

> On Wed, Feb 8, 2017 at 2:31 AM, Todd Punderson 
> wrote:
> > Seeing issues with my hosted engine, it seems it's unable to extract
> vm.conf
> > from storage. My ovirt-hosted-engine-ha/agent.log is full of this
> repeating
> > over and over. This is happening on all 3 of my hosts. My storage is
> > glusterfs on the hosts themselves.
> >
> > Hopefully this is enough info to get started.
>

Another step is editing /etc/ovirt-hosted-engine-ha/agent-log.conf changing
from

[logger_root]
level=INFO
handlers=syslog,logfile
propagate=0

to

[logger_root]
level=DEBUG
handlers=syslog,logfile
propagate=0

and restart ovirt-ha-agent to get more detailed info about the issue.
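For example:

  systemctl restart ovirt-ha-agent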


> >
> > Thanks!
> >
> > MainThread::INFO::2017-02-07
> > 19:27:33,063::hosted_engine::612::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> > Initializing VDSM
> > MainThread::INFO::2017-02-07
> > 19:27:35,455::hosted_engine::639::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Connecting the storage
> > MainThread::INFO::2017-02-07
> > 19:27:35,456::storage_server::219::ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer::(connect_storage_server)
> > Connecting storage server
> > MainThread::INFO::2017-02-07
> > 19:27:40,169::storage_server::226::ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer::(connect_storage_server)
> > Connecting storage server
> > MainThread::INFO::2017-02-07
> > 19:27:40,202::storage_server::233::ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer::(connect_storage_server)
> > Refreshing the storage domain
> > MainThread::INFO::2017-02-07
> > 19:27:40,418::hosted_engine::666::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Preparing images
> > MainThread::INFO::2017-02-07
> > 19:27:40,419::image::126::ovirt_hosted_engine_ha.lib.
> image.Image::(prepare_images)
> > Preparing images
> > MainThread::INFO::2017-02-07
> > 19:27:43,370::hosted_engine::669::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Reloading vm.conf from the shared storage domain
> > MainThread::INFO::2017-02-07
> > 19:27:43,371::config::206::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> > Trying to get a fresher copy of vm configuration from the OVF_STORE
> > MainThread::INFO::2017-02-07
> > 19:27:45,968::ovf_store::103::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(scan)
> > Found OVF_STORE: imgUUID:3e14c1b5-5ade-4827-aad4-66c59824acd2,
> > volUUID:3cbeeb3b-f755-4d42-a654-8dab34213792
> > MainThread::INFO::2017-02-07
> > 19:27:46,257::ovf_store::103::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(scan)
> > Found OVF_STORE: imgUUID:9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce,
> > volUUID:8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> > MainThread::INFO::2017-02-07
> > 19:27:46,355::ovf_store::112::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > Extracting Engine VM OVF from the OVF_STORE
> > MainThread::INFO::2017-02-07
> > 19:27:46,366::ovf_store::119::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > OVF_STORE volume path:
> > /rhev/data-center/mnt/glusterSD/ovirt01-gluster.doonga.org:
> _engine/536cd721-4396-4029-b1ea-8ce84738137e/images/9b49968b-5a62-4ab2-
> a2c5-b94bc0b2d3ce/8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> > MainThread::ERROR::2017-02-07
> > 19:27:46,389::ovf_store::124::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > Unable to extract HEVM OVF
> > MainThread::ERROR::2017-02-07
> > 19:27:46,390::config::235::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> > Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
>
> Can you please attach the output of:
>
> sudo -u vdsm dd
> if=/rhev/data-center/mnt/glusterSD/ovirt01-gluster.doonga.org:
> _engine/536cd721-4396-4029-b1ea-8ce84738137e/images/9b49968b-5a62-4ab2-
> a2c5-b94bc0b2d3ce/8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> | tar -tvf -
>
> Thanks.
>
> Did everything work well in 4.0? How did you upgrade?
>
> Best,
> --
> Didi
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine errors after 4.1 upgrade.

2017-02-08 Thread Yedidyah Bar David
On Wed, Feb 8, 2017 at 2:31 AM, Todd Punderson  wrote:
> Seeing issues with my hosted engine, it seems it's unable to extract vm.conf
> from storage. My ovirt-hosted-engine-ha/agent.log is full of this repeating
> over and over. This is happening on all 3 of my hosts. My storage is
> glusterfs on the hosts themselves.
>
> Hopefully this is enough info to get started.
>
> Thanks!
>
> MainThread::INFO::2017-02-07
> 19:27:33,063::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Initializing VDSM
> MainThread::INFO::2017-02-07
> 19:27:35,455::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> Connecting the storage
> MainThread::INFO::2017-02-07
> 19:27:35,456::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2017-02-07
> 19:27:40,169::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2017-02-07
> 19:27:40,202::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Refreshing the storage domain
> MainThread::INFO::2017-02-07
> 19:27:40,418::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> Preparing images
> MainThread::INFO::2017-02-07
> 19:27:40,419::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
> Preparing images
> MainThread::INFO::2017-02-07
> 19:27:43,370::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> Reloading vm.conf from the shared storage domain
> MainThread::INFO::2017-02-07
> 19:27:43,371::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> Trying to get a fresher copy of vm configuration from the OVF_STORE
> MainThread::INFO::2017-02-07
> 19:27:45,968::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> Found OVF_STORE: imgUUID:3e14c1b5-5ade-4827-aad4-66c59824acd2,
> volUUID:3cbeeb3b-f755-4d42-a654-8dab34213792
> MainThread::INFO::2017-02-07
> 19:27:46,257::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> Found OVF_STORE: imgUUID:9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce,
> volUUID:8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> MainThread::INFO::2017-02-07
> 19:27:46,355::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> Extracting Engine VM OVF from the OVF_STORE
> MainThread::INFO::2017-02-07
> 19:27:46,366::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> OVF_STORE volume path:
> /rhev/data-center/mnt/glusterSD/ovirt01-gluster.doonga.org:_engine/536cd721-4396-4029-b1ea-8ce84738137e/images/9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce/8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> MainThread::ERROR::2017-02-07
> 19:27:46,389::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> Unable to extract HEVM OVF
> MainThread::ERROR::2017-02-07
> 19:27:46,390::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf

Can you please attach the output of:

sudo -u vdsm dd
if=/rhev/data-center/mnt/glusterSD/ovirt01-gluster.doonga.org:_engine/536cd721-4396-4029-b1ea-8ce84738137e/images/9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce/8f4d69c5-73a7-4e8c-a58f-909b55efec7d
| tar -tvf -

Thanks.

Did everything work well in 4.0? How did you upgrade?

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users