Re: [ovirt-users] Moving a Hosted Engine from Fedora 20 to CentOS 7

2015-11-06 Thread John Florian
On 10/29/2015 05:49 AM, Roy Golan wrote:
>
>
> On Thu, Oct 29, 2015 at 11:39 AM, Roy Golan wrote:
>
>
>
> On Wed, Oct 28, 2015 at 10:23 PM, John Florian wrote:
>
> Can somebody please point me to documentation or describe how
> I should
> proceed with this task?  I see lots of pages for moving from a
> physical
> engine to a VM and vice-versa but am having no luck finding
> how to go
> about building a new HE to obsolete my original.
>
>
> Using ovirt-hosted-engine-setup you can choose a setup without
> using the appliance, so you can scratch-install your VM.
>
>
>
> BTW the ovirt-engine-appliance we build [1] is CentOS based. Seems
> like a perfect candidate.
>
> #install the appliance
> yum install ovirt-engine-appliance
>
> #and then run the setup
> ovirt-hosted-engine-setup
>
> choose "disk" in this stage
> Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]: disk
>
> it will suggest the downloaded appliance automatically
> See this wiki for reference
> http://www.ovirt.org/Features/HEApplianceFlow#Testing
>

I'm afraid I'm lost here.  Here's a map of my setup:

oVirt 3.5.5 hosted engine is named enceladus-f20 (on Fedora 20)
I have one oVirt 3.5.5 Host named oberon-f20 (also on Fedora 20)
I previously had one other oVirt 3.5.5 Host named ophelia-f20

I took ophelia down, installed CentOS 7 on it, and attempted a
"hosted-engine --deploy".  I can't remember if that was 3.5.5 or 3.6,
but I could not add it to my cluster.  From what I could gather,
enceladus-f20 provided an emulation type of pc1.0 while ophelia-c7
didn't seem to have anything that matched, the closest being just "pc". 
That looked hopelessly complicated to resolve, so I thought I'd try
again, but this time putting F22 on ophelia and doing a 3.6 HE
deploy, saying yes to the redeploy prompt.  This time I was met with:

  Checking for oVirt-Engine status at enceladus-f20.doubledog.org...

[ INFO  ] Engine replied: DB Up!Welcome to Health Status!

[ ERROR ] Cannot automatically add the host to cluster Default: Cannot add 
Host. Connecting to host via SSH has failed, verify that the host is reachable 
(IP address, routable address etc.) You may refer to the engine.log file for 
further details. 

  

  Please check Engine VM configuration.

On enceladus-f20, I see:

==> /var/log/ovirt-engine/engine.log <==

2015-11-06 19:17:12,085 INFO  [org.ovirt.engine.core.bll.aaa.LoginUserCommand] (ajp--127.0.0.1-8702-9) Running command: LoginUserCommand internal: false.

2015-11-06 19:17:12,128 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: User admin@internal logged in.

==> /var/log/ovirt-engine/server.log <==

2015-11-06 19:17:13,654 INFO  [org.apache.sshd.client.session.ClientSessionImpl] (pool-18-thread-1) Client session created

2015-11-06 19:17:13,663 INFO  [org.apache.sshd.client.session.ClientSessionImpl] (pool-18-thread-2) Server version string: SSH-2.0-OpenSSH_6.9

2015-11-06 19:17:13,667 WARN  [org.apache.sshd.client.session.ClientSessionImpl] (pool-18-thread-3) Exception caught: java.lang.IllegalStateException: Unable to negotiate key exchange for kex algorithms (client: diffie-hellman-group1-sha1 / server: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1)
    at org.apache.sshd.common.session.AbstractSession.negotiate(AbstractSession.java:1098)
    at org.apache.sshd.common.session.AbstractSession.doHandleMessage(AbstractSession.java:357)
    at org.apache.sshd.common.session.AbstractSession.handleMessage(AbstractSession.java:295)
    at org.apache.sshd.client.session.ClientSessionImpl.handleMessage(ClientSessionImpl.java:266)
    at org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:720)
    at org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:277)
    at org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:54)
    at org.apache.sshd.common.io.nio2.Nio2Session$1.completed(Nio2Session.java:188)
    at org.apache.sshd.common.io.nio2.Nio2Session$1.completed(Nio2Session.java:174)
    at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126) [rt.jar:1.7.0_79]
    at sun.nio.ch.Invoker$2.run(Invoker.java:218) [rt.jar:1.7.0_79]
    at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112) [rt.jar:1.7.0_79]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_79]
    at
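
The failure above is the engine's embedded SSH client offering only
diffie-hellman-group1-sha1, a legacy kex that the OpenSSH 6.9 server on the
new host no longer enables by default. One workaround (an assumption on my
part, not something confirmed in this thread) is to re-enable that kex on
the host being added and restart sshd:

  # /etc/ssh/sshd_config on the new host (workaround sketch; weakens crypto)
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

  systemctl restart sshd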

[ovirt-users] Engine setup: insistent DNS demand

2015-11-06 Thread Jamie Lawrence

Hi all,

I’m having trouble finding current references to this problem. (I’m 
seeing workarounds from 2013, but, not surprisingly, things have changed 
since then.)


I’m attempting to run engine-setup, and get to the DNS reverse lookup 
of the FQDN. The machine has two (bonded) interfaces, one for storage 
and one for everything else. The “everything else” network has DNS 
service, the storage network doesn’t, and this seems to make 
engine-setup cranky. /etc/hosts is properly set up for the storage 
network, but that apparently doesn’t count. I tried running with the 
--offline flag, but that apparently still expects DNS.


We do not want/need DNS on the storage network, and I’m hoping someone 
knows a workaround for this not involving DNSMasq.


I considered downing that interface for the setup, but I don’t know 
why engine-setup is so insistent about DNS, and hiding an interface 
seems like a potentially bad idea in any case, so I thought I’d ask 
about it first.


Details:
ovirt-engine.noarch 0:3.6.0.3-1.el7.centos
ovirt-engine-setup-plugin-allinone.noarch 0:3.6.0.3-1.el7.centos

CentOS Linux release 7.1.1503 (Core)

TIA, and happy weekend to all,

-j
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Why rbd support is not natively available just like glusterFS?

2015-11-06 Thread Nir Soffer
On Mon, Nov 2, 2015 at 10:57 AM, Gaetan SLONGO  wrote:
> Dear oVirt users,
>
> We are currently looking for a virtualization solution and oVirt seems to be 
> a good choice for us.
> The problem is we have to deploy it on the top of a Ceph/RBD storage.
>
> Maybe I missed something but it seems Ceph/RBD block devices are not 
> available in oVirt, just like GlusterFS. However Qemu supports it as a native 
> storage backend. Could you explain why it is not available ?

Because we chose to consume Ceph storage via Cinder. This will make it
easier to use other storage backends supported by Cinder in the future.

> We saw Ceph storage could be usable through OpenStack Cinder in the 3.6 
> version but we don't want/need to deploy an OpenStack infrastructure (and we 
> haven't found complete documentation about that).

Yes, you need to deploy Cinder to use ceph storage.

What documentation is missing?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm without sanlock

2015-11-06 Thread Nir Soffer
On Fri, Nov 6, 2015 at 5:59 PM, Devin A. Bougie wrote:
> Hi Nir,
>
> On Nov 6, 2015, at 5:02 AM, Nir Soffer  wrote:
>> On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie  
>> wrote:
>>> Hi, All.  Is it possible to run vdsm without sanlock?  We'd prefer to run 
>>> libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock 
>>> overhead, but it looks like vdsmd / ovirt requires sanlock.
>>
>> True, we require sanlock.
>> What is "sanlock overhead"?
>
> Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.).

There is no such dependency.

Sanlock uses either an LV on a block device (iSCSI, FCP) or a file (on NFS,
Gluster) to maintain leases.

If sanlock cannot access storage and maintain the lease, it is likely that
your VM also cannot access storage and will pause soon.
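
For reference, the standard sanlock CLI can show the leases a host
currently holds, which is a quick way to see what sanlock is actually doing
(a side note, not something asked in the thread):

  sanlock client status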

Anything else?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Engine setup: insistent DNS demand

2015-11-06 Thread alexmcwhirter

On 2015-11-06 20:00, Jamie Lawrence wrote:

Hi all,

I’m having trouble finding current references to this problem. (I’m
seeing workarounds from 2013, but, not surprisingly, things have
changed since then.)

I’m attempting to run engine-setup, and get to the DNS reverse lookup
of the FQDN. The machine has two (bonded) interfaces, one for storage
and one for everything else. The “everything else” network has DNS
service, the storage network doesn’t, and this seems to make
engine-setup cranky. /etc/hosts is properly set up for the storage
network, but that apparently doesn’t count. I tried running with the
--offline flag, but that apparently still expects DNS.

We do not want/need DNS on the storage network, and I’m hoping someone
knows a workaround for this not involving DNSMasq.

I considered downing that interface for the setup, but I don’t know
why engine-setup is so insistent about DNS, and hiding an interface
seems like a potentially bad idea in any case, so I thought I’d ask
about it first.

Details:
ovirt-engine.noarch 0:3.6.0.3-1.el7.centos
ovirt-engine-setup-plugin-allinone.noarch 0:3.6.0.3-1.el7.centos

CentOS Linux release 7.1.1503 (Core)

TIA, and happy weekend to all,

-j
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


I'm using an older version of oVirt, so it may not apply, but I have all 
of my systems set up with /etc/hosts and no DNS at all. The installer 
complained, but it still installed and ran just fine. Downing the 
interface you want oVirt to use during setup is a bad idea.
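
For illustration, entries like these in /etc/hosts on the engine and on
each host (names and addresses are made up) are enough for name resolution
without DNS:

  192.168.1.10   engine.example.lan engine
  10.10.10.11    node1-storage.example.lan node1-storage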

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Stefano Danzi

I patched the code as an "emergency" fix.
I can't find how to change the configuration.

But I think that's a bug:

- Everything worked in oVirt 3.5; after the upgrade it stopped working.
- The log shows a Python exception.

One more thing:

If configuration requirements change, I should be warned during the
upgrade, or at least find a specific error in the log.
...removing something that doesn't exist from a list, and leaving a cryptic
Python exception in the error log, isn't the best solution...


On 06/11/2015 8.12, Nir Soffer wrote:



On Nov 5, 2015 8:18 PM, "Stefano Danzi" wrote:

>
> To temporarily solve the problem I patched storageServer.py as 
suggested in the link above.


I would not patch the code but change the configuration.

> I can't find a related issue on bugzilla.

Would you file a bug about this?

>
>
> On 05/11/2015 11.43, Stefano Danzi wrote:
>>
>> My error is related to this message:
>>
>> http://lists.ovirt.org/pipermail/users/2015-August/034316.html
>>
>> On 05/11/2015 0.28, Stefano Danzi wrote:
>>>
>>> Hello,
>>> I have an Ovirt installation with only 1 host and self-hosted engine.
>>> My Master Data storage domain is GlusterFS type.
>>>
>>> After upgrading to oVirt 3.6 the data storage domain and the default datacenter are down.

>>> The error in vdsm.log is:
>>>
>>> Thread-6585::DEBUG::2015-11-04 23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> state preparing
>>> Thread-6585::INFO::2015-11-04 23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection': u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)
>>> Thread-6585::DEBUG::2015-11-04 23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: None
>>> Thread-6585::WARNING::2015-11-04 23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
>>> Thread-6585::ERROR::2015-11-04 23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
>>>     conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
>>>     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
>>>     backup_servers_option = self._get_backup_servers_option()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in _get_backup_servers_option
>>>     servers.remove(self._volfileserver)
>>> ValueError: list.remove(x): x not in list
>>> Thread-6585::DEBUG::2015-11-04 23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
>>> Thread-6585::INFO::2015-11-04 23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state preparing -> state finished




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disk stuck in locked status

2015-11-06 Thread Johan Kooijman
Ah, more interesting: the disk lives half on storage domain #1, half on
storage domain #2. I don't really need these disks, but I can't do anything
with them at the moment.

What to do?

On Thu, Nov 5, 2015 at 4:41 PM, Johan Kooijman wrote:

> Hi all,
>
> I was moving a disk from one storage domain to the other when the engine
> was restarted. The VM the disk is on, is fine, but the disk stays in locked
> status.
>
> How can I resolve this?
>
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
>



-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm is locked

2015-11-06 Thread Budur Nagaraju
yeah.. Thanks for the help  :)

Regards,
Nagaraju


On Fri, Nov 6, 2015 at 1:21 PM, Roman Mohr  wrote:

> Hi Budur,
>
> On Fri, Nov 6, 2015 at 7:02 AM, Budur Nagaraju  wrote:
>
>> HI
>>
>> One of the hosts is down with a h/w issue, and the VMs which were using
>> resources (CPU and RAM) on the faulty node are locked.
>>
>> Is there any way to release these and deploy them on another host? Below
>> are the logs.
>>
>>
> Did you try to right-click on the faulty host and click "confirm host has
> been rebooted"? This might help.
> Probably you have the situation described in [1], question 19.
>
>
>
>> 2015-11-06 11:28:31,866 INFO
>> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
>> Connecting to pbuovirt4.bnglab.psecure.net/10.206.68.4
>> 2015-11-06 11:28:31,867 WARN
>> [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor)
>> Retry failed
>> 2015-11-06 11:28:31,867 ERROR
>> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient]
>> (DefaultQuartzScheduler_Worker-87) Exception during connection
>> 2015-11-06 11:28:31,868 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
>> (DefaultQuartzScheduler_Worker-87) Command
>> GetCapabilitiesVDSCommand(HostName = pbuovirt4, HostId =
>> 9d56d598-0c0f-49d7-ba53-8d0b41083b0f,
>> vds=Host[pbuovirt4,9d56d598-0c0f-49d7-ba53-8d0b41083b0f]) execution failed.
>> Exception: VDSNetworkException: java.net.ConnectException: Connection
>> refused
>> 2015-11-06 11:28:31,868 WARN
>> [org.ovirt.engine.core.vdsbroker.VdsManager]
>> (DefaultQuartzScheduler_Worker-87) Host pbuovirt4 is not responding. It
>> will stay in Connecting state for a grace period of 122 seconds and after
>> that an attempt to fence the host will be issued.
>> 2015-11-06 11:28:31,873 ERROR
>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>> (DefaultQuartzScheduler_Worker-87) Failure to refresh Vds runtime info:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
>> java.net.ConnectException: Connection refused
>>
>>
>> Regards,
>> Nagaraju
>>
>>
>>
>>
> [1] http://www.ovirt.org/Ovirt_faq
>
> Best Regards,
> Roman
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

2015-11-06 Thread Stefano Danzi

Hello!
It's a test evoirment, so I have only one node.
If I start manually glusterd some seconds after boot I have no problems. 
This error is only during boot.


I think that something chages during upgrade. Maybe that now glusterd 
start before networkging or rpc.
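
If the start ordering really is the cause (an assumption, not something
confirmed here), a systemd drop-in that delays glusterd until the network
and rpcbind are up would be one way to test it:

  # /etc/systemd/system/glusterd.service.d/ordering.conf (hypothetical drop-in)
  [Unit]
  Wants=network-online.target
  After=network-online.target rpcbind.service

followed by systemctl daemon-reload and a reboot to verify.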


On 06/11/2015 5.29, Sahina Bose wrote:

Did you upgrade all the nodes too?
Are some of your nodes not reachable?

Adding gluster-users for glusterd error.

On 11/06/2015 12:00 AM, Stefano Danzi wrote:


After upgrading oVirt from 3.5 to 3.6, glusterd fails to start when 
the host boots.

Manual start of service after boot works fine.

gluster log:

[2015-11-04 13:37:55.360876] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.5 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2015-11-04 13:37:55.447413] I [MSGID: 106478] [glusterd.c:1350:init] 0-management: Maximum allowed open file descriptors set to 65536
[2015-11-04 13:37:55.447477] I [MSGID: 106479] [glusterd.c:1399:init] 0-management: Using /var/lib/glusterd as working directory
[2015-11-04 13:37:55.464540] W [MSGID: 103071] [rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [Nessun device corrisponde]
[2015-11-04 13:37:55.464559] W [MSGID: 103055] [rdma.c:4899:init] 0-rdma.management: Failed to initialize IB Device
[2015-11-04 13:37:55.464566] W [rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2015-11-04 13:37:55.464616] W [rpcsvc.c:1597:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2015-11-04 13:37:55.464624] E [MSGID: 106243] [glusterd.c:1623:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2015-11-04 13:37:57.663862] I [MSGID: 106513] [glusterd-store.c:2036:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600
[2015-11-04 13:37:58.284522] I [MSGID: 106194] [glusterd-store.c:3465:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
[2015-11-04 13:37:58.287477] E [MSGID: 106187] [glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2015-11-04 13:37:58.287505] E [MSGID: 101019] [xlator.c:428:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2015-11-04 13:37:58.287513] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2015-11-04 13:37:58.287518] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-11-04 13:37:58.287799] W [glusterfsd.c:1236:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f29b876524d] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x126) [0x7f29b87650f6] -->/usr/sbin/glusterd(cleanup_and_exit+0x69) [0x7f29b87646d9] ) 0-: received signum (0), shutting down



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Stefano Danzi
oVirt is configured to use ovirtbk-mount.hawai.lan but gluster uses 
ovirt01.hawai.lan.

ovirtbk-mount.hawai.lan is an alias of ovirt01 and is in /etc/hosts

On 06/11/2015 8.01, Nir Soffer wrote:



On Nov 5, 2015 1:47 AM, "Stefano Danzi" wrote:

>
> Hello,
> I have an Ovirt installation with only 1 host and self-hosted engine.
> My Master Data storage domain is GlusterFS type.
>
> After upgrading to oVirt 3.6 the data storage domain and the default datacenter are down.

> The error in vdsm.log is:
>
> Thread-6585::DEBUG::2015-11-04 23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> state preparing
> Thread-6585::INFO::2015-11-04 23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection': u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)

The error below suggests that oVirt and gluster are not configured the 
same way, one using a domain name and the other an IP address.


Can you share the output of
gluster volume info

On one of the bricks, or on the host (you will need to use --remote-host)

Nir

> Thread-6585::DEBUG::2015-11-04 23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: None
> Thread-6585::WARNING::2015-11-04 23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
> Thread-6585::ERROR::2015-11-04 23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
>     conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
>     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
>     backup_servers_option = self._get_backup_servers_option()
>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in _get_backup_servers_option
>     servers.remove(self._volfileserver)
> ValueError: list.remove(x): x not in list
> Thread-6585::DEBUG::2015-11-04 23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
> Thread-6585::INFO::2015-11-04 23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state preparing -> state finished




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm without sanlock

2015-11-06 Thread Nir Soffer
On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie
 wrote:
> Hi, All.  Is it possible to run vdsm without sanlock?  We'd prefer to run 
> libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock 
> overhead, but it looks like vdsmd / ovirt requires sanlock.

True, we require sanlock.

What is "sanlock overhead"?

Nir

>
> Thanks,
> Devin
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] feedback-on-oVirt-engine-3.5.4.2-1.el6

2015-11-06 Thread Nir Soffer
On Thu, Nov 5, 2015 at 7:50 AM, shihong...@wware.org
 wrote:
> hi,
> I'm a user of oVirt and I use it in my work, but I found a bug.
> The problem description:
> I have two clusters, and one is using the NFS storage. I used the other to
> mount it, but it fails with a message that it has been occupied. So I
> unmounted it from the old one, but it still fails to mount it.

I don't understand your setup or your problem. You should give much more
details, and attach engine and vdsm logs showing the failed operation.

Please share with us:
/var/log/ovirt-engine/engine.log - on the engine machine
/var/log/vdsm/vdsm.log - on the SPM host or the host that failed

Nir

>
> 
> shihong...@wware.org
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disk stuck in locked status

2015-11-06 Thread Nir Soffer
On Fri, Nov 6, 2015 at 10:13 AM, Johan Kooijman  wrote:
> Ah, more interesting: the disk lives half on storage domain #1, half on
> storage domain #2.

This is very unlikely. You probably have the full disk on the original
domain, and a partial copy on the destination.

> I don't really need these disks, but can't do anything to
> these disks at the moment.
>
> What to do?
>
> On Thu, Nov 5, 2015 at 4:41 PM, Johan Kooijman wrote:
>>
>> Hi all,
>>
>> I was moving a disk from one storage domain to the other when the engine
>> was restarted. The VM the disk is on, is fine, but the disk stays in locked
>> status.
>>
>> How can I resolve this?

1. Make sure the copy operation was stopped. The easiest way is to
restart vdsm on the SPM. If you want to be more precise, you can use
vdsClient to find the task and cancel it (see the sketch below).

2. Unlock the disk - there is a script for unlocking entities at
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh

3. Delete the partial copy of the disk on the destination domain. If
this is NFS, you can simply remove the image directory.

4. Open bugs:
- engine should keep operation state in the db, and should handle
failures, unlocking the disk if an operation fails.
- vdsm should not leave partial disks on the destination domain if an
operation failed.
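
A rough command sketch for steps 1 and 2 (the task and disk UUIDs are
placeholders you have to look up yourself; verify the exact options on
your version before running anything):

  # on the SPM host
  vdsClient -s 0 getAllTasksInfo        # find the stale copy task
  vdsClient -s 0 stopTask <task-uuid>   # or simply: systemctl restart vdsmd

  # on the engine machine
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk <disk-uuid>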

>>
>> --
>> Met vriendelijke groeten / With kind regards,
>> Johan Kooijman
>
>
>
>
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt DB 3.6 to empty 3.6 DB

2015-11-06 Thread Simone Tiraboschi
On Fri, Nov 6, 2015 at 1:37 AM, p...@email.cz  wrote:

> Hello,
> can anybody help me with importing an oVirt DB v3.5 to a new clean oVirt 3.6
> database??
>

Importing to a different release is not supported because only the setup
script is in charge of upgrading the DB structure for the newer release.

The easiest solution is to temporarily install an engine from 3.5.5 somewhere
(a VM could do the job), upgrade it to 3.6.0 and then export the DB again when
done.
At that point you can import the 3.6-ready backup on its final
destination.
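
Roughly, on the throwaway 3.5.5 engine (a sketch; file names and the repo
URL are examples to adapt, and the exact engine-backup options should be
checked against your version):

  engine-backup --mode=restore --file=backup35 --log=restore.log --provision-db --restore-permissions
  engine-setup
  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
  yum update 'ovirt-engine-setup*'
  engine-setup
  engine-backup --mode=backup --file=backup36 --log=backup36.log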



>
> # engine-backup --mode=restore --file=./backup --log=./restore.log
> Preparing to restore:
> - Unpacking file './backup'
> FATAL: Backup version '3.6' doesn't match installed version
>
> How to FIX it ??
> Regs. Paf1
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Must a user see his/her VMs through the web browser?

2015-11-06 Thread John Hunter
sure, thanks :)

On Fri, Nov 6, 2015 at 3:28 AM, Itamar Heim  wrote:

> On 11/05/2015 04:04 AM, John Hunter wrote:
>
>> Hi Yaniv,
>>
>> Thanks for that, I will check it later.
>>
>> Have a nice day :)
>>
>>
> we also have some samples if you want to build your own portal:
> https://gerrit.ovirt.org/gitweb?p=samples-portals.git;a=tree
>
> On Thu, Nov 5, 2015 at 4:18 PM, Yaniv Kaul wrote:
>>
>> On Thu, Nov 5, 2015 at 9:56 AM, John Hunter wrote:
>>
>> nah, i prefer not using arc welder :(
>>
>>
>> As already mentioned in the thread, perhaps Boxes can do the job, at
>> least on Gnome.
>> See
>> http://community.redhat.com/blog/2014/10/gnome-boxes-3-14-unboxed/
>> and https://www.ovirt.org/images/6/6c/Fergeau-ovirt-boxes.pdf for
>> more details.
>> Y.
>>
>>
>> On Thu, Nov 5, 2015 at 3:03 PM, Tomas Jelinek wrote:
>>
>>
>>
>> - Original Message -
>> > From: "John Hunter" <zhjw...@gmail.com>
>> > To: "Tomas Jelinek" <tjeli...@redhat.com>
>> > Cc: "Robert Story" <rst...@tislabs.com>, users@ovirt.org, "devel" <de...@ovirt.org>, "Filip Krepinsky"
>> > Sent: Thursday, November 5, 2015 7:20:54 AM
>> > Subject: Re: [ovirt-users] Must a user see his/her VMs through the web browser?
>> >
>> > Hi Tomas,
>> >
>> > I have see the repo a little bit, as far as I can see, it's
>> just for mobile.
>>
>> yes, it is a mobile client.
>> There is a way to run it as a google chrome app using arc
>> welder so on a desktop (on fedora at least it works for me).
>> But this way it is more interesting than useful I'd say...
>>
>>  > Do you have a plan to make it useable on some Linux
>> distro, like
>>  > Ubuntu, or Debian?
>>  >
>>  > What I need is a GUI application or a library that can be
>> communicate
>>  > with the ovirt-engine.
>>  >
>>  > Cheers,
>>  > John
>>  >
 > On Wed, Nov 4, 2015 at 11:14 PM, Tomas Jelinek wrote:
 >
 > > - Original Message -
 > > > From: "John Hunter" <zhjw...@gmail.com>
 > > > To: "Tomas Jelinek" <tjeli...@redhat.com>
 > > > Cc: "Robert Story" <rst...@tislabs.com>, users@ovirt.org, "devel" <de...@ovirt.org>
 > > > Sent: Wednesday, November 4, 2015 2:59:21 PM
 > > > Subject: Re: [ovirt-users] Must a user see his/her VMs through the web browser?
 > > >
 > > > On Wed, Nov 4, 2015 at 9:29 PM, Tomas Jelinek wrote:
 > > >
 > > > > - Original Message -
 > > > > > From: "Robert Story"
 > > > > > To: "John Hunter"
 > > > > > Cc: users@ovirt.org, "devel"
 > > > > > Sent: Wednesday, November 4, 2015 1:56:21 PM
 > > > > > Subject: Re: [ovirt-users] Must a user see his/her VMs through the web browser?
 > > > > >
>>  > > > > > On Wed, 4 Nov 2015 17:50:15 +0800 John wrote:
>>  > > > > > JH> I have installed the oVirt all in one, and I
>> can log in the user
>>  > > > > portal
>>  > > > > > JH> through web browser to see user's VMs.
>>  > > > > > JH>
>>  > > > > > JH> I am wondering if there is an client
>> Application that can do the
>>  > > same
>>  > > > > > JH> thing, like VMware Horizon client has version
>> for Windows, Linux
>>  > > and
>>  > > > > > JH> IOS, etc.
>>  > > > > >
>>  

[ovirt-users] can oVirt 3.6 manage 3.5 hypervisor

2015-11-06 Thread p...@email.cz

Hi,
can oVirt 3.6 manage hypervisors with the 3.5 version?
Meaning during a step-by-step cluster upgrade
(   A)  oVirt mgmt , B) 1st hypervisor, C)  2nd hypervisor,  .. )
with the oVirt DB converted from 3.5.5 -> 3.5.5.upg.3.6 -> final 3.6.
regs. Paf1
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] No MAC addresses left in the MAC Address Pool

2015-11-06 Thread nicolas

Hi,

We're running oVirt 3.5.3.1-1. One of our users is trying to create a 
new VM. However, he gets the following warning:


2015-11-05 17:33:12,596 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-23) [7a0ba949] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: No MAC addresses left in the MAC 
Address Pool.
2015-11-05 17:33:28,804 WARN  [org.ovirt.engine.core.bll.CloneVmCommand] 
(ajp--127.0.0.1-8702-18) [22611dad] CanDoAction of action CloneVm failed 
for user XXX@YYY. Reasons: 
VAR__ACTION__ADD,VAR__TYPE__VM,MAC_POOL_NOT_ENOUGH_MAC_ADDRESSES


Is there a way to increase the MAC Pool?

Thanks.
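
For reference (this is not answered in the thread itself): in 3.5 the pool
is controlled by the MacPoolRanges engine-config key, so something like the
following on the engine machine should widen it (a sketch; the range value
here is only an example):

  engine-config -s MacPoolRanges=00:1a:4a:16:01:00-00:1a:4a:16:0f:ff
  service ovirt-engine restart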
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.6 hosted engine VM not showing

2015-11-06 Thread Ollie Armstrong
As a follow up to my own posting, for future Googlers, I found a bug
report at https://bugzilla.redhat.com/show_bug.cgi?id=1273378 which
will follow the issue.

On 5 November 2015 at 17:12, Ollie Armstrong  wrote:
> The engine VM used to display in 3.5 but it no longer appears to in 3.6.
> The storage method for the engine is the new FC if that's relevant.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Nir Soffer
On Fri, Nov 6, 2015 at 10:38 AM, Stefano Danzi  wrote:
> I patched the code as an "emergency" fix

A safer way is to downgrade vdsm to the previous version.

> I can't find how to change the configuration.

1. Put the gluster domain in maintenance
- select the domain in the storage tab
- in the "data center" sub tab, click maintenance
- the domain will turn to locked, and then to maintenance mode

2. Edit the domain
3. Change the address to the same one configured in gluster (ovirt01...)
4. Activate the domain

> But I think that's a bug:
>
> - All work in ovirt 3.5, after upgrade stop working

Yes, it should keep working after an upgrade.

However, using a straightforward setup, like using the same server address
in both oVirt and gluster, will increase the chance that things continue to
work after an upgrade.

> - The log show a python exception.

This is good, and makes debugging this issue easy.

> I think a thing:
>
> If there are chages on configuration requirements I have to be warned during
> upgrade, or I have to find a specific error in log.

Correct

> ...removing something that doesn't exist from a list,

The code assumes that the server address is in the list, so removing
it is correct.

This assumption is wrong; we will have to change the code to handle this case.
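
For illustration, the defensive handling could look roughly like this (a
sketch of the idea, not the actual vdsm patch; the logger call is
illustrative):

    # in _get_backup_servers_option(): gluster's brick list may not contain
    # the address oVirt was configured with (e.g. an /etc/hosts alias)
    if self._volfileserver in servers:
        servers.remove(self._volfileserver)
    else:
        log.warning("gluster server %r is not in bricks: %r, possibly an "
                    "alias or a misconfiguration", self._volfileserver, servers)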

> and leaving a cryptic Python
> exception in the error log isn't the best solution...

The traceback in the log is very important; without it, it would be very
hard to debug this issue.

>
>
> On 06/11/2015 8.12, Nir Soffer wrote:
>
>
> On Nov 5, 2015 8:18 PM, "Stefano Danzi" wrote:
>>
>> To temporary solve the problem I patched storageserver.py as suggested on
>> link above.
>
> I would not patch the code but change the configuration.
>
>> I can't find a related issue on bugzilla.
>
> Would you file bug about this?
>
>>
>>
>> Il 05/11/2015 11.43, Stefano Danzi ha scritto:
>>>
>>> My error is related to this message:
>>>
>>> http://lists.ovirt.org/pipermail/users/2015-August/034316.html
>>>
>>> Il 05/11/2015 0.28, Stefano Danzi ha scritto:

 Hello,
 I have an Ovirt installation with only 1 host and self-hosted engine.
 My Master Data storage domain is GlusterFS type.

 After upgrading to oVirt 3.6 the data storage domain and the default datacenter
 are down.
 The error in vdsm.log is:

 Thread-6585::DEBUG::2015-11-04 23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> state preparing
 Thread-6585::INFO::2015-11-04 23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection': u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)
 Thread-6585::DEBUG::2015-11-04 23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: None
 Thread-6585::WARNING::2015-11-04 23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
 Thread-6585::ERROR::2015-11-04 23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
     conObj.connect()
   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
     backup_servers_option = self._get_backup_servers_option()
   File "/usr/share/vdsm/storage/storageServer.py", line 340, in _get_backup_servers_option
     servers.remove(self._volfileserver)
 ValueError: list.remove(x): x not in list
 Thread-6585::DEBUG::2015-11-04 23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
 Thread-6585::INFO::2015-11-04 23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
 Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
 Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState)
 

Re: [ovirt-users] upgrade from 3.5 to 3.6 causing problems with migration

2015-11-06 Thread Simone Tiraboschi
On Fri, Nov 6, 2015 at 5:21 PM, Jason Keltz  wrote:

> Hi.
>
> Last night, I upgraded my engine from 3.5 to 3.6.  That went flawlessly.
> Today, I'm trying to upgrade the vdsm on the hosts from 3.5 to 3.6 (along
> with applying other RHEL7.1 updates)  However, when I'm trying to put each
> host into maintenance mode, and migrations start to occur, they all seem to
> FAIL now!  Even worse, when they fail, it leaves the hosts DOWN!  If
> there's a failure, I'd expect the host to simply abort the migration
> Any help in debugging this would be VERY much appreciated!
>
> 2015-11-06 10:09:16,065 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-4) [] Correlation ID: 658ba478, Job ID:
>> 524e8c44-04e0-42d3-89f9-9f4e4d397583, Call Stack: null, Custom Event ID:
>> -1, Message: Migration failed  (VM: eportfolio, Source: virt1).
>> 2015-11-06 10:10:17,112 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-22) [2f0dee16] Correlation ID: 7da3ac1b,
>> Job ID: 93c0b1f2-4c8e-48cf-9e63-c1ba91be425f, Call Stack: null, Custom
>> Event ID: -1, Message: Migration failed  (VM: ftp1, Source: virt1).
>> 2015-11-06 10:15:08,273 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-45) [] Correlation ID: 5394ef76, Job ID:
>> 994065fc-a142-4821-934a-c2297d86ec12, Call Stack: null, Custom Event ID:
>> -1, Message: Migration failed  while Host is in 'preparing for maintenance'
>> state.
>> 2015-11-06 10:19:13,712 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-36) [] Correlation ID: 6e422728, Job ID:
>> 994065fc-a142-4821-934a-c2297d86ec12, Call Stack: null, Custom Event ID:
>> -1, Message: Migration failed  while Host is in 'preparing for maintenance'
>> state.
>> 2015-11-06 10:42:37,852 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-12) [] Correlation ID: e7f6300, Job ID:
>> 1ea16622-0fa0-4e92-89e5-9dc235c03ef8, Call Stack: null, Custom Event ID:
>> -1, Message: Migration failed  (VM: ipa, Source: virt1).
>> 2015-11-06 10:43:59,732 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-40) [] Correlation ID: 39cfdf9, Job ID:
>> 72be29bc-a02b-4a90-b5ec-8b995c2fa692, Call Stack: null, Custom Event ID:
>> -1, Message: Migration failed  (VM: labtesteval, Source: virt1).
>> 2015-11-06 10:52:11,893 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-23) [] Correlation ID: 5c435149, Job ID:
>> 1dcd1e14-baa6-44bc-a853-5d33107b759c, Call Stack: null, Custom Event ID:
>> -1, Message: Migration failed  (VM: www-vhost, Source: virt1).
>>
>
>
> The complete engine log, virt1, virt2, and virt3 vdsm logs are here:
>
> http://www.eecs.yorku.ca/~jas/ovirt-debug/11062015
>
>
Is the vdsmd service still active on those hosts?


> Jason.
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA

2015-11-06 Thread Simone Tiraboschi
On Fri, Nov 6, 2015 at 6:24 PM, Budur Nagaraju  wrote:

> Hi Simone
>
> Yes, my oVirt engine is running on a VM. Can you please explain how to set
> up self-hosted engine mode? Will any of the deployed VMs be affected by
> doing this?
>

No, ovirt-ha-agent will manage only the engine VM but HA for other VMs will
be managed by the engine.

Here how to deploy from scratch using the oVirt engine appliance:
https://www.youtube.com/watch?v=ODJ_UO7U1WQ

Here how to migrate an existing env to hosted engine:
http://www.ovirt.org/Migrate_to_Hosted_Engine


> Thanks
> Nagaraju
> On Nov 6, 2015 9:50 PM, "Simone Tiraboschi"  wrote:
>
>>
>>
>> On Fri, Nov 6, 2015 at 12:54 PM, Amador Pahim  wrote:
>>
>>> On 11/06/2015 02:55 AM, Budur Nagaraju wrote:
>>>
>>> HI
>>>
>>> Can someone update how to do settings ,if the node goes down vms should
>>> automatically migrated to other host .
>>>
>>>
>>>
>>> http://www.ovirt.org/OVirt_Administration_Guide#Configuring_a_Highly_Available_Virtual_Machine
>>>
>>>
>>
>> And if you are running your engine on a virtual machine and you want to
>> have high availability also there you have to move to self hosted-engine
>> mode.
>>
>>>
>>> Regards,
>>> Nagaraju
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-11-06 Thread Simone Tiraboschi
On Thu, Nov 5, 2015 at 7:10 PM, Stefano Danzi  wrote:

>
>
>
> the content is:
>
> [email]
> smtp-server=localhost
> smtp-port=25
> destination-emails=root@localhost
> source-email=root@localhost
>
> [notify]
> state_transition=maintenance|start|stop|migrate|up|down
>
> and is the default. My conf was lost during upgrade.
> If I restart ovirt-ha-broker the broker.conf is replaced with the default
>
> If I don't restart ovirt-ha-broker, the broker.conf is silently replaced
> after a while.
>
> Looking here
> http://lists.ovirt.org/pipermail/engine-commits/2015-June/022940.html
> I understand that broker.conf is stored in another place and overwrite at
> runtime.
>

The broker.conf is now on the shared storage (like other hosted-engine
related configuration files) so that in the future they'll be easily
editable from the web UI.

The issue here seems to be that the upgrade overwrites it with the default
file before copying it to the shared storage.
I'm opening a bug against that.

Let's try to fix it in your instance (please substitute
'192.168.1.115:_Virtual_ext35u36'
with the mount point on your system):

dir=`mktemp -d` && cd $dir
systemctl stop ovirt-ha-broker
sdUUID_line=$(grep sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf)
sdUUID=${sdUUID_line:7:36}
conf_volume_UUID_line=$(grep conf_volume_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
conf_volume_UUID=${conf_volume_UUID_line:17:36}
conf_image_UUID_line=$(grep conf_image_UUID /etc/ovirt-hosted-engine/hosted-engine.conf)
conf_image_UUID=${conf_image_UUID_line:16:36}
dd if=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID 2>/dev/null | tar -xvf -
cp /etc/ovirt-hosted-engine-ha/broker.conf.rpmsave broker.conf  # or edit broker.conf as you need
tar -cO * | dd of=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
systemctl start ovirt-ha-broker



>
>
>
> Il 05/11/2015 18.44, Simone Tiraboschi ha scritto:
>
> Can you please paste here the content of
> /var/lib/ovirt-hosted-engine-ha/broker.conf ?
> eventually make it anonymous if you prefer
>
>
>
> On Thu, Nov 5, 2015 at 6:42 PM, Stefano Danzi  wrote:
>
>> After upgrading from 3.5 to 3.6, hosted-engine notifications stopped working.
>> I think that broker.conf was lost during the upgrade.
>>
>> I found this: https://bugzilla.redhat.com/show_bug.cgi?id=1260757
>> But I don't understand how to change the configuration now.
>>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can oVirt 3.6 manage 3.5 hypervisor

2015-11-06 Thread Oved Ourfali
On Nov 6, 2015 12:24, "Martin Perina"  wrote:
>
> Hi,
>
> yes, this is correct way how to do the upgrade:
>
> 1. Upgrade engine
> 2. Put each host into maintenance and upgrade it

Actually in 3.6 the upgrade button will be enabled when there is an upgrade
available, and pressing it will also move the host to maintenance if
needed.

As long as the cluster level is 3.5 (existing cluster level doesn't change
after upgrade) you'll be able to work with 3.5 hosts.

> 3. When all hosts in cluster are upgraded you can raise
>cluster level
>
> DB upgrade is done during upgrade engine part automatically:
>
> 1. Add 3.6 repos
> 2. yum update 'ovirt-engine-setup*'
> 3. engine-setup
>
> Martin Perina
>
> - Original Message -
> > From: p...@email.cz
> > To: users@ovirt.org
> > Sent: Friday, November 6, 2015 10:29:22 AM
> > Subject: [ovirt-users] can oVirt 3.6 manage 3.5 hypervisor
> >
> > Hi,
> > can oVirt 3.6 manage hypervisors with the 3.5 version?
> > Meaning during cluster upgrade step by step.
> > ( A) oVirt mgmt , B) 1st hypervisor, C) 2nd hypervisor,  .. )
> > If oVirt DB converted from 3.5.5 -> 3.5.5.upg.3.6 -> final 3.6
> > regs. Paf1
> >
> >
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] upgrade from 3.5 to 3.6 causing problems with migration

2015-11-06 Thread Jason Keltz

On 11/06/2015 02:02 PM, Simone Tiraboschi wrote:



On Fri, Nov 6, 2015 at 5:21 PM, Jason Keltz wrote:


Hi.

Last night, I upgraded my engine from 3.5 to 3.6.  That went
flawlessly.
Today, I'm trying to upgrade the vdsm on the hosts from 3.5 to 3.6
(along with applying other RHEL7.1 updates) However, when I'm
trying to put each host into maintenance mode, and migrations
start to occur, they all seem to FAIL now!  Even worse, when they
fail, it leaves the hosts DOWN!  If there's a failure, I'd expect
the host to simply abort the migration  Any help in debugging
this would be VERY much appreciated!

2015-11-06 10:09:16,065 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-4) [] Correlation ID:
658ba478, Job ID: 524e8c44-04e0-42d3-89f9-9f4e4d397583, Call
Stack: null, Custom Event ID: -1, Message: Migration failed 
(VM: eportfolio, Source: virt1).

2015-11-06 10:10:17,112 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-22) [2f0dee16] Correlation ID:
7da3ac1b, Job ID: 93c0b1f2-4c8e-48cf-9e63-c1ba91be425f, Call
Stack: null, Custom Event ID: -1, Message: Migration failed 
(VM: ftp1, Source: virt1).

2015-11-06 10:15:08,273 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-45) [] Correlation ID:
5394ef76, Job ID: 994065fc-a142-4821-934a-c2297d86ec12, Call
Stack: null, Custom Event ID: -1, Message: Migration failed 
while Host is in 'preparing for maintenance' state.

2015-11-06 10:19:13,712 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-36) [] Correlation ID:
6e422728, Job ID: 994065fc-a142-4821-934a-c2297d86ec12, Call
Stack: null, Custom Event ID: -1, Message: Migration failed 
while Host is in 'preparing for maintenance' state.

2015-11-06 10:42:37,852 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-12) [] Correlation ID:
e7f6300, Job ID: 1ea16622-0fa0-4e92-89e5-9dc235c03ef8, Call
Stack: null, Custom Event ID: -1, Message: Migration failed 
(VM: ipa, Source: virt1).

2015-11-06 10:43:59,732 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-40) [] Correlation ID:
39cfdf9, Job ID: 72be29bc-a02b-4a90-b5ec-8b995c2fa692, Call
Stack: null, Custom Event ID: -1, Message: Migration failed 
(VM: labtesteval, Source: virt1).

2015-11-06 10:52:11,893 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-23) [] Correlation ID:
5c435149, Job ID: 1dcd1e14-baa6-44bc-a853-5d33107b759c, Call
Stack: null, Custom Event ID: -1, Message: Migration failed 
(VM: www-vhost, Source: virt1).




The complete engine log, virt1, virt2, and virt3 vdsm logs are here:

http://www.eecs.yorku.ca/~jas/ovirt-debug/11062015



Is the vdsmd service still active on those hosts?


Hi Simone..

Yes..

virt1:
sh-4.2# systemctl -l status vdsmd
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: active (running) since Thu 2015-11-05 22:47:46 EST; 15h ago
 Main PID: 16520 (vdsm)
   CGroup: /system.slice/vdsmd.service
   ├─16520 /usr/bin/python /usr/share/vdsm/vdsm
   ├─30038 /usr/libexec/ioprocess --read-pipe-fd 67 
--write-pipe-fd 66 --max-threads 10 --max-queued-requests 10
   ├─30055 /usr/libexec/ioprocess --read-pipe-fd 76 
--write-pipe-fd 75 --max-threads 10 --max-queued-requests 10
   └─30062 /usr/libexec/ioprocess --read-pipe-fd 81 
--write-pipe-fd 84 --max-threads 10 --max-queued-requests 10


Nov 06 10:09:15 virt1.cs.yorku.ca vdsm[16520]: vdsm root WARNING File: 
/var/lib/libvirt/qemu/channels/62ff4ada-ee98-491e-bfb5-7adda7b513ee.com.redhat.rhevm.vdsm 
already removed
Nov 06 10:09:15 virt1.cs.yorku.ca vdsm[16520]: vdsm root WARNING File: 
/var/lib/libvirt/qemu/channels/62ff4ada-ee98-491e-bfb5-7adda7b513ee.org.qemu.guest_agent.0 
already removed
Nov 06 10:10:15 virt1.cs.yorku.ca vdsm[16520]: vdsm root WARNING File: 
/var/lib/libvirt/qemu/channels/aa487207-7ff4-465a-9d9b-2a103d50dc77.com.redhat.rhevm.vdsm 
already removed
Nov 06 10:10:15 virt1.cs.yorku.ca vdsm[16520]: vdsm root WARNING File: 
/var/lib/libvirt/qemu/channels/aa487207-7ff4-465a-9d9b-2a103d50dc77.org.qemu.guest_agent.0 
already removed
Nov 06 10:42:36 virt1.cs.yorku.ca vdsm[16520]: vdsm root 

[ovirt-users] Upgrade method for 3.5 node

2015-11-06 Thread Phil Daws
Hello,

I have upgraded my engine to 3.6 (el6) and would now like to do the same on my 
3.5 (el7) node.  The node was built using a minimal CentOS 7 ISO and then I 
installed the 3.5 rpm.

How best would I go about performing the upgrade, please?

Thanks, Phil
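
For a plain CentOS 7 host, the procedure Martin Perina outlines elsewhere
in this digest should apply (a sketch, not verified on this setup): put the
host into maintenance from the web UI, then on the host:

  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
  yum update

then activate the host again from the web UI.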


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm without sanlock

2015-11-06 Thread Devin A. Bougie
Hi Nir,

On Nov 6, 2015, at 5:02 AM, Nir Soffer  wrote:
> On Thu, Nov 5, 2015 at 11:33 PM, Devin A. Bougie  
> wrote:
>> Hi, All.  Is it possible to run vdsm without sanlock?  We'd prefer to run 
>> libvirtd with virtlockd (lock_manager = "lockd") to avoid the sanlock 
>> overhead, but it looks like vdsmd / ovirt requires sanlock.
> 
> True, we require sanlock.
> What is "sanlock overhead"?

Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.).  I 
have no problem setting up the filesystem or configuring sanlock to use it, but 
then the VMs fail if the shared filesystem blocks or fails.  We'd like to have 
our vm images use block devices and avoid any dependency on a remote or shared 
file system.  My understanding is that virtlockd can lock a block device 
directly, while sanlock requires something like gfs2 or nfs.

Perhaps it's my misunderstanding or misreading, but it seemed like things were 
moving in the direction of virtlockd.  For example:
http://lists.ovirt.org/pipermail/devel/2015-March/010127.html

Thanks for following up!

Devin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA

2015-11-06 Thread Simone Tiraboschi
On Fri, Nov 6, 2015 at 12:54 PM, Amador Pahim  wrote:

> On 11/06/2015 02:55 AM, Budur Nagaraju wrote:
>
> HI
>
> Can someone update how to do settings ,if the node goes down vms should
> automatically migrated to other host .
>
>
>
> http://www.ovirt.org/OVirt_Administration_Guide#Configuring_a_Highly_Available_Virtual_Machine
>
>

And if you are running your engine on a virtual machine and you want to
have high availability also there you have to move to self hosted-engine
mode.

>
> Regards,
> Nagaraju
>
>
>
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] upgrade from 3.5 to 3.6 causing problems with migration

2015-11-06 Thread Jason Keltz

Hi.

Last night, I upgraded my engine from 3.5 to 3.6.  That went flawlessly.
Today, I'm trying to upgrade the vdsm on the hosts from 3.5 to 3.6 
(along with applying other RHEL7.1 updates)  However, when I'm trying to 
put each host into maintenance mode, and migrations start to occur, they 
all seem to FAIL now!  Even worse, when they fail, it leaves the hosts 
DOWN!  If there's a failure, I'd expect the host to simply abort the 
migration  Any help in debugging this would be VERY much appreciated!


2015-11-06 10:09:16,065 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-4) [] Correlation ID: 658ba478, Job 
ID: 524e8c44-04e0-42d3-89f9-9f4e4d397583, Call Stack: null, Custom 
Event ID: -1, Message: Migration failed  (VM: eportfolio, Source: virt1).
2015-11-06 10:10:17,112 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-22) [2f0dee16] Correlation ID: 
7da3ac1b, Job ID: 93c0b1f2-4c8e-48cf-9e63-c1ba91be425f, Call Stack: 
null, Custom Event ID: -1, Message: Migration failed  (VM: ftp1, 
Source: virt1).
2015-11-06 10:15:08,273 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-45) [] Correlation ID: 5394ef76, Job 
ID: 994065fc-a142-4821-934a-c2297d86ec12, Call Stack: null, Custom 
Event ID: -1, Message: Migration failed  while Host is in 'preparing 
for maintenance' state.
2015-11-06 10:19:13,712 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-36) [] Correlation ID: 6e422728, Job 
ID: 994065fc-a142-4821-934a-c2297d86ec12, Call Stack: null, Custom 
Event ID: -1, Message: Migration failed  while Host is in 'preparing 
for maintenance' state.
2015-11-06 10:42:37,852 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-12) [] Correlation ID: e7f6300, Job 
ID: 1ea16622-0fa0-4e92-89e5-9dc235c03ef8, Call Stack: null, Custom 
Event ID: -1, Message: Migration failed  (VM: ipa, Source: virt1).
2015-11-06 10:43:59,732 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-40) [] Correlation ID: 39cfdf9, Job 
ID: 72be29bc-a02b-4a90-b5ec-8b995c2fa692, Call Stack: null, Custom 
Event ID: -1, Message: Migration failed  (VM: labtesteval, Source: virt1).
2015-11-06 10:52:11,893 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-23) [] Correlation ID: 5c435149, Job 
ID: 1dcd1e14-baa6-44bc-a853-5d33107b759c, Call Stack: null, Custom 
Event ID: -1, Message: Migration failed  (VM: www-vhost, Source: virt1).
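
For reference, each failure can be traced through engine.log by its
Correlation ID. A minimal sketch (the ID is taken from the first error
above; the path is the engine default):

    # Print every engine.log line belonging to one migration attempt.
    CORRELATION_ID = '658ba478'

    with open('/var/log/ovirt-engine/engine.log') as log:
        for line in log:
            if CORRELATION_ID in line:
                print(line.rstrip())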



The complete engine log, virt1, virt2, and virt3 vdsm logs are here:

http://www.eecs.yorku.ca/~jas/ovirt-debug/11062015

Jason.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unknown libvirterror - where to start?

2015-11-06 Thread Christophe TREFOIS
Dear Nir,

Thank you for your help. I shall ignore the messages then :)

Kind regards,

—
Christophe

Dr Christophe Trefois, Dipl.-Ing.  
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine  
6, avenue du Swing 
L-4367 Belvaux  
T: +352 46 66 44 6124 
F: +352 46 66 44 6949  
http://www.uni.lu/lcsb




This message is confidential and may contain privileged information. 
It is intended for the named recipient only. 
If you receive it in error please notify me and permanently delete the original 
message and any copies. 


  

> On 06 Nov 2015, at 12:23, Nir Soffer  wrote:
> 
> On Tue, Nov 3, 2015 at 10:30 AM, Yaniv Kaul  wrote:
>> On Tue, Nov 3, 2015 at 9:52 AM, Christophe TREFOIS
>>  wrote:
>>> 
>>> Hi,
>>> 
>>> I checked the logs on my hypervisor that also hosts the ovirt-engine
>>> (self-hosted) and I see strange unknown libvirterrors that come periodically
>>> in the vdsm.log file. The storage is glusterFS running on the hypervisor as
>>> well, one NFS export domain and an ISO domain. A NFS domain from another
>>> place is in maintenance mode.
>>> 
>>> I am running oVirt 3.5.3.
>>> 
>>> Thank you for any pointers as to where to start fixing this issue.
>>> 
>>> — log excerpt --
>>> 
>>> Thread-1947641::DEBUG::2015-11-03
>>> 08:47:31,398::stompReactor::163::yajsonrpc.StompServer::(send) Sending
>>> response
>>> Thread-8108::DEBUG::2015-11-03
>>> 08:47:31,410::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
>>> ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
>>> element is not present
>> 
>> 
>> This (depending on your host OS version, 6.x or 7.x) is either
>> https://bugzilla.redhat.com/show_bug.cgi?id=1220474 or
>> https://bugzilla.redhat.com/show_bug.cgi?id=1260864
>> Y.
> 
> The error about missing metadata is just noise in the log, nothing to
> worry about.
> 
> Adding Martin
> 
>> 
>>> Dummy-1895260::DEBUG::2015-11-03
>>> 08:47:31,477::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail) dd
>>> if=/rhev/data-center/0002-0002-0002-0002-03d5/mastersd/dom_md/inbox
>>> iflag=direct,fullblock count=1 bs=1024000 (cwd None)
>>> Dummy-1895260::DEBUG::2015-11-03
>>> 08:47:31,501::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
>>> SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
>>> copied, 0.00331278 s, 309 MB/s\n'; <rc> = 0
>>> Thread-7913::DEBUG::2015-11-03
>>> 08:47:32,298::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
>>> ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
>>> element is not present
>>> Thread-5682::DEBUG::2015-11-03
>>> 08:47:32,417::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
>>> ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
>>> element is not present
>>> Detector thread::DEBUG::2015-11-03
>>> 08:47:32,591::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>>> Adding connection from 127.0.0.1:44671
>>> Detector thread::DEBUG::2015-11-03
>>> 08:47:32,598::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>>> Connection removed from 127.0.0.1:44671
>>> Detector thread::DEBUG::2015-11-03
>>> 08:47:32,599::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>>> Detected protocol xml from 127.0.0.1:44671
>>> Detector thread::DEBUG::2015-11-03
>>> 08:47:32,599::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http
>>> detected from ('127.0.0.1', 44671)
>>> Thread-1947642::DEBUG::2015-11-03
>>> 08:47:32,602::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::moving from state init -> state
>>> preparing
>>> Thread-1947642::INFO::2015-11-03
>>> 08:47:32,603::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> repoStats(options=None)
>>> Thread-1947642::INFO::2015-11-03
>>> 08:47:32,603::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> repoStats, Return response: {u'de9eb737-691f-4622-9070-891531d599a0':
>>> {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
>>> '0.000373613', 'lastCheck': '2.5', 'valid': True},
>>> u'fe4fd19a-8714-44e0-ae41-663a4b62da7a': {'code': 0, 'actual': True,
>>> 'version': 0, 'acquired': True, 'delay': '0.000409446', 'lastCheck': '6.4',
>>> 'valid': True}, u'8253a89b-651e-4ff4-865b-57adef05d383': {'code': 0,
>>> 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000520671',
>>> 'lastCheck': '1.8', 'valid': True}, 'b18eb29e-8bb1-45b9-a60e-a8e07210e066':
>>> {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
>>> '0.000424445', 'lastCheck': '6.5', 'valid': True}}
>>> Thread-1947642::DEBUG::2015-11-03
>>> 08:47:32,603::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::finished:
>>> 

Re: [ovirt-users] Unknown libvirterror - where to start?

2015-11-06 Thread Martin Sivak
Hi,

> Thread-8108::DEBUG::2015-11-03 
> 08:47:31,410::libvirtconnection::143::root::(wrapper) Unknown libvirterror: 
> ecode: 80 edom: 20 level: 2
> message: metadata not found: Requested metadata element is not present

we fixed this on vdsm side of oVirt 3.6 too:

https://gerrit.ovirt.org/#/c/45799/

But Nir is correct: this was just noise; it went away once any QoS was
defined for the VM, and it was not important otherwise.
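
For anyone curious, error code 80 is libvirt's VIR_ERR_NO_DOMAIN_METADATA,
raised when a metadata element is queried that the domain simply does not
carry.  A minimal sketch with libvirt-python (the VM name is a placeholder;
the namespace URI is, to my understanding, the one vdsm queried for QoS):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')
    try:
        dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                     'http://ovirt.org/vm/tune/1.0', 0)
    except libvirt.libvirtError as e:
        # 80 == libvirt.VIR_ERR_NO_DOMAIN_METADATA -- harmless if no QoS is set
        print(e.get_error_code())
    conn.close()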

> > VM Channels Listener::DEBUG::2015-11-03 
> > 08:47:34,386::vmchannels::96::vds::(_handle_timeouts) Timeout on fileno 125.
> We've seen this as well. I don't think there's a specific bug filed on this 
> issue. I wonder if they're related.

No relation to the metadata log messages. Can't help here.

Best regards

--
Martin Sivak
SLA / oVirt


On Fri, Nov 6, 2015 at 1:14 PM, Christophe TREFOIS
 wrote:
> Dear Nir,
>
> Thank you for your help. I shall ignore the messages then :)
>
> Kind regards,
>
> —
> Christophe
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
>
> 
> This message is confidential and may contain privileged information.
> It is intended for the named recipient only.
> If you receive it in error please notify me and permanently delete the 
> original message and any copies.
> 
>
>
>
>> On 06 Nov 2015, at 12:23, Nir Soffer  wrote:
>>
>> On Tue, Nov 3, 2015 at 10:30 AM, Yaniv Kaul  wrote:
>>> On Tue, Nov 3, 2015 at 9:52 AM, Christophe TREFOIS
>>>  wrote:

 Hi,

 I checked the logs on my hypervisor that also hosts the ovirt-engine
 (self-hosted) and I see strange unknown libvirterrors that come 
 periodically
 in the vdsm.log file. The storage is glusterFS running on the hypervisor as
 well, one NFS export domain and an ISO domain. A NFS domain from another
 place is in maintenance mode.

 I am running oVirt 3.5.3.

 Thank you for any pointers as to where to start fixing this issue.

 — log excerpt --

 Thread-1947641::DEBUG::2015-11-03
 08:47:31,398::stompReactor::163::yajsonrpc.StompServer::(send) Sending
 response
 Thread-8108::DEBUG::2015-11-03
 08:47:31,410::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
 ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
 element is not present
>>>
>>>
>>> This (depending on your host OS version, 6.x or 7.x) is either
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1220474 or
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1260864
>>> Y.
>>
>> The error about missing metadata is just noise in the log, nothing to
>> worry about.
>>
>> Adding Martin
>>
>>>
 Dummy-1895260::DEBUG::2015-11-03
 08:47:31,477::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail) dd
 if=/rhev/data-center/0002-0002-0002-0002-03d5/mastersd/dom_md/inbox
 iflag=direct,fullblock count=1 bs=1024000 (cwd None)
 Dummy-1895260::DEBUG::2015-11-03
 08:47:31,501::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
 SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
 copied, 0.00331278 s, 309 MB/s\n'; <rc> = 0
 Thread-7913::DEBUG::2015-11-03
 08:47:32,298::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
 ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
 element is not present
 Thread-5682::DEBUG::2015-11-03
 08:47:32,417::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
 ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
 element is not present
 Detector thread::DEBUG::2015-11-03
 08:47:32,591::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
 Adding connection from 127.0.0.1:44671
 Detector thread::DEBUG::2015-11-03
 08:47:32,598::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
 Connection removed from 127.0.0.1:44671
 Detector thread::DEBUG::2015-11-03
 08:47:32,599::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:44671
 Detector thread::DEBUG::2015-11-03
 08:47:32,599::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over 
 http
 detected from ('127.0.0.1', 44671)
 Thread-1947642::DEBUG::2015-11-03
 08:47:32,602::task::595::Storage.TaskManager.Task::(_updateState)
 Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::moving from state init -> 
 state
 preparing
 Thread-1947642::INFO::2015-11-03
 08:47:32,603::logUtils::44::dispatcher::(wrapper) Run and protect:
 repoStats(options=None)
 Thread-1947642::INFO::2015-11-03
 

Re: [ovirt-users] Ovirt 3.6 | After upgrade host can not connect to storage domains | returned by VDSM was: 480

2015-11-06 Thread Amador Pahim

On 11/06/2015 05:07 AM, Punit Dambiwal wrote:

Hi,

After upgrading the host to oVirt 3.6 I am not able to attach the 
storage domain to the host... The storage domain can be added through 
the command line, but not from the oVirt dashboard with the host active.


engine logs :- https://paste.fedoraproject.org/287499/


2015-11-06 15:01:51,103 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-78) [40bde6e1] Correlation ID: null, Call 
Stack: null, Custom Event ID: -1, Message: The error message for 
connection gluster.3linux.com:/sata returned by VDSM was: 480



Checking the error 480 in vdsm:

(vdsm/storage/storage_exception.py)

class UnsupportedGlusterVolumeReplicaCountError(StorageException):
    code = 480
    message = "Gluster volume replica count is not supported"


Seems like you're not using a replica 3 gluster volume.
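
For reference, the replica count can be checked from any gluster node. A
minimal sketch (Python around the stock gluster CLI; the volume name is
taken from the mount shown in the log above):

    # Show the layout lines of the volume behind gluster.3linux.com:/sata.
    import subprocess

    out = subprocess.check_output(['gluster', 'volume', 'info', 'sata'])
    for line in out.decode().splitlines():
        if line.startswith(('Type:', 'Number of Bricks:')):
            print(line)   # a supported volume reports a replica 3 layout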



Thanks,
Punit



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] A cloud-based OVirt Lab for anyone to use and ideally enhance...

2015-11-06 Thread Kyle Bassett
Hi everyone, I have been playing with the latest OVirt for the last little 
while.  I go back with RHEV to the early days when it was in BETA and I was 
running it in production :} It has certainly come a long, long way...

I see lots of upgrade testing / new feature discussion going on here, so I 
wanted to share this lab with the community to take advantage of and ideally 
make it even better.
I plan to integrate this setup with OpenStack and add IdM and other features 
as soon as possible.

If you want to try it out, you don’t need real hardware and you still get 
all the functionality of KVM.
I used Ravello Systems to build it - https://ravellosystems.com - you can 
use the free two-week trial.

You can find the blueprint here -> 
https://www.ravellosystems.com/repo/blueprints/64554219

Thanks for helping with the original issues I ran into getting it all working 
properly.

Kyle



Here is the description:

A perfect nested lab environment to learn (test upgrades / new features) on 
OVirt (RHEV) and ManageIQ (Cloudforms) 
I have been testing OpenStack blueprints from the Ravello REPO quite a bit 
and saw no-one had built an OVirt setup.
It's been a while since I used OVirt / RHEV (I had been using it since back 
in the day when it was BETA!)
I already had an ESXi setup working with ManageIQ / Cloudforms so I figured why 
not add OVirt / RHEV as well... Openstack will be next...

Here is a blueprint (built on Fedora 22) that includes a 2-node OVirt 
cluster - OVirt Engine Version: 3.5.4.2-1.fc20 (upstream of Red Hat 
Enterprise Virtualization - RHEV)
It also includes a ManageIQ (Upstream Cloudforms) instance to manage / 
orchestrate the OVirt environment.
Also a Fedora 22 Desktop to use as local management / jumpbox to your 
environment.

1 OVirt Manager - oVirt Engine Version: 3.5.4.2-1.fc20
2 OVirt KVM Hypervisors
1 Ovirt hypervisor template (used to add more nodes to your cluster)
1 FreeNAS appliance to offer up shared NFS storage for the cluster
1 ManageIQ appliance
1 Fedora 22 Desktop (Configured with VNC access)

http://www.ovirt.org/Home 
http://manageiq.org/ 
http://www.freenas.org/ 
Give the services and web UI a few minutes to start up after booting.
You will find id/password information in the description of each virtual 
machine in the Ravello UI.  Some VMs require an SSH key.

If you want to build your own hypervisor nodes, see the additional steps 
below (this has already been done on the template I provided).

There are two small issues that you need to work around:

Due to an issue with the user-mode CPU detection in libvirt, this patch 
needs to be applied to /usr/share/libvirt/cpu_map.xml.  The patch forces the 
CPU type to be an Opteron G2 independent of the CPUID.  To apply it, log 
on to the hypervisor in rescue mode, apply the patch, and then issue the 
command “persist /usr/share/libvirt/cpu_map.xml”.  This needs to be done 
after step 8 above.

https://gist.github.com/geertj/56425d0fdc7c54d4bc9f 

Here is a good blog that details a similar setup for RHEV.

https://www.ravellosystems.com/blog/run-red-hat-enterprise-virtualization-kvm-ec2/
 

Feel free to ask questions or provide comments

Have fun
Kyle



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unknown libvirterror - where to start?

2015-11-06 Thread Nir Soffer
On Tue, Nov 3, 2015 at 10:30 AM, Yaniv Kaul  wrote:
> On Tue, Nov 3, 2015 at 9:52 AM, Christophe TREFOIS
>  wrote:
>>
>> Hi,
>>
>> I checked the logs on my hypervisor that also hosts the ovirt-engine
>> (self-hosted) and I see strange unknown libvirterrors that come periodically
>> in the vdsm.log file. The storage is glusterFS running on the hypervisor as
>> well, one NFS export domain and an ISO domain. A NFS domain from another
>> place is in maintenance mode.
>>
>> I am running oVirt 3.5.3.
>>
>> Thank you for any pointers as to where to start fixing this issue.
>>
>> — log excerpt --
>>
>> Thread-1947641::DEBUG::2015-11-03
>> 08:47:31,398::stompReactor::163::yajsonrpc.StompServer::(send) Sending
>> response
>> Thread-8108::DEBUG::2015-11-03
>> 08:47:31,410::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
>> ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
>> element is not present
>
>
> This (depending on your host OS version, 6.x or 7.x) is either
> https://bugzilla.redhat.com/show_bug.cgi?id=1220474 or
> https://bugzilla.redhat.com/show_bug.cgi?id=1260864
> Y.

The error about missing metadata is just noise in the log, nothing to
worry about.

Adding Martin

>
>> Dummy-1895260::DEBUG::2015-11-03
>> 08:47:31,477::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail) dd
>> if=/rhev/data-center/0002-0002-0002-0002-03d5/mastersd/dom_md/inbox
>> iflag=direct,fullblock count=1 bs=1024000 (cwd None)
>> Dummy-1895260::DEBUG::2015-11-03
>> 08:47:31,501::storage_mailbox::731::Storage.Misc.excCmd::(_checkForMail)
>> SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
>> copied, 0.00331278 s, 309 MB/s\n'; <rc> = 0
>> Thread-7913::DEBUG::2015-11-03
>> 08:47:32,298::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
>> ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
>> element is not present
>> Thread-5682::DEBUG::2015-11-03
>> 08:47:32,417::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
>> ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
>> element is not present
>> Detector thread::DEBUG::2015-11-03
>> 08:47:32,591::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:44671
>> Detector thread::DEBUG::2015-11-03
>> 08:47:32,598::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:44671
>> Detector thread::DEBUG::2015-11-03
>> 08:47:32,599::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:44671
>> Detector thread::DEBUG::2015-11-03
>> 08:47:32,599::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over http
>> detected from ('127.0.0.1', 44671)
>> Thread-1947642::DEBUG::2015-11-03
>> 08:47:32,602::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::moving from state init -> state
>> preparing
>> Thread-1947642::INFO::2015-11-03
>> 08:47:32,603::logUtils::44::dispatcher::(wrapper) Run and protect:
>> repoStats(options=None)
>> Thread-1947642::INFO::2015-11-03
>> 08:47:32,603::logUtils::47::dispatcher::(wrapper) Run and protect:
>> repoStats, Return response: {u'de9eb737-691f-4622-9070-891531d599a0':
>> {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
>> '0.000373613', 'lastCheck': '2.5', 'valid': True},
>> u'fe4fd19a-8714-44e0-ae41-663a4b62da7a': {'code': 0, 'actual': True,
>> 'version': 0, 'acquired': True, 'delay': '0.000409446', 'lastCheck': '6.4',
>> 'valid': True}, u'8253a89b-651e-4ff4-865b-57adef05d383': {'code': 0,
>> 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000520671',
>> 'lastCheck': '1.8', 'valid': True}, 'b18eb29e-8bb1-45b9-a60e-a8e07210e066':
>> {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
>> '0.000424445', 'lastCheck': '6.5', 'valid': True}}
>> Thread-1947642::DEBUG::2015-11-03
>> 08:47:32,603::task::1191::Storage.TaskManager.Task::(prepare)
>> Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::finished:
>> {u'de9eb737-691f-4622-9070-891531d599a0': {'code': 0, 'actual': True,
>> 'version': 0, 'acquired': True, 'delay': '0.000373613', 'lastCheck': '2.5',
>> 'valid': True}, u'fe4fd19a-8714-44e0-ae41-663a4b62da7a': {'code': 0,
>> 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000409446',
>> 'lastCheck': '6.4', 'valid': True}, u'8253a89b-651e-4ff4-865b-57adef05d383':
>> {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
>> '0.000520671', 'lastCheck': '1.8', 'valid': True},
>> 'b18eb29e-8bb1-45b9-a60e-a8e07210e066': {'code': 0, 'actual': True,
>> 'version': 3, 'acquired': True, 'delay': '0.000424445', 'lastCheck': '6.5',
>> 'valid': True}}
>> Thread-1947642::DEBUG::2015-11-03
>> 08:47:32,603::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`1d99a166-cb9a-4025-8211-a48e210b5234`::moving from 

Re: [ovirt-users] No MAC addresses left in the MAC Address Pool

2015-11-06 Thread Amador Pahim

On 11/06/2015 07:42 AM, nico...@devels.es wrote:

Hi,

We're running oVirt 3.5.3.1-1. One of our users is trying to create a 
new VM. However, he gets the following warning:


2015-11-05 17:33:12,596 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ajp--127.0.0.1-8702-23) [7a0ba949] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: No MAC addresses left in the MAC 
Address Pool.
2015-11-05 17:33:28,804 WARN 
[org.ovirt.engine.core.bll.CloneVmCommand] (ajp--127.0.0.1-8702-18) 
[22611dad] CanDoAction of action CloneVm failed for user XXX@YYY. 
Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,MAC_POOL_NOT_ENOUGH_MAC_ADDRESSES


Is there a way to increase the MAC Pool?


There are two settings in play here:

# engine-config -g MaxMacsCountInPool
MaxMacsCountInPool: 10 version: general

# engine-config -g MacPoolRanges
MacPoolRanges: 00:1a:4a:ab:64:00-00:1a:4a:ab:64:ff version: general

You're probably exhausting the MacPoolRanges. You can either extend the 
current range or add a new range, using commas between the ranges:


# engine-config -s 
"MacPoolRanges=00:1a:4a:ab:64:00-00:1a:4a:ab:64:ff,00:1A:4A:97:5F:00-00:1A:4A:97:5F:FF"


Then restart the ovirt-engine service.

IIRC, this setting is moving to the UI in oVirt 3.6.
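
To size a range against the number of vNICs you expect, it may help to
count the addresses a MacPoolRanges entry actually provides. A minimal
sketch (plain Python; the range is the default one shown above):

    # Count the MAC addresses contained in one MacPoolRanges entry.
    def mac_to_int(mac):
        return int(mac.replace(':', ''), 16)

    def range_size(rng):
        start, end = rng.split('-')
        return mac_to_int(end) - mac_to_int(start) + 1

    print(range_size('00:1a:4a:ab:64:00-00:1a:4a:ab:64:ff'))  # 256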




Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] resources.ovirt.org Migration Notification

2015-11-06 Thread Anton Marchukov
Hello All.

Please note that today resources.ovirt.org has been migrated to another
data centre to solve long-standing capacity problems. As part of this
migration, all the content is finally back in one single place (as some of
you may have noted, older content was moved away to resources01 at some
point, but that host is now deprecated).

If you use the correct DNS name for the host (that is, resources.ovirt.org),
you do not need to do anything with regard to this migration.

Here is the list of known issues and possible breakages at the moment.

1. Mirrors will have a delay with updates. This issue is being worked on.
2. In case you used linode01.ovirt.org to connect, please change this to
resources.ovirt.org, as it is no longer the same host.
3. resources01.phx.ovirt.org is deprecated. It is not down, but all the
content is finally back on resources.ovirt.org.

Should you have any other problems, feel free to drop us a note at the
in...@ovirt.org list so we can fix it ASAP.

Anton.

-- 
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Attach disk to vm firefox

2015-11-06 Thread Jonas Israelsson

Greetings.

I think I have stumbled upon a bug related to Firefox.

Running the released version of oVirt 3.6 and trying to attach a disk to a 
VM, it is impossible to select a disk after position 13 (from the top) of 
the list.
The 'table border', which I assume should normally go around the whole 
window, stops in Firefox a few disks from the bottom of the list, and 
disks below that point can't be selected, hence can't be attached.

Works in Chrome.

Running Firefox 41.0.2 under Linux (openSUSE 13.2).

Is this a known issue?

See attached snapshot.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users