Re: [ovirt-users] Error in starting vdsm in host

2017-04-07 Thread shubham dubey
Do I need to install it on the ovirt-engine machine?
Recently I created the first host for my new oVirt setup, but when I
tried to initialize the host it failed. So I think I need to configure
vdsm on the host first.

I tried "vdsm-tool configure" and it gives me this error:
[root@localhost ~]# vdsm-tool configure

Checking configuration status...
lvm is configured for vdsm
libvirt is already configured for vdsm
FAILED: conflicting vdsm and libvirt-qemu tls configuration.
vdsm.conf with ssl=False requires the following changes:
libvirtd.conf: listen_tcp=1, auth_tcp="none", listen_tls=0
 qemu.conf: spice_tls=0.
Error:  Configuration of libvirt is invalid

I have changed those values in libvirtd.conf, but I am still getting the
same error.
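For reference, the full set of changes the error message asks for is the
following (assuming the stock file locations; note that qemu.conf needs
its change too, since the check covers both files):

# /etc/libvirt/libvirtd.conf
listen_tcp = 1
auth_tcp = "none"
listen_tls = 0

# /etc/libvirt/qemu.conf
spice_tls = 0

Alternatively, "vdsm-tool configure --force" should apply the vdsm side of
the libvirt configuration by itself, after which libvirtd has to be
restarted:

[root@localhost ~]# vdsm-tool configure --force
[root@localhost ~]# systemctl restart libvirtd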

On Sat, Apr 8, 2017 at 12:19 AM, Sandro Bonazzola 
wrote:

>
>
> On 07/Apr/2017 19:53, "shubham dubey" wrote:
>
> Hello,
> I am trying to install and configure vdsm on a newly created CentOS 7.3
> host. The packages that I have installed are vdsm, vdsm-cli and libvirtd.
>
>
> May I ask why you are trying to run vdsm by hand on the host?
>
>
>
> Now when I try to start the vdsm service, I get this error:
>
> [root@localhost ~]# systemctl start vdsmd
>
>
>
>
> Please set a hostname on the host. Localhost won't work very well.
>
> Did you configure vdsm before trying to start it?
> Something like "vdsm-tool configure".
>
>
> Job for vdsmd.service failed because the control process exited with error
> code. See "systemctl status vdsmd.service" and "journalctl -xe" for details.
>
> [root@localhost ~]# journalctl -xe
> -- The result is failed.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Dependency failed for
> MOM instance configured for VDSM purposes.
> -- Subject: Unit mom-vdsm.service has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit mom-vdsm.service has failed.
> --
> -- The result is dependency.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Job
> mom-vdsm.service/start failed with result 'dependency'.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Unit vdsmd.service
> entered failed state.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service failed.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Cannot add dependency
> job for unit lvm2-lvmetad.socket, ignoring: Invalid re
> Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service holdoff
> time over, scheduling restart.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Cannot add dependency
> job for unit lvm2-lvmetad.socket, ignoring: Unit is ma
> Apr 07 23:06:47 localhost.localdomain systemd[1]: start request repeated
> too quickly for vdsmd.service
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Failed to start Virtual
> Desktop Server Manager.
> -- Subject: Unit vdsmd.service has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit vdsmd.service has failed.
> --
> -- The result is failed.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Dependency failed for
> MOM instance configured for VDSM purposes.
> -- Subject: Unit mom-vdsm.service has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit mom-vdsm.service has failed.
> --
> -- The result is dependency.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Job
> mom-vdsm.service/start failed with result 'dependency'.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: Unit vdsmd.service
> entered failed state.
> Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service failed.
>
> The momd service output is:
>
> [root@localhost ~]# systemctl status momd
> ● momd.service - Memory Overcommitment Manager Daemon
>Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
> preset: disabled)
>Active: inactive (dead) since Fri 2017-04-07 23:15:32 IST; 3s ago
>   Process: 13031 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
> /var/run/momd.pid (code=exited, status=0/SUCCESS)
>  Main PID: 13034 (code=exited, status=0/SUCCESS)
>
> Apr 07 23:15:32 localhost.localdomain systemd[1]: Starting Memory
> Overcommitment Manager Daemon...
> Apr 07 23:15:32 localhost.localdomain systemd[1]: PID file
> /var/run/momd.pid not readable (yet?) after start.
> Apr 07 23:15:32 localhost.localdomain systemd[1]: Started Memory
> Overcommitment Manager Daemon.
> Apr 07 23:15:32 localhost.localdomain python[13034]: No worthy mechs found
>
>
> The output from starting the mom-vdsm service is:
>
> [root@localhost ~]# systemctl restart mom-vdsm.service
> A dependency job for mom-vdsm.service failed. See 'journalctl -xe' for
> details.
> [root@localhost ~]# journalctl -xe
> -- The result is failed.
> Apr 07 23:18:40 localhost.localdomain systemd[1]: Dependency failed for
> MOM instance configured for VDSM purposes.
> -- Subject: Unit 

Re: [ovirt-users] Error in starting vdsm in host

2017-04-07 Thread Sandro Bonazzola
On 07/Apr/2017 19:53, "shubham dubey" wrote:

Hello,
I am trying to install and configure vdsm on a newly created CentOS 7.3
host. The packages that I have installed are vdsm, vdsm-cli and libvirtd.


May I ask why you are trying to run vdsm by hand on the host?



Now when I try to start the vdsm service, I get this error:

[root@localhost ~]# systemctl start vdsmd




Please set a hostname on the host. Localhost won't work very well.

Did you configure vdsm before trying to start it?
Something like "vdsm-tool configure".
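For example (the host name below is just a placeholder):

hostnamectl set-hostname ovirt-host1.example.com
vdsm-tool configure --force
systemctl restart vdsmd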


Job for vdsmd.service failed because the control process exited with error
code. See "systemctl status vdsmd.service" and "journalctl -xe" for details.

[root@localhost ~]# journalctl -xe
-- The result is failed.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Dependency failed for MOM
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mom-vdsm.service has failed.
-- 
-- The result is dependency.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Job
mom-vdsm.service/start failed with result 'dependency'.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Unit vdsmd.service
entered failed state.
Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service failed.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Invalid re
Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service holdoff
time over, scheduling restart.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Unit is ma
Apr 07 23:06:47 localhost.localdomain systemd[1]: start request repeated
too quickly for vdsmd.service
Apr 07 23:06:47 localhost.localdomain systemd[1]: Failed to start Virtual
Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit vdsmd.service has failed.
-- 
-- The result is failed.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Dependency failed for MOM
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mom-vdsm.service has failed.
-- 
-- The result is dependency.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Job
mom-vdsm.service/start failed with result 'dependency'.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Unit vdsmd.service
entered failed state.
Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service failed.

The momd service output is:

[root@localhost ~]# systemctl status momd
● momd.service - Memory Overcommitment Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
   Active: inactive (dead) since Fri 2017-04-07 23:15:32 IST; 3s ago
  Process: 13031 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
/var/run/momd.pid (code=exited, status=0/SUCCESS)
 Main PID: 13034 (code=exited, status=0/SUCCESS)

Apr 07 23:15:32 localhost.localdomain systemd[1]: Starting Memory
Overcommitment Manager Daemon...
Apr 07 23:15:32 localhost.localdomain systemd[1]: PID file
/var/run/momd.pid not readable (yet?) after start.
Apr 07 23:15:32 localhost.localdomain systemd[1]: Started Memory
Overcommitment Manager Daemon.
Apr 07 23:15:32 localhost.localdomain python[13034]: No worthy mechs found


The output from starting the mom-vdsm service is:

[root@localhost ~]# systemctl restart mom-vdsm.service
A dependency job for mom-vdsm.service failed. See 'journalctl -xe' for
details.
[root@localhost ~]# journalctl -xe
-- The result is failed.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Dependency failed for MOM
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mom-vdsm.service has failed.
-- 
-- The result is dependency.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Job
mom-vdsm.service/start failed with result 'dependency'.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Unit vdsmd.service
entered failed state.
Apr 07 23:18:40 localhost.localdomain systemd[1]: vdsmd.service failed.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Invalid re
Apr 07 23:18:40 localhost.localdomain systemd[1]: vdsmd.service holdoff
time over, scheduling restart.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Unit is ma
Apr 07 23:18:40 localhost.localdomain systemd[1]: start request repeated
too quickly for vdsmd.service
Apr 07 23:18:40 localhost.localdomain systemd[1]: Failed to start Virtual
Desktop Server Manager.
-- Subject: Unit 

[ovirt-users] Error in starting vdsm in host

2017-04-07 Thread shubham dubey
Hello,
I am trying to install and configure vdsm on a newly created CentOS 7.3
host. The packages that I have installed are vdsm, vdsm-cli and libvirtd.
Now when I try to start the vdsm service, I get this error:

[root@localhost ~]# systemctl start vdsmd
Job for vdsmd.service failed because the control process exited with error
code. See "systemctl status vdsmd.service" and "journalctl -xe" for details.

[root@localhost ~]# journalctl -xe
-- The result is failed.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Dependency failed for MOM
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mom-vdsm.service has failed.
-- 
-- The result is dependency.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Job
mom-vdsm.service/start failed with result 'dependency'.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Unit vdsmd.service
entered failed state.
Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service failed.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Invalid re
Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service holdoff
time over, scheduling restart.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Unit is ma
Apr 07 23:06:47 localhost.localdomain systemd[1]: start request repeated
too quickly for vdsmd.service
Apr 07 23:06:47 localhost.localdomain systemd[1]: Failed to start Virtual
Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit vdsmd.service has failed.
-- 
-- The result is failed.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Dependency failed for MOM
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mom-vdsm.service has failed.
-- 
-- The result is dependency.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Job
mom-vdsm.service/start failed with result 'dependency'.
Apr 07 23:06:47 localhost.localdomain systemd[1]: Unit vdsmd.service
entered failed state.
Apr 07 23:06:47 localhost.localdomain systemd[1]: vdsmd.service failed.

The momd service output is:

[root@localhost ~]# systemctl status momd
● momd.service - Memory Overcommitment Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
   Active: inactive (dead) since Fri 2017-04-07 23:15:32 IST; 3s ago
  Process: 13031 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
/var/run/momd.pid (code=exited, status=0/SUCCESS)
 Main PID: 13034 (code=exited, status=0/SUCCESS)

Apr 07 23:15:32 localhost.localdomain systemd[1]: Starting Memory
Overcommitment Manager Daemon...
Apr 07 23:15:32 localhost.localdomain systemd[1]: PID file
/var/run/momd.pid not readable (yet?) after start.
Apr 07 23:15:32 localhost.localdomain systemd[1]: Started Memory
Overcommitment Manager Daemon.
Apr 07 23:15:32 localhost.localdomain python[13034]: No worthy mechs found


The output from starting the mom-vdsm service is:

[root@localhost ~]# systemctl restart mom-vdsm.service
A dependency job for mom-vdsm.service failed. See 'journalctl -xe' for
details.
[root@localhost ~]# journalctl -xe
-- The result is failed.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Dependency failed for MOM
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit mom-vdsm.service has failed.
-- 
-- The result is dependency.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Job
mom-vdsm.service/start failed with result 'dependency'.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Unit vdsmd.service
entered failed state.
Apr 07 23:18:40 localhost.localdomain systemd[1]: vdsmd.service failed.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Invalid re
Apr 07 23:18:40 localhost.localdomain systemd[1]: vdsmd.service holdoff
time over, scheduling restart.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Cannot add dependency job
for unit lvm2-lvmetad.socket, ignoring: Unit is ma
Apr 07 23:18:40 localhost.localdomain systemd[1]: start request repeated
too quickly for vdsmd.service
Apr 07 23:18:40 localhost.localdomain systemd[1]: Failed to start Virtual
Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit vdsmd.service has failed.
-- 
-- The result is failed.
Apr 07 23:18:40 localhost.localdomain systemd[1]: Dependency failed for MOM
instance configured for VDSM purposes.

Re: [ovirt-users] Hosts Network Management

2017-04-07 Thread Ondrej Svoboda
Hello Kai,

I'd like to know – what is your initial problem that you cannot solve
through the GUI?

VDSM does use a custom comment in ifcfg files, but only to recognize them –
if they are unknown, it acquires the relevant network devices from
NetworkManager. But VDSM is in charge of networking configuration,
exclusively.

If you need to add extra configuration parameters in ifcfg files, or modify
them, you could write or use a VDSM hook to alter the ifcfg files as they
are written.
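As a very rough sketch, such a hook could re-apply a custom parameter
after VDSM rewrites the file. The hook point directory, the ifcfg path
and the parameter below are all assumptions for illustration; please
check the vdsm hooks documentation for your version:

#!/usr/bin/python
# Hypothetical sketch: re-append a custom parameter whenever VDSM
# rewrites an ifcfg file. Assumed install location:
#   /usr/libexec/vdsm/hooks/after_network_setup/50_custom_ifcfg

IFCFG = '/etc/sysconfig/network-scripts/ifcfg-eth1'  # example device
EXTRA = 'ETHTOOL_OPTS="autoneg on"\n'                # example parameter

with open(IFCFG) as f:
    content = f.read()

# Re-append the parameter only if the rewrite dropped it:
if EXTRA not in content:
    with open(IFCFG, 'a') as f:
        f.write(EXTRA)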

But perhaps it is better for us to know what problems you are having and
whether they can be solved by available methods.

Thank you,
Ondra

On Thu, Apr 6, 2017 at 12:14 PM, Kai Wagner  wrote:

> Hi all,
>
> is it possible to set a comment at the beginning of the network config
> files -> something like "don't touch this anymore, please"?
>
> I want to configure my network stuff on the CLI and not via the UI,
> because I have failed a few times now.
>
> Thx
>
> Kai
>
>
> --
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB
> 21284 (AG Nürnberg)
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] JavaAPI exporting VM

2017-04-07 Thread Juan Hernández
On 04/07/2017 03:04 PM, David David wrote:
> Hello.
> 
> Can't re-export a VM if this VM is already present in the export domain.
> 
> With Java api:
> 
> VM vm = api.getVMs().get("vm01");
> StorageDomain exportDomain = api.getStorageDomains().get("export");
>
> Action act = vm.exportVm(new Action() {
>     {
>         setExclusive(True);
>         setDiscardSnapshots(True);
>         setStorageDomain(exportDomain);
>     }
> });
> 
> And following error is returned:
> 
> Exception in thread "main"
> code  : 409
> reason: Conflict
> detail: Cannot export VM. VM with the same identifier already exists.
> at org.ovirt.engine.sdk.web.HttpProxy.execute(HttpProxy.java:120)
> at
> org.ovirt.engine.sdk.web.HttpProxyBroker.add(HttpProxyBroker.java:209)
> at
> org.ovirt.engine.sdk.web.HttpProxyBroker.action(HttpProxyBroker.java:153)
> at org.ovirt.engine.sdk.decorators.VM.exportVm(VM.java:784)
> 

That should have worked. What version of the SDK and what version of the
engine are you using? Can you run the example with debug mode enabled
and share the debug output?

Also, are you completely sure you actually compiled and ran that version
of the code? I ask because it uses "True" instead of "true", which
should fail to compile.

I see that you are using version 3.6 of the Java SDK. That version of
the SDK and the version of the API it uses (version 3) are deprecated
since version 4 of the engine. Please consider using version 4 of the
SDK instead. With version 4 of the SDK you can do that as follows:

---8<---
package org.ovirt.engine.sdk4.examples;

import static org.ovirt.engine.sdk4.ConnectionBuilder.connection;
import static org.ovirt.engine.sdk4.builders.Builders.storageDomain;

import org.ovirt.engine.sdk4.Connection;
import org.ovirt.engine.sdk4.services.SystemService;
import org.ovirt.engine.sdk4.services.VmService;
import org.ovirt.engine.sdk4.services.VmsService;
import org.ovirt.engine.sdk4.types.Vm;

// This example shows how to export a virtual machine.
public class ExportVm {
    public static void main(String[] args) throws Exception {
        // Create the connection to the server:
        Connection connection = connection()
            .url("https://engine40.local/ovirt-engine/api")
            .user("admin@internal")
            .password("redhat123")
            //.trustStoreFile("truststore.jks")
            .insecure(true)
            .build();

        // Get the reference to the root of the services tree:
        SystemService systemService = connection.systemService();

        // Find the virtual machine:
        VmsService vmsService = systemService.vmsService();
        Vm vm = vmsService.list()
            .search("name=myvm")
            .send()
            .vms()
            .get(0);

        // Export the virtual machine:
        VmService vmService = vmsService.vmService(vm.id());
        vmService.export()
            .exclusive(true)
            .discardSnapshots(true)
            .storageDomain(
                storageDomain()
                    .name("myexport")
            )
            .send();

        // Close the connection to the server:
        connection.close();
    }
}
--->8---

The documentation of version 4 of the SDK is available here:

  https://github.com/oVirt/ovirt-engine-sdk-java/tree/master/sdk

https://github.com/oVirt/ovirt-engine-sdk-java/tree/master/sdk/src/test/java/org/ovirt/engine/sdk4/examples
  http://www.javadoc.io/doc/org.ovirt.engine.api/sdk/4.1.3

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI storage domain and multipath when adding node

2017-04-07 Thread Gianluca Cecchi
Hello,
my configuration is what described here:
http://lists.ovirt.org/pipermail/users/2017-March/080992.html

So I'm using iSCSI multipath and not bonding. Can anyone reproduce this?


The initial situation is only one node configured and active, with some VMs.

I go and configure a second node; it tries to activate, but the networks are
not all mapped yet, and so it goes to non-operational.
I set up all the networks and activate the node.

It happens that:
- on the first node, where I currently have 2 iSCSI connections and 2
multipath lines (with p1p1.100 and p1p2), a new iSCSI SID is instantiated
using interface "default", and in the multipath -l output I now see 3 lines

- on the newly added node I only see 1 iSCSI SID, using interface "default"

My way to solve the situation was to go inside the iSCSI multipath section,
change nothing, and save the same config.

Then, brutally, on the first node:
iscsiadm -m session -u
--> all iSCSI sessions are closed
After a while I see the original 2 connections recovered again, with the
correct interface names used.

- on the second node:
iscsiadm -m session -u
--> the only session is closed
Nothing happens.
If I set the node to maintenance and then activate it
--> the 2 correct iSCSI sessions are activated...
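For comparison, the state each node ends up in can be checked with the
standard tools (the iface column is what differs here):

iscsiadm -m session -P 1    # sessions, with the iface each one uses
iscsiadm -m iface           # ifaces created for the bound NICs
multipath -ll               # resulting multipath topology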

Thanks
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving disk from one storage domain to another

2017-04-07 Thread Bill James



On 4/7/17 12:52 AM, Nir Soffer wrote:
On Fri, Apr 7, 2017 at 2:40 AM Bill James wrote:


We are trying to convert our QA environment from local NFS to Gluster.
When I move a disk of a VM that is running on the same server as the
storage, it fails.
When I move a disk of a VM running on a different system, it works.

VM running on same system as disk:

2017-04-06 13:31:00,588 ERROR (jsonrpc/6) [virt.vm]
(vmId='e598485a-dc74-43f7-8447-e00ac44dae21') Unable to start
replication for vda to {u'domainID':
u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volumeInfo': {'domainID':
u'6affd8c3-2c
51-4cd1-8300-bfbbb14edbe9', 'volType': 'path', 'leaseOffset': 0,
'path':

u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':

u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, 'diskType':
'file',
'format': 'cow', 'cache': 'none', u'volumeID':
u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', u'imageID':
u'7ae9b3f7-3507-4469-a080-d0944d0ab753', u'poolID':
u'8b6303b3-79c6-4633-ae21-71b15ed00675', u'device': 'disk', 'path':

u'/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
'propagateErrors': u'off', 'volumeChain': [{'domainID':
u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
'leaseOffset': 0, 'path':

u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d',
'volumeID': u'6756eb05-6803-42a7-a3a2-10233bf2ca8d', 'leasePath':

u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d.lease',
'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, {'domainID':
u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
'leaseOffset': 0, 'path':

u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':

u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}]} (vm:3594)
Traceback (most recent call last):
   File "/usr/share/vdsm/virt/vm.py", line 3588, in diskReplicateStart
 self._startDriveReplication(drive)
   File "/usr/share/vdsm/virt/vm.py", line 3713, in
_startDriveReplication
  self._dom.blockCopy(drive.name, destxml, flags=flags)
   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py",
line
69, in f
 ret = attr(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 123, in wrapper
 ret = f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in
wrapper
 return func(inst, *args, **kwargs)
   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 684, in
blockCopy
 if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
dom=self)
libvirtError: internal error: unable to execute QEMU command
'drive-mirror': Could not open

'/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5':
Permission denied


[root@ovirt1 test vdsm]# ls -l

/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
-rw-rw 2 vdsm kvm 197120 Apr  6 13:29

/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
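For what it's worth, this drive-mirror "Permission denied" despite the
correct vdsm:kvm ownership is often tied to the Gluster volume options
that the oVirt documentation recommends for VM storage; it may be worth
checking them (gv2 is the volume name used above):

gluster volume set gv2 storage.owner-uid 36
gluster volume set gv2 storage.owner-gid 36
gluster volume set gv2 server.allow-insecure on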



Then if I try to rerun it, it says this, even though the move failed:

2017-04-06 13:49:27,197 INFO  (jsonrpc/1) [dispatcher] Run and
protect:
getAllTasksStatuses, Return response: {'allT
asksStatus': {'078d962c-e682-40f9-a177-2a8b479a7d8b': {'code': 212,
'message': 'Volume already exists', 'taskState':
  'finished', 'taskResult': 

Re: [ovirt-users] Python-SDK4: How to list VM user sessions?

2017-04-07 Thread Juan Hernández
I have been trying to reproduce this and I wasn't able. In theory the
404 error that you get should only happen if the virtual machine doesn't
exist, but that isn't the case.

Can you check the server.log file and share the complete stack traces
that should appear after the "HTTP 404 Not Found" message?
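Also, as mentioned below, please run the script with debug logging
enabled; a minimal sketch of such a connection, based on the linked
list_vms.py example (the URL, user and password are placeholders):

import logging

import ovirtsdk4 as sdk

# Write the full HTTP conversation to example.log:
logging.basicConfig(level=logging.DEBUG, filename='example.log')

connection = sdk.Connection(
    url='https://my.ovirt.host/ovirt-engine/api',
    username='admin@internal',
    password='...',
    insecure=True,
    debug=True,
    log=logging.getLogger(),
)

# Same calls as in the original report:
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=vmname')[0]
sessions = vms_service.vm_service(vm.id).sessions_service().list()
print(sessions)

connection.close()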

On 03/31/2017 10:25 AM, Giulio Casella wrote:
> On 30/03/2017 20:05, Juan Hernández wrote:
>> On 03/30/2017 01:01 PM, Giulio Casella wrote:
>>> Hi,
>>> I'm trying to obtain a list of users connected to a VM, using python SDK
>>> v4.
>>> Here's what I'm doing:
>>>
>>> vm = vms_service.list(search="name=vmname")[0]
>>> vm_service = vms_service.vm_service(vm.id)
>>> sessions = vm_service.sessions_service().list()
>>>
>>> But "sessions" is None.
>>>
>>> Same result using:
>>>
>>> s = connection.follow_link(vm.sessions)
>>>
>>> "s" is None.
>>>
>>> I tried also using curl, and if I connect to:
>>>
>>> https://my.ovirt.host/ovirt-engine/api/v4/vms//sessions
>>>
>>> I get a beautiful 404.
>>>
>>> Also using v3 of python SDK I obtain the same behaviour.
>>>
>>> So I suspect that retrieving user sessions via the API is not implemented,
>>> is it? If not, what am I doing wrong?
>>>
>>> I'm using RHV 4.0.6.3-0.1.el7ev
>>>
>>> Thanks in advance,
>>> Giulio
>>>
>>
>> Giulio, you should never get a 404 error from that URL, unless the
>> virtual machine doesn't exist or isn't visible to you. What user name are
>> you using to create the SDK connection? An administrator or a regular user?
>>
> 
> I tried with a regular domain user (with superuser role assigned) and
> admin@internal, with same result.
> 
>> Also, please check the /var/log/ovirt-engine/server.log and
>> /var/log/ovirt-engine/engine.log when you send that request. Do you see
>> there something relevant?
> 
> server.log reports:
> 
> 2017-03-31 10:03:11,346 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n]
> (default task-33) RESTEASY002010: Failed to execute:
> javax.ws.rs.WebApplicationException: HTTP 404 Not Found
> 
> (no surprise here, same message obtained by curl).
> 
> engine.log is full of:
> 
> ERROR [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default
> task-7) [] Cannot authenticate using authentication Headers:
> invalid_grant: The provided authorization grant for the auth code has
> expired
> 
> (independently of my request)
> 
> It's quite strange, since I can perform almost every other operation (e.g.
> getting other VM parameters, running methods, etc.)
> 
> 
>>
>> Finally, please run your script with the 'debug=True' option in the
>> connection, and with a log file, like here:
>>
>>
>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/list_vms.py#L20-L37
>>
>>
>> Then share that log file so that we can check what the server is
>> returning exactly. Make sure to remove your password from that log file
>> before sharing it.
>>
> Find attached produced log (passwords purged).
> 
> BTW: VM is a Fedora 24, with guest agents correctly installed (I can see
> user sessions in admin portal and in postgresql DB).
> 
> Thanks,
> Giulio
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VDSM overwrites network config

2017-04-07 Thread Alan Cowles
Hey guys,

I'm currently in a lab setup with 2 hosts running RHEV 3.5, with a
self-hosted engine, on RHEL 6.9 servers. I am doing this in order to plan
out a production upgrade to 4.0, and I'm a bit stuck, so I'm hoping it's
OK to ask questions here concerning this product and version.

In my lab, I have many VLANs trunked on my switchports, so I have to create
individual VLAN interfaces on my RHEL install. During the install, I am
able to pick my ifcfg-eth0.502 interface for rhevm and my ifcfg-eth1.504
interface for NFS, access the storage, and create my self-hosted engine.
The issue I am running into is that once I am in RHEV-M, continuing to set
the hosts up or adding other hosts, when I go to move my NFS network to
host2 it only allows me to select the base eth1 adapter, and not the
VLAN-tagged version. I am able to tag the VLAN in the RHEV-M configured
network itself, but this has the unfortunate side effect of tagging a
network on top of the already-tagged interface on host1, taking down NFS
and the self-hosted engine.

I am able to access the console of host1, and I configure the ifcfg files,
VLAN files, and bridge files to be on the correct interfaces, and I get my
host back up and my RHEV-M back up. However, when I try to make these
manual changes to host2 and bring it up, the changes to these files are
completely overwritten the moment the host reboots, which I have tied to
vdsmd start-up.

Right now, I have vdsmd disabled, and I have host2 configured the way I
need it to be, with the rhevm bridge on eth0.502, the NFS bridge on
eth1.504, and my VMNet "guest" bridge on eth1.500; however, that leaves me
with a useless host by RHEV standards.

I've checked several different conf files to see where vdsmd is pulling
its configuration from, but I can't find it, or find a way to modify it to
fit my needs.

Any advice or pointers here would be greatly appreciated. Thank you all in
advance.

AC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] JavaAPI exporting VM

2017-04-07 Thread David David
Hello.

Can't re-export a VM if this VM is already present in the export domain.

With Java api:



VM vm = api.getVMs().get("vm01");
StorageDomain exportDomain = api.getStorageDomains().get("export");

Action act = vm.exportVm(new Action() {
    {
        setExclusive(True);
        setDiscardSnapshots(True);
        setStorageDomain(exportDomain);
    }
});

And following error is returned:

Exception in thread "main"
code  : 409
reason: Conflict
detail: Cannot export VM. VM with the same identifier already exists.
at org.ovirt.engine.sdk.web.HttpProxy.execute(HttpProxy.java:120)
at
org.ovirt.engine.sdk.web.HttpProxyBroker.add(HttpProxyBroker.java:209)
at
org.ovirt.engine.sdk.web.HttpProxyBroker.action(HttpProxyBroker.java:153)
at org.ovirt.engine.sdk.decorators.VM.exportVm(VM.java:784)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Initial network setup question ...

2017-04-07 Thread Alan Bunch
Hello all,

I have a question about the initial setup for oVirt. I have 3 nodes on which
I am about ready to install a hyper-converged setup using Gluster for
storage and a hosted engine. Gluster is set up and the volumes are mounted.

My question is this:
What does the networking setup need to look like at install time? Do I need
to set up all of the bonds, VLANs and bridges before I start the install, or
should I just set up an ovirtmgmt bond/VLAN/bridge and configure the rest of
the networking inside of oVirt? I expect to need 3 or 4 networks/VLANs,
matching my existing networks, to attach VMs to.

Any help or pointers would be appreciated.

Thank You
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [Ovirt 4.0 Python SDK] Host fail to move in Maintenance State

2017-04-07 Thread TranceWorldLogic .
Hi,

I was trying to deactivate a host via the Python SDK but found that the
host is not moving into Maintenance state.

In this scenario, I found that I have one additional network added to the
cluster but not set up on the host.
Hence it keeps retrying the network sync in the background.
Because of this, I suspect the host fails to move into Maintenance state.

Can someone tell me how to force host deactivation via the Python API?
Or
Can I stop the background sync of the host?
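For reference, the plain deactivate call with the version 4 Python SDK
looks like this (a minimal sketch; the URL, credentials and host name are
placeholders, and whether the call succeeds while the network sync keeps
retrying is exactly the open question):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    insecure=True,
)

# Find the host and ask the engine to move it to Maintenance:
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]
hosts_service.host_service(host.id).deactivate()

connection.close()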

Thanks,
~Rohit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] for some reason ovirtnode creates unecessary vg and lvs

2017-04-07 Thread Yaniv Kaul
On Fri, Apr 7, 2017 at 4:11 AM, martin chamambo  wrote:

> I created a VG and an LV using targetcli on a CentOS 7.3 host, and managed
> to connect my ovirt-engine to that LV. When I reboot my engine and nodes,
> the storage won't come up, and when I check my storage server, the LUN will
> have been deleted and those LVs and VG will have been created
> automatically. I have initialized my storage more than 3 times and the
> same thing keeps happening.
>

Have you persisted your targetcli configuration? This has nothing to do
with oVirt - we don't delete LUNs on the storage side.
Y.
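For example, with targetcli on CentOS 7, the running configuration has to
be saved explicitly and the restore service enabled, or the LUNs are gone
after a reboot:

targetcli saveconfig
systemctl enable target
systemctl start target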


>
> On Apr 6, 2017 8:31 PM, "Liron Aravot"  wrote:
>
>>
>>
>> On Wed, Apr 5, 2017 at 3:16 PM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Sun, Apr 2, 2017 at 9:15 PM, martin chamambo 
>>> wrote:
>>>
 I managed to configure the main iSCSI domain for my oVirt 4.1 engine
 and node; it connects to the storage initially and initialises the data
 center, but after rebooting the node and engine it creates unnecessary VGs
 and LVs like below

  LV                                   VG                                   Attr   LSize
  280246d3-ac7b-44ff-8c03-dc2bcb9edb70 d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
  2d57ab88-16e4-4007-9047-55fc4a35b534 d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
  ids                                  d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
  inbox                                d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
  leases                               d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a-   2.00g
  master                               d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a-   1.00g
  metadata                             d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 512.00m
  outbox                               d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
  xleases                              d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a-   1.00g

 what's the cause of this?

>>>
>>> I'm not sure what the problem is - these LVs are the metadata LVs for a
>>> storage domain.
>>> Y.
>>>
>>>
>>
>> As Yaniv wrote - those are LVs created by oVirt.
>> Is the node currently in use in oVirt? If so, it should be connected to
>> your storage server and have those VGs/LVs.
>>
>>>
 NB: my iSCSI storage is on a CentOS 7 box

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] moving disk from one storage domain to another

2017-04-07 Thread Nir Soffer
On Fri, Apr 7, 2017 at 2:40 AM Bill James  wrote:

> We are trying to convert our QA environment from local NFS to Gluster.
> When I move a disk of a VM that is running on the same server as the
> storage, it fails.
> When I move a disk of a VM running on a different system, it works.
>
> VM running on same system as disk:
>
> 2017-04-06 13:31:00,588 ERROR (jsonrpc/6) [virt.vm]
> (vmId='e598485a-dc74-43f7-8447-e00ac44dae21') Unable to start
> replication for vda to {u'domainID':
> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volumeInfo': {'domainID':
> u'6affd8c3-2c
> 51-4cd1-8300-bfbbb14edbe9', 'volType': 'path', 'leaseOffset': 0, 'path':
> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:
> _gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
> 'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:
> _gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, 'diskType': 'file',
> 'format': 'cow', 'cache': 'none', u'volumeID':
> u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', u'imageID':
> u'7ae9b3f7-3507-4469-a080-d0944d0ab753', u'poolID':
> u'8b6303b3-79c6-4633-ae21-71b15ed00675', u'device': 'disk', 'path':
>
> u'/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
> 'propagateErrors': u'off', 'volumeChain': [{'domainID':
> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
> 'leaseOffset': 0, 'path':
> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:
> _gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d',
> 'volumeID': u'6756eb05-6803-42a7-a3a2-10233bf2ca8d', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:
> _gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d.lease',
> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, {'domainID':
> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
> 'leaseOffset': 0, 'path':
> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:
> _gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
> 'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':
> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:
> _gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}]} (vm:3594)
> Traceback (most recent call last):
>File "/usr/share/vdsm/virt/vm.py", line 3588, in diskReplicateStart
>  self._startDriveReplication(drive)
>File "/usr/share/vdsm/virt/vm.py", line 3713, in _startDriveReplication
>  self._dom.blockCopy(drive.name, destxml, flags=flags)
>File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
> 69, in f
>  ret = attr(*args, **kwargs)
>File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
> line 123, in wrapper
>  ret = f(*args, **kwargs)
>File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in
> wrapper
>  return func(inst, *args, **kwargs)
>File "/usr/lib64/python2.7/site-packages/libvirt.py", line 684, in
> blockCopy
>  if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
> dom=self)
> libvirtError: internal error: unable to execute QEMU command
> 'drive-mirror': Could not open
>
> '/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5':
> Permission denied
>
>
> [root@ovirt1 test vdsm]# ls -l
>
> /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
> -rw-rw 2 vdsm kvm 197120 Apr  6 13:29
>
> /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
>
>
>
> Then if I try to rerun it, it says this, even though the move failed:
>
> 2017-04-06 13:49:27,197 INFO  (jsonrpc/1) [dispatcher] Run and protect:
> getAllTasksStatuses, Return response: {'allT
> asksStatus': {'078d962c-e682-40f9-a177-2a8b479a7d8b': {'code': 212,
> 'message': 'Volume already exists', 'taskState':
>   'finished', 'taskResult': 'cleanSuccess', 'taskID':
> '078d962c-e682-40f9-a177-2a8b479a7d8b'}}} (logUtils:52)
>
>
> So now I have to clean up the disks that it failed to move so I can
> migrate the VM and then move the disk again.
> Or so it seems.

[ovirt-users] Host in connecting state and all data domains are red in ovirt 4.1

2017-04-07 Thread gflwqs gflwqs
Hi list, this morning I saw that one of my 2 hosts was in connecting state.
When I checked /var/log/messages on the host, I saw this had happened:

Apr  7 06:05:38 ovirt12 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.submonitor_base.SubmonitorBase ERROR Error
executing submonitor mgmt-bridge, args {'use_ssl': 'true', 'bridge_name':
'ovirtmgmt', 'address': '0'}
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitor_base.py", line 115, in _worker
    self.action(self._options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors/mgmt_bridge.py", line 44, in action
    caps = cli.getVdsCapabilities()
  File "/usr/lib/python2.7/site-packages/vdsm/jsonrpcvdscli.py", line 167, in _callMethod
    raise JsonRpcNoResponseError(method)
JsonRpcNoResponseError: [-32605] No response for JSON-RPC Host.getCapabilities request.
Apr  7 06:05:40 ovirt12 journal: vdsm vds.dispatcher ERROR SSL error
receiving from : unexpected eof

Is this the cause of the problem?
What has happened?
How do i get out of this problem?

Regards
Christian
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users