Re: [ovirt-users] Windows 10

2016-03-14 Thread Sandro Bonazzola
On Fri, Mar 11, 2016 at 3:12 PM, Jean-Marie Perron <
jean-marie.per...@viseo.com> wrote:

> Hi Uwe,
>
> Thank you for your reply.
>
> I tested versions 0.6, 0.7, 0.11 and 0.12 of qxlwddm. It's always the same.
>

So, after some investigation: Windows 10 requires a WDDM driver with 3D
support, which SPICE does not provide.
So oVirt Guest Tools supports Windows 10, but SPICE doesn't fully support it.




>
> Jean-Marie
>
> -Message d'origine-
> De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la part
> de Uwe Laverenz
> Envoyé : vendredi 11 mars 2016 10:49
> À : users@ovirt.org
> Objet : Re: [ovirt-users] Windows 10
>
> Hi,
>
> Am 10.03.2016 um 17:18 schrieb Jean-Marie Perron:
> > Hello,
> >
> > oVirt 3.6.3 is installed on CentOS 7.
> >
> > I use 64-bit Windows 10 client with spice display.
> >
> > After installing the spice-guest-tools and oVirt-tools-setup on the
> > Windows 10 VM, the display always lags and is slow.
> >
> > The display on a Windows 7 VM is fluid.
> >
> > In Device Manager, under Display adapters, I see the graphics card
> > "Red Hat QXL controller".
> >
> > Is Windows 10 fully supported by oVirt?
>
> I haven't tested this but please have a look at the qxlwddm driver here:
>
> https://people.redhat.com/vrozenfe/qxlwddm/
>
>
> Some people reported that this works for Win 8/8.1/10:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=895356
>
> cu,
> Uwe



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [hosted-engine] install additional nodes without knowing another HE host's root password?

2016-03-14 Thread Yedidyah Bar David
On Mon, Mar 14, 2016 at 8:59 AM, Sandro Bonazzola  wrote:
>
>
> On Thu, Feb 25, 2016 at 7:58 AM, Wee Sritippho  wrote:
>>
>> Hi,
>>
>> I'm trying to deploy a 2nd host to my hosted-engine environment, but the
>> 1st host doesn't have a root password - it only has a sudo account.
>
>
> This kind of configuration is not supported by Hosted Engine.
> Please open an RFE for supporting this setup.
>
>
>
>>
>> I can temporarily set a password and enable ssh for the root account, but
> I'm curious whether there is a proper way to do this without messing with
> the user accounts? For example:
>>
>> - Locating the answer file that was copied from the 1st host manually (I
>> already tried 'hosted-engine --deploy
>> --config-append=answers-XX.conf' but the setup thought that I'm
>> going to deploy my 1st host)

IMO it should have passed this point. Use the answer file in
/etc/ovirt-hosted-engine, not the one in /var. If all goes well they should
be identical; otherwise /etc will have the "last known good" one and /var
will have all of them.

But it might fail later during host-deploy. That stage requires ssh from the
engine to the host as root (currently; there's an open RFE about this). But
it does not require a password, because we add the engine's public key to the
host's /root/.ssh/authorized_keys. So if you allow ssh as root with
public-key authentication, it should work.

Please check/post the setup logs to try and understand why it was
identified as the 1st host.

>> - scp from an account other than 'root'

Not difficult in principle, but currently not supported. As Sandro said, you
can open an RFE.

Best,

>>
>> Regards,
>> Wee
>>



-- 
Didi


Re: [ovirt-users] Impact of changing VLAN of ovirtmgmt in the Data Center?

2016-03-14 Thread Edward Haas
On Fri, Feb 26, 2016 at 2:51 AM, Garry Tiedemann
 wrote:
> Hi everyone,
>
> In Data Centers > (Name) > Networks section of the Ovirt GUI, network
> definitions include the VLAN IDs.
> In my case, the VLAN ID of ovirtmgmt has been empty (meaning VLAN 1) since I
> built it; it's always been wrong.
>
> My hypervisor hosts' ovirtmgmt bridges are actually in VLAN 20.
>
> An error message alerted me to this mismatch a few days ago.
>
> None of my production VMs are on VLAN 1, or VLAN 20, but I'd like to confirm
> if it's safe to change this.
>
> Can changing the VLAN ID of ovirtmgmt within Data Center > Networks impact
> VMs from other VLANs?
>
> I see no reason why it should be a problem. We just need some certainty on
> that point.
>
> Thank you in advance for answers.
>
> Regards,
>
> Garry
>
> PS This is a briefer and clearer re-statement of the question I asked a
> couple of days ago.

Hello Garry,

If you have other networks for the VMs, they should not be affected by such
a change.
The risk is in losing the management network (ovirtmgmt) on your nodes,
including Engine. But if you know that all nodes (including Engine) are
already connected using a VLAN and the switch uses a trunk port, you should
be OK.
If a node fails to communicate with Engine after the change, it will attempt
to revert to the last known configuration, reducing the risk of losing it.

Note that when you set the network's VLAN ID, Engine will send a request to
ALL nodes that have the network, asking them to set up the VLAN.
You need to make sure that the switch port connected to the physical
interface is a trunk and can forward the specified tag ID.
(You did mention something about untagged packets being classified into your
VLAN ID; that was unclear to me.)

If you already managed to perform the change, please share your
results and insights.

Thanks,
Edy.


Re: [ovirt-users] shutting down a host witn running vms on it.

2016-03-14 Thread Tomas Jelinek


- Original Message -
> From: "Nathanaël Blanchet" 
> To: users@ovirt.org
> Sent: Wednesday, February 24, 2016 3:55:29 PM
> Subject: [ovirt-users] shutting down a host witn running vms on it.
> 
> Hi all,
> 
> What is the behaviour of a host with running vms when a reboot is
> initiated, for instance by a powerchute shutdown event or an init event?
> In such a case, will vms will properly shutdown?

No, VDSM does not catch any such events to clean the VMs up.
You should not interact with the hosts directly this way.
If you need to, then you should first put the host into maintenance mode.
Moving to maintenance mode will then migrate the VMs off the running host
and prepare everything so that the host is again yours.

> Thank you for your help.
> 
> --
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 


Re: [ovirt-users] shutting down a host witn running vms on it.

2016-03-14 Thread Martin Perina


- Original Message -
> From: "Tomas Jelinek" 
> To: "Nathanaël Blanchet" 
> Cc: users@ovirt.org
> Sent: Monday, March 14, 2016 9:34:34 AM
> Subject: Re: [ovirt-users] shutting down a host witn running vms on it.
> 
> 
> 
> - Original Message -
> > From: "Nathanaël Blanchet" 
> > To: users@ovirt.org
> > Sent: Wednesday, February 24, 2016 3:55:29 PM
> > Subject: [ovirt-users] shutting down a host witn running vms on it.
> > 
> > Hi all,
> > 
> > What is the behaviour of a host with running vms when a reboot is
> > initiated, for instance by a powerchute shutdown event or an init event?
> > In such a case, will vms will properly shutdown?
> 
> No, VDSM does not catch any such events to clean the VMs up.
> You should not interact with the hosts directly this way.
> If you need to, then you should first put the host into maintenance mode.
> Moving to maintenance mode will then migrate the VMs off the running host
> and prepare everything so that the host is again yours.

The behaviour depends on the software which communicates with your OS;
if it's properly set up, then a standard OS shutdown is executed, so the
standard shutdown signal is also sent to libvirt.

But from the oVirt point of view, if this happens when the host is not in
Maintenance, the host will become Non Responsive for oVirt, and oVirt will
try to fence the host using the power management agents which you provided
for the host.

> 
> > Thank you for your help.
> > 
> > --
> > Nathanaël Blanchet
> > 
> > Supervision réseau
> > Pôle Infrastrutures Informatiques
> > 227 avenue Professeur-Jean-Louis-Viala
> > 34193 MONTPELLIER CEDEX 5
> > Tél. 33 (0)4 67 54 84 55
> > Fax  33 (0)4 67 54 84 14
> > blanc...@abes.fr
> > 


Re: [ovirt-users] a KVM Host has different UUIDs ?

2016-03-14 Thread Juan Hernández
On 03/11/2016 11:34 AM, Jean-Pierre Ribeauville wrote:
> Hi,
> 
> When retrieving KVM Host UUID   via virsh  getcapabilities()  or via
>  ovirt Python SDK ( by using  obj.get_id() ), then both UUIDs are not
> identical .
> 
> (FYI, Guest UUIDs retrieved via both ways , i.e.
> virDomainGetUUIDString() or python ovirt sdk ,  are identical.)
> 
> 
> Did I miss something ?
> 

This is normal, and expected. The host exists before it is added to the
oVirt system, and already has an identifier. When it is added to the
oVirt system it gets a new identifier assigned, for tracking it in the
oVirt database. Those two identifiers are different.

On the other hand, virtual machines are created by the oVirt system. It
assigns them an identifier for tracking in the oVirt database, and it
happens to use the same identifier when asking libvirt to create the
virtual machine.

Anyhow, you should not rely on these identifiers being the same, or
different, as it is an implementation detail that may change in the future.

Is there any specific thing you are trying to achieve? If you share it
we may be able to suggest a different approach.
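As an aside, if you want to inspect the libvirt-side identifier programmatically, it sits in the <host><uuid> element of the `virsh capabilities` XML. A minimal sketch of pulling it out — the sample XML and UUID below are made up for illustration:

```python
import xml.etree.ElementTree as ET

def host_uuid_from_capabilities(capabilities_xml):
    """Extract the host UUID from `virsh capabilities` XML output."""
    root = ET.fromstring(capabilities_xml)
    uuid = root.findtext("host/uuid")
    if uuid is None:
        raise ValueError("no <host><uuid> element found in capabilities XML")
    return uuid

# Made-up sample of the relevant part of `virsh capabilities` output
sample = """<capabilities>
  <host>
    <uuid>4c4c4544-0042-3510-8054-b7c04f4e344a</uuid>
    <cpu><arch>x86_64</arch></cpu>
  </host>
</capabilities>"""

print(host_uuid_from_capabilities(sample))  # → 4c4c4544-0042-3510-8054-b7c04f4e344a
```

The identifier returned by the SDK's obj.get_id() will, as explained above, generally be a different value.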

-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.


Re: [ovirt-users] VM get stuck randomly

2016-03-14 Thread Pavel Gashev
Hello,

I saw the same issue at least once. There were the following lines in 
/var/log/libvirt/qemu/VMNAME.log at the moment:

main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 539.767000 ms, bitrate 7289423 
bps (6.951735 Mbps) LOW BANDWIDTH
red_dispatcher_set_cursor_peer:
inputs_connect: inputs channel client create
red_channel_client_disconnect: rcc=0x7fd368324000 (channel=0x7fd366428000 
type=1 id=0)
main_channel_client_on_disconnect: rcc=0x7fd368324000
red_client_destroy: destroy client 0x7fd366332200 with #channels=4
red_channel_client_disconnect: rcc=0x7fd3683aa000 (channel=0x7fd36643 
type=3 id=0)
red_dispatcher_disconnect_display_peer:
red_channel_client_disconnect: rcc=0x7fd3681e6000 (channel=0x7fd366fea600 
type=2 id=0)
red_channel_client_disconnect: rcc=0x7fd36758a000 (channel=0x7fd3663eab00 
type=4 id=0)
red_dispatcher_disconnect_cursor_peer:

Host software:

OS Version: RHEL - 7 - 2.1511.el7.centos.2.10
Kernel Version: 3.10.0 - 327.10.1.el7.x86_64
KVM Version: 2.3.0 - 31.el7_2.7.1
LIBVIRT Version: libvirt-1.2.17-13.el7_2.3
VDSM Version: vdsm-4.17.23-0.el7.centos
SPICE Version: 0.12.4 - 15.el7

The VM is quite an old FC9, so there are no oVirt/QEMU guest agents installed inside.

And I have no Gluster there.

On Sun, 2016-03-13 at 07:46 +, Christophe TREFOIS wrote:

Dear all,

For a couple of weeks I have had a problem where, randomly, one VM (not
always the same) becomes completely unresponsive.
We find this out because our Icinga server complains that the host is down.
We find this out because our Icinga server complains that host is down.

Upon inspection, we find we can’t open a console to the VM, nor can we log in.

In oVirt engine, the VM looks “up”. The only weird thing is that RAM usage
shows 0% and CPU usage shows 100% or 75%, depending on the number of cores.
The only way to recover is to force a shutdown of the VM via a 2-times
shutdown from the engine.

Could you please help me start debugging this?
I can provide any logs, but I’m not sure which ones, because I couldn’t see
anything with ERROR in the vdsm logs on the host.

The host is running

OS Version: RHEL - 7 - 1.1503.el7.centos.2.8
Kernel Version: 3.10.0 - 229.14.1.el7.x86_64
KVM Version:2.1.2 - 23.el7_1.8.1
LIBVIRT Version:libvirt-1.2.8-16.el7_1.4
VDSM Version:   vdsm-4.16.26-0.el7.centos
SPICE Version:  0.12.4 - 9.el7_1.3
GlusterFS Version:  glusterfs-3.7.5-1.el7

We use a locally exported gluster as storage domain (eg, storage is on the same 
machine exposed via gluster). No replica.
We run around 50 VMs on that host.

Thank you for your help in this,

—
Christophe




Re: [ovirt-users] ovirt and CAS SSO

2016-03-14 Thread Fabrice Bacchella
I managed to set up a solution that is not 100% perfect, but quite usable
anyway.

I used org.ovirt.engineextensions.aaa.misc.http.AuthnExtension for
authentication, behind mod_auth_cas. [1]
Authorization is done using
org.ovirt.engine.extension.aaa.jdbc.binding.api.AuthzExtension.

I still need to create users manually with ovirt-aaa-jdbc-tool and assign
rights manually, but I don't have a lot of users, so I can live with that.

I can share my configuration with you if you are interested.

I tried to have a look at the source code of the current AAA modules, and it
taught me only one thing: without complete documentation, there is no hope
of writing a new one. Is the ovirt-engine-extensions-api-impl javadoc
available online somewhere?

[1] https://wiki.jasig.org/display/casc/mod_auth_cas.
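For reference, the authn side of this setup is just an extension properties file consumed by the engine. A sketch of what mine looks like — the file path, extension names, and the X-Remote-User header are assumptions that must match your own Apache/mod_auth_cas configuration, so treat it as illustrative rather than canonical:

```properties
# /etc/ovirt-engine/extensions.d/cas-authn.properties (path/name are assumptions)
ovirt.engine.extension.name = cas-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc
ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.http.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = cas
ovirt.engine.aaa.authn.authz.plugin = cas-authz
# Header that Apache/mod_auth_cas must populate with the authenticated user name
config.artifact.name = HEADER
config.artifact.arg = X-Remote-User
```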


> Le 11 mars 2016 à 17:55, Martin Perina  a écrit :
> 
> Hi,
> 
> I'm glad to hear that you were able to successfully configure aaa-misc
> and mod_auth_cas to allow CAS based login for oVirt.
> 
> Unfortunately regarding CAS authorization for oVirt I have somewhat bad
> news for you. But let me explain the issue a bit:
> 
> 1. Using aaa-misc we are able to pass only the user name of the
>    authenticated user from Apache to oVirt.
> 
> 2. After that we have an authenticated user on oVirt, and then we pass
>    the username to the authz extension to fetch the full principal record,
>    including group memberships. At the moment we don't pass anything else
>    to the authz extension, just the principal name (username).
> 
> So here are options how to enable CAS authorization for oVirt:
> 
> 1. Implement new authz extension which will fetch principal record for CAS
>   server (if this is possible, I don't know much about CAS)
> 
> 2. Or implement new authn/authz extensions specific to CAS which will use
>   CAS API do both authn and authz.
> 
> 3. Use LDAP as a backend for you CAS server (if possible) and configure
>   authz part using ovirt-engine-extension-aaa-ldap
> 
> 4. You could also create an RFE bug on oVirt to add CAS support, but
>   no promises from me :-) you are the first user asking about CAS support
> 
> Regarding documentation:
> 
>  - oVirt engine extensions API JavaDoc is contained in package
>ovirt-engine-extensions-api-impl-javadoc
> 
>  - Ondra wrote some great articles about oVirt AAA configurations and
>published them on his blog [1]
> 
>  - You can also take a look at some presentations about oVirt extensions:
> 
>  The New oVirt Extension API: Taking AAA to the next level [2] [3]
>  oVirt Extension API: The first step for fully modular oVirt [4] [5]
> 
>  - And you can also take a look at sources of existing aaa-ldap [6],
>aaa-misc [7] and aaa-jdbc [8] extensions
> 
> And of course feel free to ask!
> 
> Regards
> 
> Martin Perina
> 
> [1] http://machacekondra.blogspot.cz/
> [2] https://www.youtube.com/watch?v=bSbdqmRNLi0
> [3] 
> http://www.slideshare.net/MartinPeina/the-new-ovirt-extension-api-taking-aaa-authentication-authorization-accounting-to-the-next-level
> [4] https://www.youtube.com/watch?v=9b9WVFsy_yg
> [5] 
> http://www.slideshare.net/MartinPeina/ovirt-extension-api-the-first-step-for-fully-modular-ovirt
> [6] https://github.com/oVirt/ovirt-engine-extension-aaa-ldap
> [7] https://github.com/oVirt/ovirt-engine-extension-aaa-misc
> [8] https://github.com/oVirt/ovirt-engine-extension-aaa-jdbc
> 
> - Original Message -
>> From: "Fabrice Bacchella" 
>> To: Users@ovirt.org
>> Sent: Tuesday, March 8, 2016 11:54:13 AM
>> Subject: [ovirt-users] ovirt and CAS SSO
>> 
>> I'm trying to add CAS SSO to ovirt.
>> 
>> For authn (authentication),
>> org.ovirt.engineextensions.aaa.misc.http.AuthnExtension is OK, I put jboss
>> behind an Apache with mod_auth_cas.
>> 
>> Now I'm fighting with authz (authorization). CAS provides everything needed
>> as header. So I don't need ldap or jdbc extensions. Is there anything done
>> about that or do I need to write my own extension ? Is there some
>> documentation about that ?


Re: [ovirt-users] Shutdown Problems on oVirt Node

2016-03-14 Thread Allon Mureinik
Odd. I've never seen such behavior in any of our setups.
Can you please include vdsm's logs, sanlock's logs and /var/log/messages?

Thanks!

On Wed, Mar 2, 2016 at 6:00 PM, Roger Meier 
wrote:

> Hi there,
>
> I have currently a strange problem on a new oVirt 3.6 installation. At
> the moment a clean shutdown doesn't work; most of the time it reboots
> the system or hangs during the shutdown process.
>
> I discovered this, when i tested our multiple UPS solution and send some
> test signals over ipmi to our server with ex. ipmipower -h 192.168.2.218
> -u root -p password --soft . We also discovered that shutdown -h now ,
> poweroff or init 0 had the same effect.
>
> On a clean CentOS installation, which is not included in our oVirt setup
> this works as expected, but on our ovirt-node this doesn't work.
>
> In the shutdown process I see the following, which takes very long:
> > A stop job is running for Shared Storage Lease Manager (23s / 1min 47s)
>
> At the end i had then the following on my console screen:
>
> [ OK ] Reached target Shutdown
>
> Nothing more happens. no poweroff or something. I can wait more than
> three minutes and nothing happens.
>
> I also tried a clean re-install from the oVirt Administration WebUI but
> this doesn't have any effect on this issue.
>
> When i type "service sanlock stop" or "service vdsmd stop" in the server
> console and then do a poweroff, all works as expected. The shutdown is
> then also really fast, as expected.
>
> At the moment we think that the problem is on ovirt, vdsmd or on the
> sanlock settings for ovirt, because all settings on our site are on
> default settings.
>
> Currently Setup are two Intel Server (With RRM) with CentOS-7 (1511) and
> oVirt 3.6.3 and one Intel Server with OpenIndiana which provides Storage
> via NFS.
>
> Does someone have a solution for this? Is this perhaps a bug that should
> be reported?
>
> Greetings
> Roger Meier


Re: [ovirt-users] Shutdown Problems on oVirt Node

2016-03-14 Thread Nir Soffer
On Wed, Mar 2, 2016 at 6:00 PM, Roger Meier  wrote:
> Hi there,
>
> I have currently a strange problem on a new oVirt 3.6 installation. At
> the moment a clean shutdown doesn't work; most of the time it reboots
> the system or hangs during the shutdown process.
>
> I discovered this, when i tested our multiple UPS solution and send some
> test signals over ipmi to our server with ex. ipmipower -h 192.168.2.218
> -u root -p password --soft . We also discovered that shutdown -h now ,
> poweroff or init 0 had the same effect.
>
> On a clean CentOS installation, which is not included in our oVirt setup
> this works as expected, but on our ovirt-node this doesn't work.
>
> In the shutdown process I see the following, which takes very long:
>> A stop job is running for Shared Storage Lease Manager (23s / 1min 47s)

This is sanlock - maybe it would not stop because it has active
lockspaces, delaying shutdown?

Did you put the host to maintenance before shutting it down?

In maintenance mode, vdsm will release all lockspaces, so sanlock
should not delay shutdown in any way.

>
> At the end i had then the following on my console screen:
>
> [ OK ] Reached target Shutdown
>
> Nothing more happens. no poweroff or something. I can wait more than
> three minutes and nothing happens.
>
> I also tried a clean re-install from the oVirt Administration WebUI but
> this doesn't have any effect on this issue.
>
> When i type "service sanlock stop" or "service vdsmd stop" in the server
> console and then do a poweroff, all works as expected. The shutdown is
> then also really fast, as expected.

During poweroff the sanlock service is stopped like any other service, so
if it worked from the shell, it should work during shutdown.

Stopping vdsm is not needed and should not affect the shutdown.

> At the moment we think that the problem is on ovirt, vdsmd or on the
> sanlock settings for ovirt, because all settings on our site are on
> default settings.
>
> Currently Setup are two Intel Server (With RRM) with CentOS-7 (1511) and
> oVirt 3.6.3 and one Intel Server with OpenIndiana which provides Storage
> via NFS.
>
> Does someone have a solution for this? Is this perhaps a bug that should
> be reported?

You can file a bug and attach the logs mentioned by Allon; it will help
to track this issue.

Since this is a problem with ovirt-node, I would open a bug for it. It may
also be an issue with the sanlock init scripts.

Nir


Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Ala Hino
Hi Marcelo,

Is it a cold (the VM is down) or a live (the VM is up) merge (snapshot
deletion)?
What version are you running?
Can you please share engine and vdsm logs?

Please note that at some point we try to verify that the image was removed
by running getVolumeInfo; hence the "volume not found" message is expected.
The thing is, you say that the volume does exist.
Can you run the following command on the host:

vdsClient -s 0 getVolumeInfo

Thank you,
Ala
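Incidentally, the snapshot chain in the `qemu-img info` output quoted below can be followed mechanically. A small illustrative helper (not part of vdsm) that pulls the backing file out of the plain-text output:

```python
def backing_file(qemu_img_info_output):
    """Return the 'backing file' path from `qemu-img info` text output,
    or None if the image has no backing file (i.e. it is a base image)."""
    for line in qemu_img_info_output.splitlines():
        if line.startswith("backing file:"):
            return line.split(":", 1)[1].strip()
    return None

# Sample trimmed from the output quoted later in this thread
sample = """image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
file format: qcow2
virtual size: 112G (120259084288 bytes)
backing file: ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
backing file format: raw"""

print(backing_file(sample))
```

Running this against each volume lets you reconstruct which snapshots still point at the base image.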


On Sat, Mar 12, 2016 at 3:35 PM, Marcelo Leandro 
wrote:

> I see the log error:
> Mar 12, 2016 10:33:40 AM
> VDSM Host04 command failed: Volume does not exist:
> (u'948d0453-1992-4a3c-81db-21248853a88a',)
>
> but the volume exist :
> 948d0453-1992-4a3c-81db-21248853a88a
>
> 2016-03-12 10:10 GMT-03:00 Marcelo Leandro :
> > Good morning
> >
> > I have a doubt: when I do a snapshot, a new LV is generated; however,
> > when I delete this snapshot the LV is not removed. Is that right?
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> > 3fba372c-4c39-4843-be9e-b358b196331d
> b47f58e0-d576-49be-b8aa-f30581a0373a
> > 5097df27-c676-4ee7-af89-ecdaed2c77be
> c598bb22-a386-4908-bfa1-7c44bd764c96
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
> > total 0
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
> > 3fba372c-4c39-4843-be9e-b358b196331d ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
> > 5097df27-c676-4ee7-af89-ecdaed2c77be ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12
> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> > lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30
> > b47f58e0-d576-49be-b8aa-f30581a0373a ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01
> > c598bb22-a386-4908-bfa1-7c44bd764c96 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
> >
> >
> >
> > disks snapshot:
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 3fba372c-4c39-4843-be9e-b358b196331d
> > image: 3fba372c-4c39-4843-be9e-b358b196331d
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> >
> > disk base:
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > b47f58e0-d576-49be-b8aa-f30581a0373a
> > image: b47f58e0-d576-49be-b8aa-f30581a0373a
> > file format: raw
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> >
> >
> > Thanks.


Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro  wrote:
> Good morning
>
> I have a doubt: when I do a snapshot, a new LV is generated; however,
> when I delete this snapshot the LV is not removed. Is that right?

Your question is not clear. Can you explain what the unexpected behavior is?

To check if an LV was created or removed by oVirt, you can run:

pvscan --cache
lvs vg-uuid

Nir
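For scripting that check, here is a trivial helper — purely illustrative, not part of vdsm. Note that active oVirt volumes appear as symlinks under /dev/<vg-uuid>/, and that a missing device node may just mean the LV is deactivated, not that it was removed:

```python
import os

def lv_device_exists(vg, lv, dev_dir="/dev"):
    """Check whether the LV's device node <dev_dir>/<vg>/<lv> is present.

    An absent node can also just mean the LV is not activated, so treat
    this as a hint rather than proof that the LV was removed."""
    return os.path.exists(os.path.join(dev_dir, vg, lv))

# Example with the VG/volume UUIDs from this thread (hypothetical on this machine)
print(lv_device_exists("c2dc0101-748e-4a7b-9913-47993eaa52bd",
                       "27a8bca3-f984-4f67-9dd2-9e2fc5a5f366"))
```

The `dev_dir` parameter only exists so the helper can be exercised outside a hypervisor; on a host it stays at the default /dev.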

>
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
> 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366  7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> 3fba372c-4c39-4843-be9e-b358b196331d  b47f58e0-d576-49be-b8aa-f30581a0373a
> 5097df27-c676-4ee7-af89-ecdaed2c77be  c598bb22-a386-4908-bfa1-7c44bd764c96
> 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
> total 0
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
> 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
> 3fba372c-4c39-4843-be9e-b358b196331d ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
> 5097df27-c676-4ee7-af89-ecdaed2c77be ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
> 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12
> 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30
> b47f58e0-d576-49be-b8aa-f30581a0373a ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01
> c598bb22-a386-4908-bfa1-7c44bd764c96 ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
>
>
>
> disks snapshot:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> file format: qcow2
> virtual size: 112G (120259084288 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> backing file format: raw
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
>
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> 3fba372c-4c39-4843-be9e-b358b196331d
> image: 3fba372c-4c39-4843-be9e-b358b196331d
> file format: qcow2
> virtual size: 112G (120259084288 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> backing file format: raw
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> file format: qcow2
> virtual size: 112G (120259084288 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> backing file format: raw
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
>
> disk base:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> b47f58e0-d576-49be-b8aa-f30581a0373a
> image: b47f58e0-d576-49be-b8aa-f30581a0373a
> file format: raw
> virtual size: 112G (120259084288 bytes)
> disk size: 0
>
>
> Thanks.


Re: [ovirt-users] a KVM Host has different UUIDs ?

2016-03-14 Thread Jean-Pierre Ribeauville
Hi,

To clarify what I'm doing:

I installed a piece of code within the host that collects info from the
guests, from the host itself, and the datacenter and cluster IDs to which
this host belongs, in order to build a tree view from the host's point of
view.

As I wish to use this code whatever infrastructure the KVM host is part of
(RHEV, standalone, OpenStack compute node, ...), I don't require any change
in the UUID scheme, but just try to use the existing ones in the most
convenient way.


Thx for help.


Regards,

J.P.

-Original Message-
From: Juan Hernández [mailto:jhern...@redhat.com] 
Sent: lundi 14 mars 2016 10:01
To: Jean-Pierre Ribeauville; users@ovirt.org
Subject: Re: [ovirt-users] a KVM Host has different UUIDs ?

On 03/11/2016 11:34 AM, Jean-Pierre Ribeauville wrote:
> Hi,
> 
> When retrieving the KVM host UUID via virsh getcapabilities() or via the
> oVirt Python SDK (by using obj.get_id()), the two UUIDs are not
> identical.
> 
> (FYI, guest UUIDs retrieved via both ways, i.e.
> virDomainGetUUIDString() or the Python oVirt SDK, are identical.)
> 
> 
> Did I miss something ?
> 

This is normal, and expected. The host exists before it is added to the oVirt 
system, and already has an identifier. When it is added to the oVirt system it 
gets a new identifier assigned, for tracking it in the oVirt database. Those 
two identifiers are different.

On the other hand, virtual machines are created by the oVirt system. It assigns
them an identifier for tracking in the oVirt database, and it happens to use
the same identifier when asking libvirt to create the virtual machine.

Anyhow, you should not rely on these identifiers being the same, or different, 
as it is an implementation detail that may change in the future.
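To make the distinction concrete: the hardware UUID is the one exposed in the <host><uuid> element of the `virsh capabilities` XML, while the engine assigns its own identifier. A minimal sketch, assuming a made-up capabilities document and an invented engine-side identifier (neither UUID below is real):

```python
import xml.etree.ElementTree as ET

# Illustrative <capabilities> XML as printed by `virsh capabilities`;
# the UUID below is invented for this example.
CAPS_XML = """
<capabilities>
  <host>
    <uuid>4c4c4544-0042-3510-8054-b4c04f564432</uuid>
  </host>
</capabilities>
"""

# Hardware UUID reported by libvirt for the host.
hw_uuid = ET.fromstring(CAPS_XML).findtext("host/uuid")

# Hypothetical identifier assigned by the engine (Host.get_id() in the SDK);
# it is tracked in the engine database and need not match the hardware UUID.
engine_uuid = "9f2a7e10-1c33-4b5e-9d44-0e6f2b8a1c77"

print(hw_uuid == engine_uuid)  # False: the two identifiers are independent
```

Both values are valid UUIDs, but they identify the host in two different systems, which is exactly why code should not assume they match.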

Is there any specific thing you are trying to achieve? If you share it we may 
be able to suggest a different approach.

--
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3ºD, 
28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid - C.I.F. B82657941 
- Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt and CAS SSO

2016-03-14 Thread Alastair Neil
On 11 March 2016 at 11:55, Martin Perina  wrote:

> Hi,
>
> I'm glad to hear that you were able to successfully configure aaa-misc
> and mod_auth_cas to allow CAS based login for oVirt.
>
> Unfortunately regarding CAS authorization for oVirt I have somewhat bad
> news for you. But let me explain the issue a bit:
>
> 1. Using aaa-misc we are able to pass only the user name of the authenticated
>    user from Apache to oVirt.
>
> 2. After that we have an authenticated user on oVirt, and then we pass
>    the username to the authz extension to fetch the full principal record,
>    including group memberships. At the moment we don't pass anything else to
>    the authz extension, just the principal name (username).
>
> So here are options how to enable CAS authorization for oVirt:
>
> 1. Implement new authz extension which will fetch principal record for CAS
>server (if this is possible, I don't know much about CAS)
>
> 2. Or implement new authn/authz extensions specific to CAS which will use
>CAS API do both authn and authz.
>
> 3. Use LDAP as a backend for you CAS server (if possible) and configure
>authz part using ovirt-engine-extension-aaa-ldap
>
> 4. You could also create an RFE bug on oVirt to add CAS support, but
>no promises from me :-) you are the first user asking about CAS support
>


err, no I asked about it about 18 months ago on this very list and got no
response.  So in a way they are the first to ask and actually get a
response.





>
> And of course feel free to ask!
>
> Regards
>
> Martin Perina
>
> [1] http://machacekondra.blogspot.cz/
> [2] https://www.youtube.com/watch?v=bSbdqmRNLi0
> [3]
> http://www.slideshare.net/MartinPeina/the-new-ovirt-extension-api-taking-aaa-authentication-authorization-accounting-to-the-next-level
> [4] https://www.youtube.com/watch?v=9b9WVFsy_yg
> [5]
> http://www.slideshare.net/MartinPeina/ovirt-extension-api-the-first-step-for-fully-modular-ovirt
> [6] https://github.com/oVirt/ovirt-engine-extension-aaa-ldap
> [7] https://github.com/oVirt/ovirt-engine-extension-aaa-misc
> [8] https://github.com/oVirt/ovirt-engine-extension-aaa-jdbc
>
> - Original Message -
> > From: "Fabrice Bacchella" 
> > To: Users@ovirt.org
> > Sent: Tuesday, March 8, 2016 11:54:13 AM
> > Subject: [ovirt-users] ovirt and CAS SSO
> >
> > I'm trying to add CAS SSO to ovirt.
> >
> > For authn (authentication),
> > org.ovirt.engineextensions.aaa.misc.http.AuthnExtension is OK, I put
> jboss
> > behind an Apache with mod_auth_cas.
> >
> > Now I'm fighting with authz (authorization). CAS provides everything
> needed
> > as header. So I don't need ldap or jdbc extensions. Is there anything
> done
> > about that or do I need to write my own extension ? Is there some
> > documentation about that ?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Mon, Mar 14, 2016 at 5:05 PM, Marcelo Leandro  wrote:
>
>
> Is it cold (the VM is down) or live (the VM is up) merge (snapshot
> deletion)?
>
> VM is up
>
> What version are you running?
>
> oVirt Engine Version: 3.6.3.4-1.el7.centos
>
>
> Can you please share engine and vdsm logs?
>
> yes.
>
> Please note that at some point we try to verify that the image was removed by
> running getVolumeInfo; hence, the "volume not found" error is expected. The
> thing is, you say that the volume does exist.
> Can you run following command on the host:
>
> vdsClient -s 0 getVolumeInfo
>
> the command returns:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# vdsClient -s 0
> getVolumeInfo  c2dc0101-748e-4a7b-9913-47993eaa52bd
> 77e24b20-9d21-4952-a089-3c5c592b4e6d 93633835-d709-4ebb-9317-903e62064c43
> 948d0453-1992-4a3c-81db-21248853a88a
> Volume does not exist: ('948d0453-1992-4a3c-81db-21248853a88a',)
>
> after restarting the host the VM was on, the disk links in image_group_id
> were broken but were not removed.
>
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:04 215a902a-1b99-403b-a648-21977dd0fa78
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/215a902a-1b99-403b-a648-21977dd0fa78
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:13 948d0453-1992-4a3c-81db-21248853a88a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a
> lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
>
>
> Your question is not clear. Can you explain what the unexpected behavior is?
>
> shouldn't the link to the LV be deleted after deleting the snapshot?

Are you talking about the /dev/vgname/lvname link, or the links under
/run/vdsm/storage/domain/image/volume,
or /rhev/data-center/pool/domain/image/volume?

/dev/vgname/lvname is created (and removed) by udev rules when the LV is activated (or deactivated).
To understand if this is the issue, can you show the output of:

pvscan --cache
lvs vgname
ls -l /dev/vgname

Both before the merge, and after the merge has completed.

The lv should not exist, and the links should be deleted.

Links under /run/vdsm/storage or /rhev/data-center/ should be created
when starting a VM, and torn down when stopping a VM, hot-unplugging
a disk, or removing a snapshot.

To understand if there is an issue, we need the output of:

tree /run/vdsm/storage/domain/image
tree /rhev/data-center/pool/domain/images/image

Before and after the merge.

The links should be deleted.
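The stale-link situation discussed here can be detected by looking for dangling symlinks. Below is a minimal, self-contained sketch; it uses a temporary directory standing in for a real /rhev/data-center/<pool>/<domain>/images/<image>/ directory (an assumption for illustration only):

```python
import os
import tempfile

# Stand-in for an image directory such as
# /rhev/data-center/<pool>/<domain>/images/<image>/ (illustrative only).
workdir = tempfile.mkdtemp()

target = os.path.join(workdir, "active-lv")
open(target, "w").close()
os.symlink(target, os.path.join(workdir, "good-volume"))    # valid link
os.symlink(os.path.join(workdir, "removed-lv"),
           os.path.join(workdir, "stale-volume"))           # dangling link

# A leftover link points at an LV that was deactivated or removed:
# islink() is still true, but exists() follows the link and fails.
broken = sorted(name for name in os.listdir(workdir)
                if os.path.islink(os.path.join(workdir, name))
                and not os.path.exists(os.path.join(workdir, name)))

print(broken)  # ['stale-volume']
```

The equivalent shell check would be `find <dir> -xtype l`, which lists only broken symlinks.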

Nir

>
>
> Thanks
>
> 2016-03-14 10:14 GMT-03:00 Nir Soffer :
>>
>> On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro 
>> wrote:
>> > Good morning
>> >
>> > I have a question: when I take a snapshot, a new LV is created; however,
>> > when I delete this snapshot the LV is not removed. Is that right?
>>
>> Your question is not clear. Can you explain what the unexpected
>> behavior is?
>>
>> To check if an LV was created or removed by oVirt, you can do:
>>
>> pvscan --cache
>> lvs vg-uuid
>>
>> Nir
>>
>> >
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
>> > 3fba372c-4c39-4843-be9e-b358b196331d
>> > b47f58e0-d576-49be-b8aa-f30581a0373a
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be
>> > c598bb22-a386-4908-bfa1-7c44bd764c96
>> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
>> > total 0
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
>> > 3fba372c-4c39-4843-be9e-b358b196331d ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 0

[ovirt-users] Howto Backup and Restore on new Server

2016-03-14 Thread Taste-Of-IT

Hello,
I want to back up a few VMs, delete the old server, install a new one on 
the same hardware, and restore the VMs. I use oVirt as All-in-One. What 
is the best way? Can I simply copy the full snapshot to an external 
USB disk, or should I use an external backup domain for that? And for the 
restore, do I simply copy the snapshot backup to the VM domain, or use the 
same backup domain to import the VMs?


Thx
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Mon, Mar 14, 2016 at 5:05 PM, Marcelo Leandro  wrote:
>
>
> Is it cold (the VM is down) or live (the VM is up) merge (snapshot
> deletion)?
>
> VM is up
>
> What version are you running?
>
> oVirt Engine Version: 3.6.3.4-1.el7.centos
>
>
> Can you please share engine and vdsm logs?
>
> yes.

Looking in your vdsm log, I see this error (454 times in 6 hours),
which looks like a bug:

periodic/5::ERROR::2016-03-12
09:28:02,847::executor::188::Executor::(_execute_task) Unhandled
exception in 
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 186,
in _execute_task
callable()
  File "/usr/share/vdsm/virt/periodic.py", line 279, in __call__
self._execute()
  File "/usr/share/vdsm/virt/periodic.py", line 324, in _execute
self._vm.updateNumaInfo()
  File "/usr/share/vdsm/virt/vm.py", line 5071, in updateNumaInfo
self._numaInfo = numaUtils.getVmNumaNodeRuntimeInfo(self)
  File "/usr/share/vdsm/numaUtils.py", line 116, in getVmNumaNodeRuntimeInfo
vnode_index = str(vcpu_to_vnode[vcpu_id])
KeyError: 1

Adding Francesco and Martin to look at this.
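The KeyError above boils down to indexing a vcpu-to-NUMA-node mapping with a vcpu id that is not in the mapping. The sketch below is a simplified reconstruction of that failure mode (the names and data are invented; this is not the actual vdsm code):

```python
# vcpu -> virtual NUMA node mapping, as built from the running VM's pinning;
# here vcpu 1 is missing, matching the "KeyError: 1" in the traceback.
vcpu_to_vnode = {0: 0}

vcpu_id = 1
try:
    vnode_index = str(vcpu_to_vnode[vcpu_id])  # plain indexing raises KeyError
except KeyError as err:
    vnode_index = None
    print("unhandled lookup would raise KeyError:", err)

# A defensive variant returns a default instead of raising:
safe_index = vcpu_to_vnode.get(vcpu_id, -1)
print(safe_index)  # -1
```

Whether skipping the vcpu or defaulting is the right fix depends on how the mapping is consumed; the point is that the periodic task should not die on an unmapped vcpu.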

>
> Please note that at some point we try to verify that the image was removed by
> running getVolumeInfo; hence, the "volume not found" error is expected. The
> thing is, you say that the volume does exist.
> Can you run following command on the host:
>
> vdsClient -s 0 getVolumeInfo
>
> the command returns:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# vdsClient -s 0
> getVolumeInfo  c2dc0101-748e-4a7b-9913-47993eaa52bd
> 77e24b20-9d21-4952-a089-3c5c592b4e6d 93633835-d709-4ebb-9317-903e62064c43
> 948d0453-1992-4a3c-81db-21248853a88a
> Volume does not exist: ('948d0453-1992-4a3c-81db-21248853a88a',)
>
> after restarting the host the VM was on, the disk links in image_group_id
> were broken but were not removed.
>
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:04 215a902a-1b99-403b-a648-21977dd0fa78
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/215a902a-1b99-403b-a648-21977dd0fa78
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:13 948d0453-1992-4a3c-81db-21248853a88a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a
> lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
>
>
> Your question is not clear. Can you explain what the unexpected behavior is?
>
> shouldn't the link to the LV be deleted after deleting the snapshot?
>
>
> Thanks
>
> 2016-03-14 10:14 GMT-03:00 Nir Soffer :
>>
>> On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro 
>> wrote:
>> > Good morning
>> >
>> > I have a question: when I take a snapshot, a new LV is created; however,
>> > when I delete this snapshot the LV is not removed. Is that right?
>>
>> Your question is not clear. Can you explain what the unexpected
>> behavior is?
>>
>> To check if an LV was created or removed by oVirt, you can do:
>>
>> pvscan --cache
>> lvs vg-uuid
>>
>> Nir
>>
>> >
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
>> > 3fba372c-4c39-4843-be9e-b358b196331d
>> > b47f58e0-d576-49be-b8aa-f30581a0373a
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be
>> > c598bb22-a386-4908-bfa1-7c44bd764c96
>> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
>> > total 0
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
>> > 3fba372c-4c39-4843-be9e-b358b196331d ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
>> > 5aaf9

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Marcelo Leandro
Are you talking about the /dev/vgname/lvname link or the links
under /run/vdsm/storage/domain/image/volume,
or /rhev/data-center/pool/domain/image/volume?

in /rhev/data-center/pool/domain/image/volume



/dev/vgname/lvname is created by udev rules when the LV is activated or
deactivated. To understand if this is the issue, can you show the output of:

pvscan --cache
return:
[root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# pvscan --cache
[root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]#


lvs vgname
return:
  06d35bed-445f-453b-a1b5-cf1a26e21d57 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  19.00g
  0bad7a90-e6d5-4f80-9e77-276092989ec3 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   1.00g
  12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 502.00g
  191eb95f-2604-406b-ad90-1387cd4df7aa c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  10.00g
  235da77a-8713-4bdf-bb3b-4c6478b0ffe2 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---   1.68t
  289b1789-e65a-4725-95fe-7b1a59208b45 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  15.00g
  2d1cd019-f547-47c9-b360-0247f5283563 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  14.00g
  2e59f7f2-9e30-460e-836a-5e0d3d625059 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  27.50g
  2ff7d36e-2ff9-466a-ad26-c1c67ba34dc6 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  21.00g
  3d01ae03-ee4e-4fc2-aedd-6fc757f84f22 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 202.00g
  4626025f-53ab-487a-9f95-35ae65393f03 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   6.00g
  5dbb5762-6828-4c95-9cd1-d05896758af7 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 100.00g
  5e1461fc-c609-479d-9627-e88936fb15ed c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  11.00g
  64800fa4-85c2-4567-9605-6dc8ed5fec52 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  39.00g
  661293e4-26ef-4c2c-903b-442a2b7fb5c6 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  13.00g
  79e4e84b-370a-4d6d-9683-197dabb591c2 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   5.12g
  7a3a6929-973e-4eec-bef0-1b99101e850d c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  20.00g
  7a79ae4f-4a47-4ce2-8570-95efc7774f7b c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  80.00g
  828d4c13-62c5-4d23-b0cc-e4ec88928c1f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 128.00m
  871874e8-0d89-4f13-962a-3d8175194130 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  54.00g
  a0a9aac2-d387-4148-a8a0-a906cfc1b513 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 240.00g
  aa397814-43d4-42f7-9151-fd6d9f6d0b7f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  22.00g
  b3433da9-e6b5-4ab4-9aed-47a698079a62 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  55.00g
  b47f58e0-d576-49be-b8aa-f30581a0373a c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 124.00g
  b5174aaa-b4ed-48e2-ab60-4bd51edde175 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---   4.00g
  b8027a73-2d37-4df6-a2ac-4782859b749f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 128.00m
  b86ed4a4-c922-4567-98b4-bace49d258f6 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  13.00g
  ba8a3a28-1dd5-4072-bcd1-f8155fade47a c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  21.00g
  bb1bb92b-a8a7-486a-b171-18317e5d8095 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 534.00g
  c7b5ca51-7ec5-467c-95c6-64bda2cb1fa7 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  13.00g
  e88dfa8a-a9dc-4843-8c46-cc57ad700a04 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   4.00g
  f2ca34b7-c2b5-4072-b539-d1ee91282652 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 137.00g
  ids  c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 128.00m
  inboxc2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a- 128.00m
  leases   c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a-   2.00g
  master   c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a-   1.00g
  metadata c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a- 512.00m
  outbox   c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a- 128.00m



ls -l /dev/vgname
return:
[root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# ls -l
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/
total 0
lrwxrwxrwx. 1 root root 8 Mar 14 13:18 0569a2e0-275b-4702-8500-dff732fea13c
-> ../dm-68
lrwxrwxrwx. 1 root root 8 Mar 13 23:00 06d35bed-445f-453b-a1b5-cf1a26e21d57
-> ../dm-39
lrwxrwxrwx. 1 root root 8 Mar 13 23:00 0ab62c79-0dc1-43ef-9043-1f209e988bd9
-> ../dm-66
lrwxrwxrwx. 1 root root 8 Mar 14 15:22 0bad7a90-e6d5-4f80-9e77-276092989ec3
-> ../dm-86
lrwxrwxrwx. 1 root root 8 Mar 14 15:16 1196d06c-d3ea-40ee-841a-a3de379b09f9
-> ../dm-85
lrwxrwxrwx. 1 root root 8 Mar 13 22:33 12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f
-> ../dm-32
lrwxrwxrwx. 1 root root 8 Mar 13 22:33 18b1b7e1-0f76-4e1b-aea1-c4b737dad26d
-> ../dm-64
lrwxrwxrwx. 1 root root 8 Mar  2 01:20 191eb95f-2604-406b-ad90-1387cd4df7aa
-> ../dm-40
lrwxrwxrwx.

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Marcelo Leandro
All the disks in the
/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
directory are deleted snapshots that were not removed. The disk contains no snapshots.
In
/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
there should be just one disk after the merge.

Right?
On 14/03/2016 12:41, "Marcelo Leandro" wrote:

>
>
> Are you talking about the /dev/vgname/lvname link or the links
> under /run/vdsm/storage/domain/image/volume,
> or /rhev/data-center/pool/domain/image/volume?
>
> in /rhev/data-center/pool/domain/image/volume
>
>
>
> /dev/vgname/lvname is created by udev rules when the LV is activated or
> deactivated. To understand if this is the issue, can you show the output of:
>
> pvscan --cache
> return:
> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# pvscan --cache
> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]#
>
>
> lvs vgname
> return:
>   06d35bed-445f-453b-a1b5-cf1a26e21d57
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  19.00g
>   0bad7a90-e6d5-4f80-9e77-276092989ec3
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   1.00g
>   12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 502.00g
>   191eb95f-2604-406b-ad90-1387cd4df7aa
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  10.00g
>   235da77a-8713-4bdf-bb3b-4c6478b0ffe2
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   1.68t
>   289b1789-e65a-4725-95fe-7b1a59208b45
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  15.00g
>   2d1cd019-f547-47c9-b360-0247f5283563
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  14.00g
>   2e59f7f2-9e30-460e-836a-5e0d3d625059
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  27.50g
>   2ff7d36e-2ff9-466a-ad26-c1c67ba34dc6
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>   3d01ae03-ee4e-4fc2-aedd-6fc757f84f22
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 202.00g
>   4626025f-53ab-487a-9f95-35ae65393f03
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   6.00g
>   5dbb5762-6828-4c95-9cd1-d05896758af7
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 100.00g
>   5e1461fc-c609-479d-9627-e88936fb15ed
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  11.00g
>   64800fa4-85c2-4567-9605-6dc8ed5fec52
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  39.00g
>   661293e4-26ef-4c2c-903b-442a2b7fb5c6
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  13.00g
>   79e4e84b-370a-4d6d-9683-197dabb591c2
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   5.12g
>   7a3a6929-973e-4eec-bef0-1b99101e850d
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  20.00g
>   7a79ae4f-4a47-4ce2-8570-95efc7774f7b
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  80.00g
>   828d4c13-62c5-4d23-b0cc-e4ec88928c1f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>   871874e8-0d89-4f13-962a-3d8175194130
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  54.00g
>   a0a9aac2-d387-4148-a8a0-a906cfc1b513
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 240.00g
>   aa397814-43d4-42f7-9151-fd6d9f6d0b7f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  22.00g
>   b3433da9-e6b5-4ab4-9aed-47a698079a62
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  55.00g
>   b47f58e0-d576-49be-b8aa-f30581a0373a
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 124.00g
>   b5174aaa-b4ed-48e2-ab60-4bd51edde175
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   4.00g
>   b8027a73-2d37-4df6-a2ac-4782859b749f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>   b86ed4a4-c922-4567-98b4-bace49d258f6
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>   ba8a3a28-1dd5-4072-bcd1-f8155fade47a
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>   bb1bb92b-a8a7-486a-b171-18317e5d8095
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 534.00g
>   c7b5ca51-7ec5-467c-95c6-64bda2cb1fa7
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>   e88dfa8a-a9dc-4843-8c46-cc57ad700a04
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   4.00g
>   f2ca34b7-c2b5-4072-b539-d1ee91282652
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 137.00g
>   ids
>  c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 128.00m
>   inbox
>  c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a- 128.00m
>   leases
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a-   2.00g
>   master
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a-   1.00g
>   metadata
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a- 512.00m
>   outbox
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a- 128.00m
>
>
>
> ls -l /dev/vgname
> return:
> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# ls -l
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/
> total 0
> lrwxrwxrwx. 1 root root 8 Mar 14 13:18
> 0569a2e0-275b-4702-8500-dff732fea13c -> ../dm-68
> lrwxrwxrwx. 1 root root 8 Mar 13 23:00
> 06d35bed-445f-453b-a1b5-cf1a26e21d57 -> ../dm-39
> lrwxrwxrwx. 1 root root 8 Mar 13 

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Mon, Mar 14, 2016 at 6:11 PM, Marcelo Leandro  wrote:
> All the disks in the
> /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
> directory are deleted snapshots that were not removed. The disk contains no snapshots.
> In
> /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
> there should be just one disk after the merge.
>
> Right?

Yes, this seems to be a bug when doing a merge on a host which is not
the spm.

According to the log you attached (vdsm.log.5):
- we do not deactivate the lv after the merge
- therefore the link /dev/vgname/lvname is not deleted
- we don't delete the link at /rhev/data-center
- we don't delete the links at /run/vdsm/storage

The links under /run/vdsm/storage and /rhev/data-center will
be deleted when hot-unplugging this disk, or when stopping the VM.

Please file an oVirt/vdsm bug for this and include the information
from this thread.

Nir

>
> On 14/03/2016 12:41, "Marcelo Leandro" wrote:
>>
>> Are you talking about /dev/vgname/lvname link or the links under
>> /run/vdsm/storage/domain/image/volume,
>> or /rhev/data-center/pull/domain/image/volume?
>>
>> in  /rhev/data-center/pull/domain/image/volume
>>
>>
>> /dev/vgname/lvname is created by udev rules when lv is activated or
>> deactivated.
>> To understand if this is the issue, can you show the output of:
>>
>> pvscan --cache
>> return:
>> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# pvscan --cache
>> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]#
>>
>>
>> lvs vgname
>> return:
>>   06d35bed-445f-453b-a1b5-cf1a26e21d57
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  19.00g
>>   0bad7a90-e6d5-4f80-9e77-276092989ec3
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   1.00g
>>   12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 502.00g
>>   191eb95f-2604-406b-ad90-1387cd4df7aa
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  10.00g
>>   235da77a-8713-4bdf-bb3b-4c6478b0ffe2
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   1.68t
>>   289b1789-e65a-4725-95fe-7b1a59208b45
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  15.00g
>>   2d1cd019-f547-47c9-b360-0247f5283563
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  14.00g
>>   2e59f7f2-9e30-460e-836a-5e0d3d625059
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  27.50g
>>   2ff7d36e-2ff9-466a-ad26-c1c67ba34dc6
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>>   3d01ae03-ee4e-4fc2-aedd-6fc757f84f22
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 202.00g
>>   4626025f-53ab-487a-9f95-35ae65393f03
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   6.00g
>>   5dbb5762-6828-4c95-9cd1-d05896758af7
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 100.00g
>>   5e1461fc-c609-479d-9627-e88936fb15ed
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  11.00g
>>   64800fa4-85c2-4567-9605-6dc8ed5fec52
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  39.00g
>>   661293e4-26ef-4c2c-903b-442a2b7fb5c6
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  13.00g
>>   79e4e84b-370a-4d6d-9683-197dabb591c2
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   5.12g
>>   7a3a6929-973e-4eec-bef0-1b99101e850d
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  20.00g
>>   7a79ae4f-4a47-4ce2-8570-95efc7774f7b
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  80.00g
>>   828d4c13-62c5-4d23-b0cc-e4ec88928c1f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>>   871874e8-0d89-4f13-962a-3d8175194130
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  54.00g
>>   a0a9aac2-d387-4148-a8a0-a906cfc1b513
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 240.00g
>>   aa397814-43d4-42f7-9151-fd6d9f6d0b7f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  22.00g
>>   b3433da9-e6b5-4ab4-9aed-47a698079a62
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  55.00g
>>   b47f58e0-d576-49be-b8aa-f30581a0373a
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 124.00g
>>   b5174aaa-b4ed-48e2-ab60-4bd51edde175
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   4.00g
>>   b8027a73-2d37-4df6-a2ac-4782859b749f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>>   b86ed4a4-c922-4567-98b4-bace49d258f6
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>>   ba8a3a28-1dd5-4072-bcd1-f8155fade47a
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>>   bb1bb92b-a8a7-486a-b171-18317e5d8095
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 534.00g
>>   c7b5ca51-7ec5-467c-95c6-64bda2cb1fa7
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>>   e88dfa8a-a9dc-4843-8c46-cc57ad700a04
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   4.00g
>>   f2ca34b7-c2b5-4072-b539-d1ee91282652
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 137.00g
>>   ids
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 128.00m
>>   inbox
>> c2dc0101-7

Re: [ovirt-users] Windows 10

2016-03-14 Thread Jean-Marie Perron
Is development of a WDDM driver with full Windows 10 support planned for 
SPICE?

Jean-Marie

From: Sandro Bonazzola [mailto:sbona...@redhat.com]
Sent: Monday, March 14, 2016 08:04
To: Jean-Marie Perron 
Cc: Uwe Laverenz ; users@ovirt.org
Subject: Re: [ovirt-users] Windows 10



On Fri, Mar 11, 2016 at 3:12 PM, Jean-Marie Perron
<jean-marie.per...@viseo.com> wrote:
Hi Uwe,

Thank you for your reply.

I tested versions 0.6, 0.7, 0.11 and 0.12 of qxlwddm. It's always the same.

So, after some investigation, Windows 10 requires the WDDM driver with 3D 
support, and it is not supported by SPICE.
So oVirt Guest Tools supports Windows 10 but SPICE doesn't fully support it.




Jean-Marie

-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Uwe Laverenz
Sent: Friday, March 11, 2016 10:49
To: users@ovirt.org
Subject: Re: [ovirt-users] Windows 10

Hi,

On 10.03.2016 at 17:18, Jean-Marie Perron wrote:
> Hello,
>
> OVirt 3.6.3 is installed on CentOS 7.
>
> I use 64-bit Windows 10 client with spice display.
>
> After installing the spice-guest-tools and oVirt-tools-setup on the VM
> Windows 10, the display always lag and slow.
>
> The display on a Windows 7 VM is fluid.
>
> In Device Manager, under Display adapters, I see the graphics card
> "Red Hat QXL Controller".
>
> Is Windows 10 fully supported by oVirt?

I haven't tested this but please have a look at the qxlwddm driver here:

https://people.redhat.com/vrozenfe/qxlwddm/


Some people reported that this works for Win 8/8.1/10:

https://bugzilla.redhat.com/show_bug.cgi?id=895356

cu,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine disk migration

2016-03-14 Thread Christophe TREFOIS
This procedure is what makes me so scared.

Restoring a backup usually ends up in cataclysmic nightmares. Maybe not so in 
oVirt :)

Is there a recommended way to test restoring an ovirt-engine backup to see if 
it would “fail” or “work” in production?


On 14 Mar 2016, at 07:57, Sandro Bonazzola 
<sbona...@redhat.com> wrote:



On Fri, Mar 4, 2016 at 6:27 PM, Pat Riehecky 
<riehe...@fnal.gov> wrote:
I'm on oVirt 3.6

I'd like to migrate my hosted engine storage to another location and have a few 
questions:


There is no special procedure to migrate the Hosted Engine storage to a new one.
A possible way to do that is to backup the oVirt Engine data and restore them 
on a newly deployed Hosted Engine using the new storage location as if it was a 
migration from bare metal to hosted engine.




(a) what is the right syntax for glusterfs in 
/etc/ovirt-hosted-engine/hosted-engine.conf? (I'm currently on nfs3)

(b) what is the right syntax for fibre channel?

(c) where are instructions for how to migrate the actual disk files? (google 
was little help)

(d) Can the hosted engine use the same (gluster/fibre) volume as my VM Images?

(e) I get various "Cannot edit Virtual Machine. This VM is not managed by the 
engine." in the console for manipulating the HostedEngine.  Is that expected?

Pat

--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] Windows 10

2016-03-14 Thread Yaniv Dary
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Mon, Mar 14, 2016 at 6:34 PM, Jean-Marie Perron <
jean-marie.per...@viseo.com> wrote:

> Is development of a WDDM driver planned so that Windows 10 is fully
> supported with SPICE?
>

I would ask this in the Spice project lists.


>
>
> Jean-Marie
>
>
>
> *From:* Sandro Bonazzola [mailto:sbona...@redhat.com]
> *Sent:* Monday, 14 March 2016 08:04
> *To:* Jean-Marie Perron 
> *Cc:* Uwe Laverenz ; users@ovirt.org
>
> *Subject:* Re: [ovirt-users] Windows 10
>
>
>
>
>
>
>
> On Fri, Mar 11, 2016 at 3:12 PM, Jean-Marie Perron <
> jean-marie.per...@viseo.com> wrote:
>
> Hi Uwe,
>
> Thank you for your reply.
>
> I tested versions 0.6, 0.7, 0.11 and 0.12 of qxlwddm. It's always the same.
>
>
>
> So, after some investigation, Windows 10 requires the WDDM driver with 3D
> support, and it is not supported by SPICE.
> So oVirt Guest Tools supports Windows 10 but SPICE doesn't fully support
> it.
>
>
>
>
>
>
>
>
> Jean-Marie
>
> -Original Message-
> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf
> of Uwe Laverenz
> Sent: Friday, 11 March 2016 10:49
> To: users@ovirt.org
> Subject: Re: [ovirt-users] Windows 10
>
>
> Hi,
>
> Am 10.03.2016 um 17:18 schrieb Jean-Marie Perron:
> > Hello,
> >
> > OVirt 3.6.3 is installed on CentOS 7.
> >
> > I use 64-bit Windows 10 client with spice display.
> >
> > After installing the spice-guest-tools and oVirt-tools-setup on the VM
> > Windows 10, the display always lags and is slow.
> >
> > The display on a Windows 7 VM is fluid.
> >
> > On Device Manager and Display adapters, I see the graphics card
> > "Red Hat QLX Controller"
> >
> > Is Windows 10 fully supported by oVirt?
>
> I haven't tested this but please have a look at the qxlwddm driver here:
>
> https://people.redhat.com/vrozenfe/qxlwddm/
>
>
> Some people reported that this works for Win 8/8.1/10:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=895356
>
> cu,
> Uwe
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
> --
>
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to get logged in into restapi in java using session ID

2016-03-14 Thread Juan Hernández
On 02/18/2016 03:22 PM, shailendra saxena wrote:
> Has somebody tried to get an API object via the java-sdk constructor that
> takes a URL and a REST API session ID? The syntax would be:
> Api api = new Api(RestURL, restapisessionID)
> 
> I have searched a lot but didn't find anything. Somewhere I read that
> oVirt has some issue with keeping the REST API session. Is that correct?
> 

This should work, but the value that you have to pass is the session
cookie name plus its value:

  JSESSIONID=asfasdfasdfas

Also consider using the "ApiBuilder" object instead of the "Api"
constructor directly, and secure the connection:

  // Get the session id from somewhere:
  String sessionId = ...;

  // Create the builder:
  ApiBuilder builder = new ApiBuilder()
.url(URL)
.sessionId("JSESSIONID=" + sessionId)
.keyStorePath("ca.jks")
.keyStorePassword("mykeystorepassword")
.debug(DEBUG);

  // The "ca.jks" file above needs to be created from
  // the CA certificate of the engine, which is usually
  // located in the "/etc/pki/ovirt-engine/ca.pem" file.
  // Get that file, and then use the "keytool" command
  // to import it to the "ca.jks" keystore file:
  //
  // keytool \
  // -importcert \
  // -keystore ca.jks \
  // -file ca.pem \
  // -alias ca \
  // -storepass mykeystorepassword \
  // -noprompt
  //
  // The resulting "ca.jks" file only contains the CA
  // certificate, so its content isn't confidential.

  // Create the API object:
  Api api = builder.build();

-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to get logged in into restapi in java using session ID

2016-03-14 Thread shailendra saxena
Hello Juan Hernández,
Thank you for the help. Now I am able to authenticate using the session ID.


On 14 March 2016 at 23:00, Juan Hernández  wrote:

> On 02/18/2016 03:22 PM, shailendra saxena wrote:
> > Has somebody tried to get an API object via the java-sdk constructor that
> > takes a URL and a REST API session ID? The syntax would be:
> > Api api = new Api(RestURL, restapisessionID)
> >
> > I have searched a lot but didn't find anything. Somewhere I read that
> > oVirt has some issue with keeping the REST API session. Is that correct?
> >
>
> This should work, but the value that you have to pass is the session
> cookie name plus its value:
>
>   JSESSIONID=asfasdfasdfas
>
> Also consider using the "ApiBuilder" object, instead of directly the
> "Api" constructor, and secure the connection:
>
>   // Get the session id from somewhere:
>   String sessionId = ...;
>
>   // Create the builder:
>   ApiBuilder builder = new ApiBuilder()
> .url(URL)
> .sessionId("JSESSIONID=" + sessionId)
> .keyStorePath("ca.jks")
> .keyStorePassword("mykeystorepassword")
> .debug(DEBUG);
>
>   // The "ca.jks" file above needs to be created from
>   // the CA certificate of the engine, which is usually
>   // located in the "/etc/pki/ovirt-engine/ca.pem" file.
>   // Get that file, and then use the "keytool" command
>   // to import it to the "ca.jks" keystore file:
>   //
>   // keytool \
>   // -importcert \
>   // -keystore ca.jks \
>   // -file ca.pem \
>   // -alias ca \
>   // -storepass mykeystorepassword \
>   // -noprompt
>   //
>   // The resulting "ca.jks" file only contains the CA
>   // certificate, so its content isn't confidential.
>
>   // Create the API object:
>   Api api = builder.build();
>
> --
> Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
> 3ºD, 28016 Madrid, Spain
> Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
>



-- 
Thanx & regards,

Shailendra Kr. Saxena
IIIT Allahabad
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] multiple NICs VLAN ID conflict

2016-03-14 Thread Bill James
We have DEV and QA in the same "data center", but on the network side of 
things they are on different switches, so some VLAN IDs were reused.

No problem, my server has 4 NICS.
But how do I tell oVirt it's ok to have 2 networks with the same VLAN ID, 
since I'm going to put them on different NICs?


It says "specified VLAN ID is already in use".

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How do I start an VM in a different 3.6 cluster

2016-03-14 Thread Bond, Darryl
I have 2 clusters in one Data Centre, 1x Nehalem and 1x SandyBridge. I can live 
migrate from Nehalem to SandyBridge.


I cannot change the cluster of a stopped VM or choose which cluster to 
run it in.

The Cluster pulldown menu for the VM implies that it should be possible but 
there is only one choice, the cluster the VM last ran in.


I can find nothing in the documentation etc.


Thanks in advance.


Darryl




The contents of this electronic message and any attachments are intended only 
for the addressee and may contain legally privileged, personal, sensitive or 
confidential information. If you are not the intended addressee, and have 
received this email, any transmission, distribution, downloading, printing or 
photocopying of the contents of this message or attachments is strictly 
prohibited. Any legal privilege or confidentiality attached to this message and 
attachments is not waived, lost or destroyed by reason of delivery to any 
person other than intended addressee. If you have received this message and are 
not the intended addressee you should notify the sender by return email and 
destroy all copies of the message and any attachments. Unless expressly 
attributed, the views expressed in this email do not necessarily represent the 
views of the company.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Admin Guide corrupted

2016-03-14 Thread Bond, Darryl
The Administration Guide is corrupted from "Installing Guest Agents and Drivers" 
onwards; it looks like the markup was screwed up and the rest of the guide is unreadable.


Darryl






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine disk migration

2016-03-14 Thread Yedidyah Bar David
On Mon, Mar 14, 2016 at 7:09 PM, Christophe TREFOIS
 wrote:
> This procedure is what makes me so scared.
>
> Restoring a backup usually ends in cataclysmic nightmares. Maybe not so
> in oVirt :)
>
> Is there a recommended way to test restoring an ovirt-engine backup to see
> if it would “fail” or “work” in production?

Just first do a restore on an isolated VM and see what happens.

If you did not have local/manual customizations, I'd expect the restore to
work. Things that will not work you'll generally find out only after the
restore, depending on your env, the state of the engine, etc. See e.g. also:

https://bugzilla.redhat.com/show_bug.cgi?id=1241811
https://bugzilla.redhat.com/show_bug.cgi?id=1240466
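
For anyone wanting to try this, a minimal dry-run sketch of such a test cycle,
assuming oVirt's engine-backup tool and its --mode, --scope, --file, --log,
--provision-db and --restore-permissions options (verify against
"engine-backup --help" on your version before running anything). The script
only prints the commands for review; it does not execute them:

```shell
# Dry-run sketch (assumption: flag names match your engine-backup version).
# Build the command lines first, then print them for review.
BACKUP_FILE=/tmp/engine-backup.tar.gz

# On the production engine host: take a full backup.
BACKUP_CMD="engine-backup --mode=backup --scope=all \
--file=$BACKUP_FILE --log=/tmp/engine-backup.log"

# On an ISOLATED test VM only, never production: restore into a
# freshly provisioned database, keeping database permissions.
RESTORE_CMD="engine-backup --mode=restore --provision-db \
--file=$BACKUP_FILE --log=/tmp/engine-restore.log --restore-permissions"

echo "$BACKUP_CMD"
echo "$RESTORE_CMD"
```

If the restore completes on the test VM and engine-setup then runs cleanly
there, that gives reasonable (though not absolute) confidence the same backup
would restore in production.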

>
>
> On 14 Mar 2016, at 07:57, Sandro Bonazzola  wrote:
>
>
>
> On Fri, Mar 4, 2016 at 6:27 PM, Pat Riehecky  wrote:
>>
>> I'm on oVirt 3.6
>>
>> I'd like to migrate my hosted engine storage to another location and have
>> a few questions:
>
>
>
> There is no special procedure to migrate the Hosted Engine storage to a new
> one.
> A possible way to do that is to backup the oVirt Engine data and restore
> them on a newly deployed Hosted Engine using the new storage location as if
> it was a migration from bare metal to hosted engine.
>
>
>
>>
>>
>> (a) what is the right syntax for glusterfs in
>> /etc/ovirt-hosted-engine/hosted-engine.conf? (I'm currently on nfs3)
>>
>> (b) what is the right syntax for fibre channel?
>>
>> (c) where are instructions for how to migrate the actual disk files?
>> (google was little help)
>>
>> (d) Can the hosted engine use the same (gluster/fibre) volume as my VM
>> Images?
>>
>> (e) I get various "Cannot edit Virtual Machine. This VM is not managed by
>> the engine." in the console for manipulating the HostedEngine.  Is that
>> expected?
>>
>> Pat
>>
>> --
>> Pat Riehecky
>> Scientific Linux developer
>>
>> Fermi National Accelerator Laboratory
>> www.fnal.gov
>> www.scientificlinux.org
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How do I start an VM in a different 3.6 cluster

2016-03-14 Thread Raz Tamir
Hi,
I hope I understand your question correctly.
A VM runs on a host, and a host exists in a cluster, which means that if you
want to make sure your VM is "running" in a specific cluster, you need to run
it on a host that exists in that cluster.
Does that answer your question?

Thanks,
Raz Tamir
Red Hat Israel
On Mar 15, 2016 05:47, "Bond, Darryl"  wrote:

> I have 2 clusters in one Data Centre, 1x Nehalem and 1x SandyBridge. I can
> live migrate from Nehalem to SandyBridge.
>
>
> I cannot change the cluster of a stopped VM or choose which
> cluster to run it in.
>
> The Cluster pulldown menu for the VM implies that it should be possible
> but there is only one choice, the cluster the VM last ran in.
>
>
> I can find nothing in the documentation etc.
>
>
> Thanks in advance.
>
>
> Darryl
>
>
> 
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users