[ovirt-users] Re: upload ISO from webui failed, "Paused by System"

2018-09-12 Thread Kevin Goldblatt
Hi,


1. Please check whether you have imported the CA certificate into your
browser (it can be downloaded by accessing your engine -> Downloads -> CA
Certificate; see the fetch sketch after this list).
2. Please ensure you are not trying to upload the image over a WiFi
connection (if you are, the operation will pause and you will need to
resume it multiple times).
3. Try to resume the operation by highlighting the paused transfer and
selecting Resume from the menu.
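
If the browser import is in doubt, you can also fetch the CA straight from
the engine and inspect it. A minimal sketch, assuming a reachable engine
FQDN (replace engine.example.com with yours):

  curl -k -o ca.pem 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
  openssl x509 -in ca.pem -noout -subject -dates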

If none of these help, please file a bug in Bugzilla and attach the
engine log (/var/log/ovirt-engine/engine.log on your engine) and the
vdsm log (/var/log/vdsm/vdsm.log on your vdsm host).
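
Before filing, it is also worth confirming that the image I/O services are
running, since a stopped proxy or daemon is a common cause of paused
transfers. A quick check, assuming the service names used by 4.2:

  # on the engine
  systemctl status ovirt-imageio-proxy
  # on the host
  systemctl status ovirt-imageio-daemon
  journalctl -u ovirt-imageio-daemon --since today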


Regards,


Kevin


On Tue, Sep 11, 2018 at 9:39 AM,  wrote:

> Hi,
> I can't upload an iso image from the webui; I always get "Paused by System"
> system info:
> engine:4.2.6.4-1.el7
> vdsm:4.20.35-1.el7
> imageio-proxy:1.4.4-0.el7
> imageio-daemon:1.4.4-0.el7
>
> operation:
> 1. download and install the CA in the browser
> 2. select an iso file to upload
> 3. click Test; an alert prompts to install the CA
> 4. install the CA again (optional)
> 5. upload
> 6. wait, and get "Paused by System"
>
> log:
> engine:
> 2018-09-11 11:13:22,273+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] START, HSMClearTaskVDSCommand(HostName = 21, HSMTaskGuidBaseVDSCommandParameters:{hostId='68f27646-da12-480b-9887-42ada2911132', taskId='90541bd1-65e7-4185-a051-3d8d9c1e3a5f'}), log id: 3ff168d7
> 2018-09-11 11:13:22,278+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, HSMClearTaskVDSCommand, log id: 3ff168d7
> 2018-09-11 11:13:22,278+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, SPMClearTaskVDSCommand, log id: 43d30861
> 2018-09-11 11:13:22,280+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] BaseAsyncTask::removeTaskFromDB: Removed task '90541bd1-65e7-4185-a051-3d8d9c1e3a5f' from DataBase
> 2018-09-11 11:13:22,280+08 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '4ff92f68-8353-40ac-a7c5-f0efbd054841'
> 2018-09-11 11:13:26,098+08 INFO  [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-8) [352651c9-ddde-4e1e-b95c-05ad967ed0b1] Running command: TransferImageStatusCommand internal: false. Entities affected :  ID: aaa0----123456789aaa Type: SystemAction group CREATE_DISK with role type USER
> 2018-09-11 11:13:28,836+08 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Command 'AddDisk' id: '4b72410d-e5a0-4c43-b7db-a5324a32d012' child commands '[4ff92f68-8353-40ac-a7c5-f0efbd054841]' executions were completed, status 'SUCCEEDED'
> 2018-09-11 11:13:28,836+08 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Command 'AddDisk' id: '4b72410d-e5a0-4c43-b7db-a5324a32d012' Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
> 2018-09-11 11:13:28,862+08 INFO  [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Successfully added Upload disk 'oVirt-toolsSetup-4.2-1.el7.centos.iso' (disk id: '97179509-65eb-4b45-ad8e-ce112cfd016a', image id: '6ca217cc-015e-4d00-872a-faf60a8954ac') for image transfer command '4cdff02c-7fe4-4000-9c92-8ef613597d13'
> 2018-09-11 11:13:28,892+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] START, PrepareImageVDSCommand(HostName = 21, PrepareImageVDSCommandParameters:{hostId='68f27646-da12-480b-9887-42ada2911132'}), log id: 19407261
> 2018-09-11 11:13:28,901+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 19407261
> 2018-09-11 11:13:28,901+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeLegalityVDSCommand] (EE-

Re: [ovirt-users] Regarding Host QoS

2017-06-22 Thread Kevin Goldblatt
Hello Rohit,

Just to clarify,

You edited the logical network via the RHEV-M GUI -> Networks tab ->
selected the relevant network -> Edit -> Host Network QoS -> entered a
rate limit of 5000, pressed OK, and then the error was generated?

Also, please provide the following:

What version of RHEV-M are you running?

Have you verified that your physical infrastructure (ports/switches)
supports 10GbE and is actually negotiating 10G rather than defaulting to 1G?
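
If the 1024 cap turns out to come from the engine-side validation rather
than the switch, raising the QoS ceiling may help. A sketch, assuming your
version exposes these engine-config keys (values are in Mbps; verify the
key names with engine-config -a on your engine first):

  engine-config -g MaxAverageNetworkQoSValue    # default is 1024
  engine-config -s MaxAverageNetworkQoSValue=5120
  systemctl restart ovirt-engine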


Regards,


Kevin

On Thu, Jun 22, 2017 at 8:13 AM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> I have an ethernet NIC with a 10 Gbps link speed.
> I want to set up one network over it with a rate limit of 5 Gbps.
>
> When I tried to enter a 5000 Mbps rate in host QoS, it threw an error saying
> the value cannot be more than 1024.
> Can someone help me figure out how to set the limit to 5000 Mbps (5 Gbps)?
>
> Thanks,
> ~Rohit
>


Re: [ovirt-users] VM disk update failure

2017-05-07 Thread Kevin Goldblatt
Hi Stefano,

From the log I see that the image cannot be found. This indicates an
issue on the host connected to the storage provider:

OneImageInfoReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=254, mMessage=Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce',)]]

Please provide the following:

1. The exact steps you performed when updating the disk.

2. Please check that the storage domain on which the disk resides is up
and running.

3. If it is not, check whether the storage is visible from the host (see
the sketch after this list).

4. If all of the above is working, please submit a bug through Bugzilla
and provide the engine, server and vdsm logs.
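
A minimal way to check item 3 from the host itself, assuming the storage
domain and image UUIDs from the error above (on a block domain the image
volumes are LVs and the VG name is the storage domain UUID, so also check
that the VG is visible):

  vgs 6a386652-629d-4045-835b-21d2f5c104aa
  lvs 6a386652-629d-4045-835b-21d2f5c104aa
  ls -l /rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce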


Regards,


Kevin

On Tue, May 2, 2017 at 11:10 PM, Stefano Bovina  wrote:

> Hi, while trying to update a VM disk, a failure was returned (forcing me
> to add a new disk)
>
> Any advice on how to resolve this error?
>
> Thanks
>
>
> Installation info:
>
> ovirt-release35-006-1.noarch
> libgovirt-0.3.3-1.el7_2.1.x86_64
> vdsm-4.16.30-0.el7.centos.x86_64
> vdsm-xmlrpc-4.16.30-0.el7.centos.noarch
> vdsm-yajsonrpc-4.16.30-0.el7.centos.noarch
> vdsm-jsonrpc-4.16.30-0.el7.centos.noarch
> vdsm-python-zombiereaper-4.16.30-0.el7.centos.noarch
> vdsm-python-4.16.30-0.el7.centos.noarch
> vdsm-cli-4.16.30-0.el7.centos.noarch
> qemu-kvm-ev-2.3.0-29.1.el7.x86_64
> qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64
> qemu-kvm-tools-ev-2.3.0-29.1.el7.x86_64
> libvirt-client-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-storage-1.2.17-13.el7_2.3.x86_64
> libvirt-python-1.2.17-2.el7.x86_64
> libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.3.x86_64
> libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
> libvirt-glib-0.1.9-1.el7.x86_64
> libvirt-daemon-driver-network-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-lxc-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-interface-1.2.17-13.el7_2.3.x86_64
> libvirt-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-config-network-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-secret-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-kvm-1.2.17-13.el7_2.3.x86_64
> libvirt-daemon-driver-qemu-1.2.17-13.el7_2.3.x86_64
>
>
>  engine.log
>
> 2017-05-02 09:48:26,505 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp--127.0.0.1-8702-6) [c3d7125] Lock Acquired to object EngineLock [exclusiveLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM_DISK_BOOT
> key: c5fb9190-d059-4d9b-af23-07618ff660ce value: DISK
> , sharedLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM
> ]
> 2017-05-02 09:48:26,515 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp--127.0.0.1-8702-6) [c3d7125] Running command: UpdateVmDiskCommand internal: false. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
> 2017-05-02 09:48:26,562 INFO  [org.ovirt.engine.core.bll.ExtendImageSizeCommand] (ajp--127.0.0.1-8702-6) [ae718d8] Running command: ExtendImageSizeCommand internal: true. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
> 2017-05-02 09:48:26,565 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand] (ajp--127.0.0.1-8702-6) [ae718d8] START, ExtendImageSizeVDSCommand( storagePoolId = 715d1ba2-eabe-48db-9aea-c28c30359808, ignoreFailoverLimit = false), log id: 52aac743
> 2017-05-02 09:48:26,604 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand] (ajp--127.0.0.1-8702-6) [ae718d8] FINISH, ExtendImageSizeVDSCommand, log id: 52aac743
> 2017-05-02 09:48:26,650 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (ajp--127.0.0.1-8702-6) [ae718d8] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command cb7958d9-6eae-44a9-891a-7fe088a79df8
> 2017-05-02 09:48:26,651 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-6) [ae718d8] CommandMultiAsyncTasks::AttachTask: Attaching task 769a4b18-182b-4048-bb34-a276a55ccbff to command cb7958d9-6eae-44a9-891a-7fe088a79df8.
> 2017-05-02 09:48:26,661 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (ajp--127.0.0.1-8702-6) [ae718d8] Adding task 769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
> 2017-05-02 09:48:26,673 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-6) [ae718d8] Correlation ID: c3d7125, Call Stack: null, Custom Event ID: -1, Message: VM sysinfo-73 sysinfo-73_Disk3 disk was updated by admin@internal.
> 2017-05-02 09:48:26,674

Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-03-29 Thread Kevin Goldblatt
Hi Eduardo,

You say that the same iSCSI storage works on another system? Are you using
different hosts there?

If so, you need to check that the problematic system's host IQN is correctly
mapped in the storage provider.

Please check your current hosts and ensure that the IQN in
/etc/iscsi/initiatorname.iscsi is the same as the one you defined in the
storage provider, e.g.:

cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.mycorp:myhostname-vdsa

The IQN in the storage provider should then also be:
iqn.1994-05.com.mycorp:myhostname-vdsa

After that, reboot the host and rescan with the iscsiadm tool if needed.
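
A rescan sketch, assuming the portal and target IQN from an environment
like the one quoted below (adjust both to your own):

  iscsiadm -m discovery -t sendtargets -p 10.53.1.201:3260
  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p 10.53.1.201:3260 -l
  iscsiadm -m session --rescan
  lsblk    # a newly exposed LUN should show up as a fresh block device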

Regards,


Kevin



On Wed, Mar 29, 2017 at 1:12 PM, Liron Aravot  wrote:

>
>
> On Wed, Mar 29, 2017 at 12:59 PM, Eduardo Mayoral 
> wrote:
>
>> I had a similar problem; in my case it was related to multipath, which was
>> not masking the LUNs correctly: it saw each LUN multiple times (once per
>> path), and I could not select the LUNs in the oVirt interface.
>>
>> Once I configured multipath correctly, everything worked like a charm.
>>
>> Best regards,
>>
>> --
>>
>> Eduardo Mayoral.
>>
>> On 29/03/17 11:30, Lukáš Kaplan wrote:
>>
>> Hello all,
>>
>> I did all the steps I described in my previous email, but no change. I can't
>> see any LUN after discovery and login on the new iSCSI storage.
>> (The storage is ok; if I connect it to another, older ovirt
>> domain, it works...)
>>
>> I have tried it on 3 new iSCSI targets already; all have the same problem...
>>
>> Can somebody help me, please?
>>
>> --
>> Lukas Kaplan
>>
>>
> Hi Lukas,
> If you try to perform the discovery yourself, do you see the luns?
>
>>
>>
>> 2017-03-27 16:22 GMT+02:00 Lukáš Kaplan :
>>
>>> I did the following steps:
>>>
>>>  - delete the target on all initiators (ovirt nodes):
>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p 10.53.1.201:3260 -u
>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p 10.53.1.201:3260 -o delete
>>>
>>>  - stop tgtd on the target
>>>  - fill the storage with zeroes (dd if=/dev/zero of=/dev/md125 bs=4096 status=progress)
>>>  - start tgtd
>>>  - tried to connect to ovirt (Discovery=ok, Login=ok, but cannot see any LUN).
>>>
>>> === After that I ran these commands on one node: ===
>>>
>>> [root@fudi-cn1 ~]# iscsiadm -m session -o show
>>> tcp: [1] 10.53.0.10:3260,1 iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>> (non-flash)
>>> tcp: [11] 10.53.0.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>>> (non-flash)
>>> tcp: [12] 10.53.1.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>>> (non-flash)
>>>
>>> [root@fudi-cn1 ~]# iscsiadm -m discoverydb -P1
>>> SENDTARGETS:
>>> DiscoveryAddress: 10.53.0.201,3260
>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>> Portal: 10.53.0.201:3260,1
>>> Iface Name: default
>>> iSNS:
>>> No targets found.
>>> STATIC:
>>> Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>>> Portal: 10.53.1.201:3260,1
>>> Iface Name: default
>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>> Portal: 10.53.0.10:3260,1
>>> Iface Name: default
>>> Target: iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>>> Portal: 10.53.0.201:3260,1
>>> Iface Name: default
>>> FIRMWARE:
>>> No targets found.
>>>
>>> === On iscsi target: ===
>>> [root@fuvs-sn1 ~]# cat /proc/mdstat
>>> Personalities : [raid1] [raid6] [raid5] [raid4]
>>> md125 : active raid6 sdl1[11] sdk1[10] sdj1[9] sdi1[8] sdh1[7] sdg1[6]
>>> sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
>>>   9766302720 blocks super 1.2 level 6, 512k chunk, algorithm 2
>>> [12/12] []
>>>   bitmap: 0/8 pages [0KB], 65536KB chunk
>>> ...etc...
>>>
>>>
>>> [root@fuvs-sn1 ~]# cat /etc/tgt/targets.conf
>>> default-driver iscsi
>>>
>>> <target iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T>
>>> # provided devicce as a iSCSI target
>>> backing-store /dev/md125
>>> # iSCSI Initiator's IP address you allow to connect
>>> #initiator-address 10.53.0.0/23
>>> </target>
>>>
>>> --
>>> Lukas Kaplan
>>>
>>> 2017-03-25 12:36 GMT+01:00 Lukas Kaplan :
>>>
 What could he mean by that mapping?

 Otherwise I can try overwriting the whole storage with zeroes using dd.

 What do you think?

 Sent from my iPhone

 Begin forwarded message:

 *From:* Yaniv Kaul
 *Date:* 24 March 2017 23:25:21 CET
 *To:* Lukáš Kaplan
 *Cc:* users
 *Subject:* *Re: [ovirt-users] iSCSI Discovery cannot detect LUN*



 On Fri, Mar 24, 2017 at 1:34 PM, Lukáš Kaplan 
 wrote:

> Hello all,
>
> do you have any experience with troubleshooting the addition of an
> iSCSI domain to ovirt 4.1.1?
>
> I am facing this issue now:
>
> 1) I have successfully installed an oVirt 4.1.1 environment with a
> self-hosted engine, 3 nodes and 3 storage domains (iSCSI master domain, iSCSI for
> the hosted engine and an NFS ISO domain). Everything is working now.
>
> 2) But when I want to add a new iSCSI dom

Re: [ovirt-users] Event History for a VM

2017-03-21 Thread Kevin Goldblatt
Hi Sven,

On your engine you can run the following to get the VM info from the
engine database:

su - postgres -c "psql -U postgres engine -c 'select * from vms;'" | less -S

You may also find some info on the specific VM in the engine log and the
libvirt log:

On the engine: /var/log/ovirt-engine/engine.log (this will probably have
been rotated in your case; check for the oldest engine.log in the
directory).

On the host the VM runs on: /var/log/libvirt/qemu/vm111.log
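
The events shown in the UI live in the engine DB as well (in the engine
schema this is the audit_log table), so you can query them directly; a
sketch, assuming that table and the vm111 name from above (note that old
events are aged out after a retention period, controlled by the
AuditLogAgingThreshold engine-config key, so very old entries may be gone):

  su - postgres -c "psql -U postgres engine -c \"select log_time, severity, message from audit_log where vm_name = 'vm111' order by log_time;\"" | less -S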


Hope this helps,


Kevin

On Tue, Mar 21, 2017 at 9:45 AM, Gianluca Cecchi 
wrote:

> On Tue, Mar 21, 2017 at 8:42 AM, Sven Achtelik 
> wrote:
>
>> Hi,
>>
>>
>>
>> does anyone know if this information is pulled from the logs (and thus
>> tied to log rotation) or if it is part of the engine DB? I need to
>> know if it's possible to read this information 2 or 3 years later for
>> auditing purposes. It would help if you could let me know where to look.
>>
>>
>>
>> Thank you,
>>
>>
>>
>> Sven
>>
>> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
>> Behalf Of *Sven Achtelik
>> *Sent:* Thursday, 16 March 2017 11:54
>> *To:* users@ovirt.org
>> *Subject:* [ovirt-users] Event History for a VM
>>
>>
>>
>> Hi All,
>>
>>
>>
>> I need an event history of our VMs for auditing purposes,
>> going back to the moment each VM was created/imported. I
>> found the Events tab in the VM view, but it does not show
>> everything back to the moment of creation. Things that are important for me
>> are any change in CPUs or in the host that the VM is pinned to. Are the
>> events stored in the engine DB, and can I read them in any way? Is there a
>> value that needs to be changed in order to keep all events for a VM?
>>
>>
>>
>> Thank you for helping,
>>
>>
>>
>> Sven
>>
>>
>>
>
> +1
>
> Gianluca
>


Re: [ovirt-users] 3.6 : iSCSI LUN not detected

2016-08-11 Thread Kevin Goldblatt
Hi Alexis

Were you able to resolve your storage LUN issue?

Regards,


Kevin

On Wed, Aug 10, 2016 at 11:38 AM, Alexis HAUSER <
alexis.hau...@telecom-bretagne.eu> wrote:

> Hi,
>
> I am reinstalling a new node with a new hosted-engine and I would like to
> import an iSCSI storage domain from a previous ovirt installation.
> However, I can see all the LUNs present on that iSCSI target except the one I want... I
> checked from the iSCSI array and the disk still exists; it's just not
> detected by Ovirt (3.6)...
> I tried to create a new data domain and chose that same iSCSI target, and it's also
> not detected.
>
> Any ideas?
>
> I didn't remove the storage from the Engine interface on the previous
> installation; I just turned off all VMs accessing the iSCSI and
> unplugged the cable. Is it possible there is still a lock file or something
> from the previous hypervisor?