Re: [ovirt-users] Fwd: RE: FC QLogic question

2014-09-12 Thread Sander Grendelman
This could also be an SELinux issue with FC on the node; I've hit that one
before, especially since the LUNs do show up with multipath -ll.
It could be as simple as flipping a sanlock-related SELinux boolean.

A quick test would be to boot in permissive mode.
See http://www.ovirt.org/Node_Troubleshooting#SELinux for some more
information.
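
A minimal sketch of that quick test; the boolean name below is an assumption, so list your node's booleans first:

```shell
# Quick test: run permissive; if the storage domain then activates, it's SELinux
setenforce 0
grep -i 'denied.*sanlock' /var/log/audit/audit.log | tail   # recent AVC denials

# Find the relevant boolean and flip it persistently (name varies by policy)
getsebool -a | grep -i -e sanlock -e virt
setsebool -P virt_use_sanlock on   # example boolean; confirm the exact name first
setenforce 1                       # back to enforcing
```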
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt guest agent iso

2014-07-29 Thread Sander Grendelman
On Mon, Jul 28, 2014 at 1:30 AM, Maurice James 
wrote:

> Screen resizing on Server 2012 works too?
>

That's probably not going to work as long as there is no QXL/SPICE support
for windows >= 2k12.


Re: [ovirt-users] ovirt guest agent iso

2014-07-28 Thread Sander Grendelman
On Mon, Jul 28, 2014 at 1:13 PM, Sandro Bonazzola 
wrote:

> Il 28/07/2014 13:08, Sander Grendelman ha scritto:
> > I've tested the 3.5_5 installer on both a 2012 and a 2012 R2 guest.
> > On both guests the installer hung at the "installing virtioscsi" step.
> > Killing the installer process led to an unbootable OS.
> >
> > What is the right place to report bugs in this ISO/installer?
>
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt&component=ovirt-windows-guest-tools
>
Thank you, I've created a bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1123836
I also found a workaround: after disabling virtio-scsi support the
installer completes successfully.


Re: [ovirt-users] ovirt guest agent iso

2014-07-28 Thread Sander Grendelman
I've tested the 3.5_5 installer on both a 2012 and a 2012 R2 guest.
On both guests the installer hung at the "installing virtioscsi" step.
Killing the installer process led to an unbootable OS.

What is the right place to report bugs in this ISO/installer?

On Sat, Jul 26, 2014 at 4:44 AM, Tiemen Ruiten  wrote:

> I installed the newest iso 3.5_5 on my Server 2012 R2 guests and it
> works nicely now.
>
> On 23-07-14 10:41, Lev Veyde wrote:
> > Hi Sandro,
> >
> > No, it doesn't.
> >
> > But the latest release (ISO 3.5_5) should support this OS.
> >
> > Thanks in advance,
> > Lev Veyde.
> >
> > - Original Message -
> > From: "Sandro Bonazzola" 
> > To: "Tiemen Ruiten" , users@ovirt.org,
> lve...@redhat.com, "Simone Tiraboschi" 
> > Sent: Wednesday, July 23, 2014 11:08:11 AM
> > Subject: Re: [ovirt-users] ovirt guest agent iso
> >
> > Il 23/07/2014 10:02, Tiemen Ruiten ha scritto:
> >> On 07/23/14 08:13, Sandro Bonazzola wrote:
> >>> Il 14/07/2014 10:21, Tiemen Ruiten ha scritto:
>  On 07/11/14 17:13, Bob Doolittle wrote:
> > On 07/11/2014 11:12 AM, Bob Doolittle wrote:
> >> On 07/11/2014 10:55 AM, Tiemen Ruiten wrote:
> >>> Hello,
> >>>
> >>> Is it possible/advisable to install the Windows Guest Agent tools
> 3.5
> >>> ISO on a VM on a 3.4.2 oVirt cluster?
> >> Yes. I've done just that. Works great.
> > Caveat - there was a bug related to not setting the Guest Agent to
> > autostart (at least for Windows 2008 R2). I think that's been fixed
> > but don't know if the fix has been pushed to the latest ISO yet.
> > Workaround - go into Services and change it from Manual to Automatic.
> >
> > -Bob
> >
>  Unfortunately, I get 'Unsupported Windows version' on Windows Server
>  2012 R2.
> 
> >>> Which release of the iso are you using?
> >>>
> >>>
> >> This one:
> >>
> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-guest-tools/ovirt-guest-tools-3.5-2.iso
> >>
> > Lev, does above build include 2012 R2 support?
> > We're going to release a new build today as part of the 3.5.0 beta 2
> release, can you re-test it once it's ready?
> >
> >
>


Re: [ovirt-users] converting windows guests with v2v, where to put the iso?

2014-04-18 Thread Sander Grendelman
If your machine runs RHEL, the virtio-win RPM is in the "supplementary"
child channel: rhel6-x86_64-supplementary
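
Roughly, assuming an RHN-registered box (the exact channel label depends on your subscription setup; the one below is one common form):

```shell
rhn-channel --add --channel=rhel-x86_64-server-supplementary-6
yum install virtio-win
```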

Structure:

rpm -ql virtio-win
/usr/share/doc/virtio-win-1.6.8
/usr/share/doc/virtio-win-1.6.8/virtio-win_license.txt
/usr/share/virtio-win
/usr/share/virtio-win/drivers
/usr/share/virtio-win/drivers/amd64
/usr/share/virtio-win/drivers/amd64/Win2003
/usr/share/virtio-win/drivers/amd64/Win2003/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win2003/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win2003/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win2003/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win2003/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win2003/viostor.sys
/usr/share/virtio-win/drivers/amd64/Win2008
/usr/share/virtio-win/drivers/amd64/Win2008/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win2008/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win2008/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win2008/vioscsi.cat
/usr/share/virtio-win/drivers/amd64/Win2008/vioscsi.inf
/usr/share/virtio-win/drivers/amd64/Win2008/vioscsi.sys
/usr/share/virtio-win/drivers/amd64/Win2008/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win2008/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win2008/viostor.sys
/usr/share/virtio-win/drivers/amd64/Win2008R2
/usr/share/virtio-win/drivers/amd64/Win2008R2/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win2008R2/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win2008R2/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win2008R2/vioscsi.cat
/usr/share/virtio-win/drivers/amd64/Win2008R2/vioscsi.inf
/usr/share/virtio-win/drivers/amd64/Win2008R2/vioscsi.sys
/usr/share/virtio-win/drivers/amd64/Win2008R2/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win2008R2/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win2008R2/viostor.sys
/usr/share/virtio-win/drivers/amd64/Win2012
/usr/share/virtio-win/drivers/amd64/Win2012/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win2012/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win2012/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win2012/vioscsi.cat
/usr/share/virtio-win/drivers/amd64/Win2012/vioscsi.inf
/usr/share/virtio-win/drivers/amd64/Win2012/vioscsi.sys
/usr/share/virtio-win/drivers/amd64/Win2012/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win2012/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win2012/viostor.sys
/usr/share/virtio-win/drivers/amd64/Win2012R2
/usr/share/virtio-win/drivers/amd64/Win2012R2/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win2012R2/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win2012R2/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win2012R2/vioscsi.cat
/usr/share/virtio-win/drivers/amd64/Win2012R2/vioscsi.inf
/usr/share/virtio-win/drivers/amd64/Win2012R2/vioscsi.sys
/usr/share/virtio-win/drivers/amd64/Win2012R2/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win2012R2/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win2012R2/viostor.sys
/usr/share/virtio-win/drivers/amd64/Win7
/usr/share/virtio-win/drivers/amd64/Win7/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win7/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win7/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win7/qxl.cat
/usr/share/virtio-win/drivers/amd64/Win7/qxl.inf
/usr/share/virtio-win/drivers/amd64/Win7/qxl.sys
/usr/share/virtio-win/drivers/amd64/Win7/qxldd.dll
/usr/share/virtio-win/drivers/amd64/Win7/vioscsi.cat
/usr/share/virtio-win/drivers/amd64/Win7/vioscsi.inf
/usr/share/virtio-win/drivers/amd64/Win7/vioscsi.sys
/usr/share/virtio-win/drivers/amd64/Win7/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win7/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win7/viostor.sys
/usr/share/virtio-win/drivers/amd64/Win8
/usr/share/virtio-win/drivers/amd64/Win8.1
/usr/share/virtio-win/drivers/amd64/Win8.1/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win8.1/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win8.1/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win8.1/vioscsi.cat
/usr/share/virtio-win/drivers/amd64/Win8.1/vioscsi.inf
/usr/share/virtio-win/drivers/amd64/Win8.1/vioscsi.sys
/usr/share/virtio-win/drivers/amd64/Win8.1/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win8.1/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win8.1/viostor.sys
/usr/share/virtio-win/drivers/amd64/Win8/netkvm.cat
/usr/share/virtio-win/drivers/amd64/Win8/netkvm.inf
/usr/share/virtio-win/drivers/amd64/Win8/netkvm.sys
/usr/share/virtio-win/drivers/amd64/Win8/vioscsi.cat
/usr/share/virtio-win/drivers/amd64/Win8/vioscsi.inf
/usr/share/virtio-win/drivers/amd64/Win8/vioscsi.sys
/usr/share/virtio-win/drivers/amd64/Win8/viostor.cat
/usr/share/virtio-win/drivers/amd64/Win8/viostor.inf
/usr/share/virtio-win/drivers/amd64/Win8/viostor.sys
/usr/share/virtio-win/drivers/i386
/usr/share/virtio-win/drivers/i386/Win2003
/usr/share/virtio-win/drivers/i386/Win2003/netkvm.cat
/usr/share/virtio-win/drivers/i386/Win2003/netkvm.inf
/usr/share/virtio-win/drivers/i386/Win2003/netkvm.sys
/usr/share/virtio-win/drivers/i386/Win2003/viostor.cat
/usr/share/virtio-win/drivers/i386/Win2

Re: [Users] HA

2014-04-04 Thread Sander Grendelman
Do you have power management configured?
Was the "failed" host fenced/rebooted?
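
As a sanity check, the fence agents can also be driven by hand from the engine or a peer host; a sketch assuming an IPMI-based device (address and credentials are placeholders):

```shell
# Package: fence-agents; "-o status" only queries, "-o reboot" actually fences
fence_ipmilan -a 192.0.2.10 -l admin -p secret -o status
```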


On Fri, Apr 4, 2014 at 2:21 PM, Koen Vanoppen wrote:

> So... It is possible for a fully automatic migration of the VM to another
> hypervisor in case Storage connection fails?
> How can we make this happen? Because for the moment, when we tested the
> situation they stayed in pause state.
> (Test situation:
>
>- Unplug the 2 fibre cables from the hypervisor
>- VM's go in pause state
>- VM's stayed in pause state until the failure was solved
>
> )
>
>
> They only returned when we restored the fiber connection to the
> Hypervisor...
>
> Kind Regards,
>
> Koen
>
>
>
> 2014-04-04 13:52 GMT+02:00 Koen Vanoppen :
>
>> So... It is possible for a fully automatic migration of the VM to another
>> hypervisor in case Storage connection fails?
>> How can we make this happen? Because for the moment, when we tested the
>> situation they stayed in pause state.
>> (Test situation:
>>
>>- Unplug the 2 fibre cables from the hypervisor
>>- VM's go in pause state
>>- VM's stayed in pause state until the failure was solved
>>
>> )
>>
>>
>> They only returned when we restored the fiber connection to the
>> Hypervisor...
>>
>> Kind Regards,
>>
>> Koen
>>
>>
>> 2014-04-03 16:53 GMT+02:00 Koen Vanoppen :
>>
>> -- Forwarded message --
>>> From: "Doron Fediuck" 
>>> Date: Apr 3, 2014 4:51 PM
>>> Subject: Re: [Users] HA
>>> To: "Koen Vanoppen" 
>>> Cc: "Omer Frenkel" , , "Federico
>>> Simoncelli" , "Allon Mureinik" >> >
>>>
>>>
>>>
>>> - Original Message -
>>> > From: "Koen Vanoppen" 
>>> > To: "Omer Frenkel" , users@ovirt.org
>>> > Sent: Wednesday, April 2, 2014 4:17:36 PM
>>> > Subject: Re: [Users] HA
>>> >
>>> > Yes, indeed. I meant not-operational. Sorry.
>>> > So, if I understand this correctly: if we ever end up in a situation
>>> > where we lose both storage connections on our hypervisor, we will
>>> > have to manually restore the connections first?
>>> >
>>> > And thanks for the tip for speeding up things :-).
>>> >
>>> > Kind regards,
>>> >
>>> > Koen
>>> >
>>> >
>>> > 2014-04-02 15:14 GMT+02:00 Omer Frenkel < ofren...@redhat.com > :
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > - Original Message -
>>> > > From: "Koen Vanoppen" < vanoppen.k...@gmail.com >
>>> > > To: users@ovirt.org
>>> > > Sent: Wednesday, April 2, 2014 4:07:19 PM
>>> > > Subject: [Users] HA
>>> > >
>>> > > Dear All,
>>> > >
>>> > > During our acceptance testing, we discovered something. (Document will
>>> > > follow.)
>>> > > When we disable one fiber path there is no problem: multipath finds
>>> > > its way and no pings are lost.
>>> > > BUT when we disabled both fiber paths (so one of the storage domains
>>> > > is gone on this host, but still available on the other host), VMs go
>>> > > into paused mode... It chooses a new SPM (can we speed this up?), puts
>>> > > the host in non-responsive (can we speed this up? more important) and
>>> > > the VMs stay in paused mode... I would expect that they would be
>>> > > migrated (yes, HA is
>>> >
>>> > I guess you mean the host moves to not-operational (in contrast to
>>> > non-responsive)? If so, the engine will not migrate VMs that are
>>> > paused due to IO error, because of the data corruption risk.
>>> >
>>> > to speed up you can look at the storage domain monitoring timeout:
>>> > engine-config --get StorageDomainFalureTimeoutInMinutes
>>> >
>>> >
>>> > > enabled) to the other host and reboot there... Any solution? We are
>>> > > still using oVirt 3.3.1, but we are planning an upgrade to 3.4 after
>>> > > the Easter holiday.
>>> > >
>>> > > Kind Regards,
>>> > >
>>> > > Koen
>>> > >
>>>
>>> Hi Koen,
>>> Resuming from paused due to IO issues is supported (adding relevant
>>> folks).
>>> Regardless, if you did not define power management, you should manually
>>> approve that the source host was rebooted in order for migration to
>>> proceed. Otherwise we risk a split-brain scenario.
>>>
>>> Doron
>>>
>>
>>
>


Re: [Users] Storage Performance Issue !

2014-02-18 Thread Sander Grendelman
You give very little information about your setup and the problem.

- What kind of storage do you use? ( software or hardware raid? how
many spindles? what kind of disks?)
- Does it have battery backed cache? Is the battery on this cache OK?
- How much IO, with how much latency, is generated on the hosts? ( run
some diagnostics e.g. "iostat -mx 10" for a couple of minutes on a
host )
- How much IO, with how much latency, is generated on the VMs?
- Is IO on the host better than on the guest? How much? (test!)
- What is the storage type of the storage domain? ( local FS? iSCSI?
Fibre Channel? GlusterFS? )
- What type of IO controller do you use in the guest? ( IDE? virtio?
virtio-scsi? )
- Do you use thin- or thick provisioned disks?
- Do you use snapshots?
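
A minimal write-latency probe along those lines, to run both on a host and inside a guest for comparison (the 64 MiB size and /tmp path are arbitrary):

```shell
# Watch device latency/utilization while generating load (iostat is in sysstat;
# the second 10-second report is the one that reflects the test interval)
command -v iostat >/dev/null && iostat -mx 10 2 >/tmp/iostat.out &

# Sequential write with a final fdatasync so the page cache doesn't hide the result
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync

rm -f /tmp/ddtest   # clean up
wait                # let the backgrounded iostat finish
```

Compare the MB/s and, in iostat, the await/%util columns between host and guest runs.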

On Wed, Feb 19, 2014 at 4:56 AM, Vishvendra Singh Chauhan
 wrote:
> Hi,
>
> My network is 1 Gigabit. And i am using my each node in a separate
> cluster/DC.
>
> And memory and processor uses are very low. Means when i run 30 vms then
> it's processor uses is 45% and memory uses is 35% around.
>
>
> On Tue, Feb 18, 2014 at 2:14 PM, Dafna Ron  wrote:
>>
>> What is the network configuration? Are you using each host with its own
>> cluster/DC as a local storage host?
>> How many hosts are we talking about? What is the memory/CPU consumption on
>> the hosts?
>>
>>
>>
>> On 02/18/2014 01:18 AM, Vishvendra Singh Chauhan wrote:
>>>
>>> Thanks Dafna,
>>>
>>> I am using the Dell servers just as nodes; my manager is running on
>>> another machine. On each node we have 40 VMs running, and almost all of
>>> them run RHEL6. Now the performance problem: when clients write data in
>>> a VM, they report very slow write speeds.
>>>
>>> I also see slow write performance in the VMs myself, so please suggest
>>> any way to improve the write speed in the virtual machines.
>>>
>>>
>>> On Mon, Feb 17, 2014 at 2:40 PM, Dafna Ron >> > wrote:
>>>
>>> please give more information.
>>> what do you mean by storage performance?
>>> how many vm's are you running?
>>> are you using the Dell as just a host or is engine also installed
>>> on it?
>>>
>>>
>>> On 02/17/2014 03:26 AM, Vishvendra Singh Chauhan wrote:
>>>
>>> Hello Group,
>>>
>>> Please help me out in storage issue in ovirt.
>>>
>>>
>>> I am using Dell PowerEdge XD 720 servers as nodes in oVirt.
>>> Every node has 24TB of storage space, and I am using all of it as
>>> local storage. But I am still facing a storage performance problem:
>>> my guest OSes are very slow to write data.
>>>
>>>
>>> So please give me, some tips using them i can increase the
>>> performance in storage.
>>>
>>>
>>>
>>> -- Thanks and Regards.
>>> Vishvendra Singh Chauhan
>>>
>>>
>>>
>>>
>>>
>>> -- Dafna Ron
>>>
>>>
>>>
>>>
>>> --
>>> Thanks and Regards.
>>> Vishvendra Singh Chauhan
>>> (RHC{SA,E,SS,VA}CC{NA,NP})
>>> +91-9711460593
>>>
>>> http://linux-links.blogspot.in/
>>> God First Work Hard Success is Sure...
>>
>>
>>
>> --
>> Dafna Ron
>
>
>
>
> --
> Thanks and Regards.
> Vishvendra Singh Chauhan
> (RHC{SA,E,SS,VA}CC{NA,NP})
> +91-9711460593
> http://linux-links.blogspot.in/
> God First Work Hard Success is Sure...
>


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-01-23 Thread Sander Grendelman
On Thu, Jan 23, 2014 at 12:23 PM, Karli Sjöberg  wrote:
> Hi!
>
> I've gone through upgrading from 3.3.2 to 3.3.3 RC on CentOS 6.5 in our
> test environment; it went off without a hitch, so "good job" guys! However,
> something I'd very much like to see fixed is live snapshots for CentOS,
> especially since it seems to be fixed already for Fedora. The issue has
> already been discussed:
> http://lists.ovirt.org/pipermail/users/2013-December/019090.html
>
> Is this something that can be targeted for 3.3.3 GA?

The problem is that you need a different qemu-kvm package for live snapshots.
At the moment the only way to get this package is building it from the srpm:
http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.415.el6_5.3.src.rpm
(It's a drop-in replacement).

This version should probably be included in the oVirt or CentOS repositories.
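
The rebuild itself is routine on a RHEL/CentOS 6 box; a sketch (the yum-builddep step assumes yum-utils is installed):

```shell
yum install rpm-build yum-utils
wget http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.415.el6_5.3.src.rpm
yum-builddep qemu-kvm-rhev-0.12.1.2-2.415.el6_5.3.src.rpm   # pull in build deps
rpmbuild --rebuild qemu-kvm-rhev-0.12.1.2-2.415.el6_5.3.src.rpm
# packages land in ~/rpmbuild/RPMS/x86_64/ and install over plain qemu-kvm
```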


Re: [Users] Can not change "Defined Memory" in VM in Ovirt 3.3.3

2014-01-23 Thread Sander Grendelman
On Thu, Jan 23, 2014 at 3:32 PM, Madhav V Diwan
 wrote:
...
>
> Where/how  do you change the "Defined Memory"  after a VM is created?

The VM has to be down to change the amount of memory.

This works for me in 3.3.2:

Right mouse button on VM -> Edit -> Show advanced options -> System ->
Edit Total Memory


Re: [Users] Change bootdisk from IDE to virtio

2014-01-21 Thread Sander Grendelman
(Below is based on RHEL6 experience; please translate to SUSE.)

This may be a GRUB issue: look at /boot/grub/device.map and /boot/grub/grub.conf
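
For illustration, the kind of edit involved, worked on a copy in /tmp (the IDE entry shown is an assumption about a typical install; on a real guest the files live in /boot/grub):

```shell
# A typical device.map entry for an IDE disk:
printf '(hd0)\t/dev/hda\n' > /tmp/device.map

# After the switch to virtio the disk appears as /dev/vda, so repoint hd0:
sed -i 's|/dev/hda|/dev/vda|' /tmp/device.map
cat /tmp/device.map   # (hd0)  /dev/vda

# grub.conf needs the same rename for any root=/dev/hdaX kernel arguments,
# or better, switch to root=UUID=... so the device name no longer matters.
```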


On Tue, Jan 21, 2014 at 2:06 PM, Markus Stockhausen
 wrote:
>> > After switching the disk over to virtio the boot process hangs at
>> >
>> > SeaBIOS...
>> >
>> > Machine UUID ...
>> >
>> > iPXE ...
>> >
>> > Booting from Hard Disk ...
>>
>>  initramfs doesn't contain virtio drivers?
>
> Tried it without luck:
>
> linux-xj78:~ # blkid
> /dev/vda1: UUID="d03b0079-c2e1-4ca2-8ac2-53e46fac55d3" TYPE="ext3"
> /dev/vdb1: UUID="31ecf1f8-6d46-4ee1-bd73-c2dca46478b2" TYPE="ext3"
> /dev/sda1: UUID="92f14b02-77b4-4efa-bd7b-a9136453ae6d" TYPE="swap"
> /dev/sda2: UUID="31681173-fe97-446b-ba6b-7b459c66b52a" TYPE="ext3"
> /dev/vdc1: UUID="413fbea0-343a-4eb7-9901-f92f16c60c48" TYPE="ext3"
> /dev/vdd1: UUID="87b31934-029c-49df-b667-37b239792f52" TYPE="ext3"
> /dev/vde1: UUID="6457aeb7-84ee-495a-8649-6ade44f62ff0" TYPE="ext3"
> /dev/vdf1: UUID="04a15523-1ec5-4108-9574-3e1c32194fee" TYPE="ext3"
> /dev/vdg1: UUID="5f01c32b-5939-4772-b879-d75003107b6c" TYPE="ext3"
> /dev/vdh1: UUID="304850e0-00ac-4a7c-b71e-4413ce4a5e30" TYPE="ext3"
> /dev/vdi1: UUID="dea7d034-6189-42c7-9127-ab2d1bda5696" TYPE="ext3"
> linux-xj78:~ # export 
> rootdev=/dev/disk/by-uuid/31681173-fe97-446b-ba6b-7b459c66b52a
> linux-xj78:~ # mkinitrd
>
> Kernel image:   /boot/vmlinuz-3.0.76-0.11-default
> Initrd image:   /boot/initrd-3.0.76-0.11-default
> KMS drivers:intel-agp cirrus
> Root device:/dev/disk/by-uuid/31681173-fe97-446b-ba6b-7b459c66b52a 
> (/dev/sda2) (mounted on / as ext3)
> Resume device:  /dev/disk/by-uuid/92f14b02-77b4-4efa-bd7b-a9136453ae6d 
> (/dev/sda1)
> Kernel Modules: hwmon thermal_sys thermal processor fan scsi_mod 
> scsi_transport_spi mptbase mptscsih mptspi libata ata_piix ata_generic 
> virtio_ring virtio virtio_blk scsi_dh scsi_dh_emc scsi_dh_hp_sw scsi_dh_alua 
> scsi_dh_rdac mbcache jbd ext3 intel-gtt intel-agp syscopyarea sysfillrect 
> sysimgblt i2c-core drm drm_kms_helper ttm cirrus usb-common usbcore ohci-hcd 
> uhci-hcd ehci-hcd xhci-hcd hid usbhid crc-t10dif sd_mod virtio_pci
> Features:   acpi kms block usb resume.userspace resume.kernel
> Bootsplash: SLES (800x600)
> 37356 blocks
> Network: auto
> Calling mkinitrd -k /boot/vmlinuz-3.0.76-0.11-default -i
> /tmp/mkdumprd.7kqe8kilVL -f 'kdump network' -B  -s ''
> Regenerating kdump initrd ...
>
> Kernel image:   /boot/vmlinuz-3.0.76-0.11-default
> Initrd image:   /tmp/mkdumprd.7kqe8kilVL
> KMS drivers:intel-agp cirrus
> Root device:/dev/disk/by-uuid/31681173-fe97-446b-ba6b-7b459c66b52a 
> (/dev/sda2) (mounted on / as ext3)
> Resume device:  /dev/disk/by-uuid/92f14b02-77b4-4efa-bd7b-a9136453ae6d 
> (/dev/sda1)
> Kernel Modules: hwmon thermal_sys thermal processor fan scsi_mod 
> scsi_transport_spi mptbase mptscsih mptspi libata ata_piix ata_generic 
> virtio_ring virtio virtio_blk scsi_dh scsi_dh_emc scsi_dh_hp_sw scsi_dh_alua 
> scsi_dh_rdac mbcache jbd ext3 intel-gtt intel-agp syscopyarea sysfillrect 
> sysimgblt i2c-core drm drm_kms_helper ttm cirrus usb-common usbcore ohci-hcd 
> uhci-hcd ehci-hcd xhci-hcd hid usbhid af_packet e1000 nls_utf8 crc-t10dif 
> sd_mod virtio_pci
> Features:   acpi kms block usb network resume.userspace resume.kernel 
> kdump
> 50842 blocks
>
> Markus
>
>
>


Re: [Users] What settings are needed to run windows 98 on a VM?

2014-01-21 Thread Sander Grendelman
It looks like this (kind of) works on oVirt already.
I was able to install Win98 from CD-ROM on a VM with an IDE disk.

There are some issues:
- The VM tends to hang during boot; choosing step-by-step confirmation
(my first suspect was javasup.vxd) seems to fix this. This doesn't
happen if I use the "-no-kvm" option with a local kvm install.
- No network (there seems to be no e1000 driver for Win98); this can
probably be fixed with a hook to add a ne2k_pci NIC.
- Slow graphics; this can probably be fixed with a hook to use the
cirrus adapter instead of qxl or vga.

Working KVM command line:
qemu-kvm -m 512 -hda /var/tmp/win98.qcow2 -no-kvm -cdrom
/var/tmp/win98se.iso -vga cirrus

Some more info here: http://ubuntuforums.org/showthread.php?t=774745

On Tue, Jan 21, 2014 at 7:27 AM, Eliezer Croitoru  wrote:
> I wanted to make sure I understand.
> This guy comes to me with windows98 and says he must run this machine.
> I don't really care about this machine but this is what he wants..
> Is there any options else then XEN that will run windows98 on a VM?
>
> Thanks,
> Eliezer


Re: [Users] Making v2v easier?

2014-01-20 Thread Sander Grendelman
Is there a specification for the ovf/xml/directory structure in the
export domain we can use
to (semi)manually import an ovirt-compatible machine to an export domain?

On Mon, Jan 20, 2014 at 11:39 AM, Matthew Booth  wrote:
> On 20/01/14 10:36, Itamar Heim wrote:
>> On 01/20/2014 12:18 PM, Matthew Booth wrote:
>>> On 20/01/14 09:53, Richard W.M. Jones wrote:
>>>> On Fri, Jan 17, 2014 at 05:06:13PM +0100, Sander Grendelman wrote:
>>>>> On Fri, Jan 17, 2014 at 4:19 PM, Itamar Heim  wrote:
>>>>>> I see a lot of threads about v2v pains (mostly from ESX?)
>>>>>>
>>>>>> I'm interested to see if we can make this simpler/easier.
>>>>> hear hear!
>>>>>
>>>>>>
>>>>>> if you have experience with this, please describe the steps you are
>>>>>> using
>>>>>> (also the source platform),
>>>>>
>>>>> Sources:
>>>>> - Existing KVM (virt-manager/libvirt) platform
>>>>> - ESX
>>>>> - ova/ovf templates from several sources
>>>>>
>>>>> Methods:
>>>>> - KVM:
>>>>>virt-v2v with libvirtxml option, works reasonably well, most issues
>>>>> are with windows guests where virt-v2v needs libguestfs-winsupport and
>>>>> virtio-win (RHEL only)
>>>>> - ESX:
>>>>>virt-v2v which works reasonably well _if_ the right packages
>>>>> (libguestfs-winsupport virtio-win) are installed.
>>>>>virt-v2v can be used directly from ESX/ESX host (configure .netrc
>>>>> first) but this is quite slow
>>>>>another option is to export the VM as an OVA and then import it
>>>>> with virt-v2v
>>>>> - ova/ovf templates:
>>>>>hit and miss with virt-v2v, especially if they contain something
>>>>> that is not a regular windows/linux guest.
>>>>>Another option is to do a direct copy of the disks on a pre-created
>>>>> VM, clumsy.
>>>>>
>>>>>> and how you would like to see this made simpler
>>>>>> (I'm assuming that would start from somewhere in the webadmin
>>>>>> probably).
>>>>>
>>>>> Webadmin would be nice, but better behaviour from existing tools
>>>>> would be
>>>>> a nice start too.
>>>>>
>>>>> For example: the flow with virt-v2v is
>>>>> 1) Analyze source, look for disks
>>>>> 2) Convert/copy disks to ovirt export domain
>>>>> 3) Try to add virtio stuff to the copied disks on the export domain
>>>>>
>>>>> If step 3 fails ( which happens a LOT), the copied disks are removed.
>>>>> This is very frustrating if you just waited a couple of hours for a
>>>>> large
>>>>> VM (e.g. 200GB) to be copied :(
>>>>>
>>>>> Some kind of graceful abort/resume would be VERY welcome.
>>>>
>>>> The above basically come down to the fact that currently virt-v2v does
>>>> the copy first and the v2v step second.  It was my understanding
>>>> [Matt?] that guestconv is supposed to do the v2v step first followed
>>>> by the copy, which should solve all of that.
>>>
>>> guestconv doesn't address this problem directly. We need smarter copying
>>> for that :/
>>>
>>>>
>>>>> Another issue with virt-v2v is that it _always_ tries to add virtio
>>>>> drivers.  I have a virtual appliance that contains some kind of
>>>>> proprietary embedded OS: adding drivers will always fail, give me
>>>>> some option to override that and configure simple ide / e1000
>>>>> hardware for the VM
>>>
>>> guestconv *does* address that.
>>>
>>>> I suspect in this case what you really should be doing is just copying
>>>> the source disk image, without using virt-v2v at all.
>>>
>>> Matt
>>>
>>
>> is guestconv ready for adoption/testing instead of virt-v2v?
>
> No, it's not even functionally complete, yet. We're planning to get that
> sorted soon.
>
> Matt
> --
> Matthew Booth, RHCA, RHCSS
> Red Hat Engineering, Virtualisation Team
>
> GPG ID:  D33C3490
> GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490


Re: [Users] Making v2v easier?

2014-01-20 Thread Sander Grendelman
FWIW, importing directly from an ESX server still works:

virt-v2v-host:
- RHEL/CentOS 6.5 physical host (virt-v2v uses qemu-kvm, which is extremely slow inside a VM)
- Packages:
  virt-v2v-0.9.1-5.el6_5.x86_64
  libguestfs-winsupport-1.0-7.el6.x86_64
  libguestfs-tools-c-1.20.11-2.el6.x86_64
  libguestfs-tools-1.20.11-2.el6.x86_64
  libguestfs-1.20.11-2.el6.x86_64
  virtio-win-1.6.7-2.el6.noarch ( RHEL only? )
- Network access to:
oVirt export domain (NFS)
esx host(s) to import from (HTTPS)
- virt-v2v has to run as root to mount the oVirt NFS export domain
- Edit ~/.netrc and add a line for the ESX host(s) to import from
(change the <...> parts):
machine <esx-host> login <user> password <password>
- Fix permissions on netrc file:
chmod 600 ~/.netrc
- Run virt-v2v (again: change the <...> parts; ?no_verify=1 is needed
when ESX uses self-signed certs):
LIBGUESTFS_DEBUG=1 virt-v2v -ic esx://<esx-host>/?no_verify=1 -o
rhev -os <export-domain> --network <network> <vm-name>

Conversion can take quite some time after the disk copy,
especially when virt-v2v removes the VMware Tools.
Running on a physical host (or using nested virtualization) helps.
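
The .netrc step from the list above, sketched with placeholder values (written to /tmp/netrc purely for illustration; the real file is ~/.netrc on the virt-v2v host):

```shell
# One line per ESX host; hostname, user and password here are placeholders
cat > /tmp/netrc <<'EOF'
machine esx01.example.com login root password s3cret
EOF
chmod 600 /tmp/netrc            # .netrc must not be world-readable
stat -c '%a %n' /tmp/netrc      # 600 /tmp/netrc
```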

On Mon, Jan 20, 2014 at 8:59 AM, Sander Grendelman
 wrote:
> https://rhn.redhat.com/errata/RHBA-2013-1749.html
>
> """
> This update fixes the following bug:
>
> * An update to virt-v2v included upstream support for the import of OVA images
> exported by VMware servers. Unfortunately, testing has shown that VMDK images
> created by recent versions of VMware ESX cannot be reliably supported, thus 
> this
> feature has been withdrawn. (BZ#1028983)
>
> Users of virt-v2v are advised to upgrade to this updated package, which fixes
> this bug.
> """


Re: [Users] Making v2v easier?

2014-01-19 Thread Sander Grendelman
https://rhn.redhat.com/errata/RHBA-2013-1749.html

"""
This update fixes the following bug:

* An update to virt-v2v included upstream support for the import of OVA images
exported by VMware servers. Unfortunately, testing has shown that VMDK images
created by recent versions of VMware ESX cannot be reliably supported, thus this
feature has been withdrawn. (BZ#1028983)

Users of virt-v2v are advised to upgrade to this updated package, which fixes
this bug.
"""


Re: [Users] Making v2v easier?

2014-01-17 Thread Sander Grendelman
On Fri, Jan 17, 2014 at 4:19 PM, Itamar Heim  wrote:
> I see a lot of threads about v2v pains (mostly from ESX?)
>
> I'm interested to see if we can make this simpler/easier.
hear hear!

>
> if you have experience with this, please describe the steps you are using
> (also the source platform),

Sources:
- Existing KVM (virt-manager/libvirt) platform
- ESX
- ova/ovf templates from several sources

Methods:
- KVM:
  virt-v2v with libvirtxml option, works reasonably well, most issues
are with windows guests where virt-v2v needs libguestfs-winsupport and
virtio-win (RHEL only)
- ESX:
  virt-v2v which works reasonably well _if_ the right packages
(libguestfs-winsupport virtio-win) are installed.
  virt-v2v can be used directly from ESX/ESX host (configure .netrc
first) but this is quite slow
  another option is to export the VM as an OVA and then import it with virt-v2v
- ova/ovf templates:
  hit and miss with virt-v2v, especially if they contain something
that is not a regular windows/linux guest.
  Another option is a direct copy of the disks onto a pre-created
VM, but that is clumsy.

> and how you would like to see this made simpler
> (I'm assuming that would start from somewhere in the webadmin probably).

Webadmin would be nice, but better behaviour from existing tools would be
a nice start too.

For example: the flow with virt-v2v is
1) Analyze source, look for disks
2) Convert/copy disks to ovirt export domain
3) Try to add virtio stuff to the copied disks on the export domain

If step 3 fails ( which happens a LOT), the copied disks are removed.
This is very frustrating if you just waited a couple of hours for a large
VM (e.g. 200GB) to be copied :(

Some kind of graceful abort/resume would be VERY welcome.

Another issue with virt-v2v is that it _always_ tries to add virtio drivers.
I have a virtual appliance that contains some kind of proprietary embedded OS:
adding drivers will always fail; give me some option to override that
and configure simple IDE / e1000 hardware for the VM.

>
> Thanks,
>Itamar


Re: [Users] get VM events via API?

2014-01-17 Thread Sander Grendelman
I did a quick test with ovirt-shell:


# Query vm id:
[oVirt shell (connected)]# list vms --query "name=testvm"

id : 8b2ad3a2-9704-47e6-99f1-f196856d7bd5
name   : plsweb02

[oVirt shell (connected)]#

# List events:

list events --kwargs "vm-id=a82874bb-152d-4efd-952b-3f3304af7bb3"

Unfortunately this uses client-side filtering.

The REST API / Python SDK should work similarly; AFAIK ovirt-shell uses
the REST API too.
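For illustration, the client-side filtering ovirt-shell does can be replicated in a few lines of Python; the event records below are made-up stand-ins for the objects the SDK actually returns:

```python
# Client-side filtering of events by VM id, mirroring what ovirt-shell
# does with --kwargs "vm-id=...". The dicts stand in for SDK objects.
def events_for_vm(events, vm_id):
    """Return only the events belonging to the given VM id."""
    return [e for e in events if e.get("vm-id") == vm_id]

events = [
    {"id": 1, "vm-id": "8b2ad3a2-9704-47e6-99f1-f196856d7bd5",
     "description": "VM started"},
    {"id": 2, "vm-id": "00000000-0000-0000-0000-000000000001",
     "description": "VM stopped"},
]

print(events_for_vm(events, "8b2ad3a2-9704-47e6-99f1-f196856d7bd5"))
```

The obvious drawback is that the full event list is fetched first, which is what makes this slow on busy engines.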

On Fri, Jan 17, 2014 at 1:13 PM, Sven Kieske  wrote:
> Hi,
>
> is it possible to get the vm events via any API/CLI?
>
> I'm talking about the "events" tab which you can select
> in webadmin when you selected a vm.
>
> If this is possible, how can we get the data?
> We could even write a vdsm hook, if necessary, but
> would of course prefer REST-API.
>
> Thanks.
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +49-5772-293-100
> F: +49-5772-293-333
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Horrid performance during disk I/O

2014-01-15 Thread Sander Grendelman
On Tue, Jan 14, 2014 at 10:38 PM, Blaster  wrote:
> On 1/14/2014 11:04 AM, Andrew Cathrow wrote:
>>
>> Did you compare virtio-block to virto-scsi, the former will likely
>> outperform the latter.
>
>
> No, but I have been meaning to, out of curiosity.
>
> But why do you say virto-blk will be faster than virtio-scsi?  The
> virtio-scsi wiki claims equal performance.
That's also what I read but ... this presentation:
http://www.linux-kvm.org/wiki/images/f/f9/2012-forum-virtio-blk-performance-improvement.pdf
claims: "virtio-blk is about ~3 times faster than virtio-scsi in my setup" o_O
So testing is definitely a good idea.
>
> I've been trying to get some real numbers of the performance differences
> using iozone, but the numbers are all over the place, both on the HV and the
> guests, so not very meaningful.   Not an iozone expert, so still trying to
> figure out what I'm doing wrong there as well.

I've also done some (relatively simple) testing with fio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [QE] 3.4.0 Release tracker

2014-01-13 Thread Sander Grendelman
On Mon, Jan 13, 2014 at 12:32 AM, Dan Kenigsberg  wrote:
> On Fri, Jan 10, 2014 at 04:55:18PM +0200, Itamar Heim wrote:
>> On 01/10/2014 01:52 PM, Sander Grendelman wrote:
>> >Can I propose BZ#1035314 for 3.3.3 or 3.4.0, simple, trivial fix to a hook.
>>
>> Hi Sander,
>>
>> please use bug summary so folks won't have to go and look what the
>> number means just to see if relevant to them.
>>
>> this is about:
>> Bug 1035314 - vdsm-hook-nestedvt uses kvm_intel-only syntax

OK, I'll keep that in mind!

>> i see you posted a patch in the bug - can you post the patch to
>> gerrit as well?
>
> I've done that already: http://gerrit.ovirt.org/#/c/23164/
>
> Sander (and others), could you review and formally verify it?

Looks fine to me, should I also do/confirm something in gerrit?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [QE] 3.4.0 Release tracker

2014-01-10 Thread Sander Grendelman
Can I propose BZ#1035314 for 3.3.3 or 3.4.0? It is a simple, trivial fix to a hook.

On Wed, Jan 8, 2014 at 9:46 AM, Sandro Bonazzola  wrote:
> Hi,
>
> as you may know, we're planning to build oVirt 3.4.0 beta really soon and 
> release 3.4.0 by end of January.
> A tracker bug (https://bugzilla.redhat.com/show_bug.cgi?id=1024889) has been 
> created for this release.
>
> The following is a list of the current blocker bugs with target 3.4.0:
> Whiteboard  Bug ID  Summary
> storage 1032686 [RFE] API to save OVF on any location
> storage 1032679 [RFE] Single Disk Snapshots
> network 987813  [RFE] report BOOTPROTO and BONDING_OPTS independent of 
> netdevice.cfg
>
>
> The following is a list of the bugs with target 3.4.0 not yet fixed:
>
> Whiteboard  Bug ID  Summary
> gluster 1008980 [oVirt] Option 'Select as SPM' available for a host 
> in gluster-only mode of oVirt
> gluster 1038988 Gluster brick sync does not work when host has 
> multiple interfaces
> i18n1033730 [es-ES] need to revise the "create snapshot" 
> translation
> infra   870330  Cache records in memory
> infra   904029  [engine-manage-domains] should use POSIX parameter 
> form and aliases as values
> infra   979231  oVirt Node Upgrade: Support N configuration
> infra   986882  tar which is used by host-deploy is missing from 
> fedora minimal installation
> infra   995362  [RFE] Support firewalld
> infra   1016634 Performance hit as a result of duplicate updates to 
> VdsDynamic in VdsUpdateRuntimeInfo
> infra   1023751 [RFE] Create Bin Overrider for application context 
> files changes we do in JRS
> infra   1023754 [RFE] add trigger to stop etl connection via engine 
> db value.
> infra   1023759 [RFE] re-implement SSO solution based on JRS new SSO 
> interface
> infra   1023761 [RFE] Build nightly JRS builds based on latest JRS 
> version
> infra   1028793 systemctl start vdsmd blocks if dns server unreachable
> infra   1032682 Refactor authentication framework in engine
> infra   1035844 [oVirt][infra] Add host/Reinstall radio button text 
> not actionable
> infra   1045350 REST error during VM creation via API
> infra   1046611 [oVirt][infra] Device custom properties syntax check 
> is wrong
> integration 789040  [RFE] Log Collector should be able to run without 
> asking questions
> integration 967350  [RFE] port dwh installer to otopi
> integration 967351  [RFE] port reports installer to otopi
> integration 1023752 [RFE] add upstream support for Centos el6 arch.
> integration 1024028 [RFE] add trigger to stop etl connection via engine 
> db value.
> integration 1028489 [RFE] pre-populate ISO DOMAIN  with 
> rhev-tools-setup.iso (or equiv)
> integration 1028913 'service network start' sometimes fails during setup
> integration 1037663 F20 - ovirt-log-collector: conflicts with file from 
> package sos-3.0-3.fc20.noarch
> integration 1039616 Setting shmmax on F19 is not enough for starting 
> postgres
> network 987832  failed to add ovirtmgmt bridge when the host has 
> static ip
> network 1001186 With AIO installer and NetworkManager enabled, the 
> ovirtmgmt bridge is not properly configured
> network 1010663 override mtu field allows only values up to 9000
> network 1018947 Yum update to oVirt 3.3 from 3.1.0 fails on CentOS 
> 6.4 with EPEL dependency on python-inotify
> network 1037612 [oVirt][network][RFE] Add "sync" column to hosts sub 
> tab under networks main tab
> network 1040580 [RFE] Apply networks changes to multiple hosts
> network 1040586 [RFE] Ability to configure network on multiple hosts 
> at once
> network 1043220 [oVirt][network][RFE] Add Security-Group support for 
> Neutron based networks
> network 1043230 Allow configuring Network QoS on host interfaces
> network 1044479 Make an iproute2 network configurator for vdsm
> network 1048738 [oVirt][network][RFE] Add subnet support for neutron  
> based networks
> network 1048740 [oVirt][network][RFE] Allow deleting Neutron based 
> network (in Neutron)
> network 1048880 [vdsm][openstacknet] Migration fails for vNIC using 
> OVS + security groups
> sla 994712  Remove underscores for pre-defined policy names
> sla 1038616 [RFE] Support for hosted engine
> storage 888711  PosixFS issues
> storage 961532  [RFE] Handle iSCSI lun resize
> storage 1009610 [RFE] Provide clear warning when SPM become 
> inaccessible and needs fencing
> storage 1034081 Misleading error message when adding an existing 
> Storage Domain
> storage 1038053 [RFE] Allow domain of multiple types in a single Data 
> Center
> storage 1045842 After deleting image failed ui display message: Disk 
> gluster-test was successfully removed from...
> ux

Re: [Users] VirtIO disk latency

2014-01-09 Thread Sander Grendelman
On Thu, Jan 9, 2014 at 10:16 AM, Markus Stockhausen
 wrote:
...
> - access NFS inside the hypervisor - 12.000 I/Os per second - or 83us latency
> - access DISK inside ESX VM that resides on NFS - 8000 I/Os per second - or 
> 125us latency
> - access DISK inside OVirt VM that resides on NFS - 2200 I/Os per second - or 
> 450us latency

I can do a bit of testing on local disk and FC (with some extra setup
maybe also NFS).
What is your exact testing method? ( commands, file sizes, sofware
versions, mount options etc.)
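As a quick cross-check of those numbers: at queue depth 1, IOPS and mean per-I/O latency are reciprocals, so each figure can be derived from the other (the 2200 IOPS case works out to ~455us, close to the quoted 450us):

```python
# Convert IOPS to mean per-I/O latency in microseconds. Only valid at
# queue depth 1, where one I/O must complete before the next is issued.
def latency_us(iops):
    return 1_000_000 / iops

for iops in (12000, 8000, 2200):
    print(f"{iops} IOPS -> {latency_us(iops):.0f} us")
```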
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread Sander Grendelman
On Thu, Jan 9, 2014 at 9:39 AM, Markus Stockhausen
 wrote:
>> Von: squadra [squa...@gmail.com]
>> Gesendet: Donnerstag, 9. Januar 2014 09:30
>> An: Markus Stockhausen
>> Cc: Karli Sjöberg; users@ovirt.org
>> Betreff: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
>>
>> try it, I bet that you will get better latency results with a properly 
>> configured iscsitarget/initiator.
>
> I guess you did not take time to read the whole post. The latency I speak
> of comes on top of the NFS latency. So my setup has
>
> - 83us latency per I/O in the hypervisor on a NFS share
> - 450us latency per I/O in the VM on a disk hosted on the same NFS share
>
> If iSCSI could reduce latency to 40us instead of 83us in our most wishful
> dreams, the QEMU penalty hits too hard.

There are some interesting tests here:
http://www.linux-kvm.org/page/Virtio/Block/Latency
Results seem to depend a lot on the guest OS IO stack/drivers (I see
you use win2k3?).
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt DR setup

2014-01-09 Thread Sander Grendelman
No experience there but I can make some suggestions.

The upcoming self-hosted engine feature I mentioned before should have
built-in HA for these kind of situations.
http://www.ovirt.org/Features/Self_Hosted_Engine

Another option: AFAIK the oVirt engine configuration is all in the database.
Configure the engine with an external DB and use postgres replication to keep
a warm standby DB at your secondary location; combine this with some kind
of (rsync?) replication of the OS/configuration.
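For the database half, a warm standby can be sketched with PostgreSQL streaming replication (9.x-era syntax, matching what oVirt ran on at the time; the host name, user and trigger path below are placeholders, not oVirt defaults):

```
# postgresql.conf on the primary engine DB host (enable WAL streaming)
wal_level = hot_standby
max_wal_senders = 3

# recovery.conf on the standby at the secondary site
standby_mode = 'on'
primary_conninfo = 'host=engine-db-primary port=5432 user=replicator'
trigger_file = '/var/lib/pgsql/failover.trigger'
```

Creating the trigger file promotes the standby, after which a restored/replicated engine installation at site 2 can point at it.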

On Thu, Jan 9, 2014 at 9:15 AM, Hans Emmanuel  wrote:
> Thanks for your reply .
>
> Yes we can replicate storage data with Gluster geo replication . Then what
> should be strategy for replicating Ovirt Engine confs and db ?
>
>
> On Thu, Jan 9, 2014 at 1:34 PM, Sander Grendelman 
> wrote:
>>
>> Depending on the bandwidth/latency between the locations you could go for
>> replicated GlusterFS storage and make sure that data is replicated
>> across
>> both sites. There is a self-hosted engine feature coming up, I don't know
>> how
>> that will fit into replication.
>>
>>
>> On Thu, Jan 9, 2014 at 6:04 AM, Hans Emmanuel 
>> wrote:
>> > Could any one please give me some suggestions ?
>> >
>> >
>> > On Wed, Jan 8, 2014 at 11:39 AM, Hans Emmanuel 
>> > wrote:
>> >>
>> >> Hi all ,
>> >>
>> >> I would like to know about the possibility of setup Disaster Recovery
>> >> Site
>> >> (DR) for an Ovirt cluster . i.e if site 1 goes down I need to trigger
>> >> the
>> >> site 2 to come in to action with the minimal down time .
>> >>
>> >> I am open to use NFS shared storage or local storage for data storage
>> >> domain . I know we need to replicate the storage domain and Ovirt confs
>> >> and
>> >> DB across the sites  , but couldn't find any doc for the same , isn't
>> >> that
>> >> possible with Ovirt ?
>> >>
>> >>  Hans Emmanuel
>> >>
>> >>
>> >> NOthing to FEAR but something to FEEL..
>> >>
>> >
>> >
>> >
>> > --
>> > Hans Emmanuel
>> >
>> > NOthing to FEAR but something to FEEL..
>> >
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>
>
>
>
> --
> Hans Emmanuel
>
> NOthing to FEAR but something to FEEL..
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt DR setup

2014-01-09 Thread Sander Grendelman
Depending on the bandwidth/latency between the locations you could go for
replicated GlusterFS storage and make sure that data is replicated across
both sites. There is a self-hosted engine feature coming up; I don't know how
that will fit into replication.


On Thu, Jan 9, 2014 at 6:04 AM, Hans Emmanuel  wrote:
> Could any one please give me some suggestions ?
>
>
> On Wed, Jan 8, 2014 at 11:39 AM, Hans Emmanuel 
> wrote:
>>
>> Hi all ,
>>
>> I would like to know about the possibility of setup Disaster Recovery Site
>> (DR) for an Ovirt cluster . i.e if site 1 goes down I need to trigger the
>> site 2 to come in to action with the minimal down time .
>>
>> I am open to use NFS shared storage or local storage for data storage
>> domain . I know we need to replicate the storage domain and Ovirt confs and
>> DB across the sites  , but couldn't find any doc for the same , isn't that
>> possible with Ovirt ?
>>
>>  Hans Emmanuel
>>
>>
>> NOthing to FEAR but something to FEEL..
>>
>
>
>
> --
> Hans Emmanuel
>
> NOthing to FEAR but something to FEEL..
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SSD Caching

2014-01-08 Thread Sander Grendelman
On Thu, Jan 9, 2014 at 8:50 AM, Amedeo Salvati  wrote:
>
> you can use flashcache under centos6, it's stable and gives you a boost for
> read/write, but I never used it with gluster:
>
> https://github.com/facebook/flashcache/
>
> under fedora you have more choice: flashcache, bcache, dm-cache

dm-cache (and probably bcache) will be in RHEL7
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [QE] 3.4.0 Release tracker

2014-01-08 Thread Sander Grendelman
On Wed, Jan 8, 2014 at 10:29 AM, Sandro Bonazzola  wrote:
> Il 08/01/2014 10:23, Sander Grendelman ha scritto:
>> Now that BZ#1038525 (live snapshot merge for backup api) is closed as a
>> duplicate of BZ#647386 ( You are not authorized to access bug #647386 )
>>
>> Shouldn't BZ#647386 be targeted for 3.4? Or for a future version?
>>
>
> BZ#1038525 is open and targeted to 3.5.0
That is correct, I didn't scroll down far enough; my apologies.
Any idea why BZ#647386 is not visible to "regular" rhbz users?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [QE] 3.4.0 Release tracker

2014-01-08 Thread Sander Grendelman
Now that BZ#1038525 (live snapshot merge for backup api) is closed as a
duplicate of BZ#647386 ( You are not authorized to access bug #647386 )

Shouldn't BZ#647386 be targeted for 3.4? Or for a future version?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] tuned profile for Centos hosts -- new Bugzilla or Regression

2014-01-08 Thread Sander Grendelman
I did not run into this problem at all (neither on my cos6.4 nor on
my 6.5 hosts).
The "virtual-host" profile was chosen automatically; installation of
vdsm was literally as simple as adding the following lines in the
%post section of my kickstart file:

rpm -ivh http://ovirt.org/releases/ovirt-release-el.noarch.rpm
rpm -ivh http://mirror.1000mbps.com/fedora-epel/6/i386/epel-release-6-8.noarch.rpm

yum -y update epel-release
yum -y install vdsm



On Mon, Jan 6, 2014 at 11:09 PM, Ted Miller  wrote:
> I posted a script (a while back) to get oVirt running on Centos hosts.
>
> One of the items in it has to do with what "tuned" profile to use.  At the
> time I first ran into it, this was a fatal error.  It is now just a warning,
> so it does not prevent installing a host.  But, as a warning, a lot of
> people are probably missing it.
>
> When using Centos 6 as the host OS, the script tries to install a
> "rhs-virtualization" profile.  That profile is not included in Centos.  I
> substituted the "virtual-host" profile.
>
> I believe that this may be a regression as a result of Bugzilla 987293,
> where "rhs-virtualization" was substituted for "virtual-host" for RHEV +
> RHS.  I am guessing that whatever is used as a switch to determine RHEV +
> RHS is also shoving Centos into that same path, which is not appropriate.
>
> My suggestion would be to write the script so that it uses
> "rhs-virtualization" when present, and if it is not present, then it falls
> back to "virtual-host".  (I don't know what (if any) differences there are
> between the two profiles.)
>
> Should I open a new bug, make a comment on 987293, or take some other path?
>
> Ted Miller
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] R: tuned profile for Centos hosts -- new Bugzilla orRegression

2014-01-07 Thread Sander Grendelman
On Wed, Jan 8, 2014 at 7:35 AM, Doron Fediuck  wrote:
>
> Hi guys,
> As far as I know, tuned is not available in centos. So we first need to 
> verify tuned exists.
> Once available, we'll be able to handle
> the relevant profiles for hypervisors.
>
> Can anyone confirm tuned availability on centos?

Yes, I can confirm that:

[root@gnkvm11 ~]# yum info tuned
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
...
Installed Packages
Name: tuned
Arch: noarch
Version : 0.2.19
Release : 13.el6
Size: 222 k
Repo: installed
From repo   : base
Summary : A dynamic adaptive system tuning daemon
URL : https://fedorahosted.org/tuned/
License : GPLv2+
Description : The tuned package contains a daemon that tunes system
            : settings dynamically. It does so by monitoring the usage
            : of several system components periodically. Based on that
            : information components will then be put into lower or
            : higher power saving modes to adapt to the current usage.
            : Currently only ethernet network and ATA harddisk devices
            : are implemented.

[root@gnkvm11 ~]# ls /etc/tune-profiles/
active-profile  desktop-powersave   functions            laptop-battery-powersave
sap             spindown-disk       virtual-guest        default
enterprise-storage  laptop-ac-powersave  latency-performance  server-powersave
throughput-performance  virtual-host
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to delete a snapshot

2014-01-06 Thread Sander Grendelman
Which minor version of oVirt was this?
And which version of vdsm?
I ran into the same problem with 3.3.0; snapshot deletion was fixed in 3.3.1.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Broken Snapshots

2014-01-02 Thread Sander Grendelman
I've seen the same problems on 3.3.0 after trying to delete a snapshot.

Snapshot removal was fixed in 3.3.1, which won't help you with your
current broken snapshots.



On Thu, Jan 2, 2014 at 11:54 AM, Maurice James wrote:

> See attachment
>
>
>
> *From:* Meital Bourvine [mailto:mbour...@redhat.com]
> *Sent:* Thursday, January 02, 2014 1:29 AM
> *To:* Maurice James
> *Cc:* users@ovirt.org
> *Subject:* Re: [Users] Broken Snapshots
>
>
>
> What is the error that you're getting?
>
> What do you mean by broken snapshot? How did it happen?
>
> Can you please attach some logs?
>
>
> --
>
> *From: *"Maurice James" 
> *To: *users@ovirt.org
> *Sent: *Thursday, January 2, 2014 1:16:19 AM
> *Subject: *[Users] Broken Snapshots
>
>
>
> How do I get rid of broken snapshots? I cannot delete them from the web
> gui or the shell
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Create VLAN for WAN traffic

2013-12-31 Thread Sander Grendelman
On Mon, Dec 30, 2013 at 7:57 PM, Neil Schulz  wrote:

> I'm not very knowledgeable in VLANs. Sorry for the lack of knowledge in
> advance.
>
> Is it possible to create a VLAN for WAN traffic, to separate it from the
> internal network? I'd imagine so. It was a automated and simple process
> when use XenServer. I'm trying to switch from Xen to oVirt and when trying
> to recreate this, I'm unable to ping out from the VM.
>
> This leads me to believe the VLAN was created incorrectly. I created
> ifcfg-br1 on the host and through the engine, created the logical network
> with VLAN tagging 20. Does the interface, ifcfg-br1, require a public IP,
> any IP address, no ip address? (Sorry, never created a VLAN for WAN traffic
> as it was automated in XenServer)
>

Assigning an IP-address to a VM network in oVirt is _not_ mandatory,
it is only needed for "management" networks (ovirtmgmt, display, storage)
where the _hosts_ need connectivity to resources on that network.

Is this a tagged or an untagged vlan? (an untagged vlan means only one vlan
per physical interface and needs no extra configuration on the OS side)
Which other (physical) interfaces are in your "br1" interface?
Are the (tagged) vlans assigned to this interface?

A vlan interface on linux looks like this: "eth0.20", where eth0 is the
"physical" interface on which tagged vlans are configured and 20 is the
tag of one of those vlans.

In the case of an oVirt VM network the physical interface is bridged (and
sometimes bonded)
so the interface configuration looks like this: "br1.20".

The "normal" route for configuring a new network in oVirt is to configure
it in the "networks" tab (as a VM network) and then assign this network
to physical or bonded interfaces on all the hosts in your cluster.

> From there I have the VM installed and configured with a public IP
> address, however, only get Destination Host Unreachable, meaning it has no
> route out.
>
> I am banging my head on the desk trying to figure this out. Can anyone
> give me any assistance?
>
> Thank you,
> Neil
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] errors after node update

2013-12-30 Thread Sander Grendelman
Sounds familiar, take a look at this thread:
http://lists.ovirt.org/pipermail/users/2013-December/019022.html
In my case all errors were from thin provisioned disks and disks with
snapshots.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ISO datastore, permission denied

2013-12-23 Thread Sander Grendelman
On Mon, Dec 23, 2013 at 4:56 PM, Blaster  wrote:

>  I have an ISO datastore.  In that datastore I'm using symlinks to point
> to my ISOs on an NFS share.  All was working great.
>
> Along comes Black Friday and a shiny new 3TB hard drive.  Out goes the 5
> yo old 500gb drive with EXT 4 and in comes new 3TB drive with BTRFS.
>
> I installed new drive, shutdown VMs and use tar c | tar x to move data
> over.   unmount old, remount new.  Fire up VMs, all us well.  Create new
> VM, attach boot ISO and I get:
>  VM Gremlin is down. Exit message: internal error process exited while
> connecting to monitor: qemu-system-x86_64: -drive
> file=/rhev/data-center/mnt/_disk01_iso/4c70693a-d228-453e-b40d-93a214ec524b/images/----/Fedora-Live-Desktop-x86_64-20-1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=:
> could not open disk image
> /rhev/data-center/mnt/_disk01_iso/4c70693a-d228-453e-b40d-93a214ec524b/images/----/Fedora-Live-Desktop-x86_64-20-1.iso:
> Permission denied .
>
>  huh?  I search archives and see others have had this error in the
> past...Follow the suggestions...Run the nfstest python script, passes,
> check getsebool shows virt_use_nfs --> on.
>
> Also went through: http://www.ovirt.org/Troubleshooting_NFS_Storage_Issues
>
> My NFS server is Solaris 11.1, ZFS storage.
>
I'm a bit confused now, is the NFS server linux or Solaris?  ZFS or BTRFS?

>
> If I copy the ISO directly to the directory it works fine.   What am I
> missing?
>

Maybe selinux labeling?
What happens if you (temporarily!) set selinux to permissive with
"setenforce 0"?

Which user owns the file? What are the permissions on the file? And on the
rest of the iso store?
Any logging on the NFS server? Can you mount the ISO store from a different
server?
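When chasing a qemu "Permission denied" on NFS, one early check is whether the file's mode bits even allow the reading uid (with root squash, root on the client is mapped away, so the "other" bits often decide). A rough stand-alone sketch of that owner/group/other walk; the uid/gid 36 in the demo is the usual vdsm/kvm id on RHEL-family hosts, an assumption here, and this checks mode bits only, not SELinux or server-side squashing:

```python
import os
import stat
import tempfile

def readable_by(path, uid, gid):
    """Walk owner/group/other read bits for a numeric uid/gid.
    Mode bits only: ignores ACLs, SELinux and NFS id mapping."""
    st = os.stat(path)
    if st.st_uid == uid:
        return bool(st.st_mode & stat.S_IRUSR)
    if st.st_gid == gid:
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)

# Demo on a throwaway file: world-readable vs owner-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o444)
print(readable_by(path, 36, 36))  # uid/gid 36 assumed for vdsm/kvm
os.chmod(path, 0o600)
print(readable_by(path, 36, 36))
os.unlink(path)
```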
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Excessive syslog logging from vdsm/sampling.py

2013-12-23 Thread Sander Grendelman
The "not assigned to domain" errors appear to be for thin provisioned disks
and for snapshots (which are also shown as thin provisioned by the engine).

One of the errors has gone away once I removed all snapshots from the VM.

Three recurring errors remain:

[root@gnkvm02 ~]# tail -n 1000 /var/log/messages | awk '/not assigned to
domain/ {print $8}' | sort | uniq -c
333
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/02944008-af6a-4901-b889-1f5fbd02bbd1:
332
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/a2da1070-8de7-4e1f-b736-4c88d089a5cc:
335
vmId=`1075a178-a4c6-4a8f-a199-56401cd0652f`::b526b148-b810-47c6-9bdd-4fd8d8226855/341b32d6-4276-454d-b3f0-789b705c99cc:
[root@gnkvm02 ~]#

The first two are for the two thin provisioned disks of a VM with three
disks.
The last one is for a VM with one preallocated disk which has a snapshot.

The syslog is spammed every two seconds by these three messages.

Dec 23 16:16:44 gnkvm02 vdsm vm.Vm ERROR
vmId=`1075a178-a4c6-4a8f-a199-56401cd0652f`::b526b148-b810-47c6-9bdd-4fd8d8226855/341b32d6-4276-454d-b3f0-789b705c99cc:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/47015859-1995-4ce2-880c-a3c7068a67dd/341b32d6-4276-454d-b3f0-789b705c99cc
not assigned to domain
Dec 23 16:16:46 gnkvm02 vdsm vm.Vm ERROR
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/a2da1070-8de7-4e1f-b736-4c88d089a5cc:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/c928f2c7-9ba0-496f-8be9-b8804fdc1a6d/a2da1070-8de7-4e1f-b736-4c88d089a5cc
not assigned to domain
Dec 23 16:16:46 gnkvm02 vdsm vm.Vm ERROR
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/02944008-af6a-4901-b889-1f5fbd02bbd1:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/bcbe3bc8-895d-40be-898b-9e425eefe167/02944008-af6a-4901-b889-1f5fbd02bbd1
not assigned to domain
Dec 23 16:16:46 gnkvm02 vdsm vm.Vm ERROR
vmId=`1075a178-a4c6-4a8f-a199-56401cd0652f`::b526b148-b810-47c6-9bdd-4fd8d8226855/341b32d6-4276-454d-b3f0-789b705c99cc:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/47015859-1995-4ce2-880c-a3c7068a67dd/341b32d6-4276-454d-b3f0-789b705c99cc
not assigned to domain
Dec 23 16:16:48 gnkvm02 vdsm vm.Vm ERROR
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/a2da1070-8de7-4e1f-b736-4c88d089a5cc:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/c928f2c7-9ba0-496f-8be9-b8804fdc1a6d/a2da1070-8de7-4e1f-b736-4c88d089a5cc
not assigned to domain
Dec 23 16:16:48 gnkvm02 vdsm vm.Vm ERROR
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/02944008-af6a-4901-b889-1f5fbd02bbd1:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/bcbe3bc8-895d-40be-898b-9e425eefe167/02944008-af6a-4901-b889-1f5fbd02bbd1
not assigned to domain
Dec 23 16:16:48 gnkvm02 vdsm vm.Vm ERROR
vmId=`1075a178-a4c6-4a8f-a199-56401cd0652f`::b526b148-b810-47c6-9bdd-4fd8d8226855/341b32d6-4276-454d-b3f0-789b705c99cc:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/47015859-1995-4ce2-880c-a3c7068a67dd/341b32d6-4276-454d-b3f0-789b705c99cc
not assigned to domain
Dec 23 16:16:50 gnkvm02 vdsm vm.Vm ERROR
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/a2da1070-8de7-4e1f-b736-4c88d089a5cc:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/c928f2c7-9ba0-496f-8be9-b8804fdc1a6d/a2da1070-8de7-4e1f-b736-4c88d089a5cc
not assigned to domain
Dec 23 16:16:50 gnkvm02 vdsm vm.Vm ERROR
vmId=`007ca72e-d0d0-4477-87d4-fb60328cd882`::b526b148-b810-47c6-9bdd-4fd8d8226855/02944008-af6a-4901-b889-1f5fbd02bbd1:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/bcbe3bc8-895d-40be-898b-9e425eefe167/02944008-af6a-4901-b889-1f5fbd02bbd1
not assigned to domain
Dec 23 16:16:50 gnkvm02 vdsm vm.Vm ERROR
vmId=`1075a178-a4c6-4a8f-a199-56401cd0652f`::b526b148-b810-47c6-9bdd-4fd8d8226855/341b32d6-4276-454d-b3f0-789b705c99cc:
invalid argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/47015859-1995-4ce2-880c-a3c7068a67dd/341b32d6-4276-454d-b3f0-789b705c99cc
not assigned to domain
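For the record, the awk/sort/uniq one-liner above can also be done in Python, which is handy if the counting needs to feed a monitoring script; the log lines below are shortened, illustrative stand-ins for the real messages:

```python
# Python version of: awk '/not assigned to domain/ {print $8}' | sort | uniq -c
# Field 8 of the syslog line is the vmId=`...`::domain/volume token.
from collections import Counter

log_lines = [
    "Dec 23 16:16:44 gnkvm02 vdsm vm.Vm ERROR vmId=`aaa`::dom/vol1: invalid path x not assigned to domain",
    "Dec 23 16:16:46 gnkvm02 vdsm vm.Vm ERROR vmId=`bbb`::dom/vol2: invalid path y not assigned to domain",
    "Dec 23 16:16:46 gnkvm02 vdsm vm.Vm ERROR vmId=`aaa`::dom/vol1: invalid path x not assigned to domain",
    "Dec 23 16:16:48 gnkvm02 vdsm vm.Vm INFO unrelated message",
]

counts = Counter(
    line.split()[7]                  # field 8: the vmId/volume token
    for line in log_lines
    if "not assigned to domain" in line
)

for token, n in counts.most_common():
    print(n, token)
```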


On Fri, Dec 20, 2013 at 1:53 PM, Sander Grendelman wrote:

> Hi Nir,
>
> I've applied the patches on both nodes, attached are syslogs and vdsm
> logs _after_ the patches were applied.
> It seems like the amount of logging is decreasing slightly.
> The disk statistics do not seem to get updated however.
> Unfortunately I don&

Re: [Users] Cannot configure storage

2013-12-23 Thread Sander Grendelman
On Mon, Dec 23, 2013 at 8:23 AM, Nauman Abbas  wrote:
> Hello all
>
> I'm booting oVirt node from USB and want to use my hard drive for storing
> VMs and stuff. When I have booted the node from USB I try to mount the hard
> drive into a folder in the file-system but I am getting the following error.
>
> mount: /dev/sda1 is already mounted or /mnt busy
>
> Anyone who knows about this error?

Could it be that /dev/sda is your USB drive?
What is the output of
fdisk -l
blockdev --report
?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Agents for Windows

2013-12-19 Thread Sander Grendelman
On Thu, Dec 19, 2013 at 11:19 AM, Lindsay Mathieson
 wrote:
> On Thu, 19 Dec 2013 03:37:20 AM Itamar Heim wrote:
>> >   Windows Guest Agent Fully Supported
>> >
>> > The Windows guest agent is now fully supported and delivered with its
>> > own installer in the Supplementary channel together with virtio-win
>> > drivers.
>
> Is there somewhere I can download this windows guest agent installer from?

If you've got a Red Hat subscription it's in the virtio-win package in
the supplementary channel.
It looks like there's no SRPM available for that one.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Excessive syslog logging from vdsm/sampling.py

2013-12-18 Thread Sander Grendelman
On Wed, Dec 18, 2013 at 4:10 PM, Nir Soffer  wrote:
>
> Well in node1.log, we have 7687 errors:
> $ grep 'has no attribute' vdsm-node1.log | wc -l
> 7687
>
> But no such errors in vdsm-node2.log:
> $ grep 'has no attribute' vdsm-node2.log | wc -l
> 0
>
> Can you explain what is the difference between node1.log and node2.log?
Two different ovirt nodes (node 1 = gnkvm01, node 2 = gnkvm02).

Configuration should be identical.
Node 1 was put in maintenance before vdsm restart.
Node 2 vdsmd was restarted without maintenance (with one running testvm).

>
> Can you send before and after log files, or point to the time in the log 
> where you started the version with the patch?

On node1 vdsmd was restarted at 13:33
On node2 vdsmd was restarted at 13:49

I've don't have vdsm logfiles from before the problem was observed,
only syslog files.


Re: [Users] Python SDK list vms slow, excessive cpu usage

2013-12-18 Thread Sander Grendelman
On Wed, Dec 18, 2013 at 9:13 AM, Sven Kieske  wrote:
> Hi,
>
> thanks to Sander for disclosing this issue.
You're welcome :)
>
> We were also looking into using python to query the vms.
> But this looks not very promising.
>
> We have the need to query for much more vms, and we can't
> wait for just the possibility of a fix in 3.4.
>
> Maybe this can be circumvented by using the REST-API?
> I didn't test it for performance yet, but it seems to
> be reasonable fast without high load.

The python SDK also uses the REST-API, the VM list is just one (fast) fetch of

https://my.engine.url/api/vms

You could parse the XML yourself, but using the SDK is a lot more
generic and easier to maintain.
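For the curious, hand-parsing that collection is only a few stdlib lines. The sample payload below is an approximation of the /api/vms layout (a `vms` root with `vm` children carrying an id attribute and a `name` child), so verify it against a real response from your engine:

```python
import xml.etree.ElementTree as ET

# Approximate shape of GET /api/vms -- check against a real engine response.
SAMPLE = """\
<vms>
  <vm id="aaa-111"><name>web01</name></vm>
  <vm id="bbb-222"><name>db01</name></vm>
</vms>"""


def vm_names(xml_text):
    """Extract (id, name) pairs from a vms collection document."""
    root = ET.fromstring(xml_text)
    return [(vm.get('id'), vm.findtext('name')) for vm in root.findall('vm')]


print(vm_names(SAMPLE))  # -> [('aaa-111', 'web01'), ('bbb-222', 'db01')]
```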


Re: [Users] How to pxeboot ovrit node into memory automatically?

2013-12-18 Thread Sander Grendelman
The device might be grabbed/locked by vdsm/sanlock.

On Wed, Dec 18, 2013 at 9:16 AM, Fabian Deutsch  wrote:
> On Tuesday 17.12.2013 at 14:24 -0800, David Li wrote:
>> Hi Fabiand,
>>
>>
>> I have a iSCSI partition on my stateless node. But the problem is I
>> can't format the partition. The same partition can be formatted and
>> mounted on a regular machine without problems.
>>
>>
>> [root@localhost ~]# mkfs.ext3 /dev/sda
>> mke2fs 1.42.7 (21-Jan-2013)
>> /dev/sda is entire device, not just one partition!
>> Proceed anyway? (y,n) y
>>
>> /dev/sda is apparently in use by the system; will not make a
>> filesystem here!
>>
>>
>>
>>
>> The error message doesn't seem to make sense as the iSCSI partition
>> isn't used by any other machine.
>>
>>
>> Any idea why this isn't working in a stateless node?
>
> This doesn't seem to be a stateless problem.
>
> Please run findmnt /dev/sda to see if it's mounted.
>
> Please also consider to ask your questions on users@ovirt.org or
> node-de...@ovirt.org - there are more people there to answer.
>
> node-de...@ovirt.org is especially good for Node stuff.
>
> - fabian
>
>>
>>
>> David
>>
>>
>> __
>> From: Fabian Deutsch 
>> To: David Li 
>> Cc: "users@ovirt.org" 
>> Sent: Thursday, December 12, 2013 7:40 AM
>> Subject: Re: [Users] How to pxeboot ovrit node into memory
>> automatically?
>>
>>
>> On Wednesday 11.12.2013 at 17:32 -0800, David Li wrote:
>> > Hi Fabian,
>> >
>> > I booted the new node stateless. But how can connect to it
>> from ovirt
>> > engine cli? The cli asks for three things: URL, user name
>> and
>> > password. I am not sure how to put them on the kernel boot
>> option
>> > line.
>> >
>> > David
>> >
>> >
>> >
>>
>> Hey David,
>>
>> please take a look here to find the correct cmdline:
>> 
>> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.0/html/Hypervisor_Deployment_Guide/sect-Deployment_Guide-Installing_Red_Hat_Enterprise_Virtualization_Hypervisors-RHEV_Hypervisor_Kernel_Parameters_and_Automated_Installation.html
>>
>> - fabian
>>
>> >
>> >
>> 
>> __
>> > From: Fabian Deutsch ;
>> > To: David Li ;
>> > Cc: users@ovirt.org ;
>> > Subject: Re: [Users] How to pxeboot ovrit node into memory
>> > automatically?
>> > Sent: Wed, Dec 11, 2013 8:00:40 AM
>> >
>> >
>> > Am Dienstag, den 10.12.2013, 16:00 -0800 schrieb David Li:
>> > > Hi Fabian,
>> > >
>> > >
>> > > Does the stateless ovirt node have the VDSM built-in?
>> >
>> > Hey David,
>> >
>> > stateless is a feature or a special mode of oVirt Node which
>> is also
>> > available on the "oVirt Node for oVirt Engine" image [1].
>> > But please note that you will have to approve the Node in
>> Engine after
>> > each reboot if you use the stateless feature. This bug [2]
>> might also
>> > tackle the problem of a stateless Node.
>> >
>> > - fabian
>> >
>> > --
>> > [1]
>> >
>> 
>> http://resources.ovirt.org/releases/3.3/iso/ovirt-node-iso-3.0.3-1.1.vdsm.fc19.iso
>> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=875088
>> >
>> > >
>> > > David
>> > >
>> > >
>> > >
>> >
>> __
>> > >From: Fabian Deutsch 
>> > >To: David Li 
>> > >Cc: "users@ovirt.org" 
>> > >Sent: Thursday, December 5, 2013 11:47 PM
>> > >Subject: Re: [Users] How to pxeboot ovrit node into
>> memory
>> > >automatically?
>> > >
>> > >
>> > >Am Donnerstag, den 05.12.2013, 16:30 -0800 schrieb
>> David Li:
>> > >> Hi Fabian,
>> > >>
>> > >> I think the bad route is a link local route
>> 169.254.0.0/16.
>> > >I am not quite sure if it's a real bug but in my
>> case I have
>> > >to remove this route.
>> > >> Also I am using ramdisk based node without a
>> local disk.
>> > Can
>> > >anything be persisted?
>> > >
>> > >Hey David,
>> > >
>> > >when you refer to stateless mode, then no, nothing
>> can be
>> > >persisted.
>> > >Regarding the bug - Please provide the whole
>> cmdline you are
>> > >

Re: [Users] oVirt run once Linux boot options

2013-12-18 Thread Sander Grendelman
I don't know if that's a (current) feature.
The host would have to fetch the kernel and initrd from a remote server
and then boot the VM with those files.

It seems like the current feature can use files on an ISO (e.g.
iso:///images/pxeboot/initrd.img)
and maybe local (on-node) files?

We use a separate "linux install" network with a DHCP/TFTP/PXE server
for kickstart deployments (you could use Cobbler for that with Spacewalk).
After installation we change the guest network to the final config.
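For a guest that should fetch its kernel and initrd itself, the run-once start can also be driven over REST. A hedged sketch of building that request body with the stdlib; the action/vm/os/{kernel,initrd,cmdline} layout matches the oVirt 3.x API as I remember it, and all URLs are placeholders, so check /api?rsdl on your engine first:

```python
import xml.etree.ElementTree as ET


def run_once_body(kernel, initrd, cmdline):
    """Build the XML body for POST /api/vms/<id>/start (element layout
    assumed, not verified against every engine version)."""
    action = ET.Element('action')
    os_el = ET.SubElement(ET.SubElement(action, 'vm'), 'os')
    for tag, value in (('kernel', kernel),
                       ('initrd', initrd),
                       ('cmdline', cmdline)):
        ET.SubElement(os_el, tag).text = value
    return ET.tostring(action, encoding='unicode')


# Placeholder install-server URLs -- substitute your own.
body = run_once_body('http://installserver.example.com/vmlinuz',
                     'http://installserver.example.com/initrd.img',
                     'ks=http://installserver.example.com/ks.cfg')
```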

On Wed, Dec 18, 2013 at 3:41 PM, Jakub Bittner  wrote:
> On 18.12.2013 15:24, Fabian Deutsch wrote:
>
>> On Wednesday 18.12.2013 at 14:42 +0100, Jakub Bittner wrote:
>>>
>>> Hello,
>>>
>>> I am wondering howto use this feature. Is it possible to install server
>>> from this dialog? I specified kernel and initrd path as http url to our
>>> spacewalk server and to kernel parameters I added
>>> ks=http://spacewalk.server.url/ks.cfg, but it did not start.
>>>
>>> Is it possible to connect oVirt with spacewalk 2.0 (redhat satellite
>>> 5.6) to provision servers?
>>
>> Hey Jakub,
>>
>> I guess that oVirt Engine could be deployed (partly) using a kickstart -
>> but maybe others can correct me here.
>> As for oVirt Node - you can't use a kickstart to roll it out, but you
>> can use a wide range of kernel arguments to deploy a Node onto your
>> machines.
>> Take a look here for a list of kernel argument which can be used to
>> automatically install Node:
>>
>> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.0/html/Hypervisor_Deployment_Guide/sect-Deployment_Guide-Installing_Red_Hat_Enterprise_Virtualization_Hypervisors-RHEV_Hypervisor_Kernel_Parameters_and_Automated_Installation.html
>>
>> - fabian
>
> I am sorry, but I mean server as oVirt VM. Virtualized guest. Not oVirt
> nodes. ;-)
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Solaris 11.1 ISO panic on boot

2013-12-18 Thread Sander Grendelman
On Wed, Dec 18, 2013 at 1:25 PM, René Koch (ovido)  wrote:
> On Tue, 2013-12-17 at 23:20 -0600, Blaster wrote:
>> On 12/17/2013 9:18 AM, René Koch (ovido) wrote:
>> Well, I'm running 3.3.1 but I had the same problem with 3.3.  Wonder if
>> it's my Athlon X4 640 CPU?
>> Any known issues with AMD CPUs?
>

...

> On my workstation with old AMD X4 605e CPU I also get a bunch of panic
> messages when trying to boot from the Solaris ISO. Tried it with
> virt-manager with OS family Solaris under Fedora 19.
>
> It seems that Solaris has issues with KVM and AMD CPUs.
> Sadly I don't have enough time to investigate this further :(
>

I can confirm that the Solaris 11 ISO also fails (panics) on my Opteron
G3 cluster (Opteron 8354 CPUs).
Tried with and without:
- nested virt extensions
- Realtek or e1000 NIC
- memory balloon

I also tried the workaround as mentioned in
http://docs.oracle.com/cd/E27300_01/E27307/html/vmrns-bugs.html#idp1175312
no luck.

Looks like a kvm-on-amd bug.


Re: [Users] Excessive syslog logging from vdsm/sampling.py

2013-12-18 Thread Sander Grendelman
- vdsm-4.13.0-11.el6.x86_64
- FC storage domain
- no floppies attached to VMs as far as I know (any way to test this?)

On Wed, Dec 18, 2013 at 10:21 AM, Itamar Heim  wrote:
> On 12/17/2013 09:04 AM, Sander Grendelman wrote:
>>
>> Any ideas on this one?
>>
>> On Mon, Dec 16, 2013 at 11:31 AM, Sander Grendelman
>>  wrote:
>>>
>>> The syslog on my ovirt nodes is constantly logging errors from
>>> sampling.py:
>>>
>>> Dec 16 11:22:47 gnkvm01 vdsm vm.Vm ERROR
>>> vmId=`66aa-2299-4d93-931d-b7a2e421b7e9`::Stats function failed:
>>> #012Traceback (most
>>> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
>>> in collect#012statsFunction()#012  File
>>> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
>>> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
>>> line 509, in _highWrite#012if not vmDrive.blockDev or
>>> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
>>> attribute 'format'
>>> Dec 16 11:22:47 gnkvm01 vdsm vm.Vm ERROR
>>> vmId=`22654002-cbef-454d-b001-7823da5f592f`::Stats function failed:
>>> #012Traceback (most
>>> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
>>> in collect#012statsFunction()#012  File
>>> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
>>> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
>>> line 509, in _highWrite#012if not vmDrive.blockDev or
>>> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
>>> attribute 'format'
>>> Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
>>> vmId=`d3dae626-279b-4bcf-afc4-7a3c198a3035`::Stats function failed:
>>> #012Traceback (most
>>> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
>>> in collect#012statsFunction()#012  File
>>> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
>>> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
>>> line 513, in _highWrite#012self._vm._dom.blockInfo(vmDrive.path,
>>> 0)#012  File "/usr/share/vdsm/vm.py", line 835, in f#012ret =
>>> attr(*args, **kwargs)#012  File
>>> "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
>>> 76, in wrapper#012ret = f(*args, **kwargs)#012  File
>>> "/usr/lib64/python2.6/site-packages/libvirt.py", line 1797, in
>>> blockInfo#012if ret is None: raise libvirtError
>>> ('virDomainGetBlockInfo() failed', dom=self)#012libvirtError: invalid
>>> argument: invalid path
>>>
>>> /rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/347f2238-c018-4370-94df-bd1e81f8b854/9e5dad95-73ea-4e5c-aa13-522efd9bad11
>>> not assigned to domain
>>> Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
>>> vmId=`c6f56584-1ccd-4c02-be94-897a4e747d34`::Stats function failed:
>>> #012Traceback (most
>>> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
>>> in collect#012statsFunction()#012  File
>>> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
>>> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
>>> line 509, in _highWrite#012if not vmDrive.blockDev or
>>> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
>>> attribute 'format'
>>> Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
>>> vmId=`0ae3a3d7-ead9-4c0d-9df0-3901b6e6859c`::Stats function failed:
>>> #012Traceback (most
>>> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
>>> in collect#012statsFunction()#012  File
>>> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
>>> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
>>> line 509, in _highWrite#012if not vmDrive.blockDev or
>>> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
>>> attribute 'format'
>>>
>>> These messages might have something to do with reading
>>> information/statistics through the API.
>>> But I've stopped all my (monitoring) processes using the API.
>>>
>>> Restarting ovirt-engine and vdsm did not resolve the issue.
>>>
>>> Both nodes and engine server are running oVirt 3.3.1 on CentOS 6.4
>>>
>>> This may be a red herring but the values  of disk statistics read
>>> through the API are "static"/unchanging.
>>>
>>> Any clues on whether these messages are serious and any ideas on how
>>> to stop them
>>> spamming my system logs?
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> vdsm version?
> is a floppy attached?
> which type of storage domain?


Re: [Users] Excessive syslog logging from vdsm/sampling.py

2013-12-17 Thread Sander Grendelman
Any ideas on this one?

On Mon, Dec 16, 2013 at 11:31 AM, Sander Grendelman
 wrote:
> The syslog on my ovirt nodes is constantly logging errors from sampling.py:
>
> Dec 16 11:22:47 gnkvm01 vdsm vm.Vm ERROR
> vmId=`66aa-2299-4d93-931d-b7a2e421b7e9`::Stats function failed:
> #012Traceback (most
> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
> in collect#012statsFunction()#012  File
> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
> line 509, in _highWrite#012if not vmDrive.blockDev or
> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
> attribute 'format'
> Dec 16 11:22:47 gnkvm01 vdsm vm.Vm ERROR
> vmId=`22654002-cbef-454d-b001-7823da5f592f`::Stats function failed:
> #012Traceback (most
> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
> in collect#012statsFunction()#012  File
> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
> line 509, in _highWrite#012if not vmDrive.blockDev or
> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
> attribute 'format'
> Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
> vmId=`d3dae626-279b-4bcf-afc4-7a3c198a3035`::Stats function failed:
> #012Traceback (most
> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
> in collect#012statsFunction()#012  File
> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
> line 513, in _highWrite#012self._vm._dom.blockInfo(vmDrive.path,
> 0)#012  File "/usr/share/vdsm/vm.py", line 835, in f#012ret =
> attr(*args, **kwargs)#012  File
> "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
> 76, in wrapper#012ret = f(*args, **kwargs)#012  File
> "/usr/lib64/python2.6/site-packages/libvirt.py", line 1797, in
> blockInfo#012if ret is None: raise libvirtError
> ('virDomainGetBlockInfo() failed', dom=self)#012libvirtError: invalid
> argument: invalid path
> /rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/347f2238-c018-4370-94df-bd1e81f8b854/9e5dad95-73ea-4e5c-aa13-522efd9bad11
> not assigned to domain
> Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
> vmId=`c6f56584-1ccd-4c02-be94-897a4e747d34`::Stats function failed:
> #012Traceback (most
> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
> in collect#012statsFunction()#012  File
> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
> line 509, in _highWrite#012if not vmDrive.blockDev or
> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
> attribute 'format'
> Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
> vmId=`0ae3a3d7-ead9-4c0d-9df0-3901b6e6859c`::Stats function failed:
> #012Traceback (most
> recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
> in collect#012statsFunction()#012  File
> "/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
> self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
> line 509, in _highWrite#012if not vmDrive.blockDev or
> vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
> attribute 'format'
>
> These messages might have something to do with reading
> information/statistics through the API.
> But I've stopped all my (monitoring) processes using the API.
>
> Restarting ovirt-engine and vdsm did not resolve the issue.
>
> Both nodes and engine server are running oVirt 3.3.1 on CentOS 6.4
>
> This may be a red herring but the values  of disk statistics read
> through the API are "static"/unchanging.
>
> Any clues on whether these messages are serious and any ideas on how
> to stop them
> spamming my system logs?


[Users] Excessive syslog logging from vdsm/sampling.py

2013-12-16 Thread Sander Grendelman
The syslog on my ovirt nodes is constantly logging errors from sampling.py:

Dec 16 11:22:47 gnkvm01 vdsm vm.Vm ERROR
vmId=`66aa-2299-4d93-931d-b7a2e421b7e9`::Stats function failed:
#012Traceback (most
recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
in collect#012statsFunction()#012  File
"/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
line 509, in _highWrite#012if not vmDrive.blockDev or
vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
attribute 'format'
Dec 16 11:22:47 gnkvm01 vdsm vm.Vm ERROR
vmId=`22654002-cbef-454d-b001-7823da5f592f`::Stats function failed:
#012Traceback (most
recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
in collect#012statsFunction()#012  File
"/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
line 509, in _highWrite#012if not vmDrive.blockDev or
vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
attribute 'format'
Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
vmId=`d3dae626-279b-4bcf-afc4-7a3c198a3035`::Stats function failed:
#012Traceback (most
recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
in collect#012statsFunction()#012  File
"/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
line 513, in _highWrite#012self._vm._dom.blockInfo(vmDrive.path,
0)#012  File "/usr/share/vdsm/vm.py", line 835, in f#012ret =
attr(*args, **kwargs)#012  File
"/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
76, in wrapper#012ret = f(*args, **kwargs)#012  File
"/usr/lib64/python2.6/site-packages/libvirt.py", line 1797, in
blockInfo#012if ret is None: raise libvirtError
('virDomainGetBlockInfo() failed', dom=self)#012libvirtError: invalid
argument: invalid path
/rhev/data-center/mnt/blockSD/b526b148-b810-47c6-9bdd-4fd8d8226855/images/347f2238-c018-4370-94df-bd1e81f8b854/9e5dad95-73ea-4e5c-aa13-522efd9bad11
not assigned to domain
Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
vmId=`c6f56584-1ccd-4c02-be94-897a4e747d34`::Stats function failed:
#012Traceback (most
recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
in collect#012statsFunction()#012  File
"/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
line 509, in _highWrite#012if not vmDrive.blockDev or
vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
attribute 'format'
Dec 16 11:22:48 gnkvm01 vdsm vm.Vm ERROR
vmId=`0ae3a3d7-ead9-4c0d-9df0-3901b6e6859c`::Stats function failed:
#012Traceback (most
recent call last):#012  File "/usr/share/vdsm/sampling.py", line 351,
in collect#012statsFunction()#012  File
"/usr/share/vdsm/sampling.py", line 226, in __call__#012retValue =
self._function(*args, **kwargs)#012  File "/usr/share/vdsm/vm.py",
line 509, in _highWrite#012if not vmDrive.blockDev or
vmDrive.format != 'cow':#012AttributeError: 'Drive' object has no
attribute 'format'

These messages might have something to do with reading
information/statistics through the API,
but I've stopped all my (monitoring) processes that use the API.

Restarting ovirt-engine and vdsm did not resolve the issue.

Both nodes and engine server are running oVirt 3.3.1 on CentOS 6.4

This may be a red herring, but the values of disk statistics read
through the API are static/unchanging.

Any clues on whether these messages are serious and any ideas on how
to stop them
spamming my system logs?
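The first traceback boils down to vm.py line 509 touching vmDrive.format on Drive objects that were built without that attribute (presumably CD-ROM/floppy-style devices). A hedged sketch of the failure mode and a getattr() guard; this is an illustration of the pattern, not the actual vdsm patch:

```python
class Drive(object):
    """Stand-in for vdsm's Drive: attributes come from whatever keywords the
    device was created with, so some drives simply lack 'format'."""
    def __init__(self, **kw):
        self.__dict__.update(kw)


def is_extendable_cow(drive):
    """Guarded version of `if not vmDrive.blockDev or vmDrive.format != 'cow'`
    that tolerates drives missing either attribute instead of raising
    AttributeError."""
    if not getattr(drive, 'blockDev', False):
        return False
    return getattr(drive, 'format', None) == 'cow'


disk = Drive(blockDev=True, format='cow')
cdrom = Drive(device='cdrom')  # no blockDev, no format -> the syslog case
```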


Re: [Users] Solaris 11.1 ISO panic on boot

2013-12-16 Thread Sander Grendelman
There's also a thread about Solaris 11 here:
http://lists.ovirt.org/pipermail/users/2013-September/016168.html

On Mon, Dec 16, 2013 at 10:35 AM, Joop  wrote:
> Blaster wrote:
>>
>> I have a Solaris 11.1 system running on a nice piece of hardware that I’d
>> like to make better use of.  I would like to virtualize it under ovirt.
>>
>> I created an ovirt guest using the RHEL 64 bit template.  I attached the
>> Solaris 11.1 ISO and booted.  As soon as the Solaris banner is displayed,
>> poof, kernel panic.
>>
>> Of course every whizzes by so fast I have no chance of getting a screen
>> shot.
>>
>> Has anyone been successful in boot Solaris 11.1 under ovirt?  Should I
>> have picked a different template?
>>
>>
>
> I did a search through my message folder and found the thread. It was about
> solaris10 and networking didn't work. So maybe this is different or not.
> Could you test if the same iso can be booted using virt-manager or kvm/qemu
> directly?
>
> Joop
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] hot add ram to a vm

2013-12-13 Thread Sander Grendelman
On Fri, Dec 13, 2013 at 1:42 PM, Michal Skrivanek
 wrote:
> Too bad QEMU doesn't support it. So it's still a long way.

qemu/libvirt "kind of" supports it (at least in F19).

In virt-manager you can define "current" and "maximum" memory allocation.

The VM only sees the current allocation and you can hot add memory up
to the maximum allocation.

Example from the XML definition:

  <memory unit='KiB'>3145728</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>

After booting:

[root@grkvm201 ~]# uname -a
Linux grkvm201.plusine.intern 2.6.32-358.23.2.el6.x86_64 #1 SMP Wed
Oct 16 18:37:12 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@grkvm201 ~]# free -lm
             total       used       free     shared    buffers     cached
Mem:           837        296        540          0          8         84
Low:           837        296        540
High:            0          0          0
-/+ buffers/cache:        203        634
Swap:          511          0        511
[root@grkvm201 ~]#

memory increased to 2GB through virt-manager:

[root@grkvm201 ~]# free -lm
             total       used       free     shared    buffers     cached
Mem:          1861        295       1565          0          8         84
Low:          1861        295       1565
High:            0          0          0
-/+ buffers/cache:        202       1659
Swap:          511          0        511
[root@grkvm201 ~]# cat /proc/meminfo
MemTotal:1906176 kB
MemFree: 1603456 kB
Buffers:9140 kB
Cached:86860 kB
SwapCached:0 kB
Active:92564 kB
Inactive:  68064 kB
Active(anon):  68692 kB
Inactive(anon):   12 kB
Active(file):  23872 kB
Inactive(file):68052 kB
Unevictable:   21600 kB
Mlocked:   11380 kB
SwapTotal:524280 kB
SwapFree: 524280 kB
Dirty:12 kB
Writeback: 0 kB
AnonPages: 86232 kB
Mapped:20436 kB
Shmem:   288 kB
Slab:  75132 kB
SReclaimable:  14576 kB
SUnreclaim:60556 kB
KernelStack:1464 kB
PageTables: 4224 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 1477368 kB
Committed_AS: 547624 kB
VmallocTotal:   34359738367 kB
VmallocUsed:   20040 kB
VmallocChunk:   34359688500 kB
HardwareCorrupted: 0 kB
AnonHugePages: 18432 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0

Memory decreased to 1,5GB through virt-manager:
[root@grkvm201 ~]# free -lm
             total       used       free     shared    buffers     cached
Mem:          1349        295       1054          0          8         84
Low:          1349        295       1054
High:            0          0          0
-/+ buffers/cache:        201       1147
Swap:          511          0        511
[root@grkvm201 ~]#


Re: [Users] Attach nfs domain to gluster dc

2013-12-13 Thread Sander Grendelman
On Thu, Dec 12, 2013 at 5:01 PM, Juan Pablo Lorier  wrote:
...
> # nfs
> -A INPUT -p tcp -m tcp --dport 111   -j ACCEPT
> -A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT
> -A INPUT -p tcp -m tcp --dport 2049  -j ACCEPT
> -A INPUT -p udp -m udp --dport 2049  -j ACCEPT
> -A INPUT -p udp -m udp --dport 41729  -j ACCEPT
> -A INPUT -p tcp -m tcp --dport 48491  -j ACCEPT
> -A INPUT -p udp -m udp --dport 43828  -j ACCEPT
> -A INPUT -p tcp -m tcp --dport 48491  -j ACCEPT
> -A INPUT -p udp -m udp --dport 47492  -j ACCEPT
> -A INPUT -p tcp -m tcp --dport 58837  -j ACCEPT

The above rules might break after a reboot.

Best practice is to set the normally dynamic NFS ports to fixed
values in /etc/sysconfig/nfs and then open those ports in the firewall.
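A sketch of that setup, assuming the EL6-style /etc/sysconfig/nfs variable names; the port numbers are arbitrary examples, so pick free ones and restart the NFS services afterwards:

```
# /etc/sysconfig/nfs -- pin the normally dynamic daemons to fixed ports
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662

# matching iptables rules, next to the fixed 111 (portmapper) and 2049 (nfsd)
-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m udp --dport 32769 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 892   -j ACCEPT
-A INPUT -p udp -m udp --dport 892   -j ACCEPT
-A INPUT -p tcp -m tcp --dport 662   -j ACCEPT
-A INPUT -p udp -m udp --dport 662   -j ACCEPT
```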

>
> Now I'm changing the settings by overriding the defaults in the domain
> and auto negotiating the protocol. This firewall correction may be a
> good thing to add in the deploy.

Are you doing this on a node or on your engine server?

The engine-setup configured both /etc/sysconfig/nfs and iptables for me
on my engine server (for the ISO domain).


Re: [Users] Abysmal network performance inside hypervisor node

2013-12-11 Thread Sander Grendelman
Then I'm also out of ideas.

You could take a look at the options in /etc/vdsm/vdsm.conf
(documented in /usr/share/doc/vdsm-*/vdsm.conf.sample).

On Wed, Dec 11, 2013 at 2:48 PM, Markus Stockhausen
 wrote:
>> From: sander.grendel...@gmail.com [sander.grendel...@gmail.com]" on 
>> behalf of "
>> Sent: Wednesday, 11 December 2013 14:43
>> To: Markus Stockhausen
>> Cc: ovirt-users
>> Subject: Re: [Users] Abysmal network performance inside hypervisor node
>>
>> I'm just wondering if the tested interfaces are bridged because I've
>> seen some issues with
>> network througput and bridged interfaces on my local system (F19).
>>
>> Basically, if an IP is configured on the bridge itself ( in oVirt this
>> is the case if the network
>> is configured as a VM network ) latency goes up and throughput goes down.
>>
>> Can you rule this one out by using an unbridged interface?
>
> I see. But my storage infiniband network is working without a bridge.
> That was the mounting network device of my initial mail. So with
> (ovirtmgmt) and without bridge (infiniband) I have the same problem.
>
> [root@colovn01 ~]# ifconfig ib1
> ib1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2044
> inet 10.10.30.1  netmask 255.255.255.0  broadcast 10.10.30.255
> inet6 fe80::223:7dff:ff94:d3fe  prefixlen 64  scopeid 0x20<link>
> Infiniband hardware address can be incorrect! Please read BUGS section in 
> ifconfig(8).
> infiniband 
> 80:00:00:49:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00  txqueuelen 256  
> (InfiniBand)
> RX packets 3575120  bytes 7222538528 (6.7 GiB)
> RX errors 0  dropped 10  overruns 0  frame 0
> TX packets 416942  bytes 29156648 (27.8 MiB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> Markus
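On the migration-bandwidth side of the question, vdsm.conf carries a cap that defaults to a fairly low value; the key name below is from memory of the 3.x-era sample file, so verify it against /usr/share/doc/vdsm-*/vdsm.conf.sample before relying on it:

```
# /etc/vdsm/vdsm.conf -- restart vdsmd after changing
[vars]
# maximum bandwidth for live migration, in MiB/s
migration_max_bandwidth = 100
```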


Re: [Users] Abysmal network performance inside hypervisor node

2013-12-11 Thread Sander Grendelman
I'm just wondering if the tested interfaces are bridged, because I've
seen some issues with network throughput and bridged interfaces
on my local system (F19).

Basically, if an IP is configured on the bridge itself ( in oVirt this
is the case if the network
is configured as a VM network ) latency goes up and throughput goes down.

Can you rule this one out by using an unbridged interface?

On Wed, Dec 11, 2013 at 2:30 PM, Markus Stockhausen
 wrote:
>> From: sander.grendel...@gmail.com [sander.grendel...@gmail.com]"
>> Sent: Wednesday, 11 December 2013 13:27
>> To: Markus Stockhausen
>> Cc: ovirt-users
>> Subject: Re: [Users] Abysmal network performance inside hypervisor node
>>
>> Could this have something to do with the bridging config used by vdsm?
>>
>> @Markus: what does your network config look like?
>> And what is the interface you use to access the NFS store?
>
> We have an IB IPoIB network for storage. Thats why you see a throughput
> at 400 MB/sec. For a cross check I mounted the NFS share through the
> ovirtmgmt interface (our normal trusted internal network) on a 1GBit
> Intel network card.
>
> The throughput is exactly the same - throttled at roughly 16MB/sec:
>
> Maybe that is no problem at all for the VMs. But in my case it would be 
> helpful
> to know the switch to temporarily flood the channels for fast migration.
>
> [root@colovn01 _ISOs]# df .
> Filesystem               1K-blocks       Used  Available Use% Mounted on
> 192.168.10.30:/var/nas1 11611801600 9618833408 1992968192  83% /mnt
> [root@colovn01 _ISOs]# dd if=SLES-11-DVD-x86_64-GM-DVD1.iso of=/dev/null
> 5148409+0 records in
> 5148408+0 records out
> 2635984896 bytes (2.6 GB) copied, 157.247 s, 16.8 MB/s
>
> [root@colovn01 _ISOs]# ifconfig ovirtmgmt
> ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
> inet 192.168.10.51  netmask 255.255.255.0  broadcast 192.168.10.255
> inet6 fe80::230:48ff:fed7:9ec4  prefixlen 64  scopeid 0x20<link>
> ether 00:30:48:d7:9e:c4  txqueuelen 0  (Ethernet)
> RX packets 203382  bytes 2673889310 (2.4 GiB)
> RX errors 0  dropped 2092  overruns 0  frame 0
> TX packets 160826  bytes 19946957 (19.0 MiB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> Markus


Re: [Users] Abysmal network performance inside hypervisor node

2013-12-11 Thread Sander Grendelman
Could this have something to do with the bridging config used by vdsm?

@Markus: what does your network config look like?
And what is the interface you use to access the NFS store?


Re: [Users] oVirt auditing

2013-12-05 Thread Sander Grendelman
Is this something you could use https:///api/events for?
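A hedged sketch of digesting that feed once fetched. The sample approximates the events collection layout (an `events` root, `event` children with a `description` and a `user` element), so verify the field names against a real response from your engine:

```python
import xml.etree.ElementTree as ET

# Approximate shape of GET /api/events -- check against a real engine.
SAMPLE = """\
<events>
  <event id="42">
    <description>Interface nic1 (VirtIO) was updated for VM test</description>
    <user id="u1"><name>user1</name></user>
  </event>
  <event id="43">
    <description>User admin logged in.</description>
    <user id="u2"><name>admin</name></user>
  </event>
</events>"""


def events_by_user(xml_text, user):
    """Return event descriptions attributed to the given user name."""
    root = ET.fromstring(xml_text)
    return [ev.findtext('description') for ev in root.findall('event')
            if ev.findtext('user/name') == user]
```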

On Thu, Dec 5, 2013 at 4:51 PM, Jakub Bittner  wrote:
> Hello,
>
> I am curious how to audit user actions in oVirt web interface. From
> engine.log we are able to extract when user logged in, when he updated
> vnicProfile and so, but we can not get exact changes (behavior).
>
> Right now I can get logs like:
>
> 2013-12-05 16:35:46,270 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ajp--127.0.0.1-8702-6) Correlation ID: 7e60ae1, Call Stack: null, Custom
> Event ID: -1, Message: Interface nic1 (VirtIO) was updated for VM
> test.test.org.   (User: user1)
>
> But it would be nice to get logs like:
>
> 2013-12-05 16:35:46,270 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ajp--127.0.0.1-8702-6) Correlation ID: 7e60ae1, Call Stack: null, Custom
> Event ID: -1, Message: Interface nic1 (VirtIO) was updated for VM
> test.test.org from secure_vlan to unsecure_vlan.   (User: user1)
>
> My point is to have a feature which can give us possibility to construct
> exact user behavior and action in managing oVirt. It could be useful not
> even in hunting bugs, but primary in security problem hunting.
>
> Thank you.
>
> Jakub Bittner
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [Users] backups

2013-11-27 Thread Sander Grendelman
> The work-around for this is to SSH into the guest first, put the database
> into backup mode (maybe run sync a time or two to flush out as much from RAM
> as possible), take the snapshot, SSH back in to resume the database, back up
> the snapshot, delete the snapshot.

The main problem in oVirt with this strategy would be the lack of live
snapshot deletion.
Having to shut down the VM to delete/merge a snapshot is not nice :(


Re: [Users] Where do you run the engine?

2013-11-27 Thread Sander Grendelman
On Wed, Nov 27, 2013 at 3:51 PM, Ernest Beinrohr
 wrote:
> Just curious, where/how you run the engine. I run it in libvirt/kvm on one
> of my storage domains.

I run it on our esx cluster (seriously).


Re: [Users] API read-only access / roles

2013-11-21 Thread Sander Grendelman
Hi Doron,

The user I've defined in [1] works for me.
A built-in login-/read-only role would be nice,
but it's quite easy to define a custom role so
more of a nice-to-have instead of a must-have.

Thanks for asking!

Sander.

On Wed, Nov 20, 2013 at 5:40 PM, Doron Fediuck  wrote:
> Hi Sander,
> We're closing the ovirt 3.4 scope, and wondering if you're handling
> Zabbix based on [1].
> If so please let me know and I'll update the 3.4 features list.
>
> Thanks,
> Doron
>
> [1] http://lists.ovirt.org/pipermail/users/2013-November/017946.html


Re: [Users] API read-only access / roles

2013-11-19 Thread Sander Grendelman
On Mon, Nov 18, 2013 at 5:18 PM, René Koch (ovido)  wrote:
> Very nice - do you use my check_rhev3 Nagios plugin
> (https://github.com/ovido/check_rhev3) or are you working on
> your own script?

At the moment: both. The problem with using Nagios scripts in Zabbix is that
the trigger/alarm decision is made in different places. In Nagios this is done
in the check scripts / on the "client" side while Zabbix mainly
collects data and
fires triggers if certain conditions in that data are met.

New(er) Zabbix versions also have a feature called "low level discovery" that
automatically creates items.

It also seems that there is better RESTful/ovirt API support in python
so I'm giving
that a try too. Although perl is usually my poison of choice too ;)
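For the low-level discovery route, the script's main job is to emit the discovery JSON that Zabbix expects; a small sketch (the {#VMNAME} macro name is my own choice, not something oVirt or Zabbix prescribes):

```python
import json

def vm_discovery(vm_names):
    """Build a Zabbix low-level discovery document from a list of VM
    names; Zabbix expands each entry into items via item prototypes."""
    return json.dumps({"data": [{"{#VMNAME}": name} for name in vm_names]})
```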

>> For this I've created a "AdminLoginOnly" role that only has
>> System->Configure System->Login Permissions access.
>>
>> Is this the way to go for this kind of configuration? Or is there
>> a way to further minimize the permissions of this user?
>
> I create a custom role with these permissions for Nagios monitoring,
> too.
> I was thinking that in oVirt 3.3 there should be a predefined
> viewers-role, but can't find it in my setup :(

OK, that would have been nice, do you have any history on this one?

>> Another issue is that a "Login" event is generated every time
>> the user connects through the API. This makes the "Events"
>> pane less useful / readable. Is there a way to disable this for
>> some users/roles?
>
>
> It depends if you have your own script or check_rhev3:
> - check_rhev3 1.2: use option -o
> - check_rhev3 1.3: you should not see any login information in this
> version anymore
> - custom script: see this page on information how to use the JSESSIONID
> cookie: http://www.ovirt.org/Features/RESTSessionManagement

Thanks for the info I'll look into this.

It does make the logic in the script a bit harder because you have to store
the sessionid somewhere and check if the session is still valid.
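The bookkeeping can stay fairly small: cache the JSESSIONID, send it instead of credentials, and drop it when a request comes back 401 so the next call logs in again. A sketch of just that decision logic (the actual HTTP calls are left out; the Prefer: persistent-auth header is the one described on the RESTSessionManagement page):

```python
class SessionCache:
    """Remember the engine's JSESSIONID between polls; forget it when
    the server rejects it so the next request re-authenticates."""
    def __init__(self):
        self.jsessionid = None

    def headers(self, credentials_header):
        if self.jsessionid:
            # Reuse the session: no Authorization header, no Login event.
            return {"Cookie": "JSESSIONID=%s" % self.jsessionid,
                    "Prefer": "persistent-auth"}
        return {"Authorization": credentials_header,
                "Prefer": "persistent-auth"}

    def update(self, status_code, set_cookie=None):
        if status_code == 401:
            self.jsessionid = None  # session expired, log in again
        elif set_cookie and "JSESSIONID=" in set_cookie:
            self.jsessionid = set_cookie.split("JSESSIONID=")[1].split(";")[0]
```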


[Users] API read-only access / roles

2013-11-18 Thread Sander Grendelman
I'm working on (Zabbix) monitoring through the RESTful API.

Which role should I assign to the monitoring user?

The user only needs read access to the data but it looks like
I need to assign at least an "Admin" role to the user to be
able to read data through the API.

For this I've created a "AdminLoginOnly" role that only has
System->Configure System->Login Permissions access.

Is this the way to go for this kind of configuration? Or is there
a way to further minimize the permissions of this user?

Another issue is that a "Login" event is generated every time
the user connects through the API. This makes the "Events"
pane less useful / readable. Is there a way to disable this for
some users/roles?


Re: [Users] Fence-virt support

2013-11-18 Thread Sander Grendelman
On Mon, Nov 18, 2013 at 3:08 PM, Eli Mesika  wrote:
>> - fence-virtd uses a keyfile, no username and password.
>
> This is a real problem, as we currently do not support other authentication
> methods
Yes, it's probably going to take a bit of a hack to make this
work with the current mechanism/gui because the key file
has to be an actual _file_.

I just made a local config + keyfile for my testnodes.

>
>> - fence-virtd uses port=vmname to identify a VM
>>
>> The gui has mandatory username and password fields and
>> the standard port/sshport field only takes numeric values.
>
> For that we have the options field , you could omit the port from the fence 
> mapping and then add in the options "port="

Yes that's what I did. One of the problems with that is that
port= maps to the sshport field in the apc part of the gui
which brings us back to BZ 1014513.

>> Some of the problems I ran into are probably related to
>> https://bugzilla.redhat.com/show_bug.cgi?id=1020344
>
> This is actually related to another BZ
>  https://bugzilla.redhat.com/show_bug.cgi?id=1014513
>
Yes, that's actually the one I meant but I couldn't find it.


Re: [Users] Fence-virt support

2013-11-18 Thread Sander Grendelman
It "kind of" worked.

I did an insert into the database to add an "xvm" fence mode.
After that I had to first change the mode to ipmilan to get
rid of a couple of mandatory fields. The setup also breaks
when I try to edit a host.

The fence mechanism makes a couple of assumptions that
don't work with fence-virtd:

- fence-virtd uses a keyfile, no username and password.
- fence-virtd uses port=vmname to identify a VM

The gui has mandatory username and password fields and
the standard port/sshport field only takes numeric values.

Some of the problems I ran into are probably related to
https://bugzilla.redhat.com/show_bug.cgi?id=1020344

On Mon, Nov 18, 2013 at 1:26 PM, Eli Mesika  wrote:
>
>
> - Original Message -
>> From: "Itamar Heim" 
>> To: "Sander Grendelman" 
>> Cc: users@ovirt.org, "Eli Mesika" 
>> Sent: Thursday, November 14, 2013 3:04:39 AM
>> Subject: Re: [Users] Fence-virt support
>>
>> On 11/13/2013 04:27 PM, Sander Grendelman wrote:
>> > I'm running an ovirt environment (two virt hosts and one engine host)
>> > on libvirt/kvm on fedora 19. (nested KVM).
>> >
>> > I want to fence the virtualized virtualization hosts from the engine host
>> > (or their partner host) through libvirt. Fence-virt can do this.
>> >
>> > I know this is a bit of a niche case, but it's very useful for testing/demo
>> > purposes.
>>
>> you can just edit the configs to add it (may be overridden during upgrade):
>> VdsFenceType, VdsFenceOptionMapping and VdsFenceOptionTypes
>
> Did that worked for you or do you need any further help?
> Thanks
> Eli
>
>>
>> >
>> > On Wed, Nov 13, 2013 at 8:33 PM, Itamar Heim  wrote:
>> >> On 11/13/2013 07:47 AM, Sander Grendelman wrote:
>> >>>
>> >>> I'm currently building a ovirt test-environment using nested
>> >>> virtualization on libvirt/kvm.
>> >>>
>> >>> For the most part this works great. However, I can't configure
>> >>> fencing/power management
>> >>> because only hardware BMC's/fencing devices are supported.
>> >>>
>> >>> Is this something that could/should be included in a future oVirt
>> >>> version?
>> >>> Or is there another option/workaround to test power management?
>> >>
>> >> please elaborate a bit more on what's missing.
>> >> what are you trying to fence and from where?
>>
>>


Re: [Users] Fence-virt support

2013-11-13 Thread Sander Grendelman
I'm running an ovirt environment (two virt hosts and one engine host)
on libvirt/kvm on fedora 19. (nested KVM).

I want to fence the virtualized virtualization hosts from the engine host
(or their partner host) through libvirt. Fence-virt can do this.

I know this is a bit of a niche case, but it's very useful for testing/demo
purposes.

On Wed, Nov 13, 2013 at 8:33 PM, Itamar Heim  wrote:
> On 11/13/2013 07:47 AM, Sander Grendelman wrote:
>>
>> I'm currently building a ovirt test-environment using nested
>> virtualization on libvirt/kvm.
>>
>> For the most part this works great. However, I can't configure
>> fencing/power management
>> because only hardware BMC's/fencing devices are supported.
>>
>> Is this something that could/should be included in a future oVirt version?
>> Or is there another option/workaround to test power management?
>
> please elaborate a bit more on what's missing.
> what are you trying to fence and from where?


Re: [Users] oVirt 3.4 planning

2013-11-13 Thread Sander Grendelman
> * other: Zabbix monitoring
> Monitoring the oVirt environment should work with my check_rhev3 plugin
> by adding it as an external check to Zabbix. I'll test this and if it's
> working I'll provide a short guide on how to do it.
> Displaying data/triggers in oVirt isn't possible yet with my Monitoring
> UI-plugin, but on the feature list (help is welcome)...

I'm very interested in this Zabbix plugin/check, was thinking about implementing
something myself.


[Users] Snapshot delele/merge broken. Was: Live storage migration snapshot removal (fails)

2013-11-13 Thread Sander Grendelman
Snapshot deletion on preallocated FC domain disks is broken in ovirt-stable/3.3

Both online and offline snapshot deletion leads to an error and the
snapshot state is changed to "BROKEN".

When I apply the patch at http://gerrit.ovirt.org/#/c/19983/ from
https://bugzilla.redhat.com/show_bug.cgi?id=1015071 snapshot deletion
works again.

Can/will this patch be backported to ovirt-stable?


[Users] Fence-virt support

2013-11-13 Thread Sander Grendelman
I'm currently building an oVirt test environment using nested
virtualization on libvirt/kvm.

For the most part this works great. However, I can't configure
fencing/power management
because only hardware BMC's/fencing devices are supported.

Is this something that could/should be included in a future oVirt version?
Or is there another option/workaround to test power management?


Re: [Users] EL5 support for VirtIO SCSI?

2013-11-13 Thread Sander Grendelman
According to https://access.redhat.com/site/solutions/20511, virtio
works on RHEL > 5.3.

You have to edit /etc/modprobe.conf and generate a new initrd.
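Roughly, the guest-side change is to add virtio aliases to /etc/modprobe.conf and then rebuild the initrd with mkinitrd. A sketch of the config edit as a pure string transformation — the specific alias lines below are illustrative assumptions, not a verified list from the Red Hat solution:

```python
def add_virtio_aliases(modprobe_conf):
    """Append virtio driver aliases to an EL5 /etc/modprobe.conf unless
    they are already present; after writing the file back, mkinitrd is
    run so the drivers end up in the new initrd.  The alias names here
    are examples only."""
    aliases = ["alias scsi_hostadapter virtio_blk",
               "alias virtio virtio_pci"]
    lines = modprobe_conf.rstrip("\n").split("\n") if modprobe_conf else []
    for alias in aliases:
        if alias not in lines:
            lines.append(alias)
    return "\n".join(lines) + "\n"
```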

On Wed, Nov 13, 2013 at 9:32 AM, Sven Kieske  wrote:
> Hi,
>
> afaik the rhel 5 kernel series just has not the necessary drivers for
> all virtio stuff, so it's not supported and does not work, unless
> you want to patch your own kernel.
>
> Am 13.11.2013 06:44, schrieb Paul Jansen:
>> I have just set up an Ovirt 3.3.0 install and have done a test install of 
>> Centos 6.4 in a VM.  The VM was configured with an IDE drive and a 
>> virtio-scsi drive.  The Centos 6.4 install sees both drives OK.
>> I'm wanting to do some testing on a product that is based on EL5, but I'm 
>> finding that it cannot see the virtio-scsi drive.  It does show up in the 
>> output of 'lspci', but I don't see a corresponding 'sd' device.
>>
>> I've just tried installing Centos 5.10 and the support is not there.
>>
>> Does anyone know of any tricks to allow EL5 to see the virtio-scsi device?
>
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +49-5772-293-100
> F: +49-5772-293-333
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [Users] Installation of self hosted engine

2013-11-13 Thread Sander Grendelman
yum --enablerepo=ovirt-nightly install ovirt-hosted-engine-setup

On Wed, Nov 13, 2013 at 9:29 AM, Pavel Gandalipov  wrote:
> Thank you, Yedidyah for fast reply.
>
>> As a minimum, install 'ovirt-hosted-engine-setup' and
>> run 'hosted-engine --deploy'.
>
>
> These are my repositories from your docs.
>
> [root@master02 ~]# ls -ls /etc/yum
> yum/ yum.conf yum.repos.d/
> [root@master02 ~]# ls -ls /etc/yum.repos.d/
> total 24
> 4 -rw-r--r--. 1 root root 1199 Aug 31 02:32 fedora-updates-testing.repo
> 4 -rw-r--r--. 1 root root 1141 Aug 31 02:32 fedora-updates.repo
> 4 -rw-r--r--. 1 root root  782 Aug 22 21:57 fedora-virt-preview.repo
> 4 -rw-r--r--. 1 root root  782 Jun  5  2012 fedora-virt-preview.repo.1
> 4 -rw-r--r--. 1 root root 1180 Aug 31 02:32 fedora.repo
> 4 -rw-r--r--. 1 root root  831 Aug 22 21:57 ovirt.repo
> [root@master02 ~]#
>
> [root@master02 ~]# yum search ovirt-hosted
> Loaded plugins: versionlock
> Warning: No matches found for: ovirt-hosted
> No matches found
> [root@master02 ~]#
>
> [root@master02 ~]# rpm -qa | grep ovirt
> ovirt-engine-webadmin-portal-3.3.0.1-1.fc19.noarch
> ovirt-iso-uploader-3.3.1-1.fc19.noarch
> ovirt-engine-cli-3.3.0.5-1.fc19.noarch
> ovirt-host-deploy-java-1.1.1-1.fc19.noarch
> ovirt-engine-tools-3.3.0.1-1.fc19.noarch
> ovirt-host-deploy-1.1.1-1.fc19.noarch
> ovirt-release-fedora-8-1.noarch
> ovirt-engine-userportal-3.3.0.1-1.fc19.noarch
> ovirt-engine-setup-3.3.0.1-1.fc19.noarch
> ovirt-engine-backend-3.3.0.1-1.fc19.noarch
> ovirt-engine-lib-3.3.0.1-1.fc19.noarch
> ovirt-engine-3.3.0.1-1.fc19.noarch
> ovirt-image-uploader-3.3.1-1.fc19.noarch
> ovirt-engine-restapi-3.3.0.1-1.fc19.noarch
> ovirt-engine-sdk-python-3.3.0.7-1.fc19.noarch
> ovirt-engine-dbscripts-3.3.0.1-1.fc19.noarch
> ovirt-log-collector-3.3.1-1.fc19.noarch
>
>
> What can i try?
>
>
>
>
> 2013/11/13 Yedidyah Bar David 
>>
>> Hi,
>>
>> 
>>
>> From: "Pavel Gandalipov" 
>> To: users@ovirt.org
>> Sent: Wednesday, November 13, 2013 9:22:05 AM
>> Subject: [Users] Installation of self hosted engine
>>
>>
>> Hello, could you help me with installation of self hosted engine on bare
>> Ovirt 3.3.
>>
>>
>> It's actually "on bare metal", with "bare ovirt 3.3" inside it.
>>
>> Where can i find any packages for installation?
>>
>>
>> For now you can use the nightly repo. As a minimum, install
>> 'ovirt-hosted-engine-setup' and
>> run 'hosted-engine --deploy'.
>>
>> I don't think we have yet any wiki page for it.
>>
>> You can have a look at [1], which is for migrating an existing 3.3 engine
>> to hosted-engine,
>> but instead of backup/restore simply do a new setup inside the VM.
>>
>> [1] http://www.ovirt.org/Migrate_to_Hosted_Engine
>> --
>> Didi
>>
>
>


Re: [Users] Live storage migration snapshot removal (fails)

2013-11-12 Thread Sander Grendelman
On Mon, Nov 11, 2013 at 4:33 PM, Sander Grendelman
 wrote:
> Done: https://bugzilla.redhat.com/show_bug.cgi?id=1029069

The patch at http://gerrit.ovirt.org/#/c/19983/ from
https://bugzilla.redhat.com/show_bug.cgi?id=1015071 seems to fix this issue.

I could not patch storage/image.py in our main environment so I had a
nice exercise in
building an oVirt environment on top of nested KVM virtualization on my laptop.


Re: [Users] Live storage migration snapshot removal (fails)

2013-11-11 Thread Sander Grendelman
Done: https://bugzilla.redhat.com/show_bug.cgi?id=1029069

On Mon, Nov 11, 2013 at 4:02 PM, Dan Kenigsberg  wrote:
> On Mon, Nov 11, 2013 at 03:11:22PM +0100, Sander Grendelman wrote:
>> I can open a bug on this.
>>
>> Should I file it for the entire "snapshot removal fails"-problem or
>> just this error?
>
> "live storage migration snapshot removal fails due to unexpected
> qemu-img output", followed by your analysis below, sounds great to me.
>
>>
>>
>> I've looked at the parsing error and the problem seems to be that the
>> qemuImg.check() function expects a qemu-img version that outputs a
>> string like: "Image end offset: 262144".
>>
>> The only version I could find that does that is the one in the F19 
>> virt-preview
>> repository. Stock F19 qemu-img doesn't have this output. Nor do the stock EL6
>> or qemu-img-rhev EL6 versions.
>>
>> The only function that calls qemuImg.check is 
>> blockVolume.shrinkToOptimalSize()
>> this function needs (and uses) the parsed offset to shrink the LV containing 
>> the
>> qcow2 image.


Re: [Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

2013-11-11 Thread Sander Grendelman
I've sent an e-mail with this question to the centos mailing list:
http://lists.centos.org/pipermail/centos/2013-November/138344.html

On Mon, Nov 11, 2013 at 1:19 PM, Itamar Heim  wrote:
> On 11/07/2013 12:00 PM, Sander Grendelman wrote:
>>
>> On Wed, Nov 6, 2013 at 5:41 PM, Martijn Grendelman
>>  wrote:
>>>
>>>
>>> So is qemu-kvm-rhev from mentioned source RPM a drop-in replacement for
>>> qemu-kvm from CentOS ? Would it make sense to install it instead of
>>> qemu-kvm?
>>
>>
>> That's the way it worked for me.
>>
>> - Put one host in maintenance
>> - Install the *-rhev packages on that host
>>   (yum localinstall qemu-*rhev*.rpm)
>>this replaces the regular packages:
>>Nov 06 16:15:38 Installed:
>> 2:qemu-img-rhev-0.12.1.2-2.355.el6.9.x86_64
>>Nov 06 16:15:40 Installed:
>> 2:qemu-kvm-rhev-0.12.1.2-2.355.el6.9.x86_64
>>Nov 06 16:15:40 Installed:
>> 2:qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.9.x86_64
>>Nov 06 16:15:40 Erased: qemu-kvm-tools
>>Nov 06 16:15:41 Erased: qemu-kvm
>>Nov 06 16:15:41 Erased: qemu-img
>> - Restart vdsm, enable host migrate test VM to host, test live storage
>> migration: OK
>> - Put host back in maintenance and reboot.
>> - Activate host and repeat maintenance/installation steps with other
>> hosts.
>>
>> The "solution" for the live snapshot problem as proposed in
>> https://bugzilla.redhat.com/show_bug.cgi?id=1009100 is:
>>
>>   |BZ| Vdsm should check (hopefully via libvirt) if the underlying
>> qemu supports live snapshot, and report this feature to Engine.
>>   |BZ| If it does not, block this feature in UI.
>>
>> That is something that should be implemented; however, there should
>> also be some way to use a full-featured kvm with oVirt.
>>
>> Can oVirt carry the qemu-*rhev* packages?
>> Or is this something that should be in epel? Or CentOS?
>
>
> I think worth veryfing with centos first on their plans to carry this
> package before we carry (and maintain it) in ovirt repos.
>


Re: [Users] Live storage migration snapshot removal (fails)

2013-11-11 Thread Sander Grendelman
I can open a bug on this.

Should I file it for the entire "snapshot removal fails"-problem or
just this error?


I've looked at the parsing error and the problem seems to be that the
qemuImg.check() function expects a qemu-img version that outputs a
string like: "Image end offset: 262144".

The only version I could find that does that is the one in the F19 virt-preview
repository. Stock F19 qemu-img doesn't have this output. Nor do the stock EL6
or qemu-img-rhev EL6 versions.

The only function that calls qemuImg.check is blockVolume.shrinkToOptimalSize()
this function needs (and uses) the parsed offset to shrink the LV containing the
qcow2 image.
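The fallback discussed below (use the old size calculation when the offset line is absent) is cheap to express; a sketch of such a tolerant parser, not the actual vdsm patch:

```python
import re

def parse_check_output(stdout_lines):
    """Return the 'Image end offset' reported by `qemu-img check`, or
    None when this qemu-img build does not print it (older versions
    emit only 'No errors were found on the image.').  A None result
    tells the caller to fall back to the old volume-size calculation
    instead of raising QImgError."""
    for line in stdout_lines:
        m = re.match(r"Image end offset: (\d+)$", line.strip())
        if m:
            return int(m.group(1))
    return None
```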

On Fri, Nov 8, 2013 at 4:33 PM, Federico Simoncelli  wrote:
> - Original Message -
>> From: "Dan Kenigsberg" 
>> To: "Sander Grendelman" 
>> Cc: users@ovirt.org, fsimo...@redhat.com, ykap...@redhat.com
>> Sent: Friday, November 8, 2013 4:06:53 PM
>> Subject: Re: [Users] Live storage migration snapshot removal (fails)
>>
>> On Fri, Nov 08, 2013 at 02:20:39PM +0100, Sander Grendelman wrote:
>>
>> 
>>
>> > 9d4e8a43-4851-42ff-a684-f3d802527cf7/c512267d-ebba-4907-a782-fec9b6c95116
>> > 52178458-1764-4317-b85b-71843054aae9::WARNING::2013-11-08
>> > 14:02:53,772::image::1164::Storage.Image::(merge) Auto shrink after
>> > merge failed
>> > Traceback (most recent call last):
>> >   File "/usr/share/vdsm/storage/image.py", line 1162, in merge
>> > srcVol.shrinkToOptimalSize()
>> >   File "/usr/share/vdsm/storage/blockVolume.py", line 320, in
>> > shrinkToOptimalSize
>> > qemuImg.FORMAT.QCOW2)
>> >   File "/usr/lib64/python2.6/site-packages/vdsm/qemuImg.py", line 109, in
>> >   check
>> > raise QImgError(rc, out, err, "unable to parse qemu-img check output")
>> > QImgError: ecode=0, stdout=['No errors were found on the image.'],
>> > stderr=[], message=unable to parse qemu-img check output
>>
>>
>> I'm not sure that it's the only problem in this flow, but there's a
>> clear bug in lib/vdsm/qemuImg.py's check() function: it fails to parse
>> the output of qemu-img.
>>
>> Would you open a bug on that? I found no open one.
>
> I remember that this was discussed and the agreement was that if the offset
> is not reported by qemu-img we should have used the old method to calculate
> the new volume size.
>
> We'll probably need to verify it. Sander can you open a bug on this?
>
> Thanks,
> --
> Federico


Re: [Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

2013-11-07 Thread Sander Grendelman
On Wed, Nov 6, 2013 at 5:41 PM, Martijn Grendelman
 wrote:
>
> So is qemu-kvm-rhev from mentioned source RPM a drop-in replacement for
> qemu-kvm from CentOS ? Would it make sense to install it instead of
> qemu-kvm?

That's the way it worked for me.

- Put one host in maintenance
- Install the *-rhev packages on that host
  (yum localinstall qemu-*rhev*.rpm)
  this replaces the regular packages:
  Nov 06 16:15:38 Installed: 2:qemu-img-rhev-0.12.1.2-2.355.el6.9.x86_64
  Nov 06 16:15:40 Installed: 2:qemu-kvm-rhev-0.12.1.2-2.355.el6.9.x86_64
  Nov 06 16:15:40 Installed:
2:qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.9.x86_64
  Nov 06 16:15:40 Erased: qemu-kvm-tools
  Nov 06 16:15:41 Erased: qemu-kvm
  Nov 06 16:15:41 Erased: qemu-img
- Restart vdsm, enable host migrate test VM to host, test live storage
migration: OK
- Put host back in maintenance and reboot.
- Activate host and repeat maintenance/installation steps with other hosts.

The "solution" for the live snapshot problem as proposed in
https://bugzilla.redhat.com/show_bug.cgi?id=1009100 is:

 |BZ| Vdsm should check (hopefully via libvirt) if the underlying
qemu supports live snapshot, and report this feature to Engine.
 |BZ| If it does not, block this feature in UI.

That is something that should be implemented; however, there should
also be some way to use a full-featured kvm with oVirt.

Can oVirt carry the qemu-*rhev* packages?
Or is this something that should be in epel? Or CentOS?


Re: [Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

2013-11-06 Thread Sander Grendelman
> On Wed, Nov 6, 2013 at 12:42 PM, Jakub Bittner <> wrote:
>>
>> do you use qemu-kvm or qemu-kvm-rhev rpm?
>
> I have same problem here. I think it is related to
> https://bugzilla.redhat.com/show_bug.cgi?id=1009100, because before live
> migration takes place it tries to create live snapshot and it fails.. So, is
> there somewhere qemu-kvm-rhev for centos?
>
> Offline migration test still in progress, but I believe it is going to work,
> because I dont need live snapshot to be created.

I can confirm that live storage migration works with qemu-kvm-rhev.

For my test I have built the package using
http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.355.el6_4.9.src.rpm

[root@gnkvm01 ~]# rpm -qa '*kvm*'
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.9.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.9.x86_64
[root@gnkvm01 ~]#


Re: [Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

2013-11-06 Thread Sander Grendelman
> do you use qemu-kvm or qemu-kvm-rhev rpm?

qemu-kvm:

[root@gnkvm01 ~]# rpm -qa '*kvm*'
qemu-kvm-tools-0.12.1.2-2.355.0.1.el6_4.9.x86_64
qemu-kvm-0.12.1.2-2.355.0.1.el6_4.9.x86_64
[root@gnkvm01 ~]# yum list |grep kvm
qemu-kvm.x86_64 2:0.12.1.2-2.355.0.1.el6_4.9
qemu-kvm-tools.x86_64   2:0.12.1.2-2.355.0.1.el6_4.9
[root@gnkvm01 ~]#

It seems that qemu-kvm-rhev is not available for CentOS/oVirt?

On Wed, Nov 6, 2013 at 12:06 PM, Itamar Heim  wrote:
> On 11/06/2013 10:42 AM, Sander Grendelman wrote:
>>
>> Can anyone reproduce / comment on this?
>>
>> Can this be caused by
>> http://www.ovirt.org/Vdsm_Developers#Missing_dependencies_on_RHEL_6.4
>> ?
>
> do you use qemu-kvm or qemu-kvm-rhev rpm?


Re: [Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

2013-11-06 Thread Sander Grendelman
Can anyone reproduce / comment on this?

Can this be caused by
http://www.ovirt.org/Vdsm_Developers#Missing_dependencies_on_RHEL_6.4
?




[Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

2013-11-04 Thread Sander Grendelman
Moving the storage of a (running) VM to a different (FC) storage domain fails.

Steps to reproduce:
1) Create new VM
2) Start VM
3) Start move of the VM to a different storage domain

When I look at the logs it seems that vdsm/libvirt tries to use an
option that is unsupported by libvirt or the qemu-kvm version on
CentOS 6.4:

"libvirtError: unsupported configuration: reuse is not supported with
this QEMU binary"

Information in the "Events" section of the oVirt engine manager:

2013-Nov-04, 14:45 VM migratest powered off by grendelmans (Host: gnkvm01).
2013-Nov-04, 14:05 User grendelmans moving disk migratest_Disk1 to
domain gneva03_vmdisk02.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage
Migration' creation for VM 'migratest' has been completed.
2013-Nov-04, 14:04 Failed to create live snapshot 'Auto-generated for
Live Storage Migration' for VM 'migratest'. VM restart is recommended.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage
Migration' creation for VM 'migratest' was initiated by grendelmans.
2013-Nov-04, 14:04 VM migratest started on Host gnkvm01
2013-Nov-04, 14:03 VM migratest was started by grendelmans (Host: gnkvm01).

Information from the vdsm log:

Thread-100903::DEBUG::2013-11-04
14:04:56,548::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-100903::DEBUG::2013-11-04
14:04:56,615::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' released the operation mutex
Thread-100903::DEBUG::2013-11-04
14:04:56,622::blockVolume::588::Storage.Misc.excCmd::(getMetadata)
'/bin/dd iflag=direct skip=38 bs=512
if=/dev/dfbbc8dd-bfae-44e1-8876-2bb82921565a/metadata count=1' (cwd
None)
Thread-100903::DEBUG::2013-11-04
14:04:56,642::blockVolume::588::Storage.Misc.excCmd::(getMetadata)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B)
copied, 0.000208694 s, 2.5 MB/s\n'; <rc> = 0
Thread-100903::DEBUG::2013-11-04
14:04:56,643::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0
records in', '1+0 records out', '512 bytes (512 B) copied, 0.000208694
s, 2.5 MB/s'], size: 512
Thread-100903::INFO::2013-11-04
14:04:56,644::logUtils::47::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'chain': [{'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'volType': 'path'}, 'volumeID':
'7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'volumeID':
'4d05730d-433c-40d9-8600-6fb0eb5af821', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}]}
Thread-100903::DEBUG::2013-11-04
14:04:56,644::task::1168::TaskManager.Task::(prepare)
Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::finished: {'info':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'chain': [{'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'volType': 'path'}, 'volumeID':
'7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-