Re: [CentOS-virt] KVM online backup images

2012-11-27 Thread Philip Durbin
see also http://fedoraproject.org/wiki/Features/Virt_Live_Snapshots#Live_backup 
and http://www.redhat.com/archives/libvir-list/2012-July/msg00782.html via 
http://irclog.perlgeek.de/crimsonfu/2012-10-24
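
For the curious, the disk-only workflow those links describe boils down to 
something like this (a sketch, assuming a qemu/libvirt new enough for live 
snapshots, e.g. Fedora 17+; the guest name and paths are made up):

  # take a disk-only external snapshot; the guest keeps running and new
  # writes go to a qcow2 overlay file
  virsh snapshot-create-as myguest backup-snap --disk-only --atomic

  # the original image is now a stable backing file; copy it off
  rsync -av /var/lib/libvirt/images/myguest.qcow2 backup01:/backups/

  # a new enough libvirt/qemu can later merge the overlay back in:
  virsh blockcommit myguest vda --active --pivot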

On Nov 27, 2012, at 6:23 AM, Rudi Servo rudise...@gmail.com wrote:

 I don't believe that CentOS is yet capable of such a feature; live backup is 
 recent and AFAIK it's available on Fedora 17/18.
 To work around this issue I use DRBD and LVM snapshots.
 
 This feature is a must-have since it can snapshot disk-only (i.e. 
 qcow2), making it easier to rsync and copy an entire disk without having the 
 whole storage allocated or shuffling big LVMs back and forth.
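 
 (For reference, a minimal sketch of the LVM-snapshot side of that 
 workaround; the volume group and LV names are made up:)
 
  # snapshot the LV backing the VM disk; the snapshot is a frozen,
  # crash-consistent view while the guest keeps writing to the origin
  lvcreate --snapshot --size 5G --name vmdisk-snap /dev/vg_guests/vmdisk
 
  # copy the snapshot off, then drop it
  dd if=/dev/vg_guests/vmdisk-snap bs=4M | gzip > /backup/vmdisk.img.gz
  lvremove -f /dev/vg_guests/vmdisk-snap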
 
 Hope I helped
 
 On 11/27/2012 07:45 AM, Andry Michaelidou wrote:
 Hello to you all!
 
 We are implementing KVM virtualization here at the University for our 
 servers and services, and I was wondering if anyone has tried to automatically 
 back up images.
 I am actually using logical volumes for the VM guests. All virtual clients 
 are installed in their own LVM logical volumes. We already use IBM TSM for 
 backup, as we did when we had physical machines, i.e. install the client in 
 the OS and manage file and data backup.
 I want to have an image backup in addition to the file backup, but I want to 
 take the image online, without pausing or suspending the VM guests.
 Has anyone tried to create image backups online? What about 
 http://wiki.qemu.org/Features/Livebackup? 
 Can you please advise? 
 --
 Andry Michaelidou Papa | IT Systems Administrator | Department of Computer 
 Science | University of Cyprus 
 Tel: +357.22.892734 | Fax: +357.22.8927231 | http://www.cs.ucy.ac.cy
 
 
 
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Package lists for Cloud images

2012-10-04 Thread Philip Durbin
On 10/3/12 5:26 PM, Karanbir Singh wrote:
 we plan on publishing Vagrant boxes as well - I've been talking with
 Mitchell to get them listed on vagrantup as well, and included in the
 docs he publishes.

Great news! Thanks, Karanbir!

In the meantime, if anyone knows or trusts any of the CentOS base boxes 
here, please let me know: http://www.vagrantbox.es
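
For anyone who hasn't tried one of those boxes, the workflow is roughly as 
follows (the box name and URL are placeholders):

  vagrant box add centos-6 http://example.com/centos-6.box
  vagrant init centos-6
  vagrant up
  vagrant ssh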

Thanks,

Phil
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Package lists for Cloud images

2012-10-03 Thread Philip Durbin
could these be used as vagrant base boxes?

On Oct 3, 2012, at 12:29 PM, Karanbir Singh mail-li...@karan.org wrote:

 hi Guys,
 
 As we get ready to start publishing Cloud Images ( or rather images
 consumable in various virt platforms, including public and private
 clouds ) - it would be great to have a baseline package manifest worked
 out.
 
 What / how many images should we build? At this time we were thinking of
 doing:
 
 - CentOS-5 32bit minimal
 - CentOS-6 32bit minimal
 
 - CentOS-5 64bit minimal
 - CentOS-6 64bit minimal
 
 - CentOS-5 64bit LAMP
 - CentOS-6 64bit LAMP
 
 What would be the minimal functional requirements people would expect
 from these images, and what RPMs should be installed? Should root
 login be enabled, or should we require people to go in via a 'centos'
 user? Should the image be self-updating, or should we have a post-login
 message that indicates outstanding updates?
 
 
 
 -- 
 Karanbir Singh
 +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
 ICQ: 2522219| Yahoo IM: z00dax  | Gtalk: z00dax
 GnuPG Key : http://www.karan.org/publickey.asc
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] centos 5.8 libvirt disk options

2012-09-27 Thread Philip Durbin
how about this?

virt-v2v -ic 'esx://my-vmware-hypervisor.example.com' -os default --network 
default my-vm

via http://irclog.perlgeek.de/crimsonfu/2012-05-24#i_5632151
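
For the drive-geometry issue mentioned below, one possible escape hatch is to 
bypass libvirt and hand the disk to qemu-kvm directly with explicit CHS 
geometry; a sketch, assuming your qemu-kvm still accepts the legacy -drive 
geometry options (untested with SCO; binary path and image path are made up):

  /usr/libexec/qemu-kvm -m 512 \
    -drive file=/var/lib/libvirt/images/osr5.img,if=ide,cyls=1023,heads=16,secs=63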


On Sep 27, 2012, at 8:20 PM, Bill Campbell cen...@celestial.com wrote:

 I am attempting to use libvirtd/kvm on CentOS 5.latest to migrate a SCO
 OpenServer 5.0.6a VM from the old VMware server.
 
 I have converted the multiple vmdk disk files to a single file, then used
 qemu-img convert to create files for libvirtd, both qcow2 and raw formats.
 
 After many attempts to get this working I'm up against what appears to be a
 brick wall.
 
   + The VMware VMs are using straight 'ide' HD emulation, which has been
 working well for several years.
 
   + The 'ide' on libvirtd appears to map to SATA, which isn't supported by
 OSR5.  I've tried doing a fresh install from CD-ROM, but the
 installation fails to find the hard disk.  I might be able to find the
 appropriate BTLD for this, but that won't help migrating existing VMs.
 
   + When I tried using 'scsi', libvirtd says this isn't supported.  This
 would be my preferred emulation, as we have used SCSI drives since the
 early days of Xenix on Tandy hardware.
 
   + The final problem, if these are solved, is that SCO is funny about its
 drive geometry, and the current versions of libvirtd and qemu don't
 appear to support specifying the geometry (heads, cylinders, etc.).
 
 Am I going to have to resort to using VMware workstation for this?
 
 Bill
 -- 
 INTERNET:   b...@celestial.com  Bill Campbell; Celestial Software LLC
 URL: http://www.celestial.com/  PO Box 820; 6641 E. Mercer Way
 Voice:  (206) 236-1676  Mercer Island, WA 98040-0820
 Fax:(206) 232-9186  Skype: jwccsllc (206) 855-5792
 
 Good decisions should be rewarded and bad decisions should be
 punished. The market does just that with its profits and losses. 
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Walkthrough available for bonding, bridging, and VLAN's?

2012-09-26 Thread Philip Durbin
Hi Nico,

I shared some configs here:

[CentOS-virt] [Advice] CentOS6 + KVM + bonding + bridging

http://lists.centos.org/pipermail/centos-virt/2012-September/003003.html

I hope this helps. I have another config with the trunked VLANs on a separate 
interface (also bonded, as above) if you want it.

Phil

On Sep 26, 2012, at 11:49 PM, Nico Kadel-Garcia nka...@gmail.com wrote:

 Silvertip257, when you did this CentOS 6/KVM/bonding/bridging, did you
 ever get all the parts playing together correctly?
 
 I'm facing a setup with only two NICs, and a need for multiple trunked
 VLANs, bonded pairs, and KVM-based bridges to give the VMs
 exposed IP addresses. I can get basically any 2 out of the 3
 server network components working (bonding, VLANs, or KVM bridging),
 but attempts to pull it all together on CentOS 6.3 fail. I'm finding
 numerous partial references, and a lot of speculation that "this setup
 should work!", but no cases of anyone actually doing it. And I'm
 unable to reach out to the upstream vendor directly until some
 paperwork gets straightened out.
 
 (And oh, I've been away from CentOS for a while, but am in the midst
 of deploying about 50 CentOS VMs on KVM virtualization if I can *get
 this working*)
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] [Advice] CentOS6 + KVM + bonding + bridging

2012-09-06 Thread Philip Durbin
On 09/06/2012 12:19 PM, SilverTip257 wrote:
 My question to the members of this list is what bonding mode(s) are
 you using for a high availability setup?
 I welcome any advice/tips/gotchas on bridging to a bonded interface.

I'm not sure I'd call this high availability... but here's an example of 
bonding two ethernet ports (eth0 and eth1) together into a bond (mode 4) 
and then setting up a bridge for a VLAN (id 375) that some VMs can run on:

[root@kvm01a network-scripts]# grep -iv hwadd ifcfg-eth0
DEVICE=eth0
SLAVE=yes
MASTER=bond0
[root@kvm01a network-scripts]# grep -iv hwadd ifcfg-eth1
DEVICE=eth1
SLAVE=yes
MASTER=bond0
[root@kvm01a network-scripts]# cat ifcfg-bond0 | sed 's/[1-9]/x/g'
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=x0.xxx.xx.xx
NETMASK=xxx.xxx.xxx.0
DNSx=xx0.xxx.xxx.xxx
DNSx=x0.xxx.xx.xx
DNSx=x0.xxx.xx.x0
[root@kvm01a network-scripts]# cat ifcfg-br375
DEVICE=br375
BOOTPROTO=none
TYPE=Bridge
ONBOOT=yes
[root@kvm01a network-scripts]# cat ifcfg-bond0.375
DEVICE=bond0.375
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
BRIDGE=br375
[root@kvm01a network-scripts]# cat /etc/modprobe.d/local.conf
alias bond0 bonding
options bonding mode=4 miimon=100
[root@kvm01a network-scripts]# grep Mode /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
[root@kvm01a network-scripts]# egrep '^V|375' /proc/net/vlan/config
VLAN Dev name| VLAN ID
bond0.375  | 375  | bond0

Repeat ad nauseam for the other VLANs you want to put VMs on (assuming 
your switch is trunking them to your hypervisor).
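
To actually put a guest on that bridge, the matching stanza in the guest's 
libvirt XML looks something like this (a sketch; the virtio model is just my 
suggestion):

  <interface type='bridge'>
    <source bridge='br375'/>
    <model type='virtio'/>
  </interface>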

See also http://backdrift.org/howtonetworkbonding via 
http://irclog.perlgeek.de/crimsonfu/2012-08-15#i_5900501

Phil
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] 802.3ad + Centos 6 + KVM (bridging)

2012-09-05 Thread Philip Durbin
yes, mode 4 works fine

On Sep 5, 2012, at 3:40 PM, aurfalien aurfal...@gmail.com wrote:

 Hi all,
 
 Don't mean to double post as I sent this to the general Centos list.
 
 But, does anyone have 802.3ad (mode 4) working on their CentOS 6 KVM setup?
 
 This would be a bridge+bond setup of course.
 
 If not possible, would I still bond the interfaces on the switch and then 
 bond them in the guest rather than from within the hypervisor?
 
 - aurf
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] 802.3ad + Centos 6 + KVM (bridging)

2012-09-05 Thread Philip Durbin
hmm, CentOS 6.2 I'd say

On Sep 5, 2012, at 3:56 PM, aurfalien aurfal...@gmail.com wrote:

 Hi Philip,
 
 Wondering when you got this setup working?
 
 There were some issues as of April or so.
 
 - aurf
 On Sep 5, 2012, at 12:52 PM, Philip Durbin wrote:
 
 yes, mode 4 works fine
 
 On Sep 5, 2012, at 3:40 PM, aurfalien aurfal...@gmail.com wrote:
 
 Hi all,
 
 Don't mean to double post as I sent this to the general Centos list.
 
 But, does anyone have 802.3ad (mode 4) working on their CentOS 6 KVM setup?
 
 This would be a bridge+bond setup of course.
 
 If not possible, would I still bond the interfaces on the switch and then 
 bond them in the guest rather than from within the hypervisor?
 
 - aurf
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] CentOS 6 kvm disk write performance

2012-08-11 Thread Philip Durbin
Nice post, Julian. It generated some feedback at 
http://irclog.perlgeek.de/crimsonfu/2012-08-10 and a link to 
http://rhsummit.files.wordpress.com/2012/03/wagner_network_perf.pdf

Phil

On Aug 10, 2012, at 8:46 AM, Julian price centos@julianprice.org.uk wrote:

 I have 2 similar servers. Since upgrading one from CentOS 5.5 to 6, disk 
 write performance in KVM guest VMs is much worse.
 
 There are many, many posts about optimising KVM, many mentioning disk 
 performance in CentOS 5 vs 6.  I've tried various changes to speed up write 
 performance, but nothing's made a significant difference so far:
 
 - Install virtio disk drivers in guest
 - update the host software
 - Update RAID firmware to latest version
 - Switch the host disk scheduler to deadline
 - Increase host RAM from 8GB to 24GB
 - Increase guest RAM from 2GB to 4GB
 - Try different kvm cache options (see the sketch just after this list)
 - Switch host from ext4 back to ext3
 - Set noatime on the virtual disk image file
 Note: There is no encryption or on-access virus scanner on any host or guest.
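 
 (For reference, the cache options mentioned in the list are set on the 
 disk's driver element in the guest XML; a typical variant to try, with 
 made-up paths, is:)
 
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source file='/var/lib/libvirt/images/guest.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>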
 
 Below are some of the block write figures in MB/s from bonnie++ with various 
 configurations:
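 
 (A typical bonnie++ invocation for numbers like these, with a placeholder 
 test directory; exact flags may vary by version:)
 
  # -s should be at least twice RAM so the page cache can't absorb the
  # writes; the block-write column of the output is the figure quoted below
  bonnie++ -d /mnt/test -s 16g -u root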
 
 First, figures for the hosts show that the CentOS 6 server is faster:
 
 54  CentOS 5 host
 50  CentOS 5 host
 69  CentOS 6 host
 70  CentOS 6 host
 
 Figures for a CentOS 6 guest running on the CentOS 5 host show that the 
 performance hit is less than 50%:
 
 30  CentOS 6 guest on CentOS 5 host with no optimisations
 27  CentOS 6 guest on CentOS 5 host with no optimisations
 32  CentOS 6 guest on CentOS 5 host with no optimisations
 
 Here are the figures for a CentOS 6 guest running on the CentOS 6 host with 
 various optimisations.  Even with these optimisations, performance doesn't 
 come close to the un-optimised guest running on the CentOS 5 host:
 
  5   No optimisations (i.e. same configuration as on CentOS 5)
  4   deadline scheduler
  5   deadline scheduler
 15   noatime,nodiratime
 14   noatime,nodiratime
 15   noatime
 15   noatime + deadline scheduler
 13   virtio
 13   virtio
 10   virtio + noatime
  9   virtio + noatime
 
 The CentOS 6 server has a better RAID card, different disks and more RAM, 
 which might account for the better CentOS 6 host performance.  But why might 
 the guest write performance be so much worse?
 
 Is this a known problem?  If so, what's the cause?  If not, is there a way 
 to locate the problem rather than using trial and error?
 
 Thanks,
 Julian
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Basic shared storage + KVM

2012-06-27 Thread Philip Durbin
On 06/21/2012 12:13 PM, Dennis Jacobfeuerborn wrote:
 AFAIK you cannot use Swift storage as a Nova volume backend. Also in order
 to make Swift scale you need at least a couple of nodes.

Is this true?  I haven't had a chance to dig into this, but I asked my 
OpenStack guy about this on IRC the other day:

14:48 pdurbin  westmaas: is this true? AFAIK you cannot use Swift 
storage as a Nova volume backend -- [CentOS-virt] Basic shared storage 
+ KVM - http://lists.centos.org/pipermail/centos-virt/2012-June/002943.html

14:51 westmaas pdurbin: hm, I'm not 100% sure on that. let me ask around.

14:52 pdurbin  westmaas: thanks. i thought the point of swift was that 
it would take away all my storage problems. :) that swift would handle 
all the scaling for me

14:54 westmaas all your object storage

14:54 westmaas not necessarily block storage

14:54 westmaas but at the same time, I can't imagine this not being a goal

14:55 pdurbin  well, i thought the vm images were abstracted away into 
objects or whatever

14:55 pdurbin  i need to do some reading, obviously

14:55 westmaas yeah, the projects aren't tied that closely together yet.

14:56 pdurbin  bummer

14:56 pdurbin  agoddard had a great internal reply to that centos-virt 
thread. about iSCSI options

14:57 pdurbin  i don't see him online but i'll have to ask if he minds 
if i copy and paste his reply back to the list

-- http://irclog.perlgeek.de/crimsonfu/2012-06-25#i_5756369

It looks like I need to dig into this documentation:

Storage: objects, blocks, and files - OpenStack Install and Deploy 
Manual  - Essex - 
http://docs.openstack.org/essex/openstack-compute/install/yum/content/terminology-storage.html

If there's other stuff I should be reading, please send me links!

I'm off to the Red Hat Summit the rest of the week and I'll try to ask 
the OpenStack guys about this.

 You might want to take a look at ceph.com
 They offer an object store that can be attached as a block device (like
 iSCSI), but KVM also contains a driver that can talk directly to the storage.
 Then there is CephFS, which is basically a POSIX filesystem on top of the
 object store that has some neat features and would be a closer replacement
 to NFS.

 Another thing to look at is http://www.osrg.net/sheepdog/
 This is very similar to ceph's object storage approach.
 Some large scale benchmarks (1000 nodes) can be found here:
 http://sheepdog.taobao.org/people/zituan/sheepdog1k.html

 Then there is http://www.gluster.org/
 This is probably the most mature solution but I'm not sure if the
 architecture will be able to compete against the other solutions in the
 long run.

These are all good ideas and I need to spend more time reading about 
them.  Thanks.

The main reason I'm writing is that agoddard from above gave me 
permission to copy and paste his thoughts on iSCSI and libvirt.  (He 
isn't subscribed to this mailing list, but I had forwarded what I 
wrote.)  Here is his reply:

From my understanding, these are the options for iSCSI... I'd love to 
hear about it if anyone has thoughts or alternatives :)

1) iSCSI 1 LUN per volume manually

-- provision a LUN manually for a host on the SAN, attach the LUN to 
libvirt and rock.

Pros: fast storage, reliable, multipathing, live migration should work

Cons: manually configuring the LUN when you deploy the VM (and timing 
this right with automated tasks that are expecting a disk), running out 
of LUNs on the SAN, cleaning up orphaned LUNs, etc etc.
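
(For option 1, the libvirt side of attaching the LUN might look like this; 
the SAN hostname and IQN are placeholders:)

  # define an iSCSI-backed storage pool; its LUNs show up as volumes
  virsh pool-define-as sanpool iscsi \
    --source-host san.example.com \
    --source-dev iqn.2012-06.com.example:storage.target1 \
    --target /dev/disk/by-path
  virsh pool-start sanpool
  virsh vol-list sanpool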

2) iSCSI 1 LUN per volume using API

-- provision a LUN for a host on the SAN, using an API to the SAN to 
orchestrate LUN creation during VM creation, attach the LUN to libvirt 
and rock.

Pros: fast storage, reliable, multipathing, live migration should work

Cons: the SAN has to have an API, you gotta write and test a client for 
it, running out of LUNs on the SAN, API also needs to clean up orphaned 
LUNs.

3) large iSCSI LUN with LVM

-- provision a large LUN to the hosts, put LVM on it and create a 
Logical Volume for each VM disk

Pros: Fast disk creation, easy to delete disk when deleting VM, fast LVM 
snapshots & disk cloning, familiar tools (no need to write APIs)

Cons: Volume group corruption if multiple hosts modify the group at the 
same time, or LVM metadata is out of sync between hosts.
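
(For this option, libvirt can also manage the volume group directly as a 
'logical' pool; the VG name is a placeholder:)

  virsh pool-define-as guests logical \
    --source-name vg_guests --target /dev/vg_guests
  virsh pool-start guests
  # each new VM disk is then just a volume in the pool
  virsh vol-create-as guests myvm-disk0 20G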

4) large iSCSI LUN with CLVM

-- provision a large LUN to the hosts, put LVM on it and create a 
Logical Volume for each VM disk, use CLVM (clustered LVM) to prevent 
potential issues with VG corruption

Pros: Fast disk creation, easy to delete disk when deleting VM, familiar 
tools (no need to write APIs), safeguard against corruption.

Cons: No snapshot support

5) large iSCSI LUN with LVM, with LVM operations managed by a single host

-- provision a large LUN to the hosts, put LVM on it and create a 
Logical Volume for each VM disk, hand off all LVM operations to a single 
host, or ensure only a single host is running them one at a time.

Pros: Fast disk creation, easy to delete disk when 

Re: [CentOS-virt] Basic shared storage + KVM

2012-06-21 Thread Philip Durbin
To allow for live migration between hypervisors, I've been using NFS for shared 
storage of the disk images for each of my virtual machines.  Live migration 
works great, but I'm concerned about performance as I put more and more virtual 
machines on this infrastructure.  The Red Hat docs warn that NFS won't scale in 
this situation and that iSCSI is preferred.
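
For context, once the images are on shared storage the live migration itself 
is a one-liner along these lines (hostnames made up):

  virsh migrate --live myguest qemu+ssh://kvm02.example.com/system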

I'm confused about how to effectively use iSCSI with KVM, however. libvirt can 
create new disk images all by itself in a storage pool backed by NFS, like I'm 
using, but libvirt cannot create new disk images in a storage pool backed by 
iSCSI on its own.  One must manually create the LUN on the iSCSI storage each 
time one wants to provision a virtual machine.  I like how easy it is to deploy 
new virtual machines on NFS; I just define the system in Cobbler and kickstart 
it with koan.
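
The NFS-backed pool itself is simple to define; a sketch with a made-up 
server and paths:

  virsh pool-define-as vmimages netfs \
    --source-host nas.example.com \
    --source-path /export/vmimages \
    --target /var/lib/libvirt/images/vmimages
  virsh pool-start vmimages
  # libvirt can then create new disk images in it on its own:
  virsh vol-create-as vmimages newguest.img 20G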

I think my solution to the problem of how to scale shared storage may be 
OpenStack, which promises this as a feature of Swift.  Then, perhaps, I'll be 
able to leave NFS behind.

I'd be happy to hear more stories of how to scale shared storage while 
continuing to allow for live migration.

Phil

On Jun 19, 2012, at 5:50 PM, Andrea Chierici andrea.chier...@cnaf.infn.it 
wrote:

 Hi,
 
   Please help me understand why you are doing it this way?  I'm using
 Xen with integrated storage, but I've been considering separating my
 storage from my virtual hosts.  Conceptually, we can ignore the Xen/KVM
 difference for this discussion.  I would imagine using LVM on the
 storage server then setting the LVs up as iSCSI targets.  On the virtual
 host, I imagine I would just configure the new device and hand it to my
 VM.
 
 I am open to any suggestion. I am not really an expert on iSCSI, so I 
 don't know the best way to implement a solution where a small 
 group of hypervisors supports live migration with shared storage. This way 
 looked rather straightforward and, at some level, documented in the official 
 Red Hat manuals.
 The problem is that there is no mention of this LVM problem :(
 Initially I tried configuring the raw iSCSI device as a storage pool, but 
 virt-manager reported it was 100% occupied even though that was not true 
 (indeed 0% was occupied).
 
 Andrea
 
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

2012-06-04 Thread Philip Durbin
I need to migrate a number of virtual machines from VMware ESX to CentOS 
6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my 
problem at 
https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm 
not sure if there's another RPM in base CentOS or EPEL (something 
standard) I should be using instead.

Originally, I was getting this "No root device found in this operating 
system image" error when attempting to migrate a Windows 2008 VM. . .

 [root@kvm01b ~]# virt-v2v -ic 
'esx://my-vmware-hypervisor.example.com/' \
 -os transferimages --network default my-vm
 virt-v2v: No root device found in this operating system image.

. . . but I solved this with a simple `yum install 
libguestfs-winsupport` since [the docs][v2v-guide] say:

  If you attempt to convert a virtual machine using NTFS without the
  libguestfs-winsupport package installed, the conversion will fail.

Next I got an error about missing drivers for Windows 2008. . .

 [root@kvm01b ~]# virt-v2v -ic 
'esx://my-vmware-hypervisor.example.com/' \
 -os transferimages --network default my-vm
 my-vm_my-vm: 100% []D
 virt-v2v: Installation failed because the following files referenced in
 the configuration file are required, but missing:
 /usr/share/virtio-win/drivers/amd64/Win2008

. . . and I resolved this by grabbing an iso from Fedora at 
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended 
by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers 
and building an RPM from it with this spec file: 
https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec
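
(For anyone reproducing this, building the RPM from that spec is roughly the 
following; whether the iso belongs in SOURCES depends on the spec, so treat 
this as a sketch:)

  # drop the downloaded virtio-win iso into ~/rpmbuild/SOURCES, then:
  rpmbuild -bb virtio-win.spec
  yum localinstall ~/rpmbuild/RPMS/*/virtio-win-*.rpm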

Now, virt-v2v exits without error:

 [root@kvm01b ~]# virt-v2v -ic 
'esx://my-vmware-hypervisor.example.com/' \
 -os transferimages --network default my-vm
 my-vm_my-vm: 100% []D
 virt-v2v: my-vm configured with virtio drivers.
 [root@kvm01b ~]#

Now, my question is: rather than the [virtio-win RPM from the spec file I 
wrote](https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec), 
is there some other more standard RPM in base CentOS or EPEL that will 
resolve the error above?

Here's a bit more detail about my setup:

 [root@kvm01b ~]# cat /etc/redhat-release
 CentOS release 6.2 (Final)
 [root@kvm01b ~]# rpm -q virt-v2v
 virt-v2v-0.8.3-5.el6.x86_64

See also [Bug 605334 – VirtIO driver for windows does not show specific 
OS: Windows 7, Windows 
2003](https://bugzilla.redhat.com/show_bug.cgi?id=605334)

[v2v-guide]: 
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/V2V_Guide/sect-V2V_Guide-Configuration_Changes-Configuration_Changes_for_Windows_Virtual_Machines.html

Thanks,

Phil

p.s. I also posted this on Server Fault.  If you prefer to answer there, 
I would be happy to summarize the answers and report back to the list: 
http://serverfault.com/questions/395347/where-should-centos-users-get-usr-share-virtio-win-drivers-for-virt-v2v
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt