I am very new to Xen. I have used VMware and VirtualBox *desktop* products in 
the past. I am also familiar with zones and ldoms. I realize that none of that 
is particularly relevant to Xen, but I think it helps get a lot of the *really* 
basic stuff out of the way.

I followed the wiki at
http://hub.opensolaris.org/bin/view/Community+Group+xen/2008_11_dom0 to set up
dom0. I believe that is fully functional (minus some milestone/xvm issues I
had along the way).

I followed the wiki at
http://hub.opensolaris.org/bin/view/Community+Group+xen/virtinstall to set up a
couple of domUs. I have a PV OpenSolaris box, an HVM Windows 2k8 box, and an
HVM S10u8 box. The S10 box can be live migrated between two dom0s, which I am
pretty excited about. (The other two are zvols on rpool.)
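For reference, this is roughly the migration that worked for the S10 box (the
hostname is mine, and I'm assuming xend relocation is already enabled on the
target dom0, since that's how mine is set up):

```shell
# Live-migrate the HVM S10u8 domU to the second dom0 (neoga2 is my
# hypothetical target hostname; substitute your own).
xm migrate --live s10-test neoga2
```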

I wanted to try to move/clone the OpenSolaris box to have its disks on NFS
(VMDK?), but I am really going off the deep end there. I got vdiskadm figured
out enough to supposedly convert the disk to VMDK, but I haven't the foggiest
idea how to "switch" the domain over to the new disk. virsh seems to indicate
that you aren't supposed to edit the XML by hand, even though there is an
"edit" subcommand that appears to do exactly that :-/
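My best guess at the "switch" procedure is a dump/edit/define round trip, since
the GUI won't let me touch the disk. The paths below are from my own setup and
the sed substitution is purely hypothetical; I have no idea yet whether this is
the sanctioned way:

```shell
# Dump the current domain definition to a file.
virsh dumpxml neoga-test1 > /tmp/neoga-test1.xml

# Point the disk <source> at the NFS-hosted vmdk (both paths are
# placeholders from my environment, not anything canonical).
sed 's|/dev/zvol/dsk/rpool/neoga-test1|/xendisks/neoga-test1/neoga-test1-disk0-flat.vmdk|' \
    /tmp/neoga-test1.xml > /tmp/neoga-test1-nfs.xml

# Re-define the domain from the edited XML.
virsh define /tmp/neoga-test1-nfs.xml
```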

I had asked in ##xen and they said I shouldn't be using VMDK; I should be using 
tap:aio. As is typical of open source support, instead of giving me the answer 
they led me down another path. They also suggested that I use "xm" instead of 
virsh, yet our man pages list virsh as the preferred mechanism and xm as 
legacy. The best I can do to "extract" the configuration of a VM from xm is 
"xm list -l DOMAIN", but that's not really the setting=value format I have 
seen elsewhere. I am not sure if tap:aio is supported on OpenSolaris, nor am I 
sure if it's supported over NFS. I also need to change the network: my dom0 
will be on a "private" network, and I want my domUs on a "service" network. I 
usually use tagged VLANs (vnics) with my zones, but I haven't figured that out 
for xVM either.
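For what it's worth, this is how I build tagged-VLAN vnics for zones today with
Crossbow; I'm assuming the same vnic could be handed to a domU, but that's
exactly the part I haven't figured out (link and VLAN names below are from my
environment):

```shell
# Create a vnic on the aggregation, tagged with VLAN 634
# (vnic634_0 is just my naming convention).
dladm create-vnic -l aggr0 -v 634 vnic634_0

# Confirm the vnic exists and shows the right VID.
dladm show-vnic
```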

*** First question: Does OpenSolaris support tap:aio? The ##xen people say 
it's the best-performing "file"-based virtual disk. Does it work over NFS?


*** Next question: How do I find out what disk "drivers" are supported, for 
example? There are a couple of examples on the wiki, but I didn't see anything 
in the man pages, nor any help or list option that would tell me what is 
supported. It seems like there should be a list somewhere of driver/subdriver 
pairs with some description. At the very least, maybe there is a way to list a 
library directory or something?


*** Next question: virt-manager is broken in nv_126 (known bug). I symlinked 
the vte module per the workarounds in the bug, and now it opens, but it gets a 
libgnomebreakpad error (which I think is safe to ignore). I was able to change 
the "boot device" (net/disk) for my HVM S10 box, but the GUI seems very 
limited. I can't change the disk or network settings. I can delete the disk 
and re-add it, but it doesn't appear to do it right.

#s10-test (HVM) disk
    (device
        (tap
            (uuid f60d38d0-d0cc-f1ab-a437-435238e924cb)
            (bootable 1)
            (devid 768)
            (dev hda:disk)
            (uname tap:vdisk:/xendisks/s10-test/disk0)
            (mode w)
        )
    )

# OS-test (PV) disk
    (device
        (vbd
            (uuid b9805e15-b746-fb41-d9c6-eb6bcc0cab91)
            (bootable 1)
            (driver paravirtualised)
            (dev xvda)
            (uname file:/xendisks/neoga-test1/neoga-test1-disk0-flat.vmdk)
            (mode w)
        )
    )

*** Next question: Also in virt-manager, I tried to remove and re-add the 
network interface on the proper vnic, but it's greyed out. I tried to add it 
with virsh, but it really doesn't like me:

neoga# virsh help attach-interface
  NAME
    attach-interface - attach network interface

  SYNOPSIS
    attach-interface <domain> <type> <source> [<target>] [<mac>] [<script>] 
[--capped-bandwidth <string>] [--vlanid <number>]

  DESCRIPTION
    Attach new network interface.

  OPTIONS
    <domain>         domain name, id or uuid
    <type>           network interface type
    <source>         source of network interface
    <target>         target network name
    <mac>            MAC address
    <script>         script used to bridge network interface
    --capped-bandwidth <string>  bandwidth limit for this interface
    --vlanid <number>  VLAN ID attached to this interface


neoga# virsh attach-interface neoga-test1 ethernet aggr0 --vlanid 634
error: No support ethernet in command 'attach-interface'

neoga# virsh attach-interface neoga-test1 vif aggr0 --vlanid 634 
error: No support vif in command 'attach-interface'

neoga# virsh attach-interface neoga-test1 vif-vnic aggr0 --vlanid 634 
error: No support vif-vnic in command 'attach-interface'

neoga# virsh attach-interface neoga-test1 vnic aggr0 --vlanid 634 
error: No support vnic in command 'attach-interface'

----- So? What do I put for type? I can't find a list of acceptable types in 
the man pages, just as with disks; I am sure I am just not looking in the 
right place :)

*** Next question: I was presumably able to add it with "xm", but it doesn't 
look like it's bridged to aggr0 anymore. The "xm list -l" output no longer has 
the (bridge aggr0) entry.

neoga# xm network-attach neoga-test1 vlanid=634
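My guess is that I need to name the link explicitly as well, though I'm not
sure bridge= is even the right key for a vnic on the OpenSolaris side; this is
the next thing I intend to try:

```shell
# Attach the vif, explicitly naming the physical link along with the
# VLAN tag (bridge=aggr0 is my guess, not something I've confirmed).
xm network-attach neoga-test1 bridge=aggr0 vlanid=634

# Then check whether (bridge aggr0) shows up in the vif section.
xm list -l neoga-test1
```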


*** Next question: virsh seems to have some sort of "remote" option, but 
apparently (from the libvirt.org page) it requires some extra setup. Before I 
go too far down that road, has that been wrapped or automated in any way? I 
would assume not? Do we have any documentation on OpenSolaris specifics, or 
can we mostly follow the Linux docs?
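From skimming the libvirt.org pages, the remote bits look like they come down
to connection URIs rather than any extra wrapping; this is untested on my end,
and the hostname is mine:

```shell
# Local xVM connection (what virsh uses by default here, I believe).
virsh -c xen:/// list

# Remote connection tunnelled over ssh, per the libvirt remote docs --
# assuming this transport works against the OpenSolaris libvirt build.
virsh -c xen+ssh://root@neoga2/ list
```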

*** Next question: Does Xen on OpenSolaris support virtual Fibre Channel?

I am sure I will have a lot more as I go through. I am planning to deploy a 
"production" infrastructure into a "private cloud" mostly based on OpenSolaris 
machines.

Tommy
-- 
This message posted from opensolaris.org
_______________________________________________
xen-discuss mailing list
[email protected]
