Piotr Jasiukajtis wrote:
On Tue, Feb 17, 2009 at 2:37 PM, Mark Johnson <[email protected]> wrote:
Are you using PV drivers? What's the disk entry on the
guest?
No PV drivers.
You really should be using PV drivers. IO performance
is very bad without them :-)

Without PV drivers, all the disk accesses are done
in the qemu code, i.e. you're not using a backend driver
at all (although the backends are loaded in case you install
PV drivers).
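
For reference, a quick way to see what disk entry a guest actually
has (the guest name here is just a placeholder):

# virsh dumpxml my-hvm-guest | grep -i disk

or just look at the disk = [...] line in the guest config. Without
PV drivers that device is serviced entirely by qemu-dm in dom0.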
I tried PV drivers on another host, and there is still a long way to
go to improve the performance of HVM systems (Windows, S10).

With respect to bare metal, or to other virtualization
platforms?

Were the guests MP?  Did you give dom0 some
dedicated CPU cores?
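
FWIW, a common way to do that is via the hypervisor boot options in
menu.lst (the vcpu count here is just an example):

kernel$ /boot/$ISADIR/xen.gz dom0_max_vcpus=2 dom0_vcpus_pin

and then verify the pinning from dom0 with:

# xm vcpu-list Domain-0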




How did you create the vmdk file?  With vdiskadm?
No, it's created from physical machine via vmware converter.
ok. On a side note, one of these days we will be
putting back convert functionality so you can move
between a vmdk, vdi, vhd, a zvol, a disk,
etc.
I guess people are waiting for these days :)
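
For reference, creating a fresh virtual disk with vdiskadm looks
roughly like this (options from memory, check vdiskadm(1M) for the
exact syntax):

# vdiskadm create -s 10g -t vmdk:sparse /export/xvm/guests/disk0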


Is the disk on a local filesystem?  If so UFS or ZFS?
Local disk ZFS root. SXCE107.

I assume you are limiting the zfs ARC cache? You have to
do this for dom0.
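
For anyone following along, the usual way to cap the ARC is a line
in /etc/system, e.g. (0x40000000 = 1GB is just an example value,
size it for your dom0 memory), followed by a reboot:

set zfs:zfs_arc_max=0x40000000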
Right. Anyway, I found there are some issues with local ZFS pools and dom0.
Sometimes 'zfs snapshot' from dom0 can kill (halt) the machine.
I guess you are aware of that?

No, I wasn't.  What build are the dom0 bits?



If ZFS, did you set the recordsize to 8k? (zvols default
to 8k.)


i.e.


# zfs get recordsize tank/guests

: alpha[1]#; zfs set recordsize=8k tank/guests
: alpha[1]#; zfs get recordsize tank/guests
NAME         PROPERTY    VALUE    SOURCE
tank/guests  recordsize  8K       local
: alpha[1]#;
My vmdk file:

# ls -alh /export/xvm/isos/test.vmdk
-rw-------   1 xvm      root        1.9G Feb 17 11:44
/export/xvm/isos/test.vmdk

# zfs get recordsize rpool/export/xvm/isos
NAME                   PROPERTY    VALUE    SOURCE
rpool/export/xvm/isos  recordsize  128K     default

Can you try setting this to 8k, then moving the file
out, then back into the zfs filesystem?
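
Something like this should do it, since recordsize only applies to
blocks written after the change (using the paths from above; any
scratch location works for the copy):

# zfs set recordsize=8k rpool/export/xvm/isos
# cp /export/xvm/isos/test.vmdk /var/tmp/test.vmdk
# rm /export/xvm/isos/test.vmdk
# cp /var/tmp/test.vmdk /export/xvm/isos/test.vmdk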
I will try it next time.
I don't have that machine anymore.

OK, thanks.



MRJ


