Scott L. Burson wrote:
> Hi,
> 
> I'm new to Xen.  I booted my Solaris machine (snv_86 on a Tyan S2882-D, 
> dual-dual-core (Opteron 275s), 24GB DRAM, / is on a SCSI drive, also have a 
> ZFS pool on 4 SATA drives) under Xen for the first time a few days ago, and 
> quickly ran into problems.  Before I even installed any domUs, I started a 
> build of a large C++ app I work on, the build tree being on the ZFS 
> filesystem, and after a few minutes it hung.  My recollection (slightly fuzzy 
> now) is that I couldn't access the ZFS pool after that.  I had no problem 
> with the root filesystem, but when I tried to sync in preparation to reboot, 
> the sync hung.  I had to use the reset button.
> 
> I had no time for more experiments and needed to get my build done, so I 
> booted back into metal Solaris, and haven't had time to repeat the 
> experiment.  Still, this is discouraging to say the least, as far as the 
> possibility of using Xen goes.
> 
> Anyone know what's going on?  I've been running Solaris on this machine for a 
> couple of years without a hitch, so I don't think it's a hardware problem.


When you use ZFS and Xen together, there's some tuning
you need to do; this will all be handled for you in a
finished product.

First, your disks need to be running in SATA mode
rather than legacy ATA mode; i.e., format should report
c0t0d0 rather than c0d0. The performance of disks
running in IDE mode is unbearable, and you run into all
sorts of problems in the IDE/ATA driver.
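A quick way to check which driver you're on, sketched with the stock format(1M) utility (typically the mode itself is an AHCI vs. legacy/compatibility setting in the BIOS):

```shell
# Devices named cXtYdZ (with a "t" target number) are attached via the
# SATA framework; names like cXdY (no "t") mean the legacy IDE/ATA driver.
echo | format | grep 'c[0-9]*d[0-9]*'
```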

You should limit dom0's memory to 2G, e.g. on the GRUB kernel line:
    kernel /boot/amd64/xen.gz com1=9600,8n1 console=com1 dom0_mem=2G
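For context, a full menu.lst entry might look something like the following. This is a sketch from a typical snv install; the kernel and module paths vary by build, so verify them against your own working entry before rebooting:

```shell
# Hypothetical /boot/grub/menu.lst entry -- paths are assumptions,
# check them against your existing boot entries.
title Solaris xVM
  kernel$ /boot/$ISADIR/xen.gz com1=9600,8n1 console=com1 dom0_mem=2G
  module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix
  module$ /platform/i86pc/$ISADIR/boot_archive
```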

The ZFS ARC also has to be limited, e.g.:
   echo "set zfs:zfs_arc_max = 0x10000000" >> /etc/system
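As a sanity check on the hex value (pure arithmetic; 256 MiB is just what 0x10000000 works out to, not a recommendation -- pick a cap that suits your workload):

```shell
# Convert the zfs_arc_max value to something readable.
printf '%d bytes = %d MiB\n' 0x10000000 $((0x10000000 / 1024 / 1024))
```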

If you are using files rather than zvols for guest disks,
you should set the recordsize to 8k, e.g.:
   zfs set recordsize=8k tank/guests
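If you're creating the dataset fresh, it's simpler to set the property at creation time, since recordsize only affects newly written blocks (tank/guests is just the example name from above):

```shell
# Set recordsize before copying any guest images into the dataset;
# files written earlier keep their old block size.
zfs create -o recordsize=8k tank/guests
zfs get recordsize tank/guests
```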

You should make sure dom0 doesn't balloon down
significantly:
   svccfg -s xvm/xend setprop config/dom0-min-mem 2000
   svcadm refresh xvm/xend; svcadm restart xvm/xend
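To confirm the setting stuck after the restart (the FMRI spelling below is an assumption based on the service name used above; adjust if svcs reports it differently):

```shell
# Read the property back, then check dom0's current allocation;
# the Mem column should stay near your dom0_mem value.
svcprop -p config/dom0-min-mem svc:/system/xvm/xend:default
xm list Domain-0
```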



MRJ

_______________________________________________
xen-discuss mailing list
[email protected]