On Mon, Jan 11, 2010 at 08:39:47AM -0500, Mark Johnson wrote:
> 
> 
> [email protected] wrote:
> >We have 2 sets of identical hardware, identically configured, both
> >exhibiting disk I/O performance problems with 2 of their 4 DomUs.
> >
> >The DomUs in question each act as an NFS fileserver. The fileserver is
> >made up of 2 zvols: one holds the DomU (Solaris 10) and the other is
> >mounted to the DomU and contains the users' files, which are then NFS
> >exported. Both zvols are formatted as UFS. For the first 25-30 NFS
> >clients performance is OK; after that, client performance drops off
> >rapidly, e.g. an "ls -l" of a user's home area taking 90 seconds.
> >Everything is stock - no tuning.
> 
> What does xentop report for the guest? For both dom0 and the domU,
> what does iostat -x report?

I'm getting one of my guys to stress the system again to get these numbers.
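
In the meantime, for anyone wanting to follow along, I believe the data
being asked for is roughly this (flags from memory, so treat as a sketch):

    # in dom0: per-domain CPU/network/block stats, batch mode, 5s interval
    xentop -b -d 5

    # in dom0 and in each DomU: extended per-device stats every 5 seconds
    iostat -xn 5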

> What Solaris 10 update?

10u7

> Have you tried a PV opensolaris guest for the NFS server
> running the latest bits?  If not, can you do this? There
> have been some xnf (NIC driver) fixes which could explain
> this.

No, I assume you mean something later than the 2009.06 DVD release. I've
never built a release myself, relying on you guys to do the work for me :-),
but if you mean building an ON release as per
http://hub.opensolaris.org/bin/view/downloads/on then I'll give it a go.
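
(If a full ON build is overkill, my understanding is that a 2009.06 install
can instead be pointed at the development package repository and upgraded in
place, something like the below; the repository URL is my assumption, so
correct me if it's wrong:)

    # switch the image to the dev repository and upgrade in place
    pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
    pfexec pkg image-update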

> >Has anyone any suggestions for what I can do to improve matters? Would
> >using ZFS rather than UFS for the user disk make a difference?
> 
> 
> It should not.
> 
> 
> >The underlying disks are managed by a hardware RAID controller so the 
> >zpool in the Dom0 just sees a single disk.
> 
> Why wouldn't you use the disks as a JBOD and give them all to
> ZFS?

I'm not sure the Sun/StorageTek controller has a passthrough mode.
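
If it does have one, I take it the idea would be a pool built directly on
the member disks, along these lines (device names made up):

    # dom0: raidz2 pool across the raw disks instead of one hardware-RAID LUN
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0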

Thanks for the tips, I'll be back in touch when there is more news.

John

-- 
John Landamore

Department of Computer Science
University of Leicester
University Road, LEICESTER, LE1 7RH
[email protected]
Phone: +44 (0)116 2523410       Fax: +44 (0)116 2523604
