IIRC, zones don't work as NFS servers. Perhaps they do with exclusive-IP 
interfaces, but to get that you basically have to run OpenSolaris (or SXCE, 
but that's gone now).
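
If you do go that route, the exclusive-IP part is just a couple of zonecfg 
settings. A minimal sketch - the zone name (nfszone) and NIC (e1000g1) are 
made-up examples, and the zone needs a physical NIC all to itself:

  # zonecfg -z nfszone
  zonecfg:nfszone> create
  zonecfg:nfszone> set zonepath=/zones/nfszone
  zonecfg:nfszone> set ip-type=exclusive
  zonecfg:nfszone> add net
  zonecfg:nfszone:net> set physical=e1000g1
  zonecfg:nfszone:net> end
  zonecfg:nfszone> verify
  zonecfg:nfszone> commit

The IP address then gets configured from inside the zone, not in zonecfg.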

Sorry I'm not really offering a solution, just trying to save you some time 
going down a path that might not be useful.

Tommy



On Feb 12, 2010, at 3:27 AM, J. Landamore wrote:

> On Tue, Jan 19, 2010 at 08:36:58AM -0500, Mark Johnson wrote:
>> 
>> 
>> J. Landamore wrote:
>>> Mark,
>>> 
>>> Sorry about the delay
>> 
>> Don't see anything obvious below... Did you get a chance to
>> try a PV OpenSolaris guest?
> 
> We tried the PV OpenSolaris guest and there was no improvement.  Our
> next move is to scrap xVM completely and try the hardware with stock
> Solaris 10u8 and zones, to check that we aren't asking too much of the
> hardware.
> 
> John
> 
>> 
>> 
>> 
>> MRJ
>> 
>> 
>> 
>>> On Mon, Jan 11, 2010 at 08:39:47AM -0500, Mark Johnson wrote:
>>>> 
>>>> [email protected] wrote:
>>>>> We have 2 sets of identical hardware, identically configured, both 
>>>>> exhibiting disk I/O performance problems with 2 of their 4 DomUs.
>>>>> 
>>>>> The DomUs in question each act as an NFS fileserver. The fileserver is 
>>>>> made up of 2 zvols: one holds the DomU (Solaris 10) and the other is 
>>>>> mounted in the DomU and contains the users' files, which are then 
>>>>> NFS-exported. Both zvols are formatted as UFS. For the first 25-30 NFS 
>>>>> clients performance is OK; after that, client performance drops off 
>>>>> rapidly, e.g. an "ls -l" of a user's home area takes 90 seconds. 
>>>>> Everything is stock - no tuning.
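>>>>> 
>>>>> As a sketch of the layout (pool and volume names here are 
>>>>> illustrative, not our real ones): the OS zvol and user zvol are 
>>>>> created in dom0,
>>>>> 
>>>>>   # zfs create -V 30G tank/achilles-root
>>>>>   # zfs create -V 500G tank/achilles-users
>>>>>   # newfs /dev/zvol/rdsk/tank/achilles-users
>>>>> 
>>>>> and handed to the guest along these lines:
>>>>> 
>>>>>   disk = ['phy:/dev/zvol/dsk/tank/achilles-root,0,w',
>>>>>           'phy:/dev/zvol/dsk/tank/achilles-users,1,w']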
>>>> What does xentop report for the guest? For both dom0 and the domU,
>>>> what does iostat -x report?
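>>>> 
>>>> Something along these lines should be enough (the intervals are 
>>>> arbitrary):
>>>> 
>>>>   # xentop -b -d 5 -i 12    (batch mode, 5 sec delay, 12 samples)
>>>>   # iostat -x 5             (run in both dom0 and the domU)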
>>> 
>>> During "normal" running the stats are
>>> 
>>> xentop:
>>>     NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR SSID
>>> achilles -----r       2412  120.1    1580792   18.8    1581056      18.8     2    1   161632     8712    4      0    682   1006    0
>>> Domain-0 -----r      90049  164.1    2097152   25.0   no limit       n/a     2    0        0        0    0      0      0      0    0
>>> 
>>> Dom0 iostat:
>>> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
>>> sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
>>> sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
>>> sd2     486.7    0.0 4713.3    0.0  0.0  0.9    2.0   1  79 
>>> 
>>> DomU iostat:
>>> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
>>> cmdk0     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
>>> cmdk1   126.5  331.7 2593.7 3324.2  0.0  7.5   16.4   1  89 
>>> cmdk2     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
>>> 
>>> 
>>> When performance drops off we get
>>> 
>>> xentop:
>>>     NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR SSID
>>> achilles --b---       2475    0.6    1580792   18.8    1581056      18.8     2    1        0        0    4      0      0     14    0
>>> Domain-0 -----r      90140    7.1    2097152   25.0   no limit       n/a     2    0        0        0    0      0      0      0    0
>>> 
>>> Dom0 iostat:
>>> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
>>> sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
>>> sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
>>> sd2      39.7  161.3 2199.0 5919.7  0.0 17.7   88.2   0 100 
>>> 
>>> DomU iostat:
>>> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
>>> cmdk0     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
>>> cmdk1     1.3    1.0   26.7    4.0  5.7 32.0 16164.7 100 100 
>>> cmdk2     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0  
>>> 
>>> The iostats persist like this for 4 or 5 seconds and then drop back
>>> towards "normal", but performance on the client remains very poor.
>>> 
>>> Thanks
>>> 
>>> John
>>> 
>>> 
>>>> What Solaris 10 update?
>>>> 
>>>> Have you tried a PV OpenSolaris guest for the NFS server
>>>> running the latest bits?  If not, can you do this? There
>>>> have been some xnf (NIC driver) fixes which could explain
>>>> this.
>>>> 
>>>> 
>>>> 
>>>>> Does anyone have any suggestions for what I can do to improve matters 
>>>>> - would using ZFS rather than UFS for the user disk help?
>>>> 
>>>> It should not.
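>>>> 
>>>> If you want to test it anyway, it's cheap to try from inside the
>>>> domU once the data has been migrated off the user disk (the pool
>>>> name and device below are examples; the user zvol shows up in the
>>>> guest as a plain disk):
>>>> 
>>>>   # zpool create userpool c0d1
>>>>   # zfs create userpool/home
>>>>   # zfs set sharenfs=on userpool/home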
>>>> 
>>>> 
>>>> 
>>>>> The underlying disks are managed by a hardware RAID controller, so the 
>>>>> zpool in the Dom0 just sees a single disk.
>>>> Why wouldn't you use the disks as a JBOD and give them all to
>>>> ZFS?
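>>>> 
>>>> e.g., with the controller in JBOD mode, something like this (disk
>>>> names and redundancy level are just examples):
>>>> 
>>>>   # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
>>>> 
>>>> That way ZFS sees the individual spindles and can schedule and
>>>> checksum the I/O itself.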
>>>> 
>>> 
>> 
> 
> -- 
> John Landamore
> 
> Department of Computer Science
> University of Leicester
> University Road, LEICESTER, LE1 7RH
> [email protected]
> Phone: +44 (0)116 2523410       Fax: +44 (0)116 2523604
> 

_______________________________________________
xen-discuss mailing list
[email protected]
