Bob Says: 

        "But a better solution is to assign a processor set to run only
the application -- a good idea any time you need a predictable
response."

Bob's suggestion above, along with disabling interrupts on that pset and
running the application's processes in a fixed-priority scheduling class,
could also be helpful.

Tharindu, would you be able to share the source of your
write-latency-measuring application? That would give us a better idea of
exactly what it's measuring and how, and might allow people (far smarter
than me) to do some additional or alternative DTrace work to help drill
down further toward the source, and resolution, of the issue.
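For what it's worth, the core of such a tool is usually just a timed write
loop. Here is a minimal, hypothetical sketch (not Tharindu's code; the file
name and sizes are arbitrary) that times each write in microseconds, both
buffered and with O_DSYNC, since the two behave very differently: buffered
writes usually land in the page cache, while O_DSYNC writes must reach
stable storage before returning.

```python
import os
import tempfile
import time

def write_latencies(path, n_writes=100, block=8192, sync=False):
    """Return per-write latencies in microseconds for n_writes writes
    of `block` bytes; sync=True opens the file O_DSYNC so each write
    must be on stable storage before os.write() returns."""
    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    if sync:
        flags |= os.O_DSYNC
    fd = os.open(path, flags, 0o644)
    buf = b"\0" * block
    latencies = []
    try:
        for _ in range(n_writes):
            t0 = time.perf_counter()
            os.write(fd, buf)
            latencies.append((time.perf_counter() - t0) * 1e6)
    finally:
        os.close(fd)
        os.unlink(path)
    return latencies

path = os.path.join(tempfile.gettempdir(), "latency-probe.dat")
buffered = write_latencies(path, sync=False)
synced = write_latencies(path, sync=True)
print("buffered max %.0f us, sync max %.0f us" % (max(buffered), max(synced)))
```

Instrumenting the sync case with DTrace (e.g. tracing the zil_commit path)
would be the natural next step once the real tool's behavior is known.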

Thanks,

 -- MikeE
 

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard Elling
Sent: Saturday, July 26, 2008 3:33 PM
To: Bob Friesenhahn
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

Bob Friesenhahn wrote:
> On Sat, 26 Jul 2008, Bob Friesenhahn wrote:
>
>> I suspect that the maximum peak latencies have something to do with
>> zfs itself (or something in the test program) rather than the pool
>> configuration.
>
>
> As confirmation that the reported timings have virtually nothing to do
> with the pool configuration, I ran the program on a two-drive ZFS
> mirror pool consisting of two cheap 500MB USB drives.  The average
> latency was not much worse.  The peak latency values are often larger
> but the maximum peak is still on the order of 9000 microseconds.

Is it doing buffered or sync writes?  I'll try it later today or
tomorrow...

> I then ran the test on a single-drive UFS filesystem (300GB 15K RPM
> SAS drive) which is freshly created and see that the average latency
> is somewhat lower but the maximum peak for each interval is typically
> much higher (at least 1200 but often 4000).  I even saw a measured peak
> as high as 22224.
>
> Based on the findings, it seems that using the 2540 is a complete 
> waste if two cheap USB drives in a zfs mirror pool can almost obtain 
> the same timings.  UFS on the fast SAS drive performed worse.
>
> I did not run your program in a real-time scheduling class (see
> priocntl).  Perhaps it would perform better using real-time
> scheduling.  It might also do better in a fixed-priority class.

This might be more important.  But a better solution is to assign a
processor set to run only the application -- a good idea any time you
need a predictable response.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
