On Jul 19, 2007, at 11:11 AM, Alderman, Sean wrote:

> This is good news, thank you for the blog!
>
> If I may ask a couple of questions to the community on the topic of  
> OLTP
> workload and ZFS...
>
> 1.  When evaluating ZFS for our Oracle systems (heavy 8K uncached
> workload), our DBAs used the ZFS vs. VxFS whitepaper
> http://www.sun.com/software/whitepapers/solaris10/zfs_veritas.pdf and
> specifically Figure 3-25 showing that Operations Per Second for 8K
> uncached ops for ZFS is about 1/3 what it is for VxFS.  25% is an
> awesome increase, except when faced with another alternative where a
> 300% improvement would only bring ZFS to an even footing against VxFS.
> The stats in the whitepaper are not entirely clear to me, do the  
> results
> of Eric's work make a significant dent here?

Yeah, the FileBench oltp workload in the paper is the same one I used  
in my testing (and referenced in the blog).

A couple of things I should mention about the paper.  It is a great  
paper, but it's also based on s10u2 - over a year old!  There have  
been many performance fixes in OpenSolaris, s10u3, and the upcoming  
s10u4 that will help (such as the fix for having to read in the full  
block on a rewrite).

I'm not sure if the recordsize property was set to 2KB/8KB.  That  
will make a big difference.
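For reference, matching the recordsize to an 8K Oracle block looks  
something like this (the pool/filesystem name "tank/oradata" is just  
a placeholder) - note it has to be set before the data files are  
created, since it only affects newly written files:

```shell
# Match ZFS recordsize to the Oracle database block size (8K here).
# Set this BEFORE creating the data files -- recordsize only applies
# to files written after the property is changed.
zfs set recordsize=8k tank/oradata

# Verify the setting took effect
zfs get recordsize tank/oradata
```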

Another thing is that at that time we hadn't done all the evaluation  
we have today (see Neel's blog), so I don't believe we even knew how  
to do some of the tuning.

And lastly, using a separate intent log (with OpenSolaris bits) or  
separate pools (if you're s10uX based) could be quite interesting.   
That was deliberately not done for this test.
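With OpenSolaris bits, putting the intent log on a dedicated low-  
latency device is a one-liner; a sketch (pool and device names here  
are placeholders):

```shell
# OpenSolaris-only: add a separate intent log (slog) device to an
# existing pool.  A low-latency device helps synchronous database
# writes the most.
zpool add tank log c2t0d0

# Check the pool layout -- the log vdev is listed separately
zpool status tank
```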

>
> 2.  I am not entirely sure the figures in the whitepaper are still
> reasonable metrics to go by, but regardless, could someone explain why
> the Ops/sec in Figure 3-25 are roughly 4000 for all tested scenarios?
> This seems very strange to me.

It could be that the recordsize property wasn't tuned at all (so it  
was left at its default of 128KB).  If that's the case, you're  
effectively running the same test in every scenario.

> Overall, I've really liked working with ZFS in the circumstances I  
> have
> had luck making it work reliably.  In the case of our DB servers our
> data is too "valuable" to store on ZFS according to those who make the
> decisions.  I'm always looking for more ammo to debate those kinds of
> statements.

If the data truly is "valuable", then you definitely want to go with  
ZFS over VxFS/VxVM.  In this case, only ZFS is going to provide  
end-to-end integrity.  VxFS can't do that - your data is always at risk.
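One concrete way to show the decision makers what end-to-end  
integrity means in practice: ZFS checksums every block and can walk  
the whole pool on demand, which VxFS/VxVM has no equivalent for (the  
pool name "tank" is a placeholder):

```shell
# Walk every block in the pool and verify its checksum; with
# redundancy (mirror/raidz), bad blocks are repaired automatically
# from a good copy.
zpool scrub tank

# Show scrub progress and any checksum errors found
zpool status -v tank
```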

Hopefully you can articulate that to the decision makers...

eric

> --
> Sean
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of eric kustarz
> Sent: Thursday, July 19, 2007 1:24 PM
> To: ZFS Discussions
> Subject: [zfs-discuss] more love for databases
>
> Here's some info on the changes we've made to the vdev cache (in
> part) to help database performance:
> http://blogs.sun.com/erickustarz/entry/vdev_cache_improvements_to_help
>
> enjoy your properly inflated I/O,
> eric
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
