Matthew Wakeling wrote:
> On Sun, 25 Jan 2009, M. Edward (Ed) Borasky wrote:
>> Actually, this isn't so much a 'pgbench' exercise as it is a source of
>> 'real-world application' data for my Linux I/O performance visualization
>> tools. I've done 'iozone' tests, though not recently. But what I'm
>> building is an I/O analysis toolset, not a database application.
>
> Are these performance results when the analysis tools are active or
> inactive? Have you investigated whether the analysis tools might be
> slowing the I/O down at all?

There is some overhead with the finest-granularity capture tool, blktrace. I haven't measured it myself yet, but its authors put it at roughly two percent. In any case the trace files it creates are shipped over the network to a separate collection server; they are never stored on the server under test. The other tool set, sysstat, simply reads counters out of /proc. The last time I looked it was using minuscule amounts of processor and RAM and doing no disk I/O of its own. There are slightly more efficient ways of pulling the required counters from /proc, and blktrace can actually reconstruct those I/O counters itself. People run *production* servers with much more intrusive performance-logging tool sets than what I am using. :)
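To give a concrete idea of what "reading the counters out of /proc" looks like, here is a minimal sketch of a sampler that pulls per-device counters straight from /proc/diskstats. The device name and interval are placeholders, the field positions follow the kernel's documented diskstats layout, and this is only an illustration, not the actual toolset:

#!/usr/bin/env python
# Minimal sketch: sample per-device I/O counters straight from /proc/diskstats.
# Field positions follow Documentation/iostats.txt in the kernel source.
# DEVICE and INTERVAL are placeholders, not values from my real toolset.
import time

DEVICE = "sda"     # assumed device name
INTERVAL = 10      # seconds between samples

def read_counters(device):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                # reads completed, sectors read, writes completed, sectors written
                return (int(fields[3]), int(fields[5]),
                        int(fields[7]), int(fields[9]))
    return None

prev = read_counters(DEVICE)
while True:                      # Ctrl-C to stop
    time.sleep(INTERVAL)
    cur = read_counters(DEVICE)
    if prev and cur:
        delta = [c - p for c, p in zip(cur, prev)]
        print("reads=%d sectors_read=%d writes=%d sectors_written=%d"
              % tuple(delta))
    prev = cur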
>> ...they told me that it was because the drive was re-ordering
>> operations according to its own internal scheduler!
>
> A modern SATA drive with native queuing will do that. SCSI drives have
> been doing that for twenty years or so.

At the CMG meeting I asked the disk drive engineers, "Well, if the drives are doing the scheduling, why does Linux go to all the trouble?" Their answer was something like, "Smart disk drives are a relatively recent invention." But:

a. SANs have been smart for years, and people with I/O-intensive workloads use SANs designed for those workloads when they can afford them. Linux even has a "noop" scheduler you can switch to if your SAN is doing a better job of ordering requests than the kernel would (see the sketch at the end of this message).

b. The four I/O schedulers in the 2.6 kernel are themselves relatively recent. I can dig up the exact release dates on the web, but their first appearance in Red Hat was RHEL 4 / kernel 2.6.9, and I think they have been rewritten at least once since then.

c. If SCSI drives have been doing their own scheduling for twenty years, that's even less incentive for Linux to do more than provide efficient SCSI drivers. I've never gotten down into the nitty-gritty of SCSI, and I'm not sure there's any reason to do so now, given that other protocols seem to be taking the lead.

> Matthew

--
M. Edward (Ed) Borasky

I've never met a happy clam. In fact, most of them were pretty steamed.
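P.S. For anyone curious about the "noop" scheduler mentioned in (a): here is a minimal sketch of reading and switching the scheduler for a block device through sysfs. The device name is just a placeholder, switching requires root, and again this is only an illustration, not part of my toolset.

#!/usr/bin/env python
# Sketch: show (and optionally switch) the I/O scheduler for a block device
# via sysfs. Device name below is a placeholder; writing the file needs root.
import sys

def current_scheduler(device):
    path = "/sys/block/%s/queue/scheduler" % device
    with open(path) as f:
        # The active scheduler appears in brackets, e.g. "noop [cfq] deadline"
        return f.read().strip()

def set_scheduler(device, name):
    path = "/sys/block/%s/queue/scheduler" % device
    with open(path, "w") as f:
        f.write(name)

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sda"
    print(current_scheduler(dev))
    # set_scheduler(dev, "noop")   # uncomment on a SAN-backed device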