Pavan Deolasee wrote:
> On 1/24/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
>> Hmm. So there is some activity there. Could you modify the patch to
>> count how many of those reads came from OS cache? I'm thinking of
>> doing a gettimeofday() call before and after each read, and counting
>> how many calls finished in less than, say, 1 ms. Also, summing up the
>> total time spent in reads would be interesting.

> Here are some more numbers. I ran two tests of 4 hours each, with the
> CLOG cache size set to 8 blocks (the default) and 16 blocks. I counted
> the number of read() calls, and specifically those read() calls which
> took more than 0.5 ms to complete. As you guessed, almost 99% of the
> reads complete in less than 0.5 ms, but the total read() time is still
> more than 1% of the duration of the test. Is it worth optimizing?

Probably not. I wouldn't trust that 1%-of-test-duration figure too much; gettimeofday() has some overhead of its own...
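
For reference, the instrumentation I was suggesting amounts to something like the sketch below. timed_read() and the counter names are made up for illustration here; this is not the code from the actual patch.

#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative counters; a real patch might keep these in shared
 * memory or in per-backend statistics instead. */
static long   n_reads = 0;        /* total read() calls */
static long   n_slow_reads = 0;   /* reads slower than the threshold */
static double read_secs = 0.0;    /* total wall-clock time in read() */

#define SLOW_READ_SECS 0.0005     /* 0.5 ms, matching the numbers below */

static ssize_t
timed_read(int fd, void *buf, size_t len)
{
    struct timeval before, after;
    ssize_t     nread;
    double      elapsed;

    gettimeofday(&before, NULL);
    nread = read(fd, buf, len);
    gettimeofday(&after, NULL);

    elapsed = (after.tv_sec - before.tv_sec) +
              (after.tv_usec - before.tv_usec) / 1000000.0;

    n_reads++;
    read_secs += elapsed;
    if (elapsed > SLOW_READ_SECS)
        n_slow_reads++;           /* slow enough that it probably went
                                   * to disk, not the OS cache */

    return nread;
}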

> CLOG (16 blocks):
>   reads: 743317, writes: 84, reads > 0.5 ms: 5171,
>   total read time: 186 s, time in reads > 0.5 ms: 175 s
>
> CLOG (8 blocks):
>   reads: 1155917, writes: 119, reads > 0.5 ms: 4040,
>   total read time: 146 s, time in reads > 0.5 ms: 130 s
>
> (amused to see the increase in the total read time with 16 blocks)

Hmm. That's surprising, though your numbers hint at why: with 16 blocks there are far fewer read() calls, but more of them (5171 vs. 4040) take longer than 0.5 ms, and those slow reads account for nearly all of the total time (175 s out of 186 s).

> Also, is it worth optimizing the total number of read() system calls,
> which might not cause physical I/O but still consume CPU?

I don't think it's worth it, but now that we're talking about it: what I'd like to do with all the SLRU files is replace the custom buffer management with mmapping the whole file and letting the OS take care of it. We would get rid of some GUC variables, the OS would tune the amount of memory used for clog/subtrans dynamically, and we would avoid the memory copying. I'd like to do the same for WAL.
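
Just to sketch the idea (the path, function name, and error handling here are illustrative only; a real implementation would also have to deal with segment growth, msync() at checkpoint, and 32-bit address space limits):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a whole SLRU segment file, e.g. "pg_clog/0000", and let the
 * kernel's page cache do the buffering and write-back. */
static char *
map_slru_segment(const char *path, size_t *len_out)
{
    struct stat st;
    char       *base;
    int         fd;

    fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;
    if (fstat(fd, &st) < 0)
    {
        close(fd);
        return NULL;
    }

    /* MAP_SHARED: stores are visible to every backend mapping the
     * file and are flushed by the kernel; no read()/write() copies. */
    base = mmap(NULL, (size_t) st.st_size, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives the close */

    if (base == MAP_FAILED)
        return NULL;
    *len_out = (size_t) st.st_size;
    return base;
}

Transaction status lookups would then become plain memory accesses into the mapping, and durability at checkpoint could be enforced with msync(..., MS_SYNC).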

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
