Kenneth Marshall <[EMAIL PROTECTED]> writes:
> If the heads of the disk are in the right location, you could easily do
> more than 1 commit per disk revolution so the values over 2 seconds could
> actually be valid. 9 seconds would be worst case of 1 commit per revolution.
No, because a commit in PG involves appending to the WAL log, which means
that in the normal case a commit will have to rewrite the same sector of
WAL that the previous commit did.  Barring some weird remap-on-the-fly
scheme, you're going to wait for the disk to come all the way around for
that.  Hence, any reported sustained average much larger than one commit
per revolution has to be regarded as probably phony.

Sometimes you can improve on this by using the commit_delay parameters to
gang multiple commits into one physical write, but that's tough to do; we
already know that this person didn't do any parameter tuning whatsoever,
and in any case it buys you nothing unless you are committing multiple
transactions concurrently.

BTW, the *real* knock against this test methodology is that it's testing
a single serial transaction stream, which is not what PG is designed to
shine at.  Someday I'd like to see one of these "I can write a database
benchmark" guys actually test concurrent updates...

			regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
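[A quick sanity check on the rotation-rate argument above: if each serial commit must rewrite the current WAL sector, it cannot complete faster than one platter revolution. The sketch below just turns spindle speed into that upper bound; the RPM figures are common illustrative values, not numbers from this thread.]

```python
# Upper bound on serial commit rate, assuming one commit per disk
# revolution (each commit waits for the same WAL sector to come around).

def max_serial_commits_per_sec(rpm):
    """One revolution per commit => at most rpm/60 commits per second."""
    return rpm / 60.0

for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>6} RPM -> at most "
          f"{max_serial_commits_per_sec(rpm):.0f} commits/sec")
```

So a sustained rate well above ~120 commits/sec on a 7200 RPM drive, from a single serial stream with no commit ganging, is the kind of result the post calls "probably phony" (usually it means the drive is lying about write completion).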