On Tue, 20 Feb 2007, Tom Lane wrote:

> I can't believe that any production situation could tolerate the
> overhead of one-commit-per-log-line.

There aren't that many log lines, and a production environment with lots of commit throughput won't even notice. The installation I spend my time tuning does 300 small commits per second on a bad day. I can barely measure any difference in that workload whether or not I'm importing the log files at the same time. The situation obviously changes if you're logging per-query-level detail.

> So a realistic tool for this is going to have to be able to wrap blocks of maybe 100 or 1000 or so log lines with BEGIN/COMMIT, and that is exactly as difficult as wrapping them with a COPY command. Thus, I disbelieve your argument. We should not be designing this around an assumed use-case that will only work for toy installations.

Wrapping the commits in blocks to lower overhead is appropriate for toy installations, and probably for medium-sized ones too. Serious installations, with battery-backed write caches and similar hardware for boosting commit throughput, can commit a low-volume stream like the logs whenever they please. That's the environment my use-case comes from.
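
For what it's worth, the block wrapping Tom describes is easy enough to sketch. Something like this Python fragment is all it takes (the batch size is arbitrary, and I'm assuming the input has already been formatted into complete INSERT statements):

    # Sketch: group pre-built INSERT statements into BEGIN/COMMIT
    # blocks so each batch costs one commit instead of one per line.
    def batch_inserts(insert_lines, batch_size=1000):
        batch = []
        for line in insert_lines:
            batch.append(line)
            if len(batch) >= batch_size:
                yield "BEGIN;\n%s\nCOMMIT;" % "\n".join(batch)
                batch = []
        if batch:  # flush whatever is left over
            yield "BEGIN;\n%s\nCOMMIT;" % "\n".join(batch)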

Anyway, it doesn't really matter; I can build a tool with COPY-style output as well, it just won't be trivial the way the INSERT one would be. My reasons for "would slightly prefer INSERT" clearly aren't strong enough to override the issues you raise with the average case.
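
To show where the extra work in the COPY version comes from, here's a rough sketch; the table name is a placeholder, and the point is the backslash escaping COPY's text format requires, which plain INSERT generation doesn't have to worry about:

    # Sketch: emit one COPY block for a batch of parsed log records
    # (each record a list of field strings, None for NULL).
    def copy_escape(field):
        if field is None:
            return "\\N"  # COPY's NULL marker
        return (field.replace("\\", "\\\\")   # backslashes first
                     .replace("\t", "\\t")
                     .replace("\n", "\\n")
                     .replace("\r", "\\r"))

    def copy_block(records, table="postgres_log"):
        lines = ["COPY %s FROM stdin;" % table]
        for rec in records:
            lines.append("\t".join(copy_escape(f) for f in rec))
        lines.append("\\.")  # end-of-data marker
        return "\n".join(lines)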

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
