Tom Lane wrote:
Your analysis is missing an important point, which is what happens when
multiple transactions successively modify the same page.  With a
sync-the-data-files approach, we'd have to write the data page again for
each commit.  With WAL, the data page will likely not get written at all
(until a checkpoint happens).  Instead there will be per-transaction
writes to the WAL, but the data volume will be less since WAL records
are generally tuple-sized not page-sized.  There's probably no win for
large transactions that touch most of the tuples on a given data page,
but for small transactions it's a win.


Well said. I had not considered that the granularity of WAL entries differs from that of dirtied data pages.
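
To make that concrete for myself, here is a rough back-of-envelope sketch in Python. The 8 KB page size is PostgreSQL's default; the tuple size and per-record WAL overhead are just figures I assumed for illustration, not measured values.

    # Rough comparison of bytes written when N small transactions each
    # update one tuple on the same data page.
    PAGE_SIZE = 8192       # bytes per data page (PostgreSQL default)
    TUPLE_SIZE = 100       # assumed average tuple size
    WAL_OVERHEAD = 50      # assumed per-record WAL header overhead

    def sync_data_files(n_commits):
        # Each commit must force the whole dirty page to disk.
        return n_commits * PAGE_SIZE

    def wal_logging(n_commits):
        # Each commit writes a tuple-sized WAL record; the data page
        # itself is written once, at the next checkpoint.
        return n_commits * (TUPLE_SIZE + WAL_OVERHEAD) + PAGE_SIZE

    for n in (1, 10, 100):
        print(n, sync_data_files(n), wal_logging(n))

For 100 small commits that comes out to roughly 800 KB of page writes versus about 23 KB of WAL plus one page at checkpoint, which matches your point: the per-commit cost tracks the tuple size under WAL, but the page size under a sync-the-data-files scheme.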

I have no doubt that all of these issues have been hashed out before, and I appreciate you sharing the rationale behind the design decisions.

I can't help but wonder whether there is a better way for update-intensive environments, which probably did not play a large role in the design decisions.

Since I live it, I know of other shops that use an industrial-strength RDBMS (Oracle, Sybase, MS SQL, etc.) for batch data processing, not just transaction processing. Often a large data set comes in, gets loaded, gets churned for a few minutes or hours, and is then spit back out, with relatively little residual data held in the RDBMS.

Why use an RDBMS for this kind of work? Because it's faster/cheaper/better than any alternative we have seen.

I have a 100 GB Oracle installation, small by most standards, but well over 1 TB per month is flushed through it.

Bulk loads are not a "once in a while" undertaking.

At any rate, thanks again.
Marty

