On Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison <jlewis...@gmail.com> wrote:

>
>
> Just a nit, but Oracle implements MVCC.  90% of the databases out there do.
>

Sorry, I spoke imprecisely. What I meant was the difference in how the rows
are stored internally .... in Oracle, the main tablespace contains only the
newest version of a row, which is (where possible) updated in place -
queries in a transaction that can still "see" an older version have to pull
it from the UNDO tablespace (rollback segments in Oracle 8 and older).

In Postgres, all versions of all rows are in the main table, and have
validity ranges associated with them ("this version of this row existed
between transaction ids x and y"). Once a version goes out of scope, it has
to be garbage collected by the vacuuming process so the space can be
re-used.
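The visibility rule can be sketched in a few lines of Python. This is a toy model, not Postgres's actual implementation - it ignores in-progress transactions, hint bits, and so on, and all the names are mine:

```python
# Toy model of Postgres-style MVCC: every version of a row lives in the
# table, tagged with the transaction ids that created and superseded it.

class RowVersion:
    def __init__(self, value, xmin, xmax=None):
        self.value = value
        self.xmin = xmin   # txid that created this version
        self.xmax = xmax   # txid that superseded it (None = still live)

def visible(version, snapshot_txid):
    """A snapshot sees a version if it was created at or before the
    snapshot and had not yet been superseded at the snapshot."""
    return (version.xmin <= snapshot_txid and
            (version.xmax is None or version.xmax > snapshot_txid))

def vacuum(versions, oldest_active_txid):
    """Garbage-collect versions that no snapshot can still see."""
    return [v for v in versions
            if v.xmax is None or v.xmax > oldest_active_txid]

# txid 10 inserts the row, txid 20 updates it: both versions coexist.
table = [RowVersion("old", xmin=10, xmax=20),
         RowVersion("new", xmin=20)]

# A snapshot from txid 15 still sees the old version; txid 25 the new.
assert [v.value for v in table if visible(v, 15)] == ["old"]
assert [v.value for v in table if visible(v, 25)] == ["new"]

# Once the oldest active txid is past 20, vacuum reclaims "old".
table = vacuum(table, oldest_active_txid=21)
assert [v.value for v in table] == ["new"]
```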

In general, this means Oracle is faster *if* you're only doing lots of small
transactions (consider how these different models handle an update to a
single field in a single row), but it is more sensitive to the scale of
transactions .... doing a really big transaction against a database with an
OLTP workload can upset Oracle's digestion, as it causes a lot of UNDO
lookups; PG's performance is a lot more predictable in this regard.
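To make the single-field update concrete, here's a back-of-envelope sketch in Python. The byte counts are invented purely for illustration, and both functions ignore real-world bookkeeping overhead:

```python
# Rough sketch of the write cost of updating one column in one row
# under the two storage models. Sizes are made up for illustration.

ROW_SIZE = 200     # assumed total row size in bytes
COLUMN_SIZE = 8    # assumed size of the one updated column

def oracle_style_update():
    # Update in place: overwrite just the column, and copy the
    # pre-image to the UNDO tablespace so older snapshots can
    # still reconstruct it.
    bytes_in_table = COLUMN_SIZE
    bytes_in_undo = COLUMN_SIZE   # plus bookkeeping, ignored here
    return bytes_in_table + bytes_in_undo

def postgres_style_update():
    # Append a complete new row version; the old version stays in
    # the table until vacuum reclaims it.
    return ROW_SIZE

# The small update writes far less under the in-place model ...
assert oracle_style_update() < postgres_style_update()
```

... which is the small-transaction case where Oracle's model wins; the flip side is that a huge transaction fills UNDO and makes other snapshots pay for the lookups.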

Both models have benefits and drawbacks ... when designing a schema for
performance it's important to understand these differences.


> I find partitioning pretty useful in this scenario if the data allows is.
> Aging out data just means dropping a partition rather than a delete
> statement.
>
>
Forgot to say this - yes, absolutely agree .... dropping a table is a lot
cheaper than a transactional delete.
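A toy sketch of why, with partitions modeled as a plain dict and all numbers invented:

```python
# Aging out a month of data: bulk delete vs. dropping a partition.

rows = [("2010-01", i) for i in range(1000)] + \
       [("2010-02", i) for i in range(1000)]

# Unpartitioned: a transactional delete touches every matching row,
# and under PG's MVCC each deleted row becomes a dead version that
# vacuum must later reclaim.
deleted = [r for r in rows if r[0] == "2010-01"]
rows = [r for r in rows if r[0] != "2010-01"]
assert len(deleted) == 1000     # 1000 dead row versions to vacuum

# Partitioned: aging out a month is one catalog operation -- no
# per-row work, and nothing left over for vacuum to clean up.
partitions = {"2010-01": list(range(1000)),
              "2010-02": list(range(1000))}
del partitions["2010-01"]       # analogous to dropping the partition
assert list(partitions) == ["2010-02"]
```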

In general, I think partitioning is more important / beneficial with PG's
style of MVCC than with Oracle or SQL Server (which I think is closer to
Oracle than to PG).


Cheers
Dave
