DBMail 2 hammers PostgreSQL pretty hard because each message is first
delivered to the internal delivery user, processed if needed, and then
messages and physmessages are created for each real recipient. Finally
the message and physmessage for the delivery user are deleted. This
means that each message causes at least one DELETE. I believe this is
what causes the need for heavy vacuuming in PostgreSQL.
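To make the flow concrete, here is a rough sketch of what each delivery does; the table and column names are illustrative only, not DBMail's actual schema:

```sql
-- 1. Delivery: stage the message under the internal delivery user.
INSERT INTO physmessage (id, messagesize, internal_date)
     VALUES (1001, 2048, now());
INSERT INTO messages (mailbox_id, physmessage_id)
     VALUES (1, 1001);          -- mailbox 1 = delivery user's inbox

-- 2. After processing, copy to each real recipient's mailbox.
INSERT INTO messages (mailbox_id, physmessage_id)
     VALUES (42, 1001);

-- 3. Delete the delivery user's copy: this is the per-message DELETE
--    that leaves a dead tuple behind for VACUUM to reclaim.
DELETE FROM messages
      WHERE mailbox_id = 1 AND physmessage_id = 1001;
```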
This is indeed what causes, or at least contributes to, the heavy
vacuuming. BUT! What you could do would be to insert the data into a
TEMP table, which isn't WAL logged and has nearly no transaction
overhead beyond creating the temp table itself. Just a thought. Let me
know if this is something of interest.
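A minimal sketch of that TEMP-table idea, assuming a hypothetical staging table and a `recipient_mailboxes` helper table that are not part of DBMail:

```sql
BEGIN;

-- TEMP tables skip WAL, and ON COMMIT DROP discards the table at
-- transaction end, so the staged row never becomes a dead tuple in
-- the permanent tables.
CREATE TEMP TABLE delivery_staging (
    physmessage_id BIGINT,
    messageblk     BYTEA
) ON COMMIT DROP;

INSERT INTO delivery_staging VALUES (1001, 'raw message data');

-- Fan out to the real recipients directly from the staging table,
-- skipping the delivery-user copy (and its later DELETE) entirely.
INSERT INTO messages (mailbox_id, physmessage_id)
SELECT m.mailbox_id, s.physmessage_id
  FROM delivery_staging s
 CROSS JOIN recipient_mailboxes m;

COMMIT;
```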
So, a detail that I hope a PostgreSQL person can answer definitively:
will wrapping the delivery into a transaction prevent rows that are
eventually deleted from even hitting the database?
Sadly, no, but wrapping things in a transaction will still help
throughput and performance. You're better off investigating the
possibility of a copy-on-write methodology wherein a message is added
to the database once, and user accounts point to that message id. When
a user's copy of the message is updated, it is copied to a new
message/message id and the change cascades downwards. This would save
space and IO. -sc
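One way the copy-on-write idea could be sketched; this schema is hypothetical and deliberately simplified, not DBMail's actual DDL:

```sql
-- Shared message body, reference-counted so it can be reclaimed when
-- the last pointer goes away.
CREATE TABLE physmessage (
    id       BIGSERIAL PRIMARY KEY,
    body     BYTEA,
    refcount INTEGER NOT NULL DEFAULT 1
);

-- Lightweight per-user pointer rows; N recipients share one body.
CREATE TABLE messages (
    mailbox_id     BIGINT,
    physmessage_id BIGINT REFERENCES physmessage (id)
);

-- Copy-on-write step when one user modifies their copy of body id 7:
-- clone the body, repoint only that user's row, drop one reference.
INSERT INTO physmessage (body)
     SELECT body FROM physmessage WHERE id = 7;

UPDATE messages
   SET physmessage_id = currval('physmessage_id_seq')
 WHERE mailbox_id = 42 AND physmessage_id = 7;

UPDATE physmessage SET refcount = refcount - 1 WHERE id = 7;
```

Only the modifying user pays for a copy; unmodified recipients keep sharing the original row, which is where the space and IO savings come from.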
--
Sean Chittenden