I was wondering about 2.1 vs. 2.0:
Does the IMAP bug that starts pounding the database server with
similar queries also happen in 2.1 CVS?

I haven't seen it in the 2.1 code yet. But then, that code is not as
widely used. I've been trying to get my head round the bug, looking
through all the code …
On Fri, 17 Dec 2004 09:14:37 +0100, Paul J Stevens <[EMAIL PROTECTED]> wrote:
> Don't worry bro. The transactional inserts dpatch was only done earlier this
> week at the explicit request of a user. Other than that, the 2.0.2 packages
> contain: dynamic preforking, glib support, is_header insertion, and some
> Debian specifics (path to dbmail.conf, and path to the pid dir). All those …
What else do you have in that Debian package of yours? You should get
involved with the upstream distribution and see if you can get some of
that stuff in!
:-P
Aaron
Paul J Stevens <[EMAIL PROTECTED]> wrote:

Aaron,

I have a dpatch ready for commitment that backports the transaction support
in the insertion chain from cvs-head to 2.0. It's in the 2.0.2-0.20041212
Debian packages, but I can commit it *now*.
Aaron Stone wrote:
Wait, I didn't finish that thought :-P

So the necessary rearchitecture would involve delivering the messageblks
first into a memory table, and only copying them into an on-disk table if
there are local recipients. I believe that this method was used in an
early pre-1.0 version of DBMail, and r…
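As a rough sketch of that staging idea (not DBMail's actual code: the
table layout below is a simplified stand-in, and a PostgreSQL temp table
is session-local and unlogged rather than a true memory table):

    BEGIN;
    -- Stage the incoming message blocks in a session-local table first.
    CREATE TEMP TABLE staging_blks (
        blk_order integer,
        blk_data  bytea
    ) ON COMMIT DROP;

    INSERT INTO staging_blks (blk_order, blk_data)
    VALUES (1, 'raw message data'::bytea);

    -- Only if delivery resolves to at least one local recipient does the
    -- data get copied into the shared on-disk table; otherwise COMMIT
    -- simply drops the staging table and nothing is left behind.
    INSERT INTO messageblks (physmessage_id, blk_order, blk_data)
    SELECT 1001, blk_order, blk_data FROM staging_blks;

    COMMIT;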
Sean Chittenden wrote:

> So much for a quick-fix for PostgreSQL users during 2.0... keep those
> daily vacuums running for now.

People should use pg_autovacuum and *not* daily vacuums. pg_autovacuum
vacuums whenever it's necessary, which may be *much* more frequently
than once a day (i.e., once every 30 min, depending on mail volume) …
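For anyone keeping the interim daily vacuums, the churn described in this
thread lands on the message tables, so those are the minimum targets; a
small example (table names as used in this thread; check them against
your actual schema), e.g. run via psql from cron:

    VACUUM ANALYZE messages;
    VACUUM ANALYZE physmessages;
    VACUUM ANALYZE messageblks;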
Having thought about this for a min or three, I think the best solution
would be the following:

BEGIN;
CREATE TEMP TABLE newmsg (
    -- whatever schema is necessary
) ON COMMIT DROP;
-- first stage the incoming message into newmsg, then fan out:
INSERT INTO other_tbl SELECT foo1, foo2, ${user_id} FROM newmsg;
-- repeat the insert as necessary
COMMIT;

The on…
Sean Chittenden <[EMAIL PROTECTED]> wrote:

> So, a detail that I hope a PostgreSQL person can answer definitively:
> will wrapping the delivery into a transaction prevent rows that are
> eventually deleted from even hitting the database?

Sadly, no: but it will help throughput and performance to wrap things
in a transaction. You're better off …
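To make the "sadly, no" concrete: under PostgreSQL's MVCC, an insert
followed by a delete, even inside a single transaction, still writes heap
tuples; the delete only marks them dead for a later vacuum. A tiny
demonstration (hypothetical scratch table, not part of the DBMail schema):

    BEGIN;
    CREATE TABLE scratch (id integer, body text);
    INSERT INTO scratch VALUES (1, 'temporary copy of a message');
    -- The row is already sitting on a heap page at this point.
    DELETE FROM scratch;
    COMMIT;
    -- The deleted tuple still occupies space until vacuum reclaims it:
    VACUUM scratch;
    DROP TABLE scratch;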
Aaron Stone wrote:

DBMail 2 hammers PostgreSQL pretty hard because each message is first
delivered to the internal delivery user, processed if needed, and then
messages and physmessages are created for each real recipient. Finally
the message and physmessage for the delivery user is deleted. This means
that each mes…
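A simplified model of that flow, to show where the dead rows come from
(table and column names are illustrative stand-ins, not DBMail's actual
schema):

    BEGIN;
    -- 1. The message first lands under the internal delivery user.
    INSERT INTO physmessages (id, internal_date) VALUES (1000, now());
    INSERT INTO messages (id, physmessage_id, owner) VALUES (1, 1000, 'delivery-user');
    -- 2. A fresh message/physmessage pair per real recipient.
    INSERT INTO physmessages (id, internal_date) VALUES (1001, now());
    INSERT INTO messages (id, physmessage_id, owner) VALUES (2, 1001, 'alice');
    INSERT INTO physmessages (id, internal_date) VALUES (1002, now());
    INSERT INTO messages (id, physmessage_id, owner) VALUES (3, 1002, 'bob');
    -- 3. The delivery user's copies are deleted again; under MVCC those
    --    rows stay on disk as dead tuples until the next vacuum runs.
    DELETE FROM messages WHERE owner = 'delivery-user';
    DELETE FROM physmessages WHERE id = 1000;
    COMMIT;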