----- Original Message ----- From: "Paul J Stevens" <[EMAIL PROTECTED]>
To: "DBMAIL Developers Mailinglist" <[email protected]>
Sent: Friday, February 11, 2005 6:23
Subject: Re: [Dbmail-dev] reading everything into memory?


Aaron Stone wrote:
On Thu, Feb 10, 2005, "mc" <[EMAIL PROTECTED]> said:


The problem was that the MIME parser didn't know what to do with messages
that contained \n (unix line endings) or \r\n ('network' line endings) or
both. So it was exploding on all of the mail received via LMTP through my code.

And of course I rewrote it again for 2.1 to get rid of the internal MIME parser by using GMime. Still, reading each message into memory as a whole is at present the only workable solution, because that's what MIME parsing requires. Assuming there are no leaks, the memory consumption will be fairly short-lived.

On my current system it is quite common to see large attachments (I have even seen users exchanging DVD-sized (!) attachments :S); that's why I am asking this kind of question.


Your idea to insert in blocks will likely create some problems:

>> insert msg set id=xxx, body="first 512 bytes of msg goes here...";
>> update msg set body=body+"another 512 bytes..." where id=xxx;
>> update msg set body=body+"yet another 512bytes" where id=xxx;

this will create massive numbers of insert and update queries if you're indeed receiving a lot of large attachments. It will either pound your disk I/O during insertion, or, if you're using transactions, simply shift the memory consumption to the database server.

I am not sure about pgsql. But for MySQL, if my memory serves, its network code dynamically allocates more memory to accommodate the huge string sent from the client; in other words, if dbmail and MySQL sit on the same box, almost twice the amount of memory would be needed to insert a big mail?
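Side note, just as a rough sketch and not something I have measured: on the MySQL side the cap on how big a single statement may be is max_allowed_packet, so it would have to be raised well above the largest expected attachment before whole-message inserts could even work:

    -- how large a single statement/packet the server will accept from a client
    SHOW VARIABLES LIKE 'max_allowed_packet';

    -- raise the cap, e.g. to 64MB; the client library enforces a matching limit
    SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;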


Also, your syntax won't do. Though 'set body=concat(body,"more bodytext")' would work, I'm not sure that's portable SQL syntax.

As far as I can tell, the ANSI syntax for string concatenation is ||.
I used + in the previous msg just to make my idea clearer.
(off topic: MSSQL uses +, MySQL and a few others use concat(), pg uses ||, but in fact MySQL also supports || when the PIPES_AS_CONCAT mode is turned on)
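To make the dialect differences concrete (the table 'msg', column 'body' and id 'xxx' are just the placeholders from the pseudocode above, not real dbmail schema), the chunked update would look roughly like this:

    -- ANSI SQL / PostgreSQL: || is the standard concatenation operator
    UPDATE msg SET body = body || 'another 512 bytes...' WHERE id = xxx;

    -- MySQL: CONCAT(), or || once sql_mode includes PIPES_AS_CONCAT
    UPDATE msg SET body = CONCAT(body, 'another 512 bytes...') WHERE id = xxx;

    -- MS SQL Server: + concatenates strings
    UPDATE msg SET body = body + 'another 512 bytes...' WHERE id = xxx;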

In sum: what's your use case? Have you tested and run into trouble? If not, I'd say let's not get into any premature optimization at this point.

Honestly, I haven't stress-tested it. :S I have only just compiled and configured it on my devel box running postfix + amavisd-new + dbmail. Currently my vpopmail setup is serving around 2 or 3 incoming mails/sec on average (after virus and spam filtering), with ~500 IMAP connections on average (mostly idle, I believe), though the disk is overloaded (it is a slow IDE RAID1). For the dbmail solution I think I will get another db box with faster disks, and I expect at least another 500 concurrent sessions without overloading it too much.

A question: has anyone tried SQL_CACHE (MySQL only), and would it work with dbmail's design? I haven't yet looked into how dbmail issues its queries.
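From what I understand of MySQL's query cache (purely a sketch, with made-up table/column names rather than dbmail's real schema): with query_cache_type = 2 (DEMAND) only SELECTs that carry the SQL_CACHE keyword are cached, and any write to a table invalidates every cached result for that table, which could hurt on write-heavy mail tables:

    -- check whether the query cache is enabled at all
    SHOW VARIABLES LIKE 'query_cache%';

    -- with query_cache_type = 2 (DEMAND), only explicitly marked SELECTs are cached
    SELECT SQL_CACHE message_idnr, messagesize
    FROM messages
    WHERE mailbox_idnr = 42;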


I am currently a vpopmail user. So far I haven't found a good (or at least working) vpopmail->dbmail converter, so I would like to make sure dbmail will behave as I expect before I start working on a converter myself.

Another question: would it be feasible to implement some sort of db connection pooling?


