> P.S.: If it was really meant to read everything into memory and then copy 
> the same message again to the insert query, why not consider reading 
> everything into a temp file, and then store the msg in db using something 
> like this:
> 
> insert msg set id=xxx, body="first 512 bytes of msg goes here...";
> update msg set body=body+"another 512 bytes..." where id=xxx;
> update msg set body=body+"yet another 512bytes" where id=xxx;

Let's see... our current max attachment size (on our mailservers) is
50MB; with 512-byte blocks that would lead to 102400 database queries.
That would surely kill database performance even under light load.
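As a back-of-the-envelope check of the numbers above, here is a small sketch (the function name is made up for illustration) counting the statements needed to store a message in fixed-size blocks: one INSERT for the first block, one appending UPDATE per further block.

```python
import math

def queries_needed(message_bytes: int, block_bytes: int) -> int:
    """One INSERT for the first block plus one UPDATE per extra block."""
    return math.ceil(message_bytes / block_bytes)

fifty_mb = 50 * 1024 * 1024
print(queries_needed(fifty_mb, 512))               # 102400 queries with 512-byte blocks
print(queries_needed(fifty_mb, 10 * 1024 * 1024))  # 5 queries with 10MB blocks
```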

But I agree, there are several evils here.
1. We can use system memory
   This could require an insane amount of memory:
   50MB * 100 connections == 5GB of RAM.
   That is acceptable for my company, since when the gateways run
   out of their 1GB of RAM they just start eating swap for a while.
2. We can use a tempfile
   This would mean more database calls, depending on the block size,
   and would probably hurt performance.
3. We can deliver directly into the database
   This would somehow stream the data straight into the database
   as it is received. Simply inefficient.

4. A combination of #1 and #2 could work: the dbmail delivery
   daemons get a max memsize parameter that you can set to, say, 10MB,
   and use a temp file for the rest. Delivery to the database then
   happens in 10MB blocks, leading to 5 updates on a 50MB mail. Not
   too shabby.
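Approach #4 could be sketched roughly as below. This is a hypothetical illustration, not dbmail's actual code: it uses sqlite3 (dbmail itself targets MySQL/PostgreSQL), a made-up `deliver_message` helper, and a tiny block size so the demo stays small.

```python
import sqlite3
import tempfile

# In practice this would be the configurable max memsize, e.g. 10 * 1024 * 1024.
BLOCK_SIZE = 10

def deliver_message(conn: sqlite3.Connection, msg_id: int, stream) -> int:
    """Deliver the spooled message in BLOCK_SIZE chunks; return query count."""
    queries = 0
    first = True
    while True:
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        if first:
            # First block: create the row.
            conn.execute("INSERT INTO msg (id, body) VALUES (?, ?)", (msg_id, block))
            first = False
        else:
            # Further blocks: append to the stored body (|| is SQLite concat).
            conn.execute("UPDATE msg SET body = body || ? WHERE id = ?", (block, msg_id))
        queries += 1
    conn.commit()
    return queries

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msg (id INTEGER PRIMARY KEY, body TEXT)")

# Spool an incoming "message" to a temp file, then deliver from it in blocks.
with tempfile.TemporaryFile(mode="w+") as spool:
    spool.write("x" * 45)
    spool.seek(0)
    n = deliver_message(conn, 1, spool)

body = conn.execute("SELECT body FROM msg WHERE id = 1").fetchone()[0]
print(n, len(body))  # 5 queries for a 45-byte message with 10-byte blocks
```

With a 10MB block size the same loop would issue 5 queries for a 50MB mail, matching the estimate above.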

Currently, though, alternative #1 can be restrained by allowing fewer
concurrent deliveries from the MTA; the MTA can still accept many
times more connections from the outside and simply queue them until
they can be delivered to dbmail.

-HK
