I don't think we can safely cap message sizes "at a few MB" -- I've certainly
sent myself some hefty-sized emails and would be quite frustrated if this
weren't possible because of an intrinsic, hard-coded limit.

Yes, it makes the code more complicated, but I think that's only because we
still haven't fully adapted our thinking to our model. It occurred to me today
that if we were MIME parsing at delivery time, we could also store the MIME
structure of a message in the database. An IMAP BODYSTRUCTURE fetch would
then be nearly instantaneous. That is probably the single most common request
from an email client, especially web-based ones that need to refresh their
message list on almost every page hit.
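To make the idea concrete, here is a minimal sketch (not DBMail code -- the
function name and row layout are my own invention) of parsing a message once
at delivery and recording each MIME part's position, content type, and size,
which is essentially what a BODYSTRUCTURE response needs:

```python
import email
from email import policy

def mime_structure(raw_bytes):
    """Parse a raw message and return one row per leaf MIME part.

    Each row records the IMAP-style part number (e.g. "1", "2.1"),
    the content type, and the decoded size -- enough to answer a
    structure query without re-reading the message body.
    """
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    rows = []

    def walk(part, path):
        if part.is_multipart():
            # Recurse into sub-parts, numbering them 1, 2, ... as IMAP does.
            for i, sub in enumerate(part.iter_parts(), start=1):
                walk(sub, path + [i])
        else:
            payload = part.get_payload(decode=True) or b""
            rows.append({
                "part": ".".join(map(str, path)) or "1",
                "type": part.get_content_type(),
                "size": len(payload),
            })

    walk(msg, [])
    return rows
```

At delivery time these rows would be inserted alongside the message, so the
structure query never touches the (possibly huge) body at all.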

If MySQL can retrieve huge (e.g. 1 GB) rows easily, and the limitation applies
only when inserting or updating, then we might be able to use string
concatenation to append parts of the message until the body row holds the
whole thing.
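Roughly like this -- an illustrative sketch only (the table name and chunk
size are assumptions, and I'm using SQLite here just to keep the example
self-contained; against MySQL the UPDATE would use CONCAT(body, %s) and a
chunk size chosen below max_allowed_packet):

```python
import sqlite3

CHUNK = 512 * 1024  # assumed safe chunk size, well under the packet limit

def store_body(conn, msg_id, body):
    """Stream a large body into one row by appending fixed-size chunks.

    No single statement ever carries more than CHUNK bytes, so the
    server's per-packet limit is never hit even for multi-GB bodies.
    """
    conn.execute("INSERT INTO bodies (id, body) VALUES (?, '')", (msg_id,))
    for off in range(0, len(body), CHUNK):
        # SQLite string concatenation; MySQL would be CONCAT(body, %s).
        conn.execute(
            "UPDATE bodies SET body = body || ? WHERE id = ?",
            (body[off:off + CHUNK], msg_id),
        )
    conn.commit()
```

The appends would of course run inside one transaction so a crash mid-message
can't leave a half-written body behind.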

Aaron



Ilja Booij <[EMAIL PROTECTED]> said:

> MySQL used to have a limit on the client-server communication that 
> forced us to limit the size of blocks being transferred. Nowadays that 
> limit is much higher.
> 
> Setting the max_allowed_packet variable to a high number (a few MB for 
> instance) on both client and server will allow for sending big blocks 
> between client and server.
> 
> The TEXT field itself has no limit.
> In PostgreSQL, there's a 1GB limit on the TEXT field.
> 
> I think we can safely go to a strategy where we put the message in 2 
> blocks, 1 for the header and 1 for the body. However, if we make our 
> parsing code handle messages only in that way, we need to produce a 
> script for migrating from a database with split messages. This shouldn't 
> be too much of a problem though.
> 
> Ilja
> 
> _______________________________________________
> Dbmail-dev mailing list
> Dbmail-dev@dbmail.org
> http://twister.fastxs.net/mailman/listinfo/dbmail-dev
> 
