| From: Robin Uyeshiro <[EMAIL PROTECTED]>

| Perhaps the IETF, eminent body that it is, could put out something that 
| RECOMMENDs that email software vendors display the size of email 
| attachments and maybe the time it would take to download on an analog 
| modem?  ...

That points to a good solution, but does not go far enough.

The IETF should follow the U.S. Postal Service, FedEx, Airborne, and the
rest of the real world carriers, and publish combined dimension and weight
limits for SMTP parcels.  We should change the old RFC 1123 minimum
maximum to be a generous but flat do-not-ever-exceed maximum, stop the
confusion among users, and squelch the years of garbage software from junk
vendors.  A 20-50 MByte limit would allow most of the abuses that have
accumulated to continue, while stopping worse.
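
For what it's worth, much of the plumbing for a flat limit already
exists: the ESMTP SIZE extension of RFC 1870 lets a server advertise
its fixed maximum and refuse anything bigger with a 552 reply.  Here is
a minimal sketch of a client checking that advertisement before
sending; the host name and the 50 MByte figure are illustrative
assumptions, not anyone's published policy:

    # Sketch only: ask a (hypothetical) server what SIZE limit it
    # advertises before trying to push a large message at it.
    import smtplib

    message_bytes = 60 * 1024 * 1024            # the message we want to send

    with smtplib.SMTP("mail.example.org") as server:   # hypothetical host
        server.ehlo()
        limit = server.esmtp_features.get("size")      # e.g. "52428800" (50 MBytes)
        if limit and message_bytes > int(limit):
            print("over the server's fixed maximum; use FTP or HTTP instead")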

And yes, of course, consenting adults can always do anything their
perverse hearts desire with their private packets.


 ..........

from others:

>   So the problem essentially comes down to money. When are message sizes 
>   causing too many problems for admins to cost justify their efforts on? At 
>   that point we either decide to put up nice fences or we build sidewalks.

The issue for the Post Office and SMTP has nothing to do with money, nor
with "causing too many problems for admins."  In both cases, the transport
mechanisms simply don't work when pushed far beyond their design limits.
No plausible amount of money would let the U.S. Post Office deliver 500
kg. machinery.  It's not just the carriers who walk beats delivering mail,
but the tiny trucks they use, that make the idea of mailing freight
obviously silly.  Imagine that you did give the USPS the money necessary
to replace all of their cute little jitneys with 3 ton (payload, not just
GVW) delivery trucks so that they could serve as a motor carrier.  Now
envision the consequences on city streets and for your letters.

On the wires, no amount of money could make SMTP carry GByte files, at
least not this decade.  Yes, eventually that may change, just as the de
facto SMTP limit has grown by at least 20X and probably 100X in the
last 15 years.

Yes, you could pay the USPS to run a motor carrier subsidiary, and you
might even use a few of the existing post offices for the freight yards.
However, that would not change the letter and small parcel delivery system.
You would be creating a new network/protocol.  Again, there are good reasons
why SMTP is about the same age as FTP, as all of us here should know
(or keep silent).

SMTP has inevitable, unfixable, and *desirable* design limits.  Because
SMTP transfers typically involve at least 4 computers (even when only 1
is listed in the Received headers) and often a half dozen, and because
each computer must make a copy of the message in "stable storage," SMTP
is simply and desirably wrong for massive data transfers.  When you are
moving big files, you do not want to use an obligatory
multi-store-and-forward scheme like SMTP, any more than you want to put
a 200 kg engine block into a letter carrier's shoulder bag.
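
To put crude numbers on it (mine, not measurements): every computer in
the path has to spool its own complete copy before it acknowledges the
message, so the disk and wire traffic scale with the number of hops.

    # Back-of-the-envelope sketch; the 6-hop figure is an assumption in
    # line with the "at least 4, often a half dozen" computers above.
    message_bytes = 10 * 2**30      # a 10 GByte message
    hops = 6                        # computers that each spool a full copy

    spool_writes = hops * message_bytes         # stable-storage copies
    wire_copies = (hops - 1) * message_bytes    # SMTP transfers between hops
    print("spooled: %d GBytes, sent over the wire: %d GBytes"
          % (spool_writes // 2**30, wire_copies // 2**30))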

And no, automatically breaking up a 10 GByte file into 10,000 pieces would
not affect the problem any more than breaking up a 500 kg. freight shipment
would make it fit a USPS jitney, your mailbox, or the parcel-pickup counter
at your local post office.

And no, don't talk about making SMTP not be store-and-forward.  That is
a desirable and vital feature of a long-range mail transport protocol.




} When the sending user injects the message into the mail system, the SMTP
} server could detect large e-mail attachments and replace them with reference
} pointers; it would then cache the attachment someplace; possibly in local
} storage, ...

Thank you very much, but NO THANK YOU!
Having arbitrary SMTP MTA's in the path capriciously make copies of my
messages, and then forge and send new text that I did not write, would
not merely be a hassle to implement, but would have so many awesome
non-technical implications that I don't know where to start.

} When the receiving user's mail agent sees such a reference, it can directly
} contact the caching server and download the attachment. This retrieval can
} happen immediately or it can wait until the user explicitly requests it. The
} retrieval protocol should support interruption and restart of the download
} process.

If that could work, then why not move all email that way?
(one problem among many: authentication)


} In addition, SMTP servers which handle the e-mail message between the
} initial server and the end user could choose to fetch the attachment on
} behalf of the user and reassemble the e-mail message into a monolithic
} whole. They would presumably do this based on some local policy.

Part of the trouble may be in thinking of an "SMTP server" as something
like an HTTP server run by an ISP.  It's not.
Note also that SMTP servers are MTA's; SMTP is about Mail Transport.
Mail User Agents often don't talk to MTA's at all for receiving email
(e.g. on UNIX, where they just look in a specially named spool file).
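
For a concrete example of that last point: a mail reader on a UNIX box
can read a user's mail without ever speaking SMTP, just by parsing the
spool file that the local MTA delivered into.  Here is a sketch using
Python's standard mailbox module; the spool path is the conventional
location and an assumption, not a standard:

    # Sketch: an MUA reading delivered mail straight from the local
    # spool file, with no SMTP connection anywhere in sight.
    import mailbox

    spool = mailbox.mbox("/var/mail/alice")   # conventional path; adjust to taste
    for msg in spool:
        print(msg["From"], "--", msg["Subject"])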


 
]   > What's the SOLUTION?
]   
]       :
]       * ^Subject:.*How large is too large
]       /dev/null
]   
]   procmail is your friend

] Great solution for the last mile. Shame it makes some people pay to import
] what they will throw away...

I think that missed our esteemed colleague's point.  His recipe does
not discard large messages, but instead discards messages with a
particular subject line, perhaps because he thinks they're uniformly
without merit.
He is certainly right about messages that equate monetary costs
with all of the other design parameters that are traded off, at
least in any forum that is supposed to be about engineering.


I suppose it's too much to hope that this modest question won't get
turned into yet another official IETF Working Group that will spend 3
years and produce another 100,000 words of English that will need to be
ignored by people with lives, or at least engineering jobs.


Vernon Schryver    [EMAIL PROTECTED]
