On Tue, 22 Aug 2000 11:52:07 +0700, Oskar Sandberg writes:

> Before we release 0.3, I would like to add some size limits on fields,
Sounds like a good idea. This should be put in the docs for client authors, then. If the user tries to insert a file that is too big, I'd prefer to catch this at the user-friendly client level if possible. What message should clients look for if the file is rejected because of size?

Hmm. Come to think of it, should we have a client command to query the server for limits like that, in case the limits change down the road? I like the idea of not having to hard-code the file size limit into each variant of client, but even for me it seems a bit zealous to add a query command just for that one purpose. Still, I like the general idea of the client being able to ask the server about what is and is not kosher.

Bah. For now, the server just needs to return a readable error message when it spits back a rejected file. Clients should not have a hard-coded size limit, especially when that limit is unstable. Self-reflection functions should wait until the code is more stable. Besides, I should add stuff like that myself instead of harassing Oskar about it. I'm almost to the point where I can ask for CVS access (got any easy assignments to delegate, guys?).

Just out of curiosity, where will the server do the file size checks? It seems like it should happen in at least two places: when a file is accepted from the client on insert, and when a file is accepted from another server for transmission. Should we bother with additional "this should never happen, but in case it does" checks, such as when a supposedly-OK file is read off disk for a transmission, or when each new chunk of an in-progress file is written to disk? (A rough sketch of what I mean is in the P.S. below.)

> And what about the trailing? A hundred megs? Two hundred?

My preference is to err on the side of being too small rather than too large. It's easy to make the limit bigger later on, but if we ever have to make it smaller, then old versions will be passing around data that the new versions will reject. 10 megs seems way too small for the real limit, but it may be a good value to use during testing (if only to speed up the cycle time when we test the transmission of files at or near the limit). Does anyone object to 50 megs as the targeted limit for 0.3's general release? Does anyone volunteer to help test the transmission of such files? I'm on a fat pipe here at work, but I don't want to run a Freenet node on a company machine, and anyway we're firewalled in. :-(

> I _could_ implement something where it can tunnel arbitrarily large
> files but limiting the size of the file on disk and then overwriting
> the beginning when it reaches the end, but it would be pretty
> complicated.

Heh. You already know the answer here, Oskar, and you're completely correct. :-) There's no screaming need for that feature, so it makes sense to fully stabilize the core functionality first. (I sketched roughly what I picture in the P.P.S., mostly to illustrate why it gets hairy.)

--Will (who does not speak for his employers, but is finally getting the time to write real code in Java now!)

willdye at freedom.net
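
P.S. Here's a rough sketch of what I mean by doing the size check in one shared routine. All of the names below are made up -- this is not the actual server code, just an illustration -- but the point is that the insert path, the node-to-node transfer path, and the paranoid mid-transfer checks would all call the same method, so the limit only has to change in one place (and ideally it would come from the node's config file rather than a constant).

// Rough sketch only -- hypothetical names, not real Freenet 0.3 code.
public class SizeLimits {

    // Hypothetical limit; 50 megs as discussed above.
    public static final long MAX_FILE_SIZE = 50L * 1024 * 1024;

    /**
     * Throws if the declared size is over the limit.  Meant to be called
     * (1) when a client inserts a file, and (2) when another node offers
     * a file for transmission.
     */
    public static void checkSize(long declaredSize) throws SizeLimitException {
        if (declaredSize < 0 || declaredSize > MAX_FILE_SIZE) {
            throw new SizeLimitException(
                "File size " + declaredSize + " exceeds limit " + MAX_FILE_SIZE);
        }
    }

    /**
     * Belt-and-braces variant for the "should never happen" cases:
     * call this while writing each chunk of an in-progress file to disk,
     * passing the running byte count.
     */
    public static void checkRunningTotal(long bytesSoFar) throws SizeLimitException {
        checkSize(bytesSoFar);
    }
}

// Hypothetical exception type; whatever the server actually throws, the
// message it sends back should be readable enough to show to a user.
class SizeLimitException extends Exception {
    SizeLimitException(String msg) { super(msg); }
}

With something like that in place, the client never needs to know the number at all; it just relays whatever readable message comes back with the rejection.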

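P.P.S. For the curious, here is roughly how I picture Oskar's "overwrite the beginning when it reaches the end" idea: a fixed-size file on disk used as a ring buffer. Again, the names are invented and this is nowhere near real code; it's just enough to show where the complexity comes from -- the writer has to stall or abort whenever it would lap the reader, which drags flow control down into the storage layer. One for well after 0.3.

// Very rough sketch, invented names.  A fixed-size file is treated as a
// ring buffer: bytes stream in from upstream, stream out to downstream,
// and the writer must never lap the reader or unforwarded data is lost.
import java.io.IOException;
import java.io.RandomAccessFile;

public class RingBufferFile {
    private final RandomAccessFile file;
    private final long capacity;   // fixed on-disk size
    private long writePos = 0;     // total bytes written so far
    private long readPos = 0;      // total bytes forwarded so far

    public RingBufferFile(String path, long capacity) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
        this.capacity = capacity;
        file.setLength(capacity);
    }

    /** Space left before the writer would overwrite unforwarded data. */
    public long freeSpace() {
        return capacity - (writePos - readPos);
    }

    /** Write incoming bytes; the caller has to block (or kill the
     *  transfer) when freeSpace() is too small -- the complicated part. */
    public void put(byte[] buf, int len) throws IOException {
        if (len > freeSpace())
            throw new IOException("writer would overtake reader");
        long off = writePos % capacity;
        int firstChunk = (int) Math.min(len, capacity - off);
        file.seek(off);
        file.write(buf, 0, firstChunk);
        if (firstChunk < len) {          // wrap around to the start
            file.seek(0);
            file.write(buf, firstChunk, len - firstChunk);
        }
        writePos += len;
    }

    /** Read bytes for forwarding downstream; returns bytes actually read. */
    public int get(byte[] buf) throws IOException {
        int len = (int) Math.min(buf.length, writePos - readPos);
        long off = readPos % capacity;
        int firstChunk = (int) Math.min(len, capacity - off);
        file.seek(off);
        file.readFully(buf, 0, firstChunk);
        if (firstChunk < len) {          // wrap around to the start
            file.seek(0);
            file.readFully(buf, firstChunk, len - firstChunk);
        }
        readPos += len;
        return len;
    }
}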