On Tue, Aug 22, 2000 at 10:21:44AM -0500, Will Dye wrote:
> On Tue, 22 Aug 2000 11:52:07 +0700, Oskar Sandberg writes:
> > Before we release 0.3, I would like to add some size limits on fields,
>
> Sounds like a good idea. This should be put in the docs for client
> authors, then. If the user tries to insert a file that is too big, I'd
> prefer to catch this at the user-friendly client level if possible.
> What message should clients look for if the file is rejected because of
> size?
As I said, the limits have to be absolute and agreed upon throughout the
network. Changing them is the same as a protocol change - a node with
different limits is effectively a node not speaking the same protocol.

On an insert the client won't see anything, since there is currently no
reply message to the actual sending of the data (this will probably
change with the next revision). And on a request this should simply
never happen; nodes will react to overlength data the same way they
react to data whose signature doesn't verify: by concluding that the
node they are talking to must be screwed.

> Hmm. Come to think of it, should we have a client command to query the
> server for limits like that, in case the limits change down the road? I
> like the idea of not having to hard-code the file size limit into each
> variant of client, but even for me it seems a bit zealous to add a query
> command just for that one purpose. Still, I like the general idea of
> the client being able to ask the server about what is and is not kosher.

I'm not particularly happy about a hardcoded limit either, but querying
for the value only works if you are sending the data to one node that
you are aware of. In truth you are sending the data to many nodes, most
of which you are not (and should not be) aware of, and there is no way
to query those nodes for the size they will tolerate. And if we have
nodes with different limits, then the request situation gets completely
out of hand, since you could make a perfectly valid request and have one
node in the chain refuse to pass the data because it considers it too
large. I'm afraid this has to be hardcoded. We are not looking at the
final version of the Freenet protocol yet, though, so we don't have to
get vertigo about making calls that may haunt us in twenty years quite
yet.

> Bah. For now, the server just needs to have a readable error message
> when it spits back a rejected file. Clients should not have a
> hard-coded size limit, especially when that limit is unstable.
> Self-reflection functions should wait until the code is more stable.
> Besides, I should add stuff like that myself instead of harassing Oskar
> about it. I'm almost to the point where I can ask for CVS access (got
> any easy assignments to delegate, guys?).

It has to be hardcoded. I don't like it either, but the idea of this
varying is a nightmare (a request would have to find the data, then go
back and figure out a route back to the client that doesn't pass through
any nodes that won't take that file size (and still be secure)). No way.

> Just out of curiosity, where will the server do the file size checks?
> It seems like it should be in at least two places: when the file is
> accepted from the client on insert, and when a file is accepted from
> another server for transmission. Should we bother with additional "this
> should never happen, but in case it does" checks; such as when a
> supposedly-ok file is read off disk for a transmission, or when each new
> chunk of an in-progress file is written to disk?

The server would do the file size check every time it received a message
with a trailing (data) field. Since the length of the trailing field has
to be specified in the message, it is simple to reject any message with
DataLength > 209715200 as malformed.
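In code, the check would amount to something like this minimal sketch
(the class and exception names here are made up for illustration, not
taken from the actual node source):

    // Sketch: reject any message whose declared trailing-field length
    // exceeds the hardcoded protocol limit. Names are hypothetical.
    class MalformedMessageException extends Exception {
        MalformedMessageException(String msg) {
            super(msg);
        }
    }

    class TrailingFieldCheck {
        // Hardcoded protocol limit: 200 * 1024 * 1024 bytes.
        static final long MAX_DATA_LENGTH = 209715200L;

        // Called for every incoming message carrying a trailing (data)
        // field; dataLength is the length declared in the message itself.
        static void checkDataLength(long dataLength)
                throws MalformedMessageException {
            if (dataLength > MAX_DATA_LENGTH) {
                throw new MalformedMessageException(
                    "DataLength " + dataLength + " exceeds limit "
                    + MAX_DATA_LENGTH + "; message is malformed");
            }
        }
    }

Since the declared length is checked before any data is read, an
overlength message can be rejected without ever buffering its payload.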
The node does not make any checks on the validity of data it reads off
the disk (checking the signature or hash), but trusts itself. Given the
overhead of doing the checks twice, Scott and I decided that this was a
performance trade-off that made sense (if Mallory has broken into your
system, he can do worse things than fuck with the data (which is really
quite pointless anyway, as it is verified at every node)).

> > And what about the trailing? A hundred megs? Two hundred?
>
> My preference is to err on the side of being too small rather than too
> large. It's easy to make it bigger later on, but if we ever have to
> make it smaller, then old versions will be passing around data that the
> new versions will reject. 10 megs seems way too small, so that may be a
> good limit to use during testing (if only to speed up the cycle time
> when we test the transmission of files at or near that limit). Does
> anyone object to 50 megs as the targeted limit for 0.3's general
> release? Does anyone volunteer to help test the transmission of such
> files? I'm on a fat pipe here at work, but I don't want to run a
> Freenet node on a company machine, and anyway we're firewalled in. :-(

Very good point. Increasing the limit means changing the protocol, but
decreasing it means nuking data already on the network, which is a hell
of a lot worse.

> > I _could_ implement something where it can tunnel arbitrarily large
> > files by limiting the size of the file on disk and then overwriting
> > the beginning when it reaches the end, but it would be pretty
> > complicated.
>
> Heh. You already know the answer here, Oskar; and you're completely
> correct. :-) There's no screaming need for that feature, so it makes
> sense to fully stabilize the core functionality first.
>
> --Will
> (who does not speak for his employers, but is finally getting the time
> to write real code in Java now!)
>
> willdye at freedom.net

--
\oskar
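P.S. For the record, the "limit the size of the file on disk and
overwrite the beginning" scheme quoted above would amount to a ring
buffer backed by a fixed-size file. A minimal sketch of the idea
(entirely hypothetical, nothing like this is in the node):

    // Hypothetical sketch: a fixed-size on-disk ring buffer. Once the
    // buffer is full, new writes overwrite the oldest data, so disk
    // usage never exceeds the chosen capacity.
    import java.io.IOException;
    import java.io.RandomAccessFile;

    class RingBufferFile {
        private final RandomAccessFile file;
        private final long capacity;   // fixed on-disk size limit
        private long writePos = 0;     // next write offset in the file

        RingBufferFile(String path, long capacity) throws IOException {
            this.file = new RandomAccessFile(path, "rw");
            this.capacity = capacity;
            file.setLength(capacity); // disk usage is bounded up front
        }

        // Append bytes, wrapping to the start of the file when the
        // end is reached.
        void write(byte[] buf, int off, int len) throws IOException {
            while (len > 0) {
                int chunk = (int) Math.min(len, capacity - writePos);
                file.seek(writePos);
                file.write(buf, off, chunk);
                writePos = (writePos + chunk) % capacity;
                off += chunk;
                len -= chunk;
            }
        }
    }

The complicated part would be the bookkeeping for a reader draining
data that the writer is about to overwrite, which is why it isn't
worth doing before the core is stable.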
