Regarding hard size limits, I wrote:

> > Sounds like a good idea.  This should be put in the docs for client
> > authors, then.  If the user tries to insert a file that is too big,
> > I'd prefer to catch this at the user-friendly client level if
> > possible.  What message should clients look for if the file is
> > rejected because of size?

Oskar replied:

> As I said, the limits have to be absolute and agreed throughout the
> network. Changing them is the same as a protocol change - a node that
> has different limits is the same as a node not using the same
> protocol.  [...] There is no way to query those nodes for the size
> they tolerate.

When I spoke of clients querying servers about limits, I wasn't clear.
The idea is just that if lower-level software has some kind of limit,
it's often desirable to have the GUI wrapper (on the same machine)
intercept attempts to perform operations that exceed those limits.
For example, instead of passing back "Your insert attempt failed, here's
the error message from the server", I'd prefer to see "Sorry, you can't
send a single terabyte-sized file on Freenet.  Would you like to start
the file-splitting Wizard?".
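
As a rough illustration, here is the kind of pre-check I have in mind,
in Java.  The names (InsertGuard, MAX_INSERT_SIZE) are made up for the
example and don't correspond to anything in the node or any existing
client:

    import java.io.File;

    // Sketch only: a GUI-side size check done before the request ever
    // reaches the node, so the wrapper can offer a friendly alternative.
    class InsertGuard {
        static final long MAX_INSERT_SIZE = 10L * 1024 * 1024;  // 10 megs

        // True if the insert may proceed; otherwise the wrapper can offer
        // the file-splitting wizard instead of relaying a raw node error.
        static boolean sizeOk(File f) {
            return f.length() <= MAX_INSERT_SIZE;
        }
    }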

The problem is that if we hard-code the GUI wrapper to detect the limit,
then the various wrappers may not get updated when the limit changes.
The solution is for the wrapper to somehow query for the limit, but that
means setting up a query mechanism.  Much as my user-centric heart pines
for the best possible GUI, even I admit that for now we should just
intercept the error at the lowest level (where the failure occurs), and
pass a sensible message back up to the top.
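
To make that concrete, a minimal sketch of the "sensible message" idea,
again in Java and again with invented names -- I'm assuming here that a
rejected insert comes back with some identifiable failure code, which
may not match how the node actually reports it:

    // Sketch only: translate a low-level failure code into something a
    // user can act on, instead of echoing the node's error verbatim.
    class InsertErrors {
        static final int ERR_TOO_LARGE = 1;  // hypothetical failure code

        static String describe(int code) {
            if (code == ERR_TOO_LARGE)
                return "This file is larger than the network will accept. "
                     + "Try splitting it into smaller pieces first.";
            return "Insert failed (error code " + code + ").";
        }
    }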

I certainly agree that we'd like the limits to be as uniform as possible
throughout the network, especially in these early days.  My boss has
spent the last few *months* working on a distributed datastore.  Even
with the luxury of assuming that everything is trusted once the
connections are established, he's had problems.  Nightmare, thy name is
async.

> > Should we bother with additional "this should never happen, but in
> > case it does" checks, such as when a supposedly-ok file is read off
> > disk for a transmission?

> The node does not make any checks on the validity of data it reads off
> the disk (checking the signature or hash), but trusts itself. Given the
> overhead of doing the checks twice, Scott and I decided that this was
> a performance trade-off that made sense (if Mallory has broken into
> your system, he can do worse things).

I wasn't thinking of a hostile break-in as much as a software bug that
somehow scrozzed up the local data store.  If you & Scott have already
talked it over, however, that's more than enough for me.
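
For what it's worth, the paranoia check I had in mind is nothing more
than re-hashing whatever comes off the disk, roughly like the sketch
below.  Illustrative only -- the real node deliberately skips this, and
I'm just assuming the data is stored under a SHA-1 style digest:

    import java.security.MessageDigest;
    import java.util.Arrays;

    // Sketch of an optional verify-on-read check: re-hash the bytes read
    // from the local store and compare against the digest they were
    // stored under.
    class StoreCheck {
        static boolean looksIntact(byte[] data, byte[] expectedDigest)
                throws Exception {
            byte[] actual = MessageDigest.getInstance("SHA-1").digest(data);
            return Arrays.equals(actual, expectedDigest);
        }
    }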

> > My preference is to err on the side of being too small rather than
> > too large.  It's easy to make it bigger later on, but if we ever
> > have to make it smaller, then old versions will be passing around
> > data that the new versions will reject.  10 megs seems way too
> > small, so that may be a good limit to use during testing

> Very good point. Increasing the limit means changing the protocol,
> but decreasing it means nuking data

Does anyone object to starting at only 10 megs, then?  In other
messages in this thread, people seemed to prefer 100 megs as the magic
number.  I defer to the coders on the matter, but I'd sure like some
volunteers to run a couple of 100-meg tests before we commit to it.
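
If anyone wants to volunteer, generating the test data is trivial;
something like this (plain Java, nothing Freenet-specific) will produce
a 100-meg file of random bytes to insert:

    import java.io.FileOutputStream;
    import java.util.Random;

    // Writes a 100-meg file of random data for insert testing.
    class MakeTestFile {
        public static void main(String[] args) throws Exception {
            byte[] chunk = new byte[1024 * 1024];   // one meg at a time
            Random rand = new Random();
            FileOutputStream out = new FileOutputStream("test-100MB.bin");
            for (int i = 0; i < 100; i++) {
                rand.nextBytes(chunk);
                out.write(chunk);
            }
            out.close();
        }
    }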


--Will
(not speaking for my employers)



