On Tue, Aug 22, 2000 at 12:52:49PM -0500, Will Dye wrote:
>
> On Tue, 22 Aug 2000 17:42:18 BST, Theodore Hong writes:
>
> > It feels like a bad design choice to arbitrarily set a fixed upper
> > limit to the size of data. ("No one will ever need more than 640K of
> > memory...") Why do we need one?
>
> As Oskar stated in his original message on the subject, fields and
> messages are currently read directly into memory. This means you can
> probably crash a Freenet node just by sending it several megs of data
> without a newline. Yes, you could spool the data to disk instead, but
> that introduces complex code that has not been written yet and will be
> even harder to test properly. You still have a limit to test against
> (disk size), but now the limit changes with almost every test run, and
> few people ever hit it (unless they are being hit by a buffer-overrun
> attack).
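To make the failure mode concrete: a bounded reader along the following
lines would refuse to buffer an endless header line. This is only a
sketch; the class name, the limit, and the method are invented for
illustration and are not taken from the actual node code.

    import java.io.IOException;
    import java.io.InputStream;

    // Sketch only: read one header line, but give up once it passes a
    // hard cap instead of buffering an arbitrarily long line in memory.
    public class BoundedLineReader {
        public static final int MAX_LINE = 4096;  // example limit, not real

        public static String readLine(InputStream in) throws IOException {
            StringBuffer buf = new StringBuffer();
            int c;
            while ((c = in.read()) != -1 && c != '\n') {
                if (buf.length() >= MAX_LINE)
                    throw new IOException("header line over " + MAX_LINE + " bytes");
                buf.append((char) c);
            }
            return buf.toString();
        }
    }

A reader like that turns "several megs without a newline" into a clean
protocol error rather than an out-of-memory crash.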
Actually, these are two different things. One is the matter of the
messages, which need some sort of size limit since they are read into
memory, but which OTOH probably won't grow much over time (it's mostly a
matter of crypto keys getting longer, and that isn't happening very fast).
Then there is the matter of the data itself. The node has its own limit on
how large a piece of data it will cache, but it will temporarily hold
larger data while it tunnels it. And the amount of data that people
consider normal roughly doubles every year, so...
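As a rough illustration of keeping those two limits separate, something
like the sketch below would hold message fields in memory under a small
fixed cap while spooling tunnelled data to a temp file up to the node's
own, configurable cap. All names and numbers here are invented and are
not the node's real configuration.

    import java.io.*;

    // Sketch only: two separate limits, a small fixed one for in-memory
    // message fields and a larger per-node one for cached/tunnelled data.
    public class LimitSketch {
        static final int MAX_FIELD_BYTES = 16 * 1024;    // messages: small, fixed
        static long maxCacheBytes = 256L * 1024 * 1024;  // data: per node, adjustable

        // Copy a data stream to a temp file so it never has to fit in memory.
        static File spoolToDisk(InputStream in, long length) throws IOException {
            if (length > maxCacheBytes)
                throw new IOException("data larger than this node's cache limit");
            File tmp = File.createTempFile("tunnel", ".dat");
            OutputStream out = new FileOutputStream(tmp);
            byte[] chunk = new byte[8192];
            long left = length;
            while (left > 0) {
                int n = in.read(chunk, 0, (int) Math.min((long) chunk.length, left));
                if (n == -1)
                    throw new EOFException("stream ended early");
                out.write(chunk, 0, n);
                left -= n;
            }
            out.close();
            return tmp;
        }
    }

Keeping the data path on disk would mean the only limit that has to stay
fixed is the small one on message fields, while the cache limit can grow
as "normal" data sizes keep doubling.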
> Personally, I've slowly come to prefer well-defined limits as opposed
> to vague assurances that "technically, there is no limit". *Something*
> will eventually limit things anyway, often in some hard-to-replicate
> realm like available swap space or some integer used by an alternate
> transmission system to count packets. I'd rather deal with upgrading
> hard limits than with trying to replicate an overrun attack bug.
That is true.
--
\oskar