> I'm not particularly happy about a hardcoded limit either, but querying
> for the value only works if you are sending the data to one node that you
> are aware of. The truth is you are sending the data to many nodes, most of
> which you are not (and should not be) aware of. There is no way to query
> those nodes for the size they tolerate.

Well, you *could* send the size of the file with the InsertRequest, and
then any node in the chain could reject the insert based on that size.
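
Roughly something like the sketch below. This is only an illustration of
the idea, not code from the actual source: apart from InsertRequest, the
names (Node, dataSize, acceptInsert, MAX_INSERT_SIZE) are made up.

public class InsertRequest {
    // Size of the data being inserted, in bytes, declared by the client.
    private final long dataSize;

    public InsertRequest(long dataSize) {
        this.dataSize = dataSize;
    }

    public long getDataSize() {
        return dataSize;
    }
}

class Node {
    // With a hard-coded limit this is just a constant, e.g. 100M.
    private static final long MAX_INSERT_SIZE = 100L * 1024 * 1024;

    // Any node along the chain can apply this check and refuse to
    // forward or store the insert.
    boolean acceptInsert(InsertRequest request) {
        return request.getDataSize() <= MAX_INSERT_SIZE;
    }
}

If the limit were configurable instead, MAX_INSERT_SIZE would just be read
from the node's config, but the check itself is the same either way.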

However, after some thought, the only reasons I could see for having a
configurable limit are that 1) it can be reconfigured as the need for
bigger files arises, and 2) if you want to set up your own private Freenet
network (which could be useful for data replication on intranets and such),
you might want to change it. Well, #1 isn't good because sysadmins are
lazy; it would be better to just change the limit as new versions are
released so that everyone is on the same page. And #2 doesn't matter
because if you want to run your own private Freenet you have the source
and probably want to make modifications anyway.

So I don't see anything actually gained by not having a hard limit.
I say just make it 100M.


