On Wed, Aug 23, 2000 at 02:19:30PM -0500, Will Dye wrote:
> Oskar replied:
> > As I said, the limits have to be absolute and agreed throughout the
> > network. Changing them is the same as a protocol change - a node that
> > has different limits is the same as a node not using the same
> > protocol.  [...] There is no way to query those nodes for the size
> > they tolerate.
> 
> When I spoke of clients querying servers about limits, I wasn't clear.
> The idea is just that if lower-level software has some kind of limit,
> it's often desirable to have the GUI wrapper (on the same machine)
> intercept attempts to perform operations that exceed those limits.
> For example, instead of passing back "Your insert attempt failed, here's
> the error message from the server", I'd prefer to see "Sorry, you can't
> send a single terabyte-sized file on Freenet.  Would you like to start
> the file-splitting Wizard?"

This is just a matter of keeping the constants in one place so the layers
stay orthogonal: define the limit once and let the wrapper read it from
there instead of hard-coding its own copy. It's a basic principle of good
design.
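
Roughly what I have in mind (the class and constant names here are made up
for illustration, not the real ones):

    // Hypothetical single source of truth, defined once in the node /
    // client library; the GUI wrapper reads it instead of hard-coding
    // its own copy of the limit.
    final class Limits {
        private Limits() {}
        // Illustrative value only, not the real network constant.
        static final long MAX_INSERT_SIZE = 100L * 1024 * 1024; // 100 megs
    }

    public class InsertCheck {
        // The wrapper refuses an oversized insert before it ever reaches
        // the node, and can offer splitting instead of a raw error.
        public static void main(String[] args) {
            long requestedSize = 1024L * 1024 * 1024 * 1024; // 1 terabyte
            if (requestedSize > Limits.MAX_INSERT_SIZE) {
                System.out.println("Sorry, you can't send a file that big"
                        + " in one insert; start the file-splitting wizard?");
            }
        }
    }

If the wrapper gets the constant from the library rather than copying it,
the "wrapper not updated when the limit changes" problem goes away.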

> The problem is that if we hard-code the GUI wrapper to detect the limit,
> then the various wrappers may not get updated when the limit changes.
> The solution is for the wrapper to somehow query for the limit, but that
> means setting up a query mechanism.  Much as my user-centric heart pines
> for the best possible GUI, even I admit that for now we should just
> intercept the error at the lowest level (where the failure occurs), and
> pass a sensible message back up to the top.

I don't know what sort of wrappers we are talking about. Somebody was
writing a wrapper by calling the CLI clients, which I know is often the
Unix way, but in this case it is vastly inferior to writing code that
actually uses the Java client library.
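
For comparison, something like this (the FreenetClient interface and the
freenet_insert command name are just placeholders, not the real library
classes or scripts):

    import java.io.InputStream;

    // Stand-in for whatever the real client library exposes; the actual
    // classes in the Java client library have different names.
    interface FreenetClient {
        long maxInsertSize();   // a typed answer, no output parsing
        void insert(String key, InputStream data) throws Exception;
    }

    public class WrapperStyles {
        // Shelling out: spawn a process per request and scrape its output
        // for errors -- slow, fragile, and the error text can change.
        static void viaCli(String key, String file) throws Exception {
            Process p = Runtime.getRuntime()
                    .exec(new String[] { "freenet_insert", key, file });
            p.waitFor();
        }

        // In-process: one JVM, typed exceptions, and the limit can be
        // asked for up front instead of rediscovered on failure.
        static void viaLibrary(FreenetClient client, String key,
                               InputStream data) throws Exception {
            if (data.available() > client.maxInsertSize()) {
                throw new IllegalArgumentException("too big for one insert");
            }
            client.insert(key, data);
        }
    }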

I'll add a "-version" call to the CLI clients that prints such
information as Brandon suggested anyway.
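
Something along these lines, say (placeholder names again; the real values
live wherever we keep the build number, protocol version and limits):

    public class ClientVersion {
        // Placeholder values for illustration only.
        static final String BUILD = "unknown";
        static final String PROTOCOL = "unknown";
        static final long MAX_INSERT_SIZE = 100L * 1024 * 1024;

        public static void main(String[] args) {
            if (args.length > 0 && args[0].equals("-version")) {
                System.out.println("build: " + BUILD);
                System.out.println("protocol: " + PROTOCOL);
                System.out.println("max insert size: " + MAX_INSERT_SIZE
                        + " bytes");
                return;
            }
            // ... normal client behaviour ...
        }
    }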

> I certainly agree that we'd like the limits to be as uniform as possible
> throughout the network, especially in these early days.  My boss has
> spent the last few *months* working on a distributed datastore.  Even
> with the luxury of assuming that everything is trusted once the
> connections are established, he's had problems.  Nightmare, thy name is
> async.

I've been here for almost a year now, and Ian has been working on Freenet
for two (I think). I wish it were a matter of *months*.

> > Very good point. Increasing the limit means changing the protocol,
> > but decreasing it means nuking data
> 
> Does anyone object to starting at only 10 megs then?  In other
> messages in this thread, people seemed to prefer 100 meg as the magic
> number.  I defer to the coders on the matter, but I'd sure like some
> volunteers to run a couple of 100-meg tests before we commit to it.

10 megs will annoy people.

> 
> 
> --Will
> (not speaking for my employers)

-- 
\oskar
_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
