Raphael Manfredi wrote:
> Here are a few comments.
> 
> Quoting Bill Pringlemeir <[EMAIL PROTECTED]> from ml.softs.gtk-gnutella.devel:
> :There are several deterministic versions of fair.
> :
> :   - First come, first serve.
> :   - Minimize time from request to completion.
> :   - Maximize content diversity.
> :   - Maximize throughput.
> :
> :All of these can be on a per request or per peer basis.  There are
> :also bounds on resources such as the number of sockets/file
> :descriptors that will be open at one time.
> 
> Actually, maximize throughput is not fair at all.  It's just a local strategy.

I think fairness should be an afterthought. First of all, things should be
effective and efficient. Things might also turn out rather fair anyway, if
you consider that peers with high-bandwidth access can typically not only
download but also upload faster. Other peers waiting in your queue can
simply be delegated to them after some time. Of course, peers that have
partial file sharing disabled shouldn't be preferred this way.
 
> :The problem with the "first come, first serve" is that large files
> :will tend to dominate the queue.  As the shorter files finishes, they
> :can be replaced with large requests.  However, this is easiest to
> :implement.

> The current implementation of PARQ uses multiple queues (one per slot),
> with exponentially decreasing sizes.  Of course, slots from empty queues
> are stolen by others, just like unused bandwidth is stolen.

The problem might be that this is not the spirit of "first come, first
serve". FCFS is only "fair" if everybody gets roughly the same amount of
service, and I believe that's how it works in real life anyway. Otherwise
you're making appointments, which still has some FIFO properties but is
overall something very different.

Thus, it's only truly FIFO if we gave each requester the same amount of
bytes or time.
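
To illustrate what I mean by "the same amount of bytes": something like a
deficit round-robin over the waiting peers (a sketch only, nothing like
this is in the tree):

    #include <stddef.h>

    #define QUANTUM 65536   /* bytes granted per peer per round */

    struct requester {
        struct requester *next;   /* circular list of waiting peers */
        size_t deficit;           /* bytes this peer may still be served */
    };

    /*
     * One round: every waiting peer earns the same byte quantum, so
     * over time everybody gets the same amount of service no matter
     * how large their individual requests are.
     */
    static void
    drr_round(struct requester *head)
    {
        struct requester *r = head;

        do {
            r->deficit += QUANTUM;
            /* ... serve up to r->deficit bytes of r's request,
             *     subtracting whatever was actually sent ... */
            r = r->next;
        } while (r != head);
    }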

One issue here is the use of HTTP. The server cannot really redefine a
request. If someone requests 500 MB, you can say OK, Not OK, or come back
later; you can't say "OK, but I'll only give you 1 MB". Actually, HTTP has
all the bits and pieces to do this (byte ranges, 206 Partial Content), it
just won't work because HTTP clients are far too simplistic. Also, HTTP
imposes some restrictions to simplify things, which means it's simply not
allowed.
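
A made-up exchange to illustrate (the URN is a placeholder): the wire
format would let a server answer a 500 MB range request with a 1 MB slice,
but as I understand the RFC, a server must satisfy the range, ignore it,
or reject it outright, and real clients would just treat the short answer
as an error:

    GET /uri-res/N2R?urn:sha1:EXAMPLE HTTP/1.1
    Host: example.net
    Range: bytes=0-524287999

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 0-1048575/524288000
    Content-Length: 1048576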

Another point is that we might want to cancel transfers when the reader is
too slow and becomes a hog. I think we already do this in extreme cases,
but it could be done more aggressively, which would of course violate HTTP
too.

> Maximization of uploading bandwidth is already the strategy of GTKG.
> Whether you use 4 slots or 30 slots, the whole uploading bandwidth will
> be used (assuming remote peers can withstand it).

Maybe, but it doesn't try to upload to the most capable peers. Uploading
to a few fast peers is likely more efficient overall than uploading to
dozens or hundreds of peers that can barely pull enough bytes to avoid
being kicked out as stalling.
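
For instance, free slots could be handed to the queued peers that achieved
the best throughput on earlier transfers (again just a sketch, the names
are invented):

    #include <stdlib.h>

    struct peer {
        unsigned avg_bps;   /* throughput measured on past transfers */
    };

    /* Sort descending: most capable peers first. */
    static int
    by_throughput(const void *a, const void *b)
    {
        const struct peer *pa = a, *pb = b;

        return (pb->avg_bps > pa->avg_bps) - (pb->avg_bps < pa->avg_bps);
    }

    /* Grant the free slots to the fastest waiting peers. */
    static void
    grant_slots(struct peer *waiting, size_t n_waiting, size_t n_slots)
    {
        size_t i;

        qsort(waiting, n_waiting, sizeof waiting[0], by_throughput);
        for (i = 0; i < n_slots && i < n_waiting; i++) {
            /* ... start uploading to waiting[i] ... */
        }
    }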

-- 
Christian
