Here are a few comments.

Quoting Bill Pringlemeir <[EMAIL PROTECTED]> from ml.softs.gtk-gnutella.devel:
:There are several deterministic versions of fairness.
:
:   - First come, first serve.
:   - Minimize time from request to completion.
:   - Maximize content diversity.
:   - Maximize throughput.
:
:All of these can be applied on a per-request or per-peer basis.  There
:are also bounds on resources, such as the number of sockets/file
:descriptors that will be open at any one time.

Actually, "maximize throughput" is not fair at all.  It's just a local
strategy.

:The problem with "first come, first serve" is that large files will
:tend to dominate the queue.  As the shorter files finish, they are
:replaced with more large requests.  However, this is the easiest to
:implement.

The current implementation of PARQ uses multiple queues (one per slot),
with exponentially decreasing sizes.  Of course, slots from empty queues
are stolen by others, just like unused bandwidth is stolen.
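
To give the flavour of the scheme (this is only a sketch of the idea,
not the actual PARQ code; all names and constants are made up): each
slot owns a queue, queue 0 being the largest, and a slot whose own
queue is empty serves the next non-empty one.

#include <stddef.h>

#define NUM_SLOTS 4

struct queue {
    size_t size;        /* requests currently waiting */
    size_t capacity;    /* maximum queue length */
};

/* Queue 0 gets the largest capacity, halved for each further slot. */
static void init_queues(struct queue q[NUM_SLOTS], size_t base_capacity)
{
    int i;

    for (i = 0; i < NUM_SLOTS; i++) {
        q[i].size = 0;
        q[i].capacity = base_capacity >> i;  /* base, base/2, base/4... */
    }
}

/* Serve the queue whose turn it is; if it is empty, let the next
 * non-empty queue steal the slot, as unused bandwidth is stolen. */
static int pick_queue(const struct queue q[NUM_SLOTS], int turn)
{
    int i;

    for (i = 0; i < NUM_SLOTS; i++) {
        int candidate = (turn + i) % NUM_SLOTS;
        if (q[candidate].size > 0)
            return candidate;
    }
    return -1;  /* all queues empty: nothing to serve */
}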

:In a queue where only one file is served at a time, the wait is
:minimized by putting the shortest requests first.  This has the
:problem that the downloading peer might be bandwidth-limited.  I
:believe that multiple serving really only exists to deal with
:bandwidth-limited peers, i.e. it is actually there to maximize
:throughput.  Certainly another case is that several slow peers
:dominate the active set, while faster peers are left waiting in the
:queue.

Maximization of uploading bandwidth is already the strategy of GTKG.
Whether you use 4 slots or 30 slots, the whole uploading bandwidth will
be used (assuming the remote peers can sustain it).
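
To make that concrete, here is a rough sketch of the sharing logic
(an illustration only, not the actual GTKG scheduler): every active
upload gets an equal share of the configured bandwidth, and whatever a
slow peer cannot absorb is redistributed among the others, so the pipe
stays full as long as the peers can collectively drain it.

#define MAX_UPLOADS 64

/*
 * demand[i] is what peer i can absorb (bytes/sec); on return, grant[i]
 * holds its allocation.  The whole of "total" is handed out whenever
 * the summed demand is at least "total" (max-min fair sharing).
 * Assumes n <= MAX_UPLOADS.
 */
static void share_bandwidth(const double demand[], double grant[],
    int n, double total)
{
    int satisfied[MAX_UPLOADS] = { 0 };
    int active = n;
    double left = total;

    while (active > 0) {
        double fair = left / active;
        int changed = 0;
        int i;

        for (i = 0; i < n; i++) {
            if (!satisfied[i] && demand[i] <= fair) {
                grant[i] = demand[i];  /* slow peer: all it can take */
                left -= demand[i];
                satisfied[i] = 1;
                active--;
                changed = 1;
            }
        }
        if (!changed) {
            for (i = 0; i < n; i++) {
                if (!satisfied[i])
                    grant[i] = fair;   /* fast peers split the rest */
            }
            break;
        }
    }
}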

:A better single-served-file algorithm would require accurate estimates
:of the bandwidth between peers as well as of the file size.  Together,
:these give the actual time for the download.

This is almost impossible to estimate, and furthermore, you cannot
guess just how much of a file a remote peer will ask for.

Finally, note that the upload strategy could be influenced by the
estimated popularity of a file.  Today, this is hard to achieve, as the
download mesh can become disconnected.  But one could use strategies
like "maximize dissemination" for rare files, accepting all requests
for such files or giving them higher priority.
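
Purely as an illustration of what "maximize dissemination" could look
like (nothing of the sort exists today, and the source count below is
precisely the estimate that is hard to obtain reliably): sort the
queued requests so that files with few known sources come first.

struct request {
    unsigned mesh_sources;  /* estimated number of alternate sources */
    /* other fields omitted */
};

/* qsort()-style comparator: rarer files (fewer sources) sort first. */
static int cmp_by_rarity(const void *a, const void *b)
{
    const struct request *ra = a;
    const struct request *rb = b;

    if (ra->mesh_sources < rb->mesh_sources)
        return -1;
    if (ra->mesh_sources > rb->mesh_sources)
        return 1;
    return 0;
}

One would then qsort() the pending queue with this comparator before
picking the next request to serve.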

My intuition is that optimum local strategies are not necessarily optimum
global strategies, and vice versa.

Raphael
