There's too much to say on this subject so I'm going to put it in bullet
points - hope this doesn't seem rude.

* Inserting a splitfile involves a single sender and multiple receivers,
with different round trip times and link capacities for each receiver

* TCP's congestion control is designed for use between a single sender
and a single receiver - it doesn't work for a single sender and multiple
receivers (see reliable multicast)

* TCP's congestion control also assumes the sender is well behaved - a
badly behaved sender can cause all other flows to back off, for selfish
or malicious reasons

* Despite being well designed and widely tested, TCP is not suitable for
end-to-end congestion control in Freenet

* However, TCP-like congestion control between neighbours would make
sense (see DCCP for TCP-friendly congestion control algorithms - rough
sketch below)

* This leaves us with two problems to solve:
    1) Route around overloaded nodes
    2) Control the amount of load a node can place on the network
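
To make "TCP-like congestion control between neighbours" a bit more
concrete, here's a very rough AIMD-ish sketch (names made up - a real
implementation should follow one of the DCCP CCIDs rather than this):

    // Hypothetical per-neighbour congestion state, AIMD-style:
    // additive increase on each ack, multiplicative decrease on loss.
    // The window is measured in packets for simplicity.
    class NeighbourCongestionState {
        private double cwnd = 1.0;  // congestion window
        private int inFlight = 0;   // packets sent but not yet acked

        synchronized boolean readyToSend() {
            return inFlight < (int) cwnd;
        }

        synchronized void onSend() {
            inFlight++;
        }

        synchronized void onAck() {
            inFlight--;
            cwnd += 1.0 / cwnd;     // roughly +1 packet per round trip
        }

        synchronized void onLossOrTimeout() {
            if (inFlight > 0) inFlight--;
            cwnd = Math.max(1.0, cwnd / 2.0);
        }
    }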

~~~~~~~

1) Route around overloaded nodes

* At the moment this is handled by two mechanisms (backoff and ping
time), which seem to use both end-to-end congestion feedback and local
congestion feedback

* I suggest replacing the backoff and ping mechanisms:
    * TCP-like congestion control between neighbours
    * Forward each packet to the neighbour that:
        * Is online
        * Is ready to receive more packets
        * Is closest to the target

* Rather than having an explicit backoff timer, congested neighbours are
temporarily avoided until they are ready to receive more packets
(according to the local TCP-like congestion control)

* "Route as greedily as possible, given the available capacity"

~~~~~~~

2) Control the amount of load a node can place on the network

* Simple approach - round robin:
    * Keep a separate queue for packets from each neighbour
    * Loop over the queues, transmitting the first packet from each
non-empty queue
    * Advantages:
        * Simple, patent-free
        * A selfish node can only use its fair share of its neighbours'
bandwidth
    * Disadvantages:
        * Treats small and large packets equally
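
A rough round-robin sketch (N and P stand in for the real neighbour
and packet types; the send callback is whatever actually puts a packet
on the wire):

    import java.util.*;
    import java.util.function.Consumer;

    // One FIFO queue per neighbour; each pass sends at most one
    // packet from each non-empty queue, so no neighbour can grab
    // more than its fair share of the outgoing link.
    class RoundRobinScheduler<N, P> {
        private final Map<N, Deque<P>> queues = new LinkedHashMap<>();

        synchronized void enqueue(N from, P packet) {
            queues.computeIfAbsent(from, k -> new ArrayDeque<>()).add(packet);
        }

        synchronized void sendOnePass(Consumer<P> send) {
            for (Deque<P> q : queues.values()) {
                P p = q.poll();
                if (p != null)
                    send.accept(p);
            }
        }
    }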

* Slightly more sophisticated approach - token buckets:
    * Standard method of flow control for bursty traffic
    * Keep a token bucket for packets from each neighbour
    * Add n tokens per second to the bucket of each online neighbour
    * Loop over the queues as before, but only forward a packet if the
neighbour has enough tokens
    * Remove tokens from the bucket according to the size of the packet
    * Advantages:
        * Simple, patent-free
        * Takes packet size into account
        * Gives neighbours an incentive to stay online when they're not
active (to accumulate tokens)
    * Disadvantages:
        * The link can be idle even though there are packets waiting
        * Might allow an attacker to save up enough tokens for a flood
            * Limit the size of the bucket?
        * Neighbours have an incentive to stay online but they don't
actually have to do anything useful
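
A token bucket sketch along those lines (the refill rate and the
bucket cap are made-up knobs - the cap is the "limit the size of the
bucket" idea above):

    // Tokens accumulate while the neighbour is online; a packet is
    // only forwarded if the bucket holds enough tokens to cover its
    // size, and the bucket is capped so an idle neighbour can't save
    // up an unbounded flood.
    class TokenBucket {
        private long tokens = 0;
        private final long tokensPerSecond;  // "n tokens per second"
        private final long capacity;         // cap on saved-up tokens

        TokenBucket(long tokensPerSecond, long capacity) {
            this.tokensPerSecond = tokensPerSecond;
            this.capacity = capacity;
        }

        // Called once a second for each online neighbour.
        synchronized void refill() {
            tokens = Math.min(capacity, tokens + tokensPerSecond);
        }

        // Forward the packet only if its size is covered.
        synchronized boolean tryConsume(int packetSize) {
            if (tokens < packetSize)
                return false;
            tokens -= packetSize;
            return true;
        }
    }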

* Allocate tokens to neighbours in proportion to the amount of data they
return
    * Similar to the approach used by GNUnet, SLIC, SWIFT
    * Advantages:
        * Incentive to store data
        * Incentive to forward data requests and responses
        * Attackers must do useful work to save up tokens for a flood
    * Disadvantages:
        * No incentive to forward inserts (but is this even possible,
since there's no way to verify they've been forwarded?)
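
And a rough sketch of the proportional allocation (the per-second
budget and the decay factor are made-up knobs; the result would feed
each neighbour's token bucket):

    import java.util.*;

    // Split the node's outgoing token budget between neighbours in
    // proportion to the amount of data each has returned recently,
    // so an attacker has to do useful work before it can flood.
    // N stands in for the real neighbour type.
    class ProportionalAllocator<N> {
        private final Map<N, Double> dataReturned = new HashMap<>();

        // Called whenever a neighbour returns data we asked for.
        synchronized void recordDataReturned(N neighbour, int bytes) {
            dataReturned.merge(neighbour, (double) bytes, Double::sum);
        }

        // Called once a second: split `budget` tokens in proportion
        // to recent contribution, then decay the history so old work
        // gradually loses its value. A neighbour that has returned
        // nothing gets nothing.
        synchronized Map<N, Long> allocate(long budget) {
            Map<N, Long> allocation = new HashMap<>();
            double total = 0;
            for (double v : dataReturned.values())
                total += v;
            if (total > 0)
                for (Map.Entry<N, Double> e : dataReturned.entrySet())
                    allocation.put(e.getKey(),
                                   (long) (budget * e.getValue() / total));
            dataReturned.replaceAll((n, v) -> v * 0.9);  // decay
            return allocation;
        }
    }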


Any thoughts?
Michael
