When Ian eventually returns from The Real World, I'll invite you to
#freenet-dev (irc.freenode.net) to discuss this. Now, point by point:

On Wed, Apr 12, 2006 at 06:25:59PM +0100, Michael Rogers wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> There's too much to say on this subject so I'm going to put it in bullet
> points - hope this doesn't seem rude.
> 
> * Inserting a splitfile involves a single sender and multiple receivers,
> with different round trip times and link capacities for each receiver
> 
> * TCP's congestion control is designed for use between a single sender
> and a single receiver - it doesn't work for a single sender and multiple
> receivers (see reliable multicast)

Reliable multicast is a contradiction in terms. :) The only way to make
it reliable is to have a backchannel and the ability to retransmit to
each receiver.
> 
> * TCP's congestion control also assumes the sender is well behaved - a
> badly behaved sender can cause all other flows to back off, for selfish
> or malicious reasons

Yes, this is the fundamental problem which we currently ignore, and
which I am not sure CAN be addressed with TCP-like load limiting.
> 
> * Despite being well designed and widely tested, TCP is not suitable for
> end-to-end congestion control in Freenet

Then what is?
> 
> * However, TCP-like congestion control between neighbours would make
> sense (see DCCP for TCP-friendly congestion control algorithms)
> 
> * This leaves us with two problems to solve:
>     1) Route around overloaded nodes
>     2) Control the amount of load a node can place on the network
> 
> ~~~~~~~
> 
> 1) Route around overloaded nodes
> 
> * At the moment this is handled by two mechanisms (backoff and ping
> time), which seem to use both end-to-end congestion feedback and local
> congestion feedback

Backoff happens when a node returns a local=true RejectedOverload (due
to high local ping times), or times out. It is based solely on local
feedback. It is a close analogue of e.g. Ethernet backoff, and I don't
see any urgent reason to change it.
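To be concrete, the per-peer backoff behaves roughly like this (an illustrative Python sketch, not Fred's actual code; class and method names are mine):

```python
import random

class PeerBackoff:
    """Exponential backoff per neighbour, loosely analogous to
    Ethernet's scheme: each local RejectedOverload or timeout doubles
    the backoff window, and a success resets it. The constants here
    are illustrative, not Fred's actual values."""

    def __init__(self, initial=1.0, ceiling=3600.0):
        self.initial = initial    # first backoff window, seconds
        self.ceiling = ceiling    # cap on the window
        self.current = 0.0        # 0 means not backed off

    def on_local_reject_or_timeout(self):
        # Double the window (starting from `initial`), cap it, and pick
        # a random delay within it to avoid synchronised retries.
        self.current = min(self.ceiling, max(self.initial, self.current * 2))
        return random.uniform(0, self.current)

    def on_success(self):
        self.current = 0.0
```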
> 
> * I suggest replacing the backoff and ping mechanisms:
>     * TCP-like congestion control between neighbours
>     * Forward each packet to the neighbour that:
>         * Is online
>         * Is ready to receive more packets
>         * Is closest to the target
> 
> * Rather than having an explicit backoff timer, congested neighbours are
> temporarily avoided until they are ready to receive more packets
> (according to the local TCP-like congestion control)
> 
> * "Route as greedily as possible, given the available capacity"

We do. But we need the amount of load going in to be limited so that
this does not distort routing too much.
> 
> ~~~~~~~
> 
> 2) Restrict the amount of load a node can place on the network
> 
> * Simple approach - round robin:
>     * Keep a separate queue for packets from each neighbour
>     * Loop over the queues, transmitting the first packet from each
> non-empty queue
>     * Advantages:
>         * Simple, patent-free
>         * A selfish node can only use its fair share of its neighbours'
> bandwidth
>     * Disadvantages:
>         * Treats small and large packets equally

I don't see that you are actually addressing the problem here. How do we
limit the amount of load the node can place on the network? Yes, we
should have fair queueing, but we also need the requestors to slow down
according to network load in order to minimize backoff. Otherwise the
load will flow to wherever is least overloaded, as it did in 0.5, and
routing will collapse.
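For reference, the round-robin scheme you describe is essentially one service pass over per-neighbour queues (illustrative Python sketch):

```python
from collections import deque

def round_robin(queues):
    """One pass of round-robin service: send the head packet of each
    non-empty per-neighbour queue in turn. `queues` maps neighbour id
    to a deque of packets; returns the (neighbour, packet) pairs sent
    this pass. Note it treats small and large packets equally, the
    disadvantage noted above."""
    sent = []
    for neighbour, q in queues.items():
        if q:
            sent.append((neighbour, q.popleft()))
    return sent
```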
> 
> * Slightly more sophisticated approach - token buckets:
>     * Standard method of flow control for bursty traffic
>     * Keep a token bucket for packets from each neighbour
>     * Add n tokens per second to the bucket of each online neighbour
>     * Loop over the queues as before, but only forward a packet if the
> neighbour has enough tokens
>     * Remove tokens from the bucket according to the size of the packet
>     * Advantages:
>         * Simple, patent-free
>         * Takes packet size into account
>         * Gives neighbours an incentive to stay online when they're not
> active (to accumulate tokens)
>     * Disadvantages:
>         * The link can be idle even though there are packets waiting
>         * Might allow an attacker to save up enough tokens for a flood
>             * Limit the size of the bucket?
>         * Neighbours have an incentive to stay online but they don't
> actually have to do anything useful

I appreciate that we need fair queueing.
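The token-bucket variant, as you describe it, comes down to something like this (illustrative Python sketch; parameter names are mine):

```python
class TokenBucket:
    """Per-neighbour token bucket: tokens accrue at a fixed rate while
    the neighbour is online, a packet may only be forwarded if its size
    fits the balance, and the bucket is capped to limit the flood an
    attacker can save up (the "limit the size of the bucket" point)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # cap on saved-up tokens
        self.tokens = 0.0

    def tick(self, elapsed):
        # Accrue tokens for `elapsed` seconds, up to the cap.
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed)

    def try_send(self, packet_size):
        # Forward only if enough tokens; charge by packet size.
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False
```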
> 
> * Allocate tokens to neighbours in proportion to the amount of data they
> return
>     * Similar to the approach used by GNUnet, SLIC, SWIFT
>     * Advantages:
>         * Incentive to store data
>         * Incentive to forward data requests and responses
>         * Attackers must do useful work to save up tokens for a flood
>     * Disadvantages:
>         * No incentive to forward inserts (but is this even possible,
> since there's no way to verify they've been forwarded?)

Well, there is if the insert succeeded.
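The proportional allocation you describe (the GNUnet/SLIC/SWIFT-style incentive) might be sketched as follows; the helper and the bootstrap rule for neighbours with no history are my assumptions, not anything specified above:

```python
def allocate_tokens(total_rate, bytes_returned):
    """Split the token budget among neighbours in proportion to the
    useful data each has returned. `bytes_returned` maps neighbour id
    to bytes returned recently; a neighbour that returned nothing gets
    nothing. Hypothetical helper, illustrative only."""
    total = sum(bytes_returned.values())
    if total == 0:
        # No history yet: share equally so new neighbours can bootstrap
        # (an assumption on my part, not part of the proposal above).
        n = len(bytes_returned)
        return {peer: total_rate / n for peer in bytes_returned} if n else {}
    return {peer: total_rate * b / total
            for peer, b in bytes_returned.items()}
```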
> 
> 
> Any thoughts?
> Michael
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.