> To some extent I agree with what you're saying, we should try to make
> use of fast links. However we're not just trying to solve the problem of
> getting data from A to B as quickly as possible - we're trying to get
> data from the source to the destination as fast as possible, where the
> link between A and B is only one part of the route. Here's the problem:
> what if B's route to the destination is nearly as long as A's (or
> longer)? Then moving the data quickly from A to B doesn't achieve much,
> or perhaps even makes things worse. So while it is important to make use
> of the available bandwidth, it's not a good idea to just send data down
> whichever link has spare capacity - we need to find some kind of
> tradeoff between short routes and fast routes. There are algorithms to
> solve this problem if you can see the whole network (maxflow), but we
> need a decentralised algorithm because each node only knows about its
> local neighbourhood.
>
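
Ahh, I understand now! To check I follow the short-route vs
fast-route tradeoff, here is how I picture a node choosing a next
hop from purely local information: score each neighbour by the time
to push the block over the link plus the neighbour's estimate of the
remaining route time. A toy sketch in Python, with made-up numbers
and presumably nothing like the real algorithm:

def pick_next_hop(block_size, neighbours):
    # neighbours: list of (name, bandwidth in bytes/s, estimated
    # seconds remaining to the destination). Both figures are
    # assumptions a real node would have to measure or learn from
    # its local neighbourhood.
    def total_time(n):
        name, bandwidth, est_remaining = n
        return block_size / bandwidth + est_remaining
    return min(neighbours, key=total_time)

# B has a fast link but a long onward route; C is slower but closer.
hop = pick_next_hop(
    block_size=32 * 1024,
    neighbours=[("B", 100_000, 40.0), ("C", 10_000, 2.0)],
)
print(hop[0])  # -> "C": the slow link wins because the route is shorter

So the fast link to B loses whenever B's onward route is long
enough, which I take to be exactly your point.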

And the parallel-distribution side makes sense too. Because we are
distributing in many directions at once, a single high-speed link
doesn't buy much on its own: if it gets part 5/24 of a file to the
destination faster than the other 23 parts, the file still can't be
re-assembled until the slowest part arrives. So you are always
limited by the slowest link in the chain.
In that case a high-speed link is of little use by itself.
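
A quick back-of-the-envelope check of that, with made-up speeds and
assuming each of the 24 parts takes its own path:

part_size = 256 * 1024  # bytes per part, an assumed figure
path_speeds = [100_000] + [10_000] * 23  # one fast path, 23 slow ones

arrival_times = [part_size / speed for speed in path_speeds]
print(f"fastest part : {min(arrival_times):5.1f}s")  # ~2.6s
print(f"file complete: {max(arrival_times):5.1f}s")  # ~26.2s

The fast path delivers its part ten times sooner, but the file is
only usable at the max, not the min.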

Thanks for clearing that up

Jarvil

> > I just can't get
> > past the fact that 10k/s x 7-8 nodes means a long time for large data
> > propagation.
>
> Bear in mind that data isn't streamed sequentially from point to point -
> each block is routed independently, so all the links will be used in
> parallel.
>
> Cheers,
> Michael
> _______________________________________________
> Devl mailing list
> Devl at freenetproject.org
> http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
>
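
P.S. Rough numbers for the parallel-blocks point above, all figures
assumed: pushing 1MB through 8 hops at 10k/s per link, first as a
naive relay that forwards the whole file hop by hop, then with
independent 32k blocks pipelining through the same hops:

file_size = 1_000_000  # bytes
hops = 8
link_speed = 10_000    # bytes/s on every link (assumed)

# Naive relay: each hop waits for the whole file before forwarding.
sequential = hops * file_size / link_speed

# Independent blocks: while hop 1 sends block n, hop 2 is already
# forwarding block n-1, so all the links work at once and the total
# is one file-transfer plus the pipeline fill for the first block.
block_size = 32_000
pipelined = file_size / link_speed + (hops - 1) * block_size / link_speed

print(f"store-and-forward: {sequential:.0f}s")  # 800s
print(f"pipelined blocks : {pipelined:.0f}s")   # ~122s

So with blocks routed independently, the 7-8 hops mostly overlap
instead of multiplying.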
