On Dec 3, 2007 7:03 AM, Bill Cox <[EMAIL PROTECTED]> wrote:

First, let me say that was a nice pros/cons list, and you certainly hit the
nail on the head regarding simplicity.  The goal was to make clients as
simple as possible and almost trivial to implement.

> This protocol suffers from the same upload == download bandwidth
> limitation of BitTorrent, potentially reducing the quality of video that
> could be supported.
>

The nice thing about DistribuStream is that the server can monitor
throughput and provide QoS.  Thus you can have a minimum upload throughput
threshold (perhaps computed on-the-fly) and have a set of "reflector" peers
pick up the slack when the peer-to-peer network alone is insufficient.
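To make that concrete, here's a rough sketch of the server-side check.  All
of the names and constants here (assign_sources, MIN_THROUGHPUT, the
reflector dicts) are illustrative assumptions, not the actual DistribuStream
code:

```python
# Hypothetical sketch of the server-side QoS fallback described above.
# MIN_THROUGHPUT and the data structures are invented for illustration.

MIN_THROUGHPUT = 128 * 1024  # bytes/sec; e.g. roughly the stream bitrate


def assign_sources(peer_throughputs, reflectors):
    """Return extra reflector sources when peer upload capacity falls short.

    peer_throughputs: measured upload rates (bytes/sec) of available peers
    reflectors: list of dicts like {"capacity": bytes_per_sec, ...}
    """
    available = sum(peer_throughputs)
    if available >= MIN_THROUGHPUT:
        return []  # the peer-to-peer mesh alone can sustain the stream

    # Otherwise, assign reflectors until the shortfall is covered.
    deficit = MIN_THROUGHPUT - available
    assigned = []
    for reflector in reflectors:
        if deficit <= 0:
            break
        assigned.append(reflector)
        deficit -= reflector["capacity"]
    return assigned
```

Because the server sees every peer's measured throughput, this decision can
be recomputed on the fly as conditions change.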

The requirements for a peer to participate in a file transfer are completely
configurable on the server side.  Right now, failing to complete requested
uploads diminishes a peer's trust on the network and deprioritizes it for
receiving data.
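As a sketch of that rule (the scoring constants and class names here are my
own assumptions, not the server's actual implementation):

```python
# Illustrative trust scoring: failed uploads sharply diminish a peer's
# trust, and peers are ordered for receiving data by that score.

class PeerTrust:
    def __init__(self):
        self.score = 1.0  # new peers start fully trusted

    def record_upload(self, completed):
        if completed:
            # Completed uploads slowly restore trust, capped at 1.0.
            self.score = min(1.0, self.score + 0.05)
        else:
            # A failed upload halves the peer's trust.
            self.score *= 0.5


def prioritize(peers):
    """Order (peer_id, PeerTrust) pairs, most trusted first."""
    return sorted(peers, key=lambda p: p[1].score, reverse=True)
```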

> I'd much rather implement a client for this protocol than BitTorrent.
> There's no bencoded dictionaries, info blobs, and such, and I wouldn't
> have to reverse-engineer the damned protocol with ethereal.  That said,
> the limitations seem severe.  My initial version of the NetFS protocol
> looked a lot like this, with a tree of publishers and their mirrors
> providing file transfer smarts.  You could similarly get around the
> scaling problem with a way for peers to become mirrors.  It works for
> Gnutella (supernodes).
>

This could be done, but the operation of the protocol in this mode would
need to become less centralized.

Because DistribuStream effectively creates a server-controlled botnet,
handing off control to a peer would let it direct the peers it controls to
perform DDoS attacks, for example by instructing them to repeatedly connect
to a victim's HTTP server.


> I like how your application streams to stdout, so you can pipe into
> players.  I took a different path, integrating a FUSE based virtual file
> system.  This lets me open a player and open a file just like I would if
> it were already resident on my machine.  The downside with my approach
> is increased complexity and difficulty porting.  It also made me want to
> support incremental file system updates, which led to having some
> version control messages in the protocol.
>

The downside of streaming to stdout at present is that the client uses a
tempfile as a backbuffer.

The protocol has an UNPROVIDE command which can be used to tell the server
that a portion of the file has been discarded.  Thus it's possible to use a
ring buffer to store a moving window of file data.  Whenever old chunks in
the ring are overwritten by new chunks, the client can simply send UNPROVIDE
messages for those chunks to inform the server it no longer has them.
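A minimal sketch of that ring buffer, where evicting an old chunk triggers
an UNPROVIDE for it.  The message format here is invented for illustration
and isn't the actual PDTP wire format:

```python
# Fixed-capacity chunk ring: storing a new chunk in an occupied slot
# evicts the old chunk and notifies the server via send_message.

class ChunkRing:
    def __init__(self, capacity, send_message):
        self.capacity = capacity
        self.send_message = send_message
        self.slots = {}  # slot index -> chunk number currently held there
        self.data = {}   # chunk number -> chunk bytes

    def store(self, chunk_num, chunk_data):
        slot = chunk_num % self.capacity
        evicted = self.slots.get(slot)
        if evicted is not None:
            # Discard the old chunk and tell the server we no longer
            # provide it.
            del self.data[evicted]
            self.send_message({"type": "unprovide", "chunk": evicted})
        self.slots[slot] = chunk_num
        self.data[chunk_num] = chunk_data
```

With a capacity of N chunks, the client always provides the most recent N
chunks of the stream and nothing more, which keeps the server's picture of
what each peer holds accurate.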


> I like how your protocol minimizes HAVE and file bitfield traffic, which
> always bothered me in BitTorrent.  I'm thinking of modifying my NetFS
> protocol to make these client/server specific messages, as in your
> protocol, but it will need some work.
>

It will be interesting to see how the additional client/server traffic
impacts scalability.

I'd really like to do a large scale test on something like PlanetLab, but so
far we haven't managed to get access.

-- 
Tony Arcieri
ClickCaster, Inc.
[EMAIL PROTECTED]
_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers
