Regarding BitTorrent streaming, we did something similar with FoxTorrent:
when there are lots of seeders it works great, but otherwise it's terrible.
This seems more an artifact of BitTorrent's "positive amnesiac" design
(intentionally forgetting downloaders) than any sort of implementation
issue.  It's basically a direct tradeoff between upload capacity and privacy
-- if the ratio of upload capacity to download demand drops too low,
streaming performance falls apart (technically the ratio need only stay
above 1.0, but the more headroom the better).
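
To put a number on that, here's a rough back-of-the-envelope check in
Python -- all the figures are invented purely for illustration:

    # Illustrative capacity-ratio check -- every number here is made up.
    viewers        = 100
    stream_bitrate = 1.0    # MB/s each viewer needs
    peer_upload    = 0.9    # MB/s each viewer uploads back to the swarm
    seeder_upload  = 20.0   # MB/s of dedicated seeder capacity

    supply = viewers * peer_upload + seeder_upload   # 110 MB/s
    demand = viewers * stream_bitrate                # 100 MB/s
    print(supply / demand)   # 1.1 -- streaming only holds up above 1.0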

Regarding centralized HAVE vectors, that's an interesting idea -- I hadn't
considered HAVE broadcast traffic to be that substantial, but when you do
the math it adds up:

Assuming a simple bitmask where each bit represents a 256KB region, one
byte of HAVE vector covers 2MB of data, and 1KB covers 2GB.  So if you're
downloading a 2GB file at 1MB/s, that's 4 blocks per second, and hence 4
HAVE broadcasts per second.  If each broadcast carries the full 1KB vector,
then with 100 peers that's 400KBps upstream.
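
Here's that arithmetic as a quick Python sanity check -- it assumes the
worst case where every HAVE broadcast resends the entire bitmask:

    # Sanity check of the figures above.  Assumes each broadcast resends
    # the whole bitmask rather than a single piece index.
    piece_kb  = 256                  # one bit covers a 256KB region
    file_kb   = 2 * 1024 * 1024      # 2GB file
    rate_kb_s = 1024                 # downloading at 1MB/s
    peers     = 100

    bitmask_bytes = file_kb // piece_kb // 8    # 1024 bytes = 1KB vector
    broadcasts_s  = rate_kb_s / piece_kb        # 4 HAVE broadcasts per second
    upstream_kb_s = broadcasts_s * (bitmask_bytes / 1024) * peers
    print(bitmask_bytes, broadcasts_s, upstream_kb_s)   # 1024 4.0 400.0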

But in practice, I'm not sure the numbers add up that severely.  Generally
the files are smaller, the download rates are slower, the peer counts are
lower, the HAVE vectors compress well, their broadcast rate can easily be
throttled, complete bitmasks can be replaced with cumulative file ranges,
etc.
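
For instance, the cumulative-range idea might look something like this --
a minimal sketch, using a hypothetical 8192-piece file:

    # Sketch: advertise a contiguous prefix plus a few stragglers instead
    # of re-sending the whole bitmask.  The piece layout is hypothetical.
    have = [True] * 5000 + [False] * 3192   # 8192 pieces, mostly sequential
    have[6000] = True                       # one out-of-order piece

    prefix = 0
    while prefix < len(have) and have[prefix]:
        prefix += 1
    extras = [i for i, got in enumerate(have) if got and i >= prefix]

    print(prefix, extras)   # 5000 [6000] -- a few integers instead of 1KB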

Indeed, the very fact that HAVE vector traffic can be substantial is a good
reason to keep it decentralized: for the same scenario as above, 100 clients
downloading at 1MBps means the server needs to handle about 400KBps inbound
(100 clients each sending up 4KBps of HAVE traffic), plus almost 40MBps
outbound (rebroadcasting each of the 100 separate 4KBps streams to the other
99 clients).
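
A quick check of those relay figures, under the same assumptions as before:

    # Central relay of the same HAVE traffic (same assumptions as above).
    peers     = 100
    have_kb_s = 4     # each client generates ~4KBps of HAVE traffic

    inbound_kb_s  = peers * have_kb_s                       # 400 KBps in
    outbound_mb_s = peers * (peers - 1) * have_kb_s / 1024  # ~38.7 MBps out
    print(inbound_kb_s, outbound_mb_s)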

Furthermore, all the strategies you'd use to reduce the pain on the server
apply equally to reducing the pain on the clients.

So as far as I can tell, as much of a fan as I am of centralized designs,
anything that keeps realtime bandwidth off the server seems like a win,
HAVE vectors included.

-david

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:p2p-hackers-
> [EMAIL PROTECTED] On Behalf Of Bill Cox
> Sent: Monday, December 03, 2007 6:04 AM
> To: theory and practice of decentralized computer networks
> Subject: Re: [p2p-hackers] DistribuStream
> 
> Hi, Tony.
> 
> Just read the DistribuStream specification.  Very nice, IMO.  Some
> specific feedback:
> 
> Pros:
> - Very simple - hard to overstate the importance of this
> - Server optimized transfers could enable faster, more reliable
> streaming, and faster downloading in general
> - Peer communication is super-simple: just HTTP GET/PUT requests.
> - No HAVE messages need to be sent to connected peers, dramatically
> reducing peer communication overhead, improving scalability.
> - Peers do not need to compute anything, or remember who has what,
> dramatically reducing peer complexity and run-time overhead.
> - No INFO blob, bencoded dictionaries, or other crud to complicate file
> transfers.
> - Since the server directs all transfers, it knows how many downloads
> there are, and who did the downloading.  This is very attractive to
> companies.
> 
> Cons:
> - Server communication is heavy - a decent sized swarm could choke a
> server.  Scalability is potentially compromised.
> - This protocol suffers from the same upload == download bandwidth
> limitation of BitTorrent, potentially reducing the quality of video that
> could be supported.
> 
> I'd much rather implement a client for this protocol than BitTorrent.
> There's no bencoded dictionaries, info blobs, and such, and I wouldn't
> have to reverse-engineer the damned protocol with ethereal.  That said,
> the limitations seem severe.  My initial version of the NetFS protocol
> looked a lot like this, with a tree of publishers and their mirrors
> providing file transfer smarts.  You could similarly get around the
> scaling problem with a way for peers to become mirrors.  It works for
> Gnutella (supernodes).
> 
> I like how your application streams to stdout, so you can pipe into
> players.  I took a different path, integrating a FUSE based virtual file
> system.  This lets me open a player and open a file just like I would if
> it were already resident on my machine.  The downside with my approach
> is increased complexity and difficulty porting.  It also made me want to
> support incremental file system updates, which led to having some
> version control messages in the protocol.
> 
> I like how your protocol minimizes HAVE and file bitfield traffic, which
> always bothered me in BitTorrent.  I'm thinking of modifying my NetFS
> protocol to make these client/server specific messages, as in your
> protocol, but it will need some work.
> 
> Anyway, DistribuStream looks like fun stuff.
> 
> Regards,
> Bill
> 
> On Wed, 2007-10-31 at 15:08 -0600, Tony Arcieri wrote:
> > DistribuStream is an open source (GPL) implementation of the Peer
> > Distributed Transfer Protocol, an open peer-to-peer communications
> > protocol which facilitates streaming progressive downloads.  You can
> > read about it here:
> >
> > http://distribustream.org/
> >
> > It was developed as a collaboration between ClickCaster, Inc. and the
> > University of Colorado Computer Science Program.
> >
> > PDTP has been registered with the IANA and received a port assignment
> > of 6086.
> >
> > The protocol is philosophically different from other P2P protocols which
> > rely on emergent, swarm-like behavior for traffic scheduling.  All
> > client/server and peer-to-peer communications can be modeled as simple
> > state machines.  The onus of traffic scheduling is placed entirely on
> > the server.
> >
> > Client/server communication is accomplished with a lightweight JSON
> > asynchronous messaging format (presently running over TCP).
> > Peer-to-peer communication is accomplished with HTTP/1.1.  The entire
> > protocol behaves as an ad hoc HTTP caching proxy network.
> >
> > The main feature is its facilitation of progressive streaming
> > downloads, making it comparable to proprietary protocols like Joost.
> >
> > The present implementation is early beta quality, but I'd love to
> > begin getting feedback, particularly on the feasibility of the
> > approach.
> >
> > --
> > Tony Arcieri
> > ClickCaster, Inc.
> > [EMAIL PROTECTED]

_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers
