Errr...
In your opinion, what could be improved in the HTTP protocol to accommodate
the (presumably congestion-related) characteristics of TCP?
The problem with HTTP is that it starts a new TCP connection for every
file it downloads. That leads to a very significant slowdown when working
with many small files, because each transfer pays for a fresh TCP
handshake and starts slow start over from scratch.
This was changed in HTTP/1.1, which can reuse the same TCP connection for
many HTTP requests. The change has its price, though: on the server side
it is not so simple to maintain thousands of simultaneous TCP connections,
even if most of them are idle. The operating system starts to work against
you at that point...
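To make that concrete, here's a minimal Python sketch (example.com is just
a placeholder host) of the difference between the two styles:

    import http.client

    # One connection per request, HTTP/1.0 style: every request pays
    # for a fresh TCP handshake and restarts congestion control.
    for path in ("/a.css", "/b.js", "/c.png"):
        conn = http.client.HTTPConnection("example.com")
        conn.request("GET", path)
        conn.getresponse().read()
        conn.close()

    # HTTP/1.1 persistent connection: one handshake, many requests.
    conn = http.client.HTTPConnection("example.com")
    for path in ("/a.css", "/b.js", "/c.png"):
        conn.request("GET", path)
        conn.getresponse().read()  # must drain the body before reusing
    conn.close()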
Just to agree more strongly here... Watching TCP flows is instructive, and
one of the things you will find out about HTTP/1.1 when you watch TCP flows
is that
- some servers close everything whether clients support persistent
connections or not (some concern about having too many sockets open that
doesn't make sense to me, but they aren't my servers),
- many servers close idle connections fairly quickly, even if they are using
persistent connections, so you may end up opening a new connection because
the old one idled out while you were reading (see the sketch after this
list),
- some server FARMS spread out resources for load balancing, which is great
except your connection is to a server, not a server farm, so you open a lot
more TCP connections than you'd expect, and
- at least the open-source TCPs have been closing congestion-avoidance
bypass holes wherever they find them, so you're more likely to trip over a
problem slowing you down than a problem speeding you up :-)
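To make the idle-timeout point concrete, here's a rough Python sketch
(again, example.com is a placeholder) of the retry-once pattern clients
end up needing when the server silently drops an idle persistent
connection:

    import http.client

    conn = http.client.HTTPConnection("example.com")

    def get(path):
        """GET over the shared connection, reconnecting once if the
        server closed the idle connection underneath us."""
        global conn
        try:
            conn.request("GET", path)
            return conn.getresponse().read()
        except (http.client.RemoteDisconnected, ConnectionError):
            # Server timed out the idle connection; open a fresh one
            # (and pay the handshake and slow-start cost again).
            conn = http.client.HTTPConnection("example.com")
            conn.request("GET", path)
            return conn.getresponse().read()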
Honestly, if I cared about HTTP performance, I'd be running it over SCTP,
for starters. The list of justifications for doing SCTP has a lot to do
with the problems TCP runs into once you get outside clean networks, and
the networks HTTP actually runs over are exactly that kind of environment.
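If you want to poke at that yourself, Python can open an SCTP socket
directly where the kernel supports it (Linux with the sctp module loaded);
a rough sketch, with a placeholder address:

    import socket

    # One-to-one style SCTP socket; needs kernel SCTP support.
    # SOCK_STREAM + IPPROTO_SCTP gives a TCP-like API, but with SCTP's
    # multi-streaming and multi-homing underneath.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                         socket.IPPROTO_SCTP)
    sock.connect(("192.0.2.1", 80))  # placeholder server address
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(sock.recv(4096))
    sock.close()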
Thanks,
Spencer