Bob,

Let's come back to the real world.  Here are the facts.

P2P (even at 10% of capacity) induces massive jitter, on the order of 50
milliseconds to 1000+ milliseconds, on the broadband segment.  Based on my
tests, higher P2P utilization causes more frequent spikes in packet delay,
while lower P2P utilization causes less frequent spikes of often the same
amplitude, which is still very problematic.  This makes gaming, VoIP, IPTV,
video conferencing, and any other isochronous real-time application
unbearable.

 

We can use jitter-adaptation techniques in VoIP and video conferencing to
minimize packet discards, but they are severely limited in how much jitter
they can mitigate, and there are tradeoffs even when the adaptation works.
If you deepen the buffer to 200 milliseconds just to ride out the occasional
spike in packet delay, you raise the base latency by 200 milliseconds, which
gets tacked on to the 20 ms packetization delay and the network latency,
which can be as high as 200 ms on intercontinental calls.  More than 400 ms
of lag between the time you say something and the time you hear something
back is not a desirable way to make a phone call.  Now what happens when the
jitter goes up to 1000+ ms?  Are you going to jack up the base latency by
1000 milliseconds or just discard those packets?  I suppose you could, but
that isn't what people expect from a phone call, their television service,
their video call, or their online game.  I don't want to just "absorb" the
jitter.  I can, but I won't.  We're not talking about playing online chess,
where the delay could be a minute long for all I care; we're talking about a
game of human reaction times, where players race to virtually kill their
opponents.
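
 

To put numbers on this, here's a back-of-the-envelope sketch in Python.  The
component values (20 ms packetization, 200 ms network path) follow the
figures above; the 150 ms check is the ITU-T G.114 guideline for one-way
delay.

# Hypothetical delay budget for a VoIP call: every millisecond of jitter
# buffer is added on top of packetization and network delay.
PACKETIZATION_MS = 20   # typical 20 ms voice frames
NETWORK_MS = 200        # assumed worst-case intercontinental path

def one_way_delay_ms(jitter_buffer_ms: float) -> float:
    """Total one-way delay once the buffer is deepened to ride out spikes."""
    return PACKETIZATION_MS + NETWORK_MS + jitter_buffer_ms

for buf in (50, 200, 1000):
    total = one_way_delay_ms(buf)
    verdict = "OK" if total <= 150 else "degraded"
    print(f"{buf:>4} ms buffer -> {total:>4.0f} ms one-way ({verdict})")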

 

Now, the solution is simply to fix this in the network with multiple
transmit queues; the alternative is to insist on the end-to-end dumb-pipe
dogma (not to be confused with the end-to-end arguments) and give people a
crappy experience.  As someone who uses P2P, VoIP, video communications, and
online gaming, I prefer an intelligent network that makes the network more
efficient.  Without it, I and everyone else I know simply shut P2P off
during the day, which ultimately harms the health of P2P because we're no
longer seeding.  I'd rather not make idiotic compromises by insisting on a
dumb network just to satisfy someone's delusional concept of what the
Internet is supposed to be.
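
 

And "multiple transmit queues" is not exotic.  Here's a minimal
strict-priority sketch in Python - my own illustration, not any particular
vendor's implementation - where real-time packets always go on the wire
ahead of bulk P2P:

from collections import deque
from typing import Optional

realtime: deque = deque()   # VoIP, gaming, video calls
bulk: deque = deque()       # P2P transfers

def enqueue(packet: bytes, is_realtime: bool) -> None:
    (realtime if is_realtime else bulk).append(packet)

def dequeue() -> Optional[bytes]:
    """Pick the next packet to transmit: real-time first, then bulk."""
    if realtime:
        return realtime.popleft()
    if bulk:
        return bulk.popleft()
    return None

enqueue(b"p2p-chunk", False)
enqueue(b"voip-frame", True)
assert dequeue() == b"voip-frame"   # real-time traffic always goes first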

 

George Ou

 

From: [email protected]
[mailto:[email protected]] On Behalf Of
Bob Frankston
Sent: Sunday, October 04, 2009 8:42 AM
To: [email protected]
Cc: 'ip'
Subject: [ NNSquad ] The myth of isochronous and the risk of baking-in the
past

 

We keep getting told that P2P traffic interferes with isochronous IPTV.

 

Why this concern with isochronous (and its cousin QoS)? It misses the point
of the Internet, which is about learning to take advantage of opportunity.
Isochronous was an issue in the early days of analog signaling, when we had
systems that barely worked and there wasn't even the concept of buffering.

 

Today if you switch between an SD and an HD stream (AKA channel) on a cable
system you'll notice many seconds of difference between the two - we don't
really care. We also have the infamous "seven-second" delay in the US, which
exists because broadcasters are scared of dirty words.

 

So why not just have a buffer to absorb any jitter? In fact we now have
protocols like SVC <http://en.wikipedia.org/wiki/SVC>  that are adaptive and
will adjust the number of bits sent depending upon the capacity available,
so that you can fill the buffer and/or show more content or apply whatever
clever approaches you are thinking of.
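
 

Here's a toy sketch in Python of what such adaptation can look like - the
layer rates and the headroom factor are invented for illustration and are
not taken from the SVC specification:

# Pick the highest layer whose bitrate fits the measured capacity,
# leaving headroom so the playout buffer keeps filling.
LAYER_KBPS = [500, 1500, 4000, 8000]   # hypothetical SVC layers
HEADROOM = 0.8                         # use only 80% of measured capacity

def pick_layer(measured_kbps: float) -> int:
    budget = measured_kbps * HEADROOM
    best = 0
    for i, rate in enumerate(LAYER_KBPS):
        if rate <= budget:
            best = i
    return best

print(pick_layer(2500))   # -> 1: the 1500 kbps layer fits the 2000 kbps budget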

 

I notice that if I turn away from a stream on my Verizon FiOS connection and
go back to it after a short time, it will catch up from where I left off!
Clever - I presume it uses a simple algorithm to speed up the stream without
any perceptual difference. Such techniques are not uncommon - you don't
notice them.
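
 

One plausible mechanism - and this is pure conjecture about what FiOS
actually does - is a small constant speedup. The arithmetic is simple:

# If playback runs at rate r > 1 (e.g., 1.05x, roughly the threshold below
# which most viewers don't notice), each second of viewing recovers (r - 1)
# seconds of lag. The numbers are illustrative, not how FiOS actually works.

def catchup_seconds(lag_s: float, rate: float = 1.05) -> float:
    """Viewing time needed to erase lag_s seconds of lag at speedup rate."""
    return lag_s / (rate - 1.0)

print(catchup_seconds(30))   # 30 s of lag -> 600 s (10 min) at 1.05x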

 

You'd think isochronous delivery would be vital in sports, but the US did
fine with a 12-hour buffer when the Olympics were held in Sydney. We don't
depend on millisecond timing for watching sports - you don't notice the
existing compression delays.

 

I argue that isochronous operation (and QoS) killed IEEE-1394 (AKA FireWire)
by restricting its reach and over-defining the solution. It didn't help that
it was a silo with application protocols included. IEEE-1394 may be a good
example of the risk in the attitude that we know what the applications are
and should bake them into the network.

 

The focus on isochronous IPTV is problematic. It posits that we must design
networks within the limitations of television circa 1940. It also presumes
that the purpose of the network is television. As I explain at
http://rmf.vc/?n=IAC, that attitude is simply an artifact of the discovery
that if you repurpose a video distribution network, you find it's good for
video distribution.

 

Instead we need to recognize that video distribution can be very tolerant of
network behavior. With a little buffering we can stream in real time, but as
network capacity increases we can send the data faster than real time and
have an arbitrary amount of buffering available. There are many ways to make
video available depending on what tradeoffs you choose to make. As drive
capacity reaches terabytes, buffering becomes the norm. In fact, FiOS seems
to buffer content "just in case" on its DVRs.
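
 

The arithmetic of faster-than-real-time delivery is simple (the rates below
are illustrative):

# If the link delivers D Mbps and the video plays at B Mbps (D > B),
# the buffer gains (D/B - 1) seconds of content per second of playback.

def buffer_gain_per_second(delivery_mbps: float, bitrate_mbps: float) -> float:
    return delivery_mbps / bitrate_mbps - 1.0

print(buffer_gain_per_second(25.0, 5.0))   # 4.0 s of video buffered per second watched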

 

The best way to make this process fail is to fixate on isochronous delivery
and bake the past into the network architecture. Alas, just as we discovered
that if you repurpose a broadcast network you find that video distribution
is its purpose, if you repurpose companies whose business is selling video
distribution you get the misguided notion that the purpose of a network
provider is to provide the same old services.

 

Let's not forget that video is just another app - it works better with more
speed, but if we restrict ourselves to video we won't get other vital
services.

 

Time to move on from network services to creating opportunity.

 

 
