From: Bob Frankston <[EMAIL PROTECTED]>
Date: April 14, 2007 9:10:58 PM EDT

I was going to pass on commenting on this, but in reading through the
release I saw it as a learning opportunity. I also think David,
Vint and others may be reticent about defending something that
is actually working rather well despite the problems.



While I’m an advocate of reinventing the Internet the gist of the  
story is a failure to understand why the Internet has become what it  
is. It’s akin to the attempts to fix the US Constitution by getting  
rid of that First Amendment because we now know what speech is good  
and what is not.



What seems to be missing from these efforts is a protocol that’s in
the spirit of the end-to-end, opportunity-creating approach that
defines the Internet, but doing less in the network rather than more.
I often refer to the existing implementation as having training
wheels because of the dependency on a single backbone, which I call
“Internet Inc”.



Projects like GENI that attempt to make the Internet work better miss
this point. What we need are protocols which make any subset of
connected machines a first-class network. These systems can then
connect in any way without being dependent upon the particulars of
the path or a backbone.



Some of this is nascent in P2P and Skype. But there is a tendency to
build atop the current Internet, as with using the @ to extend the
address in email and SIP, rather than starting from the edge using
self-coined GUIDs which can be stable. Sort of like the MAC address
in XNS, but far more general and distributed.
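To make the idea concrete, here is a minimal sketch of a self-coined edge identifier. The names and the rendezvous table are illustrative assumptions, not any real protocol: the point is only that the endpoint mints its own stable identity, and the path-dependent address becomes a mutable attribute rather than the name itself.

```python
import uuid

# A self-coined GUID: the endpoint mints its own stable 128-bit
# identifier with no registry and no network dependency (like a MAC
# address in XNS, but generated at the edge).
endpoint_id = uuid.uuid4()

# A hypothetical rendezvous table mapping stable identifiers to
# whatever transport address currently reaches the endpoint. The
# identifier never changes; only the path-dependent address does.
locations = {}
locations[endpoint_id] = ("203.0.113.7", 5060)   # today's attachment point
locations[endpoint_id] = ("198.51.100.9", 5060)  # after moving: same identity

assert endpoint_id in locations
```

Because the identity is coined at the edge, mobility is just an update to the location mapping, with nothing in the network needing to know.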



It would be a “clean-slate” approach in that it is valid in its own
right but still takes advantage of the current Internet as just
another transport, in the same way that today’s Internet used the
then-existing telecom infrastructure. Today’s Internet still has path
dependencies, and the current business models depend on controlling
those paths so they can be operated as profit centers. By removing
this vestige of path dependence we force the issue and find we’ll
need to fund the transport as physical infrastructure rather than as
billable services.



This approach follows from viewing the End-to-End argument as a
constraint and solving for connectivity given that constraint. The
current Internet made some engineering compromises in the face of the
constraints of the day; we should now move on rather than moving
backwards.



The challenge is that many people see the Internet in terms of its
accidental properties and still don’t understand how it can work as
well as it does, let alone how it can work better with less
governance and control.



Skimming the story …

Researchers Explore Scrapping Internet


The Internet "works well in many situations but was designed for  
completely different assumptions," said Dipankar Raychaudhuri, a  
Rutgers University professor overseeing three clean-slate projects.

"It's sort of a miracle that it continues to work well today."



! If this is what is driving the funding then we should be worried.
It’s the end-to-end principle that still stands. The problem is with
the engineering compromises that left us with a path-dependent
Internet. Again, I’m not saying they were wrong so much as that the
approach was useful scaffolding. Yes, it is a miracle that it works
today, but that’s a testament to the success of end-to-end despite
the compromises in the initial implementation.



And it could take billions of dollars to replace all the software and  
hardware deep in the legacy systems.



! Y2K all over again? Has no one learned the lessons of how to
maintain compatibility? If you aren’t path dependent then
compatibility is greatly simplified.



"The network is now mission critical for too many people, when in the  
(early days) it was just experimental," Zittrain said.



The Internet's early architects built the system on the principle of
trust. Researchers largely knew one another, so they kept the shared
network open and flexible, qualities that proved key to its rapid
growth.



But spammers and hackers arrived as the network expanded and could  
roam freely because the Internet doesn't have built-in mechanisms for  
knowing with certainty who sent what.



! This sounds eerily like saying that things are now too important
for the First Amendment, which was built on trust. Those who worked
on Multics had a very strong sense of distrust, and the Web happened
because it didn’t require trust. The dynamic works because we can
trust but verify, thanks to digital protocols. We don’t need to
predefine good behavior.
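“Trust but verify” at the edges can be sketched in a few lines. This is an illustrative example, not any deployed protocol; the shared key stands in for what a real system would do with public-key signatures and key exchange. The point is that the network carries opaque bytes and is never asked to vouch for anyone: the endpoints verify for themselves.

```python
import hashlib
import hmac

# Illustrative shared secret; a real protocol would negotiate keys
# end-to-end rather than hard-code one.
key = b"shared-secret"
message = b"hello from the edge"

# Sender attaches a MAC computed end-to-end over the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, tag):
    """Receiver checks integrity without trusting any intermediary."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)          # untampered: accepted
assert not verify(key, b"tampered", tag)  # altered in transit: rejected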



The network's designers also assumed that computers are in fixed  
locations and always connected. That's no longer the case with the  
proliferation of laptops, personal digital assistants and other  
mobile devices, all hopping from one wireless access point to  
another, losing their signals here and there.



Engineers tacked on improvements to support mobility and improved  
security, but researchers say all that adds complexity, reduces  
performance and, in the case of security, amounts at most to bandages  
in a high-stakes game of cat and mouse.



Workarounds for mobile devices "can work quite well if a small  
fraction of the traffic is of that type," but could overwhelm  
computer processors and create security holes when 90 percent or more  
of the traffic is mobile, said Nick McKeown, co-director of  
Stanford's clean-slate program.



! Sure, the naïve assumption of immobility allowed the IP address to
commingle naming with path, but that’s not a defining assumption of
end-to-end. It was an implementation compromise. If we compose from
the edge, mobility becomes the norm. The problem is indeed in trying
to patch around this, but the description shows why the designers
understood trust very well and recognized that trust has to be
end-to-end and not a property of the network. Too bad the temptation
of big funding makes all problems seem to be network problems.



The Internet will continue to face new challenges as applications
require guaranteed transmissions, not the "best effort" approach
that works better for e-mail and other tasks with less time sensitivity.



! The alternative to best efforts is a very high priced special
network. But we have one: it’s the phone network, and it’s too
expensive for anyone to use, including the phone companies. It’s only
best efforts that allows us to use the available capacity and get
voice and video performance well above that afforded by the PSTN. Yet
just as with WAP, people assume that we don’t already have a solution
that works very well and instead create a crisis that demands their
favorite hack. John Waclawsky has a nice list of citations of failed
QoS (non-best-effort) experiments.



Think of a doctor using teleconferencing to perform a surgery  
remotely, or a customer of an Internet-based phone service needing to  
make an emergency call. In such cases, even small delays in relaying  
data can be deadly.



! We have a term for time-critical remote surgery. It’s called
homicide. Oh, if a packet gets delayed that’s bad, but if the phone
wire breaks, well, that doesn’t count.



And one day, sensors of all sorts will likely be Internet capable.



! One day? Why aren’t they already? I’ll admit that the current
Internet isn’t as device friendly because of the conflicting demands
on the IP address, but that’s why we need simple edge identifiers and
protocols. We can still use the existing Internet as a transport and
do devices now.



Even if the original designers had the benefit of hindsight, they
might not have been able to incorporate these features from the
get-go. Computers, for instance, were much slower then, possibly too
weak for the computations needed for robust authentication.



! They don’t need hindsight; they had foresight. And they knew you
don’t do authentication in the network itself because that’s
meaningless. This security stuff is not at all new.



Kleinrock, the Internet pioneer at UCLA, questioned the need for a
transition at all, but said such efforts are useful for their
out-of-the-box thinking. "A thing called GENI will almost surely not
become the Internet, but pieces of it might fold into the Internet as
it advances," he said.



! He’s right; we just need to learn our lessons and build on what we
have, but not necessarily in a way that makes us more dependent upon
particulars.



Any redesign may incorporate mechanisms, known as virtualization, for  
multiple networks to operate over the same pipes, making further  
transitions much easier. Also possible are new structures for data  
packets and a replacement of Cerf's TCP/IP communications protocols.



! Duh? Don’t we have VPNs now?


---------------------------------------------------------------
             WWWhatsup NYC
http://pinstand.com - http://punkcast.com
--------------------------------------------------------------- 

_______________________________________________
Discuss mailing list
[email protected]
http://lists.isoc-ny.org/mailman/listinfo/discuss
