Re: The internet architecture

2008-12-05 Thread Stephane Bortzmeyer
On Thu, Dec 04, 2008 at 04:29:51PM -0500,
 Keith Moore [EMAIL PROTECTED] wrote 
 a message of 28 lines which said:

 It's a question of whether increasing reliance on DNS by trying to
 get apps and other things to use DNS names exclusively, makes those
 apps and other things less reliable.

For a good part, this is already done. You cannot use IP addresses for
many important applications (the Web because of virtual hosting and
email because most MTA setups prevent it).

And, as far as I know, nobody complained. The only person I know who
puts IP addresses on business cards is Louis
Pouzin [EMAIL PROTECTED]

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Stephane Bortzmeyer
On Thu, Dec 04, 2008 at 04:51:20PM -0500,
 Keith Moore [EMAIL PROTECTED] wrote 
 a message of 40 lines which said:

 Not a week goes by when I'm not asked to figure out why people
 can't get to a web server or why email isn't working.  In about
 70% of the web server cases and 30% of the email cases, the answer
 turns out to be DNS related.  IP failures, by contrast, are quite
 rare.

If it were true, I would wonder why people never use legal URLs like
http://[2001:1890:1112:1::20]/...

(And that's certainly not because they are harder to type or to
remember: the above URL, which works on my Firefox, goes to a Web site
which is mostly for technical people, who are able to use bookmarks,
local files for memory, etc.)
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Rémi Després wrote:

 IMO, service names and SRV records SHOULD be supported asap in all
 resolvers (in addition to host names and A/AAAA records that they
 support today).
 Any view on this?

a) SRV records only apply to applications that use them.  to use them
otherwise would break compatibility.

b) SRV records also increase reliance on DNS which (among other things)
is a barrier to deployment of new applications.  use of SRV would
therefore encourage overloading of existing protocols and service names
to run new applications (another version of the everything-over-http
syndrome).

c) use of SRV would encourage even more meddling with protocols by NATs

d) it's not immediately clear to me that it would be feasible for SRV to
be used by applications that need to do referrals.  they'd need some way
to generate new, unique service names and there would need to be a way
to generate and distribute new DNS dynamic update credentials to those
applications or even application instances.

and that's just the problems I can think of off the top of my head.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Stephane Bortzmeyer wrote:
 On Thu, Dec 04, 2008 at 04:29:51PM -0500,
  Keith Moore [EMAIL PROTECTED] wrote 
  a message of 28 lines which said:
 
 It's a question of whether increasing reliance on DNS by trying to
 get apps and other things to use DNS names exclusively, makes those
 apps and other things less reliable.
 
 For a good part, this is already done. You cannot use IP addresses for
 many important applications (the Web because of virtual hosting and
 email because most MTA setups prevent it).

you're generalizing about the entire Internet from two applications?

 And, as far as I know, nobody complained. The only person I know who
 puts IP addresses on business cards is Louis
 Pouzin [EMAIL PROTECTED]

address literals were quite useful for diagnostic purposes.  the fact
that most MTAs now prevent using them in outgoing mail is quite
unfortunate.   though they were even more useful when you could expect
to do things like

RCPT TO:[EMAIL PROTECTED]

as a way to query a specific SMTP server as to what it would do with a
specific domain name.
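
a rough sketch of that kind of probe in Python, using the standard
smtplib module; the 192.0.2.25 address and the example.org recipient
below are placeholders, not details from any real exchange:

    import smtplib

    # talk to one specific MTA, picked by address literal, and see how it
    # answers for a given recipient domain
    server = smtplib.SMTP("192.0.2.25", 25, timeout=10)
    server.helo()
    print(server.docmd("MAIL", "FROM:<>"))                      # null reverse-path
    print(server.docmd("RCPT", "TO:<postmaster@example.org>"))  # reply code shows accept/relay/reject
    server.quit()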

but that's really beside the point.  the real point is this:

please figure out how to make DNS more reliable, more in sync with the
world, and less of a single point of failure and control, before
insisting that we place more trust in it.

Keith

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Stephane Bortzmeyer wrote:
 On Thu, Dec 04, 2008 at 04:51:20PM -0500,
  Keith Moore [EMAIL PROTECTED] wrote 
  a message of 40 lines which said:
 
 Not a week goes by when I'm not asked to figure out why people
 can't get to a web server or why email isn't working.  In about
 70% of the web server cases and 30% of the email cases, the answer
 turns out to be DNS related.  IP failures, by contrast, are quite
 rare.
 
 If it were true, I would wonder why people never use legal URLs like
 http://[2001:1890:1112:1::20]/...

because IPv6 literals wouldn't work for the vast majority of users today?

I do see links to URLs with IPv4 address literals.   sometimes they're a
good choice.

but you're really missing the point, which is that DNS fails a lot.

note that DNS failures aren't all with the authoritative servers -
they're often with caches, resolver configuration, etc.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

  Just think how much easier the IPv4 to IPv6 transition would have
  been if nothing above the IP layer cared exactly what an IP
  address looks like or how big it is.

 It wouldn't have made much difference at all,

Wow. I find this statement simply astonishing.

IMO, one of the biggest challenges surrounding IPv6
adoption/deployment is that all applications are potentially impacted,
and each and every one of them needs to be explicitly enabled to
work with IPv6. That is a huge challenge, starting with the
observation that there are a bazillion deployed applications that will
NEVER be upgraded.

Boy, wouldn't it be nice if all we had to do was IPv6-enable the
underlying network and stack (along with key OS support routines and
middleware) and have existing apps work over IPv6, oblivious to IPv4
vs. IPv6 underneath.

And, if one wants to look back and see could we have done it
differently, go back to the BSD folk that came up with the socket
API. It was designed to support multiple network stacks precisely
because at that point in time, there were many, and TCP/IP was
certainly not pre-ordained. But that API makes addresses visible to
APIs. And it is widely used today.

Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).
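
A rough sketch of the kind of call being described, in Python; the name
ConnectTo and its two arguments are simply taken from the sentence above,
and wrapping the standard library's connect-by-name helper is only one
possible way such an API could be provided:

    import socket

    def ConnectTo(dns_name, service):
        # service may be a numeric port or a service name such as "http";
        # the caller never sees an IP address or an address family
        return socket.create_connection((dns_name, service))

    # e.g. ConnectTo("www.ietf.org", "http")  -- host and service are placeholders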

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

 So it's not a question of whether DNS is less reliable than IP (it is),
 or even whether the reliability of DNS + IP is less than that of IP
 alone (it is).  It's a question of whether increasing reliance on DNS by
 trying to get apps and other things to use DNS names exclusively, makes
 those apps and other things less reliable.

No. Your argument seems to be "because relying even more on DNS than
we do today makes things more brittle, BAD, BAD, BAD, we cannot go
there."

The more relevant engineering question is whether the benefits of such
an approach outweigh the downsides. Sure there are downsides. But
there are also real potential benefits. Some of them potentially game
changers in terms of addressing real deficiencies in what we have
today. It may well be that having applications be more brittle would
be an acceptable cost for getting a viable multihoming approach that
addresses the route scalability problem. (All depends on what "more
brittle" really means.) But the only way to answer such questions in a
productive manner is to look pretty closely at a complete
architecture/solution together with experience from real
implementation/usage.

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Henning Schulzrinne



Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).


This certainly seems to be the way that modern APIs are heading. If
I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other
scripting languages have a socket-like API that does not expose IP
addresses, but rather connects directly to DNS names. (In many cases,
they unify file and socket opening and specify the application
protocol, too, so that one can do fopen("http://www.ietf.org"), for
example.) Thus, we're well on our way towards the goal of making
(some) applications oblivious to addresses. I suspect that one reason
for the popularity of these languages is exactly that programmers
don't want to bother remembering when to use ntohs().
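
Two rough standard-library examples of what this looks like in Python --
the host name and URL are just the ones already mentioned above:

    import socket
    from urllib.request import urlopen

    # connect by name: the library resolves the name and picks an address
    sock = socket.create_connection(("www.ietf.org", 80))
    sock.close()

    # file-like opening of a URL, the fopen() analogue mentioned above
    with urlopen("http://www.ietf.org/") as page:
        print(page.status)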


Henning
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Leslie
Keith Moore [EMAIL PROTECTED] wrote:
 
 please figure out how to make DNS more reliable, more in sync with the
 world, and less of a single point of failure and control, before
 insisting that we place more trust in it.

   A while back, in the SIDR mail-list, a banking-level wish-list was
published:
] 
] - That when you establish a discussion with endpoint you are (to the   
]   best of current technology) certain it really is the endpoint.
] 
] - That you are talking (unmolested) to the endpoint you think you are  
]   for the entirety of the session.
] 
] - That what is retrieved by the client is audit-able at both the
]   server and the client.
] 
] - That retrievals are predictable, and perfectly repeatable.
] 
] - That the client _never_ permits a downgrade, or unsecured retrieval   
]   of information
] 
] - That Trust anchor management for both the client SSL and the RPKI
]   is considered in such a way that it minimises the fact there is no
]   such thing as trusted computing.

   How much of this is it reasonable to ask the DNS to do?

--
John Leslie [EMAIL PROTECTED]
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Dave CROCKER



Thomas Narten wrote:

And, if one wants to look back and see could we have done it
differently, go back to the BSD folk that came up with the socket
API. It was designed to support multiple network stacks precisely
because at that point in time, there were many, and TCP/IP was
certainly not pre-ordained. But that API makes addresses visible to
APIs. And it is widely used today.



Thomas,

If you are citing BSD merely as an example of a component that imposes knowledge 
of addresses on upper layers, then yes, it does make a good, concrete example.


If you are citing BSD because you think that they made a bad design decision, 
then you are faulting them for something that was common in the networking 
culture at the time.


People  -- as in end users, as in when they were typing into an application -- 
commonly used addresses in those days, and hostnames were merely a preferred 
convenience.  (Just to remind us all, this was before the DNS and the hostname 
table was often out of date.)


Worse, we shouldn't even forgive them/us by saying something like "we didn't 
understand the need for a name/address split back then", because it's pretty clear 
from the last 15 years of discussion and work that, as a community, we *still* 
don't.  (The Irvine ring was name-based -- 1/4 of the real estate on its network 
card was devoted to the name table -- but was a small LAN, so scaling issues 
didn't apply.)


d/

ps. As to your major point, that having apps de-coupled from addresses would 
make a huge difference, boy oh boy, we are certainly in agreement there...

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread michael.dillon

 IMO, one of the biggest challenges surrounding IPv6 
 adoption/deployment is that all applications are potentially 
 impacted, and each and every one of them needs to be 
 explicitly enabled to work with IPv6.

Or NAT-PT needs to be improved so that middleboxes can be inserted
into a network to provide instant v4-v6 compatibility.  

 That is a huge 
 challenge, starting with the observation that there are a 
 bazillion deployed applications that will NEVER be upgraded.

Yes, I agree that there is a nice market for such middleboxes.

 Boy, wouldn't it be nice if all we had to do was IPv6-enable 
 the underlying network and stack (along with key OS support 
 routines and
 middleware) and have existing apps work over IPv6, oblivious 
 to IPv4 vs. IPv6 underneath.

Middleboxes can come close to providing that.

 Wouldn't it have been nice if the de facto APIs in use today 
 were more along the lines of ConnectTo(DNS name, service/port).

I don't know if "nice" is the right word. It would be interesting
and I expect that there would be fewer challenges because we would
have had a greater focus on making DNS (or something similar) more
reliable. It's not too late to work on this and I think that it
is healthy for multiple technologies to compete on the network.
At this point it is not clear that IPv6 will last for more than
50 years or so. If we do work on standardizing a name-to-name
API today, then there is the possibility that this will eventually
prevail over the IPv6 address API.

Way back when there was an OS called Plan 9 which took the idea of 
a single namespace more seriously than other OSes had. On Plan 9
everything was a file including network devices which on UNIX are
accessed with sockets and addresses. This concept ended up coming
back to UNIX in the form of the portalfs (not to mention procfs).

I think it is well worthwhile to work on this network endpoint
naming API even if it does not provide any immediate benefits
to the IPv6 transition.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Melinda Shore
On 12/5/08 9:59 AM, Dave Crocker [EMAIL PROTECTED] wrote:
 If you are citing BSD because you think that they made a bad design decision,
 then you are faulting them for something that was common in the networking
 culture at the time.

Not to go too far afield, but I think there's consensus
among us old Unix folk that the mistake that CSRG made
wasn't in the use of addresses but in having sockets
instead of using file descriptors.  This was actually
fixed in SysVRSomethingOrOther with the introduction of
a network pseudo-filesystem (open("/net/192.168.1.1", ...)
with ioctls) but never got traction.

Melinda

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread michael.dillon
 It may 
 well be that having applications be more brittle would be an 
 acceptable cost for getting a viable multihoming approach 
 that addresses the route scalability problem. (All depends on 
 what "more brittle" really means.) But the only way to answer 
 such questions in a productive manner is to look pretty 
 closely at a complete architecture/solution together with 
 experience from real implementation/usage.

I agree.
For instance, the cited DNS problems often disrupt communication
when there is a problem-free IP path between points A and B, because
DNS relies on third parties outside the packet forwarding path. But third
parties can also be used to make things less brittle: for instance,
an application whose packet stream is being disrupted could call
on third parties to check whether there are alternative trouble-free paths
and then reroute the stream through a third-party proxy. If a strategy
like this were built into the lower-level network API, then an application
session could even survive massive network disruption, as long as
the disruption was cyclic.

I have in mind the way that Telebit modems used the PEP protocol 
to test and use the communication capability of each one of several
channels. As long as there was at least one channel available and the
periods of no-channel-availability were short enough, you could get
end-to-end data transfer. On a phone line which was unusable for fax
and in which the human voice was completely drowned out by static,
you could get end-to-end UUCP email transfer. A lot of work related
to this is being done by P2P folks these days, and I think there
is value in defining a better network API that incorporates some
of this work.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


sockets vs. fds

2008-12-05 Thread Dave CROCKER



Melinda Shore wrote:

Not to go too far afield, but I think there's consensus
among us old Unix folk that the mistake that CSRG made
wasn't in the use of addresses but in having sockets
instead of using file descriptors.  This was actually
fixed in SysVRSomethingOrOther with the introduction of
a network pseudo-filesystem (open("/net/192.168.1.1", ...)
with ioctls) but never got traction.



It's possible that this represents insight worth sharing broadly, so I'm copying 
the list.


It isn't immediately obvious to me why file descriptors would have had a major 
impact, so can you elaborate?


Thanks.

d/
--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day


Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).


That had been the original plan and there were APIs that did that. 
But for some reason, the lunacy of the protocol specific sockets 
interface was preferred.  I know people who have been complaining 
about it for 25 years or thereabouts.


Some knew even then that the purpose of an API was to hide those 
sorts of dependencies.  There seems to be a history here of always 
picking the bad design.

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: sockets vs. fds

2008-12-05 Thread Melinda Shore
On 12/5/08 10:18 AM, Dave Crocker [EMAIL PROTECTED] wrote:
 It's possible that this represents insight worth sharing broadly,

I doubt that very much, since it's really about API
design and ideological purity and I think has had only
a negligible impact on deployability, but
 
 It isn't immediately obvious to me why file descriptors would have had a major
 impact, so can you elaborate?

I don't think they have.  Unix (whatever that means
for the purpose of discussion) was designed around a few
abstractions, like pipes, filedescriptors, and processes,
and by the time IP was implemented we'd pretty much settled
on filedescriptors as endpoints for communications.  We
could do things with them like i/o redirection, etc., and
sockets are something else entirely.  That is to say,
in Unix we shouldn't care whether an input or output
stream is a terminal, a file, or a network data stream,
but because of sockets we do have to care.

Melinda

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day

It is so reassuring when "modern" is a third of a century old.

Sorry, but I am finding this newfound wisdom just a little frustrating.


At 9:40 -0500 2008/12/05, Henning Schulzrinne wrote:

Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).


This certainly seems to be the way that modern APIs are heading. 
If I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other 
scripting languages have a socket-like API that does not expose IP 
addresses, but rather connects directly to DNS names. (In many 
cases, they unify file and socket opening and specify the 
application protocol, too, so that one can do 
fopen("http://www.ietf.org"), for example.) Thus, we're well on our 
way towards the goal of making (some) applications oblivious to 
addresses. I suspect that one reason for the popularity of these 
languages is exactly that programmers don't want to bother 
remembering when to use ntohs().


Henning
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread John Day
When our group put the first Unix system on the Net in the summer of 
1975, this is how we did it.  The hosts were viewed as part of the 
file system.  It was a natural way to do it.


At 15:01 + 2008/12/05, [EMAIL PROTECTED] wrote:

  IMO, one of the biggest challenges surrounding IPv6

 adoption/deployment is that all applications are potentially
 impacted, and each and everyone one of them needs to be
 explicitely enabled to work with IPv6.


Or NAT-PT needs to improved so that middleboxes can be inserted
into a network to provide instant v4-v6 compatibility. 


 That is a huge
 challenge, starting with the observation that there are a
 bazillion deployed applications that will NEVER be upgraded.


Yes, I agree that there is a nice market for such middleboxes.


 Boy, wouldn't it be nice of all we had to do was IPv6-enable
 the underlying network and stack (along with key OS support
 routines and
 middleware) and have existing apps work over IPv6, oblivious
 to IPv4 vs. IPv6 underneath.


Middleboxes can come close to providing that.


 Wouldn't it have been nice if the de facto APIs in use today
 were more along the lines of ConnectTo(DNS name, service/port).


I don't know if nice is the right word. It would be interesting
and I expect that there would be less challenges because we would
have had a greater focus on making DNS (or something similar) more
reliable. It's not too late to work on this and I think that it
is healthy for multiple technologies to compete on the network.
At this point it is not clear that IPv6 will last for more than
50 years or so. If we do work on standardizing a name-to-name
API today, then there is the possibility that this will eventually
prevail over the IPv6 address API.

Way back when there was an OS called Plan 9 which took the idea of
a single namespace more seriously than other OSes had. On Plan 9
everything was a file including network devices which on UNIX are
accessed with sockets and addresses. This concept ended up coming
back to UNIX in the form of the portalfs (not to mention procfs).

I think it is well worthwhile to work on this network endpoint
naming API even if it does not provide any immediate benefits
to the IPv6 transition.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day
Speak for yourself David.  These problems have been well understood 
and discussed since 1972.  But you are correct, that there were still 
a large unwashed that didn't and I am still not sure why that was. 
This seems to be elementary system architecture.



At 6:59 -0800 2008/12/05, Dave CROCKER wrote:

Thomas Narten wrote:

And, if one wants to look back and see could we have done it
differently, go back to the BSD folk that came up with the socket
API. It was designed to support multiple network stacks precisely
because at that point in time, there were many, and TCP/IP was
certainly not pre-ordained. But that API makes addresses visible to
APIs. And it is widely used today.



Thomas,

If you are citing BSD merely as an example of a component that 
imposes knowledge of addresses on upper layers, then yes, it does 
make a good, concrete example.


If you are citing BSD because you think that they made a bad design 
decision, then you are faulting them for something that was common 
in the networking culture at the time.


People  -- as in end users, as in when they were typing into an 
application -- commonly used addresses in those days, and hostnames 
were merely a preferred convenience.  (Just to remind us all, this 
was before the DNS and the hostname table was often out of date.)


Worse, we shouldn't even forgive them/us by saying something like 
we didn't understand the need for name/address split, back then 
because it's pretty clear from the last 15 years of discussion and 
work that, as a community, we *still* don't.  (The Irvine ring was 
name-based -- 1/4 of the real estate on its network card was devoted 
to the name table -- but was a small LAN, so scaling issues didn't 
apply.)


d/

ps. As to your major point, that having apps de-coupled from 
addresses would make a huge difference, boy oh boy, we are certainly 
in agreement there...

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: sockets vs. fds

2008-12-05 Thread Noel Chiappa
 From: Melinda Shore [EMAIL PROTECTED]

 Unix ... was designed around a few abstractions, like pipes,
 filedescriptors, and processes, and by the time IP was implemented we'd
 pretty much settled on filedescriptors as endpoints for communications.
 We could do things with them like i/o redirection, etc. ... in Unix we
 shouldn't care whether an input or output stream is a terminal, a file,
 or a network data stream

Wow, that tickled some brain cells that have been dormant a very, very long
time! My memory of this goes all the way back to what I believe was the very
first mechanism added to V6 Unix to allow random IPC (i.e. between unrelated
processes), which was a pipe-like mechanism produced, if vague memories
serve, by Rand. This is all probably irrelevant now, but here are a few
memories...

One of the problems I recall we had with the Unix stream paradigm is that it
was not a very good semantic match for unreliable asynchronous communication,
where you had no guarantee that the data you were trying to do a 'read' on
would ever arrive (e.g. things like UDP), nor for things which were
effectively record-based (again, UDP).

Sure, TCP could reasonably well be coerced into a stream paradigm, but just
as RPC has failure modes that a local call doesn't, and therefore needs
extended semantics beyond that of a vanilla local call, so too it is with a
Unix stream (which, in V6, was the only I/O mechanism).

Noel
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:
 Keith Moore [EMAIL PROTECTED] writes:
 
 Just think how much easier the IPv4 to IPv6 transition would have
 been if nothing above the IP layer cared exactly what an IP
 address looks like or how big it is.
 
 It wouldn't have made much difference at all,
 
 Wow. I find this statement simply astonishing.
 
 IMO, one of the biggest challenges surrounding IPv6
 adoption/deployment is that all applications are potentially impacted,
 and each and every one of them needs to be explicitly enabled to
 work with IPv6. That is a huge challenge, starting with the
 observation that there are a bazillion deployed applications that will
 NEVER be upgraded.

There were also a bazillion deployed applications that would never be
upgraded to deal with Y2K.  Somehow people managed.  But part of how
they managed was by replacing some applications rather than upgrading them.

I certainly won't argue that it's not a significant challenge to edit
each application, recompile it, retest it, update its documentation,
educate tech support, and release a new version.   But you'd have all of
those issues with moving to IPv6 even if we had already had a socket API
in place where the address was a variable-length, mostly opaque, object.

Consider also that the real barrier to adapting many applications to
IPv6 (and having them work well) isn't the size of the IPv6 address, or
adapting the program to use sockaddr_in6 and getaddrinfo() rather than
sockaddr_in and gethostbyname().  It's figuring out what it takes to get
the application to work sanely in a world consisting of a mixture of
IPv4 and IPv6, IPv4 private addresses and global addresses and maybe
linklocal addresses (useful on ad hoc networks), IPv6 ULAs and global
addresses and maybe linklocal addresses, the fact that 6to4 traffic is
sometimes blocked (so v6 connections time out), and that 6to4 relay
routers often cause IPv6 connections to work more poorly than native
IPv4 connections, NATs, and so forth.

(size *is* an issue for applications that do referrals, and those are
important cases, but the vast majority of apps don't do that.)
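
A rough sketch of what that ends up meaning in code: the loop below walks
everything getaddrinfo() returns for a name, IPv6 and IPv4 alike, with a
timeout so that a blackholed address family falls through to the next
address; the host and port are placeholders:

    import socket

    def connect_by_name(host, port, timeout=5):
        # try every address returned for the name, in order, until one answers
        last_err = None
        for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            try:
                s.connect(sockaddr)
                return s
            except OSError as err:
                last_err = err
                s.close()
        raise last_err or OSError("no usable address for " + host)

    # e.g. connect_by_name("www.ietf.org", 80)

And even that loop says nothing about scoped addresses, NAT traversal, or
preferring a working native path over a broken 6to4 one.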

And at least from where I sit, almost all of the applications I use
already support IPv6.  (I realize that's not true for everybody, but it
also tells me that it's feasible.)   From where I sit, the support
that's missing is in the ISPs, and the SOHO routers, and in various
things that block 6to4.  I understand from talking to others that
support is also lagging in firewalls and traffic monitors needed by
enterprise networks.

 Boy, wouldn't it be nice of all we had to do was IPv6-enable the
 underlying network and stack (along with key OS support routines and
 middleware) and have existing apps work over IPv6, oblivious to IPv4
 vs. IPv6 underneath.

Sure it would have been nice.  But for that to have happened would have
required a lot more than having the API treat addresses as opaque
objects of arbitrary size.  It would have required that IPv4 support
variable length addresses in all hosts and routers so that there would
have been no need for applications to try to deal with a mixture of IPv4
and IPv6 hosts that can't talk directly to one another.   It would have
required an absence of NATs so that apps wouldn't need to know how to
route around them.  It would have required that apps be able to be
unaware of the IPv6 address architecture, and for them to not need to do
intelligent address selection, which basically would have required
solving the routing scalability problem.

Basically if we had had all of that stuff in place in the early 1990s,
we would never have needed to do a forklift upgrade of IPv4 - the net
would have evolved approximately as gracefully as it did with CIDR.

 And, if one wants to look back and see could we have done it
 differently, go back to the BSD folk that came up with the socket
 API. It was designed to support multiple network stacks precisely
 because at that point in time, there were many, and TCP/IP was
 certainly not pre-ordained. But that API makes addresses visible to
 APIs. And it is widely used today.
 
 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).

No, because one of two things would have happened:

1. the defacto APIs would have long since been abandoned in favor of
sockets APIs that were more flexible at letting apps deal with various
kinds of network brain damage, or

2. if the defacto APIs were the only ones available on most platforms,
then we wouldn't have any of the applications that we have today, that
manage to get around NAT.  we'd be stuck with email and
everything-else-over-HTTP, with all servers constrained to be in the
core, and prime IP real estate being even more expensive than it is now.

--

Having said that, I'll grant that every barrier to IPv6 adoption is
significant.  The reason is that moving to IPv6 isn't just a matter of
flipping switches at any level.  It's one thing for 

Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:
 Keith Moore [EMAIL PROTECTED] writes:
 
 So it's not a question of whether DNS is less reliable than IP (it is),
 or even whether the reliability of DNS + IP is less than that of IP
 alone (it is).  It's a question of whether increasing reliance on DNS by
 trying to get apps and other things to use DNS names exclusively, makes
 those apps and other things less reliable.
 
 No. Your argument seems to be "because relying even more on DNS than
 we do today makes things more brittle, BAD, BAD, BAD, we cannot go
 there."

My argument is that if you really want that sort of approach to work you
need to concentrate on making DNS more reliable and better suited to
this kind of approach, and on getting people to think of DNS differently
than they do now - rather than just talking in terms of changing the
API, which is the easy part.

 The more relevant engineering question is whether the benefits of such
 an approach outweigh the downsides. Sure there are downsides. But
 there are also real potential benefits. 

Mumble.  Years ago, I worked out details of how to build a very scalable
distributed system (called SNIPE) using a DNS-like service (but a lot
more flexible in several ways) to name endpoints and associate the names
with metadata about those endpoints, including their locations.  So I
don't need to be convinced that there are potential benefits.  But that
exercise also gave me an appreciation for the difficulties involved.
And for that exercise I allowed myself the luxury of defining my own
naming service rather than constraining myself to use DNS.  It would
have been much more difficult, though perhaps not impossible, to make
that kind of system work with DNS.

 But the only way to answer such questions in a
 productive manner is to look pretty closely at a complete
 architecture/solution together with experience from real
 implementation/usage.

You certainly need a more complete architecture before you can evaluate
it at all.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: sockets vs. fds

2008-12-05 Thread michael.dillon
 It's possible that this represents insight worth sharing 
 broadly, so I'm copying the list.
 
 It isn't immediately obvious to me why file descriptors would 
 have had a major impact, so can you elaborate?

Down at the end of this page
http://ph7spot.com/articles/in_unix_everything_is_a_file
there is a list of pseudo-filesystems including portalfs from
FreeBSD, which allows network services to be accessed as a 
filesystem. For those interested in pursuing the idea, you should
have a look at FUSE http://fuse.sourceforge.net/ which allows
anyone to create resources in the filesystem namespace, for instance
by writing a Python script
http://apps.sourceforge.net/mediawiki/fuse/index.php?title=FusePython.
FUSE is also available on OS X http://code.google.com/p/macfuse/
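
As a very rough sketch of the flavor of thing FUSE makes possible (this
assumes the third-party fusepy bindings and a FUSE-capable OS, and is only
an illustration of network-information-as-files, not a reimplementation of
portalfs): each name looked up under the mount point appears as a
read-only file listing the addresses the resolver returns for it.

    import errno, socket, stat, sys
    from fuse import FUSE, FuseOSError, Operations   # third-party: fusepy

    class NameFS(Operations):
        def _body(self, name):
            try:
                infos = socket.getaddrinfo(name, None)
            except socket.gaierror:
                raise FuseOSError(errno.ENOENT)
            return ("\n".join(sorted({i[4][0] for i in infos})) + "\n").encode()

        def getattr(self, path, fh=None):
            if path == "/":
                return dict(st_mode=stat.S_IFDIR | 0o555, st_nlink=2)
            body = self._body(path.lstrip("/"))
            return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1, st_size=len(body))

        def read(self, path, size, offset, fh):
            return self._body(path.lstrip("/"))[offset:offset + size]

    if __name__ == "__main__":
        FUSE(NameFS(), sys.argv[1], foreground=True)
        # then, e.g.:  cat /mnt/names/www.ietf.org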

It should be possible to register a URN namespace to go along with this
API so that you could have
urn:fs:example.com:courselist/2009/autumn/languages/
to represent a service that is provided by the courselistFS on some host
that example.com knows about.

--Michael Dillon

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
John Day wrote:

 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).
 
 That had been the original plan and there were APIs that did that. But
 for some reason, the lunacy of the protocol specific sockets interface
 was preferred.  

About the time I wrote my second network app (circa 1986) I abstracted
all of the connection establishment stuff (socket, gethostbyname, bind,
connect) into a callable function so that I wouldn't have to muck with
sockaddrs any more.  And I started trying to use that function in
subsequent apps.  What I generally found was that I had to change that
function for every new app, because there were so many cases for which
merely connecting to port XX at the first IP address corresponding to
hostname YY that accepted a connection, was not sufficient for the
applications I was writing.  Now it's possible that I was writing
somewhat unusual applications  (e.g. things that constrained the source
port to be < 1024 and which therefore required the app to run as root
initially and then give up its privileges, or SMTP clients for which MX
processing was necessary) but that's nevertheless what I experienced.
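
One concrete example of why the simple helper kept needing changes: a
client that must originate from a reserved source port has to bind before
connecting, which a bare connect-to-name call cannot express. A rough
sketch, with placeholder addresses, that also needs root to run:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 1023))                 # reserved source port (< 1024), needs privilege
    s.connect(("192.0.2.10", 514))     # placeholder server; rsh-style shell port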

These days the situation is similar but I'm having to deal with a
mixture of v4 and v6 peers, or NAT traversal, or brain-damage in
getaddrinfo() implementations, or bugs in the default address selection
algorithm.

With so much lunacy in how the net works these days, I regard a
flexible API as absolutely necessary for the survival of applications.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: sockets vs. fds

2008-12-05 Thread John Day
Not sure when the RAND work was.  But if you were worrying about UDP 
it was much later. When Illinois put UNIX on the Net in the summer of 
75, we ran into several initial problems.  There were real limits on 
kernel size.  So the NCP was in the kernel, telnet, etc. were 
applications.


In the initial version Telnet was written as two processes (inbound 
and outbond) because pipes were stupid and were blocking.  We hacked 
stty and gtty so they could do the minimal coordination they needed 
to do.  There was a paper on this by Greg Chesson published somewhere.


Once that initial version was working, we went back and designed a 
non-blocking IPC system for UNIX.  All of this is lost in the fogs of 
time.  With that we were able to do Telnet as a single process.



At 11:00 -0500 2008/12/05, Noel Chiappa wrote:

 From: Melinda Shore [EMAIL PROTECTED]

 Unix ... was designed around a few abstractions, like pipes,
 filedescriptors, and processes, and by the time IP was implemented we'd
 pretty much settled on filedescriptors as endpoints for communications.
 We could do things with them like i/o redirection, etc. ... in Unix we
 shouldn't care whether an input or output stream is a terminal, a file,
 or a network data stream

Wow, that tickled some brain cells that have been dormant a very, very long
time! My memory of this goes all the way back to what I believe was the very
first mechanism added to V6 Unix to allow random IPC (i.e. between unrelated
processes), which was a pipe-like mechanism produced, if vague memories
serve, by Rand. This is all probably irrelevant now, but here are a few
memories...

One of the problems I recall we had with the Unix stream paradigm is that it
was not a very good semantic match for unreliable asynchronous communication,
where you had no guarantee that the data you were trying to do a 'read' on
would ever arrive (e.g. things like UDP), nor for things which were
effectively record-based (again, UDP).

Sure, TCP could reasonably well be coerced into a stream paradigm, but just
as RPC has failure modes that a local call doesn't, and therefore needs
extended semantics beyond that of a vanilla local call, so too it is with a
Unix stream (which, in V6, was the only I/O mechanism).

Noel
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Henning Schulzrinne wrote:


 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).
 
 This certainly seems to be the way that modern APIs are heading. If
 I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other scripting
 languages have a socket-like API that does not expose IP addresses, but
 rather connects directly to DNS names. 

and yet, people wonder why so many network applications are still
written in C, despite all of the security issues associated with weak
typing, explicit memory management, and lack of bounds checking on array
references.

(people also need to realize that using a modern API makes it _harder_
to get an application to work well in a mixed IPv4/IPv6 environment.)

 Thus, we're well on
 our way towards the goal of making (some) application oblivious to
 addresses. 

and we're also well on our way towards the goal of having everything run
over HTTP.

 I suspect that one reason for the popularity of these languages is
 exactly that programmers don't want to bother remembering when to
 use ntohs().

probably so.  I can't exactly blame them for that.

Keith

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Dave CROCKER



John Day wrote:
Speak for yourself David.  These problems have been well understood and 
discussed since 1972.  But you are correct, that there were still a 
large unwashed that didn't and I am still not sure why that was. This 
seems to be elementary system architecture.



John,

After you or I or whoever indulges in our flash of brilliant insight, it is the 
unwashed who do all the work.


So I was careful to refer to "the community" rather than claim that no one at 
all understood the issue.


I measure the community in terms of that pesky rough consensus construct and 
particularly in terms of running code.  Even in terms of the much more relaxed 
measure, namely mindshare, the community reflects no clear consensus on the 
matter of name-vs-address split, beyond now believing that we should do more of 
it.


We are only beginning to see broader use of the distinction between transient, 
within-session naming, for merging data coming in from alternate paths, versus 
global, persistent naming, for initial rendezvous.


d/

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: sockets vs. fds

2008-12-05 Thread Dave CROCKER



John Day wrote:
Not sure when the RAND work was.  


1977.  One of several IPC efforts for Unix around that time.

I was in that group, but had nothing to do with that work and frankly failed to 
track its significance.



 But if you were worrying about UDP it
was much later. When Illinois put UNIX on the Net in the summer of 75, 
we ran into several initial problems. 


The key point that Melinda made, that resonated with me, is the unfortunate 
matter of semantic mismatch between the Unix 'file' construct and the 
networking 'packet' construct.  Even the 'tcp' construct had some challenges.



d/

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day

In other words the failure of a university education.


At 8:46 -0800 2008/12/05, Dave CROCKER wrote:

John Day wrote:
Speak for yourself David.  These problems have been well understood 
and discussed since 1972.  But you are correct, that there were 
still a large unwashed that didn't and I am still not sure why that 
was. This seems to be elementary system architecture.



John,

After your or I or whoever indulges in our flash of brilliant 
insight, it is the unwashed who do all the work.


So I was careful to refer to the community rather than claim that 
no one at all understand the issue.


I measure the community in terms of that pesky rough consensus 
construct and particularly in terms of running code.  Even in terms 
of the much more relaxed measure, namely mindshare, the community 
reflects no clear consensus on the matter of name-vs-address split, 
beyond now believing that we should do more of it.


We are only beginning to see broader use of the distinction between 
transient, within-session naming, for merging data coming in from 
alternate paths, versus global, persistent naming, for initial 
rendez vous.


d/

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day
As we all know, all attempts to turn a sow's ear into a silk purse 
generally meet with failure.


You can only cover up so much.  I have written (or tried to write) 
too many device emulators and every time it is the same lesson.  ;-)


We used to have this tag line: when you found that pesky bug and of 
course it was staring you right in the face all the time, someone 
would say, "Well, you know . . . if you don't do it right, it won't 
work."  ;-)


We seem to have that problem in spades.  ;-)


At 11:29 -0500 2008/12/05, Keith Moore wrote:

John Day wrote:


 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).


 That had been the original plan and there were APIs that did that. But
 for some reason, the lunacy of the protocol specific sockets interface
 was preferred. 


About the time I wrote my second network app (circa 1986) I abstracted
all of the connection establishment stuff (socket, gethostbyname, bind,
connect) into a callable function so that I wouldn't have to muck with
sockaddrs any more.  And I started trying to use that function in
subsequent apps.  What I generally found was that I had to change that
function for every new app, because there were so many cases for which
merely connecting to port XX at the first IP address corresponding to
hostname YY that accepted a connection, was not sufficient for the
applications I was writing.  Now it's possible that I was writing
somewhat unusual applications  (e.g. things that constrained the source
port to be  1024 and which therefore required the app to run as root
initially and then give up its privileges, or SMTP clients for which MX
processing was necessary) but that's nevertheless what I experienced.

These days the situation is similar but I'm having to deal with a
mixture of v4 and v6 peers, or NAT traversal, or brain-damage in
getaddrinfo() implementations, or bugs in the default address selection
algorithm.

With so much lunacy in the how the net works these days, I regard a
flexible API as absolutely necessary for the survival of applications.

Keith


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: sockets vs. fds

2008-12-05 Thread Tony Finch
On Fri, 5 Dec 2008, Dave CROCKER wrote:
 Melinda Shore wrote:
 
  Not to go too far afield, but I think there's consensus among us old
  Unix folk that the mistake that CSRG made wasn't in the use of
  addresses but in having sockets instead of using file descriptors.
  This was actually fixed in SysVRSomethingOrOther with the introduction
  of a network pseudo-filesystem (open(/net/192.168.1.1, ... ) with
  ioctls but never got traction.

 It isn't immediately obvious to me why file descriptors would have had a
 major impact, so can you elaborate?

This isn't a question of sockets versus file descriptors, since sockets
*are* file descriptors. It is actually a question of how to specify
network addresses in the API, i.e. the BSD sockaddr structure versus the
Plan 9 extended pathname semantics. Using pathnames for everything would
eliminate warts like embedding pathnames in sockaddrs in order to address
a local IPC endpoint. On the other hand, filesystem pathnames are a
uniform hierarchical namespace, which isn't true for the combination of
network protocol, address, and port - what happens if you opendir("/net/")?

Tony.
-- 
f.anthony.n.finch  [EMAIL PROTECTED]  http://dotat.at/
FITZROY: WESTERLY 6 TO GALE 8 DECREASING 4 OR 5 FOR A TIME THEN BECOMING
CYCLONIC LATER. VERY ROUGH OR HIGH. SQUALLY SHOWERS. MODERATE OR GOOD.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Rémi Després

Christian Vogt  -  on (m/d/y) 12/4/08 10:26 AM:

In any case, your comment is useful input, as it shows that calling the
proposed stack architecture in [1] hostname-oriented may be wrong.
Calling it service-name-oriented -- or simply name-oriented -- may
be more appropriate.  Thanks for the input.

Full support for the idea of a *name-oriented architecture*.

In it, the locator-identifier separation principle applies naturally: 
names are the identifiers; addresses, or addresses plus ports,  are the 
locators.


Address plus port locators are needed to reach applications in hosts 
that have to share their IPv4 address with other hosts (e.g. behind a 
NAT with configured port-forwarding).


*Service-names* are the existing tool to advertise address plus port 
locators, and to permit efficient multihoming, because in *SRV 
records*, which are returned by the DNS for service-name queries 
(a lookup is sketched below):
- several locators can be received for one name, possibly with a mix 
of IPv4 and IPv6
- locators can include port numbers
- priority and weight parameters of locators provide for backup and load 
sharing control.
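
A rough example of such a lookup (this uses the third-party dnspython
package -- older versions spell the call dns.resolver.query() -- and the
service name is only illustrative):

    import dns.resolver   # third-party: dnspython

    # one SRV record per locator, each with priority, weight, port and target
    for rr in dns.resolver.resolve("_xmpp-client._tcp.jabber.org", "SRV"):
        print(rr.priority, rr.weight, rr.port, rr.target)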


IMO, service names and SRV records SHOULD be supported asap in all 
resolvers (in addition to host names and A/AAAA records that they 
support today).

Any view on this?

Regards,
RD




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Andrew Sullivan
On Fri, Dec 05, 2008 at 09:22:39AM -0500, Keith Moore wrote:
 
 but you're really missing the point, which is that DNS fails a lot.
 
 note that DNS failures aren't all with the authoritative servers -
 they're often with caches, resolver configuration, etc.

Before the thread degenerates completely into "DNS is not reliable,"
"Is too" pairs of messages, I'd like to ask what we can do about this.

It seems to me true, from experience and from anecdote, that DNS out
at endpoints has all manner of failure modes that have little to do
with the protocol and a lot to do with decisions that implementers and
operators made, either on purpose or by accident. 

I anticipate that the gradual deployment of DNSSEC (as well as various
other forgery resilience techniques) will expose many of those
failures in the nearish future.

This suggests to me that there will be an opportunity to improve some
of the operations in the wild, so that actually broken implementations
are replaced and foolish or incompetent administration gets
corrected, if only to get things working again.  It'd be nice if we
had some practical examples to analyse and for which we could suggest
repairs so that there would be a convenient cookbook-style reference
for the perplexed.  

If you have a cache of these examples, I'd be delighted to see them.

A

-- 
Andrew Sullivan
[EMAIL PROTECTED]
Shinkuro, Inc.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Tony Finch
On Fri, 5 Dec 2008, Keith Moore wrote:
 Stephane Bortzmeyer wrote:
 
  For a good part, this is already done. You cannot use IP addresses for
  many important applications (the Web because of virtual hosting and
  email because most MTA setup prevent it).

 you're generalizing about the entire Internet from two applications?

It's a general truth that application protocols need a layer of addressing
of their own, and it isn't sufficient to just identify the host the
application is running on. The special cases are the applications that do
not need extra addressing.

In the cases where protocols do not support their own addressing
architecture, we have usually been forced to retro-fit it or bodge around
it. For example, the HTTP Host: header, the TLS server_name extension, the
subjectAltName X.509 field, the use of full email addresses instead of
usernames as login names for IMAP and POP. XMPP got this right.
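
A rough illustration of the first of those retrofits -- one address,
several names, and only the Host header tells the server which
application-layer target is meant (the address and names below are
placeholders):

    import http.client

    for name in ("www.example.org", "www.example.net"):
        conn = http.client.HTTPConnection("192.0.2.80", 80, timeout=10)
        conn.request("GET", "/", headers={"Host": name})   # selects the virtual host
        print(name, conn.getresponse().status)
        conn.close()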

Tony.
-- 
f.anthony.n.finch  [EMAIL PROTECTED]  http://dotat.at/
VIKING NORTH UTSIRE SOUTH UTSIRE FORTIES CROMARTY FORTH EASTERLY OR
NORTHEASTERLY 5 OR 6, OCCASIONALLY 7 OR GALE 8 AT FIRST EXCEPT IN NORTH UTSIRE
AND FORTH, BACKING NORTHERLY OR NORTHWESTERLY AND DECREASING 4 AT TIMES.
MODERATE OR ROUGH, OCCASIONALLY VERY ROUGH AT FIRST EXCEPT IN NORTH UTSIRE AND
FORTH. SQUALLY SHOWERS. MODERATE OR GOOD.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread David W. Hankins
On Fri, Dec 05, 2008 at 08:46:48AM -0800, Dave CROCKER wrote:
 John Day wrote:
 discussed since 1972.  But you are correct, that there were still a large 
 unwashed that didn't and I am still not sure why that was. This seems to 

 After your or I or whoever indulges in our flash of brilliant insight, it 
 is the unwashed who do all the work.

I'm assuming you are using the term 'unwashed' to refer to the simple
act of bathing, rather than as in my background I understand it as a
metaphor for Christian baptism.

Surely neither Mr. Crocker nor Mr. Day are referring to IETF baptismal
practices?

Could either of you two elaborate on why you think that bathing (or
the evident lack thereof) is at all relevant to Internet standards?

I'm aware that in the ~400AD's there was actually quite a lot of
(religio) philosophical debate over the practice of bathing (which I
rather would think we'd put behind us after 1600 years), but I've
never heard it said that good engineers do (or don't) bathe.

-- 
Ash bugud-gul durbatuluk agh burzum-ishi krimpatul.
Why settle for the lesser evil?  https://secure.isc.org/store/t-shirt/
-- 
David W. HankinsIf you don't do it right the first time,
Software Engineeryou'll just have to do it again.
Internet Systems Consortium, Inc.   -- Jack T. Hankins


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

 There were also a bazillion deployed applications that would never be
 upgraded to deal with Y2K.  Somehow people managed.  But part of how
 they managed was by replacing some applications rather than
 upgrading them.

There were clear business motivations for ensuring that apps survived
Y2K appropriately. There is no similar brick wall with IPv4 address
exhaustion.

 I certainly won't argue that it's not a significant challenge to edit
 each application, recompile it, retest it, update its documentation,
 educate tech support, and release a new version.   But you'd have all of
 those issues with moving to IPv6 even if we had already had a socket API
 in place where the address was a variable-length, mostly opaque,
 object.

I didn't say a better API would have variable-length, mostly opaque
objects. I think others have already chimed in that hiding the
details from the applications is the key to a better API. 

And I understand that Apple has a more modern API, and it made
upgrading their applications to support IPv6 that much easier.

 Consider also that the real barrier to adapting many applications to
 IPv6 (and having them work well) isn't the size of the IPv6 address, or
 adapting the program to use sockaddr_in6 and getaddrinfo() rather than
 sockaddr_in and gethostbyname().

Actually, the real barrier to upgrading applications is lack of
incentive. No ROI.  It's not about technology at all. It's about
business cases.

Wouldn't it be nice if existing apps could run over IPv6 (perhaps in a
degraded form) with no changes? That would change the challenges of
IPv6 deployment rather significantly.

 And at least from where I sit, almost all of the applications I use
 already support IPv6.  (I realize that's not true for everybody, but it
 also tells me that it's feasible.)

Huge numbers of important applications in use today do not support
IPv6. Think beyond email, ssh and a browser. Think business
applications. Talk to someone who works for a software company about
the challenges they have upgrading their software to support IPv6 (or
fixing bugs, or doing any work to old software). It's less about
technology than business cases.

Case in point. There are apparently still significant amounts of
deployed software that cannot handle TLDs of more than 3 characters in
length. That means DNS names with a TLD of .info or .name don't work
in all places and can't be used reliably. I heard just this week that
Yahoo can't handle email with .info names. .info has existed as a TLD
for 7 years. Fixing this is not a technical problem, it's a business
problem (i.e., incenting the parties that need to upgrade their
software).

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:
 Keith Moore [EMAIL PROTECTED] writes:
 
 There were also a bazillion deployed applications that would never be
 upgraded to deal with Y2K.  Somehow people managed.  But part of how
 they managed was by replacing some applications rather than
 upgrading them.
 
 There were clear business motivations for ensuring that apps survived
 Y2K appropriately. There is no similar brick wall with IPv4 address
 exhaustion.

more like a padded wall with embedded spikes?

 Actually, the real barrier to upgrading applications is lack of
 incentive. No ROI.  It's not about technology at all. It's about
 business cases.

I suppose it follows that people don't actually need those applications
to work in order to continue doing business... in which case, of course
they shouldn't upgrade them.

Either that, or the people who are making these decisions don't really
understand what's important to keeping their businesses running... and
those businesses will fail.

(not that this helps IPv6 any, of course)

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

 Thomas Narten wrote:
  Keith Moore [EMAIL PROTECTED] writes:
  
  There were also a bazillion deployed applications that would never be
  upgraded to deal with Y2K.  Somehow people managed.  But part of how
  they managed was by replacing some applications rather than
  upgrading them.
  
  There were clear business motivations for ensuring that apps survived
  Y2K appropriately. There is no similar brick wall with IPv4 address
  exhaustion.

 more like a padded wall with embedded spikes?

More like a swamp, with steam rising from dark looking places. But
still a fair amount of firm ground if you can stay on a narrow and
careful path, though it's hard to tell because one can't see very far
and the swamp looks very big...

But looking back, we are already pretty far in the swamp, so it's not
clear exactly what is changing or how much worse things can or will
get continuing the current trajectory, so why not continue on the
current course just a little bit longer...

  Actually, the real barrier to upgrading applications is lack of
  incentive. No ROI.  It's not about technology at all. It's about
  business cases.

 I suppose it follows that people don't actually need those applications
 to work in order to continue doing business... in which case, of course
 they shouldn't upgrade them.

Keith, this is unbelievably simplistic logic. Try the following
reality check. The applications run today. Important things would
break if they were turned off. But there is no money to pay for an
upgrade (by the customer) because the budget is only so big, and the
current budget was more focussed on beefing up security and trying to
get VoIP running. Or, the vendor doesn't have an upgrade because the
product is EOL, and the customer can't afford to buy a replacement for
it (again for a number of different reasons). Or, the vendor does have
an upgraded product, but it requires running the latest version of the
product, which doesn't run on the OS release you happen to be running
(and can't change for various reasons), and would require new hardware
on top of things because the new product/OS is a memory pig, or was
rewritten in Java, etc., etc.

 Either that, or the people who are making these decisions don't really
 understand what's important to keeping their businesses running... and
 those businesses will fail.

They may understand very well. But a simple cost/benefit analysis (in
terms of $$ and/or available technical resources) says they can't
afford to upgrade.

Happens all the time. Why do you think people run old software for
years and years and years?

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:

 I suppose it follows that people don't actually need those applications
 to work in order to continue doing business... in which case, of course
 they shouldn't upgrade them.
 
 Keith, this is unbelievably simplistic logic. 

This whole discussion is unbelievably simplistic logic.  Insults don't
make the logic any better.

 The applications run today. Important things would
 break if they were turned off. But there is no money to pay for an
 upgrade (by the customer) because the budget is only so big, and the
 current budget was more focussed on beefing up security and trying to
 get VoIP running. Or, the vendor doesn't have an upgrade because the
 product is EOL, and the customer can't afford to buy a replacement for
 it (again for a number of different reasons). Or, the vendor does have
 an upgraded product, but it requires running the latest version of the
 product, which doesn't run on the OS release you happen to be running
 (and can't change for various reasons), and would require new hardware
 on top of things because the new product/OS is a memory pig, or was
 rewritten in Java, etc., etc.

Yep.  I've seen it happen many times in various guises.  By now it is
widely understood that many things need maintenance budgets - e.g.
buildings, vehicles, computer and networking hardware.  And we actually
have a decent sense of how much to budget for those things.  But we
don't have a widely-understood idea of what it costs to maintain
software, particularly networking software.  There's both a strong
tendency to believe that software is fixed-cost and an increasing
tendency to fire in-house programmers and push things like software
maintenance to third parties - which is to say, they don't get paid for.
 But when the Internet keeps changing (for many more reasons than IPv4
address space exhaustion) you can't expect the software to stay static
and keep working well.

 Either that, or the people who are making these decisions don't really
 understand what's important to keeping their businesses running... and
 those businesses will fail.
 
 They may understand very well. But a simple cost/benefit analysis (in
 terms of $$ and/or available technical resources) says they can't
 afford to upgrade.
 
 Happens all the time. Why do you think people run old software for
 years and years and years?

Most likely, because they aren't properly estimating cost and/or
benefit, or because they are too focused on short-term costs and
ignoring medium- and long-term costs.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread Hallam-Baker, Phillip
Yes, that is indeed where the world is going.
 
My point is that it would be nice if the IETF had a means of guiding the 
outcome here through an appropriate statement from the IAB.
 
Legacy applications are legacy. But we can and should push new applications to 
use SRV based connections or some elaboration on the same principle. Even with 
legacy applications, MX was a retrofit to SMTP.
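
(To illustrate what an "SRV based connection" might look like from the
application side -- a rough Python sketch; resolve_srv() is a stand-in for
whatever SRV lookup the platform provides, not an existing library call,
and the fallback step mirrors the MX-then-address-record behaviour
mentioned above:)

import socket

def resolve_srv(service, proto, domain):
    """Stand-in: return a list of (priority, weight, port, target) tuples
    from an SRV lookup of _service._proto.domain.  The Python standard
    library has no SRV resolver, so a real program would call out to its
    platform's DNS library here."""
    raise NotImplementedError

def connect_by_name(service, proto, domain, default_port):
    try:
        records = sorted(resolve_srv(service, proto, domain))
    except NotImplementedError:
        records = []
    # Try SRV targets in priority order, then fall back to the bare domain
    # name, much as SMTP falls back to an address record when there is no MX.
    candidates = [(target, port) for _prio, _weight, port, target in records]
    candidates.append((domain, default_port))
    for host, port in candidates:
        try:
            return socket.create_connection((host, port))
        except OSError:
            continue
    raise OSError("no reachable endpoint for %s at %s" % (service, domain))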



From: [EMAIL PROTECTED] on behalf of Henning Schulzrinne
Sent: Fri 12/5/2008 9:40 AM
To: Thomas Narten; IETF discussion list; [EMAIL PROTECTED]
Subject: Re: The internet architecture





 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).

This certainly seems to be the way that modern APIs are heading. If 
I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other 
scripting languages have a socket-like API that does not expose IP 
addresses, but rather connects directly to DNS names. (In many cases, 
they unify file and socket opening and specify the application 
protocol, too, so that one can do fopen(http://www.ietf.org/), for 
example.) Thus, we're well on our way towards the goal of making 
(some) applications oblivious to addresses. I suspect that one reason 
for the popularity of these languages is exactly that programmers 
don't want to bother remembering when to use ntohs().
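
(By way of illustration, a couple of lines of Python in that spirit; no
address, address family, or byte-order handling appears anywhere in the
application code:)

import socket
import urllib.request

# Connect by name and port; the library chooses IPv4 or IPv6 underneath.
conn = socket.create_connection(("www.ietf.org", 80))
conn.close()

# Or let the URL carry both the name and the application protocol --
# the fopen(http://...) pattern mentioned above.
with urllib.request.urlopen("http://www.ietf.org/") as page:
    print(page.status)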

Henning

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Marc Manthey


On 05.12.2008, at 11:25, Rémi Després wrote:


Christian Vogt - on (m/d/y) 12/4/08 10:26 AM:
In any case, your comment is useful input, as it shows that calling the
proposed stack architecture in [1] hostname-oriented may be wrong.
Calling it service-name-oriented -- or simply name-oriented -- may
be more appropriate. Thanks for the input.

Full support for the idea of a *name-oriented architecture*.

In it, the locator-identifier separation principle applies  
naturally: names are the identifiers; addresses, or addresses plus  
ports,  are the locators.


Address plus port locators are needed to reach applications in
hosts that have to share their IPv4 address with other hosts (e.g.
behind a NAT with configured port-forwarding).


*Service-names* are the existing tool to advertise address-plus-port
locators, and to permit efficient multihoming, because in *SRV
records*, which are returned by the DNS for service-name queries:
- several locators can be received for one name, possibly with a
mix of IPv4 and IPv6
- locators can include port numbers
- priority and weight parameters of locators provide for backup and
load-sharing control (a small sketch of this selection logic follows
below).
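
(A small sketch of that selection logic -- priorities first, then a
weighted-random choice within a priority level, per RFC 2782; the record
data is invented for illustration:)

import random

# Hypothetical SRV RRset for one service name: (priority, weight, port, target)
records = [
    (10, 60, 443, "eu.example.net."),       # primary pool, shares load roughly 60/40
    (10, 40, 443, "us.example.net."),
    (20,  0, 8443, "backup.example.net."),  # only tried if everything at priority 10 fails
]

def srv_order(rrset):
    """Yield (target, port) in the order a client should try them, RFC 2782 style."""
    by_priority = {}
    for prio, weight, port, target in rrset:
        by_priority.setdefault(prio, []).append((weight, port, target))
    for prio in sorted(by_priority):            # lowest priority value first
        group = list(by_priority[prio])
        while group:
            # Weighted-random pick within one priority level
            # (zero weights get a nominal share here, a small simplification).
            weights = [w if w > 0 else 1 for w, _, _ in group]
            i = random.choices(range(len(group)), weights=weights)[0]
            _weight, port, target = group.pop(i)
            yield target, port

for target, port in srv_order(records):
    print(target, port)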


IMO, service names and SRV records SHOULD be supported asap in all
resolvers (in addition to host names and A/AAAA records that they
support today).

Any view on this?


hello Rémi,

I totally agree with you on all points. From my perspective, there is
not sufficient support for identification and signing tools, like DNS
TSIG, which is used by Apple's wide-area Bonjour:


http://www.dns-sd.org/ServerSetup.html

I was following an interesting software project, BUT

quote :

On Linux (at least on Debian), you need the mDNSResponder package  
provided by

Apple on the Bonjour downloads page.  Unfortunately, Avahi doesn't yet
implement all of the API functions UIA needs.
---

So shared secrets (http://www.ietf.org/rfc/rfc2845.txt) have not been
implemented in Avahi for wide-area distribution for 2 years. And
Novell / SUSE seems to have no interest either.


just my 50 cents

regards

Marc

--
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: [EMAIL PROTECTED]
PGP/GnuPG: 0x1ac02f3296b12b4d
jabber :[EMAIL PROTECTED]
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf