Re: Internet Architecture (was US DoD and IPv6)

2010-10-08 Thread Noel Chiappa
{'Borrowing' a new, more appropriate Subject: from a private reply...}

 From: John C Klensin john-i...@jck.com

 What does this say about the IETF and how we make decisions? Does that
 need adjusting?

Perhaps, but even I shrink from tackling that particular windmill!

 while ... recriminations based on hindsight may be satisfying in some
 ways, the question is what to do going forward. 

I couldn't agree with your latter point more.

 There are communities out there who believe that we have managed to
 prove that datagram networks, with packet-level routing, are a
 failure at scale and that we should be going back to an essentially
 connection-oriented design at the network level.

I (perhaps obviously) think that's a rash conclusion: the circa 1975 Internet
architecture was never intended to grow to this size, and that it has done so
at all (albeit via the use of a number of hacks, some merely kludgy, others
downright ugly) is a testament to how well the basic concept (of unreliable
packets, and minimal state in the network) works.

I do think that the architecture does require some work, though, and for a
number of reasons. For one, the real requirements have become more complex as
the network has grown, and a number that have arrived (e.g. provider
independence for users; the need for ISP's to be able to manage traffic
flows; etc) aren't really handled well (or at all) by the existing
architecture. For another, we need to bring some order to the cancerous chaos
that has resulted from piecemeal attempts to deal with the previous point.
Finally, it's a truism of system design that arrangements that work at one
order of scale don't at another, and as the network has grown beyond almost
our wildest expectations - and certainly larger than it was ever designed to
- I think we're seeing that too.

 If not, then there are other focused discussions that would be helpful.
 The latter discussions that have almost started in this and related
 threads, but have (I believe) gotten drowned out by the noise, personal
 accusations about fault, and general finger-pointing.

Well, sometimes one has to clear the air (or underbrush, if you will), and
get everyone's minds open, before one can make forward progress. But I agree,
'hah-hah, your end of the ship is sinking' rapidly becomes totally
unproductive.


 How would you propose moving forward?

Well, IMO there are a number of things we need to do, which can be done all
in parallel.

The first is to be honest with ourselves, and with the people out there who
depend on us to set direction, about what's really likely to happen out in
the network; e.g. a very long period during which we are stuck with multiple
naming realms with various kinds of translators (NATxx) gluing them together.

This whole 'don't worry, everything will be fine' schtick has got to go -
we're now in what I call The Emperor's New Protocol land, where (almost)
everybody knows the theoretical plan isn't going to work, and major revisions
are needed, but nobody wants to come right out and say so formally.

We need to do that.

I think a group (set up by the IAB, who make these kind of pronouncements)
should sit down and draw up a _realistic_ picture of what the future over the
next couple of years (say, up to 5 years out - 5 only, because of a second
parallel effort, below) is likely to be, and get that out, so people have a
realistic appraisal of how things stand (and restore a little bit of the I*'s
credibilty, in the process).

That group (or perhaps a sister group) should produce a formal statement
covering the implications for IETF work of that present/future; i.e. about the
_existing_ architectural environment in which the protocols the IETF designs
have to operate, and thus have to be designed for. E.g. a formal IETF policy
statement 'all IETF protocols MUST work through NAT boxes'. Etc, etc.


The second is to set up a group (and again, this is the IAB's job) to try and
plan some sort of evolutionary architectural path, which would have two
phases: i) to survey all classes of stakeholders (ISP's, content providers,
users) to see what the network needs to be able to do that it can't now, and
ii) provide both a) an architecture that meets those needs, and b) a
_realistic_ plan for getting there. Realistic in economic, interoperability,
etc terms - all the things we have learned (painfully) are so critical to the
viable roll-out of new stuff.

Do note that that effort implies that the current solution _doesn't_ provide
all those things. So we might as well swallow hard and admit _that_ publicly
too.


Whether all the above is politically feasible in the current IETF - who knows?
If not, it's back to 'The Emperor's New Protocol' for another year or so, I
guess, by which time the environment may be a bit more receptive.

Noel
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman

Re: Internet Architecture (was US DoD and IPv6)

2010-10-08 Thread Noel Chiappa
 From: Dave Cridland d...@cridland.net

 So currently, a NAT provides:

 - A degree of de-facto firewalling for everyone.
 - An immunity to renumbering for enterprises.
 - Fully automated network routing for ISPs.

 If technologies could be introduced to tackle especially the last two,
 I think the advantages of NATs would vanish.

Even assuming we could provide 1-3 above in some way (which I am somewhat
dubious about), I would have to say 'I don't think so' to your conclusion -
because I think your list is incomplete. The Internet as actually deployed
depends crucially on having a number of disjoint low-level naming realms -
which necessitate NAT boxes between them.

For one, my understanding of the current plan for interoperability between
IPv6 devices with an IPv6-only address, and 'the' IPv4 Internet, is the
'IPv4/IPv6 Translation' work from BEHAVE, and that's basically NAT. (There
was, a long time ago, some proposal for having such IPv6 devices with an
IPv6-only address 'share' an IPv4 address, to enable access to 'the' IPv4
Internet, but I guess it never came through.) On that alone, NATs will be
with us for decades to come.

For another, there are lots of people who have networks behind NAT boxes, for
a variety of reasons (maybe they couldn't get the address space, maybe - like
all those home wireless networks - it was just easier to do it that way). And
there is, for most of them, no economic incentive to change that, to give up
their private naming realm. (Sure, there will be a few exceptions, for whome
it does make economic sense to get rid of NAT - e.g. large ISPs for whom NAT
hacks make life too complex - but there will still be a lot left after that.

So unless you have a viable scenario in which disjoint naming realms go away,
then you do not have a viable scenario in which NATs go away.

 Noel
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2009-01-05 Thread Fleischman, Eric
I spent several years trying to discern what the IETF's Internet
Architecture became when we abandoned our historic architecture and
started growing the middle of the hour glass through middleboxes and
telephony-originated techniques. For several years in the early 2000s I
became convinced that we had no architecture and I lamented that fact.
Recently I realized that we had stumbled upon a common architecture but
hadn't yet realized it -- map-and-encaps is our current Internet
architecture. I believe that the techniques described in Fred Templin's
current I-D trilogy should be re-written to state that those techniques
describe our current de facto Internet architecture.

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2009-01-05 Thread Michael Richardson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


 Tony == Tony Finch d...@dotat.at writes:
 Most networkable consumer electronics ships with at least two
 network interfaces. Host multihoming is only rare because it
 doesn't work.

 Why do you say it doesn't work ?

Tony The kind of multihoming I am talking about is where you are
Tony using multiple links for redundncy or resilience - i.e. the
Tony same reasons for site multihoming. I did not mean other kinds

  I think it says something to the extreme *rarity* of host multihoming
(for systems that initiate connections) that there is even this confusion.
  You can be multihomed with one physical interface.  This is rather
more common for machines that mostly accept connections (web servers),
as you can do relatively simple source address based policy routing.
(If it's ISP-A's address in the source field, use ISP-A)
  
  I haven't read every one of the emails in this thread: but where are
the shim6 people?  Maybe they are busy writing drafts + code.

- -- 
] Y'avait une poule de jammé dans l'muffler!|  firewalls  [
]   Michael Richardson, Sandelman Software Works, Ottawa, ON|net architect[
] m...@sandelman.ottawa.on.ca http://www.sandelman.ottawa.on.ca/ |device driver[
] panic(Just another Debian GNU/Linux using, kernel hacking, security guy); [

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Finger me for keys

iQEVAwUBSVqY7ICLcPvd0N1lAQK4LAf9FlJKpbcjCSDs/R3/C4xPvi7j4vBsLn0V
b1POJgfZ1T3dbbVV1nBsDDgnnV3RWoimToSvsUC0ZSiJ7rCfuiDDpfcwrSvPfPP+
6B9/R2hE0BkgJfn9zKj/25cASkqfH9HBs6ZKOl5i6WwWMamDLqp6dfdT+cVUdGtV
TZTAMcBpk/IOpFYtwqvN3x5TyKFnkokArEbje0c+ceWjXbHaM/eh30tR/knbPrTh
JadZoeGmB2NyoLG6243aBIDydwC7AB2O4Uz74Ivx73qCAj1UYFR592miA6O+tQAH
/M9fqUtdJetnRkhCdDf5JjmIbjF2Jmw7We2vqzQkJWfiwc4UUJQJsQ==
=KDlL
-END PGP SIGNATURE-
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


The Internet architecture - pointer to MIF (Multiple Interfaces) discussion

2009-01-05 Thread Hui Deng
Hello,all

We have setup an mailing list to discuss the host which would like to use
multiple interfaces.
several issues has been identified based on problem statement.

http://www.ietf.org/internet-drafts/draft-blanchet-mif-problem-statement-00.txt

Please be awared that it is different from multihoming (site with multiple
interfaces).

Please feel free to join our discussion over there,
https://www.ietf.org/mailman/listinfo/mif

thanks

-Hui
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2009-01-02 Thread Rémi Després




Tony Finch  -  le (m/j/a) 1/1/09 10:01 PM:

  But what we need is an addressing architecture that allows us to tell the
difference between a hostname that has multiple addresses because they are
required for application addressing, or because the destination has
multiple points of connection. (I think IPv4 vs IPv6 is a special case of
the latter.) This is another way of looking at the id/loc split.
  

In my understanding, SCTP has a promising  approach for this: 
- The DNS provides as usual  a set of locators that may be tried
successively to start an SCTP association, until one succeeds. They may
be those of different hosts (e.g. for load sharing on a per connection
basis).
- Once an SCTP association is established with an SCTP endpoint, both
ends may exchange their lists of alternative locators. These locators
being exclusive of endpoint physical hosts, this is adequate for
multihoming support. 


  

  Given multiple addresses for the same hostname, a client has no way
to make an informed decision about which is the best to connect to.
This is why hosts that support IPv6 do not work as well as IPv4-only
hosts.
  

   I guess I don't follow. Indeed, IPv6 hosts that follow RFC [3484]
will defeat some attempts at load-balancing,

  
  
This isn't about load balancing. One example is that RFC 3484 prefers IPv6
to IPv4 even when IPv6 connectivity relies on sub-optimal tunnels. Another
is section 6 rule 1 says a client should "avoid unusable destinations"
without specifying any way for a client to find out which destinations are
usable.
  

SRV resource records do provide indications for weighted load
balancing, nicely distinct from normal vs backup. 
IMO, their use could be extended for multihoming. 
For this, alternative locators of a multihomed endpoint would be
compared to those obtained in  SRV RRs that , in the DNS, are given 
for the name of this endpoint.
For each address that matches an SRV RR, the weight it indicates can be
used for intelligent load balancing. 


  

  There is no support for multiple instances of the same application
per host (i.e. virtual hosting) unless the application has its own
addressing.
  

   I'm not clear what Tony might see as such "support".

  
  
Ned's message had some examples. I suppose that SRV records go some way to
fixing the problems with well-known port numbers, so "no support" is an
exaggeration - but we've failed to back-port this fix to older protocols.
  

Same view.

In addition, if applications use Connect By Name, and resolvers make
SRV queries when names have the service-name format, then:
- applications need not to be concerned
- transport modules can get the right remote application ports. 

RD


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2009-01-01 Thread John Leslie
Tony Finch d...@dotat.at wrote:
 
 In fact the use of the term multihoming to describe multiple IP
 addresses on the same link is a serious problem with practical
 consequences.

   I see no benefit of setting up multi-homing without separate links.
Routing based on source address is a dangerous practice, even when the
source address is trusted.

 The Internet's idea of multihoming as supported by the architecture is
 NOT what I consider to be multihoming. (The lack of support for my
 idea of multihoming makes Internet connections klunky, fragile, and
 immobile, but we all know that, and should be embarrassed by it.)

   I'm not at all clear what support of multihoming Tony is asking
for...

 The Internet's idea of multihoming is reasonably precisely
 encapsulated by RFC 3484. This specification breaks existing practice
 and fails to do what it is designed to.

   RFC 3484, of course, is Default Address Selection for IPv6. I
guess that Tony is referring to Section 10.5 (which, frankly, I have
never succeeded in deciphering). If anyone actually does what 10.5
suggests, I have not stumbled upon it.

 OK, so what is Internet multihoming? If the DNS resolves a hostname
 to multiple IP addresses then those are assumed to be multiple links
 to the same host.

   That is sometimes true, and often not.

   We indeed do that in a few cases, to have a path which bypasses
routing or connectivity problems. More often, we use separate DNS names
to point to the different interfaces on the same host.

   Typically, multiple A)ddress records for the same domain name will
point to different hosts configured to return identical responses to
service requests.

 Unfortunately there is no way for a client to make an informed
 decision about which IP address to choose.

   Our attitude is that clients _should_ be expected to choose blindly.

 RFC 3484 specifies how to make an UNINFORMED decision, or at best,
 how one could in principle inform the decision. However in order to be
 informed, a host needs to be hooked into the routing infrastructure -
 but the Internet architecture says that the dumb network need not
 explain its workings to the intelligent but ignorant hosts.

   ... which seems about right. Layer 3 is supposed to find an
interconnection from one network in the Internet to another. There
seems to be little point in explaining how it does this to the
endpoints.

 As a result, having multiple addreses for the same hostname on
 different links does not work.

   There's some logical inference missing here. It certainly does work,
though presumably it doesn't do something which Tony thinks might be
expected of it.

 It never worked before RFC 3484, and though RFC 3484 tries to fix it,
 it fails because of the lack of routing information.

   I don't follow...

 What is worse is it breaks DNS round robin - which I admit is a hack,
 but it's a hack with 15 years of deployment, therefore not to be
 broken without warning. Na?ve implementations have broken round-robin
 DNS because RFC 3484 ignores it.

   Round-robin seems mostly unrelated -- it was never guaranteed to be
particularly good at load-balancing.

 So to summarize:
 
 A host has no way to use multiple links to provide redundancy or
 resilience, without injecting a PI route into the DFZ - like the
 anycasted root name servers.

   This would be nice to fix, but it's not clear there's a sufficient
constituency interested in fixing it.

 Given multiple addresses for the same hostname, a client has no way
 to make an informed decision about which is the best to connect to.
 This is why hosts that support IPv6 do not work as well as IPv4-only
 hosts.

   I guess I don't follow. Indeed, IPv6 hosts that follow RFC 3483
will defeat some attempts at load-balancing, but it would seem that
this would only affect server farms using IPv6 -- which ought to know
better than to depend on those load-balancing tricks.

 The Internet addressing architecture has a built-in idea that there
 is one instance of each application per host, and applications are
 identified by protocols (port numbers).

   This is a broken idea. It should be abandoned.

 There is no support for multiple instances of the same application
 per host (i.e. virtual hosting) unless the application has its own
 addressing.

   I'm not clear what Tony might see as such support.

 There is no support for distributing an application across multiple
 hosts (or multiple links to the same host) because address selection
 is blind to availability and reachability - whether you consider them
 to be binary or fractional values.

   Again, I'm not clear.

 If you try to use it you are either relying on the non-kosher
 round-robin DNS, or you are likely to suffer failed or sub-optimal
 connections.

   RFC 3484 specifies that implementations of getaddrinfo() should sort
the list of IPv6 and IPv4 addresses that they return. (This has never
seemed to me a particularly good idea.) It goes on to state that
applications

Re: The internet architecture

2009-01-01 Thread ned+ietf
 I've been asked twice now in private email to clarify what I mean, so this
 is going to turn into a massive rant about how the current Internet
 architecture - as it is deployed, and as it seems to be developing - has a
 completely broken idea of how to address endpoints. The multiple meanings
 of the word multihoming relate directly to the multiple points in the
 rant.

Tony, I pretty much agree with everything you say here. There really is a
pretty serious disconnect between what we seem to be able to build
and what applications actually need.

  This is rather more common for machines that mostly accept connections
  (web servers), as you can do relatively simple source address based
  policy routing.

 In my experience policy routing is not what people use multiple addresses
 on the same link for. Multiple addresses on the same link are used for
 virtual hosting for application protocols that don't signal
 application-level addresses within the application protocol. The canonical
 example is HTTP over SSL, but POP and IMAP have the same problem. (People
 sometimes hack around this design error in POP and IMAP by embedding the
 virtual domain in the username, which is yet another example of why every
 application protocol needs its own addressing architecture.)

Indeed. But of course this approach doesn't work when a diffferent POP or IMAP
server is needed for different virtual hosts, so you throw in a proxy to deal
with that case. The proxy in turn can make authentication fairly ... exciting.

But you're still stuck the minute you hit the need for different virtual hosts
to use different certificates. More recent protocols, starting, I believe, with
MTQP (RFC 3887) have a domain parameter on the STARTTLS command. And I think
there's a TLS extension to do this. But older protocols (which means most of
them) don't have the parameter, and TLS-level support is nonexistant in 
practice, so in this case you're stuck with certificate selection based on IP
addresses. In some very unusual scenarios with disjoint client pools or
unusually flexible clients you may be able to get away with doing it based on
source address or destination port respectively, but most of the time you end
up with multiple destination IPs to handle this.

And even the parameter is clunky - it feels like what it is: A tacked on field.
You're absolutely right that this highlights our failure to give application
addressing design the attention it deserves.

 To
 re-iterate, one computer providing multiple different services on the same
 IP address is NOT multihoming, in the same way that one computer providing
 multiple services on different port numbers is not multihoming.

 The dual scenario is multiple computers providing the same service on
 different IP addresses. The simplest deployment is to scale up on the
 cheap by relying on round-robin DNS. In this case there are multiple
 links, but they are often on the same layer 2 segment, so again not
 multihoming in the sense that I meant. If you scale up further, the first
 thing you do is get proper resilience with a load-balancing router (which
 probably requires the architecturally impure NAT).

There are also lots of commonly-ussed solution points between round-robin DNS
and load-balancing routers where more than one host effectively shares
an IP address. These tricks are often, but not always, associated with
various clustering schemes.

One unfortunate side effect of all this is that applications end up having to
know all sorts of stuff about IP addresses - both internal and external. This
in turn contributes to the renumbering problem in ways that IPv6 support  for
multiple addresses per interfaces cannot address.

 If you require more
 than one point of presence, the next step (beyond layer 2 techniques like
 trunking ethernet vlans between multiple sites) is IP routing tricks, i.e.
 anycast. For serious wide-area services, you are likely to use location-
 and availibity-sensitive DNS to deirect users to the right instance.

 Note that NONE of these techniques use the architecturally-supported idea
 of multihoming. In fact most of them deliberately avoid it because it does
 not work!

 The Internet's idea of multihoming as supported by the architecture is NOT
 what I consider to be multihoming. (The lack of support for my idea of
 multihoming makes Internet connections klunky, fragile, and immobile, but
 we all know that, and should be embarrassed by it.) The Internet's idea of
 multihoming is reasonably precisely encapsulated by RFC 3484. This
 specification breaks existing practice and fails to do what it is designed
 to.

 OK, so what is Internet multihoming? If the DNS resolves a hostname to
 multiple IP addresses then those are assumed to be multiple links to the
 same host.

 Unfortunately there is no way for a client to make an informed decision
 about which IP address to choose. RFC 3484 specifies how to make an
 UNINFORMED decision, or at best, how one could

Re: The internet architecture

2009-01-01 Thread John C Klensin
+1 (or +several).  From my point of view, we are trying to use 
the same concepts of multiple addresses per host and multiple 
hosts per address to handle a whole series of unrelated things, 
perhaps with various sorts of virtualization and clustering at 
one extreme and one address per interface/link at the other. 
We haven't gotten any of them right for today's world and 
attempts at most of them interfere with the others.


 john


--On Thursday, January 01, 2009 8:00 AM -0800 
ned+i...@mauve.mrochek.com wrote:



I've been asked twice now in private email to clarify what I
mean, so this is going to turn into a massive rant about how
the current Internet architecture - as it is deployed, and as
it seems to be developing - has a completely broken idea of
how to address endpoints. The multiple meanings of the word
multihoming relate directly to the multiple points in the
rant.


Tony, I pretty much agree with everything you say here. There
really is a pretty serious disconnect between what we seem to
be able to build and what applications actually need.
...



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2009-01-01 Thread Tony Finch
On Thu, 1 Jan 2009, John Leslie wrote:

I'm not at all clear what support of multihoming Tony is asking
 for...

I'm not clear either, because I don't know what mechanisms could make it
work, especially not mechanisms that are deployable.

But what we need is an addressing architecture that allows us to tell the
difference between a hostname that has multiple addresses because they are
required for application addressing, or because the destination has
multiple points of connection. (I think IPv4 vs IPv6 is a special case of
the latter.) This is another way of looking at the id/loc split.

Then there must be a way for an endpoint to make an informed decision
about which of its links to use (source address) and which of the possible
destination links to use. There also needs to be a way to migrate
connections between the available links either pro-actively for seamless
mobility or re-actively for fail-over.

RFC 3484, of course, is Default Address Selection for IPv6. I
 guess that Tony is referring to Section 10.5

I was thinking of the whole thing, actually, because it specifies an
uninformed (therefore broken) version of what I wrote above.

  OK, so what is Internet multihoming? If the DNS resolves a hostname
  to multiple IP addresses then those are assumed to be multiple links
  to the same host.

That is sometimes true, and often not.

Right. My point is that the above seems to be the architectural model, but
actual practice is almost always different.

  but the Internet architecture says that the dumb network need not
  explain its workings to the intelligent but ignorant hosts.

... which seems about right. Layer 3 is supposed to find an
 interconnection from one network in the Internet to another. There
 seems to be little point in explaining how it does this to the
 endpoints.

The endpoint doesn't need to know how: it needs to know if a link is
working, or even better, how well it is working compared to the other
alternatives.

Round-robin seems mostly unrelated -- it was never guaranteed to be
 particularly good at load-balancing.

True, but it often works well enough in practice and has been widely used
for 15 years. The reason for talking about it is it's an example of a
widespread practice that goes against the architecture. It is not
documented in an RFC and is not supported by the IETF to the extent that
the IETF feels free to break it (in RFC 3484 section 6 rule 9).

  A host has no way to use multiple links to provide redundancy or
  resilience

This would be nice to fix, but it's not clear there's a sufficient
 constituency interested in fixing it.

Almost every bit of portable network-capable consumer electronics has the
hardware to benefit from this fix.

  Given multiple addresses for the same hostname, a client has no way
  to make an informed decision about which is the best to connect to.
  This is why hosts that support IPv6 do not work as well as IPv4-only
  hosts.

I guess I don't follow. Indeed, IPv6 hosts that follow RFC [3484]
 will defeat some attempts at load-balancing,

This isn't about load balancing. One example is that RFC 3484 prefers IPv6
to IPv4 even when IPv6 connectivity relies on sub-optimal tunnels. Another
is section 6 rule 1 says a client should avoid unusable destinations
without specifying any way for a client to find out which destinations are
usable.

  There is no support for multiple instances of the same application
  per host (i.e. virtual hosting) unless the application has its own
  addressing.

I'm not clear what Tony might see as such support.

Ned's message had some examples. I suppose that SRV records go some way to
fixing the problems with well-known port numbers, so no support is an
exaggeration - but we've failed to back-port this fix to older protocols.

  There is no support for distributing an application across multiple
  hosts (or multiple links to the same host) because address selection
  is blind to availability and reachability - whether you consider them
  to be binary or fractional values.

Again, I'm not clear.

This is RFC 3484 section 6 rule 1 again. It doesn't work in practice which
is why the real world uses load-balancing routers or anycast or whatever.

Does Tony have an alternative to suggest?

As I said, it was a rant and not intended to be constructive :-) Far
better minds than mine are working on the problem and I'm following their
work with interest - especially whether the proposed improvements to
addressing and routing help with these application-level problems.

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
LUNDY FASTNET: EAST OR SOUTHEAST 5 TO 7. MODERATE OR ROUGH. MAINLY FAIR, RAIN
AT FIRST IN FASTNET. MODERATE OR GOOD, OCCASIONALLY POOR AT FIRST.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-31 Thread macbroadcast


Am 24.12.2008 um 19:50 schrieb Bryan Ford:


So in effect we've gotten ourselves in a situation where IP  
addresses are too topology-independent to provide good scalability,  
but too topology-dependent to provide real location-independence at  
least for individual devices, because of equally strong forces  
pulling the IP assignment process in both directions at once.  Hence  
the reason we desperately need locator/identity separation: so that  
locators can be assigned topologically so as to make routing  
scalable without having to cater to conflicting concerns about  
stability or location-independence, and so that identifiers can be  
stable and location-independent without having to cater to  
conflicting concerns about routing efficiency.


As far as specific forms these locators or identifiers should  
take, or specific routing protocols for the locator layer, or  
specific resolution or overlay routing protocols for the identity  
layer, I think there are a lot of pretty reasonable options; my  
paper suggested one, but there are others.


Cheers,
Bryan



thanks brian for your great explanation  , something came to my mind  
imediatly,


.i remember these days when i connect to the internet using my  
1 und 1 - 14,4kb modem in the 90th


there was no NAT,

 i connected with a little programm to a specific ip adress,

there was not even DNS involed at that time. for the programm i was  
talking about ;)



So there were no caches und buffers with information about my usage  
exept  on the server where i was connected to.


Just one question because i am reading a lot about all these routing  
ptotocols in the past , is uia / uip  more usefull in sparse or dense  
networks or both ?


bright new 2009

cheers from cologne

Marc



i believe that  Kademlia   [ 1 ] for example and the  
technologies

mentioned in the  linked paper [ 2 ]
would fit the needs and requirements for a future proof internet.


[ 1 ] http://en.wikipedia.org/wiki/Kademlia
[ 2 ] http://pdos.csail.mit.edu/papers/uip:hotnets03.pdf
--


--
Les Enfants Terribles - WWW.LET.DE
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: m...@let.de
jabber :m...@kgraff.net
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-31 Thread Tony Finch
On Tue, 30 Dec 2008, Michael Richardson wrote:
 Tony Finch d...@dotat.at wrote:
 
  The kind of multihoming I am talking about is where you are
  using multiple links for redundncy or resilience - i.e. the
  same reasons for site multihoming.

   I think it says something to the extreme *rarity* of host multihoming
 (for systems that initiate connections) that there is even this confusion.
   You can be multihomed with one physical interface.

NOT in the sense that I am talking about.

In fact the use of the term multihoming to describe multiple IP
addresses on the same link is a serious problem with practical
consequences.

I've been asked twice now in private email to clarify what I mean, so this
is going to turn into a massive rant about how the current Internet
architecture - as it is deployed, and as it seems to be developing - has a
completely broken idea of how to address endpoints. The multiple meanings
of the word multihoming relate directly to the multiple points in the
rant.

 This is rather more common for machines that mostly accept connections
 (web servers), as you can do relatively simple source address based
 policy routing.

In my experience policy routing is not what people use multiple addresses
on the same link for. Multiple addresses on the same link are used for
virtual hosting for application protocols that don't signal
application-level addresses within the application protocol. The canonical
example is HTTP over SSL, but POP and IMAP have the same problem. (People
sometimes hack around this design error in POP and IMAP by embedding the
virtual domain in the username, which is yet another example of why every
application protocol needs its own addressing architecture.) To
re-iterate, one computer providing multiple different services on the same
IP address is NOT multihoming, in the same way that one computer providing
multiple services on different port numbers is not multihoming.

The dual scenario is multiple computers providing the same service on
different IP addresses. The simplest deployment is to scale up on the
cheap by relying on round-robin DNS. In this case there are multiple
links, but they are often on the same layer 2 segment, so again not
multihoming in the sense that I meant. If you scale up further, the first
thing you do is get proper resilience with a load-balancing router (which
probably requires the architecturally impure NAT). If you require more
than one point of presence, the next step (beyond layer 2 techniques like
trunking ethernet vlans between multiple sites) is IP routing tricks, i.e.
anycast. For serious wide-area services, you are likely to use location-
and availibity-sensitive DNS to deirect users to the right instance.

Note that NONE of these techniques use the architecturally-supported idea
of multihoming. In fact most of them deliberately avoid it because it does
not work!

The Internet's idea of multihoming as supported by the architecture is NOT
what I consider to be multihoming. (The lack of support for my idea of
multihoming makes Internet connections klunky, fragile, and immobile, but
we all know that, and should be embarrassed by it.) The Internet's idea of
multihoming is reasonably precisely encapsulated by RFC 3484. This
specification breaks existing practice and fails to do what it is designed
to.

OK, so what is Internet multihoming? If the DNS resolves a hostname to
multiple IP addresses then those are assumed to be multiple links to the
same host.

Unfortunately there is no way for a client to make an informed decision
about which IP address to choose. RFC 3484 specifies how to make an
UNINFORMED decision, or at best, how one could in principle inform the
decision. However in order to be informed, a host needs to be hooked into
the routing infrastructure - but the Internet architecture says that the
dumb network need not explain its workings to the intelligent but
ignorant hosts.

As a result, having multiple addreses for the same hostname on different
links does not work. It never worked before RFC 3484, and though RFC 3484
tries to fix it, it fails because of the lack of routing information. What
is worse is it breaks DNS round robin - which I admit is a hack, but it's
a hack with 15 years of deployment, therefore not to be broken without
warning. Naïve implementations have broken round-robin DNS because RFC
3484 ignores it.

So to summarize:

A host has no way to use multiple links to provide redundancy or
resilience, without injecting a PI route into the DFZ - like the
anycasted root name servers.

Given multiple addresses for the same hostname, a client has no way to
make an informed decision about which is the best to connect to. This is
why hosts that support IPv6 do not work as well as IPv4-only hosts.

The Internet addressing architecture has a built-in idea that there is one
instance of each application per host, and applications are identified by
protocols (port numbers). There is no support for multiple instances

Re: The internet architecture

2008-12-30 Thread Tony Finch
On Mon, 29 Dec 2008, Marshall Eubanks wrote:
 On Dec 29, 2008, at 10:02 PM, Tony Finch wrote:
 On Mon, 29 Dec 2008, Noel Chiappa wrote:

 I have been thinking this for some time too, and it's especially
 true/clear when the multi-homing in question is site multi-homing,
 and not host-multihoming (which is much rarer, is my impression).

 Most networkable consumer electronics ships with at least two network
 interfaces. Host multihoming is only rare because it doesn't work.

 Why do you say it doesn't work ?

The kind of multihoming I am talking about is where you are using multiple
links for redundncy or resilience - i.e. the same reasons for site
multihoming. I did not mean other kinds of multihoming, such as multiple
addresses on the same link (which I don't think should be called
multihoming) nor links which connect to different networks - which
includes VPNs and gateways/routers. (The site-level equivalent of the
latter is private peering.)

The kind of multihoming I mean is the kind that trivially supports
mobility as a degenerate case, since mobility is just repeated failover.

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
ROCKALL: SOUTHEASTERLY 5 OR 6, OCCASIONALLY 7, BUT 4 IN NORTH AT FIRST. SLIGHT
OR MODERATE, OCCASIONALLY ROUGH. RAIN OR SHOWERS. MODERATE OR GOOD.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-29 Thread John C Klensin


--On Sunday, 28 December, 2008 16:22 -0500 John Day
jeanj...@comcast.net wrote:

 Why should an application ever see an IP address?
 
 Applications manipulating IP addresses is like a Java program
 manipulating absolute memory pointers. A recipe for problems,
 but then you already know that.

John,

Let me try to explain, in a slightly different way, what I
believe some others have tried to say.

Suppose we all agree with the above as a principle and even
accept your analogy (agreement isn't nearly that general, but
skip that for the moment).  Now consider an IPv6 host or a
multihomed IPv4 host (as distinct from multihomed IPv4 network).
The host will typically have multiple interfaces, multiple IP
addresses, and, at least as we do things today and without other
changes in the architecture, only one name.   One could change
the latter, but having the typical application know about
multiple interfaces is, in most cases, fully as bad as knowing
about the addresses -- one DNS name per interface is more or
less the same as one DNS name per address.

Now the application has to pick which interface to use in, e.g.,
opening a connection to another system.  Doing that optimally,
or even effectively, requires that it know routing information.
But requiring the application to obtain and process routing
information is worse than whatever you think about its using IP
addresses -- the latter may be just a convenient handle (blob)
to identify what we have historically called an interface, but
having the application process and interpret routing information
is completely novel as far as the applications layer is
concerned (as well as being a layer violation, etc., etc.) and
requires skills and knowledge that application writers rarely
have and still more rarely should need to use.


At least to me, that is the key architectural problem here, not
whatever nasty analogies one can draw about IP addresses.

john



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Rémi Després




John,

To pick a local interface for an outgoing connection isn't the
transport layer, e.g. SCTP, well placed to do the job (or some
intermediate layer function like Shim6)?
Thus, ordinary applications wouldn't need to be concerned.

RD

Actually SCTP is  it to some extent.

John C Klensin  -  le (m/j/a) 12/29/08 1:56 PM:

  
--On Sunday, 28 December, 2008 16:22 -0500 John Day
jeanj...@comcast.net wrote:

  
  
Why should an application ever see an IP address?

Applications manipulating IP addresses is like a Java program
manipulating absolute memory pointers. A recipe for problems,
but then you already know that.

  
  
John,

Let me try to explain, in a slightly different way, what I
believe some others have tried to say.

Suppose we all agree with the above as a principle and even
accept your analogy (agreement isn't nearly that general, but
skip that for the moment).  Now consider an IPv6 host or a
multihomed IPv4 host (as distinct from multihomed IPv4 network).
The host will typically have multiple interfaces, multiple IP
addresses, and, at least as we do things today and without other
changes in the architecture, only one name.   One could change
the latter, but having the typical application know about
multiple interfaces is, in most cases, fully as bad as knowing
about the addresses -- one DNS name per interface is more or
less the same as one DNS name per address.

Now the application has to pick which interface to use in, e.g.,
opening a connection to another system. 

See above

   Doing that optimally,
or even effectively, requires that it know routing information.
But requiring the application to obtain and process routing
information is worse than whatever you think about its using IP
addresses -- the latter may be just a convenient handle ("blob")
to identify what we have historically called an interface, but
having the application process and interpret routing information
is completely novel as far as the applications layer is
concerned (as well as being a layer violation, etc., etc.) and
requires skills and knowledge that application writers rarely
have and still more rarely should need to use.


At least to me, that is the key architectural problem here, not
whatever nasty analogies one can draw about IP addresses.

john



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf

  




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread John Day
No it isn't Transport's job.  Transport has one 
and only one purpose: end-to-end reliability and 
flow control.


Managing the resources of the network is the network layer's job.

Although, these distinctions of Network and 
Transport Layer are . . . shall we say, quaint.


Multihoming is fundamentally a routing problem. 
SCTP tries to claim to solve it by changing the 
definition, an old trick. Sort of like medieval 
medicine's response to a disease it can't cure: 
give you a disease I can cure and hope it works. 
Multihoming has nothing to do with what has 
traditionally been called the Transport Layer.


It is a problem of routing not be able to 
recognize that two points of attachment go to the 
same place.  Portraying it as anything else is 
just deluding yourself.


At 14:22 +0100 2008/12/29, Rémi Després wrote:

John,

To pick a local interface for an outgoing 
connection isn't the transport layer, e.g. SCTP, 
well placed to do the job (or some intermediate 
layer function like Shim6)?

Thus, ordinary applications wouldn't need to be concerned.

RD

Actually SCTP is it to some extent.

John C Klensin - le (m/j/a) 12/29/08 1:56 PM:



--On Sunday, 28 December, 2008 16:22 -0500 John Day
mailto:jeanj...@comcast.netjeanj...@comcast.net wrote:




Why should an application ever see an IP address?

Applications manipulating IP addresses is like a Java program
manipulating absolute memory pointers. A recipe for problems,
but then you already know that.




John,

Let me try to explain, in a slightly different way, what I
believe some others have tried to say.

Suppose we all agree with the above as a principle and even
accept your analogy (agreement isn't nearly that general, but
skip that for the moment).  Now consider an IPv6 host or a
multihomed IPv4 host (as distinct from multihomed IPv4 network).
The host will typically have multiple interfaces, multiple IP
addresses, and, at least as we do things today and without other
changes in the architecture, only one name.   One could change
the latter, but having the typical application know about
multiple interfaces is, in most cases, fully as bad as knowing
about the addresses -- one DNS name per interface is more or
less the same as one DNS name per address.

Now the application has to pick which interface to use in, e.g.,
opening a connection to another system.


See above


 Doing that optimally,
or even effectively, requires that it know routing information.
But requiring the application to obtain and process routing
information is worse than whatever you think about its using IP
addresses -- the latter may be just a convenient handle (blob)
to identify what we have historically called an interface, but
having the application process and interpret routing information
is completely novel as far as the applications layer is
concerned (as well as being a layer violation, etc., etc.) and
requires skills and knowledge that application writers rarely
have and still more rarely should need to use.


At least to me, that is the key architectural problem here, not
whatever nasty analogies one can draw about IP addresses.

john



___
Ietf mailing list
mailto:Ietf@ietf.orgIetf@ietf.org
https://www.ietf.org/mailman/listinfo/ietfhttps://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-29 Thread John Day
Let me get this straight.  You are saying that there are other 
reasons why an application should never see an IP address? And you 
feel that your reason is more important than simply getting level of 
abstractions wrong. So you agree?


Yes, of course.  There are lots of ugly things that can happen.  You 
don't have to go very far to run into why.  The question is why have 
we insisted on not doing it right for so long?


Take care,
John


At 7:56 -0500 2008/12/29, John C Klensin wrote:

--On Sunday, 28 December, 2008 16:22 -0500 John Day
jeanj...@comcast.net wrote:


 Why should an application ever see an IP address?

 Applications manipulating IP addresses is like a Java program
 manipulating absolute memory pointers. A recipe for problems,
 but then you already know that.


John,

Let me try to explain, in a slightly different way, what I
believe some others have tried to say.

Suppose we all agree with the above as a principle and even
accept your analogy (agreement isn't nearly that general, but
skip that for the moment).  Now consider an IPv6 host or a
multihomed IPv4 host (as distinct from multihomed IPv4 network).
The host will typically have multiple interfaces, multiple IP
addresses, and, at least as we do things today and without other
changes in the architecture, only one name.   One could change
the latter, but having the typical application know about
multiple interfaces is, in most cases, fully as bad as knowing
about the addresses -- one DNS name per interface is more or
less the same as one DNS name per address.

Now the application has to pick which interface to use in, e.g.,
opening a connection to another system.  Doing that optimally,
or even effectively, requires that it know routing information.
But requiring the application to obtain and process routing
information is worse than whatever you think about its using IP
addresses -- the latter may be just a convenient handle (blob)
to identify what we have historically called an interface, but
having the application process and interpret routing information
is completely novel as far as the applications layer is
concerned (as well as being a layer violation, etc., etc.) and
requires skills and knowledge that application writers rarely
have and still more rarely should need to use.


At least to me, that is the key architectural problem here, not
whatever nasty analogies one can draw about IP addresses.

john


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread macbroadcast

dear john day

would you please reply just to the list , sorry it was not my  
intention to make such a fuzz when i involved brians opinion about  
u.i.a.


Thanks for the other opinions aswell.

regards

and happy new year


Marc



https://sourceforge.net/project/screenshots.php?group_id=122388ssid=96693 




Am 29.12.2008 um 16:32 schrieb John Day:

Let me get this straight.  You are saying that there are other  
reasons why an application should never see an IP address? And you  
feel that your reason is more important than simply getting level of  
abstractions wrong. So you agree?


Yes, of course.  There are lots of ugly things that can happen.  You  
don't have to go very far to run into why.  The question is why have  
we insisted on not doing it right for so long?


Take care,
John


At 7:56 -0500 2008/12/29, John C Klensin wrote:

--On Sunday, 28 December, 2008 16:22 -0500 John Day
jeanj...@comcast.net wrote:


Why should an application ever see an IP address?

Applications manipulating IP addresses is like a Java program
manipulating absolute memory pointers. A recipe for problems,
but then you already know that.


John,

Let me try to explain, in a slightly different way, what I
believe some others have tried to say.

Suppose we all agree with the above as a principle and even
accept your analogy (agreement isn't nearly that general, but
skip that for the moment).  Now consider an IPv6 host or a
multihomed IPv4 host (as distinct from multihomed IPv4 network).
The host will typically have multiple interfaces, multiple IP
addresses, and, at least as we do things today and without other
changes in the architecture, only one name.   One could change
the latter, but having the typical application know about
multiple interfaces is, in most cases, fully as bad as knowing
about the addresses -- one DNS name per interface is more or
less the same as one DNS name per address.

Now the application has to pick which interface to use in, e.g.,
opening a connection to another system.  Doing that optimally,
or even effectively, requires that it know routing information.
But requiring the application to obtain and process routing
information is worse than whatever you think about its using IP
addresses -- the latter may be just a convenient handle (blob)
to identify what we have historically called an interface, but
having the application process and interpret routing information
is completely novel as far as the applications layer is
concerned (as well as being a layer violation, etc., etc.) and
requires skills and knowledge that application writers rarely
have and still more rarely should need to use.


At least to me, that is the key architectural problem here, not
whatever nasty analogies one can draw about IP addresses.

   john




--
Les Enfants Terribles - WWW.LET.DE
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: m...@let.de
jabber :m...@kgraff.net
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Rémi Després
Title: Re: The internet architecture




John Day  -  le (m/j/a) 12/29/08 4:24 PM:

  
  
  No it isn't Transport's job.  Transport has one and only one
purpose: end-to-end reliability and flow control.
  
  
  "Managing" the resources of the network is the network
layer's job.

Reliably... and also efficiently.
To transmit as fast as possible, including with load sharing among
several parallel paths, the flow control function (i.e. the transport
layer, right?) has, in my understanding, to know how many address
couples
it uses.

Whether the transport layer can delegate some of its flow control
function to an intermediate layer is IMO a terminology question.


  
  
  Although, these distinctions of Network and Transport Layer are
.
. . shall we say, quaint.

Yes, indeed.

  
  
  Multihoming is fundamentally a routing problem.  SCTP tries
to claim to solve it by changing the definition, an old trick. 

I am not sure what the two definitions are.
Being more specific would be helpful.

  ... Multihoming has
nothing to do with what has traditionally been called the
"Transport Layer."
  
  
  It is a problem of routing not be able to recognize that two
points of attachment go to the same place. ...

In my understanding, knowing that two locators are those of  a common
destination is  the normal result from getting these locators by
translation of an identifier, e.g. a domain name.

RD

  
  
  At 14:22 +0100 2008/12/29, Rémi Després wrote:
  John,

To pick a local interface for an outgoing connection isn't the
transport layer, e.g. SCTP, well placed to do the job (or some
intermediate layer function like Shim6)?
Thus, ordinary applications wouldn't need to be concerned.

RD

  





___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Noel Chiappa
 From: John Day jeanj...@comcast.net

 Multihoming is fundamentally a routing problem.

I have been thinking this for some time too, and it's especially true/clear
when the multi-homing in question is site multi-homing, and not
host-multihoming (which is much rarer, is my impression). You clearly have
two alternative paths to the destination, and have to pick.

Which raises the question of 'why not have the routing do it', and that's a
very valid question. In a routing architecture which had better - read
non-manually configured - _scopes_ for routing information, and more
automatic aggregation (or hiding, to use the more general concept), I think I
would agree that probably it should be hidden in the routing. (The probably
is because I'd have to see the actual details.)

However, we have to 'go to war with the army we have', and the current routing
architecture (which includes the _functional interface_ with the _rest_ of the
architecture, not just how it's arranged internally - and the former is
basically impossible to change) makes it impossible to do that.

 It is a problem of routing not be able to recognize that two points of
 attachment go to the same place. Portraying it as anything else is just
 deluding yourself.

I would agree with this, except I defer to the 'get down off an elephant'
principle. If both points of attachment are bound to a single transport level
entity, then it ought to be relatively easy, and not involve the routing at
all, to detect that both points of attachment lead to the same entity.

Noel
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread John Leslie
John Day jeanj...@comcast.net wrote:
 At 14:22 +0100 2008/12/29, R?mi Despr?s wrote:

 To pick a local interface for an outgoing connection[,]
 isn't the transport layer, e.g. SCTP, well placed to do the job
 (or some intermediate layer function like Shim6)?
 
 No it isn't Transport's job.  Transport has one and only one purpose:
 end-to-end reliability and flow control.

   I must disagree with John Day.

   I accept reliability and flow control as the transport layer's
primary function, but denying it access to multiple interfaces cripples
its ability to perform those functions in a mobility environment.

   And, IMHO, from an architectural viewpoint, multi-homing and mobility
are the same problem.

 Managing the resources of the network is the network layer's job.

   That's only partly true, and completely irrelevant.

 Although, these distinctions of Network and Transport Layer are . . .
 shall we say, quaint.

   True, we can't seem to agree to a distinction and stick with it,
but quaint is hardly the right word -- it's more like deja-vu all
over again.

 Multihoming is fundamentally a routing problem. 

   Absolutely not.

   Routing is a function of discovering connectivity and propagating
information about routes to routers that may want to use them.

   Multihoming is a funtion of maintaining alternate paths in order
to avoid interruptions of connectivity when a primary path hiccups or
becomes congested. (Just like mobility...)

   Multihoming _should_not_ wait for connectivity to fail altogether;
least of all should it wait for connectivity failure to propagate
through an internet.

   We have pretty good routing algorithms for _stable_ networks. We
have to kludge those algorithms to work _at_all_ in unstable networks.
Mobility by its nature _cannot_ be stable. (Multi-homing is most
often done as protection against occasional instability.)

 SCTP tries to claim to solve it by changing the definition, an old
 trick.

   Unfair! SCTP was designed for a specific function: it's quite honest
about the design choices. Don't blame its design for other things folks
are trying to use it for.

 Multihoming has nothing to do with what has traditionally been called
 the Transport Layer.

   If so, perhaps it's time to refine those definitions of Transport
Layer.

 It is a problem of routing not be able to recognize that two points
 of attachment go to the same place.

   Hardly!

   Routing finds paths between nodes using links. These nodes
_are_ points of attachment, not computers (whatever those may be).
Routing _must_ deal in topology, not physical proximity.

 Portraying it as anything else is just deluding yourself.

   Again, hardly!

   We have been punting entirely too much to routing for decades.
There are other perfectly valid ways to divide the problem. And IMHO,
any way that makes the realm of routing reside in a stable space
is a _better_ paradigm.

--
John Leslie j...@jlc.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread John Day

At 17:04 +0100 2008/12/29, Rémi Després wrote:

John Day - le (m/j/a) 12/29/08 4:24 PM:


Re: The internet architecture
No it isn't Transport's job. Transport has one 
and only one purpose: end-to-end reliability 
and flow control.


Managing the resources of the network is the network layer's job.


Reliably... and also efficiently.


Definitely.

To transmit as fast as possible, including with 
load sharing among several parallel paths, the 
flow control function (i.e. the transport layer, 
right?) has, in my understanding, to know how 
many address couples it uses.


Strictly speaking, no, I would disagree that 
these are transport functions. Transport has no 
need to know any address couples.  That is why it 
has a connection id, i.e. concatenated port-ids. 
But then as I said, there really is no distinct 
transport or network layer.


This is where things get involved. because really 
the boundary between network and transport is a 
false boundary.  The last remnant of 
beads-on-a-string thinking.  One sign of this 
is the need for a protocol-id field in IP.  If we 
hadn't gotten into a battle with the PTTs over 
whether or not we needed a Transport Layer at 
all, I think we would have seen this a lot 
sooner.  But the battle caused lines to be drawn 
and forces to dig in.  ;-)




Whether the transport layer can delegate some of 
its flow control function to an intermediate 
layer is IMO a terminology question.


Somewhat.  There is a fair amount of science on 
this topic under the heading of process control. 
The only purpose flow control should have in a 
transport protocol is to keep the sender from 
overrunning the receiver. full stop  Getting 
the terminology right can go along way to solving 
the problem.




Although, these distinctions of Network and 
Transport Layer are . . . shall we say, quaint.



Yes, indeed.



Multihoming is fundamentally a routing problem. 
SCTP tries to claim to solve it by changing the 
definition, an old trick.



I am not sure what the two definitions are.
Being more specific would be helpful.


See below, but you did.  ;-)



... Multihoming has nothing to do with what has 
traditionally been called the Transport Layer.


It is a problem of routing not be able to 
recognize that two points of attachment go to 
the same place. ...


In my understanding, knowing that two locators 
are those of a common destination is the normal 
result from getting these locators by 
translation of an identifier, e.g. a domain name.


I don't believe the routing algorithms translate 
many domain names.  But you are right that a 
domain name is a synonym for a set of IP 
addresses.


Take care,
John



RD



At 14:22 +0100 2008/12/29, Rémi Després wrote:


John,

To pick a local interface for an outgoing 
connection isn't the transport layer, e.g. 
SCTP, well placed to do the job (or some 
intermediate layer function like Shim6)?

Thus, ordinary applications wouldn't need to be concerned.

RD


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-29 Thread michael.dillon
 Yes, of course.  There are lots of ugly things that can 
 happen.  You don't have to go very far to run into why.  The 
 question is why have we insisted on not doing it right for so long?

Perhaps because others were working on the problems of application
communication without IP addresses. AMQP is one such http://amqp.org/
as are all the other protocols that call themselves message queuing.
XMPP might fall into the same category (RFC refs here
http://xmpp.org/rfcs/)
but I'm not familiar enough with the details to be sure that it meets
the criteria for unbroken end-to-end communication through IP addressing
change events.

In many ways, this is all a problem of language and history. At the
time many RFCs were written, the world of networking was very different
and very undeveloped. Getting the bareboned basics of networking right
was very, very important. But it was less important to make things
easy for application developers or application users because the very 
fact of a network delivered such great benefits over what came before,
that other problems seemed unworthy of attention. As all of this recedes
into history, the language that we use to speak about technology has
changed
so that terminology which was historically concise, is now a bit vague
and can be interpreted in more than one way. That's because lots of
other
people now use the same language and apply it to their designs,
architectures,
etc.

I think the only way to resolve the question is to publish an Internet 
architecture description in today's context, that explains what the
Internet architecture is, what it isn't, and why it has succeeded in
being what it is. At the same time, one could point to other work
outside
the IETF that has worked on other problems which are related and
intertwined
with Internet architecture, yet separate from it. And if AMQP really
meets
all the requirements of an IP address free protocol, perhaps it should
be
taken under the IETF's wing.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread John Day

At 11:34 -0500 2008/12/29, John Leslie wrote:

John Day jeanj...@comcast.net wrote:

 At 14:22 +0100 2008/12/29, R?mi Despr?s wrote:


 To pick a local interface for an outgoing connection[,]
 isn't the transport layer, e.g. SCTP, well placed to do the job
 (or some intermediate layer function like Shim6)?


 No it isn't Transport's job.  Transport has one and only one purpose:
 end-to-end reliability and flow control.


   I must disagree with John Day.

   I accept reliability and flow control as the transport layer's
primary function, but denying it access to multiple interfaces cripples
its ability to perform those functions in a mobility environment.


Transport has nothing to do with mobility.



   And, IMHO, from an architectural viewpoint, multi-homing and mobility
are the same problem.


You are absolutely right.  Mobility is nothing more than dynamic multihoming.

Trying to guess where you are coming from, don't blame Transport for 
TCP's faults.





 Managing the resources of the network is the network layer's job.


   That's only partly true, and completely irrelevant.


Highly relevant.  Since only the network layer can manage the network layer.




 Although, these distinctions of Network and Transport Layer are . . .
 shall we say, quaint.


   True, we can't seem to agree to a distinction and stick with it,
but quaint is hardly the right word -- it's more like deja-vu all
over again.


No, quaint. As in isn't it quaint how people saw things in olden times


  Multihoming is fundamentally a routing problem.

   Absolutely not.

   Routing is a function of discovering connectivity and propagating
information about routes to routers that may want to use them.


Boy, if discovering routes and propagating routing information isn't 
a routing  problem, then what is?




   Multihoming is a funtion of maintaining alternate paths in order
to avoid interruptions of connectivity when a primary path hiccups or
becomes congested. (Just like mobility...)


Right.  Routing.


   Multihoming _should_not_ wait for connectivity to fail altogether;
least of all should it wait for connectivity failure to propagate
through an internet.


That is a policy decision that routing is free to make and does.



   We have pretty good routing algorithms for _stable_ networks. We
have to kludge those algorithms to work _at_all_ in unstable networks.
Mobility by its nature _cannot_ be stable. (Multi-homing is most
often done as protection against occasional instability.)


Of course it can.  You just aren't asking the right question.


  SCTP tries to claim to solve it by changing the definition, an old

 trick.


   Unfair! SCTP was designed for a specific function: it's quite honest
about the design choices. Don't blame its design for other things folks
are trying to use it for.


Only blame ti for things its spec claims it does.


  Multihoming has nothing to do with what has traditionally been called

 the Transport Layer.


   If so, perhaps it's time to refine those definitions of Transport
Layer.


Eliminate the transport layer is probably the best thing to do.


  It is a problem of routing not be able to recognize that two points

 of attachment go to the same place.


   Hardly!

   Routing finds paths between nodes using links. These nodes
_are_ points of attachment, not computers (whatever those may be).
Routing _must_ deal in topology, not physical proximity.


Nodes are not points of attachments.  Well, not to the same routing algorithm.




 Portraying it as anything else is just deluding yourself.


   Again, hardly!

   We have been punting entirely too much to routing for decades.
There are other perfectly valid ways to divide the problem. And IMHO,
any way that makes the realm of routing reside in a stable space
is a _better_ paradigm.


O, and BTW, did I say we don't solve multihoming in routers either?
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Randy Presuhn
Hi -

 From: John Day jeanj...@comcast.net
 To: Rémi Després remi.desp...@free.fr; John C Klensin 
 john-i...@jck.com
 Cc: Bryan Ford baf...@mpi-sws.org; ietf@ietf.org
 Sent: Monday, December 29, 2008 7:24 AM
 Subject: Re: The internet architecture

 No it isn't Transport's job.  Transport has one
 and only one purpose: end-to-end reliability and
 flow control.

 Managing the resources of the network is the network layer's job.

 Although, these distinctions of Network and
 Transport Layer are . . . shall we say, quaint.

 Multihoming is fundamentally a routing problem.

Depends on what one is routing *to* - application,
host, or attachment point.

...
 It is a problem of routing not be able to
 recognize that two points of attachment go to the
 same place.  Portraying it as anything else is
 just deluding yourself.

The multiple-entrance, multiple exit problem could also
be attacked with a variation on good ol' multi-link
procedure, but done just below (or as a sublayer of)
transport, but above (connectionless) network, and
not restrict it to datalink.

Randy


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Brian E Carpenter
Noel,

On 2008-12-30 05:28, Noel Chiappa wrote:
  From: John Day jeanj...@comcast.net
 
  Multihoming is fundamentally a routing problem.

(snip)

  It is a problem of routing not be able to recognize that two points of
  attachment go to the same place. Portraying it as anything else is just
  deluding yourself.
 
 I would agree with this, except I defer to the 'get down off an elephant'
 principle. If both points of attachment are bound to a single transport level
 entity, then it ought to be relatively easy, and not involve the routing at
 all, to detect that both points of attachment lead to the same entity.

It ought to be, but unfortunately we have confounded the transport entity
namespace with the network entity namespace with the point of attachment
namespace.

Brian
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread John Leslie
John Day jeanj...@comcast.net wrote:
 At 11:34 -0500 2008/12/29, John Leslie wrote:
 
 I accept reliability and flow control as the transport layer's
 primary function, but denying it access to multiple interfaces cripples
 its ability to perform those functions in a mobility environment.
 
 Transport has nothing to do with mobility.

   To whatever extent we accept the existence of transport for
stationary computers, we must allow it for mobile computers.

   I think we're arguing semantics here, without agreeing on what we
mean by transport. I'd much rather continue that argument on a different
mailing-list.

 Trying to guess where you are coming from, don't blame Transport for 
 TCP's faults.

   TCP has many faults, only some of which are related to its transport
layer functions.

   Where I'm coming from, BTW, is a long series of primal screams when
I hear someone proposing how to deal with transport issues in a
routing protocol.

 Although, these distinctions of Network and Transport Layer are . . .
 shall we say, quaint.

 True, we can't seem to agree to a distinction and stick with it,
 but quaint is hardly the right word -- it's more like deja-vu all
 over again.
 
 No, quaint. As in isn't it quaint how people saw things in olden times

   But, IMHO, we _never_have_ seen network-vs-transport in a consistent
way.

 Multihoming is fundamentally a routing problem.

 Absolutely not.

 Routing is a function of discovering connectivity and propagating
 information about routes to routers that may want to use them.
 
 Boy, if discovering routes and propagating routing information isn't 
 a routing  problem, then what is?

   Multihoming (or mobility, if you prefer) isn't about discovering
alternate connectivity: it's about verifying that something arriving
via a different path is in fact controlled by the same process as
something we're already communicating with. That is _not_ a routing
issue -- routing merely attempts to deliver packets.

 Multihoming is a funtion of maintaining alternate paths in order
 to avoid interruptions of connectivity when a primary path hiccups or
 becomes congested. (Just like mobility...)
 
 Right.  Routing.

   Even if we were to accept a flooding paradigm for routing (and send
every packet out every interface), multihoming would still require
sorting out which packets to trust.

 Multihoming _should_not_ wait for connectivity to fail altogether;
 least of all should it wait for connectivity failure to propagate
 through an internet.
 
 That is a policy decision that routing is free to make and does.

   Although routing _does_ make such decisions, architecturally that's
noise, not a native network layer function. Network-layer is about
getting packets through a network of networks _at_all_ -- not about
managing everyone's Quality-of-Service wishlists. Efficiency issues
in routing are there to keep routing from breaking, for example by
routing packets in a loop until TTL is exhausted.

 We have pretty good routing algorithms for _stable_ networks. We
 have to kludge those algorithms to work _at_all_ in unstable networks.
 Mobility by its nature _cannot_ be stable. (Multi-homing is most
 often done as protection against occasional instability.)
 
 Of course it can.  You just aren't asking the right question.

   Feel free to correct me -- what question should I be asking?

 Multihoming has nothing to do with what has traditionally been called
 the Transport Layer.

 If so, perhaps it's time to refine those definitions of Transport
 Layer.
 
 Eliminate the transport layer is probably the best thing to do.

   I'd be happy to talk about that -- on the t...@ietf.org list.

 Routing finds paths between nodes using links. These nodes
 _are_ points of attachment, not computers (whatever those may be).
 Routing _must_ deal in topology, not physical proximity.
 
 Nodes are not points of attachments. Well, not to the same routing 
 algorithm.

   To a routing algorithm, nodes are nodes, period. True.

   But in the Internet, IP addresses _are_ points of attachment, and
network-layer routing finds paths between points of attachment.

 We have been punting entirely too much to routing for decades.
 There are other perfectly valid ways to divide the problem. And IMHO,
 any way that makes the realm of routing reside in a stable space
 is a _better_ paradigm.
 
 O, and BTW, did I say we don't solve multihoming in routers either?

   Not this week... ;^)

--
John Leslie j...@jlc.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-29 Thread Christian Huitema
  I would agree with this, except I defer to the 'get down off an
 elephant'
  principle. If both points of attachment are bound to a single
 transport level
  entity, then it ought to be relatively easy, and not involve the
 routing at
  all, to detect that both points of attachment lead to the same entity.

 It ought to be, but unfortunately we have confounded the transport
 entity
 namespace with the network entity namespace with the point of attachment
 namespace.

Not really. Many applications are actively managing their network connections, 
and for a good reason. A network connection is not an interface to an abstract 
unified network. On the contrary, a network interface implements a contract 
with a specific network.

Take the example of a laptop with Bluetooth, Wi-Fi, WIMAX and 3G. Four 
connections, with four different providers. Wi-Fi, through the access point, 
communicates with a broadband provider, maybe a cable company such as Comcast. 
WIMAX communicates with the Internet through a wireless provider, maybe 
Clearwire. 3G also offer some kind of Internet access, possibly through a 
different provider such as ATT. And Bluetooth typically does not communicate 
with the Internet, but provides access to some local devices. You will note 
that the different providers have different rules for managing traffic. Behind 
each interface lies a different contract, a different type of service.

Is it possible to manage all these interfaces as if they were a single abstract 
point of attachment? Maybe. That would require a common management system. Can 
that management system be part of the network? Frankly, I doubt it. The 
management system will have to make arbitration between different services, 
deciding which part of the traffic goes which way. These decisions have 
economic consequences. Do you really believe that different providers will 
delegate these economic decisions to some kind of cooperative distributed 
system? If that was realistic, we would have network wide QOS by now...

On the other hand, the end system is in a very good position to implement these 
arbitrations. It has direct access to the requirement of the applications, and 
to the terms of each specific interface contract. Moreover, it can actually 
measure the quality of the offered service, allowing for informed real time 
decisions.

We can debate which part of the end system should implement these decisions, 
whether it should be the application or a common transport layer. I can see 
arguments either way. But the business reality essentially precludes an 
implementation in the network layer. Even if we did reengineer the network 
layer to implement a clean separation between identifiers and locators, the 
business reality will still be there. We will end up with separate identifiers 
for the different provider contracts, and applications, or the transport 
layers, will have to arbitrage between these contracts.

-- Christian Huitema



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture - pointer to RRG discussion

2008-12-29 Thread Robin Whittle
Short version:   Please contribute to the IRTF Routing Research Group
 discussions on how to devise a new Internet
 architecture (including by adding to the current
 architecture) to solve the routing scaling problem.

Hi John and All,

This discussion includes debate about whether in a new Internet
architecture, the responsibility for handling network-centric
problems should in future be handled by hosts.  These network-centric
 real-time events, problems and responsibilities include:

 1 - Multihoming.

 2 - Traffic engineering.

 3 - Portability of address space - or some other, yet to be
 invented, approach which has the same effect of making it easy
 to choose another ISP without the disruption, cost etc.

Please take a look at the current discussions in the IRTF Routing
Research Group.  The RRG has been charged with the responsibility of
recommending to the IESG what sort of architectural changes should be
made to the Internet (really, the IPv4 Internet and the IPv6
Internet) to solve the routing scaling problem.  The deadline is
March 2009.

RRG mailing list archives, wiki and charter:

 http://www.irtf.org/pipermail/rrg/
 http://trac.tools.ietf.org/group/irtf/trac/wiki/RoutingResearchGroup
 http://www.irtf.org/charter?gtype=rggroup=rrg

The RAWS workshop (Amsterdam October 2006):

 http://tools.ietf.org/html/rfc4984
 http://www.iab.org/about/workshops/routingandaddressing/


I wrote a critique of any solution which pushes new responsibilities
onto hosts which concern things which occur in the network:

 Fundamental objections to a host-based scalable routing solution
 http://www.irtf.org/pipermail/rrg/2008-November/000271.html

Bill Herrin has a page which attempts to list the various approaches
to solving the routing scaling problem:

 http://bill.herrin.us/network/rrgarchitectures.html

I think the problem definition there is biased towards the notion
that the solution is to have hosts take on new functions, including
with changes to stacks, APIs and applications.  I wrote some
additional text which I think provides a more complete problem
description:

 http://www.irtf.org/pipermail/rrg/2008-December/000525.html

Also of interest is a recent paper contrasting network centric
core-edge separation schemes with host-centric elimination schemes:

  Towards a Future Internet Architecture: Arguments for Separating
  Edges from Transit Core

  Dan Jen (UCLA), Lixia Zhang (UCLA), Lan Wang (University of
  Memphis), Beichuan Zhang (University of Arizona)

  (HotNets-VII) Calgary, Alberta, Canada October 6-7, 2008
  http://conferences.sigcomm.org/hotnets/2008/papers/18.pdf

Bill is currently calling for RRG members to form strong consensus on
rejecting one or more solution strategies.  (Msg 000554.)  The types
of strategy are (with my descriptions for A, B and C):

Strategy A:

  Network-based core-edge separation schemes, including LISP, APT
  Ivip, TRRP and Six-One Router.  (See the RRG wiki page for links.)

Strategy B:

  Host-centric elimination schemes.  Elimination means all end-user
  network addresses are subsets of ISP prefixes.  Only ISP prefixes
  are advertised in the interdomain routing system.

  See Brian Carpenter's message on how this is unworkable for IPv4
  and very close to the plan of record for IPv6 - except that
  IPv6 doesn't provide multihoming (session survival when the
  currently used ISP-provided address becomes unusable).

   http://www.irtf.org/pipermail/rrg/2008-December/000577.html

Strategy C:

  New interdomain routing system arrangements, including for
  instance: geographical aggregation or compact routing.
  (A critique of compact routing: http://arxiv.org/abs/0708.2309.)

Strategy D:

  Use plain old BGP for the RIB. Algorithmically compress the FIB in
  each router.

Strategy E:

  Make no routing architecture changes. Instead, create a billing
  system through which the folks running core routers are paid by the
  folks announcing each prefix to carry those prefixes. Let economics
  suppress growth to a survivable level.

Strategy F:

  Do nothing. (RFC 1887 § 4.4.1)


My message calling for strong consensus in rejecting Strategies B, C,
D, E and F:

  http://www.irtf.org/pipermail/rrg/2008-December/000565.html



  - Robinhttp://www.firstpr.com.au/ip/ivip/
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Brian E Carpenter
Hi Christian,

On 2008-12-30 11:55, Christian Huitema wrote:
 I would agree with this, except I defer to the 'get down off an
 elephant'
 principle. If both points of attachment are bound to a single
 transport level
 entity, then it ought to be relatively easy, and not involve the
 routing at
 all, to detect that both points of attachment lead to the same entity.
 It ought to be, but unfortunately we have confounded the transport
 entity
 namespace with the network entity namespace with the point of attachment
 namespace.
 
 Not really. Many applications are actively managing their network 
 connections, and for a good reason. A network connection is not an interface 
 to an abstract unified network. On the contrary, a network interface 
 implements a contract with a specific network.

It seems to me that you're agreeing with me. It's exactly because the three
namespaces I mentioned are mashed together by TCP/IP that applications have
to do what you describe, rather than just saying open a connection to
Christian's laptop.

Brian

 Take the example of a laptop with Bluetooth, Wi-Fi, WIMAX and 3G. Four 
 connections, with four different providers. Wi-Fi, through the access point, 
 communicates with a broadband provider, maybe a cable company such as 
 Comcast. WIMAX communicates with the Internet through a wireless provider, 
 maybe Clearwire. 3G also offer some kind of Internet access, possibly through 
 a different provider such as ATT. And Bluetooth typically does not 
 communicate with the Internet, but provides access to some local devices. You 
 will note that the different providers have different rules for managing 
 traffic. Behind each interface lies a different contract, a different type of 
 service.
 
 Is it possible to manage all these interfaces as if they were a single 
 abstract point of attachment? Maybe. That would require a common management 
 system. Can that management system be part of the network? Frankly, I doubt 
 it. The management system will have to make arbitration between different 
 services, deciding which part of the traffic goes which way. These decisions 
 have economic consequences. Do you really believe that different providers 
 will delegate these economic decisions to some kind of cooperative 
 distributed system? If that was realistic, we would have network wide QOS by 
 now...
 
 On the other hand, the end system is in a very good position to implement 
 these arbitrations. It has direct access to the requirement of the 
 applications, and to the terms of each specific interface contract. Moreover, 
 it can actually measure the quality of the offered service, allowing for 
 informed real time decisions.
 
 We can debate which part of the end system should implement these decisions, 
 whether it should be the application or a common transport layer. I can see 
 arguments either way. But the business reality essentially precludes an 
 implementation in the network layer. Even if we did reengineer the network 
 layer to implement a clean separation between identifiers and locators, the 
 business reality will still be there. We will end up with separate 
 identifiers for the different provider contracts, and applications, or the 
 transport layers, will have to arbitrage between these contracts.
 
 -- Christian Huitema
 
 
 
 
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-29 Thread Christian Huitema
  It ought to be, but unfortunately we have confounded the transport
  entity
  namespace with the network entity namespace with the point of
 attachment
  namespace.
 
  Not really. Many applications are actively managing their network
 connections, and for a good reason. A network connection is not an
 interface to an abstract unified network. On the contrary, a network
 interface implements a contract with a specific network.

 It seems to me that you're agreeing with me. It's exactly because the
 three
 namespaces I mentioned are mashed together by TCP/IP that applications
 have
 to do what you describe, rather than just saying open a connection to
 Christian's laptop.

If Christian's laptop is the transport name space, and if the network 
entity namespace use different network entity names to designate the various 
network contracts, then, yes, we probably agree. Although I am not sure that 
we should place too much emphasis on the name of physical entities like 
Christian's laptop. What if the application process migrates from my laptop 
to my desktop?

-- Christian Huitema



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Tony Finch
On Mon, 29 Dec 2008, Noel Chiappa wrote:

 I have been thinking this for some time too, and it's especially
 true/clear when the multi-homing in question is site multi-homing, and
 not host-multihoming (which is much rarer, is my impression).

Most networkable consumer electronics ships with at least two network
interfaces. Host multihoming is only rare because it doesn't work.

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
WIGHT PORTLAND: SOUTHEAST BACKING EAST 3 OR 4, OCCASIONALLY 5. SLIGHT,
OCCASIONALLY MODERATE IN PORTLAND. MAINLY FAIR. MODERATE OR GOOD, OCCASIONALLY
POOR.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-29 Thread Marshall Eubanks


On Dec 29, 2008, at 10:02 PM, Tony Finch wrote:


On Mon, 29 Dec 2008, Noel Chiappa wrote:


I have been thinking this for some time too, and it's especially
true/clear when the multi-homing in question is site multi-homing,  
and

not host-multihoming (which is much rarer, is my impression).


Most networkable consumer electronics ships with at least two network
interfaces. Host multihoming is only rare because it doesn't work.


Why do you say it doesn't work ? I have found host multihoming to be  
very useful at
times, especially if the local network uses several address blocks for  
whatever reason. (Note that of course

there is still a router routing this.)

And, if you use a software based router (which many do) that is  
strictly speaking host multihoming.


Regards
Marshall




Tony.
--
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
WIGHT PORTLAND: SOUTHEAST BACKING EAST 3 OR 4, OCCASIONALLY 5. SLIGHT,
OCCASIONALLY MODERATE IN PORTLAND. MAINLY FAIR. MODERATE OR GOOD,  
OCCASIONALLY

POOR.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-28 Thread Hallam-Baker, Phillip
It depends on what level you are looking at the problem from

In my opinion, application layer systems should not make assumptions that have 
functional (as opposed to performance) implications on the inner semantics of 
IP addresses. From the functionality point of view an IP address should be 
considered by the application to be no more than an opaque identifier.

The reason for that is precisely to allow the routing layer to make 
architectural decisions that do apply semantics to the address and which change 
over periods of time that are relevant to routing layer deployment cycles 
(there are plenty of pre-1995 Internet hosts still in service, I will wager a 
rather smaller percentage of backbone routers from 1995 are still in service 
:-).

Which is why I want to see ad-hoc semantics that applications attempt to apply 
to IP addresses being replaced by DNS level (ie reverse DNS) facilities that 
achieve the same effect in a fashion that does not result in applications 
breaking if those assumptions are broken.


On the geographic nature of IP addresses, clearly some level of aggregation is 
essential but it is equally clear that 100% clean aggregation is never going to 
be achievable either. The longer a block is in service the more it gets 'bashed 
about'. Entropy increases.

At the moment the Internet architecture has a built in assumption that the 
system is going to grow. And that keeps the chaos factor in check because the 
new blocks are a significant proportion of the whole and have nice regular 
assignments at issue. But what when the system stops growing? How do we keep 
the chaos to an acceptable fraction?

This leads me to consider an IP address block assignment as being an inherently 
term limited affair, with the sole exception being root DNS where perpetual 
assignments are going to be necessary. The terms need to be long, years, 
probably decades at minimum. But there needs to be a built in assumption that 
over time there will be a 'recycling' of broken down, atomized address blocks 
into larger clumps that aggregate nicely. Which in turn is only possible if 
nobody (apart from core DNS) cares about their IP address having a specific 
value.



-Original Message-
From: ietf-boun...@ietf.org on behalf of Bryan Ford
Sent: Wed 12/24/2008 1:50 PM
To: macbroadcast
Cc: ietf@ietf.org
Subject: Re: The internet architecture
 
On Dec 22, 2008, at 10:51 PM, macbroadcast wrote:
 IP does not presume hierarchical addresses and worked quite well  
 without it for nearly 20 years.
 IP addresses are topologically independent. Although since CIDR,  
 there has been an attempt to make them align with aspects of the  
 graph.
 Ford's paper does not really get to the fundamentals of the problem.

 I would suggest that deeper thought is required.

 would like to know bryans  opinion

I think I missed some intermediate messages in this discussion thread,  
but I'll try. :)

IP addresses are just an address format (two, actually, one for IPv4  
and another for IPv6); their usefulness and effectiveness depends on  
exactly how they are assigned and used.  CIDR prescribes a way to  
assign and use IP addresses that in theory facilitates aggregation of  
route table entries to make the network scalable, _IF_ those addresses  
are assigned in a hierarchical fashion that directly corresponds to  
the network's topology, which must also be strictly hierarchical in  
order for that aggregation to be possible.  That is, if an edge  
network has only one upstream provider and uses in its network only IP  
addresses handed out from that provider's block, then nobody else in  
the Internet needs to have a routing table entry for that particular  
edge network; only for the provider.  But that whole model breaks down  
as soon as that edge network wants (god forbid!) a bit of reliability  
by having two redundant links to two different upstream providers -  
i.e., the multihoming problem, and hence all the concern over the  
fact that BGP routing tables are ballooning out of control because  
_everybody_ wants to be multihomed and thus wants their own public,  
non-aggregable IP address range, thus completely defeating the  
scalability goals of CIDR.

For some nice theoretical and practical analysis indicating that any  
hierarchical CIDR-like addressing scheme is fundamentally a poor match  
to a well-connected network topology like that of the Internet, see  
Krioukov et al., On Compact Routing for the Internet, CCR '07.  They  
also cast some pretty dark clouds over some alternative schemes as  
well, but that's another story. :)

But to get back to the original issue, CIDR-based IP addressing isn't  
scalable unless the network topology is hierarchical and address  
assignment is done according to network topology: i.e., IP addresses  
MUST be dependent on topology in order for CIDR-based addressing to  
scale.  But in practice, at least up to this point, other concerns  
have substantially trumped

Re: The internet architecture

2008-12-28 Thread TSG

Hallam-Baker, Phillip wrote:


It depends on what level you are looking at the problem from

In my opinion, application layer systems should not make assumptions 
that have functional (as opposed to performance) implications on the 
inner semantics of IP addresses. From the functionality point of view 
an IP address should be considered by the application to be no more 
than an opaque identifier.


The reason for that is precisely to allow the routing layer to make 
architectural decisions that do apply semantics to the address and 
which change over periods of time that are relevant to routing layer 
deployment cycles (there are plenty of pre-1995 Internet hosts still 
in service, I will wager a rather smaller percentage of backbone 
routers from 1995 are still in service :-).


Which is why I want to see ad-hoc semantics that applications attempt 
to apply to IP addresses being replaced by DNS level (ie reverse DNS) 
facilities that achieve the same effect in a fashion that does not 
result in applications breaking if those assumptions are broken.



On the geographic nature of IP addresses, clearly some level of 
aggregation is essential but it is equally clear that 100% clean 
aggregation is never going to be achievable either. The longer a block 
is in service the more it gets 'bashed about'. Entropy increases.


Only if they are used as reliable network monuments. And yes that is the 
proper term for it and one we should start using as a particular form of 
trust anchor.


When we did the Controlling access patent, this was part of the control 
and reporting model we built for it.




At the moment the Internet architecture has a built in assumption that 
the system is going to grow. And that keeps the chaos factor in check 
because the new blocks are a significant proportion of the whole and 
have nice regular assignments at issue. But what when the system stops 
growing? How do we keep the chaos to an acceptable fraction?


This leads me to consider an IP address block assignment as being an 
inherently term limited affair, with the sole exception being root DNS 
where perpetual assignments are going to be necessary. The terms need 
to be long, years, probably decades at minimum. But there needs to be 
a built in assumption that over time there will be a 'recycling' of 
broken down, atomized address blocks into larger clumps that aggregate 
nicely. Which in turn is only possible if nobody (apart from core DNS) 
cares about their IP address having a specific value.




-Original Message-
From: ietf-boun...@ietf.org on behalf of Bryan Ford
Sent: Wed 12/24/2008 1:50 PM
To: macbroadcast
Cc: ietf@ietf.org
Subject: Re: The internet architecture

On Dec 22, 2008, at 10:51 PM, macbroadcast wrote:
 IP does not presume hierarchical addresses and worked quite well 
 without it for nearly 20 years.
 IP addresses are topologically independent. Although since CIDR, 
 there has been an attempt to make them align with aspects of the 
 graph.

 Ford's paper does not really get to the fundamentals of the problem.

 I would suggest that deeper thought is required.

 would like to know bryans  opinion

I think I missed some intermediate messages in this discussion thread, 
but I'll try. :)


IP addresses are just an address format (two, actually, one for IPv4 
and another for IPv6); their usefulness and effectiveness depends on 
exactly how they are assigned and used.  CIDR prescribes a way to 
assign and use IP addresses that in theory facilitates aggregation of 
route table entries to make the network scalable, _IF_ those addresses 
are assigned in a hierarchical fashion that directly corresponds to 
the network's topology, which must also be strictly hierarchical in 
order for that aggregation to be possible.  That is, if an edge 
network has only one upstream provider and uses in its network only IP 
addresses handed out from that provider's block, then nobody else in 
the Internet needs to have a routing table entry for that particular 
edge network; only for the provider.  But that whole model breaks down 
as soon as that edge network wants (god forbid!) a bit of reliability 
by having two redundant links to two different upstream providers - 
i.e., the multihoming problem, and hence all the concern over the 
fact that BGP routing tables are ballooning out of control because 
_everybody_ wants to be multihomed and thus wants their own public, 
non-aggregable IP address range, thus completely defeating the 
scalability goals of CIDR.


For some nice theoretical and practical analysis indicating that any 
hierarchical CIDR-like addressing scheme is fundamentally a poor match 
to a well-connected network topology like that of the Internet, see 
Krioukov et al., On Compact Routing for the Internet, CCR '07.  They 
also cast some pretty dark clouds over some alternative schemes as 
well, but that's another story. :)


But to get back to the original issue, CIDR-based IP addressing isn't 
scalable unless

RE: The internet architecture

2008-12-28 Thread John Day

Why should an application ever see an IP address?

Applications manipulating IP addresses is like a Java program 
manipulating absolute memory pointers. A recipe for problems, but 
then you already know that.



At 11:42 -0800 2008/12/28, Hallam-Baker, Phillip wrote:

Content-class: urn:content-classes:message
Content-Type: multipart/alternative;
boundary=_=_NextPart_001_01C96924.5DF45235

It depends on what level you are looking at the problem from

In my opinion, application layer systems should not make assumptions 
that have functional (as opposed to performance) implications on the 
inner semantics of IP addresses. From the functionality point of 
view an IP address should be considered by the application to be no 
more than an opaque identifier.


The reason for that is precisely to allow the routing layer to make 
architectural decisions that do apply semantics to the address and 
which change over periods of time that are relevant to routing layer 
deployment cycles (there are plenty of pre-1995 Internet hosts still 
in service, I will wager a rather smaller percentage of backbone 
routers from 1995 are still in service :-).


Which is why I want to see ad-hoc semantics that applications 
attempt to apply to IP addresses being replaced by DNS level (ie 
reverse DNS) facilities that achieve the same effect in a fashion 
that does not result in applications breaking if those assumptions 
are broken.



On the geographic nature of IP addresses, clearly some level of 
aggregation is essential but it is equally clear that 100% clean 
aggregation is never going to be achievable either. The longer a 
block is in service the more it gets 'bashed about'. Entropy 
increases.


At the moment the Internet architecture has a built in assumption 
that the system is going to grow. And that keeps the chaos factor in 
check because the new blocks are a significant proportion of the 
whole and have nice regular assignments at issue. But what when the 
system stops growing? How do we keep the chaos to an acceptable 
fraction?


This leads me to consider an IP address block assignment as being an 
inherently term limited affair, with the sole exception being root 
DNS where perpetual assignments are going to be necessary. The terms 
need to be long, years, probably decades at minimum. But there needs 
to be a built in assumption that over time there will be a 
'recycling' of broken down, atomized address blocks into larger 
clumps that aggregate nicely. Which in turn is only possible if 
nobody (apart from core DNS) cares about their IP address having a 
specific value.




-Original Message-
From: ietf-boun...@ietf.org on behalf of Bryan Ford
Sent: Wed 12/24/2008 1:50 PM
To: macbroadcast
Cc: ietf@ietf.org
Subject: Re: The internet architecture

On Dec 22, 2008, at 10:51 PM, macbroadcast wrote:
 IP does not presume hierarchical addresses and worked quite well 
 without it for nearly 20 years.
 IP addresses are topologically independent. Although since CIDR, 
 there has been an attempt to make them align with aspects of the 
 graph.

 Ford's paper does not really get to the fundamentals of the problem.

 I would suggest that deeper thought is required.


 would like to know bryans  opinion


I think I missed some intermediate messages in this discussion thread, 
but I'll try. :)


IP addresses are just an address format (two, actually, one for IPv4 
and another for IPv6); their usefulness and effectiveness depends on 
exactly how they are assigned and used.  CIDR prescribes a way to 
assign and use IP addresses that in theory facilitates aggregation of 
route table entries to make the network scalable, _IF_ those addresses 
are assigned in a hierarchical fashion that directly corresponds to 
the network's topology, which must also be strictly hierarchical in 
order for that aggregation to be possible.  That is, if an edge 
network has only one upstream provider and uses in its network only IP 
addresses handed out from that provider's block, then nobody else in 
the Internet needs to have a routing table entry for that particular 
edge network; only for the provider.  But that whole model breaks down 
as soon as that edge network wants (god forbid!) a bit of reliability 
by having two redundant links to two different upstream providers - 
i.e., the multihoming problem, and hence all the concern over the 
fact that BGP routing tables are ballooning out of control because 
_everybody_ wants to be multihomed and thus wants their own public, 
non-aggregable IP address range, thus completely defeating the 
scalability goals of CIDR.


For some nice theoretical and practical analysis indicating that any 
hierarchical CIDR-like addressing scheme is fundamentally a poor match 
to a well-connected network topology like that of the Internet, see 
Krioukov et al., On Compact Routing for the Internet, CCR '07.  They 
also cast some pretty dark clouds over some alternative schemes as 
well, but that's another

Re: The internet architecture

2008-12-24 Thread Bryan Ford

On Dec 22, 2008, at 10:51 PM, macbroadcast wrote:
IP does not presume hierarchical addresses and worked quite well  
without it for nearly 20 years.
IP addresses are topologically independent. Although since CIDR,  
there has been an attempt to make them align with aspects of the  
graph.

Ford's paper does not really get to the fundamentals of the problem.

I would suggest that deeper thought is required.


would like to know bryans  opinion


I think I missed some intermediate messages in this discussion thread,  
but I'll try. :)


IP addresses are just an address format (two, actually, one for IPv4  
and another for IPv6); their usefulness and effectiveness depends on  
exactly how they are assigned and used.  CIDR prescribes a way to  
assign and use IP addresses that in theory facilitates aggregation of  
route table entries to make the network scalable, _IF_ those addresses  
are assigned in a hierarchical fashion that directly corresponds to  
the network's topology, which must also be strictly hierarchical in  
order for that aggregation to be possible.  That is, if an edge  
network has only one upstream provider and uses in its network only IP  
addresses handed out from that provider's block, then nobody else in  
the Internet needs to have a routing table entry for that particular  
edge network; only for the provider.  But that whole model breaks down  
as soon as that edge network wants (god forbid!) a bit of reliability  
by having two redundant links to two different upstream providers -  
i.e., the multihoming problem, and hence all the concern over the  
fact that BGP routing tables are ballooning out of control because  
_everybody_ wants to be multihomed and thus wants their own public,  
non-aggregable IP address range, thus completely defeating the  
scalability goals of CIDR.


For some nice theoretical and practical analysis indicating that any  
hierarchical CIDR-like addressing scheme is fundamentally a poor match  
to a well-connected network topology like that of the Internet, see  
Krioukov et al., On Compact Routing for the Internet, CCR '07.  They  
also cast some pretty dark clouds over some alternative schemes as  
well, but that's another story. :)


But to get back to the original issue, CIDR-based IP addressing isn't  
scalable unless the network topology is hierarchical and address  
assignment is done according to network topology: i.e., IP addresses  
MUST be dependent on topology in order for CIDR-based addressing to  
scale.  But in practice, at least up to this point, other concerns  
have substantially trumped this scalability concern: edge networks  
want fault tolerance via multihoming and administrative independence  
from their upstream ISPs, so they get their own provider-independent  
IP address blocks for their edge networks, which are indeed topology- 
independent (at least in terms of the assignment of the whole block),  
meaning practically every core router in the world will subsequently  
have to have a separate routing table entry for that edge network.   
But this only works for edge networks whose owners have sufficient  
size and clout and financial resources; we're long past the time when  
an individual could easily get his own private Class C address block  
for his own home network, like I remember doing a long time ago. :)   
So small edge networks and individual devices still have to use IP  
addresses assigned to them topologically out of some upstream  
provider's block, which means they have to change whenever the device  
moves to a different attachment point.


So in effect we've gotten ourselves in a situation where IP addresses  
are too topology-independent to provide good scalability, but too  
topology-dependent to provide real location-independence at least for  
individual devices, because of equally strong forces pulling the IP  
assignment process in both directions at once.  Hence the reason we  
desperately need locator/identity separation: so that locators can  
be assigned topologically so as to make routing scalable without  
having to cater to conflicting concerns about stability or location- 
independence, and so that identifiers can be stable and location- 
independent without having to cater to conflicting concerns about  
routing efficiency.


As far as specific forms these locators or identifiers should  
take, or specific routing protocols for the locator layer, or  
specific resolution or overlay routing protocols for the identity  
layer, I think there are a lot of pretty reasonable options; my paper  
suggested one, but there are others.


Cheers,
Bryan


merry christmas

Marc


i believe that  Kademlia   [ 1 ] for example and the  
technologies

mentioned in the  linked paper [ 2 ]
would fit the needs and requirements for a future proof internet.


[ 1 ] http://en.wikipedia.org/wiki/Kademlia
[ 2 ] http://pdos.csail.mit.edu/papers/uip:hotnets03.pdf
--


___
Ietf 

Re: The internet architecture

2008-12-21 Thread macbroadcast


Am 18.12.2008 um 18:10 schrieb Dick Hardt:



On 17-Dec-08, at 11:06 AM, Scott Brim wrote:


Mark Seery allegedly wrote on 11/30/08 10:38 AM:

Some questions have also risen WRT identity:

http://www.potaroo.net/presentations/2006-11-30-whoareyou.pdf

Is identity a network level thing or an application level thing?


Whatever.  All of the above.  There are many possible ways to use
identifiers, particularly for session (whatever that is, at  
whatever

layer) authentication and re-authentication.  The point to
locator/identifier separation is primarily to get identification- 
related
functions to stop depending on location-dependent tokens, i.e.  
locators.

Once that's done, they can use anything they like -- and they do :-).


Agreed. They do.

That does not mean that identity should not be an important part of  
the internet architecture.


Note also that the paper above mixes identity with identifiers. They  
are not the same thing


ok try this paper and tell me what you think ;)


http://pdos.csail.mit.edu/papers/uip:hotnets03.pdf

-Marc


--
Les Enfants Terribles - WWW.LET.DE
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: m...@let.de
jabber :m...@kgraff.net
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-21 Thread macbroadcast

hello,

thanks for your  reply, i prefer answering questions to the list, hope  
thats ok  for you.

Let me try  to answer your question with one sentence:

i believe that  Kademlia   [ 1 ] for example and the technologies   
mentioned in the  linked paper [ 2 ]

would fit the needs and requirements for a future proof internet.

meery christmas

Marc Manthey

[ 1 ] http://en.wikipedia.org/wiki/Kademlia
[ 2 ] http://pdos.csail.mit.edu/papers/uip:hotnets03.pdf
--
Les Enfants Terribles - WWW.LET.DE
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: m...@let.de
jabber :m...@kgraff.net
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-18 Thread Dick Hardt


On 17-Dec-08, at 11:06 AM, Scott Brim wrote:


Mark Seery allegedly wrote on 11/30/08 10:38 AM:

Some questions have also risen WRT identity:

http://www.potaroo.net/presentations/2006-11-30-whoareyou.pdf

Is identity a network level thing or an application level thing?


Whatever.  All of the above.  There are many possible ways to use
identifiers, particularly for session (whatever that is, at whatever
layer) authentication and re-authentication.  The point to
locator/identifier separation is primarily to get identification- 
related
functions to stop depending on location-dependent tokens, i.e.  
locators.

Once that's done, they can use anything they like -- and they do :-).


Agreed. They do.

That does not mean that identity should not be an important part of  
the internet architecture.


Note also that the paper above mixes identity with identifiers. They  
are not the same thing


-- Dick

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-17 Thread Stig Venaas
Rémi Després wrote:
 Christian Vogt  -  le (m/j/a) 12/4/08 10:26 AM:
 In any case, your comment is useful input, as it shows that calling the
 proposed stack architecture in [1] hostname-oriented may be wrong.
 Calling it service-name-oriented -- or simply name-oriented -- may
 be more appropriate.  Thanks for the input.
 Full support for the idea of a *name-oriented architecture*.
 
 In it, the locator-identifier separation principle applies naturally:
 names are the identifiers; addresses, or addresses plus ports,  are the
 locators.
 
 Address plus port locators are tneeded to reach applications in hosts
 that have to share their IPv4 address with other hosts ( e.g. behind a
 NAT with configured port-forwarding.)
 
 *Service-names* are the existing tool to advertise address plus port
 locators, and and to permit efficient multihoming because, in *SRV
 records* which are returned by the DNS to service-name queries:
 - several  locators  can be received for one name, possibly with a mix
 of IPv4 and IPv6
 - locators can include port numbers
 - priority and weight parameters of locators provide for backup and load
 sharing control.
 
 IMO, service names and SRV records  SHOULD be supported asap in all
 resolvers (in addition to host names and A/ records that they
 support today).
 Any view on this?

I would have liked some standard API for looking up SRV records. It's
hard to use SRV in portable applications. I've been wondering if that is
something we could do in the IETF, but would probably have to involve or
be discussed with the POSIX/Austin Group guys.

One could possibly extend getaddrinfo() or make something a bit similar.
getaddrinfo() is perhaps already becoming too complex though. A neat
thing with extending getaddrinfo() could be to make existing code use
SRV without changes. Not exactly sure if that is good or not...

Stig

 
 Regards,
 RD
 
 
 
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-17 Thread Keith Moore
Ken Raeburn wrote:
 On Dec 17, 2008, at 11:01, Keith Moore wrote:
 One could possibly extend getaddrinfo() or make something a bit similar.
 getaddrinfo() is perhaps already becoming too complex though. A neat
 thing with extending getaddrinfo() could be to make existing code use
 SRV without changes. Not exactly sure if that is good or not...

 It's not.  And I've heard rumors that some implementations of
 getaddrinfo() already do this - which is a good reason to not use it
 at all.
 
 Well, if you want portable code with consistent behavior, you can't use
 getaddrinfo with both host and service names specified, and you still
 have to do the SRV queries some other way.  But it may still be the most
 portable way to do thread-safe IPv4+IPv6 address resolution.

Mumble. I have also seen a getaddrinfo() implementation that would fail
if you passed it a literal port number, if that port wasn't listed in
/etc/services.  I kept wondering if they were trying to look up the
service name so they could do an SRV query on that.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-17 Thread Scott Brim
Mark Seery allegedly wrote on 11/30/08 10:38 AM:
 Some questions have also risen WRT identity:
 
 http://www.potaroo.net/presentations/2006-11-30-whoareyou.pdf
 
 Is identity a network level thing or an application level thing?

Whatever.  All of the above.  There are many possible ways to use
identifiers, particularly for session (whatever that is, at whatever
layer) authentication and re-authentication.  The point to
locator/identifier separation is primarily to get identification-related
functions to stop depending on location-dependent tokens, i.e. locators.
 Once that's done, they can use anything they like -- and they do :-).

Scott
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-17 Thread Ken Raeburn

On Dec 17, 2008, at 11:01, Keith Moore wrote:
One could possibly extend getaddrinfo() or make something a bit  
similar.

getaddrinfo() is perhaps already becoming too complex though. A neat
thing with extending getaddrinfo() could be to make existing code use
SRV without changes. Not exactly sure if that is good or not...


It's not.  And I've heard rumors that some implementations of
getaddrinfo() already do this - which is a good reason to not use it  
at all.


Well, if you want portable code with consistent behavior, you can't  
use getaddrinfo with both host and service names specified, and you  
still have to do the SRV queries some other way.  But it may still be  
the most portable way to do thread-safe IPv4+IPv6 address resolution.


I originally thought I wouldn't mind seeing a flag for getaddrinfo in  
some future spec that means do (not) look up SRV records for host 
+service, but I don't think you'd get consistent defaults implemented  
across all the existing implementations, some of which already do SRV  
records and some of which don't; maybe we'd need both do and do  
not flags.  And it still wouldn't help you with looking up  
_sip._tls.example.com, so you still wind up wanting the additional  
API.  Still, a shortcut in getaddrinfo for the simple and most common  
cases might be handy if it could be controlled.


So far as I can tell, getaddrinfo with only host *or* service name  
specified is still portable and consistent... if you keep in mind the  
differences between the different versions of the specs that have been  
implemented.


Ken
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-17 Thread Keith Moore
Stig Venaas wrote:
 I would have liked some standard API for looking up SRV records. It's
 hard to use SRV in portable applications. 

In general there is a need for a standard, general purpose API for DNS
queries - one that lets you query for arbitrary record types.  It also
needs to be thread safe and to work in an

 
 One could possibly extend getaddrinfo() or make something a bit similar.
 getaddrinfo() is perhaps already becoming too complex though. A neat
 thing with extending getaddrinfo() could be to make existing code use
 SRV without changes. Not exactly sure if that is good or not...

It's not.  And I've heard rumors that some implementations of
getaddrinfo() already do this - which is a good reason to not use it at all.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: [tae] The Great Naming Debate (was Re: The internet architecture)

2008-12-17 Thread Melinda Shore

Hallam-Baker, Phillip wrote:
10.1.2.3 is simply a string litteral that may be used in place of a 
DNS name. In neither case should the application require knowledge of 
the IP address itself. In fact you don't want that as at some point in 
the distant future, 10.1.2.3 is actually going to map to an IPv6 
address, not an IPv4 address.


While I take your point in general, I do think it should
be pointed out that DNS names are resolved prior to
connection establishment through a directory lookup,
while addresses are routed.

Melinda

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The Great Naming Debate (was Re: The internet architecture)

2008-12-16 Thread Hallam-Baker, Phillip
Two points
 
1) Let us bury the idea that more parts reduces reliability. If anyone thinks 
that they do not understand the function of TCP and should go and read some 
basic networking architecture texts. TCP + IP is more reliable than IP. Ergo it 
is entirely possible to onfigure a service such that DNS + TCP + IP is more 
reliable than TCP + IP.
 
It is also possible to design systems such that more layers create less 
reliability.
 
 
2) From an application point of view example.com and 10.1.2.3 may both be 
regarded as names. They are both rendered as an ascii/unicode string. They both 
require translation into IP format.
 
10.1.2.3 is simply a string litteral that may be used in place of a DNS name. 
In neither case should the application require knowledge of the IP address 
itself. In fact you don't want that as at some point in the distant future, 
10.1.2.3 is actually going to map to an IPv6 address, not an IPv4 address.



From: ietf-boun...@ietf.org on behalf of Bryan Ford
Sent: Sun 12/14/2008 2:51 PM
To: Keith Moore
Cc: t...@ietf.org; ietf@ietf.org
Subject: The Great Naming Debate (was Re: The internet architecture)



So, after being distracted by OSDI last week, I'm now trying to catch 
up on the raging debates on TAE that are already exceeding all the 
wildest dreams I had before setting up the list... :)

On the issue of weaning applications (and potentially transports) away 
from IP addresses in favor of names of some kind, I feel that a lot of 
the disagreement results from a misunderstanding of exactly what I 
(and perhaps others who have made similar proposals) was proposing...

On Dec 4, 2008, at 10:29 PM, Keith Moore wrote:
 Hallam-Baker, Phillip wrote:
 I am trying to parse this claim.

 Are you saying that the DNS is fragile and raw IP relatively robust?

 DNS is layered on top of IP.  So for a large class of IP failures, DNS
 won't work either.  And if IP routing fails, DNS is likely to be
 irrelevant because the application using DNS won't work anyway.

 And in practice, DNS is quite likely to fail due to configuration
 errors, inadequate provisioning, outdated cache entries due to
 unanticipated changes, brain-damaged DNS caches built into NATs, 
 failure
 of registries to transfer a DNS name in a timely fashion, etc.

 So it's not a question of whether DNS is less reliable than IP (it 
 is),
 or even whether the reliability of DNS + IP is less than that of IP
 alone (it is).  It's a question of whether increasing reliance on 
 DNS by
 trying to get apps and other things to use DNS names exclusively, 
 makes
 those apps and other things less reliable.  And I'd argue that it 
 does,
 except perhaps in a world where renumbering happened frequently, at
 irregular intervals, and without notice.  And I don't think that's a
 realistic scenario.

I entirely agree in principle with your concerns about reliability: if 
everything has to work correctly in two layers (DNS and IP), then 
that's strictly less likely to happen than hoping everything will work 
correctly in only one layer (just IP) - unless DNS can somehow make up 
for unreliability in the IP layer, which it occasionally might be able 
to do with some effort (e.g., via DNS-based load balancers that take 
end-to-end IP reachability information as input), but it usually 
doesn't because that's not the purpose of DNS.  And I agree that some 
applications (and some users) sometimes need to deal with IP addresses 
directly, and probably still will need to for a long time, maybe 
forever.  You seem to be assuming that my proposal was to disallow 
such visibility into the network entirely, but that wasn't my intent 
at all.  I just would like it to become no longer _mandatory_ for 
every application to know about the structure IP addresses in order to 
accomplish anything.

To be specific, there are (at least) three positions we might be in:

1. ALL applications MUST know about IP addresses, in each IP address 
format that exists, in order to operate at all.  This is the current 
state of the world for applications that use the sockets API, because 
apps have to call gethostbyname etc. and copy the resulting IP 
address(es) into sockaddr_in or sockaddr_in6 structs to pass to 
connect() et al.  Even though the sockets API is generic in that it 
supports multiple address families, its design forces the application 
to have code specific to each family in order to support that family 
at all, which is the key problem.

2. ALL applications MUST use only DNS names for all operations, and 
never provide or see IP addresses for any reason.  This seems to be 
what you're assuming I'm suggesting (i.e., where you say ...by trying 
to get apps and other things to use DNS names exclusively).  
That's a world we might hold up as an ideal to strive for eventually, 
but it's indeed not realistic in the short term, and it's not what I'm 
pushing for.  Besides, there may be many different naming or host 
identity

Re: [tae] The Great Naming Debate (was Re: The internet architecture)

2008-12-16 Thread Keith Moore
Hallam-Baker, Phillip wrote:
 So to be strictly accurate here, applications deal in names, some of
 which are DNS names and some of which are IP address litterals. But an
 'end user' application only deals in names.

how many people are pure end users who never need their tools to be
able to deal with IP address literals?  nobody that I know of.  even
naive users who don't understand what IP addresses do, need for their
apps to be able to deal with them for the cases where things break.  for
instance, when the external network connection goes down and they still
want to be able to talk to their printers. [*]

actually the vast majority of users that I know of do occasionally deal
with IP addresses directly.  and many of these guys aren't computer
scientists or techies of any sort.

again, this is not only hopelessly naive, it's harmful.  you're trying
to build a system that's even more broken than what we have now.

we're a very long way from knowing how to build a naming system that
works so reliably and transparently that we can completely hide IP
addresses from users.

Keith

[*] and yeah, I know about LLMNR and even use it occasionally, but it
also breaks when you take your laptop on the road and still want to be
able to print to the printer at home.

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


The Great Naming Debate (was Re: The internet architecture)

2008-12-15 Thread Bryan Ford
So, after being distracted by OSDI last week, I'm now trying to catch  
up on the raging debates on TAE that are already exceeding all the  
wildest dreams I had before setting up the list... :)


On the issue of weaning applications (and potentially transports) away  
from IP addresses in favor of names of some kind, I feel that a lot of  
the disagreement results from a misunderstanding of exactly what I  
(and perhaps others who have made similar proposals) was proposing...


On Dec 4, 2008, at 10:29 PM, Keith Moore wrote:

Hallam-Baker, Phillip wrote:

I am trying to parse this claim.

Are you saying that the DNS is fragile and raw IP relatively robust?


DNS is layered on top of IP.  So for a large class of IP failures, DNS
won't work either.  And if IP routing fails, DNS is likely to be
irrelevant because the application using DNS won't work anyway.

And in practice, DNS is quite likely to fail due to configuration
errors, inadequate provisioning, outdated cache entries due to
unanticipated changes, brain-damaged DNS caches built into NATs,  
failure

of registries to transfer a DNS name in a timely fashion, etc.

So it's not a question of whether DNS is less reliable than IP (it  
is),

or even whether the reliability of DNS + IP is less than that of IP
alone (it is).  It's a question of whether increasing reliance on  
DNS by
trying to get apps and other things to use DNS names exclusively,  
makes
those apps and other things less reliable.  And I'd argue that it  
does,

except perhaps in a world where renumbering happened frequently, at
irregular intervals, and without notice.  And I don't think that's a
realistic scenario.


I entirely agree in principle with your concerns about reliability: if  
everything has to work correctly in two layers (DNS and IP), then  
that's strictly less likely to happen than hoping everything will work  
correctly in only one layer (just IP) - unless DNS can somehow make up  
for unreliability in the IP layer, which it occasionally might be able  
to do with some effort (e.g., via DNS-based load balancers that take  
end-to-end IP reachability information as input), but it usually  
doesn't because that's not the purpose of DNS.  And I agree that some  
applications (and some users) sometimes need to deal with IP addresses  
directly, and probably still will need to for a long time, maybe  
forever.  You seem to be assuming that my proposal was to disallow  
such visibility into the network entirely, but that wasn't my intent  
at all.  I just would like it to become no longer _mandatory_ for  
every application to know about the structure IP addresses in order to  
accomplish anything.


To be specific, there are (at least) three positions we might be in:

1. ALL applications MUST know about IP addresses, in each IP address  
format that exists, in order to operate at all.  This is the current  
state of the world for applications that use the sockets API, because  
apps have to call gethostbyname etc. and copy the resulting IP  
address(es) into sockaddr_in or sockaddr_in6 structs to pass to  
connect() et al.  Even though the sockets API is generic in that it  
supports multiple address families, its design forces the application  
to have code specific to each family in order to support that family  
at all, which is the key problem.


2. ALL applications MUST use only DNS names for all operations, and  
never provide or see IP addresses for any reason.  This seems to be  
what you're assuming I'm suggesting (i.e., where you say ...by trying  
to get apps and other things to use DNS names exclusively).   
That's a world we might hold up as an ideal to strive for eventually,  
but it's indeed not realistic in the short term, and it's not what I'm  
pushing for.  Besides, there may be many different naming or host  
identity schemes we might eventually want to support besides DNS names  
- e.g., UIA personal names, HIP cryptographic host identities, ...


3. Applications MAY be aware of IP addresses if they need to be for  
whatever reason, but aren't ALWAYS forced to have hard-coded  
dependencies on the existence and structure of IP addresses by the  
API's design.  Applications see IP addresses as variable-length string  
blobs of some kind - e.g., along the lines Florian Weimer suggested in  
another message.  Applications can parse/interpret or create these  
blobs if they want/need to, but don't necessarily have to if they're  
just passing the blob through from the GUI or URL parser to the OS's  
protocol stack.  This is the position I think we need to be pushing for.


In short, I don't think either the current fascist extreme of an IP- 
address-only API or the opposite fascist extreme of a DNS-name-only  
protocol stack is very appealing; we need an environment in which  
different kinds of names/addresses/identities can coexist.  You'll  
still be able to enter an IPv4 or IPv6 address instead of a host name  
when you need to, and applications will be 

Re: The Great Naming Debate (was Re: The internet architecture)

2008-12-15 Thread Joe Baptista
This is a very anal retentive discussion your all having here.  I support
Ford here.  Applications should be able to use names and IP addresses.   We
don't need the IP or DNS gestapo taking over application programs.

regards
joe baptista

On Sun, Dec 14, 2008 at 2:51 PM, Bryan Ford brynosau...@gmail.com wrote:

 So, after being distracted by OSDI last week, I'm now trying to catch up on
 the raging debates on TAE that are already exceeding all the wildest dreams
 I had before setting up the list... :)

 On the issue of weaning applications (and potentially transports) away from
 IP addresses in favor of names of some kind, I feel that a lot of the
 disagreement results from a misunderstanding of exactly what I (and perhaps
 others who have made similar proposals) was proposing...

 On Dec 4, 2008, at 10:29 PM, Keith Moore wrote:

 Hallam-Baker, Phillip wrote:

 I am trying to parse this claim.

 Are you saying that the DNS is fragile and raw IP relatively robust?


 DNS is layered on top of IP.  So for a large class of IP failures, DNS
 won't work either.  And if IP routing fails, DNS is likely to be
 irrelevant because the application using DNS won't work anyway.

 And in practice, DNS is quite likely to fail due to configuration
 errors, inadequate provisioning, outdated cache entries due to
 unanticipated changes, brain-damaged DNS caches built into NATs, failure
 of registries to transfer a DNS name in a timely fashion, etc.

 So it's not a question of whether DNS is less reliable than IP (it is),
 or even whether the reliability of DNS + IP is less than that of IP
 alone (it is).  It's a question of whether increasing reliance on DNS by
 trying to get apps and other things to use DNS names exclusively, makes
 those apps and other things less reliable.  And I'd argue that it does,
 except perhaps in a world where renumbering happened frequently, at
 irregular intervals, and without notice.  And I don't think that's a
 realistic scenario.


 I entirely agree in principle with your concerns about reliability: if
 everything has to work correctly in two layers (DNS and IP), then that's
 strictly less likely to happen than hoping everything will work correctly in
 only one layer (just IP) - unless DNS can somehow make up for unreliability
 in the IP layer, which it occasionally might be able to do with some effort
 (e.g., via DNS-based load balancers that take end-to-end IP reachability
 information as input), but it usually doesn't because that's not the purpose
 of DNS.  And I agree that some applications (and some users) sometimes need
 to deal with IP addresses directly, and probably still will need to for a
 long time, maybe forever.  You seem to be assuming that my proposal was to
 disallow such visibility into the network entirely, but that wasn't my
 intent at all.  I just would like it to become no longer _mandatory_ for
 every application to know about the structure IP addresses in order to
 accomplish anything.

 To be specific, there are (at least) three positions we might be in:

 1. ALL applications MUST know about IP addresses, in each IP address format
 that exists, in order to operate at all.  This is the current state of the
 world for applications that use the sockets API, because apps have to call
 gethostbyname etc. and copy the resulting IP address(es) into sockaddr_in or
 sockaddr_in6 structs to pass to connect() et al.  Even though the sockets
 API is generic in that it supports multiple address families, its design
 forces the application to have code specific to each family in order to
 support that family at all, which is the key problem.

 2. ALL applications MUST use only DNS names for all operations, and never
 provide or see IP addresses for any reason.  This seems to be what you're
 assuming I'm suggesting (i.e., where you say ...by trying to get apps and
 other things to use DNS names exclusively).  That's a world we might
 hold up as an ideal to strive for eventually, but it's indeed not realistic
 in the short term, and it's not what I'm pushing for.  Besides, there may be
 many different naming or host identity schemes we might eventually want to
 support besides DNS names - e.g., UIA personal names, HIP cryptographic host
 identities, ...

 3. Applications MAY be aware of IP addresses if they need to be for
 whatever reason, but aren't ALWAYS forced to have hard-coded dependencies on
 the existence and structure of IP addresses by the API's design.
  Applications see IP addresses as variable-length string blobs of some kind
 - e.g., along the lines Florian Weimer suggested in another message.
  Applications can parse/interpret or create these blobs if they want/need
 to, but don't necessarily have to if they're just passing the blob through
 from the GUI or URL parser to the OS's protocol stack.  This is the position
 I think we need to be pushing for.

 In short, I don't think either the current fascist extreme of an
 IP-address-only API or the opposite fascist 

Re: The Great Naming Debate (was Re: The internet architecture)

2008-12-15 Thread Marc Manthey




I absolutly agree with brians posting and  recomment   all people  
reading this paper , IMHO, it   solves  some

known problems , even when they don´t exist  in real world yet . ;)

http://pdos.csail.mit.edu/papers/uia:osdi06.pdf


(e.g., via DNS-based load balancers that take end-to-end IP  
reachability information as input),


this would take us beyound  round robin  indeed.

cheers

Marc


--
Les Enfants Terribles - WWW.LET.DE
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: m...@let.de
PGP/GnuPG: 0x1ac02f3296b12b4d
jabber :m...@kgraff.net
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The Great Naming Debate (was Re: The internet architecture)

2008-12-14 Thread Keith Moore
Bryan Ford wrote:
 You seem to be assuming that my proposal was to disallow such
 visibility into the network entirely, but that wasn't my intent at
 all.  I just would like it to become no longer _mandatory_ for every
 application to know about the structure IP addresses in order to
 accomplish anything.

 In short, I don't think either the current fascist extreme of an
 IP-address-only API or the opposite fascist extreme of a
 DNS-name-only protocol stack is very appealing; we need an environment
 in which different kinds of names/addresses/identities can coexist.  

Ah, it seems I read far too much into what you wrote earlier.  I
certainly don't think it should be mandatory for all applications that
use the network to know about the structure of IP addresses. The beef I
had was with the various forms of all apps should always use DNS names
arguments.

Note that with getaddrinfo(), arguably they _don't_ need to know about
the structure of IP addresses.  The getaddrinfo() routine allocates a
sockaddr_xx structure of appropriate type for each address found, but
the pointers returned are (as far as the caller knows) to generic
sockaddr structures.  The caller can simply pass a pointer to this
structure to connect().  And the idea was clearly to insulate apps from
having to know about the structure or size of addresses, without
compromising their flexibility.   For several reasons, I don't happen to
think that the result works very well, but the intent was there.  If it
turns out to not work well enough to prevent apps from needing to peek
into addresses, maybe the problem isn't the API, but rather that there
are subtle differences between v4 and v6 that nevertheless matter to
applications.

Keith

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Stephane Bortzmeyer
On Sat, Dec 06, 2008 at 09:33:36PM +0900,
 Masataka Ohta [EMAIL PROTECTED] wrote 
 a message of 16 lines which said:

 The problem to represent a host with raw addresses is that they
 can't represent a host with multiple addresses,

OK

 supporting of which is useful and often required, for example, for
 DNS and SMTP clients.

Required? Neither DNS nor SMTP requires that. I'm not even sure it
is good practice.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Stephane Bortzmeyer
On Sat, Dec 06, 2008 at 08:03:45AM +0100,
 Marc Manthey [EMAIL PROTECTED] wrote 
 a message of 87 lines which said:

 On Linux (at least on Debian), you need the mDNSResponder package
 provided by Apple on the Bonjour downloads page.  Unfortunately,
 Avahi doesn't yet implement all of the API functions UIA needs.

And it is a proprietary protocol, anyway. The IETF protocol on that
field is LLMNR (RFC 4795).

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Stephane Bortzmeyer
On Fri, Dec 05, 2008 at 12:46:58PM -0500,
 Andrew Sullivan [EMAIL PROTECTED] wrote 
 a message of 39 lines which said:

 It seems to me true, from experience and from anecdote, that DNS out
 at endpoints has all manner of failure modes that have little to do
 with the protocol and a lot to do with decisions that implementers
 and operators made, either on purpose or by accident.

Indeed, in one of his messages, Keith Moore listed many problems with
DNS that were completely different in their origin. Almsot none were
protocol-related, most were operational but some were not even linked
to DNS operations, they were layer 8 and 9 issues (such as the
registrar transfer problem in ICANNland) and completely offtopic for
the IETF.

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Stephane Bortzmeyer
On Mon, Dec 08, 2008 at 11:25:02AM +0100,
 Marc Manthey [EMAIL PROTECTED] wrote 
 a message of 54 lines which said:

 And it is a proprietary protocol, anyway.

 who told you that  ?

Please tell me what open standard (IETF, ITU, what you want) defines
it.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Masataka Ohta
Stephane Bortzmeyer wrote:

The problem to represent a host with raw addresses is that they
can't represent a host with multiple addresses,

 OK

supporting of which is useful and often required, for example, for
DNS and SMTP clients.

 Required? Neither DNS nor SMTP requires that. I'm not even sure it
 is good practice.

If an SMTP or DNS server has multiple addresses, clients are required,
practically (many servers are behind firewalls) or, for DNS, officially
by RFC1035, to try all the IP addresses of the server, which is a form
of an applicatin layer end to end multihoming.

That DNS is an intelligent intermediate system is blessing rather than
a curse, because it enables many to many mapping between hostnames and
IP addresses, which is essensial for end to end multihoming to reduce
the number of the global routing table entries.

Masataka Ohta


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Stephane Bortzmeyer
On Mon, Dec 08, 2008 at 09:06:54PM +0900,
 Masataka Ohta [EMAIL PROTECTED] wrote 
 a message of 25 lines which said:

 If an SMTP or DNS server has multiple addresses, clients are
 required, practically (many servers are behind firewalls) or, for
 DNS, officially by RFC1035, to try all the IP addresses of the
 server,

Yes, SMTP requires to try all the IP addresses of the MX. But I
thought you said that it was required for a MX to have several
addresses, which is quite different.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Marc Manthey


Am 08.12.2008 um 10:11 schrieb Stephane Bortzmeyer:


On Sat, Dec 06, 2008 at 08:03:45AM +0100,
Marc Manthey [EMAIL PROTECTED] wrote
a message of 87 lines which said:


On Linux (at least on Debian), you need the mDNSResponder package
provided by Apple on the Bonjour downloads page.  Unfortunately,
Avahi doesn't yet implement all of the API functions UIA needs.


And it is a proprietary protocol, anyway.


who told you that  ?

http://www.ops.ietf.org/lists/namedroppers/namedroppers.2004/msg00087.html

some references

http://files.dns-sd.org/draft-cheshire-dnsext-nbp.txt

http://files.dns-sd.org/draft-sekar-dns-llq.txt

http://files.dns-sd.org/draft-sekar-dns-ul.txt

http://files.dns-sd.org/draft-cheshire-nat-pmp.txt



Marc
--
Les Enfants Terribles - WWW.LET.DE
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: [EMAIL PROTECTED]
PGP/GnuPG: 0x1ac02f3296b12b4d
jabber :[EMAIL PROTECTED]
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: DNS query reliability (was Re: The internet architecture)

2008-12-08 Thread Stephane Bortzmeyer
On Sat, Dec 06, 2008 at 06:23:02AM -0800,
 Dave CROCKER [EMAIL PROTECTED] wrote 
 a message of 37 lines which said:

 One could imagine producing a BCP about common DNS implementation and 
 operation errors or, more positively, recommendations for implementation 
 and operation.

 One could equally imagine some group actively pursuing improvements to 
 the major implementations (and operations) that have problems.

 I seem to recall seeing small forays in this direction, in the past.  

Indeed, there are many efforts to improve the DNS usage.

In IETFland, there are RFC 1912, 2182, 4472, 4641, 5358 and many
Internet-Drafts.

Outside IETF, there are efforts such as plug type=shameless
url=http://www.zonecheck.fr/;registries like AFNIC that require a
successful technical test of the name servers before every
delegation/plug

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Andrew Sullivan
On Mon, Dec 08, 2008 at 10:37:36AM +0100, Stephane Bortzmeyer wrote:

 DNS that were completely different in their origin. Almsot none were
 protocol-related, most were operational but some were not even linked
 to DNS operations, they were layer 8 and 9 issues (such as the
 registrar transfer problem in ICANNland) and completely offtopic for
 the IETF.

Well, to the extent that the real-world application of the protocol --
even several layers up -- turns out to be more difficult than simple
reading of the protocol specification would suggest, we have a problem
that is _not_ off-topic for the IETF.  That problem is how to make
this very flexible protocol safe for use in the network that actually
exists.  

There is a tendency among those of us who are familiar with DNS to
say, Well, that's not what you're supposed to do, when we hear the
horror stories people have about DNS problems.  But if we really want
people to use names instead of addresses all the time, then we need to
ask ourselves why, in spite of the built-in resilience of DNS, relying
on DNS often makes an application less resilient.  If the explanation
turns out to be exclusively things like the layer 8 and 9 issues
you're talking about, then let's work on fixing them (or come up with
a tech fix to route around that damage).  I suspect, however, that
we'll find ambiguities in the specifications that make certain kinds
of implementations hard.  We should fix that, too (and dnsext is
trying).

A

-- 
Andrew Sullivan
[EMAIL PROTECTED]
Shinkuro, Inc.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-08 Thread Dave CROCKER



Andrew Sullivan wrote:

   we need to
ask ourselves why, in spite of the built-in resilience of DNS, relying
on DNS often makes an application less resilient.  If the explanation
turns out to be exclusively things like the layer 8 and 9 issues
you're talking about, then let's work on fixing them (or come up with
a tech fix to route around that damage).  I suspect, however, that
we'll find ambiguities in the specifications that make certain kinds
of implementations hard.  We should fix that, too (and dnsext is
trying).



Andrew,

An easy reaction to your note is yes, of course you are correct, but it might 
be worth a small amount of additional consideration:  This is an infrastructure 
service that has demonstrated a long history of utility and a long history of 
problems.  Some of the problems have more impact than others. I'm not sure there 
is any consensus about which ones, but trying to do a rough rand-ordering could 
focus effort to be more efficient (and more useful.)


What we have been missing is any sort of systematic consideration of those 
problems. Which are major and which are minor (for some definitions of major and 
minor)?  Where should community effort be put, to make substantial improvements 
in DNS utility?  There is the usual danger of fragmented effort, with 
uncoordinated, local optimizations that ultimately do not have the desired benefit.


I take your note as highlighting the potential disparity between the DNS reality 
seen by experts, versus the DNS reality seen in the larger and less expert wild.


Besides motivating analysis of what is wrong, we need to apply that perspective 
at a systemic level to the effort at making improvements, so that fixes by 
experts result in real, systems-level improvements.


In all likelihood, the biggest impact on DNS reliability will turn out to come 
from relatively straightforward changes in administration and operation.  But 
there does not seem to be all that much effort in this direction and it seems 
likely that a well-placed and well-touted BCP -- and perhaps some tools to test 
for conformance to it -- could help more than a protocol improvement...


d/
--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-06 Thread Christian Vogt

Keith -

Up front:  Many of your arguments are based on the assumption that the
name-oriented stack architecture proposed in [1] is limited to a new API
between applications and the stack.  If that was so, then the benefits
of the new stack architecture would undoubtedly be limited.  But the
name-oriented stack architecture goes beyond providing a new API.  It
also comprises changes further down the stack to take maximum benefit of
the proposed new naming concepts.  A new API alone would only provide a
new layer of abstraction, which (here I agree with you) would fail to
yield a substantial improvement.  Therefore, please keep in mind when
reading my further responses below, that the proposed name-oriented
stack architecture is more than a new API.

And another general comment up front, regarding your arguments related
to DNS reliability:  Yes, I agree with you that this is important.  For
sure it must be keep in mind when transitioning towards an architecture
that implies a stronger dependency on name-to-address mapping.  The
reason why I am convinced that we will get this right is that (i) the
tools to ensure various levels of DNS reliability and security already
exist due to existing dependencies on the DNS, and (ii) which level of
DNS reliability and security to provide for, in each particular
deployment scenario, will continue to be under the control of the
respective name owners.  I.e., as today, the name owners will be able to
select a hosting provider with sufficient trustworthiness, or to set up
the necessary DNS infrastructure themselves, or even to use hard-coded
name-to-address mappings in hosts.

Of course, a name-oriented stack architecture does not /have/ to use the
DNS as its name-to-address mapping system.  As Stephane said earlier in
this email thread: There are alternatives.  Still, I have two reasons to
believe that the DNS would be appropriate in this regard:

(1) The DNS protocol enables exactly the flexibility in name-to-address
   mapping that we need:  It enables an abstraction from the physical
   implementation of a service -- i.e., from the details of which hosts
   run the service and where these hosts are located.

   Note that the term hostname, which is common in the DNS realm, is
   misleading due to exactly this abstraction.  And I think it has led
   to misunderstandings also in our discussion.  This is why I am now
   using the term name instead.

(2) The DNS has very attractive operational and security properties:
   Since resolution is along administrative relationships,  the right
   incentives are in place for DNS to work correctly.  Of course, this
   doesn't mean that there is no room for improvement.  Certain
   existing administrative procedures may be inappropriate, as you say.
   And clearly there would be benefit to additional cryptographic
   protection such as through DNSSEC.  But the DNS provides a certain
   level of faithfulness already without such improvements, and this is
   in my opinion very attractive.


The service name is a well-known string replacing the overloading of
port numbers for the same purpose, and the hostname maps to the set  
of

IP addresses through which the service is currently reachable.


I'll never forget the time, back when I ran a mail server for a few
thousand users, that mail started dropping on the floor because a call
to getservbyname (smtp, tcp) inside sendmail started failing.

The call failed not just because some dim bulb at Sun decided that it
would be a good idea to take an API that originally did a flat file
lookup and reimplement it with an RPC to an NIS server that didn't  
share

fate with the caller (not to mention several other fragilities
associated with NIS).   On a deeper and more important level, the
failure happened because the whole idea of getservbyname (and similar
things, including the service name parameter to getaddrinfo) is
brain-damaged [*].  smtp is not better than 25, and adding an extra
layer of indirection just to make the port number be human readable is
not an improvement.


There won't be a new indirection layer.  Instead, there will be a
separate name space for services, which will be distinct from the name
space for session identification.  Nowadays, port numbers are overloaded
for both, service identification and session identification.

And the separation of service identification and session identification
is going to have value.  An example is in IPv4 NAT'ing:  Here, the
overloading of port numbers limits the efficiency with which sessions
can be multiplexed onto the same IP address because the overloading
requires one of the two port numbers of a session to take a certain,
well-known value.  This problem is coming up nowadays in the discussions
around IPv6 transition, for which many techniques require aggressive
multiplexing of sessions onto the same IPv4 address.  You may argue that
this alone is not sufficient to motivate a change, but this would be a
separate topic for 

DNS query reliability (was Re: The internet architecture)

2008-12-06 Thread Dave CROCKER



Andrew Sullivan wrote:

It seems to me true, from experience and from anecdote, that DNS out
at endpoints has all manner of failure modes that have little to do
with the protocol and a lot to do with decisions that implementers and
operators made, either on purpose or by accident. 

...

This suggests to me that there will be an opportunity to improve some
of the operations in the wild,

...

If you have a cache of these examples, I'd be delighted to see them.



One could imagine producing a BCP about common DNS implementation and operation 
errors or, more positively, recommendations for implementation and operation.


One could equally imagine some group actively pursuing improvements to the major 
implementations (and operations) that have problems.


I seem to recall seeing small forays in this direction, in the past.  Your query 
might encourage an organized effort that follows through with making actual DNS 
operation -- as opposed to attack or defense of the protocol -- provide the 
needed level of *end-to-end* reliability.


d/
--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Stephane Bortzmeyer
On Thu, Dec 04, 2008 at 04:29:51PM -0500,
 Keith Moore [EMAIL PROTECTED] wrote 
 a message of 28 lines which said:

 It's a question of whether increasing reliance on DNS by trying to
 get apps and other things to use DNS names exclusively, makes those
 apps and other things less reliable.

For a good part, this is already done. You cannot use IP addresses for
many important applications (the Web because of virtual hosting and
email because most MTA setup prevent it).

And, as far as I know, nobody complained. The only person I know who
puts IP addresses on business cards is Louis
Pouzin [EMAIL PROTECTED]

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Stephane Bortzmeyer
On Thu, Dec 04, 2008 at 04:51:20PM -0500,
 Keith Moore [EMAIL PROTECTED] wrote 
 a message of 40 lines which said:

 Not a week goes by when I'm not asked to figure out why people
 can't get to a web server or why email isn't working.  In about
 70% of the web server cases and 30% of the email cases, the answer
 turns out to be DNS related.  IP failures, by contrast, are quite
 rare.

If it were true, I would wonder why people never use legal URLs like
http://[2001:1890:1112:1::20]/...

(And that's certainly not because they are harder to type or to
remember: the above URL, which works on my Firefox, goes to a Web site
which is mostly for technical people, who are able to use bookmarks,
local files for memory, etc.)
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Rémi Després wrote:

 IMO, service names and SRV records  SHOULD be supported asap in all
 resolvers (in addition to host names and A/ records that they
 support today).
 Any view on this?

a) SRV records only apply to applications that use them.  to use them
otherwise would break compatibility.

b) SRV records also increase reliance on DNS which (among other things)
is a barrier to deployment of new applications.  use of SRV would
therefore encourage overloading of existing protocols and service names
to run new applications (another version of the everything-over-http
syndrome).

c) use of SRV would encourage even more meddling with protocols by NATs

d) it's not immediately clear to me that it would be feasible for SRV to
be used by applications that need to do referrals.  they'd need some way
to generate new, unique service names and there would need to be a way
to generate and distribute new DNS dynamic update credentials to those
applications or even application instances.

and that's just the problems I can think of off the top of my head.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Stephane Bortzmeyer wrote:
 On Thu, Dec 04, 2008 at 04:29:51PM -0500,
  Keith Moore [EMAIL PROTECTED] wrote 
  a message of 28 lines which said:
 
 It's a question of whether increasing reliance on DNS by trying to
 get apps and other things to use DNS names exclusively, makes those
 apps and other things less reliable.
 
 For a good part, this is already done. You cannot use IP addresses for
 many important applications (the Web because of virtual hosting and
 email because most MTA setup prevent it).

you're generalizing about the entire Internet from two applications?

 And, as far as I know, nobody complained. The only person I know who
 puts IP addresses on business cards is Louis
 Pouzin [EMAIL PROTECTED]

address literals were quite useful for diagnostic purposes.  the fact
that most MTAs now prevent using them in outgoing mail is quite
unfortunate.   though they were even more useful when you could expect
to do things like

RCPT TO:[EMAIL PROTECTED]

as a way to query a specific SMTP server as to what it would do with a
specific domain name.

but that's really beside the point.  the real point is this:

please figure out how to make DNS more reliable, more in sync with the
world, and less of a single point of failure and control, before
insisting that we place more trust in it.

Keith

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Stephane Bortzmeyer wrote:
 On Thu, Dec 04, 2008 at 04:51:20PM -0500,
  Keith Moore [EMAIL PROTECTED] wrote 
  a message of 40 lines which said:
 
 Not a week goes by when I'm not asked to figure out why people
 can't get to a web server or why email isn't working.  In about
 70% of the web server cases and 30% of the email cases, the answer
 turns out to be DNS related.  IP failures, by contrast, are quite
 rare.
 
 If it were true, I would wonder why people never use legal URLs like
 http://[2001:1890:1112:1::20]/...

because IPv6 literals wouldn't work for the vast majority of users today?

I do see links to URLs with IPv4 address literals.   sometimes they're a
good choice.

but you're really missing the point, which is that DNS fails a lot.

note that DNS failures aren't all with the authoritative servers -
they're often with caches, resolver configuration, etc.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

  Just think how much easier the IPv4 to IPv6 transition would have
  been if nothing above the IP layer cared exactly what an IP
  address looks like or how big it is.

 It wouldn't have made much difference at all,

Wow. I find this statement simply astonishing.

IMO, one of the biggest challenges surrounding IPv6
adoption/deployment is that all applications are potentially impacted,
and each and everyone one of them needs to be explicitely enabled to
work with IPv6. That is a huge challenge, starting with the
observation that there are a bazillion deployed applications that will
NEVER be upgraded.

Boy, wouldn't it be nice of all we had to do was IPv6-enable the
underlying network and stack (along with key OS support routines and
middleware) and have existing apps work over IPv6, oblivious to IPv4
vs. IPv6 underneath.

And, if one wants to look back and see could we have done it
differently, go back to the BSD folk that came up with the socket
API. It was designed to support multiple network stacks precisely
because at that point in time, there were many, and TCP/IP was
certainly not pre-ordained. But that API makes addresses visible to
APIs. And it is widely used today.

Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

 So it's not a question of whether DNS is less reliable than IP (it is),
 or even whether the reliability of DNS + IP is less than that of IP
 alone (it is).  It's a question of whether increasing reliance on DNS by
 trying to get apps and other things to use DNS names exclusively, makes
 those apps and other things less reliable.

No. Your argument seems to be because relying even more on DNS than
we do today makes things more brittle, BAD, BAD BAD, we cannot go
there.

The more relevant engineering question is whether the benefits of such
an approach outweigh the downsides. Sure there are downsides. But
there are also real potential benefits. Some of them potentially game
changers in terms of addressing real deficiencies in what we have
today. It may well be that having applications be more brittle would
be an acceptable cost for getting a viable multihoming approach that
address the route scalability problem. (All depends on what more
brittle really means.) But the only way to answer such questions in a
productive manner is to look pretty closely at a complete
architecture/solution together with experience from real
implementation/usage.

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Henning Schulzrinne



Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).


This certainly seems to be the way that modern APIs are heading. If  
I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other  
scripting languages have a socket-like API that does not expose IP  
addresses, but rather connects directly to DNS names. (In many cases,  
they unify file and socket opening and specify the application  
protocol, to, so that one can do fopen(http://www.ietf.org;), for  
example.) Thus, we're well on our way towards the goal of making  
(some) application oblivious to addresses. I suspect that one reason  
for the popularity of these languages is exactly that programmers  
don't want to bother remembering when to use ntohs().


Henning
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Leslie
Keith Moore [EMAIL PROTECTED] wrote:
 
 please figure out how to make DNS more reliable, more in sync with the
 world, and less of a single point of failure and control, before
 insisting that we place more trust in it.

   A while back, in the SIDR mail-list, a banking-level wish-list was
published:
] 
] - That when you establish a discussion with endpoint you are (to the   
]   best of current technology) certain it really is the endpoint.
] 
] - That you are talking (unmolested) to the endpoint you think you are  
]   for the entirety of the session.
] 
] - That what is retrieved by the client is audit-able at both the
]   server and the client.
] 
] - That retrievals are predictable, and perfectly repeatable.
] 
] - That the client _never_ permits a downgrade, or unsecured retrieval   
]   of information
] 
] - That Trust anchor management for both the client ssl and the PRKI
]   is considered in such a way that it minimises the fact there is no
]   such thing as trusted computing.

   How much of this is it reasonable to ask the DNS to do?

--
John Leslie [EMAIL PROTECTED]
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Dave CROCKER



Thomas Narten wrote:

And, if one wants to look back and see could we have done it
differently, go back to the BSD folk that came up with the socket
API. It was designed to support multiple network stacks precisely
because at that point in time, there were many, and TCP/IP was
certainly not pre-ordained. But that API makes addresses visible to
APIs. And it is widely used today.



Thomas,

If you are citing BSD merely as an example of a component that imposes knowledge 
of addresses on upper layers, then yes, it does make a good, concrete example.


If you are citing BSD because you think that they made a bad design decision, 
then you are faulting them for something that was common in the networking 
culture at the time.


People  -- as in end users, as in when they were typing into an application -- 
commonly used addresses in those days, and hostnames were merely a preferred 
convenience.  (Just to remind us all, this was before the DNS and the hostname 
table was often out of date.)


Worse, we shouldn't even forgive them/us by saying something like we didn't 
understand the need for name/address split, back then because it's pretty clear 
from the last 15 years of discussion and work that, as a community, we *still* 
don't.  (The Irvine ring was name-based -- 1/4 of the real estate on its network 
card was devoted to the name table -- but was a small LAN, so scaling issues 
didn't apply.)


d/

ps. As to your major point, that having apps de-coupled from addresses would 
make a huge difference, boy oh boy, we are certainly in agreement there...

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread michael.dillon

 IMO, one of the biggest challenges surrounding IPv6 
 adoption/deployment is that all applications are potentially 
 impacted, and each and everyone one of them needs to be 
 explicitely enabled to work with IPv6.

Or NAT-PT needs to improved so that middleboxes can be inserted
into a network to provide instant v4-v6 compatibility.  

 That is a huge 
 challenge, starting with the observation that there are a 
 bazillion deployed applications that will NEVER be upgraded.

Yes, I agree that there is a nice market for such middleboxes.

 Boy, wouldn't it be nice of all we had to do was IPv6-enable 
 the underlying network and stack (along with key OS support 
 routines and
 middleware) and have existing apps work over IPv6, oblivious 
 to IPv4 vs. IPv6 underneath.

Middleboxes can come close to providing that.

 Wouldn't it have been nice if the de facto APIs in use today 
 were more along the lines of ConnectTo(DNS name, service/port).

I don't know if nice is the right word. It would be interesting
and I expect that there would be less challenges because we would
have had a greater focus on making DNS (or something similar) more
reliable. It's not too late to work on this and I think that it
is healthy for multiple technologies to compete on the network.
At this point it is not clear that IPv6 will last for more than
50 years or so. If we do work on standardizing a name-to-name
API today, then there is the possibility that this will eventually
prevail over the IPv6 address API.

Way back when there was an OS called Plan 9 which took the idea of 
a single namespace more seriously than other OSes had. On Plan 9
everything was a file including network devices which on UNIX are
accessed with sockets and addresses. This concept ended up coming
back to UNIX in the form of the portalfs (not to mention procfs).

I think it is well worthwhile to work on this network endpoint
naming API even if it does not provide any immediate benefits
to the IPv6 transition.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Melinda Shore
On 12/5/08 9:59 AM, Dave Crocker [EMAIL PROTECTED] wrote:
 If you are citing BSD because you think that they made a bad design decision,
 then you are faulting them for something that was common in the networking
 culture at the time.

Not to go too far afield, but I think there's consensus
among us old Unix folk that the mistake that CSRG made
wasn't in the use of addresses but in having sockets
instead of using file descriptors.  This was actually
fixed in SysVRSomethingOrOther with the introduction of
a network pseudo-filesystem (open(/net/192.168.1.1, ... )
with ioctls but never got traction.

Melinda

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread michael.dillon
 It may 
 well be that having applications be more brittle would be an 
 acceptable cost for getting a viable multihoming approach 
 that address the route scalability problem. (All depends on 
 what more brittle really means.) But the only way to answer 
 such questions in a productive manner is to look pretty 
 closely at a complete architecture/solution together with 
 experience from real implementation/usage.

I agree.
For instance, the cited DNS problems often disrupt communication
when there is a problem free IP path between points A and B because
DNS relies on third parties to the packet forwarding path. But 3rd
parties can also be used to make things less brittle. For instance
if an application whose packet stream is being disrupted could call
on 3rd parties to check if there are alternative trouble-free paths
and then reroute the stream through a 3rd party proxy. If a strategy
like this is built-into the lower level network API, then an application
session could even survive massive network disruption as long as
it was cyclic.

I have in mind the way that Telebit modems used the PEP protocol 
to test and use the communication capability of each one of several
channels. As long as there was at least one channel available and the
periods of no-channel-availability were short enough, you could get
end-to-end data transfer. On a phone line which was unusable for fax
and in which the human voice was completely drowned out by static,
you could get end-to-end UUCP email transfer. A lot of work related
to this is being done by P2P folks these days, and I think there
is value in defining a better network API that incorporates some
of this work.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day


Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).


That had been the original plan and there were APIs that did that. 
But for some reason, the lunacy of the protocol specific sockets 
interface was preferred.  I know people who have been complaining 
about it for 25 years or thereabouts.


Some knew even then that the purpose of an API was to hide those 
sorts of dependencies.  There seems to be a history here of always 
picking the bad design.

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day

It is so reassuring when modern is a third of a  century old.

Sorry, but I am finding this new found wisdom just a little frustrating.


At 9:40 -0500 2008/12/05, Henning Schulzrinne wrote:

Wouldn't it have been nice if the de facto APIs in use today were more
along the lines of ConnectTo(DNS name, service/port).


This certainly seems to be the way that modern APIs are heading. 
If I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other 
scripting languages have a socket-like API that does not expose IP 
addresses, but rather connects directly to DNS names. (In many 
cases, they unify file and socket opening and specify the 
application protocol, to, so that one can do 
fopen(http://www.ietf.org;), for example.) Thus, we're well on our 
way towards the goal of making (some) application oblivious to 
addresses. I suspect that one reason for the popularity of these 
languages is exactly that programmers don't want to bother 
remembering when to use ntohs().


Henning
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread John Day
When our group put the first Unix system on the Net in the summer of 
1975, this is how we did it.  The hosts were viewed as part of the 
file system.  It was a natural way to do it.


At 15:01 + 2008/12/05, [EMAIL PROTECTED] wrote:

  IMO, one of the biggest challenges surrounding IPv6

 adoption/deployment is that all applications are potentially
 impacted, and each and everyone one of them needs to be
 explicitely enabled to work with IPv6.


Or NAT-PT needs to improved so that middleboxes can be inserted
into a network to provide instant v4-v6 compatibility. 


 That is a huge
 challenge, starting with the observation that there are a
 bazillion deployed applications that will NEVER be upgraded.


Yes, I agree that there is a nice market for such middleboxes.


 Boy, wouldn't it be nice of all we had to do was IPv6-enable
 the underlying network and stack (along with key OS support
 routines and
 middleware) and have existing apps work over IPv6, oblivious
 to IPv4 vs. IPv6 underneath.


Middleboxes can come close to providing that.


 Wouldn't it have been nice if the de facto APIs in use today
 were more along the lines of ConnectTo(DNS name, service/port).


I don't know if nice is the right word. It would be interesting
and I expect that there would be less challenges because we would
have had a greater focus on making DNS (or something similar) more
reliable. It's not too late to work on this and I think that it
is healthy for multiple technologies to compete on the network.
At this point it is not clear that IPv6 will last for more than
50 years or so. If we do work on standardizing a name-to-name
API today, then there is the possibility that this will eventually
prevail over the IPv6 address API.

Way back when there was an OS called Plan 9 which took the idea of
a single namespace more seriously than other OSes had. On Plan 9
everything was a file including network devices which on UNIX are
accessed with sockets and addresses. This concept ended up coming
back to UNIX in the form of the portalfs (not to mention procfs).

I think it is well worthwhile to work on this network endpoint
naming API even if it does not provide any immediate benefits
to the IPv6 transition.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day
Speak for yourself David.  These problems have been well understood 
and discussed since 1972.  But you are correct, that there were still 
a large unwashed that didn't and I am still not sure why that was. 
This seems to be elementary system architecture.



At 6:59 -0800 2008/12/05, Dave CROCKER wrote:

Thomas Narten wrote:

And, if one wants to look back and see could we have done it
differently, go back to the BSD folk that came up with the socket
API. It was designed to support multiple network stacks precisely
because at that point in time, there were many, and TCP/IP was
certainly not pre-ordained. But that API makes addresses visible to
APIs. And it is widely used today.



Thomas,

If you are citing BSD merely as an example of a component that 
imposes knowledge of addresses on upper layers, then yes, it does 
make a good, concrete example.


If you are citing BSD because you think that they made a bad design 
decision, then you are faulting them for something that was common 
in the networking culture at the time.


People  -- as in end users, as in when they were typing into an 
application -- commonly used addresses in those days, and hostnames 
were merely a preferred convenience.  (Just to remind us all, this 
was before the DNS and the hostname table was often out of date.)


Worse, we shouldn't even forgive them/us by saying something like 
we didn't understand the need for name/address split, back then 
because it's pretty clear from the last 15 years of discussion and 
work that, as a community, we *still* don't.  (The Irvine ring was 
name-based -- 1/4 of the real estate on its network card was devoted 
to the name table -- but was a small LAN, so scaling issues didn't 
apply.)


d/

ps. As to your major point, that having apps de-coupled from 
addresses would make a huge difference, boy oh boy, we are certainly 
in agreement there...

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:
 Keith Moore [EMAIL PROTECTED] writes:
 
 Just think how much easier the IPv4 to IPv6 transition would have
 been if nothing above the IP layer cared exactly what an IP
 address looks like or how big it is.
 
 It wouldn't have made much difference at all,
 
 Wow. I find this statement simply astonishing.
 
 IMO, one of the biggest challenges surrounding IPv6
 adoption/deployment is that all applications are potentially impacted,
 and each and everyone one of them needs to be explicitely enabled to
 work with IPv6. That is a huge challenge, starting with the
 observation that there are a bazillion deployed applications that will
 NEVER be upgraded.

There were also a bazillion deployed applications that would never be
upgraded to deal with Y2K.  Somehow people managed.  But part of how
they managed was by replacing some applications rather than upgrading them.

I certainly won't argue that it's not a significant challenge to edit
each application, recompile it, retest it, update its documentation,
educate tech support, and release a new version.   But you'd have all of
those issues with moving to IPv6 even if we had already had a socket API
in place where the address was a variable-length, mostly opaque, object.

Consider also that the real barrier to adapting many applications to
IPv6 (and having them work well) isn't the size of the IPv6 address, or
adapting the program to use sockaddr_in6 and getaddrinfo() rather than
sockaddr_in and gethostbyname().  It's figuring out what it takes to get
the application to work sanely in a world consisting of a mixture of
IPv4 and IPv6, IPv4 private addresses and global addresses and maybe
linklocal addresses (useful on ad hoc networks), IPv6 ULAs and global
addresses and maybe linklocal addresses, the fact that 6to4 traffic is
sometimes blocked (so v6 connections time out), and that 6to4 relay
routers often cause IPv6 connections to work more poorly than native
IPv4 connections, NATs, and so forth.

(size *is* an issue for applications that do referrals, and those are
important cases, but the vast majority of apps don't do that.)

And at least from where I sit, almost all of the applications I use
already support IPv6.  (I realize that's not true for everybody, but it
also tells me that it's feasible.)   From where I sit, the support
that's missing is in the ISPs, and the SOHO routers, and in various
things that block 6to4.  I understand from talking to others that
support is also lagging in firewalls and traffic monitors needed by
enterprise networks.

 Boy, wouldn't it be nice of all we had to do was IPv6-enable the
 underlying network and stack (along with key OS support routines and
 middleware) and have existing apps work over IPv6, oblivious to IPv4
 vs. IPv6 underneath.

Sure it would have been nice.  But for that to have happened would have
required a lot more than having the API treat addresses as opaque
objects of arbitrary size.  It would have required that IPv4 support
variable length addresses in all hosts and routers so that there would
have been no need for applications to try to deal with a mixture of IPv4
and IPv6 hosts that can't talk directly to one another.   It would have
required an absence of NATs so that apps wouldn't need to know how to
route around them.  It would have required that apps be able to be
unaware of the IPv6 address architecture, and for them to not need to do
intelligent address selection, which basically would have required
solving the routing scalability problem.

Basically if we had had all of that stuff in place in the early 1990s,
we would never have needed to do a forklift upgrade of IPv4 - the net
would have evolved approximately as gracefully as it did with CIDR.

 And, if one wants to look back and see could we have done it
 differently, go back to the BSD folk that came up with the socket
 API. It was designed to support multiple network stacks precisely
 because at that point in time, there were many, and TCP/IP was
 certainly not pre-ordained. But that API makes addresses visible to
 APIs. And it is widely used today.
 
 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).

No, because one of two things would have happened:

1. the defacto APIs would have long since been abandoned in favor of
sockets APIs that were more flexible at letting apps deal with various
kinds of network brain damage, or

2. if the defacto APIs were the only ones available on most platforms,
then we wouldn't have any of the applications that we have today, that
manage to get around NAT.  we'd be stuck with email and
everything-else-over-HTTP, with all servers constrained to be in the
core, and prime IP real estate being even more expensive than it is now.

--

Having said that, I'll grant that every barrier to IPv6 adoption is
significant.  The reason is that moving to IPv6 isn't just a matter of
flipping switches at any level.  It's one thing for 

Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:
 Keith Moore [EMAIL PROTECTED] writes:
 
 So it's not a question of whether DNS is less reliable than IP (it is),
 or even whether the reliability of DNS + IP is less than that of IP
 alone (it is).  It's a question of whether increasing reliance on DNS by
 trying to get apps and other things to use DNS names exclusively, makes
 those apps and other things less reliable.
 
 No. Your argument seems to be because relying even more on DNS than
 we do today makes things more brittle, BAD, BAD BAD, we cannot go
 there.

My argument is that if you really want that sort of approach to work you
need to concentrate on making DNS more reliable and better suited to
this kind of approach, and on getting people to think of DNS differently
than they do now - rather than just talking in terms of changing the
API, which is the easy part.

 The more relevant engineering question is whether the benefits of such
 an approach outweigh the downsides. Sure there are downsides. But
 there are also real potential benefits. 

Mumble.  Years ago, I worked out details of how to build a very scalable
distributed system (called SNIPE) using a DNS-like service (but a lot
more flexible in several ways) to name endpoints and associate the names
with metadata about those endpoints, including their locations.  So I
don't need to be convinced that there are potential benefits.  But that
exercise also gave me an appreciation for the difficulties involved.
And for that exercise I allowed myself the luxury of defining my own
naming service rather than constraining myself to use DNS.  It would
have been much more difficult, though perhaps not impossible, to make
that kind of system work with DNS.

 But the only way to answer such questions in a
 productive manner is to look pretty closely at a complete
 architecture/solution together with experience from real
 implementation/usage.

You certainly need a more complete architecture before you can evaluate
it at all.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
John Day wrote:

 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).
 
 That had been the original plan and there were APIs that did that. But
 for some reason, the lunacy of the protocol specific sockets interface
 was preferred.  

About the time I wrote my second network app (circa 1986) I abstracted
all of the connection establishment stuff (socket, gethostbyname, bind,
connect) into a callable function so that I wouldn't have to muck with
sockaddrs any more.  And I started trying to use that function in
subsequent apps.  What I generally found was that I had to change that
function for every new app, because there were so many cases for which
merely connecting to port XX at the first IP address corresponding to
hostname YY that accepted a connection, was not sufficient for the
applications I was writing.  Now it's possible that I was writing
somewhat unusual applications  (e.g. things that constrained the source
port to be  1024 and which therefore required the app to run as root
initially and then give up its privileges, or SMTP clients for which MX
processing was necessary) but that's nevertheless what I experienced.

These days the situation is similar but I'm having to deal with a
mixture of v4 and v6 peers, or NAT traversal, or brain-damage in
getaddrinfo() implementations, or bugs in the default address selection
algorithm.

With so much lunacy in the how the net works these days, I regard a
flexible API as absolutely necessary for the survival of applications.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Henning Schulzrinne wrote:


 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).
 
 This certainly seems to be the way that modern APIs are heading. If
 I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other scripting
 languages have a socket-like API that does not expose IP addresses, but
 rather connects directly to DNS names. 

and yet, people wonder why so many network applications are still
written in C, despite all of the security issues associated with weak
typing, explicit memory management, and lack of bounds checking on array
references.

(people also need to realize that using a modern API makes it _harder_
to get an application to work well in a mixed IPv4/IPv6 environment.)

 Thus, we're well on
 our way towards the goal of making (some) application oblivious to
 addresses. 

and we're also well on our way towards the goal of having everything run
over HTTP.

 I suspect that one reason for the popularity of these languages is
 exactly that programmers don't want to bother remembering when to
 use ntohs().

probably so.  I can't exactly blame them for that.

Keith

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Dave CROCKER



John Day wrote:
Speak for yourself David.  These problems have been well understood and 
discussed since 1972.  But you are correct, that there were still a 
large unwashed that didn't and I am still not sure why that was. This 
seems to be elementary system architecture.



John,

After your or I or whoever indulges in our flash of brilliant insight, it is the 
unwashed who do all the work.


So I was careful to refer to the community rather than claim that no one at 
all understand the issue.


I measure the community in terms of that pesky rough consensus construct and 
particularly in terms of running code.  Even in terms of the much more relaxed 
measure, namely mindshare, the community reflects no clear consensus on the 
matter of name-vs-address split, beyond now believing that we should do more of 
it.


We are only beginning to see broader use of the distinction between transient, 
within-session naming, for merging data coming in from alternate paths, versus 
global, persistent naming, for initial rendez vous.


d/

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day

In other words the failure of a university education.


At 8:46 -0800 2008/12/05, Dave CROCKER wrote:

John Day wrote:
Speak for yourself David.  These problems have been well understood 
and discussed since 1972.  But you are correct, that there were 
still a large unwashed that didn't and I am still not sure why that 
was. This seems to be elementary system architecture.



John,

After your or I or whoever indulges in our flash of brilliant 
insight, it is the unwashed who do all the work.


So I was careful to refer to the community rather than claim that 
no one at all understand the issue.


I measure the community in terms of that pesky rough consensus 
construct and particularly in terms of running code.  Even in terms 
of the much more relaxed measure, namely mindshare, the community 
reflects no clear consensus on the matter of name-vs-address split, 
beyond now believing that we should do more of it.


We are only beginning to see broader use of the distinction between 
transient, within-session naming, for merging data coming in from 
alternate paths, versus global, persistent naming, for initial 
rendez vous.


d/

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread John Day
As we all know, all attempts to turn a sow's ear into a silk purse, 
generally meet with failure.


You can only cover up so much.  I have written (or tried to write) 
too many device emulators and every time it is the same lesson.  ;-)


We use to have this tag line, when you found that pesky bug and of 
course it was staring you right in the face all the time, someone 
would say, Well, you know . . . if you don't do it right, it won't 
work.  ;-)


We seem to have that problem in spades.  ;-)


At 11:29 -0500 2008/12/05, Keith Moore wrote:

John Day wrote:


 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).


 That had been the original plan and there were APIs that did that. But
 for some reason, the lunacy of the protocol specific sockets interface
 was preferred. 


About the time I wrote my second network app (circa 1986) I abstracted
all of the connection establishment stuff (socket, gethostbyname, bind,
connect) into a callable function so that I wouldn't have to muck with
sockaddrs any more.  And I started trying to use that function in
subsequent apps.  What I generally found was that I had to change that
function for every new app, because there were so many cases for which
merely connecting to port XX at the first IP address corresponding to
hostname YY that accepted a connection, was not sufficient for the
applications I was writing.  Now it's possible that I was writing
somewhat unusual applications  (e.g. things that constrained the source
port to be  1024 and which therefore required the app to run as root
initially and then give up its privileges, or SMTP clients for which MX
processing was necessary) but that's nevertheless what I experienced.

These days the situation is similar but I'm having to deal with a
mixture of v4 and v6 peers, or NAT traversal, or brain-damage in
getaddrinfo() implementations, or bugs in the default address selection
algorithm.

With so much lunacy in the how the net works these days, I regard a
flexible API as absolutely necessary for the survival of applications.

Keith


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Rémi Després

Christian Vogt  -  le (m/j/a) 12/4/08 10:26 AM:

In any case, your comment is useful input, as it shows that calling the
proposed stack architecture in [1] hostname-oriented may be wrong.
Calling it service-name-oriented -- or simply name-oriented -- may
be more appropriate.  Thanks for the input.

Full support for the idea of a *name-oriented architecture*.

In it, the locator-identifier separation principle applies naturally: 
names are the identifiers; addresses, or addresses plus ports,  are the 
locators.


Address plus port locators are tneeded to reach applications in hosts 
that have to share their IPv4 address with other hosts ( e.g. behind a 
NAT with configured port-forwarding.)


*Service-names* are the existing tool to advertise address plus port 
locators, and and to permit efficient multihoming because, in *SRV 
records* which are returned by the DNS to service-name queries:
- several  locators  can be received for one name, possibly with a mix 
of IPv4 and IPv6

- locators can include port numbers
- priority and weight parameters of locators provide for backup and load 
sharing control.


IMO, service names and SRV records  SHOULD be supported asap in all 
resolvers (in addition to host names and A/ records that they 
support today).

Any view on this?

Regards,
RD




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Andrew Sullivan
On Fri, Dec 05, 2008 at 09:22:39AM -0500, Keith Moore wrote:
 
 but you're really missing the point, which is that DNS fails a lot.
 
 note that DNS failures aren't all with the authoritative servers -
 they're often with caches, resolver configuration, etc.

Before the thread degenerates completely into DNS is not reliable,
Is too pairs of messages, I'd like to ask what we can do about this.

It seems to me true, from experience and from anecdote, that DNS out
at endpoints has all manner of failure modes that have little to do
with the protocol and a lot to do with decisions that implementers and
operators made, either on purpose or by accident. 

I anticipate that the gradual deployment of DNSSEC (as well as various
other forgery resilience techniques) will expose many of those
failures in the nearish future.

This suggests to me that there will be an opportunity to improve some
of the operations in the wild, so that actually broken implementations
are replaced and foolish or incompetent administration gets
corrected, if only to get things working again.  It'd be nice if we
had some practical examples to analyse and for which we could suggest
repairs so that there would be a convenient cookbook-style reference
for the perplexed.  

If you have a cache of these examples, I'd be delighted to see them.

A

-- 
Andrew Sullivan
[EMAIL PROTECTED]
Shinkuro, Inc.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Tony Finch
On Fri, 5 Dec 2008, Keith Moore wrote:
 Stephane Bortzmeyer wrote:
 
  For a good part, this is already done. You cannot use IP addresses for
  many important applications (the Web because of virtual hosting and
  email because most MTA setup prevent it).

 you're generalizing about the entire Internet from two applications?

It's a general truth that application protocols need a layer of addressing
of their own, and it isn't sufficient to just identify the host the
application is running on. The special cases are the applications that do
not need extra addressing.

In the cases where protocols do not support their own addressing
architecture, we have usually been forced to retro-fit it or bodge around
it. For example, the HTTP Host: header, the TLS server_name extension, the
subjectAltName x.509 field, the use of full email addresses instead of
usernames as login names for IMAP and POP. XMPP got this right.

Tony.
-- 
f.anthony.n.finch  [EMAIL PROTECTED]  http://dotat.at/
VIKING NORTH UTSIRE SOUTH UTSIRE FORTIES CROMARTY FORTH EASTERLY OR
NORTHEASTERLY 5 OR 6, OCCASIONALLY 7 OR GALE 8 AT FIRST EXCEPT IN NORTH UTSIRE
AND FORTH, BACKING NORTHERLY OR NORTHWESTERLY AND DECREASING 4 AT TIMES.
MODERATE OR ROUGH, OCCASIONALLY VERY ROUGH AT FIRST EXCEPT IN NORTH UTSIRE AND
FORTH. SQUALLY SHOWERS. MODERATE OR GOOD.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread David W. Hankins
On Fri, Dec 05, 2008 at 08:46:48AM -0800, Dave CROCKER wrote:
 John Day wrote:
 discussed since 1972.  But you are correct, that there were still a large 
 unwashed that didn't and I am still not sure why that was. This seems to 

 After your or I or whoever indulges in our flash of brilliant insight, it 
 is the unwashed who do all the work.

I'm assuming you are using the term 'unwashed' to refer to the simple
act of bathing, rather than as in my background I understand it as a
metaphor for Christian baptism.

Surely neither Mr. Crocker nor Mr. Day are referring to IETF baptismal
practices?

Could either of you two elaborate on why you think that bathing (or
the evident lack thereof) is at all relevant to Internet standards?

I'm aware that in the ~400AD's there was actually quite a lot of
(religio) philosophical debate over the practice of bathing (which I
rather would think we'd put behind us after 1600 years), but I've
never heard it said that good engineers do (or don't) bathe.

-- 
Ash bugud-gul durbatuluk agh burzum-ishi krimpatul.
Why settle for the lesser evil?  https://secure.isc.org/store/t-shirt/
-- 
David W. HankinsIf you don't do it right the first time,
Software Engineeryou'll just have to do it again.
Internet Systems Consortium, Inc.   -- Jack T. Hankins


pgpUdvlx8jwJb.pgp
Description: PGP signature
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

 There were also a bazillion deployed applications that would never be
 upgraded to deal with Y2K.  Somehow people managed.  But part of how
 they managed was by replacing some applications rather than
 upgrading them.

There were clear business motivations for ensuring that apps survived
Y2K appropriately. There is no similar brick wall with IPv4 address
exhaustion.

 I certainly won't argue that it's not a significant challenge to edit
 each application, recompile it, retest it, update its documentation,
 educate tech support, and release a new version.   But you'd have all of
 those issues with moving to IPv6 even if we had already had a socket API
 in place where the address was a variable-length, mostly opaque,
 object.

I didn't say a better API would have variable-length, mostly opaque
objects. I think others have already chimed in that hiding the
details from the applications is the key to a better API. 

And I understand that Apple has a more modern API, and it made
upgrading their applications to support IPv6 that much easier.

 Consider also that the real barrier to adapting many applications to
 IPv6 (and having them work well) isn't the size of the IPv6 address, or
 adapting the program to use sockaddr_in6 and getaddrinfo() rather than
 sockaddr_in and gethostbyname().

Actually, the real barrier to upgrading applications is lack of
incentive. No ROI.  It's not about technology at all. It's about
business cases.

Wouldn't it be nice if existing apps could run over IPv6 (perhaps in a
degraded form) with no changes? That would change the challenges of
IPv6 deployment rather significantly.

 And at least from where I sit, almost all of the applications I use
 already support IPv6.  (I realize that's not true for everybody, but it
 also tells me that it's feasible.)

Huge numbers of important applications in use today do not support
IPv6. Think beyond email, ssh and a browser. Think business
applications. Talk to someone who works for a software company about
the challenges they have upgrading their software to support IPv6 (or
fixing bugs, or doing any work to old software). It's less about
technology than business cases.

Case in point. There is apparently still significant amounts of
deployed software that cannot handle TLDs of more than 3 characters in
length. That means DNS names with a TLD of .info or .name don't work
in all places and can't be used reliably. I heard just this week that
yahoo can't handle email with .info names. .info has existed as a TLD
for 7 years. Fixing this is not a technical problem, it's a business
problem (i.e., incenting the parties that need to upgrade their
software).

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:
 Keith Moore [EMAIL PROTECTED] writes:
 
 There were also a bazillion deployed applications that would never be
 upgraded to deal with Y2K.  Somehow people managed.  But part of how
 they managed was by replacing some applications rather than
 upgrading them.
 
 There were clear business motivations for ensuring that apps survived
 Y2K appropriately. There is no similar brick wall with IPv4 address
 exhaustion.

more like a padded wall with embedded spikes?

 Actually, the real barrier to upgrading applications is lack of
 incentive. No ROI.  It's not about technology at all. It's about
 business cases.

I suppose it follows that people don't actually need those applications
to work in order to continue doing business... in which case, of course
they shouldn't upgrade them.

Either that, or the people who are making these decisions don't really
understand what's important to keeping their businesses running... and
those businesses will fail.

(not that this helps IPv6 any, of course)

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Thomas Narten
Keith Moore [EMAIL PROTECTED] writes:

 Thomas Narten wrote:
  Keith Moore [EMAIL PROTECTED] writes:
  
  There were also a bazillion deployed applications that would never be
  upgraded to deal with Y2K.  Somehow people managed.  But part of how
  they managed was by replacing some applications rather than
  upgrading them.
  
  There were clear business motivations for ensuring that apps survived
  Y2K appropriately. There is no similar brick wall with IPv4 address
  exhaustion.

 more like a padded wall with embedded spikes?

More like a swamp, with steam rising from dark looking places. But
still a fair amount of firm ground if you can stay on a narrow and
careful path, though it's hard to tell because one can't see very far
and the swamp looks very big...

But looking back, we are already pretty far in the swamp, so it's not
clear exactly what is changing or how much worse things can or will
get continuing the current trajectory, so why not continue on the
current course just a little bit longer...

  Actually, the real barrier to upgrading applications is lack of
  incentive. No ROI.  It's not about technology at all. It's about
  business cases.

 I suppose it follows that people don't actually need those applications
 to work in order to continue doing business... in which case, of course
 they shouldn't upgrade them.

Keith, this is umbelievably simplisitic logic. Try the following
reality check. The applications run today. Important things would
break if they were turned off. But there is no money to pay for an
upgrade (by the customer) because the budget is only so big, and the
current budget was more focussed on beefing up security and trying to
get VoIP running. Or, the vendor doesn't have an upgrade because the
product is EOL, and the customer can't afford to buy a replacement for
it (again for a number of different reasons). Or, the vendor does have
an upgraded product, but it requires running the latest version of the
product, which doesn't run on the OS release you happen to be running
(and can't change for various reasons), and would require new hardware
on top of things because the new product/OS is a memory pig, or was
rewritten in Java, etc., etc.

 Either that, or the people who are making these decisions don't really
 understand what's important to keeping their businesses running... and
 those businesses will fail.

They may understand very well. But a simple cost/benefit analysis (in
terms of $$ and/or available technical resources) says they can't
afford to upgrade.

Happens all the time. Why do you think people run old software for
years and years and years?

Thomas
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Keith Moore
Thomas Narten wrote:

 I suppose it follows that people don't actually need those applications
 to work in order to continue doing business... in which case, of course
 they shouldn't upgrade them.
 
 Keith, this is umbelievably simplisitic logic. 

This whole discussion is unbelievably simplistic logic.  Insults don't
make the logic any better.

 The applications run today. Important things would
 break if they were turned off. But there is no money to pay for an
 upgrade (by the customer) because the budget is only so big, and the
 current budget was more focussed on beefing up security and trying to
 get VoIP running. Or, the vendor doesn't have an upgrade because the
 product is EOL, and the customer can't afford to buy a replacement for
 it (again for a number of different reasons). Or, the vendor does have
 an upgraded product, but it requires running the latest version of the
 product, which doesn't run on the OS release you happen to be running
 (and can't change for various reasons), and would require new hardware
 on top of things because the new product/OS is a memory pig, or was
 rewritten in Java, etc., etc.

Yep.  I've seen it happen many times in various guises.  By now it is
widely understood that many things need maintenance budgets - e.g.
buildings, vehicles, computer and networking hardware.  And we actually
have a decent sense of how much to budget for those things.  But we
don't have a widely-understood idea of what it costs to maintain
software, particularly networking software.  There's both a strong
tendency to believe that software is fixed-cost and an increasing
tendency to fire in-house programmers and push things like software
maintenance to third parties - which is to say, they don't get paid for.
 But when the Internet keeps changing (for many more reasons than IPv4
address space exhaustion) you can't expect the software to stay static
and keep working well.

 Either that, or the people who are making these decisions don't really
 understand what's important to keeping their businesses running... and
 those businesses will fail.
 
 They may understand very well. But a simple cost/benefit analysis (in
 terms of $$ and/or available technical resources) says they can't
 afford to upgrade.
 
 Happens all the time. Why do you think people run old software for
 years and years and years?

Most likely, because they aren't properly estimating cost and/or
benefit, or because they are too focused on short-term costs and
ignoring medium- and long-term costs.

Keith
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The internet architecture

2008-12-05 Thread Hallam-Baker, Phillip
Yes, that is indeed where the world is going.
 
My point is that it would be nice if the IETF had a means of guiding the 
outcome here through an appropriate statement from the IAB.
 
Legacy applications are legacy. But we can and should push new applications to 
use SRV based connections or some elaboration on the same principle. Even with 
legacy applications, MX was a retrofit to SMTP.



From: [EMAIL PROTECTED] on behalf of Henning Schulzrinne
Sent: Fri 12/5/2008 9:40 AM
To: Thomas Narten; IETF discussion list; [EMAIL PROTECTED]
Subject: Re: The internet architecture





 Wouldn't it have been nice if the de facto APIs in use today were more
 along the lines of ConnectTo(DNS name, service/port).

This certainly seems to be the way that modern APIs are heading. If 
I'm not mistaken, Java, PHP, Perl, Tcl, Python and most other 
scripting languages have a socket-like API that does not expose IP 
addresses, but rather connects directly to DNS names. (In many cases, 
they unify file and socket opening and specify the application 
protocol, to, so that one can do fopen(http://www.ietf.org 
http://www.ietf.org/ ), for 
example.) Thus, we're well on our way towards the goal of making 
(some) application oblivious to addresses. I suspect that one reason 
for the popularity of these languages is exactly that programmers 
don't want to bother remembering when to use ntohs().

Henning
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-05 Thread Marc Manthey


Am 05.12.2008 um 11:25 schrieb Rémi Després:


Christian Vogt  -  le (m/j/a) 12/4/08 10:26 AM:
In any case, your comment is useful input, as it shows that calling  
the

proposed stack architecture in [1] hostname-oriented may be wrong.
Calling it service-name-oriented -- or simply name-oriented --  
may

be more appropriate.  Thanks for the input.

Full support for the idea of a *name-oriented architecture*.

In it, the locator-identifier separation principle applies  
naturally: names are the identifiers; addresses, or addresses plus  
ports,  are the locators.


Address plus port locators are tneeded to reach applications in  
hosts that have to share their IPv4 address with other hosts ( e.g.  
behind a NAT with configured port-forwarding.)


*Service-names* are the existing tool to advertise address plus port  
locators, and and to permit efficient multihoming because, in *SRV  
records* which are returned by the DNS to service-name queries:
- several  locators  can be received for one name, possibly with a  
mix of IPv4 and IPv6

- locators can include port numbers
- priority and weight parameters of locators provide for backup and  
load sharing control.


IMO, service names and SRV records  SHOULD be supported asap in all  
resolvers (in addition to host names and A/ records that they  
support today).

Any view on this?


hello Rémi,

i totally agree with you in all points, from my perspective , there is  
no sufficent support
for identifying  and signing tools, like DNS TSIG whitch will be from  
by apples wide area bonjour


http://www.dns-sd.org/ServerSetup.html

I was following a interesting software project , BUT

quote :

On Linux (at least on Debian), you need the mDNSResponder package  
provided by

Apple on the Bonjour downloads page.  Unfortunately, Avahi doesn't yet
implement all of the API functions UIA needs.
---

So sahred secret  http://www.ietf.org/rfc/rfc2845.txt   are not  
implemented into avahi
 for wide area distribiution since 2 years. And Novell / SUSE seems  
to have no interest aswell.


just my 50 cents

regards

Marc

--
Marc Manthey 50672 Köln - Germany
Hildeboldplatz 1a
Tel.:0049-221-3558032
Mobil:0049-1577-3329231
mail: [EMAIL PROTECTED]
PGP/GnuPG: 0x1ac02f3296b12b4d
jabber :[EMAIL PROTECTED]
IRC: #opencu  freenode.net
twitter: http://twitter.com/macbroadcast
web: http://www.let.de

Opinions expressed may not even be mine by the time you read them, and  
certainly don't reflect those of any other entity (legal or otherwise).


Please note that according to the German law on data retention,  
information on every electronic information exchange with me is  
retained for a period of six months.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The internet architecture

2008-12-04 Thread Christian Vogt


Keith -

thanks for the careful elaboration.  I agree with you that it is very
important to consider the DNS-related issues that you identified when
designing methods that make new use of the DNS.  But note that the
hostname-oriented network protocol stack architecture, as proposed in
[1], does not change the semantics of DNS hostnames.  It continues using
DNS hostnames as an alias for a set of IP addresses.  And as today, for
this is does not matter whether the IP addresses in a set are from a
single host or from multiple hosts.  I therefore do consider the use of
the DNS in [1] feasible.

In a nutshell, the hostname-oriented stack architecture functions like
this:  It provides a new, backwards-compatible API through which
applications can initiate connections by specifying a pair of service
name and DNS hostname.  The service name is a well-known string
replacing the overloading of port numbers for the same purpose, and the
hostname maps to the set of IP addresses through which the service is
currently reachable. (There are further arguments to connection
initiation, but those are not relevant to this discussion.) Similarly, a
DNS hostname can be specified to filter incoming connections based on
the IP addresses that this hostname currently maps to. IP-address-
specific functions are centrally performed further down the stack so
that applications do not (necessarily) have to deal with them. This
includes hostname resolution, address translator traversal and
connectivity verification, address selection, as well as potentially
mobility and multi-homing support.

So the hostname-oriented stack architecture continues with the semantics
of DNS hostnames that we use today:  The IP addresses to which a
hostname points do not necessarily have to be of the same physical
machine, even though the terminology hostname may suggest that they
do.  This flexibility regarding the physical counterpart of a hostname
is actually an attractive feature:  It provides a convenient tool for
abstracting from the implementation of a particular service, which
generally is irrelevant to the host connecting to a service.  You are
right in that some applications may need to obtain a handle to a
specific service instance in order to be able to resume an intermitted
session.  And yes, a hostname-oriented stack architecture should permit
obtaining such a handle, just as it should support legacy applications
that want to know which IP address they are communicating with.

The three main benefits of the hostname-oriented stack architecture are
consequently as follows:

- Application programming becomes potentially easier.

- IP address changes, such as for mobility or multi-homing, do not
  disrupt ongoing sessions.  (This is the same advantage that
  identifier-locator split solutions provide.  Yet in the case of a
  hostname-oriented stack architecture, this is achieved without the
  extra complexity that a new level of indirection requires for
  mapping and security purposes, and which e.g. Mobile IP introduces
  through its concept of a stable IP home address.)

- Transition to IPv6 no longer affects applications.

The reason for all of these benefits is that applications use DNS
hostnames exclusively and do not have to deal with IP-address-related
functions.  And still, the semantics and the use of DNS hostnames remain
as they are today.  And I hope this resolves your concerns.

- Christian


[1] Christian Vogt:  Towards A Hostname-Oriented Network Protocol
Stack for Flexible Addressing in a Dynamic Internet,
http://users.piuha.net/chvogt/pub/2008/vogt-2008-hostname-oriented-stack.pdf



On Nov 30, 2008, Keith Moore wrote:


while it's true that IP addresses don't have the right semantics,
neither do DNS names.


What aspects of DNS semantics makes them inappropriate for this  
function?


I could have been a bit more precise and said that DNS names as
currently used with A, , MX, and SRV records don't have the right
semantics.  Part of the reason is that we don't distinguish between  
DNS

names that are used to name services (which may be implemented on
multiple hosts, each of which has one or more A/ records) and DNS
names that are used to name single hosts (which may have multiple A/ 

records for other reasons).  And the same DNS name may be used  
sometimes
as a host name (via A or  records, say when using ssh) and  
sometimes

as a service name (via MX or SRV records).

In practice DNS names are often sufficient for establishing an initial
association with a service instance (where we don't care which  
instance

we're associate with, as long as the name associates us with one
instance of that service), but they're not sufficient for referral
(where it actually matters which instance of a service we associate
with, and talk to).

Another part of the problem is that DNS names are not used  
consistently

from one protocol/service to another.  A DNS name can be a host, a
service, a collection of email 

Re: The internet architecture

2008-12-04 Thread Christian Vogt


John,

yep, I understand, and do fully agree to, your point that applications
are primarily interested in connecting to a particular service, which
makes it in general irrelevant which physical host is running the
service.  Hence it is in general unnecessary to name a specific physical
machine.  In fact, this reasoning is in line with the new network
protocol stack architecture proposed in [1]:

As I explained to Keith in my previous email, in the proposed stack
architecture, connections are initiated by specifying a remote service
instead of a remote host.  They are initiated using the combination of a
service name and a DNS hostname, where the hostname is, despite its
intuitive meaning of being specific to a physical machine, just a set of
IP addresses through which the service of interest can be reached.

Of course, it would alternatively be possible to directly embed service
semantics into DNS hostnames, similar to what we do today with SRV
records.  The result would then be a name, retrievable via the DNS, that
specifies a service in globally unique manner.  I am open to such
embedding, but the embedding would conceptual make no difference. There
are two reasons why I chose to separate service names from hostnames in
the initial design of the proposed stack architecture: First, due to the
different constraints for service names versus hostnames:  Only
hostnames can be selected freely; service names need to be well known.
Secondly, because service names and hostnames don't always go together:
To accept incoming connections, only a local service must be named, but
a local hostname is unnecessary.  And to filter incoming connections,
only a remote hostname is needed, but not a remote service name.

In any case, your comment is useful input, as it shows that calling the
proposed stack architecture in [1] hostname-oriented may be wrong.
Calling it service-name-oriented -- or simply name-oriented -- may
be more appropriate.  Thanks for the input.

- Christian


[1] Christian Vogt:  Towards A Hostname-Oriented Network Protocol
Stack for Flexible Addressing in a Dynamic Internet,
http://users.piuha.net/chvogt/pub/2008/vogt-2008-hostname-oriented-stack.pdf



On Dec 1, 2008, John Day wrote:

As we have known since the early 80s, naming the host has nothing to  
do

with the problem at hand.  It is sloppy and gets in the way of getting
it right.  Currently, domain name is a synonym for an IP-address.  The
IP-address names the interface and is largely redundant since the MAC
address does the same thing.

Naming the host is sometimes useful for network management purposes,  
but

really has nothing to do with the problem of forwarding or delivering
packets.  I realize there is a lot of sentimental attachment to the  
idea

of naming a host, but as I said that was a sloppy habit we got in very
early and for some reason sloppiness has prevailed.

The entity to be named is whatever the packets are delivered to.  
That
may reside on a host, but that fact is purely coincidental.  As long  
as

sentimentality trumps logic, it will be difficult to get it right, but
in the current climate it will be possible to publish a lot of papers.




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


  1   2   >