RE: Removing features

2003-10-15 Thread Michel Py
Kurtis,

> Kurt Erik Lindqvist wrote:
> There are a hell of a lot of traceroutes going on then...

As pointed out by Keith privately, traceroutes are not the only culprit.
Telnet to a host from a private IP and it does a reverse lookup on your IP,
etc. Basically everything that triggers a reverse lookup adds to the
pain, but if reverse lookup is configured correctly on the local DNS
server, it removes the pain for all apps as well.
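
To make the trigger concrete, here is a minimal Python sketch of what such
an app does under the hood (the address is made up; how long it stalls
depends entirely on the local resolver):

import socket

def who_is(addr="192.168.1.23"):
    try:
        # Sends a PTR query for 23.1.168.192.in-addr.arpa.  If no local
        # server answers for the RFC 1918 reverse zones, this stalls
        # until the resolver gives up -- the "pain" described above.
        name, _aliases, _addrs = socket.gethostbyaddr(addr)
        return name
    except socket.herror:
        # A quick negative answer is the painless case.
        return addr

print(who_is())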


> Now, why don't people do this?

Ask George Carlin.

Michel.




Re: Removing features

2003-10-15 Thread Keith Moore
On Wed, 15 Oct 2003 00:11:56 -0700
"Michel Py" <[EMAIL PROTECTED]> wrote:

> Kurtis,
> 
> > Kurt Erik Lindqvist wrote:
> > There are a hell of a lot of traceroutes going on then...
> 
> As pointed out by Keith privately, traceroutes are not the only culprit.
> Telnet to a host from a private IP and it does a reverse lookup on your IP,
> etc. Basically everything that triggers a reverse lookup adds to the
> pain, but if reverse lookup is configured correctly on the local DNS
> server, it removes the pain for all apps as well.

great.  now we'll have NAT boxes intercepting outgoing DNS traffic also.



RE: Removing features

2003-10-15 Thread Michel Py
> Keith Moore wrote:
> great.  now we'll have NAT boxes intercepting
> outgoing DNS traffic also.

That was not my point. My point was to have a DNS server in the inside
configured for reverse lookup of private IPs. What you mention would
help though.

Michel.




Re: Removing features

2003-10-15 Thread Valdis . Kletnieks
On Wed, 15 Oct 2003 10:26:17 EDT, Keith Moore said:

> great.  now we'll have NAT boxes intercepting outgoing DNS traffic also.

The really bad part is that they'll, on average, do as good a job of intercepting
DNS traffic as they do of filtering outbound 1918-sourced packets in general. After
all, the root DNS boxes shouldn't ever see a 1918 packet unless (a) some site isn't
egress filtering properly *and* (b) their ISP isn't ingress filtering at the edge.

Egress *and* ingress filtering.  Belt and suspenders design.  Too bad there are so
many sites that still manage to leave their fly open anyhow.
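
A toy sketch of the source-address check both filters are supposed to apply
(Python; the prefix list and names are illustrative, not taken from any
particular product):

import ipaddress

# Sources that should never leave the site (egress) nor be accepted
# from a customer (ingress): RFC 1918 plus link-local.
PRIVATE_SOURCES = [ipaddress.ip_network(n) for n in
                   ("10.0.0.0/8", "172.16.0.0/12",
                    "192.168.0.0/16", "169.254.0.0/16")]

def drop_at_border(src):
    """True if a packet with this source address should be filtered."""
    a = ipaddress.ip_address(src)
    return any(a in net for net in PRIVATE_SOURCES)

assert drop_at_border("10.1.2.3")          # belt (egress) or suspenders (ingress)
assert not drop_at_border("198.51.100.7")  # not an RFC 1918 source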




RE: Removing features

2003-10-15 Thread Jeroen Massar

Michel Py wrote:

> > Keith Moore wrote:
> > great.  now we'll have NAT boxes intercepting
> > outgoing DNS traffic also.
> 
> That was not my point. My point was to have a DNS server in the inside
> configured for reverse lookup of private IPs. What you mention would
> help though.

Which most people already have when configuring their local network,
as they set up a local DNS server. Usually NAT boxes also include
a DNS server btw. Even my Alcatel Speedtouch *ADSL modem* has one.
But I gladly use a much easier to configure bind of course ;)

People not configuring these DNS servers usually use their ISP's
DNS servers, and these should comply with AS112 practice, i.e. serve
empty versions of the rfc1918 zones and make themselves authoritative
for them. Afaik the latest bind distributions include at least setup
examples for rfc1918 addresses.

Shouldn't there be a BCP for such cases? That is, that ISPs should
have rfc1918/localhost/169.254.x.x zones in the DNS servers that
face their customers?
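
For concreteness, these are the reverse zones such a resolver would answer
locally instead of letting the queries leak out; a small Python sketch (the
zone list follows RFC 1918 plus link-local, the matching helper is just
illustrative):

# Reverse zones to serve locally (empty), per RFC 1918 + link-local.
LOCAL_ONLY_REVERSE_ZONES = (
    ["10.in-addr.arpa", "168.192.in-addr.arpa", "254.169.in-addr.arpa"]
    + ["%d.172.in-addr.arpa" % i for i in range(16, 32)]
)

def answered_locally(qname):
    """True if a PTR query name falls under one of the local-only zones."""
    q = qname.rstrip(".").lower()
    return any(q == z or q.endswith("." + z) for z in LOCAL_ONLY_REVERSE_ZONES)

assert answered_locally("1.2.16.172.in-addr.arpa.")
assert not answered_locally("1.2.0.192.in-addr.arpa.")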

Greets,
 Jeroen





Re: Removing features

2003-10-15 Thread Keith Moore
> > Keith Moore wrote:
> > great.  now we'll have NAT boxes intercepting
> > outgoing DNS traffic also.
> 
> That was not my point. My point was to have a DNS server in the inside
> configured for reverse lookup of private IPs. 

one of the most-frequently cited justifications for NAT is plug-and-play.
expecting people to set up their own DNS servers sort of nullifies that.



Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Valdis . Kletnieks
On Wed, 15 Oct 2003 12:48:37 EDT, Keith Moore said:
>
> I certainly don't believe "only" in rough consensus and running code -
> I also believe in explicit definition of goals and requirements,
> careful design by knowledgable experts, analysis, iterative
> specification, wide public review, etc.

Of course, proper design and development techniques increase the likelihood of
running code - and "proof by example running code" is always a good way to
achieve a consensus. 





Re: Removing features

2003-10-15 Thread John C Klensin


--On Wednesday, 15 October, 2003 11:58 -0400 Keith Moore 
<[EMAIL PROTECTED]> wrote:

>>> Keith Moore wrote:
>>> great.  now we'll have NAT boxes intercepting
>>> outgoing DNS traffic also.

>> That was not my point. My point was to have a DNS server in
>> the inside configured for reverse lookup of private IPs.

> one of the most-frequently cited justifications for NAT is
> plug-and-play. expecting people to set up their own DNS
> servers sort of nullifies that.
Keith, two observations...

(1) Yes, I think, and think we are in agreement, that this sort 
of thing digs the NAT hole even deeper.

(2) But the typical plug-and-play NAT, at least the ones I have 
run across, is preconfigured with the addresses to be used on 
the "inside" and contains (or is intimately paired with) a DHCP 
server that gives out those addresses.  Installing a DNS filter 
in the thing that would intercept PTR queries for that address 
range, or any 1918 address range, and respond to them in some 
"canned" way while passing other DNS queries out to the network 
as intended is not rocket science and certainly doesn't violate 
any plug-and-play arguments.
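
A rough sketch of that filter's decision logic (Python; the reply name is 
invented for illustration, and a real box would of course work on DNS wire 
format rather than strings):

import ipaddress

RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def handle(qname, qtype):
    """("answer", name) for PTR queries on RFC 1918 space, else ("forward", None)."""
    labels = qname.rstrip(".").split(".")
    if qtype == "PTR" and len(labels) == 6 and labels[-2:] == ["in-addr", "arpa"]:
        addr = ".".join(reversed(labels[:4]))          # e.g. 192.168.1.23
        if any(ipaddress.ip_address(addr) in n for n in RFC1918):
            # Canned, locally generated reply; nothing leaks upstream.
            return ("answer", "host-%s.internal." % addr.replace(".", "-"))
    return ("forward", None)

print(handle("23.1.168.192.in-addr.arpa.", "PTR"))   # answered by the box
print(handle("4.3.2.1.in-addr.arpa.", "PTR"))        # passed out as intended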

Now, whether that interception and diversion of DNS queries is a 
moral activity is a different question.  But, if you believe 
strongly enough that having a NAT in the first place puts one 
into a serious state of sin, then the marginal sin of 
intercepting DNS queries for private addresses, to prevent the 
sort of problems those queries cause, seems to me to be fairly 
small.

"where are we going and what are we doing in this handbasket?"

  john




rfc1918 impact

2003-10-15 Thread Leif Johansson
We should keep nice and descriptive subject-lines...

Michel Py wrote:



| etc. Basically everything that triggers a reverse lookup adds to the
| pain, but if reverse lookup is configured correctly on the local DNS
A lot of the arguments seem to contain the phrase "If <something> is
configured correctly then ...". Now what does that teach us?
Cheers Leif



Re: Removing features

2003-10-15 Thread Keith Moore
> Now, whether that interception and diversion of DNS queries is a 
> moral activity is a different question.  But, if you believe 
> strongly enough that having a NAT in the first place puts one 
> into a serious state of sin, then the marginal sin of 
> intercepting DNS queries for private addresses, to prevent the 
> sort of problems those queries cause, seems to me to be fairly 
> small.

I probably agree.  But I guess my question is "where does it end?"

That is, how many things do we change elsewhere in the network in order
to minimize the operational problems that crop up with NATs?  What is
the cost of those changes, and how much do they impair the ability of
the network to support applications?




Re: Removing features

2003-10-15 Thread John C Klensin


--On Wednesday, 15 October, 2003 13:45 -0400 Keith Moore 
<[EMAIL PROTECTED]> wrote:

>> Now, whether that interception and diversion of DNS queries
>> is a moral activity is a different question.  But, if you
>> believe strongly enough that having a NAT in the first place
>> puts one into a serious state of sin, then the marginal sin
>> of intercepting DNS queries for private addresses, to
>> prevent the sort of problems those queries cause, seems to
>> me to be fairly small.

> I probably agree.  But I guess my question is "where does it
> end?"

> That is, how many things do we change elsewhere in the network
> in order to minimize the operational problems that crop up
> with NATs?  What is the cost of those changes, and how much do
> they impair the ability of the network to support applications?
That, it seems to me, is a pragmatic way to state the key 
architectural question.  A different version of it, borrowed 
from a different debate, is how much a particular new capability 
is permitted to force deployed systems or applications code to 
change the way they are doing things in the interest of the 
innovation contained in that new capability.

john







rfc1918 impact

2003-10-15 Thread Leif Johansson


|
| (2) But the typical plug-and-play NAT, at least the ones I have run
| across, is preconfigured with the addresses to be used on the "inside"
| and contains (or is intimately paired with) a DHCP server that gives out
| those addresses.  Installing a DNS filter in the thing that would
| intercept PTR queries for that address range, or any 1918 address range,
| and respond to them in some "canned" way while passing other DNS queries
| out to the network as intended is not rocket science and certainly
| doesn't violate any plug-and-play arguments.
So where is the leak coming from? If what people claim is true and
most, if not all, NAT boxes are configured with inside DNS, filtering,
and extra cheese, where, I ask you, do all of those root-zone requests
and other rfc1918 leaks come from?
Cheers Leif



Re: rfc1918 impact

2003-10-15 Thread John C Klensin


--On Wednesday, 15 October, 2003 22:10 +0200 Leif Johansson 
<[EMAIL PROTECTED]> wrote:

| (2) But the typical plug-and-play NAT, at least the ones I
| have run across, is preconfigured with the addresses to be
| used on the "inside" and contains (or is intimately paired
| with) a DHCP server that gives out those addresses.
| Installing a DNS filter in the thing that would intercept PTR
| queries for that address range, or any 1918 address range,
| and respond to them in some "canned" way while passing other
| DNS queries out to the network as intended is not rocket
| science and certainly doesn't violate any plug-and-play
| arguments.
> So where is the leak coming from? If what people claim is true and
> most, if not all, NAT boxes are configured with inside DNS, filtering,
> and extra cheese, where, I ask you, do all of those root-zone requests
> and other rfc1918 leaks come from?
Leif,

I was speaking to the architectural issue, not the deployment 
one.  None of the three plug and play boxes I have here with NAT 
capability has any inside DNS capability (either enabled by 
default or available to be turned on).

It does sound like a recommendation to the effect of "if you are 
going to use NAT, or construct a NAT box, then an 'inside DNS' 
mechanism" would be a reasonable idea.  And I would assume it 
would be an even better one if it made clear what the preferred 
way was to do an "inside DNS" -- I think there might be a couple 
of different ways to do it, and some might be less reprehensible 
than the others.

   john




Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Melinda Shore
It's an interesting document, but it looks to me a bit much
like a problem description and I'm not sure how it relates
to other existing work (the problem description document in
the problem working group, most obviously).  I particularly
liked the discussion of the IETF mission - it could provide
the basis for tackling one problem that's been raised on
a number of occasions, which is that the organization doesn't
have a clear sense of mission or vision.  Even though in the
first paragraph of the "Social Dynamics" section you say that
"As they are neither good nor bad, it is not appropriate to
call them "problems;" rather think of them as social forces
and dynamics," a number of them really are framed as problems.
Indeed, it would be hard to define some way in which statements like
"making integration more difficult" are not problem statements.
I'd really like to see the document, which I think has good
fundamentals, refocused on mission and goals.
Melinda




Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Scott W Brim
On Tue, Oct 14, 2003 11:48:10PM +0200, Harald Tveit Alvestrand allegedly wrote:
> As part of the discussions about change process within
> the IETF, the IESG has come to believe that a somewhat longer statement of 
> the IETF's mission and social dynamics might provide useful context for the 
> community's discussion.  As part of that, we'd like to put the following 
> document out for feedback.
> 
> It incorporates lots of ideas and some text from existing RFCs
> and IETF web pages, but is more focused on change than those have
> been.  We hope it captures a sense of the context of the work of
> improving the IETF, by capturing some of the social dynamics which
> have been an implicit part of the IETF's work and style over the years.

OK, but first, it doesn't clarify the mission, or the social contract.
At most it makes a couple vague statements after describing some general
problems.  It looks like the IESG has some sense of where the
problem-statement/solutions process is going, and wants to run with it.
That's okay -- but please say explicitly that's what's happening, if it
is.

> We also hope that by making some of those implicit elements more
> explicit, we may find it easier to understand how to make changes
> that will "go with the grain" of the IETF's history and culture.

What I want is a renewed, clear statement of the fundamental principles
of the IETF which must not be violated or weakened during the
problem/solution process.  It's important that the leadership of the
IETF keep themselves clear on what the fundamental principles are, and
reiterate them when necessary (like now).  That's part of the social
contract itself.  There are principles which are at the heart of the
organization and which the (pseudo-)consensus process doesn't get to
touch.

> The IETF Mission
> 
> 
> The IETF's mission has historically been embedded in a shared
> understanding that making engineering choices based on the long
> term interest of the Internet as a whole produces better long-term
> results for each participant than making choices based on short term
> considerations, because the value of those advantages is ultimately
> derived from the health of the whole.  The long term interest of the
> Internet includes the premise that "the Internet is for everyone".
> 
> Two years ago, the IESG felt that making the mission of the IETF
> more explicit was needed.  The following terse statement has since
> been promulgated, first by IESG members and then by others:
> 
>"The purpose of the IETF is to create high quality, relevant,
> and timely standards for the Internet."

The purpose of the IETF has always been to make the Internet work
better, in measurable operational terms.  All else descends from that.
We do standards because we have to, for now and for the future.  Why do
we care about network operators being in the room if our prime mission
is to make standards?  Why do we care if there are two interoperable
implementations?  The operations work of the IETF is important unless it
is being taken care of elsewhere.  It isn't frosting on a standards body
cake, it's just as important as standards.  

Beyond that, yes, the IETF is primarily an SDO, because many operational
issues and agreement on deployment BCPs are being taken care of by other
means, and also because standards is our main measurable output in the
eyes of the outside world.  The above statement applies, but it is not a
basic principle.  It derives from our fundamental responsibility, to
have an Internet that works well today and is robust and flexible enough
to work well in the future.

> It is important that this is "For the Internet,"  and does not include 
> everything that happens to use IP.  IP is being used in a myriad of 
> real-world applications, such as controlling street lights, but the 
> IETF does not standardize those applications.

A very poor distinction.  Everything runs on the Internet eventually,
regardless of what private area it was meant for to start with.
Experience is that everyone wins if there are Internet-compatible ways
of doing things from the beginning.  I fully expect street light control
to run as a secure service along with many other services over a generic
IP network.  However, it's okay to say that priority will be given to
work on the public Internet.

> The IETF has also had a strong operational component, with a tight
> bond, and hence coordination, between protocol developers and
> network operators, and has had many participants who did both.
> This has provided valuable feedback to allow correction of
> misguided standardization efforts, and has provided feedback to
> sort out which standards were actually needed.  As the field has
> grown explosively, specialization has set in, and market pressures
> have risen, there has been less and less operator participation in
> the IETF.

This has nothing to do with either mission or social contract.  Are you
saying "therefore we need to ch

Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Keith Moore
overall, I like the document.  some comments:


> However, while Dave Clark's famous saying
> 
>   "We do not believe in kings, presidents, or voting.
>We believe only in rough consensus and running code,"

is this an accurate quote?  I've usually seen it written

We reject kings, presidents, and voting. 
We believe in rough consensus and running code.

I agree with this form, but not with the way you've stated it.
I certainly don't believe "only" in rough consensus and running code -
I also believe in explicit definition of goals and requirements,
careful design by knowledgable experts, analysis, iterative
specification, wide public review, etc.

>"The purpose of the IETF is to create high quality, relevant,
> and timely standards for the Internet."

I actually believe IETF has a somewhat wider purpose than that.  What
I usually say is "we're trying to help the Internet work better".
We do this partially by authoring and maintaining protocol standards, 
but we use other mechanisms also.  In addition to standards, we produce 
informational and experimental documents and BCPs.   We provide formal 
and informal advice and feedback to various parties about operational
practices,  implementation practices, efforts by other SDOs, proposed
regulations, etc.  All of these are relevant to, and consistent with,
the purpose of helping the Internet to work better.

We *ought* to provide more architectural direction or advice - our
failure to resolve architectural issues in advance of deployment of
products with conflicting views of the architecture (or in some 
cases, a simple lack of care or foresight on those vendors' parts) has
caused a number of conflicts and operational problems, and has impaired
the ability of the Internet to support diverse applications.

I also believe that some amount of experimentation (perhaps not all
that is being done under IETF's purview) is part of the process of
"trying to make the Internet work better"

> The IETF
> has identified interoperability, security, and scalability as
> essential, but without attaching measurements to those
> characteristics.

that's a start.  there are a lot more characteristics than these that
should be considered in a design, that we haven't articulated yet,
but we need to.

> It is important that this is "For the Internet,"  and does not include
> everything that happens to use IP.  IP is being used in a myriad of 
> real-world applications, such as controlling street lights, but the 
> IETF does not standardize those applications.

I disagree with the sentiment as I understand it.  I don't think it's
realistic anymore to take the view that what people run entirely on 
private networks is their own business and outside of IETF's purview. 
NATs, private addresses, and DHCP with short lease times have all had
devastating effects on the Internet's ability to support applications. 
Insecure applications can facilitate the breeding of viruses that affect
the entire network even if their intended interactions are only between
a local client and server.

We do have to limit our scope.  We don't have the ability to scale to
the point where we could standardize everything that uses IP, and it
would be silly of us to try to claim authority to do so.  But it might
be reasonable for us to define standards for how local networks work
(to provide applications with a predictable environment), or to define
standards which all applications should adhere to (to minimize security
issues) which can be incorporated by reference into other protocol
specifications.

regarding the section on "Quality and Architectural Review".  what strikes
me about this section is the (implicit) assumption that architecture is
done after the fact.  rather than looking ahead to minimize and resolve
conflicts well before they acquire the inertia of wide deployment, we 
try to fix things after the fact.






Re: draft-ietf-vrrp-ipv6-spec-05.txt lacks IPR clause

2003-10-15 Thread Bob Hinden
Itojun,

At 06:43 PM 10/14/2003, [EMAIL PROTECTED] wrote:
> draft-ietf-vrrp-ipv6-spec-05.txt does not have IPR clause on it,
> even though cisco claims to have patent related to it.
> http://www.ietf.org/ietf/IPR/cisco-ipr-draft-ietf-vrrp-ipv6-spec.txt
I wasn't aware of this claim on VRRP for IPv6 when the current draft was 
submitted.  I will add an IPR clause in the next version.

Thanks,
Bob



Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Eric Rosen

> "The purpose of  the IETF is to create high  quality, relevant, and timely
> standards for the Internet." 

> It is important that this is "For the Internet," and does not include 
> everything that happens to use IP.  IP is being used in a myriad of 
> real-world applications, such as controlling street lights, but the 
> IETF does not standardize those applications. 

Well, let's test this assertion.  Suppose a consortium of electric companies
develops a UDP-based protocol  for monitoring and controlling street lights.
It turns  out that  this protocol generates  an unbounded amount  of traffic
(say,  proportional to  the square  of the  number of  street lights  in the
world), has no  congestion control, and no security, but  is expected to run
over the Internet. 

According to you, this has nothing to  do with the IETF.  It might result in
the congestive collapse of the Internet,  but who cares, the IETF doesn't do
street  lights.  I would  like  to see  the  criteria  which determine  that
telephones belong on the Internet but street lights don't!

Another problem  with your  formulation is that  the Internet is  a growing,
changing, entity,  so "for the Internet"  often means "for what  I think the
Internet  should  be  in  a  few  years", and  this  is  then  a  completely
unobjective criterion.  One  would hope instead that the  IETF would want to
encourage competition between different  views of Internet evolution, as the
competition of ideas is the way to make progress. 

I also do not understand whether "for the Internet" means something different
than "for IP networking" or not.  

I think  it should  also be part  of the  mission to produce  standards that
facilitate the migration to IP  of applications and infrastructures that use
legacy networking  technologies.  Such  migration seems to  be good  for the
Internet, but I don't know if it is "for the Internet" or not. 




RE: IESG proposed statement on the IETF mission

2003-10-15 Thread Margaret . Wasserman

Hi Scott,

> Similarly for almost all of the rest.  What's the point?  Are you
> reiterating the problem-statement work?  They're doing all right,
> although perhaps you could help push the work to completion.  It would
> be much more useful for you to reaffirm the fundamental 
> principles that are not on the auction block.

From your perspective, what are those fundamental principles?

Margaret




Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Keith Moore
> One  would hope instead that the  IETF would want to
> encourage competition between different  views of Internet evolution, as the
> competition of ideas is the way to make progress. 

what I would say instead is that the IETF should encourage this competition 
within the sphere of architectural discussion - well in advance of development
of specific standards or deployment of specific products.



Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Scott W Brim
On Wed, Oct 15, 2003 01:01:53PM -0400, [EMAIL PROTECTED] allegedly wrote:
> 
> Hi Scott,
> 
> > Similarly for almost all of the rest.  What's the point?  Are you
> > reiterating the problem-statement work?  They're doing all right,
> > although perhaps you could help push the work to completion.  It would
> > be much more useful for you to reaffirm the fundamental 
> > principles that are not on the auction block.
> 
> From your perspective, what are those fundamental principles?

I can't do that today, but will reply soon.



Re: Removing features

2003-10-15 Thread Iljitsch van Beijnum
On 15 okt 2003, at 19:45, Keith Moore wrote:

>> the marginal sin of
>> intercepting DNS queries for private addresses, to prevent the
>> sort of problems those queries cause, seems to me to be fairly
>> small.

> I probably agree.  But I guess my question is "where does it end?"

It ends when IPv4 ends. That is, if we can keep NAT out of IPv6.

> That is, how many things do we change elsewhere in the network in order
> to minimize the operational problems that crop up with NATs?  What is
> the cost of those changes, and how much do they impair the ability of
> the network to support applications?

There is no answer for these questions. Everyone can unilaterally 
decide to run stuff like NATs. That's actually a strength of our 
architecture. Also, anyone can unilaterally decide to send traffic. 
That's a big issue with our architecture. Fixing the latter (so, 
amongst other things, root nameservers aren't forced to receive traffic 
from RFC 1918 sources) without getting in the way of the former isn't 
going to be easy.




Re: rfc1918 impact

2003-10-15 Thread Leif Johansson


|
| Leif,
|
| I was speaking to the architectural issue, not the deployment one.  None
| of the three plug and play boxes I have here with NAT capability has any
| inside DNS capability (either enabled by default or available to be
| turned on).
Exactly! Now why is that?

| It does sound like a recommendation to the effect of "if you are going
| to use NAT, or construct a NAT box, then an 'inside DNS' mechanism"
| would be a reasonable idea.  And I would assume it would be an even
| better one if it made clear what the preferred way was to do an "inside
| DNS" -- I think there might be a couple of different ways to do it, and
| some might be less reprehensible than the others.
|
Of course (I am being intentionally obtuse) but isn't it quite unlikely
that any recommendation that we make at this point will have any impact
on how v4 NAT is deployed - we are after all talking about kazillions of
ADSL modems, SOHO routers, etc.? Do you believe that things will be
different with v6 NAT, i.e. what are the interoperability problems a NAT
vendor will have unless they implement NAT 'correctly'?
Cheers Leif



Re: rfc1918 impact

2003-10-15 Thread Iljitsch van Beijnum
On 15 okt 2003, at 23:09, Leif Johansson wrote:

> Of course (I am being intentionally obtuse) but isn't it quite unlikely
> that any recommendation that we make at this point will have any impact

RFC 2827 provides exactly these recommendations. Unfortunately many 
operators still can't be bothered or use prehistoric equipment that 
can't handle the filtering.




RE: rfc1918 impact

2003-10-15 Thread Michel Py
Leif / Iljitsch,


>>> It does sound like a recommendation to the effect of "if you are going
>>> to use NAT, or construct a NAT box, then an 'inside DNS' mechanism"
>>> would be a reasonable idea.  And I would assume it would be an even
>>> better one if it made clear what the preferred way was to do an
>>> "inside DNS" -- I think there might be a couple of different ways to
>>> do it, and some might be less reprehensible than the others.

>> Leif Johansson wrote:
>> Of course (I am being intentionally obtuse) but isn't it
>> quite unlikely that any recommendation that we make at this
>> point will have any impact

Keith will not like it, but NAT vendors will take suggestions that will
make NAT better. In this case, having in the NAT box a DNS proxy that
forwards reverse lookups for public addresses to the ISP's DNS servers
and returns a blanket reply for RFC1918 addresses does not look like too
much work. That box could also accept dynamic address registration, which
is the default in the latest MS products. Even if _we_ have a DNS server
at home, Joe-Six-Pack does not, therefore the place for that animal is in
the NAT box indeed. You want to convince NAT box vendors to implement
it? Tell them that if they do, Keith will stop beating the crap out of
them :-)
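
As a sketch of what a "blanket reply" could amount to: one locally served
reverse zone per RFC1918 range, whose wildcard PTR answers every lookup
under it (Python printing BIND-style zone data; the zone contents and the
names used are invented examples):

# Everything under the zone resolves to the same made-up PTR target.
BLANKET_REVERSE_ZONE = """\
$TTL 86400
@  IN SOA ns.internal. hostmaster.internal. ( 1 3600 1200 604800 10800 )
   IN NS  ns.internal.
*  IN PTR unknown.internal.
"""

for zone in ("10.in-addr.arpa", "168.192.in-addr.arpa"):
    print("; zone %s" % zone)
    print(BLANKET_REVERSE_ZONE)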

> Iljitsch van Beijnum wrote:
> RFC 2827 provides exactly these recommendations.

Does it? We are not talking about blocking RFC1918 traffic here; what we
are talking about is blocking traffic where both the SA (after NAT) and
the DA are public but which contains a DNS request for a PTR like
8191CFR.in-addr.arpa, which requires decapsulating the packet to inspect
its content. It's not that simple.

Michel.




Re: rfc1918 impact

2003-10-15 Thread Iljitsch van Beijnum
On 15 okt 2003, at 23:24, Michel Py wrote:

>> RFC 2827 provides exactly these recommendations.

[FYI: RFC 2827 is about ingress filtering to stop source address 
spoofing]

> Does it? We are not talking about blocking RFC1918 traffic here;

I was.

> what we are talking about is blocking traffic where both the SA (after
> NAT) and the DA are public but which contains a DNS request for a PTR
> like 8191CFR.in-addr.arpa, which requires decapsulating the packet to
> inspect its content. It's not that simple.
I don't feel that a lookup for .10.in-addr.arpa is all that 
wrong. This can be handled in many very reasonable ways, and the usual 
caching applies. Requests with unroutable sources are wrong because 
they break the protocol.

Iljitsch




RE: rfc1918 impact

2003-10-15 Thread Daniel Senie
At 05:24 PM 10/15/2003, Michel Py wrote:
> Leif / Iljitsch,
>
> >>> It does sound like a recommendation to the effect of "if you are going
> >>> to use NAT, or construct a NAT box, then an 'inside DNS' mechanism"
> >>> would be a reasonable idea.  And I would assume it would be an even
> >>> better one if it made clear what the preferred way was to do an
> >>> "inside DNS" -- I think there might be a couple of different ways to
> >>> do it, and some might be less reprehensible than the others.
>
> >> Leif Johansson wrote:
> >> Of course (I am being intentionally obtuse) but isn't it
> >> quite unlikely that any recommendation that we make at this
> >> point will have any impact
>
> Keith will not like it, but NAT vendors will take suggestions that will
> make NAT better. In this case, having in the NAT box a DNS proxy that
> forwards reverse lookups for public addresses to the ISP's DNS servers
> and returns a blanket reply for RFC1918 addresses does not look like too
> much work.

It also may be ill-advised, unless a switch is present for disabling it.

While we can argue ISPs should not use RFC 1918 space, there are many using 
it, including some who use it because their equipment vendors force them to 
do so (Cisco in the CMTS routing space comes to mind. Do a traceroute out 
from behind a cable modem, and the head-end router responds with a 10/8 
address. Nice job folks).

> That box could also accept dynamic address registration, which is the
> default in the latest MS products.

accept, or INTERCEPT? Intercepting the traffic would be nice. After all, 
NAT boxes usually (not always, so a disabling switch would be a good idea) 
are at an administrative border (i.e. gateway to home or office). Those DNS 
updates should not be travelling beyond administrative boundaries in most 
cases.

> Even if _we_ have a DNS server at home, Joe-Six-Pack does not, therefore
> the place for that animal is in the NAT box indeed. You want to convince
> NAT box vendors to implement it? Tell them that if they do, Keith will
> stop beating the crap out of them :-)

Heh. Router vendors would likely be interested because it's a good thing to 
do. As for the part about Keith...

> > Iljitsch van Beijnum wrote:
> > RFC 2827 provides exactly these recommendations.
>
> Does it? We are not talking about blocking RFC1918 traffic here; what we
> are talking about is blocking traffic where both the SA (after NAT) and
> the DA are public but which contains a DNS request for a PTR like
> 8191CFR.in-addr.arpa, which requires decapsulating the packet to inspect
> its content. It's not that simple.

Agree. RFC 2827 only discusses filtering packets whose source address is 
inappropriate for crossing a border. When NAT is used, it's rare (if the 
implementation is any good) to have a problem with bogus source addressed 
packets leaving the private network.




Re: rfc1918 impact

2003-10-15 Thread Keith Moore
> Keith will not like it, but NAT vendors will take suggestions that
> will make NAT better.

I'd be happy for them to take suggestions from me, or IETF.  But there's
a big difference between saying "if you must do NAT, please do it this
way" and "NATs are good if they are implemented this way"
 



RE: rfc1918 impact

2003-10-15 Thread Michel Py
Daniel / Iljitsch,

> Daniel Senie wrote:
> [NAT box acting as a DNS server]
> It also may be ill-advised, unless a switch is present for disabling it.

Of course.


> While we can argue ISPs should not use RFC 1918 space, there are
> many using it, including some who use it because their equipment
> vendors force them to do so (Cisco in the CMTS routing space
> comes to mind. Do a traceroute out from behind a cable modem,
> and the head-end router responds with a 10/8 address. Nice job
> folks).

Indeed. However, one should consider the trade-off between having
blanket reverse resolution for RFC1918 hops vs. flooding the roots with
bogus requests. Besides, for what I have seen these ISPs that use
RFC1918 space for links do not provide reverse lookup for them anyway
:-(


>> Michel Py wrote:
>> That box could also accept dynamic address registration that is
>> default in the latest MS products.

> Daniel Senie wrote:
> accept, or INTERCEPT? Intercepting the traffic would be nice.
> After all, NAT boxes usually (not always, so a disabling switch
> would be a good idea) are at an administrative border (i.e.
> gateway to home or office). Those DNS updates should not be
> travelling beyond administrative boundaries in most cases.

Both accept and intercept are needed.

For accept, what we are looking at is as follows: The NAT box is also
the DHCP server and gives its own address for the DNS server. Then MS
hosts register with it, making it possible not only to return something
for an RFC1918 reverse lookup and not flood the roots, but also to
return the actual host name instead of a blanket name. If using DHCP,
the MS host on the inside could try to register with a DNS server
outside, but it would fail: it does not know such a server's address, and
registration to a broadcast address would not be forwarded.

Intercept would be nice in the following situations:
- When Joe Blow has configured a static IP and static DNS servers that
point to the ISP's DNS servers instead of the NAT box.
- When the NAT box is not the DNS server and there is no other DNS
server.
In both cases, intercept would discard the packets of clients trying to
register with the ISP's DNS servers. This would require a comprehensive
implementation: the box should intercept dynamic DNS registration
packets and reverse lookups with an RFC1918 target, but allow other
requests.
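
A small sketch of that intercept policy (Python; the opcode/qname/qtype
tuple is a simplified stand-in for a parsed DNS message):

PRIVATE_PTR_SUFFIXES = (
    (".10.in-addr.arpa", ".168.192.in-addr.arpa")
    + tuple(".%d.172.in-addr.arpa" % i for i in range(16, 32))
)

def policy(opcode, qname, qtype):
    q = qname.rstrip(".").lower()
    if opcode == "UPDATE":
        return "drop"                 # dynamic registrations stay inside
    if qtype == "PTR" and q.endswith(PRIVATE_PTR_SUFFIXES):
        return "answer locally"       # RFC1918 reverse lookups never leak out
    return "forward"                  # everything else goes out untouched

print(policy("UPDATE", "pc1.home.example.", "A"))         # drop
print(policy("QUERY", "9.0.0.10.in-addr.arpa.", "PTR"))   # answer locally
print(policy("QUERY", "www.ietf.org.", "A"))              # forward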


>>> Iljitsch van Beijnum wrote:
>>> RFC 2827 provides exactly these recommendations.

>> Michel Py wrote:
>> Does it? We are not talking about blocking RFC1918 traffic here; what
>> we are talking about is blocking traffic where both the SA (after NAT)
>> and the DA are public but which contains a DNS request for a PTR like
>> 8191CFR.in-addr.arpa, which requires decapsulating the packet to
>> inspect its content. It's not that simple.

> Daniel Senie wrote:
> Agree. RFC 2827 only discusses filtering packets whose source address
> is inappropriate for crossing a border. When NAT is used, it's rare
> (if the implementation is any good) to have a problem with bogus
> source addressed packets leaving the private network.

Rare indeed, because if the source address is private it is unlikely
that the reply would ever get back to the requester, and NAT
implementations that break DNS do not make it to the market.

Michel.




Re: rfc1918 impact

2003-10-15 Thread Dean Anderson
Remember that reverse lookups are optional. Many people who start off
saying "if reverse DNS is configured correctly..." don't seem to
understand that reverse DNS is also properly configured when it is turned
off.

The abuse, and the numerous security vulnerabilities which have been
introduced by improper use, as well as the difficulties in IPv6 (both
technical and administrative), have prompted discussions in both DNS
working groups about removing reverse DNS altogether.  As it stands,
reverse DNS is probably not going to be working or widely used in IPv6,
which has an alternate ICMP host information query, so reverse DNS is not
necessary for its most useful purpose: traceroute.

The good news is that all this nonsense ends with IPv6.

--Dean

On Wed, 15 Oct 2003, Leif Johansson wrote:

>
>
> We should keep nice and descriptive subject-lines...
>
> Michel Py wrote:
>
> 
>
> | etc. Basically everything that triggers a reverse lookup adds to the
> | pain, but if reverse lookup is configured correctly on the local DNS
>
> | A lot of the arguments seem to contain the phrase "If <something> is
> configured correctly then ...". Now what does that teach us?
>
>   Cheers Leif
>
>
>





Re: rfc1918 impact

2003-10-15 Thread Keith Moore
> Intercept would be nice in the following situations:
> - When Joe Blow has configured a static IP and static DNS servers that
> point to the ISP's DNS servers instead of the NAT box.

so the next time Joe Blow is trying to figure out why a particular DNS server
isn't responding correctly by explicitly sending it queries, the NAT box
will intercept the queries, forge replies, and mask the problem.


Keith



RE: rfc1918 impact

2003-10-15 Thread Michel Py
>> Intercept would be nice in the following situations:
>> - When Joe Blow has configured a static IP and static DNS servers that
>> point to the ISP's DNS servers instead of the NAT box.

> Keith Moore wrote:
> so the next time Joe Blow is trying to figure out why a particular
> DNS server isn't responding correctly by explicitly sending it
> queries, the NAT box will intercept the queries, forge replies,
> and mask the problem.

No. Read the text again: what we are talking about is intercepting the
MS client trying to register dynamic DNS with the ISP's DNS server, not
queries.

Michel.