Re: national security

2003-12-08 Thread Joe Abley
On 8 Dec 2003, at 10:14, Dean Anderson wrote:

Also, anycasting doesn't work for TCP.
Would you care to elaborate on "doesn't work"?

I agree.  It is easy to create a blackhole, or even a DDOS on an anycast
address.  It is much harder to DDOS 600 IP addresses spread through some
200 countries.
It's arguably easier for a distributed attack to degrade the 
availability of a service bound to a unicast-reachable address than an 
anycast-reachable one. The former will tend to collect traffic along a 
progressively narrower funnel until congestion occurs; with an anycast 
target the pain is distributed over a set of funnels, and in general 
not all will experience the same degree of (or any) pain, depending on 
the distribution and behaviour of the attacking nodes.

In a non-distributed attack anycast victims fare substantially better 
(since non-selected anycast targets are unaffected, and only suffer 
topological fallout from the node sinking the attack traffic).

Joe






Re: national security

2003-12-08 Thread Joe Abley
On 7 Dec 2003, at 07:21, Iljitsch van Beijnum wrote:

I don't think this is an oversight, I'm pretty sure this was 
intentional. However, since in practice the BGP best path selection 
algorithm boils down to looking at the AS path length, and this has the 
tendency to be the same length for many paths, BGP is fairly useless 
for deciding the best path, for even low-ambition definitions of the 
word.
For the service aspects of F we're more concerned with reliability than 
performance. Recursive resolvers ask questions to the root relatively 
infrequently, and the important thing is that they have *a* path to use 
to talk to a root server, not necessarily that they are able to 
automagically select the instance with the lowest instantaneous RTT 
(and continue to find a root regardless of what damage might exist in 
the network elsewhere).

For example, local routing policies might lead a resolver in South 
Africa to select a path to 192.5.5.0/24 in California over the node in 
Johannesburg under normal operation. We hope, though, that in the event 
that the resolver becomes isolated from California, a path exists to 
Johannesburg which will allow F-root service to continue reliably (and, 
for example, to allow names under ZA corresponding to local, reachable, 
services to continue to resolve).
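
(Which instance a resolver actually reaches is easy to check from the
resolver's side: BIND-based anycast nodes will generally answer a
CHAOS-class TXT query for hostname.bind with the name of the node that
served the query. A sketch -- 192.5.5.241 is F's service address, and
some servers decline to answer this:

    $ dig @192.5.5.241 hostname.bind. chaos txt +short

A resolver in Johannesburg that gets back the name of the local node
knows its root queries are staying in-country.)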

The selection of anycast node has more importance when you consider the 
other, non-service role of F, which is to sink attack traffic: we'd 
like to sink attack traffic as close to its source as possible. 
Fortunately the rough-hewn and clumsy hammer of BGP path selection 
seems good enough to attain that goal right now, since pre-existing 
routing policy generally leads people to favour a local node (peer) 
over a global node (transit). This is a natural result of the common 
truth that peering paths are cheaper than transit ones.



Joe






Re: national security

2003-12-08 Thread Masataka Ohta
Joe Abley;

I don't think this is an oversight, I'm pretty sure this was 
intentional. However, since in practice the BGP best path selection 
algorithm boils down to looking at the AS path length, and this has the 
tendency to be the same length for many paths, BGP is fairly useless 
for deciding the best path, for even low-ambition definitions of the word.


For the service aspects of F we're more concerned with reliability than 
performance. Recursive resolvers ask questions to the root relatively 
infrequently, and the important thing is that they have *a* path to use 
to talk to a root server, not necessarily that they are able to 
automagically select the instance with the lowest instantaneous RTT (and 
continue to find a root regardless of what damage might exist in the 
network elsewhere).
I'm afraid the F servers do not follow the intention of my original
proposal of anycast root servers.
The intention is to allow millions or trillions of root servers.

While you can rely on someone else's root server with the BGP
best path selection, it is a lot better to have your own
root server.
In addition, it is not necessary to have any hierarchy between
anycast servers at all, as long as there is a single source of
information. Hierarchy may be useful if a single entity manages
all the anycast root servers. However, you can manage your own.
Finally, using only a single address, F, does not provide any
real robustness.
		Masataka Ohta





Re: national security

2003-12-08 Thread Masataka Ohta
Joe Abley;

I'm afraid the F servers do not follow the intention of my original
proposal of anycast root servers.

This may well be the case (I haven't read your original proposal).
The IDs have expired. I'm working on a revised one.

Apologies if I gave the impression that I thought to the contrary.
No, no need for apologies.

Finally, using only a single address, F, does not provide any
real robustness.

Fortunately there are twelve other root nameservers.
But one should have one's own three root servers, with different addresses.

		Masataka Ohta





Re: national security

2003-12-08 Thread Joe Abley
On 8 Dec 2003, at 15:25, Masataka Ohta wrote:

I'm afraid the F servers do not follow the intention of my original
proposal of anycast root servers.
This may well be the case (I haven't read your original proposal). 
Apologies if I gave the impression that I thought to the contrary.

Finally, using only a single address, F, does not provide any
real robustness.
Fortunately there are twelve other root nameservers.

Joe






Re: national security

2003-12-07 Thread Iljitsch van Beijnum
On 7-dec-03, at 2:26, Paul Vixie wrote:

... (Selecting the best path is pretty much an afterthought in
BGP: the RFC doesn't even bother giving suggestions on how to do 
this.)

congratulations, you're the millionth person to think that was an 
oversight.
I don't think this is an oversight, I'm pretty sure this was 
intentional. However, since in practice the BGP best path selection 
algorithm boils down to looking at the AS path length, and this has the 
tendency to be the same length for many paths, BGP is fairly useless 
for deciding the best path, for even low-ambition definitions of the 
word.

I don't have a problem with some controlled anycasting, but the root
operators shouldn't go overboard.

i don't think you will ever meet a more conservative bunch of people, 
so, OK.
Excellent.

For instance, the .org zone is only served by two addresses, which are
then anycast. There have been reports from people who were unable to
reach either of these addresses when there was some kind of reachability
problem. The people managing the .org zone are clearly lacking in
responsibility by limiting the number of addresses from which the zone is
available without any good reason.

see the icann agreements to find out how much of this was ultradns's 
choice.
Hm, nothing about this in http://www.icann.org/tlds/agreements/org/. In 
fact, it talks about a maximum of 13 servers in some places. Not that 
it matters much whose bright idea it was.

(And some IPv6 roots wouldn't be bad either.)

there are several.  see www.root-servers.org.  (now if we can just 
advertise.)
Just for fun, I cooked up a named.root file with only those IPv6 
addresses in it. This seems to confuse BIND such that its behavior 
becomes very unpredictable. And only 2 of the 4 v6 addresses are 
reachable, as one isn't advertised at all and the other only as a /48, 
which is heavily filtered.
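
(For reference, a root hints file is nothing more than NS records for
the root plus address records for the named servers, so a v6-only
named.root has the shape sketched below. The name is real but the
address is an illustrative placeholder from the 2001:db8::/32
documentation range, not a real v6 root address:

    .                        3600000      NS    M.ROOT-SERVERS.NET.
    M.ROOT-SERVERS.NET.      3600000      AAAA  2001:db8::35

The unpredictable behaviour described above is consistent with a
resolver that cannot reach enough of the listed addresses to prime
itself.)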




Re: national security

2003-12-07 Thread Franck Martin




On Sat, 2003-12-06 at 10:18, Iljitsch van Beijnum wrote:

On 5-dec-03, at 17:16, Dean Anderson wrote:

 Indeed, this is what they do when they agree to put the national root
 nameservers in their own nameserver root configs.  It is far easier to
 have per-country stealth root slaves than it is to make every nameserver
 the stealth slave of every other domain in that country.

I don't think this stealth business is a very good idea. If you want a 
root server somewhere, use anycast. That means importing BGP problems 
into the DNS, which is iffy enough as it is. But for a small network 
island, just having a single set of resolvers and making sure those have 
all the needed information isn't a huge deal. Obviously such a place 
doesn't have a huge number of ISPs, so the number of DNS servers will be 
quite limited in the first place.


I'm a little bit confused here, but I'm starting to get the ideas...

In the Pacific Islands:
http://map.sopac.org/tiki/tiki-map.phtml?mapfile=pacific.mapzoom=1size=400Redraw=Redrawminx=110miny=-67.6maxx=230maxy=52.6Topography=1EEZ=112+miles+zone=1Country+Names=1

Countries there generally have about 1000 Internet users each, and usually one ISP, 2 or 3 at most...



I think what we need to really solve this is a redesign of the DNS, as 
the way it is now it breaks a fundamental design principle of the 
internet: when two nodes have reachability, they should be able to 
communicate, regardless of what else is (un)reachable. (I'm not 
volunteering, though.)


Are we going to something like the KaZaA protocol (peer sharing)? The DNS servers know their environment and the other DNS servers around them, and use whichever ones are available to resolve ANY query?


I've been in a situation where root servers were unavailable for the 
better part of a day, and it's pretty frustrating to see your resolver 
cache disappear over time so you can no longer reach places to which 
you still have connectivity.





Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
"All knowledge is an answer to a question" G. Bachelard








Re: national security

2003-12-06 Thread jfcm
Iljitsch,
Do we have figures on the frequency of changes in the root file? I always 
wanted to check that, but since it is only of interest over a substantial 
duration, I never did. The only serious figure I have is that ICANN decided 
that three and a half months to update major ccTLD secondaries was OK 
(after KPNQwest).
thank you.
jfc





On 23:18 05/12/03, Iljitsch van Beijnum said:

On 5-dec-03, at 17:16, Dean Anderson wrote:

Indeed, this is what they do when they agree to put the national root
nameservers in their own nameserver root configs.  It is far easier to
have per-country stealth root slaves than it is to make every nameserver
the stealth slave of every other domain in that country.
I don't think this stealth business is a very good idea. If you want a 
root server somewhere, use anycast. That means importing BGP problems 
into the DNS, which is iffy enough as it is. But for a small network 
island, just having a single set of resolvers and making sure those have all 
the needed information isn't a huge deal. Obviously such a place doesn't 
have a huge number of ISPs, so the number of DNS servers will be quite 
limited in the first place.

Yet a stealth root is comparatively easy:
You just tell your nameserver operators to configure in the IP addresses
for your national root servers, instead of the official root servers.
So I have to trust these fake roots 100%: not only that they don't 
change the root zone, but also that they're always up to date and never 
down. Tall order. An official anycast setup is much better: updates are 
done the way they should be (last year when I wrote an article I checked 
this: there is no policy anywhere on access to the root zonefile. You can 
download it through FTP or even do a zone transfer in a few places, but 
nothing official) and when your local root clone is down there should be 
at least 12 others elsewhere.

Indeed, it is probably sensible for ISPs to do the same.  This would keep
things working internally in the event of an effective isolation due to a
DOS attack, for example.
I think what we need to really solve this is a redesign of the DNS, as the 
way it is now it breaks a fundamental design principle of the internet: 
when two nodes have reachability, they should be able to communicate, 
regardless of what else is (un)reachable. (I'm not volunteering, though.)

I've been in a situation where root servers were unavailable for the 
better part of a day, and it's pretty frustrating to see your resolver 
cache disappear over time so you can no longer reach places to which you 
still have connectivity.







Re: national security

2003-12-06 Thread Dean Anderson
I think there are three converging factors that support the notion of 
national root nameservers:

1) Root server scalability
2) Foreign distrust of US control of the internet
3) Isolation due to technical or political issues.

On Fri, 5 Dec 2003, Iljitsch van Beijnum wrote:

 On 5-dec-03, at 17:16, Dean Anderson wrote:
 
  Indeed, this is what they do when they agree to put the national root
  nameservers in their own nameserver root configs.  It is far easier to
  have per-country stealth root slaves than it is to make every nameserver
  the stealth slave of every other domain in that country.
 
 I don't think this stealth business is a very good idea. If you want a 
 root server somewhere, use anycast. That means importing BGP problems 
 into the DNS, which is iffy enough as it is. 

That seems to argue against anycast...

 But for a small network island just having a single set of resolvers and
 make sure those have all the needed information isn't a huge deal.
 Obviously such a place doesn't have a huge number of ISPs so the number
 of DNS servers will be quite limited in the first place.

It's the same deal as distributing the official root nameserver
updates.  Some people don't pay attention to this until they can't get
nameservice to work.  It's a problem, but it isn't made better or worse.

  Yet a stealth root is comparatively easy: You just tell your nameserver
  operators to configure in the IP addresses for your national root
  servers, instead of the official root servers.
 
 So I have to trust these fake roots 100%: not only that they don't 
 change the root zone, but also that they're always up to date and never 
 down. Tall order. An official anycast setup is much better: updates are 
 done the way they should be (last year when I wrote an article I 
 checked this: there is no policy anywhere on access to the root 
 zonefile. You can download it through FTP or even do a zone transfer in 
 a few places, but nothing official) and when your local root clone is 
 down there should be at least 12 others elsewhere.

They aren't exactly fake. They are just not listed by the 'dig . ns'
query, so they aren't technically authoritative. Though, I suppose they
could be--I'm just assuming they aren't.  As far as trust goes, since they
are run by your government, yes, you can trust them.  Since these zones
don't change much, they can be updated by zone transfers, or by other
official distribution.  As far as reliability goes, that's why you have 
more than one. And you scale it just like any other part of 
infrastructure.

Anycast doesn't make this job easier--it makes it harder. An Anycast
server can't easily do a zone transfer from itself.  This is just another
complication of running anycast.  Anycast is just a means of scaling up 
server infrastructure.  There are other methods of scaling.  Anycast 
doesn't particularly match the political interests.

I also don't conceive of a single national stealth server. I assume that
there may be many. Probably at least 2, and certainly more depending on the
size of the country.  The US would probably have a lot.

  Indeed, it is probably sensible for ISPs to do the same.  This would
  keep things working internally in the event of an effective isolation
  due to a DOS attack, for example.
 
 I think what we need to really solve this is a redesign of the DNS, as 
 the way it is now it breaks a fundamental design principle of the 
 internet: when two nodes have reachability, they should be able to 
 communicate, regardless of what else is (un)reachable. (I'm not 
 volunteering, though.)

I agree completely, but I don't think anything needs to change other than
management of existing services.  The internet has to continue to work
when it is partitioned, regardless of the reasons for the partitioning.
Those reasons could be technical, or political.  And the internet should
then just work when it's glued back together again.  But address and DNS 
delegations are hierarchical, so there is no reason that this can't be 
done.

 I've been in a situation where root servers where unavailable for the 
 better part of a day, and it's pretty frustrating to see your resolver 
 cache disappear over tiem so you can no longer reach places to which 
 you still have connectivity.

This is fixed by stealth slaves at large ISPs.  Small ISPs, if isolated 
probably don't have enough customers to really care about getting to the 
other customers. But this might not be true for large ISPs, and might not 
be true of islands and small countries.  A small island in the Pacific 
might have several ISPs but only one underwater cable. If the cable is 
cut, they could be isolated for a while. But there is no reason they 
shouldn't be able to get to other sites that are on the island.




Re: national security

2003-12-06 Thread Iljitsch van Beijnum
On 6-dec-03, at 23:04, Dean Anderson wrote:

I don't think this stealth business is a very good idea. If you want a
root server somewhere, use anycast. That means importing BGP problems
into the DNS, which is iffy enough as it is.

That seems to argue against anycast...
If there were 65 actual root servers, I would very much prefer the 
situation where I could contact each and any one of those, rather than 
a subset of 13 that are chosen by a protocol that was NOT designed for 
this. (Selecting the best path is pretty much an afterthought in 
BGP: the RFC doesn't even bother giving suggestions on how to do this.) 
But the DNS protocol has problems supporting 65 (or 45 or even 25) 
individual root server addresses: it's either no more than around 13 
individual servers or a larger number of anycasted ones.
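
(The constraint alluded to here: a priming response -- the NS set for
the root plus glue addresses -- has to fit in the 512-byte UDP message
that pre-EDNS0 resolvers accept, and thirteen names plus their A records
is roughly what fits. You can see how little headroom there is with a
priming query; a sketch, and exact sizes vary by server:

    $ dig @a.root-servers.net . ns +norecurse

The ';; MSG SIZE rcvd:' figure at the bottom of the output lands just
under 512 bytes.)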

I don't have a problem with some controlled anycasting, but the root 
operators shouldn't go overboard. For instance, the .org zone is only 
served by two addresses, which are then anycast. There have been 
reports from people who were unable to reach either of these addresses 
when there was some kind of reachability problem. The people managing 
the .org zone are clearly lacking in responsibility by limiting the 
number of addresses from which the zone is available without any good 
reason.

The situation that must be avoided is where all or most root servers 
seem to be in the same location from a certain viewpoint, as a BGP 
black hole towards that location will then make them all unreachable. I 
would prefer it if several root servers weren't anycast at all, just to 
be on the safe side.

(And some IPv6 roots wouldn't be bad either.)

But for a small network island, just having a single set of resolvers and
making sure those have all the needed information isn't a huge deal.
Obviously such a place doesn't have a huge number of ISPs, so the number
of DNS servers will be quite limited in the first place.

It's the same deal as distributing the official root nameserver
updates.  Some people don't pay attention to this until they can't get
nameservice to work.  It's a problem, but it isn't made better or worse.
The difference is that official root servers are updated through the 
official channels, which I have no reason to distrust. Having a stealth 
root server means you can't listen to the real root servers anymore 
(because then you'd have a 13/14th chance of learning the list of 
official root servers and forgetting about the stealth one when a 
resolver starts), which is a big fat single point of failure.

So I have to trust these fake roots 100%:

They aren't exactly fake. They are just not listed by the 'dig . ns'
query, so they aren't technically authoritative. Though, I suppose they
could be--I'm just assuming they aren't.
Ok, let's not debate the word fake.

As far as trust goes, since they
are run by your government, yes, you can trust them.
Their intentions, maybe. Their DNS operating prowess, I don't think so.

Since these zones
don't change much, they can be updated by zone transfers,
You missed the point in one of my previous messages: there is no 
officially supported way to do zone transfers for the root. This can 
stop working at any time.

or by other official distribution.
Which I don't think there is either.

I think what we need to really solve this is a redesign of the DNS, as
the way it is now it breaks a fundamental design principle of the
internet: when two nodes have reachability, they should be able to
communicate, regardless of what else is (un)reachable. (I'm not
volunteering, though.)

I agree completely, but I don't think anything needs to change other 
than
management of existing services.
How is that agreeing with my point that we need a redesign (if we want 
to solve this)???




Re: national security

2003-12-06 Thread Jaap Akkerhuis

Do we have figures on the frequency of changes in the root file?

The serial # changes twice a day. The contents hardly change, as far as I
can see.

I always wanted to check that, but since it is only of interest over a
substantial duration, I never did.

It is very easy to check. Just pull over the zone file on a regular
basis and do a diff.
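
A sketch of that check, assuming access to one of the few servers that
(unofficially) still permit transfers of the root zone; file names and
dates are just examples:

    $ dig @f.root-servers.net . axfr > root.$(date +%Y%m%d)
    $ diff root.20031205 root.20031206

The SOA serial line will differ on every pull; anything else in the
diff is a real content change.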

The only serious figure I have is that ICANN decided that three and a
half months to update major ccTLD secondaries was OK (after KPNQwest).

Document your figures.

Note that ns.eu.net was never dead, so I wonder what the relevance
is anyway.

jaap

PS. I wonder how soon someone will tell me I shouldn't be feeding ...



Re: national security

2003-12-06 Thread Bill Manning
% 
% Do we have figures on the frequency of changes in the root file?
% 
% The serial # changes twice a day. The contents hardly change, as far as I
% can see.

other contents change about three times a week.

%   jaap
% 
% PS. I wonder how soon someone will tell me I shouldn't be feeding ...

you shouldn't, not w/out a permit.  :)

--bill
Opinions expressed may not even be mine by the time you read them, and
certainly don't reflect those of any other entity (legal or otherwise).



Re: national security

2003-12-06 Thread Paul Vixie
[EMAIL PROTECTED] (Iljitsch van Beijnum) writes:

 ... (Selecting the best path is pretty much an afterthought in 
 BGP: the RFC doesn't even bother giving suggestions on how to do this.) 

congratulations, you're the millionth person to think that was an oversight.

 I don't have a problem with some controlled anycasting, but the root 
 operators shouldn't go overboard.

i don't think you will ever meet a more conservative bunch of people, so, OK.

 For instance, the .org zone is only served by two addresses, which are
 then anycast. There have been reports from people who were unable to
 reach either of these addresses when there was some kind of reachability
 problem. The people managing the .org zone are clearly lacking in
 responsibility by limiting the number of addresses from which the zone is
 available without any good reason.

see the icann agreements to find out how much of this was ultradns's choice.

 The situation that must be avoided is where all or most root servers 
 seem to be in the same location from a certain viewpoint, as a BGP 
 black hole towards that location will then make them all unreachable. I 
 would prefer it if several root servers weren't anycast at all, just to 
 be on the safe side.

that's exactly what's likely to continue happening.  diversity is good.

 (And some IPv6 roots wouldn't be bad either.)

there are several.  see www.root-servers.org.  (now if we can just advertise.)

 You missed the point in one of my previous messages: there is no
 officially supported way to do zone transfers for the root. This can stop
 working at any time.

indeed, it's been downhill ever since 10.0.0.53 went away.  now it's chaos.
-- 
Paul Vixie



Re: national security - proposed follow-up

2003-12-06 Thread grenville armitage

jfcm wrote:
[..]
 I suggest we
 start a specialized WG with a clean shit study charter.

Well you've come to the right place! Don't get it much cleaner
than around here, that's for sure.

gja



Re: national security

2003-12-05 Thread Iljitsch van Beijnum
On 5-dec-03, at 1:37, Franck Martin wrote:

Finally before a root-server is installed somewhere, someone will do 
an assessment of the local conditions and tailor it adequately. I want 
countries to request installation of root servers, and I know about 20 
Pacific Islands countries that need root-servers in case their 
Internet link goes dead.
Might I suggest that there is a much easier way to do this: if the 
constituency for such a root server is so small and so homogenous (= 
they all share a single link to the rest of the net) then it would be 
much simpler for all of these users to simply share a single set of 
nameservers, which can then all be primary or secondary for all the 
domains used locally. This allows communication to continue even if the 
root servers are unreachable AND it allows users to register domain 
names under any TLD they like.




Re: national security

2003-12-05 Thread Dean Anderson
Indeed, this is what they do when they agree to put the national root
nameservers in their own nameserver root configs.  It is far easier to
have per-country stealth root slaves than it is to make every nameserver
the stealth slave of every other domain in that country.  

When that country is isolated from the rest of the net (due to single
connection failure, multiple connection failure, war, etc.), then they
still have nameservice for their ccTLD and its delegations, and those of
whatever other countries they remain connected to.

Stealth root slaves are a far better solution, in terms of
configuration, maintenance, and scaling, than configuring every nameserver
to be a stealth slave of every other domain.  Imagine the difficulty of
doing that...  Even a small country with a few tens of thousands of
domains makes that unrealistic.  Yet a stealth root is comparatively easy:
You just tell your nameserver operators to configure in the IP addresses
for your national root servers, instead of the official root servers.  
Now all you have to do is keep that set operating, which isn't that hard,
and can be done even if the country becomes isolated from the world net,
and the official nameservers.

Indeed, it is probably sensible for ISPs to do the same.  This would keep
things working internally in the event of an effective isolation due to a
DOS attack, for example.
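
In BIND terms, the arrangement Dean describes amounts to two small
pieces of configuration (a minimal sketch; every name and address below
is a hypothetical placeholder, 192.0.2.0/24 being a documentation
prefix). The national root servers slave the root zone from whatever
source the country trusts:

    // named.conf on each national root server
    zone "." {
            type slave;
            masters { 192.0.2.1; };    // trusted distribution point
            file "slave/root.zone";
    };

and every other nameserver in the country swaps its named.root hints
for a file listing the national servers instead of the official ones:

    .                   3600000      NS    root1.example.net.
    .                   3600000      NS    root2.example.net.
    root1.example.net.  3600000      A     192.0.2.53
    root2.example.net.  3600000      A     192.0.2.54

Resolvers then prime from the national set, and keep doing so whether
or not the rest of the world is reachable.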

--Dean

On Fri, 5 Dec 2003, Iljitsch van Beijnum wrote:

 On 5-dec-03, at 1:37, Franck Martin wrote:
 
  Finally before a root-server is installed somewhere, someone will do 
  an assessment of the local conditions and tailor it adequately. I want 
  countries to request installation of root servers, and I know about 20 
  Pacific Islands countries that need root-servers in case their 
  Internet link goes dead.
 
 Might I suggest that there is a much easier way to do this: if the 
 constituency for such a root server is so small and so homogenous (= 
 they all share a single link to the rest of the net) then it would be 
 much simpler for all of these users to simply share a single set of 
 nameservers, which can then all be primary or secondary for all the 
 domains used locally. This allows communication to continue even if the 
 root servers are unreachable AND it allows users to register domain 
 names under any TLD they like.
 
 
 




Re: national security

2003-12-05 Thread Matt Larson
On Fri, 05 Dec 2003, Paul Vixie wrote:
 note that f-root, i-root, j-root, k-root, and m-root are all doing anycast
 now, and it's likely that even tonga would find that one or more of these
 rootops could find a way to do a local install.

Apologies for taking this thread perhaps even further off-topic.

VeriSign is planning to anycast J root beyond the 13 sites where it is
(or will be shortly) co-located with the com/net name servers.
Currently under-served areas are particularly appealing as possible
locations.  Briefly, we'll supply the hardware for a J root instance
and manage it if you supply the space, power and bandwidth.  No fees
are involved.  Interested ISPs should please contact me privately.

Matt
--
Matt Larson [EMAIL PROTECTED]
VeriSign Naming and Directory Services



Re: national security

2003-12-05 Thread jfcm
At 05:12 05/12/03, Franck Martin wrote:
On Fri, 2003-12-05 at 15:32, jfcm wrote:
 Paul,
 1. all this presumes that the root file is in good shape and has not been
 tampered with.
  How do you know the data in the file you disseminate are not polluted
 or changed?
Because somebody will complain... ;)
Dear Franck,
I am afraid you are right. This leaves however a few questions such as:
1. what/who says he is right?
2. to whom does he complain?
3. and then what?
4. if a decision is taken, how will the pollution be corrected? Just waiting 
for TTLs (and maybe people or businesses) to die?
5. whom will the hurt parties sue?
6. are they insured?
7. who pays for the insurance?
etc.

I understand that the last time somebody complained, he called Ira Magaziner 
and they solved the case by calling on Joe Sims and subsequently creating ICANN.
jfc





Re: national security

2003-12-05 Thread Iljitsch van Beijnum
On 5-dec-03, at 17:16, Dean Anderson wrote:

Indeed, this is what they do when they agree to put the national root
nameservers in their own nameserver root configs.  It is far easier to
have per-country stealth root slaves than it is to make every nameserver
the stealth slave of every other domain in that country.
I don't think this stealth business is a very good idea. If you want a 
root server somewhere, use anycast. That means importing BGP problems 
into the DNS, which is iffy enough as it is. But for a small network 
island, just having a single set of resolvers and making sure those have 
all the needed information isn't a huge deal. Obviously such a place 
doesn't have a huge number of ISPs, so the number of DNS servers will be 
quite limited in the first place.

Yet a stealth root is comparatively easy:
You just tell your nameserver operators to configure in the IP addresses
for your national root servers, instead of the official root servers.
So I have to trust these fake roots 100%: not only that they don't 
change the root zone, but also that they're always up to date and never 
down. Tall order. An official anycast setup is much better: updates are 
done the way they should be (last year when I wrote an article I 
checked this: there is no policy anywhere on access to the root 
zonefile. You can download it through FTP or even do a zone transfer in 
a few places, but nothing official) and when your local root clone is 
down there should be at least 12 others elsewhere.

Indeed, it is probably sensible for ISPs to do the same.  This would keep
things working internally in the event of an effective isolation due to a
DOS attack, for example.
I think what we need to really solve this is a redesign of the DNS, as 
the way it is now it breaks a fundamental design principle of the 
internet: when two nodes have reachability, they should be able to 
communicate, regardless of what else is (un)reachable. (I'm not 
volunteering, though.)

I've been in a situation where root servers were unavailable for the 
better part of a day, and it's pretty frustrating to see your resolver 
cache disappear over time so you can no longer reach places to which 
you still have connectivity.




Re: national security

2003-12-05 Thread Harald Tveit Alvestrand
Jefsey,

which "we" are you speaking on behalf of?

--On 27. november 2003 23:20 +0100 jfcm [EMAIL PROTECTED] wrote:

While parallel issues start being discussed and better understood at
WSIS, we have next week a meeting on Internet national security,
sovereignty and innovation capacity.







Re: national security

2003-12-04 Thread jfcm
Dear Mr. Lindqvist,
I am afraid I do not understand some of the points you try to make. I will 
give basic responses; please do not hesitate to elaborate.

On 21:27 02/12/03, Kurt Erik Lindqvist said:
 The post KPNQwest updates are a good example of what Govs do not want
 anymore.
I can't make this sentence out. Do you mean the demise of KPNQwest?
In that case, please explain. And before you do: I probably know more 
about KPNQwest than anyone else on this list, with a handful of exceptions 
that were all my colleagues doing the IP Engineering part with me. Please 
go on...
I am referring (post-KPNQwest) to the management lesson ICANN gave 
concerning the root when the 66 ccTLD secondaries hosted by KPNQwest were 
to be updated. No one at many ccTLDs, and Govs, will forget it.

 Consider the French (original) meaning of gouvernance. For networks
 it would be net keeping. Many ICANN relational problems would
 disappear.
Ok, enough of references to France/French/European. I was born and grew
up in Finland, I have more or less lived in Germany and the Netherlands
for 6-36 months, I have lived in Sweden for 9 years and I have a residence
in Switzerland. I have worked on building some of the largest Internet
projects in Europe and the largest pan-European networks. Even with
governments trying to meet their needs. So I should be the perfect
match for what you are trying to represent. And I just don't buy any of
your arguments. Sorry.
I suppose you are living in the French-speaking part of Switzerland then. 
Maybe people there do not have a common command of the 13th-century French 
of northern France (where the word comes from) or of the current Senegalese 
administration (where the word is in current legal use)?

 What would be the difference if the ccNSO resulted from an MoU? It
 would make it possible to help/join with ccTLDs, and RIRs, over a far
 more interesting ITU-I preparation. I suppose the RIRs would not be
 afraid that an ITU-I would not be here 2 years from now.
As someone who is somewhat involved in the policy work of the RIRs, I
really,really, really want you to elaborate on this.
Glad you do. I keep your entries to simplify the reading.

I just fail to see this. What is it with the ITU that will give us

   a) More openness? How do I as an individual impact the ITU process?
This is not the topic (I come initially from a national point of view), and 
not to discuss but to listen.

But this is also an issue separate from the IETF. Having been deeply 
involved for years in @large issues (ICANN), far longer in political, 
public, corporate, and technology-development network issues, and having 
participated for some years in the ITU process (at that time CCITT), I 
think I will say Yes.

1. As a user I have no impact on IETF/ICANN. I do not even get heard.
2. but (and with a big but, until ITU adapts and creates an I sector for 
us) ITU has the structures and procedures (Focus Groups and Members' called 
meetings) just to do that.

You may have studied/participated in the WSIS and observed the way it works?

   b) More effectiveness and a faster adoption rate?
Probably yes. For a simple reason. Internet is just another technology to 
support users data communications needs. I may find faster, better; 
parallel solutions else where. Competition fosters speed and quality or 
death. As a user I am darwinian about the process I use.

   c) A better representation of end-user needs?
Certainly. This is a recurring issue. Quote me the way IETF listen to 
end-users needs. I have been flamed enough as a user representative to know 
it. And don't tell me who do you represent? or I will bore everyone 
responding. This thread show it. As a user I rose a question. Responses:

- question are disputed. I learned a long ago that questions are never 
stupid, but responses may be.
- question asked back to me: who are you. I appreciate that you may warn me 
about KPNQuest to spare us a trolls response. But I wander why the author 
would have any impact on a new question.


 The lack of user networks. Multiorganization TLDs Jerry
 introduced as a reality we started experiencing. Just consider that
 the large user networks (SWIFT, SITA, VISA, Amadeus, Minitel, etc.)
 started before 85. OSI brought X.400. CERN brought the Web. But ICANN
 - and unreliable technology - blocks ULDs (User Level Domains).
To be honest, none of those networks are really large compared to the
Internet, or in terms of users and especially bandwidth to some of the
large providers.
I agree. But I fail to see how it relates to the point?

My point is that SWIFT should have been able to become .swift a very long 
time ago. That .bank was denied to the World Bank Association, and that SITA 
was given a try with .aero.

So we can technically compare the capacity of the Internet to support the 
needs of a very, very old network like SITA. It does not seem to be very 
appealing to the air transportation community: I never saw any ad for 
aerolinas.aero yet, however mnemonic its interest.

And, yes, OSI brought 

Re: national security

2003-12-04 Thread jfcm
At 09:21 03/12/03, Kurt Erik Lindqvist wrote:
I agree and realize this. However, let's take that argument out into the 
open and not hide it behind national security.
I regret such aggressiveness. I simply listed suggestions I collected, to 
ask for warnings, advice, and alternatives to problems identified not from 
inside the internet but from outside. It was labelled as a topic of national 
security because it was to prepare a meeting on national vulnerability to 
the Internet. If it had been about Web Information and Services Providers, 
or User Networks demands, it would have been the same

I expected warnings, advice, alternative propositions. If you need a long 
discussion among specialists to come up with that, please do. I am only 
interested in an authoritative outcome. And we will all thank you for that.
jfc









Re: national security

2003-12-04 Thread Kurt Erik Lindqvist

 I agree and realize this. However, let's take that argument out into 
 the open and not hide it behind national security.

 I regret such aggressiveness. I simply listed suggestions I collected 
 to ask for warnings, advice, and alternatives to problems identified 
 not from inside the internet but from outside.

Why don't you simply go inside and find out? There is nothing like 
first hand knowledge!

 It was labelled as a topic of national security because it was to 
 prepare a meeting on national vulnerability to the Internet. If it had 
 been about Web Information and Services Providers, or User Networks 
 demands, it would have been the same

I know a number of countries that have looked at this from a national 
perspective. None of them have argued that the ITU is the solution. On 
the contrary, the distributed control of the Internet is a good value.

 I expected warnings, advice, alternative propositions. If you need a 
 long discussion among specialists to come up with that, please do. I am 
 only interested in an authoritative outcome. And we will all thank you 
 for that.

What the collective Internet thinks is documented largely through the 
IETF process, or related organizations. I think that the issues you are 
trying to raise are already answered at any point in history as being a 
reflection of the current set of standards.

- kurtis -





Re: national security

2003-12-04 Thread Kurt Erik Lindqvist

  The post KPNQwest updates are a good example of what Govs do not 
 want anymore.
 I can't make this sentence out. Do you mean the demise of KPNQwest?
 In that case, please explain. And before you do: I probably know more 
 about KPNQwest than anyone else on this list, with a handful of 
 exceptions that were all my colleagues doing the IP Engineering part 
 with me. Please go on...

 I am referring (post-KPNQwest) to the management lesson ICANN gave 
 concerning the root when the 66 ccTLD secondaries hosted by KPNQwest 
 were to be updated. No one at many ccTLDs, and Govs, will forget it.

I was there when KPNQwest went down. I think I have concluded that what 
you are referring to was a machine called ns.eu.net. That machine has a 
history that goes back to the beginning of the Internet in Europe. 
Through mergers and acquisitions it ended up on the KPNQwest network. 
It was secondary for a large number of domains, including ccTLDs. When 
KPNQwest went down, the zone content and address block was transferred 
to RIPE NCC. As far as I can tell it is still there. TLDs were asked to 
move away from the machine over time.

As a matter of fact, several studies the year before KPNQwest went 
down pointed out the problem of having all the world's TLDs using 
just a few machines as slave servers. However, the DNS is designed to 
work fine even with one slave not reachable. So even if ns.eu.net had 
gone off-line abruptly, which it never did, people got, and 
apparently still have, plenty of time to move. I think this incident 
clearly shows the robustness of the current system, more than anything 
else.


 I just fail to see this. What is it with the ITU that will give us

a) More openness? How do I as an individual impact the ITU process?

 This is not the topic (I come initially from a national point of view) 
 and not to disuss but to listen.

 But this is also an IETF separted issue. As deeply involved for years 
 in @large issues (ICANN) and far longer political, public, coporate, 
 technology development network issues and for having shared for some 
 years in the ITU process (at that time CCITT), I think I will say 
 Yes.

 1. As a user I have no impact on IETF/ICANN. I do not even get heard.

IETF and ICANN in this respect are two completely different 
organizations and processes. In the IETF, you are making yourself heard. 
Quite a lot, actually.

 2. but (and with a big but, until ITU adapts and creates an I 
 sector for us) ITU has the structures and procedures (Focus Groups and 
 Members' called meetings) just to do that.

 You may have studied/participated in the WSIS and observed the way it works?

It certainly doesn't strike me as open, at least. I have read the 
following: http://www.itu.int/wsis/participation/accreditation.html. 
An organization where I have to apply for accreditation doesn't sound 
open to me. Actually I am not even sure what WSIS expects as input. To 
me it seems like a forum for governments to be seen, with the hope that 
they will have a forum where they can raise issues to other governments.

What I am missing is a) The input of the professionals b) How they 
expect to use any eventual output.

Again, I fail to see what the ITU process gives that has a clear 
advantage over the current IETF process. And as said, there are also 
governments who have come to understand this and learnt how to deal 
with the IETF process at the same time as making contingency plans.

b) More effectiveness and a faster adoption rate?

 Probably yes. For a simple reason. Internet is just another technology 
 to support users' data communications needs. I may find faster, better, 
 parallel solutions elsewhere. Competition fosters speed and quality, 
 or death. As a user I am Darwinian about the process I use.

So you are saying that the ITU will provide better standards at faster 
speed? That has most certainly not been the case before...

c) A better representation of end-user needs?

 Certainly. This is a recurring issue. Quote me the way the IETF listens 
 to end-users' needs. I have been flamed enough as a user representative 
 to know it. And don't tell me who do you represent? or I will bore 
 everyone responding. This thread shows it. As a user I raised a question. 
 Responses:

The IETF makes decisions by rough consensus. If you have a point that 
is valid enough, you will get enough people to support you. If not, 
life goes on.

 - questions are disputed. I learned long ago that questions are never 
 stupid, but responses may be.

No, but the question might tell a lot about who you are and what your 
motives are.

 - the question is asked back to me: who are you? I appreciate that you 
 may warn me about KPNQwest to spare us a troll's response. But I wonder 
 why the author would have any impact on a new question.

Knowing people's backgrounds is always helpful in understanding a 
discussion.

 I agree. But I fail to see how it 

Re: national security

2003-12-04 Thread Franck Martin




On Fri, 2003-12-05 at 09:00, Kurt Erik Lindqvist wrote:


  The post KPNQwest updates are a good example of what Govs do not 
 want anymore.
 I can't make this sentence out. Do you mean the demise of KPNQwest?
 In that case, please explain. And before you do: I probably know more 
 about KPNQwest than anyone else on this list, with a handful of 
 exceptions that were all my colleagues doing the IP Engineering part 
 with me. Please go on...

 I am referring (post-KPNQwest) to the management lesson ICANN gave 
 concerning the root when the 66 ccTLD secondaries hosted by KPNQwest 
 were to be updated. No one at many ccTLDs, and Govs, will forget it.

I was there when KPNQwest went down. I think I have concluded that what 
you are referring to was a machine called ns.eu.net. That machine has a 
history that goes back to the beginning of the Internet in Europe. 
Through mergers and acquisitions it ended up on the KPNQwest network. 
It was secondary for a large number of domains, including ccTLDs. When 
KPNQwest went down, the zone content and address block was transferred 
to RIPE NCC. As far as I can tell it is still there. TLDs were asked to 
move away from the machine over time.

As a matter of fact, several studies the year before KPNQwest went 
down pointed out the problem of having all the world's TLDs using 
just a few machines as slave servers. However, the DNS is designed to 
work fine even with one slave not reachable. So even if ns.eu.net had 
gone off-line abruptly, which it never did, people got, and 
apparently still have, plenty of time to move. I think this incident 
clearly shows the robustness of the current system, more than anything 
else.


There are now organisations installing root servers in all countries that want one. If you are operating a ccTLD, you may want to have a root server sitting next to your machines, so if the national Internet link goes down (something major but not impossible when many countries have only one link to the Internet) the system still works for all the national domain names...

This is not a very well known fact, and I stumbled upon it recently after wanting to complain that root servers were only in developed countries.

Oh, btw, to install a root server any PC will do; it is not something difficult, as it carries only a couple of hundred records (200 countries and a few gTLDs), not the millions of a .com.
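
(The order of magnitude is easy to check against a copy of the zone -- a
sketch, assuming a server that still permits transfers:

    $ dig @f.root-servers.net . axfr | awk '$4 == "NS" {print $1}' | sort -u | wc -l

which counts the delegated names: a few hundred, as described above.)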

Cheers




Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
"All knowledge is an answer to a question" G. Bachelard








Re: national security

2003-12-04 Thread Franck Martin




On Fri, 2003-12-05 at 12:16, Suzanne Woolf wrote:

On Fri, Dec 05, 2003 at 10:44:00AM +1200, Franck Martin wrote:
 There are now organisations installing root servers in all countries
 that want one. If you are operating a ccTLD, you may want to have a root
 server sitting next to your machines, so if the national Internet link
 goes down (something major but not impossible when many countries have
 only one link to the Internet) the system still works for all the
 national domain names...

We (ISC) are widely anycasting f.root-servers.net. Several of the
other operators of root nameservers have begun to anycast their
servers as well, or announced plans to do so.

Is this what you meant? If not, could you elaborate?


Yes this is what I mean


 This is not a very well known fact, and I stumbled upon it recently
 after wanting to complain that root servers were only in developed
 countries.

It's hard to quantify what developed means in this context. Our
anycast f-root systems, for example, do need some infrastructure
around them in order to be useful, but we have anycast clusters in
over a dozen locations, most outside of the G8. See
f.root-servers.org.

Well, just use the LDC index of the UN if you are in doubt, but we are not here in any contest... Outside the G8 is something. Yes, they do need some infrastructure that you may not find in a developing country... but then see my last point...


 Oh, btw, to install a root server any PC will do; it is not something
 difficult, as it carries only a couple of hundred records (200 countries
 and a few gTLDs), not the millions of a .com.

Operationally, this is a dangerous half-truth. It may be the case that
you can run a nameserver that believes it is authoritative for the
root zone and will answer for it in this way. But under real world
conditions (significant numbers of queries, possibility of DDoS or
other attack, etc.) this is far from adequate.


This is not a dangerous half-truth; it has to be demystified. Let's take the example of a country like Tonga. A simple PC will do for them, because the number of Internet users there is maybe about 1000 people. With anycast properly set up, only the packets of that country will reach the local root-server (proximity), so it is unlikely to be under heavy load with 1000 people on the Internet there...

Finally, before a root-server is installed somewhere, someone will do an assessment of the local conditions and tailor it adequately. I want countries to request installation of root servers, and I know about 20 Pacific Islands countries that need root-servers in case their Internet link goes dead.

cf www.picisoc.org if you want to join us...


thanks,
Suzanne



Suzanne Woolf    +1-650-423-1333
Senior Programme Manager, ISC		

		** Fortune favors the prepared mind **





Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
"All knowledge is an answer to a question" G. Bachelard








Re: national security

2003-12-04 Thread Dean Anderson
On 5 Dec 2003, Paul Vixie wrote:

 my experience differs.  when a root name server is present it has to be
 fully fleshed out, because if it isn't working properly or it falls over
 due to a ddos or if it's advertising a route but not answering queries,
 then any other problem will be magnified a hundredfold. 

It depends on the problem profile you are working to avoid.

 doing root name service in a half-baked way is much worse than not doing
 it at all, since over time the costs of transit to a good server will be
 less than the costs of error handling and cleanup from having a bad one.

This could be true, but irrelevant. It depends on the costs of transit and 
cleanup.  Transit to a remote island could be very expensive, while labor 
to cleanup any problems might be very cheap.

 moreover, your statement "only the packet(s) of that country will reach the
 local root server" is presumptive.  

Not necessarily.  If the country operates a root server that is only
accessible from that country, that is, only preloaded in the caches
of that country's nameservers, then the 'presumption' is true.  The list
of root nameservers is determined by the lists that are pre-loaded into
other nameservers, not by the 'dig . ns' query on a real root.  You could
have hundreds of root slaves, but only a small number of truly global root
servers, without any problems at all.  This would probably be a good thing 
for the global servers.  

 under error conditions where transit is leaked, such a server could end
 up receiving global-scope query loads. in our current
 belt-and-suspenders model, we (f-root) closely monitor our peer routing,
 AND we are massively overprovisioned for expected loads, since a ddos
 or a transit-leak can give us dramatically unexpected loads.

This is a feature that is specific to your anycast setup.  Simpler,
non-anycast setups wouldn't have this problem.

 if you know someone who is willing to provision a root name server without
 a similar belt and similar suspenders, then please tell them to stop.

Are all the roots doing anycast?  I've run private roots without any
problems, and have experienced significant improvements for doing so. (see
below)

 on a connectivity island (which might be in the ocean or it might just be
 a campus or dwelling), the way to ensure that local service is not disrupted
 by connectivity breaks is to make your on-island recursive name servers
 stealth slaves of your locally-interesting zones.  in new zealand for
 example it was the custom for many years to use a forwarding hierarchy
 where the nodes near the top were stealth slaves of .NZ, .CO.NZ, etc.
 that way if they lost a pacific cable system they could still reach the
 parts of the internet which were on the new zealand side of the break.

This assumes that you are mixing authoritative and caching nameservers, 
something that many people (including you) advise against.

Operating a root nameserver is much easier.  Obviously, in the case of an 
island or small country that has only one connection, or perhaps one 
network center, a DDOS that affects the local root is going to affect all 
connectivity. Their only option may be to drop connectivity.  Actual war 
could have the same impact, due to broken communications lines. A local 
root in each country is probably a good idea.

I've also found that when setting up non-connected laboratory networks, it
is better to have a 'lab root' server that acts like a root, since
machines in the lab can't access real root servers. This greatly enhances
performance in the case where a wrong, or just non-lab, domain name is
typed in, since you get an NXDOMAIN back right away instead of waiting
for a timeout as the root servers are tried.
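
A minimal sketch of that lab-root arrangement (the address is RFC 1918
space and the name uses the reserved .example domain, both
hypothetical): the lab root is simply a master for a cut-down root zone,

    // named.conf on the lab root server
    zone "." {
            type master;
            file "master/lab-root.zone";    // only the lab's delegations
    };

and the lab resolvers carry a two-line hints file pointing at it:

    .                 3600000      NS    labroot.example.
    labroot.example.  3600000      A     10.0.0.53

Anything not delegated in lab-root.zone then gets an immediate NXDOMAIN
instead of a timeout.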




Re: national security

2003-12-04 Thread jfcm
Paul,
1. all this presumes that the root file is in good shape and has not been 
tampered with.
How do you know the data in the file you disseminate are not polluted 
or changed?
2. where is the best documentation - from your own point of view - of a 
root server organization?
thank you
jfc

At 02:53 05/12/03, Paul Vixie wrote:
On Fri, Dec 05, 2003 at 10:44:00AM +1200, Franck Martin wrote:
 Oh, btw, to install a root server any PC will do; it is not something
 difficult, as it carries only a couple of hundred records (200 countries
 and a few gTLDs), not the millions of a .com.
On Fri, 2003-12-05 at 12:16, Suzanne Woolf wrote:
 Operationally, this is a dangerous half-truth. It may be the case that
 you can run a nameserver that believes it is authoritative for the
 root zone and will answer for it in this way. But under real world
 conditions (significant numbers of queries, possibility of DDoS or
 other attack, etc.) this is far from adequate.
[EMAIL PROTECTED] (Franck Martin) writes:
 This is not a dangerous half-truth; it has to be demystified. Let's take
 the example of a country like Tonga. A simple PC will do for them because
 the number of Internet users there is maybe about 1000 people. With
 anycast properly set up, only the packets of that country will reach the
 local root-server (proximity), so it is unlikely to be under heavy load
 with 1000 people on the Internet there...
my experience differs.  when a root name server is present it has to be
fully fleshed out, because if it isn't working properly or it falls over
due to a ddos or if it's advertising a route but not answering queries,
then any other problem will be magnified a hundredfold.  doing root name
service in a half-baked way is much worse than not doing it at all, since
over time the costs of transit to a good server will be less than the costs
of error handling and cleanup from having a bad one.
moreover, your statement "only the packet(s) of that country will reach the
local root server" is presumptive.  under error conditions where transit
is leaked, such a server could end up receiving global-scope query loads.
in our current belt-and-suspenders model, we (f-root) closely monitor our
peer routing, AND we are massively overprovisioned for expected loads,
since a ddos or a transit-leak can give us dramatically unexpected loads.
if you know someone who is willing to provision a root name server without
a similar belt and similar suspenders, then please tell them to stop.
 Finally, before a root-server is installed somewhere, someone will do an
 assessment of the local conditions and tailor it adequately. I want
 countries to request installation of root servers, and I know about 20
 Pacific Islands countries that need root-servers in case their Internet
 link goes dead.

 cf www.picisoc.org if you want to join us...
on a connectivity island (which might be in the ocean or it might just be
a campus or dwelling), the way to ensure that local service is not disrupted
by connectivity breaks is to make your on-island recursive name servers
stealth slaves of your locally-interesting zones.  in new zealand for
example it was the custom for many years to use a forwarding hierarchy
where the nodes near the top were stealth slaves of .NZ, .CO.NZ, etc.
that way if they lost a pacific cable system they could still reach the
parts of the internet which were on the new zealand side of the break.
using a half-baked root-like server to do the same thing would be grossly
irresponsible, both to the local and the global populations.
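
As a minimal sketch of the transfer half of that idea (assuming the
dnspython library and a hypothetical zone master at 192.0.2.53 that
permits AXFR; the zone name is illustrative):

    # Sketch: keep a local copy of a locally-interesting zone by AXFR,
    # so an on-island server can keep answering for it through a break.
    import dns.query
    import dns.zone

    MASTER = "192.0.2.53"   # hypothetical zone master
    ZONE = "co.nz."         # a locally-interesting zone

    zone = dns.zone.from_xfr(dns.query.xfr(MASTER, ZONE))
    zone.to_file("co.nz.zone")   # copy for the stealth slave to serve
    print("transferred", len(zone.nodes), "names from", ZONE)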
note that f-root, i-root, j-root, k-root, and m-root are all doing anycast
now, and it's likely that even tonga would find that one or more of these
rootops could find a way to do a local install.  (c-root is also doing
anycast but only inside the cogent/psi backbone; b-root has announced an
intention to anycast, but has not formally launched the programme yet.)




Re: national security

2003-12-04 Thread Franck Martin
On Fri, 2003-12-05 at 15:32, jfcm wrote:
 Paul,
 1. all this presumes that the root file is in good shape and has not been
 tampered with.
  How do you know the data in the file you disseminate are not polluted 
 or changed?
Because somebody will complain... ;)



Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9  D9C6 BE79 9E60 81D9 1320
"Toute connaissance est une reponse a une question" ("All knowledge is an answer to a question") - G. Bachelard



Re: national security

2003-12-03 Thread Kurt Erik Lindqvist


On onsdag, dec 3, 2003, at 04:12 Europe/Stockholm, Franck Martin wrote:

 ITU is worried like hell, because the Internet is a process that
 escapes the Telcos. The telcos in most of our world are in fact
 governments, and governments/ITU are saying dealing with country names
 is a thing of national sovereignty. What they most of the time fail to
 see is that most registries are willing to hand it over to the
 governments provided they DO understand the issues, and do not use DNS to
 empower telcos with more exclusive licensing power.

 ITU has also been misleading countries by making them think that DNS
 issues will be solved at ITU meetings. I have been telling countries
 that they must attend ICANN meetings and no other one. When this 
 happens, US corporations will have less power over ICANN and things 
 will be better.

I agree and realize this. However, let's take that argument out in
the open and not hide it behind national security. The countries I
have worked with do have national disaster plans that can handle an IP
network completely cut off from the rest of the world. But those plans
are made together with the industry, as today you cannot have this
type of planning without the co-operation of the large, worldwide
companies. Even if the governments own and control many of the telcos
of the world, the operation of the sub-sea cables that transport the
traffic is mostly run by organizations they have no control over.

Best regards,

- - kurtis -





Re: national security

2003-12-03 Thread Dean Anderson
On 3 Dec 2003, Franck Martin wrote:
 ITU is worried like hell, because the Internet is a process that escapes
 the Telcos. The telcos in most of our world are in fact governments, and
 governments/ITU are saying dealing with country names is a thing of
 national sovereignty. What they most of the time fail to see is that
 most registries are willing to hand it over to the governments provided
 they DO understand the issues, and do not use DNS to empower telcos with
 more exclusive licensing power.

I'm not sure that this is really the case with respect to assignment of
ccTLD registries. Though I can't personally vouch for this, I think all of
the ccTLDs have been handed to government-designated representatives when
the governments asked. So I dispute the implied assertion that there is
present evidence of ICANN, IETF, or IANA involvement or interference in
political or governmental controls.

But of course, governments have the sovereign right to control the
communications of their citizens, and if the governments choose, can 'use
DNS to empower telcos with more exclusive licensing power'.  If governments
are concerned about information anarchy, they will undoubtedly bring it up
through the UN and through the ITU.  Or perhaps they will just employ
national firewalls like China did to block unwanted information.

--Dean




Re: national security

2003-12-02 Thread Kurt Erik Lindqvist


 The post KPQuest updates are a good example of what Govs do not want
 anymore.

I can't make this sentence out. Do you mean the demise of KPNQwest?
In that case, please explain. And before you do: I probably know more
about KPNQwest than anyone else on this list, with a handful of
exceptions that were all my colleagues doing the IP Engineering part
with me. Please go on...



 Consider the French (original) meaning of gouvernance. For networks
 it would be net keeping. Many ICANN relational problems would
 disappear.

Ok, enough of references to France/French/European. I was born and grew
up in Finland, I have more or less lived in Germany and the Netherlands
for 6-36 months, I have lived in Sweden for 9 years, and I have a residence
in Switzerland. I have worked on building some of the largest Internet
projects in Europe and the largest pan-European networks, even with
governments trying to meet their needs. So I should be the perfect
match for what you are trying to represent. And I just don't buy any of
your arguments. Sorry.

 What would be the difference if the ccNSO resulted from an MoU? It
 would permit to help/join with ccTLDs, and RIRs, over a far more
 interesting ITU-I preparation. I suppose RIRs would not be afraid an
 ITU-I would not be here 2 years from now.

As someone who is somewhat involved in the policy work of the RIRs, I 
really,
really, really want you to elaborate on this.

[Quotes rearranged]

 The complexity is that ICANN wants to be two conflicting things
 (American and International) and to organize something multinational.

 Vint, you will never change that IANA is part of the Internet and
 Internet is the current solution of the world for its
 data communications. So IANA must be involved. ITU is the way govs
 cooperate in communications (data, telephone, TV, radio) and where
 they have so many mixed interests that they must be cautious (this is
 what protects us, the consumers). So ITU must be involved.

 If you are serious about becoming multinational, you must disengage
 from the US Gov. But IANA will never lose its US Flag without ITU. ITU
 will never develop an acceptable higher layers capacity (ITU-I) before
 long, without ICANN, ccTLD etc.

 So, how long will we have to wait for you to ally (and not to try to
 swallow) with ccTLDs and to sit down with Mr. Zao, stop WSIS worrying,
 and permit joint care about fostering development and innovation?

I just fail to see this. What is it with the ITU that will give us

a) More openness? How do I as an individual impact the ITU process?
b) More effectiveness and a faster adoption rate?
c) A better representation of end-user needs?

 The lack of users' networks. Multiorganization TLDs, which Jerry
 introduced, became a reality we started experiencing. Just consider that
 the large user networks (SWIFT, SITA, VISA, Amadeus, Minitel, etc.)
 started before 85. OSI brought X.400. CERN brought the Web. But ICANN
 - and unreliable technology - blocks ULDs (User Level Domains).

To be honest, none of those networks is really large compared to the
Internet, whether in terms of users or, especially, bandwidth compared to
some of the large providers.

And, yes, OSI brought X.400 - but I am not really sure what to make out
of that point...:-)

 I just note that you never cared about Consumers organizations, while
 a world e-consumer council would have given you the legitimacy of
 billions and the weight to keep Gov partly at large, and satisfied. A
 National Security Kit would then be one of the ICANN raisons d'être,
 keeping Govs happy.

I think that the national governments that are thinking they need
control over ICANN in order to handle a national emergency simply need
to understand the problem better. There are non-US governments with
contingency planning that works without any of the I* organizations
being under the control of ITU. I just guess those have done a better
job.

- - kurtis -






Re: national security

2003-12-02 Thread Franck Martin




On Wed, 2003-12-03 at 08:27, Kurt Erik Lindqvist wrote:







 What would be the difference if the ccNSO resulted from an MoU? It
 would permit to help/join with ccTLDs, and RIRs, over a far more
 interesting ITU-I preparation. I suppose RIRs would not be afraid an
 ITU-I would not be here 2 years from now.

As someone who is somewhat involved in the policy work of the RIRs, I 
really,
really, really want you to elaborate on this.

[Quotes rearranged]

 The complexity is that ICANN wants to be two conflicting things
 (American and International) and to organize something multinational.

 Vint, you will never change that IANA is part of the Internet and
 Internet is the current solution of the world for its
 datacommunications. So IANA must be involved. ITU is the way govs
 cooperate in communications (data, telephone, TV, radio) and where
 they have so many mixed interests that they must be cautious (this is
 what protects us, the consumers). So ITU must be involved.

 If you are serious about becoming multinational, you must disengage
 from the US Gov. But IANA will never lose its US Flag without ITU. ITU
 will never develop an acceptable higher layers capacity (ITU-I) before
 long, without ICANN, ccTLD etc.

 So, how long will we have to wait for you to ally (and not to try to
 swallow) with ccTLDs and to sit down with Mr. Zao, stop WSIS worrying,
 and permit joint care about fostering development and innovation?

I just fail to see this. What is it with the ITU that will give us

	a) More openness? How do I as an individual impact the ITU process?
	b) More effectiveness and a faster adoption rate?
	c) A better representation of end-user needs?


ITU is worried like hell, because the Internet is a process that escapes the Telcos. The telcos in most of our world are in fact governments, and governments/ITU are saying dealing with country names is a thing of national sovereignty. What they most of the time fail to see is that most registries are willing to hand it over to the governments provided they DO understand the issues, and do not use DNS to empower telcos with more exclusive licensing power.

ITU has also been misleading countries by making them think that DNS issues will be solved at ITU meetings. I have been telling countries that they must attend ICANN meetings and no others. When this happens, US corporations will have less power over ICANN and things will be better.

on a side note, Vint/ICANN, if you are reading this: the Pacific Islands Chapter of the Internet Society will have its annual meeting in September 2004 in Vanuatu. I think it is time you sent some outreach people to explain here what the hell ICANN is and how you manage a DNS (www.picisoc.org). Vint, wanna come? Port Vila is a very, very nice place...

Cheers





Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
"Toute connaissance est une reponse a une question" ("All knowledge is an answer to a question") - G. Bachelard








Re: national security

2003-12-01 Thread J-F C. (Jefsey) Morfin
Dear Paul,
Thank you for your response, even if it is not to the question asked. I
never made any proposal. I have listed suggestions made by different
parties (which I certainly take seriously) to address real-life problems of
immediate security for nations subject to catastrophe, war, or international
fights, or confronted with a network collapse.

And I asked for serious warnings, analyses, advice, and alternative suggestions.

At 19:47 30/11/03, Paul Vixie wrote:
this statement is akin to many others made in ignorance of what dns is.  you
are treating it as a mapping service.  perhaps you have been successful at
treating dns as a mapping service in some local context, and this may have
led you to the impossible conclusion that dns itself is a mapping service.
dns is a coherent, distributed, autonomous, reliable database.  distributing
the root as you claim to believe is necessary would create multiple domain
name systems,
Amusing. Yes, I experimented with that in '78. And some were unhappy :-). I
will not tell you the DNS is several things to achieve a common purpose,
which is address mapping. The way it does it _tries_ to be a coherent,
distributed, autonomous, reliable database - which by essence it never is,
except if you stop feeding it and wait for all the TTLs to die.

Here, we are in the case where it is prevented from continuing to try.

not *a* domain name system with a distributed root.  there is no
way to have *a* domain name system with a distributed root unless we (ietf
or other similar agencies) first defined what that meant.
Interesting. Which agency? An agency under a cooperation agreement with ICANN
or NTIA, or a standardization body like ITU, or the P2P standardization
committee?

Anyway, I do not look for a fundamental debate, but for serious,
experienced, and documented considerations about the flexibility of the
existing system and its capacity to effectively sustain some duress under
necessity - and how best to specify/design the solutions then to use.

when you're ready
to commission a multiyear study which would yield documents of the same size
and scope as rfcs 1033+1034+1035+2181, then you'll have demonstrated that
you have some understanding of what you're asking for here.
NSA started the study. Work is engaged by the WH:
http://whitehouse.gov/pcipb. ICANN has documented the way it should be done
(ICP-3). NSI has committed a 500 million budget on DNS. Other projects are
at work. The target now is to know what to do in the meanwhile, and what to
do to protect oneself from their results.

and note that you would then have to sell the resulting system to the 
internet populace which includes end users, domain holders, registrars,
registries, ISPs, and as you point out, nations.  lots of luck, but that 
ship already sailed.
:-) amusing. The world lived millions of years without the DNS. For 20 years
international data nets created naming but lived without the DNS. True, for
less than a decade, since the Web, the world has faced a management problem
because IETF has kept to an early-80's applications architecture. 3/4 of
the world is just telling the USA (WSIS, this week), okay for your 'root
bluff': how much?

Naming was not created by the DNS and will survive the DNS. The DNS
application is a good example of an extended service, but it must adapt to
current needs. It is a 1983 car. It is brilliant, it has been
refurbished a lot, but it is still a 1983 vintage.

in no particular order, i'll address a couple of your other comments.

 5. the possibility of a redundant DNS system. Today the Internet has two
 root files (the same file but presented on two main systems - DNS and 
FTP).
 If one is hacked there is no reference. A redundant system would consist
 of two or more root masters referring to different sets of TLD name
 servers (all of them carrying the same files, but possibly of different
 origins for security reasons).

there is a reference.  several references, actually.
hey! Is not a reference unique? As John would say: which unique master is
the master?

there is no possibility of a hack going undetected or uncorrected.
Not disputing that. The point is: what is the worst impact of one of the
unique copies being hacked and detected? What are the recovery procedures?
What are the control procedures? Are they foolproof? Are they accepted by
users?

The police are often notified of bank robberies immediately. Yet hold-ups
hurt and kill people every day.  We are not salesmen here, but cops and
insurance companies.

Most of all, when the hacker sits in the Oval Office, what is the solution?
Kashpureff was not the only root hacker to be known. Jon Postel was too.

The WTC was built to resist the worst winds, not 747s. Many people regret
it. Our role is to make sure it does not happen again.

but more important, if you had several root files which indicated 
different servers for some TLD's, you would have (by definition) several 
domain name systems,
1. there are two different root files in use each time 

Re: national security

2003-12-01 Thread Karl Auerbach
On 1 Dec 2003, Paul Vixie wrote:

  ICANN's obligation is to guarantee to the public the stability of DNS at
  the root layer.
 
 i disagree...

From ICANN's own bylaws:

  The mission of The Internet Corporation for Assigned Names and Numbers 
  (ICANN) is to coordinate, at the overall level, the global Internet's 
  systems of unique identifiers, and in particular to ensure the stable 
  ^
  and secure operation of the Internet's unique identifier systems ...
  
[emphasis added]

According to m-w.com, "ensure" means "to make sure, certain, or safe:
Guarantee."

In other words, ICANN's mission is a promise, a guarantee.

But that's not all:

ICANN's contract, or rather Memorandum of Understanding, with the United
States requires, yes requires, that ICANN, yes ICANN, not the RIRs, not
the root server operators, "design, develop, and test the mechanisms,
methods, and procedures ... to oversee the operation of the
authoritative root server system" and the allocation of IP number
blocks.

Those are ICANN's own promises that it has made, in legal document after
legal document, to the United States Government.  ICANN may say otherwise,
you may believe otherwise.  But that's the contractual words in black and
white.  It has been the same language since 1998.

In other words, ICANN has made a contractual committment to tell you, as
an operator of a root server, what mechanisms, methods, and procedures  
you must follow to operate your servers.

And that word "oversight" in the MoU does not mean that ICANN promises to
merely watch how you and the other root server operators do what you do
very well.  The word "oversight" includes an ability to reject and to
command.  In other words, ICANN has promised the USG that its authority
over root operations supersedes your own.

We are all well aware that in actual fact that ICANN has no legal
authority over the root server operators.  And we are all aware that the
root server operators have been wary of entering into agreements with
ICANN regarding the operation of the root servers.  That, however, has not
stopped ICANN from making a written promise to the United States government
that it will both oversee the root server operations and formalize its
relationship with the root server operators.

Perhaps ICANN is willing to admit that it has no real authority -
presumably by declaring to the US Department of Commerce that it considers
those sections that I mentioned to be obsolete and not obligatory upon
ICANN, and by removing the obligation to ensure the stable and secure
operation that is contained in its own bylaws - and clearly articulating
to everyone, governments and businesses included, that ICANN is nothing
more than an advisory body that operates only by emanating good vibes in
the hope that others, who do have real power to act, will act in
resonance.

In the meantime ICANN goes about telling governments of the world that it
does far more than emit nudges and hopes;  ICANN tells governments that it
ensures and guarantees.

And outside of the IETF and related communities ICANN does not say that it
is merely an advisory body lacking authority. ICANN's message to the
business and intellectual property communities is that ICANN stands strong
and firm and will let nothing interfere with the stable operation of the
internet.

Your note makes my point - that ICANN is in many regards an empty shell,
and has been one for years, that has no real power except in the realm of
the (over) protection of intellectual property, allocation of a very few
new top level domains, and the determination of who among competing
contenders is worthy to operate contested ccTLDs.

At the end of the day - and it is nearly the end of the day here - the
fact of the matter is that ICANN is telling different stories to different
groups.  To the IETF, ICANN holds itself out as one of the guys, merely a
warm and fuzzy coordinator.  But to the business community, ICANN holds
itself forth as a guarantor of internet stability.  And to the United
States Government, ICANN has undertaken to make legal promises to the
effect that it is in charge of DNS, including root server operations, and
IP address allocation.

--karl--

PS, if I am late to the party on anycast issues then it ought to be easy
for ICANN to articulate the answers to my concerns.  This is not an idle
request.  The internet community deserves proof that these questions are
truly answered by hard, reviewable, analysis.  Moreover, with Verisign and
sitefinder lingering on the horizon it is not beyond conception that
Verisign will wave the flag of bias and ask ICANN to demonstrate why
anycast got such an easy entree.






Re: national security

2003-12-01 Thread Masataka Ohta
Paul Vixie;

The switch to anycast for root servers is a good thing.

again there's a tense problem.  there was no switch to anycast.  the last
time those thirteen (or eight) ip addresses were each served by a single host
in a single location was some time in the early 1990's.
So?

Service by multiple hosts in a single location is hardly anycast.

When was it switched to anycast?

   But it was hardly
without risks.  For example, do we really fully comprehend the dynamics of
anycast should there be a large scale disturbance to routing on the order
of 9/11?

yes, actually, we do.  (or at least the f-root operator does.)
Can you explain the reactions of people who have been engaging
in root server operations against anycast without comprehending
the dynamics of anycast, as observed in the last month on the IETF
DNS OP ML?
		Masataka Ohta





Re: national security

2003-12-01 Thread vinton g. cerf


karl, ICANN has responsibility to do what it can to make sure the DNS and ICANN root 
system work. It does not have to disenfranchise the RIRs and the root servers to do 
this.

vint

At 12:02 AM 12/1/2003 -0800, Karl Auerbach wrote:
Verisign will wave the flag of bias and ask ICANN to demonstrate why
anycast got such an easy entree.

because it did not change the results of queries. sitefinder did.


Vint Cerf
SVP Technology Strategy
MCI
22001 Loudoun County Parkway, F2-4115
Ashburn, VA 20147
703 886 1690 (v806 1690)
703 886 0047 fax
[EMAIL PROTECTED]
www.mci.com/cerfsup 




Re: national security

2003-12-01 Thread Paul Vixie
[EMAIL PROTECTED] (J-F C. (Jefsey)  Morfin) writes:

 Most of all, when the hacker sits in the Oval Office, what is the solution?
 Kashpureff was not the only root hacker to be known. Jon Postel was too.

good bye, sir.
-- 
Paul Vixie



Re: national security

2003-12-01 Thread John C Klensin


--On Monday, 01 December, 2003 07:24 -0500 vinton g. cerf 
[EMAIL PROTECTED] wrote:

karl, ICANN has responsibility to do what it can to make sure
the DNS and ICANN root system work. It does not have to
disenfranchise the RIRs and the root servers to do this.
Vint,

I would go even further than this.  One of the best actions 
ICANN can take, IMO, is to look at a particular situation (and 
the root system and DNS operations generally are probably good 
examples) and say "yep, it is working" followed by some version
of "if it ain't broke, don't fix it"... or even intervene.  One
corollary to this is that not only does it not have to
"disenfranchise..." but that it arguably should not intervene in
those activities at all unless there is a strong case that they 
are not working in some significant way.

In that sense, the observation that ICANN has not significantly 
intervened in either the root system or with the address 
registry environment should be judged as a success unless it is 
argued that one or the other is seriously not working.

  john






Re: national security

2003-12-01 Thread Michael H. Lambert
Dear jfc,

As far as I can tell, you have gone only by your initials on this 
thread.  To help some of us weigh this discussion, could you please 
identify yourself by name and affiliation?

Regards,

Michael Lambert

---
Michael H. Lambert  Network Engineer
Pittsburgh Supercomputing Center    V: +1 412 268 4960
4400 Fifth Avenue   F: +1 412 268 8200
Pittsburgh, PA  15213  USA





Re: national security

2003-12-01 Thread jfcm
At 22:21 01/12/03, Paul Vixie wrote:
[EMAIL PROTECTED] (J-F C. (Jefsey)  Morfin) writes:

 Most of all, when the hacker sits in the Oval Office, what is the solution?
 Kashpureff was not the only root hacker to be known. Jon Postel was too.

good bye, sir.
--
Paul Vixie
Dear Mr. Vixie,
Things will not fall apart on Dec 6th by midnight. But if 189 States and
the USA do not agree on something reasonable on THIS point, we will enter a
period where there will be progressive disagreements over the naming, IMHO
to no one's benefit. And the necessary changes will then not occur
smoothly. Europe supports the US position, with some internal differences
which leave room for a compromise.

Unless you really want to say good bye to the whole thing, why don't you help?

For example, are we not able to just devise a procedure and a system which
builds the root file from the TLD Managers' own real-time data? Had Vint
responded with that, it would have meant stability for ICANN and IETF for
years. OK, ICANN's stability through power greed is inadequate, but is that
not also inadequate to permit it? And not to consider how to change that
situation?

Be sure that whatever the outcome of Dec. 5/6, the IANA US root file
management is condemned - and probably ICANN in two years' time if it sticks
to it. The USA is not going to support them, as it did not in Marrakech
for the IDNs. What would be its advantage?

The important issue is to know what will replace it. An automated
compilation of the TLD Managers' data by IANA would be preferable to an ITU
system, after a rough debate and transfer.
Best regards
jfc morfin







Re: national security

2003-12-01 Thread Michael Froomkin - U.Miami School of Law

Alas for this rosy vision, ICANN *tried* to boss the RIRs and get them to
sign contracts agreeing to pay it and obey it, but they balked.  So all
credit to the RIRs - and none to ICANN - on this one.


On Mon, 1 Dec 2003, John C Klensin wrote:

 
 
 --On Monday, 01 December, 2003 07:24 -0500 vinton g. cerf 
 [EMAIL PROTECTED] wrote:
 
  karl, ICANN has responsibility to do what it can to make sure
  the DNS and ICANN root system work. It does not have to
  disenfranchise the RIRs and the root servers to do this.
 
 Vint,
 
 I would go even further than this.  One of the best actions 
 ICANN can take, IMO, is to look at a particular situation (and 
 the root system and DNS operations generally are probably good 
 examples) and say "yep, it is working" followed by some version
 of "if it ain't broke, don't fix it"... or even intervene.  One
 corollary to this is that not only does it not have to
 "disenfranchise..." but that it arguably should not intervene in
 those activities at all unless there is a strong case that they 
 are not working in some significant way.
 
 In that sense, the observation that ICANN has not significantly 
 intervened in either the root system or with the address 
 registry environment should be judged as a success unless it is 
 argued that one or the other is seriously not working.
 
john
 
 
 
 
 

-- 
http://www.icannwatch.org   Personal Blog: http://www.discourse.net
A. Michael Froomkin   |Professor of Law|   [EMAIL PROTECTED]
U. Miami School of Law, P.O. Box 248087, Coral Gables, FL 33124 USA
+1 (305) 284-4285  |  +1 (305) 284-6506 (fax)  |  http://www.law.tm
 --It's warm here.--




Re: national security

2003-11-30 Thread Bill Manning
% Anycast may even have preceded the creation of ICANN - perhaps an IETF
% source or one of the root server operators can say when the first ANYCAST
% deployments were done.

Not an IETF source. In discussions on the earliest anycast
instance, there was general agreement that M was anycast
from the time it moved from LA to Tokyo, roughly 1997.

--bill
Opinions expressed may not even be mine by the time you read them, and
certainly don't reflect those of any other entity (legal or otherwise).



Re: national security

2003-11-30 Thread Paul Vixie
i'm going to bend my own policy a bit and reply to a role account:

[EMAIL PROTECTED] (jfcm) writes:

 ...  The interest is not sites nor network protection layers, but nations'
 protection from what happens on or with the networks. This is in line
 with the White House document http://whitehouse.gov/pcipb with the
 addition of the risks created by the US (and every other national) cyber
 security effort, and from not mastering the root. In most of the cases
 the identified risks come from a centralized [root] which has to be made
 distributed.

this statement is akin to many others made in ignorance of what dns is.  you
are treating it as a mapping service.  perhaps you have been successful at
treating dns as a mapping service in some local context, and this may have
led you to the impossible conclusion that dns itself is a mapping service.

dns is a coherent, distributed, autonomous, reliable database.  distributing
the root as you claim to believe is necessary would create multiple domain
name systems, not *a* domain name system with a distributed root.  there is no
way to have *a* domain name system with a distributed root unless we (ietf
or other similar agencies) first defined what that meant.  when you're ready
to commission a multiyear study which would yield documents of the same size
and scope as rfcs 1033+1034+1035+2181, then you'll have demonstrated that
you have some understanding of what you're asking for here.  and note that
you would then have to sell the resulting system to the internet populace
which includes end users, domain holders, registrars, registries, ISPs, and
as you point out, nations.  lots of luck, but that ship already sailed.

in no particular order, i'll address a couple of your other comments.

 5. the possibility of a redundant DNS system. Today the Internet has two 
 root files (the same file but presented on two main systems - DNS and FTP). 
 If one is hacked there is no reference. A redundant system would consist
 of two or more root masters referring to different sets of TLD name
 servers (all of them carrying the same files, but possibly of different 
 origins for security reasons).

there is a reference.  several references, actually.  there is no possibility
of a hack going undetected or uncorrected.  but more important, if you had
several root files which indicated different servers for some TLD's, you
would have (by definition) several domain name systems, not a domain name
system with high redundancy.  until you demonstrate some understanding of
that fundamental and definitional aspect of dns, you won't be taken seriously
among the community who does understand those things.
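
One hedged sketch of how such a hack would surface (assuming the dnspython
library; the addresses are well-known root server IPs, and a real check
would compare full zone contents, not just serials):

    # Sketch: cross-check several root servers' view of the root zone by
    # comparing SOA serials; a tampered or stale copy would stand out.
    import dns.message
    import dns.query

    ROOTS = {"f": "192.5.5.241", "k": "193.0.14.129", "m": "202.12.27.33"}

    serials = {}
    for name, addr in ROOTS.items():
        response = dns.query.udp(dns.message.make_query(".", "SOA"), addr,
                                 timeout=3)
        serials[name] = response.answer[0][0].serial   # SOA serial

    print(serials)
    if len(set(serials.values())) > 1:
        print("copies disagree (tampering, or a zone push in flight)")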

 Thank you for your comments.
 jfc

please learn the basics before you come in here and start making proposals.
-- 
Paul Vixie



Re: national security

2003-11-30 Thread jfcm
At 17:35 30/11/03, Michael H. Lambert wrote:
Dear jfc,
As far as I can tell, you have gone only by your initials on this 
thread.  To help some of us weigh this discussion, could you please 
identify yourself by name and affiliation?
Sorry for this. The question was asked and replied to, but I did not notice
the list was not in copy.

I have long been used to doing this because I think what is debated is
important, not the persons (when I want to know the competences of someone
I go to the person's site).

This is precisely a part of the concern: will IETF practically be able to
understand and treat equally needs/inputs from Pittsburgh and from every
other place and culture? What to suggest if not? Analysing and working
together across world and technical cultures is not that easy (look at my
exchanges even with John Klensin). If you follow the WSIS you know that
this type of concern is the crux of the current negotiation.

So who I am should not impact the responses to what is a documented call
for warning, advice, or alternatives on vital issues. Or a specialized body
must be created.

I am JFC Morfin. You will find my professional site at http://utel.net. I
created in 1978 the SIAT/Intlnet ( http://intlnet.org ), which until 1986
assumed the role of a very, very lean ICANN for the international public
packet-switched network's needs. After 9/11 I called a study group at the
DNSO/BC to propose the writing of an ICP-4 ICANN Document on Global
Security. The limited interest of the next MdR meeting on Security, the
announcements of M$, and Richard Clarke's mission led to the launch of a
project and a study named dot-root (http://dot-root.com). It strictly
abided by ICANN ICP-3 to organize a DNS double test bed with a unique and a
parallel root system. The study (French) was presented earlier this year to
people in charge. As part of the follow-up there is a meeting on national
vulnerabilities to the DNS/IPv6 as identified by the study and readers'
feedback. This does not directly concern site or network protection, but
the protection of lives, economy, culture, way of life, e-government,
development, sovereignty, etc. - not theoretical but right now.

jfc




Re: national security

2003-11-30 Thread Dean Anderson
 IETF is to deliver technical solutions. IANA is to deliver a registry
 service. What is ICANN up to? Except what we agree: to host forums to
 help consensus there.

 BTW is that very different from ITU? Just that Paul Twomey's Nov 19th
 document would have resulted from a painstaking g/sTLD consensus and
 would not have worried ccTLDs.

This doesn't completely cover it. For example, the IETF delivers
standards.  The ITU also delivers standards.  The ITU has issued competing
standards such as IS-IS, X.400, X.500, etc., etc.  The IANA and ICANN
functions are also similarly performed by the ITU - for example, the
allocation of radio frequencies and radio call sign prefixes.  Obviously,
the functions of the IETF, IANA, and ICANN could be done by the ITU, and
the ITU has considerable experience in these areas.

I think there is very little credence to any legal benefits of being
incorporated in Switzerland versus the US.  Indeed, generally, one must
incorporate in any country in which one has permanent employees. If the
ITU were to take over the IETF/IANA/ICANN and their US employees, it would
still be incorporated in the US, as well as in other countries.
Organizations can be sued in any country they do business in whether they
are incorporated there or not, so it doesn't matter too much where they
are incorporated.  There are variations in the fees to maintain a
corporation, but these are minimal. For example, it costs about $300/yr in
Massachusetts, versus about $125/yr in Delaware.  Delaware also has
extensive support by the state department of corporations for finding the
necessary corporate agents, who charge nominal fees to be the corporate
agent.  This causes most corporations to be incorporated in Delaware.
Other than minor issues like that, there is little benefit.  While
different countries have different tax structures, these are likely to be
of little to no consequence to a standards organization.

The real issues with moving the IETF/IANA/ICANN functions under the ITU
are questions of economics and democratic constituencies.  It is these
questions that really need to be addressed:

Quite obviously, duplication of the administration efforts results in
wasted money.  The only reason to keep them separate is to perform this
job better.  As has been often pointed out, the IETF is fairly sloppy in
its administration. Clearly, moving the administration of IETF activities
and standards to the ITU would be a benefit for all in terms of savings
and in terms of improved administration.

The main criticism of the IETF/IANA/ICANN by the rest of the world seems
to fall under the democratic constituencies issue.  People outside the US
seem to distrust the US, and feel that their voices are not being heard,
and that they aren't being represented properly.  I don't know whether
there is a truly genuine failing in this respect, but there is clearly a
widespread concern.  The perception is just as serious as an actual
impropriety.

Given that the economics seem to point to consolidation with the ITU,
and the fact that many seem to place more trust in the ITU, I think we
ought to seriously consider this option.  I've been through the merger of
standards groups incorporated in different countries, as a technical
consultant, and the results have been very positive.  The benefits are
similar to the benefits gained by the merger of companies.  The main
difference being that the users have much more say in the direction of a
standards group than the customers of two private companies.

Of course, the ITU also needs to agree to sign on to take over this
responsibility, and it will require additional funding for the ITU, and it
is unclear that the funding for ICANN/IANA/IETF will be transferred to
the ITU if such a change is made.  Essentially, this means that the rest
of the world will have to put more money into ITU funding.


Dean Anderson
CEO
Av8 Internet, Inc





Re: national security

2003-11-30 Thread Valdis . Kletnieks
On Sun, 30 Nov 2003 20:42:18 EST, Dean Anderson said:

 The main criticism of the IETF/IANA/ICANN by the rest of the world seems
 to fall under the democratic constituencies issue.  People outside the US
 seem to distrust the US, and feel that their voices are not being heard,
 and that they aren't being represented properly.  I don't know whether
 there is a truly genuine failing in this respect, but there is clearly a
 widespread concern.  The perception is just as serious as an actual
 impropriety.

I've not followed the innards of the ITU process - have they fixed the
balloting/veto setup that resulted in such "all options for everybody"
elephantine standards like X.[45]00?  Democratic constituencies aren't
always a good idea.




Re: national security

2003-11-30 Thread Karl Auerbach
On Sat, 29 Nov 2003, vinton g. cerf wrote:

 I can't seem to recall during my 2 1/2 years on ICANN's board that there
 ever was any non-trivial discussion, even in the secrecy of the Board's
 private e-mail list or phone calls, on the matters of IP address
 allocation or operation of the DNS root servers.  Because I was the person
 who repeatedly tried to raise these issues, only to be repeatedly met with
 silence, I am keenly aware of the absence of any substantive effort, much
 less results, by ICANN in these areas.
 
 The fact that there were few board discussions does not mean that staff
 was not involved in these matters. Discussions with RIRs have been lengthy
 and have involved a number of board members. 

Discussions with staff hardly constitute responsible oversight by ICANN
as a body responsible to the internet public.  All you have said is that
ICANN has not merely abandoned its oversight of DNS and IP addresses to
the root server operators and the RIRs but also that the only elements
within ICANN that even bother to observe are the occasional board member
and perhaps some unnamed staff members.

I raised the anycast issue several times to the board.  Staff received 
those e-mails.  I do not accept as valid an after-the-fact explanation that
says "Even though nobody bothered to answer Karl's inquiries, ICANN's
staff was really making informed decisions, in secret, about anycast."

ICANN's job is not to make decisions in secret, by unknown members of
staff, based on unknown criteria and using unknown assumptions.  To do 
so, which is what you are saying has been done, is simply yet another 
abandonment of ICANN's obligations.

The switch to anycast for root servers is a good thing.  But it was hardly
without risks.  For example, do we really fully comprehend the dynamics of
anycast should there be a large scale disturbance to routing on the order
of 9/11?  Could the machinery that damps rapid swings of routes turn out 
to create blacked out areas of the net in which some portion of the root 
servers become invisible for several hours?  Could one introduce bogus 
routing information into the net and drag some portion of resolvers to 
bogus root servers?

I'm pretty sure that the root server operators have answers to these
questions.  However, it is incumbent on ICANN not to simply accept that
these people know what they are doing; ICANN must document it, ICANN must
inquire whether some of the decisions are made on public-policy
assumptions (in which case the public needs to become a party to those
decisions).

Considering that we know that there would be no ill effects to adding even
a hundred new top level domains, one has to wonder at the degree of
automatic deference (deference amounting to an institutional decision to
be blind) to the deployment of anycast as compared to the hyper detailed
inquiry into matters even as irrelevant as the pronounceability in English
of a few proposed new top level domains.

In addition, an argument could well be made that anycast violates the
end-to-end principle.  For instance, it's hard, or impossible, to maintain
a TCP connection that spans a routing change that sunsets one anycast
partner and sunrises another.

Given that one of the strongest arguments against Verisign's Sitefinder is
that it breaks things, and that it violates the end-to-end principle,
Verisign lawyers must be very pleased that they can so easily demonstrate
that ICANN is willing to act with overt bias, to let slide, without
inquiry, those things proposed by ICANN friends.

 Sorry, anycast has been out there for quite a while; I am surprised you
 didn't know that.

No need for sarcasm.  As you must be well aware, I was the one who explained
to ICANN's Board how anycast works.  Indeed, I was the one who brought the
deployment of anycast roots to the Board's attention.  I know that the
ICANN Board considers its communications secret.  However if I am required
to defend myself from what I consider to be an unwarranted and
unsupportable assertion regarding my professional knowledge I would have
to consider it my right to defend myself and publish any and all relevant
materials from the archives of the Board's e-mail.

But you miss the point - the deployment of anycast for root servers was a
bold operational decision.  It was a decision made by the root server
operators alone, without ICANN.

ICANN's obligation is to guarantee to the public the stability of DNS at
the root layer.  ICANN's failure to engage in the issue of anycast
deployment was simply and clearly an abandonment of ICANN's
responsibilities.

 [I believe that the anycast change was a good one.  However, there is no 
 way to deny that that change was made independently of ICANN.]
 
 Anycast may even have preceded the creation of ICANN

Yes, anycast has been around for a long time.  Multicast, NATs, and OSI
all also preceded the creation of ICANN.  But does that mean that ICANN
should freely and without question allow the

Re: national security

2003-11-30 Thread Paul Vixie
karl wrote:

 ...
 ICANN's job is not to make decisions in secret, by unknown members of
 staff, based on unknown criteria and using unknown assumptions.  ...

that sentence is punctuated incorrectly.  there's a period after decisions.

 ... so, which is what you are saying has been done, is simply yet another 
 abandonment of ICANN's obligations.

i think there's a tense problem here.  icann cannot abandon that which it
never had.  perhaps a pretense was abandoned, but not an actual obligation.

 The switch to anycast for root servers is a good thing.

again there's a tense problem.  there was no switch to anycast.  the last
time those thirteen (or eight) ip addresses were each served by a single host
in a single location was some time in the early 1990's.

 But it was hardly
 without risks.  For example, do we really fully comprehend the dynamics of
 anycast should there be a large scale disturbance to routing on the order
 of 9/11?

yes, actually, we do.  (or at least the f-root operator does.)

   Could the machinery that damps rapid swings of routes turn out 
 to create blacked out areas of the net in which some portion of the root 
 servers become invisible for several hours?

nope.  or at least, that risk is unchanged from the multihomed servers that
have actual backhaul between their points of presence.  (those of you who
are bgp-aware all know that there is no way to tell the difference between
a robustly multihomed network and a robustly anycasted network.)  if karl
is going to start worrying about flapdamping, he's late to the party, and
the things to worry about aren't all or even mostly anycasted.
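
a toy illustration of that bgp-aware point (the paths and AS numbers below
are made up; a real table carries many more attributes):

    # Toy sketch: a prefix heard over several AS paths looks the same to
    # BGP whether its origin is one multihomed network or many anycast
    # instances; nothing in the data says which it is.
    routes = [
        ("192.5.5.0/24", (64496, 64497, 64499)),   # heard via transit
        ("192.5.5.0/24", (64500, 64499)),          # heard via a peer
    ]

    # crude best-path: shortest AS path wins
    best = min(routes, key=lambda route: len(route[1]))
    print("selected path:", best)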

  Could one introduce bogus 
 routing information into the net and drag some portion of resolvers to 
 bogus root servers?

of course!  but then that was always true and will always be true.  the only
fix to it is some form of secure BGP which i guess means route-signing and
route-verification and oh my what a mess that turns into very quickly.  and
the affected (at risk) elements are, again, overwhelmingly NOT anycasted.

 I'm pretty sure that the root server operators have answers to these
 questions.  However, it is incumbent on ICANN not to simply accept that
 these people know what they are doing; ICANN must document it, ICANN must
 inquire whether some of the decisions are made on public-policy
 assumptions (in which case the public needs to become a party to those
 decisions).

that's an interesting view.  i'm not sure i don't share it!  but that's not
how things work now, or how things have ever worked, and i'm shocked that
karl of all people would want to see ICANN's mission creep in this way.

 Considering that we know that there would be no ill effects to adding
 even a hundred new top level domains, one has to wonder at the degree of
 automatic deference (deference amounting to an institutional decision to
 be blind) to the deployment of anycast as compared to the hyper detailed
 inquiry into matters even as irrelevant as the pronounceability in English
 of a few proposed new top level domains.

well, in my role as a root operator i don't care about content either way,
and even as an internet citizen i don't have strong views about adding TLDs
(beyond what i wrote in http://sa.vix.com/~vixie/bad-dns-paper.pdf that is).

but i can tell the difference between a user-visible change that will affect
the internet's financial and information economy, such as adding TLD's, as
opposed to an operator-visible change that users won't even notice and
which creates no new business opportunities.

one of those, and i really do mean only one of them, is in ICANN's bailiwick.

 In addition, an argument could well be made that anycast violates the
 end-to-end principle.  For instance, it's hard, or impossible, to
 maintain a TCP connection that spans a routing change that sunsets one
 anycast partner and sunrises another.

i suspect that concerns of this kind were the main reason why the rootops
waited to see several years of results from nominum's and ultradns's use of
dns anycast before doing widescale anycast.  in fact, one concern learned
by watching nominum and ultradns led to the namespace piracy known as either
HOSTNAME.BIND or ID.SERVER (depending on the age of the software).  all of
this was in class CHAOS of course.  but widescale anycast would have been
impractical without this extension.
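
that extension can be poked at directly; a minimal sketch with the
dnspython library (assuming the target runs software that answers these
CHAOS-class queries, as BIND does):

    # Sketch: ask an anycast server which instance answered, using the
    # CHAOS-class TXT name described above (f-root's address shown).
    import dns.message
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    query = dns.message.make_query("hostname.bind.", dns.rdatatype.TXT,
                                   rdclass=dns.rdataclass.CH)
    response = dns.query.udp(query, "192.5.5.241", timeout=3)
    for rrset in response.answer:
        print(rrset)   # a TXT record naming the answering instance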

as to karl's end-to-end argument, anyone using a hash-based load balancer,
including Stupid OSPF Tricks, for local load balancing would be subject to
the same issues.  it is therefore very notable that DNS TCP sessions are
short-lived, and that the end can change identities with radically high
frequencies, and there is no observed impact on network load or likelihood
of retrieving a correct and useful answer.
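
a toy sketch of that behaviour (names and addresses are illustrative):
when the backend set changes, the hash can remap a flow mid-connection,
which would break a long-lived TCP session but rarely matters for the
short-lived ones DNS uses.

    # Toy sketch: a hash-based balancer maps a TCP 5-tuple to a backend;
    # withdrawing a backend can remap flows that are already in flight.
    import hashlib

    def pick_backend(flow, backends):
        digest = hashlib.sha256(repr(flow).encode()).digest()
        return backends[digest[0] % len(backends)]

    flow = ("198.51.100.7", 40123, "192.5.5.241", 53, "tcp")
    print(pick_backend(flow, ["node-a", "node-b", "node-c"]))
    print(pick_backend(flow, ["node-a", "node-c"]))   # node-b withdrawn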

 ...
 But you miss the point - the deployment of anycast for root servers was a
 bold 

Re: national security

2003-11-30 Thread vinton g. cerf
karl, we raised the question of anycast risk with SECSAC in response to your
concerns and the conclusion was that the risks had not materialized in the
operation of anycast in roots that had already deployed it. 

There are lots of ways in which routing can be wedged - until we get some
form of authentication, that risk will be with us. Moreover, even with
authentication it is possible to misconfigure routing. 

Any table-driven system that does not have an obvious syntactic or semantic
way of detecting a bad configuration is subject to these risks.
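
As a hedged sketch of that distinction (the expected-address table below is
an illustrative stand-in for real outside knowledge):

    # Sketch: a syntactic check catches a malformed table entry; only a
    # semantic check against outside knowledge catches a well-formed but
    # wrong one.
    import ipaddress

    EXPECTED = {"f.root-servers.net.": "192.5.5.241"}   # outside knowledge

    def validate(name, addr):
        ipaddress.ip_address(addr)                  # syntactic: parses?
        if EXPECTED.get(name) not in (None, addr):  # semantic: expected?
            raise ValueError(name + " points at unexpected " + addr)

    validate("f.root-servers.net.", "192.5.5.241")  # passes both checks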

vint

At 06:29 PM 11/30/2003 -0800, Karl Auerbach wrote:
The switch to anycast for root servers is a good thing.  But it was hardly
without risks.  For example, do we really fully comprehend the dynamics of
anycast should there be a large scale disturbance to routing on the order
of 9/11?  Could the machinery that damps rapid swings of routes turn out 
to create blacked out areas of the net in which some portion of the root 
servers become invisible for several hours?  Could one introduce bogus 
routing information into the net and drag some portion of resolvers to 
bogus root servers?

Vint Cerf
SVP Technology Strategy
MCI
22001 Loudoun County Parkway, F2-4115
Ashburn, VA 20147
703 886 1690 (v806 1690)
703 886 0047 fax
[EMAIL PROTECTED]
www.mci.com/cerfsup 




Re: national security

2003-11-29 Thread Paul Robinson
John C Klensin wrote:

With regard to ICANN and its processes, I don't much like the
way a good deal of that has turned out, even while I believe
that things are gradually getting better.  I lament the set of
decisions that led to the US Govt deciding that it needed to be
actively involved and to some of the risks, delays, and socially
undesirable statements that situation has created.  

OK, the big issue for those countries that want ICANN to be disbanded 
and for the Internet to be handed over to the ITU is quite simple: ICANN 
is a US-government controlled entity subject to US/Californian law. 
That's great if you're the US government and even semi-reasonable if 
you're an American. Absolutely awful if you're Chinese or Korean. The 
IETF is about as close as we've got as an authority on the Internet 
that is not bounded by geographic boundaries, governmental control or 
commercial contract. You can make a reasonable argument that we should 
be running the show here, not ICANN.

The UNITC meeting needed to happen several years ago, but now we're 
there, realistically there is only one option left for a single, 
cohesive Internet to remain whilst taking into account ALL the World's 
population: ICANN needs to become a UN body.

general.  So, while ICANN, IMO, continues to need careful
watching -- most importantly to be sure that it does not expand
into governance issues that are outside its rational scope-- I
 don't see "give it to XXX" or "everyone runs off in his own
 direction" as viable alternatives.
Neither do I, but ICANN have clearly demonstrated:

1. They don't listen to us, or those parties who have a genuine vested 
interest in the Internet, UNLESS that party is a US Commercial or 
Governmental entity.

2. Their incompetence at political levels has actually caused a delay in
making the Internet available to those countries that need access to 
affordable communications infrastructures the most.

3. Putting Computer Scientists in charge of anything is fundamentally a 
bad idea. In fact, they have shown they are worse at being in charge 
than politicians and lawyers... they will never get another chance after 
this god-awful mess.

In ICANN's support, the alternative - the ITU idea - is *horrible*. 
The ITU is not about open communications infrastructures - it's about
*closed* infrastructures with contracts and licensing and costs and the
other paraphernalia we want to limit the effect of in the context of the
Internet.

On the other hand, one of the nice things about the network as
it is now constituted is that anyone has the option of
opting-out: disconnecting, setting up a private DNS and a
private addressing system, and communicating, if at all, through
a restrictive, address-and-protocol-translating gateway.  We
No, no, no, NO. To allow this to happen would be a genuine shame.
How popular is Internet2? Why? I rest my case...

--
Paul Robinson




Re: national security

2003-11-29 Thread vinton g. cerf
At 05:49 PM 11/29/2003 +, Paul Robinson wrote:
John C Klensin wrote:

With regard to ICANN and its processes, I don't much like the
way a good deal of that has turned out, even while I believe
that things are gradually getting better.  I lament the set of
decisions that led to the US Govt deciding that it needed to be
actively involved and to some of the risks, delays, and socially
undesirable statements that situation has created.  

OK, the big issue for those countries that want ICANN to be disbanded and for the 
Internet to be handed over to the ITU is quite simple: ICANN is a US-government 
controlled entity subject to US/Californian law. 

Please read the most recent MOU. The US Department of Commerce has gone to 
considerable effort to outline the path by which ICANN becomes the party responsible 
for the updating of the DNS root. The control you assert is quite limited even today.

Any formal body has to have some jurisdiction in which it is constituted. One can 
argue whether California non-profit law is better or worse than being a UN entity. I 
believe there are arguments against the latter as much as there may be arguments against
the former. 


That's great if you're the US government and even semi-reasonable if you're an 
American. Absolutely awful if you're Chinese or Korean. 

that's not at all clear. ICANN has tried to promote the adoption of IDN, for example, 
in a responsible way. John Klensin's efforts, and others', to promote international
compatibility to enhance the ability for parties to communicate are commendable. What
do you think is awful?

The IETF is about as close as we've got as an authority on the Internet that is not 
bounded by geographic boundaries, governmental control or commercial contract. You 
can make a reasonable argument that we should be running the show here, not ICANN.

Not unless you want to take on the full burden of Internet Governance writ large.
Not even ICANN wishes to do that. In fact, ICANN's role is very limited compared to
the full scope of Internet Governance. Issues such as fraud, taxation, intellectual
property protection, dispute resolution, and illegal actions are governmental matters, and
not even the UN has the appropriate jurisdiction. It will take cooperation among
governments and thoughtful domestic legislation to deal with many of these matters. 
ICANN has high regard for IETF and IAB and for that reason there is an IAB liaison 
appointed to the Board of Directors. 


The UNITC meeting needed to happen several years ago, but now we're there, 
realistically there is only one option left for a single, cohesive Internet to remain 
whilst taking into account ALL the World's population: ICANN needs to become a UN 
body.

nonsense - as constituted today, ICANN is a better forum for interested constituencies 
to debate policy FOR THOSE AREAS THAT ARE IN ICANN'S PURVIEW (not shouting, just 
emphasis on limited purview of ICANN). 

The problem with the arguments I have heard, including yours, is that you may be 
thinking of Internet Governance in the large while ICANN's role is small and should 
stay that way. We need other venues in which to deal with the larger problems and 
perhaps UN or some of its constituents have a role to play. Probably WIPO and WTO do 
as well. 


general.  So, while ICANN, IMO, continues to need careful
watching -- most importantly to be sure that it does not expand
into governance issues that are outside its rational scope -- I
don't see "give it to XXX" or "everyone runs off in his own
direction" as viable alternatives.

Neither do I, but ICANN have clearly demonstrated:

1. They don't listen to us, or those parties who have a genuine vested interest in 
the Internet, UNLESS that party is a US Commercial or Governmental entity.

I disagree - please consider the last ICANN meeting in which the Board went some 
distance to making changes in its policies in response to international constituency 
inputs.


2. Their incompetence at political levels has actually caused a delay in making the 
Internet available to those countries that need access to affordable communications 
infrastructures the most.

Sorry, it is a lot more complex than you seem to think - the question of who should 
have responsibility for a ccTLD is often very difficult - it is sometimes not even clear 
who the government of country X is.


3. Putting Computer Scientists in charge of anything is fundamentally a bad idea. In 
fact, they have shown they are worse at being in charge than politicians and 
lawyers... they will never get another chance after this god-awful mess.

The Board is not made up of computer scientists alone; nor is the staff of ICANN. By 
your assertion, IETF should not be in charge of anything either. I disagree with that, 
too. 

In ICANN's support, the alternative - the ITU idea - is *horrible*. The ITU is not 
about open communications infrastructures - it's about *closed* infrastructures with 
contracts and licensing and costs and the 

Re: national security

2003-11-29 Thread Karl Auerbach
On Sat, 29 Nov 2003, Paul Robinson wrote:

 ... realistically there is only one option left for a single, 
 cohesive Internet to remain whilst taking into account ALL the World's 
 population: ICANN needs to become a UN body.

If you look at what ICANN really and truly does you will see that it has
little, if any, real role relating to internet technology.  Rather it is
an organization that, for the most part, imposes the business goals of a
selected and limited set of privileged stakeholders onto the operation
of businesses that sell domain names.

Moving ICANN from the blind-oversight of the US Department of Commerce to
the UN or ITU will only widen the stage for those privileged
stakeholders.  A move to the UN or ITU, by itself, will not improve the
security of the net or of any nation.

Without major structural reforms (such as I suggest at
http://www.cavebear.com/rw/apfi.htm ) ICANN will remain a non-technical
body that regulates and governs internet business practices.

As for this thread - national security - One has to remember that ICANN's
reaction to 9/11 was to create a committee.  That committee is filled with
intelligent and skilled worthies, many of whom have deep IETF roots.  
However that committee, with respect to the matter of security, was
essentially stillborn and silent.  It has only come to life recently as a
vehicle to rebut Verisign's Sitefinder.  As an institutional matter,
ICANN has demonstrated that it really is not suited to deal with the
technical issues of security, much less the intricate balancing of public
policy in which security choices must necessarily be made.

Moving ICANN to the UN will not, without major structural changes in 
ICANN, improve this.

Some of those changes have occurred already:

ICANN has abandoned the actual operation of the dns root servers to those
who are actually doing that job.  This is a very good thing because the
latter group are not merely extremely competent, but they are also clearly
focused on the job of running root servers and have shown that they do not
care to use their role to enforce someone's idea of intellectual property
protection.

And ICANN has abandoned the allocation of IP addresses to the regional IP
address registries.  Again this is a good thing because there are few
within ICANN who remember that this was one of ICANN's three original
purposes, much less understand the technical and economic impact of
address allocation policies.  The RIRs, on the other hand, *do* understand
this.

Personally I do not care whether ICANN is under the US Department of 
Commerce or becomes a branch of the ITU.  Both are imperfect.  As a US 
Citizen I can (and have) gone to the DoC and argued my side.  I'd probably 
have a smaller voice were things to move to the ITU.  On the other hand, 
most of the people in the world are not US citizens and thus could find 
the ITU more open to them.

For me the core issue is not under what banner ICANN exists.  For me the
issue is restructuring ICANN-like vehicles of internet governance into
things that really have a synoptic view, that are not captured by a few
selected commercial stakeholders, and that need not be brought before a
judge (as I had to do with ICANN) in order to compel them to be open,
transparent, and accountable.


 Neither do I, but ICANN have clearly demonstrated:

 3. Putting Computer Scientists in charge of anything is fundamentally a 
 bad idea

Let's dispel a big chunk of that myth - ICANN has never been controlled
by computer scientists.  The board has always had a few people with rich
knowledge of the internet, but they were always a very tiny minority.  

Let us not forget that one of ICANN's first acts was to dismantle the job
of Chief Technology Officer.

The myth that ICANN is run by network experts has caused great damage.  
First of all, there is no reason to believe that those versed in computer
science are more capable of making public policy decisions than others.  
That myth of the Golden Age of Technical Kings died at the end of the
1930's.  [Take a look at the H.G. Wells movie "Things To Come" to see
that myth in full flower.]

Second, the myth has created a screen of deference that hides the acts of
those privileged stakeholders who have proven to be very skilled at
using ICANN to promote certain intellectual property agendas to the
exclusion of nearly everything else.

 In fact, they have shown they are worse at being in charge than
 politicians and lawyers...

Most of the people involved in all of this affair are good, smart, and
well intended.  There are few Iagos.  ICANN is a glimpse of the future
that occurs when groups with different values and different uses of a
common language don't spend the time to really work down to fundamental
issues and goals.  I blame much of this on e-mail.  E-mail impedes the
development of those personal contacts that are necessary to build the
trust needed to bridge the differences of opinion and find the common
grounds.

The 

Re: national security

2003-11-29 Thread Karl Auerbach
On Sat, 29 Nov 2003, vinton g. cerf wrote:

 I strongly object to your characterization of ICANN as abandoning
 the operation of roots and IP address allocation. These matters have
 been the subject of discussion for some time.

I can't seem to recall during my 2 1/2 years on ICANN's board that there
ever was any non-trivial discussion, even in the secrecy of the Board's
private e-mail list or phone calls, on the matters of IP address
allocation or operation of the DNS root servers.  Because I was the person
who repeatedly tried to raise these issues, only to be repeatedly met with
silence, I am keenly aware of the absence of any substantive effort, much
less results, by ICANN in these areas.

So, based on my source of information, which is a primary source - my own
experience as a Director of ICANN, I must disagree that ICANN has actually
faced either the issue of DNS root server operations or of IP address
allocation.  And ICANN's "enhanced architecture for root server security"
was so devoid of content as to be embarrassing - See my note at
http://www.cavebear.com/cbblog-archives/07.html

The DNS root server operators have not shown any willingness to let ICANN
impose requirements on the way they run their computers.  Indeed, the
deployment of anycast-based root servers without even telling ICANN in
advance, much less asking for permission, is indicative of the distance
between the operations of the root servers and ICANN.

[I believe that the anycast change was a good one.  However, there is no 
way to deny that that change was made independently of ICANN.]

Sure, ICANN prepares, or rather, Verisign prepares and ICANN someday hopes
to prepare, the root zone file that the DNS root servers download.  But to
say that preparation of a small, relatively static, text file is the same
as overseeing the root servers is inaccurate.

In addition, the root server operators have shown that they are very able 
to coordinate among themselves without ICANN's assistance.

 ICANN absolutely recognizes the critical role of the RIRs

Again, recognizing the RIRs is an admission that ICANN has abandoned its
role as the forum in which public needs for IP addresses and technical
demands for space and controlled growth of routing information are
discussed and balanced.  Fortunately the RIRs have matured and are
themselves the IP address policy forums that ICANN was supposed to have
been.  Moreover, the RIRs have shown that they are more than capable of 
doing a quite good job of coordinating among themselves.


 There is still need for coordination of policy among these groups
 and the other interested constituents and that is the role that
 ICANN will play. 

Again, ICANN can not demonstrate that it has engaged, because it has not
engaged, in the coordination of IP address policy.  Sure, ICANN has
facilitated the creation of a couple of new RIRs.  But again, there is
vast distance between that and ICANN being the vehicle for policy
formulation or oversight to ensure that those policies are in the interest
of the public and technically rational.


I have serious doubts that ICANN will be able to meet its obligations
under the most recent terms of the oft-amended Memorandum of Understanding
between ICANN and the Department of Commerce.  I see no sign that the DNS
root server operators or the RIRs are going to allow themselves to become
dependencies of ICANN and to allow their decisions to be superseded by
decisions of ICANN's Board of Directors.

--karl--







Re: national security

2003-11-29 Thread vinton g. cerf
At 03:39 PM 11/29/2003 -0800, Karl Auerbach wrote:
On Sat, 29 Nov 2003, vinton g. cerf wrote:

 I strongly object to your characterization of ICANN as abandoning
 the operation of roots and IP address allocation. These matters have
 been the subject of discussion for some time.

I can't seem to recall during my 2 1/2 years on ICANN's board that there
ever was any non-trivial discussion, even in the secrecy of the Board's
private e-mail list or phone calls, on the matters of IP address
allocation or operation of the DNS root servers.  Because I was the person
who repeatedly tried to raise these issues, only to be repeatedly met with
silence, I am keenly aware of the absence of any substantive effort, much
less results, by ICANN in these areas.

The fact that there were few board discussions does not mean that staff
was not involved in these matters. Discussions with RIRs have been lengthy
and have involved a number of board members. 

So, based on my source of information, which is a primary source - my own
experience as a Director of ICANN, I must disagree that ICANN has actually
faced either the issue of DNS root server operations or of IP address
allocation.  And ICANN's "enhanced architecture for root server security"
was so devoid of content as to be embarrassing - See my note at
http://www.cavebear.com/cbblog-archives/07.html

The DNS root server operators have not shown any willingness to let ICANN
impose requirements on the way they run their computers.  Indeed, the
deployment of anycast-based root servers without even telling ICANN in
advance, much less asking for permission, is indicative of the distance
between the operations of the root servers and ICANN.

Sorry, anycast has been out there for quite a while; I am surprised you
didn't know that. We had discussions about anycast with the SECSAC and
the RSSAC and confirmed that there were few risks. The GAC requested and
received a briefing on this as well.


[I believe that the anycast change was a good one.  However, there is no 
way to deny that that change was made independently of ICANN.]

Anycast may even have preceded the creation of ICANN - perhaps an IETF
source or one of the root server operators can say when the first ANYCAST
deployments were done.


Sure, ICANN prepares, or rather, Verisign prepares and ICANN someday hopes
to prepare, the root zone file that the DNS root servers download.  But to
say that preparation of a small, relatively static, text file is the same
as overseeing the root servers is inaccurate.

In addition, the root server operators have shown that they are very able 
to coordinate among themselves without ICANN's assistance.

 ICANN absolutely recognizes the critical role of the RIRs

Again, recognizing the RIRs is an admission that ICANN has abandoned its
role as the forum in which public needs for IP addresses and technical
demands for space and controlled growth of routing information are
discussed and balanced.  Fortunately the RIRs have matured and are
themselves the IP address policy forums that ICANN was supposed to have
been.  Moreover, the RIRs have shown that they are more than capable of 
doing a quite good job of coordinating among themselves.

The RIRs have agreed to use the ASO as the mechanism for conducting
global policy discussions -  you seem to think that unless ICANN is
dictating everything it is doing nothing. Sorry, I don't buy it.



 There is still need for coordination of policy among these groups
 and the other interested constituents and that is the role that
 ICANN will play. 

Again, ICANN can not demonstrate that it has engaged, because it has not
engaged, in the coordination of IP address policy.  Sure, ICANN has
facilitated the creation of a couple of new RIRs.  But again, there is
vast distance between that and ICANN being the vehicle for policy
formulation or oversight to ensure that those policies are in the interest
of the public and technically rational.


I have serious doubts that ICANN will be able to meet its obligations
under the most recent terms of the oft-amended Memorandum of Understanding
between ICANN and the Department of Commerce.  I see no sign that the DNS
root server operators or the RIRs are going to allow themselves to become
dependencies of ICANN and to allow their decisions to be superseded by
decisions of ICANN's Board of Directors.

they don't need to become dependencies for this process to work - you are
setting up a strawman that I don't buy into, karl. What we are looking for
is coordination of policy development in such a way that affected parties
have an opportunity to raise issues. That's what the reform of the ICANN
process was all about. 

I am not interested in having the decision of the Board of Directors supersede
RIR or Root Server recommendations. I am interested in assuring that any 
policies developed have input from affected constituencies and that these
are factored into the policies developed. 

vint cerf



--karl--


Re: national security

2003-11-29 Thread jfcm
Dear Vint,
thank you for commenting on the Internet national survival kit issue this 
way (we are one week before the last Geneva prepcom, where ICANN is 
disputed in a way the survival kit may affect).
Our common goal is to help consensus, not to increase tensions.

At 19:54 29/11/03, vinton g. cerf wrote:
OK, the big issue for those countries that want ICANN to be disbanded 
and for the Internet to be handed over to the ITU is quite simple: ICANN 
is a US-government controlled entity subject to US/Californian law.
Please read the most recent MOU. The US Department of Commerce has gone to 
considerable effort to outline the path by which ICANN becomes the party 
responsible for the updating of the DNS root. The control you assert is 
quite limited even today.


The objection is not that you are the root registry, but that the USA and you are 
the registrant. RFC 1591 says IANA is not in the business of defining a 
country. Why interfere with countries? What is the intrinsic difference 
between root and TLD updates? The post-KPQuest updates are a good example 
of what governments do not want anymore.

Any formal body has to have some jurisdiction in which it is constituted. 
One can argue whether California non-profit law is better or worse than 
being a UN entity. I believe there are arguments against the latter as 
much as there may be arguments against the former.
The complexity is that ICANN wants to be two conflicting things (American 
and International) and to organize something multinational.

that's not at all clear. ICANN has tried to promote the adoption of IDN, 
for example, in a responsible way. John Klensin's efforts, and others', to 
promote international compatibility and enhance the ability of parties to 
communicate are commendable. What do you think is awful?
The IDN solution! :-)
It was doomed when ICANN refused to let it be multilingual. Let us not dispute 
that. Vernacularization may come from a true internationalization (0 to Z) 
on the LHS. Maybe Keith Moore will find a reasonable way.

The IETF is about as close as we've got to an authority on the 
Internet that is not bounded by geographic boundaries, governmental 
control or commercial contract. You can make a reasonable argument that 
we should be running the show here, not ICANN.

Not unless you want to take on the full burden of Internet Governance 
writ large. Not even ICANN wishes to do that. In fact, ICANN's role is 
very limited compared to the full scope of Internet Governance.
There are four language problems in here that I doubt we can reduce. ICANN 
understands its governance role as global coordination of the network. Our 
respective cultures have opposite understandings of governance, global, 
coordination (we will accept concertation), and network. I suppose other 
cultures and languages have still others. You probably stay in the middle. 
Hence your need to explain again and again that we are not what you believe 
we are. Should this not be plainly obvious by now?

Consider the French (original) meaning of gouvernance. For networks it 
would be net keeping. Many of ICANN's relational problems would disappear.

 Issues such as fraud, taxation, intellectual property protection, 
dispute resolution, and illegal actions are governmental matters, and not even 
the UN has the appropriate jurisdiction. It will take cooperation among 
governments and thoughtful domestic legislation to deal with many of 
these matters. ICANN has high regard for IETF and IAB and for that reason 
there is an IAB liaison appointed to the Board of Directors.

The UNITC meeting needed to happen several years ago, but now we're 
there, realistically there is only one option left for a single, cohesive 
Internet to remain whilst taking into account ALL the World's population: 
ICANN needs to become a UN body.

nonsense - as constituted today, ICANN is a better forum for interested 
constituencies to debate policy FOR THOSE AREAS THAT ARE IN ICANN'S 
PURVIEW (not shouting, just emphasis on limited purview of ICANN).
We will all accept the word forum. The role I assign to ICANN is to host 
forums and to cross-pollinate among them.

Why then force participants (the ccTLDs) to abide by your by-laws in order 
to come? Paul Twomey's Nov. 19th paper is a point of contention. It is seen 
as a bold ICANN move before the 5/6th meeting. And so is your own response 
about ccTLDs.

What would be the difference if the ccNSO resulted from an MoU? It would 
permit helping/joining with the ccTLDs, and the RIRs, over a far more 
interesting ITU-I preparation. I suppose the RIRs would not be afraid of an 
ITU-I that would not be here 2 years from now.

The problem with the arguments I have heard, including yours, is that you 
may be thinking of Internet Governance in the large while ICANN's role is 
small and should stay that way. We need other venues in which to deal with 
the larger problems and perhaps UN or some of its constituents have a role 
to play. Probably WIPO and WTO do as well.
Agreement. Then why build a big machine to be the IANA + a forums 

Re: national security

2003-11-28 Thread Iljitsch van Beijnum
On 27-nov-03, at 23:20, jfcm wrote:

Some others have technical implications. I would like to quote some 
suggestions listed in the preparatory document, to get advice I 
could quote at the meeting or in its report. Also to list the 
alternative and additional suggestions some might make.
Ok, I'm not going to quote all the details...

This looks like a big old bag of DNS tricks. Obviously the virtue of 
each can be discussed individually, but wouldn't it make more sense to 
start thinking about a more structured approach to arrive at the 
intended benefits?

For instance, one issue seems to be the ability to continue to reach 
systems using domain names when there is a problem with the DNS 
infrastructure. This could be addressed by modifying the caching 
mechanisms in DNS resolvers. The way things are done now is to throw away 
the old shoes (cached information) and then see if new ones can be 
found. Rather bizarre, if you think about it.
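
As a rough illustration, here is a toy sketch in Python (the cache layout
and the query_upstream hook are illustrative assumptions of mine, not real
resolver internals) of a cache that keeps the old shoes until new ones
actually turn up:

  import time

  class StaleTolerantCache:
      def __init__(self):
          self._entries = {}  # name -> (records, expiry timestamp)

      def put(self, name, records, ttl):
          self._entries[name] = (records, time.time() + ttl)

      def resolve(self, name, query_upstream):
          # query_upstream(name) -> (records, ttl), or raises OSError
          entry = self._entries.get(name)
          if entry and time.time() < entry[1]:
              return entry[0]            # still fresh
          try:
              records, ttl = query_upstream(name)
          except OSError:
              if entry:                  # infrastructure broken: serve stale
                  return entry[0]
              raise
          self.put(name, records, ttl)   # discard the old data only now
          return records

The point of the sketch: expired data is evicted only after a replacement
has actually been obtained, so a root or TLD outage degrades freshness
rather than reachability.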

2. a menu server system permitting access to sites through their IP 
address only. This would be a good promotion for IPv6 due to the 
ease of supporting IP virtual host addresses. As a security-oriented 
alternative to the NSI network destabilization.
This could lead to pressure to make IP addresses more portable, which 
isn't a good thing.

6. an evolution towards an international root matrix supporting 
proximity root servers and proximity TLDs for abbreviated addressing 
through local TLDs. The organization and the procedure of the common 
authoritative root matrix should be internationally approved and 
subject to the ICP-3 proposed testing rules. A quoted example 
documents the target as hart.sos of pacemakers always resolving to 
the nearest hospital (as decided by local authorities).
This isn't something you can do with simple (or even not so simple) 
n-faced DNS. For instance, here in the Netherlands many service 
providers backhaul all their traffic to Amsterdam over the phone 
infrastructure (dial-up) or ATM (DSL). Those customers then share IP 
address space and DNS resolvers. Getting the location info back in 
there would be almost impossible.

However, it could be useful to create mechanisms that make it easier 
for hosts to discover location information. For instance, through a 
DHCP option. This information can then be used when searching 
directories or search engines. (This would have interesting commercial 
possibilities as well.)
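
A sketch of how such an option might look (Python; option code 224 sits in
what I believe is the DHCP site-specific range, and the lat/long payload
format here is entirely my own assumption for illustration):

  import struct

  OPTION_LOCATION = 224  # hypothetical site-specific option code

  def encode_location_option(latitude, longitude):
      # TLV layout: 1-byte code, 1-byte length, two network-order floats
      payload = struct.pack("!ff", latitude, longitude)
      return struct.pack("!BB", OPTION_LOCATION, len(payload)) + payload

  def decode_location_option(data):
      code, length = struct.unpack("!BB", data[:2])
      latitude, longitude = struct.unpack("!ff", data[2:2 + length])
      return code, latitude, longitude

  # A host whose traffic is backhauled to Amsterdam could still learn
  # where it physically is:
  print(decode_location_option(encode_location_option(52.37, 4.90)))

The host would then pass the decoded coordinates to directories or search
engines itself, sidestepping the shared-resolver problem described above.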

- a national host numbering scheme, with immediate identification 
of any host on any network, whatever location change or connection is 
organized.
This second one would also protect IPv6 technology and equipment from a K2 
like syndrome when a new plan could be discussed, as it would have 
permitted validating the possibility of multiple plans.
In the multi6 (multihoming in IPv6) working group, as one of many 
proposals, we've been looking at putting a 64 bit host identifier in 
the bottom 64 bits of an IPv6 address. If such a host identifier is 
crypto-based (ie, a hash of a public key) then it is possible to 
authenticate a host at any time regardless of where the host connects 
to the network at that particular time and without the need for a PKI 
or prior communication.
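
A minimal sketch of the idea (Python; SHA-1 and the exact bit layout are my
own simplifying assumptions -- the actual multi6/CGA-style proposals add
modifiers and treat the u/g bits specially):

  import hashlib
  import ipaddress

  def crypto_interface_id(public_key):
      # Hash the public key and keep the first 64 bits.
      return hashlib.sha1(public_key).digest()[:8]

  def crypto_address(prefix, public_key):
      # Routing prefix in the top 64 bits, key hash in the bottom 64.
      iid = int.from_bytes(crypto_interface_id(public_key), "big")
      net = ipaddress.IPv6Network(prefix)
      return ipaddress.IPv6Address(int(net.network_address) | iid)

  # The holder of the matching private key can prove ownership of the
  # address by signing a challenge -- no PKI or prior contact required.
  print(crypto_address("2001:db8:aaaa:bbbb::/64", b"...public key..."))

Note that only the low 64 bits carry the identifier, so the host still
consumes one address per prefix, exactly as with EUI-64 assignment.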

(I have no idea what K2 syndrome means by the way, and it looks like 
Google doesn't either.)




Re: national security

2003-11-28 Thread Jari Arkko
Anthony,

In the multi6 (multihoming in IPv6) working group, as one of many
proposals, we've been looking at putting a 64 bit host identifier in 
the bottom 64 bits of an IPv6 address. If such a host identifier is 
crypto-based (ie, a hash of a public key) then it is possible to 
authenticate a host at any time regardless of where the host connects 
to the network at that particular time and without the need for a PKI 
or prior communication.
This is precisely the kind of mistake that will exhaust the entire IPv6
address space just as quickly as the IPv4 address space.  Don't
engineers ever learn from the past?
I can't claim to know too much about the specific details in the
multi6 proposal, but there have been other efforts that use cryptographic
identifiers as parts of addresses. However, I do not believe these
proposals consume any more address space than, say, manual or EUI-64
based address assignment. There's still just one address consumed per
node. Perhaps you were thinking that the address contains a MAC field?
This isn't strictly speaking the case, at least not in the way that
the MAC value would change from one packet to another.
Anyway, back to the subject of national security... I have a
question. The main goal appears to be the reduction of dependencies
between network parts, in order to prepare for catastrophic situations.
This is a useful goal, though I'm not sure I agree with all the listed
specific items. Are any of the issues that have been talked about being
addressed in the IEPREP WG, or is that group mainly focused on the SIP/
telecom type of issues only?
--Jari




Re: national security

2003-11-28 Thread Jaap Akkerhuis

While parallel issues start being discussed and better understood at WSIS, 
we have next week a meeting on Internet national security, sovereignty and 
innovation capacity.

Who is "we" in the above paragraph?

jaap



Re: national security

2003-11-28 Thread John Kristoff
On Fri, 28 Nov 2003 14:47:41 +0100
Anthony G. Atkielski [EMAIL PROTECTED] wrote:

 (or perhaps not diminished at all).  However, in reality, dividing the
 field in this way may reduce the address space by a factor of as much
 as nineteen orders of magnitude.  Again and again, engineers make this
 mistake, and render large parts of an address space unusable through
 careless, bit-wise allocation of addresses in advance.

The 48-bit addresses in IEEE/L2 protocols are divided in half, and also 
have a couple of bits set aside to denote local/global scope and
unicast/multicast addresses.  It seems to have worked out pretty well.
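
A small sketch of that carve-out (Python; the example MAC strings are
arbitrary illustrations):

  def mac_flag_bits(mac):
      # The low two bits of the first octet are reserved:
      #   bit 0 (I/G): 0 = unicast, 1 = multicast/group
      #   bit 1 (U/L): 0 = globally unique OUI, 1 = locally administered
      first_octet = int(mac.split(":")[0], 16)
      return {"multicast": bool(first_octet & 0x01),
              "local": bool(first_octet & 0x02)}

  print(mac_flag_bits("01:00:5e:00:00:fb"))  # multicast, global OUI
  print(mac_flag_bits("02:42:ac:11:00:02"))  # unicast, locally administered

Two flag bits plus the OUI/NIC split cost only a few bits of the 48-bit
space, and that space has never come close to exhaustion.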

John



Re: national security

2003-11-28 Thread jfcm
At 15:20 28/11/03, Jaap Akkerhuis wrote:
   While parallel issues start being discussed and better understood at WSIS,
we have next week a meeting on Internet national security, sovereignty and
innovation capacity.

Who is "we" in the above paragraph?
Hi! Jaap,
"we" is a public open follow-up of the dot-root study. If you speak French 
and want to participate you are welcome. It will be held in Paris on Dec. 
3rd at 14:30. I can mail you the preparatory document if you want. It is 
reserved for decision makers in potential subsequent actions.
jfc




RE: national security

2003-11-27 Thread jfcm
Sorry for the typo - I miss my glasses :-) - http://rs.internic.net (not 
ns.internic.net as I typed).
jfc