Re: [p2pi] WG Review: Application-Layer Traffic Optimization (alto)

2008-10-14 Thread Karl Auerbach

Lars Eggert wrote:

FYI, there's at least one more proposal in this space: the Ono stuff 
from Northwestern 
(http://www.aqualab.cs.northwestern.edu/projects/Ono.html). There was a 
paper at SIGCOMM this year, and their system has the interesting feature 
that it simply freeloads off Akamai's DNS entries in order to determine 
who's close to whom. No ALTO boxes needed.


Since you mentioned DNS as a proximity tool, I thought I'd go slightly 
awry and point out a bit of work I did a while back that, while not 
itself at the application level, did try to address some application-layer 
optimization concerns we had when I was working on binding video 
clients to video services.


The main idea was that neither hop count, AS-path length, ICMP-Echo 
time, DNS answer time, nor TCP-connect time is a very good indicator of 
internet proximity for the purposes of applications that are going to 
make different kinds of demands and need different levels of service. 
And because such proximity questions are likely to be asked frequently, 
the cost of asking the question, and the delay incurred in asking that 
question, ought to be low.


So what I did was to try to blend the notions of potential bandwidth and 
packet size dynamics (from the old integrated services work) with some 
ideas from the old multicast mtrace protocol.  What I came up with, and 
it was far, far from complete, was something that needed to live inside 
the router infrastructure, although not on any fast-path part of any 
router.  I called the thing the Fast Path Characterization Protocol.


(The name may be misleading: the protocol was intended to find a path 
quickly, *not* to sit in any router's fast-path switching logic.)


So, here it is, 8+ years old: 
http://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html


--karl--


Re: Update of RFC 2606 based on the recent ICANN changes ?

2008-07-07 Thread Karl Auerbach


I guess you've heard the old joke which asks "How could God create the 
world in only seven days?" - "Because He had no installed base."


If we move this thread up one level of abstraction, much of the 
conversation is asking how strongly we respect the installed base of 
software out there on the net.


Do we have any principles we can use to guide our choice of where we put 
the needle along the continuum from "no change, no way" to "any and 
every change is allowed"?


--karl--


Re: Services and top-level DNS names

2008-07-04 Thread Karl Auerbach

John C Klensin wrote:


I'm going to try to respond to both your note and Mark's, using
yours as a base because it better reflects my perspective.


I sense that many of your concerns are well grounded.  And I find it 
interesting that the concerns come not so much from DNS as a system and 
protocol in and of itself but rather from the practices that have 
developed in the ways that DNS is used.


I see that you consider this situation urgent.

But I think that even your sense of urgency may underestimate the 
exigency of the need to come up with an answer that can be labeled as 
a standard or BCP.


It is my feeling that pretty much every company in the Fortune 1,000,000 
is going to want to go for their name as a TLD.  And the marketing 
people at those companies will exert strong internal pressures to have 
every service - from web to email to SIP to whatever - using only the 
single TLD/company name.


As for contracts - ICANN should, if it is not doing so already, clearly 
articulate a requirement that TLD operators agree to follow written and 
broadly practiced internet technical standards that pertain to DNS.  
(I've gone further and suggested that this ought to be ICANN's sole 
criterion for accepting [which does not mean granting] an application 
for a new TLD, but that's another discussion for another day.)


But contracts only go so far.  First of all there is the issue of the 
ccTLDs - they tend to operate outside of the ICANN contractual hierarchy.


Then there is the issue of enforceability of contract provisions.  ICANN 
seems to have an institutional fear of something called "third party 
beneficiary" status.  This is a status that a contract can grant to 
certain non-parties so that those parties can step in and demand that 
certain contract terms be enforced against one or both of the parties 
even if the parties themselves are not holding one another to account.


In other words, unless the contracts give these rights, the IETF and 
others might have to stand on the sidelines, able to do no more than 
gnash their teeth in frustration.


ICANN has not demonstrated that it is quick to take up its sword to 
enforce its contractual rights when users are being harmed - for 
instance, it kinda took dynamite to get ICANN to notice, much less 
react to, the developing RegisterFly mess.


Also, you mentioned that the Brooklyn Bridge Park folks might want to sue 
ICANN rather than the people who register "brooklynbridgepark" as a TLD. 
My sense is that this might be a poor strategy because ICANN might be 
able to excuse itself as merely acting as a bookkeeper and recommend 
that you sue the people who registered the TLD once they show, through 
actual use, that you have suffered some concrete harm.  Also, ICANN 
operates with a somewhat uncertain shield that exists because ICANN is 
able to operate in that vague middle ground between being a private 
entity and an arm of the US government.  (It is this same uncertainty 
that may explain why ICANN has so far avoided squarely facing the 
restraint of trade question in the US or elsewhere.)


Also, you use the word "property".  That's a word so full of different 
meanings to different people in different places that it tends to cause 
more trouble than it is worth.  I might suggest that we look at this 
situation as one in which the various actors have various explicit 
rights (contractual or otherwise) and duties towards certain domain 
names.  That way we can deal with these things with clarity rather than 
getting buried under the emotional baggage that comes from the word 
"property".


With regard to that final point about requiring only delegation and glue 
records - what about things like LOC and some of the TXT and other 
records for things like DKIM, SPF, SIP, etc?  My point here is that this 
might not be as simple to define as we might initially think.


But the boiler in ICANN's locomotive is coming up to pressure and we can 
expect ICANN's new TLD train to start chugging fairly soon - so answers 
may be needed more at IETF 1988 speed rather than IETF 2008 speed.


--karl--


Re: Renumbering ... Should we consider an association that spans transports?

2007-09-13 Thread Karl Auerbach

David Conrad wrote:


How do you renumber the IP address stored in the struct sockaddr_in in a
long running critical application?

...
If you had a separation between locator and identifier, the application 
could bind to the identifier and renumbering events could occur on the 
locators without impacting the identifier.


For a long time I've suggested that we begin to look anew at the idea of 
an "association" as an abstraction over transport.  Yes, I know that 
this smacks of ISO/OSI, but there were a few granules of good ideas there.


The idea is this:  An "association" is an end-to-end relationship 
between a pair of applications that potentially spans several transport 
lifetimes.


Then, if the underlying transport goes away, perhaps due to movement in 
a mobile network or renumbering, the association is reconstructed 
on a new transport that is built in accord with the current addressing 
and routing conditions.


Reconstruction does not, as some have assumed, require that the network 
remember anything or hold any state.  Rather, taking a cue from ISO/OSI, 
the trick is that the association layer is merely a means for the 
applications to reliably exchange checkpoint names.  What those 
checkpoint names mean is up to the applications - thus what to do if a 
rebinding to a new transport requires going back to a checkpoint is 
something entirely within the application and its networking library 
code, not some state that is stored in the net.


Basically, whenever applications establish a transport they say "Ahem, 
where were we when we last spoke?"  One answer is "We did not last 
speak."  Another answer is "We last agreed on the checkpoint named 
'foo'."  How they recover from 'foo' is entirely application dependent.
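
A tiny sketch of what that exchange might look like, layered over TCP.  
This is Python, every name in it is invented for illustration, only the 
initiating side is shown, and the framing of ordinary application data 
versus checkpoint tokens is glossed over - it is a sketch of the idea, 
not a worked protocol:

    import socket, struct

    class Association:
        # Hypothetical association layer: a thin wrapper over a TCP socket
        # that lets two applications agree on the name of their last
        # checkpoint.  The layer itself holds no application data at all.
        def __init__(self, peer):
            self.peer = peer                 # (host, port) of the peer
            self.last_checkpoint = b""       # b"" means "we did not last speak"
            self.sock = None

        def _send_token(self, token):
            # length-prefixed token so the peer knows where it ends
            self.sock.sendall(struct.pack("!H", len(token)) + token)

        def _recv_exactly(self, n):
            buf = b""
            while len(buf) < n:
                chunk = self.sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("transport died")
                buf += chunk
            return buf

        def _recv_token(self):
            (n,) = struct.unpack("!H", self._recv_exactly(2))
            return self._recv_exactly(n)

        def rebind(self):
            # (Re)build the transport and ask "where were we when we last
            # spoke?"  Returns both views; reconciling them is the
            # application's problem, not the association layer's.
            self.sock = socket.create_connection(self.peer)
            self._send_token(self.last_checkpoint)
            return self.last_checkpoint, self._recv_token()

        def checkpoint(self, name):
            # Propose a new checkpoint name; the peer echoes it to agree.
            self._send_token(name)
            if self._recv_token() == name:
                self.last_checkpoint = name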


(I have not really considered the security implications - in the absence 
of some form of shared secret or other authentication on association 
re-establishment there would probably be a race condition in which an 
intruder could jump in.)


(I'm also thinking of TCP based applications, not UDP based ones.  For 
them I don't see renumbering as much of a problem, but I may not be 
seeing enough.)


This is not something that can readily be transparently back-ported into 
existing protocols; it's not something of trivial import.  But it can be 
deployed for new applications and not invalidate either existing 
applications or existing application protocols.


And consider, for example, how something like this might have obviated 
the need for the IP layer triangulation in mobile IP.


--karl--



Re: Renumbering ... Should we consider an association that spans transports?

2007-09-13 Thread Karl Auerbach

Tony Li wrote:

A key question here is whether the 'association' is a single connection 
or not.  While the association may span the change of underlying 
infrastructure, the real question is whether it presents a single 
concatenated transport abstraction or if it's multiple connections.  I 
think we need the former.


Wow, I wasn't expecting so much feedback in so few minutes.

My motivations were these:

Although, as a writer of applications, I'd like to have a reliable, 
sequenced data stream to my peer application(s), I figured that to do 
that inside TCP would be to reinvent TCP.  And since I don't believe 
many of us think TCP is broken, reinventing TCP would not be a good use 
of our efforts.


So I figured, OK, how can we keep our investment in TCP and existing 
applications and provide a tool that could be of use to people figuring 
out new applications?  Sort of the way that BEEP is.


And having long ago stuck more than an arm into ISO/OSI, and after 
seeing Marshall Rose's implementations of ISO session over TCP, I 
realized that what the OSI people were trying to do at the session 
layer was something simple wrapped up in an amazing amount of 
complexity.  All they were trying to do was give applications a way of 
putting stakes into the ground so that they could go back to an agreed 
upon status if something went wrong and give it another try.


To my mind it would be very wrong to require that the network in some 
way preserve application data for re-presentation; first off that makes 
the network too complicated and second, as several have pointed out, how 
each application recovers varies from application to application.


Keith is very right that we don't want the network (including most of 
the stack code in clients and servers) to do what an application should 
do for itself, particularly with regard to buffering of data.  That's 
why, to my mind, the association mechanism should be limited to merely 
letting the applications agree on a name or number, nothing more, and 
leaving it to the applications to figure out what to do if they need to 
go back to that name/number.  And Keith is right in that what I am 
suggesting would not be a mechanism that would be transparent to 
existing applications.


I've wrestled with the idea of pushing all of this down into the 
transport layer and, yes, it could be done.  But I have doubts that, 
given the size of the net, it would be broadly adopted or deployable. 
So I mentally punted and said "How about an optional layer above TCP 
that newly designed applications could use?"


As Iljitsch points out, the checkpoint mechanism could become a lot of 
overhead for not a lot of benefit - although I sense that the overhead 
of establishing a checkpoint would involve perhaps one or two packet 
exchanges/round-trip times and might be able to occur in parallel with 
ongoing data flow (remember, I'm suggesting only the establishment of a 
name, not any buffering of data except in the applications themselves.) 
And a well-designed application protocol should only use the 
checkpoint mechanism when it really needs to.  It would be silly, for 
example, to checkpoint a DNS-over-TCP connection, but it might make 
sense for some mobile database access application to do it after each 
database update.
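
To make that last example concrete, here is a hypothetical usage sketch 
built on the toy Association class in my earlier note; the host, port, 
and update list are made up, and a real client would have its own data 
framing and recovery logic:

    pending = [b"update-1", b"update-2", b"update-3"]    # stand-in updates

    assoc = Association(("db.example.net", 9999))        # invented peer
    mine, theirs = assoc.rebind()       # "where were we when we last spoke?"
    if mine != theirs:
        pass  # application-defined recovery: replay everything after 'theirs'
    for update in pending:
        assoc.sock.sendall(update)      # the ordinary data flow
        assoc.checkpoint(update)        # one small exchange per committed
                                        # update - not per packet

The checkpoint cadence is an application decision, so the overhead can be 
kept as small as the application can tolerate.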


But, as Iljitsch also suggests, we can get a lot of this if applications 
simply close the TCP connection and then re-open it.  But isn't that 
really somewhat similar to what I've suggested, in that it requires the 
applications to go back to some point in the past and resume?


I know that I'm walking close to the edge of a cauldron of worms, but 
I've seen these ideas of some sort of persistent relationship between 
application-layer entities pop up in so many contexts - mobile IP, 
VoIP, HTTP cookies, etc. - that it occurred to me that maybe this is 
something that needs some coherent, rather than ad hoc, consideration.


--karl--



Re: Something better than DNS?

2006-11-28 Thread Karl Auerbach

John Levine wrote:


As someone noted a few days ago, ICANN and the current roots have yet
to address the issues related to IDNs.  There's only one significant
technical issue, mapping non-unique Unicode strings into unique DNS
names


There is an ancillary issue that has not, to my knowledge, been 
adequately researched, and that is the expansion in the size of the 
response packets.


IDNs will tend to be longer than ASCII names.  This will by itself make 
response packets larger.  And, to the degree that root and secondary 
servers are named by IDNs, the various NS records will tend to grow as 
well.  And most users are probably not aware of DNS name compression, 
nor do they try to accommodate it in the way that is done with the 
?.root-servers.net convention, so we may get larger packets because name 
compression doesn't give us the same boost it used to.
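
A back-of-the-envelope sketch (Python) of what that convention buys, 
assuming the standard RFC 1035 wire format - one length byte per label, 
a terminating zero byte, and a two-byte compression pointer for a 
repeated suffix.  It counts only the name bytes in the NS targets, not 
the rest of the resource records:

    def wire_len(name):
        # Uncompressed RFC 1035 encoding: one length byte per label plus
        # the label bytes, plus a final zero byte for the root.
        return sum(1 + len(label) for label in name.split(".")) + 1

    names = ["%s.root-servers.net" % chr(c) for c in range(ord("a"), ord("m") + 1)]
    uncompressed = sum(wire_len(n) for n in names)
    # With compression, every name after the first can be written as its
    # one unique label plus a 2-byte pointer to the shared suffix.
    compressed = wire_len(names[0]) + sum(1 + 1 + 2 for _ in names[1:])
    print(uncompressed, compressed)    # 260 vs. 68 bytes for the 13 names

If servers were instead named with longer labels that don't share a 
common suffix, neither of those savings would be available.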


Brew in longer addresses from IPv6 and we end up with longer response 
packets.


How much longer probably isn't a big issue unless they are big enough to 
trigger a fallback onto TCP rather than UDP or if we get UDP packets 
that exceed path MTU and have to be fragmented.  (By-the-way, why is 
EDNS/RFC 2671 not advancing on the standards track?)


--karl--



Re: Scenario C prerequisites (Re: Upcoming: further thoughts on where from here)

2004-09-22 Thread Karl Auerbach
On Wed, 22 Sep 2004, Jeffrey Hutzelman wrote:
I think this and a number of other points made here gloss over a key point of 
which some of the participants may not be aware.

Under US law, there is a significant difference between "not-for-profit" and 
"charitable nonprofit"
It might be useful to add more precision.
In the US there are two levels from which laws affecting corporations 
arise, state and Federal.  Corporate structures are usually created under 
state law.

Many states of the US have laws that allow "non-profit" or even 
"charitable nonprofit" forms.  Here in California, for example, there are 
several forms of non-profit: public-benefit, charitable, mutual benefit, 
religious, medical, etc.

And here in the US we have a lot of states - 50 of 'em - each with its own 
different corporations laws.

At the federal level there is yet another mountain of law, but we often 
end up talking about tax exemptions under Title 26 Section 501 of the US 
Code.  That part of the tax code covers a lot of territory and is very 
complicated and full of subtle distinctions that trigger significant 
differences in treatment as well as imposing rather different kinds of 
limitations and obligations upon the entity that is seeking or has 
obtained one or another of these exemptions.

So, when talking about these things we can avoid a lot of confusion if we 
try to be precise about specific state-level conceptions of corporations 
and non-profitness and federal-level conceptions of federal tax exemption 
and the benefits, limitations, and obligations that come from each.

I might add that one of the questions that ought to be raised, and it is a 
question that I'm certainly neither qualified to answer nor will I even 
attempt to answer, is whether the IETF ought to seek Federal tax exempt 
status at all.  Sometimes it may be better to simply file the papers, pay 
the tax money, and be free of many of the restrictions.

--karl--



Re: Scenario C prerequisites (Re: Upcoming: further thoughts on where from here)

2004-09-22 Thread Karl Auerbach
On Wed, 22 Sep 2004, Gene Gaines wrote:
ISOC is non-profit, 501(c)(3) tax-exempt, incorporated in the
District of Columbia.
I suggest it would be a serious mistake for the IETF not to
obtain the same status.
There are many kinds of 501(c) exemptions.  They all come with different 
kinds of chains that impose limits on what the organization can do and 
impose affirmative duties.  Simply jumping into one category without 
understanding the nature and form of those chains could lead to a kind of 
organizational buyer's remorse.

Whether one considers the application process easy (or hard), fast (or 
slow), or the IRS to be capricious (or not), it isn't something to be 
undertaken lightly or without understanding the ramifications.  The IRS is 
one of the world's great bureaucracies; I know attorneys whose entire 
practice is focused on just small parts of the US tax code and small parts 
of that bureaucracy.

The choice of Federal exemption also may have an impact on the liability (or 
rather on the limitation of liability) of unpaid Directors and officers, 
both on the basis of State laws that recognize certain protections for 
certain 501 categories (and not for others) and also under Federal laws 
that may provide some protection for volunteer (unpaid) directors under 
some circumstances.

Many have, of course, navigated the maze and been happy with the results. 
And some entities, after having experienced life as a 501(c)(3), have 
found the limitations too binding and have changed their status.

The IETF ought to move forward with knowledge and understanding.  It ought 
not go forward blindly, saying "501(c)(3) or bust" without knowing 
fully what that means and implies.

The same goes for choosing the state of incorporation and the form under 
that state's laws.

(There is, of course, the option of creating several different legally 
cognizable entities, each shrink-wrapped with its own choice of 
jurisdiction and form.  But that could lead to a situation in which there 
is not one IETF but several that drift in divergent directions.)

I'm not arguing against the 501(c)(3) status - I have neither an opinion 
nor enough knowledge to make an informed choice.  I'm merely noting that 
the issue is complex and involves hard choices that ought to be made with 
knowledge of the tradeoffs.

--karl--




Re: Is the IANA a function performed by ICANN ?

2004-06-25 Thread Karl Auerbach
On Fri, 25 Jun 2004, JFC (Jefsey) Morfin wrote:
My understanding was that IANA is a neutral, independent, technical 
authority

the Internet Assigned Numbers Authority (IANA) is a function performed by 
ICANN.
There is a significant lack of clarity in these matters.
ICANN has a number of legal arrangements with the US Dept of Commerce and 
its subagencies.  The most relevant to IANA is a sole-source purchase 
order that comes from the National Oceanic and Atmospheric Administration 
(a sub-agency of the DoC) under which NOAA is purchasing an essentially 
undefined thing called "the IANA function" and ICANN is providing that thing 
for zero dollars.

Usually in a purchase order the flow of deliverables runs towards the 
purchaser.  In this case nothing particularly tangible flows towards 
NOAA, but there does seem to be a rather cloudy delegation of authority 
from NOAA to ICANN.  In addition there is the oft-asked and never answered 
question of whether NOAA (or NTIA, another sub-agency of the Dept of 
Commerce) has such authority to delegate in the first place.  (The GAO did 
examine that question and found the authority to be lacking.)

In any event, what cometh via purchase order can goeth via the lapse of 
that purchase order.  Unless there is some unwritten accord, ICANN has no 
guarantee that it will continue to be able to act as the uncompeted, 
sole-source provider of the IANA function.

The IANA function as practiced by ICANN appears to have several 
distinct parts:

1) The operation of the "L" root server.  I have never heard one bad 
word about ICANN's performance with respect to the "L" server.  As far 
as I can tell, the hands on those knobs are very competent.

2) The assignment and recordation of protocol numbers.  This is the 
classical Numbers Czar function.  Again, this seems to be competently 
handled, although I have heard complaints that processing of number 
assignments is taking too long.  And I have had my own concern that this 
function is really something that more properly belongs under the IETF's 
umbrella.

3) ccTLD assignment authority.  This is the controversial function.  This 
is a job that requires IANA to decide who gets to act as the internet 
presence of each sovereign nation (that is, each sovereign nation except 
the US - the US dealt with the .us TLD outside of ICANN and despite ICANN.)

Yes, there are competing theories of whether a ccTLD is a direct aspect of 
sovereignty or is merely a database key that happens to be isomorphic with, 
but not dependent upon, the existence of sovereign states.  My own sense is 
that the latter theory is not flying well among the collective governments 
of the world.

ICANN does not keep time cards, or at least it didn't when I last checked. 
And there is a great intermingling of tasks among the ICANN/IANA staff. 
These two aspects combine to make it very difficult to assign a 
quantitative cost or level-of-effort number to IANA as a whole, much 
less to the distinct IANA tasks.

However, by observation it appears that the bulk of the cost, and trouble, 
of IANA is in item #3.

I have had concern that ICANN, or rather IANA, is being used as a pawn by 
political factions in a number of countries.  ICANN's redelegations are 
often based on fairly thin bodies of evidence and are often supported by 
some rather questionable assertions (e.g. a letter relinquishing a ccTLD 
that was on an otherwise blank sheet of paper without letterhead and over 
the unverified signature of someone claiming to be the long-lost contact.)

During my term I never felt that ICANN or IANA had the staff expertise to 
be able to swim in those shark-infested waters.  However, my sense is that 
with the advent of the current president and other staff changes, ICANN, 
oops, IANA, is perhaps better equipped for this now than it was 18 months 
ago.  But the question is still present: Might not the question of who is 
the most rightful operator of a ccTLD be better handled by existing 
organizations that are much more sophisticated in matters pertaining to 
who is the rightful sovereign power of a nation?

--karl--





Re: [Ietf] New .mobi, .xxx, ... TLDs?

2004-04-29 Thread Karl Auerbach

 So, place your bets on which slippery slopes ICANN takes us down...

ICANN loves these sponsored TLDs.  It's the only kind they are presently
considering.  Sponsors generally have the cash needed to cover ICANN's
application fee (which is typically on the order of $35,000 to $50,000,
and is non-refundable even if the application is turned down or left in an
indefinite limbo of being neither accepted nor rejected) and any ongoing
tithes.  (Remember, ICANN's budget is growing towards US$10,000,000 per
year and that money has to come from somewhere.)  And the sponsors
generally have a well-behaved and limited membership and are thus less
likely to burden ICANN staff with work and less likely to get into
disputes (and lawsuits) with ICANN.

I personally find many of the proposed uses for these sponsored TLDs
rather silly and non-innovative - they could well be done further down
the DNS hierarchy, and their only reason to be at the top level is the
prestige (and image-marketing value) of having top-level-domain status.

There are a couple of things about this situation:

First, some of the TLD proposals before ICANN, such as .mobi,
might contain some seeds of technical harm to the net.  For example, if
the .mobi folks don't use their own intermediate caching resolvers but
rather allow all those mobile devices to go directly to the roots, then
there could be an increase in the traffic going to root and other servers.
This could be exacerbated if those devices are frequently power-cycled and
lose their DNS caches.  The .mobi folks haven't said that they are going
to do this, but then again they haven't said that they are aware of this
potential issue.

Second, the idea that a TLD categorizes all of the resource records of all
types found under that TLD seems to me to be wrong.  For instance, assume
there's a TLD called "blue".  If an A record found under a.b.c.blue leads
me to an IP address on an interface, it is unreasonable to believe that
the only services delivered via that address are of a "blue" nature.  I
suspect that many people who want these TLDs think of the net only in the
limited sense of the world wide web.

Third, as Tony Hain mentioned, there is trademark pressure that
will eventually suggest to every big trademark holder that they ought to
be a TLD.  This may or may not come to pass, but we can guess that at least
some of the big trademark folks will give it a try, particularly after
some of ICANN's sponsored TLDs ossify over time into de facto marks and
thus blaze a trail that trademark owners might want to follow.  And where
one trademark owner goes, the herd is sure to follow.

My own view is that we ought to be trying to shape the DNS tree so that it
is well shaped in terms of width and depth of the hierarchy.  I don't know
the metrics of that shape.  Have there been studies regarding how the
efficiency of DNS and DNS caching varies with DNS label depth and zone
width?

As for uses of TLDs - my own view is that as long as they are allocated
only rarely, the few awards should go to those uses that contain the
maximum innovation and maximum flexibility, and give the greatest value to
the users of the net.

But I don't see why allocation of new TLDs needs to be a rare event.

Personally, I want a lot of new TLDs, so that folks who have silly ideas
and silly business models can try 'em out and be given a chance to flop -
but this means that there needs to be a criterion to determine "flopness" and
to reap the failed TLDs so they don't become dead zones in the DNS
hierarchy.

And I think it ought to be a requirement that any idea proposed for a TLD
first be prototyped somewhere down the DNS hierarchy.

Anybody who wants a new TLD should have to pledge allegiance to the
end-to-end principle (i.e. no new Sitefinders) and promise to adhere to
applicable internet technical standards and practices.

I also would like to start to break the semantic implications of TLD names
- I'd prefer that any new ones have names that are meaningless in any
language, like "ts4-0k7m".  Yeah, this has been gone over many times - but
I still have this hope that with some nudging people will stop using DNS
as a directory.

--karl--




Re: national security

2003-12-01 Thread Karl Auerbach
On 1 Dec 2003, Paul Vixie wrote:

  ICANN's obligation is to guarantee to the public the stability of DNS at
  the root layer.
 
 i disagree...

From ICANN's own bylaws:

  The mission of The Internet Corporation for Assigned Names and Numbers 
  (ICANN) is to coordinate, at the overall level, the global Internet's 
  systems of unique identifiers, and in particular to *ensure* the stable 
  and secure operation of the Internet's unique identifier systems ...

[emphasis added]

According to m-w.com, "ensure" means "to make sure, certain, or safe : 
Guarantee."

In other words, ICANN's mission is a promise, a guarantee.

But that's not all:

ICANN's contract, or rather Memorandum of Understanding, with the United
States requires, yes requires, that ICANN, yes ICANN, not the RIRs, not
the root server operators, "design, develop, and test the mechanisms,
methods, and procedures ... to oversee the operation of the
authoritative root server system and the allocation of IP number 
blocks."

Those are ICANN's own promises that it has made, in legal document after
legal document, to the United States Government.  ICANN may say otherwise,
you may believe otherwise.  But those are the contractual words in black and
white.  It has been the same language since 1998.

In other words, ICANN has made a contractual commitment to tell you, as
an operator of a root server, what "mechanisms, methods, and procedures"  
you must follow to operate your servers.

And that word "oversight" in the MoU does not mean that ICANN promises to
merely watch how you and the other root server operators do what you do
very well.  The word "oversight" includes an ability to reject and to
command.  In other words, ICANN has promised the USG that its authority
over root operations supersedes your own.

We are all well aware that in actual fact ICANN has no legal
authority over the root server operators.  And we are all aware that the
root server operators have been wary of entering into agreements with
ICANN regarding the operation of the root servers.  That, however, has not
stopped ICANN from making a written promise to the United States government
that it will both oversee the root server operations and formalize its
relationship with the root server operators.

Perhaps ICANN is willing to admit that it has no real authority -
presumably by declaring to the US Department of Commerce that it considers
those sections that I mentioned to be obsolete and not obligatory upon
ICANN, and by removing the obligation to "ensure the stable and secure
operation" that is contained in its own bylaws - and clearly articulating
to everyone, governments and businesses included, that ICANN is nothing
more than an advisory body that operates only by emanating good vibes in
the hope that others, who do have real power to act, will act in
resonance.

In the meantime ICANN goes about telling governments of the world that it
does far more than emit nudges and hopes;  ICANN tells governments that it
ensures and guarantees.

And outside of the IETF and related communities ICANN does not say that it
is merely an advisory body lacking authority. ICANN's message to the
business and intellectual property communities is that ICANN stands strong
and firm and will let nothing interfere with the stable operation of the
internet.

Your note makes my point - that ICANN is in many regards an empty shell,
and has been one for years, that has no real power except in the realm of
the (over) protection of intellectual property, allocation of a very few
new top level domains, and the determination of who among competing 
contenders is worthy to operate contested ccTLDs.

At the end of the day - and it is nearly the end of the day here - the
fact of the matter is that ICANN is telling different stories to different
groups.  To the IETF, ICANN holds itself out as one of the guys, merely a
warm and fuzzy coordinator.  But to the business community, ICANN holds
itself forth as a guarantor of internet stability.  And to the United
States Government, ICANN has undertaken to make legal promises to the
effect that it is in charge of DNS, including root server operations, and
IP address allocation.

--karl--

PS, if I am late to the party on anycast issues then it ought to be easy
for ICANN to articulate the answers to my concerns.  This is not an idle
request.  The internet community deserves proof that these questions are
truly answered by hard, reviewable analysis.  Moreover, with Verisign and
Sitefinder lingering on the horizon it is not beyond conception that
Verisign will wave the flag of bias and ask ICANN to demonstrate why
anycast got such an easy entree.






Re: national security

2003-11-30 Thread Karl Auerbach
On Sat, 29 Nov 2003, vinton g. cerf wrote:

 I can't seem to recall during my 2 1/2 years on ICANN's board that there
 ever was any non-trivial discussion, even in the secrecy of the Board's
 private e-mail list or phone calls, on the matters of IP address
 allocation or operation of the DNS root servers.  Because I was the person
 who repeatedly tried to raise these issues, only to be repeatedly met with
 silence, I am keenly aware of the absence of any substantive effort, much
 less results, by ICANN in these areas.
 
 The fact that there were few board discussions does not mean that staff
 was not involved in these matters. Discussions with RIRs have been lengthy
 and have involved a number of board members. 

Discussions with staff hardly constitute responsible oversight by ICANN
as a body responsible to the internet public.  All you have said is that
ICANN has not merely abandoned its oversight of DNS and IP addresses to
the root server operators and the RIRs but also that the only elements
within ICANN that even bother to observe are the occasional board member
and perhaps some unnamed staff members.

I raised the anycast issue several times to the board.  Staff received 
those e-mails.  I do not accept as valid an after-the-fact explanation that 
says "Even though nobody bothered to answer Karl's inquiries, ICANN's 
staff was really making informed decisions, in secret, about anycast."

ICANN's job is not to make decisions in secret, by unknown members of
staff, based on unknown criteria and using unknown assumptions.  To do 
so, which is what you are saying has been done, is simply yet another 
abandonment of ICANN's obligations.

The switch to anycast for root servers is a good thing.  But it was hardly
without risks.  For example, do we really fully comprehend the dynamics of
anycast should there be a large scale disturbance to routing on the order
of 9/11?  Could the machinery that damps rapid swings of routes turn out 
to create blacked out areas of the net in which some portion of the root 
servers become invisible for several hours?  Could one introduce bogus 
routing information into the net and drag some portion of resolvers to 
bogus root servers?

I'm pretty sure that the root server operators have answers to these
questions.  However, it is incumbent on ICANN not to simply accept that
these people know what they are doing; ICANN must document it, ICANN must
inquire whether some of the decisions are made on public-policy
assumptions (in which case the public needs to become a party to those
decisions).

Considering that we know that there would be no ill effects to adding even
a hundred new top level domains, one has to wonder at the degree of
automatic deference (deference amounting to an institutional decision to
be blind) to the deployment of anycast as compared to the hyper-detailed
inquiry into matters even as irrelevant as the pronounceability in English
of a few proposed new top level domains.

In addition, an argument could well be made that anycast violates the
end-to-end principle.  For instance, it's hard, or impossible, to maintain
a TCP connection that spans a routing change that sunsets one anycast
partner and sunrises another.

Given that one of the strongest arguments against Verisign's Sitefinder is
that it breaks things, and that it violates the end-to-end principle,
Verisign lawyers must be very pleased that they can so easily demonstrate
that ICANN is willing to act with overt bias, to let slide, without
inquiry, those things proposed by ICANN friends.

 Sorry, anycast has been out there for quite a while; I am surprised you
 didn't know that.

No need for sarcasm.  As you must be well aware, I was the one who explained
to ICANN's Board how anycast works.  Indeed, I was the one who brought the
deployment of anycast roots to the Board's attention.  I know that the
ICANN Board considers its communications secret.  However, if I am required
to defend myself from what I consider to be an unwarranted and
unsupportable assertion regarding my professional knowledge, I would have
to consider it my right to defend myself and publish any and all relevant
materials from the archives of the Board's e-mail.

But you miss the point - the deployment of anycast for root servers was a
bold operational decision.  It was a decision made by the root server
operators alone, without ICANN.

ICANN's obligation is to guarantee to the public the stability of DNS at
the root layer.  ICANN's failure to engage in the issue of anycast
deployment was simply and clearly an abandonment of ICANN's
responsibilities.

 [I believe that the anycast change was a good one.  However, there is no 
 way to deny that that change was made independently of ICANN.]
 
 Anycast may even have preceded the creation of ICANN

Yes, anycast has been around for a long time.  Multicast, NATs, and OSI
all also preceded the creation of ICANN.  But does that mean that ICANN
should freely and without question allow the 

Re: national security

2003-11-29 Thread Karl Auerbach
On Sat, 29 Nov 2003, Paul Robinson wrote:

 ... realistically there is only one option left for a single, 
 cohesive Internet to remain whilst taking into account ALL the World's 
 population: ICANN needs to become a UN body.

If you look at what ICANN really and truly does you will see that it has
little, if any, real role relating to internet technology.  Rather it is
an organization that, for the most part, imposes the business goals of a
selected and limited set of privileged "stakeholders" onto the operation
of businesses that sell domain names.

Moving ICANN from the blind oversight of the US Department of Commerce to
the UN or ITU will only widen the stage for those privileged
stakeholders.  A move to the UN or ITU, by itself, will not improve the
security of the net or of any nation.

Without major structural reforms (such as I suggest at
http://www.cavebear.com/rw/apfi.htm ) ICANN will remain a non-technical
body that regulates and governs internet business practices.

As for this thread - national security - one has to remember that ICANN's
reaction to 9/11 was to create a committee.  That committee is filled with
intelligent and skilled worthies, many of whom have deep IETF roots.  
However, that committee, with respect to the matter of security, was
essentially stillborn and silent.  It has only come to life recently as a
vehicle to rebut Verisign's Sitefinder.  As an institutional matter,
ICANN has demonstrated that it really is not suited to deal with the
technical issues of security, much less the intricate balancing of public
policy in which security choices must necessarily be made.

Moving ICANN to the UN will not, without major structural changes in 
ICANN, improve this.

Some of those changes have occurred already:

ICANN has abandoned the actual operation of the dns root servers to those
who are actually doing that job.  This is a very good thing because the
latter group are not merely extremely competent, but they are also clearly
focused on the job of running root servers and have shown that they do not
care to use their role to enforce someone's idea of intellectual property
protection.

And ICANN has abandoned the allocation of IP addresses to the regional IP
address registries.  Again this is a good thing because there are few
within ICANN who remember that this was one of ICANN's three original
purposes, much less understand the technical and economic impact of
address allocation policies.  The RIRs, on the other hand, *do* understand
this.

Personally I do not care whether ICANN is under the US Department of 
Commerce or becomes a branch of the ITU.  Both are imperfect.  As a US 
citizen I can go (and have gone) to the DoC and argue my side.  I'd probably 
have a smaller voice were things to move to the ITU.  On the other hand, 
most of the people in the world are not US citizens and thus could find 
the ITU more open to them.

For me the core issue is not under what banner ICANN exists.  For me the
issue is restructuring ICANN-like vehicles of internet governance into
things that really have a synoptic view, that are not captured by a few
selected commercial stakeholders, and that need not be brought before a
judge (as I had to do with ICANN) in order to compel them to be open,
transparent, and accountable.


 Neither do I, but ICANN have clearly demonstrated:

 3. Putting Computer Scientists in charge of anything is fundamentally a 
 bad idea

Let's dispel a big chunk of that myth - ICANN has never been controlled
by computer scientists.  The board has always had a few people with rich
knowledge of the internet, but they were always a very tiny minority.  

Let us not forget that one of ICANN's first acts was to dismantle the job
of Chief Technology Officer.

The myth that ICANN is run by network experts has caused great damage.  
First of all, there is no reason to believe that those versed in computer
science are more capable of making public policy decisions than others.  
That myth of the Golden Age of Technical Kings died at the end of the
1930s.  [Take a look at the H.G. Wells movie "Things To Come" to see
that myth in full flower.]

Second, the myth has created a screen of deference that hides the acts of
those privileged stakeholders who have proven to be very skilled at
using ICANN to promote certain intellectual property agendas to the
exclusion of nearly everything else.

 In fact, they have shown they are worse at being in charge than
 politicians and lawyers...

Most of the people involved in all of this affair are good, smart, and
well-intentioned.  There are few Iagos.  ICANN is a glimpse of the future
that occurs when groups with different values and different uses of a
common language don't spend the time to really work down to fundamental
issues and goals.  I blame much of this on e-mail.  E-mail impedes the
development of those personal contacts that are necessary to build the
trust needed to bridge the differences of opinion and find the common
ground.

The 

Re: national security

2003-11-29 Thread Karl Auerbach
On Sat, 29 Nov 2003, vinton g. cerf wrote:

 I strongly object to your characterization of ICANN as abandoning
 the operation of roots and IP address allocation. These matters have
 been the subject of discussion for some time.

I can't seem to recall during my 2 1/2 years on ICANN's board that there
ever was any non-trivial discussion, even in the secrecy of the Board's
private e-mail list or phone calls, on the matters of IP address
allocation or operation of the DNS root servers.  Because I was the person
who repeatedly tried to raise these issues, only to be repeatedly met with
silence, I am keenly aware of the absence of any substantive effort, much
less results, by ICANN in these areas.

So, based on my source of information, which is a primary source - my own
experience as a Director of ICANN - I must disagree that ICANN has actually
faced either the issue of DNS root server operations or of IP address
allocation.  And ICANN's "enhanced architecture for root server security"  
was so devoid of content as to be embarrassing - see my note at
http://www.cavebear.com/cbblog-archives/07.html

The DNS root server operators have not shown any willingness to let ICANN
impose requirements on the way they run their computers.  Indeed, the
deployment of anycast-based root servers without even telling ICANN in
advance, much less asking for permission, is indicative of the distance
between the operations of the root servers and ICANN.

[I believe that the anycast change was a good one.  However, there is no 
way to deny that that change was made independently of ICANN.]

Sure, ICANN prepares, or rather, Verisign prepares and ICANN someday hopes
to prepare, the root zone file that the DNS root servers download.  But to
say that preparation of a small, relatively static, text file is the same
as overseeing the root servers is inaccurate.

In addition, the root server operators have shown that they are very able 
to coordinate among themselves without ICANN's assistance.

 ICANN absolutely recognizes the critical role of the RIRs

Again, "recognizing" the RIRs is an admission that ICANN has abandoned its
role as the forum in which public needs for IP addresses and technical
demands for space and controlled growth of routing information are
discussed and balanced.  Fortunately the RIRs have matured and are
themselves the IP address policy forums that ICANN was supposed to have
been.  Moreover, the RIRs have shown that they are more than capable of 
doing a quite good job of coordinating among themselves.


 There is still need for coordination of policy among these groups
 and the other interested constituents and that is the role that
 ICANN will play. 

Again, ICANN cannot demonstrate that it has engaged, because it has not
engaged, in the coordination of IP address policy.  Sure, ICANN has
facilitated the creation of a couple of new RIRs.  But again, there is a
vast distance between that and ICANN being the vehicle for policy
formulation or oversight to ensure that those policies are in the interest
of the public and technically rational.


I have serious doubts that ICANN will be able to meet its obligations
under the most recent terms of the oft-amended Memorandum of Understanding
between ICANN and the Department of Commerce.  I see no sign that the DNS
root server operators or the RIRs are going to allow themselves to become
dependencies of ICANN and to allow their decisions to be superseded by
decisions of ICANN's Board of Directors.

--karl--







Re: [Fwd: [Asrg] Verisign: All Your Misspelling Are Belong To Us]

2003-09-16 Thread Karl Auerbach

On Tue, 16 Sep 2003, Zefram wrote:

 ...  I suggest the following courses of action, to be taken
 in parallel and immediately:

 1. Via ICANN, instruct Verisign to remove the wildcard.

It isn't clear that this power is vested in ICANN.  There is a complicated
arrangement of Cooperative Agreements, MOUs, CRADAs, and Purchase Orders
that exists between various agencies of the US Department of Commerce
(including NTIA, NIST, and others) and ICANN and Verisign/NSI.

This web of agreements is sufficiently complicated that it often really isn't
exactly clear who can compel Verisign/NSI on any particular point.  In
fact it may well be that the power does not exist.  Or it may take a lot of
legal dollars and time to press the issue.

To make the situation even less clear, there is, I believe, no statement
in the relevant Internet Standards documents that clearly rules out this
kind of wildcarding.  (Yes, I think we can all agree that this particular
use of wildcarding *is* a bad thing; I'm simply pointing out that to those
who are not technically grounded in DNS matters, without a clear
prohibition in the Internet Standards, the matter isn't so obvious.)

By-the-way, NeuLevel (.us and .biz) did an experiment along these lines
back in May of this year.  It was short-lived.  At the time I thought it
was a bad thing, and I still do.  And at the time I wrote and sent to the
ICANN board an evaluation of the risks of that experiment.

--karl--





Re: Pretty clear ... SIP

2003-08-25 Thread Karl Auerbach

  It has been my experience that ASN.1, no matter which encoding rules are
  used, has proven to be a failure and lingering interoperability and
  denial-of-service disaster.

I think the nugget of our discussion is the old, and probably
unanswerable, question of what is the proper balance between present
function and future (unforeseen) extension.

Back in the 1970's I met a very smart system designer.  He drew a
distinction between "intricate" and "complicated".  A fine watch with many
moving parts could be intricate - a well-engineered solution to a
specific problem - while a Rube Goldberg timepiece could be complicated and
not well engineered.  The difference is that unnecessary
elements are elided from an intricate solution unless there is a specific
articulated reason to leave them in.

ASN.1 (along with other general-purpose encoders such as XML) carries a 
heavy burden of machinery that is present whether it is needed or used or 
not.

Your point about ASN.1/PER having some security benefits because the
cleartext is less predictable is interesting - and I sense that it is
quite valid.  However I would suggest that there is a much greater risk
that comes from putting the heavy and complex machinery of ASN.1/*ER
engines into deployed implementations - and that is that most
implementations of products are very poorly tested against any but the
most mainstream of traffic flows.  A complicated engine such as ASN.1/*ER
is full of nooks and crannies where undetected flaws could exist.

A simple encoding, well suited to the particular job at hand, is less 
likely to contain untested code paths, whether those paths be generated by 
compiler or by hand.

(By-the-way, I don't accept the assertion that compiler-generated
ASN.1/*ER engines are going to be better than hand-tooled ones - most of
the latter that I've seen (some of which I've written) use libraries for
all the heavy lifting - fix a bug in the library and relink and you're
done, just like with a compiler.  And I agree with Rob A. that in many
embedded devices, we still can't ignore memory and processing
inefficiencies.)

The net is slowly evolving an edge layer of devices that are best
described as sealed appliances - these are small devices that will tend to
never experience an update to the factory installed code image.  It is far
more important to get these right before they are shipped than to have the
ability to extend their capabilities in the field.

I do not believe that complicated representations such as ASN.1/*ER
reflect a good balance between engineering needs for the present and
expansibility for the future - thus I put ASN.1 and *ER into the
complicated rather than the intricate category.

It is certainly true that net telephony and conferencing need
extensibility - but I would suggest that the hooks for extensibility ought
to be concisely defined and placed in specific parts of the protocol
structures (such as the SDP part of today's call establishment protocols).  
I see no need to burden the entire protocol representation under a mutable
layer of complexity such as ASN.1 when there is no reason that can be
articulated to require such mutability.

By way of analogy: IPv6 addresses could have been of variable size, like
NSAPs.  But so far I don't think that enough reasons have been put forth
to justify moving away from a relatively solid and fixed-size format to a
variable address format.

Good engineering includes a kinda sixth sense of knowing when to stop
adding features.

 And of course, the protocol specifier is free to specify one of 4 variants
 of PER

When I see phrases like "4 variants" I hear "one normal way and three
routes for attack."  Which reminds me of why the Internet isn't running
ISO/OSI today.  (Although I was surprised to learn recently that the ICAO
is presently mandating voice over CLNP for some new ground-air and air-air
systems for commercial airplanes.)

ISO/OSI had lots of good ideas.  But it could never focus or prune.  It
built a top heavy, over-optioned Vasa[*] that tipped over and sank before
it could ever be deployed.

[* - The Vasa was a 17th century Swedish warship that was so larded with 
options that it tipped over and sank on its first trip across the harbor.]

I really want VOIP and conferencing to work, not sink.

  As far as SIP vs H.323 goes - apart from the market fact that it is
  getting increasingly more difficult to find new products that support
  H.323 -
 
 Few products?  Every Microsoft OS ships with an h323 client called
 netmeeting.

Netmeeting dates from around 1997.

Most of my hard VOIP phones are either SCCP (let's not talk about that ;-)  
and/or SIP.  I haven't heard much buzz about new H.323 work.

 I find just the opposite. Now I have to worry about the security of SIP
 phones, and that they might be used for evesdropping.  H323 and and
 trusted ASN.1 compilers can go a long way past SIP for ensuring
 trustability of such devices.

You are not alone in your 

Re: Pretty clear ... SIP

2003-08-24 Thread Karl Auerbach
On Sat, 23 Aug 2003, Dean Anderson wrote:

 H.323 and ASN.1 eventually surpass ...

Ummm, based on my own direct experience with ASN.1 since the mid 1980's
(X.400, SNMP, CMIP...), I disagree.

It has been my experience that ASN.1, no matter which encoding rules are
used, has proven to be a failure and a lingering interoperability and
denial-of-service disaster.

For example, the flaws in ASN.1 parsers in SNMP engines have proven to be
a decade-plus-old vulnerability for the net.

We'd be much better off with XML, Scheme expressions, or Python pickles
than with ASN.1 both for expressing data structures in documents and for 
encoding data structures into binary for carriage in packets.

If one wants compression, then it is better to apply it to the entire
packet, or the byte stream - the results will almost always be much, much
better then the item by item packing done by ASN.1's encoding rules.  (I
was always amused that using BER, ASN.1 integers were frequently bigger
than simply 32-bit binary, particularly when carrying unsigned numbers,
which are rather common in networking protocols.)
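
A quick worked illustration of that point - a Python sketch, not tied to 
any particular ASN.1 toolkit, assuming short-form lengths and non-negative 
values only:

    def ber_integer(value):
        # Minimal BER/DER encoding of a non-negative INTEGER: tag 0x02,
        # a one-byte length, then two's-complement content octets.
        content = value.to_bytes((value.bit_length() + 8) // 8 or 1, "big")
        return bytes([0x02, len(content)]) + content

    counter = 0xFFFFFFFF                    # a full 32-bit unsigned counter
    print(len(ber_integer(counter)))        # 7 bytes: 02 05 00 FF FF FF FF
    print(len(counter.to_bytes(4, "big")))  # 4 bytes as plain fixed-width binary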

(In addition, with the 60 octet minimum packet size imposed by many data
links, compression of typically small packets - such as those containing
SDP information - often doesn't result in much of a gain anyway.)

As far as SIP vs H.323 goes - apart from the market fact that it is
getting increasingly more difficult to find new products that support
H.323 - I find H.323 to be qualitatively worse, as measured in units of
elegance, than SIP.  I too am sorry that SIP has gotten more complex as it
has experienced actual implementation pressures.  However, H.323 started
out as a mish mosh - a large dollop of ISDN, a dab of SDP, a chunk of
RTP/RTCP - and it remains a mish mosh.

--karl--













Re: re the plenary discussion on partial checksums

2003-07-16 Thread Karl Auerbach
On Wed, 16 Jul 2003, Keith Moore wrote:

 so it seems like what we need is a bit in the IP header to indicate that
 L2 integrity checks are optional

A lot of folks seem to forget that, from the point of view of IP, L2
includes the busses between memory and the L2 network interface.  There
have been more than a few recorded cases where packet errors were
introduced as the packet flowed in or out of memory, unprotected by link
CRCs.

To my way of thinking we don't need a bit in the IP header; we need a bit
in the heads of implementors to remind them that relying on link-by-link
protection can be dangerous even if the links have strong CRCs.

 ... IP option to provide a stronger checksum than normally exists

The last time I saw a comparison of checksum algorithm strengths was back 
in the OSI days when the IP checksum was compared to the OSI Fletcher 
checksum (my memory is that the IP checksum came in second.)
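
For reference, minimal sketches (Python) of the two algorithms being 
compared - the RFC 1071 ones-complement sum used by IP and an 8-bit 
Fletcher sum of the sort used in the OSI checksum.  These are illustrative 
only; the actual strength comparison is left to the literature:

    def internet_checksum(data):
        # RFC 1071: ones-complement sum of 16-bit words, then complemented.
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
        return (~total) & 0xFFFF

    def fletcher16(data):
        # Fletcher's checksum: two running mod-255 sums, the second of
        # which effectively weights each byte by its position.
        c0 = c1 = 0
        for byte in data:
            c0 = (c0 + byte) % 255
            c1 = (c1 + c0) % 255
        return (c1 << 8) | c0

    sample = b"\x45\x00\x00\x28\x00\x00\x40\x00"
    print(hex(internet_checksum(sample)), hex(fletcher16(sample)))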

--karl--





Re: utility of dynamic DNS

2002-03-01 Thread Karl Auerbach

On Fri, 1 Mar 2002, John Stracke wrote:

 Try this one: while in your hotel room, you see there's something you need 
 to download.  By the time you get dressed, it's still coming down; and you 
 have to go to a meeting.  If you're using Mobile IP, you may be able to 
 move from one network to another before the TCP connection dies.

There is another, alternative way to solve this: an "association" layer
above TCP that allows application/client-to-application/server 
communications to span a sequence of lifetimes of underlying transports.

Yes, this would require reworking many existing application protocols 
(In some cases that might be a good thing ;-)

The actual association protocol is pretty simple - it really
amounts to an exchange of tokens between the two ends so that they have an
agreed-upon fallback point should they need to re-establish the transport
and resume.  The protocol engines themselves need not store any data - the
applications themselves are responsible for holding enough to resume from
the last agreed-upon token.  My guess is that something like this could be
readily incorporated into BEEP if it isn't there already.
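
As a rough sketch of the shape such a layer might take (Python, invented
for this note - the "RESUME"/"MARK" line exchange and the replay_from
callback are illustrative, not any existing protocol):

    import socket

    class Association:
        # A client-side sketch of an association layer.  The application
        # supplies replay_from(token) -> bytes, so it - not this layer -
        # holds whatever it needs to resume from the last agreed-upon token.
        def __init__(self, host, port, replay_from):
            self.host, self.port = host, port
            self.replay_from = replay_from
            self.token = 0          # last fallback point both ends agreed on
            self.sock = None

        def _reconnect(self):
            self.sock = socket.create_connection((self.host, self.port))
            self.sock.sendall(b"RESUME %d\r\n" % self.token)
            reply = self.sock.makefile("rb").readline()    # e.g. b"OK 42"
            self.token = min(self.token, int(reply.split()[1]))
            self.sock.sendall(self.replay_from(self.token))

        def send(self, data):
            while True:
                try:
                    if self.sock is None:
                        self._reconnect()
                    self.sock.sendall(data)
                    return
                except OSError:     # transport died: fall back and resume
                    self.sock = None

        def mark(self, new_token):
            # Propose a new fallback point; a real protocol would wait for
            # the peer's acknowledgement before advancing self.token.
            self.send(b"MARK %d\r\n" % new_token)
            self.token = new_token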

--karl--





Re: Example of dns (non) fun

2000-12-04 Thread Karl Auerbach


 actually your urls could be:
 
   http://www.bq--aduwvya.fr/
   http://www.deja.fr/
 
 an application may render bq--aduwvya.fr as déjà.fr, or it may not.
 Finally, it would be up to the UDRP process or the courts as to *if* the
 two domains are the same.  We shouldn't worry about what the UDRP or the
 courts may or may not do.

Yes, the issue of "equivalence" extends well beyond what we can do in the
technical realm -- for instance "red ball" and "bal rouge" can be
considered equivalent for certain purposes (such as certain parts of
trademark law.)

I'd hate to be the one who has to write the library function
int strcmp_that_satisfies_everybody();

My own preference is to create a flexible mechanism (perhaps with some
degree of character set canonicalization) and let the courts, lawyers,
legislatures, etc try to figure out how to overlay restrictive
interpretations.
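
For the mechanical half of the question, here is a tiny round-trip sketch
(Python).  The "bq--" form quoted above was one of the early ACE
proposals; the convention in current use is Punycode behind an "xn--"
prefix:

    name = "déjà.fr"
    ace = name.encode("idna")     # expect b'xn--dj-kia8a.fr'
    print(ace)
    print(ace.decode("idna"))     # back to 'déjà.fr'

The round trip answers only the character-set question; whether déjà.fr
and deja.fr are "the same" is exactly the part left to the UDRP, the
courts, and the legislatures.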

--karl--





Re: Storage over Ethernet/IP

2000-05-26 Thread Karl Auerbach

 
 a.  TCP is too CPU intensive and creates too much latency for storage I/O operations.
 
 b.  The IP stack is too top heavy and processing packet headers is too
 slow to support storage I/O operations.

There were some papers published during the late '80s or early '90s by
John Romkey and, I believe, Dave Clark and Van Jacobson about the length of
instruction sequences needed to handle TCP.  I'm not sure that those ever
became RFCs.

Those papers came up with figures indicating that if one structures code
"correctly" and if the net path is "clean" (i.e. not a lot of packet loss,
reordering, replication, etc.) then the per-packet instruction sequences
(sans IP checksum calculation) could be very short.

Does anyone have the references to these papers?

--karl--







Re: runumbering (was: Re: IPv6: Past mistakes repeated?)

2000-04-25 Thread Karl Auerbach


  Which raises the interesting (to me anyway) question: Is there value in
  considering a new protocol, layered on top of TCP, but beneath new
  applications, that provides an "association" the life of which transcends
  the TCP transports upon which it is constructed?
 
 been there, done that.  yes, it is valuable.  but it is expensive
 in terms of the amount of protocol overhead and support that you 
 need to make it work reliably in the face of various failures.
 and it can be slow to recover from such failures.
 
 for instance, you have to build your own data acks on top of TCP,

There is a protocol design that did this, but we tend to shield our eyes
when we look that way - the OSI Session layer.  It was a hideous thing
indeed to read on paper, and way overburdened with options.  But when one
got down to the wire, the actual protocol traffic was small.  It didn't
try to do any kind of reliability or sequencing, or even sensing that a
transport had died.  Rather, it maintained a sequence of restart points -
and it was only at those points (which were triggered by the application
saying "mark this point") that there were things that could be called
"acks".

So what I am suggesting is that there is evidence that one
can do an "association" protocol that is relatively lightweight in terms
of machinery, packets, packet headers, and end-node state if one leaves
the heavy lifting of reliability to the underlying TCP protocol.

I bet Marshall Rose has some good comments on this as he actually went and
did some of this.

(By the way, I'm not in any way suggesting OSI Session; I'm only trying to
learn from the past.)

--karl--






Re: prohibiting RFC publication

2000-04-08 Thread Karl Auerbach


I'd like to note my agreement with the comments made by Dave Crocker.

And I would like to suggest that there is perhaps yet another aspect of
this debate:

The IETF recently made a strong moral statement against CALEA.  That
statement carried weight; it was noticed; it had impact.

And that statement carried weight precisely because it was unique - a
statement of morality of the kind that is reserved for the worst of the
worst.

If the IETF engages in routine non-acceptance of "informational" documents
on the basis of non-technical concerns, the IETF will, I believe, lose its
clear and loud voice at just those times when that voice most needs to be
heard.

--karl--







Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-06 Thread Karl Auerbach


 I am writing to request that the RFC Editor not publish 
 draft-cerpa-necp-02.txt as an RFC in its current form,
 for the following reasons:
 
 2. A primary purpose of the NECP protocol appears to be to 
 facilitate the operation of so-called interception proxies.  Such 
 proxies violate the Internet Protocol in several ways: 
 
 3. Aside from the technical implications of intercepting traffic,
 redirecting it to unintended destinations, or forging traffic from
 someone else's IP address - there are also legal, social, moral and
 commercial implications of doing so.

You will need to be far more specific here.  I see absolutely nothing that
is not legal, is not social, or is not moral.  I do see commercial
implications, but whether those are "good" or "bad" is not a technical
judgement.
 
 In my opinion IETF should not be lending support to such dubious
 practices by publishing an RFC which implicitly endorses them, even
 though the authors are employed by major research institutions and
 hardware vendors.

I take the contrary position.  The IETF ought to be encouraging the
documentation of *all* practices on the net.  It is far better that they
be documented where people can find useful information when they see this
kind of packet activity rather than have them known only to a few
cognoscenti.

May I suggest that one treat this in its classical sense - as a Request
for Comments - and that those who have technical objections or technical
enhancements publish those comments in an additional document rather than
try to suppress the original one.

Having a document trail that shows what paths and ideas have been found
wanting is nearly as important as having a trail that shows what paths
have been found useful.

--karl--




Re: Last Call: Registry Registrar Protocol (RRP) Version 1.1.0 to Informational

2000-01-04 Thread Karl Auerbach

 
 I am glad that NSI has published the I-D for their protocol; now, does it
 need to go beyond that and become an RFC?  IMHO, no.

Since I-Ds still officially vanish after a while, we need to move it to
RFC to maintain its visibility.  Let's defer comments on the I-D fade out
policy.

 The IETF does not need to publish broken implementations of one company's
 view of the shared gTLD registration process.

We can learn from the flaws of the past.  And this RRP certainly gives us
a lot to learn from.

I would hope that having an Informational RFC on the RRP would motivate
some folks to think up, write down, and publish "A Better RRP".  Your own
work on representing the whois data using XML is perhaps a good start.

(I can imagine that many sites with big zone files might find it useful to
have a tool using "A Better RRP" to distribute the administration of their
zone.)

By the way, I think that the IETF has already jumped through enough hoops
regarding the misperception by some that the letters "RFC" are some sort
of seal of approval.

--karl--