Re: Wiki RFC

2004-01-28 Thread Franck Martin

The page is on:
http://tikiwiki.org/tiki-index.php?page=RFCWiki

Also some background
http://tikiwiki.org/tiki-index.php?page=TextWiki

and the syntax to be put in RFC format:
http://doc.tikiwiki.org/tiki-index.php?page=Wiki+Text+Formatting

Anybody wanting to help is most welcome...

On Thu, 2004-01-29 at 15:40, Harald Tveit Alvestrand wrote:

please do - it would follow in the glorious tradition of publishing RFCs 
for such useful things as the OGG file format, the Vorbis codec and the 
zlib compression formats.

--On 29. januar 2004 15:10 +1200 Franck Martin <[EMAIL PROTECTED]> wrote:

> Well,
>
> As part of the www.tikiwiki.org project, I have been talking about
> writing the wiki syntax up as an RFC-like document. The wiki syntax
> in this project is documented, but the documentation does not look
> like an RFC.
>
> I think we could quickly draft an RFC on the wiki syntax and ask
> people to contribute to it... When it is stabilised we could submit
> it to the IETF.
>
> One advantage is that there is already an implementation; another
> would be to bring some standardisation to all the wikis.
>
> I think that since the creation of the Web there has not been such a
> revolution as the wiki. Imagine an e-mail message in wiki format: no
> need to use HTML in a multipart message anymore. The text can be read
> as is, or wiki-interpreted for better display...

Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9  D9C6 BE79 9E60 81D9 1320
"Toute connaissance est une réponse à une question" ("All knowledge is an answer to a question") G. Bachelard

signature.asc
Description: This is a digitally signed message part


Re: Wiki RFC

2004-01-28 Thread Harald Tveit Alvestrand
please do - it would follow in the glorious tradition of publishing RFCs 
for such useful things as the OGG file format, the Vorbis codec and the 
zlib compression formats.

--On 29. januar 2004 15:10 +1200 Franck Martin <[EMAIL PROTECTED]> wrote:

Well,

As part of the www.tikiwiki.org project, I have been talking about
writing the wiki syntax up as an RFC-like document. The wiki syntax
in this project is documented, but the documentation does not look
like an RFC.

I think we could quickly draft an RFC on the wiki syntax and ask
people to contribute to it... When it is stabilised we could submit
it to the IETF.

One advantage is that there is already an implementation; another
would be to bring some standardisation to all the wikis.

I think that since the creation of the Web there has not been such a
revolution as the wiki. Imagine an e-mail message in wiki format: no
need to use HTML in a multipart message anymore. The text can be read
as is, or wiki-interpreted for better display...

Re: Wiki RFC

2004-01-28 Thread Franck Martin

Well,

As part of the www.tikiwiki.org project, I have been talking about writing the wiki syntax up as an RFC-like document. The wiki syntax in this project is documented, but the documentation does not look like an RFC.

I think we could quickly draft an RFC on the wiki syntax and ask people to contribute to it... When it is stabilised we could submit it to the IETF.

One advantage is that there is already an implementation; another would be to bring some standardisation to all the wikis.

I think that since the creation of the Web there has not been such a revolution as the wiki. Imagine an e-mail message in wiki format: no need to use HTML in a multipart message anymore. The text can be read as is, or wiki-interpreted for better display...

Cheers

On Thu, 2004-01-29 at 13:25, Harald Tveit Alvestrand wrote:

--On 28. januar 2004 12:49 +1200 Franck Martin <[EMAIL PROTECTED]> wrote:

> I was just wondering if there has been any work to standardise the Wiki
> syntax/system into an RFC?

AFAIK, no.
Do you know of any effort to make stable documentation for the WIKI syntax 
& functionality, or is it just "use the Source, Luke"?

Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9  D9C6 BE79 9E60 81D9 1320
"Toute connaissance est une réponse à une question" ("All knowledge is an answer to a question") G. Bachelard




Re: Wiki RFC

2004-01-28 Thread Harald Tveit Alvestrand


--On 28. januar 2004 12:49 +1200 Franck Martin <[EMAIL PROTECTED]> wrote:

I was just wondering if there has been any work to standardise the Wiki
syntax/system into an RFC?
AFAIK, no.
Do you know of any effort to make stable documentation for the WIKI syntax 
& functionality, or is it just "use the Source, Luke"?

SUMMARY: Processing of expired Internet-Drafts

2004-01-28 Thread Harald Tveit Alvestrand
Just to make one thing clear

The published processing of expired Internet-Drafts was intended to be a 
reasonably small change to existing procedures.

That's not to say that the procedures are going to live forever. But we 
don't want to make bigger changes than we have to until we're ready to 
overhaul the whole system (and know why).

Anyway, back to the summary. I've tried to represent what people said, 
and group them into a few topics. Conclusions (mine) at the end.

* Use of tombstone files.
Current procedure is to create tombstone files whenever an I-D is no longer 
valid (withdrawn, expired, published as RFC).
Arguments raised are that a file of "old names" would be better (Fred,
Ted, Eric F), or a separate directory (Carl), or a search system (Alexey).

* Tombstones and expiry.
Current procedure is to make them live forever.
Arguments raised are that 700 IDs per year produce a lot of tombstones 
(Fred, Ken), and that we could expire them after 6 months (Zefram, Fred) or 
2 years (David Morris).
My argument is that we've lived with the present system since 2001, and 
produced only around 2500 tombstones; it's not worth bothering to change 
that until we change the system more dramatically.

* Tombstones and version numbers:
Current procedure is to expire name-nn, create tombstone name-nn+1, and if 
it is resurrected, to resurrect as name-nn+1.
(Until some time ago, the tombstone was name-nn)
Arguments raised are that 2 files with the same name and different content 
is a Bad Thing, and that we should increment the number once again for a 
new version (Fred, Zefram, Jim Galvin), or use another extension for 
tombstones (David Morris).
One problem seen is that searching for name-nn+1 will now give a hit
(Carl), and that maintaining mirrors is harder (Fred).
The problem of mirroring can be cleanly solved with rsync (Thomas), or 
comparing file sizes (someone else).

* Status of drafts:
Current procedure is to let the version-numbered files, including 
tombstones, serve as status information, as well as 1id-index listing the 
current drafts.
Argument is that a name without the version number would be easier to
find (Iljitsch); this is easy to make when you need it (Scott Brim), and
it exists at www.watersprings.org, among other places (Tim Chown), and at
www.potaroo.net (Geoff).

Conclusions, all mine:

- Documenting current procedures is good.
- We won't expire tombstones. They're not a big enough problem yet.
- We'll think about naming tombstones something other than the exact
draft name (for instance draft-whatever-version-nn-expired.txt???)
- We'll note the issue of referencing names without the version number as 
input for thinking about overhauling the whole I-D system. But that won't 
happen very quickly - it "mostly works".

Seems to make sense?

   Harald


Re: Death of the Internet - details at 11

2004-01-28 Thread Randall R. Stewart (home)
Iljitsch van Beijnum wrote:

On 28-jan-04, at 22:00, Randall R. Stewart (home) wrote:

In other words, when there is a serious solution to
multihoming -- ie, being able to preserve a connection when
using more than one IP Address -- it will likely work for IPv4.


Yes.. SCTP solves the problem for V4 and V6 (missed that bit last time). 

Let me go through these one by one...

That remains to be seen. The list of issues that SCTP has or at least 
seems to have is long. To name a few:

- increased overhead compared to TCP
Ok, let's see. SCTP takes on average 4 more bytes per data packet than
TCP. However, if the TCP implementation enables timestamps, then that
is no longer true, and TCP takes about 4 bytes more overhead...
Unless you are discussing another type of overhead...

- requires significant changes from applications 


Ok, let's see... for Mozilla we converted two lines of code:

sd = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
became --> sd = socket(AF_INET6, SOCK_STREAM, IPPROTO_SCTP);

and

setsockopt(sd, IPPROTO_TCP, TCP_NODELAY, &on_off, sizeof(on_off));
became --> setsockopt(sd, IPPROTO_SCTP, SCTP_NODELAY,
&on_off, sizeof(on_off));

Now to take advantage of the stream feature you would need to do
more... but for pure multi-homing, one or two lines of change does
not seem that big of a deal to me...
- no backward compatibility of any kind
I am not sure what you mean by backward compatible? You definitely
can't have TCP and SCTP talk... they are, after all, different protocols...
But if an application needs the redundancy, the move to SCTP is there
today with about 2 lines of coding change...
- source address selection problem isn't addressed fully, if at all
I don't think I understand this issue either... We have fully
addressed source address selection in the KAME implementation. It
is not a difficult problem... it does require some code... but any good
implementation must address this issue... And since the site scope
went away in IPv6 (at least for now), it's easier to do than it was.


I can't be sure right now, but I also suspect SCTP could very well be 
vulnerable to some of the threats identified lately in the multi6 wg. 
I would have to look at your threats... if you add the dynamic address
feature... sure, that's why the document has not progressed... but that
will change when the Purpose Built Keys instantiation in SCTP happens...
hopefully soon... just another draft to write :-D

I think you may want to have a little closer look at SCTP.. you might
want to get the KAME BSD implementation and play with it for
a bit.. I think you would be amazed at how simple it is to
convert an application and you end up with multi-homing for
free with those 2 lines of code :->
R

--
Randall R. Stewart
815-477-2127 (office)
815-342-5222 (cell phone)




Re: Death of the Internet - details at 11

2004-01-28 Thread Iljitsch van Beijnum
On 28-jan-04, at 18:39, John C Klensin wrote:

The reality is that there is very little that we do on the Internet
today that requires connection persistence when a link goes bad (or
when "using more than one IP address").  If a connection goes down,
email retries, file transfer connections are reconnected and the file
(or the balance of the file if checkpointing is in use) is transferred
again, URLs are refreshed, telnet and tunnel connections are recreated
over other paths, and so on.  It might be claimed that our
applications, and our human work habits, are designed to work at least
moderately well when running over a TCP that is vulnerable to dropped
physical connections.
This assumes that when address A fails, address B keeps working. This 
is only true when routing is symmetric or the multiaddressed endpoint 
is able to detect the failure. And applications need to retry with 
other addresses. They typically don't do this very well if at all in 
IPv4, and only moderately well in IPv6.

Would it be good to have a TCP, or TCP-equivalent, that did not have 
that vulnerability, i.e., "could preserve a connection when using more 
than one address"?  Sure, if the cost was not too high on normal 
operations and we could actually get it.
There are several proof of concept multiaddress TCPs.

[sorry for the long quote:]

By contrast, the problem that I find of greatest concern is the one in 
which, if I'm communicating with you, and one or the other of us has 
multiple connections available, and the connection path between us 
(using one address each) disappears or goes bad, we can efficiently 
switch to a different combination... even if all open TCP connections 
drop and have to be reestablished in the interim. For _that_ problem, 
we had a reasonably effective IPv4 solution (at least for those who 
could afford it) for many years -- all one needed was multiple 
interfaces on the relevant equipment (the hosts early on and the 
router later) with, of course, a different connection and address on 
each interface.  But, when we imposed CIDR, and the address-allocation 
restrictions that went with it, it became impossible for someone to 
get the PI space that is required to operate a LAN behind such an 
arrangement (at least without having a NAT associated with the 
relevant router) unless one was running a _very_ large network.
??? Why would you need PI space to be able to give hosts more than one 
address and use those successfully?

And if you had the PI space, why would you bother? Contrary to some 
reports multihoming using independent address space and links to more 
than one ISP works fairly well: failover times are almost always 
shorter than TCP or user timeouts.

(i) if any of the options turn out to require an
approach similar to the one that continue to work for
big enterprises with PI space in IPv4, then we are going
to need (lots) more address space.  And
More than what?

However quite a number of the proposals do not
require any significant infrastructure change.  This bodes
well for rapid deployment, once they make it through the
standards process.

On the other hand, getting the IETF to produce standards track
specifications out of this large pack of candidates could take
another 10 years...
Looking at the rate at which the IETF is coming up with ways to 
automatically determine IPv6 DNS resolver addresses I can hardly 
disagree.

But most of the multi6 proposals have large parts in common with other 
proposals. It seems to me that all we have to do is combine the best 
parts. How hard can that be? (Famous last words.)

Yes.  And it may speak to the IETF's sense of priorities that the 
efforts to which you refer are predominantly going into the much more 
complex and long-term problem, rather than the one that is presumably 
easier to solve and higher leverage.
Which would be?

Re: visa requirements (US citizens)

2004-01-28 Thread Gene Gaines
Visas for travel to Seoul, Korea IETF meeting.

Perhaps I can settle this.

A U.S. citizen does NOT need a visa to visit Korea for a meeting
by a non-profit group such as the Internet Engineering Task Force.
I just confirmed this with the head of the visa section in the
Korean Consulate in Washington DC.

But don't take my word for it. If anyone requests, I will be glad
to get an official letter faxed from the Korean Consulate-General.
I would carry that letter with your current U.S. passport.

More precise statement:

  - U.S. citizens traveling to Korea to attend the IETF meeting
    do not need a visa, as they are traveling to attend a
    non-profit conference.  They can stay in Korea up to 30 days
    for such purposes and for tourism.

  - If you travel to Korea for business purposes, such as meeting
    customers, then a visa is required.

  - There also is confusion about government employees.  U.S.
    government employees going to Korea just for tourism or a non-
    profit conference such as IETF do not need a visa, because they
    are going as private citizens.  However, government employees
    going to Korea for official purposes do need an official visa.

I won't request an official letter unless someone asks me to do
so. I could post it on a neutral web site or email it to you.

Gene Gaines
[EMAIL PROTECTED]
Sterling, Virginia

On Wednesday, January 28, 2004, 3:54:12 PM, Eric wrote:


> On 1/28/2004 12:46 PM, Kevin C. Almeroth wrote:

>> Seems to me to be pretty clear that a visa is not needed.

> These are the future possibilities:

>  1) You got the visa, the guard on duty that day deems it unnecessary,
> and you curse the effort you spent to get it.

>  2) You don't get the visa, the trainee on duty that day deems it is
> necessary, and you curse the ~30 hour round-trip flight, the
> money, and the effort you spent avoiding the visa fetch.



-- 




Re: Death of the Internet - details at 11

2004-01-28 Thread Noel Chiappa
> From: John C Klensin <[EMAIL PROTECTED]>

> For _that_ problem, we had a reasonably effective IPv4 solution .. for
> many years

We only had a "solution" as long as we had a small network. It was not a
solution that would scale. If that was a "solution", then IPv4 is a
"solution" to the need for address space.

> when we imposed CIDR, and the address-allocation restrictions that went
> with it, it became impossible for someone to get the PI space that is
> required ...
> I'll stipulate this is a routing problem as much, or more, than it is
> an address-availability problem.

CIDR was always a single set of mechanisms (ubiquitous use of address masks)
which solved two completely separate problems: i) allocation of address space
in finer-grained chunks, to slow the rate of use, and ii) address table
bloat, leading to routing instability. The routing aspect was not an
afterthought or a later discovery, but an integral aspect from the very
beginning. Thinking of CIDR as an address-space allocation mechanism is broken.


> I'll also agree that there appears to be little evidence that IPv6 is
> significantly more routing-friendly than IPv4

"little" -> "none".

> any real routing-based solutions that help the one will help the other.

One of the painful pills the WG members have had to swallow is that there *is
no* "routing-based solution" to providing support for wide-spread
multi-homing (at least in anything like the current routing architecture,
i.e. packets which include only source and destination addresses, as opposed
to a source route).

If you (or anyone else) doesn't understand why, please review the WG mailing
list archives before claiming otherwise.


>   (i) if any of the options turn out to require an
>   approach similar to the one that continue to work for
>   big enterprises with PI space in IPv4, then we are going
>   to need (lots) more address space.

Since the precursor is not going to happen, we don't need to worry about
the hypothetical consequence.


> there is very little that we do on the Internet today that requires
> connection persistence when a link goes bad

I made this exact point some time back, and asked if this was therefore
really a requirement, and was told that it was. I remained extremely dubious,
but had no interest (for obvious reasons) in correcting it if wrong.

I will note in passing that if this capability were not really needed, one
might ask why SCTP added it.

> it may speak to the IETF's sense of priorities that the efforts to
> which you refer are predominantly going into the much more complex and
> long-term problem, rather than the one that is presumably easier to
> solve and higher leverage.

The general approach (of multiple addresses) is the only realistic one. Any
analysis of whether the additional requirement (for connection survivability)
has a reasonable cost/benefit ratio needs to start with this ground truth.

It may be that the differential complexity to go from i) multiple addresses,
and established connections cannot switch, to ii) multiple addresses, and
established connections can switch, is minimal. I can't say; I haven't
bothered to look at the (to me, boring) engineering details.

Noel



Re: Death of the Internet - details at 11

2004-01-28 Thread Iljitsch van Beijnum
On 28-jan-04, at 22:00, Randall R. Stewart (home) wrote:

In other words, when there is a serious solution to
multihoming -- ie, being able to preserve a connection when
using more than one IP Address -- it will likely work for IPv4.

Yes.. SCTP solves the problem for V4 and V6 (missed that bit last 
time).
That remains to be seen. The list of issues that SCTP has or at least 
seems to have is long. To name a few:

- increased overhead compared to TCP
- requires significant changes from applications
- no backward compatibility of any kind
- source address selection problem isn't addressed fully, if at all
I can't be sure right now, but I also suspect SCTP could very well be 
vulnerable to some of the threats identified lately in the multi6 wg.




Re: Death of the Internet - details at 11

2004-01-28 Thread John Leslie
John C Klensin <[EMAIL PROTECTED]> wrote:
> --On Wednesday, 28 January, 2004 07:36 +0900 Dave Crocker 
> <[EMAIL PROTECTED]> wrote:
> 
>> In other words, when there is a serious solution to
>> multihoming -- ie, being able to preserve a connection when
>> using more than one IP Address -- it will likely work for IPv4.
> 
> Actually, that definition changes the problem into a much harder 
> one,

   Preserving a connection when using more than one IP address is
not _necessarily_ a much harder problem -- especially if we
stipulate that tunneling is a legitimate middleware operation.

> The reality is that there is very little that we do on the Internet 
> today that require connection persistence when a link goes bad... 

   But we certainly _should_ be doing things that would greatly
benefit from connection persistence when a link goes bad.

> It might be claimed that our applications, and our human work
> habits, are designed to work at least moderately well when running
> over a TCP that is vulnerable to dropped physical connections.

   Alternatively, one might claim our work habits have "evolved"
to work moderately well...

> Would it be good to have a TCP, or TCP-equivalent, that did not 
> have that vulnerability, i.e., "could preserve a connection when 
> using more than one address"?  Sure, if the cost was not too 
> high on normal operations and we could actually get it.  But the 
> goal has proven elusive for the last 30-odd years...

   Might we do well to consider _why_ this is so?

> By contrast, the problem that I find of greatest concern is the 
> one in which, if I'm communicating with you, and one or the 
> other of us has multiple connections available, and the 
> connection path between us (using one address each) disappears 
> or goes bad, we can efficiently switch to a different 
> combination... even if all open TCP connections drop and have to 
> be reestablished in the interim.

   If I understand, John is looking for applications-level link
redundancy, which strikes me as unlikely to be easy to deploy.

> For _that_ problem, we had a reasonably effective IPv4 solution
> (at least for those who could afford it) for many years -- all
> one needed was multiple interfaces on the relevant equipment
> (the hosts early on and the router later) with, of course, a
> different connection and address on each interface. 

   Aren't we now talking what John said "changes the problem into
a much harder one" -- namely preserving connection when using
more than one IP address?

> But, when we imposed CIDR, and the address-allocation restrictions
> that went with it, it became impossible for someone to get the
> PI space that is required to operate a LAN behind such an
> arrangement (at least without having a NAT associated with the
> relevant router) unless one was running a _very_ large network.

   A /20 is _not_ "very large" -- just impractical to justify for
small-scale projects. (Thus, the allocation policies prevented
much of the small-scale experimentation which normally comes in
the early stages of design.)

> Now, I'll stipulate this is a routing problem as much, or more, 
> than it is an address-availability problem. 

   I'm not sure I agree. It's true that address-availability
policies were driven by routing problems. One _could_ consider
the route-filtering policies to be "a routing problem", but this
doesn't strike me as useful.

> And I'll also agree that there appears to be little evidence
> that IPv6 is significantly more routing-friendly than IPv4

   Agreed.

> and hence, that any real routing-based solutions that help the
> one will help the other.  But,
>   (i) if any of the options turn out to require an
>   approach similar to the one that continue to work for
>   big enterprises with PI space in IPv4, then we are going
>   to need (lots) more address space.  And
>  (ii) If any of the "multiple addresses per host" or
>   "tricks with prefixes" approaches are actually workable
>   and can be adequately defined and implemented at scale
>   --and there is some evidence that variations of them can
>   be, at least for leaf networks-- then they really do
>   depend on structure and facilities that appear to me to
>   are available in IPv6 and not in IPv4.

   This gives the impression of overstating your case. Indeed, there
_will_ be solutions which require lots more IPv4 space; and there
will be solutions which depend on structure of IPv6. But these will
have to compete with other solutions which need neither.

> So, for the problem I was referring to (but perhaps not for your 
> much more general formulation), I stand by my comment and 
> analysis.

   I won't attempt to restate your analysis. But I think your
analysis is too narrow. You quite ignore the tricks which many
smaller ISPs can perform -- especially when they cooperate.

   We have a genuine problem in that we'd like something immediately
scalable -- and only larger ISPs can immediate

Re: Death of the Internet - details at 11

2004-01-28 Thread Jeffrey I. Schiller
Applications have to deal with more than just losing a
connection. They have to deal with the loss of state that occurs when
you lose a connection. In general you really don't know which
transactions finished and which ones didn't, so you have to re-sync
your state in some way.

I believe that no matter what we do to provide connection persistence,
connections will still break, whether because of a total loss of path
for a long period of time or, more likely, because the path is lost
for longer than the human timeout (i.e., the human starts typing ^C
[unix] or clicking CANCEL [gui] or whatever).

As long as connections break, applications will have to deal with
getting back to work when a connection can be made again. Since I
don't believe you will ever get "perfect" connection persistence (even
if you can do better than TCP does today), applications will always
have to carry that re-syncing baggage.

 -Jeff



Re: Death of the Internet - details at 11

2004-01-28 Thread Randall R. Stewart (home)
John C Klensin wrote:

Dave,

Just to pick a small nit or three...

--On Wednesday, 28 January, 2004 07:36 +0900 Dave Crocker 
<[EMAIL PROTECTED]> wrote:

John,

JCK> but the only realistic solution for someone who needs high
JCK> reliability in that environment is multihoming, and there seems
JCK> to be no hope for multihoming of small-scale networks with IPv4.
There is not much of a solution, today, for either IPv4 _or_
IPv6.
However there are nearly 10 different proposals under
consideration in the IETF, to deal with multihoming.  Few are
restricted to IPv4.


or to IPv6, which I assume is what you intended.

In other words, when there is a serious solution to
multihoming -- ie, being able to preserve a connection when
using more than one IP Address -- it will likely work for IPv4. 

Yes.. SCTP solves the problem for V4 and V6 (missed that bit last time).


Actually, that definition changes the problem into a much harder one, 
and one that I think is unnecessary for the problem I was discussing 
--unnecessary 99% of the time, if not always.  The reality is that 
there is very little that we do on the Internet today that requires 
connection persistence when a link goes bad (or when "using more than 
one IP address").  If a connection goes down, email retries, file 
transfer connections are reconnected and the file (or the balance of 
the file if checkpointing is in use) is transferred again, URLs are 
refreshed, telnet and tunnel connections are recreated over other 
paths, and so on.  It might be claimed that our applications, and our 
human work habits, are designed to work at least moderately well when 
running over a TCP that is vulnerable to dropped physical connections. 
Yes... it is true we have trained folks to hit the reload button... there
is no doubt about that :->



Would it be good to have a TCP, or TCP-equivalent, that did not have 
that vulnerability, i.e., "could preserve a connection when using more 
than one address"?  Sure, if the cost was not too high on normal 
operations and we could actually get it.  But the goal has proven 
elusive for the last 30-odd years 
John, please go look at RFC 2960... it does this... it is TCP-equivalent...
and yes, the cost is not too high.

It is available in Linux 2.6, all of the BSDs via KAME, Solaris (via a
package, I am told), HP, and you can purchase it for many other platforms...

You can find information on this at:

http://www.sctp.org/implementations.html

It always seems to me to be typical of us engineers... we solve the problem
in one place and then rush out and try to solve it in yet another place...
when all we have to do is use what is already defined... I guess it is the
ole NIH syndrome... :-<

-- at least in the absence of running with full IP Mobility machinery 
all of the time, which involves its own issues -- and, frankly, I'm 
not holding my breath.

By contrast, the problem that I find of greatest concern is the one in 
which, if I'm communicating with you, and one or the other of us has 
multiple connections available, and the connection path between us 
(using one address each) disappears or goes bad, we can efficiently 
switch to a different combination... even if all open TCP connections 
drop and have to be reestablished in the interim. For _that_ problem, 
we had a reasonably effective IPv4 solution (at least for those who 
could afford it) for many years -- all one needed was multiple 
interfaces on the relevant equipment (the hosts early on and the 
router later) with, of course, a different connection and address on 
each interface.  But, when we imposed CIDR, and the address-allocation 
restrictions that went with it, it became impossible for someone to 
get the PI space that is required to operate a LAN behind such an 
arrangement (at least without having a NAT associated with the 
relevant router) unless one was running a _very_ large network. 
The web server that I mention above has two addresses that are on the
Big-I. And if you connect via SCTP to the Apache engine... guess what...
you will use both of them... and no, I don't need a large block of LAN
allocation... SCTP takes care of all of this for me...

Will it work (multi-homed) behind a NAT? Nope... you are reduced to
singly homed... but that is the price you pay for NAT...
I suppose one could define an inter-NAT protocol so one could support
multi-homing, but that is a swamp I would not care to wade into...

R



Now, I'll stipulate this is a routing problem as much, or more, than 
it is an address-availability problem.  And I'll also agree that there 
appears to be little evidence that IPv6 is significantly more 
routing-friendly than IPv4 and hence, that any real routing-based 
solutions that help the one will help the other.  But,

(i) if any of the options turn out to require an
approach similar to the one that continue to work for
big enterprises with PI space in IPv4, then we are going
to need (lots) more address space.  And

(ii) If any of the "multi

Re: visa requirements (US citizens)

2004-01-28 Thread Eric A. Hall

On 1/28/2004 12:46 PM, Kevin C. Almeroth wrote:

> Seems to me to be pretty clear that a visa is not needed.

These are the future possibilities:

 1) You got the visa, the guard on duty that day deems it unnecessary,
and you curse the effort you spent to get it.

 2) You don't get the visa, the trainee on duty that day deems it is
necessary, and you curse the ~30 hour round-trip flight, the
money, and the effort you spent avoiding the visa fetch.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Death of the Internet - details at 11

2004-01-28 Thread Randall R. Stewart (home)
Dave:

Comments in-line below..

Dave Crocker wrote:

John,

JCK> but the only realistic solution for someone who needs high
JCK> reliability in that environment is multihoming, and there seems 
JCK> to be no hope for multihoming of small-scale networks with IPv4.

There is not much of a solution, today, for either IPv4 _or_ IPv6.

This is just plain silly.. there is a perfectly good solution for
multi-homing.. it's called SCTP.. it has been around since October 2000
as an RFC, and the SS7-over-IP folks are using it now.  Yes, it works
for both IPv4 and IPv6, and it will even set up an association
(connection, for you TCPites) that includes BOTH IPv4 and IPv6 addresses.
It fails over and keeps on working when an interface goes down...

However there are nearly 10 different proposals under consideration in
the IETF, to deal with multihoming.  Few are restricted to IPv4.
I have always wondered why we are spinning our wheels in
this multi-homing group when the solution already exists.
Lode Coene has brought this up in the Multi6 working
group, but he seems to be ignored.
You don't need to do anything.. it's already there.. just use it.

R

In other words, when there is a serious solution to multihoming -- ie,
being able to preserve a connection when using more than one IP
Address -- it will likely work for IPv4.
Most of these proposals are quite new.  No more than a year old and
many less than 6 months.
This does not speak well for anything happening immediately, of
course.  However quite a number of the proposals do not require any
significant infrastructure change.  This bodes well for rapid
deployment, once they make it through the standards process.
On the other hand, getting the IETF to produce standards track
specifications out of this large pack of candidates could take another
10 years...
d/
--
Dave Crocker 
Brandenburg InternetWorking 
Sunnyvale, CA  USA 




 



--
Randall R. Stewart
815-477-2127 (office)
815-342-5222 (cell phone)
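The failover Randall describes can be approximated without SCTP: an application that knows several addresses for its peer can step through them until one answers, which is roughly what an SCTP association does for you automatically at the transport layer. A minimal Python sketch of that idea, using two localhost ports to stand in for two interfaces (ports and helper name are illustrative, not part of any real API):

```python
import socket

def connect_first_available(addresses, timeout=1.0):
    """Try each (host, port) in order and return a socket to the first
    one that answers -- a crude application-layer stand-in for the
    per-association failover SCTP does at the transport layer."""
    last_error = None
    for host, port in addresses:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc   # this "path" is down; try the next address
    raise last_error

# Demo: one dead address and one live listener, both on localhost.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # kernel picks a free port
listener.listen(1)
live_port = listener.getsockname()[1]

tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
dead_port = tmp.getsockname()[1]
tmp.close()                            # nothing listens here any more

conn = connect_first_available(
    [("127.0.0.1", dead_port), ("127.0.0.1", live_port)]
)
peer_port = conn.getpeername()[1]      # we ended up on the live address
conn.close()
listener.close()
print(peer_port == live_port)
```

The difference, of course, is that this only helps at connection setup; SCTP's multi-homing also survives an address going dead mid-association, which no amount of application-layer retry over plain TCP gives you for free.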




Re: visa requirements (US citizens)

2004-01-28 Thread Ken Hornstein
>>"# You may enter Korea without a visa for a stay up to 30 days
>>or less for tourism, visiting, or transit to another country when
>>carrying a valid US passport."
>>
>>Seems to me pretty clear that a visa is not needed.
>
>I am not a lawyer, but I don't think attending a professional meeting 
>is either "tourism", "visiting", or "transit to another country". 
>YMMV.

Man, where were you guys the first time this was discussed here? :-)

So, my initial reading of the embassy web page was in line with Paul's;
it certainly seemed to me that IETF attendees would need a visa.  When
I mentioned this to the list, a bunch of people chimed up and said, "What
are you talking about, I've been there a bazillion times for meetings
and never needed a visa".

Sam Hartman and I called our respective consulates (Boston and Washington,
DC).  Sam was told unambiguously that a visa was not required.
I spoke to a seemingly less knowledgeable person, who was not completely
sure, but seemed to indicate that since IETF was non-profit, a visa was not
required.  Someone else on this list (forgive me, I don't remember your
name), had contact with the Korean Ambassador to the US, and forwarded an
email from either him or a representative who indicated that a visa is
not required.

However, Steve Bellovin has an excellent point; these reprisals by
other countries in response to our new immigration policies are
documented, and while I haven't seen any specific examples of ones from
Korea, it's certainly a possibility.  I'd hate to be the one who gets
the "rubber glove" treatment from an immigration official who has a beef
with the US.

My feeling regarding the whole thing is:

- Without a visa, you're probably okay.
- With a visa, you're almost certainly okay.

On a side note ... has anyone else had problems getting to the web site
of the host?  http://www.tta.or.kr/ietf59/index.htm has failed every time
I've tried to connect to it in the past few weeks.

--Ken



Re: Death of the Internet - details at 11

2004-01-28 Thread USPhoenix


Amen to those words
 
Wayne


Re: Death of the Internet - details at 11

2004-01-28 Thread John C Klensin
Pete,

I think the _attempt_ and _effort_ to get a solution to the 
persistent connection problem is entirely worthwhile, and I did 
not mean to suggest otherwise.  But I think that ignoring or 
delaying an easier, and still important, problem while we work 
the persistent connection one borders on irresponsible.  And 
that distinction is the only one I was attempting to make.

We may still disagree, of course.

   john

--On Wednesday, 28 January, 2004 13:09 -0600 Pete Resnick 
<[EMAIL PROTECTED]> wrote:

On 1/28/04 at 12:39 PM -0500, John C Klensin wrote:

The reality is that there is very little that we do on the
Internet today that requires connection persistence when a
link goes bad (or when "using more than one IP address").
If a connection goes down, email retries, file transfer
connections are reconnected and the file (or the balance of
the file if checkpointing is in use) is transferred again,
URLs are refreshed, telnet and tunnel connections are
recreated over other paths, and so on.  It might be claimed
that our applications, and our human work habits, are
designed to work at least moderately well when running over
a TCP that is vulnerable to dropped physical connections.

Would it be good to have a TCP, or TCP-equivalent, that did
not have that vulnerability, i.e., "could preserve a
connection when using more than one address"?  Sure, if the
cost was not too high on normal operations and we could
actually get it.  But the goal has proven elusive for the
last 30-odd years -- at least in the absence of running with
full IP Mobility machinery all of the time, which involves
its own issues -- and, frankly, I'm not holding my breath.

I am rather ambivalent about this issue (it seems like the
obvious thing to do, but also seems quite painful to
accomplish), but I do think there is something missing in this
response: "The cost" to which you refer needs to be weighed
against the cost of *not* doing so, and that cost seems to
have been mounting all along and shows no sign of slowing
down. The fact is that we have had to engineer all sorts of
application-layer solutions to this single problem and will
continue to do so for new application-layer protocols into the
future. Worse yet, some of those solutions continue to include
ridiculously high-cost solutions such as having to retransmit
entire files, and my guess is such costs (bandwidth and
otherwise) will continue in the future. I also think that the
argument ignores the possibility that if we do address the
"connection persistence" problem, we will be able to do many
things at the application layer that we have always avoided
doing because of the cost of having to engineer around it.
From the view up here in the nosebleed section, it seems like
it is worth at least the attempt to get a solution.







Re: Death of the Internet - details at 11

2004-01-28 Thread Dean Anderson
On Wed, 28 Jan 2004, Dave Crocker wrote:

> John,
> 
> JCK> but the only realistic solution for someone who needs high
> JCK> reliability in that environment is multihoming, and there seems 
> JCK> to be no hope for multihoming of small-scale networks with IPv4.
> 
> There is not much of a solution, today, for either IPv4 _or_ IPv6.

There is a good solution, and it works for IPv4. But I can't tell you what
it is, since I am making money with that knowledge, and I don't want to 
tell my competitors my business.

But I will give you a hint:  Over-consolidation doesn't scale well.  It
just creates exorbitant executive salaries, crazy-stupid business plans
funded by VC, and opportunities for undercooked books.  The dinosaurs are
dying, and they don't know why.  To continue the metaphor, I'm small and
furry, and I don't really know why they are dying either. But I know I'm
not in the same boat.

--Dean





Re: visa requirements (US citizens)

2004-01-28 Thread Ted Hardie
At 11:08 AM -0800 01/28/2004, Paul Hoffman / IMC wrote:
>At 10:46 AM -0800 1/28/04, Kevin C. Almeroth wrote:
>> >>The Korean embassy page that is linked to from the IETF meetings page
() makes it
pretty darn clear that US folks should get a visa. They do have a
link from that page saying how wonderful US-Korea relations are, of
course.
>>
>>What part are you reading that says this?
>>
>>"# You may enter Korea without a visa for a stay up to 30 days
>>or less for tourism, visiting, or transit to another country when
>>carrying a valid US passport."
>>
>>Seems to me pretty clear that a visa is not needed.
>
>I am not a lawyer, but I don't think attending a professional meeting is either 
>"tourism", "visiting", or "transit to another country". YMMV.

Nor am I a lawyer.  I have heard several times, though, that "visiting" includes 
meetings of
this type, since attendees are not being paid by a Korean company for work and are
not expected to sell goods or services to Korean companies while there.  I took away
from that the idea that "if your economic interaction with Korea while there is
only that of 'consumer', you are visiting".

Again, I am not a lawyer, and I suggest only that there may be a different view.
regards,
Ted Hardie



Re: visa requirements (US citizens)

2004-01-28 Thread David Morris
Furthermore, how often have you found that web content hasn't been kept
current?

And exactly who will you appeal to if you arrive in Korea without a visa
and discover that one is required?

If I were investing my funds in travel to Korea, I'd make sure that I had
the proper documents by dealing DIRECTLY with the authorized
representatives of the government in question.

On Wed, 28 Jan 2004, Paul Hoffman / IMC wrote:

> At 10:46 AM -0800 1/28/04, Kevin C. Almeroth wrote:
> >  >>The Korean embassy page that is linked to from the IETF meetings page
> >>>() makes it
> >>>pretty darn clear that US folks should get a visa. They do have a
> >>>link from that page saying how wonderful US-Korea relations are, of
> >>>course.
> >
> >What part are you reading that says this?
> >
> >"# You may enter Korea without a visa for a stay up to 30 days
> >or less for tourism, visiting, or transit to another country when
> >carrying a valid US passport."
> >
> >Seems to me pretty clear that a visa is not needed.
>
> I am not a lawyer, but I don't think attending a professional meeting
> is either "tourism", "visiting", or "transit to another country".
> YMMV.
>
> --Paul Hoffman, Director
> --Internet Mail Consortium
>
> ___
> This message was passed through [EMAIL PROTECTED], which is a sublist of [EMAIL 
> PROTECTED] Not all messages are passed. Decisions on what to pass are made solely by 
> IETF_CENSORED ML Administrator ([EMAIL PROTECTED]).
>




Re: packets of multiple users sent over the same TCP/IP session

2004-01-28 Thread Michael Richardson
-BEGIN PGP SIGNED MESSAGE-


> "Haim" == Haim Rochberger <[EMAIL PROTECTED]> writes:
Haim> I am looking for any protocol or type of protocol/application that
Haim> runs over
Haim> TCP/IP, where packets of the same session "belong" (i.e. are either
Haim> destined to or sourced by) more than one subscriber (meaning that
Haim> each packet belongs to one subscriber, but some packets belong
Haim> to subscriber A, some to B, etc.).

  Examples that come to mind:
   1) NFSv3 over TCP (one TCP connection per mount)

   2) SMTP (multiple users' transactions per session)

   3) persistent HTTP from a www proxy.
  (I'm not really sure if this is really done)

]   ON HUMILITY: to err is human. To moo, bovine.   |  firewalls  [
]   Michael Richardson,Xelerance Corporation, Ottawa, ON|net architect[
] [EMAIL PROTECTED]  http://www.sandelman.ottawa.on.ca/mcr/ |device driver[
] panic("Just another Debian GNU/Linux using, kernel hacking, security guy"); [
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.2 (GNU/Linux)
Comment: Finger me for keys

iQCVAwUBQBbluoqHRg3pndX9AQGt8AQApnjdup1s9FchlNqgvnnMIYNw/N8nHEOT
yck+H0et3JMGUK+pNhd+1j4MRQmizNyP2+4Q4+Od/DT/lJ9B42jaxy4aC6sgx9MI
BJH1ye53bQS8li9n8nwj2nw0Eg+c2UgqyfbheDAHSu33WbWS5eKYdnXlyGKo+NN/
uE8ccpJImKY=
=MUk9
-END PGP SIGNATURE-
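The common shape of Michael's examples -- one TCP connection carrying transactions for several subscribers, each message tagged with its owner -- can be sketched with a toy line protocol in Python (the `user:payload` framing here is invented for illustration; SMTP and proxied HTTP each achieve the same multiplexing with their own syntax):

```python
import socket
import threading

# Toy server: one TCP connection, many logical transactions, each line
# tagged with the subscriber it belongs to.
def serve(listener):
    conn, _ = listener.accept()
    with conn, conn.makefile("rw") as f:
        for line in f:                       # one line == one transaction
            user, payload = line.rstrip("\n").split(":", 1)
            f.write(f"{user}:ack {payload}\n")
            f.flush()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# One client socket carries transactions for two different subscribers.
client = socket.create_connection(listener.getsockname())
replies = []
with client, client.makefile("rw") as f:
    for user, msg in [("alice", "hello"), ("bob", "hi"), ("alice", "bye")]:
        f.write(f"{user}:{msg}\n")
        f.flush()
        replies.append(f.readline().rstrip("\n"))
listener.close()
print(replies)
```

Note that from the network's point of view this is a single flow between two endpoints; the per-subscriber attribution exists only in the application-layer framing, which is exactly what makes Haim's question hard to answer below layer 4.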



Re: Death of the Internet - details at 11

2004-01-28 Thread Pete Resnick
On 1/28/04 at 12:39 PM -0500, John C Klensin wrote:

The reality is that there is very little that we do on the Internet 
today that requires connection persistence when a link goes bad (or 
when "using more than one IP address").  If a connection goes down, 
email retries, file transfer connections are reconnected and the 
file (or the balance of the file if checkpointing is in use) is 
transferred again, URLs are refreshed, telnet and tunnel connections 
are recreated over other paths, and so on.  It might be claimed that 
our applications, and our human work habits, are designed to work at 
least moderately well when running over a TCP that is vulnerable to 
dropped physical connections.

Would it be good to have a TCP, or TCP-equivalent, that did not have 
that vulnerability, i.e., "could preserve a connection when using 
more than one address"?  Sure, if the cost was not too high on 
normal operations and we could actually get it.  But the goal has 
proven elusive for the last 30-odd years -- at least in the absence 
of running with full IP Mobility machinery all of the time, which 
involves its own issues -- and, frankly, I'm not holding my breath.

I am rather ambivalent about this issue (it seems like the obvious 
thing to do, but also seems quite painful to accomplish), but I do 
think there is something missing in this response: "The cost" to 
which you refer needs to be weighed against the cost of *not* doing 
so, and that cost seems to have been mounting all along and shows no 
sign of slowing down. The fact is that we have had to engineer all 
sorts of application-layer solutions to this single problem and will 
continue to do so for new application-layer protocols into the 
future. Worse yet, some of those solutions continue to include 
ridiculously high-cost solutions such as having to retransmit entire 
files, and my guess is such costs (bandwidth and otherwise) will 
continue in the future. I also think that the argument ignores the 
possibility that if we do address the "connection persistence" 
problem, we will be able to do many things at the application layer 
that we have always avoided doing because of the cost of having to 
engineer around it. From the view up here in the nosebleed section, 
it seems like it is worth at least the attempt to get a solution.
--
Pete Resnick 
QUALCOMM Incorporated
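The checkpointing John mentions and the "retransmit entire files" cost Pete objects to can be contrasted in a few lines of Python (the transfer framing below is purely illustrative; real protocols resume via mechanisms like FTP's REST command or HTTP range requests):

```python
import io

def transfer(src: bytes, dst: io.BytesIO, start: int, fail_after=None):
    """Copy src[start:] into dst byte by byte; optionally drop the
    "link" partway through.  Returns how many bytes were sent."""
    sent = 0
    for i in range(start, len(src)):
        if fail_after is not None and sent == fail_after:
            raise ConnectionError("link dropped")
        dst.write(src[i:i + 1])
        sent += 1
    return sent

src = b"A" * 1000            # the "file" being transferred
dst = io.BytesIO()

# First attempt dies after 600 bytes.
try:
    transfer(src, dst, start=0, fail_after=600)
except ConnectionError:
    pass
checkpoint = dst.tell()      # 600 bytes safely received before the drop

# Resume from the checkpoint instead of starting over: only the
# balance of the file crosses the wire on the retry.
resent = transfer(src, dst, start=checkpoint)
print(checkpoint, resent, dst.getvalue() == src)   # 600 400 True
```

Without the checkpoint, the retry would resend all 1000 bytes; with it, only the remaining 400. That per-application bookkeeping is precisely the engineering-around cost Pete argues keeps mounting for every new protocol.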



Re: visa requirements (US citizens)

2004-01-28 Thread Paul Hoffman / IMC
At 10:46 AM -0800 1/28/04, Kevin C. Almeroth wrote:
 >>The Korean embassy page that is linked to from the IETF meetings page
() makes it
pretty darn clear that US folks should get a visa. They do have a
link from that page saying how wonderful US-Korea relations are, of
course.
What part are you reading that says this?

"# You may enter Korea without a visa for a stay up to 30 days
or less for tourism, visiting, or transit to another country when
carrying a valid US passport."
Seems to me pretty clear that a visa is not needed.
I am not a lawyer, but I don't think attending a professional meeting 
is either "tourism", "visiting", or "transit to another country". 
YMMV.

--Paul Hoffman, Director
--Internet Mail Consortium


Re: visa requirements (US citizens)

2004-01-28 Thread Kevin C. Almeroth
>>The Korean embassy page that is linked to from the IETF meetings page 
>>() makes it 
>>pretty darn clear that US folks should get a visa. They do have a 
>>link from that page saying how wonderful US-Korea relations are, of 
>>course.

What part are you reading that says this?

"# You may enter Korea without a visa for a stay up to 30 days 
or less for tourism, visiting, or transit to another country when 
carrying a valid US passport."

Seems to me pretty clear that a visa is not needed.

-Kevin



Re: Death of the Internet - details at 11

2004-01-28 Thread John C Klensin
Dave,

Just to pick a small nit or three...

--On Wednesday, 28 January, 2004 07:36 +0900 Dave Crocker 
<[EMAIL PROTECTED]> wrote:

John,

JCK> but the only realistic solution for someone who needs high
JCK> reliability in that environment is multihoming, and there
seems  JCK> to be no hope for multihoming of small-scale
networks with IPv4.
There is not much of a solution, today, for either IPv4 _or_
IPv6.
However there are nearly 10 different proposals under
consideration in the IETF, to deal with multihoming.  Few are
restricted to IPv4.
or to IPv6, which I assume is what you intended.

In other words, when there is a serious solution to
multihoming -- ie, being able to preserve a connection when
using more than one IP Address -- it will likely work for IPv4.
Actually, that definition changes the problem into a much harder 
one, and one that I think is unnecessary for the problem I was 
discussing --unnecessary 99% of the time, if not always.  The 
reality is that there is very little that we do on the Internet 
today that requires connection persistence when a link goes bad 
(or when "using more than one IP address").  If a connection 
goes down, email retries, file transfer connections are 
reconnected and the file (or the balance of the file if 
checkpointing is in use) is transferred again, URLs are 
refreshed, telnet and tunnel connections are recreated over 
other paths, and so on.  It might be claimed that our 
applications, and our human work habits, are designed to work at 
least moderately well when running over a TCP that is vulnerable 
to dropped physical connections.

Would it be good to have a TCP, or TCP-equivalent, that did not 
have that vulnerability, i.e., "could preserve a connection when 
using more than one address"?  Sure, if the cost was not too 
high on normal operations and we could actually get it.  But the 
goal has proven elusive for the last 30-odd years -- at least in 
the absence of running with full IP Mobility machinery all of 
the time, which involves its own issues -- and, frankly, I'm not 
holding my breath.

By contrast, the problem that I find of greatest concern is the 
one in which, if I'm communicating with you, and one or the 
other of us has multiple connections available, and the 
connection path between us (using one address each) disappears 
or goes bad, we can efficiently switch to a different 
combination... even if all open TCP connections drop and have to 
be reestablished in the interim. For _that_ problem, we had a 
reasonably effective IPv4 solution (at least for those who could 
afford it) for many years -- all one needed was multiple 
interfaces on the relevant equipment (the hosts early on and the 
router later) with, of course, a different connection and 
address on each interface.  But, when we imposed CIDR, and the 
address-allocation restrictions that went with it, it became 
impossible for someone to get the PI space that is required to 
operate a LAN behind such an arrangement (at least without 
having a NAT associated with the relevant router) unless one was 
running a _very_ large network.

Now, I'll stipulate this is a routing problem as much, or more, 
than it is an address-availability problem.  And I'll also agree 
that there appears to be little evidence that IPv6 is 
significantly more routing-friendly than IPv4 and hence, that 
any real routing-based solutions that help the one will help the 
other.  But,

(i) if any of the options turn out to require an
approach similar to the one that continues to work for
big enterprises with PI space in IPv4, then we are going
to need (lots) more address space.  And

(ii) If any of the "multiple addresses per host" or
"tricks with prefixes" approaches are actually workable
and can be adequately defined and implemented at scale
--and there is some evidence that variations of them can
be, at least for leaf networks-- then they really do
depend on structure and facilities that appear to me to
be available in IPv6 and not in IPv4.
So, for the problem I was referring to (but perhaps not for your 
much more general formulation), I stand by my comment and 
analysis.

Most of these proposals are quite new.  No more than a year
old and many less than 6 months.
This does not speak well for anything happening immediately, of
course.  However quite a number of the proposals do not
require any significant infrastructure change.  This bodes
well for rapid deployment, once they make it through the
standards process.
On the other hand, getting the IETF to produce standards track
specifications out of this large pack of candidates could take
another 10 years...
Yes.  And it may speak to the IETF's sense of priorities that 
the efforts to which you refer are predominantly going into the 
much more complex and long-term problem, rather than the one 
that is presumably easier to solve and higher leverage.

john





Re: Death of the Internet - details at 11

2004-01-28 Thread Dave Crocker
John,

JCK> but the only realistic solution for someone who needs high
JCK> reliability in that environment is multihoming, and there seems 
JCK> to be no hope for multihoming of small-scale networks with IPv4.

There is not much of a solution, today, for either IPv4 _or_ IPv6.

However there are nearly 10 different proposals under consideration in
the IETF, to deal with multihoming.  Few are restricted to IPv4.

In other words, when there is a serious solution to multihoming -- ie,
being able to preserve a connection when using more than one IP
Address -- it will likely work for IPv4.

Most of these proposals are quite new.  No more than a year old and
many less than 6 months.

This does not speak well for anything happening immediately, of
course.  However quite a number of the proposals do not require any
significant infrastructure change.  This bodes well for rapid
deployment, once they make it through the standards process.

On the other hand, getting the IETF to produce standards track
specifications out of this large pack of candidates could take another
10 years...

d/
--
 Dave Crocker 
 Brandenburg InternetWorking 
 Sunnyvale, CA  USA