Re: Change the subject! RE: [IAOC] Re: IPv4 Outage Planned for IETF71 Plenary

2007-12-30 Thread Greg Skinner
Hallam-Baker, Phillip [EMAIL PROTECTED] wrote:

 It is a question of ambition. At sixteen I was interested in mastering
 the computer at its most fundamental level. I wrote arcade games in
 6502 and Z80 assembler.

 Today the idea of booting linux on a laptop would not make my top ten,
 hundred or thousand list of "must do before I die" experiences. In other
 words I have a life.

Actually, for someone with some Linux or Unix sysadmin experience, it
is not that difficult with Linux live distros.  Insert the CD- or
DVD-ROM with the distro in the CD/DVD drive and reboot. (It may be
necessary to tell the BIOS to boot from the CD/DVD drive; check the
laptop manufacturer's manual for the proper instructions.) Linux
should come up right away.  I used Ubuntu 6.06 and both IPv4 and IPv6
were enabled when the system came up.  Since I didn't have native IPv6
connectivity, I needed to build a tunnel.  The Linux+IPv6-HOWTO
documentation (http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO) was very
helpful.  I was able to confirm that I was using IPv6 by visiting
www.ripe.net at its IPv6 address and verifying my IPv6 address on
their page.
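
If you'd rather script that check than eyeball a web page, something
like the following rough Python sketch works (my own illustration, not
from the HOWTO; any host publishing a AAAA record would do in place of
www.ripe.net):

    import socket

    # Try to complete a TCP handshake over IPv6 to a dual-stacked site.
    # Success means the tunnel (or native connectivity) actually works.
    def has_ipv6_path(host="www.ripe.net", port=80):
        try:
            infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                       socket.SOCK_STREAM)
        except socket.gaierror:
            return False  # no AAAA record, or resolver lacks IPv6 support
        for family, socktype, proto, _canonname, sockaddr in infos:
            s = socket.socket(family, socktype, proto)
            s.settimeout(5)
            try:
                s.connect(sockaddr)
                return True  # handshake completed over IPv6
            except socket.error:
                continue
            finally:
                s.close()
        return False

    print("IPv6 path works" if has_ipv6_path() else "no IPv6 path")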

If you have little or no Linux or Unix sysadmin experience, I'd advise
going through the process before the meeting.  No changes will be made
to your hard disk (unless you make them).  If you get stuck, you can
always reboot and start over from scratch.

 [snip]

 Nor do I see any point in any test predicated on the expectation
 that large numbers of people will ever learn about IPv6
 administration.

 I know what microcode is, I have even written some.  I know the role
 of microcode in VLSI design. If I was teaching a comp sci course I
 would want the students to know all about microcode. That does not 
 mean that I want or need the microcode for my cpu before I program it.

 I don't know what lesson will be drawn here. The one I believe
 should be drawn is that we need to ask what the problem we are really
 trying to solve is.

Unfortunately, using a live Linux distro won't help people who depend
on software they normally use that isn't available on the distro CD
(and can't be downloaded).  However, it can be a useful exercise just
for getting people to use IPv6 who might not otherwise.

--gregbo



Re: Change the subject! RE: [IAOC] Re: IPv4 Outage Planned for IETF71 Plenary

2007-12-30 Thread Greg Skinner
I think you're reading more into my comments than was intended.  I'm
only addressing how people who might be interested in running IPv6 on
Linux during the experiment, but are concerned that the process may be
complicated or risky to their setups, can go about it.  It
may not work on all hardware; there is some information at the Linux
on Laptops site (http://www.linux-on-laptops.com/) regarding known
working configurations.

FWIW, I reread Russ Housley's comments on the outage, and understand
it to be an experiment that is voluntary (but encouraged).  Perhaps
this needs to be stated differently (e.g. "IPv6 experiment planned for
IETF71 Plenary").

--gregbo

On Sun, Dec 30, 2007 at 07:22:39PM -0800, Hallam-Baker, Phillip wrote:
 Why is it a useful exercise for me to try again an operating system I first 
 used a quarter century ago?
 
 I know operating system religious wars are fun but this is not an opportunity
 to make converts. Hard as you may find it to believe, it is possible to have
 used a large number of OSes and consider Unix one of the worst. Nor is success
 in itself proof of excellence, as SQL demonstrates.
 
 At this point I have 10 IP-addressable devices in the house; I cannot shift
 the printers or the Vonage adapter to IPv6. Nor have I any intention of
 upgrading the four boxes running XP; they will run as is till they fail and
 be replaced.
 
 One approach to deploying IPv6 is to try to persuade PHB to be more
 reasonable. Another is to accept that PHB is already far more reasonable than
 the vast majority of the billion-odd Internet users in the world, and
 that rather than trying to persuade people to be reasonable one at a time, we
 work out a deployment strategy that is likely to be acceptable to them.
 
 No question it is a lot more fun doing engineering without deadlines or user
 requirements. But that is not the real world we have to work in.
 
 
 
 Sent from my GoodLink Wireless Handheld (www.good.com)
 
  -Original Message-
 From: Greg Skinner [mailto:[EMAIL PROTECTED]
 Sent: Sunday, December 30, 2007 01:44 PM Pacific Standard Time
 To:   Hallam-Baker, Phillip
 Cc:   IETF Discussion
 Subject:  Re: Change the subject! RE: [IAOC] Re: IPv4 Outage Planned for  
 IETF71 Plenary
 
 Hallam-Baker, Phillip [EMAIL PROTECTED] wrote:
 
  It is a question of ambition. At sixteen I was interested in mastering
  the computer at its most fundamental level. I wrote arcade games in
  6502 and Z80 assembler.
 
  Today the idea of booting linux on a laptop would not make my top ten,
  hundred or thousand list of "must do before I die" experiences. In other
  words I have a life.
 
 Actually, for someone with some Linux or Unix sysadmin experience, it
 is not that difficult with Linux live distros.  Insert the CD- or
 DVD-ROM with the distro in the CD/DVD drive and reboot. (It may be
 necessary to tell the BIOS to boot from the CD/DVD drive; check the
 laptop manufacturer's manual for the proper instructions.) Linux
 should come up right away.  I used Ubuntu 6.06 and both IPv4 and IPv6
 were enabled when the system came up.  Since I didn't have native IPv6
 connectivity, I needed to build a tunnel.  The Linux+IPv6-HOWTO
 documentation (http://www.tldp.org/HOWTO/Linux+IPv6-HOWTO) was very
 helpful.  I was able to confirm that I was using IPv6 by visiting
 www.ripe.net at its IPv6 address and verifying my IPv6 address on
 their page.
 
 If you have little or no Linux or Unix sysadmin experience, I'd advise
 going through the process before the meeting.  No changes will be made
 to your hard disk (unless you make them).  If you get stuck, you can
 always reboot and start over from scratch.
 
  [snip]
 
  Nor do I see any point in any test predicated on the expectation
  that large numbers of people will ever learn about IPv6
  administration.
 
  I know what microcode is, I have even written some.  I know the role
  of microcode in VLSI design. If I was teaching a comp sci course I
  would want the students to know all about microcode. That does not 
  mean that I want or need the microcode for my cpu before I program it.
 
  I don't know what lesson will be drawn here. The one I believe
  should be drawn is that we need to ask what the problem we are really
  trying to solve is.
 
 Unfortunately, using a live Linux distro won't help people who depend
 on software they normally use that isn't available on the distro CD
 (and can't be downloaded).  However, it can be a useful exercise just
 for getting people to use IPv6 who might not otherwise.
 
 --gregbo



Re: FW: I-D Action:draft-narten-ipv6-statement-00.txt

2007-11-13 Thread Greg Skinner
On Mon, Nov 12, 2007 at 11:30:42AM -0500, Thomas Narten wrote:
 Hi.
 
 A little more background/context that got me here.
 
 My original thinking was to do something like what ICANN and the RIRs
 have done, to bring awareness to the IPv4 situation and call for IPv6
 deployment. I think the IETF can say a bit more about why, and the
 threats to the internet architecture. (This came out of some
 conversations I had at the recent ICANN meeting).
 
 Maybe this could be an IAB statement. Maybe an IETF statement. I'm not
 sure. But I think it would be useful to have an IETF voice also be
 heard in the call for deployment. Especially since there are still
 some going around saying "IPv6 is not needed", "IPv6 is still not
 done, so don't deploy yet", etc. Does the IETF think that deploying
 IPv6 is necessary and in the best interest of the Internet? If so,
 reiterating that would be good.

Clarification is important here.  By "done", do you mean the
specifications (mostly agreed), the implementations (a point of
some controversy), or the migration path (even more controversial)?

 I think though that it needs to be relatively short (which I probably
 have already blown), and high-level, since it's really aimed at a higher
 level than your typical engineer. But the overall message needs to be
 "think really hard about IPv4 exhaustion and what it means to your
 business, get serious about IPv6; it's done, so don't wait."

As I read the draft, a thought occurred to me that it's probably the
right level of detail for the Director of IT, Operations, or the
equivalent, but perhaps not for someone higher up in a management
chain.  Such persons may not be familiar with the CIDR terminology,
for example.

 To find a good balance between short and also include a bit more
 detail (especially on the implications of not seeing IPv6 deployed),
 perhaps a short executive summary (which I didn't get into -00)
 followed by a bit more detail (e.g., up to 3 pages or so) would do the
 trick.
 
 Thomas
 


Re: Representation of end-users at the IETF (Was: mini-cores (was Re: ULA-C)

2007-09-22 Thread Greg Skinner
On Wed, Sep 19, 2007 at 11:29:34PM +0100, Jeroen Massar wrote:
 Stephane Bortzmeyer wrote:
  On Wed, Sep 19, 2007 at 12:50:44AM +,
   Paul Vixie [EMAIL PROTECTED] wrote 
   a message of 32 lines which said:
  
  in the IETF, the naysayers pretty much kick the consenting adults'
  asses every day and twice on sunday.  and that's the real problem
  here, i finally think.
  
  Time to have a formal representation of end-users at the IETF?
 
 What is defined as an 'end-user'?
 
 You, me, the rest of the people, are all end-users IMHO.
 
 That we might have quite a bit more knowledge on how things work and
 that we might have some connections to people so that we can arrange
 things, is nothing of an advantage over people who are not technically
 inclined (or how do you put that nicely ;)
 
 The point is that those people don't know better and as such they also
 don't know what is possible and what they are missing.

Arguably, anyone can join the IETF and represent themselves.  However,
there is a steep learning curve to participating meaningfully,
especially for people without much (if any) technical background.

For example, I know of people who would like IP addresses to encode
physical locations such as the country and city, so they can use this
information to decide which ads to serve (or to block), or to enforce
DRM.  But if they come to the IETF lists and ask for this capability
(or why it can't be provided), at best they'll be told "that's not the
way things are done."  Instead, they go to companies that are willing to
sell them databases that presumably map IP addresses geographically to
a high degree of accuracy, at least to the country level.

 Eg, if you tell somebody "oh, but I have a /27 IPv4 and a /48 IPv6 at
 home and I can access all my computers from the Internet wherever I am",
 they will go "and? why would I need that?"  The typical layman
 end-user really couldn't care less, as long as their stuff works.
 
 The only people really noticing problems with this are hobbyists and
 most likely the gaming crowd trying to setup their own gameserver and
 finding out that they are stuck behind this thing called NAT.
 
 P2P people, thus quite a large group of people using the Internet today,
 have tools that do nice NAT tricks, so these people won't notice it.
 
 And for the rest of the population the Internet consists of http:// and
 https:// if they even recognize those two things, thus most likely only
 www and email, the latter likely only over a webinterface...

Actually, one could argue that this suggests that NAT is an
engineering success, even if it is architecturally flawed, because it
serves the needs of a majority of users, causes problems in only a few
cases, and isn't mandatory.  Users can get non-NAT access, depending
upon how much money and/or effort they're willing to expend. (Granted,
this doesn't take into account the arguments about how future
applications may be inhibited by NAT, or how certain security measures
are more difficult to enforce.)

 Which group do you want to 'involve' in the IETF and more-over, why?
 Last time I checked the IETF was doing protocols and not user interfaces.

I'd like to see the general level of user understanding of the
capabilities of Internet protocols raised.  However, I don't know how
this can be accomplished without a lot of effort on the users' parts
to come up to speed.

--gregbo



Re: ideas getting shot down

2007-09-20 Thread Greg Skinner
On Wed, Sep 19, 2007 at 12:08:38PM -0400, Keith Moore wrote:
 Paul Vixie wrote:
  yes, but do you think that was because that ietf was powerless to
  stop [NAT], or because that ietf was willing to let consenting
  adults try out new ideas?  i was there, and from what i saw, it was
  the former.

 IETF has very little power, if you can call it that.  IETF can try to
 suggest good ways of doing things quickly enough that the good ways get
 adopted before bad ways do, or it can recommend against bad ways of
 doing things.  The former is much more effective.  It pretty much failed
 to do either in the case of NAT.  I remember a lot of concern being
 expressed, but a strong reluctance to make any statement - perhaps due
 to lack of consensus about how bad NATs were and what, if anything,
 could be proposed as a better way.

FWIW, I think NAT would have happened, IETF or no.  There were people
who needed a solution, and had the money to pay for it, and there were
people who could provide the solution, and were willing to do the work.
It raises the question of whether there are circumstances where it's
reasonable to bend the end-to-end principle, such as when there is a
large user community that wants inexpensive Internet access (but is
willing to live without full IP access).

  the underlying problem was that people in the field didn't want universality
  among endpoints, either for security or policy reasons, and people in that
  ietf wanted universality among endpoints -- a single addressing system and
  a single connectivity realm.  that ietf said, "you don't really want that,
  you should use the internet as it was intended, and solve the problems
  you're having in some way that preserves universality of endpoints."  the
  field said, "you are completely out of your minds, we're going to ignore
  ietf now."  then later on, ietf said, "if you're going to do it, then we
  ought to help you with some standards support."

 That's not quite how I remember it from my POV.  Some people were very
 concerned about ambiguous addressing.  I don't think universal
 connectivity was as big a concern - it's not like IETF people expected
 everyone to run open networks.   But mostly there was a lot of unease
 and uncertainty about NATs.  Very little analysis was done.  And I don't
 think that NAPTs were initially seen as the normal case.

I remember such arguments.  I also remember an argument that NATs were
being marketed as security devices, when in fact they did not provide
the actual level of security implied.  RFC 3724 bears this out.

  which is why i'm proposing a standard of "demonstrable immediate harm"
  rather than the current system of "that's not how you should do it" or
  "that's not how i would do it."

 That's the wrong standard, it sets the bar way too low.  IETF shouldn't
 endorse anything unless it has justification to believe it is good; IETF
 should not discourage anything unless it has justification to believe it
 is bad.   And that justification should come from engineering analysis
 (or measurement, if it's feasible).  Sadly, a lot of people in IETF do
 not have engineering backgrounds and don't understand how to do such
 analysis.  This is something we need to change in our culture.

Based on some recent experiences, this type of analysis is not as
valued in the industry as it used to be.  It's much more valued to be
a crack programmer, someone who can rapidly build something that can
be quickly brought to market.  At least in the current economic
climate, I don't think there is much that can be done to change this.
Another issue is that the networking industry in general is losing
people to other disciplines, such as gaming, virtualization, and
Internet search, not to mention careers outside of the computer
industry.

--gregbo



Re: Call for action vs. lost opportunity (Was: Re: Renumbering)

2007-09-14 Thread Greg Skinner
On Fri, Sep 14, 2007 at 07:48:45AM -0400, Keith Moore wrote:
 [sorry, lost attribution here]
  TCP protects you from lots of stuff, but it doesn't really let you
  recover from the remote endpoint rebooting, for example... 
 well, duh.   if the endpoint fails then all of the application-level
 state goes away.  TCP can't be responsible for recovering from the loss
 of higher-level state.  but we're not talking about endpoint failures,
 we're talking about the failure of the network.  TCP is supposed to
 recover from transient network failures.  it wasn't designed to cope
 with endpoint address changes, of course, because the network as
 designed wasn't expected to fail in that way.

When I was first learning about networking back in the mid-1980s, I
worked on a project involving mobile hosts.  The hosts were permitted
to change their IP addresses, but TCP-level connectivity needed to
remain intact.  The loss of a route to some network (or host within
that network) might trigger an ICMP unreachable, but the applications
(e.g. telnet, ftp) needed to be rewritten not to close in such a
situation.

It seemed reasonable to treat something like a net or host unreachable
as a transient condition, and to allow the application to proceed as
if nothing serious had happened.  When routing connectivity could be
restored quickly, the maintained state at both ends of the TCP
connection would allow the application to proceed normally.  However,
this practice doesn't seem to have made it into the application-writing
community at large, because lots of applications fail for just this
reason.  I wonder whether writing a BCP about this even makes sense at
this point, because the application writers (or the authors of the
references the application writers use) may never see the draft, or
even be concerned that it's something they should check for.

  (And something that's common in today's IPv4 deployments: NAT
  timeouts. I got bitten by that in Chicago, I think they were only a
  few minutes in my hotel, drove me insane because anything other than
  HTTP didn't work for long.)
 given that NATs violate the most fundamental assumption behind IP (that
 an address means the same thing everywhere in the network), it's hardly
 surprising that they break TCP.

After installing a NAT firewall/router, I noticed my ssh connections
would drop when left idle for a while.  That never happened before --
I could go away from my machine for hours, and as long as the client
and server machines were up, with no network dynamics, everything
would work fine when I returned.  But is it TCP itself that's failing,
or ssh interpreting the timeout as a non-transient condition and
telling TCP to close?
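
One generic mitigation, sketched below in Python, is for the
application to turn on TCP keepalives so the NAT binding never sits
idle long enough to be expired.  (The TCP_KEEP* names are the Linux
option names, and the intervals are guesses; pick an idle time shorter
than the NAT's timer.)

    import socket

    # Send keepalive probes on an otherwise idle connection so the
    # NAT's translation entry is refreshed before its idle timer fires.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle secs before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # secs between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # failed probes before reset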

I think a reasonable compromise for application writers who are
concerned about allocating resources to connections that might really
need to close (e.g. because the remote end really did crash, or there
was a really long timeout) is to allow the user to specify the
behavior the application should take when a level 3 error condition
occurs.

--gregbo



Re: Renumbering

2007-09-13 Thread Greg Skinner
On Thu, Sep 13, 2007 at 07:43:38PM +0100, Tony Finch wrote:
 On Thu, 13 Sep 2007, David Conrad wrote:
 
  How do you renumber the IP address stored in the struct sockaddr_in in a
  long running critical application?
 
 Applications that don't respect DNS TTLs are broken for many reasons, not
 just network renumbering.
 
 Tony.

I know of one application that relied on long-lived DNS hostname-to-IP
mappings and ignored DNS TTLs.  Search engine crawlers cached the IP
addresses for pages that had been fetched, and used those addresses
even after the TTLs had expired.  This resulted in pages from whatever
content had since come to live at those addresses showing up
unexpectedly (and incorrectly) in search engine results.  I don't know
if this has been fixed, but it's an example of application usage that
bypassed IETF recommendations for a presumably good cause (performance
reasons).
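
A crawler could keep most of the performance win without going stale
by re-resolving once a cached entry ages out.  A rough sketch (the
300-second cap is my stand-in for the record's real TTL, which the
plain resolver API doesn't expose):

    import socket
    import time

    _cache = {}      # hostname -> (address, time it was cached)
    MAX_AGE = 300    # stand-in for the DNS record's TTL, in seconds

    # Cache lookups for speed, but re-resolve once an entry ages out,
    # so a renumbered site is picked up after at most MAX_AGE seconds.
    def resolve(host):
        entry = _cache.get(host)
        if entry is not None and time.time() - entry[1] <= MAX_AGE:
            return entry[0]
        addr = socket.getaddrinfo(host, None)[0][4][0]
        _cache[host] = (addr, time.time())
        return addr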

Another application with a similar reliance, and one that is seeing
some growth, is the use of IP addresses for geotargeting.  The
geotargeting provider attempts to determine the physical location
behind an IP address for various purposes, such as choosing which ads
to display.  I imagine people on this list can see the flaws of doing
this, but nevertheless it persists.  I don't know how the geotargeting
providers plan to handle IPv6, but it's another example of how people
develop applications in ways the IETF may not anticipate (because it
discourages such applications), and that make migration to IPv6
difficult because of the installed base that depends upon specific
uses of IPv4 addresses.
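
The technique itself boils down to a prefix lookup table, which is
exactly why renumbering quietly breaks it.  A toy Python sketch with
invented mappings (real providers sell databases of millions of
prefixes):

    import ipaddress

    # Invented prefix-to-country mappings; every entry silently goes
    # stale the moment its block is renumbered or reassigned.
    GEO = {
        ipaddress.ip_network("192.0.2.0/24"): "US",
        ipaddress.ip_network("198.51.100.0/24"): "DE",
    }

    def country_for(addr):
        ip = ipaddress.ip_address(addr)
        for net, country in GEO.items():
            if ip in net:
                return country
        return None  # not in the database

    print(country_for("192.0.2.17"))  # "US" -- until the block moves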

--gregbo



Business case for IPv6 (Was: Re: one example of an unintended consequence of changing the /48 boundary)

2007-08-28 Thread Greg Skinner
On Sun, Aug 26, 2007 at 12:20:01PM +0100, Michael Dillon wrote:
 In two or three years, IPv4 network growth will be severely limited. Any
 business whose revenue growth is linked to IP network growth must use
 IPv6 for this beyond two to three years from now. In order to
 successfully use IPv6 for the mission critical network growth that is
 the engine of business revenue, they need to have at least a year of
 trials and lab testing in advance. That means there is not much more
 than a year before such businesses will have missed the boat. Some may
 argue that a return three years from now on an investment made today is not
 short term, and that is true. However, if the investment is not made
 today, the platform for short term ROI will not exist in three years
 time.
 
 That does make a strong business case and some companies are busy
 working behind the scenes to prepare for the disruption caused by IPv4
 runout. For some, the disruptive event will be fatal and for others it
 will be very profitable. This message will soon reach the investment
 community so you will soon see investment analysts asking very tough
 IPv6 deployment questions, and rating stocks appropriately. That is
 definitely a short term ROI scenario for IPv6.

Hmmm ... I haven't heard much about IPv6 deployment from an investment
standpoint in the usual business news sources, even the tech business
news sources.  I'll grant that may change soon.
However, as has been noted, there are businesses, organizations,
etc. that have migrated to, or adopted, IPv6.  I hear that IPv6 has
gained a foothold in Asia, for example.  Perhaps the answer to all of
these questions regarding "pain points" is to collect and document
case studies on who made the migration or adoption, what problems were
encountered, etc.

 Think back to the days when the OSI protocols were expected to be the
 next big thing, replacing IPX, DECNET, Appletalk and NetBIOS. IP was for
 universities and labs. In telecoms, ISDN and ATM were the wave of the
 future. This is the way things were in 1993. Two years later, in 1995 we
 were experiencing exponential growth of the TCP/IP Internet. I believe
 that it was something like 1500% growth in that year and dozens of new
 books about the Internet came on the market joining the 3 books that
 were on the market in 1994.
 
 It is almost certain that IPv4 runout will drive a similar upsurge in
 IPv6, although not quite the same magnitude.

Hasn't there been exponential growth of the TCP/IP Internet all
along?  Also, with regard to the time period in question, wasn't most
of the growth fueled by (1) interest in the WWW, and (2) interest of
some of the online services at the time (AOL, CompuServe, etc.) to
provide access to the Internet, and vice versa?  I wasn't aware that
OSI was part of the discussion at this point.  A few years earlier,
there were mandates for US government networks to use OSI protocols
(GOSIP), but nothing ever came of that.

--gregbo



Re: e2e

2007-08-15 Thread Greg Skinner
On Wed, Aug 15, 2007 at 01:44:09PM -0700, Michael Thomas wrote:
 Keith Moore wrote:
  ...at the cost of dropping legitimate traffic.  the thing is, the set of
  valid senders for you and the set of valid senders for everyone at cisco
  is very different, and the latter set is much fuzzier.  and those
  reputation services won't take responsibility for the mail that you lose
  by trusting them, nor are they accountable to the senders either.
 
  this is  not a way to make the network more robust.

 Robust for what? Spammers? The simple fact of the matter is that the
 alternative is to just shut down port 25, given the growth in both volume
 and the complexity of filtering.  That ain't robust either. Dealing with false
 positives is the cost of doing business on the internet these days. Welcome
 to reality.

http://fm.vix.com/internet/security/superbugs.html

--gregbo




Re: History...?

2005-06-27 Thread Greg Skinner
On Mon, Jun 27, 2005 at 10:23:31AM -0700, Bob Braden wrote:
 
 I just came across a 1993 mailing list for the ietf.  Anyone care,
 before I delete it?

Is ftp://ftp.ietf.org/ietf-mail-archive/ietf considered to be the
definitive archive for the IETF discussion list?  According to the
names of the archive files, there is one for every month of the year
1993.  However, I don't know if every message is there.  (BTW, I think
it would be great if the missing archives from the start of the IETF
through 1992 could be located and made publicly available.)

--gregbo




Re: History...?

2005-06-27 Thread Greg Skinner
On Mon, Jun 27, 2005 at 11:35:24AM -0700, Bob Braden wrote:
 
 Since I have already received 6 requests for the 1993 IETF mailing
 list, I put it up on the ancient history page of the RFC Editor web
 site.

Oops ... didn't realize it was the distribution list, not the archive.
Since some of those addresses may be live, perhaps something should be done
to reduce the risk of spam.

--gregbo



Re: Fw: Impending publication: draft-iab-dns-assumptions-02.txt

2005-03-02 Thread Greg Skinner
On Wed, Mar 02, 2005 at 02:49:17PM +, Paul Vixie wrote:
  The IAB is ready to ask the RFC-Editor to publish
  
What's in a Name: False Assumptions about DNS Names
draft-iab-dns-assumptions-02
  
  as an Informational RFC.  [...]
 i think this document is just silly.  and highly subjective.  there is
 no way to edit it to correct its problems -- it should just quietly die.
 IAB should preserve its relevance and integrity by limiting its focus
 to objective technical matters (such as the excellent work on wildcards
 back when COM and NET each had wildcards), rather than fluff like this.

It has been my unfortunate experience that such documents are
necessary.  When such a document is published by a reputable source,
it saves others the trouble of having to explain repeatedly why
certain things do not work the way they might seem to, and why
assumptions must be questioned before they are acted upon.  For
example, I don't know how many times I gave essentially the same
lecture to people who wanted so very much to believe that domains
registered in .country were actually IN that country.

--gregbo



Re: The gaps that NAT is filling

2004-11-28 Thread Greg Skinner
On Tue, 23 Nov 2004 14:11:19 +0100, Jeroen Massar wrote:
 On Tue, 2004-11-23 at 07:03 -0500, Margaret Wasserman wrote:
 Without solutions to these four problems on the horizon, I can't
 voice any enthusiasm that the larger address space in IPv6 will
 eliminate NAT in home or enterprise networks.

 This really isn't a problem of the IETF. The problem is with the ISPs,
 who should charge for bandwidth usage and not for IPs.

 It is all a financial problem, people earn money this way, and there is
 not an easy way you can let them not make money. 
 Actually, can you blame them? I can't, unfortunately...

Arguably, if the ISPs handed out a (static) IP to every customer,
soon they'd be out of IPs, and thus unable to grow their businesses
from that perspective.

--gregbo



Re: I-D ACTION:draft-etal-ietf-analysis-00.txt

2002-03-31 Thread Greg Skinner

Bob Braden wrote:
 Mark Adam wrote:

 Ok... So I'm being a little idealistic, but this is different than just
 saying "Me too" to the "We ain't makin' widgets" responses. Optimally we
 should judge the work of a WG based on how well its output is accepted by
 the world at large, but that's a little late in the process.
 
 I think this leaves out a very, very important issue -- when you make
 the judgment.  A successful IETF output is one that continues to be
 relevant and useful for a long time, and that enables the continued 
 flexibility and adaptability of the Internet to the requirements 10 
 years down the road.  We got where we are today by taking a long
 view... we need to push back against those who would let short-term
 optimizations produce long-term ossification.

I agree; however, I wonder whether that is still possible now, in a time
when short-term imperatives dominate decision making (particularly where
networking technology is concerned).  The IETF was born in a time when
taking the long-term view was not only possible but encouraged.

--gregbo




Re: I-D ACTION:draft-etal-ietf-analysis-00.txt

2002-03-29 Thread Greg Skinner

Peter Deutsch [EMAIL PROTECTED] wrote:

 The implications for this seem clear enough. It seems to imply that the
 amount of traffic per protocol the activity goes on to generate is a
 reasonable milestone for any IETF activity. This doesn't mean the POISED
 list (or heck, even the IETF general list ;-) should be shut down (and
 putting aside the question of the increase in SMTP traffic they generate
 :-) The point is that we should recognize that such activities are
 clearly overhead, and we should all be doing our best to minimize such
 overhead whenever we can.

I don't feel comfortable with the notion that the work of a WG should be
judged according to adoption of its protocols, particularly in terms of
traffic generated.  All protocols are not equal; some have limited
utility by design, as they serve a limited community.  I also don't think
that time spent in pre-use is necessarily overhead; this diminishes the
value of producing clear documentation.  If it takes longer than some
might desire for RFCs to be published, but the overall clarity of the RFCs
is improved (regardless of their utility), I think that's time well spent.

 So, if people agree that traffic measurements have value as a metric,
 then presumably the first derivative of traffic volume over time is also
 a reasonable indicator of the future takeup rate for new protocols.
 Measuring some of the other things being discussed here (RFC counts,
 engineer-hours spent in meetings, messages to a mailing list, count of
 pastries consumed) would all seem to me to be measuring overhead
 activities, not core to the organization. This is not a bad thing to
 understand, but would not seem to be the most important metric in our
 arsenal.

But past performance is not always an indicator of future performance.
Take the first couple of years of HTTP traffic, for example.

 Instead of allowing us to conclude that we're doing fine and those pesky
 users are letting us down by not deploying our favorite new toy, it puts
 the responsibility for convincing users to adopt our work clearly on our
 own shoulders, which is where I think it belongs. Just as any good
 business person knows the difference between a good technology and a
 good product, we should acknowledge the difference between a good technology
 and a good solution. If people don't deploy it, we are the ones who failed.
 We didn't build it in such a way that it was a better alternative for the
 user and it wasn't used. Anything else is noise, not signal.

But protocol popularity is not something intrinsic to the protocol itself.
Application development plays a huge part.  Again, using HTTP as an example,
the immense popularity (and utility) of web browsers drove HTTP traffic.
Some protocols need a killer application before they see a lot of utility,
which will in turn create more interest/demand in improving the protocol --
and more WGs and/or WG activity. :)

 As a simple test, if we were to find that the percentage of traffic on
 the net using IETF developed/endorsed protocols turns out to be falling,
 it would imply that the organization's influence is waning, which would
 be something we might want to investigate.

This need not necessarily be considered a failure of the IETF.  It might
be an indication of the maturity of the IETF, in that other standards
bodies/companies/users can use IETF protocols/services/BCPs as a foundation
for whatever it is they're trying to do.

--gregbo




Re: What is at stake?

2002-02-04 Thread Greg Skinner

Ed Gerck [EMAIL PROTECTED] wrote:

 In this scenario, and with all due respect to everyone's opinions,
 policies that might have been justifiable some 10 or 15 years ago,
 such as laissez-faire interoperation, conformance verification and
 trust, cannot be justified by saying "the existing system is quite
 effective" or "in doing this for the last 10 years, I've yet to suffer a
 mishap because of this..."  What was, ain't any more.

 In addition, within the last ten years the Internet has changed radically
 from a centrally controlled network to a network of networks -- with no
 control point whatsoever.  There is, thus, further reason to doubt the
 assertion that what worked ten years ago will work today in the same
 way.

I'm coming in late to this thread, but it seems to me that the Internet
was not particularly centrally controlled about 10 years ago.  The ARPAnet
was retired; regional research networks were in full force; commercial
networks were starting to appear.  If you'd said 15 years ago, I'd be more
likely to agree.

However, I find it interesting that this thread seems to have grown out
of a complaint about MIME (non)interoperability.  It strikes me as ironic
that even back in the 1980s, there was a lot of noninteroperability, at
least among mail systems.  I'm sure many remember mail messages that would
bounce because they were sent to networks (note quotes) that didn't (or
did) treat % or @ or ! as special characters.  But leaving those types
of problems aside, there were still sizable pockets of noninteroperability.
(I remember being flamed at on header-people many years ago because I used
a feature of emacs that allowed me to edit mail headers in ways that were
non-RFC 822 compliant.)

Issues such as who got to attach to the Internet don't seem to be that
relevant here, because as far as I can recall, neither ARPA nor its
contract agencies were in the protocol standardization business.
Interoperability came about mostly as a desire on the part of implementors
to have implementations that would work together, rather than as a mandate
from on high.  (Also, a fair amount of peer pressure, in that as an
implementor, you didn't want to get a bad reputation for noninteroperable
software, if you subscribed to the IETF ethos of interoperability.)

Seems to me what's been overlooked in comparing now to back in the day
is that the notion of interoperability now comes primarily from the creators
of the most used software, which for the most part interoperates with
itself and not with other vendors' products (and very rarely with reference
implementations created from IETF specs).  Furthermore, the value of
interoperability with another vendor's products is weighed (by the
consumer) against the cost; another vendor may have a more interoperable
product, but may charge more for it (why not, as it takes more time to
develop).  What is the consumer's perception of paying more money for
a product that's, say, certified by the "IETF Underwriters Labs", if they
can get something that works nearly 100% of the time for all of their
needs, and costs less (and in some cases, is bundled with the rest of the
system)?

My guess is that at this point, certification of most software isn't
going to matter much to consumers unless the software costs less.
There are some exceptions -- consumers may pay more money for more secure
versions of software that are also interoperable.  But I don't think
distributing lists of nonconformant products is going to matter much at
this point.

--gregbo
gds at best.com




Re: Site hit rates and AOL

2001-07-22 Thread Greg Skinner

Anthony Atkielski [EMAIL PROTECTED] wrote:

 So online advertisers that are counting impressions and unique viewers are
 just making the numbers up?

Yes and no, depending on one's POV.  Some are using statistical techniques
to estimate the actual size of the population viewing an ad or a page based
on samples of user behavior taken from software embedded in browsers.  These
techniques are subjects of considerable debate in some circles, particularly
among people who feel they don't accurately track emerging trends or niche
populations.

--gregbo




Re: more on IPv6 address space exhaustion

2000-08-14 Thread Greg Skinner

 At 02:53 PM 8/11/00 -0700, Greg Skinner wrote:
  I have heard on some local (SF bay area) technology news reports that
  the Commission on Online Child Protection is looking at dividing the
  IPv6 address space into regions that can be classified according to
  their "safety" for child access.
 I wouldn't worry excessively about that. This is roughly comparable to the 
 argument that there should be a DNS TLD ".kids" in which folks who have a 
 non-porn web site can get a domain name. Best.com is now part of Verio, 
 which is in turn becoming part of NTT. All of these are non-porn companies. 
 Is that their most important aspect, one they are going to base their
 domain name on? nay, nay...

You are assuming that they will not acquire another domain name in one
of these protected zones (e.g. best.kids).  I don't think the idea is
all that farfetched, particularly if they can find some business
reason to do so.  If branding some of their content (or customers'
content) under .kids makes business sense, they'll do it.

 Addresses will be assigned by address registries to service providers, and 
 in turn to subscribers to service providers. The commission presumably has 
 some valid comment on what content should be accessible by children, but it 
 has no idea whether or how that relates to the business structure of the 
 Internet, and therefore to IPv6 addressing.

Leaving aside my opinions on how technically sound the idea is, there
is certainly precedent for restricting certain types of domain name or
IP address space.  Members of the commission are certainly aware of
this.  If they can get enough support for their proposals, and enough
of the Internet community goes along with them, they can pull it off,
in my opinion.

--gregbo




more on IPv6 address space exhaustion

2000-08-11 Thread Greg Skinner

Brian E Carpenter [EMAIL PROTECTED] wrote:

 If a routeable prefix was given to every human, using a predicted
 world population of 11 billion, we would consume about 0.004% of the
 total IPv6 address space.

 (The actual calculation is 11*10^9/2^48 since there are 48
 bits in an IPv6 routing prefix. Or
 11,000,000,000 / 281,474,976,710,656 = 0.0039% )
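
A quick sanity check of that arithmetic in Python:

    # 11 billion people, one /48 routing prefix each, as a fraction of
    # the 2^48 possible /48 prefixes in the full IPv6 address space.
    population = 11 * 10 ** 9
    prefixes = 2 ** 48
    print("%.4f%%" % (100.0 * population / prefixes))  # prints 0.0039%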

I have heard on some local (SF bay area) technology news reports that
the Commission on Online Child Protection is looking at dividing the
IPv6 address space into regions that can be classified according to
their "safety" for child access.

Depending on how this allocation is done (if it's done), couldn't this
mean we will still need NAT?

--gregbo




Re: Complaint to Dept of Commerce on abuse of users by ICANN

2000-07-31 Thread Greg Skinner

Lloyd Wood [EMAIL PROTECTED] wrote:

 William Allen Simpson wrote:

 The users of the Internet have access to several free browsers that
 support frames on a dozen platforms.  Folks that are unable to use
 the Internet are not an appropriate electorate.  Lazy kindergartners
 are not the target audience for ICANN membership.

 I do hope this isn't the official ICANN view. I imagine that
 a disability discrimination lawsuit would soon follow.

 how many text-to-speech audio browsers support frames well?

Support for the disabled does seem to be a concern in some quarters;
for example, see

http://dir.yahoo.com/Computers_and_Internet/Software/Internet/World_Wide_Web/Browsers/Lynx/

The secure registration page requires https, which isn't available in lynx
as far as I know.

I am wondering if it might make sense going forward to allow an email
submission.  Email is for the most part a queued delivery mechanism, so it
is not necessary for the user to resubmit an HTML form if the server is
busy.

--gregbo




Re: draft-ietf-nat-protocol-complications-02.txt

2000-07-20 Thread Greg Skinner

Keith Moore [EMAIL PROTECTED] wrote:

 the reason I say that your statement is content-free is that it offers
 no specific criticism of IETF that can be used in a constructive fashion.

With respect to this particular thread, the only criticism I'd make is that
I don't see how the draft in question will alter the business practices of
AOL or any other large Internet access provider that does not provide
full Internet service.  I think the draft is useful for protocol
developers who may require interoperability across NAT boundaries, and
network managers who may need to explain why certain architectures may
cause certain protocols to break.  Beyond that, I don't see that it will
cause any significant change in the business practices of companies who
have decided (for whatever reasons) that it is not necessary to give any
(or all) of their customers full Internet service.

The IETF might perhaps take an advocacy position for traditional Internet
service.  An RFC on the order of "Full Internet Access is Good" might
sway a few people who are unaware of the wealth of services a full
provider offers.  On the other hand, a provider that actually offers
such services is much more likely (imho) to have success among a
potential customer base for them, and arguably has more resources to do so.
By this I mean the money for a PR campaign, advertising, etc.

 I find that the work of IETF varies in quality - much of it is quite
 good, some of it mediocre, a small fraction is highly dubious.
 Most IETF WGs I've worked with do not operate with an attitude like
 "people will have to do what we say", but rather "how do we solve
 this problem".  Most of them seem to understand that they have not
 only to solve the problem, but also to make the solution technically
 sound, attractive to those who would use it, and easy to deploy.

OK, in this case, what is the problem that needs to be solved, from the
standpoint of AOL?  Their customers, for the most part, are either unaware
that there is a problem, or the problem does not currently affect them.  If
enough of their customers feel it is a problem, no doubt AOL will change
their business practices (because if they fail to do so they will lose
business to access providers who will solve their problems).

So, what is the problem we are trying to solve here, and is the IETF the
organization that can provide the most effective solution?

--gregbo




Re: draft-ietf-nat-protocol-complications-02.txt

2000-07-17 Thread Greg Skinner

Masataka:
   
 If IETF makes it clear that AOL is not an ISP, it will commercially
 motivate AOL to be an ISP.

Keith:

 probably not.  folks who subscribe to AOL aren't likely to be
 reading IETF documents.

 face it, it's not the superior quality of AOL's service that keeps
 AOLers from moving - it's their susceptibility to marketing BS and
 their addiction to chat rooms.  it's hard to help those people.

Assuming they need help.  The impression I have gotten from people
who regularly use AOL is that they are generally satisfied with the
nature of the service (as opposed to the quality of the service).
As they discover more about the Internet, they may or may not switch
to "real" ISPs, depending on whether they have needs for what "real"
ISPs provide.

--gregbo




Re: draft-ietf-nat-protocol-complications-02.txt

2000-07-17 Thread Greg Skinner

Masataka Ohta wrote:

 If IETF makes it clear that AOL is not an ISP, it will commercially
 motivate AOL to be an ISP.

Why?  Certainly, they are aware that they are not an ISP by your
definition.  It hasn't changed their business practices.  Why would
an IETF RFC change their business practices?  The business practices
of AOL are determined, for the most part, by what Wall Street and
their customers think is important, not what the IETF thinks.  Most
of their customers are unlikely to read such an RFC anyway.

--gregbo




Re: draft-ietf-nat-protocol-complications-02.txt

2000-07-11 Thread Greg Skinner

Jon:

 personal comment
 Other classes of organisation may simply be providing a subset of
 internet services - I don't see a market or technical case for these
 and in fact would encourage regulatory bodies to see if these types of
 organisations are trying to achieve lock out or are engaged in
 other types of monopolistic or anti-competitive behaviour. :-)

If I'm understanding you correctly, there is clearly a market for such
organizations, otherwise they would not exist.  Whether or not there
is technical justification for what they do is a matter of opinion.
For reasons that have been beaten to death here and elsewhere, they
provide some function that is not met with the existing IPv4 service.

I could make the argument that they provide Internet access, in the
sense that one can use these providers to gain access to a subset of
content and services that is "traditionally" called Internet service.
I would support them being classified as Internet Access Providers
(IAPs).  In some circles, that's what they're called.

Masataka:

 I just want to make it illegal for these types of organisations call
 their service "Internet" or "internet".

 It's something like "Olympic".

How would you go about doing that?  What judicial organization is likely
to make an issue of this, in light of all the other (arguably more serious)
issues on their plates?

--gregbo




Re: HTML email

2000-05-16 Thread Greg Skinner

"Theodore Y. Ts'o" [EMAIL PROTECTED] wrote:

 I wonder how many people are still using plain-text, non-HTML enabled
 mail readers?  It still happens on some mailing list, where someone will
 send a base-64 encoded html'ified message (usually using MS Outlook),
 and someone will send back "try again in English; I don't read that MIME
 crap."

I still use plaintext mail readers such as elm, pine, even /usr/ucb/Mail. :)
I prefer to save the attachment off and use a separate program to read it
later, rather than to launch it from the mail program.

from the old school,
--gregbo




Re: IPv6: Past mistakes repeated?

2000-05-09 Thread Greg Skinner

"J. Noel Chiappa" [EMAIL PROTECTED] wrote:

 From: Greg Skinner [EMAIL PROTECTED]

 There was a similar discussion here about five years ago where some
 people proposed market models for address allocation and routing.
 Unfortunately, it's not in the archives.

 I think it was on CIDRD, actually, no?

I don't think so.  I vaguely recall that at some point in the discussion
it was also suggested that IP addresses be maintained as trusteeships.
At any rate, thanks to all for the links.

--gregbo




Re: IPv6: Past mistakes repeated?

2000-05-08 Thread Greg Skinner

"David R. Conrad" [EMAIL PROTECTED] wrote:

 Ah, nostalgia.  It's so nice to revisit old "discussions"...

There was a similar discussion here about five years ago where some people
proposed market models for address allocation and routing.  Unfortunately,
it's not in the archives.  If anyone has this discussion archived, could
they please point me to it?  Thanks.

--gregbo




Re: PIARA (IP address space economics)

2000-05-08 Thread Greg Skinner

[EMAIL PROTECTED] (Sean Doran) wrote:

 If steps are taken to avoid the development of a massive black
 aftermarket for IPv4 addresses overallocated by IANA et al., by providing
 the mechanisms of a "white market" -- notably a public registry of
 IP address title, with an exclusive but transferable right to
 transfer title to another party --  then I would object much less
 strenuously to your draft, since it is fundamentally PIARA, but
 with a rather odd auctioning system for the remaining not-yet-allocated
 IPv4 address space.

[...]

 How do we get it adopted quickly, and get the IANA, APNIC, ARIN and RIPE
 to IMMEDIATELY cease offering IPv4 address space to people who do
 not FULLY comply with the requirements in your More Restricted Assignment
 Plan, and the various RFCs and standards-track documents it rests upon?

Is this an appropriate discussion for ICANN's ASO policy mailing list?
(Not that I mind reading it here.)

--gregbo




Re: VIRUS WARNING

2000-05-07 Thread Greg Skinner

Keith Moore [EMAIL PROTECTED] wrote:

 but sooner or later folks are going to be held liable for poor engineering
 or poor implementation of networking software, just like folks today can be
 held liable for poor engineering or implementation of bridges or buildings.

I don't see how, as long as the software manufacturers ship the software
with legal disclaimers, e.g. "We are not responsible for damages ..."
Also, bridges and buildings are built by licensed professionals, for the
most part.  Comparatively speaking, very few software professionals are
licensed in this way.  They do accept responsibility for damages; said
responsibility is factored into the cost of the bridge or building.
[Generalization] Much software is cheap and sold in bulk as a commodity.
If for some reason software became significantly more expensive that would
limit its spread and growth.  We would no longer have the thriving
industry we have now.

--gregbo




RE: IPv6: Past mistakes repeated?

2000-05-07 Thread Greg Skinner

Mathis Jim-AJM005 [EMAIL PROTECTED] wrote:

 We need to move forward with IPv6 both by deploying it in
 the "core" and setting a time-frame after which non-IPv4
 compatible addresses will be assigned.  Unless there is a
 clear reason to move, no one wants to change software just
 to change.  Once IPv6 is in the major backhaul carriers, ISPs
 can roll out improved services based on IPv6, which will be
 the real reason end-users upgrade.  Seems like a real
 leadership vacuum here...

Hmmm ... seems like the same issues are in effect with regard to
deploying IPv6 in the "core", namely, no one wants to change software
just to change.  There don't seem to be overly compelling reasons (yet)
for a significantly large number of end users to switch to IPv6
compliant technology, such that it would spur deployment of IPv6 in the
critical infrastructure they use.  Rather, it has spurred deployment
of IPv4/NATv4.

Some of you know that I like to draw parallels between the Internet
and other media.  One possible analogy (with US radio broadcasting) is
that IPv4 is to AM as IPv6 is to FM.  Licensing of FM stations and the
eventual growth and development of that medium was accomplished through a
variety of means, such as limiting the number of new AM licenses granted,
and the development of programming on FM that became sufficiently compelling
that a marketplace grew for radios that could receive both AM and FM
broadcasts.

This suggests that a possible key to mass deployment of IPv6 could come
from stricter IPv4 address space allocation, but more likely from
development of content reachable *only* via IPv6 address space.  This would
hopefully compel the folks who currently want to stick with IPv4/NATv4 to
make/market/purchase IPv6-compliant solutions in order not to be left
behind.

For the record, I don't necessarily think stricter IPv4 address space
allocation is a good idea.  But using the US radio broadcasting analogy
again, a good many FM licenses were issued to people who wanted to be
broadcasters but had no choice but to go to FM because the FCC would not
issue them an AM license.

--gregbo




Re: breaking the IP model (or not)

2000-04-14 Thread Greg Skinner

Keith Moore wrote:

 perhaps architectural impurity alone shouldn't keep you from doing
 something, but the fact that something violates fundamental design
 assumptions should cause you to do some analysis and hard thinking
 about the likely consequences of using them.  and if you are in the
 business of selling boxes that violate the design assumptions you
 shouldn't misrepresent these to your customers.

True, however I think at least some of the customers are also to blame.
In their haste to get on the Internet they went out and bought NAT boxes
without understanding their limitations.

I hear about this sort of thing even outside of the context of NAT,
e.g. with people who have non-globally routable IP address blocks and
don't understand why they can't reach certain sites.  They then complain
to their ISPs, who point out that their service does not guarantee
global routing.

--gregbo