RE: The internet architecture

2008-12-29 Thread michael.dillon
 Yes, of course.  There are lots of ugly things that can 
 happen.  You don't have to go very far to run into why.  The 
 question is why have we insisted on not doing it right for so long?

Perhaps because others were working on the problems of application
communication without IP addresses. AMQP (http://amqp.org/) is one
such protocol, as are all the others that call themselves message
queuing. XMPP might fall into the same category (RFC references:
http://xmpp.org/rfcs/), but I'm not familiar enough with the details
to be sure that it meets the criteria for unbroken end-to-end
communication across IP-address change events.

In many ways, this is all a problem of language and history. At the
time many RFCs were written, the world of networking was very different
and very undeveloped. Getting the bare-bones basics of networking right
was very, very important. But it was less important to make things
easy for application developers or application users, because the very
fact of a network delivered such great benefits over what came before
that other problems seemed unworthy of attention. As all of this recedes
into history, the language that we use to speak about technology has
changed, so that terminology which was historically precise is now a bit
vague and can be interpreted in more than one way. That's because lots
of other people now use the same language and apply it to their own
designs, architectures, etc.

I think the only way to resolve the question is to publish an Internet
architecture description in today's context: one that explains what the
Internet architecture is, what it isn't, and why it has succeeded in
being what it is. At the same time, one could point to other work,
outside the IETF, on related problems which are intertwined with the
Internet architecture yet separate from it. And if AMQP really meets
all the requirements of an IP-address-free protocol, perhaps it should
be taken under the IETF's wing.

--Michael Dillon
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: How I deal with (false positive) IP-address blacklists...

2008-12-10 Thread michael.dillon
 Schemes that attempt to assess the desirability of the email 
 to the recipient have been tried - personal whitelists, 
 personal Bayesian filters, etc. etc. In practice they haven't 
 worked all that well, perhaps due to the average user's 
 inability to capably and consistently perform such assessments.

You are talking about the operational imperative. In ops
you have to work with what you have and make the best of
things.

 Well, sure. When you have a million users it's not only 
 difficult to focus on an individual user's needs, it's also 
 totally inappropriate.

In ops, yes. But in design it's the other way around: the needs
of a single user form a use-case which guides the designers.
In this forum there are some who believe that the Internet
email architecture can be reformed so that it no longer has
the weaknesses which allow the flood of spam to produce
a positive statistical result for the spammers.

DNSBLs may be needed today in email operations, but if the
IETF steps up and does the work to fix the root of the
problem, perhaps they won't be needed at all in the 
future.

 And from what I've seen most of the ones I deal with - these 
 folks are our main customers - take those responsibilities 
 extremely seriously, if for no other reason than large 
 numbers of complaints are very costly to deal with and will 
 end up getting them fired.

Again you are talking about email operations which is
dealt with very well by MAAWG. 

 It provoked a strong reaction from me because it both 
 reminded me of the appallingly  low quality of the previous 
 discourse and seemed like an indication of the resumption of 
 same. And I simply couldn't take another round of it.

So how do you and Ted reach consensus? What is it that
you and he have failed to understand which causes you
to have such emotionally opposite reactions? I suspect
that you are thinking like an email operator who has the 
position that you can't change what is being thrown at
you so you just have to deal with it and live with the 
damage. And Ted is thinking like a user who wishes that
Internet email would just work, like his TV and his web
browser. Neither of you are wrong.

--Michael Dillon



RE: How I deal with (false positive) IP-address blacklists...

2008-12-09 Thread michael.dillon
 Second, the fact that 10 years ago you set up sendmail for 
 the computer club at your college doesn't make you an expert 
 on modern large-scale email systems administration. The 
 operational concerns for large-scale email setups today are 
 very different from those that would have applied to small 
 scale setups a few years back.
 
 I'm not going to get into the insight real operational 
 experience provides because I also lack the necessary 
 operational experience to have an informed opinion.

To make good standards you need a broad selection of
informed opinion from different viewpoints. Why should
it not be as simple to set up an IETF standard email
system for a small organization as it was 10 years ago?

Definitely there are issues of scale that have to be 
considered, but if the IETF really wanted to have large
scale email operators drive new Internet email standards
then we would hand the job over to MAAWG. 

You are right that the quality of the discussion about
DNSBLs has not been too good. But the underlying problem
seems to be that dissenting voices did not participate in
the drafting of the DNSBL document, and therefore the document
writers had not found the right level of compromise to get
the dissenters on board. Anyone can claim to be a great expert
and write a standards document, but the real hard work is in
getting a group of people with differing backgrounds and
experience to agree with that standards document.
 
--Michael Dillon


RE: How I deal with (false positive) IP-address blacklists...

2008-12-09 Thread michael.dillon
 Maybe it's just me, but I'll take the evidence presented by  
 someone who has access to the operational statistics for a 
 mail system that services 10s of millions of end users and 
 handles thousands of  outsourced email setups over someone 
 like myself who runs a tiny little setup any day.

Then the IETF is the wrong place to look for help. The IETF
does not give a lot of priority to documenting operational
practices. You really should be looking here 
http://www.maawg.org/about/publishedDocuments
And given that MAAWG exists and does a good job of
publishing email operational best practices, I wonder
why people are so worried that the IETF does not do so.

 You might want to review the actual discussion before making 
 such claims. And while you're at it you might also want to 
 explain how it would be possible to get views that are, to a 
 close first approximation,  summed up as DNSBLs are evil 
 incarnate on board.

To begin with, the draft authors could have tried to incorporate
those views into the Security Considerations section, since when
you scratch the surface of the "DNSBLs are evil" opinions you
will most often find anecdotes about lost or blocked email.
Simply ignoring those views will not get you to consensus.

--Michael Dillon
 


RE: How I deal with (false positive) IP-address blacklists...

2008-12-09 Thread michael.dillon
Why should
  it not be as simple to set up an IETF standard email system for a 
  small organization as it was 10 years ago?
 
 
 If you go back far enough, New York City was small and 
 friendly.  Not much required to build a satisfactory home there.
 
 Things have changed.  No matter the size of the home, things 
 have changed.
 
 Environmental pressures are ignored only at one's serious risk.

I agree with all of your points. But I also believe that it should
be possible to encapsulate the necessary security features into
an Internet email architecture so that people can set up an email
server for a small organization in an afternoon, and it will pretty
much run on its own. The fact that the current Internet email
architecture does not allow for this, and therefore disenfranchises
small organizations from running their own email services, does not
dissuade me from believing that we could fix the problem if we
put some effort into doing so.

--Michael Dillon


RE: The internet architecture

2008-12-05 Thread michael.dillon

 IMO, one of the biggest challenges surrounding IPv6 
 adoption/deployment is that all applications are potentially 
 impacted, and each and every one of them needs to be 
 explicitly enabled to work with IPv6.

Or NAT-PT needs to be improved so that middleboxes can be inserted
into a network to provide instant v4-v6 compatibility.

 That is a huge 
 challenge, starting with the observation that there are a 
 bazillion deployed applications that will NEVER be upgraded.

Yes, I agree that there is a nice market for such middleboxes.

 Boy, wouldn't it be nice of all we had to do was IPv6-enable 
 the underlying network and stack (along with key OS support 
 routines and
 middleware) and have existing apps work over IPv6, oblivious 
 to IPv4 vs. IPv6 underneath.

Middleboxes can come close to providing that.

 Wouldn't it have been nice if the de facto APIs in use today 
 were more along the lines of ConnectTo(DNS name, service/port).

I don't know if "nice" is the right word. It would be interesting,
and I expect that there would be fewer challenges, because we would
have had a greater focus on making DNS (or something similar) more
reliable. It's not too late to work on this, and I think that it
is healthy for multiple technologies to compete on the network.
At this point it is not clear that IPv6 will last for more than
50 years or so. If we do work on standardizing a name-to-name
API today, then there is the possibility that it will eventually
prevail over the IPv6 address API.
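A name-based API along those lines can be sketched today on top of
getaddrinfo(); the name `connect_to` is hypothetical, and this is
essentially what Python's own socket.create_connection() already does:

```python
import socket

def connect_to(name, service):
    """Hypothetical ConnectTo(DNS name, service/port): resolve the name
    and try each returned address, IPv4 or IPv6, until one accepts a
    connection, so the application never handles raw addresses."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            name, service, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(addr)
            return sock                 # caller sees only name + service
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err or OSError("no addresses found for %r" % name)
```

An application written against such an API is oblivious to whether IPv4
or IPv6 carried the connection, which is exactly the property being
wished for above.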

Way back, there was an OS called Plan 9 which took the idea of
a single namespace more seriously than other OSes had. On Plan 9,
everything was a file, including network devices, which on UNIX are
accessed with sockets and addresses. This concept eventually came
back to UNIX in the form of portalfs (not to mention procfs).

I think it is well worthwhile to work on this network endpoint
naming API even if it does not provide any immediate benefits
to the IPv6 transition.

--Michael Dillon


RE: The internet architecture

2008-12-05 Thread michael.dillon
 It may 
 well be that having applications be more brittle would be an 
 acceptable cost for getting a viable multihoming approach 
 that address the route scalability problem. (All depends on 
 what more brittle really means.) But the only way to answer 
 such questions in a productive manner is to look pretty 
 closely at a complete architecture/solution together with 
 experience from real implementation/usage.

I agree.
For instance, the cited DNS problems often disrupt communication even
when there is a problem-free IP path between points A and B, because
DNS relies on parties outside the packet forwarding path. But third
parties can also be used to make things less brittle: an application
whose packet stream is being disrupted could call on third parties to
check whether there are alternative trouble-free paths, and then
reroute the stream through a third-party proxy. If a strategy like
this were built into the lower-level network API, then an application
session could even survive massive network disruption, as long as the
disruption was cyclic.
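The core of that strategy fits in a few lines; `pick_path`, the path
objects and the `probe` callback below are all hypothetical
placeholders, not part of any real API:

```python
def pick_path(direct, proxies, probe):
    """Sketch of the rerouting strategy described above: keep using the
    direct path while it works, otherwise ask third parties and reroute
    through the first proxy whose path probes as trouble-free.
    `probe(path)` is a hypothetical reachability check supplied by the
    caller (e.g. a ping through that path)."""
    for path in [direct, *proxies]:
        if probe(path):
            return path
    raise ConnectionError("no trouble-free path available")
```

Run periodically from inside the network API, a loop like this is what
would let a session ride out cyclic disruption on any one path.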

I have in mind the way that Telebit modems used the PEP protocol 
to test and use the communication capability of each one of several
channels. As long as there was at least one channel available and the
periods of no-channel-availability were short enough, you could get
end-to-end data transfer. On a phone line which was unusable for fax
and in which the human voice was completely drowned out by static,
you could get end-to-end UUCP email transfer. A lot of work related
to this is being done by P2P folks these days, and I think there
is value in defining a better network API that incorporates some
of this work.

--Michael Dillon


RE: sockets vs. fds

2008-12-05 Thread michael.dillon
 It's possible that this represents insight worth sharing 
 broadly, so I'm copying the list.
 
 It isn't immediately obvious to me why file descriptors would 
 have had a major impact, so can you elaborate?

Down at the end of this page
http://ph7spot.com/articles/in_unix_everything_is_a_file
there is a list of pseudo-filesystems, including portalfs from
FreeBSD, which allows network services to be accessed as a
filesystem. For those interested in pursuing the idea, have a
look at FUSE (http://fuse.sourceforge.net/), which allows
anyone to create resources in the filesystem namespace, for instance
by writing a Python script
(http://apps.sourceforge.net/mediawiki/fuse/index.php?title=FusePython).
FUSE is also available on OS X (http://code.google.com/p/macfuse/).

It should be possible to register a URN namespace to go along with this
API so that you could have
urn:fs:example.com:courselist/2009/autumn/languages/
to represent a service that is provided by the courselistFS on some host
that example.com knows about.
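Since urn:fs is only a suggested, unregistered namespace, any parser
for it is necessarily speculative; a minimal sketch of how such a URN
might be split up:

```python
def parse_urn_fs(urn):
    """Split a URN in the suggested (hypothetical, unregistered) urn:fs
    namespace into the authority that knows about the filesystem
    service and the path within that service."""
    scheme, nid, rest = urn.split(":", 2)
    if scheme.lower() != "urn" or nid.lower() != "fs":
        raise ValueError("not a urn:fs URN: %r" % urn)
    authority, _, path = rest.partition(":")
    return authority, path
```

With the example above, the authority comes out as example.com and the
path as courselist/2009/autumn/languages/, which a FUSE-style resolver
could then mount.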

--Michael Dillon



RE: The internet architecture

2008-12-01 Thread michael.dillon

 I know IETF thinks IP is the center of the universe and the 
 one true religion. But not in process control it is not. A 
 PIC controller comes with 384 bytes (BYTES, not kilo) of RAM. 

This is wildly out of date. For at least the last 10 years
cheap and common PICs have been made with more RAM than that.
The IP stack has been implemented in 1K bytes of code that
will run on the 8-bit PIC CPUs.

 Good luck getting an IP stack in there. And even if you use a 
 bigger processor with a built in TCP/IP stack you can only 
 run it over Ethernet type media. You can't use RS485 which 
 looks a much better bet for hardwired home automation systems 
 to me, it is what process control has used for decades.

Exactly. Process control doesn't need IP at all at the
edge of their network. They have other solutions that
work well for them. Whether it is I2C or one-wire or
RS-485, data can be relayed onto an IP network by devices
which speak both protocols. There is no problem here.
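The relaying idea can be sketched in a few lines; the function and its
parameters are hypothetical, and any line-oriented file object stands
in here for a real RS-485, I2C or 1-Wire driver:

```python
import socket

def relay_lines(serial_port, collector_addr):
    """Hypothetical edge relay: read line-oriented sensor readings from
    a non-IP medium (serial_port is any file-like object standing in
    for the real bus driver) and forward each non-empty reading as a
    UDP datagram to an IP-side collector at (host, port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    for line in serial_port:
        reading = line.strip()
        if reading:
            sock.sendto(reading.encode(), collector_addr)
            sent += 1
    sock.close()
    return sent
```

The point is simply that the edge devices never need an IP stack; only
the relay does.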

--Michael Dillon


RE: Advice on publishing open standards

2008-11-28 Thread michael.dillon

 For the past 5 years, I've been processing written sign 
 language as data.
 I've worked directly with the inventor of the script, which 
 is over 30 years old.
 
 We are ready to standardize.  The latest symbol was finalized 
 last month after more than a year of improvements and refining.
 
 I believe I'll need to write 2 Internet Drafts followed by 2 RFCs.

First, you have to give up control of the standard. This means that, for
instance, the Makaton people and BLISS Symbolics people may very well
make changes to the standard to accommodate their signing symbols.

You really should have a look at the Tao of the IETF
(http://www.ietf.org/tao.html) before you publish an Internet draft. I'm
not sure that this is the kind of thing that needs to be standardised
via the IETF, since it is not primarily about network protocols.
SignWriting has more to do with the encoding of graphical information
and seems more closely related to w3.org's work with SVG and SVG fonts,
not to mention the Unicode consortium.

Also, I would suggest that you abandon the ASCII stuff since it seems to
be just a roundabout way of defining standard bitmapped representations.


--Michael Dillon




RE: [BEHAVE] Lack of need for 66nat : Long term impact to applicationdevelopers

2008-11-26 Thread michael.dillon
 Yeah, but we're trying to get rid of that stuff, or at least 
 considerably reduce the cost and complexity, because (among other
 things) it presents a huge barrier to adoption of new multiparty apps.

Promoters of NAT, particularly vendors, seem to have a two-valued
view of the network in which inside is good and outside is evil.
But network operators, who sit on the outside of the NAT,
do not share that view. In fact, we see a future in which
cloud computing centers within the network infrastructure
will lead to a wide variety of new multiparty applications.
In many cases the network operator also takes management
responsibility for gateway devices, so the idea of evil on
the outside is even more far-fetched.

That said, if there is to be some form of NAT66 because there
are real requirements elsewhere, it would be preferable if
the defined default state of this NAT66 was to *NOT* translate
addresses. This is not crazy if you see it in the context
of a NAT device which includes stateful firewalling.

I suggest that if part of the problem is an excess of
pressure from vendors, then the IETF could resolve this
by actively seeking greater input from other stakeholders
such as network operators.

--Michael Dillon


RE: uncooperative DNSBLs, IETF misinformation (was: several messages)

2008-11-14 Thread michael.dillon
 - DNSBLs are a temporary fad, they'll never last.
(we've been serving DNSBLs for 10 years)

Longevity is no guarantee of future survival.

 - DNSBLs are bad for email.
(we alone flag some 80 billion spam emails *per day*, spam which
would otherwise clog servers and render email completely useless)

Interesting point. If you did not run those DNSBLs, then the flood of
spam would have rendered email completely useless, which would have
reduced the sell-rate from one in 12.5 million to zero. At that point
there is no financial incentive for spam. Or, more likely, spam would
have been maintained at a much lower level to maximize the spammers'
profit.

 - DNSBLs have huge False Positives.
(at 80 billion spams stopped per day, if we had even a minuscule
FP level there would be a worldwide outcry and everyone would stop
using us. Do the maths. Our FP level is many times lower than any
other spam filter method by a very, very long way)

Hmmm. No data provided, so no maths is possible. Note that a huge FP
rate does not imply a huge quantity of false positives, if you allow
for an importance factor.
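To make the quoted "do the maths" concrete: only the 80-billion-per-day
figure comes from the message above; the false-positive rate below is an
assumed number, chosen purely to illustrate the scale effect.

```python
# Illustrative arithmetic only: the 80-billion/day figure is quoted
# above; the FP rate is an assumed value, not a claim about any list.
flagged_per_day = 80_000_000_000
fp_per_million = 100                 # assumed FP rate: 0.01%
false_positives = flagged_per_day * fp_per_million // 1_000_000
print(false_positives)               # 8000000 misclassified messages/day
```

This is why "do the maths" cuts both ways: at that volume, even a rate
that sounds minuscule is millions of lost messages a day.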

 - DNSBLs break email deliverability.
(DNSBL technology in fact ensures that the email sender is notified
if an email is rejected, unlike Bayesian filters/content filters
which place spam in the user's trash without notifying the senders)

This still breaks deliverability. 

 - DNSBLs sit in the middle of an end-to-end email transaction
(see: http://www.spamhaus.org/dnsbl_function.html for 
 enlightenment)

There is a diagram under "Rights of a Sender vs Rights of a Receiver"
which shows that the DNSBL modifies the behavior of the receiving
mail server. This is what I mean by sitting in the middle of an
end-to-end (sender to recipient) email transaction.
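For readers following along, the mechanism under discussion is simple
at the protocol level: the mail server reverses the IPv4 octets of the
connecting host, prepends them to the DNSBL's zone, and does an
ordinary DNS lookup. A sketch of the name construction (the zone names
below are examples):

```python
def dnsbl_query_name(ip, zone):
    """Build the DNS name a mail server queries to check `ip` against a
    DNSBL: the IPv4 octets are reversed and prepended to the list's
    zone, so checking 192.0.2.99 against dnsbl.example.org means
    looking up 99.2.0.192.dnsbl.example.org.  An A record answer in
    127.0.0.0/8 indicates a listing."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("IPv4 dotted quad expected: %r" % ip)
    return ".".join(reversed(octets)) + "." + zone
```

The DNSBL never sees the message itself, which is exactly why its
influence on the transaction is easy to overlook.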

 - Someone from BT said DNSBLs should not be standardised
(BT has a contract with Spamhaus to use our DNSBLs on its network,
we're not sure why BT would prefer the DNSBLs it uses to not be
standardised but we'll ask them at contract renewal time ;)

This is the IETF. Nobody here speaks for any company or any
other organisation. Many people have stated that THIS DRAFT should
not be accepted as a STANDARDS TRACK RFC because it does not meet
the IETF requirements for an IETF STANDARD. That is very different
from saying that DNSBLs should not be standardised. 

 - DNSBLs are all bad because someone had a bad experience with SORBS.
(well, we're not SORBS. Nor are Trend Micro, Ironport, or the other
responsible DNSBL operators)

DNSBLs are risky because of the many cases put forward here. This
implies that there are security considerations that should be discussed
in the RFC, but which the authors neglected to mention. 

 (DNSBLs using 127.0.0.2 cause absolutely no 'damage' whatsoever)

You must not have read the draft. People are concerned with stuff 
like this from section 2.3:

   To minimize the number of DNS lookups, multiple sublists can also be
   encoded as bit masks or multiple A records.  With bit masks, the A
   record entry for each IP is the logical OR of the bit masks for all
   of the lists on which the IP appears.  For example, the bit masks for
   the two sublists might be 127.0.0.2 and 127.0.0.4, in which case an
   entry for an IP on both lists would be 127.0.0.6:


In any case, your diatribe will not change the fact that this draft
will not be standardised. In order to become a standard, a draft has
to have consensus, and you can't build consensus by misstating the
arguments against standardisation, and then saying that it is all
farcical.

--Michael Dillon



RE: uncooperative DNSBLs, IETF misinformation (was: several messages)

2008-11-14 Thread michael.dillon
  - DNSBLs are a temporary fad, they'll never last.
 (we've been serving DNSBLs for 10 years)
 
  Longevity is no guarantee of future survival.
 
 A good argument against publishing a standard for any 
 technology at all.

Not at all. But it seems to me that the IETF does try
to design standard protocols that have a chance at
longevity.

 This theory can be tested and you guys at BT could be the pioneers:  

I have no idea what theory you are talking about testing.
I was making a comment about what might have been. Obviously,
time has passed and there is no longer any opportunity to test
what might have been.

In addition, my comment about the past had nothing whatsoever
to do with any particular company or ISP. It was a comment
about the feedback loop between DNSBLs and spam volumes: the
more effective DNSBLs are, the greater the volume of spam sent
by the spammers who rely on people buying the products that
they advertise.

  Hmmm. No data provided, so no maths is possible.
 
 I thought perhaps you might be with BT's mail engineering 
 team. 

Not even close. 

 customers. (If you're not with BT's mail engineering team I apologize)

If you promise to not make unwarranted assumptions about
IETF participants in future, then I accept your apology.
You might want to read this http://www.ietf.org/tao.html

 How many times have you sent an email and your recipient says 
 days later I didn't get it and you say well you must have 
 since it didn't bounce back and both of you waste time. 

Yes it's true, the Internet email architecture has a number
of holes that can break deliverability. DNSBLs are only a part
of the problem.

 DNSBL technology maintains the fundemental rule of email
 deliverability: If an email can not be delivered *inform the Sender*.

First of all, the draft only says that there SHOULD be
a TXT record with a reason and that it is often used
as the text of an SMTP error response. The draft doesn't
actually say anything at all about informing the sender,
only about the sender's mail server. But then, the draft
is defining the DNSBL protocol, not the entire architecture.

At this point I'm beginning to wonder whether the IETF
should even publish this as an informational RFC. After
all, the information is already public and the authors can
publish the substance of this protocol elsewhere if they choose.
If there was a working group to publish a set of RFCs that
cover the whole area of DNSBLs and filtering then this would
make a fine document for that WG to start with. But on its
own it leaves too many loose ends.

--Michael Dillon



RE: [BEHAVE] Can we have on NAT66 discussion?

2008-11-14 Thread michael.dillon
DL Port/Overload NAT for IPv4 (NAT:P) has security benefits 
   in that it requires explicit configuration to allow for 
   inbound unsolicited transport connections (via port forwarding)
   to 'inside' hosts.

Perhaps you missed this statement from
http://www.ietf.org/internet-drafts/draft-mrw-behave-nat66-01.txt

   NAT66 devices that comply with
   this specification MUST NOT perform port mapping.

--Michael Dillon


RE: uncooperative DNSBLs, IETF misinformation (was: several messages)

2008-11-14 Thread michael.dillon
  This still breaks deliverability.
 
 How?

A user writes an email and sends it to another user. The other user does
not receive the email. This means that deliverability is broken. The
DNSBL is an agent in preventing that delivery. To my mind, this deserves
some explicit discussion in the Security Considerations section. On one
hand, a misused DNSBL can wreak havoc, and on the other hand a
compromised DNSBL could block more email than an administrator wishes.
The draft presented by the ASRG was very weak in its discussion of
security considerations given the fact that a DNSBL is explicitly
designed to break email deliverability.

  There is a diagram under Rights of a Sender vs Rights of a Receiver 
  which shows that the DNSBL modifies the behavior of the 
 Receiving mail 
  server. This is what I mean by sitting in the middle of an 
 end-to-end 
  (sender to recipient) email transaction.
 
 At the desire of the receiving mail site's administrator.

That is irrelevant. The fact is that it does sit in the middle and the
implications of this should be clearly documented.

 And is not unique to DNSBLs. Any sort of spam filtering 
 modifies the behavior of the receiving mail server.

But that would be a topic for another RFC which should also have 
a substantial Security Considerations section.

There are a number of reasons to document something in a standards
document. One is to help people build compatible implementations of
a protocol. Another is to help operators of the protocol interoperate.
But it is also to provide a clear description of the protocol so
that others can improve on it in the future, or replace it entirely
with a superior architecture.

--Michael Dillon


RE: uncooperative DNSBLs, IETF misinformation (was: several messages)

2008-11-14 Thread michael.dillon

  A user writes an email and sends it to another user. The other user 
  does not receive the email. This means that deliverability 
 is broken. 
  The DNSBL is an agent in preventing that delivery.
 
 Is this unique to DNSBLs? If not, then why does it merit 
 deeper consideration in the context of DNSBLs?

Because the draft
http://www.ietf.org/internet-drafts/draft-irtf-asrg-dnsbl-07.txt
that started this discussion has set the context of DNSBLs.
You are right that it also merits deeper consideration in other
contexts.

--Michael Dillon


RE: several messages

2008-11-14 Thread michael.dillon

  
 Although this one has been, IMO, a little more hostile than 
 necessary (on both sides), anyone who isn't interested in 
 that type of review should not be looking for IETF Standardization.

And for those who have not read the Tao of the IETF, the
relevant section is 8.2, "Letting Go Gracefully":

   The biggest reason some people do not want their
   documents put on the IETF standards track is that
   they must give up change control of the protocol. 
   ...

http://www.ietf.org/tao.html#anchor37

--Michael Dillon



RE: several messages

2008-11-12 Thread michael.dillon
 Huh?  Concrete, real example:  I send a message to an IETF 
 mailing list.
 A list subscriber's ISP rejects the forwarded message.  
 IETF's mailman drops the subscriber, because this has 
 happened multiple times.
 I can't notify the subscriber, because their ISP also rejects 
 my email.
 My ISP is irrelevant to the scenario, and the (now former) 
 list subscriber doesn't even know this has happened, or why.
 
 Another real, concrete example: some (but not all) messages 
 sent via my employer were tossed because one of my employer's 
 mail servers was listed on a blacklist.  As an employee, I 
 had no alternatives for sending mail - company policy 
 precluded the use of webmail alternatives via company 
 infrastructure.

This is the type of thing which should be discussed in a much
longer Security Considerations section, even if it is only
an informational RFC. A DNSBL sits in the middle of an
end-to-end email transaction, and there is a danger of this
type of mysterious man-in-the-middle effect. The issues
surrounding this should be openly disclosed in the RFC.
Many RFCs don't need more than a cursory Security Considerations
section, but this one does, partly because of its impact on
end-to-end email transactions, and partly because of its
overloading different meanings onto the DNS protocol.

If reputation systems are going to be an integral part
of the future Internet email service, then this existing
DNSBL needs to be properly documented, and it would be
fruitful to have a WG take on the task of designing something
that looks less like a temporary band-aid solution.

--Michael Dillon


RE: IP-based reputation services vs. DNSBL (long)

2008-11-11 Thread michael.dillon

 Would refusing to publish as a standard stop 
 implementations or merely create potential interoperability 
 issues that could lead to more legitimate messages being dropped?

How would refusing to publish a document that is already public,
CREATE potential interoperability issues? The question is not
whether this information should be made public, because it already
has been and there is no reason to believe that an IETF refusal
would in any way prevent future publication of the information.

The heart of the question is whether or not this is work that
belongs in the IETF.

A big part of the issue is the fact that this draft glosses over
the security considerations of DNSBLs. If the draft had taken more
than three brief paragraphs to discuss these, then we would be 
having a different discussion.

DNSBLs are a temporary band-aid solution for a badly broken
Internet email architecture. They have provided the community
with an education but that doesn't mean that they should be
standardised by the IETF.

--Michael Dillon


RE: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-11 Thread michael.dillon

 there's a lot of evil e-mail messages out there; the cost of 
 letting even one of those messages through is unacceptable, 
 so false positives are OK. 

This is precisely the sort of thing that should have been 
covered in much more detail in the Security Considerations
section of the draft.

 I have no problem with the IETF documenting the world as it exists.
 That's what an informational track RFC does.  

 (where, oh well, we'll just block the whole /48 or /32 
 might have unfortunate side effects not forseen yet)

Again, this is missing from the Security Considerations.

--Michael Dillon


RE: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread michael.dillon
 And what does this have to do with the technical details of 
 running and using one?  We all know that spam stinks and 
 DNSBLs stink, too.
 Unfortunately, the alternatives to DNSBLs are worse.

That's a rather narrow view. Very large numbers of people
think that Instant Messaging is a far superior alternative
to DNSBLs, not to mention VoIP, web forums and other variations
on the theme. Fortunately, the IETF has done some good work
in the area of SIP and XMPP has steadily been gaining traction.

I think it is a positive thing to document the technology
of DNSBLs but I have no idea why this has come to the IETF.
Maybe it is a veiled test of the IETF's relevance to the 
21st century Internet.

--Michael Dillon



RE: placing a dollar value on IETF IP.

2008-10-28 Thread michael.dillon
 It all goes back to the light bulb as a great example of 
 standards setting - back before there was a standard base for 
 bulbs, I'm sure every light bulb manufacturer had a vested 
 interest in their pre-standard bases and sockets - whether it 
 screwed left or right or used push-in pins, the size of the 
 base, etc.,

You haven't tried to buy light-bulbs in England, have you?
The choice is bayonet mount, or screw mount, then several
size bases, not to mention halogen and spotlight mounts.

If you've traveled much in Europe you will notice similar
confusion with the standard plug shape. They all have the
same size and position of pins but only the Swiss hexagonal
plug will fit in all the sockets. Even the countries with
the same size round socket and plugs managed to place the
ground/earth connections in different places. And let's not
mention the Soviet Union's GOST standard where the pins are
1 mm smaller in diameter and 1 mm further apart. That's the 
reason for the split pins on many plug adapters because
they have enough flex to work in sockets in Ukraine, Russia,
Kazakhstan etc.

Interoperability of standards is a hard-won prize, whether in
the IETF or elsewhere. The cost of producing documents is
a mere drop in the bucket. In addition, cost is a very slippery
thing to get ahold of because of the difference between 
expenses and investments.

--Michael Dillon


RE: IETF copying conditions

2008-09-18 Thread michael.dillon
  I think the *whole point* of a standard is to restrict how 
 things are 
  done, in order to promote interoperability.
 
 Standards are recommendations not restrictions.

Let's say that the restrictions viewpoint wins out in the
IETF and all RFCs are copyrighted in such a way that I
am not free to publish a revised version.

What law would prevent me from publishing the following
GW-SMTP document?

snip-
Gee-Whizz SMTP is a derivative of IETF SMTP.

In RFC 2821 replace all occurrences of HELO with GDAY.
snip-

This is clearly an incompatible derivative of SMTP but I 
don't even need to quote the document, even though fair use
laws would allow me to do that.

--Michael Dillon

P.S. it seems to me that the best way to ensure that incompatible
derivatives do not flourish is to make sure that the work of the
IETF remains relevant to the current situation, and not mired in
the past. That way, the IETF will maintain a position of respect 
and people will not want to create incompatible derivative works.

Openness is required in order for advancement to occur.


RE: draft-rfc-image-files-00.txt

2008-08-26 Thread michael.dillon
 On first reading this seems to be an interesting way to go.

It seems to be heading in the right general direction, but 
I wonder why it does not concentrate on specifying inputs
rather than outputs. Given that XML is now widely used as the
input format for RFCs, it seems worthwhile to review the bits
of XML-related stuff that are mature enough for writers to use.

For instance, SVG for diagrams, PNG for images, and standard
CSS for tables.

Of course, there has to be a defined standard repository output
for publishing the RFCs, but that already seems to be PS/PDF.
If the IETF defines a standard input format, and the XML2RFC
toolset is updated to support that format and to output PS/PDF
for the repository, then that takes care of the format issue.
Then there is only one file, not two or three. And the toolset could
feasibly generate a text file plus PS/PDF images only format, as an
alternate output if that is desired. Or SWF output files, or TGZ
file with a folder containing HTML and separate files for each SVG
diagram or PNG image. No need to choose, just prioritise.

Stylistic issues are quite separate, although they should probably
also be specified up front if it keeps things more orderly. I'd suggest
avoiding numbering images/diagrams in favor of naming them. E.g. see
diagram former-state-machine or refer to image original-napkin-notes.

--Michael Dillon


RE: Proposals to improve the scribe situation

2008-08-05 Thread michael.dillon
  Many people do not have the liberty of upgrading machines or OSs at 
  ease.
 
 But is that a problem for you or for the network team ?
 
 There is a point where certain legacy hardware is just not 
 going to cut it anymore and I don't believe that that is the 
 fault of the network team. 

Given the subject line above that started this thread, are you
sure you are taking the discussion in the right direction? 
Demanding perfection from wireless networks is probably not
the way to go, and demanding perfection from participants'
laptops doesn't seem to be the right way either.

One thing that does seem interesting to explore is whether
scribing could be made easier by building a special piece 
of software to support the scribing activity. Such software
could include a Jabber client that has some more robust features
to deal specifically with the type of intermittent connectivity
problems that occur on wifi networks at conferences, not just 
the IETF ones. The scribe could just keep on typing and the software
would log every keystroke locally, and automatically log in 
and send anything that was missed. In addition you could add
features to make typing easier as in predictive texting.
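
The log-locally-then-catch-up idea above can be sketched in a few
lines. This is a toy model, not a real Jabber client; the ScribeBuffer
name and the send callback are invented for illustration:

```python
# Sketch of a resilient scribe buffer: every line is written to a
# local log first, and an unsent queue is flushed whenever the
# (simulated) connection is available again.
class ScribeBuffer:
    def __init__(self, send):
        self.send = send      # callable that raises ConnectionError on failure
        self.log = []         # local copy of everything the scribe typed
        self.pending = []     # lines typed but not yet delivered

    def type_line(self, line):
        self.log.append(line)     # logged locally no matter what
        self.pending.append(line)
        self.flush()

    def flush(self):
        # Deliver queued lines in order; stop (but keep them) on failure.
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return            # retry on the next flush
            self.pending.pop(0)
```

A real tool would persist the log to disk and hook flush() to the
client's reconnect event, but the queue-and-retry core is the same.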

If you think of this in terms of Meeting Scribe software,
not a special Jabber client, then it could have a bunch of
other features as well, such as uploading the transcripts
to a website, assisting the scribe in producing summary minutes
from a transcript, downloading a list of meeting participants 
to use in entering the names into the appropriate points in
the transcript, managing two Jabber channels, one with pure
transcript, and the other with transcript and other chatter.

--Michael Dillon


RE: About IETF communication skills

2008-08-03 Thread michael.dillon
  I don't know what accredited means anymore.
 
  IMHO it should mean real journalists in this context.
  That excludes technical experts who play at journalism on 
 their blog.
 
 Right, we wouldn't want to encourage reporting by people who 
 actually know what they're talking about...

Huh!?
Did I ever say anything about preventing people from writing/publishing
anything that they want? NO!
Those people who already know what they are talking about do not
need the kind of assistance that a press conference provides.

 What would the goal of accreditation be?

To make sure that the scarce time of the volunteer experts who
are taking questions at the press conference, is not wasted.
The press has existed in its current form for well over a century.
It's a different world from IP networking, and it has its own
traditions and its own processes. If a group of people is serious
about getting its message to the press in a relatively ungarbled 
form, then that group of people works with the press in the established
ways. That includes press conferences, and special support for
accredited members of the press. Even an occasional contributor
to a magazine will probably have a business card with a title
like Contributing Editor.

  After all, there are
  no restrictions on non-journalists writing anything that 
 they want so 
  the IETF doesn't lose anything by restricting a press conference to 
  people whose dayjob is journalism.
 
 Except openness.

Huh!? There are no restrictions on people writing what they want,
so how does closing one conversation to some attendees cost the
IETF its openness? The claim is ridiculous because the IETF is full
of private conversations, bar BOFs, private emails, etc. That does not reduce
the openness of the IETF. The professor explaining TCP in a first
year university class does NOT reduce the openness of the IETF.

 I would suggest that if the IETF wants more accurate press 
 coverage, it should, along with managing expectations, 
 produce press releases.  
 For most meetings, I don't think the number of tech 
 journalists that would show up is sufficient to warrant press 
 conferences, although it might in places like San Francisco.

Press releases would be a great idea. I believe that as we get
closer to IPv4 exhaustion there will be a surge of press interest
in the IETF and that is when a press conference will be useful.
Is there a mechanism in place (requests for free tickets?) that
would give the IETF some indication that there is a sufficient
level of interest to warrant the effort of a press conference?

--Michael Dillon
 


RE: About IETF communication skills

2008-07-31 Thread michael.dillon
 Maybe IETF should be thinking about what actions and 
 policies, uniformly applied, will result in the most accurate 
 representation of its work to the community.

In my experience, the best action to take would be to advise,
or teach, people how to handle media interviews. Back when I 
used to regularly talk to journalists I had no problem with
their articles because I planned the interviews in advance. 
I made sure that I had no more than two or three key points
to make, I prepared a sound bite or two, and I repeated myself.

There is an art in taking complex technical material and
explaining it in layman's terms, but that is exactly what
you must do with journalists if you want them to accurately
represent your message. Even journalists who cover technology
are not technologists themselves. Their specialty is writing
and they can only write what you CLEARLY and consistently
explain to them.

It can be especially hard for people with a deep technical
understanding of something, complete with a multitude of 
corner cases, to summarize in layman's terms and gloss
over the details. That's why I agree with Keith that some
IETF action would be beneficial here.

Note that one way to approach the issue is to hold official
press conferences at which only accredited members of the
press can ask questions. By doing this you focus attention
on a few people who would, hopefully, prepare for the event
and understand how to explain the work to ordinary people
like journalists and their readers. This doesn't prevent the
press from attending other meetings and it doesn't prevent 
IETF members from talking to the press. What it does do is
hold out the carrot of quality communication, and one hopes
that the press will appreciate the effort and make full use
of it. Indeed, the invitations should explicitly solicit
clarifying questions about anything that the journalist
has already begun working on.

--Michael Dillon


RE: About IETF communication skills

2008-07-31 Thread michael.dillon
 I don't know what accredited means anymore. 

IMHO it should mean real journalists in this context.
That excludes technical experts who play at journalism
on their blog. Since the intent of the press conference
is to help non-technical writers understand both the
IETF technology and the IETF ways of working, it would
be bad to let in bloggers who might use more than their
fair share of the question time. After all, there are 
no restrictions on non-journalists writing anything that
they want so the IETF doesn't lose anything by restricting
a press conference to people whose dayjob is journalism.

 Too often, it 
 turns into ways to exclude unfriendly or non-mainstream 
 reporters, or to plant favorable ones.

That would be a silly thing to do now that the Internet
gives everyone an opportunity to have their say.

 These days, the analogous issue is whether or not 
 bloggers are real
 journalists.  I would hope that most IETFers would object to 
 that distinction.

You can't object to a technical fact. Of course, there is then
the question of terminology, so let's define journalist as someone
who is paid to write articles for a publication and who is
at IETF to do their dayjob. Whether or not the journalist
also has a blog is irrelevant. This definition does exclude
people like me who are not currently paid to write and who
only write on things like blogs and mailing lists.

 To put it bluntly, I'm not at all in favor of trying to 
 manage news coverage, especially by organizational 
 mechanisms. 

My suggestion was not about managing news coverage at all, 
but about providing a venue for a specific section of the 
attendees that have special needs. Like the Newcomer's training.
In this case, journalists know and understand the press conference
format so this is just part and parcel of speaking to them in 
language that they understand. Why not let anyone ask questions?
Because the time of the people doing the answering is too valuable
to waste. They are there to talk with the press to help the 
journalists understand what is going on.

 Say what you mean, say it clearly, and publish 
 your own blog/newsletter/whatever if you need to.  Complaints 
 about misconstrued quotes are also appropriate, because any 
 system needs a feedback channel.  But trying to control the 
 press is not only worse than the disease, it's counter-productive.
 (I'd be astonished if the reporter in question were not 
 reading this thread -- what will the next story be?)

That way lies madness. This is not about keeping things secret, 
but about making things more open by making a concerted effort
to speak through journalists to the general public who are coming
to rely on IETF technology as part of the core infrastructure of
society. The IETF doesn't HAVE to do this. But then we will continue
to suffer from the very common problem of having complex technical
matters completely misconstrued in the press. It happens in every 
technical field: sciences, engineering, medicine.

--Michael Dillon


RE: request for comments -- how to?

2008-07-28 Thread michael.dillon
 I have an idea for a new anonymous routing protocall -- and I 
 was wondering what I should do to get input. I was told by 
 some people to post here.

First Steps
---
Write a document that describes the protocol. If you can implement it as
well, great, but to start you need to have a document. Then set up a
mailing list where people can discuss your design. Find people who are
interested in your protocol by talking about it in places where this
kind of person hangs out. 

Next Steps
--
Once you have a group of people on your mailing list discussing the
protocol design, then have a look at this page
http://www.ietf.org/html.charters/wg-dir.html. Find out which area you
think your work belongs in and send a message to one of the ADs (Area
Directors) asking them what you need to do to change your discussion
group into an IETF working group.

Focus on the first steps for now because if you can't get over that
hurdle without IETF help then you probably don't have something that
belongs in an IETF working group.

--Michael Dillon


RE: Proposed Experiment: More Meeting Time on Friday for IETF 73

2008-07-21 Thread michael.dillon

  To elaborate, my understanding is that the rules for 
 teleconferencing 
  are governed by the rules for interim meetings, which require 
  something like one month's advance notice plus attendance 
 requirements 
  at the previous IETF, and a minimum period of time between 
 meetings.  

 I will also note that telling people they cannot meet to 
 discuss things is about as effective as telling water it 
 cannot flow downhill.

In any case, what is teleconferencing?

Does it include someone running a realtime meeting on an IM service?
Note that these days, IM services can include video and audio.

If you use a service like that provided by webex.com, does that make it a
teleconference?

If you have an audio conference call using SIP/VoIP over the Internet,
is that a teleconference?

One wonders why the IETF persists in using old technology long after the
real world has shifted to leveraging the very technology that the IETF
created in the first place. Really, the right experiment is to
supplement mailing lists with a variety of other Internet-based
interaction technologies, try several of these possibilities,
and find out what works. This should take the load off the
face-to-face meetings so that there is no need to extend them.

Maybe they could even be shortened?

--Michael Dillon




RE: Proposed Experiment: More Meeting Time on Friday for IETF 73

2008-07-21 Thread michael.dillon

 Teleconferencing, in this context, includes any 
 communications vehicle that enables participants to meet 
 without having to travel, and which they all agree to. Could 
 be telephone, skype with or without video, Marratech, Webex, 
 Citrix, or anything else as long as they all agree.

Sounds to me like it means any technology which requires the Internet
since email functions quite nicely using non-Internet technologies like
UUCP. Why does the IETF have rules which hamper using the Internet to
develop Internet-based protocols?

And then use that as an excuse to lengthen the face-to-face meetings
making it even harder for people who are not IETF fanatics, or funded by
their vendor-employer to attend them?

The IETF really needs to sit up and take notice of how other development
projects leverage the Internet, such as the many open-source software
projects. I'm not saying that all WGs should be forced to start using
blogs or IM chat sessions or whatever. Rather, I think the IETF should
formally get rid of that teleconference rule, and actively encourage WGs
to experiment with new ways of working that leverage Internet
technologies, and which reduce the amount of time needed in face-to-face
meetings.

That would be worthy of the title experiment.

--Michael Dillon


RE: Proposed Experiment: More Meeting Time on Friday for IETF 73

2008-07-18 Thread michael.dillon
 I oppose this experiment.  I already donate to my employer a 
 significant amount of travel time on weekends without wanting 
 to add to it.  Flight schedules are tightening, thanks to the 
 cost of fuel, which means that having sessions on Friday at 
 all poses a problem now, if I want to get back by Saturday.  
 Having afternoon sessions would put a nail in that coffin.

One wonders why the IETF is so reluctant to make more use
of an interesting service called the Internet. I hear that
it offers cheap conference calls, and cheap video conferencing
capabilities. One would think that in this age of rising
fuel/transportation costs, the IETF would be looking to
make heavier use of telecommunications to reduce the amount
of face-time required.

 In addition, I'd argue that we need to update our rules to 
 allow for less notice so that more use of teleconferencing 
 can take place.  I recognize that this solution is not a 
 panacea, especially for the poor shmo who has to be up at 
 4:00am to participate.

That poor schmo should be happy that he has saved himself
the aggravation of a week of jet lag. Also, video or voice
conferences can be recorded so that people who have real problems
with time differences, can still follow the work, and join
in at other venues such as mailing lists, or the next conference
call which is scheduled at a different time.

--Michael Dillon


RE: Single-letter names (was: Re: Update of RFC 2606 based on therecent ICANN changes?)

2008-07-07 Thread michael.dillon

 Alphabetic scripts such as Latin mostly represent sounds used 
 to make up words. While one can certainly find some 
 legitimate single-character words (such as the article a or 
 the personal pronoun i) 

And lest someone might think that this curiosity of single
character words only applies to vowel sounds, in Russian,
the Cyrillic letter equivalents of v, k and s are also
single-letter words. 

 On the other hand, characters in ideographic scripts such as 
 Han are not mere sounds or glyphs; they represent one or more 
 concepts.

Some people might dispute that and say that they represent
syllables. Since the various Chinese dialects tend to have
monosyllabic words, almost all possible syllables also represent
a word or concept. However, many concepts in modern Chinese
dialects require multiple syllables to express them and
therefore multiple characters to write them. So there isn't
really a one to one mapping of word, syllable, concept as
many people suppose.

It would be more defensible to disallow single codepoint labels
where the codepoint represents a single consonant sound or a single
vowel sound. That still leaves a grey area of syllabic symbol systems
such as Hiragana, Inuit syllabics, etc. However, the number of people
affected by a rule on syllabics is small enough that one could
reasonably poll representatives of these language communities to see
if a rule prohibiting single-syllable TLDs would cause hardship.

Note that the current system allows both single syllable TLDs such
as .to and single ideograph TLDs such as .sing when ASCII characters
are used. Or if you want to include tones, then .sing4 would be a single
ideographic codepoint. I think that it would be a good thing to update
RFC 2606 to collect the various arguments and reasoning so that the
ICANN experts have some guidance to work from. If we can't deal with
all the corner cases in an updated RFC, then at least ICANN experts
have a point of reference from which to depart, or not.

--Michael Dillon


RE: problem dealing w/ ietf.org mail servers

2008-07-03 Thread michael.dillon

  Which (autoconfig) you should either not be using on 
 servers, or you 
  should be configuring your software properly to select the correct 
  outbound address.
 that's a bizarre statement.  the distinction between a client 
 and a server is an artificial one.  either autoconfig is 
 useful for all kinds of machines, or it's almost useless. 

You are correct when talking about IP networks in general,
however Jeroen is talking about the public Internet, not
IP networks in general. 

Of course another way to make this less bizarre is to stop
using the word server to refer to two different things.
Jeroen is saying that an IPv6 device that wishes to 
advertise its IPv6 address for the purposes of receiving
SMTP connection requests, should not be configured in
such a way that its IPv6 host ID is randomly assigned.

Of course you could try to dynamically update your reverse
DNS to match the random host IDs but that creates corner
cases and race conditions which can be entirely avoided just
by making the publicly visible IPv6 address a static one.
Jeroen further pointed out that there is no reason for 
an interface, which has been assigned a random host ID, 
to suffer with only one address because IPv6 makes it
straightforward to have multiple addresses on an interface.

BTW, I do agree with your general viewpoint of Internet 
email architecture; it is horribly ugly and broken.

--Michael Dillon



RE: Last Call: draft-klensin-rfc2821bis

2008-03-28 Thread michael.dillon
  OTOH, I think standardizing this convention makes all 
 sorts of sense, 
  but not, of course, in 2821bis.
  
  Why not in 2821bis?  Is 2821bis really that time critical?
 
 It is on its way to Draft Standard.  This would be a 
 sufficiently new feature to force recycle at Proposed, which, 
 to put it mildly, would not be welcomed by the editor or, 
 more important, those who convinced me to do the work.

Let me throw another idea into the mix. What if we were to
recommend a transition architecture in which an MTA host
ran two instances of the MTA software, one binding only to
IPv4 addresses, and the other binding to only IPv6 addresses.
Assume that there will be some means for the two MTA software
instances to exchange messages when their DNS queries return
the wrong kind of addresses (A or AAAA). The IPv4 MTA can 
continue to use the rules regarding MX records with A record
fallback. The IPv6 MTA could require SRV records to identify
destinations and have no AAAA fallback.
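
The split lookup rules proposed above can be written out as a toy
model, using a dict in place of real DNS. The `_smtp._tcp` SRV label
is an assumption for illustration (no such SRV usage for SMTP is
standardized), and the addresses are documentation ranges:

```python
# Toy DNS table: (owner name, record type) -> list of targets.
DNS = {
    ("example.com", "MX"): ["mail.example.com"],
    ("mail.example.com", "A"): ["192.0.2.25"],
    ("example.net", "A"): ["198.51.100.7"],            # no MX: A fallback
    ("_smtp._tcp.example.org", "SRV"): ["mx6.example.org"],
    ("mx6.example.org", "AAAA"): ["2001:db8::25"],
}

def ipv4_targets(domain):
    """IPv4 MTA instance: MX records, falling back to an A record."""
    hosts = DNS.get((domain, "MX")) or [domain]
    return [ip for h in hosts for ip in DNS.get((h, "A"), [])]

def ipv6_targets(domain):
    """IPv6 MTA instance: SRV required, no AAAA fallback on the bare name."""
    hosts = DNS.get(("_smtp._tcp." + domain, "SRV"), [])
    return [ip for h in hosts for ip in DNS.get((h, "AAAA"), [])]

print(ipv4_targets("example.net"))   # A fallback applies
print(ipv6_targets("example.org"))   # reached via SRV
print(ipv6_targets("example.net"))   # no SRV, so no fallback: empty
```

The point of the asymmetry is that the IPv4 side keeps today's
behaviour untouched while the IPv6 side starts from a clean rule.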

It immediately seems obvious that some issues surrounding 
an architecture of email systems during the IPv4-IPv6 transition,
are well out of scope of RFC 2821bis. Therefore, I suggest that
2821bis move forward and that people interested in documenting
email transition strategies discuss this with the Application
area AD's. Such work should not be done without some form of
outreach to operational groups such as MAAWG since email operators
are quite likely to do whatever makes operational sense without
regard to IETF documents. Unless, of course, email operators
are highly involved in writing such documents.

--Michael Dillon


RE: IETF Last Call for two IPR WG Dcouments

2008-03-28 Thread michael.dillon
 c) to distribute or communicate copies of the Original Work 
 and Derivative Works to the public, with the proviso that 
 copies of Original Work or Derivative Works that You 
 distribute or communicate shall be licensed under this 
 Non-Profit Open Software License or as provided in section 17(d);

Is this a viral clause similar to that found in the GPL which
makes numerous commercial developers purposely avoid incorporating
such code into theirs? And in the IETF context, wouldn't this
clause encourage developers to create implementations that
ARE NOT COMPATIBLE WITH IETF STANDARDS?

Of course, IANAL but I really don't see why the IETF couldn't use
a licence which is basically the MIT or BSD licence possibly along
with some language about granting a patent license.

Is there a reason why the IETF does not submit its prospective license
to the OSI for approval? They know more about Open Source licences 
than anyone. There is more information here
http://www.opensource.org/approval

--Michael Dillon


RE: ISP support models Re: IPv6 NAT?

2008-02-19 Thread michael.dillon
 I'm not buying that this is so important that it's worth 
 having a box rewrite EVERY address in EVERY packet for.
 
 If you really want this, you can simply create a loopback 
 interface with address fc00::1 on it and users can type 
 http://[fc00::1]/ (ok, so the brackets are annoying, but no 
 NAT helps against that) and the users can connect to that 
 address regardless of what the addresses used on the LAN are.
 
 If the box runs a DNS resolver and mechanisms to inform hosts 
 about the resolver address, you can avoid the whole address 
 typing thing.
 
 And of course the use of a proper service discovery mechanism 
 is highly recommended.

If nobody writes all of this up into a set of guidelines
for implementors of SOHO IPv6 gateways, including some more
details on a proper service discovery mechanism, then it isn't
going to happen. Implementors will just go with the tried and 
true technique of rewriting EVERY address in EVERY packet because
that is what the experts suggest.

If you want to let them know that the real experts suggest something
different, then at minimum, an RFC should be published. However I'm
beginning to believe that we need more than just a few good RFCs 
with guidelines for IPv6 gateways and middleboxes. We probably also
need some books, magazine articles, conference presentations etc.
to back it up. And the presentations need to be at conferences like
this one http://www.date-conference.com/ not at the IETF meetings.

And this one http://www.realtimelinuxfoundation.org/
and this http://www.embedded.com/

--Michael Dillon


RE: IPv6 NAT?

2008-02-15 Thread michael.dillon
 Since NAT for IPv6 is much simpler than for IPv4, a bunch of 
 the issues associated with IPv4 NAT usage don't exist.  Like, 
 there should be no need for port translation.  No need to 
 time out mappings.  For the most part, NAT for IPv6 should be 
 just a simple substitution of prefix A for prefix B.  What, 
 exactly, are the range of choices that NAT vendors need to agree on?

A couple of things come to mind... 

Vendors need to agree on the timeout for mappings and on the
method for substituting prefixes. Even if ignoring port translation
seems obvious, a vendor who is adapting/upgrading old code might
include this in the absence of a standard. Also, an IPv6 NAT could
include features that are not in v4 NAT such as using RFC 3041
algorithms to generate the Interface ID portion of the mapped 
address rather than passing the ID through unchanged.
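
The "simple substitution of prefix A for prefix B" mentioned in the
quote can be sketched with Python's ipaddress module. This is the
naive bit substitution only; the prefixes are documentation/ULA
examples, and a real standard would also have to pin down details
like the mapping timeouts discussed above:

```python
import ipaddress

def substitute_prefix(addr: str, inside: str, outside: str) -> str:
    """Swap the network bits of addr from the inside prefix to the
    outside prefix, leaving the host (Interface ID) bits unchanged."""
    a = ipaddress.IPv6Address(addr)
    inner = ipaddress.IPv6Network(inside)
    outer = ipaddress.IPv6Network(outside)
    assert inner.prefixlen == outer.prefixlen, "prefixes must match in length"
    host_bits = 128 - inner.prefixlen
    host = int(a) & ((1 << host_bits) - 1)          # keep host bits
    return str(ipaddress.IPv6Address(int(outer.network_address) | host))

print(substitute_prefix("fd00:1:2::42", "fd00:1:2::/48", "2001:db8:5::/48"))
# → 2001:db8:5::42
```

Because the mapping is stateless and reversible, the same function
translates replies back by swapping the prefix arguments.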

An often used example of how IPv6 is better than IPv4, talks about
how every device can have its own IPv6 address, so that just like
a telephone set, every device can be called by any other device.
But if you look into how the telephone system works, many telephone
sets are not available to receive calls. Instead, they are in
communication with a PABX which may or may not forward phone calls
to the phone set. Since an IPv6 NAT device fills an analogous gateway
role in the Internet, one wonders why there is no IPv6 NAT standard
to cover things like local hosts registering with the NAT to receive
packets on a certain port, or local hosts registering a forwarding
address for packets on a certain port.

--Michael Dillon



RE: IPv6 NAT?

2008-02-15 Thread michael.dillon
   Consider the case where my home network is IPv6, my broadband
 provider is IPv4 only and the box I am ultimately contacting is IPv6.

   There you have an IPV6 NAT box, its called the legacy IPv4
 Internet and its going to be around for at least as long as Telex
 survived after the invention of email.

The first email message that I sent was in 1974 but I believe email
was around earlier than that. Here in the UK, BT plans to shut down
the Telex network by next month, March 2008, according to
http://www.sinet.bt.com/331v2p1.pdf

It is becoming increasingly clear that we are not in a transition
to IPv6, but in a transition to IPv4-IPv6 coexistence and
interoperation. It's messy but that is what is coming down the road.

--Michael Dillon




RE: I-D Action:draft-rosenberg-internet-waist-hourglass-00.txt]

2008-02-13 Thread michael.dillon


 - the GOOD news is that the wasp waist-hourglass is no longer HTTP 
 [RFC3205], or
 
 - the GREAT news is that the wasp waist-hourglass isn't Skype (yet).

The REALLY GREAT news is that when IP ceased to be the wasp waist,
TCP/UDP moved to fill that position, which implies that having a
wasp waist in the protocol stack is a stable state towards which
the protocol set wants to converge.

Therefore, there is good reason to encourage this wasp waist to
be the right part of the protocol stack. Which means that we should
consciously think about, and discuss, where the wasp waist should
be. And we should try to reinforce positioning the wasp waist at
the optimal point of the stack.

Is TCP/UDP the right place which we should try to reinforce, or
should we instead try to move it back down to IP as version 6
becomes more widely deployed?

--Michael Dillon


RE: AMS - IETF Press Release

2008-02-12 Thread michael.dillon

Speaking of the IETF eating its own dogfood, is there 
a reason why people write things like this?

 http://www.businesswire.com/portal/site/google/index.jsp? 
 ndmViewId=news_view&newsId=20080212005014&newsLang=en

Or use services like this?

 http://tinyurl.com/37qjnd

Rather than following the advice of Appendix C in RFC 3986 which
recommends putting angle brackets around URLs like so?

<http://www.businesswire.com/portal/site/google/index.jsp?ndmViewId=news
_view&newsId=20080212005014&newsLang=en>

I generally type <> then left-arrow, and then paste the URL between them.
In some environments it even works for non-standard URLs like
\\SomeServer\Some Folder Name\Engineering\Joe Bloe\Important
Spreadsheet.xls
or
file:Q:\Engineering\Joe Bloe\Important Spreadsheet.xls
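
The point of the Appendix C convention is that a receiver can
mechanically undo line wrapping: anything between the angle brackets is
the URL, minus whitespace. A rough sketch of the receiving side (the
function name is mine, not from any RFC):

```python
import re

# Recover angle-bracket-delimited URLs from wrapped text by stripping
# any whitespace that line wrapping inserted inside the <...> span.
def extract_urls(text: str) -> list[str]:
    spans = re.findall(r"<([^<>]+)>", text)
    return ["".join(span.split()) for span in spans]

wrapped = """See <http://www.businesswire.com/portal/site/google/
index.jsp?newsId=20080212005014> for details."""
print(extract_urls(wrapped))
```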

--Michael Dillon


RE: Eating our own dog food and using SIP for telephony... (was Re: My view of the IAOC Meeting Selection Guidelines)

2008-02-11 Thread michael.dillon
 
   P.S. How many folks out there have phones (hard or soft) from
which they can 
 place calls to other random SIP endpoints?  (I do, but also realize
I'm in a minority.)

The only reason that the average corporate desktop phone can make long
distance calls to anywhere in the world is that their local PABX
supports routing those calls.

In the same way, any arbitrary SIP phone (hard or soft) can also make
calls to arbitrary SIP endpoints if they are connected to a SIP PABX
that supports routing the calls. See http://www.asterisk.org for the
leading open-source SIP PABX software.

Even if the IETF needs something that Asterisk can't do today, it would
be wise to implement it anyway so that Asterisk developers can see the
need.

--Michael Dillon


RE: Last Call: draft-klensin-net-utf8 (Unicode Format for Network Interchange) to Proposed Standard

2008-01-15 Thread michael.dillon

   (1) We have a collection of protocols in the IETF that
   use text in lines and data transmission models that are
   both very simple and very dependent on a clear and
   precise definition of line.  That definition has
   traditionally involved a CRLF sequence and that alone.

 Now the net-utf8 work is targeted exclusively at the first case.
 Even more specifically, it has been clear since the request 
 to get this written up and standardized came out of an 
 Applications Area meeting a few years ago that it would need 
 to be designed so that all of the sensible and plausible uses of 
 NVT would be
 valid for it, without changes.

In that case, this is *NOT* truly net-utf8 as one would understand
it in normal English. Instead it is the Unicode version of NVT.

It seems to me that there is room for a standard Net-UTF8
for future new protocols, one that sticks closely to the Unicode
standard and tries to be transparent to arbitrary UTF-8
streams. This newer UTF-8 standard would take its line-ending
cues from the Unicode regular-expression rules, i.e. a
command-line ending could potentially be detected using
a Unicode-compatible RE engine. Unicode whitespace could be used
to delimit words and arguments in command lines. One might even
leverage the existence of PS (PARAGRAPH SEPARATOR) to mark the
end of a command-line paragraph; for instance, an SMTP-like
protocol would not need a DATA keyword because a PS could be
used to mark the beginning of data.
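
As a sanity check on the idea, Python's str.splitlines() already
implements the Unicode line-boundary rules, so LS (U+2028) and PS
(U+2029) delimited input can be parsed today. The protocol framing
below is my own invented illustration, not any proposed standard:

```python
# Sketch: LS separates command lines, PS ends the command "paragraph"
# and marks the start of message data (so no DATA keyword is needed).
stream = ("MAIL FROM:<[email protected]>\u2028"
          "RCPT TO:<[email protected]>\u2029"
          "message body here")

header, body = stream.split("\u2029", 1)  # PS: end of commands
commands = header.splitlines()            # splits on LS per Unicode rules
print(commands)  # ['MAIL FROM:<[email protected]>', 'RCPT TO:<[email protected]>']
print(body)      # message body here
```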

I think it is a good idea to have an update to NVT that allows
for Unicode, but I don't think a document whose backward-compatibility
kludges make it incompatible with the Unicode standard should
masquerade as the definitive format for Unicode on the wire.

If the nomenclature is changed a bit to make it clear that
this is an update of NVT, then I think many of the arguments
against it fall away. Of course this means that there will
be another document at some point with a different form of
UTF8 on the wire, but it doesn't hurt to give future protocol
designers a clear choice.

Is it better to stick with the installed base and make incremental
improvements, or should we break with the past and start afresh
using the hard-earned knowledge of building that installed base?
This draft should not have to wait for that decision to be made;
rather, it should go to RFC status sooner rather than later, so
that attention can be focussed on the second option, and what
comes after that.

--Michael Dillon




RE: Deployment Cases

2008-01-03 Thread michael.dillon
  Unless I've missed something recent, the IETF did not do a 
 lot of work 
  on the scenario where IPv4 islands need to communicate over an IPv6 
  Internet, talking to both IPv4 and IPv6 services.
 
 It is called dual-stack.

That seems to simply ignore the issue by saying "let there be IPv4
everywhere" for those who need it. But if there are not enough
addresses to grow the IPv4 infrastructure, you can't have IPv4
everywhere.

 The big question of course is: what exact problem do you want 
 to solve?

An IPv4 island, that is blissfully unaware of the IPv4 address
crunch until far too late, wants to avoid changing their network
but still maintain connectivity to both the IPv6 and the IPv4 
Internet. They may have to connect to an IPv6 ISP (for instance
if their IPv4 ISP goes bankrupt or they move an office location)
or they may connect to an IPv4 ISP who peers with a dual-stack
ISP or 6PE ISP. How do they continue to access all Internet services
from their IPv4 island, regardless of whether or not the services
are using IPv6 only?

Note that this is rather similar to the question raised regarding
the IPv4 outage at an IETF meeting. How does an IPv4 laptop user
continue to function without interruption when only IPv6 Internet
access is available at an IETF meeting? Assume that IPv4 wifi is
functioning and that the user does not change anything on their
laptop to accommodate the fact that connectivity is via IPv6. And
include the scenario where the IPv4 user successfully accesses an
arbitrary protocol server which is IPv6 only and has no A records
in its DNS.

 Stating I want to connect from IPv4 to IPv6, can mean a lot 
 of things, is this HTTP? SMTP? or do you really want a 
 generic solution?

A generic solution that will work for all TCP and UDP protocols.

I don't believe that there is an IETF solution for this scenario,
which I expect to be very common, specifically because the
transition to IPv6 is being triggered by an IPv4 address shortage
whose effects will be felt first in the network core and ripple
outward from there.

--Michael Dillon



RE: Deployment Cases

2008-01-02 Thread michael.dillon
 The reason I am proposing deployment cases is that while I 
 believe that #1 is the ultimate end state, I also believe the 
 same of PKI and cryptographic security systems. There is no 
 technology developed in computer science that provides a more 
 compelling intellectual case 

...to computer scientists...

 than Public Key Cryptography.  
 Yet after three decades our use of PKI barely scratches the 
 surface of what is possible. We need to ask why.

Human psychology.

 Recently I spoke to a senior 
 executive at a very large manufacturing company that is 100% 
 certain that their principal product line will be completely 
 obsolete within five years, most of you would say it is 
 obsolete today. Their idea of forward planning for the change 
 is not investing in any new equipment that is unlikely to see 
 a return before that time.

How many senior executives at Internet operators are consciously
not investing in any IPv4 products that will not provide a
return before the global IPv4 space is exhausted? I believe that
very few such executives have even made this fundamental decision.

As a result, there is not yet enough pressure on vendors to get
their products IPv6 ready before 2010. Where are the Internet 
gateways that seamlessly work with IPv4 or IPv6 on either side
of the box? Where are the OSS systems? Where are the firewalls,
load balancers, and other linchpins of the data center? Given the
fact that network operators need a fair amount of lead time to test
and certify new equipment (or software) before easing it into
production in stages, I don't believe that we are as advanced as
we need to be by this point in time.

 Mere exhaustion of the IPv4 address space is not going to be 
 a sufficient incentive unless (1) it is certain to happen in 
 the next two quarters and (2) the impact is certain to be 
 negative on the specific stakeholder in question. 

Even this is problematic because the fund managers and 
investment analysts are not yet asking senior executives
how they plan to mitigate IPv4 exhaustion. If senior executives
don't consider the issue, then they won't take action even if
it is certain to happen in the next two quarters.

 If we are to turn the stakeholders around we have to offer 
 them a compelling proposition. Merely preventing the 
 exhaustion of the unallocated IPv4 pool is not a sufficient 
 incentive for a stakeholder executive sitting on a large pool 
 of unused addresses. 

It is not the exhaustion of the free pool that should be feared.
It is the fact that your IPv4 network will lose the ability to
grow (and therefore drive growth in revenue) when there are no
free addresses. You will be forced to spend a lot of money on
either implementing IPv6 in a last minute panic, or spend a lot
of money on strings and sealing wax to make IPv4 services more
or less feasible. 

The sooner that companies take action, the sooner they can navigate
an optimal path through these waters. In some cases, spending on
things like double NAT for IPv4 may well provide a return on
investment, but that has to be balanced against a scenario in
which more investment dollars go towards making an IPv6 Internet
service functional earlier.

Unless I've missed something recent, the IETF did not do a lot of
work on the scenario where IPv4 islands need to communicate over
an IPv6 Internet, talking to both IPv4 and IPv6 services. Yet this
core-outwards scenario seems to be the primary transition scenario
that we are driving towards. The first companies to be impacted
by lack of IPv4 addresses are the core network operators, so they
must transition to IPv6 before the end user islands.

--Michael Dillon




RE: Let's look at it from an IETF oldie's perspective... Re:IPv4Outage Planned for IETF 71 Plenary

2007-12-20 Thread michael.dillon

 However, I would gently suggest that if people want IPv6 to 
 be successful, we need to start using it, and we need to 
 start creating the engineering solutions that allow IPv6 to 
 be useful in the real-world.

Yes. And that includes figuring out what is needed to make an
IETF meeting function with only IPv6 transit connections to
the outside world including full support for IPv4 users at 
the meeting. Full Support in this context means making it
possible for IPv4 users to seamlessly access IPv6-only services.
If there is to be any kind of IPv4 outage, it should be on
some of the IETF servers that currently function on both IPv4
and IPv6 to prove that it is possible to make a transition
to IPv6 that is relatively seamless to the end users of the
Internet.

 The 
 question is what other real-world deployment problems are 
 hiding that haven't been addressed yet.

A mandate to make an IETF meeting function as described above
would be a good way to get people to work on figuring out
these problems.

   There are some who seem to be arguing that the IETF is 
 not the place to work out these problems.  Well, last I 
 checked, the word
 *ENGINEERING* is in the name of our organization. 

I think there is a multileveled understanding of what engineering
means that has led to some of the outraged comments. By some
standards, many of the IETF participants, especially old-timers,
are FORMER engineers because they don't get their hands dirty 
any more. By other standards they are EXPERIENCED engineers who
don't need to get their hands dirty and can spend all their time
planning, designing and vetting other people's work.

The EXPERIENCED engineers are right that technology experiments
which amount to a service outage should not be dumped on an IETF
meeting. But the hands-on folks are also right that the IETF has
an important role to play as the IPv4 exhaustion crisis ramps up.
It is because the pool of experienced engineers is available as
a resource that it makes sense for a working group to plan,
implement, TEST, and deploy whatever is needed to make it possible
for an ENTIRE IETF meeting to function with no IPv4 Internet 
connectivity other than via automatic tunnels, proxies, and NATs.
The IPv4 packet transport may be turned off at the lower layer
but that doesn't mean that IPv4 users would get no service or even
degraded service.

Of course, such a demonstration could be done without involving the
IETF at all, but then it loses a lot of the PR impact. For instance,
if the IETF meeting needs AAAA records in the root to avoid having
to hijack the root, then ICANN will act. If the IETF requests major
service providers to participate in such a demonstration by turning
on some type of IPv6 services in trial mode, then I suspect we will
see Google and Microsoft and CNN and Verizon all join in the effort.

   Let me give a challenge.  It's been nine years.  In the 
 next year, let's try to do whatever ENGINEERING work is 
 necessary so that the IETF conference network can offer 
 IPv6-only services to all of its laptop clients, and that 
 this be sufficient for people to get real work done.  

This means that it has to include transition technologies
so that a pure IPv6 user can still get to all the IPv4 services
that they need, and so that an IPv4 user can do the same.
If you push this one step further and show that an IPv4 user
on the IETF network can make use of IPv6-only services, then
you will really make a splash in showing that IPv6 is ready
for primetime.

--Michael Dillon

And once this has been proved at an IETF meeting, the next step
is to do it at an event like Davos. 
http://en.wikipedia.org/wiki/World_Economic_Forum




RE: IPv4 Outage Planned for IETF 71 Plenary

2007-12-19 Thread michael.dillon

 Yes, right now IPv6 deployment isn't good enough that we 
 can't do this without using all sorts of workarounds.  OK, 
 let's document those workarounds and make them available to 
 the attendees.  If it means that the IETF network provider 
 has to hijack the root, then let them hijack the root on the 
 IETF network, and document that fact.  If there needs to be 
 NAT-PT bridges to allow IETF'ers to get back to their home 
 networks connected to ISPs that don't yet offer IPv6 
 connectivity, then let there be NAT-PT bridges --- and 
 document them.  If various Windows, Macintosh, and Linux 
 laptops need special configuration so as to work around other 
 problems, then document what the workarounds should be, and 
 give people early warning and access to them.

This is the best suggestion that I have seen in this whole thread.

Build it, test it, get it ready for production and then unleash
it on the IETF themselves. All documented so that any network
operator who claims that it is impossible can be given a copy
of the recipe book.

IPv6 could have been ready to go years ago, but people got used
to pushing it down the priority list thinking that it was a long
term thing and it would be easier to deal with it later. That was
true to a point, but now IPv4 exhaustion means that IPv6 is no
longer a long term thing that can be repeatedly deprioritised. We
have to deal with it now, even if everything isn't as ready as we
had hoped.

 (Or maybe the
 IPv4 network can be made available over the wireless on a 
 separate SSID with a fee attached,

That sounds a bit draconian. Since it is pretty straightforward to
tunnel IPv4 over IPv6, give them the IPv4 SSID and a walled-garden
web server where they register for free use of the tunneling service.
Then monitor your outgoing traffic (100% IPv6) and record how much
of it uses this tunnel service.

For this to work, it needs to be virtually painless for all users
including those who have a pure IPv4 environment and no capability
to use IPv6 in any form whatsoever.

--Michael Dillon



IETF Eurasia

2007-11-29 Thread michael.dillon

 Maybe I should elaborate. In several WG where I am active at 
 least half of participants are from Europe or Asia.

Why do IETF meetings have to be monolithic and all-inclusive?
Why can't the IETF hold partial meetings in Europe and Asia?
This would probably mean more IETF meetings but nobody has to
go to all of them.

Essentially, I am suggesting that WGs with a lot of participants
in Europe or Asia should be able to band together and hold
local IETF meetings leveraging the same IETF secretariat services
as the full meetings.

--Michael Dillon



RE: IETF Eurasia

2007-11-29 Thread michael.dillon
  Why do IETF meetings have to be monolithic and all-inclusive?

 I can tell you why we do - crosstalk. It can be incredibly 
 useful for people from the Security Area to look in on 
 Applications, or for Transport and RAI folks to understand 
 the workings of the layers beneath them and their users, for example.
 
 That doesn't make for a has to, but it seems like a good 
 reason to choose to, from my perspective.

I agree with your reasoning. I should have asked,
why do *ALL* IETF meetings have to be monolithic and all-inclusive?

Smaller meetings held outside North America could be located
in smaller, cheaper hotels, and would encourage wider participation
in the IETF. In fact, smaller meetings in North America would 
achieve the same ends.

I'm not suggesting getting rid of the existing monolithic
meetings, but adding another type of meeting that is smaller,
cheaper to attend, and held in cities/countries that are
far from the USA but closer to people who should be more 
involved in the IETF. For instance, Pune and Bangalore India,
Moscow and Ekaterinburg Russia, Dalian and Shanghai China 
as well as places like Helsinki, Frankfurt, Tokyo, Seoul.

Note that smaller regional meetings still provide the opportunities
for some crosstalk, even if the variety of WG choices to attend
will be smaller. And it increases the amount of crosstalk and
cross-fertilization between people who regularly work in the IETF
and those who have not done IETF work because they have not had
the opportunity to see it in action, face to face.

Note also that RIPE does something along these lines with their
regional meetings having more focus on education. I expect that
an IETF regional meeting would also have to have more focus
on education since a higher proportion of first-timers would attend.

--Michael Dillon



RE: FW: I-D Action:draft-narten-ipv6-statement-00.txt

2007-11-16 Thread michael.dillon
  In terms of new work, the only thing I'd like to see is 
 that driven by 
  a clear and compelling need from folk that are seriously trying to 
  deploy IPv6 and can identify a real gap in available standards. I 
  don't doubt there is some work to be done here, but it needs to be 
  driven by a real, concrete need, not just be yet more tinkeritus.

I do see that clear and compelling need, not to tinker, but
to document the current state of IPv6 in order to assist operational
deployment in both network operators and enterprises. The fact that
IPv4 addresses are heading towards exhaustion in two to three years
from now is one of those exceptional situations that demands some
support from the IETF. For those people who believe that IPv6
deployment is the best answer to this crisis, what do they need
to know about IPv6 in order to make good deployment decisions?

 'Indeed. I'm not looking for a book at all, but an RFC which 
 summarizes the current state of IPv6 that can be used as an 
 authoritative source to win arguments with people who are 
 still stuck in IPv4 thinking. At this point, I have to trawl 
 through dozens of RFCs looking for this information, or else 
 use one of the books Brian recommended and hope that the fact 
 of his recommendation holds some weight'
 
 It's that dozens of RFCs that grabs my attention (and makes 
 me think, how many
 more?)  For me, it's not enough to say 'we are done'; we need 
 to do more, like produce that ultimate RFC as well (well, 
 ultimate until the experience of deployment demands a change).

For example, RFC 4294 was produced to give an overview of IPv6
targeted at implementers. It is terse and leaves most of the
details to the RFCs that it references, which is reasonable since
an implementer MUST read all those RFCs in order to get everything
right. But for operations folks who are deploying IPv6, they should
not need to read through a dozen or more RFCs to understand the
current state of IPv6. If you look at V6OPS
http://www.ietf.org/html.charters/v6ops-charter.html
there are already 27 RFCs and more on the way.

The root of the problem is IPv4 thinking and that is rooted in
the IPv4 addressing architecture. Since the IPv6 addressing
architecture is poorly documented (scattered among several RFCs)
the IETF does not have an authoritative source that describes
the IPv6 addressing architecture. This is absolutely fundamental
to IPv6 deployment because early on in the planning stage you
need to understand IPv6 addressing, and develop your own IPv6
addressing plan that forms the architecture of your deployment.
The RFCs on addressing talk about site-local addresses or
TLAs, which causes people to mistrust them.

The solution is to publish an RFC targeted at operational
deployment that serves as an authoritative overview of IPv6.

--Michael Dillon
   



RE: FW: I-D Action:draft-narten-ipv6-statement-00.txt

2007-11-13 Thread michael.dillon
  ULA,
 
 No apparent consensus to do this. But is it needed to deploy 
 IPv6? A lot of people say absolutely not. 

And if, during the next year or so of larger scale deployment
of IPv6, we discover that ULA-C is needed, then it can be made
available relatively quickly because it doesn't require upgrades
to any existing IPv6 devices or software.

Don't forget NAT-PT.

NAT-PT was deprecated by the IETF because it's not a good long-term
idea, but it has already been deployed, and if people can get some
short-term use out of it, so be it; the IETF only deprecates, it
doesn't ban.

 In terms of new work, the only thing I'd like to see is that 
 driven by a clear and compelling need from folk that are 
 seriously trying to deploy IPv6 and can identify a real gap 
 in available standards. I don't doubt there is some work to 
 be done here, but it needs to be driven by a real, concrete 
 need, not just be yet more tinkeritus.

Spot on!

--Michael Dillon



RE: Daily Dose version 2 launched

2007-11-07 Thread michael.dillon
 I did not find it useful to see e.g. the first 300 characters 
 of the Daily Dose page. I also considered other options (such 
 as producing a version without the yellowish detail boxes), 
 but did not find them personally useful either.

If I'm not mistaken, you are the developer, not the set of
end users, in which case your personal preferences are not
relevant. If you can provide a choice of two feeds, one using
RSS and one using ATOM, then why can't you also offer a summary
feed and a full content feed?

  In the real world, web site developers also do something called 
  usability testing which catches all these issues before the site 
  ever goes live. For example, read this:
  http://www.useit.com/alertbox/2319.html
 
 I guess this is talking about real-world web site developers, 
 who develop sites for others for money. I would indeed expect 
 e.g. the secretariat to do this kind of testing for services 
 paid from our meeting fees. But Daily Dose is not one of 
 those services.

There is a grand old IETF tradition of asking for volunteers and
then randomly picking from that set. I see no reason why a web developer
who wants to put something useful on their resume would not ask
for volunteers from the audience of this site, and then run a usability
test with them.

--Michael Dillon

P.S. I thought this was part of the site redevelopment using Django
but perhaps I was mistaken.




RE: Putting requirements on volunteer tool developers (Was: Re: Daily Dose version 2 launched)

2007-11-07 Thread michael.dillon
 Please be careful about how you set requirements for things 
 like Daily Dose, or anything in the tools site, really. The 
 developers like Pasi are volunteers who are providing a great 
 service at no cost.

 Where did resumes come up?

Very often, people who volunteer to do something, don't really
do it for free, but in order to gain useful experience that they
can put on their resume and use to get better (or more interesting)
jobs in the future. Running a usability test along the lines that
Jakob Nielsen has written about is the kind of thing that would
look good on a web developer's resume.

 FWIW, I don't think a big volunteer test would have added any 
 value for a feature like this.

I agree, and that is why I provided the URL to an article by someone
who has done experiments and discovered that you get most of the
value by doing a SMALL usability test with 5 users, then fixing up
all their issues, and repeating with 5 more users. When you
get to the point of diminishing returns, stop repeating. If you
consider people's time to be a cost regardless of whether money
changes hands, then you get the best benefit-to-cost ratio by doing
this rather than just flinging it out to the community.

 And even if it did, do we have 
 a right to require Pasi to do something like that, given that 
 it would consume a lot of his time, delay the introduction of 
 the new version to the entire community, etc?

Nope, we would have no right to require that anybody implement
a good idea or follow best practice.

But given the number of messages to the list suggesting changes
and fixes, I thought it was reasonable to point out that there is
a better way, and it is documented here
http://www.useit.com/alertbox/2319.html
and it is not expensive to do either in money terms or in terms
of time spent.

 Finally, as someone else noted, we have a tool development 
 day coming up -- please join and add the features you need!

If I were able to travel to Vancouver, I would do so.

--Michael Dillon



RE: Putting requirements on volunteer tool developers (Was: Re: Daily Dose version 2 launched)

2007-11-07 Thread michael.dillon
 I still haven't seen YOU offering to do anything to help.

And I don't see you asking me to help.

 It seems you're saying that if I can't find the time to get 
 together a usability test panel before I tell people at large 
 about the new or updated tool I've put together, then I 
 shouldn't bother spending time doing any tools at all.
 
 Do you really mean that?

No, that is not what I mean.

Judging from the many responses on this list, you are doing a
mass usability test on the entire user population. I assumed,
quite rightly it seems, that you didn't know about how it was
possible to do usability testing very cheaply and very effectively.
If you are not interested in this, then why make such a fuss?
Just ignore me and forget about it.

Frankly I have no idea what this whole tempest is about and
I don't know why you want to drag this onto the open list.

--Michael Dillon






RE: Daily Dose version 2 launched

2007-11-07 Thread michael.dillon

 Gotta love this one. Nothing like developers eating their own 
 dog food and improving things based on their own experience. 

I have a long career behind me as a developer and I do eat my
own dogfood. However, I find that submitting my work to user
testing is a rather humbling experience which generally gives me
the opportunity to change that dogfood into haute cuisine.
I didn't say that developers should not eat their dogfood or
that eating their own dogfood is bad in any way. All I did say
was that usability testing is good, can be done easily and
cheaply, and results in making the developers look good.

  Frankly I have no idea what this whole tempest is about and I don't 
  know why you want to drag this onto the open list.
 
 It's a real shame that some people can't seem to appreciate 
 the hard work the tools team has been doing and recognize the 
 real value they have brought to the IETF organization. They 
 have done much to make the IETF a more effective organization.
 
 Speaking for many I'm sure, a big thanks to the tools team. 
 Keep up the good work and ignore those that somehow just don't get it.

I'll add my thanks to the tools team as well. I'm not saying
that their work is bad in any way. I apologize if my comments
appeared otherwise.

But it sure looks like their work *IS* underappreciated or they
would not have felt the way that they do, apparently due to one
ill-chosen word. I suspect that part of the reason for this
underappreciation is that they release their work before doing
usability tests, and therefore the bulk of the feedback that they
receive points out shortcomings in their work. But maybe
I'm wrong and there is some hidden controversy in the tools team
that I inadvertently stepped into.

I mentioned Python in my message because this page:
http://www3.tools.ietf.org/tools/ietfdb/wiki/VancouverSprint
led me to think that all tools were being developed in Python.
Apparently, this is not so since I've been informed that some
of the developers are still building tools in PERL.

--Michael Dillon





RE: Daily Dose version 2 launched

2007-11-06 Thread michael.dillon
 The second, and more important, reason is that AFAIK most 
 feed readers and aggregators wouldn't be able to render the 
 expanding yellowish boxes (which contain ID abstracts and 
 other details) anyway, because they rely on CSS and JavaScript.

Since when has anyone considered CSS and JavaScript to be
CONTENT!?!?!?

Generally, RSS and ATOM feeds are produced by software.
Software can do things like parse web pages and separate
the content from the markup and also shorten long content
items to the first 300 characters or so. 

In the real world, this kind of thing is Python 101 in that
beginners who have never before used a scripting language
somehow manage to set up their own RSS and ATOM feeds.
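As a sketch of that point (not the Daily Dose team's actual code; the names here are made up), separating content from markup and truncating it can be done with nothing but the Python standard library:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects only the text content of an HTML document, ignoring markup."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Called for every run of text between tags.
        self.parts.append(data)


def feed_summary(html, limit=300):
    """Separate content from markup, then shorten to the first `limit` chars."""
    extractor = TextExtractor()
    extractor.feed(html)
    text = " ".join(" ".join(extractor.parts).split())  # collapse whitespace
    return text if len(text) <= limit else text[:limit] + "..."


page = "<html><body><h1>Daily Dose</h1><p>New internet-drafts today.</p></body></html>"
print(feed_summary(page))
```

Something along these lines, fed into any feed-generation library, would produce plain-text item summaries that every reader and aggregator can render.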

In the real world, web site developers also do something
called usability testing which catches all these issues
before the site ever goes live. For example, read this:
http://www.useit.com/alertbox/2319.html

If you were to run three or four cycles of these cheap
5-user usability tests, fixing all issues before doing
the next test, then you would not see so much complaining
traffic on the IETF list.

--Michael Dillon



RE: Patents can be for good, not only evil

2007-10-30 Thread michael.dillon
 That was a waste of your time and money. Publication of those 
 inventions by you, at zero cost to you and others, would have 
 been sufficient to prevent someone else from trying to patent 
 them. Next time, get good advice from a patent lawyer on how 
 to achieve your goals without paying for a patent.

Perhaps he did consult a lawyer and learned that by patenting
them he now has the ability to sue any non-licensed implementors
in court, and can take away a share of any earnings that they 
made with their non-licensed implementation. That is not possible
merely by publishing first.

Law is every bit as complex as network protocols or application
programming. It is better to consult with an expert in the field
before making assumptions.

I am not a lawyer.

--Michael Dillon



RE: 2026, draft, full, etc.

2007-10-30 Thread michael.dillon
 I've suggested before that the advancement of a specification 
 is a highly overloaded action - it implies that the IETF 
 thinks it's a good idea, it implies that the specification is 
 sound, it implies it's well deployed.

Does the IETF have a way to communicate that a specification is 
a good idea, is sound, and is well deployed?
For that matter, does the IETF have a way to make that determination?

One way in which the IETF has conveyed additional info in the past
is by designating RFCs as part of a BCP or FYI series. Similar 
mechanisms could be used to convey that a specification is more
than just a plain old humdrum RFC.

The point of all this being, that if the IETF does communicate that
certain RFCs are of a higher class than others, it makes it harder
for others to misunderstand the meaning (or mislead others about the
meaning) of RFC status for some particular protocol.

--Michael Dillon




RE: Silly TLS Auth lobbying

2007-10-29 Thread michael.dillon
 - Many RFCs are *not* on the IETF standards track.

One of the commenters mentioned that even Informational RFCs are seen,
by the uninitiated, as having the force of a standard.

 - Any Experimental RFC is *not* on the IETF standards track.
So there is no endorsement by IETF in publishing such.

In fact, relegating an RFC with IPR concerns to Experimental status is
a kind of purgatory for the RFC. It gives opponents a chance to rally
implementation efforts around alternatives so that the Experimental RFC
never goes anywhere. But, if it turns out that there are no viable
alternatives or that the IPR owner is being fair and reasonable, then
the RFC can come out of Experimental status after the work has proven
itself. If the FSF and others understood that Experimental RFCs are a
form of purgatory, I think many of them would be satisfied.

 able to be published as an Informational RFC or Experimental RFC.
 Technology that is useful will be adopted if economically 
 sensible, whether in an RFC or not, whether made a formal 
 standard or not.
 By having an open specification, users can at least 
 understand the properties of the technology that is documented openly.

And people who dislike an IPR-encumbered design are free to publish an
alternative that satisfies the same needs. If they succeed in getting
their design published as an Informational or Experimental RFC, then
they have reached IETF feature parity.

I am in favor of publishing draft-housley as Experimental.

--Michael Dillon



RE: An example of what is wrong with the IETF's IPv6 documentation

2007-10-23 Thread michael.dillon
  The following thread from ARIN's Public Policy Mailing List is an 
  example of what is wrong with the IETF's documentation of IPv6.
 
 I look forward to seeing your draft.

Unfortunately, I was not involved in the creation of IPv6, nor did I
follow it as RFCs were released and then deprecated, so I don't fully
understand it myself. This is a piece of work that needs to be taken on
by some of the people who were immersed in the creation of IPv6.

--Michael Dillon



An example of what is wrong with the IETF's IPv6 documentation

2007-10-22 Thread michael.dillon
The following thread from ARIN's Public Policy Mailing List is an
example of what is wrong with the IETF's documentation of IPv6. People
are struggling to understand just how IPv6 works, not at the
implementation level of detail, but at a higher level. 

What is mandatory, what is optional? What are the basic principles, what
is the fundamental architecture?

Some people argue that IPv6 is merely IPv4 with more bits, therefore all
the rules and constraints of IPv4 must necessarily be applied. There is
no IETF document that provides the right kind of high-level view of
IPv6, and no document that provides guidelines for RIRs.

In the absence of such guidance, it appears as though people who plan to
allocate /120's to customers are right, and Brian Dickson is the
authoritative voice of the IETF who understands IPv6 most clearly.

Most people who make decisions about addressing plans in the RIRs or in
ISPs, do not have the time to wade through dozens of RFCs trying to
figure out what is NOT DEPRECATED and what is the IPv6 STANDARD.

I believe that the 6MAN group should add to its charter two documents:
IPv6 guidelines for RIRs and IPv6 Overview for ISPs.

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On 
 Behalf Of Brian Dickson
 Sent: 22 October 2007 22:42
 To: ARIN PPML
 Subject: Re: [ppml] IPv6 assignment - proposal for change to nrpm
 
 Leo Bicknell wrote:
  In a message written on Mon, Oct 22, 2007 at 09:31:36AM 
 -0400, Azinger, Marla wrote:

  3177 is a recommendation from 2001 and not a standar of any kind.
  
 
  I'm afraid many people are not looking at the right RFC's and/or 
  considering what all needs to be changed if the /64 
 boundary is to be 
  updated.  I'm fairly sure this is not an exhaustive list, /64 is 
  referenced in many locations in different IPv6 RFC's, many of which 
  are standards track.
 
  * http://www.faqs.org/rfcs/rfc2373.html
IP Version 6 Addressing Architecture
Status: Standards Track
 
Section 2.5.1: Interface IDs are required to be 64 bits long...
 
Section 2.5.7: Aggregatable Global Unicast Addresses
 
Section 2.5.8: Local-Use IPv6 Unicast Addresses
 

 RFC 2373 was obsoleted by 3531 which was obsoleted by 4291.
 2.5.8 is gone, but AGUA is still roughly the same (all but 
 000 require use of EUI-64 modified), and ditto 2.5.1
  * http://www.faqs.org/rfcs/rfc2374.html
An IPv6 Aggregatable Global Unicast Address Format
Status: Standards Track
 
Section 3.1 makes it clear the lower 64 bits are an interface
identifier for
 
I also point out section 3.4 makes a recomendation we 
 continue to use
a slow start method:
 
  It is recommended that
  organizations assigning NLA address space use slow 
 start allocation
  procedures similar to [RFC2050].
 

 2374 was obsoleted by 3587.
  * http://www.faqs.org/rfcs/rfc2450.html
Proposed TLA and NLA Assignment Rule
Status: Informational
 
Section 3: IPv6 Aggregatable Global Unicast Address Format
 

 This bit was itself in RFC 2374, which was obsoleted by RFC 3587.
  * http://www.faqs.org/rfcs/rfc2460.html
Internet Protocol, Version 6 (IPv6) Specification
Status: Standards Track
 
Section 3: Specifically referrs to 2373 (ADDRARCH)

 4291  obsoletes 3531 which obsoleted 2373.
 
 (I don't know why 2460 hasn't been updated with the new references...)
  * http://www.rfc-editor.org/rfc/rfc3177.txt
IAB/IESG Recommendations on IPv6 Address Allocations to Sites
Status: Informational
 
Section 3: Recomendations

 This was informational only, from 2001, and IMHO no longer as 
 relevant as it once was.
 
 So, by my count, that is 4291 and 3587.
 
 My IETF draft also lists 2464 (Ethernet),  4941 (privacy), 
 and 4862 (autoconfiguration).
 Most other IPv6 RFCs inherit from those few, and mostly the 
 choice is rather axiomatic.
 Two small changes, basically, in a backward-compatible 
 manner, is meant to be as minimally-disruptive as is possible.
 (Think surgery to remove a burst appendix or inflamed tonsils.)
 
 Anyone interested can see the draft at:
 http://www.ietf.org/internet-drafts/draft-dickson-v6man-new-autoconf-00.txt
 
 My draft even includes the necessary patch for Linux, about 
 17 lines in total, mostly the result of the necesssary line 
 length provisions for an RFC. (It was 10 lines
 originally.)
 
 Brian
 ___
 PPML
 You are receiving this message because you are subscribed to 
 the ARIN Public Policy Mailing List ([EMAIL PROTECTED]).
 Unsubscribe or manage your mailing list subscription at:
 http://lists.arin.net/mailman/listinfo/ppml Please contact 
 the ARIN Member Services Help Desk at [EMAIL PROTECTED] if you 
 experience any issues.

I don't think it is necessary to discuss the quoted text, just be aware
of how hard it is to pin down how IPv6 works and find authoritative IETF
documents to back up an assertion.

--Michael Dillon 

RE: Travel Considerations

2007-10-12 Thread michael.dillon
 Here is an interesting optimization problem: it turns out the 
 most polluting part of a conference is people taking jets to 
 fly to the conference.  Minimize that and the planet wins.  

Simple solution. Only allow people to attend if they take a train or bus
to the conference. Enforce this by including tickets for train or bus
(your choice) when registering to attend a conference. It makes
trans-continental conferences impossible (although you might add an
ocean liner option) but that is not necessarily a bad thing.  What is
wrong with having several different continental chapters of the IETF,
given that all decision-making is supposed to be done online, not in
meetings?

--Michael Dillon

P.S. If you look at a map of Europe, Prague is rather centrally located
and makes an ideal location for a green conference like this.




RE: [secdir] secdir review ofdraft-ietf-dnsop-reflectors-are-evil-04.txt

2007-10-03 Thread michael.dillon
  From: Danny McPherson [EMAIL PROTECTED]
 
  where's the authoritative source for who owns what prefixes
 
 This, one could imagining putting together.

The IETF has delegated this to the IANA and the RIRs. So far, the RIRs
have not done anything more than keep the antiquated whois directory
functioning. I use the word antiquated in reference to whois directory
services not because of the query protocol, but because of its origins
as a way of auditing network users in order to justify budget
allocations back in ARPANET days. Since that time, there has never been
a serious attempt to rethink the purpose and scope of the IP addressing
whois directory. 

One wonders whether the vastness of the IPv6 address space is sufficient
change for the IETF to write some guidance to the RIRs regarding the
purpose and scope of a whois directory. Or maybe some other method of
signalling the ownership of address prefixes.

  and who's permitted to originate/transit what prefixes?

The RIRs have taken a stab at this problem with route registry services
but it has never gotten significant support from ISPs. Since the RIRs
delegate short prefixes to ISPs, who then may delegate longer prefixes
in some way, the chain of permission to originate/transit, originates
with the RIRs. Is a new protocol needed for this to work right? Or is
there simply not enough demand?

Note that some RIRs such as RIPE, attempt to maintain a fairly detailed
route registry database as part of their whois directory.

 Second, you're talking about potentially orders of magnitude 
 more data: for each destination, there are worldwide likely 
 hundreds (or more) of ISP's which are likely to be viable 
 backbone transits. (By 'backbone transit', I mean ISP's/AS's 
 which are not even directly connected to the organization 
 which is the source or destination of the packets; e.g. 
 customer A is connected to ISP p which is connected to ISP q 
 which is connected to ISP r which is connected to customer B; 
 q is a 'backbone transit'.)

We have the technology to deal with orders of magnitude more data,
assuming that the task is delegated to servers, not to routers.

--Michael Dillon



RE: IETF solution for pairing cellular hosts

2007-09-26 Thread michael.dillon
Please refer to my first mail. There are three basic problems that I
see.

1. You don't want to publish your private information
2. Manual exchange is difficult 

Ridiculous! I give my phone to the other person and ask
them to dial my number and call me. Now we both have a
record of each other's phone numbers.

3. Face-to-face contact is not always available

I can't believe that I'm reading this in an email message.
Some people put their phone number in the signature block.
Others type it in when requested. Most mobile devices have
a way to sync a phonebook with a PC. Job done.

The proposed solution is the only one that addresses these three
problems. 

Seems to me that you have only one real problem, and we
can address that (number 1 above) by simply doing nothing.

--Michael Dillon

P.S. I did design a method vaguely similar to this for exchanging
contact information between mobile phone users that required the
support of the mobile network operator. 



RE: IPv6 RIR policy [was Re: IPv6 addresses really are scarce after all]

2007-09-20 Thread michael.dillon
 I'm glad to hear that RIRs can respond to users' concerns, 
 but that doesn't change the fact that they're second-guessing 
 an IETF decision and that other things in the IPv6 
 architecture are dependent on that design decision.

Given that the IETF has not released any documentation of the decisions
in question, other than one RFC that contains deprecated material, I
think that the RIRs have every right to second-guess the IETF. If anyone
thinks that the RIRs second-guessing results in the wrong decisions,
then perhaps they will take the time to write the missing RFC with
guidance for RIRs. It wouldn't hurt to issue another RFC with guidance
for network operators, even if it covers a lot of the same ground as the
RIR guidance.

 That's why
 IETF took too long to become aware of the inherent problems 
 associated with NATs and too long to speak out about those 
 problems, and has said too little about them. 

It's that last point that I have a problem with. If Supreme Court judges
can't come to a consensus, then at least they will explain why, by
writing a dissenting opinion. In my opinion, if there is a problem with
reaching consensus on important issues, then the IETF should pressure
both sides of the issue to write a draft explaining the problem area.

 It's also why 
 it never has developed a viable transition path away from NAT 
 and toward native IPv6.

The IETF has effectively specified that IPv6 NAT devices are the right
way to do this by not defining a standard set of functionality for an
IPv6 gateway for connecting an IPv6 network to the IPv6 Internet. By not
specifying how it is to be done, the IETF is giving carte blanche for
anyone to solve the problem in any way that they please.

--Michael Dillon


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


RE: mini-cores (was Re: ULA-C)

2007-09-20 Thread michael.dillon
 Downloading over IPv6 is still almost always slower than over 
 IPv4, but for day-to-day stuff the performance difference 
 isn't an issue with native IPv6 connectivity (for me). 6to4 
 is a crapshoot, it can be reasonable or it can completely 
 fail, with everything in between.  

That will improve as more people set up 6to4 relay services:
http://www.getipv6.info/index.php/First_Steps_for_ISPs#Setup_A_6to4_Relay

 But it's never going to be better than native IPv4, obviously.

Not so. Native IPv4 will route packets by choosing the best path.
Part of that path decision will be made via BGP which allows
network administrators to remove certain paths from the list
of possibilities. When a 6to4 relay is used, the destination
IPv4 address is that of the tunnel endpoint, not the final
destination. It is entirely possible that a tunnel endpoint
may take the packets into an AS which would not otherwise be
used and which IS a better path than would be chosen by pure
IPv4 routing. Back in the 1990s some shrewd American ISPs used
this fact to get free transit from certain ASNs because the
traffic to a tunnel endpoint qualified under the peering agreement.

--Michael Dillon



RE: ULA-C (Was: Re: IPv6 will never fly: ARIN continues to kill it)

2007-09-20 Thread michael.dillon

 Does Balkanization of the Internet mean anything to you?

Yes.
NAT, BGP route filtering, bogon lists, firewalls, Community
of Interest extranets such as SITA, Automotive Network Exchange,
RadianzNet. And let's not forget the IP VPN services that companies
like Verizon sell as a flagship product.

It is probable that there are more hosts today in the Balkanized
portions of the Internet than on the public portions.

--Michael Dillon

P.S.
Not to mention sites that are more than 30 hops away from each
other. I've seen traceroutes that go up to 27 hops so I imagine
that the hopcount diameter is once again becoming an issue
as it was prior to 1995.



RE: Representation of end-users at the IETF (Was: mini-cores (was Re: ULA-C)

2007-09-20 Thread michael.dillon
 Over the last ten years, I explained a zillion times to my 
 management, workmates, etc. why e-mail addresses cannot 
 contain accented characters, only to be asked when the IT 
 department of the organization is going to fix it. This is 
 the archetypical example of an issue that has been known 
 since the days of RFC821/822. Yet, work to address this has 
 only started a year ago, although I am conscious there were 
 some intermediate step needed, like Unicode.

For this to work, we need a way to display that address on
devices which do not have the complete set of Unicode glyphs
installed. And we also need a way to display a representation
of the address that can be used to unambiguously input the
address on a device which does not understand the full set
of Unicode glyphs.

This was discussed a couple of days ago in this message
http://www1.ietf.org/mail-archive/web/ietf/current/msg47925.html
regarding deprecating RFC 1345 because it is the wrong solution
to the problem.

In fact, it may be necessary to attach a language tag (defined 
in RFC 4646 and 4647) to these addresses in order to make this
fully possible. For instance, there is a Norwegian man's name
which is usually written Hakon in English. In Norwegian, the 
letter a is written with a small ring attached to the top. This
ring indicates that the name is pronounced more like Hokon than
Hakon. Nevertheless, it is standard for people to use a double a
to represent this glyph (a-ring) when writing Norwegian on
devices which do not have the a-ring glyph. But Haakon is even
more misleading to English eyes.
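A minimal sketch of why such fallbacks only work in one direction, using Python's standard unicodedata module (the function name is made up for illustration):

```python
import unicodedata


def ascii_fallback(name):
    """Crude display fallback for devices without accented glyphs:
    decompose to NFD and drop the combining marks, so the a-ring
    becomes a plain a. The pronunciation hint carried by the ring is
    lost, and language-specific conventions like the Norwegian
    double-a cannot be produced by any generic per-glyph mapping."""
    decomposed = unicodedata.normalize("NFD", name)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))


print(ascii_fallback("H\u00e5kon"))  # a-ring stripped to a plain 'a'
```

Going the other way, from the stripped form back to the original glyphs, is exactly the ambiguous problem described above, which is why a language tag on the address would help.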

In order for an email display and entry device to fully make sense
of addresses which contain a glyph not available on the device,
it may be necessary to know both the language tag of the device
user, as well as the language tag of the address.

I'm sure that many people are working on this problem, but most
of this work is happening outside of the IETF. Perhaps even in 
commercial ventures like Mozilla's new email company,
http://www.mozilla.com/en-US/press/mozilla-2007-09-17.html

--Michael Dillon




RE: Call for action vs. lost opportunity (Was: Re: Renumbering)

2007-09-19 Thread michael.dillon
 Are there any documents that give adoption instructions for 
 what are expected to be common scenarios?  These would be 
 step-by-step cookbooks, with explanations for when they apply 
 and when they don't?

There are lots and lots of documents in lots and lots of places. Many of
them were written years ago and include deprecated stuff, such as 6bone.
They are written from numerous points of view, i.e. enterprise, academic
campus, Research & Education Network, European, etc.

 Given the adoption hurdles IPv6 has been showing, then 
 efforts to both make it easy and publicize/document that it's 
 easy could be helpful.

Yes. Given that the IPv4 exhaustion date is now within the planning
horizon of ISPs, ARIN has set up a wiki at http://www.getipv6.info to
document how to use IPv6. Since ARIN's audience is ISPs, this is taking
the ISP point of view to the problem.

For instance, if you are an end user, 6to4 is something that you
configure to dip your toes into IPv6 and see how it works without touching
existing IPv4 infrastructure. But for an ISP, 6to4 is a relay service
that you configure in several routers to ensure that an end user's early
experience with IPv6 is more positive and less likely to include high
hop counts and high latency caused by trombone-shaped tunneling
architecture.

IMHO the type of document that Dave is talking about requires many
authors. A wiki is well-suited to creating such documents. But it needs
contributors who know something about IPv6 who will write up some of
this cookbook material, or dust off old papers and presentations and
copy the key bits, with corrections, into the wiki.

--Michael Dillon



RE: ULA-C (Was: Re: IPv6 will never fly: ARIN continues to kill it)

2007-09-19 Thread michael.dillon
 the concern i heard wrt ULA-G (and therefore wrt ULA-C upon 
 with -G is based) is that the filtering recommendations in 
 RFC 4193 were as unlikely to work
 as the filtering recommendations in RFC 1597 and RFC 1918.  

Given the overwhelming success of RFC 1918 it only requires a very small
percentage of sites leaking routes to make it seem like a big problem.
This is normal. When you scale up anything, small nits happen frequently
enough to become significant issues. But that is not a reason to get rid
of RFC 1918.

The fact that the filtering recommendations of ULA-C and ULA-G have the
same flaws as RFC 1918 is not a sufficient reason to reject them
wholesale.

 i realized in 
 that moment, that ULA-G (and therefore ULA-C) is not an end 
 run around PI space, it's an end run around the DFZ.  some 
 day, the people who are then responsible for global address 
 policy and global internet operations, will end the tyranny 
 of the core by which we cripple all network owners in their 
 available choices of address space, based solely on the 
 tempermental fragility of the internet's core routing system. 
  but we appear not to be the generation who will make that leap.

I think that even today, if you analyze Internet traffic on a global
scale, you will see that there is a considerable percentage of it which
bypasses the core. Let the core use filters to protect the DFZ because
the DFZ is no longer necessary for a workable Internet.

--Michael Dillon



RE: ULA-C (Was: Re: IPv6 will never fly: ARIN continues to kill it)

2007-09-19 Thread michael.dillon

 what I read into it is... the future internet might not be 
 structured as it is today, we might get a internet on the 
 side which don't touch the DFZ at all. Mostly regionbased traffic...

WRONG! The future Internet will be structured the SAME as it is today,
mostly region-based traffic. The main exception to that rule is when
there are countries in different regions which share the same language.
For instance there will always be lots of interregional traffic between
France and Canada, or between Portugal and Brazil.

People who are in the IETF have a warped view of reality because we all
speak English, and since there are English speaking countries in North
America, Europe, southern Africa, and the Asia-Pacific region, it seems
like everything is centralised. In addition, English is the 21st-century
lingua franca, so it will always drive a certain level of international
traffic to any country, but more so to countries like Norway, where
people often learn to speak English better than native English speakers.

Go to a country like Russia and it's a different story. Few people learn
English or any other language well enough to use it. There are no vast
hordes of English-speaking tourists like in Spain or Italy. But there is
still a vast Internet deployment for the most part separate from the
English-speaking Internet. There the major search engines are Rambler
and Yandeks. Internet exchanges are located in Moskva, Sankt Peterburg,
Nizhniy Novgorod, Samara, Perm', Ekaterinburg, and Novosibirsk. 

It's a basic fact of economics that the majority of transactions at any
point on the globe will always be with nearby points. That's why the USA
buys more goods from Canada than from any other country, in spite of the
fact that Canada has 1/10th the population. Communications volume follows
transaction volume, and therefore, the only reason that the Internet was
not more regional a long time ago, is that the process of shifting
communications from legacy networks to the Internet is a slow process.

--Michael Dillon



RE: RFC-1345 mnemonics

2007-09-17 Thread michael.dillon
   Folks who haven't been involved in multi-lingual 
 computing might not realise quite what a minefield this whole 
 area is.  Approximately speaking, choices about how to 
 represent glyphs/letters and how to encode them usually have 
 some inherent linguistic bias. 

In addition, many applications of encoding glyphs need to consider
direction. It is one thing to transliterate another writing system into
ASCII for the purposes of providing a readable format. It is another
thing to have an encoding that works in the other direction, so that
some form of ASCII input can be accurately translated into a set of
UNICODE glyphs. The former (output transliteration) is not so hard to
achieve; e.g. any Russian speaker will understand Ya ne znayu chto
skazat' and any Chinese speaker who has learned a Roman-alphabet language
should be able to understand Wo mi-lu le. Of course Ja nye znaju shto
would struggle with long texts that lack the accents that official Hanyu
Pinyin uses to mark the four tones. I think it is conceivable to supply
an official output transliteration to ASCII for all Unicode glyphs and
much of this work has already been done by language groups, for instance
TISCII transliterates Tamil letters into ASCII.

Translating from ASCII into Unicode is far more complex and probably
impossible without introducing inter-glyph gaps and weird accent codes
like RFC 1345 did. Many of the input methods that are used for typing
foreign language glyphs into a computer are actually Input Method
Editors that have inbuilt dictionaries and help users choose one of a
series of possible glyphs. For instance, Japanese can represent a single
syllable fu with 3 or more glyphs. To choose the right one you need to
know if the entire word is borrowed from a foreign language, and if not,
then there is still the choice whether or not to use a Hiragana glyph,
or choose one of the Kanji glyphs borrowed from Chinese. There are at
least two Kanji glyphs with an ON-reading of fu. In spite of all this
complexity on input, there is a standard transliteration for output. 
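That asymmetry can be sketched with a toy mapping table (the entries below are illustrative, not any official transliteration standard):

```python
# Toy output-transliteration table for a few Cyrillic letters.
# Glyph -> ASCII is a simple per-character table lookup; the reverse
# direction is ambiguous (does ASCII "yu" mean the single glyph "ю",
# or two letters that happened to transliterate as "y" and "u"?),
# which is why input needs a dictionary-backed Input Method Editor.
TO_ASCII = {"з": "z", "н": "n", "а": "a", "ю": "yu", "я": "ya", "е": "e"}


def transliterate(word):
    """Output transliteration: fixed table, no context needed."""
    return "".join(TO_ASCII.get(ch, ch) for ch in word)


print(transliterate("знаю"))  # the word from "Ya ne znayu" above
```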

 People who use languages that 
 aren't optimally encoded in some representation tend not to 
 be very happy with the non-optimal encodings their language 
 might have been given.

That is the key to this whole thing. What is the use-case for these
ASCII encodings? If an encoding is not usable by native speakers of the
language using the original glyphs, then I can't see any worthwhile use
case at all. UTF-8 works for any machine-to-machine communication.
Existing input methods work for converting from ASCII to glyphs but
these are not simple mapping tables. And existing transliteration
systems work for generating readable ASCII text from glyphs, although
they may not be fully standardised. For instance a Russian named Yuri
(transliterated according to English pronunciation rules) will have his
name written as Iouri on his passport because that is transliterated
according to French pronunciation rules since French was the former
lingua franca of international diplomacy.

RFC 1345 should be deprecated because it misleads application developers
e.g. the Lynx case, and the work on transliteration and input methods is
being done more effectively outside the IETF.

--Michael Dillon



RE: Call for action vs. lost opportunity (Was: Re: Renumbering)

2007-09-16 Thread michael.dillon
 I'm not particularly interested in getting into a Yes it is! 
 No it isn't! debate.  I will merely point out that IPv6 has 
 been implemented and is being deployed as IPv4 with more 
 bits. 

If more people would get involved in developing a best practices
document for IPv6, perhaps through a wiki like www.getipv6.info then
this would be less of a problem.

We need some document to point to in order to explain why IPv6 is not
just IPv4 with more bits.

--Michael Dillon



RE: Call for action vs. lost opportunity (Was: Re: Renumbering)

2007-09-16 Thread michael.dillon
  I wonder if even 
 writing a BCP about this even makes sense at this point, 
 because the application writers (or authors of the references 
 the application writers use) may never see the draft, or even 
 be concerned that it's something they should check for.

I think that it does make sense to write a draft because the growing
effort to roll out IPv6 is causing lots of people to review their
applications and how they communicate on the network. I would expect
that some percentage of those applications would end up being changed if
such a draft were available.

--Michael Dillon



RE: IPv6 will never fly: ARIN continues to kill it

2007-09-14 Thread michael.dillon
  From: Tony Li [EMAIL PROTECTED]
 
  As a practical matter, these things are quite doable.  
 
 Tony, my sense is that the hard part is not places *within one's own
 organization* where one's addresses are stored, but rather in 
 *other organizations*; e.g. entries in *their* firewalls. Can 
 those with experience confirm/deny this?

In fact, in one of the global IPv4 networks that we operate, ACLs are
managed just as Tony describes. However, when we need to add/change
ACLs, it takes roughly 90 days to roll it out for two reasons. One is
that we cannot risk changing all routers at one time, so we spread the
work over two or more weekends. But the major piece of work is getting
the change in customer firewalls. This requires notification, planning
on their side, scheduling of their own change windows, etc. All of the
human effort involved in doing this has real costs.

At the same time, we and our customers will instantly make changes to
routing in our networks without any notification or planning or
scheduling of change windows. The difference is that routing is handled
by BGP (and OSPF) which everybody trusts to do the right thing. A lot of
smart people have put a lot of work into building routing protocols that
are reliable. The same amount of brainpower and work has not been
applied to ACL management in routers or firewalls. 

--Michael Dillon



RE: Renumbering

2007-09-14 Thread michael.dillon
   I remember Bill Clinton describing trying to develop an Internet
standard like 'nailing jello to the wall'.  
 
Actually he said that trying to censor the Internet was like trying to
nail jello to the wall. See the press release from the U.S. Embassy in
China where he made the remark in the context of the Great Firewall of
China.
 
http://www.usembassy-china.org.cn/press/release/2000/clinton38.html
 
With hindsight, and knowing the many ways in which people have subverted
the Great Firewall, he was quite right. In the IETF context, I think it
proves the rule of 'be conservative in what you send, be liberal in what
you accept', because the jello comes from the way people actually use
IETF technology in the real world.
 
The IETF is incapable of designing a solution to a problem. We can only
design protocols which create possibilities for the real solution
designers to leverage. Engineers like to have end-to-end control of a
problem in order to design end-to-end solutions but that is often not
possible in the real world, and especially not possible when your work
is restricted to the vicinity of layer 3.
 
--Michael Dillon 


RE: Call for action vs. lost opportunity (Was: Re: Renumbering)

2007-09-14 Thread michael.dillon
 given that NATs violate the most fundamental assumption 
 behind IP (that an address means the same thing everywhere in 
 the network), it's hardly surprising that they break TCP.

Has RFC 2526 been deprecated? 

-- Michael Dillon

P.S. RFC 2526 - Reserved IPv6 Subnet Anycast Addresses



RE: IPv6 will never fly: ARIN continues to kill it

2007-09-14 Thread michael.dillon
   Actually it is.  You just are not willing to do it.  It is
   100% possible to do this automatically.  It just requires
   the chains of trust to be setup.
  
   If you live in a dynamic world, you setup those chains.
 
   Just don't say that renumbering can't be automated because
   it can.

You repeatedly wave your hands and say that it can be done but you
refuse to provide a single reference to a protocol or implementation
which makes this possible. The reason people are arguing against you is
because we do not know of a case study where this is successfully being
done. We don't know of any software which is successfully being used to
renumber firewalls.

Please point us to specific references, conferences, case studies, RFCs,
etc.

--Michael Dillon



RE: Symptoms vs. Causes

2007-09-13 Thread michael.dillon
  and IMHO, any solution that doesn't let the user type his password 
  into some Web form is a non-starter, both for reasons of backward 
  compatibility and because sites (quite
  legitimately) want to provide a
  visually attractive interface to users which is consistent 
 across all 
  platforms (for support reasons).
 
 This may well be true. 
 
 However, I'm not aware of any technique which both meets this 
 constraint and is phishing resistant.

The bank issues a SecurID token (or an SD chip with a one-time pad) and
requires a six-digit PIN to be entered which cannot be reused. In order
to get to the bank in the first place, the user must enter a URL that is
printed on their monthly statement. It changes every month and no other
URL may be used.

So much for typing. How about selecting password letters from dropdown
boxes, or from an image map with scrambled letters sent to the
browser?

My bank requires my surname, a customer number that is not the account
number, a 5 digit pin code typed in, and a challenge response where the
challenge is two random letter positions from my secret word, and the
response is two letter selections from two dropdown boxes.

No protocols needed.
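
The partial-password challenge described above can be sketched in a few
lines of Python. This is an illustration only, with a hypothetical secret
word; real bank deployments combine it with the other factors mentioned:

```python
import secrets

def make_challenge(secret_word: str, k: int = 2):
    """Pick k distinct 1-based letter positions from the secret word."""
    rng = secrets.SystemRandom()
    return sorted(rng.sample(range(1, len(secret_word) + 1), k))

def check_response(secret_word: str, positions, letters) -> bool:
    """True when each supplied letter matches the challenged position."""
    return all(secret_word[p - 1] == ch for p, ch in zip(positions, letters))

word = "tamarind"                       # hypothetical secret word
challenge = make_challenge(word)
response = [word[p - 1] for p in challenge]
print(challenge, check_response(word, challenge, response))
```

Because the server only ever asks for two letters at a time, a single
phished session reveals only a fraction of the secret, which is the point
being debated above.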

It would be interesting if someone submitted a best practices draft for
banking services over the Internet, which documented how banks could
prevent or avoid phishing. Such a draft could say something like:
never send customers an email with URLs for a site which requires login.


--Michael Dillon



RE: IPv6 will never fly: ARIN continues to kill it

2007-09-13 Thread michael.dillon
  Oh man, that's rich.  Do you actually believe that?
 
   If you design the network for IPv6 and not just copy the
   IPv4 model.  If you use the technology that has been developed
   over the last 20 years, rather than disabling it, yes it is
   possible.

OK, how is it possible to automate the renumbering of my firewall
entries which contain IPv6 addresses and prefixes?

How is it possible to automate the renumbering of my extranet business
partners' firewalls, which also contain some of my IPv6 addresses and
prefixes?

How do I automate the renumbering of router ACLs in my own IPv6 network?

These may sound like purely theoretical questions, but I do know of many
instances where these kinds of things do need renumbering when an IP
address prefix changes.

Please don't say DEN, WBEM, etc.
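
For concreteness, the purely mechanical part of such a renumbering --
rewriting addresses under an old prefix into a new one while keeping the
host bits -- is only a few lines of Python. This is a sketch with
illustrative prefixes; everything the thread is actually arguing about
(vendor config formats, chains of trust, customer change windows) is
exactly what it leaves out:

```python
import ipaddress
import re

IPV6 = re.compile(r"[0-9a-fA-F:]*:[0-9a-fA-F:]+")

def renumber_line(line: str, old: str, new: str) -> str:
    """Rewrite any IPv6 address under prefix OLD into prefix NEW,
    preserving the host bits. Addresses outside OLD are untouched."""
    old_net = ipaddress.IPv6Network(old)
    new_net = ipaddress.IPv6Network(new)

    def swap(match: re.Match) -> str:
        try:
            addr = ipaddress.IPv6Address(match.group(0))
        except ValueError:
            return match.group(0)        # colon-ish token, not an address
        if addr not in old_net:
            return match.group(0)
        offset = int(addr) - int(old_net.network_address)
        return str(ipaddress.IPv6Address(int(new_net.network_address) + offset))

    return IPV6.sub(swap, line)

print(renumber_line("permit ipv6 2001:db8:aaaa::17 any",
                    "2001:db8:aaaa::/48", "2001:db8:bbbb::/48"))
# permit ipv6 2001:db8:bbbb::17 any
```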

--Michael Dillon



RE: Symptoms vs. Causes

2007-09-13 Thread michael.dillon

  So much for typing. How about selecting password letters 
 from dropdown 
  boxes, or from an image map with scrambled letters that was sent to 
  the browser.
 
 Sorry, what about these? They have essentially the same 
 security properties as cleartext passwords.

One would hope that all communication from the browser to the server is 
encrypted, as with SSL, regardless of whether passwords go in cleartext or whether 
there is some Javascript to encrypt them first. In that case, the big issue is 
keylogging software that has been widely installed by malware distributed by 
phishing organizations. Keystroke loggers do not look at mouse clicks.

 Second, it doesn't take that many phishing attacks to extract 
 most of the secret word.

That depends on the length of said word/phrase. Also, I can see how naïve people are 
fooled by the first email, but surely the percentage who would click on each 
successive email decreases.

At the end of the day, phishing is a social problem, not a technical problem. 
It can't be solved by purely technical means. All technical solutions to 
phishing involve some form of behavior change.

You've mentioned man-in-the-middle attacks. Such attacks cannot be prevented if 
the user interface requires cleartext inputs. Remember, this is not like 
typical cryptographic MITM attacks where the MITM receives an encrypted stream 
and is able to decrypt it, modify it, and re-encrypt it. In this case, the user 
asks the MITM to provide a web page and associated Javascript. While the look 
of this page will be identical to the bank's page, the functionality does not 
need to be identical. It can send everything in cleartext to the MITM, who then 
emulates the human user.

To defeat MITM you need a secure channel, but how can you establish a secure 
channel to a human being who has already defeated the bank's security system by 
enlisting the phishing organization as their agent?

I would rather see the focus of effort go to building simple embedded computer 
systems that one can plug into a USB port and rely on to establish an encrypted 
channel to the bank. That way, the human user does not play any significant 
role in establishing the channel of communication and cannot subvert the 
process.

--Michael Dillon



RE: [Ietf-http-auth] Re: Next step on web phishing draft(draft-hartman-webauth-phishing-05.txt)

2007-09-10 Thread michael.dillon
 Hmm... I'm still not sure what you're trying to say. My point 
 is that there shouldn't be any consensus calls by anyone on 
 the ietf-http-auth mailing list. 

Why not? Does the IETF have a patent on IETF processes?

 It's not a WG.

Why not?

Of course, you probably mean that any consensus calls on the
ietf-http-auth mailing list would not be considered IETF consensus calls
because that list is not formally an IETF WG and is not formally
following all IETF processes.

In any case, a WG is not supposed to be formed unless there is already
some work done and that work has reached some consensus among interested
parties. One would expect that people working on a draft would try to
use some of the IETF process in order to get to the point of either
publishing a draft or forming a WG.

Unless of course, the IETF has some exclusive intellectual rights in
running WGs and having consensus calls...

 I have no problem with Sam soliciting opinions in his 
 document on any forum of his choice. What I object to is the 
 notion--again implied in your above comments--that this 
 document has some formal standing.  As I said initially, this 
 is an individual submission that failed to obtain consensus. 
 As such it doesn't need shepherding or shepherding ADs, any 
 more than any other individual ID.

Really, this is irrelevant. Either there is or is not a group of people
who have done some work and reached some consensus that the work needs
to be completed in the IETF. If there is work and consensus, then even
if it was published and rejected as an individual draft, there is no
reason for the work to stop and the people to go away.

It makes more sense to channel the work appropriately rather than
rejecting it and castigating the group. We all know that the Internet
has many security issues made worse by the immense scale of the network
in this day and age. There is an entire IETF area dedicated to Security
with 17 or 18 WGs in it. It seems to me that we should be advising the
people working on this draft to take their work to the Security ADs and
see if it fits into an existing WG or whether a new WG could be created.
The process nits are entirely irrelevant to the work and do not advance
the IETF in any way.

Personally, I would like to see some more criticism of the fact that
this draft is about Phishing, a symptom of security problems, rather
than about strengthening a weakness in Internet security. It is entirely
possible to solve the phishing problem without strengthening the
network, possibly even introducing new weaknesses in the process. Being too focused
on one symptom is not a good way to approach security. Indeed, it is
entirely possible that the solution to phishing lies with the banking
system, not with the Internet or IETF. 

--Michael Dillon



RE: [Ietf-http-auth] Re: Next step on web phishing draft(draft-hartman-webauth-phishing-05.txt)

2007-09-10 Thread michael.dillon

  So you also want a different word to shepherding?
 
 No. I want there not to be an implication that the 
 development of this document is a formal activity of the IETF.

Let me give you a short lesson from IETF 101.
If the name of a draft contains ietf as the second component, or the
name of an IETF WG or the name of one of the IETF bodies (iab, irtf,
etc...) then it is a formal activity of the IETF. Otherwise it is not.
Since this ID contains hartman as the second component, then it is clear
that this is not a formal IETF draft but merely a whim of someone named
hartman, possibly with some co-conspirators.

 What I'm objecting to is what appears to be the AD-sponsored 
 formation of some design team which has not gone through the 
 BOF-WG process.

Have you verified with the AD in question that they were wearing their
AD hat at the time that they sponsored the formation of this design
team? ADs still have some freedom to act outside the IETF.

--Michael Dillon




RE: [address-policy-wg] Re: IPv6 addresses really are scarce after all

2007-08-31 Thread michael.dillon
 With all due respect, even if you assume a home with ten 
 occupants, a few hundred subnets based on functions, and 
 enough sensor-type devices to estimate several thousand of 
 them per occupant and a few thousand more per room, 2**64 is still a
 _lot_ of addresses.  

This is hyperbole. All IPv6 subnets have the same minimum number of
addresses (2**64) regardless of where they are used.

  But I don't think hyperbole 
 helps the discussion.

I agree. In any case, it doesn't make sense to discuss IPv6 in terms of
hostcounts. It makes more sense to discuss numbers of subnets or numbers
of aggregation levels.

If a private home with two occupants and one PC builds out an in-law
suite for one mother-in-law with one PC, then it still makes sense to
have at least two subnets in that private home, i.e. at least one level
of aggregation. Hostcount is irrelevant. Note that if both mother-in-law
and homeowner install four or five home media devices, the subnetted
architecture will work better than a /64-per-home scenario.

Now that we have shown subnetting is useful in a private home, it is
clear that a /64 per home is not enough.

It still leaves open the question of whether a /48 is too much, i.e. too
many subnets and/or too many levels of aggregation. If a /48 is not too
much, then the IETF should issue guidance that states that. If some
prefix length between /48 and /64 is OK under certain circumstances then
the IETF should issue guidance which states that. I still have not seen
any clear indication that there is a negative technical impact of
assigning a /56 per home. To date, the only clear technical issue I have
seen mentioned for subnet prefixes longer than /48 is that if they are
not on a 4-bit hex nibble boundary, it makes IPv6 PTR delegation more
complex.
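
The nibble-boundary point is easy to see with Python's `ipaddress`
module: a /56 occupies a whole number of hex nibbles and so maps onto a
single ip6.arpa delegation, while a /62 does not. A sketch only; the
prefixes are illustrative:

```python
import ipaddress

def ptr_zone(prefix: str):
    """Return the single ip6.arpa zone delegating PREFIX, or None when
    the prefix length is not on a 4-bit nibble boundary (in which case
    several zones, or other workarounds, are needed)."""
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen % 4 != 0:
        return None
    nibbles = net.network_address.exploded.replace(":", "")[: net.prefixlen // 4]
    return ".".join(reversed(nibbles)) + ".ip6.arpa"

print(ptr_zone("2001:db8:1200::/56"))  # 14 reversed nibbles, one clean zone
print(ptr_zone("2001:db8:1200::/62"))  # None: /62 is not nibble-aligned
```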

--Michael Dillon






RE: IPv6 RIR policy [was Re: IPv6 addresses really are scarce after all]

2007-08-30 Thread michael.dillon
 1. This is NOT ARIN's decision to make, nor that of any of 
 the other RIRs, because the /48 decision is not independent 
 of many other design decisions in IPv6.

Show me the document where this is explained.

I'm not disagreeing with you, I am just saying 'show me the document',
because if you can't show it to me, then you are wrong. The IETF has not
made any design decision until it is in an RFC.

 2. If ARIN or any of the other RIRs have concerns about an 
 IETF design decision, they need to express that to IETF and 
 ask IETF to fix it.

ARIN, like the IETF, is mainly a bunch of individuals. I, as an
individual with a history of involvement in ARIN (I was a founding
member of the ARIN Advisory Council), have already come to this mailing
list, which ostensibly is frequented by individuals who have a history
of involvement in the IETF. I have already asked the IETF to fix this.

Clearly you do not believe that a request from an individual is
sufficient. Since the IETF seems to be defined by its documents, I
wonder which RFC I can refer to in order to find out the correct formal
process for ARIN to follow in order to ask the IETF to fix the problem?

I will admit that I could attempt to fix this by writing an ID myself,
but since I was not involved in the IP-NG work in IETF, I really don't
know why the IPv6 architecture is what it is. And since I learned about
IPv6 mainly by reading RFCs, I worry that I may still have
misconceptions hanging around from before various things were
deprecated.

 The danger in my going to them personally is that it will weaken or
 delay the communication that needs to occur.   By insisting that I do
 this before either ARIN or IETF takes any action there is a 
 far greater chance that the problems in both ARIN and IETF 
 will not get fixed. 

The fact is that we need a document explaining the IPv6 architecture as
it stands today. A document that can guide the RIRs but also the many
IPv4 network designers who are being forced to architect or design their
first IPv6 network. A compact document that fairly states the IETF's
intent with regard to IPv6. But most importantly of all, this document
must answer objections and common misunderstandings. Failure to do this
last item is a failure to communicate, which will cause many people to
waste time and waste a lot of money. You can't answer objections and
correct misunderstandings unless you participate in forums *OUTSIDE* the
IETF where the objections and misunderstandings arise.

If this delays communications a bit, that is a good thing if it also
results in better quality of work.

--Michael Dillon



RE: IPv6 RIR policy [was Re: IPv6 addresses really are scarce after all]

2007-08-30 Thread michael.dillon
 A /48 per 'site' is good, especially in the case of 
 businesses, for home-usage though, most very likely a /56 
 will be more than enough. As such IMHO having 2 sizes, one 
 business, one homeuser, would not be a bad compromise, 
 otherwise the really large ISP's, eg the ones having multiple 
 million customers, would need multiple million /48's and then 
 the address space consumption suddenly really goes really 
 fast. Having /56's there would slow that down a little bit. A 
 /56 is still 256 /64's, and I have a hard time believing that 
 most people even on lists such as ARIN ppml or the various 
 IETF ones will ever configure that many subnets  at home.

I would still like to get to the bottom of this issue and understand
what things a /56 assignment size will break. I strongly suspect that
these things will not be of great importance to networks in the home,
but I would still like to know what they are and document the issues
clearly.

Also, I strongly suspect that the IETF did not consider the situation of
in-home networks in great detail when they reached the conclusion of /48
for all sites, because at that time, there were few, if any, companies
planning Internet deployments on the same scale as the phone system. I
suspect that we have grown things a bit faster than was expected.

--Michael Dillon



RE: IPv6 RIR policy [was Re: IPv6 addresses really are scarce after all]

2007-08-29 Thread michael.dillon
 I'd encourage folk to read the entire IPv6 policy to get a 
 more complete picture. 

 And, for those of you worried about end users being given a 
 /64 (or worse), from a registry perspective, it is 100% 
 acceptable to give every end site a /56. That is what the 
 above wording means, and that is what the RIRs expect LIRs to 
 do.

That is *NOT* what the quoted wording means. You should read the entire
IPv6 policy to get a more complete picture. In your quoted text, /56 is
used as the base unit of measure because another area of policy suggests
that it is OK to assign a /56 to a small site. The /56 was added at the
request of ISPs that have large numbers of consumer subscribers (cable
ISPs), since they pre-address their infrastructure for every home the
cable passes. Using a /48 would chew up /32s at an enormous rate
compared to ISPs using telecom circuits. In the same section where the
various sizes are discussed, it also states that it is acceptable to
assign every end site a /48. Nowhere does it state that a /56 is
acceptable for anything but the smallest sites (private homes).

 The RIRs are not trying to conserve in the same sense as 
 for IPv4. 

The RIRs are also not trying to not conserve. In fact the RIRs are
merely responding to external requests for policy changes. After much
discussion, some policy changes reach rough consensus, after which they
get a bit of wordsmithing (sometimes sloppily done) and are enshrined in
policy. The RIRs lurch from one direction to another like a brain-damaged
drunkard. A better quality of engagement from the IETF in the
form of an RFC with IPv6 Guidance for RIRs would greatly improve the
situation.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-28 Thread michael.dillon


 We shouldn't be surprised that a one size fits all approach 
 (where home users get the same amount of space by default as an IBM or
 Microsoft) doesn't seem to make a lot of sense to some people.

I think this is a wrong comparison. The intent is to give a /48 to a
site where a site is either a private home or a building or an office or
a campus. In each case, the site is relatively compact physically and is
under a single administrative control. In a residential apartment
building, each apartment is a site. In an industrial complex, each unit
is a site. And so on.

The fact that some companies may choose to service all their sites via a
private network (or VPN) gatewayed through one (or a few) main site(s)
does not substantially change this. If a bank closes a branch and
somebody turns it into a restaurant, that restaurant will get a /48. The
bank could have gotten a /48 for the branch if they wanted to use
generic IPv6 Internet access as the underlying service for their VPN. 

Consider the scenario where one bank merges with another bank. If both
banks have structured their network with /48 assignments from local
ISPs, the network merger is much simpler. They can even keep the IPv6
address assignments and ISP connectivity running. If they choose to hang
all their sites off a central gateway you are likely to find that one
bank assigned a /64 per branch and another assigned a /62, or maybe a
/120. Merging those two networks will be as messy as in the IPv4 world.

 At the same time the continuing existance of RFC 3177 is not 
 going to stop RIRs from adopting policies that they think 
 make sense. Leaving
 3177 on the books as it currently stands, strikes me as a 
 form of denial. It doesn't document existing practice, and at 
 a minimum, we should acknowledge that.

I believe that 3177 was intended to be guidance for RIRs and other
network architects. Given that the IETF is moving IPv6 into maintenance
mode, this RFC sorely needs to be updated to reflect the final design.

If you also want to have an informational RFC to document existing
practice, that is reasonable as a separate document, but it should not
simply document what the RIRs did in the first half of 2007. Instead it
should discuss the pros and cons of varying the HD ratio,
implementing a separate assignment size for home users, and the choice
of /56 as that assignment size. Remember, the RIRs might rescind those
policies or dream up something new by the time 2008 rolls around.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-28 Thread michael.dillon

 But the /48 boundary is not. We had a long discussion about 
 that in the IPv6 WG, and our specs were carefully cleansed to 
 make sure there were no real dependencies on such a boundary. 
 Think Randy Bush saying you're reinventing IPv4 classful 
 addressing about a thousand times. :-)

It is a very bad thing when the IETF bows to demands from the ISP
industry to hobble the network architecture for other businesses and
consumers. Thankfully, the IETF did not do that and the
one-size-fits-all architecture of a /48 for all, remains intact. The
fact that some RIRs allow ISPs to assign a different one-size-fits-all
to consumer sites, really doesn't change this fundamental architecture.

 Indeed, even though the official IETF party line is that 
 links have to have 64 bits of subnet addressing assigned to 
 them, a number of operators screamed loudly that for internal 
 point-to-point links, that was horribly wasteful and they 
 weren't going to stand for it. So, products do indeed support 
 prefixes of arbitrary length (e.g., /126s and the like), and 
 some operators choose to use them. This is one of those 
 situations where the IETF specs seem to say one thing, but 
 the reality is different. And we pretend not to notice too much.

It is good that the IETF responded to demands from the ISP industry for
features which are needed in that industry. It is not good if the IETF
lets the ISP industry make architectural decisions for other businesses.

 And there are folk that participate in both the IETF and the RIR
communities.

That's fine as long as they aren't trying to get the RIRs to override
IETF architectural decisions. The RIRs are not a proper forum for that
kind of thing because the needed technical review is simply not possible
in the RIRs.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-27 Thread michael.dillon
 (2) The many examples you give seem to be to be associated 
 with different domains of authorization and privilege for 
 different groups of people and functions within the home.  My 
 impression of the experience and literature in the field is 
 that almost every time someone tries to create such a 
 typology, they conclude that these are much better modeled as 
 sometimes-overlapping domains rather than as discrete
 partitions.   The subnet-based model you posit requires that
 people or devices switch addresses when they change functions 
 or activities.  Up to a point, one can do it that way (and 
 many of us have, even with IPv4).  

The subtext here is Ethernet. People are talking about home networks
based on Ethernet and whether or not they should be segmented by
routers. In my experience Ethernet bridges and switches are not designed
with security as a goal. When they fail to transmit all incoming frames
on all interfaces, it is to prevent segment overload or broadcast
storms. There are many cases where people have found ways, sometimes
quite simple ways, to receive Ethernet frames that are not addressed to
them. Given this backdrop, I am suggesting that a homeowner may have
several reasons for inserting routers (and router/firewalls) into their
home network, thus requiring the ability to have multiple /64 IPv6
subnets. Architecture aside, this is a pragmatic response to an
information security issue.

 But I suggest that trying to use subnetting as the primary 
 and only tool to accomplish those functions is 
 architecturally just wrong, _especially_ for the types of 
 authorization-limitation cases you list.  Wouldn't you rather 
 have mechanisms within your home network, possibly bound to 
 your switches, that could associate authorization property 
 lists with each user or device
 and then enforce those properties? 

This would be nice, but I believe this needs more work and not just in
the IETF. Also, I believe that the IETF should tackle the basic
requirements for a home and/or business IPv6 Internet gateway first, and
then go on to the more advanced security issues.

 (4) Which IETF WG is working on these things?  :-(

Or failing that, which area does it belong in?

--Michael Dillon



RE: one example of an unintended consequence of changing the /48boundary

2007-08-27 Thread michael.dillon
  Think back to the days when the OSI protocols were 
 expected to be the
  next big thing
 
 No doubt the savvier members of the investment community will 
 remember those days, and the predictions of how much money 
 would be made/lost by those who did/didn't invest in OSI, and 
 will take that into account when they hear similar claims 
 (such as yours) about IPv6.
 
 I can see the marketing slogans now: IPv6, the OSI of the 
 21st Century!

In my analogy, IPv4 is the analog of OSI, and IPv6 is the analog of the
1990's Internet Protocol. In the 1990's people could see that the
DECNET/IPX/Banyan/NetBIOS/LU6.2 patchwork was not workable in an internetworked
world. It was clear that we needed a common-denominator protocol to
interwork between them all. OSI and IP both targeted that space but OSI
lagged behind in implementation of tools and protocols. The analysts
missed this, and even believed that IP was lagging behind, because they
did not take open-source deployments into account.

In today's world, it is clear that IPv4 networks cannot grow beyond
three years from now, but IPv6 networks can grow. There is no competitor
to IPv6 other than the possibility of making IPv4 even more complex with
triple and quadruple NAT or some other weird stuff. The market impacts
of open source are now well known and carefully studied, so when someone
says that Linksys has no IPv6 home gateway boxes, a savvy investment analyst
knows that they are leveraging Linux technology and since Linux fully
supports IPv6, Linksys can ramp up production very quickly if they are
motivated.

This time we know that there is a tipping point about to happen and this
time, the investment community knows a lot more about technology and how
a technology shift can turn up-and-comers into buggy-whip manufacturers.
Given the large percentage of VC funded businesses who tie their
business growth to the Internet (and often to its growth), I fully
expect them to start asking tough IPv6 questions before this year is
out.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-26 Thread michael.dillon
   If I assign 4M /48's of IPv6 (one to each cable modem on my
   network), according to the HD-ratio I am justified to obtain
   something around a /20 of IPv6 addresses.  In other words, I am
   justified in getting 268M /48's even though I am only 
 using 4M of
   them.  That would be enough for me to assign at least two for
   every household in the US (not just the 19M on my network).

   Anyhow, you can see where this might lead...

Yes, towards a rethinking of whether the HD ratio is an appropriate way
to measure the size of an ISP's second allocation.
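
The arithmetic in the quoted example can be checked directly. This is a
sketch of the RFC 3194 HD-ratio calculation; the 0.8 threshold shown is
the early RIR policy value and is used purely for illustration:

```python
import math

def hd_ratio(assigned: int, total: int) -> float:
    """Host-Density ratio (RFC 3194): log(assigned) / log(total units)."""
    return math.log(assigned) / math.log(total)

total_48s = 2 ** (48 - 20)   # /48s contained in a /20: 268,435,456
used_48s = 4_000_000         # one /48 per cable modem

# ~0.78: with a 0.8 threshold, 4M assigned /48s roughly justify a /20
print(round(hd_ratio(used_48s, total_48s), 3))
```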

 If one does the math, giving every home user a /56 instead of 
 a /48 provides almost two orders of magnitude more headroom 
 in terms of address usage. And at what cost? Surely, everyone 
 will agree that giving a /56 to home sites is more than 
 enough space for the foreseeable future! That's enough for 256 
 subnets per home site! That's an incredible amount of address space!

And since it is very rare for a home site to change into a non-home site
while under the same ownership/occupancy, the rule of /56 for home users
maintains the same-size-for-all philosophy that was suggested with /48
blocks. A home user can move across town to another residence, hook up
to another ISP and be reasonably guaranteed to get another /56.
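
The headroom claim is pure powers-of-two arithmetic, easy to verify in
Python:

```python
# /64 subnets available per assignment, and assignments per ISP /32.
subnets_per_56 = 2 ** (64 - 56)     # 256 subnets in a home /56
subnets_per_48 = 2 ** (64 - 48)     # 65,536 subnets in a /48 site
sites_56_per_32 = 2 ** (56 - 32)    # 16,777,216 /56s per ISP /32
sites_48_per_32 = 2 ** (48 - 32)    # 65,536 /48s per ISP /32

# Switching homes from /48 to /56 stretches each /32 by a factor of 256.
print(sites_56_per_32 // sites_48_per_32)
```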

 That leads me to 
 doubt that the specific proposal that was mentioned that 
 started this thread will actually get much traction within 
 ARIN, but that is not my problem. :-)

Nevertheless, a lot of this controversy around IPv6 addressing is not
very well informed and is often based on IPv4 architecture, not IPv6.
There is a gap in education here, and due to the scramble to deal with
the imminent runout of IPv4 addresses, people don't have time to get
properly educated. This is a scenario in which informed guidelines from
the IETF would have great value even if they cannot be prescriptive.

 I find it a bit disappointing that RFC 3177 has not been 
 revised to reflect the reality of today. 

Precisely! At one time the IETF put some effort into producing IPv6
educational material but that effort seems to have faded away.

 And, FWIW, I was one of those that pushed for the changes.  
 As one who originally supported the /48 recommendation in 
 RFC 3177, I think it was a mistake. Giving a /48 to every 
 home user by default is simply not managing the address space 
 prudently. Home users will do more than fine with a /56.

I agree. What I don't agree with is the idea that /56's should go to
small sites, i.e. that the ISP makes some judgement about how many
subnets a business might need, and then decides whether to give them a
/48 or a /56. Technically, the ARIN policy wording allows for this.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-26 Thread michael.dillon
  I find it a bit disappointing that RFC 3177 has not been revised 
  to reflect the reality of today.

 reality of today seems like an odd concept when trying to 
 make or revisit design decisions that will need to serve us 
 for decades.  I keep seeing people making the same mistake of 
 trying to design the future Internet to meet only the needs 
 of the current Internet.

Just to clarify, RFC 3177 discusses characteristics of IPv6 such as NLAs
and TLAs that have been deprecated. When a reader realizes that some of
the topic matter of this RFC is obsolete/deprecated, they are no longer
able to trust anything in the document.

Thomas is saying that RFC 3177 needs to be updated because of the
deprecated bits. And if this document is to be revisited then it could
include other material to become IPv6 Addressing Guidelines for RIRs
and ISPs.

 Frankly I think that statement is out of order.  The RIRs 
 need to be taking it to IETF.

They would if it were clear that their activities were contrary to a
published RFC. The IETF must take the first step here and clarify
things.

 Now I'm all for prudent use of IPv6 space, and if the /48 
 needs to be changed to /56 or some other value, then by all 
 means let's have the discussion here.  But the discussion 
 belongs here, not elsewhere.  IPv6 is a lot of delicately 
 crafted compromises, and it's not as if these
 compromises were made independently of one another.   Changes 
 like this
 can have unintended consequences, and these need to be 
 understood and examined.  RIRs are not in a position to do this.

Agreed. But the working relationship between the RIRs and the IETF is
somewhat in tatters at present. An update of RFC 3177 would be a fine
first step to repair that relationship.

--Michael Dillon



RE: one example of an unintended consequence of changing the /48boundary

2007-08-26 Thread michael.dillon
 6to4 is a transition technique that I would argue is not 
 really appropriate for a large site (i.e, one with _many_ 
 subnets).

I.e. one with a /48 allocation from an RIR. Therefore it would appear
that 6to4 is targeted at small sites such as home users who will likely
receive a /56 from the RIR. Therefore Keith's concerns are warranted.

 Give me a break. Use leading zeros for the first 8 bits of 
 the subnet part and everything else still works just fine.

Are you sure of that? Can you point me to a draft or RFC where this is
documented?
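
For context, a 6to4 site derives its /48 directly from its public IPv4
address (RFC 3056: the 32 IPv4 bits follow the 2002: prefix), which is why
the leading-zeros suggestion would work: a site run /56-style can simply
keep the top 8 subnet bits zero. A minimal sketch using Python's
`ipaddress` module; the function name is illustrative:

```python
import ipaddress

def six_to_four_prefix(public_v4: str) -> ipaddress.IPv6Network:
    """RFC 3056: the 6to4 /48 is 2002: followed by the 32 IPv4 bits."""
    v4 = int(ipaddress.IPv4Address(public_v4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

prefix = six_to_four_prefix("192.0.2.1")
print(prefix)  # → 2002:c000:201::/48

# Zeroing the leading 8 subnet bits confines the site to the first
# /56 inside its 6to4 /48 -- which still leaves 256 usable /64 LANs.
first_56 = next(prefix.subnets(new_prefix=56))
print(len(list(first_56.subnets(new_prefix=64))))  # → 256
```

Whether that convention is actually written down anywhere is exactly the
question being asked above.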

 The biggest barrier to the success of IPv6 is the lack of a 
 short-term ROI. There just isn't a strong business case for 
 anyone to invest in deployment. It's really that simple.

In two or three years, IPv4 network growth will be severely limited. Any
business whose revenue growth is linked to IP network growth must rely
on IPv6 beyond that point. In order to successfully use IPv6 for the
mission-critical network growth that is the engine of business revenue,
they need at least a year of trials and lab testing in advance. That
means there is not much more than a year left before such businesses
will have missed the boat. Some may argue that a return realized three
years from now on an investment made today is not short-term, and that
is true. However, if the investment is not made today, the platform for
short-term ROI will not exist in three years' time.

That does make a strong business case and some companies are busy
working behind the scenes to prepare for the disruption caused by IPv4
runout. For some, the disruptive event will be fatal and for others it
will be very profitable. This message will soon reach the investment
community so you will soon see investment analysts asking very tough
IPv6 deployment questions, and rating stocks appropriately. That is
definitely a short term ROI scenario for IPv6.

Think back to the days when the OSI protocols were expected to be the
next big thing, replacing IPX, DECnet, AppleTalk and NetBIOS. IP was for
universities and labs. In telecoms, ISDN and ATM were the wave of the
future. This is the way things were in 1993. Two years later, in 1995 we
were experiencing exponential growth of the TCP/IP Internet. I believe
that it was something like 1500% growth in that year and dozens of new
books about the Internet came on the market joining the 3 books that
were on the market in 1994.

It is almost certain that IPv4 runout will drive a similar upsurge in
IPv6, although not quite the same magnitude.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-26 Thread michael.dillon
 The definition of a small network is pretty much single 
 subnet. Yes, I understand very well that the average home of 
 the future will have mixed wiring. Of course, my own home 
 does have Ethernet and Wi-Fi. In the not so distant future, 
 it will have several Wi-Fi networks operating on different 
 frequencies, some form of power-line networking, and some 
 rooms may have their own high-speed wireless links using UWB 
 or some similar technology. But I am pretty much convinced 
 that all of these will be organized as a single subnet.

You are remarkably trusting. You do all your homebanking on the same
subnet as your teenage children who are studying Hacking 101 in the
privacy of their bedroom? And when guests come over for dinner, you have
no objection to them taking their laptop to the bathroom in order to
surf for child porn over your wireless network?

The fact is that a lot of people will WANT subnets in the home. They
will want a router/firewall that will isolate each of the children's
bedrooms so that they cannot mess with your bank account or with their
brother's/sister's romantic chat sessions. Many people will want all
wireless access to go through a router. Many will have an in-law suite,
and want to seamlessly integrate their relative's existing network via a
simple router connection. And the family jewels, that Raid 5 server
cluster that holds all the family photos and videos, will be behind
another router/firewall. When the kids host a LAN party, the gamers will
connect to the family network via a router/firewall with limited
Internet access for only the necessary protocols. Subnets multiply for
architectural and security reasons.

Multiple subnets per home is *NOT* a waste of anything. It is an
invitation to dreamers and inventors to make better network things for
the home market. It is an enabler of business activity, an enabler of
competition.

--Michael Dillon



RE: DNS as 1980s technology [was Re: The Internet 2.0 box Was: IPv6 addresses really are scarce after all]

2007-08-24 Thread michael.dillon
 The IETF has a simple process for all of this: write a draft.

Not true. 

The IETF also runs a large number of mailing lists for discussion of
things both general and specific. It is not necessary to start work by
writing a draft. One can also start work by discussing the problem area
on one or more of the mailing lists, especially when one believes that
the work can best be done by a team, not by one lone author of an ID.

And that is what Keith has done: stated the outline of the problem
area, proposed a way forward, and asked if anyone wants to help him
work on the problem.

There was no reason to attack him as you did, and I specifically want
to address this because mailing lists have a much larger audience than
their participants. If such attacks go unanswered, they create barriers
for new blood entering the IETF process. Please don't do this.

--Michael Dillon



RE: e2e

2007-08-22 Thread michael.dillon
 I.e., a new mail protocol will have to 
 address things like forwarding and mailinglists explicitly.

Not protocol. A new email architecture will have to address all these
things explicitly, but the protocols may indeed be usable as is, or with
minor changes.

The point is that we need to examine the entire architecture first,
separate from any protocol details, and get clear and *EXPLICIT*
requirements. Then review existing technology/protocols to identify
gaps, then fill the gaps, and document how to transition to the new
architecture.

--Michael Dillon



RE: New models for email (Re: e2e)

2007-08-21 Thread michael.dillon

 My strong belief is that a proposal for a new protocol that 
 does the same thing as SMTP but slightly better is a total 
 non starter. No matter how much better the protocol is the 
 cost of transition will dominate.

Right! It is not the protocol that is at fault, it is the architecture
in which the protocol is used. I once lived in a house that had not been
designed by an architect. It started life as a granary, then was
converted to a tiny house by installing a kitchen sink and a bathroom.
Then the owner added a bedroom on one end, extending the gable roof.
Next the house was doubled by extending it to one side under a shed
roof. At this point the owner's alcoholism was affecting the quality of
the work. Various corners were cut such as a single-pane mobile home
window. Another bedroom was added to the other end of the house by
extending only the shed-roof section. No electrical outlets in this part
and only one tiny window. Finally, when the owner's alcoholism was well
advanced he filled in the gap beside the last bedroom with a hand-poured
concrete floor, and a roof and walls made with poles (with the bark
still on) and scrap plywood. This became the entrance foyer to the
house. Shortly thereafter, I bought the house, my first home purchase.

The Internet email architecture is a lot like this house, with add-ons
and patches. The latest round of SPF/DKIM stuff feels a lot like that
last room built with poles and whatever scrap was handy. I just feel
that given the lessons learned in scaling an email system to global
proportions, we could do better by taking an overall architectural view
to the problem, and coming up with a more robust architecture.

 The only way that I see a new email infrastructure emerging 
 is as a part of a more general infrastructure to support 
 multi-modal communication, both synchronous and asynchronous, 
 bilateral and multilateral, Instant Messaging, email, voice, 
 video, network news all combined in one unified protocol. 

Wrong! This is a sure way to invite second-system effect. Read this
http://en.wikipedia.org/wiki/The_Mythical_Man-Month or better yet, read
the book itself. Sometimes it is better to leave things out of your
design than to include them all.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-20 Thread michael.dillon
 I know the reasons behind the /48 etc but it is just going to 
 cause us trouble to keep it like that. We should divide the 
 /48 category of users into two:
 - people that can get the current /48 as long as they have 
 more than ONE subnet
 - people that only have ONE subnet, typical home-users 
 (end-users, including your grandmother), they should get a  
 /56 or whatever else bigger than /60 and smaller than /48.

ARIN already has done something like that but the /56 is not for sites
with ONE subnet because IPv6 already defines /64 for that. Instead they
define /56 as the right size for sites expected to need only a few
subnets over the next 5 years. This came about due to requests for a
smaller assignment size for consumer customers, i.e. individual homes and
apartments.

During the discussion it became clear that even if the majority of homes
today only have one subnet, this is likely to change as more categories
of networkable device become available.

--Michael Dillon



RE: IPv6 addresses really are scarce after all

2007-08-19 Thread michael.dillon
 ARIN ... believes IPv6 addresses are ... resources that need 
 to be [distributed] according to need.
 
 I guess I have to agree with this sentiment.  If the ARIN 
 community decides there is a better way to distribute IP 
 addresses *OTHER THAN* need, I'd be really happy to hear what 
 that method would be.

That method would be to distribute IPv6 addresses (which are not scarce)
in accordance with the IPv6 addressing architecture which allows for
every site (building/campus/office/place-of-business/home) to get a /48
assignment. This single assignment will last nearly all of them as long
as IPv6 exists. It allows ISPs/ASN-holders to receive a single /32 or
shorter prefix which will put only a single entry in the global routing
table.

A bit oversimplified, but you get the idea. There is an overall
architecture which IS DIFFERENT FROM IPv4 and which actually helps limit
the growth of the global routing table.
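
The scale behind that architecture is easy to check; this is
back-of-envelope arithmetic only, using 2000::/3 (the range currently
delegated for global unicast) as the illustrative pool:

```python
# One /32 per ISP, one /48 per site: how far does that actually go?
sites_per_isp = 2 ** (48 - 32)   # /48 sites inside one ISP /32
isps_in_2000_3 = 2 ** (32 - 3)   # /32s inside 2000::/3 alone

print(f"{sites_per_isp:,} /48 sites per ISP /32")        # 65,536
print(f"{isps_in_2000_3:,} /32s available in 2000::/3")  # 536,870,912
```

Half a billion possible ISP-sized allocations, each contributing one
routing-table entry, is what makes the one-prefix-per-ISP claim credible.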

--Michael Dillon


