Re: Future Handling of Blue Sheets

2012-04-25 Thread Martin Rex
Is it really so completely out of this world to expect the decency
to ask whether it is OK to take a photo for the purpose of _publication_?
Leaving it up to the individual subjects whether they prefer to
relocate further to the background first or prefer to temporarily
leave the room?  Especially when you believe that the vast majority
is going to provide that consent?

The folks I would have least expected to be offended by the
concept of consent are IETFers.


John C Klensin wrote:
 
  snip
  The issue with John's purpose is that he is taking a picture
  of the audience(!), and if you take such a picture from the
  front in a small meeting room, a small number of folks might
  end up _prominently_ in the foreground of that picture,
  at which point their consent will be required.
 
 I don't want to argue prominently versus non-prominently.

There is nothing to argue about.  The exemption you'd have to
qualify for is clearly scoped (in the German §23 KunstUrhG/KUG)
to pictures _of a public event_ depicting (identifiable) persons,
and _not_ to pictures of persons at public events.  This means
that the fewer people are predominantly shown in the picture, the
more central those people and the setting have to be to that event
in order for the picture to still fall within the scope of the
"picture of a public event" definition.  But this is just a
necessary prerequisite for the exemption to be in scope at all.
By itself, it is not a sufficient criterion.

The procedure for determining whether an exemption from the explicit
consent requirement for publication of a person's picture applies
was established by the German Constitutional Court quite a while ago.
The decision BVerfGE 35, 202 from 1973 contains a fairly comprehensive
description (paragraphs 50-53 -- sorry, all in German):

   http://www.servat.unibe.ch/dfr/bv035202.html#Rn050
 
Here are comments of the European Court of Human Rights (ECHR) on the
German Federal Court of Justice applying that principle to the
exceptions in §23 KunstUrhG (KUG) in its decisions:


  section 23(2) of the KUG gives the courts adequate opportunity to
  apply the protective provisions of section 2(1) read in conjunction with
  section 1(1) of the Basic Law ... .

  (bb) In theory the criteria established by the Federal Court of Justice
  for interpreting the concept of legitimate interest used in section 23(2)
  of the KUG are irreproachable from the point of view of constitutional law.


  [...]

  66.  In these conditions freedom of expression calls for a narrower
  interpretation (see Prisma Presse, cited above, and, by converse
  implication, Krone Verlag, cited above, § 37).

  67.  In that connection the Court also takes account of the resolution
  of the Parliamentary Assembly of the Council of Europe on the right to
  privacy, which stresses the one-sided interpretation of the right to
  freedom of expression by certain media which attempt to justify an
  infringement of the rights protected by Article 8 of the Convention
  by claiming that their readers are entitled to know everything about
  public figures (see paragraph 42 above, and Prisma Presse, cited above).


While that one particular (2005 ECHR) decision was about the first
exemption (public figure), the evaluation rules established by the
German Constitutional Court make clear that a narrow interpretation
applies to all exceptions alike:

  Dabei kommt es verfassungsrechtlich nicht darauf an, bei welchem
   Tatbestandselement des § 23 KUG die Abwägung vorgenommen wird

  (roughly: from a constitutional-law perspective it does not matter
   under which element of § 23 KUG the balancing of interests is
   carried out)

The final criterion established by the German Constitutional Court is
pretty clear: if the intended objective can be sufficiently achieved by
other means (ones not encroaching on a person's privacy rights), then
the exemption from voluntary consent does not apply.

  zu bewerten und zu prüfen, ob und wieweit dieses Interesse auch ohne
   eine Beeinträchtigung - oder eine so weitgehende Beeinträchtigung -
   des Persönlichkeitsschutzes befriedigt werden kann.

  (roughly: to assess and examine whether and to what extent this
   interest can be satisfied even without an encroachment - or without
   so far-reaching an encroachment - on the protection of personality
   rights.)


-Martin



IAOC and permissions [Re: Future Handling of Blue Sheets]

2012-04-25 Thread Brian E Carpenter
Dear IAOC,

I suggest that your standard dealings with local hosts should
include requiring them to perform a local check on whether the
standard Note Well takes account of all local legal requirements,
including for example consent to publication of images. If it doesn't,
the host should provide an augmented Note Well for use during
meeting registration.

From the recent discussion, this needs to be done for sure for
IETF 87.

Regards
   Brian Carpenter

On 2012-04-25 00:30, John C Klensin wrote:
 
 --On Tuesday, 24 April, 2012 18:19 -0500 James M. Polk
 jmp...@cisco.com wrote:
 
 IETF 87 is in Germany (15 months from now), so we'd better
 solve this issue soon, I should think.
 
 The IESG and IAOC are invited to take my comments on the
 situation as an appeal against the decision to hold that meeting
 unless either the situation can be clarified with counsel to the
 degree that we understand that Martin's concerns are not
 applicable, that appropriate permission language and permissions
 can be clarified with Counsel so that a binding between
 registration and permission is possible and used, or that a
 community consensus call demonstrates that the community
 believes that the "just make lists" plan is preferable to having
 the option to take pictures.
 
 And that is my last comment on the subject unless I have to
 formalize such an appeal.
 
john
 
 .
 


RE: IAOC and permissions [Re: Future Handling of Blue Sheets]

2012-04-25 Thread Christian Huitema
Brian,

 I suggest that your standard dealings with local hosts should include 
 requiring them to perform a local check on
 whether the standard Note Well takes account of all local legal 
 requirements, including for example 
 consent to publication of images. If it doesn't, the host should provide an 
 augmented Note Well for use 
 during meeting registration.

Rather than going this route, we might consider some better balance between 
privacy and standard-setting. Taking and publishing a person's image is a step 
above listing their name. Do we really need that for the purpose of standards 
making, let alone Internet Engineering? How about answering the classic privacy 
checklist:

1) How much personal information do we collect, and for what purpose? The rule 
here should be to collect the strict minimum necessary for the purpose. 
Pictures don't appear to meet that bar.
2) How do we process that information? Who in the IETF has access to it?
3) Do we make that information available to third parties? Under which 
guidelines? Again, there is a big difference between answering a subpoena and 
publishing on a web page.
4) How do we safeguard that information? Is it available to any hacker who 
sneaks his way into our database?
5) How long do we keep the information? Why?
6) How do we dispose of the expired information?

These look like the right questions to the IAOC.

-- Christian Huitema





RE: [nvo3] Key difference between DCVPN and L2VPN/L3VPN

2012-04-25 Thread Adrian Farrel
Hi Linda,

 Respect your advice. However, some wording in the proposed charter is too
 ambiguous; is that the intent?
 
 For example:
 
   An NVO3 solution (known here as a Data Center Virtual Private Network
 (DCVPN)) is a VPN that is viable across a scaling range of a few thousand VMs
to
 several million VMs running on greater than 100K physical servers.
 
 Do you mean one VPN across a scaling range of million VMs? or many VPNs
 combined to scale range of million VMs?

I don't find the text ambiguous at all. You might disagree with what it says (a
VPN), but you can't claim it is ambiguous.

 Another example:
   NVO3 will consider approaches to multi-tenancy that reside at the
 network layer rather than using traditional isolation mechanisms that rely on
the
 underlying layer 2 technology (e.g., VLANs)
 
 network layer can mean different things to different people. Why not simply
 say NV03 will consider approaches to multi-tenancy which do not rely on Layer
2
 VLANs?

There are also layers above the network layer. The charter rules them out of
scope. This is good.

Stewart has clarified that network layer includes IP and MPLS, and that it is
the bit of the hourglass that we all know as the network layer.

 3rd example:
 
The NVO3 WG will determine which types of  service are needed by
 typical DC deployments
 
 Data centers provide computing and storage services. The network facilitates
 the connection among computing entities and storage entities.
  Why does NV03 WG need to determine what types of services are needed by
 typical DC deployment?
 
 Do you mean NV03 WG will consider network deployment by typical DC?

I think s/services/connectivity services/ might address this issue. Although the
examples cited (but not quoted by you) do tend to give a strong hint.

Adrian



RE: Questions: WG Review: Network Virtualization Overlays (nvo3) - 23-Apr-2012 update

2012-04-25 Thread Adrian Farrel
Hi Stewart,

  The NVO3 WG will write the following informational RFCs, which
  must be substantially complete before rechartering can be
  considered:
 
  "substantially complete" is sufficiently subjective to risk a riot at some
  point in the future!
  Can we be more precise with some well-known process step such as WG last
  call or publication request.
  I do not believe that rechartering at that point would take more than a
  couple of weeks, so it is not as though the WG will grind to a halt for a
  season.
 
Problem Statement
Framework document
Control plane requirements document
Data plane requirements document
Operational Requirements
Gap Analysis

 Perhaps we should set WGLC as the marker for recharter to be considered.

That would work for me.

A



RE: Questions: WG Review: Network Virtualization Overlays (nvo3) - 23-Apr-2012 update

2012-04-25 Thread Adrian Farrel
Hi,

  NVO3 will document the problem statement, the applicability, and an
  architectural framework for DCVPNs within a data center
  environment. Within this framework, functional blocks will be defined to
  allow the dynamic attachment / detachment of VMs to their DCVPN,
  and the interconnection of elements of the DCVPNs over
  the underlying physical network. This will support the delivery
  of packets to the destination VM, and provide the network functions
  required for the migration of VMs within the network in a
  sub-second timeframe.
  This has been discussed a bit, but I still can't believe that it won't cause
  contention down the line. The term migration will mean different things to
  different people and some will expect it to mean the picking up of one
active
  operational environment and its transportation to run in a different place.
We
  need to be clear whether we mean simply that the re-registration of a VM
at
  a different location and the associated convergence of the network is
  intended to be sub-second, or whether it is the whole transportation of the
  VM.
 
  I don't have an immediate suggestion for wording around this other than to
say
  that the bald word migration is not enough.
 
 I think that discussion on the list has clarified this to mean that the network
 will not be a gate to sub-second migration of the VM, but the process
 of migrating the VM is outside the scope of the charter.
 
 Perhaps we can say:
 
 This will support the delivery
 of packets to the destination VM, and provide the network functions
 required to support the migration of VMs within the network in a
 sub-second timeframe.

This is getting close, and I appreciate the intent.
And I understand this is getting wrapped around the axle of requirements that
have not yet been written.
What we want to do is include a description of the migration rate and speed of
VMs. This is useful material like the scaling parameters.
What we need to do is say what the WG works on. 
But I am not clear what a network function is in this context, or how such a
function supports the migration of VMs without actually being involved in the
migration.

On reflection, we can also do something to improve the sentence because the two
halves are not really related.

How about solving this with two changes...

OLD

An NVO3 solution (known here as a Data Center Virtual Private
Network (DCVPN)) is a VPN that is viable across a scaling
range of a few thousand VMs to several million VMs running on
greater than 100K physical servers. It thus has good scaling 
properties from relatively small networks to networks with 
several million DCVPN endpoints and hundreds of thousands of
DCVPNs within a single administrative domain.

NEW

An NVO3 solution (known here as a Data Center Virtual Private
Network (DCVPN)) is a VPN that is viable across a scaling
range of a few thousand VMs to several million VMs running on
greater than 100K physical servers. It thus has good scaling 
properties from relatively small networks to networks with 
several million DCVPN endpoints and hundreds of thousands of
DCVPNs within a single administrative domain.

A DCVPN also supports VM migration between physical servers
in a sub-second timeframe.

END

...and...

OLD

NVO3 will document the problem statement, the applicability, 
and an architectural framework for DCVPNs within a data center
environment. Within this framework, functional blocks will be 
defined to allow the dynamic attachment / detachment of VMs to
their DCVPN, and the interconnection of elements of the DCVPNs
over the underlying physical network. This will support the
delivery of packets to the destination VM, and provide the 
network functions required for the migration of VMs within the
network in a sub-second timeframe.

NEW

NVO3 will document the problem statement, the applicability, 
and an architectural framework for DCVPNs within a data center
environment. Within this framework, functional blocks will be 
defined to allow the dynamic attachment / detachment of VMs to
their DCVPN, and the interconnection of elements of the DCVPNs
over the underlying physical network. This will support the
delivery of packets to the destination VM within the scaling 
and migration limits described above.

END

Thanks,
Adrian

PS To Thomas who thinks we are needlessly wordsmithing...
When I see the spectre of fist-fights about the outcome of this work, I prefer
to spend one or two weeks extra at this stage nailing down all loose corners
rather than several months in ICU sometime in the future.






Re: IAOC and permissions [Re: Future Handling of Blue Sheets]

2012-04-25 Thread Brian E Carpenter

Christian,

On 2012-04-25 08:57, Christian Huitema wrote:
 Brian,
 
 I suggest that your standard dealings with local hosts should include 
 requiring them to perform a local check on
 whether the standard Note Well takes account of all local legal 
 requirements, including for example 
 consent to publication of images. If it doesn't, the host should provide an 
 augmented Note Well for use 
 during meeting registration.
 
 Rather than going this route, we might consider some better balance between 
 privacy and standard-settings. Taking and publishing a person's image is a 
 step above listing their names. Do we really need that for the purpose of 
 standard making, let alone Internet Engineering? How about answering the 
 classic privacy checklist:

These are excellent questions, and I support them being studied (perhaps
initially by a small group), but I think they are orthogonal to my
suggestion. Since privacy laws vary widely, I really think this issue
needs to be checked on a per-host-country basis, regardless of our general
policy.

Brian

 1) How much personal information do we collect, and for what purpose? The 
 rule here should be to collect the strict minimum necessary for the purpose. 
 Pictures don't appear to meet that bar.
 2) How do we process that information? Who in the IETF has access to it?
 3) Do we make that information available to third parties? Under which 
 guidelines? Again, there is a big difference between answering a subpoena and 
 publishing on a web page.
 4) How do we safeguard that information? Is it available to any hacker who 
 sneaks his way into our database?
 5) How long do we keep the information? Why?
 6) How do we dispose of the expired information?
 
 These look like the right questions to the IAOC.
 
 -- Christian Huitema
 
 
 


Re: [IAOC] IAOC and permissions [Re: Future Handling of Blue Sheets]

2012-04-25 Thread Marshall Eubanks
Dear Brian;


On Wed, Apr 25, 2012 at 3:42 AM, Brian E Carpenter
brian.e.carpen...@gmail.com wrote:
 Dear IAOC,

 I suggest that your standard dealings with local hosts should
 include requiring them to perform a local check on whether the
 standard Note Well takes account of all local legal requirements,
 including for example consent to publication of images. If it doesn't,
 the host should provide an augmented Note Well for use during
 meeting registration.

 From the recent discussion, this needs to be done for sure for
 IETF 87.


The legal subcommittee (which includes the IETF counsel) is actively
researching this issue.

Regards
Marshall


 Regards
   Brian Carpenter

 On 2012-04-25 00:30, John C Klensin wrote:

 --On Tuesday, 24 April, 2012 18:19 -0500 James M. Polk
 jmp...@cisco.com wrote:

 IETF 87 is in Germany (15 months from now), so we'd better
 solve this issue soon, I should think.

 The IESG and IAOC are invited to take my comments on the
 situation as an appeal against the decision to hold that meeting
 unless either the situation can be clarified with counsel to the
 degree that we understand that Martin's concerns are not
 applicable, that appropriate permission language and permissions
 can be clarified with Counsel so that a binding between
 registration and permission is possible and used, or that a
 community consensus call demonstrates that the community
 believes that the "just make lists" plan is preferable to having
 the option to take pictures.

 And that is my last comment on the subject unless I have to
 formalize such an appeal.

    john

 .



Re: [dane] Last Call: draft-ietf-dane-protocol-19.txt - Referring to the DANE protocol and DANE-denoted certificate associations

2012-04-25 Thread Phillip Hallam-Baker
+1

The name should reflect the final product, not the path taken to get there.

If people were to use these records then they would see them in the
DNS zone and see them listed as TLSA and look for an RFC with that
name. Only the people who were involved in the group would know that
DANE and TLSA are the same thing.





On Tue, Apr 24, 2012 at 6:23 PM, =JeffH jeff.hod...@kingsmountain.com wrote:
 [ these are excerpts from a current thread on dane@ that I'm now denoting as
 an IETF-wide Last Call comment ]

 Paul Hoffman replied on Fri, 20 Apr 2012 13:57:28 -0700:

 On Apr 20, 2012, at 10:50 AM, =JeffH wrote:

 Various specs are going to need to refer to the DANE protocol
 specification as well as describe the notion of domain names that map to
 TLSA records describing certificate associations.

 In working on such language in draft-ietf-websec-strict-transport-sec,
 here's the terms I'm using at this time and their (contextual) meaning..

 DANE protocol
   The protocol specified in draft-ietf-dane-protocol (RFC# tbd).


 There is an issue here that we haven't dealt with, which is that DANE
 protocol doesn't really make sense because we might be adding additional
 protocols for certificate associations for things other than TLS. For your
 doc, you should be saying TLSA protocol, not DANE protocol because
 HSTS
 is specific to TLS. (More below.)


 After further perusal of draft-ietf-dane-protocol-19, if I understand
 correctly, the term DANE (and its expansion) names a class of Secure
 DNS-based cert/key-to-domain-name associations, and protocols for particular
 instances will nominally be assigned their own names, where a case-in-point
 is the TLSA Protocol, yes?

 i.e. we could define another separate spec for mapping Foo protocol's
 keys/certs to DNS RRs, and call 'em FOOA, and then in following this naming
 approach, refer to the protocol of using them while establishing Foo
 connections as the FOOA protocol, yes?



 Paul Hoffman further explained on Sat, 21 Apr 2012 13:38:38 -0700:

 On Apr 20, 2012, at 3:34 PM, =JeffH wrote:

 Paul Hoffman replied on Fri, 20 Apr 2012 13:57:28 -0700:

  On Apr 20, 2012, at 10:50 AM, =JeffH wrote:
 
  There is an issue here that we haven't dealt with, which is that DANE
  protocol doesn't really make sense because we might be adding
  additional
  protocols for certificate associations for things other than TLS.

 Yep. DANE is a working group name. But, I was working from the
 specification name per the present spec.

  ...
  Proposal for [-dane-protocol] spec:
 
  The protocol in this document can generally be referred to as the TLSA
  protocol.

 So as a practical matter, if we wish to refer to this particular spec as
 defining the TLSA protocol, then perhaps the spec title should reflect
 that such that the RFC Index is searchable for that TLSA term.

 The WG already decided against that (unfortunately).


 I agree it is unfortunate and respectfully suggest that this decision be
 revisited.

 Many (most?) people have been referring to the protocol being worked on by
 the working group (which is now draft-ietf-dane-protocol) as the DANE
 protocol or simply DANE for as long as the WG has existed, /plus/,
 the present title of the spec is..

  The DNS-Based Authentication of Named Entities (DANE) Protocol for
  Transport Layer Security (TLS)


 I think it will just continue to unnecessarily sow confusion if the term
 TLSA doesn't somehow get into the spec title and thus into the various RFC
 indexes (whether or not the suggested statement above explicitly naming the
 protocol TLSA protocol is added to the spec (I think it should be added)).


 Ways to accomplish addressing the spec title issue could be..

  TLSA: The DNS-Based Authentication of Named Entities (DANE) Protocol for
  Transport Layer Security (TLS)


 ..or..

  The DNS-Based Authentication of Named Entities (DANE) Protocol for
  Transport Layer Security (TLS): TLSA


 HTH,

 =JeffH



 ___
 dane mailing list
 d...@ietf.org
 https://www.ietf.org/mailman/listinfo/dane



-- 
Website: http://hallambaker.com/


Re: [nvo3] Key difference between DCVPN and L2VPN/L3VPN

2012-04-25 Thread Marshall Eubanks
A question in line.

On Wed, Apr 25, 2012 at 6:01 AM, Adrian Farrel adr...@olddog.co.uk wrote:
 Hi Linda,

 Respect your advice. However, some wording in the proposed charter are too
 ambiguous, is it the intent?

 For example:

   An NVO3 solution (known here as a Data Center Virtual Private Network
 (DCVPN)) is a VPN that is viable across a scaling range of a few thousand VMs
 to
 several million VMs running on greater than 100K physical servers.

 Do you mean one VPN across a scaling range of million VMs? or many VPNs
 combined to scale range of million VMs?

 I don't find the text ambiguous at all. You might disagree with what it says 
 (a
 VPN), but you can't claim it is ambiguous.

 Another example:
       NVO3 will consider approaches to multi-tenancy that reside at the
 network layer rather than using traditional isolation mechanisms that rely on
 the
 underlying layer 2 technology (e.g., VLANs)

 network layer can mean different things to different people. Why not simply
 say NV03 will consider approaches to multi-tenancy which do not rely on 
 Layer
 2
 VLANs?

 There are also layers above the network layer. The charter rules them out of
 scope. This is good.

 Stewart has clarified that network layer includes IP and MPLS, and that it 
 is
 the bit of the hourglass that we all know as the network layer.


The charter talks about

 The NVO3 WG will determine which types of  service are needed by
 typical
 DC deployments (for example, IP and/or Ethernet).

I generally think of Ethernet as being Layer 2. Does this charter
envision Ethernet as being part of the
network layer? What was intended with having it mentioned there?

Regards
Marshall


 3rd example:

        The NVO3 WG will determine which types of  service are needed by
 typical DC deployments

 Data center provide Computing and storage services. Network facilitates the
 connection among Computing entities and storage entities.
  Why does NV03 WG need to determine what types of services are needed by
 typical DC deployment?

 Do you mean NV03 WG will consider network deployment by typical DC?

 I think s/services/connectivity services/ might address this issue. Although 
 the
 examples cited (but not quoted by you) do tend to give a strong hint.

 Adrian

 ___
 nvo3 mailing list
 n...@ietf.org
 https://www.ietf.org/mailman/listinfo/nvo3


RE: [secdir] secdir review of draft-ietf-emu-chbind-14

2012-04-25 Thread Stephen Hanna
Thanks, Joe. Looks like we've reached agreement on most things.
There are a few items left where Sam's input is needed.
I'll wait to see what he has to say.

Thanks,

Steve

 -Original Message-
 From: Joe Salowey [mailto:jsalo...@cisco.com]
 Sent: Wednesday, April 25, 2012 1:34 AM
 To: Stephen Hanna
 Cc: draft-ietf-emu-chb...@tools.ietf.org; sec...@ietf.org; IETF-
 Discussion list; Sam Hartman
 Subject: Re: [secdir] secdir review of draft-ietf-emu-chbind-14
 Importance: High


 On Apr 24, 2012, at 2:05 PM, Stephen Hanna wrote:

  Joe,
 
  I'm glad that my comments were useful to you and to the editors
  of this draft. I will respond to your comments below inline.
  I'm going to clip out as much as I can, including anything
  that has already been agreed on.
 
  Thanks,
 
  Steve
 
  --
 
  Joe Salowey wrote:
 
  On Apr 13, 2012, at 11:26 AM, Stephen Hanna wrote:
 
  In the Introduction, the second paragraph says that the
  problem results when the same credentials are used to
  access multiple services that differ in some interesting
  property. Do you mean client or server credentials?
  I think you mean EAP server credentials. Please be more
  explicit to make this clearer, since many people will
  assume that you mean client (EAP peer) credentials. If
  I'm correct and you do mean EAP server credentials,
  I suggest that you say so in the first sentence of
  this paragraph and also in the last sentence.
 
  [Joe]   The case here is both client and server credentials.  If
  different credentials are required for each type of service then the
  authenticator will not be able to lie about the type of service it
 is
  representing to the client because the client credentials are bound
 to
  the service.
 
  SH I don't see what the problem is with using the same
  client credentials with two different services if the
  server credentials are different. The client will be able
  to detect a lying NAS easily since the server credentials
  won't match what it expects for that service. Could you
  please explain this more?
 

 [Joe] In many cases the server credentials will be the same.  They will
 be the credentials of the AAA server.

  In the first paragraph of the Problem Statement, the second
  sentence says However, when operating in pass-through mode,
  the EAP server can be far removed from the authenticator.
  While this is true, I think the more relevant statement here
  is that the EAP server can be far removed from the EAP peer.
  This paragraph is all about problems that can arise when
  parties between the EAP peer and the EAP server (including
  the authenticator) are malicious or compromised. So the
  important thing to point out at the start of the paragraph
  is the large number of parties that may come between the
  EAP server and the EAP peer.
 
 
  [Joe]  I think I see your point, but I'm not sure.  Traditionally,
 we
  have often thought of the path between EAP peer and EAP
 authenticator
  as being vulnerable to having multiple parties able to insert
  themselves into the conversation.  However it may be the case that
 the
  authenticator the peer is talking to isn't the one it thinks it is
 and
  the real authenticator may be somewhere in the path as well.  The
  conversation between the EAP peer and EAP server will not be
  compromised, however the result of the conversation may not have its
  intended effect.
 
  My point is that having a long distance between the EAP server
  and the authenticator has little to do with the lying NAS
  problem. The problem is that there are potentially untrustworthy
  parties (the NAS and any proxies) between the EAP peer and the
  EAP server and the EAP peer is trusting what it's told by them.
  If the EAP server and the authenticator were two inches apart
  with no intermediaries, that wouldn't help. The problem is the
  potentially untrustworthy folks between the EAP server and
  the EAP peer. You're trying to verify some of what they're
  telling the EAP peer. So I'm not sure that sentence helps but
  if it does, the problem is between the EAP peer and the EAP server.
 

 [Joe] I don't have an issue with stating that the problem is between
 the peer and the server, but I'd like to hear the author's view on
 this.

 
  [Joe] This is historical and reflects the discussion we had in the
  working group as the document was being developed.
 
  I see
  several downsides to including that text in this document.
  First, you're making it harder for the reader to understand
  what they actually need to do to implement this protocol.
  You've probably decreased by half the number of people who
  will actually read all of this document. Second, some people
  will use that digression to support arguments like (My
  incompatible implementation is compliant with RFC 
  because I encoded the network information into a blob
  and used that to generate EAP session keys). IETF is an
  engineering organization. We should make our specs as clear
  and 

Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread Phillip Hallam-Baker
I see no value in deallocating code point spaces and a huge amount of
potential harm.

Except at the very lowest levels of the protocol stack (IP and BGP)
there is really no technical need for a namespace that is limited. We
do have some protocols that will come to a crisis some day but there
are plenty of ways that the code space for DNS, TLS etc can be
expanded if we ever need to.


The Internet is driven by innovation and experiment. There are
certainly people who think that the role of this organization is to
protect the Internet from the meddling of people who might not get it
but they are wrong.

Even more wrong is the idea that IANA can actually act as suggested.
IANA only exists by mutual consent. I am happy to register a code
point if there is a reasonable procedure to do so. But if the idea is
to force me to kiss someone's ring then I'll pick a number and ship
the code and let other folk work out the problems later.

This already happens on a significant scale in the wild. SRV code
points being a prime example. There are far more unofficial code
points than recognized ones. Some of them are in standards I wrote at
W3C and OASIS. It would be best if IANA tracked these but I think it
rather unlikely anyone is going to accidentally overwrite the SAML or
the XKMS SRV code points.

It has happened with other registries too. Back in the day there was
an attempt to stop the storage manufacturers using ethernet IDs on
their drives. So the drive manufacturers simply declared a block of
space (Cxx) as theirs and the IEEE has since been forced to accept
this as a fait accompli. It has happened in UPC codes as well: when
the Europeans decided that the US authority was charging them
ridiculous fees, they just extended the code by an octet and set
themselves up as a meta-registry.


The only role of IANA is to help people avoid treading on each other
by accident. If it starts issuing code points that have been 'lightly
used', the value of IANA is degraded. I certainly don't want my
protocol to have to deal with issues that are caused by someone's
'experiment' that has not completely shut down. The only value of
going to IANA rather than choosing one myself is to avoid
contamination with earlier efforts.

The only area where I can see the benefits of re-allocation
outweighing the risks is in IP port assignments but even there I think
the real solution has to be to simply ditch the whole notion of port
numbers and use SRV type approaches for new protocols.


IANA is not a control point for the Internet. A fact that people need
to keep in mind when the ITU attempts to grab control in Dubai later
on this year.

Weakness is strength: The registries are not the control points people
imagine because they only have authority as long as people consent.


On Thu, Apr 19, 2012 at 5:17 PM, David Conrad d...@virtualized.org wrote:
 Hi,

 Scott O Bradner s...@sobco.com wrote:
 encouraging a report is fine

 Agreed.

 retracting the code points seems to add more confusion than it is worth 
 unless the code space is very tight

 Disagree.  From my experience at IANA, trying to figure out who to contact to 
 remove a code point gets harder the longer the code points are not being 
 used.  Unless the code space is unlimited, I'd argue that you want to 
 deallocate as soon as an experiment is over.  I'd even go so far as to say 
 that the original proposal for experimental code points should have explicit 
 revocation dates (which can, of course, be refreshed similarly to IDs).

 and I see no reason to obsolete the experimental rfc or move it to historic 
 status unless the report is that some bad thing happens when you try it out 
 - updating the old rfc is fine

 Having been involved with RFC 6563, I think in general it is quite useful to 
 signal "hey, you really don't want to implement this".  If this can be done 
 by updating the old RFC, that's fine.

 and I agree with Elliot about the nature of research - it is very common to 
 not reach a conclusion that something is bad (as in bad for the net) - and 
 that is the only case where I think that an experiment should be flagged as 
 a "don't go there" situation

 Agreed, with the proviso that limited resources (whether they are scarce or 
 not) should be reclaimed.

 Regards,
 -drc




-- 
Website: http://hallambaker.com/


Re: [nvo3] Key difference between DCVPN and L2VPN/L3VPN

2012-04-25 Thread Stewart Bryant

On 25/04/2012 14:57, Marshall Eubanks wrote:

A question in line.

On Wed, Apr 25, 2012 at 6:01 AM, Adrian Farreladr...@olddog.co.uk  wrote:

Hi Linda,


Respect your advice. However, some wording in the proposed charter are too
ambiguous, is it the intent?

For example:

   An NVO3 solution (known here as a Data Center Virtual Private Network
(DCVPN)) is a VPN that is viable across a scaling range of a few thousand VMs

to

several million VMs running on greater than 100K physical servers.

Do you mean one VPN across a scaling range of million VMs? or many VPNs
combined to scale range of million VMs?

I don't find the text ambiguous at all. You might disagree with what it says (a
VPN), but you can't claim it is ambiguous.


Another example:
   NVO3 will consider approaches to multi-tenancy that reside at the
network layer rather than using traditional isolation mechanisms that rely on

the

underlying layer 2 technology (e.g., VLANs)

network layer can mean different things to different people. Why not simply
say NV03 will consider approaches to multi-tenancy which do not rely on Layer

2

VLANs?

There are also layers above the network layer. The charter rules them out of
scope. This is good.

Stewart has clarified that network layer includes IP and MPLS, and that it is
the bit of the hourglass that we all know as the network layer.


The charter talks about


The NVO3 WG will determine which types of  service are needed by
typical
DC deployments (for example, IP and/or Ethernet).

I generally think of Ethernet as being Layer 2. Does this charter
envision Ethernet as being part of the
network layer? What was intended with having it mentioned there?

Regards
Marshall

Hi Marshall

The client VMs *may* need to talk Ethernet to their peers, but it is
intended that the VPNs that convey the client packets will run over the
network layer, i.e. IP and/or MPLS.

Stewart




Re: Future Handling of Blue Sheets

2012-04-25 Thread Samuel Weiler

On Tue, 24 Apr 2012, David Morris wrote:


The IETF meetings are actually not totally public. You must purchase a
'ticket' to attend. We would not allow someone to walk in off the street
and photograph the functions, or even sit in a meeting and take notes.


Without commenting specifically about photographs, as MSJ pointed out, 
this is not true.  It may not be socially acceptable to crash working 
group meetings, but we do not use force to prevent it.  And as Scott 
Bradner pointed out on this list some time ago:


   Back when the IETF decided to charge for meetings ($100/meeting
   sometime in the early 1990s) Steve Coya said that the IETF would
   never check badges to block people from meetings.

The Beijing meeting was a notable exception.  There were guards 
looking for badges and at least one attendee's partner was restrained 
by the guards when she attempted to enter the meeting space to find 
her partner.  Even then, the IETF was not the party doing the 
checking.  As the IAOC chair later explained:


   The IAOC was not aware that badge checking was going to happen
   prior to this meeting. It was implemented by the meeting host in
   conjunction with the hotel.

I imagine we would respond differently to something truly disruptive. 
Happily, we tend to be a civilized enough crowd that we don't need 
bouncers.


-- Sam


Re: [dane] Last Call: draft-ietf-dane-protocol-19.txt (The DNS-Based

2012-04-25 Thread Andrew Sullivan
On Wed, Apr 25, 2012 at 09:52:39AM -0400, Phillip Hallam-Baker wrote:

 dependency on the DNSSEC trust chain despite the easily observed fact
 that less than 97% of DNS resolvers will pass anything other than
 A/AAAA and CNAME records.

I'm having a hard time understanding that sentence.  Could you
clarify, please:

A.  Fewer than 97% of DNS resolvers can pass anything other than
A/AAAA and CNAME, which means something more than 3% of resolvers pass
only A/AAAA and CNAME.  

This is what I _think_ you mean, which means that n% > broken
resolvers > 3%, right?  If so, I'd like a citation, though it
doesn't sound wrong to me.  That we'd have something on the order
of 3% of the software deployed everywhere on the Internet be
broken ought to be completely unsurprising.

B.  97% of the DNS resolvers is the most that has ever been observed
working according to specification, and the number may be much lower.

This is the rhetorical point I think might be read in.  In this
case, I think a citation is in order.

Thanks,

A

-- 
Andrew Sullivan
a...@anvilwalrusden.com


Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread David Conrad
On Apr 25, 2012, at 7:27 AM, Phillip Hallam-Baker wrote:
 Except at the very lowest levels of the protocol stack (IP and BGP)
 there is really no technical need for a namespace that is limited.

Arguable, but irrelevant since the reality is that historically many (most?) 
protocols defined by the IETF to date used fixed length fields implying 
limitations in the number of identifiers in those namespaces.

 We
 do have some protocols that will come to a crisis some day but there
 are plenty of ways that the code space for DNS, TLS etc can be
 expanded if we ever need to.

Unfortunately, experience has demonstrated that most implementations of 
protocols do not handle potential expansion.

 Even more wrong is the idea that IANA can actually act as suggested.

You seem to have an odd idea of what is being suggested. However, experience 
has shown arguing with you is a waste of time so I'll let others engage if they 
care.

 Weakness is strength


And we've always been at war with Eastasia.

Regards,
-drc



RE: [PWE3] Last Call: draft-ietf-pwe3-redundancy-bit-06.txt (Pseudowire Preferential Forwarding Status Bit) to Proposed Standard

2012-04-25 Thread Aissaoui, Mustapha (Mustapha)
Dear all,
I have made a couple more clarifications to the text below based on additional 
feedback from Daniel on the use of the active PW selection algorithm in use 
cases presented in Section 15.

I am copying the new updated Section 5.1. All the changes from the current 
version of the draft are underlined.

Let me  know if you have any comments.

Regards,
Mustapha.
===
5.1. Independent Mode:
PW endpoint nodes independently select which PWs are eligible to become active 
and which are not. They advertise the corresponding Active or Standby 
preferential forwarding status for each PW. Each PW endpoint compares local and 
remote status bits and uses the PW that is UP at both endpoints and that 
advertised Active preferential forwarding status at both the local and remote 
endpoints.
In this mode of operation, the preferential forwarding status indicates the 
preferred forwarding state of each endpoint but the actual forwarding state of 
the PW is the result of the comparison of the local and remote forwarding 
status bits.
If more than one PW qualifies for the Active state, each PW endpoint MUST 
implement a common mechanism to choose the PW for forwarding. The default 
mechanism MUST be supported by all implementations and operates as follows:
1. For FEC128 PW, the PW with the lowest pw-id value is selected.
2. For FEC129 PW, each PW in a redundant set is uniquely identified at each 
PE using the following triplet: AGI::SAII::TAII. The unsigned integer form of 
the concatenated word can be used in the comparison. However, the SAII and TAII 
values as seen on a PE node are the mirror values of what the peer PE node 
sees. To have both PE nodes compare the same value we propose that the PE with 
the lowest system IP address use the unsigned integer form of AGI::SAII::TAII 
while the PE with the highest system IP address use the unsigned integer form 
of AGI::TAII::SAII. This way, both PEs will compare the same values. The PW 
which corresponds to the minimum of the compared values across all PWs in the 
redundant set is selected.
Note 1: in the case where the system IP address is not known, it is recommended 
to implement the optional active PW selection mechanism described next.
Note 2: in the case of segmented PW, the operator needs to make sure that the 
pw-id or AGI::SAII::TAII of the redundant PWs within the first and last segment 
are ordered consistently such that the same end-to-end MS-PW gets selected. 
Otherwise, it is recommended to implement the optional active PW selection 
mechanism described next.
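For illustration only, a minimal sketch of the default FEC129 tie-breaking 
described above might look like the following. The record layout, byte 
encodings, and helper names here are assumptions made for this example; they 
are not taken from the draft text.

# Sketch only: PW record layout and helper names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Fec129Pw:
    agi: bytes   # Attachment Group Identifier
    saii: bytes  # Source Attachment Individual Identifier
    taii: bytes  # Target Attachment Individual Identifier

def comparison_value(pw: Fec129Pw, local_ip: int, peer_ip: int) -> int:
    # The PE with the lower system IP address compares AGI::SAII::TAII,
    # while the PE with the higher system IP address compares
    # AGI::TAII::SAII, so both endpoints rank the PWs identically.
    if local_ip < peer_ip:
        blob = pw.agi + pw.saii + pw.taii
    else:
        blob = pw.agi + pw.taii + pw.saii
    return int.from_bytes(blob, "big")

def select_active_pw(candidates, local_ip, peer_ip):
    # Among PWs that are UP and advertise Active at both endpoints,
    # pick the one with the minimum comparison value.
    return min(candidates, key=lambda pw: comparison_value(pw, local_ip, peer_ip))
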
The PW endpoints MAY also implement the following optional active PW selection 
mechanism.
1. If the PW endpoint is configured with the precedence parameter on each 
PW in the redundant set, it must select the PW with the lowest configured 
precedence value.
2. If the PW endpoint is configured with one PW as primary and one or more 
PWs as secondary, it must select the primary PW in preference to all secondary 
PWs. If a primary PW is not available, it must use the secondary PW with the 
lowest precedence value. If the primary PW becomes available, a PW endpoint 
must revert to it immediately or after the expiration of a configurable delay.
3. This active PW selection mechanism assumes the precedence parameter 
values are configured consistently at both PW endpoints and that unique values 
are assigned to the PWs in the same redundancy set to achieve tie-breaking 
using this mechanism.
There are scenarios with dual-homing of a CE to PE nodes where each PE node 
needs to advertise Active preferential forwarding status on more than one PW in 
the redundancy set. However, a PE MUST always select a single PW for forwarding 
using the above active PW selection algorithm. An example of such a case is 
described in Section 15.2.
There are scenarios where each PE needs to advertise Active preferential 
forwarding status on a single PW in the redundancy set. In order to ensure that 
both PE nodes make the same selection, they MUST use the above active PW 
selection algorithm to determine the PW eligible for active state. An example 
of such a case is described in Section 15.5.
In steady state with consistent configuration, a PE will always find an active 
PW. However, it is possible that such a PW is not found due to a 
mis-configuration. In the event that an active PW is not found, a management 
indication SHOULD be generated. If a management indication for failure to find 
an active PW was generated and an active PW is subsequently found, a management 
indication should be generated, thereby clearing the previous failure indication. 
Additionally, a PE may use the optional request switchover procedures described 
in Section 6.3. to have both PE nodes switch to a common PW.
There may also be transient conditions where endpoints do not share a common 
view of the Active/Standby state of the PWs. This could be caused by 

RE: [MBONED] Last Call: draft-ietf-mboned-64-multicast-address-format-01.txt (IPv4-Embedded IPv6 Multicast Address Format) to Proposed Standard

2012-04-25 Thread mohamed.boucadair
Dear SM,

Thank you for the review. 

Please see inline.

Cheers,
Med 

-Message d'origine-
De : mboned-boun...@ietf.org [mailto:mboned-boun...@ietf.org] 
De la part de SM
Envoyé : dimanche 22 avril 2012 01:26
À : ietf@ietf.org
Cc : mbo...@ietf.org
Objet : Re: [MBONED] Last Call: 
draft-ietf-mboned-64-multicast-address-format-01.txt 
(IPv4-Embedded IPv6 Multicast Address Format) to Proposed Standard

At 15:33 18-04-2012, The IESG wrote:
The IESG has received a request from the MBONE Deployment WG 
(mboned) to
consider the following document:
- 'IPv4-Embedded IPv6 Multicast Address Format'
   draft-ietf-mboned-64-multicast-address-format-01.txt as 
a Proposed
Standard

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
ietf@ietf.org mailing lists by 2012-05-02. Exceptionally, 
comments may be

Is there a write-up for this proposal?

In Section 2:

   The format to build such addresses is defined in Section 3 for
ASM mode and Section 4 for SSM mode.

I suggest expanding ASM and SSM on first use.

Med: Ok. Done in my local copy. Thanks.


In Section 3:

   To meet the requirements listed in Appendix A.2

Wouldn't it be better to reference RFC 4291?

Med: Do you mean, cite RFC4291 in addition to the ref to Appendix A.2?


   This field must follow the recommendations specified in [RFC3306]
if unicast-based prefix is used or the recommendations specified
in [RFC3956] if embedded-RP is used.

Shouldn't that be a MUST?

Med: Done. 


In Section 4:

   Flags must be set to 0011.

Is that a requirement?

Med: Yes, because as listed in Appendix A.2, we wanted to have a prefix in 
the ff3x::/32 range.


   The embedded IPv4 address SHOULD be in the 232/8 range [RFC4607].
232.0.0.1-232.0.0.255 range is being reserved to IANA.

Why is this a SHOULD? 

Med: We first considered a MUST but we relaxed that requirement to SHOULD for 
any future use case which may need to map IPv4 ASM to IPv6 SSM. Does this make 
sense to you?
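
(As an illustration of the kind of mapping being discussed, a minimal sketch of 
embedding an IPv4 group address in the low-order 32 bits of an IPv6 multicast 
address might look like the following; the /96-style prefix and the exact bit 
layout used here are assumptions for illustration only, not the normative 
format defined in the draft.)

# Sketch only: prefix length and bit layout are illustrative assumptions.
import ipaddress

def embed_ipv4_group(prefix96: str, ipv4_group: str) -> ipaddress.IPv6Address:
    """Append the 32-bit IPv4 group address to an assumed 96-bit prefix."""
    prefix = ipaddress.IPv6Network(prefix96)
    assert prefix.prefixlen == 96, "this sketch assumes a /96 prefix"
    v4 = int(ipaddress.IPv4Address(ipv4_group))
    return ipaddress.IPv6Address(int(prefix.network_address) | v4)

# Example: an SSM group in the 232/8 range under a hypothetical ff3x:: prefix.
print(embed_ipv4_group("ff3e::/96", "232.1.2.3"))   # -> ff3e::e801:203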

 What does being reserved to IANA mean?


Med: It should be for IANA allocation instead of to IANA. Better?


Although the proposal appears simple, I would suggest further review 
as it updates RFC 4291.

Med: Reviews are more than welcome. FWIW, a call for review has been issued in 
6man and v6ops for 2 weeks:
* http://www.ietf.org/mail-archive/web/ipv6/current/msg15488.html
* http://www.ietf.org/mail-archive/web/v6ops/current/msg12174.html


Regards,
-sm

___
MBONED mailing list
mbo...@ietf.org
https://www.ietf.org/mailman/listinfo/mboned


Re: [dane] Last Call: draft-ietf-dane-protocol-19.txt (The DNS-Based

2012-04-25 Thread Paul Wouters

On Wed, 25 Apr 2012, Phillip Hallam-Baker wrote:


The browser providers do not hard fail on OCSP because doing so would
require them to wait for the OCSP response within the TLS handshake
and this is considered an unacceptable performance degradation.


And with the current ocsp.entrust.net issue, that would be a two day
performance degradation? There is also the privacy issue. But all of
this is off-topic.


Section 4 of the draft mandates a client hardfail if the DNSSEC trust
chain cannot be obtained. This is essential if the client is going to
use DNSSEC to establish certificate constraints just as certificate
revocation is an essential part of the PKIX model. But no browser
provider can expect to succeed with a product that simply stops
working when the user tries to surf the Web from a coffee shop.


Hotspot detection for a temporary DNS failure/spoofing mode is what
browsers need to support/implement. It is no different from the fix
they needed when we opened our browser at a coffee shop and all
tabs destroyed themselves reloading into hotspot login pages.


Since the coffee shop problem is not intentional we might imagine that
it will eventually go away. But this puts DANE in a deployment
deadlock bind. Nobody is going to fix their coffee shop routers until
there is a need to and that need won't exist until the coffee shop
routers are fixed.


I suggest you run with dnssec-trigger software for a while and then
come back to this assertion. We do not need coffeeshops to fix anything,
we need better DNSSEC integration on our laptops and phones.


Rather than mandating hardfail or any particular client behavior, the
specification should simply state that the client MUST conclude that
the DANE status of the certificate is invalid and then leave the
client to decide on what course of action to take. This will depend on
the circumstances of the particular user and the client provider's
assessment of the reliability of the DANE data and might range from do
nothing to send a security policy violation notification somewhere to
hardfail.


In all IETF protocols there is the notion of local policy overrides, yet we
don't add it specifically to every MUST in the protocol documents.
Additionally, the browser can fail the connection and terminate the TLS
and provide the user some feedback with a local policy override, and
then the browser complies with both the hard fail requirement and your
'too big to fail' argument.


Contrary to the assumptions of the specification authors, hard fail is
not the best option. It is not even the best option in the case that
the users are dissidents.


Tell that to the chrome users in Iraq that are still alive and not in
jail getting tortured. If anyone is going to sacrifice the one for the
many, I sure as hell hope it won't be protocol designers.


But you can be sure that Iran,
Russia and China will be doing so the minute any client started to
make use of DANE. These countries can (and will) make use of a client
hard fail to ensure that people don't use browsers that might be used
for 'information terrorism' or freedom of speech as the rest of us
call it.


Sure. Any data can be blocked. You work around the block with tunneling
and then use the actual DANE information for your own protection and
safety within the tunnel. Suggesting that the existence of a block means you
might as well not add security for the case where there is no block, or where
you can break through it, is wrong.


But the DANE approach is too dogmatic to succeed.


All your arguments against dane are equally valid/invalid for OCSP with
hard fail, which you seem to be a proponent of. I fail to see the
consistency in your reasoning.

Paul


Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread ned+ietf
 I see no value in deallocating code point spaces and a huge amount of
 potential harm.

It depends on the size of the space. I completely agree that if the space is
large - and that's almost always the case - then deallocating is going to be
somewhere between silly and very damaging.

Deprecating the use of a code point, OTOH, may make sense even if the space is
large.

The takeaway here, I think, is that if you're going to conclude experiments,
you need to examine these allocations and do something sensible with them,
where sensible is rarely going to mean deallocate.

 Except at the very lowest levels of the protocol stack (IP and BGP)
 there is really no technical need for a namespace that is limited. We
 do have some protocols that will come to a crisis some day but there
 are plenty of ways that the code space for DNS, TLS etc can be
 expanded if we ever need to.

There may not be any technical need, but there are a number of legacy designs
that were done ... poorly.

 The Internet is driven by innovation and experiment. There are
 certainly people who think that the role of this organization is to
 protect the Internet from the meddling of people who might not get it
 but they are wrong.

+1

 Even more wrong is the idea that IANA can actually act as suggested.
 IANA only exists by mutual consent. I am happy to register a code
 point if there is a reasonable procedure to do so. But if the idea is
 to force me to kiss someone's ring then I'll pick a number and ship
 the code and let other folk work out the problems later.

 This already happens on a significant scale in the wild. SRV code
 points being a prime example. There are far more unofficial code
 points than recognized ones. Some of them are in standards I wrote at
 W3C and OASIS. It would be best if IANA tracked these but I think it
 rather unlikely anyone is going to accidentally overwrite the SAML or
 the XKMS SRV code points.

Media types are an excellent example of this. The original registration
procedures were too restrictive so people simply picked names to use. We fixed
that for vendor assignments (fill in a web form) and the registrations
starting rolling in. (We're now trying to do the same for standard
assignments.) But of course we now have a legacy of unassigned material
to deal with.

 It has happened with other registries too. Back in the day there was
 an attempt to stop the storage manufacturers using ethernet IDs on
 their drives. So the drive manufacturers simply declared a block of
 space (Cxx) as theirs and the IEEE has since been forced to accept
 this as a fait accompli. It has happened in UPC codes as well, when
 the Europeans decided that the US authority was charging them
 ridiculous fees they just extended the code by an octet and set
 themselves up as a meta-registry.

 The only role of IANA is to help people avoid treading on each other
 by accident. If it starts issuing code points that have been 'lightly
 used', the value of IANA is degraded. I certainly don't want my
 protocol to have to deal with issues that are caused by someone's
 'experiment' that has not completely shut down. The only value of
 going to IANA rather than choosing one myself is to avoid
 contamination with earlier efforts.

No, not quite. The other important role is (hopefully) to keep people from
having multiple allocations for the same thing.

 The only area where I can see the benefits of re-allocation
 outweighing the risks is in IP port assignments but even there I think
 the real solution has to be to simply ditch the whole notion of port
 numbers and use SRV type approaches for new protocols.

+1

 IANA is not a control point for the Internet. A fact that people need
 to keep in mind when the ITU attempts to grab control in Dubai later
 on this year.

 Weakness is strength: The registries are not the control points people
 imagine because they only have authority as long as people consent.

Good point.

Ned


RE: [MBONED] Last Call: draft-ietf-mboned-64-multicast-address-format-01.txt (IPv4-Embedded IPv6 Multicast Address Format) to Proposed Standard

2012-04-25 Thread SM

Hi Med,
At 08:05 25-04-2012, mohamed.boucad...@orange.com wrote:

Med: Do you mean, cite RFC4291 in addition to the ref to Appendix A.2?


Yes, and have Appendix A.2 as informative.

Med: Yes, because as listed in Appendix A.2, we wanted to have an a 
prefix in the ff3x::/32 range.


You are using a must.  It might be interpreted differently.

Med: We first considered a MUST but we relaxed that required to 
SHOULD for any future use case which may need to map IPv4 ASM to 
IPv6 SSM. Does this makes sense to you?


Yes.


Med: It should be for IANA allocation instead of to IANA. Better?


There is no mention of that in the IANA Considerations section.  The 
range is already reserved for SSM destination addresses.  I am at a 
loss on that part of the text.  I'll defer to you on this.


Well, you tried your best.

Regards,
-sm 



Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread Phillip Hallam-Baker
+1

Deprecating a code point is very different from deallocating it, which
implies that it is going to be given out again in the future. The last
thing I want is a used code point.

I agree that avoiding multiple allocations is also a function of IANA,
but this is also something that argues for not being overly strict. If
it is too hard to get an allocation then there will be multiple
unofficial versions. Eventually there may be an attempt to pick one of
them as 'official'.

On Wed, Apr 25, 2012 at 1:46 PM, Ned Freed ned.fr...@mrochek.com wrote:
 I see no value in deallocating code point spaces and a huge amount of
 potential harm.

 It depends on the size of the space. I completely agree that if the space is
 large - and that's almost always the case - then deallocating is going to be
 somewhere between silly and very damaging.

 Deprecating the use of a code point, OTOH, may make sense even if the space
 is large.

 The takeaway here, I think, is that if you're going to conclude experiments,
 you need to examine these allocations and do something sensible with them,
 where sensible is rarely going to mean deallocate.

 Except at the very lowest levels of the protocol stack (IP and BGP)
 there is really no technical need for a namespace that is limited. We
 do have some protocols that will come to a crisis some day but there
 are plenty of ways that the code space for DNS, TLS etc can be
 expanded if we ever need to.

 There may not be any technical need, but there are a number of legacy designs
 that were done ... poorly.

 The Internet is driven by innovation and experiment. There are
 certainly people who think that the role of this organization is to
 protect the Internet from the meddling of people who might not get it
 but they are wrong.

 +1

 Even more wrong is the idea that IANA can actually act as suggested.
 IANA only exists by mutual consent. I am happy to register a code
 point if there is a reasonable procedure to do so. But if the idea is
 to force me to kiss someone's ring then I'll pick a number and ship
 the code and let other folk work out the problems later.

 This already happens on a significant scale in the wild. SRV code
 points being a prime example. There are far more unofficial code
 points than recognized ones. Some of them are in standards I wrote at
 W3C and OASIS. It would be best if IANA tracked these but I think it
 rather unlikely anyone is going to accidentally overwrite the SAML or
 the XKMS SRV code points.

 Media types are an excellent example of this. The original registration
 procedures were too restrictive so people simply picked names to use. We fixed
 that for vendor assignments (fill in a web form) and the registrations
 started rolling in. (We're now trying to do the same for standard
 assignments.) But of course we now have a legacy of unassigned material
 to deal with.

 It has happened with other registries too. Back in the day there was
 an attempt to stop the storage manufacturers using ethernet IDs on
 their drives. So the drive manufacturers simply declared a block of
 space (Cxx) as theirs and the IEEE has since been forced to accept
 this as a fait accompli. It has happened in UPC codes as well, when
 the Europeans decided that the US authority was charging them
 ridiculous fees they just extended the code by an octet and set
 themselves up as a meta-registry.

 The only role of IANA is to help people avoid treading on each other
 by accident. If it starts issuing code points that have been 'lightly
 used', the value of IANA is degraded. I certainly don't want my
 protocol to have to deal with issues that are caused by someone's
 'experiment' that has not completely shut down. The only value of
 going to IANA rather than choosing one myself is to avoid
 contamination with earlier efforts.

 No, not quite. The other important role is (hopefully) to keep people from
 having multiple allocations for the same thing.

 The only area where I can see the benefits of re-allocation
 outweighing the risks is in IP port assignments but even there I think
 the real solution has to be to simply ditch the whole notion of port
 numbers and use SRV type approaches for new protocols.

 +1

 IANA is not a control point for the Internet. A fact that people need
 to keep in mind when the ITU attempts to grab control in Dubai later
 on this year.

 Weakness is strength: The registries are not the control points people
 imagine because they only have authority as long as people consent.

 Good point.

                                Ned



-- 
Website: http://hallambaker.com/


Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread Phillip Hallam-Baker
Not arguable in the fashion that you do it. You seem to want to signal
disagreement without actually arguing a contrary case.

Cutting pieces out of someone's argument to make it look stupid is
itself a stupid trick.


On Wed, Apr 25, 2012 at 12:55 PM, David Conrad d...@virtualized.org wrote:
 On Apr 25, 2012, at 7:27 AM, Phillip Hallam-Baker wrote:
 Except at the very lowest levels of the protocol stack (IP and BGP)
 there is really no technical need for a namespace that is limited.

 Arguable, but irrelevant since the reality is that historically many (most?) 
 protocols defined by the IETF to date used fixed length fields implying 
 limitations in the number of identifiers in those namespaces.

 We
 do have some protocols that will come to a crisis some day but there
 are plenty of ways that the code space for DNS, TLS etc can be
 expanded if we ever need to.

 Unfortunately, experience has demonstrated that most implementations of 
 protocols do not handle potential expansion.

 Even more wrong is the idea that IANA can actually act as suggested.

 You seem to have an odd idea of what is being suggested. However, experience 
 has shown arguing with you is a waste of time so I'll let others engage if 
 they care.

 Weakness is strength


 And we've always been at war with Eastasia.

 Regards,
 -drc




-- 
Website: http://hallambaker.com/


Re: Last Call: draft-ietf-dane-protocol-19.txt

2012-04-25 Thread =JeffH

Some additional modest last call comments on draft-ietf-dane-protocol-19...

terminological issues
-

The "usage" notion has at least five term/phrase variations used in the spec. I 
found this quite confusing. Here are the variations I found:


"usage" = "usage type" = "certificate usage" = "certificate usage type" = "TLSA Usages"


I suggest settling on one or two phrases for almost all occurrences. I suggest 
using "certificate usage type" in (almost) all cases, and "usage type" perhaps 
in cases where the bare term "usage" is presently used. To help out, here's an 
updated TLSA RDATA Wire Format diagram:


                        1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |  Usage Type   |    Selector   | Matching Type |               /
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               /
   /                                                               /
   /                 Certificate Association Data                  /
   /                                                               /
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+



Separately, the term "TLSA certificate association" is used in a few places, 
and should probably be "TLS certificate association" for consistency.


I also found the term "TLSA association" used on pages 22 and 23, which ought to 
be "TLS certificate association", yes?


Section 8.1 introduces the somewhat ambiguous term "DANE client", which is 
contrasted with "common TLS clients" and "current TLS client". Perhaps 
"TLSA-aware TLS client" is more appropriate here than "DANE client", especially 
if this protocol is referred to as the "TLSA protocol"?




editorial items
---

 1.1. Background of the Problem

[ I'd entitle this section "Background and Motivation" ]

last para:

DNS-Based Authentication of Named Entities (DANE) offers the option
to use the DNSSEC infrastructure to store and sign keys and
certificates that are used by TLS.

If in fact the term DANE (and its expansion) names a class of Secure
DNS-based cert/key-to-domain-name associations, and protocols for particular
instances will nominally be assigned their own names, where a case-in-point is
the TLSA Protocol, then..

s/TLS/security protocols/

..in the above-quoted sentence.


 1.2 Securing the Association with a Server's Certificate

which association is this section title referring to? (it's ambiguous)

Suggested more precise section title:

Securing the Association of Domain Name and TLS Server Certificate


 Currently, the client must extract the domain name from the
certificate and must successfully validate the certificate, including
chaining to a trust anchor.

I found the above description unnecessarily imprecise, though recognizing the 
desire to provide only a brief overview here. A suggested modest rewrite:


   Currently, the client must extract the domain name from the certificate
   and validate it [RFC6125], and must also successfully validate the
   certificate, including chaining to a CA's trust anchor [RFC5280].
   However, as explained above in Section 1.1, essentially any CA may
   have issued the certificate for the domain name.
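
Purely as an illustration of the two checks named here (RFC 5280 chain
validation and RFC 6125 name matching), a minimal sketch using Python's
standard ssl module; the host name below is a placeholder:

   import socket, ssl

   ctx = ssl.create_default_context()   # system CA trust anchors
   with socket.create_connection(("www.example.com", 443)) as sock:
       with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
           # chain validation and host-name matching both happen during the
           # handshake; a failure raises ssl.SSLCertVerificationError
           print(tls.getpeercert()["subject"])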



 2. The TLSA Resource Record

The TLSA DNS resource record (RR) is used to associate a certificate
with the domain name where the record is found.

suggested clarification..

  The TLSA DNS resource record (RR) is used to associate a TLS server
  certificate or public key with the domain name where the record is found,
  thus forming a TLS certificate association.

Otherwise, "TLS certificate association" is only tacitly defined.
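
As a purely hypothetical illustration of such an association, a TLSA record in
presentation format might look something like this (owner name and field values
are illustrative only; the three numeric fields are the usage type, selector,
and matching type shown in the diagram above):

   _443._tcp.www.example.com. IN TLSA 1 0 1 (
           <hex-encoded hash of the certificate association data> )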



 2.1.1. The Certificate Usage Field


A one-octet value, called certificate usage or just usage,
specifies the provided association that will be used to match the
target certificate from the TLS handshake.

In the above, as well as two more instances in the paragraphs following the 
above, I suggest..


  s/target certificate/presented certificate/

..to be more consistent with RFC6125.




 2.1.4. The Certificate Association Data Field


The certificate association data to be matched.  This field
contains the data to be matched.

The 2nd sentence "This field contains the data to be matched." is redundant.




 5. TLSA and DANE Use Cases and Requirements
   .
   .
   .
Combination  -- Multiple TLSA records can be published for a given
   host name, thus enabling the client to construct multiple TLSA
   certificate associations that reflect different DANE assertions.
   No support is provided to combine two TLSA certificate
   associations in a single operation.

Roll-over  -- TLSA records are processed in the normal manner within
   the scope of DNS protocol, including the TTL expiration of the
   records.  This ensures that 

Re: [dane] Last Call: draft-ietf-dane-protocol-19.txt (The DNS-Based

2012-04-25 Thread Phillip Hallam-Baker
On Wed, Apr 25, 2012 at 11:15 AM, Andrew Sullivan
a...@anvilwalrusden.com wrote:
 On Wed, Apr 25, 2012 at 09:52:39AM -0400, Phillip Hallam-Baker wrote:

 dependency on the DNSSEC trust chain despite the easily observed fact
 that less than 97% of DNS resolvers will pass anything other than
 A/AAAA and CNAME records.

 I'm having a hard time understanding that sentence.  Could you
 clarify, please:

 A.  Fewer than 97% of DNS resolvers can pass anything other than
 A/AAAA and CNAME, which means something more than 3% of resolvers pass
 only A/AAAA and CNAME.

    This is what I _think_ you mean, which means that n% > broken
    resolvers > 3%, right?  If so, I'd like a citation, though it
    doesn't sound wrong to me.  That we'd have something on the order
    of 3% of the software deployed everywhere on the Internet be
    broken ought to be completely unsurprising.

That was what two independent studies that were input to the CABForum
revocation workshop found. One was by Comodo; for the other I am not sure
what the citability status would be.

The Comodo study was obtained by hooking the OCSP validation call in a
very large number of browsers for over a week. I will see if it could
be submitted as a draft as such studies can be useful.


 B.  97% of the DNS resolvers is the most that has ever been observed
 working according to specification, and the number may be much lower.

    This is the rhetorical point I think might be read in.  In this
    case, I think a citation is in order.

Unfortunately this is also the case since we were merely looking for
support for TXT records. So I would expect to see an even higher rate
of stripping for DNSSEC records.

-- 
Website: http://hallambaker.com/


Re: [dane] Last Call: draft-ietf-dane-protocol-19.txt (The DNS-Based

2012-04-25 Thread Phillip Hallam-Baker
On Wed, Apr 25, 2012 at 11:47 AM, Paul Wouters p...@nohats.ca wrote:
 On Wed, 25 Apr 2012, Phillip Hallam-Baker wrote:


 Rather than mandating hardfail or any particular client behavior, the
 specification should simply state that the client MUST conclude that
 the DANE status of the certificate is invalid and then leave the
 client to decide on what course of action to take. This will depend on
 the circumstances of the particular user and the client provider's
 assessment of the reliability of the DANE data and might range from do
 nothing to send a security policy violation notification somewhere to
 hardfail.


 In all IETF protocols, there is the local policy override, yet we
 don't add it specifically to every MUST in the protocol documents.
 Additionally, the browser can fail the connection and terminate the TLS
 and provide the user some feedback with a local policy override, and
 then the browser complies with both the hard fail requirement and your
 'too big to fail' argument.

So you are saying that this MUST does not matter because MUSTs can
always be overridden?

My view is that the correct approach in this case is to use a SHOULD.
That is also what it says in 2119:

1. MUST   This word, or the terms REQUIRED or SHALL, mean that the
   definition is an absolute requirement of the specification.

6. Guidance in the use of these Imperatives

   Imperatives of the type defined in this memo must be used with care
   and sparingly.  In particular, they MUST only be used where it is
   actually required for interoperation or to limit behavior which has
   potential for causing harm (e.g., limiting retransmisssions)  For
   example, they must not be used to try to impose a particular method
   on implementors where the method is not required for
   interoperability.


I think that 'causing harm' needs to be considered very narrowly here
as it seems pretty clear that the original context is talking about
harm to network operations.

If we think that local security policy is going to override then we
really have no choice but to use a SHOULD if we are taking 2119
literally.

 Contrary to the assumptions of the specification authors, hard fail is
 not the best option. It is not even the best option in the case that
 the users are dissidents.

 Tell that to the chrome users in Iraq that are still alive and not in
 jail getting tortured. If anyone is going to sacrifice the one for the
 many, I sure as hell hope it won't be protocol designers.

That is unfair on many levels, not least because 1) the Chrome guys
were the people who found the issue and 2) this was not a case where
revocation could have helped since the CA had no idea which certs had
been issued.

As I explained in the following discussion, what was most important
was to detect and report the security policy anomaly even if it was
not going to be sufficiently reliable to hard fail on. Of course what
would be the best case would be an Internet that didn't have such a
high degree of inherent unreliability. But that is not an option at
present.

Proposing to mandate behavior in the expectation of being ignored is
irresponsible.

 Sure. Any data can be blocked. You work around the block with tunneling
 and then use the actual DANE information for your own protection and
 safety within the tunnel. Suggesting the block means you might as well
 not add security in case there is no block, or you can break through it,
 is wrong.

No, it means that you cannot address this particular problem with a
reductionist approach. We have already deployed a technology that
provides a partial fix to this problem.

 But the DANE approach is too dogmatic to succeed.


 All your arguments against dane are equally valid/invalid for OCSP with
 hard fail, which you seem to be a proponent of. I fail to see the
 consistency in your reasoning.

That is true. What I am arguing here is to not build a second
infrastructure that is going to fail in exactly the same way that OCSP
is failing.

If you are subscribed to the therightkey list you will have my omnibroker
proposal, which describes a way to use both DANE and OCSP without the current
failure modes. Otherwise I can send you the PDF.


-- 
Website: http://hallambaker.com/


WG Review: Network Virtualization Overlays (nvo3) - 25-Apr-2012 update

2012-04-25 Thread Stewart Bryant

This version of the NVO3 charter reflects the discussions
on the list and comments received as of this afternoon.

I propose to take this to the IESG for their second
review tomorrow.

Stewart

==

NVO3: Network Virtualization Over Layer 3

Chairs - TBD
Area - Routing
Area Director - Stewart Bryant
INT Area Adviser - TBD
OPS Area Adviser - TBD

Support for multi-tenancy has become a core requirement of data centers
(DCs), especially in the context of data centers supporting virtualized
hosts known as virtual machines (VMs). Three key requirements needed
to support multi-tenancy are:

   o Traffic isolation, so that a tenant's traffic is not visible to
     any other tenant,

   o Address independence, so that one tenant's addressing scheme does
     not collide with other tenants' addressing schemes or with addresses
     used within the data center itself, and

   o Support for the placement and migration of VMs anywhere within the
     data center, without being limited by DC network constraints
     such as the IP subnet boundaries of the underlying DC network.

An NVO3 solution (known here as a Data Center Virtual Private
Network (DCVPN)) is a VPN that is viable across a scaling
range of a few thousand VMs to several million VMs running on
greater than one hundred thousand physical servers. It thus has
good scaling properties from relatively small networks to
networks with several million DCVPN endpoints and hundreds of
thousands of DCVPNs within a single administrative domain.

A DCVPN also supports VM migration between physical servers
in a sub-second timeframe.

Note that although this charter uses the term VM throughout, NVO3 must
also support connectivity to traditional hosts e.g. hosts that do not
have hypervisors.

NVO3 will consider approaches to multi-tenancy that reside at the
network layer rather than using traditional isolation mechanisms
that rely on the underlying layer 2 technology (e.g., VLANs).
The NVO3 WG will determine which types of connectivity services
are needed by typical DC deployments (for example, IP and/or
Ethernet).

NVO3 will document the problem statement, the applicability,
and an architectural framework for DCVPNs within a data center
environment. Within this framework, functional blocks will be
defined to allow the dynamic attachment / detachment of VMs to
their DCVPN, and the interconnection of elements of the DCVPNs
over the underlying physical network. This will support the
delivery of packets to the destination VM within the scaling
and migration limits described above.

Based on this framework, the NVO3 WG will develop requirements for both
control plane protocol(s) and data plane encapsulation format(s), and
perform a gap analysis of existing candidate mechanisms. In addition
to functional and architectural requirements, the NVO3 WG will develop
management, operational, maintenance, troubleshooting, security and
OAM protocol requirements.

The NVO3 WG will investigate the interconnection of the DCVPNs
and their tenants with non-NVO3 IP network(s) to determine if
any specific work is needed.

The NVO3 WG will write the following informational RFCs, which
must have completed Working Group Last Call before rechartering can be
considered:

Problem Statement
Framework document
Control plane requirements document
Data plane requirements document
Operational Requirements
Gap Analysis

Driven by the requirements and consistent with the gap analysis,
the NVO3 WG may request being rechartered to document solutions
consisting of one or more data plane encapsulations and
control plane protocols as applicable.  Any documented solutions
will use existing IETF protocols if suitable. Otherwise,
the NVO3 WG may propose the development of new IETF protocols,
or the writing of an applicability statement for non-IETF
protocols.

If the WG anticipates the adoption of the technologies of
another SDO, such as the IEEE, as part of the solution, it
will liaise with that SDO to ensure the compatibility of
the approach.

Milestones:

Dec 2012 Problem Statement submitted for IESG review
Dec 2012 Framework document submitted for IESG review
Dec 2012 Data plane requirements submitted for IESG review
Dec 2012 Operational Requirements submitted for IESG review
Mar 2013 Control plane requirements submitted for IESG review
Mar 2013 Gap Analysis submitted for IESG review
Apr 2013 Recharter or close Working Group




Re: Future Handling of Blue Sheets

2012-04-25 Thread Eric Burger
I would strongly support what Wes is talking about here.  I see two (other) 
reasons for keeping blue sheets.  The first is it is a recognized method of 
showing we have an open standards process.  The second is to support those who 
are trying to defend themselves in patent suits.  Frankly, I hope the IETF 
makes it hard for those who want to abuse the IETF process to get patents or 
ignore prior art and then come after the industry for undeserved royalties.

For the former purpose, just having a list is sufficient. However, for the 
latter purpose, one needs records that would be admissible in court. Without 
eating our dog food and having some sort of audited digital signature 
technology, a simple scan will not do.

On Apr 23, 2012, at 10:04 AM, George, Wes wrote:

 From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On Behalf Of IETF
 Chair
 Sent: Sunday, April 22, 2012 10:31 AM
 To: IETF
 Subject: Future Handling of Blue Sheets
 
 2.  Scan the blue sheet and include the image in the proceedings for the WG
 session; and
 3.  Discard paper blue sheets after scanning.
 
 [WEG] Based on some other messages in this thread, there seems to be a lack 
 of clarity as to the full, official purpose of the blue sheets. Are they 
 simply to track generic participation levels for room sizing, or are they 
 also meant as a historical record of attendees to a given WG? Given that 
 they are being subpoenaed, and that they are archived today, I tend to think 
 that they're meant to officially track attendees. I'd appreciate someone 
 correcting me if I'm wrong.
 
 If blue sheets are meant to be an official record, then technically we should 
 document handling/scanning/storage procedures for WG chairs and the 
 secretariat such that this scan will be admissible in lieu of a paper copy 
 for any subpoena or other court proceeding. But if we're honest, I'm not sure 
 that they're of much use as an official record either way. Do we have 
 procedures today that would prevent tampering before the paper copy ends up 
 in an archive box? And even then, blue sheets and jabber logs (for remote 
 participants) are still ultimately a best-effort honor system, and therefore 
 there is no guarantee of their validity. I can remotely participate without 
 registering for the meeting, and can sign into Jabber as Mickey Mouse just 
 as easily as I can sign the blue sheet that way. I can also sign as Randy 
 Bush or sign my own name completely illegibly.
 
 Could we simply do a headcount for room sizing, and treat the matter of 
 official attendee record for WG meetings as a separate problem? IMO, it's not 
 currently solved by the blue sheets, and I don't see that changing just 
 because we dispense with the paper copies in a box in a warehouse.
 
 Thanks
 Wes George
 



Re: Future Handling of Blue Sheets

2012-04-25 Thread SM

Hi Eric,
At 15:06 25-04-2012, Eric Burger wrote:
For the former purpose, just having a list is sufficient. However, 
for the latter purpose, one needs records that would be admissible 
in court. Without eating our dog food and having some sort of 
audited digital signature technology, a simple scan will not do.


I assumed that the IAOC considered the legal implications of 
discarding the blue sheets before the IETF was asked for 
feedback.  The IAOC has supposedly been working on a privacy statement 
since mid-2011.  There is a document about retention policy.


I haven't seen any of the above mentioned in this long thread.

Regards,
-sm 



Re: [nvo3] WG Review: Network Virtualization Overlays (nvo3) - 25-Apr-2012 update

2012-04-25 Thread Stewart Bryant

Does deleting "IETF" in the following
sentence:

Any documented solutions
will use existing IETF protocols if suitable.

satisfy your concerns?

- Stewart



Re: [nvo3] WG Review: Network Virtualization Overlays (nvo3) - 25-Apr-2012 update

2012-04-25 Thread Stewart Bryant

Those were my thoughts when the text was written.

Stewart

On 26/04/2012 00:57, david.bl...@emc.com wrote:

Joe and Pat,

I'm less concerned about this - I think the words "if suitable" regarding
use of existing IETF protocols are sufficient to support choosing the best
solution whether it comes from IETF or elsewhere.  As Pat notes:


when the time comes, we will debate what is suitable anyway.

I agree that such a debate is inevitable.

Thanks,
--David

David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA  01748
+1 (508) 293-7953 FAX: +1 (508) 293-7786
david.bl...@emc.comMobile: +1 (978) 394-7754



-Original Message-
From: nvo3-boun...@ietf.org [mailto:nvo3-boun...@ietf.org] On Behalf Of Joe
Pelissier (jopeliss)
Sent: Wednesday, April 25, 2012 7:35 PM
To: n...@ietf.org; i...@ietf.org
Cc: IETF Discussion
Subject: Re: [nvo3] WG Review: Network Virtualization Overlays (nvo3) - 25-
Apr-2012 update

I too am uncomfortable with the wording regarding the IETF protocols.
It seems that we should be striving to choose the best technical
solution regardless of whether it's an IETF protocol or one from another
SDO. This can, and should, be covered as part of the gap analysis.
Also, we should give preference to using existing suitable protocols
(IETF or from other SDOs) over development of new protocols.

Regards,
Joe Pelissier


-Original Message-
From: nvo3-boun...@ietf.org [mailto:nvo3-boun...@ietf.org] On Behalf Of
Pat Thaler
Sent: Wednesday, April 25, 2012 5:55 PM
To: Stewart Bryant (stbryant); n...@ietf.org; i...@ietf.org
Cc: IETF Discussion
Subject: Re: [nvo3] WG Review: Network Virtualization Overlays (nvo3) -
25-Apr-2012 update

Stewart,

The charter is looking pretty good. I'd like to get on to the next
phase, but not with this text:
Driven by the requirements and consistent with the gap analysis, the
NVO3 WG may request being rechartered to document solutions consisting
of one or more data plane encapsulations and control plane protocols as
applicable.  Any documented solutions will use existing IETF protocols
if suitable. Otherwise, the NVO3 WG may propose the development of new
IETF protocols, or the writing of an applicability statement for a
non-IETF protocol.

There are two issues with this:
Is now the right time to be defining the boundaries on what we might
request being chartered next? Framework, requirements and gap analysis
drafts are still to be written. If we get to the end and find we need
something other than or in addition to a data plan encapsulation or
control plane protocol, would we not request it to be chartered? Surely
once the work is done.

Secondly, as this text got rewritten, it gives a preference for IETF
protocols over other protocols even if they are standards. There is a
part of the work where an IEEE 802.1 protocol, VDP, may turn out to be
suitable. Obviously any IETF protocols that are also suitable should be
considered but not to the exclusion of consideration for an IEEE
protocol.

Presumably there is always a preference for using existing protocol if
suitable rather than inventing new. It seems unnecessary to state that -
when the time comes, we will debate what is suitable anyway.

Therefore, at least "Any documented solutions will use existing IETF
protocols if suitable. Otherwise, the NVO3 WG may propose the
development of new IETF protocols, or the writing of an applicability
statement for a non-IETF protocol." should be deleted.

Regards,
Pat

-Original Message-
From: nvo3-boun...@ietf.org [mailto:nvo3-boun...@ietf.org] On Behalf Of
Stewart Bryant
Sent: Wednesday, April 25, 2012 2:39 PM
To: n...@ietf.org; i...@ietf.org
Cc: IETF Discussion
Subject: [nvo3] WG Review: Network Virtualization Overlays (nvo3) -
25-Apr-2012 update

This version of the NVO3 charter reflects the discussions on the list
and comments received as of this afternoon.

I propose to take this to the IESG for their second review tomorrow.

Stewart

==

NVO3: Network Virtualization Over Layer 3

Chairs - TBD
Area - Routing
Area Director - Stewart Bryant
INT Area Adviser - TBD
OPS Area Adviser - TBD

Support for multi-tenancy has become a core requirement of data centers
(DCs), especially in the context of data centers supporting virtualized
hosts known as virtual machines (VMs). Three  key requirements needed to
support multi-tenancy are:

o Traffic isolation, so that a tenant's traffic is not visible to
  any other tenant, and

o Address independence, so that one tenant's addressing scheme does
  not collide with other tenant's addressing schemes or with
addresses
  used within the data center itself.

 o Support the placement and migration of VMs anywhere within the
   data center, without being limited by DC network constraints
   such as the IP subnet boundaries of the underlying DC 

Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread David Conrad
Ned,

On Apr 25, 2012, at 10:46 AM, Ned Freed wrote:
 I see no value in deallocating code point spaces and a huge amount of 
 potential harm.
 It depends on the size of the space.

Why?  We're talking about completed experiments. I'm not sure I see any 
particular value in having IANA staff continue to maintain registries (which is 
what I've translated code point _spaces_ to) for the protocol defined within 
the RFC(s) of that experiment.  I could, perhaps, see memorializing the final 
state of the registries for the experiment in an informational RFC, but don't 
really see the point in cluttering up http://www.iana.org/protocols with more 
junk than is already in there. Trying to find things is already annoying enough.

 The takeaway here, I think, is that if you're going to conclude experiments,
 you need to examine these allocations and do something sensible with them,
 where sensible is rarely going to mean deallocate.

I agree with the first part.  Don't understand the last part.  

Regards,
-drc



Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread ned+ietf
 Ned,

 On Apr 25, 2012, at 10:46 AM, Ned Freed wrote:
  I see no value in deallocating code point spaces and a huge amount of 
  potential harm.
  It depends on the size of the space.

 Why? 

Because if you deallocate and reallocate it, there can be conflicts. Perhaps
you haven't noticed, but a lot of times people continue to use stuff that IETF
considers to be bad ideas, including but not limited to things we called
experiments at some point.

 We're talking about completed experiments.

It doesn't matter if we're talking about pink prancing ponies. The issue is
whether or not the code point is valuable. If it isn't there's no reason to
deallocate it and every reason not to, although you may want to deprecate its
use if the experiment proved to be a really bad idea.

 I'm unclear I see any particular value in having IANA staff continue to
 maintain registries (which is what I've translated code point _spaces_ to)
 for the protocol defined within the RFC(s) of that experiment.

Ah, I think I see the conflict. You're talking about experiments that define
namespaces themselves, rather than using code points out of some other more
general space, which is what I was talking about.

That said, I have to say I reach pretty much the same conclusion for
experimental code point spaces that I do for experimental use of code points
in other spaces: They should not be deallocated/deleted. Again, just because
the IETF deems an experiment to be over, or even if the IETF concludes that
the experiment was a failure, doesn't mean people won't continue to use
it. And getting rid of information that people may need to get things to
interoperate seems to, you know, kinda go against some of our core principles.

 I could, perhaps, see memorializing the final state of the registries for the
 experiment in an informational RFC, but don't really see the point in
 cluttering up http://www.iana.org/protocols with more junk than is already in
 there. Trying to find things is already annoying enough.

That's a problem with the organization of the web site, not an argument for
getting rid of information. There are abundant examples of web sites that
contain thousands of times as much stuff as IANA does where finding what you
want is easy if not outright trivial.

Ned


Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread David Conrad
Ned,

On Apr 25, 2012, at 7:31 PM, Ned Freed wrote:
 I see no value in deallocating code point spaces

 It depends on the size of the space.
 Why? 
 Because if you deallocate and reallocate it, there can be conflicts. Perhaps
 you haven't noticed, but a lot of times people continue to use stuff that IETF
 considers to be bad ideas, including but not limited to things we called
 experiments at some point.
 
Perhaps you haven't noticed, but no one was suggesting deallocating and 
reallocating anything that was in use.  Or do you have a different 
interpretation of "if appropriate"?

 And getting rid of information that people may need to get things to
 interoperate seems to, you know, kinda go against some of our core principles.


Sorry, where did anyone suggest getting rid of any information that people may 
need to get things to interoperate again?  Or do you interpret moving a XML 
page from a web server into an informational RFC to be getting rid of 
information?

I'll admit I find this thread bordering on the surreal with some fascinating 
kneejerk reactions.  As far as I can tell, the only thing that was proposed was 
something to encourage documentation of the conclusion of experiments and if 
appropriate, deprecate any IANA code points allocated for the experiment.  
Both of these seem like good things to me.  This has somehow been translated 
into variously:

a) a declaration about how research is done
b) deletion and/or reallocation of code point spaces that people are using
c) killing off successful protocols because they're documented in experimental 
not standards track rfcs
d) violating our core principles
e) process for the sake of process
f) IANA being a control point for the Internet
g) etc.

Did I miss a follow-up message from the Inherently Evil Steering Group that 
proposed these sorts of things?

Regards,
-drc



Call for Comment: IETF and ITU-T Standardization Sector Collaboration Guidelines

2012-04-25 Thread IAB Chair
 This is an IETF-wide Call for Comment on Internet Engineering Task Force and 
 International
 Telecommunication Union - Telecommunications Standardization Sector 
 Collaboration Guidelines.  
  
 The document is being considered for publication as an Informational RFC  
 within the IAB stream, and is available for inspection here:  
 http://tools.ietf.org/html/draft-iab-rfc3356bis
  
 The Call for Comment will last until May 25, 2012. Please send comments  
 to i...@iab.org, or submit them via TRAC (see below).  
  
 ===  
 Submitting Comments via TRAC  
  
 1. To submit an issue in TRAC, you first need to login to the IAB site on the 
 tools server: http://tools.ietf.org/wg/iab/trac/login
  
 2. If you don't already have a login ID, you can obtain one by  
 navigating to this site: http://tools.ietf.org/newlogin  
  
 3. Once you have obtained an account, and have logged in, you can file  
 an issue by navigating to the ticket entry form: 
 http://trac.tools.ietf.org/wg/iab/trac/newticket  
  
 4. When opening an issue:  
  
 a. The Type: field should be set to defect for an issue with the current 
 document text, or enhancement for a proposed addition of functionality 
 (such as an additional requirement).  
 b. The Priority: field is set based on the severity of the Issue. For 
 example, editorial issues are typically minor or trivial.  
 c. The Milestone: field should be set to milestone1 (useless, I know).  
 d. The Component: field should be set to the document you are filing the 
 issue on.  
 e. The Version: field should be set to 1.0.  
 f. The Severity: field should be set based on the status of the document 
 (e.g. "In WG Last Call" for a document in IAB last call)  
 g. The Keywords: and CC: fields can be left blank unless inspiration  seizes 
 you.  
 h. The Assign To: field is generally filled in with the email address  of the 
 editor.  
  
 5. Typically it won't be necessary to enclose a file with the ticket,  but if 
 you need to, select I have files to attach to this ticket.  
  
 6. If you want to preview your Issue, click on the Preview button.  When 
 you're ready to submit the issue, click on the Create Ticket button. 
  
 7. If you want to update an issue, go to the View Tickets page:  
 http://TRAC.tools.ietf.org/wg/iab/trac/report/1  
  
 Click on the ticket # you want to update, and then modify the ticket fields 
 as required  


Last Call: draft-ietf-netext-access-network-option-10.txt (Access Network Identifier (ANI) Option for Proxy Mobile IPv6) to Proposed Standard

2012-04-25 Thread The IESG

The IESG has received a request from the Network-Based Mobility
Extensions WG (netext) to consider the following document:
- 'Access Network Identifier (ANI) Option for Proxy Mobile IPv6'
  draft-ietf-netext-access-network-option-10.txt as a Proposed Standard

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
i...@ietf.org mailing lists by 2012-05-09. Exceptionally, comments may be
sent to i...@ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract


   The local mobility anchor in a Proxy Mobile IPv6 domain is able to
   provide access network and access operator specific handling or
   policing of the mobile node traffic using information about the
   access network to which the mobile node is attached.  This
   specification defines a mechanism and a related mobility option for
   carrying the access network identifier and the access operator
   identification information from the mobile access gateway to the
   local mobility anchor over Proxy Mobile IPv6.




The file can be obtained via
http://datatracker.ietf.org/doc/draft-ietf-netext-access-network-option/

IESG discussion can be tracked via
http://datatracker.ietf.org/doc/draft-ietf-netext-access-network-option/ballot/


No IPR declarations have been submitted directly on this I-D.




Call for Comment: IETF and ITU-T Standardization Sector Collaboration Guidelines

2012-04-25 Thread IAB Chair
This is an IETF-wide Call for Comment on Internet Engineering Task Force
and International Telecommunication Union - Telecommunications
Standardization Sector Collaboration Guidelines.  

 

The document is being considered for publication as an Informational RFC
within the IAB stream, and is available for inspection here:  

http://tools.ietf.org/html/draft-iab-rfc3356bis

The Call for Comment will last until May 25, 2012.  Please send comments to
i...@iab.org, or submit them via TRAC (see below).  

 

===  

Submitting Comments via TRAC  

 

1. To submit an issue in TRAC, you first need to login to the IAB site on
the tools server: 

http://tools.ietf.org/wg/iab/trac/login  

 

2. If you don't already have a login ID, you can obtain one by navigating to
this site: 

http://trac.tools.ietf.org/newlogin 

  

3. Once you have obtained an account, and have logged in, you can file an
issue by navigating to the ticket entry form: 

http://trac.tools.ietf.org/wg/iab/trac/newticket  

 

4. When opening an issue:  

 

a. The Type: field should be set to defect for an issue with the current
document text, or enhancement for a proposed addition of functionality
(such as an additional requirement).  

b. The Priority: field is set based on the severity of the Issue. For
example, editorial issues are typically minor or trivial.  

c. The Milestone: field should be set to milestone1 (useless, I know).   

d. The Component: field should be set to the document you are filing  the
issue on.  

e. The Version: field should be set to 1.0.  

f. The Severity: field should be set based on the status of the document
(e.g. "In WG Last Call" for a document in IAB last call)  

g. The Keywords: and CC: fields can be left blank unless inspiration seizes
you.  

h. The Assign To: field is generally filled in with the email address of the
editor.  

 

5. Typically it won't be necessary to enclose a file with the ticket,  but
if you need to, select I have files to attach to this ticket.  

 

6. If you want to preview your Issue, click on the Preview button.  When
you're ready to submit the issue, click on the Create Ticket button. 

 

7. If you want to update an issue, go to the View Tickets page:  

http://trac.tools.ietf.org/wg/iab/trac/report/1  

 

Click on the ticket # you want to update, and then modify the ticket fields
as required.