Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-11 Thread Olafur Gudmundsson

On Sep 10, 2013, at 6:45 PM, Evan Hunt e...@isc.org wrote:

 On Tue, Sep 10, 2013 at 05:59:52PM -0400, Olafur Gudmundsson wrote:
 My colleagues and I worked on OpenWrt routers to get Unbound to work
 there; what you need to do is to start DNS up in non-validating mode, wait
 for NTP to fix time, then check if the link allows DNSSEC answers
 through, at which point you can enable DNSSEC validation. 
 
 That's roughly what we did with BIND on OpenWrt/CeroWrt as well.  We
 also discussed hacking NTP to set the CD bit on its initial DNS queries,
 but I don't think any of the code made it upstream.
 

Not sure if this will work in all cases, as a paranoid resolver might 
only honor the CD bit for the actual answer, not for the DNS records needed
to navigate to the answer. 
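
For what it's worth, here is a rough sketch (an illustration with dnspython,
not the CeroWrt patch; the resolver address is a placeholder) of what setting
the CD bit on a bootstrap lookup amounts to:

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    def lookup_with_cd(name, server="192.0.2.53"):
        # CD = "checking disabled": ask the upstream validator not to fail the
        # query on DNSSEC validation, e.g. while the local clock is still wrong
        q = dns.message.make_query(name, dns.rdatatype.A)
        q.flags |= dns.flags.CD
        return dns.query.udp(q, server, timeout=3)

Whether that is enough depends on the point above: a validator may still insist
on validating the records it chases on your behalf to get to that answer.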


 My real recommendation would be to run an NTP pool in an anycast cloud of
 well-known v4 and v6 addresses guaranteed to be reliable over a period of
 years. NTP could then fall back to those addresses if unable to look up the
 server it was configured to use.  DNS relies on a well-known set of root
 server addresses for bootstrapping; I don't see why NTP shouldn't do the
 same.
 

This is something worth suggesting, and 

 (Actually... the root nameservers could *almost* provide a workable time
 tick for bootstrapping purposes right now: the SOA record for the root
 zone encodes today's date in the serial number.  So you do the SOA lookup,
 set your system clock, attempt validation; on failure, set the clock an
 hour forward and try again; on success, use NTP to fine-tune. Klugey! :) )
 
 -

The RRSIG on the SOA or NS or DNSKEY is also a fine timestamp, except when it is a 
replay attack or a forgery. 
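
For what it's worth, the kluge could look roughly like this (illustration only;
the resolver address is a placeholder, and it leans on the root zone's habit of
using YYYYMMDDnn serials):

    import datetime
    import dns.message
    import dns.query
    import dns.rdatatype

    def rough_time_from_root_soa(server="192.0.2.53"):
        q = dns.message.make_query(".", dns.rdatatype.SOA)
        resp = dns.query.udp(q, server, timeout=3)
        serial = str(resp.answer[0][0].serial)   # e.g. "2013091100"
        return datetime.datetime(int(serial[0:4]), int(serial[4:6]),
                                 int(serial[6:8]))

That only gets you to the right day, hence the bump-an-hour-and-retry loop; and
none of it is trustworthy against replay until validation and NTP are actually up.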

Olafur



Re: Practical issues deploying DNSSEC into the home.

2013-09-11 Thread Olafur Gudmundsson

On Sep 10, 2013, at 7:17 PM, Brian E Carpenter brian.e.carpen...@gmail.com 
wrote:

 On 11/09/2013 09:59, Olafur Gudmundsson wrote:
 ...
 My colleagues and I worked on OpenWrt routers to get Unbound to work there; 
 what you need to do is to start DNS up in non-validating mode,
 wait for NTP to fix time, then check if the link allows DNSSEC answers 
 through, at which point you can enable DNSSEC validation.
 
 Hopefully you also flush the DNS cache as soon as NTP runs. Even so,
 paranoia suggests that a dodgy IP address might still be cached in
 some app.
 
Brian

Flushing the cache is a good idea, and dnssec-trigger does this when it upgrades 
Unbound from recursor to validator. 

Olafur



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-11 Thread Olafur Gudmundsson

On Sep 10, 2013, at 8:17 PM, David Morris d...@xpasc.com wrote:

 
 
 On Wed, 11 Sep 2013, Brian E Carpenter wrote:
 
 On 11/09/2013 09:59, Olafur Gudmundsson wrote:
 ...
 My colleagues and I worked on OpenWrt routers to get Unbound to work there; 
 what you need to do is to start DNS up in non-validating mode,
 wait for NTP to fix time, then check if the link allows DNSSEC answers 
 through, at which point you can enable DNSSEC validation.
 
 Hopefully you also flush the DNS cache as soon as NTP runs. Even so,
 paranoia suggests that a dodgy IP address might still be cached in
 some app.
 
 I think you can avoid that issue by having the device not pass traffic
 until the DNSSEC validation is enabled. Only the device needs the special
 permissive handling for this to work.
 

You mean only allow NTP and DNS traffic in the beginning, until the checks are 
done? 
In many cases we can get a reasonable time by writing the current time to an 
NVRAM variable every 6 hours or so, but that
only helps across a reboot. 
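
Roughly what I mean, as a sketch (the path and the use of clock_settime() are
placeholders for whatever NVRAM variable and clock interface the platform
actually offers):

    import time

    STATE = "/var/state/last-known-time"     # placeholder location

    def save_clock():
        # run periodically, e.g. every ~6 hours from cron
        with open(STATE, "w") as f:
            f.write(str(int(time.time())))

    def floor_clock_at_boot():
        # on reboot, never let the clock sit earlier than the last saved value
        try:
            last = int(open(STATE).read().strip())
        except (OSError, ValueError):
            return
        if time.time() < last:
            # POSIX only, needs root; real firmware would use date(1) or similar
            time.clock_settime(time.CLOCK_REALTIME, float(last))

As said, this only helps across a reboot, not for a device that sat in a box for
months.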

Olafur 



Re: Practical issues deploying DNSSEC into the home.

2013-09-10 Thread Olafur Gudmundsson
[cc'ed to a more appropriate IETF wg] 
On Sep 10, 2013, at 11:55 AM, Jim Gettys j...@freedesktop.org wrote:

 Ted T'so referred to a conversation we had last week. Let me give the 
 background.
 
 Dave Taht has been doing an advanced version of OpenWrt for our bufferbloat 
 work (called CeroWrt http://www.bufferbloat.net/projects/cerowrt/wiki/Wiki).  
 Of course, we both want things other than just bufferbloat, as you can see by 
 looking at that page (and you want to run in place of what you run today, 
 given how broken and dated home router firmware from manufacturers generally 
 is).  Everything possible gets pushed upstream into OpenWrt as quickly as 
 possible; but CeroWrt goes beyond where OpenWrt is in quite a few ways.
 
 I was frustrated by Homenet's early beliefs (based on no data) that lots of things 
 weren't feasible due to code/data footprint; both Dave and I knew better from 
 previous work on embedded hardware.  As an example, Dave put a current version 
 of BIND 9 into the build (thereby proving that having a full-function name 
 service in your home router is completely feasible), which has since aided 
 discussions in the working group.
 
 We uncovered two practical problems, both of which need to be solved to 
 enable full DNSSEC deployment into the home:
 
 1) DNSSEC needs to have the time correct to within one hour.  But these devices do not 
 have TOY clocks (and arguably never will, nor probably ever should have 
 them).  
 
 So how do you get the time after you power on the device?  The usual answer 
 is use ntp.  Except you can't do a DNS resolve when your time is incorrect. 
  You have a chicken and egg problem to resolve/hack around :-(.
 
 Securely bootstrapping time in the Internet is something I believe needs 
 doing, including being able to do so over wireless links, not just relying on 
 wired links.
 
 2) when you install a new home router, you may want to generate certificates 
 for that home domain (particularly so it can be your primary name server, 
 which you'd really like to be under your control anyway, rather than 
 delegating to someone else who could either intentionally or unintentionally 
 subvert your domain).  
 
 Right now, on that class of hardware, there is a dearth of entropy available, 
 causing such certificate generation to be painful/impossible without human 
 intervention, which we know home users don't do.  These SoCs do not have 
 hardware RNGs, and we couldn't blindly trust them even if they did. Ted's working on that 
 situation in Linux; it is probably a case of not letting the perfect be the enemy of the 
 good, but certainly I'm now much more paranoid than I once was.
 
 See: https://plus.google.com/117091380454742934025/posts/XeApV5DKwAj
 
 Jim
 


My colleagues and I worked on OpenWrt routers to get Unbound to work there; 
what you need to do is to start DNS up in non-validating mode, 
wait for NTP to fix the time, then check if the link allows DNSSEC answers through, 
at which point you can enable DNSSEC validation. 
See: 
https://www.dnssec-deployment.org/index.php/2012/03/a-validating-recursive-resolver-on-a-70-home-router/
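
Roughly, the ordering the scripts implement looks like the sketch below
(illustration only: ntp_synced() and enable_validation() are placeholder hooks
for the ntpd and Unbound integration, and the upstream address is made up).
The DNSSEC probe just asks the upstream for the root DNSKEY with the DO bit set
and checks whether RRSIGs make it back through the link:

    import time
    import dns.message
    import dns.query
    import dns.rdatatype

    UPSTREAM = "192.0.2.53"                  # placeholder upstream resolver

    def ntp_synced():
        return True                          # placeholder: ask ntpd/chrony

    def enable_validation():
        # placeholder: e.g. reconfigure Unbound via unbound-control and reload
        print("switching local resolver to validating mode")

    def link_passes_dnssec(server):
        q = dns.message.make_query(".", dns.rdatatype.DNSKEY, want_dnssec=True)
        try:
            resp = dns.query.udp(q, server, timeout=3)
        except Exception:
            return False
        return any(rr.rdtype == dns.rdatatype.RRSIG for rr in resp.answer)

    def boot_sequence():
        # the resolver is already running in non-validating (recursive-only) mode
        while not ntp_synced():
            time.sleep(5)                    # wait for NTP to fix the time
        if link_passes_dnssec(UPSTREAM):
            enable_validation()              # only now turn DNSSEC validation on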
 
We also discovered that some cheap devices like this will do NTP at startup and 
never again; that, combined with long uptimes and bad clocks, 
caused the validators to start rejecting signatures due to the times on the 
signatures. 

The big issue is that validator implementors assume the validator runs on good 
hardware with good links, and thus that it is safe to enable DNSSEC out of the gate.
We need to either have resolvers come up in recursive mode and have a tool like 
dnssec-trigger or our scripts switch the behavior to validating after that has been 
deemed safe, or build the checking into the validators. 

The same can be said of devices that have been installed from media or have 
been turned off for a long time (say a month or more); in these cases 
starting up in validating mode only turns the device into a brick. 

Olafur



Re: [spfbis] Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-22 Thread Olafur Gudmundsson

On Aug 22, 2013, at 4:36 AM, Jelte Jansen jelte.jan...@sidn.nl wrote:

 On 08/21/2013 08:44 PM, Olafur Gudmundsson wrote:
 
 Most of the recent arguments against SPF type have come down to the 
 following (as far as I can tell): 
  a) I cannot add the SPF RRtype via my provisioning system into my DNS 
 servers
  b) My firewall does not let SPF records through 
  c) My DNS library does not return SPF records or does not 
 understand them, thus the application cannot receive them.
  d) Looking up SPF is a waste of time as they do not get through, thus 
 we only look up TXT
 
 So what I have taken from this is that the DNS infrastructure is agnostic to 
 RRtype=99 but the edges have problems. 
 As to the arguments: 7 years is not long enough to reach a conclusion and force 
 the changes through the infrastructure and to the edges. The need for SPF has been 
 blunted by the dual SPF/TXT strategy, and 
 thus we are basically in the place where the path of lowest resistance has 
 taken us. 
 
 What I want is for the IESG to add a note to the document that says something 
 like the following: 
 The retirement of the SPF RRtype from the specification is not to be taken to mean 
 that new RRtypes cannot be used by applications; 
 the retirement is a consequence of the dual quick-deploy strategy. The IETF 
 will continue to advocate application-specific RRtypes, and 
 applications/firewalls/libraries SHOULD support that approach.
 
 
 So what makes you think the above 4 points will not be a problem for the
 next protocol that comes along and needs (apex) RR data? And the one
 after that?
 

There are two reasons: mail is a legacy application with lots of old cruft 
around it. 
New protocols, on the other hand, can start with a clean slate, and use of the 
protocol is
optional, unlike email. 
With a new protocol you can tell someone "you cannot use Vendor X as it does 
not support Y" 
and they will put up a system that works; for email there is an installed base, and if 
the enterprise policy is to use
Vendor X then the SPF RR cannot be used. 


 While I appreciate the argument 'this works now, and it is used'
 (running code, and all that), I am very worried that we'll end up with
 what is essentially a free-form blob containing data for several
 protocols at the zone apexes instead of a structured DNS.
 
 So if this approach is taken, I suggest the wording be much stronger, in
 the hope this chicken/egg problem (with 5 levels of eggs, or chickens)
 will be somewhat mitigated at some point. Preferably with some
 higher-level strategy to support that goal.
 
 Jelte


I agree 

Olafur



Re: [spfbis] Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-21 Thread Olafur Gudmundsson

On Aug 19, 2013, at 5:41 PM, Andrew Sullivan a...@anvilwalrusden.com wrote:

 I'm not going to copy the spfbis WG list on this, because this is part
 of the IETF last call.  No hat.
 
 On Mon, Aug 19, 2013 at 02:04:10PM -0700, Murray S. Kucherawy wrote:
 On Mon, Aug 19, 2013 at 1:59 PM, Dave Crocker d...@dcrocker.net wrote:
 
 From earlier exchanges about this concern, the assertion that I recall is
 that 7 years is not long enough, to determine whether a feature will be
 adopted.
 
 What is the premise for seven years being not long enough?  And what does
 constitute long enough?  And upon what is that last answer based?
 
 I have two observations about this.  First, EDNS0, which is of
 significantly greater benefit to DNS operators than the mere addition
 of an RRTYPE, took well over 10 years to get widespread adoption.
 Second, we all know where IPv6 adoption stands today, and that has
 certainly been around longer than 7 years.  So I think it _is_ fair to
 say that adoption of features in core infrastructure takes a very long
 time, and if one wants to add such features one has to be prepared to
 wait.
 
 But, second, I think all of that is irrelevant anyway.  The plain fact
 is that, once 4408 offered more than one way to publish a record, the
 easiest publication approach was going to prevail.  That's the
 approach that uses a TXT record.
 

For the record, I think SPF RRtype retirement is not in the good-idea category, 
but nor is it in the bad-idea category; 
it falls into the we-need-to-do-something-that-works category. 

Most of the recent arguments against the SPF type have come down to the following 
(as far as I can tell): 
a) I cannot add the SPF RRtype via my provisioning system into my DNS 
servers
b) My firewall does not let SPF records through 
c) My DNS library does not return SPF records or does not 
understand them, thus the application cannot receive them.
d) Looking up SPF is a waste of time as the answers do not get through, thus 
we only look up TXT

So what I have taken from this is that the DNS infrastructure is agnostic to 
RRtype=99 but the edges have problems. 
As to the arguments: 7 years is not long enough to reach a conclusion and force 
the changes through the infrastructure and to the edges. The need for SPF has been 
blunted by the dual SPF/TXT strategy, and 
thus we are basically in the place where the path of lowest resistance has 
taken us. 
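
To illustrate what the dual strategy looks like from the querying side (a toy
example with dnspython; the domain is a placeholder), you ask for both the TXT
record and RRtype 99 and see which one actually comes back usable:

    import dns.resolver

    def lookup_both(name="example.com"):
        for rdtype in ("TXT", "SPF"):        # SPF is RRtype 99
            try:
                answers = dns.resolver.resolve(name, rdtype)
                print(rdtype, [r.to_text() for r in answers])
            except Exception as e:           # NXDOMAIN, NoAnswer, timeout, ...
                print(rdtype, "no usable answer:", type(e).__name__)

In practice the TXT branch is the one that works everywhere, which is exactly
how we ended up where we are.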

What I want is for the IESG to add a note to the document that says something like 
the following: 
The retirement of the SPF RRtype from the specification is not to be taken to mean that 
new RRtypes cannot be used by applications; 
the retirement is a consequence of the dual quick-deploy strategy. The IETF 
will continue to advocate application-specific RRtypes, and 
applications/firewalls/libraries SHOULD support that approach.


Olafur




Re: The Nominating Committee Process: Eligibility

2013-06-27 Thread Olafur Gudmundsson

On Jun 27, 2013, at 5:50 AM, S Moonesamy sm+i...@elandsys.com wrote:

 Hello,
 
 RFC 3777 specifies the process by which members of the Internet Architecture 
 Board, Internet Engineering Steering Group and IETF Administrative Oversight 
 Committee are selected, confirmed, and recalled.
 
 draft-moonesamy-nomcom-eligibility proposes an update to RFC 3777 to allow 
 remote contributors to the IETF Standards Process to be eligible to serve on 
 NomCom and sign a Recall petition ( 
 http://tools.ietf.org/html/draft-moonesamy-nomcom-eligibility-00 ).
 
 Could you please read the draft and comment?
 
 Regards,
 S. Moonesamy
 


SM, 
I read the draft; I think there might be some merit to this proposal, but I 
think the threshold issue should be clarified. 
What does "one of the last five" mean during an IETF meeting? 

I think the threshold of having attended one meeting is too low; I would relax 
the rule to say something like this:
must have attended at least 5 of the last 15 meetings, including one of the 
last 5. 
15 meetings is 5 years; I know that is a long time, but this will allow people 
that have been involved for a long time but have limited 
resources to attend to participate in NomCom/recall processes. 
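
As a toy illustration of the rule I have in mind (nothing official, just to make
the arithmetic concrete; the attendance list is a made-up input):

    def eligible(attended):
        # attended[0] is the most recent IETF meeting; attended[i] is True if
        # the person attended meeting i.  Look only at the last 15 meetings.
        last15 = list(attended[:15])
        return sum(last15) >= 5 and any(last15[:5])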

Q: do you want to limit how many infrequent attendees can be on NomCom, just 
like the limit on how many people from a single organization can sign a
recall? 


Olafur



Re: Last Call: draft-jabley-dnsext-eui48-eui64-rrtypes-03.txt (Resource Records for EUI-48 and EUI-64 Addresses in the DNS) to Proposed Standard

2013-06-20 Thread Olafur Gudmundsson

On Jun 19, 2013, at 9:29 AM, joel jaeggli joe...@bogus.com wrote:

 Given that this document was revved twice and had its requested status 
 changed during IETF last call in response to discussion, criticism, and new 
 contributions, I am going to rerun the last call.


I reviewed this version and I think this is a fine document that I support. 
In particular the document goes out of its way to address the issues raised in 
the prior 
IETF last call to the extent possible. 
The document is going to be the specification of two DNS RRtypes that have been 
allocated via expert review; we (the DNS community) want to ensure
that any such allocations are published as RFCs for future reference. 

This document is not a product of any working group.

Olafur (DNSEXT co-chair) 



Re: Proposed Standards and Expert Review (was: Re: Last Call draft-jabley-dnsext-eui48-eui64-rrtypes-03.txt (Resource Records for EUI-48 and EUI-64 Addresses in the DNS) to Proposed Standard))

2013-05-21 Thread Olafur Gudmundsson

On May 21, 2013, at 1:32 PM, John C Klensin john-i...@jck.com wrote:

 (Changing Subject lines -- this is about a set of general
 principles that might affect this document, not about the
 document)
 
 --On Tuesday, May 21, 2013 22:23 +0700 Randy Bush
 ra...@psg.com wrote:
 
 joe,
 
 i have read the draft.  if published, i would prefer it as a
 proposed standard as it does specify protocol data objects.
 
 I would generally have that preference too.  But it seems to me
 that the combination of
 
   -- RRTYPEs (and a bunch of other protocol data objects
   associated with different protocols) are allocated on
   expert review
   
   -- The fact that those protocol data objects have
   already been allocated is used to preempt IETF
   consideration of issues that normally go into Standards
   Track documents, including the criteria for Proposed
   Standards in 2026.
 
 is fundamentally bad news for reasons that have little to do
 with this document or RRTYPEs specifically.  If the combination
 is allowed, it provides an attack vector on the standards
 process itself because someone can get a parameter approved on
 the basis of ability to fill out a template and then insist that
 the IETF approve and standardize it simply because it is
  registered and in use.  That would turn allocation of
 parameters by expert review (and some related issues connected
 to deployed therefore it is ok -- watch for another note) into
 a rather large back door for standardization that could bypass
 the 2026 and other less formal criteria, the IETF's
 historically-strong position on change control, and so on.

John, 
There are basically 3 different kinds of DNS RRtypes: 
- types that affect the behavior of the DNS protocol and are cached by 
resolvers, 
- types that carry DATA and are cached by resolvers, 
- meta types that may affect processing but are not cached. 

DNSEXT in its wisdom has deemed the second group to be harmless as far as
DNS is concerned, and getting a type code to store data in the DNS is a good thing, thus it 
is easy to get 
one. Getting a type code for the other classes requires IETF standards action. 

Documents that describe the use of the DATA types are encouraged to be published as 
Informational RFCs or some other stable reference. 

 
 These are not new issues and we have historically dealt with
 them in a number of ways that don't require moving away from
 liberal allocation policies and toward the IETF is in charge of
 the Internet and has to approve everything.  For example, we
 have decided that media types don't have to be standardized
 although certain types of names do.  People then get to choose
 between easy and quick registration and standardization, but
 don't get to use the first to leverage the second.  One could
 argue that the pre-IETF (and very early) division between
 system and user port numbers reflects the same sort of
 distinction between a high standard for justification and
 documentation and much lower ones.
 
As I explained, DNS RRtype allocation has this separation. 

 It is possible (although I'm not convinced) that this discussion
 should suggest some tuning of the allocation model for RRTYPEs.
 Probably that model is ok and we just need to figure out clearer
 ways to say if you want standards track, don't get an
 allocation first and try to use that as justification because
 you will get a real Last Call anyway and everyone will end up a
 little irritated.   Or something else may be appropriate.  But
 it seems to me that, as soon as one wants to say all protocol
 parameters or other data values should be standardized then
 allocation models based on expert review are inappropriate.  For
 the RRTYPE case, that issue should, IMO, have been pushed with
 the relevant WG when the decision to allow expert review was
 made (and, again, IMO, that cure would be worse than the disease
 because it would indirectly drive more folks toward overloading
 of TXT and other existing types).

If the expert thinks an application crosses from DATA space to Control space, 
he is expected to reject the application and ask for clarification. 

So far nothing has shown up that crosses this boundary, so there is no problem. 
I will go as far as saying: why should there be a higher bar for getting a DNS 
RRTYPE than for a MIME media type? 

Olafur




Re: Last Call: draft-farrell-ft-03.txt (A Fast-Track way to RFC with Running Code) to Experimental RFC

2013-01-14 Thread Olafur Gudmundsson

On 11/01/2013 10:14, The IESG wrote:


The IESG has received a request from an individual submitter to
consider the following document: - 'A Fast-Track way to RFC with
Running Code' draft-farrell-ft-03.txt as Experimental RFC

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to
the ietf@ietf.org mailing lists by 2013-02-08. Exceptionally,
comments may be sent to i...@ietf.org instead. In either case, please
retain the beginning of the Subject line to allow automated sorting.



I have experience with a process like this, as my WG DNSEXT has required
multiple implementations and interop testing before advancing
documents that make significant changes to the DNS protocol.
Having done this, I'm confident that the resulting specifications and
code were much better.

I support this experiment but offer the following comments.

Comment #1: The important part of running code is to assess the
clarity of the specification; thus an implementation by the editors of the
document should not count as one of the two required implementations.
Implementations by the editors' co-workers are OK IFF the editors keep
track of the communications that lead to changes in code or draft.

Comment #2: It is important that participants all realize that the point of
the exercise is not to point fingers at bugs. Rather, the goal is to
improve the specifications and make ALL the implementations as compliant
and bug-free as possible.

Comment #3 (Section 4 point #6)
Test cases used for interoperability are critical. These test
cases MUST be public. Evaluations of test cases generated by the
implementors and/or other working group participants are critical as
that is a great indicator of the quality and thoroughness of the tests.
IMHO public test cases render the point of open vs. closed source
irrelevant.

Comment #4: The IETF-LC and WGLC statements SHOULD contain references to
the testing performed and the implementations that participated.

Olafur


Re: Last Call: draft-farrell-ft-03.txt (A Fast-Track way to RFC with Running Code) to Experimental RFC

2013-01-14 Thread Olafur Gudmundsson

Hi Stephen,

On 14/01/2013 13:02, Stephen Farrell wrote:

Hi Olafur,

On 01/14/2013 04:39 PM, Olafur Gudmundsson wrote:


I have experience with a process like this, as my WG DNSEXT has required
multiple implementations and interop testing before advancing
documents that make significant changes to the DNS protocol.
Having done this, I'm confident that the resulting specifications and
code were much better.

I support this experiment but offer the following comments.

Phew, so I'm not entirely crazy, good to know:-)



Comment #1: The important part of running code is to assess the
clarity of the specification; thus an implementation by the editors of the
document should not count as one of the two required implementations.
Implementations by the editors' co-workers are OK IFF the editors keep
track of the communications that lead to changes in code or draft.

First this draft only requires one implementation and not two as
your WG did. Second, I don't know what you mean by keep track
of communications - can you explain? (Or send a pointer to
one you used in the WG?)


If you only require one then it OUGHT to be by someone other than the 
editors for it to count; otherwise it is just a proof of concept and we have 
limited confidence that spec and code match up.


Well, this can be done in a number of ways:
- Issue tracker
- Postings to the WG mailing list
- Notes in an Appendix

In our case we used the second one.


That's also sort of like the point Stefan W. raised. And he
suggested:

  If the source code has been developed
  independently of the authoring of the draft (and ideally by non
  WG participants), it is likely that the implementation and the
  draft match, and that pitfalls unaware developers may find have
  been found and dealt with.  If, on the other hand, draft
  author(s) and implementation developer(s) overlap, then it is
  sensible to scrutinize the draft more closely, both with
  respect to its match with the implementation and for
  assumptions that author/developer may have taken for granted
  which warrant documentation in the draft.

Are you saying something like that would help?

I agree with Stefan W., but would make it stronger: an implementation by the
editors does not count except for interoperability testing.


Personally, I think there are thousands of nuggets of advice
that we could possibly add, but I'm not convinced that we
should - I think we'd be better off trying to keep this draft
shorter and simpler and then if the experiment succeeds we can
incorporate those things that experience has shown are worth
including.

I agree, thus I suppressed lots of comments/suggestions


Comment #2: It is important that participants all realize that the point of
the exercise is not to point fingers at bugs. Rather, the goal is to
improve the specifications and make ALL the implementations as compliant
and bug-free as possible.

Agreed. Not sure what text would change though.

Same comment about nuggets as above:-)

I was hoping we could work this into paragraph #3 of the introduction section.
I will think about text changes and send them to you off-list.


Comment #3 (Section 4 point #6)
Test cases used for interoperability are critical. These test
cases MUST be public. Evaluations of test cases generated by the
implementors and/or other working group participants are critical as
that is a great indicator of the quality and thoroughness of the tests.
IMHO public test cases render the point of open vs. closed source
irrelevant.

I think that's arguable, but a reasonable argument. Again though,
I don't think we're at the point where a MUST ought go into this
draft and that'd be better done when the experiment's done.

Or do you have a suggestion for text? (I'm not at all sure how
I'd write that up now.)
I will take a stab at rewriting the second paragraph of section 2.1 to 
reflect this, but that might take a day or two.



Comment #4: The IETF-LC and WGLC statements SHOULD contain references to
the testing performed and the implementations that participated.

I could buy that as a SHOULD or ought. I've noted that
in the working version as a change to maybe make. [1]


good.

Olafur



Re: When to adopt a draft as a WG doc (was RE: IETF work is done on the mailing lists)

2012-11-28 Thread Olafur Gudmundsson

I guess that a better question is:
What are the expectations if a draft becomes a WG document?

The opinions range from:
a) It is something that some members of the WG consider inside the scope
of the charter.

z) This is a contract that the IESG will bless this document!

Not all working groups are the same; some work on brand new stuff, and it
makes sense to have competing ideas progress and then have the WG make a
choice. In other cases the WG is just fixing something in an important
deployed protocol, thus stricter criteria make sense.

For a WG I have chaired we have two adoption paths:
a) Publish the draft as draft-editor-wg---, discuss it on the WG mailing list;
once the document is on track and people can make an intelligent choice, ask
for adoption.
b) The chairs, based on discussion on the lists or on events, will
commission a WG document to address a particular issue. This will be
published as draft-ietf-wg- at version 00. Most of the time this is
reserved for updated versions of published RFCs.

Olafur


On 28/11/2012 10:36, George, Wes wrote:

From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On
Behalf Of John Leslie

I'm increasingly seeing a paradigm where the review happens
_before_ adoption as a WG draft. After adoption, there's a great
lull until the deadline for the next IETF week. There tend to be a
 few, seemingly minor, edits for a version to be discussed. The
meeting time is taken up listing changes, most of which get no
discussion. Lather, rinse, repeat...


[WEG] I've seen several discussions recently across WG lists, WG
chairs list, etc about this specific topic, and it's leading me to
believe that we do not have adequate guidance for either WG chairs or
participants on when it is generally appropriate to adopt a draft as
a WG document. I see 3 basic variants just among the WGs that I'm
actively involved in: 1) adopt early because the draft is talking
about a subject the WG wants to work on (may or may not be an
official charter milestone), and then refine a relatively rough draft
through several I-D-ietf-[wg]-* revisions before WGLC 2) adopt after
several revisions of I-D-[person]-[wg]-* because there has been
enough discussion to make the chairs believe that the WG has interest
or the draft has evolved into something the WG sees as useful/in
charter; Then there are only minor tweaks in the draft up until WGLC
(the above model) 3) don't adopt the draft until some defined
criteria are met (e.g. interoperable implementations), meaning that
much of the real work gets done in the individual version

It seems to me that these variants are dependent on the people in the
WG, the workload of the group, the chairs, past precedent, AD
preferences, etc. It makes it difficult on both draft editors and
those seeking to follow the discussion for there to be such a
disparity from WG to WG on when to adopt drafts. I'm not convinced
that there is a one-size-fits-all solution here, but it might be nice
to coalesce a little from where we are today. So I wonder if perhaps
we need clearer guidance on what the process is actually supposed to
look like and why. If someone can point to a document that gives
guidance here, then perhaps we all need to be more conscientious
about ensuring that the WGs we participate in are following the
available guidance on the matter.

Wes George







Re: Recall Petition Submission

2012-11-06 Thread Olafur Gudmundsson


Lynn,

As the NomCom eligibility of one of the original signers has been challenged, 
I put forward two more signers:

Wes Hardaker wjh...@hardakers.net Sparta
Joao Luis Silva Damas,  j...@isc.org ISC

At this time please do not send me any more signatures
as each person that indicates that they support the recall
decreases the pool of candidates to sit on the recall committee.

thanks
Olafur


On 05/11/2012 15:15, Olafur Gudmundsson wrote:


Lynn St. Amour ISOC president,

   In accordance with the rules in RFC 3777 Section 7, I request that
you start recall proceedings  against Mr. Marshall Eubanks as member of
the IAOC as well as IETF Trustee, due to his total disappearance from
the IAOC and IETF Trust for over 3 months, and either inability or
refusal to resign.

Evidence to this effect was presented by Bob Hinden [1], as well as 
further evidence that the IETF Trust ousted Mr. Eubanks as chair of the 
IETF Trust [2], due to his prolonged absence in that body as well.

I as an IETF member in good standing (having attended over 50 meetings
since 1987, including 9 of the last 10 meetings), regrettably request
that you start the recall procedure. Below are listed statements of
support for this recall petition from  more than 20 other NomCom
eligible IETF participants.
The IETF participants below all meet the NomCom criteria as well as the
diversity of organizations set forth in RFC3777,
furthermore none is a current member of IETF/IAB/IRTF/IAOC/ISOC board.

Thanks

   Olafur Gudmundsson

[1]
https://www.ietf.org/ibin/c5i?mid=6rid=49gid=0k1=934k2=11277tid=1351092666

[2]
https://www.ietf.org/ibin/c5i?mid=6rid=49gid=0k1=933k2=65510tid=1351272565




Signatories:
Olafur Gudmundsson o...@ogud.com  Shinkuro
Joel Jaeggeli joe...@bogus.comZynga inc.
Mike StJohns mstjohns@ comcast.net   ninethpermutation
Warren Kumari war...@kumari.net  Google
Margaret Wasserman margaret...@gmail.com  Painless Security, LLC

Olaf Kolkman o...@nlnetlabs.nl NlNetLabs
Melinda Shore melinda.sh...@gmail.com No Mountain Software
Fred Baker f...@cisco.com   Cisco
Jaap Akkerhuis j...@nlnetlabs.nl NlNetLabs
James Polk jmp...@cisco.com   Cisco

Sam Hartman hartmans-i...@mit.edu   Painless Security, LLC.
Andrew Sullivan a...@anvilwalrusden.com   Dyn Inc.
Stephen Hanna sha...@juniper.net   Jupiter
Henk Uijterwaal henk.uijerw...@gmail.com Netherlands Forensics Institute
Paul Hoffman paul.hoff...@vpnc.orgVPNC

John Klensin john-i...@jck.com  Self
Mehmet Ersue mehmet.e...@nsn.com  Nokia Siemens Networks
Tobias Gondrom tobias.gond...@gondrom.org  Thames Stanley
Yiu Lee yiu_...@cable.comcast.com  Comcast
Tero Kivinen kivi...@iki.fi  AuthenTec Oy







Recall Petition Submission

2012-11-05 Thread Olafur Gudmundsson


Lynn St. Amour ISOC president,

  In accordance with the rules in RFC 3777 Section 7, I request that 
you start recall proceedings  against Mr. Marshall Eubanks as member of 
the IAOC as well as IETF Trustee, due to his total disappearance from 
the IAOC and IETF Trust for over 3 months, and either inability or 
refusal to resign.


Evidence to this effect was presented by Bob Hinden [1], as well as 
further evidence that the IETF Trust ousted Mr. Eubanks as chair of the 
IETF Trust [2], due to his prolonged absence in that body as well.


I as an IETF member in good standing (having attended over 50 meetings 
since 1987, including 9 of the last 10 meetings), regrettably request 
that you start the recall procedure. Below are listed statements of 
support for this recall petition from  more than 20 other NomCom 
eligible IETF participants.
The IETF participants below all meet the NomCom criteria as well as the 
diversity of organizations set forth in RFC3777,

furthermore none is a current member of IETF/IAB/IRTF/IAOC/ISOC board.

Thanks

  Olafur Gudmundsson

[1] 
https://www.ietf.org/ibin/c5i?mid=6rid=49gid=0k1=934k2=11277tid=1351092666 

[2] 
https://www.ietf.org/ibin/c5i?mid=6rid=49gid=0k1=933k2=65510tid=1351272565 





Signatories:
Olafur Gudmundsson o...@ogud.com  Shinkuro
Joel Jaeggeli joe...@bogus.comZynga inc.
Mike StJohns mstjohns@ comcast.net   ninethpermutation
Warren Kumari war...@kumari.net  Google
Margaret Wasserman margaret...@gmail.com  Painless Security, LLC

Olaf Kolkman o...@nlnetlabs.nl NlNetLabs
Melinda Shore melinda.sh...@gmail.com No Mountain Software
Fred Baker f...@cisco.com   Cisco
Jaap Akkerhuis j...@nlnetlabs.nl NlNetLabs
James Polk jmp...@cisco.com   Cisco

Sam Hartman hartmans-i...@mit.edu   Painless Security, LLC.
Andrew Sullivan a...@anvilwalrusden.com   Dyn Inc.
Stephen Hanna sha...@juniper.net   Jupiter
Henk Uijterwaal henk.uijerw...@gmail.com Netherlands Forensics Institute
Paul Hoffman paul.hoff...@vpnc.orgVPNC

John Klensin john-i...@jck.com  Self
Mehmet Ersue mehmet.e...@nsn.com  Nokia Siemens Networks
Tobias Gondrom tobias.gond...@gondrom.org  Thames Stanley
Yiu Lee yiu_...@cable.comcast.com  Comcast
Tero Kivinen kivi...@iki.fi  AuthenTec Oy



Re: [IETF] Re: Recall petition for Mr. Marshall Eubanks

2012-11-01 Thread Olafur Gudmundsson

On 01/11/2012 14:08, Dave Crocker wrote:



On 11/1/2012 10:52 AM, Michael StJohns wrote:

Per Olafur's email, I submitted my signature directly to him, along with
my Nomcom eligibility status.  I'm sure other's did as well, so you
shouldn't take the absence of emails on this list as lack of support for
the proposal.



(wearing no hat)


As a small point of procedures, no one is sending an actual signature.

It therefore would provide a modicum of better assurance for 
signatories to send the email that declares their signature directly 
to the ISOC President rather than to the person initiating the recall.



d/



Dave,

As this has never been attempted before, by collecting the signatures 
myself and checking the NomCom eligibility and diversity of organizations 
as required by RFC 3777, I hoped to reduce the work that Lynn had to do.

In case the petition fails to get enough signers, ISOC's work is a no-op.

I will be happy, when/if the recall petition is submitted, to include all the 
emails that I have received as proof that I did not
stuff the ballot box, and to publish the names of the signers; the integrity of 
the process can be challenged at that time.


Olafur




Recall petition for Mr. Marshall Eubanks

2012-10-31 Thread Olafur Gudmundsson

Fellow IETF'rs
below is a recall petition that I plan on submitting soon if there is 
enough support.


If you agree with this petition please either comment on this posting, 
or send me an email of support, noting whether you are NomCom eligible (I'm 
arbitrarily limiting the submitted signers of the petition to people 
that have demonstrated recent IETF attendance and participation).


The reason I'm starting this recall petition is that I think no other 
way will vacate the positions that Marshall holds before the end of the 
year. I have purposely held off starting this process in the hope that 
Marshall would resign on his own, but that has not happened even after he has 
been publicly exposed for over a week.


I'm disappointed that a person who took on a responsibility that he is 
not able/willing to fulfill anymore has not had the courtesy to take a 
few minutes to draft and send a letter of resignation or an explanation.


Olafur

 Recall Petition 

Lynn St. Amour ISOC president,

  In accordance with the rules in RFC 3777 Section 7, I request that 
you start recall proceedings  against Mr. Marshall Eubanks as member of 
the IAOC as well as IETF Trustee, due to his total disappearance from 
the IAOC and IETF Trust for over 3 months, and either inability or 
refusal to resign.


Evidence to this effect was presented by Bob Hinden [1], as well as 
further evidence that the IETF Trust ousted Mr. Eubanks as chair of the 
IETF Trust [2], due to his prolonged absence in that body as well.


  I as an IETF member in good standing (having attended over 50 
meetings since 1987, including 9 of the last 10 meetings), regrettably 
request that you start the recall procedure. Below are listed statements 
of support for this recall petition from  more than 20 other NomCom 
eligible IETF participants.



Thanks

  Olafur Gudmundsson

[1] 
https://www.ietf.org/ibin/c5i?mid=6rid=49gid=0k1=934k2=11277tid=1351092666
[2] 
https://www.ietf.org/ibin/c5i?mid=6rid=49gid=0k1=933k2=65510tid=1351272565




Re: IAOC Request for community feedback

2012-10-23 Thread Olafur Gudmundsson

On 23/10/2012 13:16, Michael StJohns wrote:

Wait just one minute.


Marshall has neither resigned nor died (both of which would vacate the
position).  He apparently *has* abrogated his responsibilities.



In even stronger terms: if a person, after many years of involvement and
understanding of the rules, does not take the 5 minutes to write
a letter saying 'I resign from the IAOC', I wonder what triggered that
behavior.


I'm willing to sign on to the petition.  I'm willing to volunteer for
the recall committee.



Same here.

Olafur



Re: [dnsext] Last Call: draft-ietf-dnsext-rfc2671bis-edns0-09.txt (Extension Mechanisms for DNS (EDNS(0))) to Internet Standard

2012-10-16 Thread Olafur Gudmundsson

On 16/10/2012 17:43, SM wrote:

Hi Olafur,

I posted the following question about the draft about two weeks ago [1]:

  On publication of draft-ietf-dnsext-rfc2671bis-edns0-09, will it be
   part of STD 13?

I did not see any comments from the WG about that.  I had an off-list 
exchange with the RFC Series Editor about STDs.  The question seems 
like an IETF matter.  Has this question been discussed by the WG, and 
if so, what was the conclusion?




It was not explicitly discussed.

Olafur



Regards,
-sm

1. http://www.ietf.org/mail-archive/web/ietf/current/msg75156.html





Re: [dnsext] Last Call: draft-ietf-dnsext-rfc2671bis-edns0-09.txt (Extension Mechanisms for DNS (EDNS(0))) to Internet Standard

2012-10-04 Thread Olafur Gudmundsson

On 02/10/2012 21:15, Mark Andrews wrote:

Labels only work when all the servers for a zone that has a new label type,
in ADDITION to a sufficient fraction of the servers in all zones above that
zone, understand the new label type.


Not true. Binary labels could have been made to work by removing
the left hand label until the remaining suffix consisted of only
RFC 1035 labels, looking up the servers for that domain, then
resuming query processing using those servers similar to what we
do with DS lookups.

Such processing would be required for any new label type used in a
QNAME and would be a significant change to the standard query logic.

Mark



Mark,

This will only work if all the recursive resolvers that are consumers of
this new label type have been updated AND the new label type is the 
leftmost label(s) in the name.
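
For concreteness, the fallback Mark describes would look roughly like this
(pure illustration; the label test and the server lookup are placeholder stubs,
not anything any implementation shipped):

    def is_rfc1035_label(label):
        # placeholder test: treat anything in the \[...] extended syntax as
        # a non-classic (e.g. binary) label
        return not label.startswith("\\[")

    def servers_for(suffix_labels):
        # placeholder: a normal delegation walk for the all-RFC-1035 suffix
        return ["192.0.2.53"]

    def fallback_servers(qname):
        labels = qname.rstrip(".").split(".")
        # strip left-hand labels until only classic RFC 1035 labels remain
        while labels and not all(is_rfc1035_label(l) for l in labels):
            labels = labels[1:]
        # then resume query processing for the full qname at these servers,
        # much like a DS lookup is directed at the parent's servers
        return servers_for(labels)

That extra redirection would have been new standard query logic, on top of the
resolver updates noted above.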



Olafur



Re: [dnsext] Last Call: draft-ietf-dnsext-rfc2671bis-edns0-09.txt (Extension Mechanisms for DNS (EDNS(0))) to Internet Standard

2012-10-02 Thread Olafur Gudmundsson

My original message was not copied to the ietf mailing list.

John quoted all of my text, so I'm sending this follow-up to the ietf as well 
as the dnsext mailing lists.


On 02/10/2012 12:38, John C Klensin wrote:


--On Tuesday, October 02, 2012 11:01 -0400 Olafur Gudmundsson
o...@ogud.com wrote:


...

The IESG has received a request from the DNS Extensions WG
(dnsext) to consider the following document: - 'Extension
Mechanisms for DNS (EDNS(0))'
draft-ietf-dnsext-rfc2671bis-edns0-09.txt as Internet
Standard

...
John,
no-hat
We learned two main things from binary labels:
a) the specification of them caused expensive processing, and the
utility of binary labels w/o A6 record was none.

Huh?  While I certainly agree that the binary label experiment
failed, RFC 2673 didn't seem to me to have anything to do with
A6 records and a quick review just now doesn't convince me
otherwise.  As I recall, it came out of the period in which we
were evolving to classless IPv4 addresses (with prefix
lengths/netmasks on arbitrary bit boundaries) and was intended
to permit address delegation entities to easily delegate ranges
of reverse-mapping records to the appropriate address-holding
entities for management.  The failure of that idea (for the
reasons you outline) contributed, IMO, to PTR records being much
less useful today than they were pre-CIDR.

Conversely, if a significant reason for getting rid of binary
labels was the link to A6 records, why doesn't the document just
say these existed solely to support A6 records, A6 records have
been deprecated, therefore these are too.  All of us could be
spared the discussion of why they failed relative to deployment
issues, etc.  But the question of whether it is appropriate to
get rid of label types would remain.


b) In order to introduce a new label type: All DNS infrastructure
needs to understand the new label type before it can be
reliably used. Lots of DNS processing elements barf at labels that
they do not expect. For example, a number of firewalls drop answers
if the name of the first answer record is not compressed (a
protocol violation).

Understood.  Of course, some of the same things could be said
about EDNS0 itself.


Based on this experience DNSEXT felt that the best message to send
out is that new label types do not work in the current Internet.
Deploying a new label type requires an effort similar to what we are
going through right now with DNSSEC: upgrade all DNS protocol
processing elements, plus the systems and processes that feed and
operate the DNS systems.

But, coming back to my example, this is exactly the problem with
internationalization.  If one is willing to settle for keeping
ASCII primary and applying a hack to accommodate non-ASCII
characters, then one can avoid that very long and expensive
transition effort.  We have such a solution in IDNA.  Some parts
of the external community (including, albeit largely for
historical reasons, a couple of very significant vendors) do not
believe that is a satisfactory solution and that it would be
better to have all characters that are considered valid treated
equally.

That puts your DNSSEC comparison into an interesting light.  I
believe that, when the DNSSEC effort started, the community
would have been appalled at how long deployment would take and
how painful it would be (with or without allowances were made
for reduced expectations).  But, as far as I know, we didn't
have a good alternative way to do the job even in retrospect,
so, presumably, the community decided that the pain and slow
deployment associated with DNSSEC was (and is) worth it. Is a
clean i18n model on a par with that?  I don't know (and I'm
personally willing to live with IDNA forever if we have to).
But I'm sure that some people would be willing to make the case
that such an i18n model is at least as important as DNSSEC and
worth whatever effort it would take.

As I've tried to say to Mark, based on the experience with
binary labels, I would have no problem if the document
deprecated binary labels, noted that deployment of different
label types is horrendously difficult and/or very slow and hence
not recommended unless the requirement is really important and
there are no plausible alternatives, and then just moved on.
There just doesn't seem to be nearly enough documented support
for moving beyond we had a bad experience to completely
discard the feature/ capability.

I also note that part of what you said is ...do not work, in
the current Internet.  I'm not sure what that means.  If it
means that the WG has reached the conclusion that there are some
set of possible extensions that are sufficiently problematic
(including different label interpretation models) that they are
simply incompatible with the current DNS, i.e., that those who
want them should be working on a plausible strategy for what is
variously called DNSng or DNS2, I think that would be great and
an extremely useful contribution, especially

Re: Gen-ART LC review of draft-ietf-dnsext-ecdsa-04

2012-01-30 Thread Olafur Gudmundsson

On 29/01/2012 11:12, Roni Even wrote:

I am the assigned Gen-ART reviewer for this draft. For background on
Gen-ART, please see the FAQ at
http://wiki.tools.ietf.org/area/gen/trac/wiki/GenArtfaq.

Please resolve these comments along with any other Last Call comments
you may receive.

Document: draft-ietf-dnsext-ecdsa-04

Reviewer: Roni Even

Review Date:2012–1–29

IETF LC End Date: 2012–2–7

IESG Telechat date:

Summary: This draft is almost ready for publication as an Informational RFC.

Major issues:

The first IANA action is to update
http://www.iana.org/assignments/ds-rr-types/ds-rr-types.txt which
requires standard action for adding values.



Grr, my fault; I overlooked that the editors put this text in the
header of the document.
The WG LC was for a standards-track document.

Please treat this document as standards track.


Minor issues:

The important note in section 6 talks about the values in the examples.
I am wondering why not update the document with the correct values after
the IANA assignments by the RFC editor.


Yes, once we have the IANA-assigned values we will furnish the 
RFC Editor with better examples.





Nits/editorial comments:




thanks for the review

Olafur (document pusher)


Re: extra room avail IETF hotel at IETF rate

2011-07-07 Thread Olafur Gudmundsson

I also have one I'm not going to use.

Olafur


On 05/07/2011 5:32 PM, Geoff Mulligan wrote:

I found that I have an extra reservation at the IETF rate
($229/night)for Sunday to Friday at the Hilton.

If anyone is interested I can transfer the reservation.

geoff









Re: Last Call: draft-ogud-iana-protocol-maintenance-words (Definitions for expressing standards requirements in IANA registries.) to BCP

2010-03-19 Thread Olafur Gudmundsson

On 18/03/2010 12:31 PM, Christian Huitema wrote:

If the real reason for this draft is to set conformance levels for
DNSSEC (something that I strongly support), then it should be a one-page
RFC that says This document defines DNSSEC as these RFCs, and implementations
MUST support these elements of that IANA registry. Then, someone can conform
or not conform to that very concise RFC. As the conformance requirements
change, the original RFC can be obsoleted by new ones. That's how the IETF
has always done it; what is the problem with doing it here?


Second that. Let's not overload the registry. As Edward Lewis wrote in another message, 
The job of a registry is to maintain the association of objects with 
identities. If the WG wants to specify mandatory-to-implement functions or 
algorithms, the proper tool is to write an RFC.

-- Christian Huitema




But the document requires an RFC to change the 'requirements level'; it
can just be done at a more fine-grained level than republishing the whole
registry, and the RFC can dictate changes to the registry in the future.

Well, here is a proposed problem statement for the requirement:
  How does an implementer of protocol X find which of the many
  features listed in registry Y he/she needs to implement, and which
  ones are obsolete?

and
  How does an evaluator of implementations assess whether
  implementation Z is compliant with the current recommended state
  of protocol X?

The second problem my draft is addressing is:
  How to express the implementation and operational level of support.
  RFC2119 words only apply to IMPLEMENTATIONS.

As for how things have been done: I think that process is broken, thus I want
people to figure out a better way to provide this information.

In my mind there are two options:
a) add this to the registry as this is relevant information for anyone
   that cares about doing the right thing.
b) Add a reference in the registry to a SINGLE RFC that
   lists the compliance levels.

I'm big on the SINGLE argument; allowing multiple will cause a mess that
looks like spaghetti code.

The proposal is a change to the role of registries, but
that is not a good enough reason to reject this approach.

The question is how we make the system more user-friendly; remember,
we have over 5700 RFCs published so far and we are generating almost
an RFC per day. It is not unlikely that we will have RFC 9000
published before 2020!

Olafur


Re: Last Call: draft-ogud-iana-protocol-maintenance-words (Definitions for expressing standards requirements in IANA registries.) to BCP

2010-03-19 Thread Olafur Gudmundsson

On 19/03/2010 12:14 PM, Paul Hoffman wrote:

At 10:33 AM -0400 3/19/10, Olafur Gudmundsson wrote:

Well, here is a proposed problem statement for the requirement:
  How does an implementer of protocol X find which of the many
  features listed in registry Y he/she needs to implement, and which
  ones are obsolete?

and
  How does an evaluator of implementations assess whether
  implementation Z is compliant with the current recommended state
  of protocol X?

The second problem my draft is addressing is:
  How to express the implementation and operational level of support.
  RFC2119 words only apply to IMPLEMENTATIONS.


This problem statement does not match the one in the draft. The one here is a better 
problem statement, and it already has a simple solution: write an RFC that say This 
RFC defines X; a sending implementation must be able emit A and SHOULD be able to emit B; 
a receiving implementation must be able to process A and SHOULD be able to process 
B. This has nothing to do with the IANA registry other than A and B had better be 
listed there.



Well, it benefits from the comments of the many good people that have 
commented on the draft after the LC started :-)


I still do not believe that "publish a new RFC" is the solution.
It still leaves the issue of matching operations to current best
practices; your statements only reflect interoperability out of the
box, not what we recommend that people operate/disable/plan for/etc.

The +- words are on the right track but I think they do not go far
enough.



Further, there is nothing in your draft that says that X is a protocol. The 
draft is completely vague as to what is being conformed to.


Because the definitions are trying to cover both implementations and
operations.


As how things have been done I think that process is broken thus I want
people to figure out a better way to provide this information.


So do many of us, but it is not from lack of many well-intentioned people 
trying to fix it: it is from a lack of consensus.


The question is how do we make the system more user friendly, remember
we have over 5700 RFC's published so far and we are generating almost
an RFC/day. It is not unlikely that we will have RFC 9000
published before 2020!


Why is this any more true now than a few years ago when the newtrk work failed?


The pain is greater than it was; proposals for changes seem to
get traction when a certain pain threshold is reached.
Have we reached it?

In my mind there are basically three kinds of IETF working groups:
   1) New protocols/protocol extensions frequently with limited
  attention to operations.
   2) Protocol maintenance groups
   3) Operational groups

RFC2119 words are aimed at the first type, and can to a limited extent
be used by the third type, in cases where the recommendations are
static.
As the second and third types of groups become more common and
contentious, it is important to think about how to clearly allow
these groups to express guidance in selecting options.

Olafur


Re: draft-ietf-dnsext-dnssec-gost

2010-02-19 Thread Olafur Gudmundsson

On 15/02/2010 7:43 PM, Olafur Gudmundsson wrote:

On 15/02/2010 6:37 PM, Martin Rex wrote:

Mark Andrews wrote:


In message201002151420.o1fekcmx024...@fs4113.wdf.sap.corp, Martin
Rex writes
:

OK, I'm sorry. For the DNSsec GOST signature I-D, the
default/prefered (?)
parameter sets are explicitly listed in last paragraph of section 2
of draft-ietf-dnsext-dnssec-gost-06. However, it does _NOT_ say what to
do if GOST R34.10-2001 signatures with other parameter sets are
encountered.


Since each end adds the parameters and they are NOT transmitted this
can never happen. If one end was to change the parameters then nothing
would validate.



OK. I didn't know anything about DNSSEC when I entered the discussion...


Having scanned some of the available document (rfc-4034,rfc-4035,rfc-2536
and the expired I-D draft-ietf-dnsext-ecc-key-10.txt) I'm wondering
about the following:

- the DNS security algorithm tag ought to be GOST R34.10-2001
and not just GOST


This is a good point; adding a version label is a possibility in this
case or just in future cases, but I think slapping one on
this is fine.



- DSA and the expired ECC draft spell out the entire algorithm
parameters in the key RRs, which precludes having to assign
additional algorithm identifiers if a necessity comes up to
use different algorithm parameters.

DSA did not cover the case where the key is > 1024 bits.
The ECC draft was killed due to the fact that it was impossible to guarantee that
an implementation supporting ECC would be able to handle all the
possible curves that the proposal allowed.




Wouldn't it be sensible to do the same for GOST R34.10-2001 keys --
i.e. list the parameter set as part of the public key data?
Given the procedure of the standardization body that defines GOST
the parameter set OID could be used as an alternative to spelling
out each of the elements in the parameter set in full.
Implying the parameter set A for the GOST R34.10-2001 algorithm does
not seem very agile, given the limited number range for the algorithm
field in DNS security.


For interoperability reasons we WANT MINIMAL flexibility for
implementors/users. Thus we stripped all that out and picked ONE
possible GOST/2001 curve.


Given the differences between -1994 and -2001 versions,
any successor GOST R34.10-201X standard may not be able to reuse
the DNSKEY record anyway and need a new algorithm identifier.
And at that point, an unqualified label GOST would become
ambiguous.


see above,


Olafur (document shepherd)


Martin,

Based on your comments and the possible confusion over registering the
mnemonic GOST for DNSSEC, I think it would be wise to change the DNSKEY
mnemonic to ECC-GOST.


Olafur (document shepherd)
PS: originally we registered RSA for the RSA/MD5 combination, then we got
RSASHA1, RSASHA256, RSASHA1-NSEC3-SHA1, ...; thus the first algorithm
registered should not get a special shorter name than later variants :-)
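
For reference, a minimal sketch (my own illustration, not part of the draft)
of a few entries from the IANA DNS Security Algorithm Numbers registry as I
believe they now stand, written as a small Python mapping; consult the IANA
registry itself for the authoritative list.

# Illustrative snapshot of a few DNSSEC algorithm number assignments;
# the comments reflect the naming history discussed above.
DNSSEC_ALGORITHMS = {
    1:  "RSAMD5",              # originally registered simply as "RSA"
    3:  "DSA",
    5:  "RSASHA1",
    7:  "RSASHA1-NSEC3-SHA1",
    8:  "RSASHA256",
    10: "RSASHA512",
    12: "ECC-GOST",            # GOST R 34.10-2001 with a single fixed curve
}

def mnemonic(number: int) -> str:
    """Return the registered mnemonic for a DNSSEC algorithm number."""
    return DNSSEC_ALGORITHMS.get(number, "unassigned/private")

print(mnemonic(12))            # -> ECC-GOST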





___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-ietf-dnsext-dnssec-gost

2010-02-16 Thread Olafur Gudmundsson

On 15/02/2010 7:43 PM, Olafur Gudmundsson wrote:

On 15/02/2010 6:37 PM, Martin Rex wrote:

Mark Andrews wrote:


In message 201002151420.o1fekcmx024...@fs4113.wdf.sap.corp, Martin Rex writes:

OK, I'm sorry. For the DNSSEC GOST signature I-D, the
default/preferred (?)
parameter sets are explicitly listed in the last paragraph of section 2
of draft-ietf-dnsext-dnssec-gost-06. However, it does _NOT_ say what to
do if GOST R34.10-2001 signatures with other parameter sets are
encountered.


Since each end adds the parameters and they are NOT transmitted this
can never happen. If one end was to change the parameters then nothing
would validate.



OK. I didn't know anything about DNSSEC when I entered the discussion...


Having scanned some of the available documents (RFC 4034, RFC 4035, RFC 2536
and the expired I-D draft-ietf-dnsext-ecc-key-10.txt) I'm wondering
about the following:

- the DNS security algorithm tag ought to be GOST R34.10-2001
and not just GOST


This is a good point; adding a version label is a possibility in this
case, or just in future cases, but I think slapping one on
this one is fine.



- DSA and the expired ECC draft spell out the entire algorithm
parameters in the key RRs, which precludes having to assign
additional algorithm identifiers if a necessity comes up to
use different algorithm parameters.

DSA did not cover the case where the key is larger than 1024 bits.
The ECC draft was killed because it was impossible to guarantee that
an implementation supporting ECC would be able to handle all the
possible curves that the proposal allowed.



To clarify: ECC draft killed == draft-ietf-dnsext-ecc-key

Olafur
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-ietf-dnsext-dnssec-gost

2010-02-15 Thread Olafur Gudmundsson

On 15/02/2010 6:37 PM, Martin Rex wrote:

Mark Andrews wrote:


In message 201002151420.o1fekcmx024...@fs4113.wdf.sap.corp, Martin Rex writes:

OK, I'm sorry.  For the DNSSEC GOST signature I-D, the default/preferred (?)
parameter sets are explicitly listed in the last paragraph of section 2
of draft-ietf-dnsext-dnssec-gost-06.  However, it does _NOT_ say what to
do if GOST R34.10-2001 signatures with other parameter sets are encountered.


Since each end adds the parameters and they are NOT transmitted this
can never happen.  If one end was to change the parameters then nothing
would validate.



OK.  I didn't know anything about DNSSEC when I entered the discussion...


Having scanned some of the available documents (RFC 4034, RFC 4035, RFC 2536
and the expired I-D draft-ietf-dnsext-ecc-key-10.txt) I'm wondering
about the following:

   - the DNS security algorithm tag ought to be GOST R34.10-2001
 and not just GOST


This is a good point; adding a version label is a possibility in this
case, or just in future cases, but I think slapping one on
this one is fine.



   - DSA and the expired ECC draft spell out the entire algorithm
 parameters in the key RRs, which precludes having to assign
 additional algorithm identifiers if a necessity comes up to
 use different algorithm parameters.

DSA did not cover the case where the key is larger than 1024 bits.
The ECC draft was killed because it was impossible to guarantee that
an implementation supporting ECC would be able to handle all the
possible curves that the proposal allowed.




 Wouldn't it be sensible to do the same for GOST R34.10-2001 keys --
 i.e. list the parameter set as part of the public key data?
 Given the procedures of the standardization body that defines GOST,
 the parameter set OID could be used as an alternative to spelling
 out each of the elements in the parameter set in full.
 Implying the parameter set A for the GOST R34.10-2001 algorithm does
 not seem very agile, given the limited number range for the algorithm
 field in DNS security.

For interoperability reasons we WANT MINIMAL flexibility for
implementors/users. Thus we stripped all that out and picked ONE
possible GOST/2001 curve.


Given the differences between -1994 and -2001 versions,
any successor GOST R34.10-201X standard may not be able to reuse
the DNSKEY record anyway and need a new algorithm identifier.
And at that point, an unqualified label GOST would become
ambiguous.


see above,


Olafur (document shepherd)
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-ietf-dnsext-dnssec-gost

2010-02-12 Thread Olafur Gudmundsson

On 12/02/2010 2:18 PM, Edward Lewis wrote:

At 10:57 -0500 2/12/10, Stephen Kent wrote:



PS - I think Olafur meant private algorithms not personal algorithms.
See
http://www.iana.org/assignments/dns-sec-alg-numbers/dns-sec-alg-numbers.xhtml,
registrations for 253 and 254.

No, I meant exactly what I wrote. A personal algorithm is an algorithm
developed by a person or a group of people that is not state
sponsored, thus the examples I provided.


Olafur


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-ietf-dnsext-dnssec-gost

2010-02-11 Thread Olafur Gudmundsson

On 11/02/2010 12:57 PM, Stephen Kent wrote:

I recommend that the document not be approved by the IESG in its current
form. Section 6.1 states:


6.1. Support for GOST signatures

DNSSEC aware implementations SHOULD be able to support RRSIG and
DNSKEY resource records created with the GOST algorithms as
defined in this document.


There has been considerable discussion on the security area directorate
list about this aspect of the document. All of the SECDIR members who
participated in the discussion argued that the text in 6.1 needs to be
changed to MAY from SHOULD. The general principle cited in the
discussion has been that national crypto algorithms like GOST ought
not be cited as MUST or SHOULD in standards like DNSSEC. I refer
interested individuals to the SECDIR archive for details of the discussion.

(http://www.ietf.org/mail-archive/web/secdir/current/maillist.html)

Steve



As a document shepherd I have made a note that this is desired, but at
the same time this is a topic that was outside the scope of the working
group.
This is, on the other hand, a topic that belongs in the IETF review.

So my questions to the IETF (paraphrasing George Orwell):

Are all crypto algorithms equal, but some are more equal than others?

Who gets to decide what algorithms get first-class status, and based
on what criteria?


Steve brought up national algorithms, but we also have personal
algorithms such as curve25519 or threefish.


Olafur


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-jabley-sink-arpa, was Last Call: draft-jabley-reverse-servers ...

2010-01-11 Thread Olafur Gudmundsson

At 04:36 11/01/2010, Arnt Gulbrandsen wrote:

Shane Kerr writes:
   Various top-level domains are reserved by [RFC2606], including
INVALID. The use of INVALID as a codified, non-existent domain
was considered. However:


   o INVALID is poorly characterised from a DNS perspective in
  [RFC2606]; that is, the specification that INVALID does not exist
  as a Top Level Domain (TLD) is imprecise given the various uses
  of the term TLD in policy forums;


Hm. Then why doesn't this document supersede 2606's imprecise 
specification with a better one?


That is a decent suggestion; I am not sure how well it will be accepted.
We tried to be much more precise about what sink.arpa is than the .invalid
specification is.
My suggestion would be to say that any NEW uses of .invalid as a
non-existing name MUST use sink.arpa. As for old uses, if possible they
should migrate to sink.arpa; there is no interoperability issue
that I can think of, only lag in changing code.
As for .invalid, its intended use seems to be more of a documentation
use than a protocol use.
Note:

The .invalid definition reads to me more like documentation:
   .invalid is intended for use in on-line construction of domain
   names that are sure to be invalid and which it is obvious at a
   glance are invalid.

Obvious at a glance implies a human to me.



   o  the contents of the root zone are derived by interaction with many
  inter-related policy-making bodies, whereas the administrative
  and technical processes relating to the ARPA zone are much more
  clearly defined in an IETF context;


That can be put more clearly: The IETF doesn't have sufficient
authority over the root zone to publish 2606 and ensure its
continued accuracy. My answer to that is that if so, then most of
2606 is broken, and it's necessary to fix much more than just the
paragraph that defines .invalid.


I prefer not to do a 2606 rewrite; I can agree to updating 2606 to say
.invalid SHOULD NOT be used on the wire.



   o  the use of ARPA for purposes of operational infrastructure (and,
  by inference, the explicit non-use of a particular name in ARPA)
  is consistent with the purpose of that zone, as described in
  [RFC3172].


Ie. if .invalid has to be dumped, the replacement should be in 
.arpa. I can accept that. _If_ it has to be dumped.


Maybe .invalid was a bad choice in the first place. But that's water 
under the bridge.


Arnt
__


Historical note: RFC2606 was done in a hurry to get the names into a document
before ICANN had really started to function.

Speculation: Even if the current ICANN is unlikely to allow a wildcard
in the root zone, that may change == .invalid suddenly exists.
The IETF/IAB can prevent the addition of a wildcard to .arpa.

Action: The next version of sink.arpa should say that .arpa MUST NOT
have a wildcard, as that would render sink.arpa useless.

Olafur

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-jabley-reverse-servers (Nameservers for IPv4 and IPv6 Reverse Zones) to Proposed Standard

2010-01-06 Thread Olafur Gudmundsson

At 00:40 05/01/2010, John C Klensin wrote:


Ok, Joe, a few questions since, as indicated in another note,
you are generating these documents in your ICANN capacity:


John,
for the record, the sink.arpa document was my idea and Joe volunteered to help;
it has nothing to do with his daytime job, but is related to something that
Joe cares about: having explicit documentation of special cases.


(1) If ICANN can re-delegate the servers for these domains
without IAB or IETF action, why is IETF action needed to create
the new names?  They are, after all, just names.


Transparency?

Olafur

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-jabley-reverse-servers (Nameservers for IPv4 and IPv6 Reverse Zones) to Proposed Standard

2010-01-05 Thread Olafur Gudmundsson

At 00:50 05/01/2010, John R. Levine wrote:

For the sink.arpa, it would be good to explain why we want this name to
exist.


We *don't* want the name to exist; that's the point of the draft. I 
presume that's what you meant?


It would still be nice to put in an explanation of the motivation 
for adding SINK.ARPA when its semantics and operations, at least for 
clients, appear identical to whatever.INVALID.


There are two parts to this: sink.arpa for all practical purposes
will function just like .invalid; the only difference is the name space
that the name resides in. In RFC 3172 the IAB recommends that arpa
be the domain for addressing and routing special needs.
Based on this it seems logical to set up a framework for any special-purpose
names in there instead of in the root zone.

The ARPA domain can be tuned better to the needs of negative caching than the
root zone; a negative-caching TTL of a week in ARPA will not cause
any problems. Currently the root has one day; if in the future there are
frequent additions/deletions of names in the root zone, the operators may
want to lower that value.
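
As a rough illustration (my own sketch, using the dnspython library, which is
not something this thread assumes), the negative-caching TTL being discussed
is the SOA MINIMUM field of the zone, so the two values can be compared like
this:

# Sketch: compare the negative-caching TTL (SOA MINIMUM, per RFC 2308)
# of the root zone and the arpa zone.  Requires the dnspython package.
import dns.resolver

for zone in (".", "arpa."):
    soa = dns.resolver.resolve(zone, "SOA")[0]
    # RFC 2308: a negative answer is cached for min(SOA TTL, SOA MINIMUM).
    print(f"{zone:6s} negative-cache TTL <= {soa.minimum} seconds")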



 Also, if your goal is that applications not have special logic
for sink.arpa you should *say* that:


Yeah.  As far as I know, it is quite uncommon for applications to 
hard code treatment of .INVALID.  But you seem to be saying that 
they do, and that causes problems that SINK.ARPA would solve. Tell 
us what they are.



There is one case where knowledge and special handling of the name may
cause problems:
  DNS liars, i.e. specialized DNS resolvers that make all
  non-existing names exist but do not generate a lie for sink.arpa.

In this case the name cannot be used as a test of the resolver's truthfulness.
If an application knows about the name, that is not a problem, as all
that will do is avoid a name lookup; and this is exactly the reason we want
the name to have explicit semantics that cannot change and that are under
IETF/IAB control, not ICANN's.

How sink.arpa or other specialized names in ARPA can be used we leave
to application specialists.
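
As a rough illustration of that resolver-truthfulness test (my own sketch,
using the dnspython library; sink.arpa here is just the name proposed by the
draft -- substitute any name whose non-existence you control):

# Sketch: detect a resolver that synthesizes answers for names that
# should not exist.  Requires the dnspython package.
import dns.resolver

def resolver_tells_truth(name: str = "sink.arpa.") -> bool:
    """Return True if the resolver reports NXDOMAIN for a name that is
    known not to exist, False if it fabricates an answer."""
    try:
        dns.resolver.resolve(name, "A")
    except dns.resolver.NXDOMAIN:
        return True        # honest: the name does not exist
    except dns.resolver.NoAnswer:
        return True        # the name exists but has no A record; still no lie
    return False           # got an A record for a name that should not exist

print("truthful" if resolver_tells_truth() else "lying")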

Olafur 


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Defining the existence of non-existent domains

2009-12-28 Thread Olafur Gudmundsson

At 05:38 28/12/2009, Arnt Gulbrandsen wrote:

John Levine writes:
If other people agree that it's a good idea to have a place that 
IANA can point to for the reserved names, I'd be happy to move this 
ahead. Or if we think the situation is OK as it is, we can forget about it.


I'd be happier with some sort of list (I was surprised by its 
length, and IMO that's a sign that the list is needed) and like your document.


(BTW. You mention _proto and _service. Neither is reserved for SRV.
SRV uses _tcp, _udp and other _proto names. I think it would be
stupid to use them for any other purpose in the DNS, but don't think
that justifies reserving _ah, _ax25, _ddp, _egp, _eigrp, _encap, 
_esp, _etherip, _fc, _ggp, _gre, _hmp, _icmp, _idpr-cmtp, _idrp, 
_igmp, _igp, _ip, _ipcomp, _ipencap, _ipip, _ipv6, _ipv6-frag, 
_ipv6-icmp, _ipv6-nonxt, _ipv6-opts, _ipv6-route, _isis, _iso-tp4, 
_l2tp, _ospf, _pim, _pup, _rdp, _rspf, _rsvp, _sctp, _skip, _st, 
_tcp, _udp, _vmtp, _vrrp, _xns-idp and _xtp. But we could have a fun game
of TLA bingo on the list and see who uses/remembers most of those!)


Arnt
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf




See:
http://www.ietf.org/id/draft-gudmundsson-dnsext-srv-clarify-00.txt

and an older version of that is being split (the second half is to contain the
registry cleanups).
http://tools.ietf.org/html/draft-gudmundsson-dns-srv-iana-registry-04


Olafur


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-jabley-sink-arpa (The Eternal Non-Existence of SINK.ARPA (and other stories)) to BCP

2009-12-21 Thread Olafur Gudmundsson

At 14:16 21/12/2009, Ted Hardie wrote:

I have no objection to the creation of sink.arpa, but
I will repeat comments I made on the NANOG list
that there are ways of accomplishing the same thing
which do not require the creation of this registry.  One
example method would be to create MX records which
point to 257.in-addr.arpa; this address is already
guaranteed not to have any resource records associated
with it by the structure of the reverse tree.


True, but that is an ugly hack :-)



The tricky bit here is, in fact, not the creation of the
record which is guaranteed not to have a resource
record, it is generating good practices for when this
would get used and how.  Can I point a CNAME to
sink.arpa, for example?  How do I manage the expiration
in that case, given that the negative existence of
sink.arpa is declared to be infinite?


I fail to see your problem: if the CNAME target does not exist,
then that results in a failed lookup.
There is no requirement that a CNAME target MUST exist.
The TTL on the CNAME, or the negative-cache value for the zone
the CNAME exists in, will dictate the caching of that answer
(to be more precise, the lowest TTL/negative TTL in the chain will control
how long the negative entry can be stored).
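
To spell that rule out, here is a tiny sketch (my own, not from the draft) of
the bound being described: the cache lifetime of a negative answer reached
through a CNAME chain is the smallest TTL or negative-caching TTL anywhere
along that chain.

def negative_cache_lifetime(ttls_in_chain):
    """Effective cache lifetime of a negative answer reached through a
    CNAME chain: the smallest TTL / negative TTL anywhere in the chain."""
    return min(ttls_in_chain)

# Example: a CNAME with a 1-hour TTL pointing at a target in a zone whose
# negative-caching TTL is one week is still re-checked after an hour.
print(negative_cache_lifetime([3600, 604800]))   # -> 3600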


The MX case may very well be useful, and I repeat
that I have no objection.  But the IESG may want to
consider whether referral to a WG for either the BCP
aspects in relation to mail or the DNS itself is warranted.


Use cases for sink.arpa and other similar names added to the
registry of special names for arpa should be reviewed by
working groups. We are not proposing any such uses, including
the cute hack QN=sink.arpa. QT=A QC=IN as a test
of whether your resolver is lying to you ;-)

For an actual usage example take a look at draft
http://www.ietf.org/id/draft-bellis-dns-recursive-discovery-00.txt

Olafur





regards,

Ted Hardie

On Mon, Dec 21, 2009 at 10:40 AM, The IESG iesg-secret...@ietf.org wrote:
 The IESG has received a request from an individual submitter to consider
 the following document:

 - 'The Eternal Non-Existence of SINK.ARPA (and other stories) '
   draft-jabley-sink-arpa-02.txt as a BCP

 The IESG plans to make a decision in the next few weeks, and solicits
 final comments on this action.  Please send substantive comments to the
 ietf@ietf.org mailing lists by 2010-01-18. Exceptionally,
 comments may be sent to i...@ietf.org instead. In either case, please
 retain the beginning of the Subject line to allow automated sorting.

 The file can be obtained via
 http://www.ietf.org/internet-drafts/draft-jabley-sink-arpa-02.txt


 IESG discussion can be tracked via
 
https://datatracker.ietf.org/public/pidtracker.cgi?command=view_iddTag=18558rfc_flag=0


 ___
 IETF-Announce mailing list
 ietf-annou...@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf-announce

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-jabley-sink-arpa (The Eternal Non-Existence of SINK.ARPA (and other stories)) to BCP

2009-12-21 Thread Olafur Gudmundsson

At 15:38 21/12/2009, Ted Hardie wrote:

On Mon, Dec 21, 2009 at 12:16 PM, Olafur Gudmundsson o...@ogud.com wrote:
 At 14:16 21/12/2009, Ted Hardie wrote:

 I have no objection to the creation of sink.arpa, but
 I will repeat comments I made on the NANOG list
 that there are ways of accomplishing the same thing
 which do not require the creation of this registry.  One
 example method would be to create MX records which
 point to 257.in-addr.arpa; this address is already
 guaranteed not to have any resource records associated
 with it by the structure of the reverse tree.

 True, but that is an ugly hack :-)

Oh, absolutely.  And I agree with the idea that documenting
what you are doing is a good thing, whether it is ugly hacking
or populating a registry.



 The tricky bit here is, in fact, not the creation of the
 record which is guaranteed not to have a resource
 record, it is generating good practices for when this
 would get used and how.  Can I point a CNAME to
 sink.arpa, for example?  How do I manage the expiration
 in that case, given that the negative existence of
 sink.arpa is declared to be infinite?

 I fail to see your problem, if the CNAME target does not
 exist then that results in a failed lookup.
 There is no requirement CNAME target MUST exist.
 The TTL on the CNAME or the negative cache value for the zone
 where the CNAME exists in will dictate the caching of that answer,
 (to be more precise the lowest TTL/negative TTL in the chain will control
 how long the negative entry can be stored).


Fair enough.  I was worried that sink.arpa might get special
cased in some software, since it is guaranteed never to have
any records. A permanent entry with NXDOMAIN in the local
cache, for example, might have some odd effects.  Repeating
in the draft somewhere that this guarantee of non-existence should
not impact caching and expiry processing seems harmless and might be
useful.



We will put that on our TODO list for this draft.



 The MX case may very well be useful, and I repeat
 that I have no objection.  But the IESG may want to
 consider whether referral to a WG for either the BCP
 aspects in relation to mail or the DNS itself is warranted.

 Usage cases of sink.arpa and other similar names added to the
 registry of special names for arpa should be reviewed by
 working groups.

Which working groups reviewed the MX and MNAME use cases?
I'll go and read the archives.


None


 We are not proposing any such uses, including
 the cute hack QN=sink.arpa. QT=A QC=IN as a test
 if your resolver is lying to you ;-)


The current version still has the MX and MNAME uses,
in the example section, so it seems to be suggesting them.
If it is not meant to suggest them, then some additional language
suggesting that these still need operational specification would
be useful, at least in my opinion.


Added to the TODO list for next version.


 For an actual usage example take a look at draft
 http://www.ietf.org/id/draft-bellis-dns-recursive-discovery-00.txt


A quick look at that seems to be for .local.arpa, which would be
a different entry. Is there a different draft for MX and MNAME
and sink.arpa?


Not yet; someone who knows e-mail better than I do should write that
one. We are only trying to provide tools, not specify how to use them.




Thanks for the quick response,


Thanks for asking the questions.



regards,

Ted Hardie


Olafur







 regards,

 Ted Hardie

 On Mon, Dec 21, 2009 at 10:40 AM, The IESG iesg-secret...@ietf.org
 wrote:
  The IESG has received a request from an individual submitter to consider
  the following document:
 
  - 'The Eternal Non-Existence of SINK.ARPA (and other stories) '
draft-jabley-sink-arpa-02.txt as a BCP
 
  The IESG plans to make a decision in the next few weeks, and solicits
  final comments on this action.  Please send substantive comments to the
  ietf@ietf.org mailing lists by 2010-01-18. Exceptionally,
  comments may be sent to i...@ietf.org instead. In either case, please
  retain the beginning of the Subject line to allow automated sorting.
 
  The file can be obtained via
  http://www.ietf.org/internet-drafts/draft-jabley-sink-arpa-02.txt
 
 
  IESG discussion can be tracked via
 
  
https://datatracker.ietf.org/public/pidtracker.cgi?command=view_iddTag=18558rfc_flag=0

 
  ___
  IETF-Announce mailing list
  ietf-annou...@ietf.org
  https://www.ietf.org/mailman/listinfo/ietf-announce
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-jabley-sink-arpa (The Eternal Non-Existence of SINK.ARPA (and other stories)) to BCP

2009-12-21 Thread Olafur Gudmundsson

John, SM
do the changes that Ted Hardie asked for address your concern(s)?
see:
http://www.ietf.org/mail-archive/web/ietf/current/msg59759.html

All we want sink.arpa to do is to create a domain name with known
characteristics and create a mechanism to define other such domain
names that may have other characteristics for various applications.

Olafur


At 23:57 21/12/2009, John C Klensin wrote:



--On Monday, December 21, 2009 14:18 -0800 SM s...@resistor.net
wrote:

 At 10:40 21-12-2009, The IESG wrote:
 The IESG has received a request from an individual submitter
 to consider the following document:

 - 'The Eternal Non-Existence of SINK.ARPA (and other stories)
 ' draft-jabley-sink-arpa-02.txt as a BCP

 The IESG plans to make a decision in the next few weeks, and
 solicits final comments on this action.  Please send
 substantive comments to the

 The other stories are in Section 3 of this draft. :-)  Please
 update the SMTP protocol reference to RFC 5321.

 If I understood the story, it is to get compliant MTAs not to
 attempt mail delivery to domains which do not wish to accept
 mail.  This does not really solve the implicit MX question but
 that's another story.

 Here's some text from Section 5.1 of RFC 5321:

If MX records are present, but none of them are usable, or the
 implicit MX is unusable, this situation MUST be reported as an error.

When a domain name associated with an MX RR is looked up and the
 associated data field obtained, the data field of that response MUST
 contain a domain name.  That domain name, when queried, MUST return
 at least one address record (e.g., A or AAAA RR) that gives the IP
 address of the SMTP server to which the message should be directed.

 As the intended status of this draft is BCP, it may have to
 take into consideration the above text from RFC 5321 and see
 how to resolve the issue.

Let me say this a little more strongly.  This proposal
effectively modifies RFC 5321 for one particular domain name at
the same time that it effectively (see notes by others)
advocates against coding the relevant domain name into anything
or treating it in a special way.  That combination doesn't seem
to me to work.

The issue that SM refers to as implicit MX has been hotly
debated in the email community and there was an explicit
decision (albeit with consensus that was fairly rough) to not
change things when RFC 5321 was approved.  If implicit MXs were
prohibited, then this proposal would be unnecessary to
accomplish the desired purposes for email.  If implicit MXs
continue to be permitted, this proposal, as I understand it,
would not work.

It seems to me to be a requirement that proposals that modify
existing standards be reviewed on the mailing list(s) associated
with those standards, preferably before going to IETF Last Call.
This proposal has not been reviewed on, or even exposed to,
those lists, at least with regard to email.

john



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-jabley-sink-arpa (The Eternal Non-Existence of SINK.ARPA (and other stories)) to BCP

2009-12-21 Thread Olafur Gudmundsson

Correction: the message should have been:
http://www.ietf.org/mail-archive/web/ietf/current/msg59761.html

  Olafur


At 00:18 22/12/2009, Olafur Gudmundsson wrote:

John, SM
do the changes that Ted Hardie asked for address your concern(s)?
see:
http://www.ietf.org/mail-archive/web/ietf/current/msg59759.html

All we want sink.arpa to do is to create a domain name with known
characteristics and create a mechanism to define other such domain
names that may have other characteristics for various applications.

Olafur


At 23:57 21/12/2009, John C Klensin wrote:



--On Monday, December 21, 2009 14:18 -0800 SM s...@resistor.net
wrote:

 At 10:40 21-12-2009, The IESG wrote:
 The IESG has received a request from an individual submitter
 to consider the following document:

 - 'The Eternal Non-Existence of SINK.ARPA (and other stories)
 ' draft-jabley-sink-arpa-02.txt as a BCP

 The IESG plans to make a decision in the next few weeks, and
 solicits final comments on this action.  Please send
 substantive comments to the

 The other stories are in Section 3 of this draft. :-)  Please
 update the SMTP protocol reference to RFC 5321.

 If I understood the story, it is to get compliant MTAs not to
 attempt mail delivery to domains which do not wish to accept
 mail.  This does not really solve the implicit MX question but
 that's another story.

 Here's some text from Section 5.1 of RFC 5321:

If MX records are present, but none of them are usable, or the
 implicit MX is unusable, this situation MUST be reported as an error.

When a domain name associated with an MX RR is looked up and the
 associated data field obtained, the data field of that response MUST
 contain a domain name.  That domain name, when queried, MUST return
 at least one address record (e.g., A or AAAA RR) that gives the IP
 address of the SMTP server to which the message should be directed.

 As the intended status of this draft is BCP, it may have to
 take into consideration the above text from RFC 5321 and see
 how to resolve the issue.

Let me say this a little more strongly.  This proposal
effectively modifies RFC 5321 for one particular domain name at
the same time that it effectively (see notes by others)
advocates against coding the relevant domain name into anything
or treating it in a special way.  That combination doesn't seem
to me to work.

The issue that SM refers to as implicit MX has been hotly
debated in the email community and there was an explicit
decision (albeit with consensus that was fairly rough) to not
change things when RFC 5321 was approved.  If implicit MXs were
prohibited, then this proposal would be unnecessary to
accomplish the desired purposes for email.  If implicit MXs
continue to be permitted, this proposal, as I understand it,
would not work.

It seems to me to be a requirement that proposals that modify
existing standards be reviewed on the mailing list(s) associated
with those standards, preferably before going to IETF Last Call.
This proposal has not been reviewed on, or even exposed to,
those lists, at least with regard to email.

john



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Request for community guidance on issue concerning a future meetingof the IETF

2009-09-24 Thread Olafur Gudmundsson

At 23:45 23/09/2009, Cullen Jennings wrote:


IAOC,

I'm trying to understand what is political speech in China. The
Geopriv WG deals with protecting users' location privacy. The policies
of more than one country have come up in geopriv meetings in very
derogatory terms. There have been very derogatory comments made by
people about the US's wiretap policy. Unless someone can point me at
specifics of what is or is not OK, I would find this very concerning.



Anything we think is speculation; the ways of governments and
hotels have their own logic that may not make sense to technical people.

As for restrictions on topics that border the technical and political spheres,
there are different restrictions in different countries.

I propose an experiment: let's have a meeting; if it gets shut down,
we will never return to China.

Olafur

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: DNS Additional Section Processing Globally Wrong

2009-06-05 Thread Olafur Gudmundsson

At 02:06 04/06/2009, Sabahattin Gucukoglu wrote:

On 4 Jun 2009, at 04:06, Mark Andrews wrote:
In message 
aaab52ef-ad0a-4d3c-9b28-b864f342d...@sabahattin-gucukoglu.com , 
Sabahattin Gucukoglu writes:

The problem is this: the authoritative servers for a domain can
easily
never be consulted for DNS data if the resource being looked up
happens to be available at the parent zone.  That is,
bigbox.example.net's address and the RR's TTL can never be as
specified by the zone master unless he or she has control over the
parent zone's delegation to example.net if bigbox.example.net happens
to be serving for example.net.  (Registries give you address control,
of course, but often they fix on large TTLs.)

As far as I can tell, every public recursive server I can reach,
dnscache and BIND9, and one Microsoft cache and one of whatever
OpenDNS uses, all do the wrong thing (TM) and never look up true data
from authoritative name servers.  They hang on to additional section
data from the delegating name server and pass this on as truth, the
whole truth, and nothing but the truth to everybody who asks.


Except they don't.  What you may be seeing is parent servers
returning glue as answers and that being accepted.


Glue data, additional and non-authoritative by design, intent and
specification, aren't what I want caches to keep.  The data I spent my
lunch hour putting into my zone file is. :-)

As a matter of fact, it never occurred to me to wonder at this
misbehaviour - it clearly wasn't that much of a big deal when I was
running things myself - but the 2008 cache-poison attacks found me
surprised that this is how it is.  In particular, they only worked
because the cache was happy to keep additional data for hosts that
were pertinent to the query, but in which it had no business caching.
If it had instead chased up the referral, the attacker would at least
have had to run a nameserver to answer the Is it really you? query.

Cheers,
Sabahattin


Strictly speaking, the additional-section processing is correct.
The missing step is that recursive resolvers have optimized away
the safety step of fetching authoritative delegation information.

File bug reports, and send mail to namedroppers insisting that the
forgery-resilience effort mandate fetch-authoritative behavior, even if that
will break some important stuff, as implementors of fake DNS servers do not
return NS sets when asked (actually they have started to).
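
A minimal sketch of that safety step (my own illustration, using the dnspython
library and placeholder names, neither of which comes from this thread):
learn the zone's NS set, resolve a nameserver address with a fresh lookup, and
then ask that authoritative server directly instead of reusing glue.

# Requires the dnspython package; the names below are placeholders.
import dns.message
import dns.query
import dns.resolver

NAME = "bigbox.example.net."
ZONE = "example.net."

# 1. Learn the authoritative NS set for the zone.
ns_host = str(dns.resolver.resolve(ZONE, "NS")[0].target)

# 2. Resolve that nameserver's own address (a fresh lookup, not glue).
ns_addr = dns.resolver.resolve(ns_host, "A")[0].address

# 3. Ask the authoritative server directly for the final answer.
response = dns.query.udp(dns.message.make_query(NAME, "A"), ns_addr, timeout=3)
for rrset in response.answer:
    print(rrset)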

Olafur

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Abstract on Page 1?

2009-03-07 Thread Olafur Gudmundsson

At 14:02 07/03/2009, Henrik Levkowetz wrote:


On 2009-03-04 16:33 Margaret Wasserman said the following:
 I would like to propose that we re-format Internet-Drafts such that
 the boilerplate (status and copyright) is moved to the back of the
 draft, and the abstract moves up to page 1.

 I don't believe that there are any legal implications to moving our
 IPR information to the back of the document, and it would be great not
 to have to page down at the beginning of every I-D to skip over it.
 If someone wants to check the licensing details, they could look at
 the end of the document.

+1

Whether or not this is an easy fix for the tools, I think it's the right
thing to do, not only for drafts but also for RFCs, as it lets us focus
on the technical matter of a document, rather than copyright, other
IPR details, and administrivia.


Hear hear,

IFF we need copyright on page 1, something like this should be sufficient:
This document is covered by IETF Copyright policy ID; a copy of this
policy can be found at the end of the document.

Or:
s/at the end of the document/at http://www.ietf.org/copyright_ID/

I have no comment on what form the ID part should take, other than that it
MUST be shorter than 30 characters.

Olafur
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Proposed Experiment: More Meeting Time on Friday for IETF 73

2008-07-17 Thread Olafur Gudmundsson



At 17:33 17/07/2008, IETF Chair wrote:

The IESG is considering an experiment for IETF 73 in Minneapolis, and
we would like community comments before we proceed.  Face-to-face
meeting time is very precious, especially with about 120 IETF WGs
competing for meeting slots.  Several WGs are not able to get as much
meeting time as they need to progress their work.  As an experiment,
we are considering adding two Friday afternoon one-hour meeting slots.
The proposed Friday schedule would be:

   0900-1130 Morning Session I
   1130-1300 Break
   1300-1400 Afternoon Session I
   1415-1515 Afternoon Session II

Please share your thoughts about this proposed experiment.  The
proposed experiment will be discussed on the IETF Didcussion mail
list (ietf@ietf.org).


How about addressing the problem by creating 1.5-hour slots?

The Dublin schedule has 117 meeting slots scheduled:
24 are 60 minutes
47 are 120 minutes
8  are 130 minutes
38 are 150 minutes

A number of working groups ask for 2-hour slots because they think
1 hour is not sufficient.

For example, by scheduling Monday as 4 x 90-minute slots instead of the
current 2 x 120 + 1 x 130, we gain 8 meeting slots.
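
A back-of-the-envelope check of that claim (my arithmetic, assuming roughly
eight working-group meetings running in parallel, as the schedule figures here
suggest):

# Monday today: 2 x 120-minute + 1 x 130-minute sessions = 3 periods per room.
# Monday proposed: 4 x 90-minute sessions = 4 periods per room.
current_periods = 3
proposed_periods = 4
parallel_rooms = 8          # assumption: about 8 meetings run in parallel

extra_meetings = (proposed_periods - current_periods) * parallel_rooms
print(extra_meetings)       # -> 8 additional meeting slots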

One observation: some of the 60-minute slots and the Friday morning slot
have fewer than 8 meetings in parallel.
IMHO adding 2 sets of 60-minute slots on Friday will not help.

Olafur

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: Last Call: draft-ietf-pce-pcep (Path Computation Element (PCE) Communication Protocol (PCEP)) to Proposed Standard

2008-05-01 Thread Olafur Gudmundsson
Yes, this is a well-written document.
I find it scary how many flags fields the document defines in message types
with no flags defined and NO GUIDANCE on the scope of these flags.
Are they per object class + object type, or per message type?

Are these needed, or just nice to have?
What is the harm in just defining these fields as Reserved?

 Olafur


At 05:56 30/04/2008, Romascanu, Dan (Dan) wrote:
I would like to congratulate the editors for the inclusion and content
of the Manageability Consideration section. It is well written, and
includes detailed information that will be very useful for implementers
as well as for operators who will deploy the protocol.

One nit: in section 8.6 s/number of session/number of sessions/

Dan


  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of The IESG
  Sent: Wednesday, April 16, 2008 11:49 PM
  To: IETF-Announce
  Cc: [EMAIL PROTECTED]
  Subject: Last Call: draft-ietf-pce-pcep (Path Computation
  Element (PCE) Communication Protocol (PCEP)) to Proposed Standard
 
  The IESG has received a request from the Path Computation Element WG
  (pce) to consider the following document:
 
  - 'Path Computation Element (PCE) Communication Protocol (PCEP) '
 draft-ietf-pce-pcep-12.txt as a Proposed Standard
 
  The IESG plans to make a decision in the next few weeks, and
  solicits final comments on this action.  Please send
  substantive comments to the ietf@ietf.org mailing lists by
  2008-04-30. Exceptionally, comments may be sent to
  [EMAIL PROTECTED] instead. In either case, please retain the
  beginning of the Subject line to allow automated sorting.
 
  The file can be obtained via
  http://www.ietf.org/internet-drafts/draft-ietf-pce-pcep-12.txt
 
 
  IESG discussion can be tracked via
  https://datatracker.ietf.org/public/pidtracker.cgi?command=vie
  w_iddTag=14049rfc_flag=0
 
  ___
  IETF-Announce mailing list
  [EMAIL PROTECTED]
  https://www.ietf.org/mailman/listinfo/ietf-announce
 
___
IETF mailing list
IETF@ietf.org
https://www.ietf.org/mailman/listinfo/ietf

___
IETF mailing list
IETF@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: houston.rr.com MX fubar?

2008-01-18 Thread Olafur Gudmundsson

At 06:29 17/01/2008, Tony Finch wrote:

On Thu, 17 Jan 2008, Mark Andrews wrote:

   a) when RFC 2821 was written IPv6 existed and RFC 2821 acknowledged
  its existence.  It DID NOT say synthesize from AAAA.

RFC 2821 only talks about IPv6 domain literals. The MX resolution
algorithm in section 5 is written as if in complete ignorance of IPv6 so
it is reasonable to interpret it in the way that RFC 3974 does. If you
wanted to rule out MX synthesis from AAAA then it should have been written
down ten years ago. It's too late now.

  They have already been upgraded in this way. Even without fallback-to-AAAA,
  they have to be upgraded to handle IPv6 anyway, because the IPv4
  MX lookup algorithm breaks as I described in
  http://www1.ietf.org/mail-archive/web/ietf/current/msg49843.html

   The MX additional section is an optimization.  The lack of AAAA or
   A records is NOT a bug.

Perhaps you could explain why the problem I described in the URL above
isn't actually a problem.


In my many years of dealing with DNS protocol definition and implementations
I have developed a moral: optimization is the worst possible solution!

Placing A records in the additional section of an answer to an MX query is an
attempt to optimize (or minimize) the number of queries needed by the MTA.
Due to this, there are MTAs out there that will not / cannot ask the follow-up
query if the desired address record is not in the additional section.

The fundamental question is which protocol implementation should change to
support the other?
Mail people argue that DNS implementations should change;
we DNS people argue that mail implementations should change/evolve.

I do not think we will ever agree.

Tony, in your message you talk about the MTA doing extra wasted lookups.
Guess what, it is only a question of who does the work:
the MTA
or the recursive DNS resolver.

The only difference is that the MTA can scope the queries based on what
transport protocols it uses, and curtail the search once it finds a usable
answer. The DNS recursive resolver can only stop after it tries all names
for all transport addresses.
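
To make the division of labour concrete, here is a rough sketch of the MTA
side (my own illustration, using the dnspython library): take the MX set in
preference order, then issue the follow-up address queries explicitly rather
than relying on whatever happened to be in the additional section, and stop
at the first usable answer.

# Requires the dnspython package; "example.com" is just a placeholder.
import dns.resolver

def mail_targets(domain: str, want_ipv6: bool = True):
    """Yield candidate (host, address) pairs in MX preference order."""
    mx_set = sorted(dns.resolver.resolve(domain, "MX"),
                    key=lambda rr: rr.preference)
    rrtypes = ["AAAA", "A"] if want_ipv6 else ["A"]   # scope to our transports
    for mx in mx_set:
        host = str(mx.exchange)
        for rrtype in rrtypes:
            try:
                for rr in dns.resolver.resolve(host, rrtype):
                    yield host, rr.address
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                continue

for host, addr in mail_targets("example.com"):
    print(host, addr)
    break                     # curtail the search at the first usable answer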

Olafur 



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: draft-hoffman-additional-key-words-00.txt

2008-01-18 Thread Olafur Gudmundsson

At 12:49 16/01/2008, Paul Hoffman wrote:

At 1:43 PM -0500 1/15/08, John C Klensin wrote:


A different version of the
same thinking would suggest that any document needing these
extended keywords is not ready for standardization and should be
published as Experimental and left there until the community
makes up its collective mind.


It seems that you didn't read the whole document; RFC 4307 already 
uses these terms. My experience with talking to IKEv2 implementers 
(mostly OEMs at this point) is that they understood exactly what was 
meant and were able to act accordingly when choosing what to put in 
their implementations.


I think this addition of 2119 words is quite useful.
More and more of our work shifts from protocol definition to
protocol maintenance; these extra keywords give working groups a way
to indicate to developers/purchasers/planners/operators/regulators[1]
what the requirements for the protocol are going to be in the next few years.

More and more of the software we deal with is now released in multi-year
cycles followed by a multi-year deployment lag. Furthermore, a number of
protocols are embedded in hardware devices that are frequently not updated
during the device's lifetime; think of the routers in homes.

Question: with the use of +/- in the context of NOT, is that going
to be written as SHOULD+ NOT or SHOULD NOT+?
I have no feeling on the topic, just think it should be documented.

I support this document and what it is trying to accomplish.

Olafur
PS: [1] Even though most of the people in the IETF are implementors or
lobbyists, the documents get used by a large group that is absent
from the IETF and that, due to their roles, would contribute nothing
even if they attended.



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Daily Dose version 2 launched

2007-11-02 Thread Olafur Gudmundsson

I go to the tools site most of the time to look at Internet-Drafts, as
I like the multiple ways I can view the drafts: in text, with HTML links,
diffs, etc.

I like the new news version of the page, as I can see that it helps people see
what is happening; I could just as easily live with news.ietf.org for that.

The new Daily Dose is well integrated into the existing tools, so I see this
as an advantage.
My vote is to keep the Daily Dose format.

Olafur

At 19:12 02/11/2007, Lixia Zhang wrote:

Hi Henrik,

here is my 2 cents: every time when I go to tools page is because I'm
looking for some tools, but not news. Assuming others go there for
the same purpose, then one main goal is to ease the tool searching,
right?
instead of daily dose being the front page, what about treating the
daily dose just as an item (that's a tool in some sense:), and make
the content of http://tools.ietf.org/tools/ the front page?  for me
that can be a speedup

Lixia


Hi Joel,

On 2007-11-02 18:00 Joel M. Halpern said the following:

I second John's note.  When I saw Pasi's note, I had assumed that he
was referring to a link off of the tools page.
Replacing the tools page with an activity summary is quite
surprising.
Joel


Hmm.  I'm not sure exactly what's the issue here, so let's try to
explore this and see what we can find out.

Essentially, the change is two-fold:

1. The old lefthand list of links have been replaced by the same lefthand
   list of links which appear on all the WG and other tools pages, making
   for one less list of links to keep up-to-date manually.  This has been
   in the works for quite some time; I've tried to poll people on it, and
   most people seem to be happier with the lefthand menu on the WG pages
   than the front page menu.

2. Replacing the static text on the first page with a news item page.

   The static text that was replaced was as follows:


-- -

Welcome to the IETF Tools Pages

   These pages are maintained by the IETF Tools Team.

 The aim of these web-pages is to help the IETF community as follows:

 * Make it easy to find existing tools
 * Provide means of feedback on existing and new tools (wiki and mailing
   list)
 * Provide information on new and updated tools

 If you have comments on these pages, or ideas for new tools or for
 refinement of current tools, please send them to tools-[EMAIL PROTECTED]


 For current tools, there may also be a page in the IETF tools wiki (in
 addition to the tools page you can reach through the menu to the left)
 which summarises proposals and known deficiencies - feel welcome to
 check this out, and contribute your information!


-- -

The way I see it, replacing that with a news summary would be a good
idea.  The text above was appropriate when the tools pages were being
established; they are now used by a lot of people, and the text above
seems to be a bit superfluous.

I could be wrong about that, and if so, an indication of how you
feel about the old and new page would be good.  Some options are:

   - You would specifically like to keep the text above (or some
   modification of it).

   - You agree that it's time for a re-working of the front page, but you
   don't want to have the latest happenings on the front page, you want
   them as a separate news page.

   - You don't care too much what's at the front page, as long as it's
   not too heavy-weight -- the major objection isn't to the news page as
   such, but to the increased size of the page *for you personally as a
   consumer of the page*

   - Some other option which I haven't understood the importance of yet,
   which you will be able to formulate better than I can here :-)

Please speak up and indicate as specifically as you can what's good and
bad about the change, and how we can improve it; and we'll try to make it
so!


Regards,

Henrik


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Attention wireless lan congestion issues

2001-12-11 Thread Olafur Gudmundsson


Please stop downloading
alt.binary newsgroups,
windowsupdate,
and other large data junk,

and also please verify that your wireless client is NOT
(repeat NOT) in Ad Hoc Network mode and that your personal
system is NOT configured to be an access point or base
station.

Olafur