Re: leader statements

2013-10-10 Thread Douglas Otis

On Oct 10, 2013, at 1:52 PM, j...@mercury.lcs.mit.edu (Noel Chiappa) wrote:

 From: Arturo Servin arturo.ser...@gmail.com
 
 Then we have a big problem as organization, we are then leaderless.
 
 I'm not sure this is true. The IETF worked quite well (and produced a lot of
 good stuff) back in, e.g. the Phill Gross era, when I am pretty sure Phill's
 model of his job was indeed as a 'facilitator', not a 'leader' in the sense
 you seem to be thinking of. So why do we now need a 'leader'?

Agreed.

To quote Alan Greenspan on the 2008 economic debacle:

And the answer is that we're not smart enough as people. We just cannot see 
events that far in advance. And unless we can, it's very difficult to look back 
and say, why didn't we catch something?

Couple human limitations with a willingness to accept questionable justifications 
for bypassing concerns, driven by a leadership intent on pushing a group's agenda. 
A common symptom is declaring objections to come from a single aberrant individual, 
even when others have expressed the same concerns.  The IETF must remain critical 
of its process and its leadership to better avoid future debacles.

Regards,
Douglas Otis
 
  




Re: Last Call: Change the status of ADSP (RFC 5617) to Internet Standard

2013-10-03 Thread Douglas Otis

On Oct 3, 2013, at 4:53 AM, Hector Santos hsan...@isdg.net wrote:

 
 On 10/2/2013 5:04 PM, Murray S. Kucherawy wrote:
 On Wed, Oct 2, 2013 at 7:41 AM, The IESG iesg-secret...@ietf.org wrote:
 
 The IESG has received a request from an individual participant to make
 the following status changes:
 
 - RFC5617 from Proposed Standard to Historic
 
 The supporting document for this request can be found here:
 
 http://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-historic/
 [...]
 
 
 I support this change, for the reasons articulated in the request and in
 this thread.
 
 I am the lead developer and maintainer of OpenDKIM, an open source
 implementation of DKIM and related standards, including VBR, ADSP, the
 recent REPUTE work, and some others.  It is widely deployed, including use
 at a few of the largest operators.  An informal survey was done on one of
 the mailing lists where this package is supported, asking which operators
 do ADSP queries and which act upon the results.  I have so far only
 received a half dozen answers to this, but the consensus among them is
 clear: All of the respondents either do not check ADSP, or check it but do
 nothing with the results.  One operator puts disposition of messages based
 on ADSP results into the hands of its users, but no statistics were offered
 about how many of these users have ADSP-based filtering enabled.  That same
 operator intends to remove that capability once this status change goes
 into effect.
 
 -MSK
 
 I don't believe this would be a fair assessment of industry wide support -- 
 using only one API to measure. There are other APIs and proprietary systems 
 who most likely are not part of the OpenDKIM group.  There are commercial 
 operations using DKIM and ADSP is part of it.
 
 The interop problem is clearly due intentional neglect by specific MLS 
 (Mailing List Software) of the DKIM security protocol, not because of the 
 protocol itself.  Support of the protocol does not cause an interop problem 
 -- it helps support the DKIM security protocol.The ADSP (RFC5617) 
 protocol is part of the DKIM security threat mitigation model (RFC4686), the 
 DKIM Service Architecture (RFC5585), the DKIM Deployment Guide (RFC5863) and 
 also the Mailing List for DKIM guideline (rfc6377).   That is FOUR documents.
 
 Applicability and Impact reports *should* to be done before pulling the rug 
 from under the non-OpenDKIM market feet.  In addition, it appears part of the 
 request is to help move an alternative DMARC protocol forward. Why would the 
 DMARC replacement do better?  Why should commercial development for ADSP be 
 stopped and removed from products, and now a new investment for DMARC be 
 done?  Would this resolve the apparent interop problem with the specific 
 Mailing List Software who refuse to support a DKIM security protocol?
 
 More importantly, why should any small operator and participant of the IETF 
 continue to support IETF projects if their support is ignored and projects 
 will be ended without their input or even explaining why it should be ended?  
 That doesn't play well for the IETF Diversity Improvement Program.

Dear Hector,

Indeed, more should be said about the underlying reasons.  ADSP is being 
abandoned for the same reason few providers reject messages not authorized by 
SPF records ending in -all (FAIL).  Mailing-list software existed long before 
either of these strategies, and domains whose users post to mailing lists need 
to be excluded from publishing restrictive DMARC policies (until a revised ATPS 
specification able to use normal signatures is published).  The reason for 
moving toward DMARC is that, although aligned policy is only suitable for domains 
limited to messages of a transactional nature, places where one authorization 
scheme fails can be mostly recovered by the other, which greatly increases the 
chances of a domain's policy being applied in the desired fashion.
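
For readers comparing the two mechanisms, both are published as TXT records at 
reserved owner names.  A minimal sketch (example.com and the report address are 
placeholders, not policy recommendations):
,---
_adsp._domainkey.example.com.  IN TXT  "dkim=all"
_dmarc.example.com.            IN TXT  "v=DMARC1; p=quarantine; rua=mailto:reports@example.com"
'---
Under DMARC, a message passes when either aligned SPF or aligned DKIM succeeds, 
which is the recovery property described above.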

Regards,
Douglas Otis



Re: How to protect DKIM signatures: Moving ADSP to Historic, supporting DMARC instead

2013-10-03 Thread Douglas Otis

On Oct 3, 2013, at 1:37 PM, Barry Leiba barryle...@computer.org wrote:

 To both Doug and Hector, and others who want to drift in this direction:
 
 As I've said before, the question of moving ADSP to Historic is one
 we're taking on its own, and is not connected to anything we do or
 don't do with DMARC.  Bringing DMARC into the discussion is a
 distraction, and, worse, makes it look like there's a tie-in.  There
 is not.

Dear Barry,

Even John Levine, the author, had opined along these same lines.  The response to 
Hector agreed that a reason should be given, along with agreeing with his 
justifications.  The tie-in may be limited, but DMARC has nevertheless become 
the chosen alternative.  It seems that if any reasons are given for moving ADSP to 
Historic, they should also explain why DMARC should succeed where ADSP did not, 
unless your point is that nothing has been learned?

Regards,
Douglas Otis

Re: Macro Expansion (was: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard)

2013-09-18 Thread Douglas Otis
Dear SM,

See comments inline.

On Sep 16, 2013, at 9:00 AM, S Moonesamy sm+i...@elandsys.com wrote:

 Hi Doug,
 At 21:55 11-09-2013, Douglas Otis wrote:
 Add to:
 11.5.3.  Macro Expansion
 ,---
 It is not within SPF's purview whether IPv6 or DNSSEC is being used.  IPv6 
 (RFC2460) increased the minimum MTU size to 1280 octets.  DNSSEC is deployed 
 with EDNS0 (RFC6891) to avoid TCP fallback.  EDNS0 suggests an MTU increase 
 between 1280 and 1410 octets offers a reasonable result starting from a 
 request of 4096 octets.  A 1410 MTU offers a 2.4 times payload increase over 
 the assumed MTU of 576 octets and is widely supported by Customer Premise 
 Equipment.  With increased MTUs being used with DNS over UDP, network 
 amplification concerns increase accordingly.
 
 SPF macros can utilize SPF parameters derived from email messages that can 
 modulate the names being queried in several ways without publishing 
 additional DNS resources.  The SPF macro feature permits malefactors a means 
 to covertly orchestrate directed DDoS attacks from an array of compromised 
 systems while expending little of their own resources.
 
 Since SPF does not make use of a dedicated resource record type or naming 
 convention, this leaves few solutions available to DNS operations in 
 offering a means to mitigate possible abuse.  This type of abuse becomes 
 rather pernicious when used in conjunction with synthetic domains now 
 popular for tracking users without using web cookies.
 
 However, email providers can mitigate this type of abuse by ignoring SPF 
 records containing macros.  Very few domains make use of macros, and 
 ignoring these records result in neutral handling.  Some large providers 
 have admitted they make use of this strategy without experiencing any 
 notable problem.  AOL began their support of SPF by saying they would use 
 SPF to construct whitelists prior to receipt of email.  Clearly, such 
 whitelisting practices tends to preclude benefits derived from macro use.
 '---
 
 As background information I read draft-otis-spfbis-macros-nixed-01.  I read 
 the messages where EDNS0 was mentioned [1].  I read the messages on the 
 thread starting with msg-id: 9884b9cd-0ed3-4d89-a100-58d05ea4b...@gmail.com.  
 I have followed the discussions about macros ever since the SPFBIS WG was 
 chartered.
 
 The above suggestion is to add text in the Security Considerations section of 
 the draft.  The problem being pointed out is, in simple terms, DNS 
 amplification.  The first (quoted) paragraph argues that there can be an 
 acute problem because of EDNS0 as specified in the Internet Standard.
 
 The second paragraph starts with SPF macros can utilize SPF parameters 
 derived from email messages.  I do not understand that.  From what I 
 understand the rest of the second (quoted) paragraph argues that the SPF 
 macro feature permits evildoers to use it as an attack vector.

Since this was not understood, I'll attempt to clarify.  The effort to keep 
these conversations fairly concise seems to lead to confusion for those not 
familiar with DNS.

DNS UDP traffic lacks congestion avoidance when used to covertly direct 
attacks.  Residential systems represent a large share of the compromised 
systems involved with email, although the portion attributable to data centers, 
measured by overall traffic, is increasing.  Network amplification is measured 
by the gain when an initiating exchange triggers a higher volume of subsequent 
exchanges.  DNS caching tends to reduce subsequent exchanges.  SPFbis macros 
inhibit these normal caching protections by imposing mechanisms not directly 
supported by DNS and by having targets constructed from email message 
components.  SPFbis mechanism names can be misleading, since each is resolved 
against a related but manipulated DNS resource name.  One SPFbis mechanism can 
represent more than 100 subsequent DNS transactions where normally resolving 
the resource would represent a single transaction.  Publishing new targets 
within DNS resources to circumvent caching would normally be expensive and 
unlikely to provide remarkable gain.  SPFbis macros change this equation 
significantly.  SPFbis offers macros to translate code points, restructure host 
labels, build labels from the client IP address, make use of the local-part of 
the message return path or some label in the EHLO hostname, and so on.
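
As a hedged illustration of that modulation (example.com and the _spf zone are 
hypothetical), a single published record such as:
,---
example.com.  IN TXT  "v=spf1 exists:%{ir}.%{l}._spf.example.com -all"
'---
causes every evaluating receiver to issue a query whose name is rebuilt from the 
reversed client IP (%{ir}) and the local-part of the return path (%{l}).  Since 
a malefactor controls both inputs, each message can force a previously unseen, 
uncacheable query name while the malefactor's own record stays cached.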

In other words, SPFbis macros give malefactors a means to modulate the target 
of their queries while still leveraging their own cached DNS records.  This 
means a malefactor's DNS resources can be highly leveraged as a result of 
recipient SPFbis macro processing.  Secondly, SPFbis also ignores the overall 
size of the resources being queried in many cases.  The most egregious case is 
perhaps that of the unlimited PTR RRsets, which then result in a series of 
address RRset resolutions cascading down the hostname labels for up to a 
maximum of 10 PTRs that might be offered on either a random or round-robin 
basis.  It would be extremely difficult

Re: Messages to SPFBIS mailing list (was: [spfbis] Benoit Claise's No Objection on draft-ietf-spfbis-4408bis-19: (with COMMENT))

2013-09-15 Thread Douglas Otis

On Sep 14, 2013, at 1:57 PM, S Moonesamy sm+i...@elandsys.com wrote:

 Hi Doug,
 At 20:56 13-09-2013, Douglas Otis wrote:
 If I have said something offensive, allow me once again to assure you this 
 was never my intent.
 
 There isn't anything in your message which was offensive.  I'll try to 
 explain the problem.  Your message was not even related to the topic being 
 discussed.  It becomes a problem when it happens repeatedly.  People complain 
 about it.  A WG Chair then has to decide what action to take.
 
 If you are unsure about whether your message is on-topic you can contact the 
 WG Chairs at spfbis-cha...@tools.ietf.org.  Please note that my intent is not 
 to restrict your participation.

Dear SM,

This view is fully reasonable under the paradigm that SPFbis is just another 
protocol using DNS.  If so, a reference to RFC4033 would be logical and my 
response would seem off-topic.  To clarify, the strong response was aimed 
specifically at the suggestion that the referenced RFC offers a meaningful 
countermeasure.  It does not and cannot.
,---
 I'll suggest:
 
   and see RFC 4033 for a countermeasure.
'---
The reasoning is as follows:

Nothing within RFC4033, or even other recently proposed mitigation strategies, 
offers remedies or countermeasures that adequately address the risks introduced 
by SPFbis.  SPFbis failed to acknowledge that some providers will not process 
macros and that extremely few domains publish SPF records containing macros.  
Adding any number of DNS-related references will NOT offer countermeasures able 
to address the network amplification SPFbis permits on behalf of unknown 
entities.  In other words, SPFbis advocates a scheme where more than 222 
automated, macro-driven DNS transactions are to be made by message recipients 
on behalf of unknown entities.

The SPFbis process:
  a) Fails to use dedicated resource records
  b) Fails to use naming conventions
  c) Does not limit the overall size of a large sequence of resource records
  d) Uses macro-selected email terms to modify query targets, which--
     1) inhibits effective caching
     2) increases network amplification potential (3x)
     3) increases the number of indirect DNS threat vectors (any system sending 
email)

By any practical measure, macros have already been deprecated.  SPFbis should 
reflect this reality, since not doing so will greatly impact interchange.  
SPFbis should also go one step further and return permerror when resource 
record sizes are larger than necessary, to better protect against reflected 
network amplification threats that would imperil DNSSEC/EDNS0.

Regards,
Douglas Otis






Re: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-09-11 Thread Douglas Otis

Recommended text is as follows:

4.6.4.  DNS Lookup Limits

Was:
,--
SPF implementations MUST limit the total number of mechanisms and modifiers 
(terms) that cause any DNS query to 10 during SPF evaluation.  Specifically, 
the include, a, mx, ptr, and exists mechanisms as well as the 
redirect modifier count against this collective limit.  The all, ip4, and 
ip6 mechanisms do not count against this limit.  If this number is exceeded 
during a check, a permerror MUST be returned.  The exp modifier does not 
count against this limit because the DNS lookup to fetch the explanation string 
occurs after the SPF record evaluation has been completed.
'--

Change to:
,---
SPF does not directly limit the number of DNS lookup transactions.  Instead, 
the number of mechanisms and the modifier term redirect MUST be limited to no 
more than 10 instances within the evaluation process.  The mechanisms ip4, 
ip6, and all and the exp modifier are excluded from being counted in this 
instance limitation. If this instance limit is exceeded during the evaluation 
process, a permerror MUST be returned.
'---
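
As a hypothetical illustration of the proposed wording (the domains are 
placeholders, and the string is shown wrapped for readability), a record 
carrying eleven counted terms, e.g.
,---
example.com.  IN TXT  "v=spf1 include:a.example include:b.example
    include:c.example include:d.example include:e.example include:f.example
    include:g.example include:h.example include:i.example mx a -all"
'---
exceeds the 10-instance limit (nine include, one mx, one a) and would return 
permerror, while ip4, ip6, all, and exp terms remain uncounted.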

5.  Mechanism Definitions

Was:
,--
Several mechanisms rely on information fetched from the DNS.  For these DNS 
queries, except where noted, if the DNS server returns an error (RCODE other 
than 0 or 3) or the query times out, the mechanism stops and the topmost 
check_host() returns temperror.  If the server returns domain does not 
exist (RCODE 3), then evaluation of the mechanism continues as if the server 
returned no error (RCODE 0) and zero answer records.
'---

Add:
,---
See the recommended limits on void lookups defined in Section 4.6.4. DNS 
Lookup Limits.
'---

3.4.  Record Size

Was:

,---
Note that when computing the sizes for replies to queries of the TXT format, 
one has to take into account any other TXT records published at the domain 
name.  Similarly, the sizes for replies to all queries related to SPF have to 
be evaluated to fit in a single 512 octet UDP packet.
'---

Change to:
,---
Note that when computing the sizes for replies to queries of the TXT format, 
one has to take into account any other TXT records published at the domain 
name.  Similarly, the sizes for replies to all queries related to SPF have to 
be evaluated to fit in a single 512 octet DNS Message.
'---

Add to:
11.5.3.  Macro Expansion
,---
It is not within SPF’s purview whether IPv6 or DNSSEC is being used.  IPv6 
(RFC2460) increased the minimum MTU size to 1280 octets.  DNSSEC is deployed 
with EDNS0 (RFC6891) to avoid TCP fallback.  EDNS0 suggests an MTU increase 
between 1280 and 1410 octets offers a reasonable result starting from a request 
of 4096 octets.  A 1410 MTU offers a 2.4 times payload increase over the 
assumed MTU of 576 octets and is widely supported by Customer Premise 
Equipment.  With increased MTUs being used with DNS over UDP, network 
amplification concerns increase accordingly.

SPF macros can utilize SPF parameters derived from email messages that can 
modulate the names being queried in several ways without publishing additional 
DNS resources.  The SPF macro feature permits malefactors a means to covertly 
orchestrate directed DDoS attacks from an array of compromised systems while 
expending little of their own resources.

Since SPF does not make use of a dedicated resource record type or naming 
convention, this leaves few solutions available to DNS operations in offering a 
means to mitigate possible abuse.  This type of abuse becomes rather pernicious 
when used in conjunction with synthetic domains now popular for tracking users 
without using web cookies.

However, email providers can mitigate this type of abuse by ignoring SPF 
records containing macros.  Very few domains make use of macros, and ignoring 
these records results in neutral handling.  Some large providers have admitted 
they make use of this strategy without experiencing any notable problems.  AOL 
began their support of SPF by saying they would use SPF to construct whitelists 
prior to receipt of email.  Clearly, such whitelisting practices tend to 
preclude benefits derived from macro use.
'---

Regards,
Douglas Otis



Re: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-09-10 Thread Douglas Otis

On Sep 2, 2013, at 5:54 AM, Phillip Hallam-Baker hal...@gmail.com wrote:

 On Thu, Aug 29, 2013 at 12:30 PM, Dan Schlitt schl...@theworld.com wrote:
 As the manager of a modestly large network I found the TXT record as a useful 
 tool in management of the network. Such a use was even suggested by other 
 system managers. That was a time when the Internet was a friendlier place. 
 Today I might do things differently and not make some of the TXT records 
 visible on the public Internet. But they would still be useful for internal 
 management.
 
 TXT records can be useful for ad-hoc local configs and the SPF use has made 
 this harder. But it is hard to see how the SPF record makes that situation 
 any better.
 
 
 Probably a better solution would be to take a chunk of the reserved RR code 
 space and stipulate that these have TXT form records so folk have 10,16 or so 
 records for this use.
 
 In the longer term, the problem with the SPF RR is that it is a point 
 solution to 'fix' only one protocol. It is an MX record equivalent. Which was 
 OK given the circumstances when it was developed.
 
 
 A shift from TXT to SPF records is not likely to happen for the niche SPF 
 spec. But may well be practical for a wider client/initiator policy spec.

Dear Phillip,

It seems many of the larger providers are unwilling to process SPF macros due 
to inherent risks and inefficiency.  Rather than accessing data using the DNS 
resource selectors of Name, Type, and Class, SPF uses mechanisms above DNS that 
combine additional domain, IP address, and email-address input parameters with 
results generated from a series of prescribed DNS transactions.  The macro 
feature was envisioned as leveraging these additional inputs to influence query 
construction.  It seems lack of support by large providers has ensured scant 
few macros are published.

In the beginning, several people wanted a macro language for managing DNS 
processing, with little idea where this would lead.  At the time there was 
already a dedicated binary resource record able to fully satisfy the 
information needs now met by SPF.  The policy aspects of SPF are largely 
ignored due to the exceptions often required.  An SRV resource record resolving 
the location of a service could include an APL RR with CIDR information for all 
outbound IP addresses.  This would offer load balancing and system priorities 
while mapping outbound address space within two DNS transactions instead of the 
111 recursive transactions expected by SPF.  If one were starting over, DANE 
TLS or DTLS would be a better solution that should be even easier to administer, 
since it avoids a need to trust IP addresses and NATs.  As with PKI, there are 
too many actors influencing routing's integrity.
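
A minimal sketch of that alternative (the names, priorities, and prefixes are 
hypothetical) would publish the service location and the outbound address space 
at conventional owner names:
,---
_smtp._tcp.example.com.  IN SRV  10 5 25 mail.example.com.
_email.example.com.      IN APL  1:192.0.2.0/24 2:2001:db8:100::/48
'---
A receiver then learns the MTA location (with priority and weight) and the 
complete CIDR list in two cached transactions, instead of walking an open-ended 
chain of TXT, include, MX, A, and PTR lookups.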

Regards,
Douglas Otis





Re: Conclusions of Last Call for draft-ietf-spfbis-4408bis

2013-09-09 Thread Douglas Otis
 of cookies.

11.  The continued specification of SPF macros inhibits interchange.  Because 
it is common for SPF records not to produce a pass, this issue is not likely to 
have been given adequate attention.  When SPF macros are not implemented by 
receivers for any number of very valid reasons, such as ensuring effective 
caching of DNS, their required use can and will lead to interchange issues.  
SPF may impose complex macro handling over multiple DNS responses determined by 
a sequence of queries that cannot be directly handled by DNS itself.  Even the 
operation of SPF macros raises security concerns threatening the integrity 
of the associated SMTP and DNS servers.  Since the publication of SPF macros 
is well below the level used to justify removal of the SPF RR type, the 
same consideration should have been given to SPF macros.  Use of SPF macros also 
interferes with forensic efforts at handling interchange problems.  As such, it 
is not surprising to find that extremely few domains publish SPF records using 
macros and that large providers do not process SPF macros.

The lack of consideration given to DNS by the SPF protocol offers overwhelming 
justification for not considering this protocol suitable for endorsement as a 
standard.  Going from Experimental to Informational should not represent any 
hardship, but it would serve as a warning that protocols should pay attention to 
their impact on underlying infrastructure.  There are limits to how easy sending 
messages should be made, since the Internet is not suffering from a message scarcity.

Regards,
Douglas Otis






Re: [apps-discuss] AppsDir review of draft-ietf-repute-model-08

2013-09-03 Thread Douglas Otis

On Aug 30, 2013, at 7:50 PM, Andrew Sullivan a...@anvilwalrusden.com wrote:

 Colleagues, and Doug especially,
 
 The message I sent (below) wasn't intended as a shut up and go away
 message, but a genuine query.  I have grave doubts that TLS is the
 right example (to begin with, I think fitting it into the REPUTE
 approach, given the existing CA structure, would also be
 controversial); but I'm genuinely trying to understand how to make the
 document better,  not trying to tell anyone to go away.
 
 Best,
 
 A
 
 On Fri, Aug 30, 2013 at 07:39:24PM -0400, Andrew Sullivan wrote:
 Hi Doug!
 
 On Fri, Aug 30, 2013 at 04:24:17PM -0700, Douglas Otis wrote:
 
 Use of DKIM offers a very poor authentication example
 
 Thanks for the feedback.  I don't recall you having made this point on
 the repute mailing list.  Did you,  I missed it?

Dear Andrew,

Sorry for the delay.  I have been overwhelmed by pressing personal matters.

When the REPUTE list first started, I commented on DKIM's inability to 
support fair reputation; I did so again about a year ago, and then again a few 
months ago, IIRC.  These comments were ignored, with some denouncements appearing 
in other venues from various DKIM advocates.  This is understandable, since the 
motivation for REPUTE was aimed at establishing DKIM domain reputations to 
compensate for the lack of adoption of a DKIM reputation service.  Lack of 
adoption was not because DNS was unable to return a resource record referenced 
at a domain managed by a reputation service.  Even during DKIM's inception, 
issues related to DKIM's replay feature (needed for compatibility with SMTP) and 
its impact on ensuring fair reputation were discussed.  DKIM's validity is not 
related to the intended recipients nor to the entity issuing the message.  It 
seems some expect the authenticated signed DKIM message fragment to act as a 
proxy for a message's provenance.  Unfortunately, DKIM provides inadequate 
protection of the message's integrity as seen by recipients. 

I'll repeat the information contained in 
http://tools.ietf.org/html/draft-otis-dkim-harmful-03

DKIM and email in general lack a status indicating whether a message's structure 
is valid as defined by RFC5322, RFC6152, RFC6532, and RFC6854.  Logically, a 
valid message structure with respect to singleton header fields should have 
been a DKIM requirement for valid signatures.  Without a valid message structure, 
what a recipient sees can be trivially exploited and abused whenever a signed 
DKIM message fragment is extended as a proxy for the message's source.  The hacks 
recommended in RFC5863 Section 8.15 were offered as a method to better ensure 
message structure, but these are seldom implemented or noted.
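
A hedged illustration of why the singleton requirement matters (the addresses, 
selector, and truncated bh=/b= values are fabricated placeholders): a verifier 
selects each field named in h= starting from the bottom of the header block, so 
a signature covering a single From field still verifies after a second, unsigned 
From field is prepended above it, and that added field is what many MUAs display.
,---
From: "Support" <support@bank.example>        <-- added later, not covered
DKIM-Signature: v=1; a=rsa-sha256; d=signer.example; s=sel;
  h=from:to:subject:date; bh=...; b=...
From: account@signer.example                  <-- the instance actually hashed
To: victim@example.net
Subject: Please confirm your account
'---
Listing From an extra time in h= (over-signing) closes this hole, but as noted 
above that practice is seldom implemented.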

Both the repute model and RFC5863 erroneously conflate verification of a DKIM 
message fragment with verification of the entire message.  Such conflation is 
valid in reference to actual message sources as determined by the IP address 
(ignoring BGP spoofing), or via StartTLS with certificates obtained from trusted 
CAs or via DANE.  XMPP's use of StartTLS further leverages CAs using OCSP, as 
does most of the protection afforded today's websites.  XMPP represents a 
scalable model that could apply to SMTP.

ATPS (RFC6541) also offers a broken example of how a domain is able to 
authorize an unlimited number of third-party service providers.  ATPS is 
broken because it must be used in conjunction with a non-standard DKIM 
signature, which defeats its purpose. 

Getting a domain's reputation wrong can prove costly for those hosting 
reputation services.  Costs include the handling of complaints, notification of 
abuse, and at times extremely expensive legal defenses.  Some have suggested 
DKIM reputation can become a type of crowdsourcing as a means to overcome 
DKIM's limitations, especially since these same advocates also insist DKIM is 
not being abused. 

A myopic view about reputation and what is meant by Authentication will not 
offer a protocol able to ensure interchange, nor will related statistics offer 
conclusive evidence.  The scale of abuse can be both massive and unfair.

 Do you have a better example, specifically excluding …

It would be better to not use examples than to offer broken examples.

 
 StartTLS would represent a much better example.
 
 …this, which strikes me as suffering from a different but related set
 of issues along the lines you're complaining about?

Can you describe these concerns?  How are these different from most Internet 
services?  Unlike StartTLS, DKIM is trivially exploited.

I confirmed the ease of this exploit when contacted by Larry Seltzer by using 
it to achieve both acceptance and inbox placement as a type of phishing 
demonstration.  What do you think this does to those whose email address or 
domain is being exploited?  It seems highly unlikely any reputation will ever 
affect those providers that must be considered Too Big to Block.

http://www.zdnet.com/dkim-useless-or-just-disappointing-spam-yahoo-google-719351

Re: AppsDir review of draft-ietf-repute-model-08

2013-08-30 Thread Douglas Otis
Dear Tony,

Use of DKIM offers a very poor authentication example, since this draft makes 
the same errors made in RFC5863.  It is wrong to suggest the DKIM protocol 
permits associating a validated identifier with a message, as stated in the 
Introduction.  This is the same erroneous conflation of a message fragment with 
the message as a whole.  In most cases, DKIM does not adequately protect message 
integrity, as explained in 
http://tools.ietf.org/html/draft-otis-dkim-harmful-03.  In addition, DKIM cannot 
authenticate who is accountable for having sent the message, which makes it 
impossible to safely assign reputation.  As such, DKIM should never be referred 
to as a message authentication protocol.  StartTLS would represent a much 
better example. 

Regards,
Douglas Otis

Re: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-26 Thread Douglas Otis

On Aug 24, 2013, at 3:16 AM, S Moonesamy sm+i...@elandsys.com wrote:

 Hi Doug,
 At 13:07 23-08-2013, Douglas Otis wrote:
 The SPFbis document improperly conflates DNS terminology with identical 
 terms invented by this document. Examples are terms used to describe 
 mechanisms having the same identifier differentiated between mechanisms and 
 DNS resource records by using lower and upper case respectively.  References 
 to SPF records are differentiated by a textual prefix and not by TYPE as 
 defined by DNS.
 
 Could you provide some examples of the above as I would like to clearly 
 understand the argument?

Dear SM,

Thank you for your questions.  Sorry for the delay while helping a sick friend 
and colleague.

When the SPF document refers to "Sender Policy Framework (SPF) records" or "SPF 
records", this conflicts with DNS's definition of a record.  It is wrong to refer 
to these as records.  RFC1034 defines a resource record as a TTL and RDATA 
selected by Owner (domain), Type (a 16-bit value), and Class (a 16-bit value).  No 
such selection is possible for SPF.  SPF uses a subclass of TXT resource 
records distinguished by a non-standard prefix that has no registry, nor is a 
registry practical at this point.
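
A hedged sketch of the distinction (example.com and the record contents are 
placeholders): DNS itself can select on Owner, Type, and Class, but the "v=spf1" 
prefix can only be filtered above DNS, after every member of the TXT RRset has 
been fetched.
,---
;; selection DNS itself can perform (Owner, Type, Class):
example.com.  IN SPF  "v=spf1 ip4:192.0.2.0/24 -all"   ; dedicated type 99, now removed

;; what the TXT approach actually returns to the resolver:
example.com.  IN TXT  "v=spf1 ip4:192.0.2.0/24 -all"
example.com.  IN TXT  "some-unrelated-application-data"
'---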

Terminology used by SPF sounds as if it refers directly to DNS elements.  It 
does not.  This poor terminology is misleading and makes properly expressing 
security concerns difficult; the confusion seems to be by design.

 In addition, the MARID WG NEVER reached consensus.  A follow-on group 
 operating outside of the IETF required a promise of support to subscribe to 
 their mailing list.  When one looks at how SPF is commonly used, the 
 pre-existing APL resource record offered an effective alternative, but was 
 oppose by a particular vendor unwilling to fully implement DNS.  Currently 
 this vendor is seldom used to directly interface with MTAs on the Internet 
 and no longer justifies the use of the TXT records.  As such, the SPF 
 Resource Record should not have been deprecated.
 
 There are other messages on the Last Call about the SPF Resource Record.  
 I'll take up the above together with the other comments.
 
 This draft should be made Informational and not Standards Track.
 
 I suggest providing arguments for that.

The SPF protocol was an effort that ignored concerns expressed within the DNS 
community about the effect of overloading TXT and about dynamic macro query 
names modified by email elements wholly unconstrained by a sender's 
infrastructure.  The SPF macro scheme can greatly amplify the impact of an 
already large number of DNS transactions.  Because TXT is overloaded, updating 
SPF is improbable.  Because SPF itself is rarely the target of a DDoS attack, it 
becomes easy to dismiss this issue in derogatory terms as being the fault of DNS.  

The primary use of SPF today is to define email outbound address space.  The 
policy aspects related to the all mechanism are largely ignored due to the high 
number of required exceptions.  Rather than using the existing APL Resource 
Record in the form of _email.domain APL CIDRs in binary form, the group 
devised a macro language to authorize various email elements using text, to 
accommodate a vendor that has since become practically irrelevant for Internet 
email exchange.  Although most large providers now ignore SPF macros, very few 
domains publish them either.  Macros interfere with a provider's effort at 
mapping a domain's address space.  Most consider authorization of 
email-address local parts the sender's role.  Nevertheless, the WG failed to 
warn of this issue by either deprecating macros or advising against their 
publication.  Macros inhibit effective caching, imperil SMTP server security, 
and degrade interchange.  This is not a good candidate for standardization if 
this category is expected to retain any value or the IETF is to offer helpful 
guidance.

Simply because SPF did not stipulate DNSSEC does not mean DNSSEC can be 
ignored.  Not considering DNSSEC is an example of a failed consideration.  Must 
the world wait for SPF to change before DNSSEC can be safely deployed?  
Overloading of TXT resource records at the zone apex, along with macro scripts 
able to leverage email elements to direct reflected attacks from cached 
resource records, is an example of why this document should not be deemed a 
standard. 

 Section 4.6.4 fails to offer a sufficiently clear warning about potential 
 magnitudes of DNS transactions initiated by a single SPF evaluation where 
 two are recommended to occur one for the separate identifiers.  In fact, 
 this section appears to make assurances no more that 10 DNS queries will 
 result and is widely misunderstood.
 
 There is a discussion about Section 4.6.4 at 
 http://www.ietf.org/mail-archive/web/spfbis/current/msg03305.html

There is a trend by large providers to switch over to using wildcards on 
enormous networks to track users instead of using cookies.  The impact this has 
on the past conversations is enormous

Re: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-26 Thread Douglas Otis

On Aug 26, 2013, at 3:48 PM, Scott Kitterman sc...@kitterman.com wrote:

 On Monday, August 26, 2013 15:42:41 Douglas Otis wrote:
 Please also note that the PTR RR is not constrained in the current
 specification and can create erratic results.  It would be far safer to
 Perm error when overflowing on the number of PTR records.  There is no
 upper limit as some represent web farms hosting thousands of domains. 
 
 This exact issue was the subject of working group discussion.  Since the 
 number of PTR records is an attribute of the connect IP, it is under the 
 control of the sending party, not the domain owner.  A cap that resulted in 
 an 
 error would, as a result, enable the sender to arbitrarily get an SPF 
 permerror in place of a fail if desired.  The WG considered that not a good 
 idea.


Dear Scott,

It is within the control of the domain owner whether to make use of the 
ptr mechanism in their SPF TXT record.  Random ordering of responses is also 
controlled by the IP address owner and not the domain owner.  The ptr mechanism 
may offer intermittent results that will be difficult to troubleshoot.  By 
returning a permerror on a ptr overflow, the domain owner is quickly notified 
that this mechanism should not be used and is not fooled by it working some of 
the time.  The greater concern is the overall response sizes when 
DNSSEC is used.  In that case, response sizes can grow significantly.  Allowing 
large responses to occur without producing an error seems like a risky strategy 
from the DDoS perspective.  That is also another reason for worrying about the 
use of TXT RRs.  How many large wildcard TXT RRs exist, and if they do, who 
would be at fault when this becomes a problem for SPF?
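
To illustrate the concern (the domain is a placeholder), a record such as:
,---
example.com.  IN TXT  "v=spf1 ptr:example.com -all"
'---
obliges every receiver to resolve the connecting address's PTR RRset and then an 
address RRset for each returned hostname before the mechanism can match.  The 
size, ordering, and rotation of that PTR RRset, and thus the number and size of 
the follow-on lookups, are chosen entirely by whoever controls the reverse zone, 
which is the intermittent-result and response-size exposure described above.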

Regards,
Douglas Otis




Re: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-26 Thread Douglas Otis

On Aug 26, 2013, at 4:29 PM, Scott Kitterman sc...@kitterman.com wrote:

 On Monday, August 26, 2013 16:28:03 Douglas Otis wrote:
 On Aug 26, 2013, at 3:48 PM, Scott Kitterman sc...@kitterman.com wrote:
 On Monday, August 26, 2013 15:42:41 Douglas Otis wrote:
 Please also note that the PTR RR is not constrained in the current
 specification and can create erratic results.  It would be far safer to
 Perm error when overflowing on the number of PTR records.  There is no
 upper limit as some represent web farms hosting thousands of domains.
 
 This exact issue was the subject of working group discussion.  Since the
 number of PTR records is an attribute of the connect IP, it is under the
 control of the sending party, not the domain owner.  A cap that resulted
 in an error would, as a result, enable the sender to arbitrarily get an
 SPF permerror in place of a fail if desired.  The WG considered that not
 a good idea.
 
 Dear Scott,
 
 It is within the control of the Domain owner about whether to make use of
 the ptr mechanism in their SPF TXT.  Random ordering or responses is also
 controlled by the IP address owner and not the Domain owner.  The ptr
 mechanism may offer intermittent results that will be difficult to
 troubleshoot.  By offering a Perm error on a ptr overflow, the domain owner
 is quickly notified this mechanism should not be used and are not fooled by
 it working some of the time.  The greater concern is in regard to the over
 all response sizes when DNSSEC is used.  In that case, response sizes can
 grow significantly.  Allowing large responses to occur without producing an
 error seems like a risky strategy from the DDoS perspective.  That is also
 another reason for worrying about the use of TXT RRs.  How many large
 wildcard TXT RR exist, and if they do, who would be at fault when this
 becomes a problem for SPF?
 
 Your conclusion is different than the one the working group reached.

Dear Scott, 

Do you recall whether DNSSEC had been considered?

Regards,
Douglas Otis

Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-23 Thread Douglas Otis
 (distance) value followed by an MTA hostname.  

A DNS MX reply may offer A records for the hostnames in the Additional Section; 
however, per RFC2181 Section 5.4:
,---
Servers must never merge RRs from a response with RRs in their
cache to form an RRSet.  If a response contains data that would form
an RRSet with data in a server's cache the server must either ignore
the RRs in the response, or discard the entire RRSet currently in the
cache, as appropriate.
'---

This will not prove ideal for SPF with respect to effective use of DNS.

Also, hostnames contained within an MX resource record might be within any 
domain, and not necessarily the same domain as that of the base SPF record.

Per RFC2181 5.4.1. Ranking data
An authoritative answer from a reply should replace cached data that had been 
obtained from additional information in an earlier reply. However additional 
information from a reply will be ignored if the cache contains data from an 
authoritative answer or a zone file.


Mistake.

Section 3.4 

Second paragraph:

Similarly, the sizes for replies to all queries related to SPF have to be 
evaluated to fit in a single 512 octet UDP packet.

s/UDP packet/DNS message/

Per RFC1035 
Section 2.3.4. Size limits
UDP messages    512 octets or less


Section 4.2.1. UDP usage
Messages carried by UDP are restricted to 512 bytes (not counting the IP or UDP 
headers).

The DNS message size limitation is not the same as a UDP packet limit as 
suggested in Section 3.4.


The SPFbis WG charter permitted removal of unused protocol elements, under which 
the "ptr" mechanism was deprecated and the SPF resource record type was removed. 
SPF's very dangerous macro feature is currently used by less than 0.053% of 
domains, which clearly falls below the WG removal threshold.  We have also been 
told by a few very large providers that SPF records containing any macro 
reference are ignored for reasons related to both efficiency and security.  
Macros also inhibit the primary use by large providers, which is to compile the 
domain's IP address list.

The WG should have been told to focus on security and on better ensuring 
interchange, to achieve a safer stance moving forward.

Regards,
Douglas Otis

Radical Solution for remote participants

2013-08-12 Thread Douglas Otis

On Aug 12, 2013, at 1:06 PM, John Leslie j...@jlc.net wrote:

 Janet P Gunn jgu...@csc.com wrote:
 
 Again, it strengthens the case to get it done right. This part has been
 working well though.
 
 Not necessarily.  There was one WG where I had to send an email to the WG 
 mailing list asking for someone to provide slide numbers on jabber.
 
   ... and Janet was merely the one who _did_ so. Others did their best
 to guess at the slide numbers.
 
   At least one-third of the sessions I listened to failed to provide
 all we are told to expect in the way of jabber support. :^(
 
   OTOH, we _do_ get what we pay for; so I don't mean to complain.

Dear John,

You are right about getting your money's worth.  Remote participants are charged 
nothing and receive no credit for having participated.  Often their input is 
ignored as well. 

A radical solution for meaningful remote participation is to change how A/V in 
meeting rooms operates.  This change would require a small amount of additional 
equipment and slightly different room configurations.

1) Ensure the exact digital interfaces driving projectors are fully available 
remotely.  

2) Ensure audio access requires an identified request via XMPP prior to 
enabling either a remote or local audio feed.

3) RFID tags could facilitate enabling a local audio feed instead of an 
identified request via XMPP.

4) In the event of the local venue losing Internet access, the device 
regulating A/V presentations must be able to operate in a virtual mode where 
only remote participants are permitted to continue the meeting proceedings.

5) Important participants should plan for alternative modes of Internet access 
to remain part of the proceedings.

6) Develop a simple syntax for use on XMPP sessions (a hypothetical sketch 
follows this list) to:
 1) signify a request to speak on X 
 2) withdraw a request to speak on X
 3) signify support of Y
 4) signify non-support of Y
 5) signal when a request has been granted or revoked.  For local participants 
this could be in the form of a red or green light at their microphone. 

7) Develop a control panel managed by chairs or their proxies that consolidates 
and sequences requests, and logs support and non-support indications and the 
granting of requests.

8) Chairs should be able to act as proxies for local participants lacking 
access to XMPP. 

9) Chairs should have alternative Internet access independent of the venue's.
 
10) Establish a reasonable fee to facilitate remote participants who receive 
credit for their participation equal to that of being local.

11) The device regulating A/V presentations must drive both the video and audio 
portions of the presentations.  A web camera in a room is a very poor 
replacement.

12) All video information in the form of slides and text must be available from 
the Internet prior to the beginning of the meeting.
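
As a hypothetical sketch of the simple syntax proposed in item 6 (the command 
spellings and topic labels are invented purely for illustration), the chairs' 
control panel might consume one-line messages such as:
,---
+q slide 12          request to speak on slide 12
-q slide 12          withdraw that request
+1 proposal-B        signify support of proposal B
-1 proposal-B        signify non-support of proposal B
mic: granted dotis   logged grant, mirrored by the microphone light
'---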

Regards,
Douglas Otis



Re: procedural question with remote participation

2013-08-06 Thread Douglas Otis

On Aug 6, 2013, at 10:52 AM, Eliot Lear l...@cisco.com wrote:

 But if those lines contain questions, it gets you to the point where there is 
 discussion, which is just fine, as you point out here:
 
 The best outcome at a working group meeting is that, as a presenter, you 
 spend most of your time listening rather than talking. If the mic line is 
 empty, you probably should not have been on the agenda.
Dear Eliot and Joe,

Local conversations often use shorthand references to the material presented 
rather than restating the content so that remote participants can understand 
what is being said.  

The IETF should devise a strategy able to virtualize both the local projector 
and PA in the event the venue no longer has access to the Internet, so that the 
meetings are still able to proceed.  Ensure remote participants are not 
considered secondary.  In fact, paying some access fee (that should be able to 
avoid VAT) might be reasonable.

Regards,
Douglas Otis

Re: Berlin was awesome, let's come again

2013-08-06 Thread Douglas Otis

On Aug 6, 2013, at 4:48 PM, Keith Moore mo...@network-heretics.com wrote:

 On 08/06/2013 07:36 PM, John C Klensin wrote:
 ...
 IETF 39 was in Munich (August 1997)  ArabellaSheraton @
 Arabella Park, and it was HOT pretty much the whole week.
 If I recall, another very successful meeting in a place we
 should go back to.
 
 I liked Munich as a destination.   But the hotel / meeting facility in Berlin 
 far surpassed the Sheraton in Munich in every way imaginable.

Dear Keith,

Agreed.  One minor downside was needing an additional flight.  It seems AB, 
which handles about a third of the traffic (versus Lufthansa's roughly one 
fifth), was not the best choice: a 6-hour layover was extended by an hour on 
the tarmac in a hot plane. 

Regards,
Douglas Otis

Re: Remote participants, newcomers, and tutorials

2013-07-28 Thread Douglas Otis

On Jul 28, 2013, at 3:05 PM, Arturo Servin arturo.ser...@gmail.com wrote:

 
   That may work as well.
 
   It depends on the time that the presenters have to make the material
 available.
 
   The important is to have discussion-material available in advance. It
 could be a presentation or a video (I would personally prefer a
 presentation because I can quickly scan it for important things)

Dear Arturo,

Emphasis should be to ensure equal status for remote participants.  For 
reasonable remote participation, presentation material should be made available 
in advance. It seems reasonable to allow minor edits within a day or even hours 
before the meeting starts.

To do this, video and audio control should be centralized within the meeting 
room and virtualized in the cloud when necessary.  A dual-core Atom processor 
should be all that is needed.  Rather than using an audio bridge with multiple 
simultaneous audio and video feeds, a strategy should be developed that suits 
those in the meeting venue as well as those who are remote.  For this, 
there should be a minor level of automation available to facilitate selection 
of the individuals permitted to speak.  This control should not need to be done 
at the meeting venue; it could be exercised from the cloud as well.

Regards,
Douglas Otis



Re: IETF registration fee?

2013-07-11 Thread Douglas Otis

On Jul 11, 2013, at 1:50 PM, Brian E Carpenter brian.e.carpen...@gmail.com 
wrote:

 Douglas,
 ...
 Those traveling thousands of miles already confront many uncertainties.  
 Those that elect to participate remotely should be afforded greater 
 certainty of being able to participate when problems occur at local venues 
 or with transportation.  Increasing participation without the expense of 
 the brick and mortar and travel should offer long term benefits and 
 increased fairness. 
 
 How much would you be willing to pay for remote participation
 (assuming it was of high quality)?
 
 $600 for the week (cookies and taxes not included)?

Dear Brian,

A $600 price would represent significant savings for most participants 
traveling large distances and staying in hotels, provided remote participants 
were given first-class status such that meetings continued even when physical 
venues lost Internet access.  The overall number of participants should 
increase, which would create pressure for much lower meeting fees.

I suspect this will require abandoning unmoderated inbound access to audio 
channels.  That has not worked very well.  Echo cancellation, noise, and 
disruption are likely too problematic, as well as resource intensive.  Whatever 
is used must be rock solid.

The IETF already supports a hallway channel.  When the goal is to sell products 
or services, face-to-face becomes far more important, and there are many other 
organizations better at playing that role.  I have also seen these face-to-face 
meetings used many times to subvert ongoing efforts.  Strictly moderated and 
fully recorded meetings hold a greater promise of providing fairness. 

Imagine XMPP as a control channel for moderators in conjunction with meeting 
channels that automatically recognize requests to speak. The meeting agenda 
should already indicate who is to speak, with their presentations available 
before the beginning of the meetings.  Nothing could  be done without 
everything being recorded and available remotely. 

Regards,
Douglas Otis

Re: Final Announcement of Qualified Volunteers

2013-07-09 Thread Douglas Otis

On Jul 9, 2013, at 2:07 PM, Ted Lemon ted.le...@nominum.com wrote:

 On Jul 9, 2013, at 4:58 PM, Scott Brim scott.b...@gmail.com wrote:
 Is the great majority of the wisdom in the IETF incorporated into a
 few megacorporations?
 
 (That might reflect market share, in which case, is it a problem?)
 
 I don't know the answer to that question, but it's an interesting question.   
 But the reason I reacted to John Klensin's message earlier the way I did is 
 that I think that the question of how biased toward the company's goals a 
 nomcom participant will be has a lot to do with the individual candidate.   
 And large companies do seem to tend to snap up long-time influential IETF 
 participants, so indeed it is likely that over time IETF knowledge will tend 
 to concentrate in one large company or another.
 
 That being the case, the current two-person rule could as easily be argued to 
 be damaging to the process as beneficial to it.   I'm not making a claim 
 either way, but I think that absent statistically valid data, this discussion 
 is completely theoretical.

Dear Ted,

In my experience, some projects have been thwarted through the actions of a few 
companies.  The direction taken ended up being doomed, which may have been the 
ultimate goal, and that potentially represents a real fairness issue, IMHO.

Regards,
Douglas Otis  



Re: Final Announcement of Qualified Volunteers

2013-07-09 Thread Douglas Otis

On Jul 9, 2013, at 3:58 PM, Spencer Dawkins spencerdawkins.i...@gmail.com 
wrote:

 On 7/9/2013 8:59 AM, Scott Brim wrote:
 The sample is better at 140 if individuals represent themselves, but not if 
 they are swayed by their organizational affiliation, and organization is now 
 a significant factor in what we can expect from volunteers -- not all, but 
 even some of those from organizations where the volunteers are long time 
 participants. I support this idea. I think the gain is greater than the 
 loss, and it even fosters diversity. 
 
 I don't have a lot of time to chat about this, but
 
 - I agree with Scott that it matters what voting members are guided by 
 (organization, personal experience, intuition, flipping coins ...)
 - I suspect that it's not possible to predict what any 10 voting members 
 chosen at random will be guided by
 - I'm not sure we can even know what the 10 voting members *were* guided by, 
 unless the behavior is so bad that the advisor freaks out or the chair tells 
 us in the plenary Nomcom report
 
 If people want to think about the Nomcom volunteer pool, it may be useful to 
 wonder about whether the perspective of voting members from more 
 organizations would help the Nomcom make better choices.
 
 Of course, I'm not sure we can predict that, either :-)

Dear Spencer,

Precisely because it is impossible to judge motivations, the only way to ensure 
fairness is to ensure broad participation.  It seems unfortunate that an 
organization dedicated to developing the Internet has not done more to ensure 
those participating remotely are not treated as second-class participants.  
Doing so should greatly increase the available talent.  The larger pool could 
be coupled with a meeting subscription to offset the loss of fees collected at 
meeting venues. 

To improve the experience, a flash-based netbook coupled with the projector and 
PA could bring remote access much closer to a face-to-face experience worthy of 
the fees requested.  Importantly, both face-to-face and remote participants 
should be able to obtain equal roles.  If the Internet is down, so is the 
meeting.  That may make venue selection more dependent upon the quality of 
Internet availability.

Imagine real time translations made possible actually enhancing regional 
understanding.

Regards,
Douglas Otis

Re: Doug Engelbart

2013-07-03 Thread Douglas Otis

On Jul 3, 2013, at 12:10 PM, Bob Braden bra...@isi.edu wrote:

 
 Wikipedia defines genius as a person who displays exceptional intellectual 
 ability, creativity, or originality
 All three applied to Doug Engelbart. He belongs up there with other creative  
 geniuses I have had the privilege of meeting during 50 years in computing -- 
 Al Perlis,   Dick Hamming, John McCarthy,and Don Knuth come immediately to 
 mind.


Dear Bob,

Here is a video of Doug Engelbart's 1968 demo of SRI's work.
http://vimeo.com/32381658

In the early seventies, most work used minicomputers where faults were 
diagnosed using the front key panel and bit-light displays.  I even had the 
misfortune of debugging early production of MITS Altair systems that were flaky 
by connecting front-panel LEDs directly to the data bus.  As a consultant in 
1985, I worked at Xerox PARC on the Xerox Star 6085, adding a cartridge tape 
drive.  They relied on the tape drive to distribute OS updates in a timely 
fashion because their 10 Mbit Ethernet offered less than 3 Mbit throughput 
across dozens of test systems.  By then, they were using Bill English's mouse 
design.  Having been relatively sheltered in the lab, this was my first 
encounter with a GUI.  I needed to access my test programs but was dumbfounded 
by a display that did nothing when you typed on the keyboard.  The secret was 
to drag the arrow icon over the terminal icon and click.  This finally provided 
access to a terminal display that responded to the keyboard.

Until then, my editing made use of multiple screens navigated with key 
combinations.  To this day, I don't think the GUI really offered improved 
productivity.  It was sexy, and you did not need to remember all those damn key 
sequences.  The systems at Xerox PARC made use of the SRI developments, which 
then spawned Windows and Macs, introducing personal computers to the masses.  
Keeping the masses safe has been an ongoing struggle requiring creative genius, 
often evidenced in algorithms rather than hardware.  The evolution of computers 
has been awe-inspiring, and Steve Jobs proved genius makes a difference in 
hardware as well.  As Isaac Newton said, "If I have seen further it is by 
standing on the shoulders of giants."

Regards,
Douglas Otis



Re: Is the IETF is an international organization? (was: IETF Diversity)

2013-06-19 Thread Douglas Otis

On Jun 19, 2013, at 12:07 PM, SM s...@resistor.net wrote:

 Hi Aaron,
 At 11:40 19-06-2013, Aaron Yi DING wrote:
 Relating to the statement above(I assume Phillip is addressing the US 
 Academia), not quite sure are we still discussing the same topic?
 sorry, I am bit confused ..  since IETF is an international organization.
 
 I changed the subject line as I am as confused as to whether the IETF is an 
 international organization.
 
 There was a mention of First the Civil Rights act, then Selma...  ;).  I 
 assume that the act is an Act for the United States of America.  Harvard was 
 also mentioned.  I did a quick search and I found out that Harvard 
 University is an American private Ivy League research university.

Dear SM,

Are new ideas embraced without any prior geographic endorsement? While this 
seems to be the case, organizations with greater resources, often from various 
regions, will steer development.  

There be dragons in speculating about motivations in order to understand 
declared rules, stated justifications, or even what consensus really means.  
Even with the best of intentions, it is very difficult to have meaningful 
discussions about motivations. 

With respect to privacy, organizations both sell and purchase profile 
information containing individual preferences and contact information.  There 
are also organizations that attempt to offer selective relationships, often via 
a social network.  In deciding what is important to protect, the identities of 
those initiating transactions and of those receiving them are at odds. 

Even a statement that "females are more sensitive about security than men with 
respect to technology in the home" can be viewed as either a real insight or a 
sexist view.  By having gender diversity, questioning the underlying motivations 
can be avoided.  It seems the same can be said of those trading profiles and 
those offering protection from profilers.  Motivation plays a critical role in 
steering development.  It is just not something easily discussed within an 
international organization.

Regards,
Douglas Otis 

   

Re: Review of: draft-otis-dkim-harmful

2013-06-17 Thread Douglas Otis

On Jun 4, 2013, at 7:16 PM, Sam Hartman hartmans-i...@mit.edu wrote:
 So, I'd like to encourage Doug to refine his work, fix errors of
 precision, but to say I think this is worth writing down.

Dear Sam,

Thank you for your interest.  I have updated the draft and, as requested by 
Dave Crocker, included references to prior statements by Dave Crocker and Barry 
Leiba made public subsequent to the conclusion of the WG DKIM specification in 
response to comments about the phishing threat DKIM permits.  In reviewing some 
of Dave Crocker's responses, it appears the differences between "validated the 
SDID" and "authenticated the SDID" could use some clarification, since this is 
awkwardly described in RFC 6376 section 6.3.  

Quoting the abstract of RFC 5863, co-authored by Dave Crocker: "DKIM's 
authentication of email identity can assist in the global control of spam and 
phishing.  This document provides implementation, deployment, operational, 
and migration considerations for DKIM."

Section 5.4, "Inbound Mail Filtering", of RFC 5863 states: 
,---
   DKIM is frequently employed in a mail filtering strategy to avoid
   performing content analysis on email originating from trusted
   sources.  Messages that carry a valid DKIM signature from a trusted
   source can be whitelisted, avoiding the need to perform computation
   and hence energy-intensive content analysis to determine the
   disposition of the message.
'---
This is exactly how DKIM is being used and why DKIM is harmful!
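As a concrete sketch of that fast-path strategy (illustrative Python only; the 
trusted-domain set and the reliance on an upstream Authentication-Results field 
are hypothetical, not any particular receiver's implementation):

import email
import re

TRUSTED_SIGNERS = {"bigmailer.example", "bank.example"}   # hypothetical whitelist

def fast_path_accept(raw_message: bytes) -> bool:
    msg = email.message_from_bytes(raw_message)
    for field in msg.get_all("Authentication-Results") or []:
        m = re.search(r"dkim=pass\b.*?header\.d=([\w.-]+)", field)
        if m and m.group(1).lower() in TRUSTED_SIGNERS:
            return True    # trusted signer: skip content analysis entirely
    return False           # otherwise fall through to full content analysis

A prepended, unsigned From header field rides along with exactly this kind of 
fast-path acceptance, which is the harm being described.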

Additional information is being acquired, but will not alter conclusions 
reached.
http://tools.ietf.org/html/draft-otis-dkim-harmful-03

Regards,
Douglas Otis

 



Re: Review of: draft-otis-dkim-harmful

2013-06-09 Thread Douglas Otis

On Jun 4, 2013, at 9:13 AM, Murray S. Kucherawy m...@blackops.org wrote:

 On Tue, Jun 4, 2013 at 4:08 AM, Douglas Otis doug.mtv...@gmail.com wrote: 
 In its current form, DKIM simply attaches a domain name to an unseen message 
 fragment, not to a message.  The ease with which the From header field, the only 
 assuredly visible fragment of the message signed by the domain, can be forged 
 makes it impossible for appropriate handling to be applied or likely harm 
 prevented.
 
 
 There are existence proofs that contradict this claim.  They have been 
 brought to your attention in the past.

Thank you for your response.  Could I trouble you for a reference to the proofs 
or for you to expand on what you specifically mean?  The draft 
otis-dkim-harmful addendum captured actual DKIM From header field spoofing 
delivered to the inbox at several major providers.

 It appears you're continuing to assign semantics to DKIM signatures that 
 simply aren't there.  I don't know what else can be done to clarify this.

The semantics of "d=domain" and "dkim=pass" appear to be at the root of the 
problem.  What other semantics are you suggesting?

 Procedurally speaking, what path do you anticipate your draft following?

To require messages with invalidly repeated header fields to not return a 
"pass" for DKIM signature validation.

I apologize if I missed your response to a private query.   I hope to post an 
update shortly covering all expressed concerns.  

Regards,
Douglas Otis






Re: Review of: draft-otis-dkim-harmful

2013-06-04 Thread Douglas Otis
 
to ensure that the license itself is valid.  In the other, you can simply write 
your name on the top of the license, and it becomes valid.

Again: we are objecting to the absolute ease of forgery facilitated by DKIM.  

 The FROM header field is the Author identifier in section 11.1 of
   [I-D.kucherawy-dmarc-base].  The DMARC specification offers normative
   language that a message SHOULD be rejected when multiple FROM header
   fields are detected.  This requirement would not be necessary or
   impose protocol layer violations if DKIM did not offer valid
   signature results when repeated header fields violate [RFC5322].
 
 It's good to see this text refer to a layer violation, since the text is one.
 
 Discussion of DMARC is entirely outside of the scope of DKIM, unless use of 
 DMARC uncovers some technical flaw in DKIM.  It hasn't done that so far and 
 the text in the draft doesn't offer any.
 
 At the least, the draft appears to be claiming that DKIM validates the author 
 (rfc5322.From) field.  It doesn't do that validation and it doesn't purport 
 to.
 
 It appears the authors have some concerns about the way DMARC uses DKIM.  
 That well might be a worthy discussion... about DMARC.  It actually has 
 nothing to do with the DKIM signing specification.

This assertion is simply wrong.  

 Trust established by a signing domain is being exploited to mislead
   recipients about who authored a message.
 
 The draft continues to make broad, onerous claims like this, but provides no 
 documentation to indicate that the DKIM signing specification is flawed in 
 the function it is performing:  attaching a validated domain name to a 
 message.

DKIM does not, in its current form, attach a validated domain name to a 
message.  By adding one line, stating that a signature MUST NOT validate a 
message with multiple From header fields, DKIM would attach a validated domain 
name to a message.

In its current form, DKIM simply attaches a domain name to an unseen message 
fragment, not to a message.  The ease with which the From header field, the only 
assuredly visible fragment of the message signed by the domain, can be forged 
makes it impossible for appropriate handling to be applied or likely harm 
prevented.
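As a hypothetical illustration (domains invented), an attacker can prepend a 
second From header field to a message already carrying a valid signature:

  From: "Accounts" <security@bank.example>     <- prepended by the attacker, unsigned
  DKIM-Signature: v=1; a=rsa-sha256; d=signer.example; s=sel; h=from:to:subject; ...
  From: list@signer.example                    <- the instance covered by the signature

Because RFC 6376 selects signed header fields from the bottom of the header 
block upward, an h= tag listing "from" once still verifies, while most MUAs 
display the attacker's prepended From line.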

Thank you for your review.  We'll take your input and review how this draft can 
be better clarified.

Regards,
Douglas Otis

Re: Review of: draft-otis-dkim-harmful

2013-06-04 Thread Douglas Otis

On Jun 4, 2013, at 3:08 PM, Barry Leiba barryle...@computer.org wrote:

  The draft continues to make broad, onerous claims like this, but provides 
  no documentation to indicate that the DKIM signing specification is flawed 
  in the function it is performing:  attaching a validated domain name to a 
  message.
 
 DKIM does not, in its current form, attach a validated domain name to a 
 message.  By adding one line, stating that a signature MUST NOT validate a 
 message with multiple From header fields, DKIM would attach a validated domain 
 name to a message.

Dear Barry,

Thank you for your response.

 Here's the part of this I don't understand:
 A DKIM signature does two things.  It *does* attach a validated domain name 
 (the domain in the d= tag).  And it tells the verifier what parts of the 
 message are covered by the signature (h= and l= tags).  There is no claim in 
 DKIM that the d= domain has any relation to the RFC 5322 From.  But the h= 
  tag does tell you how many From header fields are covered by the signature.

Of course it is incorrect for a DKIM signature to be valid when a message has 
multiple From header fields.  DKIM requires AT LEAST the From header field to 
be signed.  Every other part of the message is optional.

DKIM was intended not to require ANY change of other mail components.  None.  

When the DKIM signature is trusted and changes how the message is handled, it 
would be wrong to suggest special consideration is then given to other message 
fragments.  In addition, recipients will not see the signature header field, nor 
should they be expected to understand what it contains.  They will, however, see 
and understand the From header field. 

Of course, "dkim=pass" is placed in an Authentication-Results header field, 
where many suggest this indicates the message has been authenticated!

 Any verifier that wants to consider a message suspicious if the message 
 contains more From fields than are covered by the signature can do so, and 
 the DKIM spec does describe this situation.

DKIM does NOT score messages.  Either the signature is valid or not.  The spec 
wrongly justifies allowing invalidly repeated header fields to result in a DKIM 
signature verified as valid. 

 You would like the spec to REQUIRE that a message be considered suspicious 
 under those circumstances.

No. Just indicate the signature is NOT valid.  This is the only sure way to 
ensure trust is not misapplied and does not cause harm.

  You made your case for this at least twice to the working group and at least 
 once more to the IETF community during Last Call of the draft that became RFC 
 6376.  Your opinion wasn't agreed with: you were in the rough.  You're now 
 bringing it up a fourth time (at least), and you still appear to be in the 
 rough.   The decision was to allow the verifier to decide how to handle this.

You and Dave Crocker made assurances this issue would not be abused.  It is 
being abused, and NO other protocol layer ensures message structures are valid. 
None.  It was negligent for DKIM to ignore occurrences of highly deceptive, 
invalidly repeated header fields as it walks up and down the header field 
stack.  It is also wrong to suggest some other protocol layer handles this 
checking.  Such a suggestion represents a waste of resources, since ONLY DKIM 
determines signature validity, and that determination MUST consider invalidly 
repeated header fields.  It also appears most even expect DKIM signature 
validation to check message structure, but this ONLY happens by double listing 
singleton header fields in the signed header list.  MOST domains don't bother 
with this ugly hack, especially larger domains where checking message structure 
is critical. 
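The check being asked of verifiers, and the over-signing workaround, can be 
sketched as follows (an illustrative Python fragment, not taken from RFC 6376 
or from any DKIM library):

SINGLETON_FIELDS = {"from", "to", "subject", "date", "message-id"}

def invalidly_repeated(header_names):
    # header_names: the header field names as they appear, top to bottom
    counts = {}
    for name in header_names:
        key = name.lower()
        counts[key] = counts.get(key, 0) + 1
    return [f for f in SINGLETON_FIELDS if counts.get(f, 0) > 1]

# A verifier applying the requested rule would do something like:
#   if invalidly_repeated(names): result = "permerror"   # never "pass"
# Today a signer can only defend itself by over-signing, i.e. listing "from"
# one extra time in the h= tag so that any later-added From field breaks the
# signature; that is the ugly hack referred to above.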

 Being in the rough doesn't make you wrong.  But DKIM isn't wrong either, and 
 at some point you have to accept that you're standing alone, and accept the 
 consensus.

Putting people at risk in some race to obtain Standard status cannot be 
justified.  Getting this right is far, far more important.  Allowing this to 
become a standard will make specification modification even more difficult.

Regards,
Douglas Otis

  

Re: [IETF] Issues in wider geographic participation

2013-05-30 Thread Douglas Otis

On May 30, 2013, at 7:08 PM, Melinda Shore melinda.sh...@gmail.com wrote:

 On 5/30/13 4:37 PM, John C Klensin wrote:
 ultimately call the IETF's legitimacy and long-term future into
 question.  As you suggest, we may have good vendor participation
 but the operators are ultimately the folks who pay the vendor's
 bills.
 
 Here in Alaska was the first time I'd worked in an environment
 that had technologists at a considerably less than elite skill
 level, and I'd previously had no idea the extent to which
 average operators/data centers rely on vendors (worse: VARs
 and consultants) to solve their technical problems.  The only
 time I'd seen someone from an Alaskan operator participate in
 anything to do with the IETF was when one person voted on
 the transitional address space allocation.  I think Warren is
 correct to identify this as an issue with operator participation.
 
 Perhaps we should be thinking about some alternative to
 engaging operators by trying to get them to schlep to meetings.
 Something along the lines of a liaison process or creating
 a pipeline between us and NOGs.

Dear Melinda,

Perhaps something else to consider is that many installations operate at 
minimal compliance levels even within advanced regions.  The IETF is blessed 
with many very smart people (at least from my perspective) who also seem overly 
optimistic about the impact of non-normative language on outcomes.  Specifications 
provide better outcomes when function is ensured at minimal levels.  In other 
words, it is better not to make assumptions.

Regards,
Douglas Otis  




Re: Review of: draft-otis-dkim-harmful

2013-05-14 Thread Douglas Otis

On May 12, 2013, at 9:59 PM, Dave Crocker d...@dcrocker.net wrote:

Dear Dave,

Thank you for your thoughtful review; it was most helpful.  I have updated the 
draft in hopes of adding greater clarity and to address your concerns. 
The new information, not available to the WG at the time, is how the DKIM 
specification would likely be implemented despite the precautions given, and 
the level of growing abuse being received.

http://tools.ietf.org/html/draft-otis-dkim-harmful-01

Best Regards,
Douglas Otis

DKIM is Harmful as Specified

2013-05-12 Thread Douglas Otis
Dear ietf,

To better clarify elements within otis-ipv6-email-authent draft, a separate I-D 
is focused primarily on DKIM.

http://tools.ietf.org/html/draft-otis-dkim-harmful-00

Regards,
Douglas Otis

Re: [ietf-dkim] Last Call: Change the status of DKIM (RFC 6376) to Internet Standard

2013-05-12 Thread Douglas Otis
Dear IETF,

Sorry for repeating this message, but the proper subject line had not been used.

http://tools.ietf.org/html/draft-otis-dkim-harmful-00
explains why this document should not be supported to proceed as currently 
defined.

Feedback on this I-D is welcome.

Regards,
Douglas Otis





Update of draft-otis-ipv6-email-authent

2013-05-09 Thread Douglas Otis
Dear ietf, spfbis, and repute,

Until an identifier is linked with the source of an exchange by way of 
authentication, it must not be trusted as offering valid reputation input. For 
example, a valid DKIM signature lacks important context.  A valid DKIM 
signature does not depend upon actual sources or intended recipients, both of 
which are essential elements in determining unsolicited messages or even 
whether the message is valid.  SPF only offers authorization of Non-Delivery 
Notifications, and cannot be considered to represent actual sources.

Three different authors attempted to repair DKIM's header field issue within 
the WG process, but the repairs were rejected.  As with the DKIM WG, being right 
about likely outcomes will not always prevail in offering safe and dependable 
protocols that provide a well understood service.

The initial intent for DKIM was to help prevent spoofing, but that effort ran 
astray with desires to extend DKIM beyond its capabilities.  Flexibility 
allowing DKIM to be relayed removes typical rate limitations protecting a 
domain's reputation from occasional lapses or from messages easily taken out of 
context.

Since a valid DKIM signature does not preclude prepended header fields, this 
raises important questions.  When such spoofing does occur, which domain's 
reputation should be affected?  A domain "too big to block" that does not add the 
nonexistent header field hack?  A domain being spoofed to improve statistical 
filtering?  It is clear those actually responsible for abusive exchanges may be 
ignored by these strategies.

Better solutions at enforcing security and offering fair reputations are 
readily available.

http://tools.ietf.org/html/draft-otis-ipv6-email-authent-01

DKIM cannot be used to establish reputation without a link to those 
responsible for its transmission.  Neither DKIM nor SPF establishes those 
actually responsible for the exchange.  Today, unfortunately, the only thing 
that can be trusted in email is the IP address of the host connecting to the 
mail server, and even that can be subverted with BGP injection.

Regards,
Douglas Otis

Effects on DNS can be severe

2013-05-03 Thread Douglas Otis
Dear ietf and dnsext,

I apologize for posting this ahead of the WG last call.

Over many years of attempting to change the course of the SPF process, this 
effort appears to have been futile.
It seems many even feel the present spfbis document represents current 
practices.  It does not, from the perspective of macros.
I have written an I-D that I fully expect SPF proponents will denounce, and so I 
have left that WG alone.  

Here is a draft written in hopes of placing these concerns into a broader 
scope--
http://tools.ietf.org/html/draft-otis-ipv6-email-authent-00

Two references in this draft did not carry over in the same manner as in the 
tcl script.
Until remedied, here are the links missing in this I-D:

[I-D.otis-spf-dos-exploit]
http://tools.ietf.org/html/draft-otis-spf-dos-exploit-01

[v6-BGP-Rpts]
http://bgp.potaroo.net/v6/as6447/

SPF can pose serious threats for which, once confronted, few solutions are 
available.  I have been able to convince some of the larger providers of this 
concern, who in return offered assurances that the macro extensions in their SPF 
libraries were removed, and in doing so they have not seen any problems.

This is a serious effort at addressing a security concern, please read this 
draft from that perspective.

Regards,
Douglas Otis



Re: Effects on DNS can be severe

2013-05-03 Thread Douglas Otis

On May 3, 2013, at 12:21 PM, Scott Kitterman sc...@kitterman.com wrote:

 On Friday, May 03, 2013 12:04:53 PM Douglas Otis wrote:
 ...
 Over many years at attempting to change the course of the SPF process, this
 effort appears to have been futile.
 ...
 
 It does seem a bit odd for you to claim you're being ignored when the largest 
 change in SPF processing limits contained in 4408bis was your suggestion.  An 
 alternate interpretation to consider is that the working group fully 
 considered your inputs and incorporated those that were appropriate 
 technically and in scope for the charter.

Dear Scott,


This was not directly part of the IETF process, as my input there was ignored.

As I recall, removal of unlimited recursion occurred after a presentation made 
in Boston to the Open Group.

As with unlimited recursion, the need for the macro functions is also 
negligible while they pose real risks.  This is a serious security concern still 
needing to be addressed.

Regards,
Douglas Otis



Re: Effects on DNS can be severe

2013-05-03 Thread Douglas Otis

On May 3, 2013, at 1:00 PM, Scott Kitterman sc...@kitterman.com wrote:

 On Friday, May 03, 2013 12:46:52 PM Douglas Otis wrote:
 On May 3, 2013, at 12:21 PM, Scott Kitterman sc...@kitterman.com wrote:
 On Friday, May 03, 2013 12:04:53 PM Douglas Otis wrote:
 ...
 
 Over many years at attempting to change the course of the SPF process,
 this
 effort appears to have been futile.
 
 ...
 
 It does seem a bit odd for you to claim you're being ignored when the
 largest change in SPF processing limits contained in 4408bis was your
 suggestion.  An alternate interpretation to consider is that the working
 group fully considered your inputs and incorporated those that were
 appropriate technically and in scope for the charter.
 
 Dear Scott,
 
 
 This was not directly part of the IETF process, as my input there was
 ignored.
 
 As I recall, removal of unlimited recursion occurred after a presentation
 made in Boston to the Open Group.
 
 I assume you are referring to some of the pre-IETF activities for SPF.  
 Recursion based processing limits don't appear in RFC 4408 (and also not in 
 4408bis).
 
 As with unlimited recursion, the need for the macro functions are also
 negligible while posing real risks. This is a serious security concern
 still needing to be addressed.
 
 This was discussed in the working group and tracked in the WG issue tracker:
 
 http://trac.tools.ietf.org/wg/spfbis/trac/ticket/24
 
 The change mentioned in the ticket is a direct result of your input to spfbis 
 (referenced in the ticket).
 
 As far as I can tell, this is just a case of the working group came to a 
 different conclusion, so I'll whine about it on the main list.

Dear Scott,

While the recommendation may have helped, there is a trend of large websites 
publishing synthetic domains as an alternative to web cookies.  This overcomes 
the suggested fix.  A similar issue applies to SPF reverse namespace macro 
expansion consuming connection resources, especially in regard to IPv6.  In the 
end, the domain is still not authenticated, and the number of transactions 
remains excessive and inhibits meaningful mitigation.  Nothing justifies active 
content in DNS resource records distributed with email.  Macros are not needed 
and are seldom if ever used in this regard.  Simply put, SPF macros are not safe.  
DKIM as specified is not safe.  Open IPv6 email cannot be defended.  The IETF 
needs to seek better and fairer remedies.
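As a hypothetical illustration of the cache-busting concern (record and domain 
invented), a macro-based exists: mechanism forces a distinct DNS lookup for 
every connecting address:

# Hypothetical published record: "v=spf1 exists:%{ir}.in-addr._spf.example.com -all"
def expand_exists_macro(client_ip: str) -> str:
    # %{ir} is the connecting IPv4 address with its octets reversed
    # (RFC 4408/7208 macro syntax), so every source address yields a
    # unique query name that no resolver cache can help with.
    reversed_ip = ".".join(reversed(client_ip.split(".")))
    return reversed_ip + ".in-addr._spf.example.com"

print(expand_exists_macro("192.0.2.1"))    # 1.2.0.192.in-addr._spf.example.com
print(expand_exists_macro("192.0.2.200"))  # 200.2.0.192.in-addr._spf.example.com

With IPv6 the expanded namespace is effectively unbounded, which is the point 
being made above.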

Regards,
Douglas Otis







Re: IETF Diversity Question on Berlin Registration?

2013-04-15 Thread Douglas Otis

On Apr 15, 2013, at 6:50 AM, Ted Lemon ted.le...@nominum.com wrote:

 On Apr 15, 2013, at 4:44 AM, t.p. daedu...@btconnect.com wrote:
 So perhaps, to reduce the bias, e.g. towards western white, any system
 of choosing should give preference to the views of those who do not
 attend IETF meetings, for whom judgement is based solely on the
 contributions the person in question is seen to make - via the mailing
 lists - towards open standards and running code.
 
 We could also all be assigned masks, vocoders and randomly-generated numbers 
 at the beginning of each IETF, and go around wearing burlap robes.
 
 The problem with your solution is that at the moment it's actually pretty 
 hard to participate in IETF without going to meetings.   It's a source of 
 some frustration to me that despite having basically invented the Internet, 
 the IETF still does business as if we were living in the pre-Internet era.   
 Three face-to-face meetings a year is a lot of carbon, and I think it also 
 creates barriers for participation that are only readily surpassed by people 
 who for whatever reason happen to have a great deal of advantage.   The 
 degree of good fortune that allows me to participate in IETF as I do is 
 breathtaking in its improbability.

Dear Ted,

Well said.  This speaks directly to what is limiting diversity: costs the 
Internet should remedy.  Although the resources necessary to host meetings online 
are substantial, they pale in comparison with physical presence requirements.   
I would have preferred if more females were in my all-male engineering classes. 
Lowering cost should reduce average participant age, which should offer better 
gender/ethnic balance. 

IMHO, diversity is more sensitive to a predominance of those wanting to 
generate ad revenue with a preference for dancing fruit, at the expense of 
security.  This has introduced iframes, PDFs with JavaScript, Java, and many 
other types of unauthenticated active and proprietary content that remain major 
security issues.

How can the IETF increase the preference for clean, simple, open, and secure 
working code?

Perhaps registration forms could ask about roles as related to marketing, 
engineering, management, or support.  From this, perhaps needed outreach can be 
better determined.

Regards,
Douglas Otis







Re: IETF Diversity Question on Berlin Registration?

2013-04-12 Thread Douglas Otis
Dear Ray,

Outcomes, good or bad, are often influenced by groups sharing a common 
interest.  Important questions should attempt to measure whether these 
interests reflect those of the larger Internet communities. 

No gender, sexual orientation, ethnic, religious, or political background should 
be excluded, but attempting to ascertain the breadth and distribution of 
interests based on questions used to measure some possible selection bias, based 
on distributions of aspects unrelated to the endeavors of the IETF, seems 
unlikely to improve outcomes.

When faced with some distraction from the endeavors at hand, a friend of mine 
would exclaim "squirrel!" 

Regards,
Douglas Otis

On Apr 11, 2013, at 8:11 AM, Ray Pelletier rpellet...@isoc.org wrote:

 All
 
 The IETF is concerned about diversity.  As good engineers, we would like
 to attempt to measure diversity while working on addressing and increasing
 it.  To that end, we are considering adding some possibly sensitive
 questions to the registration process, for example, gender.  Of course,
 they need not be answered and would be clearly labeled as optional.
 
 The IAOC would like to hear from the community on this proposal.  It plans to 
 make a decision on its 18 April call in order to make the changes in time for 
 the 
 Berlin registration and will consider all input received by 17 April.  
 
 Thanks for your feedback.
 
 Ray
 IAD
 



Re: Sufficient email authentication requirements for IPv6

2013-04-10 Thread Douglas Otis

On Apr 10, 2013, at 6:26 AM, Keith Moore mo...@network-heretics.com wrote:

 On 04/09/2013 08:07 PM, John Levine wrote:
 Quoting Nathaniel Borenstein  [1]:
 
   One man's blacklist is another's denial-of-service attack.
 
 Email reputation services have a bad reputation.
 They have a good enough reputation that every non-trivial mail system
 in the world uses them.  They're not all the same, and a Darwinian
 process has caused the best run ones to be the most widely used.
 
 There seems to be a faction that feel that 15 years ago someone once
 blacklisted them and caused them some inconvenience, therefore all
 DNSBLs suck forever.  I could say similar things about buggy PC
 implementations of TCP/IP, but I think a few things have changed since
 then, in both cases.
 
 There's an inherent problem with letting 3rd parties affect email traffic, 
 especially when there's no way to hold those 3rd parties accountable for 
 negligence or malice.

Dear Keith,

I share your ideals.  Being able to authenticate domains SOURCING emails brings 
self-administration of sources much closer to a practical reality.  As stated 
in the PDF paper "Domains as a Basis for Managing Traffic", one hundred 
thousand domains control 90% of Internet traffic out of approximately 100 
million domains active each month.  The top 150 domains control 50%, and the 
top 2,500 control 75% of the traffic.  This level of consolidation permits 
effective fast-path white-listing, where then dealing with the remainder is 
less of a burden.

Let me assure you that a third party internationally offering services aimed at 
mitigating abuse, either in the form of unwanted inundations of commercial 
solicitations or in affording the resources needed for protections against 
malicious code, is not above the law.  We have endured many lawsuits brought by 
those wishing to profit from their various endeavors against the desires of our 
customers.  Truth is one of the first victims in the abatement process.  As 
such, evidence of abuse must be incontrovertible.  Authorization does not imply 
culpability any more than does some signed message content independent of the 
intended recipient or the actual source.  Evidence must not rely on statistical 
likelihoods.  The stakes are far too high. 

Regards,
Douglas Otis







Re: Sufficient email authentication requirements for IPv6

2013-04-09 Thread Douglas Otis

On Apr 8, 2013, at 10:27 PM, joel jaeggli joe...@bogus.com wrote:

 On 4/8/13 9:18 PM, Douglas Otis wrote:
 
 On Mar 31, 2013, at 1:23 AM, Doug Barton do...@dougbarton.us 
 mailto:do...@dougbarton.us wrote:
 
 On 03/30/2013 11:26 PM, Christian Huitema wrote:
 IPv6 makes publishing IP address reputations impractical.  Since IP 
 address reputation has been a primary method for identifying abusive 
 sources with IPv4, imposing ineffective and flaky  replacement 
 strategies has an effect of deterring IPv6 use.
 
 In practice, the /64 prefix of the IPv6 address has very much the same 
 administrative properties as the /32 value of the IPv4 address. It 
 should be fairly straightforward to update a reputation system to manage 
 the /64 prefixes of IPv6. This seems somewhat more practical than trying 
 to change the behavior of mail agent if their connectivity happens to use 
 IPv6.
 
 That only works insofar as the provider does not follow the standard 
 recommendation to issue a /48. If they do, the abuser has 65k /64s to 
 operate in.
 
 What's needed is a little more intelligence about how the networks which 
 the IPv6 addresses are located are structured. Similar to the way that 
 reputation lists nowadays will black list a whole /24 if 1 or a few 
 addresses within it send spam.
 
 The problems are not insoluble, they're just different, and arguably more 
 complex in v6. It's also likely that in the end more work on reputation 
 lists will provide less benefit than it did in the v4 world. But that's the 
 world we live in now.
 
 Dear Doug,
 
 Why aggregate into groups of 64k prefixes?  After all, this still does not 
 offer a practical way to ascertain a granularity that isolates different 
 entities at /64 or /48.  It is not possible to ascertain these boundaries 
 even at a single prefix.  There is 37k BGP entries offering IPv6 
 connectivity.  Why not hold each announcement accountable and make 
 consolidated reputation a problem ISPs must handle?  Of course, such an 
 approach would carry an inordinate level of support and litigation costs due 
 to inadvertent collateral blocking.  Such consolidation would be as 
 impractical as would an arbitrary consolidation at /48.
 
 Plenty of people use IP to ASN mappings as part of their input for 
 reputation today.

Dear Joel,

Unfortunately, ISPs are bad at responding to email abuse complaints.  There are 
exceptions where reputation needs to be escalated to the ASN, as was the case 
in Brazil, which then involved litigation.  You're welcome to use such mappings, 
but operations at that level will not scale and might lead to balkanization.

With respect to IPv6 granularity, there are only ~7k ASNs.  As IPv6 adoption 
increases, this should approach 37k.  In addition, there are more /32 prefixes 
than /48s.  Each /32 represents a span greater than the entire IPv4 Internet.  
The network covered by each prefix represents an address span IPv4 squared in 
size.  The sparse nature of abuse and the size of IPv6 prefix space make 
collecting evidence and distributing detected abuse by IP address or prefix 
both expensive and slow, where any IP address query mechanism is likely to 
result in self-inflicted DDoS. 

If email offered authentication of the sourcing domain or that of a domain 
certificate, then reputation could be fairly applied and easily distributed. 
This ability is essential for IPv6.

Regards,
Douglas Otis



Re: Sufficient email authentication requirements for IPv6

2013-04-09 Thread Douglas Otis

On Apr 9, 2013, at 11:28 AM, SM s...@resistor.net wrote:

 Hi Keith,
 At 09:56 09-04-2013, Keith Moore wrote:
 You have it backwards.  Internet email has long been under DDoS attack from 
 email address reputation services.
 
 Quoting Nathaniel Borenstein  [1]:
 
  One man's blacklist is another's denial-of-service attack.
 
 Email reputation services have a bad reputation.  In some respect it is 
 similar to the email delivery problem where the smaller set of good people 
 are negatively affected because of the larger set of bad people.
 
 Regards,
 -sm
 
 P.S. You are the only participant who has been able to override the existing 
 consensus during a Last Call. :-)
 
 1. http://www.ietf.org/mail-archive/web/ietf/current/msg29826.html 

Dear SM,

In full agreement with Nathaniel.  Avoiding unfair collateral blocking is why 
source domain authentication, not authorization, is vital.

Regards,
Douglas Otis



Re: Sufficient email authentication requirements for IPv6

2013-04-08 Thread Douglas Otis

On Mar 31, 2013, at 1:23 AM, Doug Barton do...@dougbarton.us wrote:

 On 03/30/2013 11:26 PM, Christian Huitema wrote:
 IPv6 makes publishing IP address reputations impractical.  Since IP address 
 reputation has been a primary method for identifying abusive sources with 
 IPv4, imposing ineffective and flaky  replacement strategies has an effect 
 of deterring IPv6 use.
 
 In practice, the /64 prefix of the IPv6 address has very much the same 
 administrative properties as the /32 value of the IPv4 address. It should 
 be fairly straightforward to update a reputation system to manage the /64 
 prefixes of IPv6. This seems somewhat more practical than trying to change 
 the behavior of mail agent if their connectivity happens to use IPv6.
 
 That only works insofar as the provider does not follow the standard 
 recommendation to issue a /48. If they do, the abuser has 65k /64s to operate 
 in.
 
 What's needed is a little more intelligence about how the networks which the 
 IPv6 addresses are located are structured. Similar to the way that reputation 
 lists nowadays will black list a whole /24 if 1 or a few addresses within it 
 send spam.
 
 The problems are not insoluble, they're just different, and arguably more 
 complex in v6. It's also likely that in the end more work on reputation lists 
 will provide less benefit than it did in the v4 world. But that's the world 
 we live in now.

Dear Doug,

Why aggregate into groups of 64k prefixes?  After all, this still does not 
offer a practical way to ascertain a granularity that isolates different 
entities at /64 or /48.  It is not possible to ascertain these boundaries even 
at a single prefix.  There are 37k BGP entries offering IPv6 connectivity.  Why 
not hold each announcement accountable and make consolidated reputation a 
problem ISPs must handle?  Of course, such an approach would carry an 
inordinate level of support and litigation costs due to inadvertent collateral 
blocking.  Such consolidation would be as impractical as would an arbitrary 
consolidation at /48.  

Prior traffic is required to review reverse DNS PTR records, which is resource 
intensive due to unavoidable delays.  Our IPv4 reputation services will not 
block entire /24s based upon a few detected abusive sources.  CIDR listings 
grow only after abuse exceeds half.   Even this conservative approach is 
problematic in places like China.  There are 4 million /64 prefixes for every 
possible IPv4 address.  Taking an incremental CIDR blocking approach still 
involves keeping track of a prefix space 4 million times larger than the entire 
IPv4 address space, where it is generally understood sharing the same IP 
address carries a risk.  Are you really suggesting that sharing the same /48 
carries a similar risk?

The goal should be to avoid guesswork and uncertainty currently plaguing email.

v6 BGP announcement growth graph is published at: 
http://bgp.potaroo.net/v6/as2.0/

Regards,
Douglas Otis







Re: Sufficient email authentication requirements for IPv6

2013-04-02 Thread Douglas Otis

On Mar 30, 2013, at 11:26 PM, Christian Huitema huit...@microsoft.com wrote:

 IPv6 makes publishing IP address reputations impractical.  Since IP address 
 reputation has been a primary method for identifying abusive sources with 
 IPv4, imposing ineffective and flaky  replacement strategies has an effect 
 of deterring IPv6 use. 
 
 In practice, the /64 prefix of the IPv6 address has very much the same 
 administrative properties as the /32 value of the IPv4 address. It should 
 be fairly straightforward to update a reputation system to manage the /64 
 prefixes of IPv6. This seems somewhat more practical than trying to change 
 the behavior of mail agent if their connectivity happens to use IPv6.

Dear Christian,

The announced prefix space currently represents more than 4 million times the 
entire IPv4 address space.  This means the /64 prefix space cannot be 
considered comparable to the IPv4 address space.  Go to 
http://bgp.potaroo.net/v6/as2.0/ and look for "Total address space advertised 
(/64 equiv)".  The number of announced prefixes over /64s currently shows 
18,206,079,529,779,202.
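The arithmetic behind the "more than 4 million times" figure can be checked 
directly from that snapshot:

advertised_64s = 18_206_079_529_779_202   # /64-equivalent prefixes advertised (snapshot above)
ipv4_addresses = 2 ** 32                  # 4,294,967,296
print(advertised_64s / ipv4_addresses)    # roughly 4.24 million /64s per IPv4 address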

Much of the IPv4 address space has already had its reverse DNS PTR records 
traversed, scanning for hints about whether any specific address appears to 
represent dynamically assigned access.  This guesswork allows about 1/3 of the 
IPv4 space to be ignored by blocking it from sending public (port 25) SMTP 
messages.

Reverse DNS PTR records offer a costly means to differentiate residential and 
non-residential access when done on a realtime basis.  A significant benefit of 
a comprehensive reputation mapping of the entire IPv4 address space is that any 
reverse naming exceptions are incorporated into the reputation values, which 
also eliminates dependence on reverse DNS performance.

In IPv6, there cannot be any pre-mapping.  This places reverse PTR review at 
the mercy of the even more broken IPv6 reverse zone provisioning.  Any 
misconfiguration of the reverse namespace, which is common for IPv4 from 
residential systems, greatly increases the resources consumed by the growing 
proportion of sessions emitted by compromised systems.  Few expect reverse PTRs 
and hostnames to match, only that they offer names not hinting at being for a 
dynamic assignment.  Greatly increased delays caused by DNS misconfiguration, 
along with a need to handle a larger number of sessions, will make testing 
reverse PTR records highly resource prohibitive and problematic for IPv6.

In the end, reverse PTRs can be assigned any name and thus cannot serve as a 
basis to assess accountability.  Once a conclusion is reached that only 
AUTHENTICATED initiating domains offer a means to fairly establish a basis for 
acceptance, use of reverse PTR records becomes far, far less attractive.  The 
ability to authenticate forward domains initiating messages improves security 
and is better suited for a future of more mobile and dynamic infrastructure.

Many email domains will find themselves obligated to authorize IPv4 outbound 
servers using SPF.  Locating authorization via Return-Path mail parameters 
reduces backscatter abuse at the cost of reduced delivery integrity.  However, 
this parameter's relationship over mail's entire path is too fragile to serve 
as a basis for acceptance.  Since DKIM allows any message to be relayed by 
design, it cannot offer a means to mitigate abuse when any marginal domain must 
be accepted, as for domains considered "too big to block".

In addition, problems related to DKIM header field spoofing, permitted when 
signatures are still considered valid, along with a growing range of dangerous 
email content that references compromised iframes, make responding to new 
threats a growing problem.  IPv6 pushes this problem further over the edge 
without the initiating domains having been authenticated.  IPv6 addresses 
cannot serve this function, though there is progress being made with respect to 
use of DANE and the like.

Regards,
Douglas Otis


Re: Sufficient email authentication requirements for IPv6

2013-03-30 Thread Douglas Otis
Dear Jason,

On Mar 30, 2013, at 7:57 AM, Livingood, Jason 
jason_living...@cable.comcast.com wrote:

 On 3/29/13 12:58 PM, John Levine jo...@taugh.com wrote:
 
 
 As a result, it is questionable whether any IPv6 address-based
 reputation system can be successful (at least those based on voluntary
 principles.)
 
 It can probably work for whitelisting well behaved senders, give or take
 the DNS cache busting issues of IPv6 per-message lookups.
 
 Since a bad guy can easily hop to a new IP for every message (offering
 interesting new frontiers in listwashing) I agree that it's a losing
 battle for blacklisting, other than blocking large ranges of hostile
 networks.
 
 Agree. The IP blacklisting that worked well for IPv4 is completely
 unsuited for IPv6 (I'd go as far as to say it is a complete failure, no
 matter if you look at different size prefixes or not).

Agreed.

 The only model that I personally can see working at the moment for IPv6 is
 a mix of domain-based reputation and whitelisting. I like domain-based
 better since it is managed by sending domains on a distributed basis.

Current domain-based strategies such as SPF offer fragile dependence on return 
path parameters that may incur a large number of transactions to resolve 
authorizations.  Use of DKIM must also consider that the signing domain controls 
neither actual sources, intended recipients, nor message relaying.

 Mail acceptance for IPv4 worked inclusively - receivers accept unless IP
 reputation or other factors failed. IMHO with IPv6 that model may need to
 be turned around to an exclusive one - so receivers will not accept mail
 unless certain factors are met (like domain-based authentication or the
 IPv6 address is on a whitelist). I'd expect MAAWG will continue to be a
 good place for mail ops folks to work through this stuff.

While SPF offered a fix for DSN backscatter, neither this scheme nor DKIM 
provides a suitable basis for domain reputation.  Neither authorization nor 
signed message content provides any direct evidence of abuse accountability.  

This permits the future of email to be left primarily in the hands of those 
having conflicts of interest.  For example, none of the current domain-based 
schemes offer a means to hold those paid to send bulk email accountable.  
Several would even be happy to see IPv6 email require IPv4 providers to relay 
IPv6 email.

Here is the link that illustrates the serious problem.
http://www.bungi.com/Dom-v6.pdf

And again, I call on the IETF to work on this problem.

Regards,
Douglas Otis








Re: Sufficient email authentication requirements for IPv6

2013-03-29 Thread Douglas Otis

On Mar 29, 2013, at 9:58 AM, John Levine jo...@taugh.com wrote:

 As a result, it is questionable whether any IPv6 address-based reputation 
 system can be successful (at least those based on voluntary principles.)
 
 It can probably work for whitelisting well behaved senders, give or take
 the DNS cache busting issues of IPv6 per-message lookups.
 
 Since a bad guy can easily hop to a new IP for every message (offering
 interesting new frontiers in listwashing) I agree that it's a losing
 battle for blacklisting, other than blocking large ranges of hostile
 networks.
 
 Fortunately, the IETF as a whole is not called upon to solve this
 problem right now.  People interested in mail reputation are welcome
 to drop by the spfbis WG and the discussions in appsarea about
 updating authentication and authentication logging RFCs.

Dear John,

The Internet is under a DDoS attack aimed specifically at an email address 
reputation service.  This affects everyone, especially the IETF.

Strategies not premised on low-overhead AUTHENTICATION are of little benefit.   
We can no longer continue business as usual.  I call upon the IETF to solve 
this problem.  It is within its charter.  It is within its capabilities.  
We cannot make everyone upgrade, but we can establish a path that has a chance 
of offering a solution.

Regards,
Douglas Otis








Sufficient email authentication requirements for IPv6

2013-03-28 Thread Douglas Otis
Dear IETF,

In response to various strategies to reject IPv6 email lacking either DKIM
or SPF, the non-negotiated approach suggests far greater review is needed.

Here is a paper illustrating problems with DKIM.
https://www.dropbox.com/sh/jh4z407q45qc8dd/MlcUTUFUf4/Domains%20as%20a%20basis%20for%20managing%20traffic.pdf

Rather than offering a means to negotiate, returning a 554 response is seen
as a way to coerce senders to try other MX records.  In
https://tools.ietf.org/html/rfc5321 this code does not clarify why a
connection has been rejected, but implies, in the case of a
connection-opening response, "No SMTP service here".

An alternative might be to use existing negotiation techniques for scalable
source authentication:

http://tools.ietf.org/html/rfc4954 offers 530 5.7.0 Authentication required
   This response SHOULD be returned by any command other than AUTH,
   EHLO, HELO, NOOP, RSET, or QUIT when server policy requires
   authentication in order to perform the requested action and
   authentication is not currently in force.
530 seems like a better response.
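The difference in what the two codes communicate can be seen in a hypothetical 
exchange (host names invented):

  S: 554 mx.example.net No SMTP service here
     (connection-opening rejection; the client can only guess why)

versus an explicit, negotiable requirement stated after the client identifies
itself:

  C: EHLO client.example.org
  S: 250-mx.example.net
  S: 250 AUTH PLAIN LOGIN
  C: MAIL FROM:<user@example.org>
  S: 530 5.7.0 Authentication required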

421 is far more likely to be understood as fallback for problematic
clients, but remembering anything about prior IPv6 clients is unworkable.

http://tools.ietf.org/html/rfc4954 offers a means to properly negotiate
enhanced requirements.  Since DKIM in its current form can not enhance
authentication to a level able to mitigate abuse, it does not justify
negotiation.  SPF is not about authentication.  SPF is an authorization
scheme.

Smarthost services for naive senders new to IPv6 could permit an easy
introduction to scalable authentication schemes like StartTLS.  Formalized
negotiations can solve an abuse problem by placing added burdens on senders
for a scalable scheme that should prove far more robust.  I can expand on
this if anyone is interested.

Regards,
Douglas Otis


Re: Sufficient email authentication requirements for IPv6

2013-03-28 Thread Douglas Otis
Hello Hector,

On Mar 28, 2013, at 3:53 PM, Hector Santos hsan...@isdg.net wrote:

 Hi Doug,
 
 On 3/28/2013 2:13 PM, Douglas Otis wrote:
 Dear IETF,
 
 In response to various strategies to reject IPv6 email lacking either DKIM
 or SPF, the non-negotiated approach suggests far greater review is needed.
 
 Whats the difference with IPv6 connections?  Should it matter? Does it matter?

IPv6 makes publishing IP address reputations impractical.  Since IP address 
reputation has been a primary method for identifying abusive sources with IPv4, 
imposing ineffective and flaky replacement strategies has an effect of 
deterring IPv6 use.

 Here is a paper illustrating problems with DKIM.
 https://www.dropbox.com/sh/jh4z407q45qc8dd/MlcUTUFUf4/Domains%20as%20a%20basis%20for%20managing%20traffic.pdf
 
 This requires a sign up to obtain/view. Sorry.

JavaScript disabled?  Here is a simpler alternative:
http://www.bungi.com/Dom-v6.pdf

 Rather than offering a means to negotiate, returning a 554 response is seen
 as a way to coerce senders to try other MX records.
 
 Don't follow. A 55x is a permanent rejection. Not a SMTP protocol instruction 
 to retry.

The 554 return code is to indicate that no SMTP service is at the host.  It is 
illogical to assume other MX records will offer different results, but that is 
the expectation required for their pseudo-authentication scheme to work. 

 An alternative might be to use existing negotiation techniques for scalable
 source authentication:
 
 http://tools.ietf.org/html/rfc4954 offers 530 5.7.0 Authentication required
This response SHOULD be returned by any command other than AUTH,
EHLO, HELO, NOOP, RSET, or QUIT when server policy requires
authentication in order to perform the requested action and
authentication is not currently in force.
 530 seems like a better response.
 
 It may be, but it may force a client to continue a AUTH sequence and thats 
 not possible if ESMTP is not in place or AUTH is not part of the ESMTP 
 options.

Improved authentication using a negotiation sequence offers an efficient and 
robust means for ensuring email delivery integrity.  DKIM cannot control 
abusive sources.  SPF does not offer source authentication for mitigation 
control either.   

 Sounds like there are apples and oranges being mixed up here.
 
 421 is far more likely to be understood as fallback for problematic
 clients, but remembering anything about prior IPv6 clients is unworkable.
 
 Don't follow.  SMTP clients are following a SMTP state machine:
 
   4xx means retry
   5xx means don't retry

Some providers are trying to make an exception for 554 to mean the host does 
not support SMTP (when DKIM or SPF was not previously seen from a specific 
client).  This is to imply a need to keep trying other hosts in other MX records 
after retrying a 421 code.   With IPv6, the same providers will also impose a 
flaky reverse DNS PTR record requirement as well.   For IPv4, these PTR records 
helped in the exclusion of residential access.   For IPv6, such mapping is 
impractical and represents a waste of resources.

 http://tools.ietf.org/html/rfc4954 offers a means to properly negotiate
 enhanced requirements.  Since DKIM in its current form can not enhance
 authentication to a level able to mitigate abuse, it does not justify
 negotiation.  SPF is not about authentication.  SPF is an authorization
 scheme.
 
 I can interpret SPF as an authentication protocol for IP::DOMAIN association 
 authentication assertion which allow for the policy-based domain defined 
 authorization for the sender domain to send mail.

Many domains are obligated to authorize outbound servers being shared by other 
domains.  Assumptions that authorization means authentication may prove 
damaging by holding the wrong domains accountable.   Nothing ensures the return 
path has been asserted by the return path domain.

   This is can be done at the SMTP level prior to the DATA state.  DKIM is a 
 Payload (RFC 822/2822/5322) protocol which would require the DATA state to be 
 reached first in order to apply any sort of dynamic SMTP online response 
 other than 250 accept.

Aren't SMTP Auth negotiations the right methodology?

 Smarthost services for naive senders new to IPv6 could permit an easy
 introduction to scalable authentication schemes like StartTLS.  Formalized
 negotiations can solve an abuse problem by placing added burdens on senders
 for a scalable scheme that should prove far more robust.  I can expand on
 this if anyone is interested.
 
 Having a hard time understanding how IPv6 has anything to do with DKIM or SPF 
 that would be different than IPv4.  Even then, why make the distinction?  
 Does it matter what port is used?  The public vs private port?Only with 
 private port can you raise the bar for client correction (SMTP compliancy) 
 over what is currently allowed for public port operations where there is 
 legacy relaxations to allow for anonymous local or hosted domain reception

Re: Last Call: draft-kucherawy-marf-source-ports-03.txt (Source Ports in ARF Reports) to Proposed Standard

2012-05-08 Thread Douglas Otis

On 5/7/12 11:23 PM, Murray S. Kucherawy wrote:

-Original Message-
From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On Behalf Of Scott 
Kitterman
Sent: Monday, May 07, 2012 10:49 PM
To: ietf@ietf.org
Subject: RE: Last Call:draft-kucherawy-marf-source-ports-03.txt  (Source 
Ports in ARF Reports) to Proposed Standard


If all one is doing is figuring out why something like a DKIM signature
failed on an otherwise legitimate message, then I agree the source port
isn't a useful input to that work.  In fact, as far as DKIM goes, the
source IP address is probably not useful either.

If, however, one is trying to track down the transmission of fraudulent
email such as phishing attacks, source ports can be used to identify
the perpetrator more precisely when compared to logs.  Support for this
latter use case is why I believe RECOMMENDED is appropriate.

Which is exactly the case (abuse report) the second to last paragraph
takes care of.  I agree RECOMMENDED is appropriate there and you have
it there.

For auth failure analysis I read you as agreeing it's not needed.
There are some authorization methods that use IP address, so I don't
think that for auth failure reports inclusion of IP address and source
port are comparable.

Based on your response, I don't understand your objection to dropping
the RECOMMENDS for auth failure reports and keeping it  for abuse
reports?

I don't think it's possible for software to identify correctly a case of an 
accidental authentication failure versus detected fraud.  If it were, then I'd 
agree that for the simple authentication failure case the source port isn't 
useful.

In the absence of that capability, isn't it better to give the investigating 
user as much information as possible to use in correlation of logs and such?

Dear Murray,

This is not about individual submissions or retaining privacy.  This is 
about retaining the only (weakly) authenticated piece of information 
within public SMTP exchanges.  All other SMTP elements are easily 
spoofed and worthless at positively identifying compromised systems for 
the purpose of subsequent isolation.   Attempts to track ports in the 
presence of LSN overlooks the highly transitory translations.  However, 
the LSN scheme provides a means to determine the source IP address.


Regards,
Douglas Otis





Re: Last Call: draft-kucherawy-marf-source-ports-03.txt (Source Ports in ARF Reports) to Proposed Standard

2012-05-07 Thread Douglas Otis

On 5/7/12 3:35 PM, Scott Kitterman wrote:

 On Monday, May 07, 2012 12:50:25 PM The IESG wrote:
 The IESG has received a request from an individual submitter to
 consider the following document: - 'Source Ports in ARF Reports'
 draft-kucherawy-marf-source-ports-03.txt as Proposed Standard
 ...

 I think adding the source port field has value, particularly for
 abuse reporting, but I think making it RECOMMENDED for authentication
 failure reporting is not appropriate.

 The last two paragraphs of section three read (trivial typo - there's
 an extra new line in the first paragraph that I removed here):

 When any report is generated that includes the Source-IP reporting
 field (see Section 3.2 of [ARF]), this field SHOULD also be present,
 unless the port number is unavailable.

 Use of this field is RECOMMENDED for reports generated per
 [AUTHFAILURE-REPORT] (see Section 3.1 of that document).

 The first corresponds to use in abuse reporting. As described in
 this draft and the references, I think the addition of source ports
 for abuse reports is well justified. OTOH, if you look at Section
 3.1 of RFC 6591 [AUTHFAILURE- REPORT], it gives the purpose of the
 most of the various data elements it RECOMMENDS as to aid in
 diagnosing the authentication failure.

 I'm not aware of any authentication methods supported by RFC 6591
 [AUTHFAILURE-REPORT] where source port makes a difference in
 authentication results. If RFC 6591 is extended in the future to
 include one that does, that would be the time to make source port
 RECOMMENDED for authentication failure reports. In the mean time
 it's just additional overhead and message size.

 My suggestion would be to change the last part of section three to
 read:

 When any authentication failure report [AUTHFAILURE-REPORT] is
 generated that includes the Source-IP reporting field (see Section
 3.1 of [AUTHFAILURE-REPORT]]), this field MAY also be included.

 Other than that, I think it's ready to go.

Dear Scott,

Agreed.  Logging ports translated by LSNs is not recommended.  The only 
tangible data is the source IP address made available by LSN 
services.  Both points touch upon the changes you recommend.   At some 
point, authentication reporting will also need to be updated.


Regards,
Douglas Otis



Re: Yet Another Reason?

2012-02-02 Thread Douglas Otis

On 2/2/12 1:55 PM, Alan Johnston wrote:

Is this yet another reason not to have IETF meetings in the USA? ;-)

  
http://yro.slashdot.org/story/12/02/02/1719221/do-you-like-online-privacy-you-may-be-a-terrorist

The FBI and their would-be tipsters could be flat out trying
investigate everyone who uses encryption, anonymizer and privacy tools
on the Internet, or changes SIM cards in their mobile phone!  At least
wearing T-shirts with inscrutable slogans isn't on the list yet or
we'd all be rounded up...

Seriously - who writes this stuff?

Dear Alan,

I can't help but imagine Jeff Foxworthy saying:

You might be a redneck if you think someone--
  concerned about privacy
  signing on to Comcast or AOL
  paying using cash
  changing their SIM card
  concealing data in photos (is there an app for that?)
  purchasing chemicals
  reading equipment manuals
  reading revolutionary literature
  writing program code
  watching attack coverage
  looking at stadium seating maps
--might be a terrorist.

Call (888) 705-JRIC and mention redneck. :^)

Regards,
Douglas Otis


Re: Protocol Definition

2012-01-05 Thread Douglas Otis

On 1/5/12 9:13 AM, Dave CROCKER wrote:

 On 1/5/2012 7:01 AM, Dave Cridland wrote:
 On Thu Jan 5 14:48:54 2012, Dave CROCKER wrote:
 If protocol corresponds with program or algorithm, then what is
 the communications term that corresponds to process?

 It's tempting to say port number, but that doesn't seem very
 satisfying.

 Session?

 That's an appealing suggestion. It is based on a 'state' existing
 between the two end points and it is above transport (so we don't
 have to worry about tcp vs udp vs...).

 On the other hand, isn't a session able to have more than one
 connection and, therefore, possibly be running more than one
 protocol?


Dave,

Agreed.  A multiple-stream protocol aggregates multiple endpoints into 
an Association, where each then signals supported protocols.  Within each 
protocol, various algorithms are applied, often in phases such as Bind, 
Listen, Accept, Connect, Close, Shutdown, SendMsg, RecvMsg, and 
GetPeerName.  An algorithm might be expressed using computer languages 
that incorporate elaborate mathematical models, such as simple hash 
functions used to validate packets.  One of the protocols supported is 
the Session Description Protocol (SDP), which describes media carried over 
the multiple streams.  This provides for Sessions, Associations, and Connections.


See:
http://tools.ietf.org/html/draft-loreto-mmusic-sctp-sdp-07#page-3
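
As a rough illustration of those phases, a plain TCP loopback sketch
(not SCTP-specific; the address and port are hypothetical):

.---
import socket

# Server side: Bind, Listen, then Accept
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))
srv.listen(1)

# Client side: Connect, GetPeerName, SendMsg
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5000))
peer, _ = srv.accept()
print(cli.getpeername())
cli.sendmsg([b"hello"])

# RecvMsg, Shutdown, Close
print(peer.recvmsg(16)[0])
cli.shutdown(socket.SHUT_WR)
cli.close(); peer.close(); srv.close()
'---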

-Doug




Re: Last Call: draft-kucherawy-dkim-atps-11.txt (DKIM Authorized Third-Party Signers) to Experimental RFC

2011-12-08 Thread Douglas Otis
I support adoption of dkim-atps as an experimental RFC.  It would have 
been clearer to use the term Author-Domain rather than Author.  Clearly, 
it is not the Author offering Authorization.


-Doug


Re: Plagued by PPTX again

2011-11-21 Thread Douglas Otis

On 11/17/11 4:14 PM, Randy Bush wrote:

PDF/A is something browsers and different OSs can display natively.
When submitting formats that are not PDF/A, convert them and
automatically link to the converted output with a prompt requesting
approval.

http://www.digitalpreservation.gov/formats/fdd/fdd000125.shtml
so where is the web page that tells me for platform x how to convert 
my generated pdf, which i have been using as the pub format for years, 
into pdf/a? the link under Guidelines for Creating Archival Quality 
PDF Files is a broken link.

The Florida Center for Library Automation website, on the page:
http://fclaweb.fcla.edu/content/pdfa-1

includes a similar link:
Guidelines for Creating Archival Quality PDF Files
http://fclaweb.fcla.edu/uploads/Lydia%20Motyka/FDA_documentation/PDFGuideline.pdf

Don't expect this link to remain stable either. :^(

-Doug


Re: Plagued by PPTX again

2011-11-17 Thread Douglas Otis

On 11/17/11 9:17 AM, Robinson Tryon wrote:

On Wed, Nov 16, 2011 at 8:56 PM, Melinda Shoremelinda.sh...@gmail.com  wrote:

On 11/16/2011 01:45 PM, Christian Huitema wrote:

Just saying, but if we want to ensure that presentations are readable 50
years from now, and do not embed some kind of malicious code, we might stick
to ASCII text, right?

Yes, clearly.

It's hard to know what to say about the suggestion that PowerPoint
is an appropriate archival format, other than maybe it's time for
folks to learn more about archiving.  To be honest it's my impression
that it's just people trying to find some reason - any reason - to
justify their preferred tools.  The notion that current PowerPoint
formats are the ones most likely to be interpretable in 2061, of
the formats now available, really doesn't hold up under serious
scrutiny.

But hey, now we know that at least the proponents of the archival
view are going to have crud-free slides, in the interest of
parsability at unknown future times.


We can certainly hope so!


Melinda


Reviewing this thread, it seems like there are three central desires
regarding IETF materials:

1) Interoperability: Universal access to working content (e.g. reading
presentation slides from a current meeting)
2) Archiving: Storage and search of content
3) Creation: Ease of content creation/desire to use particular products

If the current formats used to produce content (e.g. PPT, PPTX, ODF,
etc...) can be easily and accurately converted into formats such as
PDF/A-1, and if the text from the source formats can be exported (for
search/accessibility purposes), then I think we can make everyone
happy.

The export tools aren't perfect, of course, so this will place a
burden on content creators to either choose a source format that has
good export or to craft their input files so that they can be cleanly
exported by the available software.

If authors take on the responsibility of creating and verifying the
fidelity of exported versions, then I think everything will be peachy.
What can we do to encourage this practice?

Agreed.

PDF/A is something browsers and different OSs can display natively.
When submitting formats that are not PDF/A, convert them and
automatically link to the converted output with a prompt requesting
approval.


http://www.digitalpreservation.gov/formats/fdd/fdd000125.shtml

-Doug



Re: Plagued by PPTX again

2011-11-16 Thread Douglas Otis

On 11/15/11 10:26 AM, Frank Ellermann wrote:

On 15 November 2011 18:56, Noel Chiappaj...@mercury.lcs.mit.edu  wrote:


Gee, I don't see my OS listed on that page. What do I do know?

Let DuckDuckGo tell you what it knows about Powerpoint viewer ubuntu.

FWIW I like ppt(x) better than pdf, anything pdf is huge.  For simple
slides (x)html or whatever the slide option of xml2rfc produces could
be nice, packaged as mozilla archive format or any other style of a
zipped subdirectory.
Many exploits appear in ppt and pptx formats (where pptx reduces 
scanning complexity).  Limiting acceptance (or publishing) to PDF/A is a 
safer and more stable choice.  Documentation formats related to IETF 
efforts should be stable to ensure documents can be read at a later date.


In May of this year, patches were needed to mitigate ongoing PPT threats.
http://technet.microsoft.com/en-us/security/bulletin/ms11-036
http://www.openoffice.org/security/cves/CVE-2010-2935_CVE-2010-2936.html
http://blogs.technet.com/b/mmpc/archive/2009/04/02/new-0-day-exploits-using-powerpoint-files.aspx

OpenOffice permits file system based references as well.

-Doug


Re: Last Call: draft-jdfalk-maawg-cfblbcp-02.txt (Complaint Feedback Loop Operational Recommendations) to Informational RFC

2011-10-05 Thread Douglas Otis

On 10/4/11 11:43 PM, Eliot Lear wrote:

For the record, I tend to dislike pollution of the RFC series with PR
blurbs as well.  This having been said, I would be far more interested
in a discussion about the actual substantive content of the document.

Eliot

Eliot,

Thank you for asking.

In addition to the PR and copyright issues that forbid modification of 
the draft, there are a few technical issues that also affect the MARF 
draft.  The assertion of being DDoS experts does not explain why 
potential DDoS attacks perpetrated by leveraging the recipient resources 
necessary for processing SPF scripts had been overlooked.  SPF can be 
exploited to initiate a highly distributed demand against any targeted 
domain unrelated to any domain found within an attacking email campaign.
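
For illustration only, a hypothetical record of this shape forces every
receiver evaluating it to issue DNS queries against an unrelated victim
domain (the names are made up; the a:, mx:, and exists: mechanisms and
the %{i} macro each trigger lookups of the named target):

.---
attacker.example.  IN TXT  "v=spf1 a:victim.example mx:victim.example exists:%{i}.victim.example -all"
'---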


The assertion that it is difficult to forge a DKIM message overlooks the 
purpose of feedback and what these reports imply.  DKIM does NOT assure 
that a domain can be held accountable for any unsolicited message, which 
creates some risks to feedback resources.  The domain of the DKIM 
signature may serve as a basis for permitting feedback to that specific 
domain, but should not be used to confirm any unrelated domains.


A more serious problem exists with the use of SPF in permitting 
feedback.  The problem is made worse by Authentication-Results headers 
not capturing the IP address used to verify SPF records, although the 
MARF report includes a purported Source IP address.  There is still no 
way to determine how this purported IP address information had been 
derived, or how it relates to SPF verification assertions.  An SPF 
record can be crafted to return a pass result unrelated to the 
originating IP address.  As such, SPF can be subverted to gain feedback 
access with lax validation results returned in Authentication-Results 
headers.  This oversight had been officially challenged, but never 
corrected. :^(
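
For example, a hypothetical record such as:

.---
example.com.  IN TXT  "v=spf1 +all"
'---

returns a pass result for any connecting IP address, so a pass by
itself says nothing about where a message actually originated.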


Any domain can resolve any IP address.  The draft's assertion that 
forging the IP address is difficult must be considered with respect to 
the use of SPF in qualifying for feedback.  The source IP address does 
not receive feedback.  The feedback relationship is further undermined 
by the lack of consensus on which message element (the EHLO, the 
MailFrom, or the PRA) selects the SPF resource record.  In addition, 
feedback is likely to occur when SPF assertions fail, so it becomes 
imperative to understand what was used to initially qualify SPF 
feedback relationships.


In addition, this draft describes the use of third parties located at 
different domains without advising how such entities given access to 
feedback streams are to be vetted, or whether they should be allowed 
at all.


-Doug


Re: Last Call: draft-jdfalk-maawg-cfblbcp-02.txt (Complaint Feedback Loop Operational Recommendations) to Informational RFC

2011-10-04 Thread Douglas Otis

On 10/4/11 9:09 AM, J.D. Falk wrote:

About MAAWG
  
 MAAWG [1] is the largest global industry association working against

 Spam, viruses, denial-of-service attacks and other online
 exploitation.  Its' members include ISPs, network and mobile
 operators, key technology providers and volume sender organizations.
 It represents over one billion mailboxes worldwide and its membership
 contributed their expertise in developing this description of current
 Feedback Loop practices.
  
  Could the PR blurb be removed?
  
  I think it's useful in this document.  People reading IETF documents

  aren't likely to know what MAAWG is, and a short paragraph doesn't
  seem untoward.  I'd agree, if there were excessively long text for
  this, but it's brief.

MAAWG will insist on keeping this.  The primary purpose, in my mind, is to show 
that even though this wasn't written within the IETF it was still written by 
people who really do know what they're talking about.
I agree with Frank on this issue.  The PR blurb should not be included.  
If MAAWG finds removal unacceptable, they are free to publish the 
document themselves among their other documents.  MAAWG has a closed 
membership heavily influenced by ISPs and high volume senders.  The IETF 
has normally resisted this type of influence by not referring to 
specific organizations.  Such influence is not always beneficial from 
the perspective of many IETF objectives.


-Doug


Re: Expiring a publication - especially standards track documents which are abandoned

2011-10-04 Thread Douglas Otis

On 9/4/11 7:23 AM, todd glassey wrote:
There are any number of IETF RFC's which were published and then 
accepted in the community under the proviso 'that they would become 
IETF standards' which in many instances they do not. Further many of 
them are abandoned in an uncompleted mode as standards efforts.


To that end I would like to propose the idea that any IETF RFC which 
is submitted to the Standards Track which has sat unchanged in a 
NON-STANDARD status for more than 3 years is struck down and removed 
formally from the Standards Track because of failure to perform on the 
continued commitment to evolve those standards.


Why this is necessary is that the IETF has become a tool of companies 
which are trying to get specific IETF approval for their wares and 
protocols - whether they are open in form or not. The IETF entered 
into a contract with these people to establish their standard and 
published those documents on the standards track so that they would be 
completed.  Since they have not been completed as IETF Standards the 
Project Managers for those submissions have formally breached their 
contract to complete that process with both their WG members who 
vetted those works as well as the rest of the IETF's relying parties.


As such it is reasonable to put a BURN DATE on any Standards Track 
effort which has stalled or stopped dead in its tracks for years.


Todd Glassey

Todd,

I do not hold that view.  It should not require careers to be dedicated 
to the advancement of specifications.  As in programming, many of these 
specifications should become components supporting more complex 
protocols.  While the current publication tools help tremendously in 
allowing interested parties to determine the current state of a 
specification, I have not found the standards track informative.


It seems there should be an effort to better objectify specifications, 
in other words, to allow them to be deconstructed into separate 
elements.  Above the deconstruction, allow higher-level organizations to 
offer a breakdown of options, relative use, and current and best 
practices.  This high-level organization should not include any 
normative language, other than in arranging existing specifications.  It 
is at this higher level of arrangement where status ranking makes more 
sense.


-Doug


Re: Last Call: draft-weil-shared-transition-space-request-03.txt (IANA Reserved IPv4 Prefix for Shared Transition Space) to Informational RFC

2011-09-22 Thread Douglas Otis
Dual-Stack Lite (RFC 6333) makes these conversions using a single NAT 
by combining IPv6 address space with the common 192.0.0.0/29 range.  
This approach does not suffer from scaling limitations other than 
constraining access points to 6 IPv4 interfaces, where IPv6 provides the 
native IP protocol.  While taking a chunk out of 240/4 should not 
introduce any hardship, the intended use for a compound NAT topology 
seems aimed at retaining the provider's IPv4 infrastructure.  Such 
inferior IPv4 networks will certainly expedite demand for IPv6 access.
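
A quick worked check of that limit of 6, assuming only the well-known
192.0.0.0/29 range mentioned above:

.---
import ipaddress

# A /29 leaves six usable host addresses for IPv4 interfaces.
print(list(ipaddress.ip_network("192.0.0.0/29").hosts()))
# [IPv4Address('192.0.0.1'), ..., IPv4Address('192.0.0.6')]
'---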


Any IPv4 need can be satisfied by a CPE that conforms with RFC 6333 at 
roughly the cost of the monthly service.  Does it really make sense to 
endorse a strategy that attempts to produce inferior networks to delay 
an upgrade, impacting many services now offered over IPv4?  This is 
likely to be a significant mistake, IMHO.


-Doug



Re: Conclusion of the last call on draft-housley-two-maturity-levels

2011-09-12 Thread Douglas Otis

On 9/9/11 6:33 PM, Thomas Narten wrote:

I am surely going to regret posting, because I have largely tuned out
of this discussion due to the endless repetition, etc. I am not
supportive of the current document, because I don't think it solves
anything. To me, it smacks a bit of change for change's sake.

One of the key problems that isn't being addressed is that mixing
advancement of a spec with revising a spec are fundamentally at
odds with each other.

Advancing a spec is done for marketing, political, process and other
reasons. E.g., to give a spec more legitimacy. Or to more clearly
replace an older one. Nothing wrong with that.

But the real reason that the IETF *should* be revising specs is to fix
bugs and improve protocol quality.

By definition, you cannot revise a spec (in a real, meaningful way)
and advance at the same time. The spirit (if not letter) of
advancement says you advance a spec, when there are implementations
*based on the spec being advanced*. That means you can't revise a spec
and at the same time have implementations derived from the revised
spec.  (You can have implementations based on mailing list
discussions, but that is NOT the same thing.)

The IETF is about making the Internet work better. That means revising
specs (from a technical perspective) when they need to be revised.

If we want to fix what's broken, we should focus on getting documents
revised (without simultaneously advancing them).
But once you do that, one quickly finds out that there are real and
sometimes  complicated
reasons why revising documents is hard.

In many cases, widely deployed protocols really need to have a revised
spec developed (and the authors will readily admit that). But that
just doesn't happen, not because of process, but because of other much
more fundamental problems. E.g., Not enough energy from the relevant
experts. key people who know a spec have moved on to other newer
technologies or other higher priority things. Fixing specs can also be
painful because some vendors won't or can't change their deployed
implementations, so don't really want an updated spec that invalidates
their implementation. etc., etc. It can be very hard for a WG to walk
the line between we need to fix this and can we tweak the spec
without invalidating various deployed implementations.

IMO, these sorts of issues are the real reasons documents don't
advance more. It's not just about process.

Agreed. Well said.

-Doug



Re: [v6ops] 6to4v2 (as in ripv2)?

2011-07-27 Thread Douglas Otis

On 7/27/11 4:31 AM, Mark Townsley wrote:

On Jul 27, 2011, at 7:09 AM, Fred Baker wrote:

On Jul 26, 2011, at 6:49 PM, Brian E Carpenter wrote:


Since 6to4 is a transition mechanism it has no long term future *by 
definition*. Even if someone chooses to design a v2, who is going to implement 
it?

Actually, I think one could argue pretty effectively that 6rd is 6to4-bis.

+1

+1

-Doug


Re: draft-ietf-v6ops-6to4-to-historic (yet again)

2011-07-26 Thread Douglas Otis

On 7/26/11 12:58 PM, SM wrote:

Hi Ron,
At 07:30 AM 7/25/2011, Ronald Bonica wrote:
draft-ietf-v6ops-6to4-to-historic will obsolete RFCs 3056 and 3068 
and convert their status to HISTORIC. It will also contain a new 
section describing what it means for RFCs 3056 and 3068 to be 
classified as HISTORIC. The new section will say that:


- 6-to-4 should not be configured by default on any implementation 
(hosts, cpe routers, other)
- vendors will decide whether/when 6-to-4 will be removed from 
implementations. Likewise, operators will decide whether/when 6-to-4 
relays will be removed from their networks. The status of RFCs 3056 
and 3068 should not be interpreted as a recommendation to remove 
6-to-4 at any particular time.


draft-ietf-v6ops-6to4-to-historic will not update RFC 2026. While it 
clarifies the meaning of HISTORIC in this particular case, it does 
not set a precedent for any future case.


The above conflates document labels, Historic in this case, and advice 
to vendors and operators.  Redefining the meaning of Historic in a RFC 
that is not even a BCP is a bad idea.


I am fine with 6-to-4 not being configured on by default and with 
obsoleting RFCs 3056 and 3068.  I do not support the redefinition of 
Historic or the claim that there is IETF Consensus.

Agreed.

-Doug


Re: Last Call: draft-ietf-dkim-rfc4871bis-12.txt (DomainKeys Identified Mail (DKIM) Signatures) to Draft Standard

2011-06-24 Thread Douglas Otis

On 6/23/11 8:24 AM, John Levine wrote:

In article4e02ee24.2060...@gmail.com  you write:

On 6/22/11 11:14 AM, Dave CROCKER wrote:

Folks,

The bottom line about Doug's note is that the working group extensively
considered the basic issue of multiple From: header fields and Doug is
raising nothing new about the topic.

Dave is quite right.  Doug's purported attack just describes one of
the endless ways that a string of bytes could be not quite a valid
5322 message, which might display in some mail programs in ways that
some people might consider misleading.  If it's a problem at all, it's
not a DKIM problem.
Perhaps you can explain why the motivation stated in RFC4686 includes 
anti-phishing as DKIM's goal?  Why, of all the possible headers, ONLY 
the From header field MUST be signed?  Why RFC5617 describes the From 
header field as the Author Address that is positively confirmed simply 
with a Valid DKIM signature result?  Both RFC4686 and RFC5617 overlooked 
a rather obvious threat clearly demonstrated by Hector Santos on the 
DKIM mailing list: pre-pended singleton header fields.


Neither SMTP nor DKIM checks for an invalid number of singleton header 
fields.  These few header fields are limited to one occurrence because 
they are commonly displayed.  Multiple occurrences of any of these 
headers are likely deceptive, especially in DKIM's case.  DKIM always 
selects header fields from the bottom up, but most sorting and display 
functions select from the top down.
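
A minimal sketch of the kind of check being discussed (a verifier or
MTA flagging messages that carry more than one instance of a singleton
header field such as From:):

.---
import email

SINGLETONS = ("from", "to", "subject", "date", "sender", "reply-to",
              "cc", "bcc", "message-id", "in-reply-to", "references")

def duplicated_singletons(raw_message: bytes):
    # Count every header occurrence and report RFC 5322 singleton
    # fields that appear more than once (e.g. a pre-pended second From:).
    msg = email.message_from_bytes(raw_message)
    counts = {}
    for name in msg.keys():
        counts[name.lower()] = counts.get(name.lower(), 0) + 1
    return [n for n, c in counts.items() if n in SINGLETONS and c > 1]
'---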


Complaints from John, Dave, Barry, and others likely and understandably 
stem from fatigue.  They just want the process to be over.  We are now 
hearing there is a vital protocol layering principle at stake which even 
precludes DKIM from making these checks!  Really?


Although DKIM will be securely hashing the header fields, which MUST 
include the From header, developers are being told they must ignore 
multiple singleton header fields discovered in the process.  It is not 
their burden!  As if securely hashing, fetching any number of public 
keys, and verifying any number of signatures isn't?


-Doug




Last Call: draft-ietf-dkim-rfc4871bis-12.txt (DomainKeys Identified Mail (DKIM) Signatures) to Draft Standard

2011-06-21 Thread Douglas Otis


This version of DKIM also introduced use of RFC5890 instead of RFC3490, 
which allows use of the German eszett and the Greek final sigma, drops 
stringprep, and defines 3,329 invalid code points.  Unfortunately, this 
version of DKIM also failed to exclude use of fake A-labels, which, when 
presented to the user, may also prove highly deceptive.


Details of this concern were stated in the tracker at:
http://trac.tools.ietf.org/wg/dkim/trac/ticket/24
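
A rough sketch of the round-trip test that can reject many fake
A-labels (using only the stdlib Punycode codec for illustration; a
complete RFC 5890 check would also have to validate the decoded code
points):

.---
def fake_a_label(label: str) -> bool:
    # A real A-label decodes to a U-label that re-encodes to the same
    # ACE form; anything starting with "xn--" that fails this is fake.
    if not label.lower().startswith("xn--"):
        return False
    try:
        u_label = label[4:].encode("ascii").decode("punycode")
        ace = "xn--" + u_label.encode("punycode").decode("ascii")
        return ace.lower() != label.lower()
    except UnicodeError:
        return True
'---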

Regards,
Douglas Otis




Re: Gen-ART LC Review of draft-cheshire-dnsext-multicastdns-12

2010-12-22 Thread Douglas Otis

On 12/22/10 2:11 PM, Ben Campbell wrote:

 Thanks for the response. Further comments below. I elided sections
 that I think have been addressed.

 On Dec 15, 2010, at 4:30 AM, Stuart Cheshire wrote:

 [..]

 Yes, IDNA is not needed for Multicast DNS.

 I think it would be highly unfortunate if we end up saying two
 different flavors of DNS use different approaches to
 internationalization. But if there are good technical reasons not to
 use IDNA, then it would be good to state them. Perhaps the reasons
 you already mention apply--in which case it would be helpful to state
 that. Would you consider IDNA to exist to solve this historical
 problems in DNS that don't exist in mDNS?


The IDNA patch for DNS is not complete or without problems. Although 
RFC4795 indicates compliance with RFC1035, the latest specification 
published by Microsoft, MS-LLMNRP — v20101112

http://download.microsoft.com/download/a/e/6/ae6e4142-aa58-45c6-8dcf-a657e5900cd3/%5BMS-LLMNRP%5D.pdf
includes the following questionable statement:
,---
[RFC4795] does not specify whether names in queries are to be sent in 
UTF-8 [RFC3629] or Punycode [RFC3492]. In this LLMNR profile, a sender 
MUST send queries in UTF-8, not Punycode.

'---
Rather than making the same mistake, leave mDNS as is.

-Doug



Re: Last Call: draft-cheshire-dnsext-dns-sd-07.txt

2010-11-19 Thread Douglas Otis

On 11/18/10 4:51 AM, RJ Atkinson wrote:

IESG Folks,

   The IETF already has taken MUCH MUCH too long handling this document.
Each time this I-D gets revised, new and different issues are raised.
While I am generally OK with the way IETF processes work,
this document is an exception.

   Excessive nit-picking is going on with this document, especially
since it is already globally deployed and clearly works well.
Further, there are multiple interoperable implementations already
deployed, which is an existence proof that the current I-D is
sufficient.  This I-D is quite different from most documents heading
to Proposed Standard, because for most I-Ds interoperability hasn't been
shown and operational utility in the deployed world hasn't been shown.

   Perfection is NOT what IETF processes require for a Proposed Standard
RFC.  Please stop seeking or asking for perfection from this I-D.

   Please just publish the document as an RFC **RIGHT NOW**
and AS-IS.

   Even if IESG folks really think more document editing is needed,
then still publish it RIGHT NOW and AS-IS.  If folks really want
to see document clarifications, that can be done LATER when the
document advances along the IETF standards-track.

   Now, before the IESG go off and try to point fingers at the author
for taking time between revisions of this I-D, please consider
that the root problem is that the Goal Posts keep being moved back
so that no matter how complete one set of author edits might be,
the document can never get to a state where it is acceptable.


+1,

-Doug


Re: Admission Control to the IETF 78 and IETF 79 Networks

2010-07-12 Thread Douglas Otis


On 7/12/10 11:39 AM, Martin Rex wrote:

Personally, I'm heavily opposed to an approach along these lines.
It is a big plus that MAC addresses can be trivially changed,
and I regularly connect with random MACs in public places.
   
Russ and Ted discussed use of MAC addresses for access.   I may have 
missed or misunderstood their point, although such a scheme is often 
used (and easily defeated) in typical coffee-shop settings.  I may be 
wrong, and this is a good list for learning such things.


When security is desired, something like WPA2 Enterprise EAP-TTLS seems 
more realistic.  Perhaps other options need to be included to overcome 
the need for third-party software on some versions of Windows.  This 
approach would keep information and privacy better secured, and systems 
less exposed to various exploits, since some attendees may actually need 
protection in the big city. :^)


Better security can be found with 802.1X-2010, which resolves some 
vulnerabilities by using MACsec (802.1AE) to encrypt data between 
logical ports.  This suffers the drawbacks of deploying client certs and 
poor coverage, along with the anxiety that EAP-TPM might cause.

Personally, I'm somewhat less concerned about a unique or fixed ID in
my DSL-router.  I have only one DSL subscription with one single ISP,
and I need to authenticate to my ISP with useridpass -- which makes
we wonder why should there be a unique/fixed ID in that device,
it is absolutely unnecessary.
   
Securing wireless must detect MitM attacks.  Using a cert at the server 
when making changes seems a small price.


-Doug


Re: Admission Control to the IETF 78 and IETF 79 Networks

2010-07-02 Thread Douglas Otis

On 7/1/10 8:26 AM, Fred Baker wrote:

While it is new in IETF meetings, it is far from unusual in WiFi networks to 
find some form of authentication. This happens at coffee shops, college 
campuses, corporate campuses, and people's apartments. I think I would need 
some more data before I concluded this was unreasonable.
   
Beijing is truly a big city.  Some reports show China has more Internet 
users than North America, with other regions a distant third.  Isn't 
restricting access using a MAC address passé?  Within these hundreds of 
millions of users are popular intrusion tools, such as those distributed 
on Ubuntu CDs marketed as Free Internet, able to quickly crack WEP and 
even WPA.  Brute-force techniques for WPA2 might even utilize low-cost 
online services of clustered video cores run against captured initial 
four-way handshakes, which should discourage use of simple passphrases.


A reasonable chance of keeping access away from this savvy population 
will likely require enterprise WPA2.  Otherwise, it might become popular 
during the event to use high-gain antennas to obtain uncensored access.  
Distribution of such content could prove embarrassing, especially when 
logs reveal it came over IETF networks.


-Doug


Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread Douglas Otis

On 6/28/10 12:35 PM, Martin Rex wrote:

To me it looks like Obsolete:  has been used with quite different
meanings across RFCs, and some current uses might be inappropriate.

Although it's been more than two decades that I read rfc821 (and
none of the successors), I assume that all those RFC describe _the_same_
protocol (SMTP) and not backwards-incompatible revisions of a protocol
family (SMTPv1,v2,v3).  I also would assume that you could implement an
MTA with rfc2821 alone (i.e. without ever reading rfc821), that is still
fully interoperable with an implementation of rfc821.  So for a large
part we are looking at a revised specification of the same single protocol,
and the term obsoletes should indicate that you can create an
implementation of the protocol based solely on a newer version of the
specification describing it and remain fully interoperable with an
implementation of the old spec when (at least when using the mandatory
to implement plus non-controversial recommended protocol features).


For RFCs that create backwards-incompatible protocol revisions, and
in particular when you still need the old specification to implement
the older protocol revision, there is *NO* obsoletion of the old
protocol by publication of the new protocol.  Examples where this
was done correctly:  IPv4-&gt;IPv6, LDAPv2-&gt;LDAPv3, HTTPv1.0-&gt;HTTPv1.1.

A sensible approach to obsolete a previous protocol version is to
reclassify it as historic when the actual usage in the real world
drops to insignificant levels and describe/publish that move in an
informational RFC (I assume that is the intent of rfc-3494).


Examples of clearly inappropriate Obsoletes:  are the
TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
backward-incompatible protocol revisions of TLS and where the new RFCs
specify only the behaviour of the new protocol version and even
fail to clearly identify the backwards-incompatible changes.


And if you look at the actual use of TLS protocol versions in the
wild, the vast majority is using TLSv1.0, there is a limited use
of TLSv1.1 and very close to no support for TLSv1.2.

(examples https://www.ssllabs.com/ssldb/index.html
  http://www.ietf.org/mail-archive/web/tls/current/msg06432.html


What irritates me slightly is that I see this announcement
https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

which is more of a bashing of existing and widely used versions
of SSLv3 and TLS, instead of an effort to improve _one_ of the
existing TLS protocol revisions and to advance it on the standards
maturity level and make it more easily acceptable to the marketplace.

Adding explicit indicators for backwards-incompatible protocol changes
in rfc-5246 might considerably facilitate the assessment just how much
changes are necessary to an implementation of a predecessor version
of TLSv1.2.  Btw. 7.4.1.4.1 Signature Algorithms extension appears
to be a big mess and fixing it wouldn't hurt.

MUST requirements in spec ought to be strictly limited to features
that are absolutely necessary for interoperability _and_ for the
existing market, not just nice to have at some time in the future.
The only TLS extension that deserves a MUST is described
in rfc-5746 (TLS extension RI).


One of the reasons why some working groups recycle protocol
revisions at Proposed rather than advancing a widely deployed protocol
to Draft is "the better is the enemy of the good".
   


Make everything as simple as possible, but not simpler. Albert Einstein

The current scheme is already too simple: too simple because any 
resulting utility does not justify the promotion effort.  Reducing the 
status categories will not greatly impact the goal of spending less time 
advancing related RFCs to the same level, where often the originating WG 
will have closed.  Making changes that impact a large number of 
interrelated protocols will cause as much disruption as utility.  Rather 
than providing stability, efforts at simplification are likely to inject 
as many errors as they correct.


Four years ago, an effort to create a cover sheet for standard 
protocols was attempted.  After gaining majority support within the WG, 
the WG subsequently closed without a clear explanation for the IESG 
push-back.  Often no single RFC or STD encapsulates a standard.  In 
addition, references to a wider set depend upon an evolving set of 
numbered RFCs, where tracking these relationships often requires complex 
graphs.


With a cover-sheet approach, core elements are described separately 
from extension, guidance, replaces, experimental, and companion 
elements.  Many overlapping protocols can be defined as representing 
different associations of RFCs.  This scheme lessens dependence on a 
concise relationship described in each RFC, or on attempts to resolve 
relationships based upon the roughly maintained categories that 
frequently offer little insight or reflect actual use.


Cover sheets that 

Re: Models of change Re: The point is to change it: Was: IPv4 depletion makes CNN

2010-06-16 Thread Douglas Otis

On 6/15/10 11:04 AM, ned+i...@mauve.mrochek.com wrote:

 And since I'm not in the best of moods I'll also answer in kind by
 saying us application engineers might also be waiting for someone
 with half a clue as to how to design a proper standard API to come
 along and do that.


Ned,

Agreed, better progress should have been made.  What impact do you see a 
new suite of network APIs making?


It is not hard to understand the view that one should avoid NAPT 
translations, where IPv6 should easily be able to avoid this issue.  
When dealing with older code that should have been changed, dual-stack 
transitional schemes, such as DS-Lite or 6to4, depend less on existing 
code working directly with IPv6.  Most expect port-mapping agility, or 
manual intervention, to retain functionality by moving this function to 
the realm of newer equipment.  Access to maintenance interfaces is 
another area where proprietary schemes are working well.  Even Debian 
distributions such as Ubuntu offer pre-installed services which make 
remote configuration easier and safer.  Having fewer maintenance 
interfaces exposed directly to the Internet is a good thing, since few 
older interfaces have adequate protection.


Here is a document that explains how the AirPort router supports an API 
for managing port mappings:

http://tools.ietf.org/html/draft-cheshire-nat-pmp-03

This approach avoids complex service and device specific structures, and 
dependence upon insecure, complex, and proprietary assignment protocols 
that ultimately depend upon users being updated and knowing when to 
click okay.
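
As a rough sketch of how small that API is, a NAT-PMP external-address
query per the draft (the gateway address is an assumption the caller
must supply; the request is version 0, opcode 0, on UDP port 5351):

.---
import socket, struct

def natpmp_external_address(gateway_ip: str, timeout: float = 2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(struct.pack("!BB", 0, 0), (gateway_ip, 5351))
    data, _ = s.recvfrom(16)
    # Response: version, opcode, result code, seconds since start,
    # then the gateway's external IPv4 address.
    ver, op, result, uptime = struct.unpack("!BBHI", data[:8])
    return result, socket.inet_ntoa(data[8:12])

# Example (hypothetical gateway address):
# print(natpmp_external_address("192.168.1.1"))
'---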


-Doug



Re: The point is to change it: Was: IPv4 depletion makes CNN

2010-06-10 Thread Douglas Otis

On 6/9/10 5:57 PM, Ned Freed wrote:

 I note in passing that this might have played out differently had we
 gotten SRV record support in place for all protocols a lot sooner.
 But without it for HTTP in particular you're faced with the need for
 multiple port 80s in a lot of cases.


Disagree.  HTTP is a bad example, since it allows canonical names to be 
replaced with a name offered by the client, supporting name-based 
virtual hosts.  In other words, a single port supports thousands of 
websites. :^)
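
A small illustration of that point (hypothetical names and address; the
two sites share one IP and one port, differing only in the Host header):

.---
import http.client

for host in ("www.example.org", "blog.example.org"):
    conn = http.client.HTTPConnection("192.0.2.10", 80, timeout=5)
    conn.request("GET", "/", headers={"Host": host})
    print(host, conn.getresponse().status)
    conn.close()
'---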



 Clearly, with skill and non-commodity equipment, a configuration
 supporting multiple IPv4 addresses at an access point can be
 implemented in conjunction with IPv6.

 Of couse it can. But that's precisely the point - neither the skill
 nor the non-commodity equipment are available in most cases. And
 even when they are, a lot of people, like myself, run the costs
 versus benefits and IPv6 ends up losing.


Agreed, but that changes once IPv6 becomes an imperative for these 
services, such as websites.  At that point, it's easier to scale a 
transitional solution when using fewer IPv4 addresses.  As such, those 
few wishing to retain multiple IPv4 addresses lacking IPv6 connectivity 
are likely the last to adopt IPv6.



 Fortunately, it remains easy to adopt the resource conservative
 IPv4 configurations supported by commodity routers when obtaining
 IPv6 connectivity.  Why should the IETF advocate an increased IPv4
 use that lacks benefit once a network has been configured?

 More strawmen. We're not talking about increased IPv4 use, but
 rather decent support for existing, long-deployed IPv4 use. If you
 seriously think you can get people to dump their existing setups in
 favor of something that is  a major PITA to deal with and offers no
 immediate benefit, well, I have a couple of bridges available at fire
 sale prices.


I still have my English standard spanners, but they are seldom used.  
The impetus to change occurs after IPv6 becomes an imperative, such as 
doing business with a region dependent upon IPv6.  After that, 
complaints related to NATs will fade, and support for IPv4 will be seen 
as the PITA.  The inflection point for this may move faster than imagined.


-Doug



Re: The point is to change it: Was: IPv4 depletion makes CNN

2010-06-10 Thread Douglas Otis


On 6/10/10 3:12 PM, Ned Freed wrote:

 On 6/10/10 2:48 PM Douglas Otis wrote:
 Disagree.  HTTP is a bad example, since it allows canonical names
 to be replaced with a name offered by clients for supporting name
 based virtual hosts.   In other words, a single port supports
 thousands of websites. :^)
 True but irrelevant. The issue isn't support for multiple web sites,
 it's support for multiple servers. Virtual hosts are fine as long as
 everything you've got runs under the same web server and on the same
 hardware. But in many cases things don't work this way. Let's see.
 I'm running at least seven different pieces of software right now
 that include a web server component. Now, not all of them provide
 public services, and for those it doesn't matter that they're not on
 port 80. But three of them do provide public services.

 Of course redirects and proxies provide a means of getting around
 these limitations. But now you're talking about a substantial
 increase in application and configuration complexity. Multiple IPs
 are a lot simpler and easier to manage.


A Pareto-compliant scenario does not require that every possible 
configuration be accommodated, provided it accommodates existing 
configurations.  Only a small percentage of customers receive multiple 
IPv4 addresses from their provider.  Of those, only a few run 
incompatible services on different systems.  While those who participate 
within the IETF likely represent an exceptional population, it seems 
unlikely that special accommodations for minor portions of a market are 
necessary before progress is possible.  Those who wish to retain their 
current configurations can do so, forgoing IPv6 connectivity.  For them, 
this becomes a question of what overcomes their current equilibrium 
(their PITA factor).  Indirectly this is being answered when large 
providers internally route Internet services using IPv6 when lacking 
adequate IPv4 address space.


From a security standpoint, direct routing to a device reduces 
complexity and operational issues.  Whether a device is an LP tank 
sensor in the backyard, a power meter on the side of the house, or a 
heat pump in the basement, direct routing offers enhanced functionality 
and security for those who opt-in.  Currently, low-end routers are 
commonly 0wned and are untrustworthy, and often lack adequate logs, 
assuming these logs could be trusted.  Direct routes allow packets to 
arrive unaltered, where only its source is being trusted.



 The impetus to change occurs after IPv6 becomes an imperative, such
 as doing business with a region dependent upon IPv6.  After that,
 complaints related to NATs will fade, and support for IPv4 will be
 seen as the PITA.  The inflection point for this may move faster
 than imagined.

 In other words, the way you see this unfolding is that once there's
 a significant IPv6-connected base somewhere, probably in some
 emerging market, somebody will view it as  practical to deploy
 services that can only be reached by IPv6. As more and more of this
 happens, it will create a need for everyone to have IPv6
 connectivity, which will then lead to more IPv6-only services, and so
 on.

 If this is accurate, I think you need to go back and reread John
 Klensin's recent messages for why this scenario really isn't all that
 likely to unfold the way you think.


Region might mean a market segment or a geographic area, whether the 
impetus results from a lack of IPv4 address space, a lack of IP 
security, or a lack of functionality.


Do you have a reference to one of John's messages?  Over the years, this 
economic reference has been raised.  Adding complexity to commodity 
devices for a minor portion of a market makes little sense.  Higher-end 
routers offer a solution, but this means considering available 
alternatives when price does not seem justified.  I don't see any 
strategy being proposed to force those not wishing to participate.  For 
years I have had access to multiple static IP addresses, but never used 
more than one, nor would the services you seem to describe meet most 
residential AUPs.


-Doug



Re: The point is to change it: Was: IPv4 depletion makes CNN

2010-06-09 Thread Douglas Otis

On 6/7/10 12:49 PM, John C Klensin wrote:

My belief is that we have a serious IPv6 marketing and
transition problem until and unless we can get a level of
functionality for IPv6 (and, really, for IPv4/IPv6 mixtures of
the sorts that Ned's notes imply) at a level of investment
roughly equivalent to the costs we are now paying for IPv4
alone.   I want to stress that level of investment and terms
like expensive are measured in requirements for knowledge,
maintenance, configuration fussing, etc., not just hardware.
They also include some important issues about support costs on
the vendor/ISP side: if an ISP sells a business IPv6 service
with certain properties and customers get into trouble, that ISP
is itself in trouble if the support requests require third-level
or engineering/design staff involvement to understand or
resolve.  When the hardware costs we are talking about are in
the same range as one month's connectivity bills (and all the
numbers you and Ned mentioned are, at least for me), they just
wash out and disappear compared to aggravation, fussing, and
other sysadmin costs.
   
IMHO, it would be a mistake to expect low end routers targeting home and 
small office environments to eventually include features for handling 
multiple IPv4 addresses in conjunction with an IPv4 to IPv6 transition 
strategy, largely for the reasons you give.   When multiple providers 
are involved, some choices are available for multiple IPv4 addresses 
where devices terminating a provider's network are connected through a 
vlan switch with trunking.  Or terminated with a selection of mid-range 
routers ~$400/$50 new/used price range, such as cisco 871 or 2600.   
Instead of expecting a company's support to deal with with involved 
configurations, solutions are increasingly met by co-location services, 
or VPS where the providers offer network/power redundancy,  dual stack 
rout-ability, and support expertise.


An automatic 6to4 tunnel for an isolated IPv6 network routes on a 
per-packet basis to a border router in another IPv6 network over IPv4 
infrastructure.  Tunnel destinations are determined by the IPv4 address 
of the border router, extracted from the IPv6 address starting with the 
prefix 2002::/16 and having the format 
2002:<border-router-IPv4-address>::/48, which likely makes this a 
function of the ISP.  When IPv6 is available, each device becomes 
accessible with unique IP addresses.  A conservative approach for scarce 
IPv4 addresses is to associate dedicated servers/services with specific 
ports of a single global address, a feature supported by nearly all 
commodity routers.  Whenever accessing IPv6 networks over the Internet 
becomes imperative, ISPs will suggest boilerplate solutions.  However, 
it seems unlikely these will include anachronistic use of IPv4 addresses.
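
A quick sketch of that 2002::/16 derivation (RFC 3056), using a
documentation IPv4 address as the hypothetical border router:

.---
import ipaddress

def six_to_four_prefix(border_router_v4: str) -> ipaddress.IPv6Network:
    # 2002::/16 followed by the 32-bit IPv4 address of the border
    # router yields the site's /48 prefix.
    v4 = int(ipaddress.IPv4Address(border_router_v4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(six_to_four_prefix("192.0.2.1"))   # 2002:c000:201::/48
'---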


-Doug


Re: The point is to change it: Was: IPv4 depletion makes CNN

2010-06-09 Thread Douglas Otis

On 6/9/10 1:19 PM, Ned Freed wrote:

When IPv6 is available, each device becomes
accessible with unique IP addresses.  A conservative approach for scarce
IPv4 addresses is to associate dedicated servers/services with specific
ports of a single global address, a feature supported by nearly all
commodity routers.  Whenever accessing IPv6 networks over the Internet
becomes imperative, ISPs will suggest boilerplate solutions.  However,
it seems unlikely these will include anachronistic use of IPv4 
addresses.

And so, having no other argument to make, we resort to pejoratives?
Sorry, this was in reference to an approach based on past assumptions.  
The inflection point at which multiple IPv4 addresses at an access point 
become anachronistic will occur with an IPv6 connectivity imperative 
driven by the lack of IPv4 addresses.


In most small office/home office (SOHO) cases, a single IPv4 address is 
both sufficient and well supported for use with IPv4 and IPv6 remote 
networks.  Additional IPv4 global addresses for an access point will 
likely involve recurring costs due to complexity and dependence upon 
this scarce resource.  The inflection point for when multiple IPv4 
addresses at an access point become anachronistic occurs with an IPv6 
connectivity imperative.  Perhaps the US will delay acceptance of this 
imperative, long after the rest of the world has embraced IPv6.  After 
all, US, Liberia, and Burma have yet to adopt metric measures. :^)
Calling small business use of a small number of IPv4 addresses 
anachronistic
doesn't change the fact that this is a widespread practice fully 
supported by
an ample number of reasonable quality router products. And you're not 
going to
get IPv6 deployed in such cases without a drop-in replacement that 
adds IPv6
support to what's already there. 
Clearly, with skill and non-commodity equipment, a configuration 
supporting multiple IPv4 addresses at an access point can be implemented 
in conjunction with IPv6.  This could be practical when many within an 
organization are affected, but would not involve commodity low-end 
routers.  Such configurations will remain rare due to IPv4 resource 
consumption, and greater support complexity.  Fortunately, it remains 
easy to adopt the resource conservative IPv4 configurations supported by 
commodity routers when obtaining IPv6 connectivity.  Why should the IETF 
advocate an increased IPv4 use that lacks benefit once a network has 
been configured?


-Doug


Re: The point is to change it: Was: IPv4 depletion makes CNN

2010-06-02 Thread Douglas Otis

On 6/2/10 12:39 AM, Yoav Nir wrote:

Nice to hear just worked in the context of IPv6. Did your router give you 
just an IPv6 address, or also an IPv4 address?  If both, does the IPv6 address ever get 
anywhere on the Internet, or is it always NATted?
   
The router appears to use RFC3056; the external IP address was noticed 
to be in the 6to4 range when visiting APNIC.NET.


-Doug


Re: The point is to change it: Was: IPv4 depletion makes CNN

2010-06-01 Thread Douglas Otis

On 6/1/10 9:57 AM, Olivier MJ Crepin-Leblond wrote:

On 30/05/2010 23:52, Phillip Hallam-Baker wrote :
   

People are not going to use IPv6 if it takes the slightest effort on
their part. People are not going to switch their home networks over to
IPv6 if it means a single device on the network is going to stop
working. In my case it would cost me $4K to upgrade my 48 plotter to
an IPv6 capable system. No way is that going to happen till there are
$50 IPv6 plotters on EBay.
 

Sorry, but that's a red herring.
You're speaking about IPv4 decommissioning, not IPv6 implementation.
Implementing IPv6 will do nothing to your local plotter. Your computer
will keep addressing IPv4 to it.
Nothing stops you from always running dual stack at home, with your IPv4
behind your NAT/PAT.

Have you tried implementing IPv6 at home?
   
By accident, when solving a network drop-out problem within a congested 
wireless environment, installing an AirPort Extreme router also offered 
IPv6 over an IPv4 ISP.  Everything just worked.
When later changing providers, the cable modem needed extensive tweaking 
before everything worked, which then lowered throughput by about 35%.  
To overcome this, several commodity routers were tried, but they were 
unable to run DHCP once the modem's NAT was disabled.  Double NATs cause 
additional breakage.  Once again, the AirPort Extreme just worked.  This 
was learning the hard way, it seems.


Unless one is careful, one might find themselves using IPv6 without 
their knowledge, both globally and locally.  Capturing local traffic 
showed several applications already making use of the local IPv6 address 
space.   And I'd even wager that an IPv4 plotter would work,  since an 
HP IPv4 printer does.


-Doug


Re: TSV-DIR review of draft-hethmon-mcmurray-ftp-hosts-11.txt

2010-05-17 Thread Douglas Otis

On 5/17/10 10:06 AM, Joe Touch wrote:

My point here is that if you're discussing alternatives, you need to
address why this alternative was not used. There may be useful reasons
(in specific, using a separate command allows you to reuse some error
codes more usefully), but you're also incurring an extra round trip,
which people tend to count these days.
   

Joe,

The use of AUTH follows HOST and precedes USER and PASS.  Your 
suggestion of combining USER+HOST exposes USER.


International consensus was reached between the ISO, IETF, ITU, and 
UN/CEFACT on the use of UTF-8 for interoperability.  However, this draft 
requires Punycode input for international domain names.  While Punycode 
allows IDNs to be encoded using ASCII, Punycode is not intended as a 
human interface, nor is it interoperable with existing name services and 
certificate validation.  Distinguished names in certs are UTF-8 
encoded.  Local name services such as LLMNR, mDNS, or Bonjour resolve 
domain names using UTF-8 queries.


Don't assume that a responding server represents a valid server for the 
host.  More attention should be given to client compliance checks and 
the human interface.


-Doug


Re: Last Call: draft-hethmon-mcmurray-ftp-hosts (File Transfer Protocol HOST Command) to Proposed Standard

2010-05-13 Thread Douglas Otis

On 5/12/10 9:34 PM, John C Klensin wrote:

Doug,
Let's separate two issues.  One is whether or not this
particular proposal, with or without RFC 4217 (an existing
Proposed Standard), is appropriate.  If it is not, or cannot
exist in harmony with 4217, then it reinforces my view that it
should not be put on the Standards Track without a more
comprehensive examination in the context of existing FTP work
and proposals.
   
This draft describes server-side validations while leaving open host 
name compliance with X.509 distinguished names, and it limits input to 
Punycode rather than UTF-8 as specified by RFC3280.  Should people be 
expected to input ACE labels or make UTF-8/ACE-label conversions in both 
directions?  In addition, there remain many insecure implementations of 
FTP even without introducing these changes.  Use of FTPS, as suggested, 
is not normally part of any pre-configured FTP offering.  FTP also 
remains problematic as implemented in many browsers, where of course 
this proposed virtual-host FTP is likely attractive.


Section 3.1 states:

.---
[The HOST input] should normally be the same host name
that was used to obtain the IP address to which the FTP control
connection was made, after any client conversions have been completed
that convert an abbreviated or local alias to a complete (fully
qualified) domain name, but before resolving a DNS alias (owner of a
CNAME resource record) to its canonical name.

Internationalization of domain names is only supported through the use of
Puny-code as described in RFC3492.
'---

Does this meet reasonable expectations for IDN support?  Many local name 
services do not use Punycode, but UTF-8 instead.


The other is whether we should proceed with any FTP work at all.
Especially in the context of 4217 (you were aware of that when
you wrote your comments, weren't you), I find your remarks
completely unpersuasive.  One could reasonably argue that it is
time to establish a SASL binding for FTP (maybe it is; a WG
could figure that out), but I think it is hard to argue that FTP
generally is any worse from an authentication, authorization, or
privacy standpoint than any other protocol that we've protected
by running it over an encrypted tunnel.  YMMD, of course.
   
The scant coverage of client-related compliance suggests there is 
little interest in seeing security improved.  The crude introduction of 
host names makes security-related efforts more difficult to resolve.  In 
the meantime, FTP continues to represent a security hazard, despite the 
IETF's efforts.  This draft, in its current form, is unlikely to bring a 
positive change in security.  As such, it seems right to discourage 
changes that might interfere with efforts at achieving better 
implementations.  I share others' doubts that further efforts such as 
this will lead to improved security; instead they lend false impressions.


-Doug



Re: Last Call: draft-hethmon-mcmurray-ftp-hosts (File Transfer Protocol HOST Command) to Proposed Standard

2010-05-12 Thread Douglas Otis

On 5/12/10 9:38 AM, Joe Abley wrote:

On 2010-05-12, at 12:32, Paul Hoffman wrote:

   

The use of FTP dwarfs the use of SFTP by at least two orders of magnitude.
 

Sure.

To paraphrase my comment (or at least re-state it in a clearer way) from a 
protocol perspective, setting aside deficiencies in particular implementations, 
it seems more apropos to convey the message that FTP is inadequate in various 
ways, and to point towards some alternatives, than to imply (through the 
appearance of protocol maintenance) that FTP is alive and well and a good 
choice of protocol for new applications.
   
Agreed.  Use of plain-text authentication, even with a pretext of 
restricting directory views, lacks merit.  Most operating systems 
enforce directory access without dependence upon the accessing 
application.  Suggestions that, in effect, recommend FTP as maintaining 
security would be misleading, especially with many outstanding exploits 
still found in clients and browsers.


-Doug


Re: Last Call: draft-hethmon-mcmurray-ftp-hosts (File Transfer Protocol HOST Command) to Proposed Standard

2010-05-12 Thread Douglas Otis

On 5/12/10 2:39 PM, John C Klensin wrote:

 Others may disagree, but our success record when we tell people not
 to do something that works well for them and, in their opinion, poses
 no risks, is terrible.  All saying FTP is dead, go use something
 else can accomplish is to drive the work away from the IETF.


In this case, the IETF should say "use something more secure".  The 
proposed enhancement combines multiple hosts' credentials to avoid 
transparent techniques that could offer network isolation as well.  Your 
concern would be valid if there were also a commensurate effort at 
improving security.  Unfortunately, the opposite is true.


-Doug


Re: Towards consensus on document format

2010-03-16 Thread Douglas Otis

On 3/16/10 6:22 AM, Julian Reschke wrote:
Speaking of which: did we ever *measure* the acceptance of 
draft-hoffman-utf8-rfcs? As far as I recall, there was lots of support 
for it.

The draft expired at rev 5, but can be found at:
http://tools.ietf.org/html/draft-hoffman-utf8-rfcs

-Doug


Re: RFC archival format, was: Re: More liberal draft formatting standards required

2009-07-13 Thread Douglas Otis


On Jul 12, 2009, at 4:42 PM, Doug Ewell wrote:

This thread has been headed down the wrong path from the outset, as  
soon as Tony Hain wrote on July 1:


An alternative would be for some xml expert to fix xml2rfc to parse  
through the xml output of Word. If that happened, then the  
configuration options described in RFC 3285 would allow for wysiwyg  
editing, and I would update 3285 to reflect the xml output process.  
I realize that is a vendor specific option, but it happens to be a  
widely available one.


I modified that, along the course of the thread, to suggest that a  
separate word2rfc tool might be a more sensible option.


To the extent the .doc format is highly flexible -- which isn't  
really true anyway; it's been rather stable since 1997, and the new  
XML-based format is called .docx -- I can see that as an obstacle  
for someone writing such a conversion tool.  But I challenge anyone  
to find the slightest suggestion in this thread that we should  
publish IETF documents directly in Word format. Let's at least argue  
the same point, folks.


These concerns took your concept to a logical conclusion.  Notice the  
definition for sttbListNames in:

http://www.microsoft.com/interop/docs/OfficeBinaryFormats.mspx

Logically, rather than modifying the TCL xml2rfc code to interpret xml2rfc
structures embedded within Word structures, Visual Basic would be a more
likely tool, since it is already supported by the Word application.  To
view this support, double-click a control in Design Mode and see Word open
a Visual Basic editor.  Visual Basic provides access to ActiveX routines;
in 2007, additional content-based routines, along with custom XML storage
for the binary format, were added.  Although placing controls directly
into a Word document is not the norm (they print as graphics), these
controls could generate RFC-compliant outputs, and even bibliographic XML
fragments to assist in the generation of the bibliographic sections.  No
TCL code would be needed.  A less risky alternative to Word might be to
use Java with Open Office.


From the IETF perspective, in addition to the ASCII text files being
used as the archived form, xml2rfc files are retained to generate
alternative presentations and as input for the generation process.  The
concern related to the use of the Word input format, which changed in
97, 00, 02, 03, and 07, and is likely to change again in 10, remains
that of security.  Changes are not always apparent, and even format
documentation cannot be relied upon when details related to active
components are ill defined.  The security concern is in regard to the
embedded program language, especially when the program is to be relied
upon as the means to generate IETF-compliant outputs.  The Internet is
not a safe place, and a practice of embedding programs that can cause
harm into what could have been innocuous text should be considered a bad
practice.  Currently, collaboration between individuals might be
accomplished by sharing xml2rfc input files, which are also retained
with the plain-text RFC output.  Reliance upon Word input files as a
replacement for xml2rfc files will invariably lead to a bad practice of
depending upon potentially harmful embedded programs.


Use of xml2rfc conversions has uncovered some odd quirks.  The tool
does not cache bibliographic database selections: either it works
on-line, or the entire database needs to be local.  Not to diminish
the service offered by Carl Malamud, but occasional sporadic connections
to the xml.resource.org servers can be a cause of angst for authors
who have not obtained the entire tarred XML bibliographic database.
Lately, the dependability of the xml2rfc approach has become less
reliable when dealing with cryptic entries and the beta TCL needed to
generate the I-D boilerplate language required by the nits checker.

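As an aside, a small workaround for the caching quirk is possible: an
author can pre-fetch the citation entries so the tool resolves references
from a local directory rather than contacting xml.resource.org on every
run.  A minimal sketch in Python follows; the bibxml file layout and cache
path shown are assumptions, not anything the tool itself mandates.

# Sketch: pre-fetch bibxml citation entries into a local cache so the
# references can be resolved without contacting xml.resource.org during
# each run.  The URL layout and cache path are assumptions.
import os
import urllib.request

BIBXML_BASE = "http://xml.resource.org/public/rfc/bibxml"   # assumed layout
CACHE_DIR = os.path.expanduser("~/.cache/bibxml")

def fetch_reference(rfc_number):
    # Download (once) and return the local path of reference.RFC.NNNN.xml.
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = "reference.RFC.%04d.xml" % rfc_number
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):
        with urllib.request.urlopen("%s/%s" % (BIBXML_BASE, name)) as resp:
            data = resp.read()
        with open(path, "wb") as out:
            out.write(data)
    return path

if __name__ == "__main__":
    for rfc in (2223, 2629):
        print(fetch_reference(rfc))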

This makes one wonder whether there could be a better way.  A hybrid
approach might offer features similar to those of xml2rfc with the
simpler inputs supported by roff.  This would not exclude the use of
Word, but would not depend upon any of Word's content automations.
Perhaps a bit of Perl could provide the pre- and post-processors to
handle something that resembles the xml2rfc front section.  While roff
is not perfect, it has been more stable than WYSIWYG word processors
and, when used in conjunction with separate pre/post processors, can
generate the desired alternative outputs.


-Doug
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Avoid unknown code sources (was: Re: RFC archival format)

2009-07-08 Thread Douglas Otis


On Jul 7, 2009, at 9:19 PM, Doug Ewell wrote:


Douglas Otis dotis at mail dash abuse dot org wrote:

The concern is about the application accepting document  
instructions and text and then generating document output.  When  
this application is proprietary, it is prone to change where  
remedies might become expensive or impossible.


The implication is that open-source software is inherently stable  
and commercial software is inherently unstable.  That's hardly a  
safe across-the-board assumption.


When an application is open and the OS APIs change, recompilation  
often provides a remedy.  When the application is proprietary, whether  
patches become available as a remedy to an OS change depends upon the  
vendor, who might also expect some precondition be met.  This concern  
becomes somewhat more pressing when the same vendor controls both the  
application and the OS. :^(


The evolution in hardware tends to force the use of different  
operating systems which may no longer support older applications.


Tends to, may.  Sounds like FUD to me.  I haven't had any  
trouble using Word 2003 under XP to read documents I created in Word  
95 thirteen years ago.


Application APIs often exhibit security vulnerabilities.  Unforeseen
changes made to improve security will inevitably inhibit some
applications.  It was your good fortune that the application was
updated and made available at a reasonable price.  Backward-compatibility
modes to support older applications might lessen security, or simply
not function.  After all, due to security concerns, libraries are
continuously updated, sometimes in incompatible ways.


IIRC, I did work back in the early 90's that contained Russian  
written using Word 5.  Conversion proved difficult since  
proprietary fonts were needed.  Document recovery then required a  
fair amount of work to rediscover the structure and character  
mapping.  Trying to get any version of Word to generate plain text  
outputs consistently always seemed to be a PITA, that varied from  
version to version, and never seemed worth the effort.


All work involving Cyrillic text was hit-and-miss fifteen years ago.  
Every word processor or other application had its own custom format.  
Many used KOI8-R, some used CP866 (or worse, CP855), a few used ISO  
8859-5.  PDF files depended entirely on the embedded font to convey  
meaning; copy-and-paste from PDF was useless.  Compatibility  
problems in the era before widespread Unicode adoption were hardly  
limited to Word.


Not having source available significantly increased recovery efforts,  
nor could this effort be shared per the EULA. :^(


When people are required to input Word Document instructions into  
their Word application, they might become exposed to system  
security issues as well.


Might be.  More FUD over security.  Has anyone suggested  
*requiring* users to employ mail-merge-type macros as part of I-D  
preparation, or is this just a general flame against Word?


The concern about what might be embedded within documents was not in
regard to simple macros, but a programming language capable of
compromising the operating system.  The concern was voiced in
opposition to suggestions for using Word input files as a means to
generate inputs for I-D or RFC generation utilities.  Of course,
collaborators are likely to share these input documents as well.
Sharing potentially hazardous input files among what are often virtual
strangers represents a bad practice with respect to security.  This is
no different than warning users not to click on greeting-card.exe
email attachments.


The variability of the Word data structures makes identifying  
security threats fairly difficult, where many missing features  
seem to be an intended imposition as a means to necessitate use of  
the vendor's macro language.


Translation: I don't like Microsoft.


IMHO, unnecessary risks are being taken with respect to code having
unknown origins.  In other words, this is an argument about ensuring
people are able to recognize the gun before pulling its trigger.  As a
result of their iFrame innovation and the inevitable clickjacking,
websites now need to inhibit iFrames with X-Frame-Options (supported
by IE 8 and NoScript).  Users are also warned to disable this feature
within their browsers.


WMA files with .mp3 extensions will launch and prompt with a system  
pop-up for the installation of OS extensions obtained from unknown  
locations hidden within the mislabeled files.  Often users mistakenly  
trust messages that appear generated by the OS.  In view of mistaken  
trust, why are document related exploits low on a list of concerns  
when discussing the generation of archival formats?  Why call this  
FUD?  There is nothing uncertain about the concern.


Inherent security issues alone should disqualify use of proprietary  
applications.


Hey, maybe if I say the word security enough times, people will  
get

Re: RFC archival format, was: Re: More liberal draft formatting standards required

2009-07-06 Thread Douglas Otis


On Jul 3, 2009, at 3:16 PM, Doug Ewell wrote:


Douglas Otis dotis at mail dash abuse dot org wrote:

Reliance upon open source tools ensures the original RFCs and ID  
can be maintained by others, without confronting unresolvable  
compatibility issues.


Whether a tool is open source or not has nothing to do with how many  
people know how to use it.  Are you talking about maintainability of  
the documents or of the tools?


The concern is about the application accepting document instructions  
and text and then generating document output.  When this application  
is proprietary, it is prone to change where remedies might become  
expensive or impossible.  The evolution in hardware tends to force the  
use of different operating systems which may no longer support older  
applications.


IIRC, I did work back in the early 90's that contained Russian written  
using Word 5.  Conversion proved difficult since proprietary fonts  
were needed.  Document recovery then required a fair amount of work to  
rediscover the structure and character mapping.  Trying to get any  
version of Word to generate plain text outputs consistently always  
seemed to be a PITA, that varied from version to version, and never  
seemed worth the effort.


It would also be a bad practice to rely upon unstable proprietary  
formats having limited OS support and significant security issues.


Oh, stop.  Word 2007 can read and save Word 97 documents.


Instead of 10 years, go back another 5 years.  When people are  
required to input Word Document instructions into their Word  
application, they might become exposed to system security issues as  
well.  The variability of the Word data structures makes identifying  
security threats fairly difficult, where many missing features seem  
to be an intended imposition as a means to necessitate use of the  
vendor's macro language.  Inherent security issues alone should  
disqualify use of proprietary applications.


 Applications for Windows, which has a 90% to 93% desktop market  
share, can hardly be said to suffer from limited OS support.


When support is almost exclusively Windows, this still represents  
limited support.   It would be sending the wrong message to mandate  
the use of proprietary operating systems or applications in order to  
participate in IETF efforts.  After all, lax security often found  
within proprietary operating systems and applications threatens the  
Internet.


And turning off macros is becoming more and more common among Word  
users; it's even a separate non-default document format under Word  
2007.


The many automation features fulfilled by TCL and xml2rfc will likely  
be attempted with the native word processor scripts.   The latest  
Word, if you can afford it, is almost ISO/IEC 29500:2008 Office Open  
XML compliant.  Perhaps Word will be compliant in its 2010 version. :^(


I know The Penguin doesn't like the fact that Word is closed-source,  
but -- like the multiple discussions being lumped under RFC  
archival format -- we need to separate that issue from questions of  
whether the app is any good.  And if we're talking about an author  
using Word (or TextPad or roff or whatever) to pre-process a file  
into an RFC Editor-friendly format, which can then be converted to  
traditional RFC text or HTML or PDF or something, then isn't the  
horror of using Word limited to that author?


Open source includes more than just Linux, and the exposure created by
requiring proprietary applications or operating systems would affect
nearly all IETF participants who maintain existing documents or
generate new ones.


-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: RFC archival format, was: Re: More liberal draft formatting standards required

2009-07-03 Thread Douglas Otis


On Jul 3, 2009, at 8:07 AM, Doug Ewell wrote:

As always when this discussion occurs, there are at least three  
different issues swirling around:


1.  ASCII-only vs. UTF-8
2.  Plain text vs. higher-level formatting, for text flow and  
readability

3.  Whether it is a good idea to include high-quality pictures in RFCs

There are not the same issue, and it would help combatants on both  
sides not to mix them up.


I don't know where the argument don't help authors prepare I-Ds  
using the tools of their choice, unless they are open-source fits  
into this picture.


Perhaps some of these difficulties can be remedied by allowing use of
RFC 2223, perhaps with the extensions of RFC 2346.  What is likely
missing are automation tools able to accept this original publication
practice.  This approach, allowing PostScript, HTML, and PDF output, has
not kept pace with the automation provided by the combination of TCL
code and XML formats detailed in RFC 2629.  If there is interest in
revisiting the use of roff and standardizing preprocessors similar to
those of xml2rfc, it should not take much effort to include these
techniques as a means to extend what can be included within an I-D and
RFC.  For this not to create too many problems, RFC 2223 should be updated.


Reliance upon open source tools ensures the original RFCs and ID can  
be maintained by others, without confronting unresolvable  
compatibility issues.  It would also be a bad practice to rely upon  
unstable proprietary formats having limited OS support and significant  
security issues.


-Doug 
 
___

Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: More liberal draft formatting standards required

2009-07-02 Thread Douglas Otis


On Jul 2, 2009, at 9:22 AM, Marshall Eubanks wrote:



On Jul 2, 2009, at 12:16 PM, Ted Hardie wrote:


At 10:19 PM -0700 7/1/09, Douglas Otis wrote:
for wanting more than just plain text documents is to permit  
inclusion of charts, graphs, and tables, for a visual society


It seems to me that where this discussion has faltered before is on  
whether this is, in fact, a requirement.


You are exactly correct, and I can recall several interminable  
discussions of this.


To save time, I would suggest adopting the Patent Office rules on  
Perpetual Motion. People advocating for a change to facilitate  
figures (or to allow complicated math, such as tensor analysis)  
should have an existence proof, i.e., a document that requires the  
change to be published. (A document that left the IETF to be  
published elsewhere for this reason would also do.)


What appears to be missed in these conversations is a dissatisfaction
with the generation tools and output quality, which is easily shared.
There is good reason to avoid closed-source generation tools; however,
the IETF has already employed and permitted the use of roff inputs and
outputs, which appear to offer a reasonable means to satisfy the many
requirements already in place.


A suggestion to use Word XML outputs as a means of providing WYSIWYG
operation misses what is currently in place within xml2rfc to generate
tables, state diagrams, and graphs.  Yes, these elements are _currently_
contained within existing RFCs, but in ASCII form.  Even though these
elements are structured using ASCII, textual processing must still
accommodate special handling of these clumsy visual elements.


Although I am not blind, the simple instructions required by roff
tools should allow the visually impaired a superior means of
understanding the intent of visual graphics, rather than guessing what
a series of whitespace and characters is attempting to convey within
diagrams or equations.


In addition, there are currently several RFCs already created using  
roff, as were my first attempts at writing I-Ds.  Due to IETF's  
current level of support for xml2rfc, this mode of input now offers an  
easier means to generate acceptable output.  IMHO, roff tools can  
still offer higher quality output that is more compatible with various  
presentation media than outputs generated from xml2rfc.


Perhaps the IETF may wish to better retain the older roff methods by
offering better boilerplate and processing support for this currently
acceptable method of generating I-Ds and RFCs.  A wiki-style web page
with IETF custom roff preprocessors could facilitate roff inputs
for the creation of I-D and RFC documents.  The availability of
roff-to-HTML output should also make creating previews possible, as a
type of iterative WYSIWYG mode of creation.  This would be no different
than the steps used with xml2rfc.


IIRC, .ps generated from roff tools are still acceptable inputs as  
well, although I expect the current publishing automation is likely to  
balk at output from these older methods.  Too bad though.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: More liberal draft formatting standards required

2009-07-01 Thread Douglas Otis


On Jul 1, 2009, at 10:58 AM, Tony Hain wrote:

An alternative would be for some xml expert to fix xml2rfc to parse  
through the xml output of Word. If that happened, then the  
configuration options described in RFC 3285 would allow for wysiwyg  
editing, and I would update 3285 to reflect the xml output process.  
I realize that is a vendor specific option, but it happens to be a  
widely available one.



One reason for wanting more than just plain-text documents is to permit
the inclusion of charts, graphs, and tables for a visual society.  A safe
way to provide this type of output using stable open-source code would
be with the roff preprocessors, such as eqn, pic, and tbl.

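As a rough illustration of that pipeline, a small script could emit tbl
source for a simple table and run it through groff to obtain PostScript.
This is only a hypothetical sketch: the tbl markup and the groff -t/-Tps
invocation are standard usage, but the wrapper itself is an example, not
an existing tool.

# Sketch: emit tbl markup for a small two-column table and pipe it
# through groff (-t runs the tbl preprocessor, -Tps selects PostScript).
import subprocess

def table_to_tbl(rows):
    # Render (preprocessor, purpose) rows as tbl source.
    lines = [".TS", "allbox;", "l l.", "Preprocessor\tHandles"]
    lines += ["%s\t%s" % (name, purpose) for name, purpose in rows]
    lines.append(".TE")
    return "\n".join(lines) + "\n"

def render_postscript(tbl_source):
    # Return the PostScript produced by groff for the given tbl source.
    result = subprocess.run(["groff", "-t", "-Tps"],
                            input=tbl_source.encode("ascii"),
                            stdout=subprocess.PIPE, check=True)
    return result.stdout

if __name__ == "__main__":
    src = table_to_tbl([("eqn", "equations"),
                        ("pic", "diagrams"),
                        ("tbl", "tables")])
    with open("table.ps", "wb") as out:
        out.write(render_postscript(src))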

Word's closed code is continuously changing.  Availability of this
closed application depends upon OS compatibility and version
regressions; both are moving targets.  In addition, Word formats
permit inclusion of potentially destructive scripts within highly
flexible and obfuscating structures.  troff normally outputs .ps and
is supported by various utilities, such as X viewers, in open source.
Unix-based OSs like OS X and Linux still support this document format,
and grohtml can generate HTML output.  The disadvantage of the roff
approach has been a lack of IETF boilerplate preprocessors.  XML
structures could be merged with powerful roff presentation utilities
to generate IETF documents.


In many respects, roff offers simpler and proven stable formats.  The
present xml2rfc utilities do not offer WYSIWYG.  By combining custom
preprocessors and visualization utilities, the roff suite offers greater
security, stability, and versatility for all OSes and media
presentation types, along with iterative WYSIWYG modes of operation.
There would be little difference between using roff tools and using
xml2rfc; however, the results could show a marked visual improvement.
A desire for security might even foster a resurgence in roff tools to
leverage proven and stable document generation.


Everything old is new again.

-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: DNS over SCTP

2009-05-29 Thread Douglas Otis


On May 29, 2009, at 7:33 AM, Francis Dupont wrote:

I don't understand your argument: it seems to apply to UDP over SCTP  
but here we have SCTP over UDP.  BTW the easiest way to convert DNS  
over UDP into DNS over SCTP is to use an ALG (application layer  
gateway) which in the DNS is known as a caching server (such servers  
are already used to provide IPv4/IPv6 transport conversion).


The goal is to apply the SCTP protocol as a means to better protect
DNS from source spoofing, resource exhaustion, reflected-attack
exploitation, and increased latency.  SCTP in any form does not
prevent deployment of DNSSEC.  SCTP might even facilitate DNSSEC
better than EDNS0.  Use of DNS over SCTP, even when tunneled over UDP,
should be explored.  The DDoS risks related to cached macros were
presented to various DNS WGs and forums.  Unfortunately, this DNS
issue earned little respect from the proponents of the protocol using
macros and extensive record chaining.  The prevalent response was to
declare DNS broken by pointing to other aspects of DNS at risk.  SCTP
seems a reasonable solution in the face of this neglect.  Problems are
likely to grow much faster than adoption of DNSSEC.  In fact, adoption
of DNSSEC may make some aspects of DDoS exploitation worse.


-Doug
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: DNS over SCTP

2009-05-28 Thread Douglas Otis


On May 28, 2009, at 9:45 AM, David Conrad wrote:


On May 28, 2009, at 5:47 AM, Alessandro Vesely wrote:
I don't trust the data because it is signed, I trust it because the  
signature proves that it originated from the authoritative server.


Not quite.  The signature over the data proves that the holder of  
the private key has signed the data.  The origin of that data then  
becomes irrelevant.


This discussion started by describing how an authorization protocol
might utilize macros embedded within a DNS cache to stage relatively
free DDoS attacks, all of which would be made worse by DNSSEC.
Preventing DNS poisoning was also a concern expressed, which is likely
to go hand in hand with the DNS-enabled attack.  Since DNS is normally
connectionless, security solutions like SSL have been dismissed.
While DNSSEC may protect against data corruption, such protection
depends upon the thorny problem of key verification being solved in a
practical and politically acceptable manner.  This protection also
requires authoritative servers to rapidly adopt DNSSEC without
confronting other insurmountable deployment issues.  Fool me once,
shame on you.  Fool me twice...


Therefore, if I'm connected with the authoritative server over a  
trusted channel, I can trust the data even if it isn't signed.


Not really.  You are relying on the fact that the authoritative  
server and (potentially) the channels it uses to communicate to the  
originator of the data have not been compromised.


Assume SCTP becomes generally available as a preferred transport for  
DNS.  If so, an ability to corrupt DNS information would be greatly  
reduced, whether data is signed or not.  In addition, SCTP can safely  
carry larger signed results without the DDoS concerns that will exist  
for either TCP or EDNS0 over UDP.  Deploying DNS on SCTP should be  
possible in parallel with the DNSSEC effort.


By induction, if a resolver only uses either signed data or trusted  
channels, I can trust it.


A trusted channel is superfluous when the data is signed.


Receiving signed data represents just a fraction of the challenges  
facing DNSSEC. :^(


The limitations in TCP or SCTP security stem from an attacker's  
ability to compromise one or more routers, so as to either tamper  
with the packets on the fly, or redirect them to some other host.  
That's much more difficult than forging the source address of an  
UDP packet, though.


True, but object security removes even the residual risk of channel  
compromise (e.g., a compromised router).


However, pragmatically speaking, I suspect it is going to be much,  
much easier to get DNSSEC deployed than it would be to get every  
router/firewall/NAT manufacturer and network operator to support/ 
deploy SCTP, not to mention getting every DNSSEC server to support  
DNS over SCTP.


While TCP represents a possible fallback whenever a UDP response
overflows, TCP is not assured.  "Low prevalence," rather than
"seldom," might better describe TCP use in DNS.  In addition, DNS
servers prefer UDP over TCP when resources become scarce.  TCP produces
greater latency, requires more back-and-forth exchanges, and strands
resources whenever confronting spoofed connection attempts.  While
EDNS0 allows UDP to carry larger signed packets, this also increases
UDP's exposure to reflected attacks that leverage the brute strength of DNS.


On the other hand, SCTP defers reserving resources until a request is
confirmed by a returned cookie, which also allows data to be exchanged
sooner than would be possible with TCP.  Unlike TCP, SCTP carries
chunks over multiple streams rather than non-delineated bytes over a
single stream.  SCTP connections consume minimal resources and can
sustain longer sparse associations.  SCTP also tunnels over UDP to
provide compatibility with legacy NATs and firewalls.  SCTP might soon
become popular with browsers due to its inherent improvements in
security and performance over TCP.  A solid SCTP stack is now
available in FreeBSD with corporate-friendly source licenses. :^)

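To make the comparison concrete, here is a minimal sketch (Python) of a
DNS query carried over an SCTP association.  It assumes a kernel with
SCTP support and a resolver willing to accept SCTP on port 53, and it
simply borrows the two-byte length framing of DNS over TCP (RFC 1035,
section 4.2.2), since no DNS-over-SCTP framing has been standardized;
the server address used is only a placeholder.

# Sketch: DNS query over an SCTP association (one-to-one style socket).
import socket
import struct

def build_query(name, qtype=1, qid=0x1234):
    # Minimal DNS query: header with RD set, one question, QCLASS IN.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

def query_over_sctp(server, name):
    query = build_query(name)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                         socket.IPPROTO_SCTP)
    try:
        sock.connect((server, 53))
        # Two-byte length prefix, borrowed from DNS over TCP framing.
        sock.sendall(struct.pack("!H", len(query)) + query)
        length = struct.unpack("!H", sock.recv(2))[0]
        return sock.recv(length)  # a real client would loop until complete
    finally:
        sock.close()

if __name__ == "__main__":
    print(len(query_over_sctp("192.0.2.1", "www.example.com")), "bytes")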

If there is one lesson that should have been learned from the DNSSEC
effort, it is that resolving DNS problems will require dedicated
long-term planning.  Within the same timeframe as DNSSEC, SCTP has been
able to provide reliable and safe transport.  You might be using SCTP
whenever you make a phone call or watch your TV.  It seems that the
telephone, more than the Internet, is what people expect to just work.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: IETF 78 Annoucement

2009-05-26 Thread Douglas Otis


On May 25, 2009, at 4:56 PM, John C Klensin wrote:

With a train, you have to pick the correct train, and then leave  
the train at the correct stop. A bit more complicated to be honest.  
By interacting with people, you often can handle the most  
complicated train ride, but yes, it might be more complicated with  
train.


Complication that, in many cases, is severely complicated by being  
tired, exhausted, and out of focus from a long flight.


It could be that there are only two more generations able to travel
extensively by jet airplane.  Perhaps zeppelin travel will return.
Trains are about 5 times more energy efficient than planes, and about
3 times more efficient than autos.  While there is little safety
difference between planes and trains, there is a significant difference
between either of those and autos.  Dealing with trains is also much
less common for those in North America than for those from other
locales.  I agree that making sense of train schedules, and at times
even knowing which direction the train should be heading in order to
find the correct track, can be challenging.  Patrik is right.
Sometimes you need to ask. :^)


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: SS7

2009-05-20 Thread Douglas Otis

Jim,

Telcos have a goal of maintaining five nines, and SCTP was developed to
carry their SS7 protocol.  The FreeBSD stack for SCTP supports IPv4,
IPv6, and multi-homing.  This protocol immediately recovers from
network path failures, as it heartbeats against alternative paths.  The
protocol can sustain a large number of connections over extended
periods, takes less time for connection setup than TCP, and overall
latency can be less than UDP while still supporting partial reliability.


SCTP has improved error detection suitable for jumbo frames.  In
addition, its CRC32c error-checking algorithm is now an instruction in
Intel Core i7 processors and is found in many NICs.  The SCTP connection
setup returns a cookie to minimize resource-exhaustion concerns and to
guard against source-address spoofing, connection hijacking, and DDoS
attacks.

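For reference, here is a minimal bit-by-bit sketch of that CRC32c
(Castagnoli) algorithm in Python, handy for cross-checking a NIC or
library implementation; the check value in the comment is the commonly
cited test vector.

# Sketch: CRC32c (Castagnoli), the checksum SCTP uses, computed bit by
# bit with the reflected polynomial 0x82F63B78.
# Commonly cited check value: crc32c(b"123456789") == 0xE3069283.
def crc32c(data):
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    print(hex(crc32c(b"123456789")))  # expect 0xe3069283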

http://www.iec.org/online/tutorials/ss7_over/index.asp

-Doug

On May 20, 2009, at 6:04 AM, jhan...@redcom.com wrote:


Dear Sir or Madam,

Is there a working group on converging legacy protocol stacks and
mapping legacy stacks' addressing mechanisms into IPv6 addressing?
For example, RFC 4291 and RFC 4038 specify that ::ffff:x.y.z.w
represents IPv4 node x.y.z.w; is there a similar draft concept for
specifying a Message Transfer Part point code for an SS7 stack, or
other telephony protocol network layer mechanisms, into IPv6 addresses?

Thanks much,
-Jim Hanley



Jim Hanley, Software Engineer
REDCOM Laboratories, Inc.
1 Redcom Cntr, Victor, NY 14564, USA
T +1.585.924.7550   F +1.585.924.6547
jim_han...@redcom.com
http://www.redcom.com/



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Subscriptions to ietf-honest

2009-03-24 Thread Douglas Otis


On Mar 23, 2009, at 3:27 PM, Dave CROCKER wrote:




Steven M. Bellovin wrote:
It's happened to me twice, with two different lists of his.  I've  
complained to him, but to no avail.  I wonder if the CAN SPAM act  
applies.


IANAL but my impression is that it definitely does apply, possibly  
multiply and possibly even with sanctions.  As noted, this is a  
relatively tricky topic, but I am still pretty sure he goes far  
beyond the limits it defines.


http://uscode.house.gov/download/pls/15C103.txt

The term commercial electronic mail message means any electronic  
mail message the primary purpose of which is the commercial  
advertisement or promotion of a commercial product or service  
(including content on an Internet website operated for a commercial  
purpose).  The term commercial electronic mail message does not  
include a transactional or relationship message.


Transactional or relationships include:
a subscription, membership, account, loan, or comparable ongoing  
commercial relationship involving the ongoing purchase or use by the  
recipient of products or services offered by the sender;  (iv) to  
provide information directly related to an employment relationship or  
related benefit plan in which the recipient is currently involved,  
participating, or enrolled;


where:
 It is the sense of Congress that -
(1) Spam has become the method of choice for those who distribute  
pornography, perpetrate fraudulent schemes, and introduce viruses,  
worms, and Trojan horses into personal and business computer systems;  
and
(2) the Department of Justice should use all existing law enforcement  
tools to investigate and prosecute those who send bulk commercial e- 
mail to facilitate the commission of Federal crimes, including the  
tools contained in chapters 47 and 63 of title 18 (relating to fraud  
and false statements); chapter 71 of title 18 (relating to obscenity);  
chapter 110 of title 18 (relating to the sexual exploitation of  
children); and chapter 95 of title 18 (relating to racketeering), as  
appropriate.


CAN-SPAM also limits legal standing to network providers and law  
enforcement.


Since the IETF distributes the email addresses of subscribers rather
than obscuring them, using email addresses obtained from received
messages that relate to some ongoing issue would not be harvesting.
It is not uncommon even to see emails that ask why you unsubscribed.
As long as the email relates to a prior relationship, it would be
difficult to make a strong case, especially when the IETF is complicit
in the distribution of the email addresses.  One might even ask why
these email addresses are included if it would be illegal to respond
to them.


Unless the emails are deceptive in some way, CAN-SPAM does not seem to  
apply.  Perhaps the IETF may reconsider obscuring email-addresses.


-Doug

 
 
___

Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Comments requested on recent appeal to the IESG

2009-03-02 Thread Douglas Otis


On Feb 28, 2009, at 4:14 AM, Alessandro Vesely wrote:


Douglas Otis wrote:


The safety of an assumption about an authorizing domain originating  
a message depends upon the reputation of the SMTP client for its  
protection of the PRA and Mail From.  Unfortunately, identifiers  
for the SMTP client have been omitted.  It appears ESPs have agreed  
not hold SMTP clients accountable to reduce ESP support calls.   
Having some of their customer's domains affected by a security  
breach is preferred over all customer's domains being affected  
(even when it would be appropriate for all domains).


However, choosing an ESP should involve more thinking. There are  
several reasons why end users need to trust their ESPs. If your ESP  
were malicious, they could play any sort of trick, probably also  
spoiling your bank account. Thus, I think end users should trust  
their ESPs not less than they trust their banks.


Not including the SMTP client identifiers in the Authentication-Results
header prevents an SMTP client's protection of authorization references
from being directly vetted by the recipient's MUA.  Without any
evidence of such annotation-specific vetting in the
Authentication-Results header, trust that it occurred at the receiving
ESP would be misplaced.  The Authentication-Results header reports the
receiving ESP's implementation of authorization mechanisms where, due
to the issues discussed, these mechanisms may not accurately resolve
the intended authorizations.  Direct vetting by the MUA of the SMTP
client's protection of authorization references is not prone to
authorization errors, and therefore better protects users.


As far as I understand it, users should configure the MUA giving the  
domain of their ESP and telling it to trust auth-res headers from  
that domain only. The ESP is obviously responsible to avoid that  
counterfeit auth-res records signed by the ESP's domain are  
already present in the original headers. (This isn't foolproof,  
though: users who transfer their mail via, say, gmail's pop3 client,  
may be tempted to trust their original domain's records, which might  
be written by malicious authors who send to gmail directly.)


The concern does not involve the filtering of spoofed
Authentication-Results headers, which of course must be guarded against
as a new threat.



[...]
While MUAs might prompt users to enter the domains they wish to  
trust, this entry would be rather perilous.  The exclusive use of  
the MAIL FROM or the PRA is not the norm, so it would be foolhardy  
to assume otherwise.  MUAs should obtain an IP address list of SMTP  
clients vetted by either an organization or a community.


I'm sorry I elided the rest of the discussion, but the last point  
you make IMHO summarizes much of the misunderstandings about  
reporting the SMTP client's IP address. Shouldn't users delegate to  
their ESPs the choice of an organization or community who maintains  
such vetted lists of SMTP clients?


Receiving ESPs struggle to ensure legitimate email is not lost, which  
includes email where the protection of authorization identifiers may  
not be assured.  The role played by the Authentication-Results header  
is to support the annotation of a message's origination, but it omits  
essential information required to vet the SMTP client's protection of  
the authorization identifiers, especially when authorization  
references are in doubt.  Annotating the origination of a message  
should include the vetting of the SMTP client's protection of the  
related identities.  This vetting goes beyond whether a message should  
be delivered or not, which is the decision normally made by ESPs.   
Without additional specific vetting, messages should not be annotated;
otherwise, recipients are exposed to being deceived by bad actors.


In case they should, then the whole job of authenticating the SMTP  
client can be carried out by the MTA, and communicating the address  
is useless. Maintaining such vetted lists formally (if not  
semantically) resembles the work of DNSBLs, which is why I thought  
that an addition like

  Authentication-Results: example.com;
dnsbl=pass zone=zen.spamhaus.org address=192.0.2.3
would have been equivalent to your proposal.


It is important that the IP address of the SMTP client be included  
with any authorization mechanism to allow essential annotation  
specific vetting.  Annotation vetting goes beyond what is required  
when deciding to accept messages.


The proposal was:

  Authentication-Results: trusted-isp.net;
senderid=pass header.from=192.0.2.3@example.com

Rather than the current specifications:

Authentication-Results: trusted-isp.net;
senderid=pass header.from=example.com

There is no reason to repeat the local-part within the results header,
since Sender-ID indicates the local-part is assumed valid and Mail From
allows only a single email address.  This change should not be
problematic

Re: Comments requested on recent appeal to the IESG

2009-02-26 Thread Douglas Otis


On Feb 26, 2009, at 10:47 AM, Alessandro Vesely wrote:


Douglas Otis wrote:


3) Separate SMTP clients share the same IP addresses.   
(Unfortunately this is also a common practice.  Brazil, Poland, and  
other places have many ISPs that expect hundreds or thousands of  
customers to run outbound SMTP services that NAT to a single or  
small number of IP addresses.  It is also common for VPS services  
to run servers out of common IP addresses.)


The domain that operates the NAT is responsible for letting their  
users connect to port 25. Operating through a NAT may disrupt an  
inner site's activity, if some of its co-NAT sites behave badly.  
Again, recipients can mark this weak authentication characteristic  
by domain name.


Alessandro,

Tens of thousands of domains might use the same NATed addresses  
offered by a network carrier.  For authorization mechanisms, if the  
SMTP client IP address was included within the Authentication-Results  
header, then messages emerging from known NATed addresses could be  
easily identified, and appropriately they would not receive  
annotations as having originated from an authorizing domain.  The  
proactive protections afforded by inclusion of the SMTP client IP  
address would be substantial.


There are hundreds of thousands of legitimate SMTP clients, in
comparison to hundreds of millions of domains.  This sizable
difference makes SMTP-client-based annotation assessment a thousand
times more comprehensive.  It is already very difficult to detect
spear-phishing events.  Basing assessments only upon events evidenced
by the domain, rather than by the SMTP client IP address, means this
insidious activity is more likely to go on unabated.


4) Authorization results that are based upon a virtual record when  
no SPF record is found.  (Injecting SPF mx/24 record mechanisms  
whenever no SPF is discovered is also a common practice.)


That's totally bogus. I may accept a bug in my ISP's SPF checking  
software, but deliberately falsifying the data requires that  
trusted-isp.com be removed from my set of trustees.


Customers of such providers will still desire their email to be  
annotated.  They will be asked to enter the name of their provider  
into the MUA while being unaware of any underlying SPF record  
guesswork.  Allowing the MUA a means to check the status of the SMTP  
client reduces risks created by authorization guesswork or a  
provider's acceptance requirements.  Judging whether to annotate is a  
much different decision from deciding whether to accept a message.

...

It can only be said that a domain *authorized* the SMTP client.


Yes, the client is authorized to say it sends on behalf of that  
domain: it is a user of that domain by definition.


This statement is not correct.  This is why it is so wrong to confuse  
*authorization* with *authentication*.


A package delivered by Fred bearing Sam's return address where Sam  
references Fred as being *authorized* does not represent an assurance  
that the package is really from Sam.  Only when Fred has a reputation  
of always checking sender identities against return addresses can  
there be any assurance that the return address is valid.  In addition,  
an authorization of Fred by Sam should not be considered a guarantee  
that Fred always checks sender identities against the return  
addresses.  An authorization only means packages delivered by  
unauthorized carriers should be considered suspect.  This philosophy  
is often the basis for publishing SPF records when dealing with back- 
scatter.  The reputation most relevant as to whether a return address  
might be valid is that of the carrier Fred.

...
I doubt that displaying 192.0.2.3@example.com will make the
recipient safer.
The alternative might be h...@example.com is used with a message  
asking you to complete your W2 form at a specific web-site.
Unfortunately, the MUA is unable to check whether 192.0.2.3 has a  
recent history of spoofing domains.  Employees of example.com may  
be unaware of the limitations imposed by their outsourced email  
service, or may have created records intended to help ensure  
message delivery. Bad actors can now leverage any righteous  
assumptions that border MTAs might make to produce extremely  
convincing socially engineered attacks.  See items 1- 6.  These  
attacks are made easier whenever *authorization* is erroneously  
elevated into being considered *authentication* without adequate  
safeguards being provided.  Safeguards based only upon a domain  
will not isolate the services culpable for domain spoofing and will  
inhibit an effective response to SMTP client exploits.


With the possible exception of some guys of the IT department, I  
think no employee of example.com can tell if 192.0.2.3 is or is not  
the IP address of their ESP.


Per the draft, the MUA is expected to check the reputation of the  
authenticated origin identity (this would be the SMTP client IP

Re: Withdraw of [rt.amsl.com #13277]: Authentication-Results Header Field Appeal

2009-02-26 Thread Douglas Otis


On Feb 25, 2009, at 11:42 PM, Murray S. Kucherawy wrote:


Doug,

On Wed, 25 Feb 2009 00:10:21 -0800, Doug Otis wrote:
The Sender-Header-Auth draft clouds what should be clear and  
concise concepts. Organizations like Google have already remedied  
many of the security concerns through inclusion of free form  
comments.


For the sake of being thorough, I looked into this.  A lead mail  
engineer at Gmail (I assume you're referencing Gmail and not  
Google's internal mail) tells me their inclusion of the relaying IP  
address as a comment in their Authentication-Results header fields  
has nothing to do with any sort of remedy in reference to any  
concerns they have about the specification.  It is for use by some  
other internal processes (which he was not at liberty to discuss  
further).


This overlooks their acknowledgment that SMTP client IP address
information is useful, even if for undisclosed reasons.  Even as a
comment, the IP address can be matched with a regex and confirmed
against IP addresses found elsewhere, a remedy for defeating spoofed
headers holding bogus IP addresses.

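As a sketch of that comment-based remedy (the comment layout shown is
hypothetical; the thread only establishes that some receivers place the
relaying IP in a free-form comment), a consumer could extract the
address like this:

# Sketch: extract a relaying IP address from an Authentication-Results
# comment.  The comment layout is hypothetical; the thread only notes
# that some receivers add the client IP as a free-form comment.
import re

AR_IP_COMMENT = re.compile(r"\((?:[^()]*\s)?(\d{1,3}(?:\.\d{1,3}){3})[^()]*\)")

def client_ip_from_auth_results(header_value):
    # Return the first IPv4 address found inside a comment, if any.
    match = AR_IP_COMMENT.search(header_value)
    return match.group(1) if match else None

if __name__ == "__main__":
    example = ("mx.example.net; spf=pass (sender IP is 192.0.2.3) "
               "smtp.mail=example.com")
    print(client_ip_from_auth_results(example))   # -> 192.0.2.3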


Since you cited a plurality, do you have any other specific examples?


Unfortunately, the other major DKIM provider, Yahoo!, does not offer
this feature.  Your question seems aimed at ensuring the ESP wagons are
fully circled.  The draft omits information that is essential for
checking whether a message source represents that of a NAT, for
example.  This is not about whether to accept a message, which is where
the reputation of the domain might matter; this is about determining
whether the *authorized* client is known to protect the message
elements used to reference the authorizations.  The
Authentication-Results header is not about which messages are to be
rejected; this header is about which results are safe to annotate.


-Doug
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Withdraw of [rt.amsl.com #13277]: Authentication-Results Header Field Appeal

2009-02-25 Thread Douglas Otis
The appeal of the Authentication-Results header draft is reluctantly  
being withdrawn.  While this draft confuses authorization with  
authentication, it is being withdrawn in the hope that subsequent Best  
Current Practices will soon remedy the short-comings noted by the  
appeal.  This withdrawal is being done to better expedite adoption of  
the header, while at the same time recognizing the severe security  
deficiencies the current definition of this header imposes.


The Sender-Header-Auth draft clouds what should be clear and concise
concepts.  Organizations like Google have already remedied many of the
security concerns through the inclusion of free-form comments.
Unfortunately, comments are not a good vehicle for standardization,
but perhaps some form of extension will soon provide a standardized
means to introduce the vitally important SMTP client IP addresses.  The
appeal was not taken lightly, but feedback from those within the email
community appears to indicate a willingness to adopt this header standard.


Douglas Otis and Dave Rand
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Comments requested on recent appeal to the IESG

2009-02-25 Thread Douglas Otis

Doug Otis wrote:
Since *authorization* does not *authenticate* a domain as having  
originated a message, this leaves just the IP address of the SMTP  
client as a weakly authenticated origin identifier.  The IP  
address of the SMTP client is the input for Sender-ID or SPF  
*authorization* mechanisms.


Hm... the IP address is authenticated by the TCP session.


Yes, weakly authenticated.

Next, the self-styled domain is looked up checking for an  
authorization for the given IP address.


Such as example.com references an SPF record that produce the following:

Received: by SMTP id j2cs23447wfe;
 Tue, 24 Feb 2009 09:51:01 -0800 (PST)
Return-Path: i...@example.com
Authentication-Results: trusted-isp.com; spf=pass smtp.mail=example.com
...
--- or ---

Received: by SMTP id j2cs23447wfe;
 Tue, 24 Feb 2009 09:51:01 -0800 (PST)
Return-Path: i...@example.com (SMTP Mail From parameter)
Authentication-Results: trusted-isp.com; senderid=pass  
header.from=example.com

...
From: The Club the_c...@example.com

Up to DNS reliability, a positive result proves the validity of the  
domain name, in the sense that the domain admins have stated that  
messages originating from the given IP may righteously claim to  
originate from that domain.


Positive authorization results will not safely indicate whether a  
domain originated a message (and will not represent authentication)  
when any of the following might be true:


1) Publishing the SPF record was intended to ensure delivery or to  
minimize backscatter, but was not to ensure source authenticity.   
(Ensuring delivery is a common reason for publishing SPF records.)


2) The authorization reference (such as Mail From or From as above)  
was not restricted to just the domain's users. (It is a common  
practice not to impose restrictions on the use of the Mail From or the  
PRA.)


3) Separate SMTP clients share the same IP addresses.  (Unfortunately  
this is also a common practice.  Brazil, Poland, and other places have  
many ISPs that expect hundreds or thousands of customers to run  
outbound SMTP services that NAT to a single or small number of IP  
addresses.  It is also common for VPS services to run servers out of  
common IP addresses.)


4) Authorization results that are based upon a virtual record when no  
SPF record is found.  (Injecting SPF mx/24 record mechanisms whenever  
no SPF is discovered is also a common practice.)


5) The SPF Authorization is not intended to be referenced from a PRA  
message element. (There may be uncertainty whether a PRA is a valid  
reference due to conflicts between RFC 4406 and RFC 4408.)


6) A domain that employs outside email providers to handle inbound  
email, but then inadvertently includes an MX mechanism where the  
inbound provider also offers outbound services to thousands of domains  
from the same IP address space.  (This now will likely be found to be  
a common mistake.)



Why wouldn't that make up an authentication of the given domain name?


Any common situation from 1 - 6 eliminates any reasonable assurance  
that a domain originated the message.  It can only be said that a  
domain *authorized* the SMTP client.  Assigning reputations to a  
domain on that basis is not easily reconciled, nor will marking the  
domain unsuitable for annotation offer timely protection for other  
domains that are also sharing the SMTP client IP address.  This is not  
about blocking messages, this is about annotating messages.


Checking the reputation of the authenticated origin identifier  
determines whether this identifier protects the message elements  
used to establish the *authorization* prior to revealing the  
*authorization* results.


It is not clear to me what checking would the MUA do in practice.


When annotation of a domain as having originated a message is based
upon SMTP client authorization, the reputation of SMTP clients is
critically important.  As enumerated in the list 1-6, not all IP
addresses used by SMTP clients will restrict a domain's use to just
that domain's users.



[...]
In addition, including the IP address of the SMTP client makes the  
header less deceptive.  Recipients must consider which identifier  
has been authenticated.  It is the *authenticated* IP address of  
the SMTP client that is being *authorized* by Sender-ID or SPF.   
Using this IP address to replace the local-part avoids the  
conditional inclusion of the local-part that is required by section  
2.4.3.


I doubt that displaying 192.0.2.3@example.com will make the
recipient safer.


The alternative might be that h...@example.com is used with a message
asking you to complete your W-2 form at a specific web site.
Unfortunately, the MUA is unable to check whether 192.0.2.3 has a  
recent history of spoofing domains.  Employees of example.com may be  
unaware of the limitations imposed by their outsourced email service,  
or may have created records intended to help ensure message delivery.  
Bad actors can now 

Re: Comments requested on recent appeal to the IESG

2009-02-21 Thread Douglas Otis
.  This is not any different from that of the iprev  
mechanism.  Unfortunately, the iprev mechanism may impose excessive  
overhead due to the nature of the reverse namespace.



 TO:

 End users making direct use of this header field may inadvertently  
trust information that has not been properly vetted.  [SPF] results  
SHOULD BE handled as specified by section 3.4.3.


This is the same confusion of venues as cited above.


The Authentication-Results draft is responsible for omitting the input
provided to the authorization mechanism.  It is not the authorization
mechanism itself that is responsible for the omission.


The goal of the appeal is to better ensure the availability of the
information required to assess the reputation of the authenticated
origin identity as specified by section 4.1.  The presence of this
information will not harm recipients, and will better ensure their
safety as well as that of the domain.  The issue is only whether or not
to display the results of this header at the MUA after checking the
reputation of its source.  In order to perform this check, the
authenticated origin identity must be clearly represented in the
trusted headers.


The IESG faces the hard decision of whether they are to act in the  
greater interests of better protecting millions of recipients, or  
acquiesce to the interests of influential providers acting out of self  
interest.


Douglas Otis and Dave Rand

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: [mail-vet-discuss] -19 of draft-kucherawy-sender-auth-header

2009-01-14 Thread Douglas Otis


On Jan 13, 2009, at 9:02 AM, SM wrote:


Hi Doug,
At 18:53 12-01-2009, Doug Otis wrote:
(see section 3.4.1 of [MAIL]) of an address, the pvalue reported  
along with results for these mechanisms SHOULD NOT include the  
local- part.


SHOULD NOT is not an recommendation to do something.


Someone marketing their services using email will read this as saying,
"Unless local-part macros are included within the SPF record,
annotations will be limited; to be noticed, local-part macros are
required."  An earlier draft published an example of a local-part
exploit leveraging cached SPF records.  The exploit is able to sustain
sizable attacks while utilizing little of the attacker's resources
beyond the sending of email.  The more a DNS cache is inundated, the
more effective the attack becomes.  :^(

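To illustrate the mechanism (the record shown is hypothetical, though
the %{l} and %{d} macro letters are those defined by RFC 4408), a tiny
expander makes it clear that every sender-chosen local-part turns a
single cached SPF record into a new DNS query name:

# Sketch: expand the %{l} (local-part) and %{d} (domain) macros of a
# hypothetical "exists:" mechanism to show that each distinct local-part
# yields a distinct DNS lookup, even though the SPF record is cached.
def expand_exists(target, local_part, domain):
    return target.replace("%{l}", local_part).replace("%{d}", domain)

if __name__ == "__main__":
    # Hypothetical published record:  "v=spf1 exists:%{l}._spf.%{d} -all"
    target = "%{l}._spf.%{d}"
    for local in ("alice", "bob", "x0001", "x0002", "x0003"):
        print(expand_exists(target, local, "example.com"))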


Are you recommending coercion to resolve conflicts?  Not all SMTP


The question I asked was about implementations.  I'm at a lost as to  
why you see that as recommending coercion.


Since the IP address of the SMTP client is omitted from the
Authentication-Results header, domains must be assessed on a fairly
static basis.  Using domains, in the case of SPF or Sender-ID,
increases the number of assessments by more than an order of magnitude
and also requires relying on the actions of many additional entities.
The indirection of domain assessment significantly impairs effective
responses to PRAs being exploited.  The exploit may have been enabled
by compromised access, or by imprudent record use by some inbound
provider, neither of which is directly controlled by the domain.  As
recipients fall prey, mounting damages will likely have a coercive
effect.  Since assessment of the SMTP client is precluded by the
Authentication-Results header, practical, prompt, and comprehensive
assessments are eliminated, and recipients are placed at much greater
risk.  The omission shields providers from being held accountable by
pointing to some domain rather than toward the likely source of a
problem.  This indirection comes at the expense of recipient security.
Too bad security and authentication appear to have lost their meaning. :^(


-Doug
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: [mail-vet-discuss] -19 of draft-kucherawy-sender-auth-header

2009-01-13 Thread Douglas Otis


On Jan 12, 2009, at 6:53 PM, Murray S. Kucherawy wrote:

[Apologies for the double-send; the headers got munged by my editor.  
-MSK]


Doug Otis wrote:


[...]  while omitting the IP address of the SMTP client.  This  
prevents compliance with section 4.1 reputation check of an  
authenticated message origination that is needed to mitigate highly  
disruptive and injurious situations.


No, it doesn't.  An implementation can be perfectly compliant in  
other ways, such as doing the reputation evaluation (be it IP or  
domain name) at the border MTA.


The IP address of the SMTP client, as seen at the border, may not be
captured by trace headers, nor are there reliable methods for
selecting the border traces.  The goal of the Authentication-Results
header is to permit subsequent annotations based upon its content
regarding a message's origination.  Section 4.1 correctly recommends
checking the reputation of the message's authenticated origination
prior to applying annotations (revealing results).  Annotations
regarding a message's origination represent a critical security
function that should be treated separately from whether a message is
to be accepted.
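
To illustrate the ambiguity (hypothetical hosts and addresses), a
downstream agent seeing a trace such as:

  Received: from mx1.receiver.example (mx1.receiver.example [198.51.100.7])
      by mda.receiver.example; ...
  Received: from mail.sender.example (unknown [203.0.113.9])
      by mx1.receiver.example; ...

has no standardized way to determine which Received header was added
at the border, and so cannot reliably recover 203.0.113.9 as the
client address that was actually evaluated.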


Unfortunately, the Authentication-Results header offers no reputation
information, does not indicate the authorization record type it relied
upon, and does not reveal the authenticated source of the message.
Reputation data about SMTP clients, specifically about which
annotations are safe, can indicate whether Mail From or PRA use
appears to be restricted.  This information can thereby provide
non-disruptive methods to mitigate conflicts between RFC 4406 and
RFC 4408.  The conflict may otherwise lead to dangerous annotations
achieved by confidence artists who can easily take advantage of the
significant difference between offering authorization and being
authenticated, and of how SPF records were intended to be used.


Ultimately, the application applying annotations must ensure the
information captured by an Authentication-Results header is supported
by the reputation of the parties involved.  For Sender-ID or SPF, this
means both the SMTP client and the inbound server adding the
Authentication-Results header.  Checking the reputation of a domain
offering the authorization might provide an indication of a look-alike
attack, but guarding against this risk is best handled at the border
MTA, or through folder placements, and not by selective annotations as
the draft suggests.


There are a lot of good reasons for doing it that way, too, discussed
in the draft (and, in fact, reasons not to do it elsewhere).


Reputation checks at the border are important, but they normally  
pertain to whether a message is to be accepted, and seldom are about  
which annotation is safe to apply.


Domain checks are unable to deal with compromised access in a
non-disruptive manner, nor can domains selectively permit annotations
based upon what the SMTP client may or may not protect.  When an SMTP
client is found not to protect a particular scope, the lack of
protection may impact what is safe to annotate for thousands of
domains.  However, these domains cannot be readily ascertained because
SPF does not offer reverse listings. :^(


Placement within the authentication header has been made dependent  
upon an undefined and unsupported notion as to whether a local-part  
had been used to obtain authorization.  [...]


Your assertion presupposes no SPF implementation knows, or is  
capable of knowing, whether or not it expanded a local-part macro.   
Even if the former is true, it's hardly a difficult thing to add,  
and the user of an SPF module can easily err on the side of safety  
and assume that it wasn't in either case.  The normative text in  
this draft covers that possibility.


Having a message noticed and read is improved by gaining greater
annotations, and this represents a very strong incentive.  The
currently unsupported (and fairly recent) annotation qualification,
along with a natural desire to obtain greater annotation, will have
the effect of promoting the use of dangerous local-part macro
mechanisms.  These macros are able to generate an unlimited number of
different DNS transactions by exploiting cached SPF records.  The SPF
macro mechanism offers a free DNS attack while spamming!
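
A rough sketch of the amplification (hypothetical names and message
counts, offered only to illustrate the concern, not as a measurement):

  # Each message bearing a fresh local-part forces a new DNS lookup when
  # an SPF record uses an 'exists:%{l}...' mechanism, even though the SPF
  # TXT record itself is answered from cache.
  def spf_exists_queries(local_parts, target="_spf.example.org"):
      """Return the distinct DNS names an evaluator must resolve."""
      return {"%s.%s" % (lp, target) for lp in local_parts}

  local_parts = ["user%05d" % i for i in range(10000)]
  print(len(spf_exists_queries(local_parts)))  # 10000 distinct lookups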



SM asked:

Are there any implementations of the technique you are suggesting?  
The feedback received from other implementors showed that they  
neither use the above technique nor do they support your point of  
view.


I'd really like an answer to that question as well, since the work  
in the draft is based on a number of real-world implementations that  
simply don't do it the way you envision.  You seem, however, to  
prefer to dodge that challenge.


There are systems now that can offer feedback within a few hundred  
seconds of an exploit.  Without much effort, this can be tailored to  
offer 

Re: [mail-vet-discuss] -19 of draft-kucherawy-sender-auth-header

2009-01-09 Thread Douglas Otis


On Jan 9, 2009, at 12:48 PM, Lisa Dusseault wrote:


Hi Doug,

Does anybody support your review of sender-auth-header, to the point  
of believing that the document should not be published?  So far you  
are still very much in the rough part of rough consensus.


thanks,
Lisa

On Wed, Jan 7, 2009 at 1:14 PM, Douglas Otis do...@mail-abuse.org  
wrote:



Murray,

There has been progress, but a few extremely critical security-related
areas still need to be addressed.


I have posted a draft that reviews the sender-auth-header draft.

The text and html versions are available at:

http://www.ietf.org/internet-drafts/draft-otis-auth-header-sec-issues-00.txt
http://www.sonic.net/~dougotis/id/draft-otis-auth-header-sec-issues-00.html


Funny that you describe your concern as involving rough consensus.
The draft itself can't decide when it should stop pretending about
what defines authentication, and it remains contradictory on this
critical subject.


It states that only _authenticated_ information should be included
within the Authentication-Results header for either Sender-ID or SPF.
At the same time, the draft defines Sender-ID and SPF as an
authorization method and _not_ authentication of the domain.  In fact,
in its current form there is no way to know whether Sender-ID results
were based upon SPF version 1 records, or whether a domain even
intended positive results to affirm its identities, or whether only
negative results for a Mail From were intended to mitigate
back-scatter.  This leaves the issue of authentication itself clearly
in the rough.


In addition, there is the matter of encouraging the use of dangerous
local-part macros when one wishes to obtain email-address annotations.
At least the Sender-ID specification states local-parts are _not_
verified.  What is providing the authorization remains unknown for
SPF, even though the local-part is ignored by Sender-ID.  Furthermore,
there is no consensus between Sender-ID and SPF as to which elements
of a message are to be used to access version 1 records.  Clearly, the
scoping issues are also in the rough.  Nevertheless, this header is
willing to label the results of this mess "Authentication-Results".


The remedy being sought is to replace the local-part of the  
authorizing email-address with a converted string representing the  
IP address of the SMTP client that is being authorized.  This allows  
the authenticated origin of a message to be vetted, in addition to  
what _might_ be an authorizing domain.  A fair compromise.
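
To make the substitution concrete (hypothetical values only, not text
from any specification): instead of reporting

  smtp.mailfrom=marketing@example.org

the header would carry something along the lines of

  smtp.mailfrom=203-0-113-9@example.org

so that the authenticated client address can be vetted alongside the
domain that may have offered the authorization.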


While there are influential proponents of this draft, the draft and
the experimental SPF and Sender-ID RFCs remain dangerous as written.
With a few minor modifications, the Authentication-Results header
draft would become much safer.  Satisfying those that represent
influential special interests should not cause the IETF to dismiss its
stewardship role.  We all know there is money to be made picking up
the pieces, but there are more productive ways to make a living.


-Doug 
  

