Re: How I deal with (false positive) IP-address blacklists...

2008-12-11 Thread Douglas Otis


On Dec 9, 2008, at 2:42 PM, Keith Moore wrote:

when the reputation is based on something (like an address or  
address block) that isn't sufficiently fine-grained to reliably  
distinguish spam from ham, as compared to a site filter which has  
access to more criteria and can use the larger set of criteria to  
filter more accurately.



Email system resources must be defended when confronting millions of
compromised systems and infiltrated providers that are slow to remove
abusive accounts.  Resources are best preserved when acceptance is decided
before message data is exchanged.  Mapping regions known to host
compromised systems, or that have frequently been hijacked, is typically
done by IP address.  As Ned mentioned, some systems block ranges that
span announced routes.  Although there is no justification for that, the
growing size of the problem and of the address space forces negative
assessments to be made by CIDR block.
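
As an illustration of the kind of CIDR aggregation meant here, a minimal
sketch using Python's standard ipaddress module; the addresses are
documentation examples rather than a real blocklist:

# Collapse individually listed abusive hosts into the smallest covering
# set of CIDR blocks, so assessments can be applied per address range.
import ipaddress

listed = ["192.0.2.4", "192.0.2.5", "192.0.2.6", "192.0.2.7", "198.51.100.9"]
blocks = ipaddress.collapse_addresses(ipaddress.ip_network(a) for a in listed)
print([str(b) for b in blocks])   # ['192.0.2.4/30', '198.51.100.9/32']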


Rather than depending upon knowing the location of specific abusive
sources, the Internet needs a registry of legitimate sources that
includes contacts and IP address ranges.  Such a list should reduce
the scale of the problem and allow safer exclusions.  Normal defenses
using Turing tests fail as the state of the art advances.  Even if
there were a registry, what egalitarian identifier could be used to
defend the registration process?  Receipt of text messages or faxes?
Postal mail?  What can replace the typical Turing test?



-Doug


Accountable Use Registry was: How I deal with (false positive) IP-address blacklists...

2008-12-11 Thread Douglas Otis


On Dec 11, 2008, at 1:51 PM, John C Klensin wrote:


As soon as one starts talking about a registry of legitimate  
sources, one opens up the question of how legitimate is  
determined.  I can think of a whole range of possibilities -- you,  
the ITU Secretary-General, anyone who claims to have the FUSSP,  
governments (for their own countries by licensing or more  
generally), ICANN or something ICANN-like, large email providers,  
and so on.  Those options have two things in common. Most (but not  
all) of them would actually  be dumb enough to take the job on and  
they are all unacceptable if we want to continue to have a  
distributed-administration email environment in which smaller  
servers are permitted to play and people get to send mail without  
higher-level authorization and certification.


Perhaps I should not have used the word "legitimate".  The concept of
a registry should engender accountability.


Once one considers IPv6, the network portion alone covers 2^32 times as
many addresses as exist in all of IPv4.  At that scale, individual IPv6
addresses do not offer a scalable basis on which a server can impose a
defense against abuse.  Handling addresses in rather large groups becomes
the only method left available, and that consolidation works against any
egalitarian effort to ensure access for all players.
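
For concreteness, a back-of-the-envelope sketch in Python, assuming the
customary /64 split between the network and interface portions of an
IPv6 address:

ipv4_addresses = 2 ** 32        # the entire IPv4 address space
ipv6_networks  = 2 ** 64        # /64 network prefixes alone
print(ipv6_networks // ipv4_addresses)   # 4294967296, i.e. 2**32 times as many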


Counter to this, much of the email abuse has been squelched by third
parties who give network providers a means to indicate which traffic
they are accountable for.  This is done in part by designating address
ranges as belonging to dynamically assigned users.  A more formalized
method, through a registry supported by provider fees, would prove
extremely beneficial in reducing the scale of the IP address range
problem raised by IPv6.  By formalizing a registration of accountable
use, along with some type of reporting structure or clearinghouse, IPv6
would have a better chance of gaining acceptance.  It would also empower
providers to say which potentially abused uses they wish to support.


While I freely admit that I have not had hands-on involvement in  
managing very large email systems in a large number of years now, I  
mostly agree with Ned that some serious standards and  documentation  
of clues would be useful in this general area.  But I see those as  
useful if they are voluntary standards, not licensing or external  
determination of what is legitimate.  And they must be the result of  
real consensus processes in which anyone interested, materially  
concerned, and with skin in the game gets to participate in  
development and review/evaluation, not specifications developed by  
groups driven by any single variety of industry interests and then  
presented to the IETF (or some other body) on the grounds that they  
must be accepted because anyone who was not part of the development  
group is obviously an incompetent idiot who doesn't have an opinion  
worth listening to.


Agreed.

That has been my main problem with this discussion, and its  
variants, all along.  While I've got my own share of anecdotes, I  
don't see them as directly useful other than as refutations of  
hyperbolic claims about things that never or always happen. But,  
when the IETF effectively says to a group "ok, that is a research
problem, go off and do the research and then come back and organize
a WG", it ought to be safe for someone who is interested in the
problem and affected by it --but whose primary work or interests lie  
elsewhere-- to more or less trust the RG to produce a report and  
then to re-engage when that WG charter proposal actually appears.   
Here, the RG produced standards-track proposals, contrary to that  
agreement, and then several of its participants took the position  
that those proposals already represented consensus among everyone  
who counted or was likely to count.  Independent of the actual  
content of the proposal(s), that is not how I think we do things  
around here... nor is laying the groundwork for an official  
determination of who is legitimate and who is not.



A registry of accountable use, in conjunction with some type of
reporting structure, seems necessary if one hopes to ensure a player
can obtain the access they expect.  In other words, not all things
will be possible from just any IP address.  Providers should first
tell the Internet what they are willing to monitor for abuse, so that
trust can be established on that promise.  Not all providers will make
the same promise of stewardship.  Those providers that offer the
necessary stewardship for the desired use should find both greater
acceptance and demand.  Such demand may help avoid an otherwise
inevitable race to the bottom.


-Doug

Last Call: draft-ietf-dkim-ssp (DomainKeys Identified Mail (DKIM) Author Domain Signing Practices (ADSP)) to Proposed Standard

2008-11-25 Thread Douglas Otis

Recent changes did not correct concerns described in:
http://tools.ietf.org/id/draft-otis-dkim-adsp-sec-issues-03.txt


--o-- Changes meaning of DKIM's on-behalf-of:

A highly detrimental aspect of this draft is its change to the meaning  
of RFC 4871's signature header's i= (on-behalf-of) value.


It uses "Alleged Author" as a term checked by this mechanism, although
DKIM was not chartered to confirm the identity of the author.  As such,
ADSP is in conflict with the DKIM WG charter!


Compliance with either "all" or "discardable" requires an Author
Signature.  This means the on-behalf-of value MUST match the From
header field's email address (the Author), either explicitly or by
being left blank.  This differs from RFC 4871, which specifies this
field as containing the identity of the user or agent on behalf of
which the message is signed, and which requires that identity to be
within the key reference domain.  RFC 4871's definition allows this
field to indicate, even opaquely, the entity or account that was
authenticated when the message was accepted for signing.
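
To make the distinction concrete, a minimal Python sketch (not a DKIM
implementation) of the two rules discussed above: ADSP's Author Signature
requires the i= value, or its default of "@" plus the d= domain, to match
the From address, while RFC 4871 only requires i= to fall within the d=
domain.  The function names and the simplified parsing are illustrative only.

def domain_of(addr):
    return addr.rsplit("@", 1)[-1].lower()

def is_author_signature(from_addr, sig_d, sig_i=None):
    # ADSP rule: the on-behalf-of must match the From address, either
    # explicitly or via the "@<d= domain>" default left by a blank i=.
    effective_i = sig_i if sig_i else "@" + sig_d
    if effective_i.startswith("@"):
        return domain_of(from_addr) == effective_i[1:].lower()
    return effective_i.lower() == from_addr.lower()

def is_within_key_domain(sig_d, sig_i):
    # RFC 4871 rule: i= need only be within (or equal to) the d= domain.
    i_dom = domain_of(sig_i)
    return i_dom == sig_d.lower() or i_dom.endswith("." + sig_d.lower())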


Being able to associate signatures with authenticated identities is
essential in controlling abuse.  This ability is _absolutely_ essential
if DKIM's 'i=' and 'd=' are to form a reputational basis for IPv6
messages.  Abuse problems are commonly caused by compromised systems.
ADSP's goal of preventing forgery, or even of affirming the identity of
the author, remains achievable while still allowing the on-behalf-of
value to _always_ represent an authenticated identity within the key
reference domain.  Instead, the ADSP Author Signature definition
requires that a signing domain pretend to have authenticated the
Author, even when what was authenticated may have been the Sender's
email address or some other account.  Compliance should only require a
signature using a key that is referenced at or above the email-address
domain of the From header field.  Domains that always indicate, even
opaquely, the identity that was authenticated will earn trust.
Authenticated identifiers can be used to mitigate replay abuse caused
by any problematic domain granted access.  This approach also gives a
provider a simple means to rehabilitate its problematic accounts.



--o-- Advising against wildcards for applications unrelated to ADSP:

This draft uses _adsp._domainkey as a prefix for TXT records.  This is
problematic for a few reasons.  Whenever a domain wishes to defend
against unauthorized use, publishing _domainkey sub-domains at every
existing DNS node becomes required.  As a result, every node will
appear as though it might contain DKIM public keys.  DKIM uses
arbitrarily defined key selectors, and the use of enterprise or carrier
NATs may impair the protection of DNS caches.  By placing the _adsp
prefix below the _domainkey sub-domain, the presence of a _domainkey
sub-domain no longer indicates possible cache poisoning, and scanning
for poisoned entries using _domainkey as the only known label becomes
impossible. :^(
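
For reference, a minimal sketch of the lookup the draft defines (a TXT
record at _adsp._domainkey.<Author Domain> carrying a dkim= tag), assuming
a recent dnspython library; error handling is simplified and the domain
name is illustrative.

import dns.resolver

def lookup_adsp(author_domain):
    # Return the published ADSP practice ('all', 'discardable', ...), if any.
    name = "_adsp._domainkey." + author_domain
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None                      # no practice published
    for rr in answers:
        txt = b"".join(rr.strings).decode()
        if txt.startswith("dkim="):
            return txt[len("dkim="):].strip()
    return None

# e.g. lookup_adsp("example.com") might return "discardable"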


The reason Dave Crocker gave for placing the _adsp sub-domain below the
_domainkey sub-domain used for keys was to facilitate domain delegation
to DKIM email providers.  However, ADSP protection must be applied at
every _existing_ node, so the delegation benefit lacks merit.  This
draft should instead depend upon the presence of the discovery records
defined by RFC 5321, define dedicated resource record types to carry
the ADSP assertions, and not use prefixed TXT records to assert
domain-wide policies.


This draft advises against the use of wildcards, especially for
publishing wildcarded _adsp._domainkey TXT records.  However, not using
wildcards represents a greater administrative effort.  Dependence upon
NXDOMAIN versus NOERROR has also been problematic in the past; these
issues occur where ANY or AAAA cached records erroneously override the
desired response.  The ADSP draft not only depends upon DNS, it
mandates specific API error codes that can potentially impact email
from any domain.



--o-- Prevents message addressing within the From header field where the
domain does not have records published within DNS:


Section 4.3's treatment of email-address domains that do not exist
within DNS must be considered non-compliant with "all", or ADSP serves
no purpose.



--o-- Anti-phishing protection requires loss of delivery status  
notifications:


The term "discardable" and its definition clearly indicate that RFC
5321's concept of reliable delivery is to be ignored.  However, if this
mechanism performs its intended function, there should be little
reason to relinquish NDNs.



--o-- Section 3.2 includes a factual error:

This section states that a valid signature by an Author Domain is
"already known to be compliant with any possible ADSP for that domain."
Compliance with ADSP actually requires an Author Signature, not just a
signature by the Author Domain (as it should have been

Re: Last Call sender-auth-header

2008-11-21 Thread Douglas Otis


On Nov 21, 2008, at 11:02 AM, SM wrote:


At 20:00 20-11-2008, Douglas Otis wrote:
It is rather startling that adoption of an experimental RFC is  
being presumed by this draft.  As such, those not adopting this  
experimental PRA RFC run the risk of being


There are existing implementations of these experimental RFCs.


Sorry, this should have said "universal adoption".

The sender-auth-header draft describes its results as a recommendation
"as to the validity of the message's origin" and describes both
Sender-ID and SPF as e-mail authentication methods "in common use".
When SPF records are published to mitigate MailFrom backscatter, it is
not reasonable to assume that authorized MTAs restrict MailFrom to only
the authorizing domains.  There is _nothing_ to indicate that MTAs
require authorization and then bind authorizations to specific
domain-controlled accounts.  Assuming that restrictions are imposed on
PRAs or MailFroms, as authentication-results headers for Sender-ID and
SPF do, represents highly speculative and highly dangerous conjecture.
Since these identities cannot be authenticated in this manner, it would
also be perilous to assign negative reputations to these speculative
domains.


While a lack of authorization might form a basis to refuse acceptance
or to decide folder placement, there are no practical conventions,
experimental or otherwise, that support this draft's presumption of
authentication for either SPF or Sender-ID.  The only (weakly)
authenticated identifier is the IP address of the SMTP client seen by
the border MTA, whether or not that SMTP client was authorized by
Sender-ID or SPF.  This information, which is essential to assess and
assign reputation, has been omitted!  The IETF should seriously
question the motives for such an omission.


Adoption of this draft may then require the IETF to endorse Sender- 
ID.  After all, email


As I mentioned before on the mail-vet mailing list, this header is  
used to convey results of DomainKeys, DKIM, SPF and Sender-ID  
evaluation.  It is not an endorsement of a specific evaluation method.


The typical user will see this header making the following statement--

Authentication-Results: myisp.com; sender-id=pass [EMAIL PROTECTED]

A reasonable person who understands the definition of authentication
will be misled into believing they are being assured by myisp.com
that [EMAIL PROTECTED] originated the message.


For either SPF or Sender-ID, an assumption of authentication requires  
universal adoption of both MailFrom AND PRA restrictions by ALL  
authorized MTAs.  Omitting the only authenticated identifier  
associated with either of these methods helps sell the lie that SPF or  
Sender-ID are authentication methods.  Since there is not universal  
adoption of MailFrom and PRA restrictions by all authorized MTAs for  
both Sender-ID and SPF, this lie can act as a type of extortion.


Those misled by this header will be even more susceptible to confidence
artists.  Confidence artists can take advantage of the current email
infrastructure, which does not impose the experimental restrictions,
and have their messages receive the misleading endorsements made
possible by this header.  Perhaps creating this problem is seen as a
means to drive demand for MTAs that can bind PRA and MailFrom
restrictions to validated accounts.  Although such efforts are likely
impractical in many cases, there is no way to opt out other than not
publishing SPF records.  Since many may depend upon SPF to mitigate
backscatter from larger providers, removing the local-part, adding the
IP address of the SMTP client, and using the term "authorized-by" is a
reasonable solution that is much less likely to prove misleading.
Unless, of course, the desire is to mislead?  Having the IP address
available permits automated reputation assessments that have a
reasonable chance of providing timely protection, unless of course the
desire is not to offer protection?  Protecting against just look-alike
attacks is likely to remain a manual and slow process.
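
For illustration, a hypothetical rendering of that suggestion (local-part
removed, SMTP client IP added, and the result labeled as an authorization
rather than an authentication); the field names beyond those in the draft
are invented here purely to show the shape of the information:

Authentication-Results: myisp.com; sender-id=authorized-by example.com;
    smtp.client-ip=192.0.2.25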


-Doug


Last Call sender-auth-header

2008-11-20 Thread Douglas Otis

Last call for
 http://www.ietf.org/internet-drafts/draft-kucherawy-sender-auth-header-17.txt
The intent of this posting is to generate broader discussion.

Browser and MUA plugins may soon annotate email messages with
identifiers contained within Authentication-Results headers.  For
Apple Mail, a simple settings change can cause this header to be
displayed above the message.  The reason given for the
Authentication-Results header is stated in Section 1, Introduction:

,---
The intent is to create a place to collect such data when message  
authentication mechanisms are in use so that a Mail User Agent (MUA)  
can provide a recommendation to the user as to the validity of the  
message's origin and possibly the  integrity of its content.

'---

There is an essential consideration for the use of this header, given in
Section 4.1, Header Position and Interpretation:

,---
An MUA SHOULD NOT reveal these results to end users unless the results  
are accompanied by, at a minimum, some associated reputation data  
about the message that was authenticated.

'---

This statement would have been more clearly stated as:
,==
An MUA SHOULD NOT reveal these results to end users unless the results  
are accompanied by, at a minimum, some associated reputation data  
about the authenticated origination identifiers within the message.

'==

This consideration is meant to preclude an apparent endorsement whenever
authenticated identifiers have acquired negative reputations.  Moreover,
only authenticated identities can safely be assigned negative
reputations.  It is therefore extremely important to understand what
authentication means, and which identifiers can be authenticated.  In
addition, since reputation results are not included in this header,
there is no assurance that reputations have been checked.  Browser and
MUA plugins will need to obtain reputations for these identifiers
independently to comply with Section 4.1.
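
A minimal sketch of what complying with that requirement might look like
for a plugin, assuming a header of the simple form shown earlier in these
messages; the reputation_of callable is a placeholder for whatever
reputation service the plugin consults, and the parsing is deliberately
simplified.

import re

def annotate(msg, reputation_of):
    # msg is an email.message.Message; annotate only when Section 4.1's
    # condition is met: a "pass" result plus non-negative reputation data.
    ar = msg.get("Authentication-Results", "")
    m = re.search(r"(\S+);\s*([\w-]+)=pass\s+(\S+)", ar)
    if not m:
        return None                       # nothing worth endorsing
    authserv_id, method, identifier = m.groups()
    score = reputation_of(identifier)     # the plugin must fetch this itself
    if score is None or score < 0:
        return None                       # unknown or negative reputation
    return "%s=pass for %s (checked by %s)" % (method, identifier, authserv_id)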


With respect to Sender-ID, the auth-header draft and its author presume
the PRA email address captured within the Authentication-Results header
will be considered authenticated whenever the sending MTA is authorized
by the SPF record located via the PRA.  This authorization, as the
draft states, is denoted by "sender-id=pass".  A presumption that the
PRA is authenticated expects PRA restrictions to be imposed by all
SPF-authorized MTAs.  That presumption of PRA authentication is why the
weakly authenticated IP address of the SMTP client that was authorized
by the SPF record has been omitted.


Section 1.3  Definitions says:
,---
Generally it is assumed that the work of applying message  
authentication schemes takes place at a border MTA or a delivery MTA.   
This specification is written with that assumption in mind.

...
It is also possible that message authentication could take place on an  
intermediate MTA.  Although this document doesn't specifically  
describe such cases, they are not meant to be excluded from this  
specification.

'---

The position of the Authentication-Results header, with respect to the  
Received header added by the border MTA, is unknown.   Browser and MUA  
plugins will be required to guess which Received header was added by  
the border MTA, but even then, the Received header is not assured to  
include the IP address of the SMTP client.


As such, the auth-results draft clearly expects that the SMTP client's  
reputation plays no role.  Reputations are to be based only upon the  
PRA when applying annotations. : ^(


It is rather startling that adoption of an experimental RFC is being
presumed by this draft.  Those not adopting this experimental PRA RFC
run the risk of being assigned a negative reputation, or of misleading
those who believe PRAs are authenticated as a result of this misleading
header.  In addition, although SPF includes local-part macros that
might invite DDoS attacks, these seldom-used macros can only deny, and
cannot affirm, the validity of a domain's local-parts.


With respect to reputation for SPF or Sender-ID, it is _only_ the IP
address of the SMTP client seen by the border MTA that has been weakly
authenticated.  Unfortunately, the Authentication-Results header has
neglected to include this SMTP client IP address.  This omission might
have been intended to mislead recipients into believing PRAs are
authenticated, to force adoption of PRA restrictions, or to deflect the
effect of abusive behavior.  For example, it is common to see less
scrupulous ad campaigns, or perhaps compromised servers, emit abuse
from specific IP addresses.  Blocking all email that carries a specific
domain seldom represents a measured or practical response to a problem
isolated to specific servers.  As a result, spam may receive favorable
annotation, or one compromised server may disrupt email from an entire
domain.


Blocking only servers that appear abusive would be far less disruptive  
than blocking by the 

Re: Context specific semantics was Re: uncooperative DNSBLs, was several messages

2008-11-14 Thread Douglas Otis


On Nov 14, 2008, at 1:38 PM, Ted Hardie wrote:

If we are documenting practice and nothing more, then the  
publication stream can move to informational and text can be added  
on why a new RR, which would normally be expected here, is not being  
used (essentially, inertia in the face of 15 years deployment).   
That may be a valuable use of an RFC; documenting the way things  
really work often is.  But it shouldn't go onto the standards track,  
as there is a known technical deficiency.


Agreed.

-Doug


draft-kucherawy-sender-auth-header and last call draft-hoffman-dac-vbr-04

2008-11-06 Thread Douglas Otis
New email headers' misuse of the term "authentication" will prove
highly deceptive for recipients and damaging for domains!


The Random House dictionary definition of authenticate says:
1. to establish as genuine.
2. to establish the authorship or origin of conclusively or  
unquestionably, chiefly by the techniques of scholarship: to  
authenticate a painting.

3. to make authoritative or valid.

The Oxford dictionary adds:
[ intrans. ] Computing (of a user or process) have one's identity  
verified.


When a header is labeled Authentication-Results and contains
"my-trusted-isp.com; sender-id=pass [EMAIL PROTECTED]", a reasonable
person will come away with the impression that my-trusted-isp.com has
confirmed that the message originated from [EMAIL PROTECTED].


Available public information for path registration mechanisms, even  
information within the sender-auth-header draft, is not especially  
helpful in assuring clarity.  Proponents of Sender-ID compare path  
registration mechanisms to that of a telephone's Caller-ID as a means  
to indicate who originated a message.  A website by a well known  
Redmond vendor describes Sender-ID as follows (capitalization added  
for emphasis):


The Sender ID Framework is an e-mail AUTHENTICATION technology  
protocol that helps address the problem of spoofing and phishing by  
VERIFYING the DOMAIN NAME FROM WHICH e-mail messages are sent.  SIDF  
has been APPROVED by the Internet Engineering Task Force to help  
increase the detection of deceptive e-mail and to improve the  
deliverability of legitimate e-mail.  SIDF is an e-mail AUTHENTICATION  
protocol designed to be implemented at no cost for all senders,  
independent of their e-mail architecture.


The experimental RFC4406 Sender-ID: Authenticating E-Mail also states:

Section 2 Problem Statement:
---
The PRA version of the test seeks to AUTHENTICATE the mailbox  
associated with the most recent introduction of a message into the  
mail delivery system.

...
In both cases [referring to the PRA and to SPF's MailFrom], the domain  
associated with an e-mail address is what is AUTHENTICATED; no attempt  
is made to AUTHENTICATE the local-part.  A domain owner gets to  
determine which SMTP clients speak on behalf of addresses within the  
domain; a responsible domain owner should not authorize SMTP clients  
that will lie about local parts.

---

The truth is that Sender-ID is NOT approved by the IETF as a standard
offering sound guidance to MTA operators!  No standardized mechanism
today permits PRA and MailFrom restrictions without the risk of email
disruption!


Unless impractical restrictions are imposed upon all possible PRA
header fields by all outbound MTAs that might carry email for a domain
that might publish SPF records, it is never safe to assert that
Sender-ID's or SPF's authorization of an SMTP client authenticates (or
confirms) which domain originated a message!


The SPF record may have been employed to mitigate backscatter while
using a shared MTA that imposes no MailFrom or header field
restrictions; such a shared MTA may impose access limitations as its
means of control.  Since there are no practical means to generally
impose restrictions upon the PRA fields as REQUIRED by the experimental
RFC 4406, or upon the MailFrom as REQUIRED by the experimental RFC
4408, path registration mechanisms at most provide meaningful results
when the SMTP client is NOT authorized.  Even then, an unauthorized
SMTP client cannot be taken to mean the message is fraudulent.  Often
the MTA-not-authorized state is used only to justify the silent
dropping of DSNs.


If it comes to pass that recipients are commonly deceived by the
Authentication-Results header's nebulous "authentication" and "pass"
terms, MTA operators may soon find themselves obligated to impose
universal restrictions upon all possible PRA fields and to adopt this
proprietary algorithm.  This makes one wonder whether the
sender-auth-header was a clever way to sell the authentication lie.
Perhaps it is just cheaper to pretend something is authenticated. : ^(


Currently, it is unsafe to conclude that a domain even intended to
have Sender-ID applied!  This brings into rather serious question what
is meant by the term "authentication" with respect to either Sender-ID
or SPF.


The path registration mechanism only provides meaningful results when  
the SMTP client is NOT authorized.  In this case, not accepting a  
message may be a reasonable response, but only when one is prepared to  
make a significant number of exceptions.
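
As a minimal sketch of that point (illustrative only; the spf_check
callable stands in for whatever SPF or Sender-ID evaluator is in use):

def interpret(spf_check, client_ip, mail_from, helo):
    result = spf_check(client_ip, mail_from, helo)   # 'pass', 'fail', 'none', ...
    if result == "fail":
        # The domain says this client is NOT authorized; refusing the
        # message (or at least suppressing a DSN) may be reasonable,
        # provided one is prepared to make exceptions.
        return "not-authorized"
    if result == "pass":
        # The client is authorized for the domain.  This does NOT confirm
        # the local-part, or that the domain's user originated the message.
        return "mta-authorized"
    return "inconclusive"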


Can the authors of these drafts, and the IETF if the drafts are
accepted, avoid being culpable in the deceptive use of the term
"authentication" and of "pass" instead of "MTA-authorized"?  The
vouch-by-reference strategy also assumes all the listed authentication
mechanisms verify an originating domain equally well.  Of course they
do not.


Added to 

draft-hoffman-dac-vbr-04

2008-10-29 Thread Douglas Otis
This draft introduces several areas where the vouching service's
results could be exploited.


Section 3.  Validation Process

2.  Verifies legitimate use of that domain using one or more  
_authentication_ mechanisms as described herein


This draft incorrectly describes Sender-ID and SPF as domain
authentication mechanisms.  Sender-ID and SPF represent a method for
domains to authorize SMTP clients by IP address.  This method does not
authenticate a domain as the source of a message; treating it that way
would require all shared outbound servers to impose restrictions on the
use of the PRA, MailFrom, or From.  Depending upon whether DKIM,
DomainKeys, Sender-ID, or SPF is the mechanism being used, any of these
fields or any of these mechanisms might be tried.  In addition to
communicating the type of message being vouched for, the domain
exclusion method or domain authentication method being employed by the
recipient should also be included.  It should be clear that neither
Sender-ID nor SPF represents a means to authenticate the valid use of a
domain.  The VBR-Info header should also communicate the mechanism
supported by the domain in conjunction with the vouching service.  It
is possible that a domain's mail is sent through shared servers where
only the use of DKIM is considered to offer a safe assertion with
respect to the veracity of the message, while an SPF record is also
employed in an effort to suppress DSN abuse.  To avoid the exposure to
possible abuse that shared servers might create for either SPF or
Sender-ID (which could mistake that use of SPF), the vouch header
should also include the supported method, such as sm=dkim, along with
the types of supported transactions; the domain exclusion or
authentication mechanism should likewise be returned by the vouching
service, so a recipient can verify that the header conforms with what
the domain trusts as safe.
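
For illustration, a hypothetical header along these lines; the md, mc, and
mv tags follow the draft's VBR-Info form, while sm= is the additional tag
suggested here and the domains are placeholders:

VBR-Info: md=example.com; mc=transaction; mv=vouching.example; sm=dkim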


-Doug





Re: placing a dollar value on IETF IP.

2008-10-28 Thread Douglas Otis


On Oct 28, 2008, at 8:03 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] 
 wrote:


Interoperability of standards is a hard-won prize, whether in the  
IETF or elsewhere. The cost of producing documents is a mere drop in  
the bucket. In addition, cost is a very slippery thing to get ahold  
of because of the difference between expenses and investments.


Agreed.  The prize may go to the influential, rather than being  
decided on merit.  In the end, it is often about either supporting a  
service or the cost of a service.  The goal, and not the cost, is what  
is important.


-Doug



draft-ietf-dkim-ssp-06 in last call?

2008-10-08 Thread Douglas Otis

Here is a draft that outlines a few concerns for draft-ietf-dkim-ssp-06:

http://www.ietf.org/internet-drafts/draft-otis-dkim-adsp-sec-issues-03.txt

These concerns were not fully discussed on the DKIM list, except for
voting for an "as is".  Unfortunately, a voting process offers little
clarity.


There appears to be a factual error in the draft.  Any restrictive
ADSP assertion, such as ALL or DISCARDABLE, adds a requirement on which
otherwise valid DKIM signatures remain compliant with that ADSP
assertion.


See:  draft-ietf-dkim-ssp-06 Section 2.7.  Author Signature

This section defines an Author Signature as a valid signature whose
on-behalf-of (the DKIM signature's i= value, or its default) matches
the From header field.


Section 3.2 ADSP Usage however says:
.
|If a message has a Valid Signature from an Author Domain, ADSP
| provides no benefit relative to that domain since the message is
| already known to be compliant with any possible ADSP for that
| domain.
'

Clearly, these two sections are in conflict.  In addition, the Author  
Signature definition is in serious conflict with the working group's  
charter.


Since the DKIM key itself can assert a restriction upon the
on-behalf-of local-part, there might be some justification for
generally requiring signatures using local-part-restricted keys to also
match the From header field before being considered valid.  It would be
dangerous to impose this requirement only based upon the existence of
an ADSP record.  SSP attempts to use DNS SERVFAIL to detect an attempt
to block the ADSP records, but this status may not be apparent behind a
resolver.  A conditional requirement is ill-considered from a
security standpoint, and may even invite abuse.
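
For reference, a sketch of such a restricted key record: RFC 4871's g=
(granularity) tag constrains the local-part that an i= value may carry.
The selector, domain, and key data below are illustrative only.

jun2008._domainkey.example.com. IN TXT "v=DKIM1; g=newsletter; k=rsa; p=<public key data>"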


Once the issue of restricted local-part keys is properly handled in an
independent fashion, requiring the on-behalf-of to match the From
header field merely conflates DKIM and an ADSP record into a poor
replacement for either S/MIME or OpenPGP.  After all, the DKIM
signature and its on-behalf-of field are normally invisible to the
recipient, and DKIM in conjunction with ADSP still does not assure
control over the Display Name being seen.  An MUA can always annotate
a message to indicate specifically which portions of the message match
the DKIM signature's on-behalf-of and domain.  The SSP draft is
conflating a valid DKIM signature and an ADSP record into an assertion
of the identity of the message's Author.  Such conflation clearly and
dangerously exceeds the DKIM charter.


The underlying goal of ADSP was to afford a domain control over the
use of its name within the From header field.  This goal can be fully
met without stipulating that the DKIM signature be on-behalf-of the
identity within the From header field.  The on-behalf-of should relate
to what the domain authenticated, even when the on-behalf-of is
opaque.  It is common for what a provider authenticates not to be the
email address within the From header field.  Had SSP permitted an ALL
assertion along with a practice of having the on-behalf-of opaquely or
otherwise reflect what the signing domain authenticated, then DKIM and
ADSP would be effective at detecting Bot-Net related abuse, the current
email/Internet plague.  Perhaps some stats will soon be published
regarding this concern.  A link will be published when ready.


It even seems SSP might be attempting to sabotage DKIM's anti-abuse
utility.  There is no ADSP assertion that a domain is not used to send
or sign email, so ADSP DISCARDABLE at every node is how a domain might
wish to protect its hierarchy.  Requiring the ADSP record to also sit
below _domainkey makes it impossible to know whether the domain
publishes no DKIM public keys at all.  Compared to a term like
DISMISSABLE, DISCARDABLE also implies that DSNs can be lost.  SSP fails
even to indicate that its assertions pertain to SMTP.  This lack of
protocol specificity, combined with the implied requirement that the
domain exist in DNS, might therefore disrupt any message where the From
header field domain is not published in DNS.


-Doug






Re: draft-ietf-dkim-ssp-06 in last call?

2008-10-08 Thread Douglas Otis


On Oct 8, 2008, at 2:28 PM, Stephen Farrell wrote:


Doug,

Firstly, this draft is not yet in IETF LC, so you've jumped the gun  
a bit.


In any case, I believe you had, and took, a number of opportunities  
to present your concerns at f2f meetings, on the list and in I-Ds  
like the one cited below. However, you did not garner support for  
your suggested changes. (Frankly, you didn't succeed in fully  
explaining your concerns, at least I still don't quite get what  
worries you.)


A praiseworthy goal of SSP, when widely adopted, would be to serve as a
mechanism that helps limit the use of a domain in the From header field
of email by requiring a signature by that domain.  Unfortunately, SSP
extends this goal into an area beyond its charter.  Who should cry foul?


For a signature to be compliant with even the lesser of the two ADSP  
restrictive assertions, the signature's on-behalf-of  or its default  
must match against the From header field, the message's author.  Why  
is that?


This contrived requirement prevents a proper use of the on-behalf-of
field as a means to indicate what had been authenticated.  That
authentication information is needed to better deal with replay abuse
without collaterally blocking an entire domain.  The overhead of
verifying two DKIM signatures will then be needed to accommodate an
on-behalf-of that does not match the From header field, and as a result
the signature associated with the From header field is likely to carry
a blank on-behalf-of.  Two signatures, where one informative signature
could have been employed, is not practical.  Requiring a domain's
message to include a signature that is on-behalf-of the From header
field is a bad practice.  This practice is wasteful.  This practice is
bad for the environment.  There is nothing gained by a blank
on-behalf-of value, except as a requirement of this very bad practice.
As a rhetorical question, why would large domains not wish to give
recipients any option other than all-or-nothing?  Clearly, the
requirement that compliance be on-behalf-of the From header field gives
larger domains a distinct and unfair advantage in this respect.


Some large providers manage to control outbound traffic to the point
where only 5% of their traffic is likely the result of Bot-Net or
bad-actor related abuse.  Considered in the context of a signature
lacking meaningful replay-abuse prevention, a blank on-behalf-of offers
nothing to improve the situation.  There is also a real danger that
message annotations based upon the existence of an ADSP record and a
DKIM signature might be seen by recipients as an assurance that the
From header field has been validated.  With the DKIM WG being within
the Security Area, this should have been a concern, since ADSP
compliance provides no option other than to have the DKIM signature be
on-behalf-of the From header field.


But hopefully we'll get the benefit of IETF wide review in the near  
future so maybe someone else will be able to see what the DKIM list  
didn't.


My hope as well.  I hope to see DKIM offer a practical means for
ushering in IPv6 email.  IPv6 is too massive and too tangled to be
properly handled by black-hole lists.


Lastly, by voting process, I guess you must mean the question I  
asked after the end of WGLC, (thread at [1]). That wasn't a voting  
process, it was me checking that relatively few WGLC comments didn't  
reflect a lack of consensus, that the changes to handle the few WGLC  
comments that were posted had been processed ok, and that there was  
in fact no one at all on the list to speak up and agree with your  
concerns. So, I checked, and we got a bunch of +1 messages to the  
effect that the draft was ready to send to the IESG and, so far,  
zero messages agreeing with your position.


IMHO, the damaging decisions were not the result of mailing-list
discussions.  When attempting to discern what led to those decisions,
little can be learned from a long series of empty votes.  The reference
to voting was not about whether to go to last call; of course, that too
was rather typical.  How is it that a statement of a factual error in
the draft did not find thoughtful discussion on the mailing list, other
than a vote on some suggested fixes?  The lack of reasoning goes to the
heart of the concern.  When asked how or why a critical decision was
made, an ensuing discussion should offer assurance that the reasoning
was principled.  "You can always add two signatures" sounds too much
like "Let them eat cake" from my perspective.


-Doug



Stephen.

[1] http://mipassoc.org/pipermail/ietf-dkim/2008q3/010706.html

Douglas Otis wrote:
Here is a draft that outlines a few concerns for draft-ietf-dkim- 
ssp-06:


http://www.ietf.org/internet-drafts/draft-otis-dkim-adsp-sec-issues-03.txt

These concerns were not fully discussed on the DKIM list expect for
voting

IETF mail-lists outbound server being exploited

2008-09-24 Thread Douglas Otis
There is a growing stream of spam being emitted from 64.170.98.32.
This appears to be the outbound server for IETF mailing lists.  Some of
the spam has had headers added that indicate the message was originally
received from an open relay or an IP address on several block-lists
(with much of it Unicode-encoded).  The messages identified as spam
have included the original message content.  This is creating a problem
where receiving messages from the IETF is less reliable.


Postmaster@ has been copied.  Redacted examples are available upon  
request.


-Doug


Re: Update of RFC 2606 based on the recent ICANN changes ?

2008-07-07 Thread Douglas Otis


On Jul 7, 2008, at 10:49 AM, John C Klensin wrote:

--On Monday, 07 July, 2008 17:19 + John Levine
[EMAIL PROTECTED] wrote:
John,

While I find this interesting, I don't see much logical or  
statistical justification for the belief that, if one increased (by  
a lot) the number of TLDs, the amount of invalid traffic would  
remain roughly constant, rather than increasing the multiplier.


And, of course, two of the ways of having networks [to] clean up  
their DNS traffic depend on local caching of the root zone (see  
previous note) and filtering out root queries for implausible  
domains.  Both of those are facilitated by smaller root zones and  
impeded by very large ones.


Agreed.  This is happening while some email providers suggest
widespread adoption of MX resource records targeting the roots to
signify opting out.  Not only does this form of email opt-out unfairly
burden the victim, it also victimizes the roots.  Are the roots really
inexhaustible, capable of sustaining high levels of horizontal growth
and ever greater levels of DNS misuse while also adopting an additional
security layer?  How will the roots be able to block abuse once it
proves destructive?


From the human aspect, the list of common file extensions is
mind-numbingly long.  With a changing TLD landscape, one will no longer
be sure whether a reference is to a file or to an Internet host.  This
becomes critical since automation is often used to fully construct
links.  Will obvious names such as .C0M be precluded, or the less
obvious ones using internationalized domain names?  While this might
help ICANN raise money, their profit seems destined to come at the
expense of those currently supporting existing infrastructure.  If
domain tasting is an example of governance, how can ICANN be trusted to
operate in the greater interest of the Internet?  It seems more
reasonable to extend ccTLDs into a comparable list of internationalized
domain names where desired, then wait a decade to measure the impact
and to allow wider deployment of DNSSEC.


Smaller steps, rather than faith in ever greater capacity, seem more
appropriate.  If DNS traffic were to approach the roots' ability to
respond, DDoS attacks would take on truly global proportions.


-Doug




Re: draft-pearson-securemail-02.txt

2008-05-05 Thread Douglas Otis

On May 3, 2008, at 3:44 PM, Frank Ellermann wrote:

 SM wrote:

 SenderID and SPF does not authenticate the sender.

 For starters they have different concepts of sender, PRA and  
 envelope sender, and RFC 4408 section 10.4 offers references (AUTH +  
 SUBMIT) for folks wanting more.

Agreed.  Neither Sender-ID nor SPF offers authentication.  Both of these
schemes provide a method for domains to _authorize_ the IP addresses
used by SMTP clients.  This cannot be described as authentication,
since SMTP clients are often shared by more than one domain.  These
schemes also depend entirely upon secure routing across questionable
trust boundaries.  In addition to the section 10.4 references, DKIM is
another possible choice.

-Doug 
   


Re: Last Call: draft-klensin-rfc2821bis: closing the implicit MX issue

2008-04-15 Thread Douglas Otis

On Apr 14, 2008, at 7:33 PM, Tony Hansen wrote:

 The SMTP implementations that have made the transition to support
 IPv6 appear to already have done it in a way that supports AAAA
 records for the implicit MX case. In some cases they are following
 RFC 3974, and other cases they are just using getaddrinfo() and  
 letting it do the rest. Note that RFC 3974 itself was supposedly  
 based on experience in deploying IPv6. At least one of these MTAs  
 is in common use around the network in the IPv4 world.

 In essence, these implementations are following the RFC 821 and RFC  
 974 (sans WKS) rules of looking for address records. They've ignored  
 the A-only wording of RFC 2821 and are acting like we specify  
 currently in 2821bis-09.

 In my queries I haven't yet found any IPv6-capable SMTP server that  
 doesn't do it.

 I've seen examples of sites that are in regular use where mail would
 break in an IPv6-only world if implicit MX to AAAA did not work.

 From this viewpoint, running code wins.

Abusive running code wins as well.  Indeed, many bad actors will
appreciate not being limited to the A or MX records specified in RFC
2821's discovery mechanisms.  Whether the specifics of that
specification were in error is a separate issue; endorsing AAAA records
as a valid discovery mechanism represents a substantial change, and one
likely to prove unsuccessful once more widely adopted and then abused.
The A resource record enabled a transition to MX resource records.  The
necessity of the implied MX record is no longer justifiable.
Permitting A records as a means to discover SMTP servers already
generates a steady amount of abuse by bad actors that use A-record-only
host names as spoofed domains in their email addresses.

To overcome what is destined to become an undesirable MX fall-back
mode, some have suggested that bogus MX records could be published
instead.  Bogus MX records would then become perhaps the _only_ means
to protect hosts not involved in the public reception of SMTP
messages.  Much of the undesired traffic does not directly emanate
from clients controlled by bad actors.  Often, undesired traffic
results from attempts to validate oft-spoofed email sources.  The
level of this undesired traffic can be substantial.

 I'm also swayed by the principle of least surprise. Some of the  
 responses I've gotten have been along the lines of "Why's this a
 question? Of course you do AAAA lookup." One person who had a site
 set up with an IPv6-only connection and no MX record told me I  
 wanted to forward my e-mail to an account on that machine. It worked  
 the first time, so I didn't see a need to change it. As mentioned  
 above, at least one of the IPv6-capable MTAs is in common use around  
 the network in the IPv4 world, and turning on IPv6 on those boxes  
 should not cause surprises.

Those wanting inbound SMTP on their IPv6-only hosts can routinely
include MX records in addition to their AAAA records.  Keeping a
requirement to publish either A or MX records would relieve the rest of
the world from seeking protection by publishing bogus MX records for
every host where inbound SMTP is not desired.  High levels of abuse
often require public inbound receivers to specialize in defending
against the abusive traffic.  Public SMTP servers represent a shrinking
minority of hosts that might benefit from the convenience of not
needing to publish MX records.  However, to improve both the
performance of SMTP servers in general and the acceptance rates of
IPv6-only hosts, publishing MX records for an email domain is likely to
become increasingly critical.  Least surprise is assured by discovery
methods that work at large scale while also limiting avenues for
abuse.  Having AAAA records imply MX resource records at large scale
without rampant abuse would be astonishing.

 Last of all, I'm swayed by the discussions around RFC 974 and the  
 DRUMS archive search around the question of what led to the wording  
 change in 2821bis saying explicitly to do A lookups. These indicate  
 that the intent of adding the A record description was to be  
 descriptive, not prescriptive nor proscriptive.

SMTP represents a great achievement.  Much of this achievement came
from minimizing the administrative effort needed to establish SMTP
services.  These minimal administrative efforts now play a significant
role in fostering the current levels of abuse afflicting SMTP.
Standardizing on AAAA as an MX fall-back moves SMTP and IPv6 in the
wrong direction.  Sensitivity to abuse argues for a practice of opt-in
rather than opt-out.  AAAA as an MX fall-back may force IPv6 SMTP hosts
to publish bogus MX records for the specific purpose of opting out.
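
A sketch of that opt-out workaround, using the root-targeting MX form
mentioned elsewhere in these messages; the names are illustrative, and
this was an informal practice rather than a standard at the time:

host.example.com.   IN AAAA  2001:db8::25
host.example.com.   IN MX    0 .     ; bogus MX: this host accepts no mail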

 So the bottom line is that I see sufficient support for including
 AAAA lookups when implicit MX comes into play.

It seems the specification should attempt 

Re: Last Call: draft-klensin-rfc2821bis

2008-03-28 Thread Douglas Otis

On Mar 27, 2008, at 8:31 PM, Keith Moore wrote:
 David Morris wrote:

 Perhaps you could help us out and share a reference to  
 documentation of such problems. I for one have not personally  
 observed any problems related to using the A record to locate a  
 mail server when there is no MX.

 part of the problem is that most hosts don't wish to receive mail  
 but there is no way to indicate this to a mailer that is trying to  
 deliver mail to them.

Agreed.  Although MX records provide a discovery method for SMTP
services, falling back to A records prevents any assertion of "no SMTP
service" when A records exist at the domain.

 if the host has an A record, under the current specifications, a  
 mailer that gets a message (presumably spam) with a recipient  
 address at that domain is expected to try to deliver that message.   
 and unless that host implements a dummy SMTP server that refuses all  
 incoming mail with a permanent error the sending mailer will keep  
 getting connection refused - which is treated as a temporary error  
 and the sending mailer will keep retrying.  this clogs up mail queues.

As John Levine pointed out, a message with an originating email address
referencing a domain containing only AAAA records is likely to be
refused.  This is in part to avoid potential issues handling NDNs, as
Frank suggested, and in part because each IPv6 hostname offers a vast
number of domains and addresses that can be exploited as a spoofed
source once the RFC2821bis fallback specifically includes IPv6 AAAA
records.

 and the dummy SMTP server works, but it consumes resources on the  
 host and eats bandwidth on the network.  having a way to say don't  
 send this host any mail in DNS seems like a useful thing.  and we  
 simply don't need the fallback to  because we don't have the  
 backward compatibility issue that we had when MX records were  
 introduced.

Not sanctioning IPv6 AAAA records as an MX fall-back avoids the kind of
undesired traffic now caused by SMTP spoofing of A records.  MX records
might then be seen as an opt-in mechanism from the perspective of IPv6,
since opt-out mechanisms are onerous for those not wishing to
participate.  While Bill and others expressed concerns about being tied
to DNS, whatever replaces DNS must also offer separate service and IP
address resolution mechanisms.  The concerns related to DNS seem to
assume there would not be separate service/address mechanisms, but that
would not suit those running their services out of different domains.

Not sanctioning an IPv6 MX-to-AAAA fallback actually makes IPv6 easier
to manage, in that email policies will not be required at every IPv6
hostname as they would be for IPv4.  Those wanting to employ small and
simple services connected directly to the Internet might otherwise find
those services inundated by undesired traffic whenever their hostname
is used to spoof an email source.  Not sanctioning the IPv6 MX-to-AAAA
fallback makes IPv6 safer from abuse, perhaps enough to compensate for
the quadrillions of hostnames and IP addresses that might be involved.
Over time, SMTP itself may not remain viable as an exchange between
anonymous parties if RFC2821bis retains IPv6 AAAA records as a
fall-back for MX records.

-Doug


Re: Last Call: draft-klensin-rfc2821bis

2008-03-24 Thread Douglas Otis

On Mar 24, 2008, at 11:42 AM, Ned Freed wrote:

 John C Klensin [EMAIL PROTECTED] wrote:
 --On Saturday, 22 March, 2008 23:02 -0700 Douglas Otis
 [EMAIL PROTECTED] wrote:

  The update of RFC2821 is making a _significant_ architectural
  change to SMTP by explicitly stating AAAA records are within a
  list of SMTP server discovery records.

 Well, to be very precise, 2821 is ambiguous on the subject.

   Agreed.

 Some people have read (and implemented) it as if the text said  
  address records, implying a default MX for the AAAA case as well
 as the A one.   Others have read it more narrowly and only  
 supporting the default for A RRs.To the extent to which  
 2821bis is expected to eliminate ambiguities that were present in  
 2821, it had to say something on this subject.

   It might, however, be best to simply document the ambiguity. I  
 suspect that implementation reports would show some implementations  
  querying for both AAAA and A RRs, while others query only for A
 RRs. I am not convinced that 2821bis _should_ try to resolve this.

 I don't think we have a choice. it is obvious that this ambiguity  
 can lead to interoperability problems, and that means this has to be  
 resolved if the document is to obtain draft standard status.

Anyone who relies upon just publishing AAAA records to enable the
discovery of their receiving MTAs is unlikely to find this widely
interoperable.  Until A records for discovery have been deprecated, the
A-record method of discovery and of email-address domain validation
will continue to be used.  MX records have been in use long enough to
safely rule out newly and explicitly implied MX records.  The choice
seems rather clear, especially in the face of the undesired traffic
created by implied use of address records.  Please don't add AAAA, or
any future address record, to a list of records likely to be abused by
spammers.

 If it says address records (as the current text does),

   Actually, it says address RR (if I understand what text we're  
 discussing). I believe we're discussing Section 5.1 of

 http://www.ietf.org/internet-drafts/draft-klensin-rfc2821bis-09.txt

 where it says
 
  The lookup first attempts to locate an MX record associated with  
 the
  name.  If a CNAME record is found instead, the resulting name is
  processed as if it were the initial name.  If no MX records are
   found, but an address RR (i.e., either an IPv4 A RR or an IPv6 AAAA
  RR, or their successors) is found, the address RR is treated as  
 if it
  was associated with an implicit MX RR, with a preference of 0,
  pointing to that host.

 (Please correct me if I misunderstand what text we're discussing.)

This is the correct section.
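
For clarity, a minimal sketch of the lookup order this quoted text
describes (MX first, then an address RR treated as an implicit MX with
preference 0); a recent dnspython library is assumed, its resolver's
CNAME-chasing stands in for the CNAME restart, and error handling is
simplified.

import dns.resolver

def smtp_targets(domain):
    # Return (preference, target) pairs per the quoted Section 5.1 text.
    try:
        mx = dns.resolver.resolve(domain, "MX")
        return sorted((r.preference, r.exchange.to_text()) for r in mx)
    except dns.resolver.NoAnswer:
        pass                                  # fall through to address RRs
    except dns.resolver.NXDOMAIN:
        return []
    # No MX: an A or AAAA RR is treated as an implicit MX with preference 0.
    for rrtype in ("A", "AAAA"):
        try:
            dns.resolver.resolve(domain, rrtype)
            return [(0, domain)]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue
    return []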

 you (and perhaps Mark and others) dislike the consequences and  
 claim significant architectural change. If it is changed to  
 explicitly indicate that only A RRs can be used to imply an MX  
 record, then I assume that those who read 2821 the other way and  
 are now supporting  records to generate an implied MX would  
 claim significant architectural change.

   Indeed, that seems likely (though not demonstrated).

   But regardless, we're on shaky ground if we try to force either
 kind of implementation to change. May I suggest a different  
 starting point:

 On the contrary, it is expected that options can be eliminated from  
 documents moving to draft, especially if in doing so an  
 interoperability problem is removed.

Clarity can be established and interoperability _improved_ by limiting  
discovery to just A and MX records.  Perhaps a note might be included  
that at some point in the future MX records may become required.

   I think there's very strong consensus that the presence of an MX  
 RR demonstrates the intention to receive email addressed to the  
 domain in question. I don't think there's any consensus that the  
  presence of an AAAA or A RR demonstrates such an intent.

 Perhaps, but the fact remains that MX-less configurations are  
 currently legal and in use.

While some still use A records, largely due to prior practice, those
prior practices did _not_ include the use of AAAA records.

   There is, however, considerable history that the presence of an  
 address RR _in_combination_with_ a process listening on port 25 of  
 the IP address in question indicates a willingness to receive email  
 for a domain identical to the domain of that address RR.

To avoid ambiguity, A records, and not AAAA records, would be an
accurate statement of prior practices.

 OK...

   Whether or not we have any consensus that this historical  
 practice should be deprecated (I would vote YES!), rfc2821-bis is  
 not, IMHO, the right place to deprecate it.

 On this we agree. We haven't gone through any sort of consensus  
 process on this and since there is no justification for deprecating  
 use of A-record-only configurations as part of a move to draft this  
 cannot

Re: Last Call: draft-klensin-rfc2821bis

2008-03-23 Thread Douglas Otis

On Mar 20, 2008, at 3:30 PM, John C Klensin wrote:



 --On Friday, 21 March, 2008 09:03 +1100 Mark Andrews
 [EMAIL PROTECTED] wrote:

  I think Doug is saying don't let domains with just AAAA
  records be treated as valid RHS of email.  Today we
  have to add records to domains with A records to say that
  these are not valid RHS of email.  With MX synthesis
  from AAAA you create the same problem for domains with
  AAAA records.

  user@A record owner
  user@MX record owner
  user@AAAA record owner  * don't allow this.

 Mark, Doug,

 With the understanding that this is just my personal opinion (as  
 editor, I'll do whatever I'm told) _and_ that I'm personally  
 sympathetic to phasing out even the A record implicit MX...

 It seems to me that 2821bis is the wrong place to try to fix this,  
 especially via a comment posted well after the _second_ Last Call  
 closed.   The current phrasing is not an oversight. It was  
 explicitly discussed on the mailing list and this is the behavior  
 that people decided they wanted.


John,

In the past you had made several comments that RFC2821bis would not  
change SMTP, and you had also stated that AAAA records were NOT  
defined as SMTP server discovery records.  (Not in those words of  
course.)  It does not appear this change was your choice, but  
nonetheless, and surprisingly, this unfortunate change is now being made.

The update of RFC2821 is making a _significant_ architectural change  
to SMTP by explicitly stating AAAA records are within a list of SMTP  
server discovery records.  This change represents a poor architectural  
choice since this _will_ increase the burden on networks being spoofed  
by abusive email.  Due to high levels of abuse, confirming validity of  
email domains by checking for discovery (A and MX) records in the  
forward DNS zone often replaces an alternative of checking PTR records  
in the in-addr.arpa reverse DNS zone.  The reverse zone suffers from  
poor maintenance where its use creates a sizeable burden for  
recipients.  RFC2821bis now adds AAAA records to a list of records  
that must be checked to disqualify public SMTP server domains within  
the DNS forward direction.  This change adds to the transactional  
burdens already headed in the wrong direction.  It would seem a sound  
architectural change would be to deprecate A records as a means to  
qualify domains for message acceptance, but RFC2821bis adds AAAA  
records instead.  This situation becomes considerably worse when  
domain tree walking or wildcards are then preferred over checks  
against discovery records.

It was not my intention to post this after last call, but this only  
came to my attention recently.  For that I am sorry; nevertheless, this  
issue may deserve greater consideration.

-Doug










___
IETF mailing list
IETF@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-klensin-rfc2821bis

2008-03-20 Thread Douglas Otis
While this response may be a bit late, the change in section 5.1,  
indicating that SMTP server discovery now explicitly supports IPv6 address  
records, represents a significant change from RFC2821.

While a desire to retain current practices has some justification,  
extending an already dubious and archaic practice to the explicit use  
of IPv6 raises significant questions.

The level of misuse inflicted upon SMTP often results in an  
exploration of DNS SMTP discovery records to determine whether a  
purported domain might be valid in the forward direction.  To remain  
functional, reverse DNS checks are often avoided due to the poor level  
of maintenance given this zone.  A move to deprecate A records for  
discovery when performing these checks to ascertain domain validity  
would favourably impact the level of undesired transactions.  To  
combat rampant domain spoofing, some domains also publish various  
types of SMTP related policy records.  To counter problems related to  
wildcard policy records, a lack of policy may be conditioned upon  
the presence of possible SMTP discovery records.

Adding IPv6 to the list of transactions needed to qualify SMTP domains,  
a list predominantly triggered by geometrically growing levels of  
abuse or misuse, appears to be a remarkably poor choice.  To suggest a  
domain might rely upon this mechanism again appears to be remarkably  
poor advice.  Reliance upon a communication system should not be  
predicated upon such a questionable mechanism.  During the next  
disaster, would you want FEMA to not use MX records or to depend upon  
IPv6 address records?  Not including IPv6 as a discovery record would  
better protect networks in the face of growing misuse of SMTP while  
also better ensuring the integrity of SMTP.

-Doug
___
IETF mailing list
IETF@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: IPv4 Outage Planned for IETF 71 Plenary

2007-12-18 Thread Douglas Otis


On Dec 18, 2007, at 9:29 AM, Stephen Farrell wrote:


While I think the original idea of doing this during a plenary is  
fine, doing it in the meeting areas on Tuesday evening does sound  
like a better option. Awarding success with real beer at the social  
iff you can print the coupon would motivate me at least:-)


How would you ensure that a participant doesn't print out a coupon for  
everyone on the attendance list, and then party hearty?  Perhaps they  
could enter their name and a number given to them, much like their  
T-shirt coupon.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: TMDA backscatter

2007-10-16 Thread Douglas Otis


On Oct 16, 2007, at 5:00 AM, Frank Ellermann wrote:


Douglas Otis wrote:


Do you expect everyone to use inbound servers to send?


No.  Of course I'd expect that mail to postmaster to the IP of an  
MTA talking to an MX I care about works.  BTW, it would be nice if  
only the MXs of the envelope sender address talk to other MXs, we  
could then scrap SPF in this inherent RMX parallel universe.   
Just decree that everybody has an implicit v=spf1 mx a  -all  
policy, what a dream.  Actually SPF is a clumsy approximation of  
this dream.


Requiring MX for discovery eliminates additional transactions needed  
to assert no policy record or valid email domain.  However, the SPF  
or RMX concept is seriously flawed.


Defining valid sources might scale when done by-name but never by- 
address.  By-Name is how MX records declare destinations.  PTR  
record-sets adjacent to an MX record could associate valid client  
domains.  Perhaps special names like _NONE or _ANY could be  
defined for PTRs under a _MX-OUT leaf.  DNS would then be able to  
return addresses needed to validate a specific client host-name  
within a single transaction, but not for all possible host-names.   
Being able to validate clients within an authorized source domain  
should safely satisfy your need for a protection mechanism.


Fully resolving just one MX record for both IPv4 and IPv6 addresses  
might necessitate 20 DNS transactions when permitting 10 MX targets.   
SPF allows 10 MX records, where each may contain 10 targets.  Unless  
IPv6 is excluded, subsequent transactions for both A and AAAA records  
might be needed, where a high level of NXDOMAINs would be ignored.   
This means 10 x (10 + 10), or 200 DNS transactions, satisfy SPF  
semantics for each email-address checked.  However, some suggest  
using SPF to authorize DKIM domains or PRAs first.  When DKIM or PRAs  
are not authorized, it is common to then check MAIL FROM.  Each  
message might therefore invoke 400 DNS transactions.  Even after  
allowing this number of transactions, the resulting list  
may not encompass the addresses of all valid outbound hosts.
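
Restating that arithmetic as a small sketch (the counts below are the
worst-case assumptions used in the text above, not protocol constants):

    MX_MECHANISMS  = 10  # an SPF record may list up to 10 mx mechanisms
    TARGETS_PER_MX = 10  # each MX RRset may hold 10 target host names
    ADDRESS_TYPES  = 2   # an A plus an AAAA lookup per target, unless IPv6 is excluded
    IDENTITIES     = 2   # e.g. a PRA check followed by a MAIL FROM check

    per_identity = MX_MECHANISMS * TARGETS_PER_MX * ADDRESS_TYPES  # 10 x (10 + 10) = 200
    per_message  = per_identity * IDENTITIES                       # up to 400
    print(per_identity, per_message)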


Use of MX records is normally predicated upon the sending of  
messages.


Yes, I proposed a simplification of your DNS attack scenario based  
on call back verification instead of SPF.  AFAIK CBV works by  
connecting to an MX of the alleged sender, and trying a RCPT TO the  
envelope sender address.  If that results in unknown mailbox the  
receiver knows that the envelope sender address is dubious at best.


Spammers also know this, that's why they forge plausible  
addresses. An SPF FAIL cuts them off at this stage.  As long as  
there are more than enough unprotected plausible (= surviving  
CBV) addresses the spammers can carry on as is.


See above.

However, overall amplification is much less when a message  
triggers an automated response.


When I got about 1,000 bogus bounces and other auto-replies per day  
in 2004 I didn't care about other side effects, my mailbox was  
almost unusable.


See above.


An automated response is likely to be filtered


Filtering behind a POP3 modem connection is tricky, after all I  
still wanted to get good bounces, challenges, and other auto- 
replies.


Filtering or rejecting auto-responses before final delivery needs  
something in the direction of BATV plus good heuristics to identify  
non-standard backscatter (as outlined in RFC 3834).


BATV doesn't directly fight forgeries, it only stops backscatter  
before final delivery (ideally rejecting it at the MX of the  
alleged originator).  It's a nice scheme, use it if you can.  But  
it won't help me (as receiver) to reject forged mail from Douglas  
Otis.


See above.

SPF permits an enormous amount of DNS traffic to be directed  
toward an innocent, uninvolved domain.


Well, I'd try something more direct when I'd wish to attack a  
domain, your idea is IMO far too convoluted.


SPF enables a cached record to redirect the subsequent DNS  
transactions of a recipient.  In addition, SPF provides an attacker  
the means to manage a sequence of MX records independently from what  
is seen as the originator.


And your claim that a domain under attack by bogus A or AAAA  
queries caused by MX records of the attacker, based on CBV, SPF, or  
what else, has no chance to find out who the attacker is, might be  
also wrong.


Logs of the DNS and mail servers may provide clues about which  
domains appear to have staged the attack, but the domain visible to a  
mail server is able to transition rapidly, which prevents any filter  
from curtailing an ongoing attack.


The querying hosts can find out why they do this, and at that point  
one or more name servers of the attacker (where he manages the  
bogus MX records) are also known.


Finding one of perhaps hundreds of DNS servers used by an attacker  
will not offer a meaningful defence.  The ability to redirect DNS  
transactions afforded a cached SPF

Re: TMDA backscatter

2007-10-15 Thread Douglas Otis


On Oct 15, 2007, at 5:51 AM, Frank Ellermann wrote:


Douglas Otis wrote:

By using the local-part in a spam campaign, one cached SPF record  
is able to reference an unlimited sequence of MX records.


In theory IANA could publish one _spf.arpa v=spf1 mx a  -all  
record, and everybody could use it with v=spf1  
redirect=_spf.arpa. That one SPF record can (kind of) reference an  
unlimited number of MX records doesn't depend on SPF's local-part  
macro.


This adds perhaps 21 useless DNS transactions for each email  
received?  Do you expect everyone to use inbound servers to send?


And e-mail providers wishing to offer per user policies could  
also create corresponding subdomains replacing [EMAIL PROTECTED]  
addresses by say [EMAIL PROTECTED] addresses.  Likewise  
attackers trying to cause havoc can use [EMAIL PROTECTED]  
addresses, they don't need SPF's local part macro for this purpose.


After all in your attack scenario it's the attacker who controls  
the MX records pointing to bogus addresses in the zone of the victim.


Use of MX records is normally predicated upon the sending of  
messages.  When sending a message, MX records may have a potential  
DNS traffic amplification of about 10.  However, overall  
amplification is much less when a message triggers an automated  
response.  An automated response is likely to be filtered, and the  
sender risks being blocked, whether or not the MX record is doing  
something nefarious.  Many situations can achieve this level of  
network amplification.


However, SPF's macros transform a spam-related attack into being  
completely free without tainting the messages.  Sending spam should  
be considered a given as it represents a large portion of today's  
traffic.  SPF permits an enormous amount of DNS traffic to be  
directed toward an innocent, uninvolved domain.  Once the attack  
duration exceeds a recipient's negative caching, no additional  
resources of the attacker are consumed!  The attack can come from  
anywhere, is not easily traced, and, beyond universal banning of SPF  
records, there is no defence!  ACLs on DNS will not help, nor will  
BCP 38.  This attack is likely to continue for a long duration.


Spammers often send messages continuously.  A spam campaign can  
come from any source, purport to be from any domain, and yet  
attack the same innocent victim, which cannot be identified by  
examining the message.


Your attack scenario has nothing to do with a spam campaign, the goal  
of a spam campaign is to send unsolicited mails to receivers.  The  
goal of your attack scenario is to flood a zone with bogus and huge  
A or AAAA queries.  And in your scenario the mail cannot purport to  
come from any domain, it has to come from domains under the control  
of the attacker where he manages his bogus MX records.


SPF adds a layer of indirection.  SPF allows the local-part of spam  
to generate DNS transactions without consuming additional resources  
of attackers once their SPF record has been cached!  The attack can  
then query any MX, A, AAAA, PTR, and TXT records from any domain  
unrelated to message content.  Even the attacking MX record might be  
unrelated to any domain found within the message.  SPF's added  
indirection provides spammers a complete disguise.  Recipients using  
SPF have no way to know which messages are staging an attack.  This is  
true even when they do not wish to play a role in a reported ongoing  
attack.  Expect spammers to take advantage of this generous SPF  
gift. : (


Compared to an SPF-related attack, most auto-replies will consume  
a greater amount of an attacker's resources, identify the source of  
the attack, and not achieve the same level of amplification.


Backscatter doesn't consume resources of the spammer (apart from  
sending the UBE with a forged envelope sender address), and it can  
be mail from any plausible address, not identifying the real  
source.  That's precisely the problem solved by SPF FAIL and PASS.


RFC3464 represents a far better solution as it removes incentives.


SPF enables a long duration DDoS attack.


Sorry, from my POV you still arrive at MX records can be abused,  
not limited to SPF (or rather only limited if done using SPF).


Of course MX records can be abused.  SPF increases the potential  
level of abuse by a factor of 10 or 20.  With SPF macros, abuse also  
becomes completely free.



Block all SPF records?


At least we won't need a new rfc-ignorant zone to implement this  
proposal... ;-)


Prevention might require DNS servers be modified to exclude SPF  
records.  Do not expect compliance without some form of blocking  
imposed.



Maybe you should propose some MX processing limits for 2821bis.


Most systems are fairly careful about where they send messages to  
avoid being blocked, nor would the normal use of MX records  
require all targets be resolved.  Timeouts recommended by RFC2821  
ensure this operation remains fairly safe, unlike

Re: The IETF duty for Carbon emissions, energy use ct.

2007-10-15 Thread Douglas Otis


On Oct 15, 2007, at 5:18 PM, Brian E Carpenter wrote:


Joel,

The volunteers built a model that was sustainable with a modest  
amount of capital and time, both jabber and streaming audio.


For which many thanks from many of us, I'm sure. Also, the  
Secretariat built a tool so that all slides can be uploaded before  
each session, or even in real time during the session.


Of course, it isn't the same thing as telepresence over dedicated  
bandwidth, but that's in a different financial league.



IM conversations or debates using text only are difficult, to type  
the least.  Few have the patience needed to wait for others to finish  
typing before changing the topic.  Access to the room microphone  
remotely would be nice.  Perhaps a fee-based vetting might be used to  
limit access to this expensive resource.  Also, those attending would  
not want to contend with thousands of others queued up on-line to ask  
their question and then perhaps one follow-up.


Many IETF participants already use Skype.  Skype runs on Linux, OSX,  
and Windows.  Skype works through many different firewalls as well.   
It could be interesting to find out whether a phone bridge that can be  
easily controlled by the room moderator is available.  The cost may not  
be too great when using network-based voice clients and a soft switch?   
Is there an open-source alternative?


-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: TMDA backscatter

2007-10-11 Thread Douglas Otis


On Oct 11, 2007, at 2:23 AM, Frank Ellermann wrote:


Douglas Otis wrote:

Macro expansion and text strings can be combined with any SPF  
record mechanism and CIDR mask.  Any email-address differing in  
just the local-part must then be iteratively processed across the  
SPF record set (up to 10 follow-on SPF records or mechanisms).


Yes, an attacker can use 10 mx mechanisms in his malicious policy  
for domains under his control with names containing 10 different  
parts of the LHS (local part) in his envelope sender address.


Each of 10 MX records may necessitate resolving 10 target addresses  
for a total of 100 targeted DNS transactions per SPF name  
evaluation.  By using the local-part in a spam campaign, one cached  
SPF record is able to reference an unlimited sequence of MX records.   
Targets within MX records can also utilize a small variable label  
together with large static labels to leverage DNS name compression.   
Without MX records being cached with dependence upon negative cache  
expiring, the network amplification of an SPF related attack using MX  
targeting would be about 10.  After a recipient's negative caching  
expires, repeating a sequence of local-parts in a spam campaign could  
thereafter use cached MX records.  Negative caching is not always  
under the control of the victim.  Once a set of MX records becomes  
cached and their targets' negative responses expire, the entire attack  
becomes entirely _free_ for spammers.  Spammers often send messages  
continuously.  A spam campaign can come from any source, purport to  
be from any domain, and yet attack the same innocent victim, which  
cannot be identified by examining the message.  : (


The SPF record is able to conclude with +all and still obtain a  
PASS


Sure, a PASS for mail from unknown strangers only guarantees that  
bounces and other auto-replies won't hit innocent bystanders.  A  
PASS can be still spam or an attack, but it won't create  
backscatter, that is more or less the idea of SPF.


Compared to an SPF-related attack, most auto-replies will consume a  
greater amount of an attacker's resources, identify the source of the  
attack, and not achieve the same level of amplification.  In the case  
of SPF, the attack can become entirely free and completely hidden  
while spamming!  Rate limiting auto-replies would be a much safer  
solution for the recipient to perform.


A single message can be sent to 100 different recipients.  100 x  
100 = 10,000 DNS transactions!


The number of RCPT TO for a given MAIL FROM at the border MTA where  
SPF is checked is irrelevant, it's evaluated once.  Actually the  
same MAIL FROM at a given border MTA has no further effect while  
the SPF and MX records of the attacker are still cached.  But of  
course an attacker can send more mails to more domains with  
different envelope sender addresses, and he can tune the TTL of his  
SPF and MX records.


Each recipient can be within a different domain where a MAIL FROM and  
the PRA may both be evaluated.  By expanding upon the local-part of  
the email-address, caching SPF records actually aids the attack.  The  
only tuning required would be that of the local-part sequence.   
Duration of the sequence only needs to be a bit longer than the  
negative caching of the recipient.


Whatever he does, your attack scenario depends on the mx mechanism,  
and you get an amplification factor of about 10 DNS queries to the  
victim per DNS query to the attacker.


Do you really think a spammer is unable to attack at a gain of 10,  
and then continue the attack at no cost once the duration of the  
attack exceeds the negative caching (determined by the recipients)?   
SPF enables a long duration DDoS attack.  Good luck at attempting to  
prevent this type of attack or at identifying the source.  Block all  
SPF records?


The attacker could also try to abuse call back verification with  
his bogus MX records for a better amplification factor without SPF.


An SPF attack can also target those using wildcard MX, TXT, or A records.   
This attack would be instantly free and much simpler as it would not  
require waiting for negative caching to expire.  : (


AFAIK SPF is so far the only protocol specifying processing  
limits for DNS based attacks.  RFC 2821 certainly doesn't talk  
about it, and I'm not aware of any processing limits wrt SRV or NS  
records.


SPF introduces a long list of macros for locating a sequence of records,  
which then often involves additional DNS transactions.  Even the time  
limit applied by SPF disables DNS congestion avoidance!



Maybe you should propose some MX processing limits for 2821bis.


Most systems are fairly careful about where they send messages to  
avoid being blocked, nor would the normal use of MX records require  
all targets be resolved.  Timeouts recommended by RFC2821 ensure this  
operation remains fairly safe, unlike that for SPF.  Even the packet  
limitation of DNS provides a fairly

Re: TMDA backscatter

2007-10-10 Thread Douglas Otis


On Oct 10, 2007, at 12:12 AM, Frank Ellermann wrote:


Douglas Otis wrote:

Due to the macro nature of SPF, full processing is required for  
each name evaluated.


If what you call full processing is the procedure to fetch the  
policy record of the domain in question, and match the connecting  
IP against this policy for a PASS / FAIL / DUNNO verdict, then  
this doesn't depend on using macros or not in the policy.


IP address evaluations to determine a PASS / FAIL / DUNNO verdict may  
require the expansion of macros inserted into SPF records.  SPF  
macros have the following syntax:


%{symbol | rev-label-seq | # of right-most labels | chars-into “.” }

rev-label-seq = “r”

chars-into “.” = “-” | “+” | “,” | “/” | “_” | “=”

# of right-most labels = 1 - 128

symbol =
 s = email-address or EHLO (initial parameter)
 l = left-hand side of email-address (initial parameter)
 o = right-hand side of email-address  (initial parameter)
 d = email-address domain or EHLO (initial parameter)
 i = SMTP client IP addr decimal octet labels (initial parameter)
 p = r-PTR domain with IP addr validated (not initial parameter)
 v = in-addr IPv4, or ip6 if IPv6 (auto generated)
 h = EHLO host-name  (initial parameter)

Macro expansion and text strings can be combined with any SPF record  
mechanism and CIDR mask.  Any email-address differing in just the  
local-part must then be iteratively processed across the SPF record  
set (up to 10 follow-on SPF records or mechanisms).


For example, an MX record may be located using a label derived from  
some component of the email-address local-part.  The address for each  
target name within the MX record must then be resolved and perhaps  
masked prior to checking for a match.  The macro expansion therefore  
permits the same SPF record to cause a different set of DNS  
transactions to occur over the duration of a spam campaign.  The  
macro expansion provides the spammer a means to fully leverage  
recipient resources any number of times against any domain.  The  
domain being queried can be wholly unrelated to a domain within the  
email being evaluated.  In other words, a flood of queries generated  
by spam recipients can target any innocent bystander.  The SPF record  
is able to conclude with +all and still obtain a PASS while staging  
a completely free attack.
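
To make the indirection concrete, here is a minimal sketch of local-part
macro expansion (only the %{l} macro, with no transformers; the policy and
domain names are hypothetical, not taken from any real deployment):

    # A single cached policy such as "v=spf1 mx:%{l}.mx-list.example.net -all"
    # yields a different query name for every local-part a spammer chooses.
    def expand_local_part(macro_text, sender):
        local = sender.partition("@")[0]
        return macro_text.replace("%{l}", local)

    for lhs in ("alice", "bob", "x93kq"):
        print(expand_local_part("%{l}.mx-list.example.net", lhs + "@victim.example"))
    # alice.mx-list.example.net, bob.mx-list.example.net, x93kq.mx-list.example.net,
    # each of which would then need MX plus A/AAAA resolution by the recipient.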


Maybe you mean that macros can reduce the chances for a DNS cache  
hit, e.g. for per-user-policies based on the local part macro.


The per-user-policy feature of SPF makes a reflected amplification  
attack entirely free for the spammer.  Each new name evaluated from  
their cached records can generate 508 KB in DNS traffic, a higher  
amplification than that achieved in an open recursive attack.


The SPF group dismissed this concern by suggesting DNS amplifications  
could be further increased by including a series of bogus NS records.


A single message can be sent to 100 different recipients.  100 x 100  
= 10,000 DNS transactions!  Each message might be evaluated twice,  
once for the PRA, and then again for the MAIL FROM.  Who knows,  
perhaps this might include the DKIM domain if the use of SPF is not  
denounced.



With SPF, the recipient might be interested in the PRA, or MAIL FROM.


When I talk about SPF it's generally RFC 4408, i.e. MAIL FROM and  
HELO, not the Sender-ID PRA.  The PRA doesn't necessarily help to  
identify a permitted or forged MAIL FROM.  In other words the PRA  
or FWIW DKIM are not designed to avoid backscatter.


DKIM could be extended to avoid backscatter and replay abuse.   
Unfortunately, Sender-ID is being heavily promoted as the  
anti-phishing solution, a problem not addressed by RFC4408.



when an SPF hammer is the only tool


Folks didn't like the idea to reintroduce RFC 821 return routes ;-)
Of course receivers are free to use any tool to avoid backscatter,  
but SPF is today the only tool designed for this job.


I do not agree.  Discouraging abuse by using RFC3464 is a far safer  
tool.  SPF is dangerous and breaks email.


Cleaning up how DSNs are constructed would be _far_ more effective  
than asking that all DSNs be dropped when SPF does not produce a  
PASS.


No.  When I was the victim (back in 2004 before SPF started to  
work) the _size_ of DSNs and other backscatter wasn't the main  
issue, the  _number_ of bogus auto replies was the real problem.


SPF now gives bad actors a much more dangerous tool than auto-replies  
ever represented. : (



JFTR, not PASS can be FAIL or DUNNO.  FAILs should be rejected.

Your idea to abuse DSNs for indirect spam or malware delivery is a  
valid consideration, but in practice the volume of all backscatter  
not limited to DSNs is the real problem.


Limiting the size of DSNs is okay, but it's not far more  
effective than reject a.s.a.p., in fact it misses the point wrt  
backscatter.


This was in terms of reducing risk.  SPF is a dangerous mechanism  
which attempts to leverage a recipient's resources.  Had

Re: TMDA backscatter

2007-10-09 Thread Douglas Otis


On Oct 9, 2007, at 3:59 AM, Frank Ellermann wrote:


Douglas Otis wrote:


There is a real risk SPF might be used as basis for acceptance


You can combine white lists with SPF PASS as with DKIM PASS, the  
risk is very similar.


Due to the macro nature of SPF, full processing is required for each  
name evaluated.  With SPF, the recipient might be interested in the  
PRA, or MAIL FROM.  Of course DKIM now adds d= or i= identities.   
Since DKIM is prone to replay abuse, when an SPF hammer is the only  
tool, more names are likely to be added to the evaluation.  These  
additional evaluations are intended to reduce the number of messages  
lost since neither the PRA nor MAIL FROM is limited to the path  
registered by an SPF address list.  Now imagine an iterative process  
of developing comprehensive lists of all addresses used on behalf of  
a domain which attempts to embrace IPv6. : (



Much of the danger of auto responses has to do with DDoS concerns.


It depends on the definition of DDoS.  From my POV as POP3 user  
over a V.90 connection 10,000 unsolicited mails are just bad, no  
matter what it is (spam, worm, DSN, or auto-response).


Perhaps IANA should allow ISPs to register their dynamic address  
space.  Greater protection is afforded when excluding dynamic address  
space.  Nevertheless, more protection would be afforded by a  
convention that excludes message content within a DSN.  For each  
directly addressed spam source, there are 3 DSN sources that include  
spam message content.  Cleaning up how DSNs are constructed would be  
_far_ more effective than asking that all DSNs be dropped when SPF  
does not produce a PASS.  Perhaps that is why Bell Canada ensures all  
addresses achieve a PASS. : (


It's not really a DDoS.  SPF at least helped me to get rid of the  
bogus DSNs and other auto-responses since three years, smart  
spammers are not interested to forge SPF FAIL protected addresses.


Spammers are equally uninterested in abusing MTAs that produce DSNs  
compliant to RFC3464 where message content has been removed.  This  
strategy also avoids the SPF overhead and related risks. : )


BTW, I think the definition of Joe job in the sieve EREJECT draft  
is obsolete, the mere abuse of plausible addresses is no Joe  
job and IMO also not a real DDoS.  But it's certainly bad for the  
victims, it can be bad enough to make a mailbox unusable for some  
victims.


Yes, DSNs that include content are a problem.  Dropping NDNs or DSNs  
that indicate a failure of some sort also makes email less reliable.   
Email has become far less reliable as a result.  The TPA-SSP scheme  
for DKIM allows a return path to authorize the DKIM signature only to  
encourage the issue of DSNs.  Again, those DSNs should still exclude  
original message content.


A safer approach would be to format all DSNs per RFC3464 and  
remove original message content.


I'd hope that a majority of receivers already does this, that's  
state of the art for some years now.  Or rather truncate is state  
of the art, not complete removal of the body.


It would be rather tempting to make this mode of DSN operation a  
requirement.  At any point in time, we see that some 20 million different  
MTAs which do not remove message content are currently being  
exploited.   Perhaps we should add a new list that indicates which  
MTAs do not remove content on DSNs?  We could then let our customers  
decide whether they want traffic from these MTAs as well.  Not all  
that different from open-proxies and aimed at restoring DSN integrity.


Mailman made a mistake where an error caused a DSN that returned  
original content without first verifying the validity of the  
return path.


Auto responders aren't in a good position to verify the validity of  
the return path.  Good positions to do this are the MSA based on  
RFC 4409 and later the MX based on RFC 4408.


RFC4408 did not mitigate a potential DDoS exploit.  This exploit  
exists when recipients attempt to process SPF as specified.  There  
are several SPF parsing libraries that lack even suggested limits on  
subsequent address resolutions for MX records, for example.  There  
are some libraries that restrict the number of MX targets and cause  
some domains to not be fully resolved at times.  Even this reduction  
in possible MX targets will not make these libraries much safer.


The SPF DoS draft example clearly illustrates an SPF process (current  
open source compliant with RFC4408) might generate more than 50 to  
100 DNS transactions per name evaluated.  The level of this risk  
depends upon the negative caching of recipients and how frequently  
the local-part of the spam campaign repeats.  The latter only needs  
to be a bit longer than the former.  The evaluation process may still  
result in an SPF PASS.  Any reliance upon SPF is likely to cause a  
list of names being evaluated to grow.  An SPF iterative process also  
means poisoning DNS is made far easier.  Keep

DKIM reputation

2007-10-08 Thread Douglas Otis


On Oct 8, 2007, at 4:54 PM, Keith Moore wrote:


Tony Finch wrote:

On Thu, 4 Oct 2007, Keith Moore wrote:

the vast majority of domains won't be able to use DKIM without  
seriously impairing their users' ability to send mail.


You seem to be assuming that the vast majority of domains have  
really shitty message submission servers or connectivity.


It's a combination of several things - one, requiring that a domain  
operate its own mail submission servers which sign their mail (and  
all that that implies, like maintaining the private keys).  Two,  
many domains will be too small to develop enough of a reputation to  
be whitelisted, and any spammer can create a temporary domain which  
will have about as good a reputation as the vast majority of those  
domains.
Three, as long as people use Windows boxes, spammers will be able  
to compromise them and hijack them to use them to originate mail on  
behalf of their domains, thus degrading those domains' reputation.


So basically if you're a small domain, you're SOL.  If you're a  
large domain, people can't afford to blacklist you unless you  
originate a lot of spam anyway.


Keith,

The DKIM component that establishes reputation is being discussed  
within the DKIM WG.  The DKIM signature offers an alternative to the  
IP address which serves as perhaps the only other assured basis for  
reputation.  Of course the IP address also shares all of these  
problems.  A DKIM signature can help avoid some of the reputation  
problems associated with shared use of an IP address (which is a  
larger problem for smaller domains).  For larger domains, there might  
be some concern related to replay abuse, where again, smaller domains  
also enjoy an advantage in being able to squelch compromised systems.


Don't be too quick to condemn DKIM.  There should be a simple  
mechanism which allows email-domains to autonomously authorize DKIM- 
domains.  This feature should allay some of your concerns.   
Delegating a zone of one's domain would be expensive to manage but is  
the only means now permitted.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


TMDA backscatter

2007-10-08 Thread Douglas Otis


On Oct 8, 2007, at 4:37 AM, Frank Ellermann wrote:


SM wrote:


TMDA may cause backscatter.


After an SPF PASS, the backscatter by definition can't hit an  
innocent bystander.  By the same definition any backscatter after  
an SPF FAIL hits an innocent bystander, and therefore is net abuse.


There is a real risk SPF might be used as a basis for acceptance,  
rather than just for qualifying DSNs.  As a basis for acceptance,  
this can cause email to fail.  The macro expansion of SPF records  
permits the _same_ DNS record within a spam campaign to generate a  
large number of subsequent and different DNS transactions to be sent  
by recipients to innocent bystanders.  Much of the danger of auto  
responses has to do with DDoS concerns.  Unfortunately, SPF  
represents a far graver concern than that caused by auto-responses.


A safer approach would be to format all DSNs per RFC3464 and remove  
original message content.  This reduces incentives for abusing the  
automated responses.  Mailman made a mistake where an error caused a  
DSN that returned original content without first verifying the  
validity of the return path.  Had TMDA been a requisite for initial  
acceptance, just those white-listed would have been prone to this error.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Spammers answering TMDA Queries

2007-10-03 Thread Douglas Otis


On Oct 3, 2007, at 2:59 PM, Hallam-Baker, Phillip wrote:

There is more we can do here but no more than we should feel  
obliged to do - except for the fact that we are a standards  
organization and should eat the dog food.


In particular, sign the messages with dkim and deploy spf.


Few problems should be caused by DKIM, although it might be difficult  
to claim DKIM solves a particular problem affecting IETF mailing lists.


The same is not true for SPF.  SPF is experimental, can be  
problematic, and is very likely unsafe for use with DNS.  SPF carries  
suitable warnings indicating it may cause problems.  SPF may  
interfere with the delivery of forwarded messages.  SPF might be used  
in conjunction with Sender-ID.  Suggested solutions for dealing with  
Sender-ID require yet another version of SPF be published, use of  
which might fall under:
http://www.microsoft.com/downloads/results.aspx? 
pocId=freetext=SenderID_License-Agreement.pdfDisplayLang=en


Possible application of Sender-ID will cause IETF lists to break once  
SPF is published.  The purported use of SPF for curtailing forged  
DSNs requires policy settings which then create new problems.  When  
desired, names rather than address lists should be used to register  
an email path.  A name path approach avoids the dangerous DNS  
transactional issues.  Rather than relying upon unscalable SPF  
address lists, an extension might be applied to DKIM.  The  
DKIM extension could offer a means to prevent DSNs from being dropped  
when Mail From domains differ.


http://www1.tools.ietf.org/wg/dkim/draft-otis-dkim-tpa-ssp-01.txt

-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Random addresses answering TMDA Queries

2007-10-02 Thread Douglas Otis


On Oct 2, 2007, at 6:14 PM, John Levine wrote:


The Secretariat tells me that Spammers are responding to TMDA
queries so that their mail goes through.  They have made the
suggestion that we clear the list of people once per year.


Isn't there an engineering principle that if something is broken,
you don't fix it by breaking it even worse?

Naive challenge/response systems like TMDA never worked very well, and
on today's Internet they've become actively dangerous.  About 90% of
all email is spam, and just about all spam has a forged return address
at a real domain, often taken from the spam list.  This means that
most TMDA challenges go to innocent bystanders.  Given the volume of
spam, it also means that even though only a small fraction of
addresses send autoresponses, that's enough to badly pollute any
system that uses C/R for validation.  If you look at the bogus
addresses, I would be rather surprised if they weren't mostly random
non-spammers that either auto-acked their way in, or else are people
like me who ack all challenges because it's the easiest way to get
other people's C/R crudware to shut up.

There are plenty of workable ways to filter spam.  I've found that,
ironically, it is particularly difficult to get people to set up
effective filters in environments full of grizzled old nerds.  A lot
of opinions about the nature of spam and filters seem to have been formed
in about 1999 or 2000 and haven't been re-examined since then, so when
I suggest, e.g., that well chosen DNSBLs can knock out 80% of the spam
with essentially no false positives, which is true, they don't believe
it.


Agreed.

Email-related filtering mechanisms are often broken and can be  
dangerous.  For recipients without DNSBLs, valid email is likely to be  
only a small percentage of what arrives.
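
For context, a DNSBL check is a single DNS lookup: reverse the octets of
the connecting client's IPv4 address and query that name under the list's
zone.  A minimal sketch (the zone name below is a placeholder, not an
endorsement of any particular list):

    import socket

    def dnsbl_listed(client_ip, zone="dnsbl.example.org"):
        # e.g. 192.0.2.1 -> 1.2.0.192.dnsbl.example.org
        query = ".".join(reversed(client_ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any A answer (often 127.0.0.x) means listed
            return True
        except socket.gaierror:           # NXDOMAIN or lookup failure: not listed
            return False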


Of the junk hitting MTAs, more than half is likely to contain a copy  
of spam reflected off someone's server.  IETF lists have recently  
created their share of this traffic.


-Doug

 


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: e2e

2007-08-23 Thread Douglas Otis


On Aug 23, 2007, at 3:33 AM, John C Klensin wrote:

Marginal and criminal elements are _always_ the most innovative  
people around if there is profit in stretching the boundaries of  
the rules.  In normal environments, the consequences of those  
innovations are limited by effective legislation that criminalizes  
sufficiently bad behavior and by enforcement and punishment  
structures that create significant negative incentives for that  
behavior.


Unfortunately, much of the undesired behavior happens under the guise  
of commerce.  CAN-SPAM illustrates marketing associations' influence,  
where those harmed now have no legal standing.  Any amount of e-mail  
referencing prior visits is not spam.  Any request to stop may not be  
immediate, and does not impair any number of affiliates.  Even  
replying Unsubscribe indicates the e-mail is being read, and makes  
the address a valuable commodity.


The opt-out fallacy extended into a Do Not Solicit E-mail list  
that was dropped after the fallacy of enforcing opt-out became  
apparent.  Unfortunately an inability to enforce opt-out did not  
change legislation into an opt-in model.  Prior to this  
legislation, the only model able to limit spam has been Opt-In when  
publishing bulk e-mail.  Is it really easier to ask for forgiveness  
than to ask for permission?


Legislators remain oblivious to practical considerations for curbing  
abuse and enforcement.  Unless employed by the government or an  
Internet provider, one has no standing to even seek enforcement or  
punishment.  Which problem were legislators attempting to solve, and  
for whom?


-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: New models for email (Re: e2e)

2007-08-21 Thread Douglas Otis


On Aug 21, 2007, at 4:59 PM, John C Klensin wrote:

I'm not convinced that is worth it --and strongly suspect it is  
not-- but, if Doug believes otherwise, I'm looking forward to  
seeing a proposal.


I hope to have a draft ready in the near future.  When SMTP was  
developed, HTTP did not exist in its present state.  HTTP over a  
clustered file system is now able to function as a separate message  
channel for even the largest provider.  This separate channel can  
significantly lower the +90% levels of spam currently carried by  
SMTP, while also substantially moving the burdens toward the sender  
where they belong.


Why change?  When a sender is suspected of spamming, the receiving  
MTA could limit exchanges to Transfer By Reference instead of issuing  
an ultimately more expensive refusal or message acceptance.  In the  
case of TBR, SMTP would be limited to just exchanging an HTTP message  
reference within the SMTP envelope.  This separate HTTP channel  
provides delivery confirmation and ensures the validity of the  
reference.


Whether the message is accessed depends solely upon the reputation of  
the HTTP publisher.  Any additional information could be abused and  
might require filtering.  Filtering burdens the recipient, where bad  
actors enjoy a significant advantage.  Not  relying upon filtering,  
finding an identifier independent of the IP address, excluding  
friendly information, and reducing the burden on the recipient are  
the general guidelines.


As the message reaches the MDA, the MDA would have the option to  
proxy access via BURL, fetch the message and revert to a normal SMTP  
form, or convert the reference into a pseudo message compatible with  
MUAs where recipients decide whether a message is to be accessed.   
TBR does not prevent spam; however, it significantly lowers the  
recipient's burden and provides a simple means for avoiding unwanted  
or spoofed messages.  Details will follow in a draft.


-Doug





___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: e2e

2007-08-17 Thread Douglas Otis


On Aug 16, 2007, at 3:10 PM, Keith Moore wrote:

I also think there might be some merit in holding mail on the  
sender's outgoing mail server until the receiver's MX is willing to  
deal with it, thus pushing more of the costs toward the sender of a  
message.


The concept of holding messages at the sender could be a basis for a  
change in SMTP for supporting this model, but at a message  
granularity.  The changes would be similar to that of BURL but would  
be Transfer by Reference or To Be Read, TBR.


The result seen after returning 4xx errors to SMTP clients is often  
that the client will then retry frequently.  A better solution would be  
to devise an architecture that allows the message reference to always be  
accepted, but where actual message delivery occurs over a different  
channel.  This new mode of operation would occur when both sender and  
receiver support a transfer by reference.  Often more than 90% of  
messages received are undesired.  These messages will be either  
refused, placed into a junk folder, bounced, or deleted.  Many  
bounces will be deleted as well.  Actually transferring the message  
is sub-optimal when done within the SMTP sessions due to extremely  
high levels of abuse.  Accepting the message rather than a message  
reference also burdens the SMTP server with the duty of retaining its  
disposition and possibly issuing bounces.


A problem also plaguing email is falsified originating addresses.   
This has led to rampant abuse where very little within a  
message is trusted as being valid.  While DKIM helps solve this  
problem, this solution demands additional resources of the  
recipient.  Whether DKIM will be able to withstand abuse may be in  
doubt; after all, most email services are gratis.  Providers might opt  
for a cheaper solution.  Transfer by reference ensures the  
origination of the message is not falsified at substantially less  
cost for the inbound SMTP process.


To implement a transfer by reference, the MSA would create two  
hashes based upon the email message content.  The MSA then publishes  
the email message at a web-server where the combined hashes provide  
an access reference for security.  Rate-limiting 404 errors makes  
attempts at guessing a link futile.  If there is a desire to ensure  
even those running the recipient's MTA do not have access the  
message, a scheme like Open-ID could be employed.  When there are  
multiple recipients, a script on the web-server could request who is  
accessing the message.  Identifying who is fetching the message can  
also be automated by inserting the recipient's address into the link  
as follows:


MSGREF: http://[EMAIL PROTECTED]/~moore/.mq/(sha1+sha256)
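
A minimal sketch of how such a reference might be derived, assuming the two
hashes are SHA-1 and SHA-256 hex digests of the published message joined by
"+" (the encoding, host, and path below are illustrative, not a specification):

    import hashlib

    def message_reference(message_bytes, base_url):
        sha1 = hashlib.sha1(message_bytes).hexdigest()
        sha256 = hashlib.sha256(message_bytes).hexdigest()
        return "%s/.mq/%s+%s" % (base_url, sha1, sha256)

    # Hypothetical example; the base URL would name the sender's publisher
    # and could embed the recipient's address as described above.
    print(message_reference(b"From: ...\r\n\r\nbody", "http://example.org/~sender"))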

The web-server would track when an outbound message was posted, and  
whether it had been accessed by all intended recipients.  The web- 
server could generate a report of outstanding recipients that failed  
to fetch the message after some interval.  The web-server may attempt  
to retry sending the link to the recipient, or just notify the sender  
of a delivery failure.


Banks or financial institutions could use https to better secure  
message content, and to assure customers they are at the expected web- 
site.  Any new MUA designed to handle transfer by reference should be  
able to annotate whether a publisher had been listed within their  
address book.  This feature should help avoid look-alike exploits.


Fetching messages from the reference link offers a positive confirmation  
of delivery.  From a commerce standpoint, this confirmation would be  
of significant value.  Those concerned about keeping their IP address  
private should utilize an external proxy before accessing the web- 
sites.


When an MTA encounters a down-stream MTA or user-agent that does not  
support transfer by reference, it could fetch the message to remain  
compliant.  The cost of handling messages is likely to be a  
primary motivation for transfer by reference, along with secure  
content, and a solid confirmation of delivery.


-Doug








___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: New models for email (Re: e2e)

2007-08-17 Thread Douglas Otis
On Fri, 2007-08-17 at 09:27 -0400, John C Klensin wrote:

 I'm not convinced of the merits of the general idea you are
 pushing here, mostly because I still believe in the value of
 mail systems with relay (or, if you prefer, store-and-forward)
 capability.  If one requires that one be online to receive
 actual messages, with end-to-end connectivity to the sender's
 mail holding store, then sites with no direct connectivity to
 the Internet are out of luck and sites with intermittent
 connections or very long delay connections (think about a lot of
 developing areas on earth as well as the interplanetary problem)
 are in bad trouble.

The option will need to be negotiated at every stage.  It should be
rather easy to translate transfer by reference into the current
behavior at any point within the SMTP store-and-forward chain.  When the
MTA confronts a down-stream MTA or MDA where the user is without access
to http, or not yet upgraded, then at this point the message would be
fetched.  Delivery reliability from that point forward needs to be high.
One might assume messages will have been vetted by publisher reputation
prior to fetching and that the message is near the last stage.

 An important side-effect of the current mail model is that, if
 the final delivery server is network-close to the intended
 recipient, it doesn't make much difference to the recipient
 whether mail flows into that server at gigabit bandwidth or at a
 few kilobits/second or worse or if things flow only at 0300
 local time.  The only connections that are really
 network-performance-sensitive are between the sending MUA and
 the submission server and between the final delivery MTA or
 message store and the mail-retrieving user.  We've used those
 properties to very good advantage in the past and, I believe,
 will continue to use them to good advantage.

As have I.  The MDA could easily offer both modes to the user agent,
assuming the MDA has been upgraded to support transfer by reference.

 I share Dave Crocker's concern about deployment of replacements
 for the current email model (see the article in the recent issue
 of Internet Protocol Journal), but maybe it isn't impossible if
 the need is significant enough and the solution doesn't make
 other things worse (which I think yours may, but I haven't seen
 enough of a proposal to evaluate).

Yes, I read the article.  Unlike Dave, I don't see SPF/Sender-ID
helping, but instead path registration by address creates serious
problems.  I agree with Keith, the use of IP addresses as authentication
tokens needs to change, especially when there is a desire to introduce
the use of IPv6 with email.

In addition, the trust model does not work when vetting, prior to the
critical acceptance of responsibility for message delivery, occurs at the
MSA.  This operation might be performed by millions of small and large
organizations operating at a flat rate, free of charge, or gratis.  The
bad actors have access to essentially all MSAs.  Reasonable vetting will
not happen at this stage, although we might all agree that it should.

The MSA cannot be trusted.  Transfer by reference permits every stage
subsequent to the MSA to apply its own vetting.  The reference cannot
be falsified.  In most cases, less than 10% of the messages will end up
being transferred.  A co-worker sent me his latest stats.  His MTA
currently finds 1% of the messages received are not spam, where the
overall volume of spam has increased by more than an order of magnitude
in just the last year.  He is also someone currently blocking several AS
networks in his seemingly futile efforts at controlling MTA bandwidth. 

 One could preserve the relaying by turning message transmission
 and delivery into a multiple-stage handshake:
   * Message notifying recipient of availability
   * Message from recipient to store asking for
  transmission of the real message
   * Transmission of the real message

What vetting can occur?  What element within a message can be trusted?
A message reference, when accessed through a different channel, cannot be
falsified.  Even with DKIM, there is little that can be trusted.
Perhaps there might be exceptionally restricted MSAs shielded from
bad-actors, but this would not be the typical situation.

SMTP needs to change into a system that delivers an outside channel
message reference.  Outside channel message references cannot be
spoofed.  Fetching the message through an outside channel provides
content security and a reliable indication the message was received.
SMTP normally working in a transfer by reference would be far more
reliable and far more suitable for commerce.

 but that would be a fairly complex piece of protocol work, with
 attacks possible at each step.

Yes, and to what end?

 I note that, while one would want to update its non-existent
 security mechanisms before trying to use it today, the
 capability of sending a message structured as
 
[...]
Content-type: multipart/???; 

Re: e2e

2007-08-16 Thread Douglas Otis


On Aug 16, 2007, at 6:54 AM, [EMAIL PROTECTED]  
[EMAIL PROTECTED] wrote:


Such a document could help frame the discussion and identify  
weaknesses that need further work such as the SPF/DKIM bandaids. We  
have gotten to the present situation because various people doing  
work on the problem have blinders on as that focus on one narrow  
area of the problem. We need to describe the totality of the  
situation, point out what works fine or has already been fixed, and  
suggest a way forward for fixing the problems.


By the way, SPAM is not the problem. The problem is anonymity and  
lack of accountability.


This should be stated just a bit differently.

Anonymity of SMTP clients and lack of accountability for SMTP  
client's access vetting is a major part of the problem.  CSV  
extensions would have made a difference and would have laid a  
foundation suitable for an IPv6 era.


Instead, SPF places DNS infrastructure in peril without identifying  
who administers the SMTP client.  This inability to authorize SMTP  
clients by name is not an oversight.  Path registration would have  
been safe using the SMTP client's name instead of expecting DNS to  
return _all_ IP addresses of _all_ systems operating on behalf of a  
domain.  Even with SPF, the receiver can't be sure whether an  
authorization is for a bounce address or some purported originator.   
To ensure the proprietary purported originator algorithm works,  
mailing lists and forwarders must now identify themselves as a  
purported originator using this algorithm.


DKIM, although extremely useful for filtering phish, avoids a means  
to authorize SMTP clients while insisting replay abuse is not a  
concern.  Again, this inability to authorize the SMTP client is not  
an oversight.  After all, SMTP is a store and forward protocol, and  
path registration impinges upon freedoms SMTP provides, right?  Of  
course, no provider wants to be identified to then deal with complaints.


Many providers filter messages prior to hitting a mailbox.  When not  
accepted, the message might be bounced as a DSN.  When not bounced, a  
suspect message is easily lost.  A sizable portion of spam is a  
spoofed bounce, which just increases the chances valid DSNs will be  
lost.  Here BATV might help, but only when domains are restricted to  
specific outbound servers.  A sizable portion of the spam is from  
large providers.  Their mail normally represents a mixture of good  
and bad.


Can accountability ever be forced upon 900 pound gorillas, or even an  
organization of 900 pound gorillas?  No.  Accountability will simply  
never happen, and gorilla solutions are likely to make the problem  
much worse.


SMTP needs to change how delivery integrity is maintained.  The  
delivery of a message needs to be controlled by the individual, and  
made separate from SMTP.  SMTP needs to change into offering a  
reference for the retrieval of messages.  Such a strategy ensures  
originating domain spoofing is less likely without burdening the  
recipient.  Not even a subject line should accompany the link  
comprised of a local-part and a double hash of the message and time- 
stamp.  Publishing could use HTTP/HTTPS and a back-end supported by a  
Lustre network FS, for example.  A message publishing system could  
provide notifications when a message is not fetched after some  
interval.  DSNs would not be relied upon, nor are DSNs currently that  
reliable anyway.


Of course this would not be practical or empowering without also  
offering individuals some tools.  Initially, the vetting could be  
done on behalf of individuals where their email could be pre-fetched  
and filtered.  However, individuals will also be able to directly  
deal with the link vetting process themselves.  Individuals should  
have many choices in how their incoming email links are vetted.  Of  
course, the vetting process would depend upon the reputation of the  
publisher.


Any filtering that might happen after the message is fetched is the  
responsibility of the individual or organization acting on behalf of  
the individual.  When a message contains malware, the individual or  
organization should report this to the authorities, but the bouncing  
of these messages will have been made obsolete.


-Doug







___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: e2e

2007-08-16 Thread Douglas Otis


On Aug 16, 2007, at 12:30 PM, SM wrote:

If we get rid of the anonymity for relays delivering to final  
destination, i.e. gmail.com sending a message for [EMAIL PROTECTED]  
to an aol.com mailserver, then most of the spam stream goes away.  
At that point, we only have to worry about spam which subverts the  
submission mechanism or some other weakness in the system.


That would be SPF.


SPF will not restrict the Return-path!  Any SMTP client sending a  
message with a bounce address of [EMAIL PROTECTED] will _not_ be  
considered unauthorized.  Regardless of where the SMTP client is  
located, [EMAIL PROTECTED] will achieve the highest level of  
authorization.  White-listing is the only reasonable application for  
SPF, which is why '+all' is a good thing.  SPF is prone to problems with  
shared servers, a multitude of authorization levels, and uncertainty  
over whether an authorization is intended for the PRA or the Return-Path.


The resources of recipients who attempt to process SPF record-sets  
can be easily exploited.  SPF might be instrumental in staging DDoS  
attacks or in poisoning DNS, despite the use of BCP38 and ACLs on  
recursive resolvers.  By utilizing local-part macros, cached SPF  
records can issue tens or hundreds of new DNS transactions directed  
toward any victim domain's A, AAAA, or MX records when referenced by  
a message that could easily be part of a large spam campaign.  Short  
timeouts in processing SPF scripts will circumvent congestion  
avoidance.  The attack will emerge from recipients' resolvers, and  
SMTP logs will be unlikely to offer a clue how it was achieved.  The  
bots will remain cloaked, and their full resources can remain  
dedicated to spamming while DoSing the heck out of their opposition.
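
To make the mechanism concrete, consider a purely illustrative record  
(not taken from any real deployment) published by a spammer-controlled  
domain:

   spam-sender.example.  IN TXT  "v=spf1 exists:%{l}.%{d}.ns-target.victim.example -all"

The %{l} macro expands to the local-part of each address being checked,  
so every distinct local-part in a spam run forces the receiver's  
resolver to issue a fresh query against name servers of the record  
author's choosing, and nothing about the target ever appears in the  
SMTP logs.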


It would also restrict you to using a Return-path which is valid  
for the network from which you are sending the mail.  That may not  
be practical in some environments where you don't have an existing  
account and you can only send mail through the local mail server  
(no access to submission port on home server, e.g. outgoing TCP 587  
blocked or network unreachable).


In addition, SPF may cause forwarded and mailing-list messages to be  
lost despite gaining access to secure submission.  SPF is an example  
of what not to do when confronting spam.  Don't create scripts that  
might be utilized by bad actors.  Those advocating SPF need to better  
consider the risks.  The benefits, assuming there are benefits, do  
not outweigh sizable risks to critical infrastructure.


-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: e2e

2007-08-15 Thread Douglas Otis


On Aug 15, 2007, at 12:16 PM, Fred Baker wrote:

in that context, here's one that one could use to dramatically  
reduce spam intake.


That suggests a simple approach - in one's firewall, null route the  
addresses reported by the reputation service as spam spews. It's a  
network layer solution to an application layer problem, yes, and it  
has all of the issues that reputation services have, and btw, you  
still want to run spamassasin-or-whatever on what comes through.  
Cisco IT tells me that it results in a dramatic reduction in spam,  
however, and saves them serious numbers of monetary units.


The communication system isn't being a filter, properly speaking -  
it is simply routing some traffic to black holes using standard  
routing technology. And it doesn't relieve the application of the  
burden of filtering. But it can help reduce the volume of crapola  
at the application.


The early version of the Realtime Black-hole list published as a  
BGP feed worked in the same manner.  The current owners recognized  
the need to reduce the burden of email filtering. : )


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: e2e

2007-08-15 Thread Douglas Otis


On Aug 15, 2007, at 2:06 PM, Keith Moore wrote:

and I've had more than my share of legitimate mail fail to be  
delivered (in either direction) because of such measures. you may  
consider that legitimate for your or cisco's purposes. whether to  
throw away mail that can potentially be from customers is a  
business decision that cisco can make.  that doesn't mean it's a  
reliable way to run a network.


Keith,

More agree with you than you might expect.  To make email more  
robust, email delivery could become little more than a signaling  
protocol: instead of carrying the message, the entire body is  
replaced by an encoded link.


From [EMAIL PROTECTED], a link is derived in the form of:

 https://example.com/~local-part/<double hash of content and time>

The double hash coupled with 404 error rate-limiting offers security,  
where web logs would be used to indicate whether a recipient received  
a message.  The SMTP server can accept all messages and not be  
concerned about DSNs that are no longer needed or used.  There would  
not be any cryptographic signatures of the message or path  
registrations to place a burden upon recipients when deciding where  
the message originated.  Security could be further enhanced by  
utilizing some version of Open-ID.


All messages can be accepted without costing much in the way of  
storage and without creating concerns about message disposition.  The  
responsibility of what gets tossed is directly controlled by the  
recipient.  The recipient can decide by using whatever reputation  
system they wish.  The integrity of the delivery system would depend  
upon feedback given the sender by the server publishing their  
messages.  For the transition, existing clients might depend upon the  
recipient to click on a message link, or to apply some automated  
reputation/selection process plugin.


The growing lack of DSNs would drive the remaining portion of email  
users into depending upon message links and message publishing rather  
than using SMTP to send whole messages.  Perhaps the current  
situation provides enough motivation to change.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: I-D ACTION:draft-wilson-class-e-00.txt

2007-08-08 Thread Douglas Otis


On Aug 8, 2007, at 3:02 AM, Harald Alvestrand wrote:

What happened to draft-hain-1918bis-01, which tried to get more  
address space for private Internets, but expired back in 2005?


I see the point about regarding 240.0.0.0/4 as tainted space and  
therefore being less than useful on the public Internet.


RFC 3330 listed as not currently part of the public Internet:

0.0.0.0/8   this 16,777,216
10.0.0.0/8  private  16,777,216
127.0.0.0/8 loopback 16,777,216
169.254.0.0/16  link-local   65,536
172.16.0.0/12   private   1,048,576
192.0.2.0/24test-net256
192.168.0.0/16  private  65,536
192.18.0.0/15   benchmark   131,072
224.0.0.0/4 multicast   268,435,456
240.0.0.0/4 reserved    268,435,456
 -
   587,569,816 (13.68% of total non-public)
 4,294,967,296 (total)
 3,707,397,480 (addresses public)

Some larger providers and private organizations who depend upon  
private IPv4 addresses have complained there is no suitably large  
private IP address range which can assure each user within their  
network can obtain a unique private IP address.  It would seem class  
E could, and might already, function as a larger private IP address  
range.


-Doug






___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: I-D ACTION:draft-wilson-class-e-00.txt

2007-08-08 Thread Douglas Otis


On Aug 8, 2007, at 10:52 AM, Marshall Eubanks wrote:

On Aug 8, 2007, at 1:35 PM, Douglas Otis wrote:

On Aug 8, 2007, at 3:02 AM, Harald Alvestrand wrote:

What happened to draft-hain-1918bis-01, which tried to get more  
address space for private Internets, but expired back in 2005?


I see the point about regarding 240.0.0.0/4 as tainted space  
and therefore being less than useful on the public Internet.


RFC 3330 listed as not currently part of the public Internet:

0.0.0.0/8   this 16,777,216
10.0.0.0/8  private  16,777,216
127.0.0.0/8 loopback 16,777,216
169.254.0.0/16  link-local   65,536
172.16.0.0/12   private   1,048,576
192.0.2.0/24test-net256
192.168.0.0/16  private  65,536
192.18.0.0/15   benchmark   131,072
224.0.0.0/4 multicast   268,435,456


This is simply wrong. Multicast is certainly part of the public  
Internet, it is certainly used on the
public Internet and (I might point out) people (including yours  
truly) make money from it.


You are right.  Indeed Multicast is part of the public Internet.  The  
concern has been with respect to availability of general purpose  
public IP addresses, where multicast would be excluded as would  
private IP addresses.  This should have read not currently part of  
the 'general use' public Internet.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: I-D ACTION:draft-wilson-class-e-00.txt

2007-08-08 Thread Douglas Otis


On Aug 8, 2007, at 1:22 PM, David Conrad wrote:


Hi,

On Aug 8, 2007, at 10:18 AM, Hallam-Baker, Phillip wrote:

Which widespread IPv4 stacks?


I think it might be easier to identify stacks that don't disallow  
240/4.  I don't actually know of any widespread ones.


Rather than wall off the space as private and thus put it beyond  
any use we should think about what other uses we might be putting  
it to.


Calling address space private obviously does not put it beyond any  
use.  In fact, there are folks out there who are burning public IP  
address space for internal infrastructure that could instead be  
using private space but can't because their internal  
infrastructures are too large.


The long term view for IPv4 employment should be an address space  
primarily used by internal networks.  IPv4 is supported by a large  
array of SOHO equipment that will not disappear anytime soon.  A near- 
term solution for IPv4 exhaustion will likely involve routers  
bridging between either private or public IPv4 address space into an  
Internet consisting of a mixture of IPv6 and IPv4.  Internal use of  
IPv4 should accommodate internal deployments exceeding 16 million IP  
addresses.  With the rapid expansion of network equipment, 268 million  
addresses represents a far more reasonable range for the addresses  
likely to be employed internally.


Such a larger range of internal addresses could even encourage use of  
older IPv4 router equipment to support these larger internal  
networks.  An aggressive strategy using this approach could be far  
more effective at postponing the inevitable exhaustion of IPv4 addresses  
than would the few months' to a year's reprieve that a public assignment  
of the Class E space might provide.  Not having a larger IPv4 private  
address space will cause existing IPv4 equipment to be less valuable  
when it can only be utilized within extremely limited address ranges  
by any particular organization. : (


BTW, there is a typo after

2.  Caveats of Use

  Many implementations of the TCP/IP protocol stack have the
  204.0.0.0/4 address...

This should have been 240.0.0.0/4 addresses.

-Doug






 
 


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: on the value of running code (was Re: Do you want to have more meetings outside US ?)

2007-08-03 Thread Douglas Otis


On Aug 3, 2007, at 11:24 AM, Dave Crocker wrote:


My point was about the failure to make sure there was large-scale,  
multi-vendor, in-the-wild *service*.  Anything that constrains what  
can go wrong will limit the ability to make the technology robust and  
usable.


There are currently millions of unconstrained large-scale, in-the- 
wild services being manipulated and controlled by criminals.   
Constraints that must be taken seriously are related to the economies  
limiting the staging of DDoS attacks.  Criminals often utilize tens  
of thousands of 0wned systems.  These systems often send large email  
campaigns.  Any scheme that expects receipt of a message to invoke a  
process that initiates additional traffic must be carefully considered.


Expecting recipients to employ the local-part of a purported  
originator's email-address to then construct dozens or even hundreds  
of DNS transactions wholly unrelated to the actual source of the  
message is a prime example of how economies related to DDoS attacks  
are being gravely shifted in the wrong direction.


Spammers are already spamming, either directly or through an ISP's  
outbound server.  Lacking a reasonable constraint on the recipient  
process, criminals can attack without expending their own resources and  
without exposing the location of their systems.  Using SPF/Sender-ID as an  
example, just one DNS resource record is able to source an attack  
comprised of millions of recipient-generated DNS transactions.  The  
lack of constraint on the receipt process in this case is an example of  
either negligence or incompetence.  Here an attack may employ several  
levels of indirection, and yet nothing germane to the attack will be  
found within email logs.
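
For a rough sense of scale (the numbers here are purely illustrative):  
a campaign of one million messages, each triggering the ten mechanism  
lookups an SPF evaluation may require, hands an attacker on the order  
of ten million resolver-originated queries aimed wherever the record's  
author chooses, with none of them traceable to the attacker's own  
machines.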


Not imposing reasonable constraints does not make the Internet either  
more robust or usable.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: IPv4

2007-08-03 Thread Douglas Otis


On Aug 3, 2007, at 2:54 PM, Hallam-Baker, Phillip wrote:

I don't see a duty of care here. There is no general obligation in  
law to give up an economic interest just to help others.


Rather than allowing IP addresses to be traded, an annual per IP  
address use fee could be imposed.  This fee could provide the  
economic incentive for returning IP addresses not generating revenues  
that justify paying the fee.  Rather than increasing various  
membership fees which tend to benefit larger interests, flat use fees  
could be more democratic.  Initially setting fees to levels  
comparable to current revenues should not be disruptive.  Of course  
fees would be justified by covering just the expenses related to  
services being offered.


When sourcing revenue from either a name or IP address use fee, this  
might also cover some services offered by ISOC.  These organizations  
share duties related to supporting the Internet.  As with any  
democracy, funding mechanisms potentially impair equity.  The goal  
would be to find a balance that ensures availability of information  
and resources needed for interchange and interoperability.


This could be seen as analogous to TV or radio stations who license  
their frequencies.  License fees are based upon annual costs of  
enforcement, policy and rulemaking, user information, and  
international activities.


-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: IPv4

2007-08-02 Thread Douglas Otis


On Aug 2, 2007, at 4:27 PM, Iljitsch van Beijnum wrote:

NAT isn't the only answer to the question I can't get IPv4  
addresses, what do I do? Using IPv6 and a proxy to reach the IPv4  
world is much, much cleaner. And it also works from v4 to v6. We  
really should start advocating this as the preferred transition  
mechanism.


It seems you both are in agreement.  Wouldn't a proxy for reaching  
IPv4 represent Philip's State B?



A) Has at least one full IPv4 address plus an IPv6 /64.
B) Has a share in a NAT-ed IPv4 address plus an IPv6 /64.
C) Has at least one full IPv4 address
D) Has a share in a NAT-ed IPv4 address


Such a topology must offer a means to declare the transitions between  
IPv6 and IPv4.  Perhaps the HIP WG may offer a popular method to  
navigate within the growing complexity.  Could SSH, LDAP, and a  
dynamic DNS server within a commodity residential access point  
represent a solution as well?  This might introduce an era where  
rapid routing changes becomes the norm, and where most network  
connection are expected to use TLS or SSH tunneling.  These highly  
extensible protocols are within the capability of today's  
microprocessors in commodity products.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: on the value of running code (was Re: Do you want to have more meetings outside US ?)

2007-08-01 Thread Douglas Otis
On Tue, 2007-07-31 at 17:24 -0400, Keith Moore wrote:
 IMHO, running code gets more credit than is warranted.  While it is
 certainly useful as both proof of concept and proof of
 implementability, mere existence of running code says nothing about
 the quality of the design, its security, scalability, breadth of
 applicability, and so forth.  running code was perhaps sufficient in
 ARPAnet days when there were only a few hundred hosts and a few
 thousand users of the network. It's not sufficient for global mission
 critical infrastructure.

A simple axiom, "Do not execute scripts from strangers", is often
violated.  The Noscript plugin for Firefox represents an excellent (and
highly recommended) example of this principle in action.  Unfortunately,
a mouse-click is often not required for a computer to become 0wned. : (

When coping with spam, security issues related to DNS are often ignored.
Concerns raised in draft-otis-spf-dos-exploit were dismissed by
suggesting that lists of bogus NS records are not limited to the same
extent anyway.  Many libraries implementing SPF do not impose limits on
the MX record, or on the number of NXDOMAIN responses, the fixes
suggested in the OpenSPF group's rebuttal.

http://www.openspf.org/draft-otis-spf-dos-exploit_Analysis

Ironically, the rebuttal points out a bogus NS record method that
worsens a DDoS barrage that can be caused by SPF.  SPF remains a
significant risk, even when limited to just 10 SPF record transactions
per email-address evaluated.  With local-part macro expansion, these DNS
transactions represent a gift of a recipient's resources given at no
cost to the spammer.  DDoS attacks made absolutely free and unstoppable!

Offering a method to macro expand elements of the email-address
local-part, when used in a spam campaign, allows a _single_ cached SPF
record to cause an _unlimited_ number of DNS transactions from any
remote DNS resolver servicing SPF libraries.  Uncached targeted DDoS
attacks are not tracked by email logs and can not be mitigated.  The
gain of this attack can be highly destructive, while remaining virtually
free to spammers wishing to also stage the attack.

In addition to offering a means for staging a DDoS attack on
authoritative servers, unfettered access afforded to remote recursive
DNS servers by SPF scripts permits brute-force DNS poisoning.  Even
whether SPF-related exploits are ongoing is not easily
discerned.  With the current state of affairs related to web browsers,
the axiom "Do not execute scripts from strangers" is a concept that
should be seriously taken to heart.

-Doug





___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Charging I-Ds

2007-08-01 Thread Douglas Otis


On Jul 31, 2007, at 5:16 PM, Peter Sherbin wrote:

The current business model does not bring in enough cash. How do  
we bring in more in a way that furthers ietf goals?


E.g. other standards setting bodies have paid memberships and/or  
sellable standards.


IETF unique way could be to charge a fee for an address allocation  
to RIRs. On their side RIRs would charge for assignments as they do  
now and return a fair share back to IANA/IETF.


An IP address use fee might help solve two problems.  When based upon  
relative scarcities, IPv4 space should demand a higher premium.   
Even .5 cents per IPv4 address could generate perhaps 10 million per  
year.  This fee might help free up some unused IP address space,  
where some of these funds could be allocated to the various  
Internet supporting services.  Meeting fees could then reflect just  
the cost of the meeting itself.  This might be analogous to licensing  
radio frequencies.


If IETF start charging for reading contributors' papers how much  
voluntary contribution such arrangement would generate?


Charging to publish would interfere with information tracking.  One  
of the attractive features of the IETF has been a free information  
exchange where a document's status is directly declared.  Charge a  
fee may devolve into searching various independent websites where  
documents would have an unknown status with respect to the IETF.   
Much of the authority conveyed is in the assigning of status.



Is there a guarantee that a pre-paid content remains worth reading?


This sounds like a question an ad agency might ask.  Would pre-paid  
content permit uploading videos? : )


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: DHCP failures (was RE: Do you want to have more meetings outside US ?)

2007-08-01 Thread Douglas Otis


On Jul 31, 2007, at 6:30 PM, John C Klensin wrote:

And, while I'm picking on DHCP because I personally had more  
problems with it, I see IPv6 authconfig as being exactly the same  
issue: we are telling the world that these things work and they  
should be using them; if we can't make them work for our own  
meetings...


Whether one regards IPv6 as ready for prime-time depends upon  
location.  IPv6 appears to represent a metric measurement in the only  
industrially developed nation that, despite a 1975 act of Congress, is  
still using fahrenheit, ounce, pound, inch, foot, and mile.  There will  
always be problems offering an excuse not to adopt change, even when  
the rest of the world has.  Oddly, a 2x4 is neither, but might be  
required to promote change.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: secdir review of draft-ietf-dkim-ssp-requirements-04

2007-07-16 Thread Douglas Otis


On Jul 16, 2007, at 2:27 PM, David Harrington wrote:

Don't overlook 5.1 #1:
---
The author is the first-party sender of a message, as specified in  
the [rfc2822].From field.

---

Per RFC2822:
---
3.6.2. Originator fields
... The From: field specifies the author(s) of the message, that  
is, the mailbox(es) of the person(s) or system(s) responsible for the  
writing of the message.  The Sender: field specifies the mailbox of  
the agent responsible for the actual transmission of the message.

---

The From: field does not actually refer to who sent the message.   
Here 'sender' is being used in a poorly defined fashion.  This field  
refers to the originators of the message, and specifically _not_ the  
sender.  Even the Sender: field does not directly indicate who  
administers the SMTP client physically transmitting the message.   
Here the term Author's domain is considered to include any parent  
domain extending all the way up to TLDs.


Don't overlook 5.1 #3 resource record location prohibition.

It is very common for different protocols' RRs to reside at a common  
location.  These records are resolved by different RR types.  Why was  
this statement incorrectly worded?


2) section 5.1, bullet #4 says the WG might not be able to reach a  
consensus on a solution that completes within a deterministic  
number of steps, and if they do not reach consensus, then they MUST  
document the relevant security considerations. Even if they DO  
reach consensus, they will need to document the security  
considerations. I'm not sure how they will document the security  
considerations of not reaching consensus. I think there are range  
of topics mixed into bullet#4, and they need to be broken out  
before security for these things can be considered.


This requirement is for an uncertain concept that DNS can safely  
establish policy in a hierarchal fashion.  It is uncertain whether  
this can be accomplished within a reasonable number of transactions  
without also creating a potential for DDoS attack.  This uncertainty  
is further exacerbated by there not being any safe existing  
hierarchal email policy structures within DNS.  Absence of policy  
records is normal for existing email.


It would be possible to establish policy in conjunction with SMTP  
discovery RRs required to support the email-address domain in  
question.  This approach precludes a need to extend SSP coverage into  
subdomains, or a need to traverse domains searching for parent  
domain's SSP records.  A strategy that depends upon a policy  
hierarchy is sure to create dangerously excessive traffic at second-  
level domains, and result in possible DDoS exploits.


3) section 5.3 bullet #2 calls for a concise linkage between the  
identity in the From field and the DKIM i= tag. Isn't the concise  
linkage that you need here some type of identity authentication? If  
not, how do you know the mapping is actually valid?


This is made worse by a lack of clarity with respect to 5.1 #1 and  
goes to the heart of a major security problem.  It is a normal  
practice to employ email service providers who transmit messages from  
domains independent of the email-address domain signed by a DKIM  
header.  Per-user keys will call into question the caching ability  
of DNS.  Domain centric keys will preclude an architecture where  
messages are normally signed prior to submission.


So how can an email-address authoritatively be signed to convey  
assurances to recipients?


This can be done by:

1) an authorization-by-name RR in DNS at SMTP discovery RRs
  (A concept currently excluded from consideration.)

2) an ad-hoc exchange of keys between domains

3) delegation of key domain to email providers

Both methods 2 and 3 lead to a rather serious difficulty when  
attempting to resolve the messages at risk of having been spoofed  
when a provider's server has been compromised.  Instead of reporting  
the domain of the provider as being compromised, methods 2 and 3  
require that a comprehensive list of perhaps thousands of a provider's  
customers' domains be reported instead.  Private keys  
will be distributed to servers directly connected to the Internet,  
where of course there is a high risk keys may become compromised.


4) security requirement#1 - What must SSP do to prevent such DoS  
attacks? what must SSP do to prevent being vulnerable to such DoS  
attacks? Note that these are two separate questions with  
potentially different mitigation strategies.


Not attempting to establish a hierarchal policy within a domain  
should be the first step to assure against DDoS risks.  Instead, limit  
policy to SMTP discovery RR locations.  Unfortunately, the  
SSP requirements draft anticipates use of hierarchal policy,  
hence what may appear to be double-speak.


5) security requirement#2 - what must SSP do to prevent such  
attacks? Keeping exchanges small might help, but how about  
establishing a secure channel, and using data 

Re: IPv4 to IPv6 transition

2007-07-14 Thread Douglas Otis


On Jul 13, 2007, at 10:57 AM, Hallam-Baker, Phillip wrote:

I think we need to look beyond whether NAT is evil (or not) and  
whether NATPT is the solution (or not) and look to see how we might  
manage a transition to IPv6 in a way that is not predicated on  
holding ISP customers hostage.


People have been there and done that, anyone remember when the anti- 
spam blacklists started talking about 'collateral damage' with  
great glee? Within a very short time a very large number of email  
admins hated the self appointed maintainers of the blacklists more  
than the spammers.


Anyone with a published mailbox not protected by a blocking strategy  
quickly learns email is dysfunctional.  Lost DSNs for silently  
discarded messages, or messages directed to junk folders never  
reviewed, are all too common problems.  Black-hole lists bounded by AS  
CIDR ranges are not a gleeful strategy.  Some ISPs host servers  
involved in nefarious activities and reassign addresses periodically  
to thwart a per-address blocking strategy.


To dispel unfair accusations of abusing the service, we will offer  
a full range of blocking strategies freely selected by each  
customer.  Customers can then find their optimal balance between  
filtering and IP address blocking techniques, of which there are  
many.  Unfortunately, few filtering applications alone are able to  
handle the current level of abuse without address black-holing.  And  
unfortunately, offering a more graduated approach increases the level  
of abuse seen on the Internet as a whole.


IPv6 will not make this problem go away.  IPv6 will necessitate  
development of a means to positively verify the administrative domain  
of the _client_ sending traffic into the public address space before  
the transition to IPv6 can be embraced by existing public messaging  
services.




We have three Internets: IPV4, IPv4+NAT and IPv6.


Perhaps this breakdown should include these perspectives as well:  
IPv6+6to4, IPv6+NAT.

This strongly suggests to me that during the transition, a period I  
expect to last until 2025, we will want the standard Internet  
allocation to be a /64 of IPv6 plus a share in a pool of IPv4  
addresses behind a NAT.


The correlation of IPv6 addresses within IPv4 space might be  
dynamically assigned by newer services, such as PNRP.  NATs will  
remain a part of the Internet long into the foreseeable future,  
regardless of what might be said or done.  Perhaps new types of DNS  
records need to be developed to express complex paths now required by  
today's applications operating within the reality of mixed  
topologies.  Such navigation will likely be handled at what might be  
seen as new sub-layers.



What I would like to get to is a situation where a consumer can

1) purchase Internet connectivity as a commodity with a well  
defined SLA that assures them that the connection will work for the  
purposes they wish to use it


2) obtain a report from their Internet gateway device(s) that tells  
them whether the SLA is being met or not.


When an ISP permits their customers to abuse the Internet, recipients  
of this abuse MUST BE ALLOWED to block abusive traffic.  These blocks  
will not disappear quickly, nor are these blocks directly managed by  
ISPs offering now-blocked outbound connectivity.  A customer of such  
an ISP may have no recourse but to seek a different provider,  
regardless of what is within their SLA.  There is no need to receive  
a report, the customer will be aware of the problem rather quickly.   
Perhaps this can be solved by offering a tasting period. ; )


From the point of view of the consumer all that matters is that  
their client applications and their peer-2-peer applications work.  
The typical consumer does not host servers at their home and we  
want the sophisticated consumer to move to IPv6.


Consumers often violate the AUP of their residential Internet  
provider.  Acknowledging this, some AUPs are making exceptions where  
semi-private versus public server use might be allowed.  These  
exceptions often permit things like remote access or various peer-to- 
peer applications.  These applications are fairly common and  
supported by various mechanisms included in typical IPv4 wireless  
routers.


Most application protocols work just fine behind NAT. FTP works  
with an ugly work-around. The main protocol that breaks down is  
SIP. I am mighty unimpressed by the fact that when I plug the VOIP  
connector made by vendor X into the wirless/NAT box made by vendor  
X that I have to muck about entering port forwarding controls by  
hand. And even less impressed by the fact that a good 50% of the  
anti-NAT sentiment on this list comes from employees of vendor X.


STUN does not appear to me to be an architecturally principled  
approach, it is a work around.


Techniques employed to navigate through NAT and firewall barriers are  
often based upon various assumptions.  The small percentage of 

Re: Autoreply

2007-07-13 Thread Douglas Otis


On Jul 13, 2007, at 9:54 AM, Ken Raeburn wrote:


On Jul 13, 2007, at 09:05, John C Klensin wrote:
However, I think the IETF benefits from policies whose effect is  
to keep the clueless and inconsiderate off our mailing list until  
they can be educated.


I think most organizations or lists would benefit from such  
policies.  But where does the education come from, if not us?  Are  
we expecting them to attain a certain level of clue at some  
unspecified elsewhere, before they can join up to discuss the  
GSSAPI or CALSIFY or something else pretty well removed from  
needing an understanding of the workings of the mail systems of the  
Internet at large?  We certainly aren't giving them any help in  
that regard with our list welcome message.


I'd be okay (not happy) with a policy of unsubscribe and send a  
friendly form letter explaining why and how to fix it, though I  
don't think it's as good as keeping delivery going when that's easy  
and doesn't impact the rest of the list membership.  But a policy  
of simply unsubscribing would likely lead them to the conclusion  
that the IETF mail system is broken (and if you consider policies a  
part of the system I'd say they'd be right), and that by  
association, the IETF is as lame and clueless as we're claiming the  
subscriber and his sysadmins are.


Why write code to _accommodate_ auto-response senders?  Mailman  
already has a mechanism in place for bounces.  This mechanism will  
not unsubscribe the account, but instead disables messages from  
being delivered.  At some point, recipients will become aware of a  
problem.  The archives will contain messages missed, and confirm a  
cessation.  Suspending delivery ensures auto-responses are not made  
to other list posters as well.  Everyone will be happier except those  
sending auto-replies.  TT.


In the case of ongoing bounces, it is not possible to notify  
recipients of an altered status or how they might reinstate  
delivery.  However, it will be far more educational to _not_ issue  
notifications in the auto-response case as well.  Disabling delivery  
of messages without notification requires the clueless to  
investigate.  This lesson may be shared with IT staff when they  
become part of a really clueless recipient's investigation.  The IT  
staff are also best able to ensure future notifications are curtailed.


In the Mailman management page, indicate that the repeated sending of  
messages like "I'm on vacation" may be a reason for delivery to  
become disabled.  Problem solved, with an effective educational  
component included.


-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Autoreply

2007-07-12 Thread Douglas Otis


On Jul 12, 2007, at 8:33 AM, Iljitsch van Beijnum wrote:


On 12-jul-2007, at 16:57, JORDI PALET MARTINEZ wrote:

So I instruct here the secretariat to *automatically* take the  
appropriate measures with this case and any other similar one in  
the future, such as restricting (only) postings from the bouncing  
email address until the poster comes back from vacations (or  
whatever)


That doesn't help much, because then we all still get private  
vacation messages. Please kick these people off the list.


The web interface for Mailman allows subscribers to re-enable their  
account to receive messages when deliveries bounce too often, as  
such an account might become disabled without notice.  In the  
subscriber's management web-page, a message could indicate that a  
reason for delivery being disabled may also include use of auto- 
replies.  Not sending to an account that responds with auto- 
responses takes care of the problem.  Such auto-responses will appear  
as duplicate messages and should be rather easy to automatically  
detect.  Disabling the account in this manner should be painless,  
where of course a subscriber would then need to notice a lack of  
traffic.  This lack of traffic should prompt them to check their  
account status, and click the enable radio button.
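
For reference, the knobs involved already exist on Mailman 2.1's Bounce  
processing admin page (option names and defaults as I recall them;  
please verify against your own installation):

   bounce_processing = Yes
   bounce_score_threshold = 5.0
   bounce_you_are_disabled_warnings = 3
   bounce_you_are_disabled_warnings_interval = 7 (days)

Treating a detected auto-reply as a bounce-score increment would fold  
the vacation-message case into this existing machinery without any new  
code paths.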


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: PKI is weakly secure (was Re: Updating the rules?)

2007-07-11 Thread Douglas Otis
On Wed, 2007-07-11 at 09:55 +0200, Eliot Lear wrote:
 Doug,
 
 
  When short cuts are taken in PKI as with SMTP, there should be some 
  concern.
 
  DKIM voids vetted CAs, as the public key is obtained from DNS, this 
  provides the URL association directly.
 
 It's really not the same.  The implications of a compromised DKIM key 
 are bilateral *at best*, whereas a CA, particularly a well known one 
 will have far broader impact.
 
 But that's not what I was talking about.  What I was referring to was 
 Ohta-san's implication that PKI is fundamentally flawed.  Perhaps it is, 
 but I don't see anything better for key distribution to millions of 
 people.  If you, he, or anyone else comes up with something better, I'm 
 all ears.

I agree with your statements about PKI.  PKI helps formalize safe
procedures for exchanging information.  PKI also reduces the number of
entities whose attention to these procedures needs to be vetted.

For DKIM, an ad-hoc exchange of keys or the delegation of a zone are the
methods available to authorize a third-party.  This authorization could
often be needed to permit a provider's added signature to be recognized as
authoritative.  This is accomplished by making the third-party's signature
transparently appear to be that of their customer.

How keys are exchanged or handled might be accomplished in any number of
ways.  A provider may find it easier to collect private keys and
selector locations, issue public keys and selector locations, or obtain
a zone of their customers to permit them to directly deal with keys and
selectors to better manage key rollovers.
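
To illustrate the two zone-level arrangements (the names and the key  
value below are placeholders only):

   ; (a) the customer publishes the provider's public key under its own selector
   esp1._domainkey.customer.example.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

   ; (b) the customer delegates the whole key namespace to the provider
   _domainkey.customer.example.        IN NS   ns1.esp-provider.example.

Either way, a signature bearing d=customer.example verifies as the  
customer's own, regardless of which of the provider's servers actually  
transmitted the message.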

When a service provider becomes compromised in some manner, the many
domains they provide signatures for could be at risk. The private key or
keys the provider uses could be used to spoof the messages for any of
their customers from any SMTP client.  These keys are likely distributed
to many servers connected to the Internet.  This might also include keys
published in DNS when the service provider controls different zones for
thousands of other domains.  The use of these keys may not be limited to
just DKIM as well.

Without an authorization by-name mechanism for DKIM, the recipient is
unable to vet which third-party they are trusting.  The originator would
also need to hope their keys are handled safely using some ad-hoc
procedure.  When one of these third-parties becomes compromised, users
would need to be asked to check which key selectors were used to sign
messages for perhaps thousands of different domains.  This could be a
rather daunting list caused by just one compromise.

DKIM could prevent this type of unmanageable compromise situation by
using a by-name authorization scheme instead.  DKIM should recommend
against an exchange of keys or the delegation of zones.  When keys used
by provider-x become compromised, all their customers will be able to
retract their authorization record, and just signatures made by
provider-x would then be suspect.  When dealing with a compromise
situation, a simple report is all that is needed.  As it is now, a
compromise report may look like an encyclopedia of keys and selectors.

The impact would be much more painful to those compromised as well.  The
report would need to list all their customers.  Those customers will
likely receive phone calls from competitors.  Knowing this, the
temptation would be to not report a breach, and require each of their
customers to report a problem as it gets discovered by them perhaps much
later.  One compromise may then sound like an epidemic of compromises.

Authorization by-name and not by key exchange is a much safer
alternative for DKIM.  Authorization by-name would also permit the
detection of possible replay abuse, made impractical with ad-hoc key
exchanges.

-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: PKI is weakly secure (was Re: Updating the rules?)

2007-07-10 Thread Douglas Otis


On Jul 8, 2007, at 10:34 PM, Eliot Lear wrote:


This can be said of any technology that is poorly managed.


So, you merely believe that the infrastructure of PKI is well  
managed.


In all but a single instance I have no evidence to the contrary.   
The one case of an exploit was extremely well publicized and  
ameliorated within days.  And that was years ago.


Trust Models.

Once a CA is vetted, it can be leveraged as a point of trust.  The  
trust is of an association with a URL validated by the certificate.   
This works for the most part, with notable exceptions.  The fact that  
exceptions are notable is notable.



When short cuts are taken in PKI as with SMTP, there should be some  
concern.


DKIM voids vetted CAs; as the public key is obtained from DNS, this  
provides the URL association directly.


Although DKIM depends upon DNS, it makes two significant compromises.

1) The only anti-replay mechanisms available are:
 -  direct association of SMTP client with signing domain.
 -  per-user keys.
 -  SPF to associate SMTP client with signing domain.

2) The only authorization mechanisms available are:
 - ad-hoc exchange of public or private keys.
 - delegating DNS key zone.


Replay-abuse:

Direct association with an SMTP client is not always practical.

Timely per-user key cancelation is unlikely, and may even cause DNS  
to become inundated with short-lived key records.


SPF allows bad actors to script a series of transactions through  
_any_ remote DNS cache.


Granting this access via SPF to bad actors places potential victims  
at substantial risk of being inundated.  SPF is macro expanded for  
each email-address checked.  A spam campaign originating from a  
domain of many local-parts will be able to instantiate hundreds of  
DNS transactions per message.  Why not?  The attack will be free to  
any spammer.  The victims of bad actors can be attacked from the same  
cached, long-lived SPF record.  Poisoning DNS usually starts by  
preventing an authoritative DNS server from responding, after which  
remote domains must be directed to make a series of queries.  SPF  
enables both tasks.


Proponents of SPF scripts felt that granting bad actors such access  
to remote DNS resolvers was justified.  Their justification was based  
upon the level of damage that might be caused by a chain of bogus NS  
records anyway.  Of course any such susceptibility to bogus NS record  
chaining can be handled with changes to the DNS protocol.  Damage  
created by SPF can not be handled by such changes to DNS.  In  
addition, bogus NS record chaining can also be greatly amplified by  
SPF.  Bogus NS record chaining should have increased the concern;  
instead just the opposite resulted. : (


Unfortunately, while DNSsec might avoid the cache poisoning enabled  
by SPF, it will also increase the odds of SPF being successful at  
staging a DDoS attack.


Lack of authorization by-name:

The ad-hoc exchange of keys is a disaster waiting to happen.   
Thousands of private keys per server could be at risk.  In addition,  
who signed has been obfuscated.  There is no reason to obfuscate  
DKIM's signing process.  Users will not be examining signatures and  
are unable to understand how it was verified.


Deflecting accountability away from entities transmitting messages  
into public SMTP servers is the reason for:


 - SPF's absurdly expansive IP address lists instead of  
authorization by-name.


 - DKIM's lack of an authorization by-name mechanism.

DKIM and SPF are less than hopeful signs.  The intent of these  
schemes is to deflect accountability away from the SMTP client  
publicly transmitting the spam.  How does this instill trust?


-Doug





 


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: PKI is weakly secure (was Re: Updating the rules?)

2007-07-10 Thread Douglas Otis


On Jul 10, 2007, at 1:51 PM, Stephen Kent wrote:


At 1:13 PM -0700 7/10/07, Douglas Otis wrote:

On Jul 8, 2007, at 10:34 PM, Eliot Lear wrote:


This can be said of any technology that is poorly managed.


So, you merely believe that the infrastructure of PKI is well  
managed.


In all but a single instance I have no evidence to the contrary.   
The one case of an exploit was extremely well publicized and  
ameliorated within days.  And that was years ago.


Trust Models.

Once a CA is vetted, it can be leveraged as a point of trust.  The  
trust is of an association with a URL validated by the certificate.


your reference to a URL is a very specialized (not generic)  
description of how one might interpret the security services  
associated with a CA.


Agreed.  I should have said "could be of an association with...".  The intent  
was to relate this to DKIM, which of course is even more specialized.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Application knowledge of transport characteristics (was: Re: Domain Centric Administration)

2007-07-09 Thread Douglas Otis


On Jul 8, 2007, at 10:53 PM, Lars Eggert wrote:


On 2007-7-5, at 19:07, ext Tom.Petch wrote:
If we had a range of transports (perhaps like OSI offered), we  
could choose the one most suited.  We don't, we only have two, so  
it may become a choice of one with a hack.  But then that limited  
choice may be the reason why the Internet Protocol Suite has  
become the suite of choice for most:-)


We have four standards-track transport protocols (UDP, TCP, DCCP  
and SCTP), and, FWIW, SCTP has a concept of record boundaries.


Designers of applications and higher-layer protocols still have a  
tendency to ignore SCTP and DCCP and the particular features they  
can offer to applications. This can make applications more complex,  
because they need to re-invent mechanisms that a more appropriate  
transport protocol would have provided.


A desire to use TCP or UDP could also be due to vested interests in  
existing solutions.  Transports with poor error detection, as found  
with TCP or UDP checksums and even Fletcher-32, use modulos that can  
mask fairly common memory or interface errors.  SCTP extends error  
detection and ensures jumbo frames are afforded end-to-end error  
detection equivalent to 1.5 KB ethernet LAN packets.  A number of  
errors go undetected by TCP or UDP.  Handling such errors by upper  
protocol layers is usually not robust, if even existent.  When  
reliability, high availability, low latency, and elimination of  
head-of-queue blocking matter, SCTP offers a clean solution.
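
As a minimal sketch of how little is needed to experiment with the  
one-to-one SCTP socket style (this assumes a Linux host with kernel  
SCTP support; socket.IPPROTO_SCTP is only defined where the platform  
provides it):

   import socket

   def sctp_listener(port=5000):
       # one-to-one style SCTP socket, used much like a TCP listener
       s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
       s.bind(("", port))
       s.listen(5)
       return s

The same few lines with SOCK_STREAM alone give TCP, which is part of  
why application designers rarely look further.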


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Application knowledge of transport characteristics (was: Re: Domain Centric Administration)

2007-07-09 Thread Douglas Otis


On Jul 9, 2007, at 3:47 AM, Stephane Bortzmeyer wrote:

Designers of applications and higher-layer protocols still have a  
tendency to ignore SCTP and DCCP


Because experience shows them that, unfortunately, they do not  
cross most firewalls and NAT devices?


This is, sadly, yet another case of Internet ossification.


SCTP does not illustrate Internet ossification at all, but rather  
these deployment issues are indicative of a particular desktop OS  
clinging to CP/M. : )


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: A new transition plan, was: Re: the evilness of NAT-PT, was: chicago IETF IPv6 connectivity

2007-07-07 Thread Douglas Otis


On Jul 7, 2007, at 11:19 AM, Iljitsch van Beijnum wrote:


On 6-jul-2007, at 20:53, Douglas Otis wrote:

How will SMTP servers vet sources of inbound messages within an  
IPv6 environment?  Virtually every grain of sand can obtain a  
new IPv6 address.


Simple: look at prefixes rather than individual addresses. If  
2002::2002 is a spammer, then you may want to assume that  
2002::2003, 2002::2004 etc are also spammers. With IPv6, the CIDR  
distance between nodes under different administration should be  
considerably larger than with IPv4, where you'll often see systems  
belonging to different people on consecutive addresses.


Recommending CIDR blocking will not win accolades.  This is one of  
the evils most want to avoid.  The dynamics of IPv6 addressing is  
part of the problem.  IPv6 might utilize a translation of some IPv4  
address, perhaps of a session crossing through the IPv4 address space  
into IPv6 address space via gateways.  These translations will likely  
be dynamically assigned.  Just the ease of being able to obtain a new  
IPv6 address represents a problem although this is also touted as a  
feature.  IPv6 address vetting is unlikely to effectively track bad  
actors, nor will reverse DNS be all that useful.


An IPv6 address may traverse any number of translation points as  
well.


Huh? What are you talking about?


There will be many ways to bridge between the IPv4 and IPv6 address  
space.



This complex topology spells the end of SMTP in its current form.


And that's a bad thing?


Probably not.  However, the IETF might want to consider what is  
needed to transition into something more robust once IPv6 space is  
allowed to send email.  Difficult-to-obtain, non-volatile references  
are essential for tracking abuse history.  The difficulty is with the  
difficult-to-obtain reference.


PNRP depends upon groups.  Groups work in a fashion similar to that  
of a mailing list.  This suggests there could be intermediaries  
offering references.  Evidence of abuse should be limited to the  
transmitting domain and not the individual.  Dealing with abuse  
should be internal to the domain.


Limiting accountability to the domain is a major sticking point  
within the current email infrastructure.  Those transmitting messages  
into public servers wish to avoid accountability.  However,  
accountability simply does not scale down at an individual to  
individual level.  There must be some form of collective  
accountability established.


One way this could be accomplished would be to create a message that  
is nothing more than a URI.  This could be done in a manner similar  
to BURL.  The URI could be constructed of a double hash of a time- 
stamp, source identifier, and content.  Message storage and retrieval  
could even utilize object storage.


The group would be the web URL where the sender stored the  
individuals' messages.  Recipient access could depend upon either the  
obscurity of the double hash or be handled by a scheme similar to  
OpenID.  This approach should scale, and yet ensure references are  
obtained from those offering an HTTP message access service.  This  
would allow combining both SMTP and HTTP into a unified URL  
reputation vetting system independent of any underlying IP address.


It's the fundamental lack of, well, everything, in SMTP that allows  
all this spam that we're suffering from these days and makes it  
impossible to get rid of things like the base64 encoding overhead.


The transition will likely need to be evolutionary and not  
revolutionary.  MUAs could be modified to handle either form of  
messaging.  At some point, those still using the old methods will be  
too few to matter.


Build a better mousetrap rather than complain that the mice don't  
like the cheese.


Agreed.  But is this something the IETF will even consider?  Are  
there too many vested in the old mousetrap?


-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Updating the rules?

2007-07-06 Thread Douglas Otis
On Thu, 2007-07-05 at 09:29 +0200, Brian E Carpenter wrote:
 I posted draft-carpenter-rfc2026-changes-00.txt at
 Russ Housley's request. Obviously, discussion is very much
 wanted.
 
  Brian
 
 http://www.ietf.org/internet-drafts/draft-carpenter-rfc2026-changes-00.txt
 
 This document proposes a number of changes to RFC 2026, the basic
 definition of the IETF standards process.  While some of them are
 definite changes to the rules, the intention is to preserve the main
 intent of the original rules, while adapting them to experience and
 current practice.

In:

3.  Detailed Proposals for Change

Acronyms could be structured to clarify extensions versus stand-alone.

Perhaps acronyms could look something like:

SMTP
SMTP/Text-Format
SMTP/Host-Reqs
SMTP/Msg-Size
SMTP/Anti-Spam
SMTP/Auth
SMTP/Pipelining
SMTP/Binary-Mime
SMTP/TLS
SMTP/DSN
SMTP/Multipart
SMTP/Enhanced-Status
SMTP/Ext-DSN
SMTP/Auto-Response
SMTP/Submission 

-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: A new transition plan, was: Re: the evilness of NAT-PT, was: chicago IETF IPv6 connectivity

2007-07-06 Thread Douglas Otis


On Jul 6, 2007, at 3:07 AM, Iljitsch van Beijnum wrote:

And from an architectural perspective, address translation is  
clearly a dead end. One of the reasons we argue against NATs is  
not that there aren't other major problems, it's that people  
haven't managed to get the message on NATs yet.


Well, an iceberg looks very differently depending on whether you  
look at it above water or below. The problem with NAT is like  
almost all persisting problems: the bad consequences aren't felt in  
the place where they're created.


It should be abundantly clear that being Internet-robust is not a  
requirement set by the marketplace.  People want their multiplayer  
games and conferencing programs to just work!  A transition to full  
IPv6 will be perilous, as it is _not_ possible to drop IPv4 support  
in most environments.


Unfortunately, the NAT problem also represents a business  
opportunity.  This is true whether or not the solutions are condoned  
by a standards body.  In the case of IPv6, Teredo UDP IPv4 tunneling,  
Teredo servers, and PNRP (a name service to navigate Teredo  
topologies) represents an immediate solution.  A solution that  
introduces _more_ translations.


The ideals of end-to-end assume the end is Internet robust.  With  
Teredo and PNRP, external services play a significant role.  Will the  
end, in conjunction with extremely complex topology ever become  
Internet robust?


How will SMTP servers vet sources of inbound messages within an IPv6  
environment?  Virtually every grain of sand can obtain a new IPv6  
address.  An IPv6 address may traverse any number of translation  
points as well.


This complex topology spells the end of SMTP in its current form.   
Perhaps SMTP could depend upon SMTP Client names or change into a  
type of URI based notification process, where messages are held by an  
HTTP server.  The URI of the HTTP server might then replace reliance  
upon SMTP Client IP address reputation.  SMTP represents just one  
protocol heavily dependent upon IPv4.


As IPv4 becomes constrained, IPv4 based access control improves.   
Fully adopting IPv6 introduces another problem, IPv6 address access  
controls.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: A new transition plan, was: Re: the evilness of NAT-PT, was: chicago IETF IPv6 connectivity

2007-07-06 Thread Douglas Otis


On Jul 6, 2007, at 1:52 PM, John C Klensin wrote:


--On Friday, 06 July, 2007 11:53 -0700 Douglas Otis
[EMAIL PROTECTED] wrote:


...
How will SMTP servers vet sources of inbound messages within an  
IPv6 environment?  Virtually every grain of sand can obtain a  
new IPv6 address.  An IPv6 address may traverse any number of  
translation points as well.


This complex topology spells the end of SMTP in its current form.   
Perhaps SMTP could depend upon SMTP Client names or change into a  
type of URI based notification process, where messages are held by  
an HTTP server.  The URI of the HTTP server might then replace  
reliance upon SMTP Client IP address reputation.  SMTP represents  
just one protocol heavily dependent upon IPv4.


Doug, I think you are conflating two problems.  There is running  
code (and extensive history) to demonstrate your conclusion is not  
correct; the other issue may indicate that some of the things that  
are now under development in the IETF are wrong-headed and that  
they need to be rethought --now or later.


It is more a matter of remaining relevant.  If not now, then later  
matters even less.


First, SMTP was designed to work in a multiple-network environment,  
with gateways and relays patching over differences far more  
significant than anything related to the difference between IPv4
and IPv6.  Sometimes it underwent smaller or larger transformations  
in those other network-transport environments (e.g., using much  
more batch-oriented envelope structures).  But anyone old enough to  
have seen a message move from EARN to the Internet and possibly  
then back to BITNET or into FidoNet or UUCP-based mail has rather  
strong empirical evidence that a mere change of network protocols  
doesn't do much to SMTP.   It is perhaps worth remembering that RFC  
821 contains a good deal of text, some appendices, and  
miscellaneous dancing around the topic of SMTP over non-TCP services.


You are right about SMTP working across topologies.  I set up UUCP
for offices back when dedicated business lines were considered too
expensive.  I set up older messaging schemes for my customers while
conducting a peripheral design business.  This business demanded
timely exchanges of schematics, wire-lists, PCB photo-plots,
test-vectors, and firmware.  When I first learned of the Internet, a
one-page fax to Japan was costing more than $10.  It sounded as if


Low-cost Internet services together with today's level of abuse have
led to unintended DDoS attacks.  Defending against this unintended
attack depends upon IP address history.  When this history proves
ineffective at refusing traffic from previously noted bad actors,
SMTP no longer works.  It has been an ongoing battle to ensure bad
actors are unable to exploit easily obtained IP addresses.  This
includes addresses used by dial-up accounts, compromised residential
systems, and even ISPs not enforcing adequate AUPs.  Compromised
residential systems make discerning the last category difficult, a
distinction that may soon be abandoned as yet another casualty of
faltering security.


On the other hand, any authentication, authorization, or validation  
technique that depends either specifically on IPv4 addresses or on  
some sort of end-to-end connection between the final delivery MTA  
(or MUA after it) and Submission MTA (or earlier) is going to be in  
a certain amount of trouble: from IPv6, from long relay chains,  
from long-delay networks, and so on.  Obviously some of the  
proposed mechanisms in that class are much more sensitive to  
network variations (or, in the worst cases, any situation in which  
there is not a single connection between the MSA and the delivery  
server) and maybe the IETF should be looking harder at those  
characteristics before publishing or standardizing them.


Depending upon DNS to migrate protocol behavior is one concern.
Wildcard records in DNS can increase the amplification of some
intentional DDoS attacks.  To support SMTP, some servers will issue
a long series of scripted DNS transactions which might have been
dictated by bad actors (as if the goal were to taint and/or kill
DNS).  At least PNRP does not depend upon DNS.  What will the IETF
offer as an alternative to this proprietary name resolution protocol?


But the oldest sender-authentication and message integrity  
mechanisms of all don't depend on such endpoint connections over a  
single network technology: I, and I imagine many others, have had  
mailboxes on and off for 20-odd years that will not accept any  
traffic that does not arrive with a digital signature over the  
content that can be verified by software in the delivery  
(receiving) system and with requirements on the signed content,  
such as sequence numbers or time stamps, to prevent replay  
attacks.  Such approaches either require a strong PKI or secure out

Re: Domain Centric Administration, RE: draft-ietf-v6ops-natpt-to-historic-00.txt

2007-07-03 Thread Douglas Otis


On Jul 2, 2007, at 11:06 AM, John C Klensin wrote:

Of course, almost none of the issues above are likely to go away,  
or even get better, with IPv6... unless we make some improvements  
elsewhere.   And none of them make NAT a good idea, just a  
solution that won't easily go away unless we have plausible  
alternatives for _all_ of its purported advantages, not just the  
address space one.


The initial use of IPv6 in North America will likely involve Teredo  
enabled NATs and Teredo servers.  It does not seem NATs will go away  
anytime soon, especially those adding Teredo compliance to ensure  
multi-player games function without router configuration.


Unfortunately many exploits now bypass protections once afforded by  
NATs or peripheral firewalls.  Browsers are always in transition and  
can be exploited with their many hooks into OS services and  
applications.  It seems security is sacrificed to enable some new  
proprietary interface.  This is an area where standardization has  
seemingly failed.


Browser exploits have become so pervasive as to require our company  
to extensively retool behavior evaluations.  For example, SMTP  
reputations are being converted to a progressive scale to adjust for  
the growing prevalence of 0wned systems.  It seems much of the  
malware activity is just harder to detect.


It gets worse.  NATs are not a complete solution, and represent a new
challenge.  PNRP clouds combined with new, complex routing paths
represent a risk that will be even harder to evaluate, and against
which policies will be harder to enforce in a scalable fashion.


In the early days of the Internet, the level of commerce and related  
crime was far lower than it is today.  People are now filing their  
Federal taxes on-line.  What the Internet is being used for has  
changed significantly.  When defending against criminal exploits,  
there is less doubt about risks.  The hazards are very apparent,  
although they might be harder to detect.


The security section for the next great idea should carefully
review and strategize how the world is to handle the resulting abuse.
That section is unfortunately growing significantly in importance
every day.  What seemed like a good idea can easily become a nightmare.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: chicago IETF IPv6 connectivity

2007-07-03 Thread Douglas Otis


On Jul 3, 2007, at 8:34 AM, Joel Jaeggli wrote:


Arnt Gulbrandsen wrote:


IMNSHO, the sensible time is to do it when the relevant RIR runs  
out of addresses. I'm sure the IETF can get a couple of thousand  
IPv4 addresses for temporary use even years after that time, but  
it would seem a little hypocritical to do so.


The network at both of IETF meetings I've attended felt a little  
archaic: abundant addresses, no paperwork, no firewall, no NAT.


So basically, you're complaining that you came to the IETF and  
received production quality Internet service?


Do IETF networks add missing IPv6 root glue?  If so, would this be  
beyond production quality?


-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Mailman request spam exploit

2007-07-03 Thread Douglas Otis
There is a weakness in Mailman being exploited to send spam, and it
is affecting IETF mailing lists.  Responding as if with a DSN per RFC
3464 should help curtail this exploit.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: chicago IETF IPv6 connectivity

2007-07-03 Thread Douglas Otis


On Jul 3, 2007, at 3:44 PM, Hallam-Baker, Phillip wrote:

The point about eating dog food is not to order the salespeople to  
eat the dog food or else. If the salespeople refuse to eat the dog  
food you are meant to go back and fix it. The approach being  
suggested here is to tell the salespeople to eat it and like it.


For most startups, it is not the quality of the dog food that  
matters.  It is the quality of the sales people.  No one expects a  
first version of anything to not have bugs. : )


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Domain Centric Administration, RE: draft-ietf-v6ops-natpt-to-historic-00.txt

2007-07-02 Thread Douglas Otis


On Jul 2, 2007, at 8:14 AM, Hallam-Baker, Phillip wrote:


My point here is that the principal objection being raised to NAT,  
the limitation on network connectivity is precisely the reason why  
it is beneficial.


There is no other device that can provide me with a lightweight  
firewall for $50.


Teredo-enabled NATs are likely how IPv6 address use becomes
commonplace.  This creates interesting security problems, as it
bypasses normal policies.  Even so, many exploits are not prevented
by NATs and peripheral defenses.  Exploits simply depend on the lines
of code found within browsers and their many hooks into OS services
and applications.


The problem has become so pervasive as to require extensive  
retooling.  For example, SMTP reputations must be made more  
progressive in an attempt to accommodate a pervasive level of 0wned  
systems.  The battle rages where NATs are not a complete solution,  
but instead represent a new challenge.


-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Last Call: draft-ietf-dkim-ssp-requirements (Requirements for a DKIM Signing Practices Protocol) to Informational RFC

2007-06-28 Thread Douglas Otis
This draft lays out what is destined to become email acceptance
criteria based upon DKIM signing practices.  DKIM depends upon public- 
key cryptography and uses public keys published under temporary  
labels below a _domainkey domain that must be at or above the  
identity being signed to meet strict acceptance criteria.  Once SSP  
is deployed, those wishing to benefit from DKIM protections must  
ensure their messages meet the strict expectation of a signature  
added by a domain at or above their email-address domain.  This  
strict practice is the only significant restriction currently  
anticipated by these SSP requirements.


What is missing from this document is a requirement that would offer
a practical means of meeting the strict expectation established by
SSP itself.  Currently this requires either some type of undefined
exchange of keys, delegation of a DNS zone at or below the _domainkey
label, or a CNAME DNS resource record tracking the public keys an
email provider uses, in conjunction with some agreed-upon domain
selector and the customer's domain reference placed within the
signature.  None of these solutions is very practical or really all
that safe.  This approach also obscures who actually signed the
message and on whose behalf.


There is a requirement that could offer a solution that is both safe
and scalable.  This requirement would remove any necessity for ad-hoc
exchanges of keys, delegation of one's DNS zone, or fragile CNAMEs
coordinated at the customer's domain to track the selectors and
public keys used by authorized email providers.  The requirement is
to facilitate the authorization of third-party domains by name.  This
can scale and would be far safer and easier to administer as well.


There is a draft that illustrates how this might work for SSP.

This draft has not yet reached the internet-draft directory, so here  
is a copy that can be viewed now.


http://www.sonic.net/~dougotis/dkim/draft-otis-dkim-tpa-ssp-01.txt
http://www.sonic.net/~dougotis/dkim/draft-otis-dkim-tpa-ssp-01.html

-Doug





 


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Design of metalanguages (was: Re: Use of LWSP in ABNF -- consensus call)

2007-05-17 Thread Douglas Otis


On May 17, 2007, at 9:29 AM, Tony Finch wrote:

It would help future users of ABNF if the specification did not  
implicitly endorse syntax that we now know to be unwise.


+1

Especially when not germane to ABNF definitions.  The construct  
should stand on its own when used.  Perhaps labeled as  
ThereBeDragons. :)


-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: [ietf-dkim] Re: Use of LWSP in ABNF -- consensus call

2007-05-17 Thread Douglas Otis


On May 17, 2007, at 2:27 PM, Dave Crocker wrote:

I think you are assuming a more constrained discussion than what  
I've been seeing on this thread.  The thread has discussed  
everything from removing the rule, to redefining it, to declaring  
it deprecated, to adding some commentary text.


I think that only the latter was suggested.  Declaring the macro
deprecated would not remove the definition or redefine it.  Any  
additional comments should simply indicate this construct should be  
used with caution and defined locally with a different mnemonic.  On  
the bright side, this does not impact the ABNF definition.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Use of LWSP in ABNF -- consensus call

2007-05-16 Thread Douglas Otis


On May 15, 2007, at 1:10 AM, Clive D.W. Feather wrote:


Tony Hansen said:
I share your concerns about removing rules that are already in
use -- that would generally be a bad thing.  However I'm interested
in the consensus around whether a warning or a deprecation statement
would be a good thing.


LWSP has a valid meaning and use, and its being misapplied somewhere
doesn't make that meaning and usage invalid. I could see a note being
added. However, anything more than that is totally inappropriate.


+1

Frank's text in
http://www1.ietf.org/mail-archive/web/ietf/current/msg46048.html
would be fine:

  Authors intending to use the LWSP (linear white space) construct
  should note that it allows apparently empty lines consisting only
  of trailing white space, semantically different from really empty
  lines.  Some text editors and other tools are known to remove any
  trailing white space silently, and therefore the use of LWSP in
  syntax is not recommended.

However, it doesn't belong in security considerations.


Discarding of lines is likely a response to some type of exploit.
The consideration for not using LWSP is in regard to how security
requirements may create incompatibilities.  This is the correct
section.
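
To make the hazard concrete, here is a minimal sketch (Python, a
regex transliteration of the RFC 4234 rule) showing that a line
holding nothing but a space is silently absorbed as folding
whitespace, while a truly empty line is not; a tool that strips
trailing whitespace therefore changes how the very same octets parse:

    import re

    # LWSP = *(WSP / CRLF WSP), with WSP = SP / HTAB (RFC 4234 core rules)
    LWSP = re.compile(r'(?:[ \t]|\r\n[ \t])*')

    empty_line      = "Header: value\r\n\r\nbody"    # genuinely empty line
    space_only_line = "Header: value\r\n \r\nbody"   # second line is one space

    for label, text in (("empty line", empty_line),
                        ("space-only line", space_only_line)):
        start = text.index("\r\n")
        eaten = LWSP.match(text, start).end() - start
        print(label, "-> LWSP consumes", eaten, "octets")
    # empty line -> LWSP consumes 0 octets (folding stops)
    # space-only line -> LWSP consumes 3 octets (the "blank" line keeps folding)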



What about moving LWSP, and this text, to a separate section of
Annex B:

B.3 Deprecated constructs?


Agreed. That would also be appropriate.

Another problem with LWSP is its _many_ differing definitions.  A
profusion of differing definitions alone becomes a valid reason to
deprecate the mnemonic.  This definition also represents a poor
practice with respect to security, which should not be facilitated
through standardization.  By removing this problematic construct,
better solutions are more likely to be found, and at least (ab)use of
the mnemonic will have been discouraged.  Any continued use of this
mnemonic should be discouraged, and the added note should clarify
that one reason for deprecating the mnemonic is its varied and
checkered meanings in other drafts.


-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Use of LWSP in ABNF -- consensus call

2007-05-16 Thread Douglas Otis


On May 16, 2007, at 5:19 AM, John C Klensin wrote:



Doug, John,

It seems to me that we have two separate issues here (I'm not even  
going to go so far as problems):


(1) Some documents have used the term LWSP in a way that is not  
strictly conformant with the definition in the ABNF document.   
Unless the definition is unclear, that is a problem for those  
documents -- and the review and approval process that permitted  
them to slip through with non-conforming definitions -- and not for  
the ABNF spec.   It seems to me we fix that by changing those  
documents to use different production names with local definitions,  
not by deprecating features of ABNF.


This does not deprecate a feature of ABNF beyond deprecating a
specific, confusing, and ill-advised mnemonic found in a pre-defined
macro library.  As this construct is often ill-advised, there is
little justification for it to be included in the ABNF pre-defined
macro library.  This change does not preclude any construct; it only
affects ease of use.  This should invoke greater care and forethought.


(2) What I learned back when I was doing definitional work for  
programming languages was that one wants one's basic syntax and  
syntactic metalanguages to be as simple as possible, with a  
minimum  of macro constructions and with a minimum of defined  
terminals.


From that point of view, it is easy to argue that ABNF has just  
gotten too complex, both as the result of trying to formalize some  
characteristics of 822 while maintaining a single-pass syntax  
evaluator and possibly as a second-system effect.


While it is a poor craftsman who blames his tools, it is also  
difficult to justify standardizing a macro construct proven often to  
be problematic.  When an author is looking for a macro, this  
construct and mnemonic should not be within a pre-defined macro  
library.  Exclusion of LWSP does not represent any hardship.


That, however, is an argument that ABNF itself is broken and should  
not be on the standards track.  It is not an argument for trying to  
fine-tune it by deprecating a production or two.   The broken  
argument itself was made by a few of us back when RFC 2234 was  
being evaluated.  We lost and, at this point, it would take a lot  
more than one sometimes-misused construction to justify tampering  
with it, even to the extent of selectively deprecating features.


Although ABNF limitations might lead to sub-optimal syntax, how do
they prevent optimal syntax from being defined?  This change does not
tamper with the ABNF language.  It only deprecates a pre-defined
macro proven problematic and encumbered with differing definitions.
No future draft will find the lack of this macro definition to
represent any type of hardship.  Those reading IETF drafts, however,
will find that whatever replaces this macro is specifically defined,
where security concerns are more likely to have been given greater
consideration.  Whatever macro then commonly replaces LWSP could be
given a different mnemonic and added to a subsequent version of this
draft.


-Doug






___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Use of LWSP in ABNF -- consensus call

2007-05-16 Thread Douglas Otis

In response to off-line comments,

Although LWSP has been placed within core rules, LWSP is _not_ a  
rule core to the ABNF definition of ABNF.   LWSP is _not_ essential.   
Deprecating this macro does _not_ impact the definition of ABNF.   
This macro can be deprecated to ensure it will not promote use of  
this construct, nor should this macro be used to supplant other  
definitions.  The LWSP jersey can be retired without damaging the  
definition of ABNF or otherwise limiting the future use of ABNF.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Use of LWSP in ABNF -- consensus call

2007-05-16 Thread Douglas Otis


On May 16, 2007, at 5:47 PM, John C Klensin wrote:


I would have no problems if that note made it clear that use of LWSP in a
context in which it could end up on a line by itself (in a context  
in which lines are significant) can be particularly problematic.


I see those options as very different from deprecating something  
that is used successfully and correctly in a number of standards  
and incorporated into them by reference.   Since it is in use, and  
the definition is actually quite clear, deprecating it seems  
completely inappropriate.


There is no single definition for LWSP across many documents.  A
statement that both warns about possible adverse effects and suggests
that a different label be considered for new drafts solves both the
problem of conflicting definitions AND the inappropriate promotion of
this construct.  A statement that LWSP is deprecated does not imply
it is now obsolete.  Other documents which have referenced this
document will simply find the definition deprecated.  New drafts
should consider either creating a more appropriate definition or
using a different mnemonic with their desired definition.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Use of LWSP in ABNF -- consensus call

2007-05-15 Thread Douglas Otis


On May 15, 2007, at 10:16 AM, John Leslie wrote:

   I did some research, and found the following mentions of LWSP:

rfc0733 obs-by rfc0822
rfc0822 defs LWSP-char = SPACE / HTAB   obs-by rfc2822
rfc0987 refs rfc0822
rfc1138 refs rfc0822
rfc1148 refs rfc0822
rfc1327 refs rfc0822
rfc1486 refs rfc0822
rfc1505 refs rfc0822
rfc1528 refs rfc0822
rfc1848 defs LWSP-char ::= SPACE / HTAB
rfc2017 refs rfc0822
rfc2045 refs rfc0822
rfc2046 refs rfc0822
rfc2110 refs rfc0822
rfc2156 refs rfc0822
rfc2184 refs rfc0822
rfc2231 refs rfc0822
rfc2234 defs LWSP = *(WSP / CRLF WSP)   obs-by rfc4234
rfc2243 refs rfc0822
rfc2378 defs LWSP-char = SP | TAB
rfc2530 refs rfc2234
rfc2885 defs LWSP = *(WSP / COMMENT / EOL)
rfc3015 defs LWSP = *(WSP / COMMENT / EOL)
rfc3259 defs LWSP = *(WSP / CRLF WSP)
rfc3501 refs rfc2234
rfc3525 defs LWSP = *(WSP / COMMENT / EOL)
rfc3875 defs LWSP = SP | HT | NL
rfc4234 defs LWSP = *(WSP / CRLF WSP)
rfc4646 refs rfc2434

   Based on this, I recommend outright deprecation. The RFC4234
definition is wildly different from the RFC822 usage (which is
substantially more often referenced): thus any use of it will tend
to confuse. It's also a bit dubious, quite specifically allowing
lines which appear to be blank, but aren't. :^(

   The RFC4234 definition, in fact, is referenced by only 3 RFCs:

RFC2530 Indicating Supported Media Features Using Extensions to DSN and MDN
RFC3501 INTERNET MESSAGE ACCESS PROTOCOL - VERSION 4rev1
RFC4646 Tags for Identifying Languages

   The use under RFC2530 is a bit vague (with LWSP wrapping); likewise
under RFC3501 (otherwise treat SP as being equivalent to LWSP).  The
use under RFC4646 has caused known problems.

   This would seem to justify deprecation, IMHO.


Agreed.

From a standards standpoint, half a dozen definitions for an ABNF  
mnemonic is absurd.


Perhaps something like:

The LWSP mnemonic has been deprecated and should not be used in  
future drafts.  Explicit definitions based upon different mnemonics  
should be used instead.  If possible, syntax should guard against  
possible security concerns related to visual deceptions.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Withdrawal of Approval and Second Last Call: draft-housley-tls-authz-extns

2007-04-11 Thread Douglas Otis


On Apr 11, 2007, at 4:54 AM, Brian E Carpenter wrote:


Ted,

Well, if IPR owners don't actually care, why are they asking people
to send a postcard?  It would seem to be an unnecessary administrative
burden for the IPR owners, yes?


My assumption is that they care if the party that fails to send
a postcard is one of their competitors. That's what the defensive
clauses in these licenses are all about, afaics.

...
Disclaimer: These are my own personal opinions and not necessarily
the opinions of my employer; I'm not important enough to affect the
opinions of my employer.  :-)


Ditto. (Full disclosure: Ted and I have the same employer, but
we're allowed to disagree :-)


A statement of claims and conditions should also protect software  
distribution as a practical matter, or this limitation should be  
clearly disclosed.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: NATs as firewalls

2007-03-10 Thread Douglas Otis


On Mar 9, 2007, at 10:17 PM, David Morris wrote:

In the low-end bandwidth space I play, an extra 192 bits on every
packet is significant to end user performance. As others have  
noted, it seems like the fairly effective anti-spam technique of  
associating reputations with network addresses will be stressed by  
exploding the number of addresses ... stressed because the  
originators of spam will be able to be more agile and because the  
memory and CPU required to track such reputations explodes.


Perhaps by the time IPV4 scarcity is a critical economic issue, the  
continuing trend of cheaper faster last mile internet connectivity  
as well as server system capability cost reductions will  
converge... or perhaps some combination of legal and technical
solutions will push spam into the noise level. Etc.


Unwanted traffic will likely become much worse.  DKIM is an example
of how it took years to define a domain-specific cryptographic
signature for email.  Although defining a signing policy remains, it
is doubtful the results will prove practical at controlling
cryptographic replay abuse in a diverse network landscape.  Where a
responsible signer might rate-limit account holders or exclude bad
actors, some means is still needed to authorize transmitters, to
determine whether an assumption of control remains valid.  DKIM has
no safe provision to authorize transmitters unless they are within
the domain of the signer.  It seems unreasonable, considering how
diverse an IPv4/IPv6 landscape will become, to then expect all
related network providers to obtain a zone from each of their
customers' domains and configure it for each of the protocols.  That
constraint represents an administrative nightmare.


DKIM can be adjusted, but can this be done within a suitable
timeframe?  Without a name-to-name authorization scheme, controlling
abuse will remain tied to the IP address.  When those addresses happen to
be gateways into IPv4 space operating as giant NATs, the collateral  
impacts will make today's problems seem like the good old days.   
Retaining an open system of communication may then become untenable,  
and that would be a shame.


-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: DNS role (RE: NATs as firewalls, cryptography, and curbing DDoS threats.)

2007-03-09 Thread Douglas Otis


On Mar 9, 2007, at 2:41 AM, Brian E Carpenter wrote:


Phill,

I'm not playing with words. The style of 'connection' involved in a  
SIP session with proxies is very different from that of a classical  
TCP session or a SOAP/HTTP/TCP session, or something using SCTP for  
some signalling purpose. And audio or video streaming over RTP is  
something else again.


Java programmers that I know already open/close by DNS name without  
knowing whether IPv6 is in use. But that is the plain TCP style of  
session, underneath. There is a lot more than that in the network.


Once IPv4 and IPv6 converge to a greater degree, going from point A
to point B will represent a more complex journey.  For DNS to better
facilitate this transition, where encapsulation or tunneling
techniques might be used, a greater amount of information could prove
useful.  Much of this information may help make the IPv4 portion of
the journey transparent.  The added information would help navigate
this new landscape and might help Phillip obtain his goals of
communicating between points A and B using multiple protocols and
connections.


The IPv6 Aggregatable Global Unicast Address Format allows an IPv4
address to be embedded within the IPv6 address.  Neighbor Discovery
uses the 6to4 prefix in DNS.  Teredo permits existing IPv4 NATs to
forward traffic, rather than expecting a NAT to perform the IPv4 to
IPv6 translation.  Of course there are also IPv6-to-IPv4 relays,
where the mapping might be dynamically assigned.
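
Both embeddings can be recovered mechanically from the address
itself; a small sketch using Python's standard ipaddress module
(documentation and example addresses only):

    import ipaddress

    # 6to4 (2002::/16) carries the site's IPv4 address in bits 16..47.
    print(ipaddress.IPv6Address("2002:c000:204::1").sixtofour)
    # 192.0.2.4

    # Teredo (2001::/32) carries the server's IPv4 address directly and
    # the client's IPv4 address bit-inverted in the low 32 bits.
    print(ipaddress.IPv6Address("2001:0:4136:e378:8000:63bf:3fff:fdd2").teredo)
    # (IPv4Address('65.54.227.120'), IPv4Address('192.0.2.45'))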


Despite a widely held and perhaps altruistic view that network  
providers only need to be concerned about connectivity, in reality  
they also have the unprofitable and undesired task of excluding  
unwanted traffic.  Altruism of Internet Openness must be tempered by  
network provider stewardship curbing unwanted traffic.  Translations  
and much larger address space impinge upon this essential, albeit  
thankless task.  Without this stewardship, the Internet will suffer  
from the bad behavior of a few, making the Internet unusable for  
everyone.


Cryptography offers a solution, but only when also combined with a  
safe, lightweight means for authorizing the transmitter.  The most  
practical means for bridging between these disparate worlds of IPv4  
and IPv6 would be through the use of DNS validated names.  In other  
words, Name X authorizes Name Y that has been validated using  
existing DNS records.  An extensible means for implementing such name  
based authorization could be by name hash labels.  The underlying  
reason for authorization is to prevent cryptographic signature replay  
abuse.  Constraining bandwidth currently curtails much of the  
unwanted traffic while avoiding more draconian techniques.


Phillip has considered this topic more broadly and is proposing a
grander scheme.  Working on a comprehensive and safe solution at this
time makes a good deal of sense.  As UDP will be carrying much of the
traffic, and UDP itself is connectionless, the scope could be
generalized to the packet level.  Of course, no scheme would be
efficient on a per-packet basis.


-Doug





___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: DNS role (RE: NATs as firewalls, cryptography, and curbing DDoS threats.)

2007-03-08 Thread Douglas Otis


On Mar 8, 2007, at 2:13 AM, Brian E Carpenter wrote:


On 2007-03-08 02:06, Hallam-Baker, Phillip wrote:
OK I will restate. All connection initiation should be exclusively  
mediated through the DNS and only the DNS.


Would that include connections to one's DHCP server, SLP server,  
default gateway,

and DNS server?

Hmm...


This represents a controversial topic where my perspective might
be somewhat unique.  But...


In systems that are attempting to be as open as possible, where email  
is an example, the primary method for controlling abuse has been to  
truncate connections based upon IPv4 addresses.  This address space  
allows methods using a deterministic amount of solid state  
resources.  When that address space becomes complex, where sources
represent a series of addresses translated by gateways, or even using  
an IPv6 address, tracking soon exceeds practical resource provisioning.


The quality of the control depends upon rapid turn-around by reacting  
from solid state resources.  The alternatives are many orders of  
magnitude slower.  It is not just the route, which is part of the  
assessment, but also the specific point of origin being tracked.   
There are techniques to consolidate the address space, but this is  
not an ideal solution.


Use of IPv4 addressing as a means to control abuse will soon become
problematic.  One approach would be to identify the client by name,
and then allow merged messages to be cryptographically identified by
name, where this name then authorizes the specific client by name.  A
weakness of cryptographic identification is that it can be replayed.
While there might be rate limits in place initially, message replay
thwarts reliance solely upon the cryptographic identification.


By relying upon the stewardship of a client responding promptly to
reports of abuse, tracking the names of clients permits the use of
any IP addressing scheme.  The cryptographic identifier would
authorize the client, or the message would be slowed using various
techniques to impose a type of receiver-side rate limiting that
might otherwise be lacking.  Rate limiting affords time to respond
and limits the level of damage.


-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: NATs as firewalls and the NEA

2007-03-07 Thread Douglas Otis


On Mar 6, 2007, at 1:39 PM, Jeff Young wrote:

For better or worse, the centralized means of control you mention  
may well come in the form of the latest IPTV networks being built  
by large telco providers.  As telco battles cable for couch  
potatoes, they've realized that mucking with television reception  
is perhaps the best way to overload their customer service call  
centers.  As such, the demarc between ISP* and customer is moving  
inside the home.  There may still be a Linksys or Netgear wireless
device attached to these networks but there will be an IP router  
that is partially controlled by the ISP on site.


Depending on your stomach for getting involved there will be,  
according to predictions, ~40 million households worldwide on some  
type of IPTV in the next few years alone.  We may not have the  
opportunity to replace existing hardware, but there is the  
opportunity to influence what goes in-line before it.


The centralized controls should be able to modulate connectivity.   
It seems almost every day represents a new zero day exploit of some  
sort.  In light of this, it would be helpful for connectivity to be  
limited until there is some automated acknowledgment that signals  
specifically what connectivity is required for remediation.  While it  
is possible to centrally identify threats, there is no uniform means  
to modulate connectivity during identified vulnerable transitions.


During these transitions, clicking on a link or accepting a message  
represents a genuine threat.  As it is now, compromised systems lack  
any centralized control and have placed the Internet's dependability
and related commerce at risk.  Some mechanism similar to that of the  
NEA seems needed.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: NATs as firewalls, cryptography, and curbing DDoS threats.

2007-03-07 Thread Douglas Otis


On Mar 7, 2007, at 9:01 AM, John C Klensin wrote:

It is true that I tend to be pessimistic about changes to deployed  
applications that can't be sold in terms of clear value.  I'm  
also negative about changing the architecture to accommodate short- 
term problems.  As examples of the latter, I've been resistant to  
changing (distinguished from adding more features and capability  
to) the fundamentals of how email has worked for 30+ years in order  
to gain short-term advantages against spammers.


There will be growing concerns related to abuse when ISPs deploy IPv6  
internally and then use IPv4 gateways to gain full access to the  
Internet.  Any changes related to controlling abuse should be aimed  
at identifying entities controlling transmission.  Resolving the  
address using a domain name at least identifies the administrative  
entity of the client.  For example, multimedia streaming has been  
fraught with security exploits.


As traffic merges into common channels, there will be a desire to
minimize cryptographic identifier abuse, in particular for things
like DKIM.  While there exists an experimental method for a domain to
authorize a client, this technique represents a significant hazard.
This hazard is created by the iterative construction of address lists
and the macro expansion of local-part components of an email-address.
The inherent capability of this method permits a sizable attack to be
staged without expending additional resources of the attacker.  In
addition, this experimental scheme fails to identify the point of
transmission staging the attack.


Those offering outbound services desire that acceptance be based upon  
their customer's reputation rather than upon that of their  
stewardship.  With the experimental scheme, the administrative entity  
for the client is not relevant, although essential when guarding  
against abuse.  There are several orders of magnitude more customers  
than outbound service providers.  Guarding against abuse must depend  
upon a means to consolidate the entities being assessed.


There are millions of new domains generated every day at no cost to
the bad actors.  When IPv6 becomes more common, the IP address space
may even exceed what a scalable defense can track.  The long-standing
practice of allowing clients to remain nameless will need to change.
For SMTP, the EHLO should resolve.  Any authorization scheme should
then be based upon a name lookup and not upon a list of IP addresses
for thousands of transmitters.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: DNS role (RE: NATs as firewalls, cryptography, and curbing DDoS threats.)

2007-03-07 Thread Douglas Otis


On Mar 7, 2007, at 3:00 PM, Harald Tveit Alvestrand wrote:

Here I was thinking that the DNS needs to be an useful name lookup  
service for the Internet to function, and now PHB tells me it's a  
signalling layer.


Either I have seriously misunderstood the nature of signalling,  
seriously misunderstood the nature of the DNS, or I have reason to  
dislike this term.


*shudder*.


Perhaps signaling oversimplifies the suggestion, and perhaps Phillip
sees this differently as well.


Once IPv4 does not offer an identifier for defending against DoS  
attack, a safe basis could be established with a two step approach  
using DNS.  Verify clients by name with a single DNS transaction.   
This offers a safe identifier that avoids DoS concerns.  These  
identifiers can be subsequently authorized by name as well.  DNS is  
well suited to resolve a small answer by name.


One approach for name-based authorization would place an encoded
hash label of the domain name being authorized within the authorizing
domain.  Client validation can be as simple as resolving the name of
the client, where this name can then be utilized in conjunction with
a name-based authorization.  In the case of DKIM, DNS supplies the
public key as well.
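
As a sketch of how such a hash label might be formed (the label
encoding and the _auth branch below are illustrative assumptions, not
a published format), a single fixed-size query could answer whether
the authorizing domain lists a validated client name:

    import hashlib
    import base64

    def hash_label(authorized_domain: str) -> str:
        """Illustrative hash label for a name-based authorization lookup."""
        digest = hashlib.sha1(authorized_domain.lower().encode("ascii")).digest()
        # Base32 keeps the label within DNS character and length limits.
        return base64.b32encode(digest).decode("ascii").rstrip("=").lower()

    authorizing = "example.com"          # domain granting authorization
    authorized  = "mail.provider.net"    # validated transmitter being authorized

    # One small query asks "does example.com authorize mail.provider.net?"
    # (resolve TXT for this name; NXDOMAIN would mean "not authorized").
    print(f"{hash_label(authorized)}._auth.{authorizing}")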


The concern was to avoid the indirect or reflected attacks DNS can  
produce, where a simple strategy can avoid these problems.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: NATs as firewalls and the NEA

2007-03-06 Thread Douglas Otis


On Mar 5, 2007, at 5:51 PM, Hallam-Baker, Phillip wrote:

Quite, the technical part of my proposal is essentially a  
generalization of the emergent principle of port 25 blocking. While  
people were doing this before SUBMIT was proposed the SUBMIT  
proposal made it possible to do so without negative impact on  
legitimate users.


How do we establish the political coalition necessary to act? There  
is clearly additional discussion necessary within the IETF  
community to achieve a measure of consensus. I agree that the IETF  
list is not the place for that.


We need more than just consensus in the IETF though. We need to  
convince the ISPs to act who in turn must persuade the vendors of  
SOHO routers. The ISPs have leverage, they write RFPs. The ISPs and  
others discuss this type of issue in forums such as MAAWG. The  
institutional issue is how to present an IETF consensus to such fora.


This need does not seem to be anticipated in the IETF constitution.  
The body with the closest mandate would appear to be the IAB.


While outbound controls in low cost SOHO routers, NATs, DSL or cable  
modems could prove useful, there is a significant hardware  
installation base that will not be replaced anytime soon.  Unless  
ISPs are willing to invest in a centralized means of control within  
their networks and then endure the resulting support, the problem  
will persist.  Such an investment is likely to be seen as in conflict  
with maximizing revenues.


Guidelines for ISP best practices might include recommendations for
access-device features; however, anything that requires additional
support, especially instructing users to disable some feature, seems
a lost cause.  It seems unlikely any ISP will wish to embrace this
effort, regardless of need.


The scope for the NEA effort could have been broader.  The NEA  
control mechanism is lacking, and this effort will not consider  
compatibility with the Internet as a whole.  This seems like a missed  
opportunity for improving protections where ISPs could also stand to  
benefit.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: The Devil's in the Deployment RE: NATs as firewalls

2007-03-04 Thread Douglas Otis


On Mar 4, 2007, at 11:11 AM, Brian E Carpenter wrote:

But irrelevant - the problems that NAT causes, and that having  
sufficient address space (a.k.a. IPv6) solves, are orthogonal to  
security. That is the whole point in this thread.


Of course stateful firewalls and NATs offer protection, whether for
IPv4 or IPv6.  The most notable concerns are in regard to routing
both IPv6 and IPv4.  Accommodating IPv6 will likely require a sizable
investment, with the effect of diminishing the value of an IP
address.  Will this mean network behavior might then run amok?


Reducing the value of the IP address will impact security, as many  
protocols depend upon IP address ACLs and black-hole lists.  Being  
unable to readily track IPv6 address space will likely introduce an  
era where public acceptance of messaging adopts CA certificates over  
the use of IP addresses.  This practical necessity improves security,  
but also at a cost.


-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: NATs as firewalls

2007-03-01 Thread Douglas Otis


On Mar 1, 2007, at 9:57 AM, John C Klensin wrote:

I continue to believe that, until and unless we come up with models  
that can satisfy the underlying problems that NATs address in the  
above two cases and implementations of those models in mass-market  
hardware, NATs are here to stay, even if we manage to move to IPv6  
for other reasons.  And, conversely, the perceived difficulties  
with NATs will be sufficiently overcome by the above issues to  
prevent those difficulties from being a major motivator for IPv6,  
at least for most of the fraction of the ISP customer base who  
cannot qualify for PI space.


One of the features contained within Microsoft Vista is a stack  
terminating an IPv6 address encapsulated using RFC4380 Teredo (IPv6  
over IPv4 UDP).  This also works in conjunction with their new name  
resolution protocol offering address structures for navigating  
through Teredo compliant NATs and firewalls.


While this may require rather heavy lifting to track the UDP
traffic, it constrains the growth of router tables and helps retain
the viability of IPv4 addressing.  At the same time, it offers a
transition into the IPv6 address space that moves a bit closer to the
end-to-end ideals by leveraging compliant NATs and firewalls.
Whether or not Teredo proves secure and PNRP functions well, the PNRP
name resolution service represents a proprietary solution that
appears to be without IETF IPR statements.  Is this good or bad?  It
is concerning.


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Identifications dealing with Bulk Unsolicited Messages (BUMs)

2007-02-22 Thread Douglas Otis


On Feb 22, 2007, at 1:41 AM, Brian E Carpenter wrote:

The level of bulk unsolicited messages exceed more than 90% of the  
volume in many cases


I estimate 95% of moderated non-member mail that hits the IESG list  
to be b.u.m.


Much that slips past somewhat static (and not very effective) lists
comes from a small percentage of network providers not managing
prohibitions of bulk unsolicited messages.  On one hand, network
providers' revenues depend upon traffic, any traffic.  On the other
hand are support calls.  Effective black-hole operators only deal
with network providers.  Network providers can stipulate which
address ranges are placed on the black-hole operator's policy-based
lists.  When bulk unsolicited messages are detected from sources
enabled by the network provider, the network provider is contacted
first.  Deference is afforded when a network provider responds, since
avoiding a listing is truly a shared interest.


Customers of network providers that do not respond to reports, and
that also have a high density of IP addresses emitting bulk
unsolicited messages, are unfortunately at risk.  When a customer
becomes listed, the black-hole list operator will likely inform them
they must contact their network provider, as the network provider
must act on their behalf.  It is impossible to develop relationships
with billions of network providers' customers, where those wishing to
send bulk unsolicited messages are also often less than truthful.
Short of making bulk unsolicited messages outright illegal or  
permitting complete mayhem, the tussle remains between black-hole  
list operators and network providers, an aggregate of receivers  
versus an aggregate of transmitters.


Network providers very much desire black-hole operators to  
automatically delist IP addresses when their customers complain to  
the black-hole operator.  Ongoing efforts in the ASRG voice this  
desire in a draft aimed at advising black-hole list operators.  This  
draft does not clarify how network providers are identified, or  
attempt to describe the network provider's role in controlling bulk  
unsolicited messaging.  Ignoring the role of the network provider may  
be extremely profitable for some, but is also likely to be highly  
detrimental for the Internet as a whole.


A better way to deal with this problem would be to impose stiff  
sanctions on network providers who fail to handle reports of bulk  
unsolicited messages.  This will mean they need to deal with  
fraudulent accounts or block infected computers.  Currently, the PITA  
created by black-hole lists creates some financial incentive that
restrains BUMs at their current, albeit high, levels.  Ignoring the  
role network providers play in controlling bulk unsolicited messages  
will likely allow this problem to grow much worse.


-Doug

 


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Identifications dealing with Bulk Unsolicited Messages (BUMs)

2007-02-21 Thread Douglas Otis


On Feb 21, 2007, at 4:31 AM, Brian E Carpenter wrote:


On 2007-02-18 13:46, Tony Finch wrote:

On Sun, 18 Feb 2007, Harald Tveit Alvestrand wrote:
If this was effective, blacklists would have solved the spam  
problem.

They are 90% effective


You what? Which Internet would that be?

Blacklists at the level of sending domains (or reputation systems  
that function like blacklists) are a failure. Maybe you are  
fortunate and dotat.at is not blacklisted. You won't feel so  
fortunate when it does get blacklisted one day, if you happen to  
find out why your mails are being dropped.


The preferred solution would be to abolish email black-hole lists,  
and rely upon effective AUP enforcement of network providers that  
prohibit bulk unsolicited messaging.  Unfortunately some countries,  
such as the United States for example, permit bulk unsolicited  
messages following a few guidelines that are rarely enforced.  In  
addition, the US law also prevents victims of bulk unsolicited  
messages from seeking relief in court, as only providers and the US  
government have standing.


The level of bulk unsolicited messages exceeds 90% of the volume in
many cases, where a majority commonly see figures in excess
of 80%.  Without use of email black-hole lists, many systems become  
saturated with unwanted messages.  This is particularly true where  
network bandwidth is the limiting factor.  Both Sender-ID and DKIM  
require entire messages to be received before acceptance criteria can  
be applied.  Methods to identify and filter messages based upon  
originating email-addresses will not offer any relief, where a high  
turnover of millions of domains every day makes this effort far less  
effective as well.


Nevertheless, bulk unsolicited messages are also effective at  
infecting or enticing victims.  These messages must be stopped.  No  
email black-hole list can be 100% effective, but such lists can
eliminate much more than two-thirds of this unwanted traffic.  This
reduction often
rescues resources needed for message analysis aimed at improving  
basic security protections.


Black-hole lists also have false positives.  At times, false  
positives encourage network providers into either establishing or  
enforcing AUPs that prohibit bulk unsolicited messages.  Only network  
providers can adequately deal with this problem, as the messages must  
be prevented before they are sent.  This remains an ugly and ongoing  
process, where outright banning of bulk unsolicited messaging is  
really the only practical solution.  Such prohibitions can be effective.


At any point in time, about 2% of the sources are creating a  
problem.  Of course, these 2% are those not yet black-hole listed as  
well.  The amount of abuse from black-hole listed sources quickly  
becomes nil.  Black-hole listing the address space of providers that  
ignore bulk unsolicited messages coming from their networks can also  
be effective at eventually changing their policies.  No source should  
be listed for 5xx without first contacting the network provider, as
determined by the ASN.  The network provider is the only suitable  
actor able to resolve this problem.  Black-hole lists are just an  
ugly band-aid.  However, time and time again, the network provider's  
role is ignored in the various email strategies.


Something that could greatly assist the network provider would be a  
scheme that identifies the entity actively transmitting the  
messages.  The transmitter's IP address can become black-hole listed,  
should the entity running the transmitter not become aware of a  
problem.  Transmitter identifiers would also benefit network  
providers in that their customers could be directly contacted  
instead.  Unfortunately, the transmitter remains obscured in all the  
emerging standards.


-Doug












___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Identifications dealing with Bulk Unsolicited Messages (BUMs)

2007-02-19 Thread Douglas Otis
On Sun, 2007-02-18 at 13:20 -0800, Douglas Otis wrote:

---

The safe way forward would be to demand that security be considered
first and foremost.  In a store and forward scheme, start the chain of
identification from the transmitting entity, where the originating
entity is then able to authorize the transmitting entity when they
differ.

---

As clarification, validating public transmitters, and assuring
email-addresses by way of transmitter authorizations should be
considered separate events.  In the case of DKIM, it is much easier and
safer to distinguish between public and private transmissions.  This
recommendation should not be considered a suggestion for reverting to
using something as cumbersome as bang addressing.

Identifying public transmitters permits feedback that can offer
protection for IP address reputations.  Nor will email-address
assurances identify message authors.  The lack of a transmitter
authorization where such is normally obtained simply signals recipients
to be cautious with a message.

-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Identifications dealing with Bulk Unsolicited Messages (BUMs)

2007-02-18 Thread Douglas Otis
The IP address of the SMTP client can be mapped to an ASN to uncover
a network provider.  Helos might verify, which may then also identify a
domain used by a network provider's customer.  Of course the host names
within the reverse PTR may also verify as well.  Identifying the network
provider is perhaps the most reliable, as an IP address represents a
basic element of message interchange and routing.
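
As one illustration of that lookup (the Team Cymru DNS zone and the
dnspython library below are merely example choices; any whois or BGP
feed would serve the same purpose), mapping the client address to its
origin ASN takes a single query:

    import dns.resolver   # dnspython

    def asn_for_ip(ipv4: str) -> str:
        """Map an SMTP client's IPv4 address to its origin ASN via DNS."""
        reversed_octets = ".".join(reversed(ipv4.split(".")))
        answer = dns.resolver.resolve(
            f"{reversed_octets}.origin.asn.cymru.com", "TXT")
        # Reply looks like: "23028 | 216.90.108.0/24 | US | arin | 1998-09-25"
        return answer[0].to_text().strip('"').split("|")[0].strip()

    print(asn_for_ip("216.90.108.31"))   # prints the originating AS number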

Bulk Unsolicited Messages (BUMs) can cause great harm.  BUMs represent a
great preponderance of message volume, and place a growing burden on
recipients.  These messages can also directly infect systems and offer
links that might then exploit thousands of flaws existing in browsers.
The United States' CAN-SPAM Act of 2003 permits BUMs as long as the
recipients can find valid headers and a link contained within the
message.  Without BUMs being universally illegal, only network
providers' enforcement of AUPs prohibiting this activity protects the
Internet.

About 2% of email sources send BUMs that represent more than 80% of the
volume.  Once even more network providers decide to profit from revenues
obtained carrying BUMs, then very rapidly the Internet is likely to
suffer a breakdown created by unrestrained greed.  Rejecting messages
from abusive sources as quickly as possible is not likely to keep pace
with the rising levels of BUMs and their changing strategies.

Rather than applying pressure on network providers to enforce protective
AUPs prohibiting BUMs, some believe identifying individuals sending
messages will allow a greater portion of BUMs to be blocked.  The
concept of using domain identification techniques has already been
countered with domain tasting at a daily churn of as much as 5% of
existing domains.  SPF, Sender-ID, and DKIM offers domain identifiers
that can point to a network provider's customer's customer.

Thomas Nast created a cartoon that captures the situation rather well.
http://www.spartacus.schoolnet.co.uk/USAnast3.jpg

Efforts failing to identify those actually transmitting messages are
disheartening, as these methods overlook significant risks imposed by
the redirection tactics.  These tactics also assume someone else will
deal with the BUM problem.  The more removed from who is transmitting
the message, the less likely any enforcement efforts will be effective
in the millisecond blitzkrieg being waged.

SPF/Sender-ID uses a DNS based script language that allows the
local-part of an email address to modify a tremendous number of DNS
transactions generated from one cached record.  SPF actually allows DDoS
to be generated by BUMs free for the attacker, and nearly impossible to
trace.
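
To illustrate the mechanism (the record and host names below are
invented for the example; the %{l} macro is per the SPF
specification), a single published policy can direct a distinct,
uncacheable lookup for every probed local-part:

    # SPF macro %{l} expands to the local-part of the address being
    # checked, so one cached policy record can still force a separate DNS
    # lookup per probed local-part.  Record and names are invented.
    spf_record = "v=spf1 exists:%{l}._spf.example.com -all"

    mechanism = spf_record.split()[1]        # "exists:%{l}._spf.example.com"
    template = mechanism.split(":", 1)[1]    # "%{l}._spf.example.com"

    for local_part in ("alice", "bob", "x9f3k2"):   # forged local-parts are free
        print(template.replace("%{l}", local_part))
    # alice._spf.example.com
    # bob._spf.example.com
    # x9f3k2._spf.example.com   (each is a fresh query the resolver must make)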

DKIM started out as a good idea, but lacks a means to identify who is
transmitting the message.  Limitations imposed by DKIM require
email-address domain owners to submit private keys to their
email-providers.  Don't worry about the replay problem, as some claim
Sender-ID has that covered.  Both of these schemes require acceptance of
the entire message, which precludes any relief from a BUM onslaught.

Organizations created by providers with a goal to offer solutions have
instead decided to ignore risks and offer solutions unlikely to even
prevent spoofing, let alone make a dent in the BUM problem.  Australia's
Spam Act of 2003 makes it illegal to send even one unsolicited
commercial email to Australia.  The maximum daily penalty is $1.1
million for companies and $220,000 for individuals or anyone knowingly
involved.  At least the Aussies appear to grok the severity of the
problem.

-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Identifications dealing with Bulk Unsolicited Messages (BUMs)

2007-02-18 Thread Douglas Otis
On Sun, 2007-02-18 at 12:51 +0100, Harald Tveit Alvestrand wrote:
 On second thought, I know that you know this field well enough that I 
 *must* have misunderstood your message.
 
 Could you please restate your missive in such a way that it's clear:
 
 - What problem you think the IETF can help solve
 - What action you think the IETF should take to solve it?
 
 I'm afraid the whole spam thing has gotten so convoluted by now that I need 
 instructions in large type to figure out which path out of the weeds you're 
 pointing at

There is a very dangerous trend developing in how email and network
providers wish to deal with bulk unsolicited messaging.  In essence,
their desire is to hold content originators accountable.  At the same
time, they are promoting identification strategies that obscure who is
actually transmitting the message.  Although identifying the domains
creating message content might seem ideal, not identifying the domain
transmitting the message is highly perilous from a network security
standpoint.

A case in point would be Sender-ID.  Here the SMTP client's IP address
is authorized by matching it against a possible originating domain's
list of all IP addresses used publicly within the Internet.  A safe
alternative (one that does not attempt to obscure the transmitting
entity) would be to first authenticate the transmitting entity, and
then check whether an originating domain has authorized it.  The
transmitting entity can be authorized using a Name-Hash with one small
and safe DNS transaction.  Sender-ID's efforts to obscure the
transmitting entity have instead created an extremely dangerous DDoS
threat.
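
As a rough sketch of that single transaction (the hash, the label
convention, and the names below are placeholders of my own, not a
published specification), the check reduces to building one fixed-size
query name from the already-authenticated client:

    # Rough sketch of a single Name-Hash authorization lookup.  The
    # hash, label convention, and domain names are illustrative only.
    import base64
    import hashlib

    def name_hash_label(client_host: str) -> str:
        """Collapse the authenticated SMTP client name into one
        fixed-size, case-insensitive DNS label."""
        digest = hashlib.sha1(client_host.lower().encode("ascii")).digest()
        return base64.b32encode(digest).decode("ascii").rstrip("=").lower()

    def authorization_query(client_host: str, originating_domain: str) -> str:
        """The one DNS name a receiver would query to learn whether the
        originating domain has authorized this transmitting entity."""
        return name_hash_label(client_host) + "._auth." + originating_domain

    # One small, cacheable query with a yes/no answer, instead of
    # walking an open-ended list of every address the originating
    # domain uses publicly.
    print(authorization_query("mta7.bulk-provider.example", "brand.example"))

The receiver never executes anything on the originating domain's
behalf; it only asks whether a record exists at that one name.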

A case in point would be DKIM.  Here the originating email-address is
assured _only_ by the signature of a parent domain.  DKIM could have
been structured to allow the signature to always be applied by the
transmitting entity instead, and then allow this entity to be authorized
by originating domains when the domains differ.  DKIM lacks a means to
identify on whose behalf the signature was applied when the domains
differ, and a means for originating domains to authorize the
transmitting entity.

DKIM's signatures can be replayed, as can any cryptographic signature.
DKIM's strategy of obscuring the transmitting entity means a definition
must now be established as to which upper portions of the domain
hierarchy can be considered authoritative for email-addresses.  In
addition, email-address domain owners that outsource email services
will need to exchange private keys on a large scale with these
transmitting entities.  DKIM's efforts to obscure the transmitting
entity mean MTAs will be warehousing thousands of private keys.
Moreover, the diffusion of DKIM signatures at the transmitting entity
makes dealing with replay abuse virtually impossible.

---

The safe way forward would be to demand that security be considered
first and foremost.  In a store and forward scheme, start the chain of
identification from the transmitting entity, where the originating
entity is then able to authorize the transmitting entity when they
differ.

---

For Sender-ID, validate the SMTP client, and then allow the PRA to
authorize that client using a small, single Name-Hash DNS
transaction.

For DKIM, allow the signature to indicate on which header's behalf the
signature was applied.  This could be done by extending the 'i='
semantics, or more simply by naming the header.  A small single
Name-Hash DNS transaction could then safely authorize the transmitting
entity.  This removes the need to establish which elements of the domain
hierarchy are authoritative for email-addresses, and removes the extreme
risk created when exchanging and warehousing private keys at
transmitting MTAs, as the current scheme now demands.
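
For illustration only (the on-behalf-of tag below is hypothetical and
not part of the DKIM specification), a signature applied by the
transmitting MTA on behalf of the Sender header might look like:

    DKIM-Signature: v=1; a=rsa-sha256; d=outbound-provider.example; s=mta3;
        h=from:sender:date:subject; onbehalf=sender; bh=...; b=...

The receiver would then confirm, with the same small Name-Hash
transaction against the Sender domain, that outbound-provider.example
was authorized to sign on its behalf, without that domain ever handing
over a private key or a zone delegation.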

In either approach, having a strong identifier of a transmitting entity
allows a much faster response to abuse.  The current trend of obscuring
the transmitting entity must be stopped, or the IETF will face the
perils these schemes will surely create.


-Doug


 


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-ietf-opes-smtp-security (Integrity, privacy and security in OPES for SMTP) to Informational RFC

2007-01-13 Thread Douglas Otis
On Fri, 2007-01-12 at 00:42 -0500, Barry Leiba wrote:
 Eliot Lear said...
  I'd have to go further than what you wrote.  I believe the document 
  should explicitly discuss interactions with DKIM, as that document is in 
  front of the IESG at this time for approval as a Proposed Standard.  
  Many modifications to a message will invalidate a DKIM signature.  It 
  may be possible for an OPES agent to resign, but there are implications 
  there too that should be discussed.
 
 I'm with Ted here: this is a very high-level document, not one that's 
 actually specifying the OPES SMTP adaptation.  Perhaps (just perhaps; 
 I'm not convinced of that either) the final adaptation specification 
 should talk about DKIM.  But not this one.
 
 In particular, I'll note that there are many places where a mail message 
 can be modified today, in ways that break the DKIM signature -- in an 
 SMTP server, in a Sendmail milter, in a Sieve script, in a mailing-list 
 expander, and so on.  Think of OPES in SMTP as a standardized version of 
 Sendmail milter (which would, I hope, fix some of the unfortunate 
 limitations of the latter).  Sure, there are things it might do that 
 could invalidate DKIM signatures.  And there are lots of things it might 
 do that won't.
 
 Apart from a note that says, Changing the message might invalidate DKIM 
 signatures, so go look at the DKIM spec and make sure you understand 
 what you're doing, I don't see what some future OPES SMTP adaptation 
 document should do about this.  And I certainly don't see what this 
 document should do about it.

DKIM has other problems.  DKIM mandates the From header be signed, but
also severely limits the range of identities that can be associated with
the signature.  As such, DKIM breaks badly when signing headers that
contain [EMAIL PROTECTED], [EMAIL PROTECTED], or [EMAIL PROTECTED].  The
downgrade process will remain with email, which then calls into
question the practicality of mandating the signing of UTF-8 based upon
a premise of visual recognition sans annotation.

Had the DKIM signature permitted capturing the identity on whose behalf
it was being applied, without the restrictions, then not signing
headers would remain far less problematic.  Protection against spoofing
_must_ depend upon some form of annotation anyway, along with an
ability to associate one domain with another.  DKIM, as currently
envisioned, is not robust and mandates an impractical sharing of
private keys or zone delegations as the _only_ means to link headers
with signatures.  Even confirming DKIM identities may soon require
conversion to ACE labels.
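
As a one-line illustration of what conversion to ACE labels entails
(the domain is hypothetical):

    # Illustration only: the ACE (Punycode) form of a hypothetical
    # internationalized signing domain, via the IDNA codec in Python.
    print("bücher.example".encode("idna").decode("ascii"))
    # xn--bcher-kva.example

That encoded form is what a verifier would have to normalize to before
comparing identities.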

-Doug




___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Something better than DNS?

2006-11-29 Thread Douglas Otis


On Nov 29, 2006, at 8:53 AM, Hallam-Baker, Phillip wrote:


I don't think that would be the only patent you would need 


Here is a somewhat more complete list:

http://ops.ietf.org/lists/namedroppers/namedroppers.2006/msg01076.html

-Doug



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Something better than DNS?

2006-11-28 Thread Douglas Otis


On Nov 28, 2006, at 4:31 PM, Emin Gun Sirer wrote:


Stephane  Phillip,

I'm thinking of writing a short report that summarizes the  
invaluable discussion here and beefing up the system sketch. I  
think we now agree that it is possible to have multiple operators  
manage names in a single, shared namespace without recourse to a  
centralized super-registry.


You might want to review patent 7,065,587 as well.  Rather than a  
hierarchical name space, there are GUIDs and friendly names  
socially structured into groups.  In addition to friendly names,  
GUIDs can combine with DNS names as well.  There is no need for a  
super-registry, but rather a way to generate GUIDs.  Perhaps this is  
the structure of things to come, where belonging to a group matters  
more than a centralized authority.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Something better than DNS?

2006-11-27 Thread Douglas Otis


On Nov 27, 2006, at 7:48 AM, John C Klensin wrote:

On the other hand, if one is going to have a network in which all  
resources are publicly available and unambiguous without prior  
negotiations between each client and server and in which one  
doesn't want to allow the time and resources for a post-query  
disambiguation process (which is exactly what we do to identify the  
desired Joe Smith from that pool) then identifiers must be  
unique.   Not overlapping name spaces, or fragments of a name space  
that the client gets to pull together based on its own choice  
algorithms, or a fraction of the aggregate name space chosen on the  
basis of least bad or most complete service by a name-vendor,  
but _unique_ and comprehensive.


PNRP attempts to ensure numeric identifiers are unique, while names  
freely associate without concern for conflict.  The basic goal is to  
overcome limitations of DNS using more elaborate structures.  This  
extra information might provide a sequence of IPv4 and IPv6 addresses  
needed to navigate through various gateways and NATs.  The end-points  
of this navigation are introduced through "groups", where each member  
of the group retains the other members' certificates as a means to  
verify and differentiate possible naming overlaps.  This does not  
require unique names, but rather numbers based upon concepts similar  
to those developed by the T10 group for CAS.


This different (and somewhat scary) strategy allows for the ideal  
end-to-end networking paradigm.  Each and every network node is  
visible to the Internet for true end-to-end communication.  Firewalls  
designed to shore up the flaws often found in the complexity created  
by ever-growing features are bypassed.  The tools meeting the demand  
for (anti-productive) gaming and for effortless collaboration are  
designed to segment networks into groups.  This demand may soon offer  
a (proprietary) alternative to DNS, where navigation still takes place  
at the browser using this different type of namespace.  As noted in  
the promotional literature, "there are no copyrights on numbers."


-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: SRV records considered dubious

2006-11-22 Thread Douglas Otis
On Tue, 2006-11-21 at 21:28 -0800, Dave Crocker wrote:

 The MX record was, in fact, a great leap forward (after a number of
 false starts.)  I can tout its success vigorously because I had
 nothing to do with it but have always marveled at how profound its
 benefit has been.  Indeed I'd be happy to wax extensively on the basis
 of my views, but I suspect that a scholarly consideration of the MX
 contribution is not quite the focus for this thread.
 
 SRV instantiates the MX service model, except for other protocols.
 
 As long as we ignore the underscore names, etc. encoding methods
 that were chosen for defining a particular SRV -- and by ignore, I
 mean ignore, rather than imply anything positive or negative -- then I
 do not see why SRV is more (or less) dangerous than MX.

SRV records facilitated a transition from WINS.  The problem with SRV
is the long timeouts induced when discovering services that are often
placed behind corporate firewalls.  Most of those services are not
safely exposed to the Internet.  Until this discovery process
completes, a laptop can be unusable, and that can be a long period.
While there may be valid reasons for SRV records, what is seen by the
Internet must be organized in separate zones split at the _tcp label.
It would appear this kludged use of DNS is also why new RRs are
dysfunctional and CNAMEs are fragile.
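
As a sketch of such a split (all names are placeholders), the _tcp
subtree can be a separate zone served only to the internal network, so
Internet resolvers get an immediate negative answer instead of timing
out against services they can never reach:

    ; internal zone only -- never published to the Internet
    _ldap._tcp.example.com.      IN SRV  0 0 389  dc1.corp.example.com.
    _kerberos._tcp.example.com.  IN SRV  0 0 88   dc1.corp.example.com.

    ; external view: nothing below the _tcp label, so outside queries
    ; fail fast rather than probing unreachable services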

 Or perhaps I should be asking:  MX was excellent for a particular
 service model.  And rendezvous requirements do suggest that there is
 benefit in being able to have different services, under a generic
 domain, vectored to different actual hosts.
 
 So:
 1) Aren't there other instances of that model -- I'll call it a
proxy or store-and-forward model;

 2) Aren't there other models that it could be useful for?

Yes, except for problems caused by a particular vendor's use.

 If there are cases for which SRV is dangerous for, what are they?

The services typically found using SRV records place the record into a
bad neighborhood.  In this neighborhood, security is so poor that a
wall must surround the entire area.

  What makes them more dangerous than, say, using MX records?

The neighborhood of services. 

-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: The 'failure' of SMTP RE: DNS Choices: Was: [ietf-dkim] Re: Last Call: 'DomainKeys

2006-11-22 Thread Douglas Otis


On Nov 22, 2006, at 9:22 AM, Paul Robinson wrote:

All DKIM gets you fundamentally is SPF with the ability for an MTA  
to determine you are who you say you are, but some people think  
you're a prick. That doesn't help as much as you think it will.


While it greatly reduces false-positive filtering of phishing attempts,  
DKIM does _not_ identify the MTA (SMTP client).  While there is often  
a desire to associate various email-related domains with SMTP clients  
when gauging acceptance, SPF does not offer a safe method for this.   
Associations using name comparisons rather than address lists can be  
made much safer using small and simple answers.


The answers needed to satisfy SPF make address-path authorization both  
impractical and highly dangerous.  Currently, SPF scripts may invoke  
100 targeted DNS transactions per email-address resolution; such  
resolutions can happen more than once per message, and more than once  
along the delivery path.  While most will disable scripts found within  
an anonymous email, how is executing SPF scripts stored in DNS any  
different?  Storing a script in DNS surely does not make it safe.


-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Something better than DNS?

2006-11-22 Thread Douglas Otis


On Nov 22, 2006, at 7:42 AM, Pekka Savola wrote:


On Tue, 21 Nov 2006, Keith Moore wrote:


DNS is getting very long in the tooth, and is entirely too  
inflexible and too fragile.  The very fact that we're having a  
discussion about whether it makes more sense to add a new RR type  
or use TXT records with DKIM is a clear indicator that something  
seriously is wrong with DNS. Adding a new RR type should not  
require a single line of DNS server or client library code to be  
recompiled, nor any changes to the configuration of any server not  
advertising such records.


Keith,

I've seen you say this for many years now, but I'll bite now. Do  
you have ideas what a more flexible, less fragile, and in general a  
better mechanism would:


 1) be or look like, or

 2) what requirements we should have for building and deploying it?
(if such a thing or a close likeness doesn't exist)

I wonder if there are practical alternatives.  A bit more dialogue  
on what else instead of DNS is a bad idea might help in  
figuring out whether there is anything the IETF could do about it.


There is a new method for using names to navigate the Internet.  It  
is based upon a proprietary application added to network stacks  
distributed on millions of systems forming a loosely coupled matrix.   
This application is now available for XP from Microsoft and will be  
included within Vista.  This name-resolution application does not rely  
upon DNS, and assigns unique public keys based upon, for example, your  
drive's serial number, along with an IPv6 address used in conjunction  
with RFC 4380 across IPv4.  This application is able to navigate past  
corporate firewalls and directly connect computers together within  
LANs or over ad hoc wireless, all without user intervention.  It does  
not require a centralized registry, and permits the use of DNSified  
names freely assigned to various systems.


This proprietary application is targeting a 10-billion-dollar gaming  
market, but will also support file sharing and interactive  
collaboration applications.  It looks to be the perfect tool for  
running botnets as well.


A fun site that reviews this topic:
http://www.ppcn.net/n1358c38.aspx

-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: [Nea] WG Review: Network Endpoint Assessment (nea)

2006-10-17 Thread Douglas Otis


On Oct 17, 2006, at 11:22 AM, Eliot Lear wrote:


I would think that five or six values are appropriate:

  1. Vendor name (string)
  2. Vendor engine version (integer)
  3. Vendor virus definitions version (integer)
  4. Enabled? (binary)
  5. Buggered? (binary)
  6. Other gobbledigook the vendor wants to include that might get
 standardized later. (blob)


This still seems like too much.  Information offered for access can  
be contained within one or more certificates.   The information  
within these certificates should be limited to a minimal set of values:


1) creator
2) class
3) user-host
4) time-stamp
5) update resources

The essential information would be the creator/class/user-host/time-  
stamp fields.  When protection is not enabled or is buggered, then a  
newer certificate should not be offered.  The virus definitions or  
patch updates can be deduced from the time-stamp or from extensions  
added to the class, e.g. AVX-VISTA-37.  If a vulnerability is reported  
subsequent to the time-stamp for that creator/class of service, then a  
new certificate could be required.  This would simplify tracking at  
the access point.  By keeping the information exchanged and the  
decisions limited to this minimal set, NEA should provide a valuable  
service in many environments.
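
A minimal sketch of such a certificate and the only decision the
access point needs to make might look as follows (the field names, the
class string, and the age window are placeholders):

    # Minimal sketch of the five-field compliance certificate and the
    # freshness check an access point would apply.  Field names, the
    # class string, and the age window are placeholders.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class ComplianceCert:
        creator: str          # who issued the certificate (for validation)
        cert_class: str       # e.g. "AVX-VISTA-37"
        user_host: str        # the endpoint this certificate covers
        time_stamp: datetime  # when compliance was last established
        update_resource: str  # where the endpoint can renew the certificate

    def acceptable(cert: ComplianceCert,
                   last_reported_vulnerability: datetime,
                   max_age: timedelta = timedelta(days=7)) -> bool:
        """Accept only certificates newer than the latest reported
        vulnerability for this creator/class, and within the age window."""
        return (cert.time_stamp > last_reported_vulnerability
                and datetime.now() - cert.time_stamp < max_age)

    # An old time-stamp such as this one fails the age check, so the
    # endpoint is pointed at its update_resource rather than admitted.
    cert = ComplianceCert("av-vendor.example", "AVX-VISTA-37",
                          "laptop-42.campus.example",
                          datetime(2006, 10, 15, 12, 0),
                          "https://av-vendor.example/renew")
    print(acceptable(cert, last_reported_vulnerability=datetime(2006, 10, 1)))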


Perhaps some consideration should be given to which sets of  
certificates are offered in various environments.  Allowing the  
certificates to be accessed outside an authentication process seems to  
increase security exposures.  As this information is not trustworthy,  
there would also be little gained by sharing it elsewhere.  In fact,  
sharing this information may increase infection rates when it aids  
malware.


-Doug





___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: [Nea] Re: WG Review: Network Endpoint Assessment (nea)

2006-10-16 Thread Douglas Otis


On Oct 12, 2006, at 2:27 PM, Darryl ((Dassa)) Lynch wrote:

Am I mistaken or is NEA intended to be a compliance check before a  
node is allowed onto the network?


It seems impractical to specify system requirements or to expect a  
suitable examination to be done in real time prior to obtaining  
access.  NEA should be seen more as a notification process with the  
goal of avoiding self-inflicted trouble tickets.  Bad actors will  
always be able to falsify this information, so NEA does not offer  
protection.


As such, observed behaviour and application abuse would seem to be  
issues that would be dealt with by other tools.


Agreed.  When these other tools withdraw services after bad behavior  
is detected, NEA can notify the endpoint that nothing is  
malfunctioning, but rather that these services have been withheld.  A  
selection of certificates may then be required before additional (or  
any) services are subsequently granted.  NEA should be viewed as a  
process that eliminates many security-related support calls.


NEA may be used to ensure certain applications are installed and  
some other characteristics of the node but actual behaviour may not  
be evident until such time as the node has joined the network and  
would be beyond the scope of detection by NEA IMHO.  NEA may be  
used to assist in limiting the risk of such behaviour but that is  
about the extent of it that I see.


It seems impractical to expect NEA will prevent bad actors from  
producing the expected results.  There is little that prevents the  
NEA from providing falsified information.  There are anti-virus and  
OS updating services that could produce a certificate that includes:


1) certificate creator for validation
2) a time-stamp
3) class
4) the user/host identifier
5) resources required for updating the certificate

It seems unwise to expect an endpoint to open its robes to the  
access point.  However, the access point could offer the certification  
services it requires prior to granting access.  Such a service may be  
something as simple as agreeing to the AUP presented on a web form, or  
agreeing to remedy the cause of abusive behavior.


The NEA should also be helpful in deciding whether a range of  
services is acceptable, and how this can be changed.  Perhaps  
different certificates are required before specific services are  
granted.  Rather than talking about the posture of the endpoint,  
consider the NEA to be little more than a repository for time- 
sensitive compliance certificates offering just the five points listed.


My reading of the charter gives me the impression NEA is only  
intended for a specific task and some of what we have been  
discussing seems to extend well beyond the limited scope proposed.


It seems that the NEA charter delves into too many details.  The NEA  
can act as a bidirectional notification of services.  From the access  
standpoint, these are services granted and compliance services  
required to upgrade what is being granted.  From the endpoint  
standpoint, their certificates indicate which compliance services  
have been previously obtained, and the resources needed to renew  
these certificates when they are considered out-of-date by the access  
point.


-Doug





___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


RE: [Nea] Re: WG Review: Network Endpoint Assessment (nea)

2006-10-12 Thread Douglas Otis
On Tue, 2006-10-10 at 20:01 -0700, Narayanan, Vidya wrote:
 I am rather confused by this attempt to make NEA fit into some kind of
 a network protection mechanism. I keep hearing that NEA is *one* of a
 suite of protocols that may be used for protecting networks. Let's dig
 a bit deeper into what a network may employ as protection mechanisms
 in order to protect against all kinds of general threats. 
 
 i)   Access control mechanisms such as authentication and
  authorization (to ensure only valid endpoints are allowed on the
  network)

 ii)  Ingress address filtering to prevent packets with topologically
  incorrect IP addresses from being injected into the network

 iii) VPNs to provide remote access to clients

 iv)  Firewalls to provide advanced filtering mechanisms

 v)   IDS/IPS to detect and prevent intrusions

 vi)  Application level filtering where applicable (e.g., detecting and
  discarding email spam)

If an application happens to be malware, it seems unlikely these
measures would stop it.  How about:

vi)   Provide application-level advisory information pertaining to
      available services.

Points that seem to be missing are:

vii)  Notification of non-compliance. (Perhaps this could become a
  restatement of i.)

viii) Time or sequence sensitive compliance certificates provided
  following a remediation process or service.


Bad behavior, such as scanning or sending spam, is often detected and
may violate AUPs.  These violations may trigger a requirement for the
endpoint to use a service that offers remedies.  A time-sensitive
certificate of compliance could then be offered following completion
of a check-list and an agreement to comply with the recommendations.

Those that remain infected after remediation, or that ignore the AUPs
and are again detected, may find this process a reason to correct the
situation or their behavior, or the provider may wish to permanently
disable the account. 

-Doug


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: [Nea] WG Review: Network Endpoint Assessment (nea)

2006-10-07 Thread Douglas Otis


On Oct 7, 2006, at 10:42 AM, Lakshminath Dondeti wrote:


At 01:42 AM 10/7/2006, Harald Alvestrand wrote:

snip
Many universities require their students to buy their own laptops,  
but prohibit certain types of activity from those laptops (like  
spamming, DDOS-attacks and the like). They would love to have the  
ability to run some kind of NEA procedure to ensure that laptops  
are reasonably virus-free and free from known vulnerabilities, and  
are important enough in their students' lives that they can  
probably enforce it without a complaint about violation of privacy.


Just pointing out that there's one use case with user-managed  
endpoints where NEA is not obviously a bad idea.


My email ventures into a bit of non-IETF territory, but we are  
discussing use cases, and so I guess it's on topic.  Universities  
should be the last places to try antics like NEA.  Whereas an  
operational network would be a priority to them, it is also  
important that they allow students to experiment with new  
applications.  If we are believing that general purpose computing  
will be taken away from college students, we are indeed talking  
about a different world.


In any event, the bottomline is NEA as a solution to network  
protection is a leaky bucket at best.


NEA at best *may* raise the bar in attacking a closed network  
where endpoints are owned and tightly controlled by the  
organization that owns the network.


Services are currently offered that detect abnormal traffic and direct  
users to scrubbing services suitable for ISPs or universities.  This  
is done through walled-garden techniques.  Once remediation is  
completed, the restrictions are removed.  This does not depend upon  
specific conformance standardization, but rather upon specialized  
utilities loaded through a browser while the restrictions are applied.  
When the system in question is not using a browser, other methods of  
notifying it of the need for remediation are required.


Standardized signaling of asserted conformance and of a need for  
remediation might be where this effort is best focused.


-Doug 


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: As Promised, an attempt at 2026bis

2006-10-06 Thread Douglas Otis


On Oct 3, 2006, at 4:00 AM, Brian E Carpenter wrote:

Brian Carpenter has written draft-carpenter-rfc2026- 
critique-02.txt which does exactly that, and he has repeatedly  
solicited comments on it.  If you think that it would be helpful  
to have it published as an informational RFC before undertaking to  
make normative changes to our standards procedures, please say so.


Thanks for the plug, Mike :-)

Quite seriously - am I to conclude from the absence of comments on  
that draft that everyone agrees that it correctly describes current  
practice? If so, I'll look for an AD to sponsor it.


Please do.

-Doug

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


<    1   2   3   >