Re: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread Keith Moore
Chris Lewis wrote:

 So, where's this accountability gap you keep talking about?

The gap is where ISPs can drop my mail without a good reason, and
without my having any recourse against them.  The gap increases when
they delegate those decisions to a third party.   It increases further
when the mail is dropped without any notice to sender or recipient.

These incidents happen one at a time.  It's rarely worth suing over a
single dropped message, and yet the aggregate amount of harm done by
IP-based reputation services is tremendous.

 I found out what our users thought of DNSBLs when I accidentally turned
 off DNSBL queries.  We were flooded with hundreds of complaints  about
 the spam.  We get _far_ fewer complaints about the false positives we have.

You're comparing apples to oranges here.  It's not surprising that
recipients complain about an increase in the amount of spam they
receive.  But they're not as likely to know about messages that they
never receive because of false positives, so of course they're less
likely to complain about them.  And the cost (to sender or recipient) of
a message blocked for bogus reasons can be far higher than the cost to
the recipient of a spam.   And the relative number of complaints is not
a reliable indicator of those costs.

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


IP-based reputation services vs. DNSBL (long)

2008-11-09 Thread Keith Moore
Trying to sum up the situation here:

1. Several people have argued (somewhat convincingly) that:

- Source IP address reputation services can be valuable tools for
blocking spam,

- Such services can be better at identifying spam than per-message
content analysis,

- Such services can (and sometimes do) provide better feedback to the
sender, and thus better accountability, than per-message content
analysis that occurs later in the signal path, and

- At least some such services have very low false positive rates.

It's important to keep these in mind, as they appear to make a
compelling case for some kind of standardized reputation service.


2. Several people have also related experiences of valid messages being
blocked by such reputation services, and of the difficulty of routing
around them and getting their reputations corrected.

This strongly argues for such reputation services to have strict
accountability, and for there to be a clear, well-defined means by which
senders can correct erroneous reputations and get vital messages
delivered in spite of them.


3. An informal protocol for reporting reputations using DNS has been in
use for several years, and such use has become widespread.  An IRTF
group (ASRG) began a useful effort to document this protocol.


4. At some point ASRG decided that the protocol should be on the IETF
standards track, and has requested standards-track publication.

--

The process that produced this proposal reminds me of several patterns
I've seen come up often in the IETF.

1. The first pattern is when an author or group gets confused between
the goal of writing an informational document to describe existing
practice, and the goal of writing a standards-track document that
describes desirable practice.

Offhand, I cannot recall a single instance in which this has produced
satisfactory results.  The two goals are very much in conflict.  When
describing existing practice, it is very tempting to add extensions that
seem to be desirable, but which in fact are not part of existing
practice.   This is misleading at best, and at worst can cause
interoperability problems (my favorite example of this is RFC 1179).
OTOH, when writing standards that are based on existing practice, it is
very tempting to include existing practice (for the sake of
compatibility) within the scope of permitted behavior, even when that
practice is dubious for one reason or another - poor security, say, or
poor scalability.

Over the years I've formed the opinion that the only reasonable way to
build a standard from existing practice is first to commit to
accurately documenting existing practice and publishing that document as
Informational.   Part of that process should be to document known
limitations of that existing practice.  Only after that document is
completed should there be an attempt to design a standard that is
compatible with existing practice.  That effort needs to be treated as a
design effort and subject to all of the usual vetting.   Furthermore it
needs to be understood that serious flaws from the original protocol
must not be included in the standard for the sake of backward compatibility.


2. The second pattern is when people insist that a widely deployed
protocol is inherently deserving of standardization, without further
vetting or changes, merely because it is widely deployed.

The simplest response is that this is irrelevant under
our process.  The RFC 2026 criteria for proposed standard status require
both rough consensus of the IETF community and no known technical
omissions.

The fact that something is widely deployed may be an indication of its
utility, but is not an indication of technical soundness.  Nor is it an
indication of consensus of the IETF technical community.


3. The third pattern is when a closed industry group, or an open group
that is not chartered to develop a standard protocol, insists that its
product merits standardization by IETF because it has gained consensus
of that group.

The problem with this is that the group did not attract the wide range
of interests that would normally attend an IETF standardization effort,
nor did the group operate under processes designed to ensure fairness
and openness.  Such efforts can be considered in the IETF process as
individual submissions, but they need a great deal of scrutiny, as this
tactic is often used as an end run around IETF.  My experience with
individual submissions is that simple, noncontroversial proposals
affecting a small number of parties rarely present serious obstacles to
standardization, but proposals that affect a large number of parties are
much more difficult, and with good reason.

The main point to be made here is that the consensus of an external
group means nothing in terms of either IETF consensus or judgment of
technical soundness.  In particular, external groups often have a much
narrower view of protocol requirements than IETF does.


All of these patterns are associated with 

Punycode at ASCII for IDN & MDN via Y2K Project Management

2008-11-09 Thread linuxa linux
Please read response below.  

I am not a technical expert, so my understanding
could be flawed; I would like to apologise for any
errors throughout this. 


Regards


Meeku



--- On Tue, 4/11/08, Ruszlan Gaszanov [EMAIL PROTECTED] wrote:

 From: Ruszlan Gaszanov [EMAIL PROTECTED]
 Subject: RE: Solutions to the IDN Problems Discussed
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Date: Tuesday, 4 November, 2008, 11:34 AM
  There is a problem with solution #1 - it already
  exists :) though for semi-ASCII language and
  Internationalised domain names it does not work :(
  because I visited this website
  http://www.nameisp.com/puny.asp and did  an
 experiment
  with total-ASCII language domain names and the
  punycode representation was the same as the domain
  name.  Thus there is a problem with punycode machine
  code presently, as it cheats and does not
 code the
  total-ASCII language domain names like it does with
  semi-ASCII language and Internationalised domain
  names.

..

 Any other implementation would be incompatible with
 existing DNS
 infrastructure and requires developing a completely new
 name-lookup protocol
 as well as setting up a parallel server infrastructure.

Punycode already exists in the Whois for Internationalised Domain Names, and 
you could easily set up Y2K-style project management with a timeline for 
ending old-style ASCII registrations, after which new Punycode-style 
registrations would take over for ASCII names.  ASCII names would be 
processed via the Punycode field, as with IDN, and the previous ASCII field 
would close; that's all.  Both systems could exist simultaneously, the new 
process open and the previous one closed but continuing; later, once there 
has been learning-curve development, you could consider correcting the old 
closed ASCII system.  This system would allow registrations that are (1) IDN 
(Internationalised Domain Names) and also (2) MDN (Multilingual Domain 
Names).

 Since such protocol
 would be fundamentally incompatible with any existing
 internet applications,
 new software will have to be developed.  This would
 practically mean
 creating a second Internet. 

Software applications would catch up with this via the Y2K-style project 
management and also via the market.  Unicode gained acceptance, so 
Punycode-at-ASCII registration compatibility should also be accepted.

 Considering the cost and time requirements of such a
 project, as well as all
 the confusion and inconveniences the transition would
 cause, I do not really
 believe the user community would benefit from it. In any
 case this kind of
 project is quite beyond the mandates of either ICANN or
 Unicode Consortium.

Punycode already exists for IDN, so you can also use the same for ASCII.  
Only the registration window gets changed, and you have an extended Punycode 
that also turns ASCII registrations into Punycode.  The user community would 
really be pleased, because you would get quicker IDN implementation and a 
new type of registration that is multilingual.  Software application 
integration would happen early.  This would help people who speak or write 
more than one language.  People could then interact better, and in a more 
congenial atmosphere, than they previously could with only their 
single-language domains and software applications.  You find that there are 
countries where the railway stations use their traditional languages as well 
as the English language.  The world has become more cosmopolitan, and thus 
you need this to happen very urgently.







  


RE: Punycode at ASCII for IDN & MDN via Y2K Project Management

2008-11-09 Thread Ruszlan Gaszanov
Meeku,

Ok, I could go into more detail and explain why your proposal is extremely
problematic from a technical point of view... but I don't believe the
Unicode mailing list is the proper place to discuss this topic, since
Internet protocols are not directly related to the Unicode Standard.
Standards for DNS, IDNA and punycode were developed by IETF/ISOC, so that
would be a much more appropriate place to take this. 

Ruszlán



Re: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread Tony Hansen
I'm personally very interested in getting the format for querying DNS
*white* lists standardized. I want to be able to use DNSWLs as part of
*positive reputation* checking: given an *authenticated* domain name
(say, with DKIM), can we say something positive about them beyond "they
send email"?

The protocol described in this draft covers both cases, positive and
negative checking.

While the majority of the examples in the document concentrate on
negative examples, the protocol *is* useful for the positive case.
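
For context, the lookup mechanics are the same for both polarities: the
client builds a query name under the list's zone and does an ordinary DNS
A-record lookup. A minimal sketch (the zone name and helper are
illustrative, not from the draft):

```python
import ipaddress

def dnsxl_name(entry: str, zone: str) -> str:
    """Build the query name for a DNS black/white list lookup.

    IPv4 addresses are queried with their octets reversed under the
    list zone; domain names (e.g. an authenticated DKIM domain) are
    queried as-is under the zone.
    """
    try:
        octets = ipaddress.IPv4Address(entry).exploded.split(".")
        label = ".".join(reversed(octets))
    except ValueError:
        label = entry.rstrip(".")  # treat as a domain name
    return f"{label}.{zone}"

# A real client would now do an A-record lookup on the returned name;
# an answer in 127.0.0.0/8 conventionally means "listed", while
# NXDOMAIN means "not listed".
```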

Does anyone have issues with the use of this protocol for WHITE lists?

Tony Hansen
[EMAIL PROTECTED]

John C Klensin wrote:
 Sadly, I have to agree with Keith.   While these lists are a
 fact of life today, and I would favor an informational document
 or document that simply describes how they work and the issues
 they raise, standardizing them and formally recommending their
 use is not desirable at least without some major changes in our
 email model and standards for what gets addresses onto --and,
 more important, off of-- those lists.
 
 john
 
 
 --On Friday, 07 November, 2008 18:38 -0500 Keith Moore
 [EMAIL PROTECTED] wrote:
 
 DNSBLs work to degrade the interoperability of email, to make
 its delivery less reliable and its systems less accountable for
 failures.  They do NOT meet the no known technical omissions
 criterion required of standards-track documents.

 The fact that they are widely used is sad, not a justification
 for standardization.



RE: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread michael.dillon
 And what does this have to do with the technical details of 
 running and using one?  We all know that spam stinks and 
 DNSBLs stink, too.
 Unfortunately, the alternatives to DNSBLs are worse.

That's a rather narrow view. Very large numbers of people
think that Instant Messaging is a far superior alternative
to DNSBLs, not to mention VoIP, web forums and other variations
on the theme. Fortunately, the IETF has done some good work
in the area of SIP, and XMPP has steadily been gaining traction.

I think it is a positive thing to document the technology
of DNSBLs but I have no idea why this has come to the IETF.
Maybe it is a veiled test of the IETF's relevance to the 
21st century Internet.

--Michael Dillon



RE: Punycode at ASCII for IDN & MDN via Y2K Project Management

2008-11-09 Thread Ruszlan Gaszanov
Meeku,

 Punycode exists at the Whois for Internationalised Domain Names and you 
 could easily put a Y2K project management with a timeline for ending old 
 type ASCII registrations and where new type Punycode registrations would 
 commence at ASCII names.  ASCII names would get processed via the Punycode
 field like you have with IDN and your previous ASCII field shuts, that's 
 all.  

The main point of punycode is that pre-IDN ASCII domains are encoded as
themselves in ASCII, while other Unicode characters can be encoded as
sequences of ASCII characters, so that IDN could be implemented on the
existing DNS infrastructure.  ASCII domain names *are* in fact punycode
domain names by definition.

As soon as you want to encode ASCII characters as something else, you need
a new parallel DNS infrastructure, and at that point a new version of the
DNS protocol that works natively with UTF-16, or b64-encoded UTF-16, or
UTF-8 (oh, wait, UTF-8 also encodes ASCII characters as themselves) could
just as well be developed.  But the bottom line is that there is no way to
devise a new punycode that encodes ASCII domain names as something other
than ASCII and implement it on the existing DNS infrastructure without
completely breaking the Internet, period.
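
The identity property described above is easy to see with Python's built-in
IDNA codec (a quick illustration; the domain names are arbitrary examples):

```python
# Python's stdlib "idna" codec (RFC 3490 ToASCII) illustrates the point:
# a pure-ASCII label encodes as itself, while a non-ASCII label gets the
# ACE ("xn--") form.
ascii_name = "example.com".encode("idna")
idn_name = "bücher.example".encode("idna")

print(ascii_name)  # b'example.com'  (ASCII encodes as itself)
print(idn_name)    # b'xn--bcher-kva.example'
```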

 Software applications would catch-up to this via the Y2K sort project 
 management and also via market.  Unicode got accepted then Punycode at
 ASCII registrations compatibility should also get accepted.  

You were complaining yourself, in your correspondence with an ICANN
representative, about how slowly IDNs are being implemented by software
developers (even though this implementation is a simple matter of adding a
Unicode-to-punycode translation routine, provided the application is
actually using Unicode functions to process text in the first place). Do you
seriously think they would be faster to implement a new name-lookup protocol
that might require them to completely rewrite large portions of their code? 

 You have Punycode existing at IDN then you can also use same at ASCII.
 Only the registration window gets changed and you have an extended
 Punycode that also changes ASCII registration into Punycode.  The user
 community would really become happy because you would get quicker IDN 
 implementation and a new type registration that are Multilingual.
 Software application integration would happen early.  This would help
 those people that speak / write more than one language.  People could then
 interact better and with virtue atmosphere than before they were able to 
 with only their single language domains and software applications.  You
 find that there are countries where the railway stations are using their 
 traditional languages as well as the english language.  The world has
 become more cosmopolitan and thus you need this to happen very urgently.

Frankly, I can't really see the point of all this. The only reason why
users sometimes need to use raw punycode strings is that IDNA is not yet
properly implemented in all software. Once the software catches up, users
won't really need to care how exactly punycode works. 

Yes, it takes time to implement IDNA to the point where it can be widely
used, just as it took quite some time to implement Unicode in text
processing (even now it is still not implemented in all applications, and
some features of Unicode are not implemented at all). Consider, by the way,
that the reason the UTF-8 encoding scheme gained so much popularity is its
ASCII transparency and backward compatibility with octet-oriented
applications. 

In any case IDNA, as defined by the current standard, can realistically be
implemented much faster and more easily than any alternative solution that
is not backward-compatible with current applications, and I can't see why
you are so unhappy with it.

Ruszlán



Re: Review of draft-ietf-behave-dccp-04

2008-11-09 Thread Christian Vogt

Rémi,

endpoint-independent mapping and filtering enables address referrals between
application instances (which use the same port number).  This advantage is
independent of the transport protocol and the connection model.  The
exceptions you are listing are special cases for NAT'ing in general, not
only with regard to the usefulness of endpoint-independent mapping and
filtering.

Anyway, I am fine with requiring endpoint independence in
transport-specific documents.

- Christian


Rémi Denis-Courmont wrote:

 I don't agree.  A reason for recommending endpoint-independent mapping
 and filtering is to enable applications to refer each other to a host
 behind a NAT. This is desirable independent of the transport protocol.

But whether this is useful depends on the transport protocol and/or
connection model. It does help for unicast UDP (and UDP-Lite). It does help
for TCP, but only if simultaneous open is supported by the application, the
operating system and the middleboxes. It does help for DCCP _only_ if the
DCCP stack implements the simultaneous-open option, which is _not_ in the
baseline DCCP document.

It does not help with, e.g., multicast UDP. It does not mean anything for
port-less protocols, including ICMP, proto-41, etc. It is insufficient for
SCTP. Who knows how it applies to would-be future transports?

Besides, I think it's too late for a factorized BEHAVE specification. Good
news: we have much of the baseline in the BEHAVE-UDP RFC. The other
documents already borrow quite a lot from it, especially the general
concepts and terminology.






Re: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread Chris Lewis
Steven M. Bellovin wrote:
 On Sun, 09 Nov 2008 23:40:43 -0500
 Tony Hansen [EMAIL PROTECTED] wrote:

 In some sense, I have more trouble with white lists than black lists.  
 
 My concern is centralization of power.  If used properly, white lists
 are fine.  If used improperly, they're a way to form an email cartel,
 forcing organizations to buy email transit from a member of the inner
 circle.

Hi Steven, long time...

Sort of a protection racket.

This only works insofar as the mail receivers (the ones who choose to
deploy a whitelist) are willing to let them.  Receivers are driven, first
and foremost, by their users' complaint rates.

Receivers will notice increased complaint rates from a whitelist like
this, and begin to discount its input, as they also do with FP rates
on the blacklists they use.  We see that now in take-up rates of various
DNSBLs/DNSWLs.

Much as, say, people realized that TrustE logos didn't mean very much.

There's a much larger potential for this with proprietary reputation
systems: the buy-in costs are high, so it eventually becomes impossible for
new reputation vendors to get into the act, and receivers are reluctant to
switch vendors because they have to put yet another proprietary thingie
in their MTAs.

[A few years ago, at a MAAWG session, I caused a bit of slack-jawed
consternation when I strongly put forth the idea that reputation vendors
had to move to open protocols if they wanted acceptance at more than a
few of the very largest ISPs that could afford it.  I'm glad to report
that at least one or two have since seen the light.]

In a standard-protocol environment, startup costs are minimal, and
receivers find it very easy to switch or mix and match.

The same goes for negative reputation.

A standardized open protocol greatly reduces the likelihood of cartelish
behaviour.



Re: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread Steven M. Bellovin
On Sun, 09 Nov 2008 23:40:43 -0500
Tony Hansen [EMAIL PROTECTED] wrote:

 I'm personally very interested in getting the format for querying DNS
 *white* lists standardized. I want to be able to use DNSWLs as part of
 *positive reputation* checking: given an *authenticated* domain name
 (say, with DKIM), can we say something positive about them beyond
 they send email?
 
 The protocol described in this draft covers both cases, both positive
 and negative checking.
 
 While the majority of the examples in the document concentrates on
 negative examples, the protocol *is* useful for the positive case.
 
 Does anyone have issues with the use of this protocol for WHITE lists?
 
In some sense, I have more trouble with white lists than black lists.  

My concern is centralization of power.  If used properly, white lists
are fine.  If used improperly, they're a way to form an email cartel,
forcing organizations to buy email transit from a member of the inner
circle.


--Steve Bellovin, http://www.cs.columbia.edu/~smb


Re: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread Matthias Leisi

[Disclosure: I am the leader of the dnswl.org project; I provided some
input into the DNSxL draft as far as it concerns whitelists.]

Keith Moore schrieb:

 These incidents happen one at a time.  It's rarely worth suing over a
 single dropped message, and yet the aggregate amount of harm done by IP
 based reputation services is tremendous.

I would not want to reduce the situation to blacklists only. You use the
correct term, IP-based reputation services, but fail to mention that
this includes whitelists, and that decisions other than drop can be
made based upon data returned by such services.

Regarding the dropped message: while outside the scope of the DNSxL
draft, it is pretty much consensus that messages should not be dropped,
in the sense of deleted or stored in a seldom-reviewed quarantine
folder, but that a clear SMTP 5xx error code should be returned.

DNSBLs in conjunction with SMTP 5xx error codes actually increase the
value of the overall email system by enhancing its reliability.
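
As an illustration of that practice (a hedged sketch: this assumes Postfix,
and the zone name is a placeholder rather than a real list), an MTA can
consult a DNSBL during the SMTP transaction so that a listed client is
refused with a 5xx reply instead of having its mail silently discarded:

```
# Postfix main.cf fragment (sketch only; dnsbl.example is a placeholder).
# reject_rbl_client answers the RCPT command with a 5xx reply when the
# connecting client's IP is listed, so the sender learns of the rejection.
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client dnsbl.example
```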

 receive.  But they're not as likely to know about messages that they
 never receive because of false positives, so of course they're less
 likely to complain about them.  And the cost (to sender or recipient) of
 a message blocked for bogus reasons can be far higher than the cost to
 the recipient of a spam.   

I believe it is generally agreed that false positives are the main risk
with spam filter solutions. This applies both to individual tools like
DNSxLs and to the filtering machine as a whole as perceived by the
recipient (and the sender). No automated solution can guarantee the
absence of false positives.

On the other hand, the manual solution is far worse in terms of false
positives, in my experience - the human error rate is pretty high when
eg a spammy inbox must be reviewed manually.

It is true that many spam filter solutions are short on ham rules
which would offset erroneous (or bogus, as you chose to call them) spam
rules. The reason is obvious: most ham rules would be trivial for a
spammer to forge, something which is not practical with IP
addresses. That's why IP addresses are so important for spam filter
decisions, both for black- and for whitelisting.

 And the relative number of complaints is not
 a reliable indicator of those costs.

It's probably the best indicator available?

-- Matthias, for dnswl.org


Re: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread Matthias Leisi

Steven M. Bellovin schrieb:

 In some sense, I have more trouble with white lists than black lists.  
 
 My concern is centralization of power.  If used properly, white lists
 are fine.  If used improperly, they're a way to form an email cartel,
 forcing organizations to buy email transit from a member of the inner
 circle.

That is fundamentally true, and is the very reason dnswl.org is _not_
built around a business model, but as an all-volunteer organisation.

The value of such an organisation is the trust given to it by the users of
the whitelist data. Abuse of this trust would very quickly be sanctioned by
users no longer using the data.

However, this is independent of the technical specification of the
protocol, which I think is valuable. While this draft does not specify
the only protocol available to query our data, it is by far (in the high
90%-region) the most important one (either through queries to our public
servers or through local/private mirrors).

-- Matthias



Re: Last Call: draft-housley-iesg-rfc3932bis (IESG Procedures for Handling of Independent and IRTF Stream Submissions) to BCP

2008-11-09 Thread Russ Housley

Thanks for your review.  My responses below.


The IESG has received a request from an individual submitter to consider
the following document:

- 'IESG Procedures for Handling of Independent and IRTF Stream
   Submissions '
   draft-housley-iesg-rfc3932bis-04.txt as a BCP

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action.  Please send substantive comments to the


I'm commenting on draft-housley-iesg-rfc3932bis-05 as that's the 
latest version at the moment.


In Section 1:

  These RFCs, and any other Informational or Experimental standards-related
   documents, are reviewed by appropriate IETF bodies and published as part
   of the IETF Stream.

I read that as saying Informational and Experimental documents are also
standards-related.  This is at odds with statements such as "This memo
does not specify an Internet standard of any kind.", which are usually
seen in Informational and Experimental documents.


Not all Informational and Experimental documents are 
standards-related.  Some are.  Not all Informational and Experimental 
documents are published as part of the IETF stream.  Some are.  I'm 
not sure what text change would help add clarity.


Although most people know what WG is, it doesn't hurt to have the 
following for clarity:


  This review was often a full-scale review of technical content, with the
  Area Directors (ADs) attempting to clear points with the authors, stimulate
  revisions of the documents, encourage the authors to contact appropriate
  working groups (WGs), and so on.


Sure.


In Section 3:

  3. The IESG finds that publication is harmful to the IETF work done
  in WG X and recommends not publishing the document at this time.

I don't think that harmful is appropriate here.  I gather that the 
aim is to prevent circumvention of the IETF process and conflicts 
with work being carried out by the Working Group.


It could be phrased as:

The IESG finds that this work is related to IETF work done in WG X
and recommends not publishing the document at this time.


This is very similar to the point raised by John Klensin.  Since 
"harmful" is the term used in RFC 3932, I have asked Harald to 
provide some insights before making any changes to this wording.  I 
understand your point, but I want to make sure that I'm not missing 
some historical context.



  5. The IESG finds that this document extends an IETF protocol in a
  way that requires IETF review and should therefore not be
  published without IETF review and IESG approval.

I read that as "we cannot publish this document as it requires IETF 
review and IESG approval."  It may be easier for all parties to ask 
for an IETF review instead of rejecting publication outright.


The point is that a different publication path, not the Independent 
Publication Stream, is needed to obtain the IETF review.  That is the 
point of the IETF stream.



  The IESG assumes that the RFC Editor, in agreement with the IAB, will
   manage mechanisms for appropriate technical review of independent
   submissions. Likewise, the IESG also assumes that the IRSG, in
   agreement with the IAB, will manage mechanisms for appropriate
   technical review of IRTF submissions.

I don't see why there have to be assumptions here.  I suggest 
dropping the "assumes" and clearly spelling out who is going to manage what.


How about:

The RFC Editor, in agreement with the IAB, shall manage mechanisms 
for appropriate technical review of independent submissions. 
Likewise, the IRSG, in agreement with the IAB, shall manage 
mechanisms for appropriate technical review of IRTF submissions.


Russ



Re: Last Call: draft-irtf-asrg-dnsbl (DNS Blacklists and Whitelists)

2008-11-09 Thread Keith Moore
Matthias Leisi wrote:
 [Disclosure: I am the leader of the dnswl.org project; I provided some
 input into the DNSxL draft as far as it concerns whitelists.]
 
 Keith Moore schrieb:
 
 These incidents happen one at a time.  It's rarely worth suing over a
 single dropped message, and yet the aggregate amount of harm done by IP
 based reputation services is tremendous.
 
 I would not want to reduce the situation to blacklists only. You use the
 correct term - IP based reputation services - but fail to mention that
 this includes whitelists, and that decisions other than drop can be
 made based upon data returned by such services.

I suspect DNSWLs have problems also, but I haven't tried to analyze the
problems with DNSWLs to the extent I have the problems with DNSBLs.

 Regarding the dropped message: While outside the scope of the DNSxL
 draft, it is pretty much consensus that messages should not be "dropped"
 in the sense of deleted or stored in a seldom-reviewed quarantine
 folder, but that a clear SMTP 5xx error code should be returned.

I don't think it should be outside the scope of a standard.
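For readers unfamiliar with the mechanics under discussion: a DNSBL is queried by reversing the octets of the connecting client's IPv4 address and appending the list's zone, and the rejection Matthias describes is a 5xx reply issued during the SMTP transaction so the sending MTA can inform the author. Here is a minimal illustrative sketch (not text from the draft); the zone name "bl.example.org" and the exact reply wording are hypothetical placeholders.

```python
def dnsbl_query_name(client_ip: str, zone: str) -> str:
    """Build the DNSBL lookup name: reverse the IPv4 octets, append the zone.

    A listing is indicated by the existence of an A record at this name.
    """
    octets = client_ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def smtp_rejection(client_ip: str, zone: str) -> str:
    """A clear 5xx reply at SMTP time, rather than silently dropping the mail."""
    return (f"550 5.7.1 Service unavailable; client host [{client_ip}] "
            f"blocked using {zone}")

# A client at 192.0.2.1 would be checked via an A-record lookup of:
print(dnsbl_query_name("192.0.2.1", "bl.example.org"))
# 1.2.0.192.bl.example.org
print(smtp_rejection("192.0.2.1", "bl.example.org"))
```

Because the rejection happens before the receiving server accepts the message, the sender gets a bounce identifying the list consulted, which is the accountability property being argued about in this thread.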

 I believe it is generally agreed that false positives are the main risk
 with spam filter solutions. This applies both to individual tools like
 DNSxLs and to the filtering machine as a whole as perceived by the
 recipient (and the sender). No automated solution can guarantee the
 absence of false positives.
 
 On the other hand, the manual solution is far worse in terms of false
 positives, in my experience - the human error rate is pretty high when
 eg a spammy inbox must be reviewed manually.

Agreed.  But it is not uniform for all recipients - it depends highly on
how much legitimate mail they receive, how much spam they receive, and
the similarity between the two.  And there's an important difference in
liability between a recipient filtering his own mail and an unrelated
third party filtering it for him.

 And the relative number of complaints is not
 a reliable indicator of those costs.
 
 It's probably the best indicator available?

Perhaps, but that doesn't make it a compelling argument.

Keith