Re: LLMNR news you can lose

2005-09-09 Thread David Hopwood

Bernard Aboba wrote:

1. "LLMNR has never been implemented"

Microsoft has shipped LLMNR support in Windows CE 4.1 and 5.0.


But doesn't that just make it even more odd that they haven't shipped it
for XP? (Given that the API of CE is approximately a subset of that of XP,
presumably substantially the same implementation would work, perhaps
with minor changes.)

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Dynamic salting in encryption algorithms for peer to peer communication

2005-08-28 Thread David Hopwood

Atul Sabharwal wrote:

Generally, I have seen that people use PKI or static salts for encrypting
data, e.g. passwords. In the case of peer-to-peer communication, would it be
useful to use dynamic salts derived from known mathematical series, e.g. the
Fibonacci series or Ramanujan number series?


No. Ask on the newsgroup sci.crypt for an explanation of why not; it's not
on-topic here.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Last Call: language root file system

2005-08-27 Thread David Hopwood

JFC (Jefsey) Morfin wrote:

At 18:11 27/08/2005, David Hopwood wrote:

JFC (Jefsey) Morfin wrote:

[...] The DNS root is updated around 60 times a year. It is likely 
that the langroot is currently similarly updated with new langtags.


No, that isn't likely at all.


Dear David,
your opposition is perfectly admissible. But it should be documented.


For the long-term, sustained rate of updates to the registry to be 60 a year,
there would have to be real-world changes in the status of countries or in
the classification of languages and scripts that occurred at the rate of 60
a year (i.e. every 6 days). And even in times of significant political
upheaval, that is simply implausible.

The order of magnitude is the same. I did not track the number of entries
in the IANA file during the last few months. This is something that I will
certainly monitor if the registry stabilises.


Exactly; the registry has not stabilised. It will do, but until it does,
there is little point in arguing statistics on how frequently it is updated.

The langtag resolution will be needed for every HTML, XML, email page 
being read.


Patent nonsense. In practice the list will be hardcoded into software 
that needs it, and will be updated when the software is updated.


Then? The langtag resolution is the translation of the langtag into
machine-understandable information. It will happen every time a langtag
is read, the same as domain name resolution is needed every time a URL
is called.


The langtags would already be encoded in a form that can be interpreted
directly by each application. You were trying to imply that repeatedly
downloading this information would impose significant logistical costs:

# Even if the user cache their 12.000 to 600.000 k zip file when they boot,
# or accept an update every week or month, we are in the logic of an
# anti-virus update.

In fact there is unlikely to be any additional cost apart from that of
upgrading software using existing mechanisms.
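To make "interpreted directly" concrete, here is a minimal sketch in Python
(a simplified language-script-region pattern, not the registry draft's full
ABNF) of an application decomposing tags entirely locally, with no
"resolution" step:

    import re

    # Simplified pattern; illustration only, not the draft's full grammar.
    TAG = re.compile(
        r"^(?P<language>[A-Za-z]{2,3})"
        r"(?:-(?P<script>[A-Za-z]{4}))?"
        r"(?:-(?P<region>[A-Za-z]{2}|[0-9]{3}))?$"
    )

    for tag in ("uk", "uk-Cyrl-UA", "en-GB"):
        m = TAG.match(tag)
        # No network lookup occurs; the subtags carry their own meaning.
        print(tag, "->", m.groupdict() if m else "outside this sketch")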

This is perfectly sufficient. After all, font or character encoding
support for new scripts and languages (e.g. support for Unicode version
updates) has to be handled in the same way.


I am afraid you confuse the process and the update of the necessary 
information. And you propose in part the solution I propose :-) .


If it is sufficient to upgrade software using existing mechanisms, then
there is no problem that is not already solved.


 Languages, scripts, countries, etc. are not domains.


The DNS root tends to be much more stable. What counts is not the number 
of changes, but their frequency.
- there is no difference between ccTLDs and country codes. We probably 
can say that there is one change a year. At least.


What happens if the change isn't immediately picked up by all software?
Not much. Only use of that particular country code is affected.

[...]
Now, if there are updates, this means there is a need to use them now - 
not in some years' time.


And if they do, they will upgrade their software -- which is what they
have to do anyway to actually make use of any new localisations, scripts,
etc.

PS. The problem is: one way or another one billion users, with various 
systems and appliances, must get reasonably maintained related 
information which today weighs 15 K and is going to grow to 600 K at 
some future date,


The subset of the information needed by any particular application will
typically be much less than 600K. If there is a real issue of database size,
operating systems will start providing shared libraries to look up this
information, so that only an OS update is needed (and similarly for the
Unicode data files, which are already significantly more than 600K).

with a change from every week to every day (IMHO much 
more as people start mastering and adapting a tool currently not much 
adapted to cross-lingual exchanges). From a single source (in the exclusive 
case) or from hundreds of specialised sources in an open approach. This 
should not be multiplied by all the languages that will progressively 
want to support langtags, but will multiply the need by two or three. For 
example a Ukrainian will want langtags in Ukrainian, in Latin and 
Cyrillic scripts [...]


You pick one of the very few languages that are written in more than
one script, and use that example to imply that the total number of
language-script combinations used in practice is 2 to 3 times the number
of languages. Please stop exaggerating.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Last Call: language root file system

2005-08-27 Thread David Hopwood

JFC (Jefsey) Morfin wrote:

[...] The DNS root is updated around 60 times a year. It is likely that the
langroot is currently similarly updated with new langtags.


No, that isn't likely at all.

[...]

The langtag resolution will be needed for every HTML, XML, email page being 
read.


Patent nonsense. In practice the list will be hardcoded into software that
needs it, and will be updated when the software is updated. This is perfectly
sufficient. After all, font or character encoding support for new scripts and
languages (e.g. support for Unicode version updates) has to be handled in the
same way. Languages, scripts, countries, etc. are not domains.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Last Call: 'Tags for Identifying Languages' to BCP

2005-08-25 Thread David Hopwood

JFC (Jefsey) Morfin wrote:
[...] Today, the common practice of 
nearly one billion Internet users is to be able to turn off cookies 
to protect their anonymous free usage of the web. Once the Draft enters 
into action they will be imposed a conflicting privacy violation: "tell 
me what you read, I will tell you who you are": any OPES can monitor the 
exchange, extract these unambiguous ASCII tags, and know (or block) what 
you read. You can call these tags in Google and learn a lot about 
people. There is no proposed way to turn that personal tagging off, nor 
to encode it.


I don't know which browser you use, but in Firefox, I can configure exactly
which language tags it sends. If it were sending other information using
language tags as a covert channel (which it *could* do regardless of the
draft under discussion), I'd expect that to be treated as at least a bug,
and if it were a deliberate privacy violation, I'd expect that to cause a
big scandal.

I support it as a transition standard track RFC needed by some, as 
long as it does not exclude more specific/advanced language 
identification formats, processes or future IANA or ISO 11179 
conformant registries.


The grammar defined in the draft is already flexible enough.


(I suppose you mean more than just grammar. Talking of the ABNF is 
probably clearer?).


I am certainly eager to learn how I can support modal information (type 
of voice, accent, signs, icons, feelings, fount, etc.), medium 
information, language references (for example is it plain, basic, 
popular English? used dictionary, used software publisher), nor the 
context (style, relation, etc.), nor the nature of the text (mono, 
multilingual, human or machine oriented - for example what is the tag to 
use for a multilingual file [printed in a language of choice]), the date 
of the langtag version being used, etc.


I mean that the grammar is flexible enough to encode any of the above
attributes (not that it would be useful or a good idea to encode most
of them).
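As a hypothetical illustration (the subtags below are invented, and the
draft's grammar reserves the "x" singleton for private use):

    # Hypothetical private-use subtags; illustration only.
    tag = "en-Latn-GB-x-plain-loud"
    language_part, _, private_use = tag.partition("-x-")
    print(language_part)            # en-Latn-GB
    print(private_use.split("-"))   # ['plain', 'loud']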

The Draft has introduced the "script" subtag in addition to RFC 3066 
(what is an obvious change). However in order to stay "compatible" with 
RFC 3066, author says it cannot introduce a specific support of URI 
tags.


This objection seems to be correct: URI tags include characters not
allowed by RFC 3066. But you could easily encode information equivalent
to a URI tag, if you wanted to.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Last Call: 'Tags for Identifying Languages' to BCP

2005-08-24 Thread David Hopwood

JFC (Jefsey) Morfin wrote:
I would like to understand why 
http://www.ietf.org/internet-drafts/draft-ietf-ltru-registry-12.txt 
claims to be a BCP: it introduces a standard track proposition, 
conflicting with current practices and development projects under way?


I've read this draft and see nothing wrong with it. Having a fixed,
unambiguous way to parse the elements of a language tag is certainly
a good idea. What specific current practices do you think it conflicts
with?

I support it as a transition standard track RFC needed by some, as long 
as it does not exclude more specific/advanced language identification 
formats, processes or future IANA or ISO 11179 conformant registries.


The grammar defined in the draft is already flexible enough.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stopping loss of transparency...

2005-08-19 Thread David Hopwood

Bill Manning wrote:

my question...  what happens when you use address literals in the URL; i.e.

http(s)://192.0.2.80/index.php


Try it. With both Firefox (1.0.6) and MSIE (6.0.2600), I get a hostname
mismatch warning when replacing the hostname in an https: URL with its
IP address, because the IP address is not what is specified in the certificate.

However, I also use the Petname toolbar 
<http://www.waterken.com/user/PetnameTool/>,
which tells me that the site is in fact the expected one.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stopping loss of transparency...

2005-08-18 Thread David Hopwood

Nicholas Staff wrote:
yes, that's exactly what it does; they call it "Portal-Guided 
Entrance" on ports 80 and 443.


Does this work on port 443? I would assume the SSL security checks 
wouldn't accept this.


I believe the FQDN is not encrypted, though the part of the 
URL after the FQDN is (so one could redirect based on https:// and/or 
specific FQDNs, whether http or https).


That's beside the point. According to RFC 2818 section 3.1, 
where a hostname is given in an https: URL, the client MUST check this 
hostname against the name in the server's certificate. This check will
fail if the connection is redirected to a non-transparent proxy (assuming
that the web browser is complying with RFC 2818, no CA in the browser's
trusted CA list has been compromised, and the crypto is not broken).


The redirection or hijacking happens way before the browser gets involved
(much lower layer).  My guess (again just one possibility) is that the portal
is spoofing the address of the original destination and either sending a
reset or some kind of redirect.


No, the hostname check will fail regardless of *how* the connection is
redirected (it wouldn't be of any use, otherwise).

The client browser expects to be connected to a host with a cert consistent
with the hostname in the URL. A redirect would only work if it was the
expected host that was sending the redirect over SSL/TLS.

Of course a failure of this check only causes a warning, not an error, on most
browsers. And to get back to the original point of the thread, just redirecting
plain http is obnoxious enough.
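As a minimal sketch of that check (using a modern Python ssl API purely
for illustration; the host name is an assumption):

    import socket, ssl

    # The name passed as server_hostname is compared against the server's
    # certificate; a hijacked or redirected connection cannot present a
    # valid certificate for that name, so verification fails.
    ctx = ssl.create_default_context()  # hostname checking on by default

    def fetch_homepage(host: str) -> None:
        with socket.create_connection((host, 443)) as sock:
            # Raises ssl.SSLCertVerificationError on a name mismatch.
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(b"GET / HTTP/1.0\r\nHost: " +
                            host.encode() + b"\r\n\r\n")
                print(tls.recv(200))

    fetch_homepage("www.ietf.org")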

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Stopping loss of transparency...

2005-08-18 Thread David Hopwood

Nicholas Staff wrote:

On 17-aug-2005, at 15:34, Marc Manthey wrote:

Just to be sure: what we're talking about is that when a customer 
gets up in the morning and connects to www.ietf.org they get 
www.advertising-down-your-throat.de instead, right?


yes, that's exactly what it does; they call it "Portal-Guided 
Entrance" on ports 80 and 443.


Does this work on port 443? I would assume the SSL security checks 
wouldn't accept this.


I believe the FQDN is not encrypted, though the part of the URL after the
FQDN is (so one could redirect based on https:// and/or specific FQDNs,
whether http or https).


That's beside the point. According to RFC 2818 section 3.1, where a hostname
is given in an https: URL, the client MUST check this hostname against the
name in the server's certificate. This check will fail if the connection is
redirected to a non-transparent proxy (assuming that the web browser is
complying with RFC 2818, no CA in the browser's trusted CA list has been
compromised, and the crypto is not broken).

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: what is a threat analysis?

2005-08-11 Thread David Hopwood

Jari Arkko wrote:

David Hopwood wrote:


> At the MASS/DKIM BOF we are being required to produce such a thing as a
> prerequisite to even getting chartered as a working group.

A more pertinent request at that stage might be, "Please clarify the 
security requirements for this protocol." IOW, what is the protocol
supposed to enforce or protect, under the assumption that it will be
used in the Internet environment with the "fairly well understood
threat model" described above?


Hmm. It may be that it's well understood what the threat model in the
Internet is. (But if so, why are we having so many problems?)


Several reasons (which are not independent):

 - most of the protocols that we *deploy* are not secure in that model.
   We need to pay more attention to aspects of protocols that act as
   obstacles to deployment, and in particular reduce the costs (monetary,
   support, reliability, usability, performance, etc.) of using more
   secure protocols.

 - although the assumption of intrusion-resistant end-systems is necessary
   for security, the operating systems running on most machines (particularly,
   but not exclusively, Microsoft Windows) do not adequately support it.
   It's like building on sand.

 - most security problems are treated as just implementation bugs to be
   patched. This does not address the fundamental design flaws that lead
   to these problems being so common and having such serious effects,
   including in particular:
     * use of unsafe programming languages (that is, languages in which
       common errors cause undefined behaviour)
     * the property of conventional operating systems that programs
       run by a user almost always act with the full authority of the user.

 - even where implementations of systems correctly support secure protocols,
   they are often configured to be insecure by default; insufficient attention
   is paid to reducing the effort needed to produce a secure configuration,
   to make this effort incremental as users start to make use of functions
   that require configuration, and to reduce potential sources of error.

 - user interfaces do not give the necessary information for users to make
   informed security decisions, or else give too much information and do
   not make it clear what is important. There is hardly any HCI testing of
   security interfaces.

 - there is an unhelpful perception that security and usability are necessarily
   in opposition, which leads to system designers being satisfied with designs
   that are not good enough from the point of view of being simultaneously
   secure and usable. The paper "User Interaction Design for Secure Systems"
   at <http://www.sims.berkeley.edu/~ping/sid/> is essential reading. Here's
   an important point from its introduction:

 Among the most spectacular of recent security problems are e-mail
 attachment viruses. Many of these are good real-life examples of security
 violations in the absence of software errors: at no point in their
 propagation does any application or system software behave differently
 than its programmers would expect. The e-mail client correctly displays
 the message and correctly decodes the attachment; the system correctly
 executes the virus program when the user opens the attachment. Rather,
 the problem exists because the functionally correct behaviour is
 inconsistent with what the user would want.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: what is a threat analysis?

2005-08-10 Thread David Hopwood

Bruce Lilly wrote:

Date: 2005-08-10 15:41
From: Michael Thomas <[EMAIL PROTECTED]>



Having a "threat analysis" was brought up at the plenary by Steve
Bellovin as being a Good Thing(tm).


[...]


So, if this is going to be yet another hoop that the IESG and IAB
sends working groups through like problem statements, requirements
documents and the like, I think it ought to be incumbent on
those people demanding such things to actually both agree and
document what it is that they are demanding.


See FYI 36 (a.k.a. RFC 2828) for the definition of threat analysis.


   $ threat analysis
  (I) An analysis of the probability of occurrences and consequences
  of damaging actions to a system.

(That's the whole of the definition.)

This is not a property of a protocol (or of anything that the IETF
standardizes). It depends on how people will use the protocol, and how
attackers will respond to that use, which is *always* unknown at the
time when threat analyses are typically asked for. Indeed, if it were
possible to give an accurate assessment of "the probability of occurrences
and consequences of damaging actions to a system", it would probably be
only because the thing being proposed has a very narrow range of
applicability.


RFC 3552, "Guidelines for Writing RFC Text on Security Considerations",
may also be helpful (although it does not use the exact term "threat
analysis").  All RFCs must contain a Security Considerations section
(RFC 2223, section 9).


That RFC indeed has some very sensible discussion of threat models (not
the same thing as a threat analysis by the RFC 2828 definition). What it
says is:

   The Internet environment has a fairly well understood threat model.
   In general, we assume that the end-systems engaging in a protocol
   exchange have not themselves been compromised.  Protecting against an
   attack when one of the end-systems has been compromised is
   extraordinarily difficult.  It is, however, possible to design
   protocols which minimize the extent of the damage done under these
   circumstances.

   By contrast, we assume that the attacker has nearly complete control
   of the communications channel over which the end-systems communicate.
   This means that the attacker can read any PDU (Protocol Data Unit) on
   the network and undetectably remove, change, or inject forged packets
   onto the wire.  This includes being able to generate packets that
   appear to be from a trusted machine.  Thus, even if the end-system
   with which you wish to communicate is itself secure, the Internet
   environment provides no assurance that packets which claim to be from
   that system in fact are.

and later it also mentions replay attacks, man-in-the-middle, etc.

IOW, the threat model is much the same for all Internet protocols. Of course,
it's possible that a particular protocol may raise additional issues (for
example, the possibility of conspiring users being able to break a security
protocol, or reliance on a trusted party that would be a single point of
failure). But I agree with the thrust of Michael Thomas' point: it often
isn't clear what people who ask for a threat analysis are really asking to
be stated that isn't already obvious.

> At the MASS/DKIM BOF we are being required to produce such a thing as a
> prerequisite to even getting chartered as a working group.

A more pertinent request at that stage might be, "Please clarify the security
requirements for this protocol." IOW, what is the protocol supposed to
enforce or protect, under the assumption that it will be used in the Internet
environment with the "fairly well understood threat model" described above?

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: IETF63 network shutdown at noon today

2005-08-05 Thread David Hopwood

JORDI PALET MARTINEZ wrote:

Hi,

Yes, fully agree, it has been a very nice network, and demonstrated that
IPv6 doesn't create troubles in the WLAN!

There were some failures this morning in the room of the monami6 BOF (not
sure whether other rooms experienced that). It seems it was IP-related, not
the WLAN.

Also, there is something that we should do for the next meetings. The same
way that some people tend to forget to disable ad-hoc mode, now that more
and more people are using IPv6, they need to remember to avoid advertising
any prefixes!


Sounds like a security problem to me; not something that can be fixed just by
asking people not to do that.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


The End-to-end Argument

2005-07-20 Thread David Hopwood

Tom Petch wrote:

From: "Iljitsch van Beijnum" <[EMAIL PROTECTED]>:


In other words: if the endpoints in the communication already do
something, duplicating that same function in the middle as well is
superfluous and usually harmful.


Mmmm, so if I am doing error correction in the end hosts, and somewhere along
the way is a highly error-prone satellite link, then I should let the hosts
correct all the satellite-created errors?  I don't think that that is the way
it is done.

Likewise, if my sensitive data mostly traverses hard-to-penetrate links (fibre)
but just somewhere uses a vulnerable one (wireless), then I just use
application-level encryption, as opposed to adding link encryption over the
wireless link in addition?  Again, I think not.

End-to-end is not always best but I am not sure which law of network engineering
points out the exceptions.


Saltzer, Reed and Clark's paper "End-to-end Arguments in System Design" points
out the exceptions:
<http://mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf>
(starting at the heading "Performance aspects").

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Port numbers and IPv6 (was: I-D ACTION:draft-klensin-iana-reg-policy-00.txt)

2005-07-14 Thread David Hopwood

Scott Bradner wrote:

I was surprised that TCP-over-IPv6 and UDP-over-IPv6 didn't increase
the port number space. I know it's off-topic here, but anyone know why
they didn't? It surely must have been considered.


That was considered to be part of TCPng, and as best I recall was 
explicitly out of scope.


correct


I was looking more for an explanation of how and why it was decided to
be out of scope.

The arguments for considering it to be in scope would have been:

 - the TCP and UDP "pseudo-headers" needed to be changed anyway to
   accommodate IPv6 addresses (see section 8.1 of RFC 2460);

 - the pressure on well-known port numbers was obvious at the time;

 - supporting 32-bit port numbers in IPv6 stacks could have been done
   at very little incremental cost;

 - a larger port space would have been an additional incentive to
   adopt IPv6;

 - more ambitious changes to TCP would have a low probability of
   adoption within a relevant timeframe;

 - it makes sense for the port number space to be the same size for
   UDP-over-IPv6 and TCP-over-IPv6.


Jeroen Massar wrote:

It would not make much sense, between 2 hosts you can already have
65536*65536 possible connections*, which should be more than
enough(tm) ;)


Not for connections to a well-known port.
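To spell out the arithmetic behind both statements (a trivial sketch,
assuming nothing beyond the numbers above):

    # Between one client and one server, the (src port, dst port) pair
    # allows 2**16 * 2**16 distinct connections...
    total_pairs = 2**16 * 2**16      # 4294967296
    # ...but connections *to* one well-known service pin the destination
    # port, leaving a given client only its 2**16 source ports.
    to_one_service = 2**16           # 65536
    print(total_pairs, to_one_service)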

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Port numbers and IPv6 (was: I-D ACTION:draft-klensin-iana-reg-policy-00.txt)

2005-07-14 Thread David Hopwood

Brian E Carpenter wrote:

3. Thus I come to the key question - how high should the bar be for
assignments in clearly constrained namespaces? This month's poster
child is IPv6 option numbers, but at an even more basic level, we
should probably be more worried about port numbers, where we seem
pretty close to running out of well-known numbers, and moving along
nicely through the registered port numbers.


I was surprised that TCP-over-IPv6 and UDP-over-IPv6 didn't increase
the port number space. I know it's off-topic here, but anyone know why
they didn't? It surely must have been considered.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: draft-klensin-iana-reg-policy (Re: S stands for Steering) [correction]

2005-07-01 Thread David Hopwood

David Hopwood wrote:

Robert Elz wrote:


That one may be appropriate here.   That is, I certainly believe that
2434 means "verify the documentation is adequate" just as John's draft
is apparently proposing.   That is, for me, not a change at all.

I certainly would never have ignored a proposal to register trivial
things like IPv6 option codes if it required approval of the use of
the option, rather than documentation of the thing.   That is, when
2780 was proposed, I assumed it was using 2434 in the way I interpret
2434, and IESG approval meant a check that the documentation was of
adequate quality for the purpose, and no more than that.


If RFC 2870 had been meant to allow assignment of hop-by-hop options using
"Specification Required", it would have said so.


RFC 2780, of course.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Should the IESG rule or not? and all that...

2005-07-01 Thread David Hopwood

Dave Crocker wrote:



You seem to think that every IETF participant _except_ those on IESG
should do so.   You seem to think that everyone else should be able to
exercise their judgement but that the IESG should just serve as 
process facilitators and rubber stamp technical decisions that others
make.  


Perhaps I'm wrong, but I thought the exercise of IETF judgement relied 
on rough consensus.  Having a subset of folks impose their own, personal 
preferences -- oh, sorry, their judgement -- is not using rough 
consensus to make ietf decisions.


Authors of RFCs that set up IANA registries are free not to use "IESG Approval"
as a criterion for allocations, or to allow alternative criteria. In fact
RFC 2780 does allow alternatives, including IETF Consensus. The sponsors of
this IPv6 option chose not to use them.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: draft-klensin-iana-reg-policy (Re: S stands for Steering)

2005-07-01 Thread David Hopwood

Robert Elz wrote:

That one may be appropriate here.   That is, I certainly believe that
2434 means "verify the documentation is adequate" just as John's draft
is apparently proposing.   That is, for me, not a change at all.

I certainly would never have ignored a proposal to register trivial
things like IPv6 option codes if it required approval of the use of
the option, rather than documentation of the thing.   That is, when
2780 was proposed, I assumed it was using 2434 in the way I interpret
2434, and IESG approval meant a check that the documentation was of
adequate quality for the purpose, and no more than that.


If RFC 2870 had been meant to allow assignment of hop-by-hop options using
"Specification Required", it would have said so. In that case we would not
be having this discussion (subject to public and archival availability of
the relevant documentation).

There is no possibility that the authors of RFC 2870 were not aware of a
difference between "Specification Required" and "IESG Approval", since that
RFC does allow "Specification Required" (in addition to most of the other
possibilities) for allocation of port numbers -- where the allocation
requirements are obviously intended to be weaker.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Remote UI BoF at IETF63

2005-07-01 Thread David Hopwood

[EMAIL PROTECTED] wrote:

3. Why this cannot be done with existing protocol?

Existing protocols can be split in two categories: framebuffer-level and
graphics-level protocols.

In a framebuffer-level protocol, the contents of the framebuffer (i.e. the
individual pixels) are copied across the network to a framebuffer on the client.
In order to avoid sending the full screen every time something changes on the
screen, these protocols typically send only the pixels that have changed inside
the clipping regions to the client. Examples of such protocols are VNC and
protocols based on T.120, like Microsoft's RDP.

In a graphics-level protocol, the drawing requests to the graphical device
interface (GDI), such as DrawLine(), DrawString(), etc. are copied across the
network. The client is responsible for interpreting these commands and
rendering the lines, rectangles, strings, etc. in its framebuffer. An example
of such a protocol is X Windows.


Framebuffer-level protocols can be viewed as a special case of graphics-level
protocols where the drawing commands are restricted to bitblt-like commands.


The problem with these approaches is that, in order to render the UI, the
clients blindly follow the instructions received from the server;
they have no means to influence the appearance of the UI, they just
render the UI using the graphical elements/instructions that are provided
by the server and are specific to the server platform.


Having the UI adapt to a look-and-feel appropriate to the client device
(and user's preferences) doesn't automatically imply that it has to be
the client that does this adaptation. The client could send the server a
description of the preferred L&F. The advantage of this is that it allows
clients to be much simpler, putting the complexity on the server which is
likely to have more memory, processing power, etc.
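A hypothetical sketch of what such a client could send at session setup
(all field names invented for illustration; no actual remote-UI protocol
is implied):

    import json

    # Invented capability/look-and-feel descriptor; illustration only.
    preferences = {
        "display": {"width": 320, "height": 240, "depth": 16},
        "look_and_feel": {"theme": "high-contrast", "font_scale": 1.5},
        "input": ["keypad", "stylus"],
    }
    # Sent once by the client; the server then generates a UI already
    # adapted to the client's constraints and preferences.
    capability_message = json.dumps(preferences)
    print(capability_message)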

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: S stands for Steering [Re: Should the IESG rule or not?]

2005-07-01 Thread David Hopwood

Scott W Brim wrote:

On 07/01/2005 13:02 PM, Ken Carlberg allegedly wrote:


My view is that your impression of the reaction is incorrect.  in
taking the position that respondents can be classified as either: 
a) being satisfied with the IESG *decision*, b) dissatisfied or 
uncomfortable with the decision, or c) could not be clearly 
determined by the content of their response, I came up with the 
following list.


You can add me to the "satisfied" column.  The IESG is asked to take
positions and to lead (despite what a few think).  That's risky -- no
matter what they do they get criticism from somewhere.  Maybe they
didn't *phrase* the announcement perfectly, but the decision is
correct.  Something like this must have a serious, long-term IETF
review.  We need to take the overall design of the Internet into
account and not just be administrators.


Add me to the "satisfied" column as well.

Much of the objection expressed on this list seems to have been not to
the decision itself, but that the way the decision was expressed by the
IESG appeared to preempt what the result of an IETF consensus process
would be. My opinion on this is that:

 - the IESG is entitled to hold positions about the suitability of
   particular proposals, and to express those positions publicly and
   forcefully. They should not be prevented or discouraged from doing
   so by politics.

 - if there are substantive objections to the grounds on which the IESG
   approved or did not approve a request in any particular case, that is
   what the appeals process is for.

 - expressing the position that Dr. Robert's proposal would be unlikely
   to reach IETF consensus *as part of the decision not to approve the request*
   was arguably unwise.


Some people appear to want to use this case as a stepping-off point for a
campaign to liberalise the policies for allocation of code points in all or
most code spaces. This would effectively result in the prior decisions of
working groups and others as to what level of review is required in any
particular code space to be altered, without consulting those groups.

The idea that in general, allocating code points based only on availability
of documentation and not technical review must be harmless as long as
extension/option mechanisms are designed correctly, is dangerously naive.
As several posters have pointed out, the problem here is with extensions/
options that *are* implemented, not those that are ignored. A policy of
(essentially) rubber-stamping code point allocations would, in practice,
lead to more options with a marginal or poor usefulness/complexity trade-off,
that would nevertheless have to be implemented to ensure interoperability.

If the description of what IESG Approval should involve is unclear, then
that option, and that option only, should be clarified in an update to
RFC 2434. There are no grounds for any radical rethink of allocation
policies in general.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Last Call: 'Required functions of User Interface for the Internet X.509 Public Key Infrastructure' to Informational RFC

2005-06-27 Thread David Hopwood

Re: http://www.ietf.org/internet-drafts/draft-choi-pkix-ui-03.txt

Dave Crocker wrote:

- 'Required functions of User Interface for the Internet X.509 Public Key
Infrastructure '
 as an Informational RFC


RFC document titles should not carry language that states or implies that their 
contents mandate conformance.


Hence, the word "Required" should be removed from this document title.

The issue is only exacerbated by the fact that it is seeking non-standards 
status.


Also, the document has received no review in Working Groups for which it is
relevant, such as TLS and PKIX (there are references to its existence in the
PKIX archives, but no discussion). It seems to have been in desperate need
of such review:

#  Compatibility shall be accomplished for using one certificate to many
#  PKI applications. Generally, PKI application such as the Internet
#  Banking or E-mail application defines the user's certificate and
#  private key location by their own way. Thereby, when using those
#  applications, users are at a loss whenever receiving a question where
#  their certificates are. Most users do not know the answer, and they
#  want to use different PKI programs with their own certificate without
#  answering the question. It comes true as a certificate sharing
#  function and transfer function that mainly aim for increasing
#  certificate compatibility, which benefits the user's convenience.

The grammar here, and elsewhere in the document, is bad enough to obscure
the meaning. More importantly, the argument given is inadequate motivation
for "using one certificate [for] many PKI applications":

 - the described usability problem could be solved even with multiple
   certificates
 - it is not clear that, in the situations where user certificates
   are typically used, implicitly selecting a certificate rather than
   prompting the user to select one would be a good thing
 - there are privacy implications of using a certificate for multiple
   applications, which are not discussed in the document
 - different relying parties may have requirements, for example on the
   algorithms used and on optional cert fields, that cannot be met by a
   single certificate
 - using a single certificate ensures that loss or compromise of one
   private key necessarily implies loss or compromise of the user's
   identity for many different applications. (This problem is not solved
   just by using multiple certificates, but at least it is not precluded
   from being solved.)

#  For example, a common storage location of a user's certificate and
#  private key in HARD DISK driver of different operating systems can be
#  assigned to be:
#
#  - MS  Windows :  C:Program Files/IETF/PKIX
#  - Linux/Unix  : (User Account)/IETF/PKIX
#  - Mac OS X: (Hard disk label):Library/IETF/PKIX

Is this a joke? It ignores, at least, the fact that MS Windows and
Mac OS X are multi-user operating systems, and existing conventions for
where user certificates are stored.

#  Lastly, the user interface shall contain certificate management
#  commands as followings;
#
#  - Integrity verification function of trust anchor : defined in
#[2.2.1]
#  - Import and export : defined in [2.1.2]
#  - Certificate verification : when a user wants to know whether
#his or her certificate is valid or not

Valid for what purpose? User software *cannot* validate the cert in
a way that is guaranteed to match the validation result of any relying
party; it doesn't have enough information.

The Security Considerations section is totally inadequate. As an example,
consider this statement:

#  The PKI client software must provide a secure method to update PKI
#  client software and trust anchor's certificate. This document defines
#  it as automatic update function, which makes user involvement
#  minimized.

which has security considerations that are left entirely unaddressed.
(Just saying "a secure method" doesn't mean anything: How should it be
secured? Who should be trusted? What happens if keys are compromised?)

In summary, the document is not of adequate quality for publication as
an Informational RFC. It is not clear that anything useful is salvageable
from it.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: IANA Considerations

2005-06-23 Thread David Hopwood

Bruce Lilly wrote:

From: Ned Freed <[EMAIL PROTECTED]>



Which in turn works because there are always security considerations - the
closest thing to valid empty security considerations section  we have is one
that says "this entire document is about security". A section that simply says
"there are no security considerations here" is invalid on its face and
indicates insufficient review has been done.


Possible counterexamples:

RFC 2234 [ABNF]:
   Security is truly believed to be irrelevant to this document.


It's not completely irrelevant: ABNF constructions that may be confusing
(of which the RFC mentions several instances) are a possible contributing
factor to security bugs when implementing specifications that use ABNF.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: put http://tools.ietf.org/ on the IETF website

2005-06-15 Thread David Hopwood

wayne wrote:

"Lucy E. Lynch" <[EMAIL PROTECTED]> writes:


Many of the issues related to WG progress can be managed using the
excellent web tools provided at tools.ietf.org - see for example:
http://tools.ietf.org/wg/ccamp/


Very useful. For example, apparently IESG is waiting for the authors of
RFC3546bis, while the authors thought they were waiting for IESG. I think
this kind of temporary deadlock probably happens a lot.


This link should be put on the front page of the IETF website.


And on the top
"home | working groups | meetings | proceedings | I-D index | RFCs" bar.

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Last Call: 'Email Submission Between Independent Networks' to BCP

2005-06-12 Thread David Hopwood

Tom Petch wrote:

From "John C Klensin" <[EMAIL PROTECTED]>:


(2) CRAM-MD5 was designed around a particular market niche and,
based on the number of implementations and how quickly they
appeared, seems to have responded correctly to it.  It may be
appropriate at this point to conclude that market niche has
outlived its usefulness, but if "The RECOMMENDED
alternatives..." include only things that are significantly more
complex or require significantly more infrastructure, there is
some reason to believe that they will go nowhere fast,
independent of any pronouncements the IETF chooses to make.


I am reminded of the following from secsh-architecture, in the context of how to
check the ssh host public key, and so authenticate the ssh host.

" The members of this Working Group believe that 'ease of use' is
   critical to end-user acceptance of security solutions, and no
   improvement in security is gained if the new solutions are not used.
   Thus, providing the option not to check the server host key is
   believed to improve the overall security of the Internet, even though
   it reduces the security of the protocol in configurations where it is
   allowed."

For me, this is sound engineering, imperfect but recognising the frailties of
the world, producing something that will be deployed. 


I am all in favour of usable security. All too often, however, "ease-of-use"
is used to justify security compromises, without even thinking about how a
higher level of security and a better user interface could be achieved
simultaneously. That is *not* sound engineering.


I apply the same logic to MD5.


We know how to design password-based protocols that prevent session hijacking
and dictionary attacks, provide mutual authentication, and do not require
storing password-equivalent authenticators. It is not rocket science,
and it does not require any additional effort from the user. That's not
the problem; the problem is a lack of *concrete* deployable security
protocols that implement the known state of the art.

(TLS prevents session hijacking, but does not implement strong password
authentication. AFAIK, the nearest thing available is
<http://www.ietf.org/internet-drafts/draft-ietf-tls-srp-09.txt>.)
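For contrast with that state of the art, a minimal sketch of a CRAM-MD5-style
exchange (hypothetical secret; illustration only) shows the properties being
criticised:

    import hashlib, hmac, os

    def cram_md5_response(secret: bytes, challenge: bytes) -> str:
        # The client proves knowledge of the shared secret; note that the
        # server must hold the same secret (a password-equivalent) to verify.
        return hmac.new(secret, challenge, hashlib.md5).hexdigest()

    challenge = os.urandom(16)
    response = cram_md5_response(b"hunter2", challenge)
    # An eavesdropper holding (challenge, response) can test candidate
    # passwords offline -- a dictionary attack -- and a stolen server
    # store is immediately usable to impersonate users.
    print(challenge.hex(), response)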

--
David Hopwood <[EMAIL PROTECTED]>


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Ciphersuite specific extensions (Last Call comment on draft-ietf-tls-rfc3546bis)

2005-06-08 Thread David Hopwood
[...] covers well the ecc cipher suite/curve
mapping problem.  Are you proposing then to modify the ecc draft to take
a dependency on the next rev of rfc3546?


Bodo Moeller:

Yes.  I hope that it is agreed to change the TLS Extensions
specification like this or similar.  The TLS-ECC specification should
cite the successor of RFC 3546, thus resolving the issue.

(Actually, the TLS-ECC specification already *has* to cite the
successor of RFC 3546 because TLS-ECC, if published as an
Informational RFC, couldn't define its own TLS extensions
according to RFC 3546.  draft-ietf-tls-rfc3546bis-##.txt
has weaker requirements so that an Informational RFC can
define new extensions.)

Bodo

[This time I am copying to the other authors of
draft-ietf-tls-rfc3546bis-01.txt as well, I'd really like to hear more
opinions!]


--
David Hopwood <[EMAIL PROTECTED]>



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf